The Shallow and the Deep

A biased introduction to neural networks and old school machine learning

Michael Biehl
Published by University of Groningen Press
Broerstraat 4
9712 CP Groningen
The Netherlands

First published in the Netherlands
© 2023 Michael Biehl, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, Groningen

Comments, corrections and suggestions are welcome; contact: m.biehl@rug.nl

Please cite as: Biehl, M. (2023). The Shallow and the Deep: A biased introduction to neural networks and old school machine learning. University of Groningen Press.

This book has been published open access thanks to the financial support of the Open Access Textbook Fund of the University of Groningen.

Cover design: Bas Ekkers
Cover photo: Michael Biehl
Production: LINE UP boek en media bv

ISBN (print): 9789403430287
ISBN (ePDF): 9789403430270
DOI: https://doi.org/10.21827/648c59c1a467e

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The full licence terms are available at creativecommons.org/licenses/by-nc-sa/4.0/legalcode
Preface

Stop calling everything AI. — Michael I. Jordan, in [Pre21]

The subtitle of these lecture notes is "A biased introduction to neural networks and old school machine learning" for good reasons. Although the aim was to give an accessible introduction to the field, it has been clear from the beginning that the notes would not end up as a comprehensive, complete overview. The focus is on classical machine learning; many recent developments cannot be covered.

Personally, I first got in touch with neural networks in my early life as a physicist. At the time, it was sufficient to read a handful of papers and, perhaps a little later, the good book [HKP91] to be up-to-date and able to contribute a piece to the big puzzle. Am I exaggerating and somewhat nostalgic? Probably. But the situation has definitely changed a lot. Nowadays, an overwhelming flood of publications makes it difficult to filter out the relevant information and keep up with the developments.

The selection of topics in these notes has been determined to a large extent by my own research interests and early experiences. This is definitely true for the initial focus on the simple perceptron, the hydrogen atom of neural network research, as Manfred Opper put it [Opp90]. Moreover, the bulk of this text deals with shallow systems for supervised learning, in particular classification, which reflects my main interest in the field.

The notes may be perceived as old school, certainly by some dedicated followers of fashion [Dav66]. Admittedly, the text does not address the most recent developments in, e.g., Deep Learning and its applications. However, in my humble opinion it is invaluable to have a solid background knowledge of the basics before exploring the world of machine learning with an ambition that goes beyond the application of some software package to some data set.
Therefore, the emphasis is on basic concepts and theoretical background, with specific aspects selected from a personal and clearly biased viewpoint. In a sense, the goal is to de-mystify machine learning and neural networks without
losing the appreciation for their fascinating power and versatility. Very often, this involves a look into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid.

I have aimed at pointing the interested reader to many resources for further exploration of the area. Therefore, the list of references in the bibliography, although by no means complete, is slightly more extensive than initially envisioned.

The starting point for these notes was the desire to provide more comprehensive material than the presentation slides in the MSc level course Neural Networks (later renamed Neural Networks and Computational Intelligence), which I have been giving at the University of Groningen. A thorough archeological investigation of the text and figures would also reveal traces of the courses Theorie Neuronaler Netzwerke and Unüberwachtes Lernen, which I taught way back when in the Physics program at the University of Würzburg. My writing activity was greatly boosted on the occasion of the wonderful 30th Canary Islands Winter School in 2018, devoted to Big Data analysis in Astronomy, where I had the honor to give a series of lectures on supervised learning; see [MSK19, Bie19] for course materials and video-recorded lectures.

Last but not least, I would like to acknowledge constructive feedback from several "generations" of students who followed the course and from many colleagues and collaborators. In particular, I thank Elisa Oostwal and Janis Norden for a critical reading of the manuscript and many suggestions for improvements.

Groningen, June 2023

The mysterious machine learning machine © Catharina M. Gerigk and Elina L. van den Brandhof. Reproduced with kind permission of the artists.
Contents

Preface

1 From neurons to networks
  1.1 Spiking neurons and synaptic interactions
  1.2 Firing rate models
    1.2.1 Neural activity and synaptic interaction
    1.2.2 Sigmoidal activation functions
    1.2.3 Hebbian learning
  1.3 Network architectures
    1.3.1 Attractor networks and the Hopfield model
    1.3.2 Feed-forward layered neural networks
    1.3.3 Other architectures

2 Learning from example data
  2.1 Learning scenarios
    2.1.1 Unsupervised learning
    2.1.2 Supervised learning
    2.1.3 Other learning scenarios
  2.2 Machine Learning vs. Statistical Modelling
    2.2.1 Differences and commonalities
    2.2.2 An example case: linear regression
    2.2.3 Conclusion

3 The Perceptron
  3.1 History and literature
  3.2 Linearly separable functions
  3.3 The Rosenblatt perceptron
    3.3.1 The perceptron storage problem
    3.3.2 Iterative Hebbian training algorithms
    3.3.3 The Rosenblatt perceptron algorithm
    3.3.4 The perceptron algorithm as gradient descent
    3.3.5 The Perceptron Convergence Theorem
    3.3.6 A few remarks
  3.4 The capacity of a hyperplane
    3.4.1 The number of linearly separable dichotomies
    3.4.2 Discussion of the result
    3.4.3 Time for a pizza or some cake
  3.5 Learning a linearly separable rule
    3.5.1 Student-teacher scenario
    3.5.2 Learning in version space
    3.5.3 Generalization begins where storage ends
    3.5.4 Optimal generalization
  3.6 The perceptron of optimal stability
    3.6.1 The stability criterion
    3.6.2 The MinOver algorithm
  3.7 Optimal stability by quadratic optimization
    3.7.1 Optimal stability reformulated
    3.7.2 The Adaptive Linear Neuron - Adaline
    3.7.3 The Adaptive Perceptron Algorithm - AdaTron
    3.7.4 Support vectors
  3.8 Inhom. lin. sep. functions revisited
  3.9 Some remarks

4 Beyond linear separability
  4.1 Perceptron with errors
    4.1.1 Minimal number of errors
    4.1.2 Soft margin classifier
  4.2 Layered networks of perceptron-like units
    4.2.1 Committee and parity machines
    4.2.2 The parity machine: a universal classifier
    4.2.3 The capacity of machines
  4.3 Support Vector Machines
    4.3.1 Non-linear transformation to higher dimension
    4.3.2 Large margin classifier
    4.3.3 The kernel trick
    4.3.4 A few remarks

5 Feed-forward networks for regression and classification
  5.1 Feed-forward networks as non-linear function approximators
    5.1.1 Architecture and input-output relation
    5.1.2 Universal approximators
  5.2 Gradient based training of feed-forward nets
    5.2.1 Computing the gradient: Backpropagation of Error
    5.2.2 Batch gradient descent
    5.2.3 Stochastic gradient descent
    5.2.4 Practical aspects and modifications
  5.3 Objective functions
    5.3.1 Cost functions for regression
    5.3.2 Cost functions for classification
  5.4 Activation functions
    5.4.1 Sigmoidal and related functions
    5.4.2 One-sided and unbounded activation functions
    5.4.3 Exponential and normalized activations
    5.4.4 Remark: universal function approximation
  5.5 Specific architectures
    5.5.1 Popular shallow networks
    5.5.2 Deep and convolutional neural networks

6 Distance-based classifiers
  6.1 Prototype-based classifiers
    6.1.1 Nearest Neighbor and Nearest Prototype Classifiers
    6.1.2 Learning Vector Quantization
    6.1.3 LVQ training algorithms
  6.2 Distance measures and relevance learning
    6.2.1 LVQ beyond Euclidean distance
    6.2.2 Adaptive distances in relevance learning
  6.3 Concluding remarks

7 Model evaluation and regularization
  7.1 Bias and variance, over- and underfitting
    7.1.1 Decomposition of the error
    7.1.2 The bias-variance dilemma
    7.1.3 Beyond the classical bias-variance trade-off(?)
  7.2 Controlling the network complexity
    7.2.1 Early stopping
    7.2.2 Weight decay and related concepts
    7.2.3 Constructive algorithms
    7.2.4 Pruning
    7.2.5 Weight-sharing
    7.2.6 Dropout
  7.3 Cross-validation and related methods
    7.3.1 n-fold cross-validation and related schemes
    7.3.2 Model and parameter selection
  7.4 Performance measures
    7.4.1 Measures for regression
    7.4.2 Measures for classification
    7.4.3 Receiver Operating Characteristics
    7.4.4 The area under the ROC curve
    7.4.5 Alternative measures for two-class problems
    7.4.6 Multi-class problems
    7.4.7 Averages of class-wise quality measures
  7.5 Interpretable systems

8 Preprocessing and unsupervised learning
  8.1 Normalization and transformations
    8.1.1 Coordinate-wise transformations
    8.1.2 Normalization
  8.2 Dimensionality reduction
    8.2.1 Low-dimensional embedding
    8.2.2 Multi-dimensional Scaling
    8.2.3 Neighborhood Embedding
    8.2.4 Feature selection
  8.3 PCA and related methods
    8.3.1 Principal Component Analysis
    8.3.2 PCA by Hebbian learning
    8.3.3 Independent Component Analysis
  8.4 Clustering and Vector Quantization
    8.4.1 Basic clustering methods
    8.4.2 Competitive learning for Vector Quantization
    8.4.3 Practical issues and extensions of VQ
  8.5 Density estimation
    8.5.1 Parametric density estimation
    8.5.2 Gaussian Mixture Models
  8.6 Missing values and imputation techniques
    8.6.1 Approaches without explicit imputation
    8.6.2 Imputation based on available data
  8.7 Over- and undersampling, augmentation
    8.7.1 Weighted cost functions
    8.7.2 Undersampling
    8.7.3 Oversampling
    8.7.4 Data augmentation

Concluding quote

A Optimization
  A.1 Multi-dimensional Taylor expansion
  A.2 Local extrema and saddle points
    A.2.1 Necessary and sufficient conditions
    A.2.2 Example: unsolvable systems of linear equations
  A.3 Constrained optimization
    A.3.1 Equality constraints
    A.3.2 Example: under-determined linear equations
    A.3.3 Inequality constraints
    A.3.4 The Wolfe Dual for convex problems
  A.4 Gradient based optimization
    A.4.1 Gradient and directional derivative
    A.4.2 Gradient descent
    A.4.3 The gradient under coordinate transformations
  A.5 Variants of gradient descent
    A.5.1 Coordinate descent
    A.5.2 Constrained problems and projected gradients
    A.5.3 Stochastic gradient descent
  A.6 Example calculation of a gradient

List of figures
List of algorithms
Abbrev. and acronyms
Bibliography

Abandon the idea that you are ever going to finish. — John Steinbeck, Six Writing Tips
Chapter 1

From neurons to networks

Reality is overrated anyway. — Unknown

To understand and explain the brain's fascinating capabilities (including the capability of being fascinated) remains one of the greatest scientific challenges ever. This is particularly true for its plasticity, i.e. the ability to learn from experience, to adapt to and to survive in ever-changing environments.

Ultimately, the performance of the brain must rely on its hardware (or wetware [LB89]) and emerges from the cooperative behavior of its many relatively simple, yet highly interconnected building blocks: the neurons. The human cortex, for instance, comprises an estimated 10^12 neurons, and each individual cell can be connected to thousands of others.

In this introduction to Neural Networks and Computational Intelligence we will study artificial neural networks and related systems, designed for the purpose of adaptive information processing. The degree to which these systems relate to their biological counterparts is, generally speaking, quite limited. However, their development was greatly inspired by key aspects of biological neurons and networks. Therefore, it is useful to be aware of the conceptual connections between artificial and biological systems, at least on a basic level.

Quite often, technical solutions are inspired by natural systems without copying all their properties in detail. Due to biological constraints, nature (i.e. evolution) might have found highly complex solutions to certain problems that could be dealt with in a simpler fashion in a technical realization. A somewhat over-used example in this context is the construction of efficient aircraft, which by no means requires the use of moving wings in order to imitate bird flight faithfully.

Of course, it is unclear a priori which of the details are essential and which ones can be left out in artificial systems. Obviously, this also depends on the
specific task and context. Consequently, the interaction between the neurosciences and machine learning research continues to play an important role for the further development of both.

In this introductory text we will consider learning systems which draw on only the most basic mechanisms. Therefore, this chapter is only meant as a very brief overview, which should allow the reader to relate some of the concepts in artificial neural computation to their biological background. The reader should be aware that the presentation is certainly over-simplifying and probably not quite up-to-date in all aspects.

Recommended textbooks and other sources

In most of this introductory chapter, detailed citations concerning specific topics will not be provided. Instead, the following list points the reader to selected textbooks, reviews and lecture notes. They range from brief and superficial to very comprehensive and detailed reviews of the biological background. The same is true for the discussion of the different conceptual levels on which biological systems can be modelled. References point to the full citation information in the bibliography. Note that the selection is certainly incomplete and biased by personal preferences.

◦ K. Gurney's Neural Networks gives a very basic overview and provides a glossary of biological or biologically inspired terms [Gue97].
◦ The first sections of Neural Networks and Learning Machines by S. Haykin cover the relevant topics in slightly greater depth [Hay09].
◦ The classical textbook Neural Networks: An Introduction to the Theory of Neural Computation by J.A. Hertz, A. Krogh and R.G. Palmer discusses the inspiration from biological neurons and networks in the first chapters. It also provides a thorough analysis of the Hopfield model from a statistical physics perspective [HKP91].
◦ H. Horner and R. Kühn give a brief general introduction in Neural Networks [HK98], including a basic discussion of the biological background.
◦ Models of biological neurons, their bio-chemistry and bio-physics are the focus of C. Koch's comprehensive monograph on the Biophysics of Computation [Koc98]. It discusses the different modelling approaches and relates them to experimental data obtained from real-world neurons.
◦ T. Kohonen has introduced important prototype-based learning schemes. An entire chapter of his seminal work Self-Organizing Maps is devoted to the Justification of Neural Modeling [Koh97].
◦ H. Ritter, T. Martinetz and K. Schulten give an overview and also discuss some aspects of the organization of the brain in terms of maps in their monograph Neural Computation and Self-Organizing Maps [RMS92].
◦ M. van Rossum's lecture notes on Neural Computation provide an overview of biological information processing and models of neural activity, synaptic interaction and plasticity. Moreover, modelling approaches are discussed in some detail [Ros16].
1.1 Spiking neurons and synaptic interactions

The physiology and functionality of biological systems is highly complex, already on the single-neuron level. Sophisticated modelling frameworks have been developed that take into account the relevant electro-chemical processes in great detail in order to represent the biology as faithfully as possible. This includes the famous Hodgkin-Huxley model and variants thereof. They describe the state of cell compartments in terms of an electrostatic potential, which is due to varying ion concentrations on both sides of the cell membrane. A number of ion channels and pumps control the concentrations and, thus, govern the membrane potential. The original Hodgkin-Huxley model describes its temporal evolution in terms of four coupled ordinary differential equations, the parameters of which can be fitted to experimental data measured in real-world neurons.

Whenever the membrane potential reaches a threshold value, for instance triggered by the injection of an external current, a short, localized electrical pulse is generated. The term action potential, or the sloppier spike, will be used synonymously. The neuron is said to fire when a spike is generated. The action potential discharges the membrane locally and propagates along the membrane.

As illustrated in Figure 1.1 (left panel), a strongly elongated extension is attached to the soma: the so-called axon. From a purely technical point of view, it serves as a cable along which action potentials can travel. Of course, the actual electro-chemical processes are significantly different from, for instance, the flow of electrons in a conventional copper cable. In fact, action potentials jump between short gaps in the myelin sheath, an insulating layer around the axon. By means of this saltatory conduction, action potentials spread along the axonic branches of the firing neuron and eventually reach the points where the branches connect to the dendrites of other neurons.

Such a connection, termed a synapse, is shown schematically in Fig. 1.1 (right panel). Upon arrival of a spike, so-called neuro-transmitters are released into the synaptic cleft, i.e. the gap between the pre-synaptic axon branch and the post-synaptic dendrite. The transmitters are received on the post-synaptic side by substance-specific receptors. Thus, in the synapse, the action potential is not transferred directly through a physical contact point, but chemically.² The effect that an arriving spike has on the post-synaptic neuron depends on the detailed properties of the synapse:

◦ If the synapse is of the excitatory type, the post-synaptic membrane potential increases upon arrival of the pre-synaptic spike.
◦ When a spike arrives at an inhibitory synapse, the post-synaptic membrane potential decreases.

Both excitatory and inhibitory synapses can have varying strengths, as reflected

² Note that also so-called gap junctions exist, which can function as bi-directional electrical synapses; see e.g. [CL04] for further information and references.
Figure 1.1: Schematic illustration of neurons (pyramidal cells) and their connections. Left: Pre-synaptic and post-synaptic neurons with soma, dendritic tree, axon, and axonic branches. Right: The synaptic cleft with vesicles releasing neuro-transmitters and corresponding receptors on the post-synaptic side. Redrawn after [Kat66].

in the magnitude of the change that a spike imposes on the post-synaptic membrane potential. Consequently, the membrane potential of a particular cell will vary over time, depending on the actual activities of the neurons it receives spikes from through excitatory and inhibitory synapses. When the threshold for spike generation is reached, the neuron fires itself and, thus, influences the potential and activity of all its post-synaptic neighbors. All in all, a set of interconnected neurons forms a complex dynamical system of threshold units which influence each other's activity through the generation and synaptic transmission of action potentials.

The origin of a very successful approach to the modelling of neuronal activity dates back to Louis Lapicque in 1907. In the framework of the so-called Integrate-and-Fire (IaF) model, the electro-chemical details accounted for in Hodgkin-Huxley-type models are omitted (and were probably not known at the time). The membrane is simply represented by its conductance and ohmic resistance. All charge transport phenomena are combined in one effective electric current, which summarizes the individual contributions of changing ion concentrations as well as leak currents through the membrane. Similarly, the precise form of spikes and the details of their generation and transport are ignored. Instead, firing is modelled as an all-or-nothing threshold process which results in an instantaneous discharge. A spike is represented by a structureless Dirac delta function which defines the time point of the event. Despite its simplicity compared to more realistic electro-chemical models, the IaF model can be fitted to physiological data and yields a fairly realistic description of neuronal activity.
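The integrate-and-fire picture described above can be made concrete in a few lines. The following is a minimal sketch, not the book's own formulation: the function name, the Euler discretization, and all parameter values (time constant, thresholds, input current) are illustrative choices.

```python
def simulate_lif(currents, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a sequence of input currents.

    Membrane dynamics (assumed form): tau * dV/dt = -(V - v_rest) + I(t).
    When V crosses v_thresh, a spike time is recorded and V is reset,
    mimicking the instantaneous, all-or-nothing discharge of the IaF model.
    """
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(currents):
        v += dt / tau * (-(v - v_rest) + i_t)   # explicit Euler step
        if v >= v_thresh:
            spike_times.append(step * dt)       # spike as a point event in time
            v = v_reset                         # instantaneous discharge
    return spike_times

# A constant supra-threshold current produces regular, periodic firing:
spikes = simulate_lif([1.5] * 1000)
```

With constant input, the membrane potential relaxes toward 1.5, crosses the threshold at regular intervals and is reset, so the spike train is periodic, illustrating how a tonic input maps onto a firing rate.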
Figure 1.2: Left (upper): Schematic illustration of an action potential, i.e. a short pulse on mV- and ms-scales. Left (lower): Spikes travel along the axon through saltatory conduction via gaps in the insulating myelin sheath. Right: Schematic illustration of how mean firing rates are derived from a temporal spike pattern.

1.2 Firing rate models

In another step of abstraction, the description of neural activity is simplified by taking into account only the mean firing rate, e.g. obtained as the average number of spikes per unit time; the concept is illustrated in Fig. 1.2 (right panel). The implicit assumption is that most of the information in neural processing is contained in the mean activity and frequency of spikes of the neurons. Hence, the precise timing of individual action potentials is completely disregarded. While the role of individual spike timing appears to be the topic of ongoing debate in the neurosciences³, the simplification clearly facilitates efficient simulations of very large networks of neurons and can be seen as the basis of virtually all artificial neural networks and learning systems considered in this text.

1.2.1 Neural activity and synaptic interaction

The firing rate picture allows for a simple mathematical description of neural activity and synaptic interaction. Consider the mean activity S_i of neuron i, which receives input from a set J of neurons j ≠ i. Taking into account the fact that the firing rate of a biological neuron cannot exceed a certain maximum due to physiological and bio-chemical constraints, we can limit S_i to the range of values 0 ≤ S_i ≤ 1, where the upper limit 1 is given in arbitrary units. The resting state S_i = 0 obviously corresponds to the absence of any spike generation. The activity of neuron i is given as a (non-linear) response to the incoming spikes, which are, however, also represented only by the mean activities S_j:

    S_i = h(x_i)   with   x_i = Σ_{j ∈ J} w_ij S_j.     (1.1)

³ See, e.g., http://romainbrette.fr/category/blog/rate-vs-timing/ for further references.
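Eq. (1.1) amounts to a weighted sum of pre-synaptic rates followed by a bounded activation. A minimal sketch (the function names and the piecewise-linear example activation are illustrative choices, not from the text):

```python
def unit_activity(weights, rates, h):
    """Firing rate of neuron i according to Eq. (1.1):
    S_i = h(x_i) with local potential x_i = sum_j w_ij * S_j."""
    x_i = sum(w_ij * s_j for w_ij, s_j in zip(weights, rates))
    return h(x_i)

# A crude activation respecting the physiological bound 0 <= S_i <= 1:
ramp = lambda x: min(max(x, 0.0), 1.0)

# Two excitatory inputs and one inhibitory input (w < 0):
s_i = unit_activity(weights=[0.8, -0.5, 0.3], rates=[1.0, 0.6, 0.0], h=ramp)
# x_i = 0.8*1.0 - 0.5*0.6 + 0.3*0.0 = 0.5
```

The quiescent third neuron (S_j = 0) contributes nothing to the local potential, exactly as the text notes for the 0-to-1 convention.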
Here, the quantities w_ij ∈ ℝ represent the strength of the synapse connecting neuron j ∈ J with neuron i. Positive weights w_ij > 0 increase the so-called local potential x_i if neuron j is active (S_j > 0), while weights w_ij < 0 contribute negative terms to the weighted sum. Note that real-world chemical synapses are strictly uni-directional: even if connections w_ij and w_ji exist for a given pair of neurons, they would be physiologically separate, independent entities.

1.2.2 Sigmoidal activation functions

A variety of different activation functions h(x) have been employed in artificial neural networks. A few specific types of functions will be introduced in a later chapter. Here we restrict the discussion to the by now classical sigmoidal activation, which arguably captures important characteristics of biological systems.

It is plausible to assume the following mathematical properties of the activation function h(x) of a given neuron (subscript i omitted) with local potential x as in Eq. (1.1):

    lim_{x → −∞} h(x) = 0   (resting state, absence of spike generation)
    h′(x) ≥ 0               (monotonic increase of the excitation)
    lim_{x → +∞} h(x) = 1   (maximum possible firing rate),     (1.2)

which takes into account the limitations of individual neural activity discussed in the previous section. Various activation or transfer functions have been suggested and considered in the literature. In the context of feed-forward neural networks, we will discuss several options in Sec. 5.4. A very important class of plausible activations is given by the so-called sigmoidal functions, one prominent⁴ example being

    h(x) = (1/2) [1 + tanh(γ (x − θ))],     (1.3)

which clearly satisfies the conditions given above. The two important parameters are the threshold θ, which localizes the steepest increase of activity, and the gain parameter γ, which quantifies the slope. It is important to note that θ does not directly correspond to the previously discussed threshold of the all-or-nothing generation of individual spikes: it marks the characteristic value of x at which the activation function is centered.

⁴ Its popularity is partly due to the fact that the relation tanh′ = 1 − tanh² facilitates a very efficient computation of the derivative, see also Chapter 5.
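The sigmoid of Eq. (1.3) and the defining properties (1.2) can be checked numerically; a small sketch (the default parameter values are assumed for illustration):

```python
import math

def h(x, gamma=1.0, theta=0.0):
    """Sigmoidal activation of Eq. (1.3): h(x) = (1/2) * (1 + tanh(gamma * (x - theta)))."""
    return 0.5 * (1.0 + math.tanh(gamma * (x - theta)))

# The three conditions of Eq. (1.2):
assert h(-50.0) < 1e-10             # resting state for x -> -infinity
assert abs(h(50.0) - 1.0) < 1e-10   # saturation at the maximum firing rate
assert h(0.3) > h(0.2)              # monotonic increase

# theta centers the curve (h(theta) = 1/2); gamma controls the slope there:
assert h(0.0) == 0.5
assert h(0.1, gamma=10.0) > h(0.1, gamma=1.0)
```

Increasing γ sharpens the transition around θ, which foreshadows the γ → ∞ limit discussed below.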
Figure 1.3: Schematic illustration of symmetrized activation functions. Left: A sigmoidal transfer function with gain γ and threshold θ in the symmetrized representation, cf. Eq. (1.6). Right: The binary McCulloch-Pitts activation as obtained in the limit γ → ∞.

Symmetrized representation of activity

We will frequently consider a symmetrized description of neural activity in terms of modified activation functions:

    lim_{x → −∞} g(x) = −1  (resting state, absence of spike generation)
    g′(x) ≥ 0               (monotonic increase of the excitation)
    lim_{x → +∞} g(x) = 1   (maximum possible firing rate).     (1.4)

An example activation analogous to Eq. (1.3) is

    g(x) = tanh(γ (x − θ)).     (1.5)

At first sight, this appears to be just an alternative assignment of the value S = −1 to the resting state. Note that in the original description with 0 ≤ S_j ≤ 1, a quiescent neuron does not influence its post-synaptic neurons explicitly. However, keeping the form of the activation as

    S_i = g(x_i)   with   x_i = Σ_{j ∈ J} w_ij S_j     (1.6)

implies that the absence of activity (S_j = −1) in neuron j can now increase the firing rate of neuron i if the two are connected through an inhibitory synapse w_ij < 0. This and other mathematical subtleties are clearly biologically implausible, which is due to the somewhat artificial introduction of, in a sense, negative and positive activities that are treated in a symmetrized fashion.

However, as we do not aim at describing biological reality, the above-discussed symmetrization can be justified. In fact, it simplifies the mathematical and computational treatment, and has contributed to, for instance, the fruitful popularization of neural networks in the statistical physics community in the 1980s and 1990s.
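The subtlety noted above, a quiescent neuron exciting a post-synaptic cell through an inhibitory weight, can be made explicit in a few lines (the variable names and numerical values are illustrative):

```python
import math

def g(x, gamma=1.0, theta=0.0):
    """Symmetrized activation of Eq. (1.5): g(x) = tanh(gamma * (x - theta))."""
    return math.tanh(gamma * (x - theta))

w_ij = -0.7        # inhibitory synapse
s_j_quiet = -1.0   # quiescent pre-synaptic neuron, symmetrized convention
s_j_zero = 0.0     # "no influence", as in the original 0..1 convention

# In the symmetrized picture, the quiescent neuron contributes
# w_ij * S_j = (-0.7) * (-1) = +0.7 to the local potential, i.e.
# the absence of activity *raises* the post-synaptic rate:
contribution = w_ij * s_j_quiet
```

Here `contribution > 0` and consequently `g(contribution) > g(0)`, which is precisely the biologically implausible effect the text describes.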
McCulloch Pitts neurons

Quite frequently, an even more drastic modification is considered: for infinite gain γ → ∞ the sigmoidal activation becomes a step function, see Fig. 1.3 (right panel) for an illustration. Eq. (1.5) for instance yields in this limit

    g(x) = sign(x − θ) = { +1 if x ≥ θ
                           −1 if x < θ.      (1.7)

In this symmetrized version of a binary activation function, only two possible states are considered: either the model neuron is totally quiescent (S = −1) or it fires at maximum frequency, which is represented by S = +1. The extreme abstraction to binary activation states without the flexibility of a graded response was first discussed by McCulloch and Pitts in 1943, who originally denoted the quiescent state by S = 0. The persisting popularity of this model is due to its simplicity as well as its similarity to Boolean concepts in conventional computing. In the following, we will frequently resort to binary model neurons in the symmetrized version (1.7). In fact, the so-called perceptron, as discussed in Chapter 3, can be interpreted as a single McCulloch Pitts unit which is connected to N input neurons.

1.2.3 Hebbian learning

Probably the most intriguing property of biological neural networks is their ability to learn. Instead of realizing only pre-wired functionalities, brains adapt to their environment or, in higher level terms, they can learn from experience. Many potential forms of plasticity and memory representation have been discussed in the literature, including the chemical storage of information or learning through neurogenesis, i.e. the growth of new neurons. A very popular and plausible paradigm of learning is synaptic plasticity. A key mechanism, Hebbian Learning, is named after psychologist Donald Hebb, who published his work The Organization of Behavior in 1949 [Heb49].
The original hypothesis was formulated in terms of a pair of neurons which are connected through an excitatory synapse: “When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” This is known as Hebb’s law and sometimes rephrased as “Neurons that fire together, wire together.” Hebbian Learning results in a memory effect which favors the simultaneous activity of neurons A and B in the future. Hence, it constitutes a form of learning through synaptic plasticity. The question to which extent the Hebbian paradigm reflects the biological reality of learning is the subject of ongoing debate. Alternative or complementing mechanisms have been suggested, see [SVG+18] for a recent example. In the
context of artificial neural networks, Hebbian synaptic plasticity provides a very plausible basis for the representation of learning in the models.

In the mathematical framework of firing rate models presented in the previous section, we can express Hebbian Learning quite elegantly, assuming that the synaptic change is simply proportional to the pre- and post-synaptic activity:

    Δw_AB ∝ S_A S_B.      (1.8)

Hence, the change Δw_AB of a particular synapse w_AB depends only on locally available information: the activities of the pre-synaptic (S_B) and the post-synaptic neuron (S_A). For S_A, S_B > 0 this is quite close to the actual Hebbian hypothesis.

The symmetrization with −1 < S_{A,B} < +1 adds some biologically implausible aspects to the picture. For instance, an excitatory synapse connecting A and B would also be strengthened according to Eq. (1.8) if both neurons are quiescent at the same time, since in this case S_A S_B > 0. Similarly, high activity in A and low activity in B (or vice versa) with S_A S_B < 0 would weaken an excitatory or strengthen an inhibitory synapse. In Hebb’s original formulation, however, only the presence of simultaneous activity should trigger changes of the involved synapse. Moreover, the mathematical formalism in (1.8) facilitates the possibility that an individual excitatory synapse can become inhibitory or vice versa, which is also questionable from the biological point of view.

Many learning paradigms in artificial neural networks and other adaptive systems can be interpreted as Hebbian Learning in the sense of the above discussion. Examples can be found in a variety of contexts, including supervised and unsupervised learning, see Sec. 2 for working definitions of these terms. Note that the actual interpretation of the term Hebbian Learning varies a lot in the literature.
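The local Hebbian update (1.8) is easily written down in code; the learning rate `eta` and the function name below are illustrative choices of mine, and the small example makes the implausible “joint quiescence” effect discussed above explicit:

```python
def hebb_step(w, s_pre, s_post, eta=0.1):
    """Hebbian update, Eq. (1.8): the change is proportional to
    pre-synaptic times post-synaptic activity."""
    return w + eta * s_pre * s_post

# Simultaneous activity strengthens the synapse ...
assert hebb_step(0.0, s_pre=+1, s_post=+1) == 0.1
# ... but with symmetrized activities, joint quiescence (S = -1 for both)
# strengthens it as well, which is the biologically implausible side effect:
assert hebb_step(0.0, s_pre=-1, s_post=-1) == 0.1
# Anti-correlated activity even drives the weight towards the opposite sign:
assert hebb_step(0.0, s_pre=+1, s_post=-1) == -0.1
```

Note that the update uses only locally available information, the two activities at the synapse, which is exactly what makes rules of this form attractive as models of plasticity.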
Occasionally, it is employed only in the context of unsupervised learning, since feedback from the environment is quite generally assumed to constitute non-local information. Here, we follow the widespread, rather relaxed use of the term for learning processes which depend on the states of the pre- and post-synaptic units as in Eq. (1.8). Frequently, learning can be seen as the optimization of suitable costs which are interpreted as a function of the network parameters, i.e. the synaptic strengths or weights. As we will see, in many cases numerical optimization procedures, which are for instance based on gradient descent, lead to update rules for the weights that resemble Hebbian Learning to a large extent.

1.3 Network architectures

In the previous section we have considered types of model neurons which retain certain aspects of their biological counterparts and allow for a mathematical formulation of neural activity, synaptic interactions, and learning. This enables us to construct networks from, for instance, sigmoidal or McCulloch Pitts neurons, and model or simulate the dynamics of neurons and/or learning processes concerning the synaptic connections.
In the following, only the most basic and clear-cut types of network architectures are introduced and discussed, namely fully connected recurrent networks and feed-forward layered networks. The possibilities for modifications of these networks, as well as for hybrid and intermediate types, are nearly endless. Some more specific architectures will be introduced briefly later; in Section 5.5 and Chapter 6 various shallow and deep networks will be addressed.

1.3.1 Attractor networks and the Hopfield model

Networks with very high or unstructured connectivity form dynamical systems of neurons which influence each other through synaptic interaction. In a network as shown in Figure 1.4 (left panel) the activity of a particular neuron depends on its synaptic input. Considering discrete timesteps t one obtains an update of the form

    S_i(t + 1) = g( Σ_{j∈J} w_ij S_j(t) ),      (1.9)

where the sum is taken over all units j ∈ J which neuron i receives input from through a synapse w_ij ≠ 0. Eq. (1.9) can be interpreted as an update of all neurons in parallel. Alternatively, units could be visited in a deterministic or randomized sequential order. We will not discuss the subtle, yet important, differences between parallel and sequential dynamics here and refer the reader to the literature, e.g. [HKP91].

From an initial configuration S(0) at time t = 0 which comprises the individual activities S(0) = (S_1(0), S_2(0), . . . , S_N(0))⊤, the dynamics generates a sequence of states S(t) which can be considered the system’s response to the initial stimulus. The term recurrent networks has been coined for this type of dynamical system.

One of the most extreme, clear-cut examples of a recurrent architecture is the fully connected Hopfield or Little-Hopfield model [Hop82, Lit74, HKP91]. A Hopfield network comprises N neurons of the McCulloch Pitts type which are fully connected by bi-directional synapses

    w_ij = w_ji ∈ R  (i, j = 1, 2, . . . , N)  with  w_ii = 0 for all i.      (1.10)

While the exclusion of explicit, non-zero self-interactions w_ii appears plausible, the assumption of symmetric, bi-directional interactions clearly constitutes yet another serious deviation from biological reality. The dynamics of the binary units is given by

    S_i(t + 1) = sign( Σ_{j≠i} w_ij S_j(t) ).      (1.11)

John Hopfield [Hop82] realized that the corresponding random sequential update can be seen as a zero temperature Metropolis Monte Carlo dynamics which is
governed by an energy function of the form

    H(S(t)) = − Σ_{i<j} w_ij S_i(t) S_j(t).      (1.12)

Figure 1.4: Recurrent neural networks. Left: A network of N = 5 neurons with partial connectivity and uni-directional synapses. Right: Pattern retrieval from a noisy initial configuration in a Hopfield network of 2500 units, storing 100 activity patterns. Activities S_j = ±1 are shown as black and white ‘pixels’, respectively. Initially, 40% of the S_j are flipped with respect to the pattern. The rightmost figure displays the system shortly before perfect retrieval is achieved.

The mathematical structure is analogous to the so-called Ising model in Statistical Physics, see [HKP91, Kob97] for background and further references. There, the degrees of freedom S_i = ±1 are typically termed spins and they represent microscopic magnetic moments. Ising-like systems have been considered in a variety of scientific contexts ranging from the formation of binary alloys to abstract models of segregation in the social sciences. Positive weights w_ij obviously favor pairs of aligned S_i = S_j which reduce the total energy of the system. For the modelling of magnetic materials one considers specific couplings w_ij as motivated by the physical interactions. For instance, constant positive w_ij = 1 are assumed in the so-called Ising ferromagnet, while randomly drawn interactions are employed to model disordered magnetic materials, so-called spin glasses [HKP91].

In the actual Hopfield model, however, synaptic weights w_ij are constructed or learned in order to facilitate a specific form of information processing [Hop82]. From a given set of uncorrelated, N-dimensional activity patterns P = {ξ^μ}_{μ=1}^{P} with ξ^μ_i ∈ {−1, +1}, a weight matrix is constructed according to

    w_ij = w_ji = (1/P) Σ_{μ=1}^{P} ξ^μ_i ξ^μ_j  for i ≠ j,  and  w_ii = 0 for all i,      (1.13)

where the constant pre-factor 1/P follows the convention in the literature. According to Eq.
(1.13), we can interpret the weights as empirical averages over
the data set. Improved versions of the weight matrix for correlated patterns are also available. In principle, all perceptron training algorithms discussed later could be applied (per neuron) in the Hopfield network as well.

The Hopfield network can operate as an auto-associative or content addressable memory: If the system is prepared in an initial state S(t = 0) which differs from one of the patterns ξ^ν ∈ P only for a limited fraction of neurons with S_i(0) = −ξ^ν_i, the dynamics can retrieve the original ξ^ν from corrupted or noisy information. Ideally, the temporal evolution under the updates (1.11) restores the pattern nearly perfectly and S(t) approaches ξ^ν for large t. The retrieval of a stored pattern from a noisy initial state is illustrated in Fig. 1.4 (right panel). Note that the two-dimensional arrangement of neurons serves purely illustrative purposes. Since every neuron is connected to every other neuron, neighborhood relations do not define a low-dimensional topology in the Hopfield model.

Successful retrieval of a stored pattern is only possible if the initial deviation of S(0) from ξ^ν is not too large. Moreover, only a limited number of patterns can be stored and retrieved successfully. For random patterns with zero mean activities ξ^μ_j = ±1, the statistical physics based theory of the Hopfield model (valid in the limit N → ∞) shows that P ≤ α_r N must be satisfied. The value α_r ≈ 0.14 marks the so-called capacity limit of the Hopfield model.5

Note that the weight matrix construction (1.13) can also be interpreted as Hebbian Learning: Starting from a tabula rasa state of the synaptic strengths with zero weights, a single term of the form ξ^μ_i ξ^μ_j is added for each activity pattern, representing the neurons that are connected by synapse w_ij (and w_ji). Hence, Eq.
(1.13) can be written as an iteration

    w_ij(0) = 0,   w_ij(μ) = w_ij(μ − 1) + (1/P) ξ^μ_i ξ^μ_j,      (1.14)

where the incremental change of w_ij depends only on locally available information and is of the form “pre-synaptic × post-synaptic activity.”

The Hopfield model serves as a prototypical example of highly connected neural networks. Potential applications include pattern recognition and image processing tasks. Perhaps more importantly, the model has provided many theoretical and conceptual insights into neural computation and continues to do so. More general recurrent neural networks are applied in various domains that require some sort of temporal or sequence-based information processing. This includes, among others, robotics, speech or handwriting recognition.

1.3.2 Feed-forward layered neural networks

Throughout this reader, we will mainly deal with another clear-cut network architecture: layered feed-forward networks. In these systems, neurons are arranged in layers and information is processed in a well-defined direction.

5 This critical value is often referred to as α_c in the literature, but it should not be confused with the storage capacity of feed-forward networks in, e.g., Sec. 4.4.
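Before moving on, the Hopfield model discussed above can be collected in a short Python sketch (my own illustration, not from the text): the Hebbian weights of Eq. (1.13), the energy (1.12), and retrieval under the parallel dynamics (1.11). Network size, number of patterns, and the random seed are arbitrary choices, kept well below the capacity limit:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                               # load P/N = 0.025, far below 0.14 N

# Random binary patterns xi^mu in {-1, +1}^N
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix, Eq. (1.13): w_ij = (1/P) sum_mu xi_i^mu xi_j^mu, w_ii = 0
W = (patterns.T @ patterns) / P
np.fill_diagonal(W, 0)

def energy(S, W):
    """Energy function, Eq. (1.12); the sum over i < j equals half the full double sum."""
    return -0.5 * S @ W @ S

def update(S, W):
    """Parallel update of Eq. (1.11); sign(0) is mapped to +1 here."""
    return np.where(W @ S >= 0, 1, -1)

# Retrieval: flip 20% of the entries of pattern 0 and iterate the dynamics
S = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
S[flip] *= -1
for _ in range(10):
    S = update(S, W)
overlap = (S @ patterns[0]) / N             # overlap 1 means perfect retrieval
```

At this low load the dynamics restores the corrupted pattern, and the final overlap with ξ^0 is (close to) 1; increasing P towards the capacity limit makes retrieval fail.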
The left panel of Fig. 1.5 shows a schematic illustration of a feed-forward architecture. A specific, single layer of units (the top layer in the illustration) represents external input to the system in terms of neural activity. In the biological context, one might think of the photoreceptors in the retina or other sensory neurons which can be activated by external stimuli. The state of the neurons in all other layers of the network is determined via synaptic interactions, and activations of the form

    S^(k)_i = g( Σ_j w^(k)_ij S^(k−1)_j ).      (1.15)

Here, the activity S^(k)_i of neuron i in layer k is determined from the weighted sum of activities in the previous layer (k − 1) only: information contained in the input is processed layer by layer. Ultimately, the last layer in the structure (bottom layer in the illustration) represents the network’s output, i.e. its response to the input or stimulus in the first layer. The illustration displays only a single output unit, but the extension to a layer of several outputs is straightforward.

The essential property of the feed-forward network is the directed information processing: neurons receive only input from units in the previous layer. As a consequence, the network can be interpreted as the parameterization of an input/output relation, i.e. a mathematical function that maps the vector of input activations to a single or several output values. This interpretation still holds if nodes receive input from several previous layers, or in other words: connections may “skip” layers. For the sake of clarity and simplicity, we will not consider this option in the following.
The feed-forward property and interpretation as a simple input/output relation is lost as soon as any form of feedback is present: inter-layer synapses or backward connections feeding information into previous (“higher”) layers introduce feedback loops, making it necessary to describe the system in terms of its full dynamics. Neurons that do not communicate directly with the environment, i.e. all units that are neither input nor output nodes, are termed hidden units (nodes, neurons) which form hidden layers in the feed-forward architecture.

The right panel of Fig. 1.5 displays a more concrete example. The network comprises one layer of hidden units, here with activities σ_k ∈ R, and a single output S. The response of the system to an input configuration ξ = (ξ_1, ξ_2, . . . , ξ_N) ∈ R^N is given as

    S(ξ) = g( Σ_{k=1}^{K} v_k σ_k ) = g( Σ_{k=1}^{K} v_k g( Σ_{j=1}^{N} w_kj ξ_j ) ).      (1.16)

Here we assume, for simplicity, that all hidden and output nodes employ the same activation function g(. . .). Obviously, this restriction can be relaxed by defining layer-specific or even individual activation functions. In Eq. (1.16) the
quantities w_kj denote the weights connecting the j-th input component to the k-th hidden unit with k = 1, 2, . . . , K, where K is the total number of hidden units (in the example K = 3). These weights can be grouped in vectors w_k ∈ R^N, while the hidden-to-output connections are denoted by v_k ∈ R. Altogether, the architecture and connectivity, the activation function and its parameters (gain, threshold etc.), and the set of all weights determine the actual input/output function ξ ∈ R^N → S(ξ) ∈ R parameterized by the feed-forward network. Again, the extension to several output units, i.e. multi-dimensional function values, is conceptually straightforward.

Figure 1.5: Feed-forward neural networks.6 Left: A multilayered architecture with varying layer-size and a single output unit. Right: A convergent feed-forward network with a layer of input neurons, one hidden layer, and a single output unit.

Without going into details yet, we note that we control the function that is actually implemented by setting the weights and other free parameters in the network. If their determination is guided by a set of example data representing a target function, the term learning is used for this adaptation or fitting process. To be more precise, this situation constitutes an example of supervised learning as discussed in the next section.

To summarize, a feed-forward neural network represents an adaptive parameterization of a, in general non-linear, functional dependence. Under rather mild conditions, feed-forward networks with suitable, continuous activation functions are universal approximators. Loosely speaking, this means that a network can approximate any “non-malicious”, continuous function to arbitrary precision, provided the network comprises a sufficiently large (problem dependent) number of hidden units in a suitable architecture, see Chapter 5.
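The two-layer map (1.16) translates directly into code; the following is my own transcription with illustrative variable names and randomly drawn weights:

```python
import numpy as np

def forward(xi, W, v, g=np.tanh):
    """Feed-forward response of Eq. (1.16).

    xi : input vector, shape (N,)
    W  : input-to-hidden weights w_kj, shape (K, N)
    v  : hidden-to-output weights v_k, shape (K,)
    g  : common activation function of hidden and output units
    """
    sigma = g(W @ xi)          # hidden activities sigma_k = g(sum_j w_kj xi_j)
    return g(v @ sigma)        # single output S(xi) = g(sum_k v_k sigma_k)

rng = np.random.default_rng(1)
N, K = 4, 3                    # K = 3 hidden units as in the right panel of Fig. 1.5
W = rng.normal(size=(K, N))
v = rng.normal(size=K)
S = forward(rng.normal(size=N), W, v)   # a single real-valued response in (-1, 1)
```

The set of all weights (W, v) together with the choice of g fixes the input/output function ξ → S(ξ); training amounts to adjusting exactly these parameters.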
This property clearly motivates the use of feed-forward nets in quite general regression tasks. If the response of the network is discretized, for instance due to a thresholding operation that yields

    S(ξ) ∈ {1, 2, . . . , C},      (1.17)

the system performs the assignment of all possible inputs ξ to one of C categories

6 Following the author’s personal preference, layered networks are drawn from top (input) to bottom (output). Alternative orientations can be achieved by rotating the page.
or classes. Hence, the feed-forward network constitutes a classifier which can be adapted to example data by choice of weights and other free parameters. The simplest feed-forward classifier, the so-called perceptron, will serve as a very important example system in the following. The perceptron is defined as a linear threshold classifier with response

    S(ξ) = sign( Σ_{j=1}^{N} w_j ξ_j − θ )      (1.18)

to any possible input ξ ∈ R^N, corresponding to an assignment to one of two classes represented as S = ±1. Comparison with Eq. (1.7) shows that it can be interpreted as a single McCulloch Pitts neuron which receives input from N real-valued units. The perceptron will be discussed in detail in Chapter 3 as a basic architecture that provides valuable insights into the fundamentals of machine learning.

1.3.3 Other architectures

Apart from the clear-cut, fully connected attractor neural networks and strictly feed-forward layered nets, a large variety of network types have been considered and designed, often with specific application domains in mind, see Chapter 5. Combinations of feed-forward structures with, for instance, layers of highly interconnected units are employed in the context of Reservoir Computing, see e.g. [SVC07, LJ09] for overviews and references.

Recently, the use of feed-forward architectures has re-gained significant popularity in the context of Deep Learning [GBC16, Hue19]. Specific designs and architectures of Deep Neural Networks including e.g. so-called convolutional or pooling layers will be discussed very briefly in Chapter 5.

The framework of prototype-based learning is introduced in Chapter 6. Systems like Learning Vector Quantization can also be interpreted as layered networks with specific, distance-based activation functions in the hidden units (the prototypes) and a winner-takes-all or softmax output layer for classification or regression, respectively.
Prototype-based systems in unsupervised learning will be discussed briefly in Chapter 6.
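Returning briefly to the perceptron of Eq. (1.18): it is simple enough to sketch in full. The parameter choice below, w = (1, 1) and θ = 1, is my own toy example (not from the text); it realizes the Boolean AND on inputs from {−1, +1}²:

```python
import numpy as np

def perceptron(xi, w, theta=0.0):
    """Linear threshold classifier, Eq. (1.18): S(xi) = sign(w . xi - theta),
    with sign(0) mapped to +1."""
    return 1 if w @ xi - theta >= 0 else -1

w = np.array([1.0, 1.0])
outputs = {xi: perceptron(np.array(xi), w, theta=1.0)
           for xi in [(-1, -1), (-1, 1), (1, -1), (1, 1)]}
# Only the input (+1, +1) is mapped to class +1, i.e. the Boolean AND:
assert outputs == {(-1, -1): -1, (-1, 1): -1, (1, -1): -1, (1, 1): 1}
```

Geometrically, w and θ define a separating hyperplane in input space; Chapter 3 discusses which classifications such a linear threshold unit can and cannot realize.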
Chapter 2

Learning from example data

You live and learn. At any rate, you live. — Douglas Adams

Different forms of machine learning were already briefly presented in Sec. 1. In the following section, we focus on the most clear-cut scenarios: supervised learning and unsupervised learning. In addition, we will briefly discuss the interesting and fruitful relations of machine learning with statistical modelling.

2.1 Learning scenarios

The main chapters of these notes deal with supervised learning, with emphasis on classification and regression. Several of the introduced concepts and methods can however be transferred to unsupervised settings. This is also the case for the prototype-based methods discussed in Chapter 6.

2.1.1 Unsupervised learning

Unsupervised learning is an umbrella term comprising various methods for the analysis of unlabeled data. Such data sets do not contain label information associated with some pre-defined target as it would be the case in classification or regression. Moreover, there is no direct feedback available from the environment or a teacher that would facilitate the evaluation of the system’s performance. A comparison of its response with a given ground truth or approximate representation thereof is not available or possible.

For more about the background of unsupervised learning, the reader is referred to the literature, e.g. [Hay09, Bis95a, Bis06]. An introduction, specific algorithms and applications can also be found in lecture notes by Dalya Baron [Bar19]. Here we only briefly discuss the framework of unsupervised data
analysis in contrast to supervised learning. We briefly revisit some key methods of unsupervised learning in the context of preprocessing in Chapter 8. Potential aims of unsupervised learning are quite diverse, a few examples being:

◦ Data Reduction: Frequently it makes sense to represent large amounts of data by fewer exemplars or prototypes, which are of the same form and dimension as the original data and capture the essential properties of the original, larger data set. One important framework is that of Vector Quantization.

◦ Compression: Another form of unsupervised learning aims at replacing original data by lower-dimensional representations without reducing the actual number of data points, see also Chapter 8. The mapping to lower dimension should obviously preserve information to a large extent. Compression could be done by explicitly selecting a reduced set of features, for instance. Alternative techniques provide explicit projections to a lower-dimensional space or representations that are guided by the preservation of relative distances or neighborhood relations.

◦ Visualization: Two- or three-dimensional representations of a data set can be used for the purpose of visualizing a given data set. Hence, it can be viewed as a special case of compression and many techniques can be used in both contexts. In addition, more specific tools have been devised for visualization tasks only.

◦ Density Estimation: Often, an observed data set is interpreted as being generated in a stochastic process according to a model density. In a training process, parameters of the density are optimized, for instance aiming at a high likelihood as a measure of how well the model explains the observations, see Sec. 8.5.

◦ Clustering: One important goal of unsupervised learning is the grouping of observations into clusters of similar data points which jointly display properties distinct from the other groups or clusters in the data set.
Most frequently, clustering is formulated in terms of a specific (dis-)similarity or distance measure, which is used to compare different feature vectors.

◦ Preprocessing: The above mentioned and other unsupervised techniques can be employed to identify representations of a data set suitable for further processing. Consequently, unsupervised learning is frequently considered a useful preprocessing step also for supervised learning tasks.

Note that the above list is by far not complete. Furthermore, the goals mentioned here can be closely related and, often, the same methods can be applied
to several of them. For instance, density estimation by means of Gaussian Mixture Models (GMM) could be interpreted as a probabilistic clustering method and the obtained centers of the GMM can also serve as prototypes in the context of Vector Quantization. Several relevant techniques of unsupervised learning are discussed in Chapter 8, where additional references can also be found.

In a sense, in unsupervised learning there is no “right” or “wrong”. This can be illustrated in the context of a toy clustering problem: If we sort a number of fruit according to shape and taste, we would most likely group pears and apples and oranges in three corresponding clusters. Alternatively, we can sort according to color only and end up with clusters of objects with like colors, e.g. combining green apples with green pears vs. yellowish and red fruit. Without further information or requirements defined by the environment, many clustering strategies and outcomes can be plausible. The example also illustrates the fact that the choice of how the data is represented and which types of properties/features are considered important can determine the outcome of an unsupervised learning process to the largest extent.

The important point to keep in mind is that, ultimately, the users define the goal of the unsupervised analysis themselves. Frequently this is done by formulating a specific cost function or objective function which reflects the task and guides the training process. The selection or definition of a cost function can be quite subjective and, moreover, its optimization can even completely fail to achieve the implicitly intended goal of the analysis, see Sec. 8.4.3 for the discussion of Vector Quantization as an example. As a consequence, the identification of an appropriate optimization criterion and objective function constitutes a key difficulty in unsupervised learning.
Moreover, a suitable model and mathematical framework have to be chosen that serve the purpose in mind.

2.1.2 Supervised learning

In supervised learning, available data comprises feature vectors1 together with target values. The data is analysed in order to tune parameters of a model, which can be used to predict the (hopefully correct) target values for novel data that was not contained in the training set. Generally speaking, supervised machine learning is a promising approach if the target task is difficult or impossible to define in terms of a set of simple rules, while example data is available that can be analysed. We will consider the following major tasks in supervised learning:

◦ Regression: In regression, the task is frequently to assign a real-valued quantity to each observed data point. An illustrative example could be the estimation of the weight of a cow, based on some measured features like the animal’s height and length.

1 The discussion of non-vectorial, relational or other data structures is excluded here.
◦ Classification: The second important example of supervised problems is the assignment of observations to one of several categories or classes, i.e. to a discrete target value. A currently somewhat overstrained example is the discrimination of cats and dogs based on photographic images.

A variety of problems can be formulated and interpreted as regression or classification tasks, including time series prediction, risk assessment in medicine, or the pixel-wise segmentation of an image, to name only a few.

Because target values are taken into account, we can define and evaluate clear quality criteria, e.g. the number of misclassifications for a given test set of data or the expected mean square error (MSE) in regression. In this sense, supervised learning appears well defined in comparison to unsupervised tasks, generally speaking. The well-defined quality criteria suggest naturally meaningful objective functions which can be used to guide the learning process with respect to the given training data.

However, also in supervised learning, a number of issues have to be addressed carefully, including the selection of a suitable model. Mismatched, too simplistic, or overly complex systems can hinder the success of learning. This will be discussed from a quite general perspective in Chapter 7. Similarly, details of the training procedure may influence the performance severely. Furthermore, the actual representation of observations and the selection of appropriate features is essential for the success of supervised training as well.

In the following, we will mostly consider a prototypical workflow of supervised learning where a) a model or hypothesis about the target rule is formulated in a training phase by means of analysing a set of labeled examples, for instance by setting the weights of a feed-forward neural network; and b) the learned hypothesis, e.g.
the network, can be applied to novel data in the working phase, after training. Frequently, an intermediate validation phase is inserted after (a) in order to estimate the expected performance of the system in phase (b) or in order to tune model (hyper-)parameters and compare different setups. In fact, validation constitutes a key step in supervised learning, see Chapter 7. It is important to keep in mind that many realistic situations deviate from this idealized scenario. Very often, the examples available for training and validation are not truly representative of the data that the system is confronted with in the working phase. The statistical properties and the actual target may even change while the system is trained. This very relevant problem is addressed in the context of so-called continual or life-long learning. A clear-cut strategy for the supervised training of a classifier is based on selecting only hypotheses that are consistent with the available training data and perfectly reproduce the target labels in the training set. As we will discuss at length in the context of the perceptron classifier, this strategy of learning
in version space relies on the assumption that (a) the target can be realized by the trained system in principle and that (b) the training data is perfectly reliable and noise-free. Although these assumptions are hardly ever realized in practice, the consideration of the idealized scenario provides insight into how learning occurs by elimination of hypotheses when more and more data becomes available.

This can be illustrated in terms of a toy example. Assume that integer numbers have to be assigned to one of two classes denoted as “A” or “B”. Assume furthermore that the following example assignments are provided as a training set:

    4 → A,  13 → B,  6 → A,  8 → A,  11 → B.

From these observations we could conclude, for instance, that A is the class of even integers, while B comprises all odd integers. However, we could also come to the conclusion that all integers i < 11 belong to class A and all others to B. Both hypotheses are perfectly consistent with the available data and so are many others. It is in fact possible to formulate an infinite number of consistent hypotheses based on the few examples given. As more data becomes available, we might have to revise or extend our analysis accordingly. An additional example 2 → B, for instance, would rule out the above mentioned concepts, while the assignment of all prime numbers to class B would (still) constitute a consistent hypothesis now.

We will discuss learning in version space in greater detail in the context of the perceptron and other networks with discrete output. Note that the strategy only makes sense if the example data is reliable and noise-free; the data itself has to be consistent with the unknown rule that we want to infer, obviously. The simple toy example also illustrates the fact that the space of allowed hypotheses has to be limited in order to facilitate learning at all.
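The elimination of hypotheses can be mimicked in a few lines of Python (my own sketch of the toy example above, with hypotheses represented as simple predicates):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Three candidate hypotheses, each mapping an integer to class "A" or "B":
hypotheses = {
    "even -> A":   lambda n: "A" if n % 2 == 0 else "B",
    "n < 11 -> A": lambda n: "A" if n < 11 else "B",
    "prime -> B":  lambda n: "B" if is_prime(n) else "A",
}

examples = [(4, "A"), (13, "B"), (6, "A"), (8, "A"), (11, "B")]

# The version space: all hypotheses consistent with the examples seen so far.
version_space = {name: h for name, h in hypotheses.items()
                 if all(h(n) == label for n, label in examples)}
assert len(version_space) == 3        # all three survive the five examples

# The additional example 2 -> B eliminates the first two hypotheses:
version_space = {name: h for name, h in version_space.items() if h(2) == "B"}
assert list(version_space) == ["prime -> B"]
```

Each new labelled example can only shrink the version space; learning proceeds by elimination, exactly as described above.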
If possible hypotheses may be arbitrarily complex, we can always construct a consistent one by, for instance, simply taking over the given list of examples and claiming that "all other integers belong to class A" (or just as well "... to class B"). Obviously this approach would not infer any useful information from the data, and such a largely arbitrary hypothesis cannot be expected to generalize to integers outside the training set. This is a very simple example of an insight that could be summarized as2

Generalization begins where storage ends.

Merely storing the example set by training a very powerful system may completely miss the ultimate goal of learning, which is the inference of useful information about the underlying rule. We will study this effect more formally with respect to neural networks for classification. The above arguments are particularly clear in the context of classification. In regression, the concept of consistent hypotheses has to be softened, since agreement with the data set is in general quantified by a continuous error measure.

2(rephrasing "Generalization begins where learning ends" — T.M. Cover)
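The elimination of hypotheses sketched above can be written out in a few lines. The three hypothesis classes below ("even numbers are A", "integers below 11 are A", "primes are B") are the ones mentioned in the text; their encoding as a dictionary of functions is an illustrative choice, not part of any formal treatment:

```python
# Version-space toy example: keep only hypotheses consistent with the data.

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

# Each hypothesis maps an integer to class "A" or "B".
hypotheses = {
    "even -> A":   lambda i: "A" if i % 2 == 0 else "B",
    "i < 11 -> A": lambda i: "A" if i < 11 else "B",
    "prime -> B":  lambda i: "B" if is_prime(i) else "A",
}

def version_space(hypotheses, examples):
    """Keep only the hypotheses consistent with every (input, label) pair."""
    return {name: h for name, h in hypotheses.items()
            if all(h(x) == y for x, y in examples)}

examples = [(4, "A"), (13, "B"), (6, "A"), (8, "A"), (11, "B")]
consistent = version_space(hypotheses, examples)
# All three hypotheses survive the original five examples; the additional
# example 2 -> B eliminates all of them except "prime -> B", as in the text.
still_consistent = version_space(hypotheses, examples + [(2, "B")])
```

Each new example shrinks the version space; learning proceeds purely by elimination, exactly as described in the text.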
However, the main idea of supervised learning remains the same: additional data provides evidence for some hypotheses while others become less likely.

2.1.3 Other learning scenarios

A variety of specific, relevant scenarios can be considered which deviate from the clear-cut simple cases of supervised learning and unsupervised learning. The following examples highlight just some tasks or practical situations that require specific training strategies. Citations merely point to one selected review, edited volume or monograph for further reference.

◦ Semi-supervised Learning [CSZ06] Frequently, only a subset of the available data is labeled. Strategies have been developed which, in a sense, combine supervised and unsupervised techniques in such situations.

◦ Reinforcement Learning [WO12] In various practical contexts, feedback on the performance of a learning system only becomes available after a sequence of decisions has been taken, for instance in the form of a cumulative reward. Examples would be the reward received only after a number of steps in a game or in a pathfinding problem in robotics.

◦ Transfer Learning [YCC18] If the training samples are not representative of the data that the system is confronted with in the working phase, adjustments might be necessary in order to maintain acceptable performance. Just one example could be the analysis of medical images which were obtained using similar, yet not identical technical platforms.

◦ Lifelong or Continual Learning [DRAP15] Drift processes in non-stationary environments can play an important role in machine learning. The statistics of the observed example data and/or the target itself can change while the system is being trained. A system that learns to detect spam e-mail messages, for instance, has to be adapted constantly to the ever-changing strategies of the senders.
◦ Causal Learning [PJS17] Regression systems and classifiers typically reflect correlations they have inferred from the data, which allow some form of prediction to be made for future observations. In general, this does not explicitly take causal relations into account. The reliable detection of causalities in a data set is a highly non-trivial task and requires specifically designed, sophisticated methods of analysis.

In this material, we will focus almost exclusively on well-defined problems of supervised learning in stationary environments. In most cases, we will assume that training data is representative of the problem at hand and that it is complete and reliable to a certain extent.
2.2 Machine Learning vs. Statistical Modelling

In the sciences it happens quite frequently that the same or very similar concepts and techniques are developed or rediscovered in different (sub-)disciplines, either in parallel or with significant delay. While it is – generally speaking – quite inefficient to re-invent the wheel, a certain level of redundancy is probably inevitable in scientific research. The same questions can occur and re-occur in very different settings, and different communities will come up with specific approaches and answers. Moreover, it can be beneficial to come across certain problems in different contexts and to view them from different angles. It is not at all surprising that this is also true for the area of machine learning, which has been of interdisciplinary nature right from the start, with contributions from biology, psychology, mathematics, physics and others.

2.2.1 Differences and commonalities

An area which is often viewed as competing, complementary, or even superior to machine learning is that of inference in statistical modelling. A simple web-search for, say, "Statistical Modelling versus Machine Learning" will yield numerous links to discussions of their differences and commonalities. Some of the statements that one may very likely come across are3:

– The short answer is that there is no difference
– Machine learning is just statistics, the rest is marketing
– All machine learning algorithms are black boxes
– Machine learning is the new statistics
– Statistics is only for small data sets, machine learning is for big data
– Statistical modelling has led to irrelevant theory and questionable conclusions
– Whatever machine learning will look like in ten years, I'm sure statisticians will be whining that they did it earlier and better.

These and similar opinions reflect a certain level of competition, which can be counterproductive at times, to put it mildly.
In the following, we will refrain from choosing sides in this ongoing debate. Instead, the relation between machine learning and statistical modelling will be highlighted in terms of an illustrative example. One of the most comprehensive, yet accessible presentations of statistical modelling based learning is given in the excellent textbook The Elements of Statistical Learning by T. Hastie, R. Tibshirani, and J. Friedman [HTF01]. A view on many important methods, including density estimation and Expectation Maximization algorithms, is provided in Neural Networks for Pattern Recognition [Bis95a] and the more recent Pattern Recognition and Machine Learning [Bis06] by C. Bishop.

3Exact references and links are not provided in the best interest of the originators.
In both machine learning and statistical modelling, the aim is to extract information from observations or data and to formalize it. Most frequently, this is done by generating a mathematical model of some sort and fitting its parameters to the available data. Quite often, machine learning and statistical models have very similar or identical structures and, frequently, the same mathematical tools or algorithms are used. The differences lie in the emphasis that is usually put on different aspects of the modelling or learning. Generally speaking, the main aim of statistical inference is to describe, but also to explain and understand the observed data in terms of models. These usually take into account explicit assumptions about statistical properties of the observations. This includes the possible goal of confirming or falsifying hypotheses with a desired significance or confidence. In machine learning, on the contrary, the main motivation is to make predictions with respect to novel data, based on patterns detected in the previous observations. Frequently, this does not rely on explicit assumptions about statistical properties of the data but employs heuristic concepts of inference.4 The goal is not so much the faithful description or interpretation of the data; rather, it is the application of the derived hypotheses to novel data that is at the center of interest. The corresponding performance, for instance quantified as an expected error in classification or regression, is the ultimate guideline. Obviously, these goals are far from being disjoint in a clear-cut way. Genuine statistical methods like Bayesian classification can clearly be used with the exclusive aim of accurate prediction in mind. Likewise, sophisticated heuristic machine learning techniques like relevance learning are designed to obtain insight into mechanisms underlying the data, see Chapter 6.
Very often, both perspectives suggest very similar or even identical methods which can be used interchangeably. Frequently, it is only the underlying philosophy and motivation that distinguishes the two approaches. In the following section, we will have a look at a very basic, illustrative problem: linear regression. It will be re-visited as a prototypical supervised learning task a couple of times. Here, however, it serves as an illustration of the relation between machine learning and statistical modelling approaches and their underlying concepts.

2.2.2 An example case: linear regression

Linear regression constitutes one of the earliest, most important and clearest examples of inference. As a, by now, historical application, consider the theory of an expanding universe according to which the velocity v of far away galaxies should be directly proportional to their distance d from the observer [Hub29]:

$$ v = H_0 \, d. \qquad (2.1) $$

4However, it is very important to realize that implicit assumptions are always made, for instance when choosing a particular machine learning framework to begin with.
Figure 2.1: Hubble diagram: the velocity v of galaxies as a function of their distance d, taken from [Hub29]. Note that the correct units of v should be km/s. According to PNAS, figure and article [Hub29] are in the public domain.

Here, $H_0$ is the so-called Hubble constant, which is named after Edwin Hubble, one of the key figures in modern astronomy. Hubble fitted an assumed linear dependence of the form (2.1) to observational data in 1929 and obtained as a rough estimate $H_0 \approx 500\ \mathrm{km/s/Mpc}$, see Figure 2.1. The interested reader is referred to the astronomy literature for details, see e.g. [Huc18] for a quick start. Two major lessons can be learnt from this example: (a) simple linear regression has been and continues to be a highly useful tool, even for very fundamental scientific questions, and (b) the predictive power of a fit depends strongly on the quality of the available data. The latter statement is evidenced by the fact that more recent estimates of the Hubble constant, based on more data of better quality, correspond to much lower values, $H_0 \approx 73.5\ \mathrm{km/s/Mpc}$ [Huc18]. Obviously, a result of the form (2.1) summarizes experimental or observational data in a descriptive fashion and allows us to formulate conclusions that we have drawn from available data. At the same time, it makes it possible to apply the underlying hypothesis to novel data. By doing so, we can test, confirm or falsify the model and its assumptions and detect the need for corrections. The topic of validating a given model will be addressed in greater detail in Chapter 7. Note that the following discussion is by no means intended to claim that linear regression is a genuine machine learning method. It goes back to at least Legendre and Gauss, see https://en.wikipedia.org/wiki/Linear_regression#History. Here, it merely serves as a relatively simple, illustrative example problem.
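A one-parameter homogeneous fit of the form (2.1) can be computed in closed form: the normal equations reduce to a simple ratio. The sketch below uses synthetic data (the numbers are made up for illustration, not Hubble's measurements) to recover an assumed proportionality constant:

```python
import numpy as np

# Least-squares estimate of a proportionality constant, v = H0 * d,
# on synthetic data with additive observational noise.
rng = np.random.default_rng(0)
d = rng.uniform(0.1, 2.0, size=50)            # distances (Mpc), made up
H0_true = 500.0                               # km/s/Mpc, as in the 1929 fit
v = H0_true * d + rng.normal(0.0, 60.0, 50)   # noisy velocities (km/s)

# For a single parameter and no intercept, minimizing the squared error
# sum((v - H0*d)^2) gives the closed-form estimate H0 = (d.v)/(d.d):
H0_hat = (d @ v) / (d @ d)
```

With more and better data the estimate concentrates around the true slope, which is exactly lesson (b) above: the predictive power of the fit depends on the quality of the observations.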
A heuristic machine learning approach

Equation (2.1) represents a simple linear dependence of a target function v(d) on a single variable $d \in \mathbb{R}$. In the more general setting of multiple linear regression, a target value $y(\xi)$ is assigned to a number of arguments which are concatenated in an N-dimensional vector $\xi \in \mathbb{R}^N$. In the standard setting of multiple linear regression, a set of examples

$$ D = \{\xi^\mu, y^\mu\}_{\mu=1}^{P} \quad \text{with } \xi^\mu \in \mathbb{R}^N,\ y^\mu \in \mathbb{R} \qquad (2.2) $$
is given. A hypothesis of the form

$$ f_H(\xi) = \sum_{i=1}^{N} w_i\, \xi_i = w^\top \xi = w \cdot \xi \quad \text{with } w \in \mathbb{R}^N \qquad (2.3) $$

is assumed to represent or approximate the dependence y(ξ) underlying the observed data set D. In analogy to other machine learning scenarios considered later, we will refer to the coefficients $w_i$ also as weights and combine them in a vector $w \in \mathbb{R}^N$. Depending on the context, any of the equivalent notations for the scalar product in Eq. (2.3) will be used. Note that a constant term could be incorporated formally without explicit modification of Eq. (2.3). This can be achieved by decorating every input vector with an additional clamped dimension $\xi_{N+1} = -1$ and introducing an auxiliary weight $w_{N+1} = \theta$:

$$ \tilde{\xi} = (\xi_1, \xi_2, \xi_3, \ldots, \xi_N, -1)^\top, \quad \tilde{w} = (w_1, w_2, w_3, \ldots, w_N, \theta)^\top \in \mathbb{R}^{N+1} \ \Rightarrow\ \tilde{w}^\top \tilde{\xi} = w^\top \xi - \theta. \qquad (2.4) $$

Any inhomogeneous hypothesis $f_H(\xi) = w^\top \xi - \theta$ including a constant term can be written, formally, as a homogeneous function in N + 1 dimensions for an appropriately extended input space. Hence, we will not consider constant contributions to the hypothesis $f_H$ explicitly in the following. A similar argument will be used later in the context of linearly separable classifiers. A quite intuitive approach to the selection of the model parameters, i.e. the weights w, is to consider the available data and to aim at a small deviation of $f_H(\xi^\mu)$ from the observed values $y^\mu$. Of the many possibilities to define and quantify this goal, the quadratic deviation or Sum of Squared Error (SSE) is probably the most frequently used one:

$$ E_{\mathrm{SSE}} = \frac{1}{2} \sum_{\mu=1}^{P} \left( f_H(\xi^\mu) - y^\mu \right)^2 \quad \text{where } f_H(\xi^\mu) = w^\top \xi^\mu \qquad (2.5) $$

and the sum is over all examples in D. The quadratic deviation disregards whether $f_H(\xi^\mu)$ is greater or lower than $y^\mu$. Note that the pre-factor 1/2 conveniently cancels out when taking derivatives with respect to the weights but is otherwise irrelevant. We will frequently replace the SSE by the Mean Squared Error (MSE), which is defined as $E_{\mathrm{MSE}} = E_{\mathrm{SSE}}/P$.
The constant factor 1/P is, of course, also irrelevant for the minimization and the properties of the optima. Necessary and sufficient conditions for the presence of a (local) minimum in a differentiable cost function are briefly summarized and discussed in Appendix A.2.2, where we also point to additional literature. Here, we consider only the necessary first order condition for a weight vector $w^*$ that minimizes $E_{\mathrm{SSE}}$:

$$ \nabla_w E_{\mathrm{SSE}} \big|_{w=w^*} \overset{!}{=} 0 \quad \text{with} \quad \nabla_w E_{\mathrm{SSE}} = \sum_{\mu=1}^{P} \left( w^\top \xi^\mu - y^\mu \right) \xi^\mu. \qquad (2.6) $$
Note that the SSE is also a popular objective function in the context of regression in multi-layered networks, see Chapter 5 and e.g. [Bis95a, Bis06, HTF01, HKP91, EB01]. With the convenient matrix and vector notation5

$$ Y = \left( y^1, y^2, \ldots, y^P \right)^\top \in \mathbb{R}^P, \quad \chi = \left[ \xi^1, \xi^2, \ldots, \xi^P \right]^\top \in \mathbb{R}^{P \times N} \qquad (2.7) $$

we can rewrite Eq. (2.6) and solve it formally:

$$ \chi^\top (\chi\, w^* - Y) \overset{!}{=} 0 \ \Rightarrow\ w^* = \underbrace{\left( \chi^\top \chi \right)^{-1} \chi^\top}_{\chi^+_{\mathrm{left}}} \, Y \qquad (2.8) $$

where $\chi^+_{\mathrm{left}}$ is the (left) Moore-Penrose pseudoinverse of the rectangular matrix χ [PP12, BH12]. Note that the solution can be written in precisely this form only if the (N × N) matrix $[\chi^\top \chi]$ is non-singular and, thus, $[\chi^\top \chi]^{-1}$ exists. This can only be the case for P > N, i.e. when the system of P equations $\{ w^\top \xi^\mu = y^\mu \}$ in N unknowns is over-determined and cannot be solved exactly. For the precise definition of the Moore-Penrose and other generalized inverses (also in the case P ≤ N) see e.g. the Matrix Cookbook [PP12] as a comprehensive source of information in the context of matrix manipulations. Heuristically, in the case of singular matrices $[\chi^\top \chi]$, one can enforce the existence of an inverse by adding a small contribution of the N-dimensional identity matrix $I_N$:

$$ w^*_\gamma = \left( \chi^\top \chi + \gamma\, I_N \right)^{-1} \chi^\top Y. \qquad (2.9) $$

Since the symmetric $[\chi^\top \chi]$ has only non-negative eigenvalues, the matrix on the r.h.s. of (2.9) is guaranteed to be non-singular for any γ > 0. We will re-visit the problem of linear regression in the context of perceptron training and more general regression later, see also Appendix A.3.2. There, we will also discuss the case of under-determined systems of solvable equations $\{ w^\top \xi^\mu = y^\mu \}_{\mu=1}^{P}$. In analogy to the above, it is straightforward to show that the resulting weights $w^*_\gamma$ correspond to the minimum of the modified objective function

$$ E_{\mathrm{SSE}}^{\gamma} = \frac{1}{2} \sum_{\mu=1}^{P} \left( f_H(\xi^\mu) - y^\mu \right)^2 + \frac{\gamma}{2}\, w^2. \qquad (2.10) $$

Hence, we have effectively introduced a penalty term, which favors weight vectors with smaller norm $|w|^2$.
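A minimal NumPy sketch of Eqs. (2.8) and (2.9) on synthetic data; the variable names (chi, Y, gamma) mirror the notation of the text, and the data itself is made up for illustration:

```python
import numpy as np

# Least-squares weights via the normal equations (Eq. 2.8) and the
# regularized variant with a small ridge term gamma (Eq. 2.9).
rng = np.random.default_rng(1)
P, N = 200, 5
chi = rng.normal(size=(P, N))                 # P example inputs as rows of chi
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = chi @ w_true + 0.1 * rng.normal(size=P)   # noisy linear targets

# Moore-Penrose solution; requires chi.T @ chi to be non-singular (P > N here):
w_star = np.linalg.solve(chi.T @ chi, chi.T @ Y)

# Ridge-regularized solution, guaranteed to exist for any gamma > 0:
gamma = 1e-3
w_ridge = np.linalg.solve(chi.T @ chi + gamma * np.eye(N), chi.T @ Y)
```

For well-conditioned data and small gamma the two solutions nearly coincide; the ridge term only becomes essential when chi.T @ chi is (nearly) singular.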
The concept is known as weight decay, see Sec. 7.2. Note that nearly singular matrices $[\chi^\top \chi]$ would lead to weights of large magnitude according to Eq. (2.8). This is our first encounter with regularization, i.e. the restriction of the search space in a learning problem with the goal of improving the outcome of the training process. In fact, weight decay is applied in a variety of problems and

5For better readability, χ is used instead of Ξ (the properly capitalized version of ξ).
is by no means restricted to linear regression. Other methods of regularization will be discussed in the context of overfitting in neural networks in Sec. 7.2. We will revisit linear regression again in later chapters and show that it can also be formulated as the minimization of $w^2$ under suitable constraints. This approach circumvents the problem of having to choose an appropriate weight decay parameter γ in Eqs. (2.9, 2.10).

The statistical modelling perspective

In a statistical modelling approach, we aim at explaining the observed data D in terms of an explicit model. To this end, we have to make and formalize certain assumptions. For instance, we can assume that the labels $y^\mu$ are generated independently according to a conditional density of the form

$$ p(y^\mu \,|\, \xi^\mu, w) = \mathcal{N}(y^\mu \,|\, w^\top \xi^\mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{1}{2\sigma^2} \left( y^\mu - w^\top \xi^\mu \right)^2 \right]. \qquad (2.11) $$

Hence, we assume that the observed targets essentially reflect a linear dependence but are subject to Gaussian noise:

$$ y^\mu = w^\top \xi^\mu + \sigma\, \eta^\mu \qquad (2.12) $$

with independent, random quantities $\eta^\mu$ satisfying $\langle \eta^\mu \rangle = 0$ and $\langle \eta^\mu \eta^\nu \rangle = \delta_{\mu\nu}$. In contrast to the previous, heuristic treatment, we start from an explicit assumption for how and why the observed values deviate from the linear dependence. In the following, we consider only w as parameters of our model, while σ is fixed. Extensions that include σ as an adaptive degree of freedom are very well possible but not essential for the comparison with the heuristic machine learning approach. In the simplest case we assume that example inputs $\xi^\mu$ are generated independently. Then, for a given model with weights w, the likelihood of observing a particular set of target values $Y = (y^1, y^2, \ldots, y^P)^\top$ factorizes:

$$ p(Y \,|\, w, \chi) = \prod_{\mu=1}^{P} p(y^\mu \,|\, \xi^\mu, w). \qquad (2.13) $$

The corresponding log-likelihood reads

$$ \log p(Y \,|\, \chi, w) = \sum_{\mu=1}^{P} \log p(y^\mu \,|\, \xi^\mu, w) = -\frac{P}{2} \log(2\pi\sigma^2) - \frac{1}{\sigma^2}\, \frac{1}{2} \sum_{\mu=1}^{P} \left( y^\mu - w^\top \xi^\mu \right)^2, \qquad (2.14) $$

where we inserted the Gaussian model (2.11). Now we note that the first term on the r.h.s.
is constant with respect to w. Furthermore, the second term is proportional to −ESSE as given in Eq. (2.5). We conclude that the weights w∗that explain the data with Maximum Likelihood under the assumption of model (2.11) are exactly those that minimize the SSE. Hence, we arrive at the same formal solution as given in (2.8).
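The equivalence can also be checked numerically. The following sketch (synthetic data, fixed σ as assumed in the text) evaluates the log-likelihood of Eq. (2.14) at the SSE minimizer and at a perturbed weight vector:

```python
import numpy as np

# Numerical check: the Gaussian log-likelihood (Eq. 2.14) is maximized by
# the same weights that minimize the SSE. Data and sigma are illustrative.
rng = np.random.default_rng(2)
P, N, sigma = 100, 3, 0.5
chi = rng.normal(size=(P, N))
Y = chi @ np.array([2.0, -1.0, 0.5]) + sigma * rng.normal(size=P)

def log_likelihood(w):
    """Eq. (2.14): constant term plus the (negative, scaled) SSE."""
    resid = Y - chi @ w
    return -0.5 * P * np.log(2 * np.pi * sigma**2) - resid @ resid / (2 * sigma**2)

w_sse = np.linalg.lstsq(chi, Y, rcond=None)[0]   # SSE minimizer

# Any other weight vector has a larger SSE, hence a lower likelihood:
assert log_likelihood(w_sse) > log_likelihood(w_sse + 0.1)
```

Since the first term of (2.14) does not depend on w, ranking weight vectors by likelihood and ranking them by (negative) SSE is one and the same thing.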
This correspondence of the Maximum Likelihood solution in the Gaussian model with a quadratic error measure is of course due to the specific mathematical form of the normal distribution and can be rediscovered in various other contexts. The assumption of Gaussian noise is rarely strictly justified, but it is very popular and appears natural in the absence of more concrete knowledge. Frequently, it yields practical methods and can be seen as the basis of popular techniques like Principal Component Analysis, mixture models for clustering, or Linear Discriminant Analysis [HTF01, Bis06]. Note, however, that the statistical approach is more flexible in the sense that we could, for instance, replace the conditional model density in (2.11) by an alternative assumption and proceed along the same lines to obtain a suitable objective function in terms of the associated likelihood. Moreover, it is possible to incorporate prior knowledge, or prior beliefs, into the formalism. If we had reason to assume that weights with low magnitude are more likely to occur, even before any data is observed, we could express this in terms of an appropriate prior density, for instance

$$ p_o(w) \propto \exp\!\left[ -\frac{1}{2\tau_o^2}\, w^2 \right]. \qquad (2.15) $$

Exploiting Bayes' Theorem, $P(A|B)\,P(B) = P(B|A)\,P(A)$, we obtain from the data likelihood $P(D|w)$:

$$ p(w|D) \propto p(D|w)\, p_o(w). \qquad (2.16) $$

With the proper normalization this represents the posterior probability of weights $p(w|D)$ after having seen a data set $D = \{\chi, Y\}$, taking into account the data-independent prior $p_o(w)$. Assuming the independent generation of $\xi^\mu$ again and inserting the particularly convenient Gaussian prior (2.15), we can write the logarithm of the posterior as

$$ \log\left[ p(w|D) \right] \propto -E_{\mathrm{SSE}} - \frac{\gamma}{2}\, w^2 + \mathrm{const.} \qquad (2.17) $$

with a suitable parameter γ that depends on σ and $\tau_o$ and is obtained easily by working out the logarithm of $p(w|D)$ from Eq. (2.16).
The important observation is that maximizing the posterior probability with respect to the set of weights w is equivalent to minimizing the objective function given in Eq. (2.10). Hence, the Maximum A Posteriori (MAP) estimate of the parameters w is formally identical with the MSE estimate when amended by an appropriate weight decay term. Not surprisingly, many different names have been coined for this form of regularization and its variants, including L2-regularization, Tikhonov regularization, and ridge regression [HTF01, Bis95a, Bis06, DHS00]. The above discussed Maximum Likelihood and MAP results are examples of so-called point estimates: one particular set of model parameters (here: w) is selected according to the specific criterion in use. The statistical modelling idea allows us to go even further: in the framework of Bayesian Inference [HTF01]
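A quick numerical sanity check of the MAP/ridge correspondence. Working out the logarithm of Eq. (2.16) with the Gaussian model and prior gives γ = σ²/τo² (this explicit relation is an assumption spelled out here, not stated in the text); the ridge solution of Eq. (2.9) should then be a stationary point of the log-posterior:

```python
import numpy as np

# Verify that the ridge solution (Eq. 2.9) with gamma = sigma^2 / tau_o^2
# has vanishing gradient of the negative log-posterior. Synthetic data.
rng = np.random.default_rng(3)
P, N, sigma, tau_o = 50, 4, 0.3, 1.0
chi = rng.normal(size=(P, N))
Y = chi @ rng.normal(size=N) + sigma * rng.normal(size=P)

gamma = sigma**2 / tau_o**2
w_ridge = np.linalg.solve(chi.T @ chi + gamma * np.eye(N), chi.T @ Y)

# Gradient of -log p(w|D): likelihood term plus prior term (Eqs. 2.14, 2.15).
grad = chi.T @ (chi @ w_ridge - Y) / sigma**2 + w_ridge / tau_o**2
# grad is numerically zero: w_ridge is exactly the MAP point estimate.
```

The check works because the negative log-posterior is strictly convex here, so the unique stationary point is the global MAP solution.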
we can consider all possible model settings at a time, yielding the posterior predictive probability

$$ p(y \,|\, \xi, D) \propto \int p(y \,|\, \xi, w)\, \underbrace{p(w|D)}_{\propto\, p(D|w)\, p_o(w)} \, d^N w. \qquad (2.18) $$

Properly normalized, this defines the probability of response y(ξ) to an arbitrary (novel) input ξ after having seen the data set D. It is obtained as an integral over the specific model responses $p(y|\xi, w)$ given a particular w, but integrated over all possible models with the posterior $p(w|D)$ as a weighting factor. The formalism yields a probabilistic assignment of the target y, which also makes it possible to quantify the associated uncertainty due to the data-dependent variability of the model parameters. On the one hand, this constitutes an appealing advantage over the simpler point estimates. On the other hand, the full formalism can be quite involved in practice. Frequently, one resorts to convenient parametric forms of model and prior densities and/or derives easy-to-handle (e.g. Gaussian) approximations of the posterior predictive distribution (2.18).

2.2.3 Conclusion

Heuristic machine learning and statistical modelling based approaches differ significantly in terms of their conceptual foundations. However, in practice, these differences often blur or play a minor role. Very often, seemingly purely heuristic methods of machine learning can be derived from the statistical inference perspective under suitable model assumptions. Frequently, training algorithms and methods are very similar if not identical. Many statistics based methods can be used for the main goals of machine learning, i.e. prediction and generalization. At the same time, sophisticated machine learning techniques can also aim at understanding and explaining the observed data, a goal which is frequently attributed to statistical learning. In summary, the author's recommendation is to acquire knowledge of both statistical modelling and heuristically motivated machine learning.
We should use the best of both complementary frameworks without being dogmatic or religious in preferring one over the other. The never-ending debates and claims of superiority by both communities are essentially useless and even counterproductive. Instead, efforts should be combined in order to achieve a better understanding of data analysis and learning.
Chapter 3
The Perceptron

The perceptron has shown itself worthy despite (and even because of!) its severe limitations. It has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. — Marvin Minsky and Seymour Papert in [MP69]

3.1 History and literature

The term perceptron is used in a variety of meanings. Throughout this text, however, it will exclusively refer to a system representing inputs $\xi \in \mathbb{R}^N$ in a layer of units, which are connected to a single binary output unit of the McCulloch-Pitts type. Its activation $S(\xi) \in \{-1, +1\}$ is taken to represent two possible responses.1 It corresponds to a linear threshold classifier or – in feed-forward network jargon – to an N–1 architecture with a single binary output that does not comprise hidden units or layers. Other well-known linear threshold classifiers are Linear Discriminant Analysis (LDA), the Naive Bayes classifier, and the popular Logistic Regression (LR). Overviews and comparisons can be found in e.g. [DHS00, HTF01, Bis95a, Mur22]. These classical methods are based on concepts of statistical modelling and differ mostly in the way their parameters are determined or constructed from a given data set. In the literature, more general architectures with several layers and/or continuous output are often referred to as (multilayer, soft, ...) perceptrons. We will later consider layered networks which are constructed from perceptron-like

1Of course, any binary output, e.g. S ∈ {0, 1}, could serve the same purpose.
units, but in these lecture notes the term perceptron always refers to the single layer, binary classifier. Even the very simple, limited perceptron architecture is of interest for a multitude of reasons:

◦ Pioneered by Frank Rosenblatt [Ros58, Ros61], the perceptron has been one of the earliest, very successful machine learning concepts and devices, and it was even realized in hardware, see Figure 3.1.

◦ Rosenblatt also suggested an algorithm for perceptron training, which is guaranteed to converge, provided a suitable solution exists. The corresponding Perceptron Convergence Theorem is one of the most fundamental results in machine learning and has contributed largely to the initial popularity and success of the field. It will be presented and proven in Sec. 3.3.3.

◦ It serves as a prototypical model system that provides theoretical, mathematical and intuitive insights into the basic mechanisms of machine learning. At the same time it is a building block from which to construct more powerful systems. As Manfred Opper [Opp90] put it: "The perceptron is the hydrogen atom of neural network research."

◦ In its modern, conceptually extended re-incarnation, the Support Vector Machine (SVM) [SS02, CST00, STC04, Her02, DFO20], the perceptron continues to be used successfully in a large variety of practical applications. The precise relation of the SVM to the simple perceptron will be discussed in great detail in Section 4.3.

◦ The history of the perceptron provides insights into how the scientific community deals with high expectations and disillusionments leading to the extreme over-reaction of stalling an entire field of research [Ola96].

Several original texts from the early days of the perceptron are available in the public domain. This includes an original article from 1958 [Ros58], the highly interesting official Manual of the Perceptron Mark I hardware [HRM+60] and Rosenblatt's monograph Principles of Neurodynamics [Ros61].
An interesting TV documentary is available at [You07]. The so-called Perceptron Controversy and its perception and long-lasting impact on the machine learning community is analysed in an article entitled A Sociological Study of the Official History of the Perceptron Controversy by M. Olazaran [Ola96]. Clear presentations of the Rosenblatt algorithm can be found in virtually all texts that cover the perceptron. Discussions which are quite close to these lecture notes (apart from notation issues) are given in the monographs by J.A. Hertz, A. Krogh and R.G. Palmer [HKP91] and by S. Haykin in [Hay09], for instance. The counting argument for the number of linearly separable functions, see Sec. 3.4, is presented in e.g. [HKP91]. In this context, it should be useful to
Figure 3.1: The Mark I Perceptron. Left: Hardware realization at Cornell Aeronautical Laboratory. Photo reproduced with kind permission from Cornell University Library.2 The input of the Mark I was realized via a retina of 400 photosensors. Adaptive weights were represented by potentiometers that could be tuned by electric motors. Right: Schematic outline of the Mark I architecture, based on a figure taken from [HRM+60]. The triangular, shaded region (added to the original illustration) marks a subset of units that is commonly referred to as the perceptron in this text.

consult the original publications by R. Winder [Win61], T.M. Cover [Cov65] and G.J. Mitchison and R.M. Durbin [MD89a]. The latter work also presents the extension of the counting argument to two-layered networks (machines) with K hidden units.

3.2 Linearly separable functions

The perceptron can be viewed as the simplest feed-forward neural network. It responds to real-valued inputs $\xi \in \mathbb{R}^N$ in terms of a binary output $S \in \{-1, +1\}$. The response of a perceptron with weight vector w is obtained by applying a threshold operation to the weighted sum of inputs:

$$ S_{w,\theta}(\xi) = \mathrm{sign}\,(w \cdot \xi - \theta) = \pm 1, \qquad (3.1) $$

where the N-dimensional weight vector and the threshold θ parameterize the specific input/output relation. Its mathematical structure suggests an immediate geometrical interpretation of the perceptron, which is illustrated in Fig. 3.3: the set of points in feature space

$$ \left\{ \hat{\xi} \in \mathbb{R}^N \ \Big|\ \left( w \cdot \hat{\xi} - \theta \right) = 0 \right\} \qquad (3.2) $$

2Cornell University News Service records, #4-3-15. Division of Rare and Manuscript Collections, Cornell University Library. See https://digital.library.cornell.edu/catalog/ss:550351.
Figure 3.2: Illustration of the single layer perceptron with N-dimensional inputs $\xi \in \mathbb{R}^N$, weights $w \in \mathbb{R}^N$, and a binary output $S = \mathrm{sign}(w \cdot \xi - \theta) = \pm 1$ of the McCulloch-Pitts type.

corresponds to a (hyper-)plane orthogonal to w with an offset $\theta / |w|$ from the origin3. Inputs with $w \cdot \xi > \theta$ result in perceptron output +1, while vectors ξ with $w \cdot \xi < \theta$ yield the response −1. Hence, the perceptron realizes a linearly separable (lin. sep.) function: feature vectors with perceptron output +1 are separated by the hyperplane (3.2) from those with output −1. Two cases can be distinguished: input/output relations of the form (3.1) with θ ≠ 0 are called inhomogeneously lin. sep., while homogeneously lin. sep. functions can be written as

$$ S_w(\xi) = \mathrm{sign}\,(w \cdot \xi). \qquad (3.3) $$

For the latter, the corresponding hyperplane, cf. Fig. 3.3, has no offset (θ = 0) and includes the origin. In the following, we will mostly focus on homogeneously linearly separable functions. This does not constitute an essential restriction because any inhomogeneously lin. sep. function can be interpreted as a homogeneous one in a higher dimensional space: consider the function $S_{w,\theta}(\xi) = \mathrm{sign}\,(w \cdot \xi - \theta)$ with $w, \xi \in \mathbb{R}^N$. Now let us define the modified (N+1)-dimensional weight vector $\tilde{w} = (w_1, w_2, \ldots, w_N, \theta)^\top$ and augment all feature vectors by an auxiliary, "clamped" input dimension: $\tilde{\xi} = (\xi_1, \xi_2, \ldots, \xi_N, -1) \in \mathbb{R}^{N+1}$. We observe that $\tilde{w} \cdot \tilde{\xi} = w \cdot \xi - \theta$ and, thus,

$$ \mathrm{sign}\left( \tilde{w} \cdot \tilde{\xi} \right) = \mathrm{sign}\,(w \cdot \xi - \theta). \qquad (3.4) $$

As a consequence, a non-zero threshold θ in N dimensions can always be rewritten as an additional weight in a trivially augmented feature space and, formally, the two cases can be treated on the same grounds. Note that the argument is analogous to the formal inclusion of a constant term in multiple linear regression, see Eq. (2.4).
Later, we will encounter subtleties which require more precise considerations, but for now we will restrict ourselves to homogeneous functions and simply refer to them as linearly separable for brevity. 3The distance of the plane from the origin is exactly θ in case of normalized w with |w| = 1.
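The augmentation trick of Eq. (3.4) is easy to verify numerically. The following sketch, with an arbitrary weight vector and threshold (all names are illustrative, not taken from the text), checks that the augmented homogeneous perceptron reproduces the inhomogeneous one:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# An arbitrary inhomogeneous perceptron (w, theta) in N dimensions:
w = rng.normal(size=N)
theta = 0.7

# Eq. (3.4): absorb the threshold into an (N+1)-st weight ...
w_tilde = np.append(w, theta)            # w~ = (w_1, ..., w_N, theta)

for _ in range(100):
    xi = rng.normal(size=N)
    xi_tilde = np.append(xi, -1.0)       # ... paired with a "clamped" input of -1
    # w~ . xi~ = w . xi - theta, so both perceptrons respond identically:
    assert np.isclose(w_tilde @ xi_tilde, w @ xi - theta)
    assert np.sign(w_tilde @ xi_tilde) == np.sign(w @ xi - theta)
```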
Figure 3.3: Geometrical interpretation of the perceptron. The hyperplane orthogonal to w with offset θ from the origin separates feature vectors with output S = +1 and S = −1, respectively.

In the following, we will also call a set of P input/output pairs

D = {ξ^µ, S^µ_T}_{µ=1}^P    (3.5)

(homogeneously) linearly separable, if at least one weight vector w exists with

S^µ_w = sign(w · ξ^µ) = S^µ_T for all µ = 1, 2, ..., P,    (3.6)

where we use the shorthand notation S^µ_w ≡ S_w(ξ^µ). The labels in D are denoted by S_T = ±1, with the subscript T for target or training.

A number of interesting and important questions related to linear separability come to mind:

(Q1) Given a linearly separable data set, (how) can we find a perceptron weight vector w that satisfies (3.6)?

(Q2) Given an arbitrary data set with binary target labels, can we determine whether it is linearly separable without (before) attempting to train a perceptron? Are there efficient iterative algorithms which indicate (at an early stage) that a data set is not linearly separable?

(Q3) How serious is the restriction to linear separability in the space of binary target functions? How many linearly separable functions exist, i.e. in how many lin. sep. ways can we label P input vectors in N dimensions?

(Q4) Can we learn an underlying linearly separable rule from the examples contained in D? How does the realization or “storage” of the given labels in D relate to the learning of the unknown rule?

(Q5) If – for a lin. sep. D – several or many vectors w satisfy the conditions (3.6), which one is the best? What is a meaningful measure of quality and how can the corresponding optimal weight vector be found?

(Q6) If D is not separable, can we still approximate the target classification by means of a perceptron? Which alternatives or extensions exist?
Most of these questions (Q1–Q5) will be addressed and answered in the forthcoming sections, while Chapter 4 deals with the realization or approximation of classification schemes beyond linear separability (Q6).
3.3 The Rosenblatt perceptron

To a large extent, the success of the perceptron has been due to the existence of a training algorithm and the associated convergence theorem, both presented by Frank Rosenblatt [Ros58, Ros61]. In the following, we first precisely define the basic goal of the training process, outline the general form of iterative perceptron algorithms and present Rosenblatt's algorithm. As a key result, we reproduce the corresponding proof of convergence, eventually.

3.3.1 The perceptron storage problem

Here we address question (Q1) of the list given in the previous section. First we consider the task of reproducing the labels of a given data set of form (3.5) through a perceptron. We define the so-called perceptron storage problem (PSP) as

Perceptron Storage Problem (I)    (3.7)
For a given D = {ξ^µ, S^µ_T}_{µ=1}^P with ξ^µ ∈ R^N and S^µ_T ∈ {−1, +1},
find a vector w ∈ R^N with sign(w · ξ^µ) = S^µ_T for all µ = 1, 2, ..., P.

The term storage refers to the fact that we are not (yet) aiming at the application of the function S_w(ξ) to vectors ξ ∉ D. We are only interested in reproducing the correct assignment of labels within the data set by means of a perceptron network. Alternatively, this aim could be achieved by storing D in a memory table and looking up the correct S^µ_T when needed.

In order to rewrite the PSP we note that sign(w · ξ) = S ⇔ w · ξ S > 0. Defining the so-called local potentials (the term indicates a vague relation to the membrane potentials, cf. Sec. 1.1)

E^µ = w · ξ^µ S^µ_T for µ = 1, 2, ..., P,    (3.8)

we obtain an equivalent formulation of the PSP in terms of a set of inequalities:

Perceptron Storage Problem (II)    (3.9)
For a given D = {ξ^µ, S^µ_T}_{µ=1}^P with ξ^µ ∈ R^N and S^µ_T ∈ {−1, +1},
find a vector w ∈ R^N with E^µ ≥ c > 0 for all µ = 1, 2, ..., P.

Here, we have introduced a constant c > 0 as a margin in terms of the conditions E^µ > 0.
Note that the actual value of c is essentially irrelevant: Consider vectors w_1 and w_2 = λ w_1 with λ > 0. Due to the linearity of the scalar product,

E^µ_1 = w_1 · ξ^µ S^µ_T ≥ c > 0 implies E^µ_2 = w_2 · ξ^µ S^µ_T ≥ λc > 0.    (3.10)
The existence of weights w with all E^µ ≥ c > 0 also implies that a solution for any other positive constant can be constructed. This is a consequence of the fact that the function S_w(ξ) = sign(w · ξ) only depends on the direction of w in N-dim. feature space, while it is invariant under changes of the norm |w|.

3.3.2 Iterative Hebbian training algorithms

In order to answer question (Q1) in Sec. 3.2, we consider iterative learning algorithms which present a single example {ξ^{ν(t)}, S^{ν(t)}_T} at time step t of the training process. Often, the sequence of examples corresponds to repeated cyclic presentation, i.e.⁴

ν(t) = 1, 2, 3, ..., P, 1, 2, 3, ...,    (3.11)

where each loop through the examples in D is called an epoch in the literature. A frequently used alternative is random sequential presentation, where at each time step t one of the examples in D is selected with equal probability 1/P. The specific form of perceptron updates we consider in the following is:

Generic iterative perceptron updates (weights)
at discrete time step t:
- determine the index ν(t) of the current training example
- compute the local potential E^{ν(t)} = w(t) · ξ^{ν(t)} S^{ν(t)}_T
- update the weight vector according to
w(t+1) = w(t) + (1/N) f(E^{ν(t)}) ξ^{ν(t)} S^{ν(t)}_T.    (3.12)

The scaling of the update with N is arbitrary and is used here to achieve consistency with the literature. In order to turn (3.12) into a practical training algorithm, the prescription has to be completed by specifying the initial condition w(0) and by defining a stopping criterion, obviously. Together with the definition of the sequence ν(t), the so-called modulation function f(...) determines the actual training algorithm; we assume here that it depends only on the local potential of the actual training example.
Note that (3.12) constitutes a realization of Hebbian learning: the change of a component w_j of the weight vector is proportional to the “pre-synaptic” input ξ^µ_j and the “post-synaptic” output S^µ_T.

As a consequence, the weight vector accumulates Hebbian terms ξ^µ S^µ_T, starting from the given initialization w(0). Most frequently, we will consider a so-called tabula rasa initialization, i.e. w(0) = 0. In this case, after performing

⁴Formally, this can be represented by the function ν(t) = mod[(t−1), P] + 1 for t ∈ Z⁺.
updates at time steps t = 1, 2, ..., τ, the weight vector is bound to have the form

w(τ) = (1/N) Σ_{µ=1}^P x^µ(τ) ξ^µ S^µ_T.    (3.13)

This implies that the resulting perceptron weight vector is a linear combination of the vectors ξ^µ ∈ D, and the so-called embedding strengths x^µ(τ) ∈ R quantify their specific contributions.

Assuming that w(0) = 0, it is also possible to rewrite the update (3.12) in terms of the embedding strengths. At the end of the training process, the actual weight vector can be constructed according to Eq. (3.13). Hence, the following formulation is equivalent to (3.12):

Generic iterative perceptron updates (embedding strengths)
at discrete time step t:
- determine the index ν(t) of the current training example
- compute the local potential E^{ν(t)} = w(t) · ξ^{ν(t)} S^{ν(t)}_T
- update the embedding strength x^{ν(t)} according to
x^{ν(t)}(t+1) = x^{ν(t)}(t) + f(E^{ν(t)})    (3.14)
(all other embedding strengths remain unchanged at time t).

Many perceptron algorithms can be formulated directly in the weights, cf. (3.12), or in terms of the embedding strengths as in (3.14). While in principle equivalent, we note that the number of variables used to represent the perceptron during training is N in the weight vector formulation and P if embedding strengths are updated. Thus, in practice, the computational efficiency of training and the corresponding storage needs will depend on the ratio P/N.

Note that the simple correspondence between (3.12) and (3.14) can be lost if the structure of the actual updates is modified. Constraints of the form x^µ ≥ 0 are imposed, for instance, in the AdaTron algorithm [AB89, BAK91] for the perceptron of optimal stability, see Sec. 3.6. This prevents a straightforward formulation of the training scheme as an iteration in weight space. Of course, one can always construct the weight vector via relation (3.13) if needed.
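The equivalence of the weight-space update (3.12) and the embedding-strength update (3.14) under tabula rasa initialization can be checked numerically. A minimal sketch, assuming random Gaussian data and using a simple Heaviside modulation as an example f (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 10, 25

xi = rng.normal(size=(P, N))            # feature vectors xi^mu
S = rng.choice([-1.0, 1.0], size=P)     # target labels S^mu_T

f = lambda E: float(E <= 0.0)           # example modulation function

w = np.zeros(N)                         # weight-space representation, w(0) = 0
x = np.zeros(P)                         # embedding strengths, x^mu(0) = 0

for t in range(200):
    nu = t % P                          # cyclic presentation, cf. (3.11)
    E = w @ xi[nu] * S[nu]              # local potential E^nu
    w = w + f(E) * xi[nu] * S[nu] / N   # update (3.12)
    x[nu] = x[nu] + f(E)                # update (3.14)
    # Eq. (3.13): w is the embedding-strength combination of Hebbian terms
    assert np.allclose(w, (x[:, None] * xi * S[:, None]).sum(axis=0) / N)
```

Both representations accumulate identical Hebbian terms, so the reconstruction (3.13) holds after every single step.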
3.3.3 The Rosenblatt perceptron algorithm

In terms of the generic algorithm (3.12, 3.14), the Rosenblatt perceptron algorithm is specified by

- tabula rasa initial conditions: w(0) = 0 or, equivalently, {x^µ(0) = 0}_{µ=1}^P,
- deterministic, cyclic presentation of the examples in D according to (3.11),
- and the modulation function

f(E^µ) = Θ[c − E^µ] = { 0 if E^µ > c; 1 if E^µ ≤ c }    (3.15)

with the Heaviside function Θ[x] = 1 for x ≥ 0 and Θ[x] = 0 else. In summary, the prescriptions (3.12) and (3.14) become:

Rosenblatt perceptron algorithm    (3.16)
at discrete time step t:
- determine the index ν(t) of the current example according to (3.11)
- compute the local potential E^{ν(t)} = w(t) · ξ^{ν(t)} S^{ν(t)}_T
- update the weight vector according to
w(t+1) = w(t) + (1/N) Θ[c − E^{ν(t)}] ξ^{ν(t)} S^{ν(t)}_T    (3.17)
[or, equivalently, increment the corresponding embedding strength
x^{ν(t)}(t+1) = x^{ν(t)}(t) + Θ[c − E^{ν(t)}]    (3.18)
(all other embedding strengths remain unchanged at time t)]

The update (3.16) modifies w(t) only if the example input is misclassified by the current weight vector or correctly classified with E^{ν(t)} ≤ c. In this case, a Hebbian term is added. The quantity x^{ν(t)} remains unchanged or increases by 1 in every update step (3.18). Consequently, for tabula rasa initialization, the resulting embedding strengths are non-negative integers.

Frequently, the simple setting c = 0 is considered and the resulting scheme is often called the (Rosenblatt) perceptron algorithm. The underlying principle of adding a Hebbian term for misclassified examples is referred to as “learning from mistakes”. It is the basis of several other training algorithms discussed in forthcoming sections. The (c = 0)-algorithm stops as soon as all examples in D are correctly classified: the evaluation of the modulation function yields Θ[−E^µ] = 0 in all forthcoming update steps.
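A compact implementation of the scheme may look as follows. This is an illustrative sketch, not code from the text: the data set is generated by a random "teacher" vector and is therefore linearly separable by construction, and the function name is hypothetical.

```python
import numpy as np

def rosenblatt(xi, S, c=0.0, max_epochs=1000):
    """Rosenblatt algorithm (3.16): tabula rasa, cyclic presentation."""
    P, N = xi.shape
    w = np.zeros(N)                          # w(0) = 0
    x = np.zeros(P)                          # embedding strengths
    for epoch in range(max_epochs):
        updated = False
        for mu in range(P):                  # one epoch, cf. (3.11)
            E = w @ xi[mu] * S[mu]           # local potential (3.8)
            if E <= c:                       # Theta[c - E] = 1
                w = w + xi[mu] * S[mu] / N   # Hebbian term (3.17)
                x[mu] += 1.0                 # embedding update (3.18)
                updated = True
        if not updated:                      # all E^mu > c: stop
            return w, x, epoch
    return w, x, max_epochs

# Linearly separable toy data, generated by a random "teacher" vector:
rng = np.random.default_rng(2)
P, N = 50, 20
xi = rng.normal(size=(P, N))
S = np.sign(xi @ rng.normal(size=N))

w, x, epochs = rosenblatt(xi, S)
assert np.all(np.sign(xi @ w) == S)          # storage problem (3.7) solved
assert np.all(x == np.round(x)) and np.all(x >= 0)   # integer strengths
```

As expected from the discussion above, the embedding strengths come out as non-negative integers counting the Hebbian contributions of each example.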
The algorithm is illustrated in terms of a simple two-dimensional feature space and a data set D comprising six labeled inputs, see Fig. 3.4. In this specific case, the perceptron classifies all feature vectors in
Figure 3.4: Rosenblatt perceptron algorithm. Illustration of the training scheme with c = 0 in (3.16) in a two-dimensional feature space (N = 2), with panels for t = 0, 1, ..., 6. The corresponding updates are:

t = 0: w(0) = 0 (all x^µ = 0)
t = 1: w(1) = ½ ξ^1 (x^1 → 1)
t = 2: w(2) = w(1) (zero update)
t = 3: w(3) = w(2) + ½ ξ^3 (x^3 → 1)
t = 4: w(4) = w(3) (zero update)
t = 5: w(5) = w(4) − ½ ξ^5 (x^5 → 1)
t = 6: w(6) = w(5) (algorithm terminates)

A set of six examples is presented sequentially. Empty circles represent feature vectors labeled with S_T = −1, filled circles mark data from class S_T = +1. Initial conditions correspond to w(0) = 0 (tabula rasa). Only one epoch of training is considered with ν(t) = t = 1, 2, ..., 6. At each time step, ξ^t is marked by a shaded circle. The current weight vector is either updated by adding a Hebbian term if example t is misclassified (time steps t = 1, 3, 5) or it remains unchanged if the current classification is correct already (time steps t = 2, 4, 6). We refer to the latter as zero updates. Actual non-zero updates are given by the addition (S^t_T = +1) or subtraction (S^t_T = −1) of (1/N) ξ^t, which is displayed as an arrow in the illustration. The resulting weight vector is shown in the next time step. For the specific data set considered here, all examples are correctly classified after one epoch already. In general, the data set has to be presented several times before the Rosenblatt algorithm terminates.
D correctly, and the algorithm stops already after one sweep through the data set, i.e. one epoch of training.

We will show in Sec. 3.3.5 that the Rosenblatt algorithm (3.15) converges in a finite number of steps and finds a weight vector that solves the perceptron storage problem (3.7, 3.9), provided the given data set is indeed linearly separable.

3.3.4 The perceptron algorithm as gradient descent

We have introduced the Rosenblatt algorithm more or less intuitively as a form of iterative Hebbian learning. Alternatively, we can motivate it as the minimization of the cost function

E(w) = Σ_{µ=1}^P e^µ(w) = (1/N) Σ_{µ=1}^P Θ[c − w · ξ^µ S^µ_T] (c − w · ξ^µ S^µ_T),    (3.19)

which is written as a sum over contributions e^µ of the given examples in the data set. The example-specific term is zero if the data point is classified correctly with w · ξ^µ S^µ_T > c, and it contributes the linear costs (c − w · ξ^µ S^µ_T) > 0 otherwise. Note that the objective function (3.19) is frequently referred to as the hinge loss, see for instance [HTF01].

Strictly speaking, the function E is not differentiable wherever w · ξ^µ S^µ_T = c for one or several examples. In order to cope with this difficulty one can resort to the sub-derivative or sub-gradient method, see [Bot04] for a more detailed discussion. From a practical point of view, we can ignore this subtlety here and devise a gradient-based minimization as outlined in Appendix A.4 and Sec. 5.2.3 by setting

∇_w e^µ = −(1/N) Θ[c − w · ξ^µ S^µ_T] ξ^µ S^µ_T    (3.20)

for a single example. Here ∇_w denotes the gradient with respect to the weight vector w ∈ R^N. Hence the update (3.17) can be written as

w(t+1) = w(t) − ∇_w e^{ν(t)} |_{w=w(t)},    (3.21)

where the currently considered example is given by ν(t) according to (3.11). An update step along the negative gradient tends to decrease e^{ν(t)} and, thus, the total cost function E.
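The correspondence between the Hebbian update term and the (sub)gradient (3.20) can be verified by finite differences, away from the kink of the hinge-type cost. A small sketch under these assumptions (all names illustrative):

```python
import numpy as np

N, c = 8, 0.5
rng = np.random.default_rng(3)
xi = rng.normal(size=N)
S = 1.0

def e_mu(w):
    """Single-example cost contribution from (3.19)."""
    E = w @ xi * S
    return (c - E) / N if E < c else 0.0

def num_grad(w, eps=1e-6):
    """Central finite-difference gradient (valid away from the kink E = c)."""
    g = np.zeros(N)
    for j in range(N):
        d = np.zeros(N); d[j] = eps
        g[j] = (e_mu(w + d) - e_mu(w - d)) / (2 * eps)
    return g

# Case 1: misclassified example (E < c): gradient is -xi S / N, cf. (3.20)
w = -xi * S                      # gives E = -|xi|^2 < 0 < c
assert np.allclose(num_grad(w), -xi * S / N, atol=1e-5)

# Case 2: safely classified example (E > c): zero gradient, zero update
w = 2 * c * xi * S / (xi @ xi)   # gives E = 2c > c
assert np.allclose(num_grad(w), 0.0, atol=1e-5)
```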
We will encounter a number of learning algorithms in quite diverse settings which are guided by the gradient of an objective function with respect to the entire data set or single example contributions as above, see Appendix A.4 for a general discussion.

The Rosenblatt algorithm resembles the minimization of E(w) by stochastic gradient descent (SGD), which is presented and discussed more generally in Sec. 5.2.3 and in Chapter 5. In contrast to SGD, however, the original prescription as suggested by Rosenblatt and discussed here presents examples
in the deterministic sequential order (3.11). The formulation in terms of embedding strengths has the structure of coordinate descent as discussed in Appendix A.5.1: at every step, only one x^µ is updated. However, its potential increment (3.18) is not given by the derivative ∂E/∂x^µ of E in Eq. (3.19).⁵

Another important difference to the generic SGD is that, due to the specific structure of the problem, it is not necessary to introduce and fine-tune a learning rate in order to achieve convergence. In fact, one can show that for tabula rasa initialization w(0) = 0, a constant learning rate η as in

w(t+1) = w(t) − η ∇_w e^{ν(t)} |_{w=w(t)} = w(t) + (η/N) Θ[c − E^{ν(t)}] ξ^{ν(t)} S^{ν(t)}_T

would simply rescale the resulting embedding strengths, and thus also the weight vector, by a factor η as compared to (3.21). This would affect neither the convergence behavior nor the resulting classification scheme. Note, however, that the influence of a time-dependent η(t) and its interplay with a potential normalization of the weight vector is non-trivial, see for instance [BSS94].

While the interpretation as a gradient-based method helps to relate the Rosenblatt algorithm to other machine learning schemes, we will not make use of it explicitly in the following section. For linearly separable data sets, convergence can be shown explicitly without referring to gradient descent.

3.3.5 The Perceptron Convergence Theorem

The Perceptron Convergence Theorem is one of the most important, fundamental results in the field. The guaranteed convergence of the Rosenblatt perceptron algorithm for linearly separable problems has played a key role for the popularity of the perceptron framework:

Perceptron Convergence Theorem (PCT)    (3.22)
For linearly separable data D = {ξ^µ, S^µ_T}_{µ=1}^P, the Rosenblatt perceptron algorithm stops after a finite number of update steps (3.17) or (3.18) and yields a weight vector w with w · ξ^µ S^µ_T ≥ c > 0 for all µ = 1, 2, ..., P.
In the following we outline the proof of convergence. We consider a linearly separable D = {ξ^µ, S^µ_T}_{µ=1}^P, which implies that at least one solution w* of the Perceptron Storage Problem (3.7, 3.9) exists with

sign(w* · ξ^µ) = S^µ_T or, equivalently, E^µ* = w* · ξ^µ S^µ_T ≥ c > 0 for all µ = 1, 2, ..., P    (3.23)

for some positive constant c.

⁵It is left to the reader to work out the derivative explicitly.
We do not have to further specify w* here. In fact, for a given D there could be many solutions of the form (3.23), but here it is sufficient to assume the existence of at least one. We will furthermore denote its squared norm as

Q* ≡ w* · w* = |w*|².    (3.24)

Note that any pair of vectors w, w* ∈ R^N satisfies

0 ≤ (w · w*)² / (|w*|² |w|²) = cos² ∠{w, w*} ≤ 1.    (3.25)

As discussed above, the algorithm yields - after t time steps - a weight vector of the form

w(t) = (1/N) Σ_{µ=1}^P x^µ(t) ξ^µ S^µ_T for w(0) = 0.    (3.26)

In the Rosenblatt algorithm, the quantity x^µ(t) is an integer that counts how often example µ has contributed a Hebbian term to the weights, cf. Sec. 3.3.3. The total number of non-zero updates is, therefore, given by

M(t) = Σ_{µ=1}^P x^µ(t).    (3.27)

Now let us consider the projection R(t) = w(t) · w*. Inserting (3.26) and exploiting the condition (3.23), i.e. E^µ* ≥ c, we obtain the following lower bound:

R(t) = (1/N) Σ_{µ=1}^P x^µ(t) [w* · ξ^µ S^µ_T] = (1/N) Σ_{µ=1}^P x^µ(t) E^µ* ≥ (1/N) c M(t).    (3.28)

Similarly, we consider the squared norm Q(t) = w(t) · w(t) of the trained weight vector. At time step t with presentation of example ν(t) it changes as

Q(t+1) = ( w(t) + (1/N) Θ[c − E^{ν(t)}] ξ^{ν(t)} S^{ν(t)}_T )²
       = Q(t) + (2/N) Θ[c − E^{ν(t)}] E^{ν(t)} + (1/N²) Θ²[c − E^{ν(t)}] |ξ^{ν(t)}|².    (3.29)

In any finite data set D, one of the examples will have the largest norm. We can therefore always identify the quantity

Γ ≡ (1/N) max_µ { |ξ^µ|² }_{µ=1}^P,    (3.30)

where the scaling with the dimension N is convenient in the following⁶. Next, we observe that Θ²(x) = Θ(x) for all x. Furthermore we note that

⁶We could also consider the simpler, less general case of normalized inputs |ξ^µ|² = ΓN.
Θ[c − E^{ν(t)}] = 0 and E^{ν(t)} > c in a zero learning step, while Θ[c − E^{ν(t)}] = 1 and E^{ν(t)} ≤ c in a non-zero learning step. As a consequence, we can replace all E^{ν(t)} by c in Eq. (3.29) to obtain the upper bound

Q(t+1) ≤ Q(t) + (2/N) c Θ[c − E^{ν(t)}] + (1/N) Γ Θ[c − E^{ν(t)}].    (3.31)

Here we exploit the fact that Q changes only in non-zero updates with Θ[...] = 1. Taking into account the initial value Q(0) = 0, we can conclude that

Q(t) ≤ (1/N) (2c + Γ) M(t),    (3.32)

where M(t) is the number of non-zero changes of Q. In summary, we have obtained the two bounds

R(t) ≥ (1/N) c M(t) and Q(t) ≤ (1/N) (2c + Γ) M(t),    (3.33)

respectively. Exploiting Eq. (3.25) we can write

1 ≥ (w(t) · w*)² / (|w(t)| |w*|)² = R²(t) / (Q* Q(t)) ≥ [(1/N²) c² M²(t)] / [Q* (1/N) (2c + Γ) M(t)] = c² M(t) / (Q* N (2c + Γ)).    (3.34)

We conclude that

M(t) ≤ M* = (2c + Γ) N Q* / c²,    (3.35)

where the right-hand side involves only constants: N and Γ are obtained directly from the given data set. The constants c and Q* characterize the assumed solution w*, which is - of course - unknown a priori. However, in any case, Eq. (3.35) implies that the number of non-zero learning steps remains finite if a solution w* exists for the given D. In other words, after at most M* non-zero learning steps, the perceptron classifies all examples correctly with E^µ ≥ c > 0. The number of required training epochs is also upper-bounded by M*, because - as long as the algorithm does not stop - at least one non-zero step must occur in every epoch.

The dependence of M* on the constant c deserves further attention: In the limit c → 0, the upper bound appears to diverge (M* → ∞). However, if D is linearly separable, solutions w* with Q* ∝ c² can be found for small values of c. This is due to the linear dependence of E^µ* on |w*| = √Q*, which we already discussed in the context of Eq. (3.10). In the limit c → 0 with Q* ∝ c², the upper bound becomes

lim_{c→0} M* ≈ Γ N Q*/c² with Q*/c² = const.
(3.36) Hence, we can express the PCT (3.22) without referring to a specific value of c.
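The bound (3.35) can be illustrated numerically: train with margin c on teacher-generated (hence linearly separable) data, count the non-zero updates M, and evaluate M* using the final weight vector itself as an assumed solution w*, with its actual margin c* = min_µ E^µ* and squared norm Q*. A sketch under these assumptions (variable names illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
P, N, c = 30, 10, 0.1

teacher = rng.normal(size=N)               # data separable by construction
xi = rng.normal(size=(P, N))
S = np.sign(xi @ teacher)

w = np.zeros(N)                            # tabula rasa
M = 0                                      # number of non-zero updates
converged = False
for epoch in range(100_000):
    changed = False
    for mu in range(P):                    # cyclic presentation (3.11)
        if w @ xi[mu] * S[mu] <= c:        # Theta[c - E] = 1
            w = w + xi[mu] * S[mu] / N     # Hebbian term (3.17)
            M += 1
            changed = True
    if not changed:                        # all E^mu > c: algorithm stops
        converged = True
        break
assert converged                           # finite M, as the PCT guarantees

# The final weight vector is itself a valid solution w*:
c_star = np.min((xi @ w) * S)              # its margin, > c by construction
Q_star = w @ w                             # its squared norm Q*
Gamma = np.max(np.sum(xi**2, axis=1)) / N  # Gamma as in (3.30)

# Bound of the form (3.35), with algorithm margin c and solution margin c*:
M_star = (2 * c + Gamma) * N * Q_star / c_star**2
assert 0 < M <= M_star
```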
3.3.6 A few remarks

The number of training steps

According to the PCT, the required number of training steps is finite for linearly separable data. However, their actual number depends on the detailed properties of D and can be very large; see [Roj96] for a discussion of the computational complexity of the Rosenblatt algorithm and further references. More efficient alternatives to the simple Rosenblatt algorithm can be devised, for instance based on Linear Programming methods [Fle00, PAH19] as discussed in [Roj96].

Non-separable data

For the convergence proof, we had to assume the existence of a solution, i.e. linear separability. The Perceptron Convergence Theorem does not provide direct insight into the algorithm's performance if D is not separable. We will consider the problem of finding approximate solutions with a minimum or low number of errors for non-separable data in Sec. 4.1.

Existence of a solution

In practice, it turns out to be difficult to decide whether a given D is linearly separable or not, cf. (Q2) in Sec. 3.2. If a solution is not found by the Rosenblatt algorithm after a number of steps, this could imply that it simply should be run for more steps or that, indeed, a solution does not exist.

A theorem borrowed from the theory of duality in optimization, see Appendix A.3.4 and [Fle00, PAH19], provides a surprisingly clear criterion for linear separability, which is closely related to Farkas' Lemma [Fle00]. It is known as Gordan's Theorem of the Alternative in the literature [BB00, Man69]. In a notation familiar from the previous sections it reads:

Gordan's Theorem of the Alternative    (3.37)
For a given matrix χ ∈ R^{N×P}, exactly one of the following statements is true:
(1) a vector w ∈ R^N exists with [χ⊤w]_µ > 0 for all µ = 1, 2, ..., P, or
(2) a non-zero vector y ∈ R^P exists with y_µ ≥ 0 for all µ and χy = 0.

If the matrix χ comprises the oriented input vectors of a given data set D,

χ = [ ξ^1 S^1_T, ξ^2 S^2_T, ..., ξ^P S^P_T ],

we can interpret w as the weight vector of a perceptron, and statement (1) corresponds to (homogeneous) linear separability of the data set.
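Both alternatives can be illustrated on a minimal numerical example: two inputs collinear with the origin. With equal labels, an explicit y ≥ 0 with χy = 0 exists (statement (2)); with opposite labels, an explicit separating w exists (statement (1)). The construction below is illustrative only:

```python
import numpy as np

# Two inputs collinear with the origin, both labeled +1:
xi = np.array([[1.0, 0.0], [-1.0, 0.0]])
S = np.array([1.0, 1.0])
chi = (xi * S[:, None]).T               # columns are oriented inputs xi^mu S^mu

y = np.array([1.0, 1.0])                # non-negative, non-zero coefficients
assert np.allclose(chi @ y, 0.0)        # statement (2) holds -> not lin. sep.

# Consequence: no w yields all positive local potentials, since
# sum_mu y_mu E^mu = w . (chi y) = 0 with y >= 0 and y != 0:
for w in np.random.default_rng(5).normal(size=(100, 2)):
    E = chi.T @ w                       # E^mu = [chi^T w]_mu
    assert not np.all(E > 0)

# Flipping one label makes statement (2) impossible; instead, an explicit
# separating vector exists, i.e. statement (1) holds:
S2 = np.array([1.0, -1.0])
chi2 = (xi * S2[:, None]).T
w = np.array([1.0, 0.0])
assert np.all(chi2.T @ w > 0)           # lin. sep.
```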
On the other hand, (2) states that a linear combination of the form

Σ_{µ=1}^P y_µ ξ^µ S^µ_T = 0 with y_µ ≥ 0 for all µ and y ≠ 0

exists. This goes beyond the condition of linearly dependent columns ξ^µ S^µ_T of χ, which would be relevant in the context of solving equations. The consideration of inequalities requires the existence of at least one non-zero vector of non-negative coefficients y resulting in a zero linear combination.

Observing that χ⊤χy = 0 ⇔ χy = 0, we can re-formulate (2) also in terms of the null-space (the space of eigenvectors y with eigenvalue zero) of the symmetric and positive semi-definite matrix C = [χ⊤χ]/N ∈ R^{P×P}. We will encounter and make use of the matrix C frequently in later sections.

In practice, checking the validity of condition (2) in Gordan's Theorem may be quite involved: The computation of the null-space can be costly, already. Finding non-negative coefficients y for zero linear combinations is also non-trivial, in general.

Learning the unlearnable

In this context, several algorithms have been suggested that converge for both separable and non-separable data sets. As one example, D. Nabutovsky and E. Domany suggest such an iterative training procedure in [ND91]. For lin. sep. problems, a consistent perceptron vector is found. However, unlike the Rosenblatt algorithm, the algorithm terminates also in the presence of non-separable data and - in this case - indicates that no solution exists. In [ND91], references to other, similar concepts are provided and the suggested scheme is compared with techniques based on Linear Programming.

3.4 The capacity of a hyperplane

The existence of successful training algorithms for lin. sep. data is certainly of great value. The question remains how relevant linear separability is or, in other words, how serious the restriction to linearly separable functions is. Linear separability depends on details of the individual, given data set.
However, surprisingly general qualitative and quantitative results can be obtained which require only mild assumptions about the data.

3.4.1 The number of linearly separable dichotomies

In this section we address (Q3) from Sec. 3.2 and determine the number C(P, N) of linearly separable binary class assignments, or dichotomies⁷, of P feature vectors in an N-dimensional input space. Quite surprisingly, this is possible under rather mild conditions on the input data.

⁷“There are two kinds of people: Those who couch everything in dichotomies and those who do not.” — Unknown
Figure 3.5: One-dimensional inputs ξ can be separated into two classes by the origin in C(P, 1) = 2 ways, independent of P.

The derivation for general P and N has been published several times in the literature [Win61, Cov65, MD89a] and was also reviewed in [HKP91]. Here we follow the presentation in [MD89a] to a large extent. Interestingly, the result is closely related to very early findings concerning high-dimensional geometry, obtained by the Swiss mathematician Ludwig Schläfli in the 19th century already [Sch01].

More recently, very similar results have been re-discovered from a different point of view in the context of Deep Networks, see [Str19, PMB14, MPCB14]. They relate to more general theorems concerning Face-Count Formulas for Partitions of Space by Hyperplanes, which are due to T. Zaslavsky [Zas75]. Specifically, counting the number of response regions in layered networks with piecewise linear activation functions leads to very similar considerations and results [PMB14].

Some results for small N or P

We first address straightforward cases involving low dimensions N and/or small numbers P of feature vectors. The first, obvious result is that C(1, N) = 2 for all N. It corresponds to the fact that a single point can always be mapped to S = ±1 by a hyperplane through the origin, setting the orientation according to the target label. Here we exclude feature vectors ξ = 0, which could always be avoided by an overall translation of the data set.

The second observation, C(P, 1) = 2 for all P, reflects the insight that one-dimensional inputs ξ ∈ R can only be separated by the origin in two ways: by setting S = sign[ξ] or by the inverted S = −sign[ξ], see Fig. 3.5 for an illustration.

In N = 2 dimensions it is easy to see that for P = 2 typically all 2^P dichotomies can be realized, as displayed in Fig. 3.6 (panel a).
However, we have to exclude a special case: if the two input vectors are collinear with the origin, it is impossible to separate them by a line representing a homogeneously linearly separable function. For P = 3 we find that C(3, 2) < 2³ even for generic data sets, see Fig. 3.6 (panels b, c): In panel (b), of the eight possible dichotomies, the two cases with S^1 = S^2 = S^3 = +1 or −1 cannot be represented by a plane through the
Figure 3.6: Panel (a): Two two-dimensional feature vectors (large filled circles) in general position, i.e. not collinear with the origin (small filled circle). Here, four linearly separable dichotomies exist: either both inputs are assigned to the same class, S^1 = S^2 ∈ {−1, +1}, or they are separated with S^1 = −S^2. Panels (b, c): Three two-dimensional feature vectors in general position. Of the 2³ = 8 dichotomies only six are linearly separable, each one represented by one of the three planes with two possible orientations. In panel (b) the two dichotomies with S^1 = S^2 = S^3 cannot be realized; in panel (c) the rightmost point cannot be separated from the other two. Panel (d): As indicated by the dotted line, two of the three feature vectors are collinear with the origin. As a consequence, only 4 different linearly separable functions can be found.

origin. Similarly, in panel (c) one of the data points cannot be separated from the other two, which also excludes two dichotomies. This implies that C(3, 2) = 6. Again, the result is only valid if no subset of two data points falls onto a line through the origin. In panel (d), a counterexample is displayed. Here, only four distinct dichotomies can be realized by a lin. sep. function, as it is not possible to separate the collinear points by a line through the origin. As a consequence, they can only be assigned to the same class, as for instance by the two decision boundaries shown in the illustration.

Similar explicit insights can still be achieved for three-dimensional data, which is left to the reader as an exercise. Again, one has to exclude specific configurations in which two or more feature vectors are aligned. Moreover, subsets of three data points should not fall into a two-dimensional plane that includes the origin.

General N and P

Our intuition – certainly that of the author – usually fails in higher-dimensional spaces, i.e. for N > 3.
However, it is indeed possible to obtain C(P, N) for general N and P under rather mild assumptions. Similar to the exclusion of collinear input vectors in N = 2 and N = 3, we formulate the condition that the data set should be in general position [Cov65]:
Figure 3.7: Two homogeneously linearly separable dichotomies D^{P=9}_{N=2} of the same set of two-dimensional feature vectors. In both panels, all separating planes in the grey-shaded areas would realize the same assignment of labels.

General position condition    (3.38)
A set of vectors P = {ξ^µ ∈ R^N}_{µ=1}^P is in general position if every subset {ξ^ρ}_{ρ=1}^K ⊂ P containing K ≤ N elements is linearly independent.

For P > N, obviously, subsets of more than N elements in N dimensions are always linearly dependent. In sloppy terms, the general position condition requires that the vectors in P do not form more hyperplanes than necessary.

In a sense, the general position condition corresponds to the absence of degeneracies in the data. Real-world data, however, is frequently prone to such degeneracies. Often, it is even assumed and exploited that nominally high-dimensional feature vectors fall into lower-dimensional manifolds due to correlations and interdependencies. In the modelling and simulation of learning processes, e.g. when investigating training algorithm performances, one often resorts to randomized feature vectors of N independent components [HKP91]. The popular example of zero mean / unit variance Gaussian random numbers, for instance, leads to data sets that are in general position with probability one.

For the following argument we consider a fixed set of P input vectors ξ ∈ R^N in general position. A dichotomy D^P_N of these feature vectors assigns a set of binary labels:

D^P_N : ξ^µ ∈ R^N → S^µ ∈ {−1, +1} for µ = 1, 2, ..., P.

The aim is to work out how many of the in total 2^P possible dichotomies correspond to linearly separable functions. Their number will be referred to as C(P, N). As an illustration, Fig. 3.7 displays a set of P = 9 feature vectors in N = 2 dimensions. The set has been labeled in two different ways, both of which are linearly separable, as shown in the left and right panels of the figure. All planes
3. THE PERCEPTRON

Figure 3.8: An additional feature vector ξ^{P+1} is added to the data set of Fig. 3.7. Its label is either non-ambiguously determined by the linearly separable function, as in the left panel, or it is ambiguous, S^{P+1} = +1 or −1, as shown in the right panel.

through the origin that fall into the shaded region would separate the two classes, represented by filled (S = +1) versus empty circles (S = −1).

Now we assume that an additional feature vector ξ^{P+1} is added to the data set, as displayed in Figure 3.8. Two essentially different situations can occur: In the left panel of the figure, all perceptrons that separate the already known P examples would assign ξ^{P+1} to the same class, i.e. S^{P+1} = −1 in the illustration. The dichotomy is said to be non-ambiguous with respect to the new data point. On the contrary, in the right panel the response is ambiguous with respect to the additional feature vector: Some separating planes correspond to S^{P+1} = −1, others to S^{P+1} = +1.

We will denote by^8
    Z(P, N) the number of ambiguous and
    E(P, N) the number of non-ambiguous
dichotomies D^P_N with respect to S^{P+1}. We note immediately that the total number of linearly separable dichotomies is

    C(P, N) = Z(P, N) + E(P, N),

since each labelling is either ambiguous or non-ambiguous with respect to S^{P+1}. Obviously, each non-ambiguous dichotomy D^P_N contributes exactly one lin. sep. labeling to C(P+1, N). However, each of the Z(P, N) ambiguous labellings contributes two lin. sep. dichotomies in the enlarged data set with P+1 feature vectors, as we have the choice between S^{P+1} = +1 and −1 while retaining separability. Hence we have

    C(P + 1, N) = E(P, N) + 2 Z(P, N) = C(P, N) + Z(P, N).   (3.39)

Unfortunately this is not yet a useful recursion because Z(P, N) is not known.

^8 Here the symbols E and Z are inspired by the German terms eindeutig and zweideutig.
Figure 3.9: The projection of the data into the auxiliary subspace H ⊥ ξ^{P+1} is either linearly separable in H (right panel, ambiguous case) or not separable, as in the non-ambiguous case (left panel).

However, the following consideration shows that Z(P, N) can be related to the number of lin. sep. dichotomies in N−1 dimensions. In Fig. 3.9 we introduce the auxiliary (N−1)-dimensional subspace H, i.e. the hyperplane through the origin which is orthogonal to ξ^{P+1}. In the left panel, H falls into the grey-shaded area of separating hyperplanes, while in the right panel it does not. Next, we project the P feature vectors into the auxiliary space H as shown in the figure. In situations exemplified by the left panel, the projections of the P feature vectors into the (N−1)-dim. auxiliary space are not linearly separable in H. In the right panel, however, examples with S = ±1 fall into opposite half-spaces in H. In other words, their labels correspond to a linearly separable dichotomy of the P examples in (N−1) dimensions. One can show that there is indeed a one-to-one correspondence between the Z(P, N) ambiguous lin. sep. functions and all linearly separable dichotomies in H. Assuming that there is nothing special about H, we conclude that Z(P, N) = C(P, N−1). Hence we obtain the recursion

    C(P + 1, N) = C(P, N) + C(P, N − 1).   (3.40)

The condition is, in fact, that the data points are in general position as defined above in (3.38). If, for instance, the added data point fell onto a line with another data point and the origin, H could clearly not be considered a generic subspace. For data in general position, however, we can assume that H has the same properties as any other (N−1)-dim. subspace.
The number of linearly separable dichotomies

It is straightforward to show that the following expression satisfies the recursion and matches the initial values C(P, 1) = 2 and C(1, N) = 2:

    C(P, N) = \begin{cases} 2^P & \text{for } P \le N \\[4pt] 2 \sum_{i=0}^{N-1} \binom{P-1}{i} & \text{for } P > N, \end{cases}   (3.41)
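The recursion and the closed form can be checked against each other numerically. The following Python sketch (function names are mine, not from the text) implements the recursion (3.40) with the initial values C(P, 1) = C(1, N) = 2 alongside Eq. (3.41) and confirms that they agree:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def C_rec(P, N):
    """Number of lin. sep. dichotomies via the recursion (3.40)."""
    if P == 1:
        return 2          # a single point: both labels are separable
    if N == 1:
        return 2          # on a line through the origin: only sign(w xi)
    return C_rec(P - 1, N) + C_rec(P - 1, N - 1)

def C_closed(P, N):
    """Closed-form expression (3.41)."""
    if P <= N:
        return 2 ** P
    return 2 * sum(comb(P - 1, i) for i in range(N))

# the two expressions agree, e.g. for all P, N up to 20
assert all(C_rec(P, N) == C_closed(P, N)
           for P in range(1, 21) for N in range(1, 21))
print(C_closed(4, 3))   # -> 14, the example of four planes in N = 3 below
```

The value C(4, 3) = 14 reappears in Sec. 3.4.3, where four equatorial cuts divide the weight-space sphere into 14 regions.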
Figure 3.10: The fraction P_{ls} = C(P, N)/2^P of linearly separable functions versus α = P/N for three different values of N (5, 20, 100).

with the familiar binomial coefficients \binom{m}{k} = \frac{m!}{k!(m-k)!}. The expression in the second line of (3.41) would also reproduce 2^P for P ≤ N, as \binom{m}{k} = 0 for k > m and \sum_{k=0}^{m} \binom{m}{k} = 2^m. The formal consideration of the two cases in (3.41) is only provided for the sake of clarity.

The proof can be done by induction and exploits that

    \binom{P}{i} = \binom{P-1}{i-1} + \binom{P-1}{i} \text{ for arbitrary } i, P \quad \text{and} \quad \binom{P}{i} = 0 \text{ for } i < 0.

Therefore, C(P+1, N) according to Eq. (3.41) satisfies

    2 \sum_{i=0}^{N-1} \binom{P}{i} = 2 \sum_{i=0}^{N-1} \binom{P-1}{i-1} + 2 \sum_{i=0}^{N-1} \binom{P-1}{i} = \underbrace{2 \sum_{i=0}^{N-2} \binom{P-1}{i}}_{C(P, N-1)} + \underbrace{2 \sum_{i=0}^{N-1} \binom{P-1}{i}}_{C(P, N)},

which corresponds to the recursion relation (3.40).

Instead of the numbers C(P, N) themselves, we can also consider the fraction of linearly separable dichotomies

    P_{ls}(P, N) = \frac{C(P, N)}{2^P} = \begin{cases} 1 & \text{for } P \le N \\[4pt] 2^{1-P} \sum_{i=0}^{N-1} \binom{P-1}{i} & \text{for } P > N. \end{cases}   (3.42)

This allows us to compare and represent the results graphically for different N, as shown in Fig. 3.10. The fraction P_{ls} can be interpreted as the probability for a set of randomly assigned labels {S^µ = ±1}_{µ=1}^{P} to be linearly separable.

3.4.2 Discussion of the result

For the above arguments and derivation it is essential to assume that the data set is in general position. Typically, violations of this condition will reduce the number of linearly separable dichotomies. In this sense, the results given in Eqs. (3.41) and (3.42) can be interpreted as an upper bound even in more realistic settings.
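The intersection of all curves at α = 2, visible in Fig. 3.10, can be verified directly from Eq. (3.42): for P = 2N the truncated binomial sum equals exactly half of 2^{P-1} by symmetry. A short numerical check (a sketch, not from the text):

```python
from math import comb

def P_ls(P, N):
    """Fraction of linearly separable dichotomies, Eq. (3.42)."""
    if P <= N:
        return 1.0
    return 2.0 ** (1 - P) * sum(comb(P - 1, i) for i in range(N))

# all curves in Fig. 3.10 pass through (alpha = 2, P_ls = 1/2):
for N in (5, 20, 100):
    print(N, P_ls(2 * N, N))   # -> 0.5 in each case
```

The symmetry argument: \sum_{i=0}^{N-1} \binom{2N-1}{i} covers exactly half of the 2^{2N-1} subsets, since \binom{2N-1}{i} = \binom{2N-1}{2N-1-i}.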
Figure 3.10 shows the fraction of linearly separable dichotomies C(P, N)/2^P as a function of P/N. This scaling allows us to conveniently display several curves for selected values of N in one graph. The most striking and, at first sight, counterintuitive feature of the result (3.41) is that C(P, N) = 2^P for P ≤ N. It implies that all possible dichotomies are linearly separable, as long as the number of feature vectors does not exceed their dimension N. We have explicitly confirmed this statement for N = 2 in the discussion of Fig. 3.6 (panel a).

For P > N, the fraction of linearly separable dichotomies decreases and approaches its limiting value P_{ls}(α) → 0 for α → ∞. As exemplified in Fig. 3.10 for dimensions N = 5, 20, 100, the region with P_{ls} ≈ 1 extends with increasing N and, at the same time, the decrease of P_{ls} with α becomes steeper. Moreover, all curves intersect at α = P/N = 2. The behavior in the simultaneous limits

    N → ∞, P → ∞ with finite α = P/N   (3.43)

is given by

    P_{ls}^{\infty} = \begin{cases} 1 & \text{for } α < 2 \\ 0 & \text{for } α > 2, \end{cases}

with P_{ls}^{\infty} = 1/2 exactly at α = 2, the common intersection point of the curves. Hence, in high dimensions, linear separability is guaranteed (with probability one) up to P ≤ 2N, which motivates the definition of the so-called storage capacity of the perceptron:

    α_c = 2.   (3.44)

In general, the storage capacities of more powerful network architectures are non-trivial to obtain, see for instance [EB01, WRB93]. For the perceptron and arbitrary N, P we have obtained the even stronger result that P_{ls} = 1 exactly for α ≤ 1, i.e. up to P = N. The latter is often referred to as the Vapnik-Chervonenkis (VC) dimension of the perceptron, see [HTF01] for a discussion and further references.

At first sight, the storage capacity of the perceptron or other student systems relates only to their ability to reproduce a given data set and implement the corresponding labels.
However, as we will see in the following, storage capacity and VC-dimension play an important role with respect to the generalization ability and the learning of a rule from example data.

3.4.3 Time for a pizza or some cake

It is amusing to note that the counting of lin. sep. dichotomies is closely related to a popular mathematical puzzle known as the lazy caterer's problem: The maximum number of pieces obtained by K straight cuts through a two-dimensional pizza (or a pancake, depending on cultural background) [Wet78, MD09] is given by

    c(K, 2) = \binom{K}{0} + \binom{K}{1} + \binom{K}{2} = \frac{2 + K + K^2}{2} = 1, 2, 4, 7, 11, 16, \dots   (3.45)
Figure 3.11: The Pizza Connection: K planar cuts through the center (•) of an N-dim. sphere (left panel) correspond to K−1 arbitrary straight cuts through the (N−1)-dim. surface of each flattened hemisphere (right panel).

The corresponding result for three-dimensional objects is known as the cake number [Wei99, Str19], i.e. the maximum number of pieces obtained by K planar cuts:

    c(K, 3) = \binom{K}{0} + \binom{K}{1} + \binom{K}{2} + \binom{K}{3} = \frac{6 + 5K + K^3}{6} = 1, 2, 4, 8, 15, 26, \dots   (3.46)

In fact, one can show that the recursion relation

    c(K + 1, N) = c(K, N) + c(K, N − 1)

holds in complete analogy to Eq. (3.40), albeit with different initial values. See also [Str19] for the proof of a similar result.

We can directly relate C(P, N) to the generalized cake numbers c(K, N): The left panel of Figure 3.11 displays a sphere in N = 3 dimensions, which is cut by four equatorial planes through the center. They correspond to four feature vectors and divide the normalized weight space into regions representing 14 lin. sep. dichotomies. Note that one of the planes (marked by the red circle) is oriented such that the corresponding upper hemisphere is shown in top view. A two-dimensional simplified representation of the situation is shown in the right panel, which displays a schematic projection onto the selected equatorial plane.

Apparently, there is a 1:1 correspondence of K planar cuts through the center of a spherical cake to K−1 unrestricted cuts through a two-dimensional pizza which represents one of the hemispheres. The same holds for the lower hemisphere in the left panel. Quite generally, this yields the simple relation

    C(P, N) = 2 c(P − 1, N − 1)

between the number of lin. sep. functions and the generalized cake numbers.
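The pizza connection is easy to verify numerically. The sketch below (my own helper names) computes the generalized cake numbers as truncated binomial sums and checks the relation C(P, N) = 2 c(P−1, N−1):

```python
from math import comb

def cake(K, N):
    """Generalized cake number: max. pieces from K hyperplane cuts in N dim."""
    return sum(comb(K, i) for i in range(N + 1))

def C(P, N):
    """Number of linearly separable dichotomies, Eq. (3.41)."""
    return 2 ** P if P <= N else 2 * sum(comb(P - 1, i) for i in range(N))

print([cake(K, 2) for K in range(6)])   # lazy caterer: [1, 2, 4, 7, 11, 16]
print([cake(K, 3) for K in range(6)])   # cake numbers: [1, 2, 4, 8, 15, 26]

# the pizza connection, C(P, N) = 2 c(P-1, N-1):
assert all(C(P, N) == 2 * cake(P - 1, N - 1)
           for P in range(1, 30) for N in range(1, 30))
```

In particular, C(4, 3) = 2 c(3, 2) = 2 · 7 = 14, the example of Fig. 3.11.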
3.5 Learning a linearly separable rule

Obviously, it is not the ultimate goal of perceptron training to merely reproduce the labels in a given data set. This could be done quite efficiently by simply storing D in memory and looking the labels up when needed. In general, it is the aim of machine learning to extract information from given data and formulate (parameterize) the acquired insight as a hypothesis, which can be applied in the working phase to novel data not contained in D.

Addressing (Q4) from Sec. 3.2, we assume that an unknown, linearly separable function or rule exists, which assigns to any possible input vector ξ ∈ R^N the binary output S_R(ξ) = ±1, where the subscript R stands for "rule". Training a perceptron from D = {ξ^µ, S^µ_T} should infer some information about the unknown S_R(ξ), as long as the training labels S^µ_T are correlated with the correct S^µ_R = S_R(ξ^µ). In the simplest and most clear-cut situation we have

    S^µ_T = S^µ_R for all µ = 1, 2, . . . , P.   (3.47)

Hence, we assume that the set of training data D = {ξ^µ, S^µ_R}_{µ=1}^{P} comprises perfectly reliable examples for the application of the rule. It does not, for instance, contain mislabeled example data and is not corrupted by any form of noise in the input or output channel. We will use the notation S^µ_R or S_R(ξ^µ) for labels which are explicitly given by a rule. Here, this indicates that the examples in D indeed represent a linearly separable function. Note that more general data sets D = {ξ^µ, S^µ_T}_{µ=1}^{P} can also be linearly separable without direct correspondence of the S^µ_T to a lin. sep. rule: a small set of examples for a non-separable function, or examples corrupted by noise, can very well be linearly separable.

3.5.1 Student-teacher scenario

Even for data sets of reliable, noise-free examples only, it is not a priori clear that a perceptron is able to reproduce the labels in D correctly, since the target might not be linearly separable.
To further simplify our considerations, we restrict ourselves to cases in which the rule is indeed given by a function of the form

    S_R(ξ) = sign(w* · ξ)   (3.48)

for a particular weight vector w*. In fact, all weight vectors λw* with λ > 0 would define the same rule and, therefore, we will implicitly assume |w*| = 1, without loss of generality.

A perceptron with weights w* is always correct^9. It is therefore referred to as the teacher perceptron (or teacher, for short) and can be thought of as providing the example data from which to learn. Likewise, the trained perceptron with adaptive weights w will be termed the student.

^9 The characteristic trait of many teachers, at least in their self-perception.
Figure 3.12: Generalization error of the perceptron in a student/teacher scenario. For N-dim. random input vectors generated according to an isotropic density, the probability of disagreement between student vector w and teacher w* is proportional to the red-shaded area, i.e. to the angle ∠(w, w*).

The choice of a specific student weight vector corresponds to a particular hypothesis, which is represented by the linearly separable function

    S_w(ξ) = sign(w · ξ) for all ξ ∈ R^N.   (3.49)

Student-teacher scenarios have been used extensively to model machine learning processes, aiming at a principled understanding of the relevant phenomena. They conveniently allow one to control the complexity of the target rule versus that of the trained system in model situations, thus enabling the systematic study of a variety of setups.

In the following sections we will consider idealized situations in which student and teacher both represent linearly separable functions. Under this condition, a plausible guideline for training the student from a set D = {ξ^µ, S^µ_R} is to achieve perfect agreement with the unknown teacher in terms of the given examples. However, the agreement should not only concern data in D as in the storage problem; ideally, it should extend or generalize to novel data.

Frequently, the so-called generalization error serves as a measure of the success of the learning process. In practical situations, it would correspond to the performance of the student with respect to novel data, for instance in a test set which was not used for training.

In our idealized setup, we can revisit the basic geometrical interpretation of linearly separable functions. Figure 3.12 displays a student-teacher pair of weight vectors. The illustration is, obviously, two-dimensional. Note, however, that it can be interpreted more generally as representing the two-dimensional subspace spanned by the N-dimensional vectors w, w*.
We assume that a test input ξ is generated according to an isotropic, unstructured density anywhere in R^N. The corresponding generalization error ε_g, i.e. the probability of a disagreement sign(w · ξ) ≠ sign(w* · ξ) between student and teacher, is directly proportional to the area of the shaded segments, i.e. to the angle ∠(w, w*):

    ε_g = \frac{1}{\pi} \arccos\left( \frac{w \cdot w^*}{|w|\,|w^*|} \right).   (3.50)
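Equation (3.50) can be checked by a small Monte Carlo experiment (a sketch with arbitrary parameter choices, not from the text): draw isotropic Gaussian test inputs and compare the empirical disagreement frequency with the arccos formula.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
w_teacher = rng.standard_normal(N)
w_student = rng.standard_normal(N)

# closed-form generalization error, Eq. (3.50)
cos_angle = w_student @ w_teacher / (
    np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
eps_exact = np.arccos(cos_angle) / np.pi

# Monte Carlo estimate with isotropic Gaussian test inputs
xi = rng.standard_normal((200_000, N))
disagree = np.sign(xi @ w_student) != np.sign(xi @ w_teacher)
eps_mc = disagree.mean()

print(eps_exact, eps_mc)   # the two values agree to a few decimal places
```

Since the density is isotropic, only the angle between the two hyperplane normals matters, exactly as the geometric argument of Fig. 3.12 suggests.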
Figure 3.13: Dual geometrical interpretation of linear separability. Illustration in terms of labeled input vectors ξ ∈ R^3 and L2-normalized weight vectors with |w| = const. on the surface of an N-dim. hypersphere. Left: A single labeled input ξ^1 separates all weight vectors with S^1_w = +1 from those with S^1_w = −1. Right: A set of P labeled input vectors (here: P = 3) defines the (darker) region V(3) of all weight vectors w with correct response S_w(ξ^µ) = S^µ_R for µ = 1, 2, 3. For clarity, the vectors ξ^µ are not shown.

Orthogonal student-teacher pairs would result in ε_g = 1/2, which corresponds to randomly guessing the output. Perfect agreement with ε_g = 0 is achieved for w ∥ w*. The latter statement obviously holds true independent of the statistical properties of the test data.

3.5.2 Learning in version space

In the classical setup of supervised learning, the training data comprises the only available information about the rule. If the target rule is known to be linearly separable and the example data is reliable and noise-free, it appears natural to require that the hypothesis is perfectly consistent with D. But is this a promising training strategy? In other words, can we expect to infer meaningful information about the unknown w* by "just" solving the perceptron storage problem with respect to D?

In order to obtain an intuitive insight, we revisit and extend the geometrical interpretation of linear separability. Following the so-called dual geometric interpretation of linear separability, see Fig. 3.13 (left panel), we can interpret weight vectors w as points in an N-dim. space. Note that every vector ξ defines a hyperplane through the origin of this space, which separates weight vectors w ∈ R^N with positive w · ξ and S_w(ξ) = +1 from those with negative scalar product and S_w(ξ) = −1.
Hence, given the correct target label S_R(ξ), the plane orthogonal to ξ separates correct from wrong students in N-dimensional weight space. Consequently, a set D of P labeled feature vectors defines a region or volume of vectors w which reproduce S_w(ξ^µ) = S_R(ξ^µ) for all µ = 1, 2, . . . , P, as
Figure 3.14: Illustration of perceptron learning in version space. Left: The hyperplane associated with the additional example ξ^4 does not intersect the version space V(3). Consequently, all weight vectors in V(3) already classify ξ^4 correctly and V(4) = V(3). Right: The hyperplane associated with ξ^5 cuts through V(4) and, consequently, V(5) corresponds to either the small triangular region (green) or the remaining dark area, depending on the actual target S^5_R.

illustrated in Fig. 3.13 (right panel). The set of all perceptron weight vectors which are consistent with the P examples in D, i.e. which give S^µ_w = S^µ_R for all µ = 1, 2, . . . , P, is termed the version space and can be defined as

    V = \left\{ w \in \mathbb{R}^N, \, w^2 = 1 \;\middle|\; \mathrm{sign}(w \cdot \xi^\mu) = S^\mu_R \text{ for all } \{\xi^\mu, S^\mu_R\} \in D \right\}.   (3.51)

In the following, we will use the notation V(P) if we want to refer explicitly to the number of examples in D.

In the context of the previous section, C(P, N) in Eq. (3.41) can be interpreted as the number of different version spaces that can be constructed for a set of P input vectors in N-dimensional space by assigning linearly separable target labels S^µ = ±1.

It is important to note that defining V as a set of normalized vectors with w² = 1 is convenient but not essential. Obviously, the normalization is irrelevant with respect to the conditions sign(w · ξ^µ) = S^µ_R and could be replaced by any other constant norm or even be omitted.

The version space is non-empty, V ≠ ∅, if and only if D is linearly separable: If a (normalized) teacher vector w* defines the linearly separable rule represented by the examples in D, then necessarily w* ∈ V. In words: at least the teacher vector itself must lie inside the version space. However, in the absence of any information beyond the data set D, the unknown w* could be located anywhere in V with equal likelihood.
The term learning in version space refers to the idea of admitting only hypotheses which are perfectly consistent with the example data. Let us assume
that, given a set D of P reliable examples for a linearly separable rule, we have identified some vector w ∈ V. According to the Perceptron Convergence Theorem (3.22) this is always possible, e.g. by means of the Rosenblatt perceptron algorithm^10. In our low-dimensional illustration, Fig. 3.13 (right panel), this means that we can always place a student vector w somewhere in the darker region representing V(3) for P = 3 training examples.

Now, consider a fourth labeled example {ξ^4, S^4_R} as displayed in Fig. 3.14 (left panel). The hyperplane associated with ξ^4 does not intersect the version space V(3). Consequently, all student vectors w ∈ V(3) fall into the same half-space with respect to the new example and yield the same perceptron output sign(w · ξ^4). This implies in turn that, if the extended data set {ξ^µ, S^µ_R}_{µ=1}^{4} is still linearly separable, only one value of the target label S^4_R is possible, which must be S^4_R = sign(w · ξ^4) for w ∈ V(3). Consequently, we have V(4) = V(3); the version space with respect to the extended data set remains the same as for the previously known P = 3 examples. Hence, our strategy of learning in version space does not require modifying the hypothesis or selecting a new student vector w to parameterize it. In this sense, the input/output pair {ξ^4, S^4_R} is uninformative in the given setting.

The situation is different in the case illustrated in the right panel of Fig. 3.14. Here, the data set is amended by {ξ^5, S^5_R}, with the plane orthogonal to ξ^5 cutting through V(4) = V(3). Elements of V(4) on one side of the hyperplane correspond to the perceptron response S_w(ξ^5) = +1, while the others yield S_w(ξ^5) = −1. Depending on the actual target S^5_R of the additional example, the version space V(5) corresponds to either the green area or the remaining lighter region in the illustration.
In any case, the extended data set {ξ^µ, S^µ_R}_{µ=1}^{5} is linearly separable and the new version space V(5) of consistent weight vectors is bound to be smaller than the previous V(4) = V(3). The volume of consistent weight vectors w shrinks due to the information associated with the new example data.

As we add more examples to the data set, the corresponding version space can only remain the same or decrease in size. Indeed, one can show that V will shrink to a point for P/N → ∞ under the rather mild condition of general position discussed in the previous section. Together with the fact that the teacher satisfies w* ∈ V(P) for any P, we can conclude that learning in version space will enforce w → w* with increasing training set size. More precisely, we conclude that the angle ∠(w, w*) → 0 for unnormalized vectors w, w*. Hence, we can expect that learning in version space yields hypotheses which agree with the unknown rule to a large extent, provided the data set contains many examples. In the sense of the generalization error (3.50) discussed above, it will achieve perfect generalization ε_g → 0 for P → ∞.

^10 . . . with subsequent normalization in order to match the definition (3.51) precisely.
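The shrinking of the version space can be observed in a small simulation. The sketch below (my own setup; the epoch loop and parameter values are arbitrary choices, not from the text) finds some consistent student via the Rosenblatt algorithm for teacher-generated data and evaluates ε_g from Eq. (3.50) for increasing α = P/N:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
w_star = rng.standard_normal(N)           # teacher defining the lin. sep. rule

def train_rosenblatt(xi, S, max_epochs=1000):
    """Find some vector in version space by learning from mistakes only."""
    w = np.zeros(xi.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, s in zip(xi, S):
            if np.sign(w @ x) != s:       # misclassified: Hebbian update
                w += s * x / len(x)
                mistakes += 1
        if mistakes == 0:
            break                         # w is now consistent with all of D
    return w

results = {}
for alpha in (0.5, 2.0, 8.0):
    P = int(alpha * N)
    xi = rng.standard_normal((P, N))
    S = np.sign(xi @ w_star)              # reliable, noise-free labels
    w = train_rosenblatt(xi, S)
    results[alpha] = np.arccos(
        w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))) / np.pi
    print(f"alpha = {alpha}: eps_g ~ {results[alpha]:.2f}")
```

The observed ε_g decreases with growing α, in line with the qualitative argument above.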
Figure 3.15: The generalization error ε_g vs. α = P/N as obtained from the number of ambiguous / non-ambiguous linearly separable functions, cf. Eq. (3.52). The curves correspond to N = 5, 20, 100 and to the limit N → ∞, respectively (from left to right).

3.5.3 Generalization begins where storage ends

The above elementary and illustrative considerations do not provide more concrete mathematical relations which, for instance, quantify the generalization error as a function of the training set size P. However, we can revisit and exploit the result of Section 3.4, where we counted the number of linearly separable functions of P feature vectors in N dimensions under quite general assumptions [Win61, Cov65, MD89a].

Assume that the training process has exploited P examples for a linearly separable rule and that we have identified a student weight vector in V(P) which corresponds to one of the C(P, N) linearly separable dichotomies of the data. Implicitly, we assume that the true rule represents any of the C(P, N) linearly separable functions with equal probability. Alternative approaches, e.g. from the statistical physics perspective, weight every dichotomy with the volume of the associated version space. We refrain from a detailed discussion of this subtlety and refer the reader to the more specialized literature [HKP91, EB01, WRB93].

Now assume that we present a novel input vector ξ_test with target label S_test to the student. In complete analogy to the considerations illustrated in Fig. 3.14, the version space can be ambiguous or non-ambiguous with respect to the perceptron output S^test_w. In the latter case we know that any w ∈ V(P) will provide the correct response. If the version space is ambiguous with respect to ξ_test, a fraction of the weight vectors in V(P) will be correct, while the remaining ones would predict an incorrect label.
Therefore, in the absence of more detailed knowledge, we expect that the response of a random w ∈ V(P) will be incorrect with probability 1/2. As a consequence, the generalization error ε_g is proportional to the fraction of ambiguous dichotomies and we obtain

    ε_g = \frac{1}{2} \frac{Z(P, N)}{C(P, N)} = \frac{1}{2} \frac{C(P, N-1)}{C(P, N)}   (3.52)

with C(P, N) from Eq. (3.41). The result is displayed in Fig. 3.15 as a function of the scaled number of examples α = P/N. The curves represent input dimensions N = 5, 20, 100 and the limiting case N → ∞, cf. Eq. (3.43).
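Equation (3.52) is straightforward to evaluate numerically (a sketch with my own function names):

```python
from math import comb

def C(P, N):
    """Number of linearly separable dichotomies, Eq. (3.41)."""
    return 2 ** P if P <= N else 2 * sum(comb(P - 1, i) for i in range(N))

def eps_g(P, N):
    """Generalization error from the counting argument, Eq. (3.52)."""
    return 0.5 * C(P, N - 1) / C(P, N)

N = 100
for alpha in (0.5, 1.0, 2.0, 4.0, 8.0):
    P = int(alpha * N)
    print(f"alpha = {alpha}: eps_g = {eps_g(P, N):.3f}")
# eps_g stays at 1/2 up to P = N and then decreases towards the
# asymptote 1/(2(alpha - 1)) of Eq. (3.53)
```

For P < N both counts in (3.52) equal 2^P, so ε_g = 1/2 exactly, which reproduces the plateau of Fig. 3.15.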
Figure 3.16: Linear separability for P < N. Left panel: Version space corresponding to two feature vectors in N = 3 dimensions with all (four) possible combinations of labels ±1. Right panel: Assume that, as an example, the region marked +− corresponds to the set of correct (normalized) weight vectors. The version space is large enough to allow for significant disagreement between student w and teacher vector w*. Even ε_g = 1 is possible for extreme settings with w = −w*.

For arbitrary N and P we find that ε_g = 1/2 as long as P ≤ N. For larger N, the region with ε_g = 1/2 extends further, up to α = α_c = 2 in the limit (3.43). This finding indicates that for relatively small data sets, the perceptron output is no better than unbiased random guessing by, for instance, flipping a coin! Hence, we learn from the above counting argument that storage without generalization is possible for small P/N. Figure 3.16 illustrates the situation in N = 3 dimensions. For P < N, the version space allows us to place the student vector very far away from the (unknown) teacher vector in V(P) without causing disagreement on the training data. Even total misalignment with w = −w* is possible in an extreme setting.

The result (3.52) indicates that, for a randomly selected student inside the version space of a randomly selected lin. sep. dichotomy, storage without learning the rule can be observed up to the Vapnik-Chervonenkis dimension P = N. For large N → ∞ the region extends to the storage capacity, i.e. to P = 2N.

On the positive side, we also see that for data set sizes which exceed the storage capacity, the perceptron inevitably performs better than random. Learning in version space is guaranteed to yield non-trivial generalization as soon as the number of examples exceeds the storage capacity (or the VC-dimension for finite N, P).
These findings confirm the intuition that learning and generalization are not possible if the hypothesis space is too flexible: a very complex student system can implement large sets of example data without inferring useful information about the underlying rule.
Asymptotic behavior for N → ∞

For large N → ∞ with P/N = α one can show that the asymptotic form of Eq. (3.52) reads [Cov65]

    ε_g(α) = \begin{cases} 1/2 & \text{for } α \le 2 \\[4pt] \frac{1}{2}\,\frac{1}{α - 1} & \text{for } α > 2. \end{cases}   (3.53)

As an alternative to the counting argument, methods borrowed from statistical physics have been applied to compute the typical version space volume in high dimensions (N → ∞) and yield the same basic dependence, i.e. ε_g ∝ α^{−1} for α → ∞, for various training algorithms in the presence of noise-free data, see for instance [EB01, WRB93, BSS94, Opp94]. However, the statistical physics based analysis and similar considerations also show that learning in version space typically performs better than random even for small training sets. It turns out to be beneficial to select specific students in the version space. Even simple algorithms like the Rosenblatt scheme typically yield ε_g(α) < 1/2 already for small α. In the following sections we will consider, more specifically, the perceptron of optimal stability, which achieves near optimal expected performance.

3.5.4 Optimal generalization

In the student-teacher setup discussed above, we only know that the teacher w* is located somewhere in V. In the absence of additional knowledge it could be anywhere in the version space with equal probability.

Learning in version space places the student w also somewhere in V. Its generalization error ε_g is, under very general circumstances, an increasing function of the angle ∠(w, w*), which itself is an increasing function of the Euclidean distance |w − w*| for normalized weight vectors w, w*, see Fig. 3.12. As a consequence, the smallest expectation value of ε_g over all possible positions w* ∈ V would be achieved by placing the student vector at the center of mass w_cm of the version space,

    w_{cm} = \int_{V} w \, d^N w.   (3.54)

By definition, it has the smallest average (angular) distance from all other points in the set.
Note that the center of mass w_cm of the normalized vectors in V is itself not normalized, but realizes the same classification as w_cm/|w_cm|. In principle, the definition (3.54) immediately suggests how to determine w_cm for a given lin. sep. data set D: We would have to draw many random elements w^{(i)} ∈ V independently and compute the simple empirical estimate [Wat93]

    w^{(est)}_{cm} = \frac{1}{M} \sum_{i=1}^{M} w^{(i)}.   (3.55)
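In very low dimensions, the estimate (3.55) can be realized by naive rejection sampling: draw uniform random directions on the sphere and keep those that are consistent with D. The following sketch (my own setup; as noted below, this brute-force strategy is impractical for large N) estimates w_cm for P = 5 teacher-labeled examples in N = 3:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, M = 3, 5, 1000

w_star = rng.standard_normal(N)
xi = rng.standard_normal((P, N))
S = np.sign(xi @ w_star)                       # teacher labels; D is lin. sep.

# rejection sampling: keep uniform random directions that lie inside V
samples = np.empty((0, N))
while len(samples) < M:
    w = rng.standard_normal((5000, N))
    w /= np.linalg.norm(w, axis=1, keepdims=True)   # uniform on the sphere
    consistent = np.all(np.sign(w @ xi.T) == S, axis=1)
    samples = np.vstack([samples, w[consistent]])
samples = samples[:M]

w_cm_est = samples.mean(axis=0)                # empirical estimate (3.55)
eps_g = np.arccos(w_cm_est @ w_star /
                  (np.linalg.norm(w_cm_est) * np.linalg.norm(w_star))) / np.pi
print(eps_g)   # clearly below the random-guessing value 1/2
```

The acceptance rate equals the relative size of V, which shrinks rapidly with N and P; this is why the uniform sampling of V is a non-trivial task in practice.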
In practice, sampling the version space with uniform density is a non-trivial task. The theoretical background and practical strategies for achieving optimal generalization ability when learning a linearly separable rule are discussed in e.g. [Wat93, OH91, Ruj97].

3.6 The perceptron of optimal stability

Here we approach question (Q5) of Sec. 3.2, i.e. how to choose a good or near optimal weight vector in version space, from a different perspective. We avoid the explicit computation of the center of mass of V and instead consider a well-defined problem of quadratic optimization.

3.6.1 The stability criterion

The so-called stability of the perceptron has been established as a meaningful optimality criterion. We first consider the stability of a particular example, which is defined as

    κ^µ = \frac{E^µ}{|w|} = \frac{w \cdot \xi^\mu \, S^\mu_T}{|w|}.   (3.56)

Due to the linearity of E^µ in w, cf. Eq. (3.8), the quantity κ^µ is invariant under a rescaling of the form w → λw (λ > 0). In terms of the geometric interpretation of linear separability, the scalar product of ξ^µ S^µ_T and w/|w| measures the distance of the input vector from the separating hyperplane, see Fig. 3.17. More precisely, κ^µ is an oriented distance: For κ^µ > 0, the input vector is classified correctly by the perceptron, while for κ^µ < 0 we have sign(w · ξ^µ) = −S^µ_T and the input is located on the wrong side of the plane.

The stability (its absolute value) quantifies how robust the perceptron response would be against small variations of ξ^µ. Examples with a large distance from the hyperplane will hardly be taken to the opposite side by noise in the input channel. We define the stability of the perceptron as the smallest of all κ^µ in D:

    κ(w) = \min \{ κ^µ \}_{µ=1}^{P}.   (3.57)

Note that if the perceptron does not separate the classes correctly, κ(w) < 0 corresponds to the negative κ^µ with the largest absolute value.
Positive stability κ(w) > 0 indicates that w is a solution of the PSP and separates the classes in D correctly. In this case, κ(w) corresponds to the smallest distance of any example from the decision boundary. It quantifies the size of the gap between the two classes or, in other words, the classification margin of the perceptron. In a linearly separable problem it appears natural to select the perceptron weights which maximize κ(w). In principle, the concept of stability extends to negative κ^µ according to Eq. (3.56). Therefore, we can also define the perceptron of optimal stability without requiring linear separability of the data set D = {ξ^µ, S^µ_T} with more general targets S_T(ξ^µ) = ±1.
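The definitions (3.56) and (3.57) translate into a few lines of code (a sketch with my own function names and a hand-picked toy data set):

```python
import numpy as np

def stabilities(w, xi, S):
    """Per-example stabilities kappa^mu = (w . xi^mu) S^mu / |w|, Eq. (3.56)."""
    return (xi @ w) * S / np.linalg.norm(w)

def stability(w, xi, S):
    """Stability of the perceptron, Eq. (3.57): the smallest kappa^mu."""
    return stabilities(w, xi, S).min()

# toy example: two correctly classified points in N = 2 dimensions
xi = np.array([[2.0, 1.0], [-1.0, -2.0]])
S = np.array([1.0, -1.0])
w = np.array([1.0, 1.0])
print(stability(w, xi, S))   # both margins equal 3/sqrt(2) ~ 2.121
```

Rescaling w by any λ > 0 leaves the returned value unchanged, reflecting the invariance of κ^µ noted above.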
Figure 3.17: Stability of the perceptron. Left: The stability κ^µ, defined in Eq. (3.56), corresponds to the oriented distance of ξ^µ from the plane orthogonal to w. The stability of the perceptron κ(w) is defined as the smallest κ^µ in the set of examples, i.e. κ(w) = min_µ {κ^µ}. Here, all inputs are classified correctly with κ^µ > 0. Right: Restricting the student hypotheses to weight vectors with κ(w) > κ for a given data set selects weight vectors w in the center region of the version space. The largest possible value of κ singles out the perceptron of optimal stability w_max.

We define the perceptron of optimal stability (occasionally, and not very precisely, termed the optimal perceptron) as the target of the following problem:

Perceptron of optimal stability   (3.58)
For a given data set D = {ξ^µ, S_T^µ}_{µ=1}^P, find the vector w_max ∈ R^N with

    w_max = argmax_{w ∈ R^N} κ(w)  for  κ(w) = min { κ^µ = w · ξ^µ S_T^µ / |w| }_{µ=1}^P .

For linearly separable data, the search for w could formally be limited to the version space V. However, this restriction is non-trivial to realize [Ruj97] and would not constitute an advantage in practice. Moreover, for more general data sets of unknown separability, the version space might not even exist (V = ∅) and κ(w_max) < 0. We will discuss the usefulness of a corresponding solution w_max with negative stability κ_max < 0 later and focus on linearly separable problems in the following.

The perceptron of optimal stability w_max does not, in general, exactly coincide with w_cm, cf. Sec. 3.5.4. The "center" as defined by the "maximum possible distance from all boundaries" is identical with the center of mass only if V has a regular, symmetric shape. However, one can expect that w_max is in general quite close to the true center of mass and could serve as an approximate realization.
As a consequence, the perceptron of optimal stability should display favorable (near optimal) generalization behavior when trained from a given reliable, lin. sep. data set. In fact, the difference appears marginal from a practical perspective. For a theoretical comparison of optimal stability and optimal generalization in the perceptron see [Wat93].

3.6.2 The MinOver algorithm

An intuitive algorithm that can be shown to achieve optimal stability for a given lin. sep. data set D has been suggested in [KM87]. The so-called MinOver algorithm performs Hebbian updates for the currently least stable example in D. We assume here tabula rasa initialization, i.e. w(0) = 0, but more general initial states could be considered. The prescription always aims at improving the least stable example. Note that for a given weight vector w(t), the minimal local potential coincides with minimum stability since κ^µ(t) = E^µ(t)/|w(t)|. The MinOver update is defined as follows:

MinOver algorithm at discrete time step t with current w(t)
- compute the local potentials E^µ(t) = w(t) · ξ^µ S_T^µ for all examples in D
- determine the index μ̃ of the training example with minimal overlap, i.e. with the currently lowest local potential: E^μ̃ = min { E^µ(t) }_{µ=1}^P
- update the weight vector according to

    w(t + 1) = w(t) + (1/N) ξ^μ̃ S_T^μ̃   (3.59)

  [ or, equivalently, increment the corresponding embedding strength
    x^μ̃(t + 1) = x^μ̃(t) + 1. ]   (3.60)

A few remarks:
• According to the original presentation of the algorithm [KM87], the Hebbian update is only performed if the currently smallest local potential also satisfies E^μ̃(t) ≤ c for a given c > 0. This is reminiscent of learning from mistakes as in the Rosenblatt algorithm (3.3.3). However, as pointed out in [KM87], optimal stability is only achieved in the limit c → ∞, which is equivalent to (3.59).
• In the above formulation (3.59), MinOver updates the weights (or embedding strengths) even if all examples in D are already classified correctly. As the algorithm keeps performing non-zero Hebbian updates, the temporal change of the weight vector w(t) itself does not constitute a reasonable
stopping criterion. Instead, one of the following quantities could be considered:
  - the angular change ∠(w(t), w(t+T)) = (1/π) arccos( w(t) · w(t+T) / (|w(t)| |w(t+T)|) ), or, for simplicity, the argument of the arccos;
  - the total change of stabilities, for example Σ_{µ=1}^P [ κ^µ(t) − κ^µ(t+T) ]^2 .
For these and similar criteria, reasonably large numbers of training steps T should be performed, e.g. with T ∝ P, in order to allow for noticeable differences. In both criteria, changes of only the norm |w(t)| are disregarded because they do not affect the classification or its stability.
• From the definition of the MinOver algorithm (3.59) we see that it can only yield non-negative, integer embedding strengths when the initialization is tabula rasa. This feature of w_max will be encountered again in the following section, together with several other properties of optimal stability.
• As proven in [KM87], the MinOver algorithm converges and yields the perceptron weights of optimal stability if D is linearly separable. The proof of convergence is similar in spirit to that of the Perceptron Convergence Theorem (3.22). We refrain from reproducing it here. Instead, we show only that the perceptron of optimal stability can always be written in the form

    w_max = (1/N) Σ_{µ=1}^P x_max^µ ξ^µ S_T^µ  with embedding strengths {x_max^µ ∈ R}_{µ=1}^P .   (3.61)

The existence of embedding strengths x_max will be recovered en passant in the next section. However, it is instructive to prove the statement explicitly here. To this end, we consider two perceptron weight vectors: the first one is assumed to be given as a linear combination of the familiar form

    w_1 = Σ_{µ=1}^P x_1^µ ξ^µ S_T^µ  with embedding strengths {x_1^µ}_{µ=1}^P ,   (3.62)

while for the second weight vector we assume that

    w_2 = w_1 + δ  with |δ| > 0 and δ · ξ^µ = 0 for all µ = 1, 2, . . . , P.   (3.63)

Hence, w_2 cannot be written in terms of embedding strengths as it contains contributions which are orthogonal to all input vectors in D.
If we consider the local potentials with respect to w_2, we observe that

    E_2^µ = w_2 · ξ^µ S_T^µ = w_1 · ξ^µ S_T^µ + δ · ξ^µ S_T^µ = E_1^µ ,   (3.64)

since the second term vanishes due to δ · ξ^µ = 0.
On the other hand, we have

    |w_2|^2 = |w_1 + δ|^2 = |w_1|^2 + 2 w_1 · δ + |δ|^2 > |w_1|^2  ⇒  |w_2| > |w_1|,   (3.65)

where the mixed term vanishes because w_1 is a linear combination of the ξ^µ ⊥ δ, and |δ|^2 > 0. As a consequence, we observe that

    κ_2^µ = E_2^µ / |w_2| = E_1^µ / |w_2| < E_1^µ / |w_1| = κ_1^µ  for all µ, and therefore κ_2 < κ_1.   (3.66)

We conclude that any contribution orthogonal to all ξ^µ inevitably reduces the stability of a given weight vector. This implies that maximum stability is indeed achieved by weights of the form (3.61). The result also implies that the framework of iterative Hebbian learning is sufficient to find the solution.

Note that a non-zero δ in Eq. (3.63) cannot exist for P > N, in general. Obviously, if span({ξ^µ}_{µ=1}^P) = R^N, any N-dimensional vector, including w_max, can be written as a linear combination. In this case, Eq. (3.61) is trivially true.

Our simple consideration does not yield restrictions on the possible values that the embedding strengths can assume. However, the proven convergence of the MinOver algorithm implies that the perceptron of optimal stability can always be written in terms of non-negative x_max^µ ≥ 0. We will recover this result more formally in the next section.

3.7 Optimal stability by quadratic optimization

Here we will exploit the fact that optimal stability can be formulated as a problem of constrained quadratic optimization. Consequently, a wealth of theoretical results and techniques from optimization theory becomes available [Fle00, PAH19]. They provide deeper insight into the structure of the problem and allow for the identification of efficient training algorithms, as we will exemplify in terms of the so-called AdaTron algorithm [AB89, BAK91]. Before deriving and discussing this training scheme, we revisit another closely related, prototypical training scheme from the early days of neural network models: B. Widrow and M.E. Hoff's Adaptive Linear Neuron, or Adaline [WH60, WL90].
3.7.1 Optimal stability reformulated

For a given lin. sep. D = {ξ^µ, S_T^µ}_{µ=1}^P, the perceptron of optimal stability corresponds to the solution of the following problem:

    maximize_{w ∈ R^N} κ(w)  where  κ(w) = min { κ^µ = E^µ/|w| = w^T ξ^µ S_T^µ / |w| }_{µ=1}^P ,   (3.67)
which is just a more compact version of (3.58). Obviously, the stability κ can be made larger by increasing the minimal E^µ at constant norm |w|. Analogously, κ increases with decreasing norm |w| if all local potentials obey the constraint E^µ ≥ c > 0. As discussed previously, the actual choice of the constant c is irrelevant because E^µ is linear in w, and we can set c = 1 without loss of generality. This allows us to re-formulate the problem as follows:

    minimize_{w ∈ R^N} (N/2) w^2  subject to the inequality constraints {E^µ ≥ 1}_{µ=1}^P .   (3.68)

Hence, we have rewritten the problem of maximal stability as the optimization of the quadratic cost function N w^2/2 under linear inequality constraints of the form E^µ = w^T ξ^µ S_T^µ ≥ 1. The solution w_max then displays the (optimal) stability κ_max = 1/|w_max|. Note that the pre-factor N/2 in (3.68) is irrelevant for the definition of the problem but is kept for convenience and consistency of notation.

3.7.2 The Adaptive Linear Neuron - Adaline

The problem of optimal stability in the formulation (3.68) involves a system of inequalities. Before we address its solution in the following sections, we resort to the more familiar case of constraints given by equations, i.e. we consider the simpler optimization problem

    minimize_{w ∈ R^N} (N/2) w^2  subject to the constraints {E^µ = 1}_{µ=1}^P .   (3.69)

This, in fact, has the form of a standard optimization problem with a non-linear (here: quadratic) cost function and a set of (here: linear) constraints. A brief discussion of this type of problem is given in Appendix A.3.1.

Historically, the problem (3.69) relates to the so-called Adaptive Linear Neuron or Adaline model, which was introduced by B. Widrow and M.E. Hoff in 1960 [WH60]. Like the Rosenblatt Perceptron, it constitutes one of the earliest artificial neural network models and truly groundbreaking work in the area of machine learning. A series of very instructive and entertaining videos about B.
Widrow's pioneering work is available at [Wid12]. Widrow realized Adaline systems in hardware as "resistors with memory" and introduced the term Memistor [Wid60].^11 The concept was also extended to layered Madaline (Many Adaline) networks consisting of several linear units which were combined in a majority vote for classification [WL90, Wid12]. For our purposes, the Adaline can be interpreted as a single layer perceptron which differs from Rosenblatt's model only in terms of the training procedure.

^11 Not to be confused with the more recent concept of Memristor elements [Chu71].
3.7. OPTIMAL STABILITY BY QUADRATIC OPTIMIZATION 69 The Adaline framework essentially treats the problem of binary classification as a linear regression with subsequent thresholding of the continuous w · ξµ. We will take a rather formal perspective based on the theory of Lagrange multipliers [Fle00, PAH19] as outlined in Appendix A.3.1. The P linear con- straints of the form Eµ = w⊤ξµSµ T = 1 can be incorporated in the correspond- ing Lagrange function L 󰀕 w, {λµ}P µ=1 󰀖 = N 2 w2 − P 󰁛 µ=1 λµ 󰀕 w⊤ξµSµ T −1 󰀖 . (3.70) With the gradient ∇w = (∂/∂w1, ∂/∂w2, ..., ∂/∂wN)⊤in weight space we obtain ∇wL = Nw−󰁓 µ λµξµSµ T . Furthermore, ∂ ∂λν 󰁫󰁓 µ λµ(Eµ−1) 󰁬 = (Eν−1). Hence, the first order stationarity conditions for a solution w∗, {λ∗µ} of problem (3.69) read ∇wL|∗= 0 ⇒ w∗= 1 N P 󰁛 µ=1 λ∗µ ξµSµ T (3.71) and ∂L λµ 󰀏󰀏󰀏󰀏 ∗ = 0 ⇒ E∗µ = w∗⊤ξµSµ T = 1 for all µ (3.72) where the shorthand (. . .)|∗stands for the evaluation of (. . .) in w = w∗and λµ = λ∗µ for all µ. Note that sufficient (second oder) conditions for a solution of the problem are non-trivial to work out. The second condition (3.72) merely reproduces the original equality con- straints, which have to be satisfied by any candidate solution, obviously. The first, more interesting stationarity condition (3.71) implies that the formally in- troduced Lagrange parameters can be identified as the embedding strengths xµ and can be renamed accordingly. Hence, we can eliminate the weights from L and obtain L 󰀃 {xµ}P µ=1 󰀄 = 1 2N 󰁛 µ,ν xµSµ T ξµ⊤ξνSν T xν− 󰁛 µ xµ 󰀥 1 N 󰁛 ν xνSν T ξν 󰀦⊤ ξµSµ T + 󰁛 µ xµ = −1 2N P 󰁛 ν,µ=1 xµSµ T ξµ⊤ξνSν T xν + P 󰁛 µ=1 xµ. (3.73) It turns out useful to resort to a compact notation which again exploits that the weights are of the form w = 1 N 󰁓 µ xµξµSµ T . 
We introduce the symmetric correlation matrix C = C^T ∈ R^{P×P} with elements

    C_{µν} = (1/N) S_T^µ S_T^ν ξ^µT ξ^ν   (3.74)

and define the P-dimensional vectors x = (x^1, x^2, ..., x^P)^T, E = (E^1, E^2, ..., E^P)^T as well as the formal 1 = (1, 1, ..., 1)^T ∈ R^P, yielding, e.g., x^T 1 = Σ_µ x^µ.
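The correlation matrix (3.74) is easily constructed from the data. A minimal sketch in plain Python follows; the toy data at the bottom are an illustrative assumption, not taken from the text.

```python
def correlation_matrix(xis, targets):
    """C_{mu nu} = S_T^mu S_T^nu (xi^mu . xi^nu) / N, cf. Eq. (3.74)."""
    N, P = len(xis[0]), len(xis)
    return [[targets[m] * targets[n] *
             sum(a * b for a, b in zip(xis[m], xis[n])) / N
             for n in range(P)] for m in range(P)]

# hypothetical toy data in N = 2 dimensions
C = correlation_matrix([(1.0, 2.0), (2.0, 0.0), (-1.0, 1.0)], [+1, -1, -1])
# C is symmetric by construction: C[m][n] == C[n][m]
```

Note that C only involves scalar products of the (sign-modified) inputs; the weight vector never has to be formed explicitly once C is available.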
In addition, we use the notation a > b, which is popular in the optimization-related literature and indicates that a^µ > b^µ for all µ = 1, 2, ..., P. Analogous notations are employed for the component-wise relations "<", "≥" and "≤". In the convenient matrix-vector notation we have, furthermore,

    E^ν = (1/N) Σ_{µ=1}^P x^µ S_T^µ S_T^ν ξ^µT ξ^ν = [C x]^ν ,  i.e.  E = C x ,   (3.75)
    w^2 = (1/N^2) Σ_{µ,ν} x^µ x^ν S_T^µ S_T^ν ξ^µT ξ^ν = (1/N) x^T C x = (1/N) x^T E .   (3.76)

In this notation, the Lagrange function (3.73) becomes L(x) = −(1/2) x^T C x + x^T 1, which has to be maximized (!) with respect to x, see the discussion of the Wolfe Dual in [Fle00, PAH19] and Appendix A.3.4. Consequently, we can re-formulate (3.69) as the following unconstrained problem:

    maximize_{x ∈ R^P} f(x) = −(1/2) x^T C x + x^T 1 .   (3.77)

Compared to (3.69), the cost function appears slightly more complicated. In turn, however, the constraints are eliminated. While completely equivalent to (3.69), the re-written problem is given in terms of the embedding strengths x ∈ R^P only.

Parallel Adaline algorithm

In the absence of constraints it is straightforward to maximize f(x), and in practice we could employ a variety of methods. For instance, we can resort to simple gradient ascent, cf. Appendix A.4, with tabula rasa initialization x(0) = 0. We can also identify an equivalent update in terms of weights with initial w(0) = 0 by exploiting the relation w(t) = (1/N) Σ_{µ=1}^P x^µ(t) ξ^µ S_T^µ:

Adaline algorithm, parallel updates

    x(t + 1) = x(t) + η ∇_x f = x(t) + η ( 1 − E(t) )   (3.78)
    [ or  w(t + 1) = w(t) + (η/N) Σ_{µ=1}^P ( 1 − E^µ(t) ) ξ^µ S_T^µ ],   (3.79)

where E^µ(t) = [C x(t)]^µ = w(t)^T ξ^µ S_T^µ. The learning rate η controls the magnitude of the update steps. This version of the Adaline algorithm corresponds to standard, so-called batch gradient ascent in the space of embedding strengths x. Hence, η can be finite but has to be small enough to ensure convergence.
The precise condition depends on the mathematical properties of the extremum, see Appendix A.4 for a general discussion.
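The batch update (3.78) can be sketched in a few lines of plain Python, with the correlation matrix computed as in (3.74). The toy data, learning rate and iteration count below are illustrative assumptions.

```python
def adaline_parallel(xis, targets, eta, n_iter):
    """Batch Adaline in embedding-strength space, Eq. (3.78):
    x <- x + eta * (1 - E) with E = C x, starting from x(0) = 0."""
    N, P = len(xis[0]), len(xis)
    # correlation matrix C_{mu nu} = S_T^mu S_T^nu (xi^mu . xi^nu) / N, Eq. (3.74)
    C = [[targets[m] * targets[n] *
          sum(a * b for a, b in zip(xis[m], xis[n])) / N
          for n in range(P)] for m in range(P)]
    x = [0.0] * P                         # tabula rasa initialization
    for _ in range(n_iter):
        E = [sum(C[m][n] * x[n] for n in range(P)) for m in range(P)]
        x = [x[m] + eta * (1.0 - E[m]) for m in range(P)]
    return x

# toy problem: orthogonal inputs, so C is diagonal and the fixed point is exact
x = adaline_parallel([(1.0, 0.0), (0.0, 1.0)], [+1, -1], eta=0.5, n_iter=100)
```

At convergence, all local potentials satisfy E^µ ≈ 1, i.e. the equality constraints of (3.69) are fulfilled.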
Sequential Adaline algorithm

As an important alternative to the above batch or parallel update, sequential gradient-based methods can be devised, which present the example data repeatedly in, for instance, deterministic order and update a single embedding strength in each step. Intuitively, we would want to increase the embedding strength of any example with E^µ < 1, and we expect the magnitude of the update of x^µ to increase with (1 − E^µ). The sequential Adaline algorithm corresponds to coordinate ascent in terms of x, a variant of gradient-based optimization which is briefly discussed in Appendix A.5.1:

Adaline algorithm, sequential updates (repeated presentation of D)
– at time step t, present example µ(t) = 1, 2, 3, ..., P, 1, 2, 3, ...
– update (only) the corresponding embedding strength

    x^{µ(t)}(t + 1) = x^{µ(t)}(t) + η̃ ( 1 − E^{µ(t)}(t) )   (3.80)
    [ or  w(t + 1) = w(t) + (η̃/N) ( 1 − E^{µ(t)}(t) ) ξ^{µ(t)} S_T^{µ(t)} ].   (3.81)

Convergence of the Adaline

In the following, we sketch the proof of convergence for the sequential Adaline algorithm (3.80) and obtain the corresponding condition on the learning rate η̃. For the update of the current component x^µ we have

    x^µ → x^µ + δ^µ  with  δ^µ = η̃ (1 − [C x]^µ) = η̃ (1 − E^µ),   (3.82)

while all other embedding strengths remain unchanged: δ = (0, ..., δ^µ, 0, ..., 0)^T. Therefore, the change of the objective function under update (3.82) is

    f(x + δ) − f(x) = −(1/2) δ^T C δ − δ^T C x + δ^T 1 = −(1/2) (δ^µ)^2 C_{µµ} + δ^µ (1 − E^µ)
                    = η̃ (1 − E^µ)^2 [ 1 − (1/2) η̃ C_{µµ} ] ≥ 0  for  η̃ < 2/C_{µµ} .   (3.83)

Unless a solution x with E^µ = 1 for all µ has been reached, the objective function will strictly increase in at least some of the individual steps of the sequential procedure as long as η̃ < 2/C_{µµ}. The diagonal elements of C are proportional to the squared norms of the corresponding feature vectors, C_{µµ} = (ξ^µ)^2/N, which are of course available for any given data set.
If we want to use a single, constant value of η̃ and guarantee a monotonic increase of f under the iteration, the learning rate has to satisfy

    0 < η̃ < 2 / max { C_{µµ} }_{µ=1}^P .   (3.84)
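The sequential update (3.80), combined with a learning rate chosen well inside the bound (3.84), can be sketched as follows (plain Python; the sweep count and the specific choice η̃ = 1/max_µ C_{µµ} are illustrative assumptions).

```python
def adaline_sequential(C, n_sweeps):
    """Sequential (coordinate-ascent) Adaline, Eq. (3.80).
    The learning rate is set to 1/max_mu C_mumu, safely below the
    bound 2/max_mu C_mumu of Eq. (3.84)."""
    P = len(C)
    eta = 1.0 / max(C[mu][mu] for mu in range(P))
    x = [0.0] * P
    for _ in range(n_sweeps):
        for mu in range(P):               # deterministic order 1, 2, ..., P
            E_mu = sum(C[mu][nu] * x[nu] for nu in range(P))
            x[mu] += eta * (1.0 - E_mu)   # update only this component
    return x
```

Because each non-zero step increases f, repeated sweeps drive all local potentials towards E^µ = 1, in line with the convergence argument above.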
One can also show that the objective function is bounded from above if C is positive definite and C x = 1 has a solution: while the linear term 1^T x in f, Eq. (3.77), can grow in an unbounded way, the negative quadratic term will always dominate and limit the increase.

Hence, unless a solution with C x = E = 1 exists and has been found, the iteration performs at least one non-zero update in each sweep through the data set, and f increases monotonically. Moreover, the cost function is bounded from above. Therefore, the coordinate-ascent-like version (3.80) of the Adaline converges and maximizes f under the constraints E = 1, if the learning rate satisfies (3.84).

Relation to linear regression and the SSE

Several interesting relations to other problems and algorithms are noteworthy. As pointed out in [DO87], the coordinate ascent (3.82) is closely related to the well-known Gauß-Seidel and Gauß-Jacobi methods [Fle00] for the iterative solution of the system of linear equations C x = 1. For non-singular C, the problem could be solved directly by x = C^{-1} 1, which gives

    x^µ = [ C^{-1} 1 ]^µ = Σ_ν [C^{-1}]_{µν}  and  w = (1/N) Σ_{µ,ν} [C^{-1}]_{µν} ξ^µ S_T^µ   (3.85)

in weight space. If we incorporate the outputs S_T^µ = ±1 in the modified input vectors ξ̃^µ = S_T^µ ξ^µ, the problem becomes equivalent to regression with the particular targets y^µ = (S_T^µ)^2 = 1 for all data points. Hence, the Adaline problem has the same mathematical structure as linear regression considered in Sec. 2.2.2 and Appendix A.2.2, which immediately links Eq. (3.85) to the pseudoinverse solutions (2.8) and (A.24), respectively.

It is interesting to note that although (3.79) is the weight space equivalent of (3.78), it cannot be interpreted as a gradient ascent along ∇_w f of the cost function (3.77). As outlined in Appendix A.4.3, even linear transformations of variables do not preserve the gradient property, in general.
However, (3.79) and (3.81) can be seen as gradient descent in weight space with respect to a different, yet related cost function: the Sum of Squared Errors (SSE), cf. Eq. (2.5), for linear regression with target values S_T^µ = ±1:

    E_SSE = (1/2) Σ_{µ=1}^P (1 − E^µ)^2 = (1/2) Σ_{µ=1}^P ( S_T^µ − w^T ξ^µ )^2   (3.86)

with

    ∇_w E_SSE = − Σ_{µ=1}^P (1 − E^µ) ξ^µ S_T^µ .   (3.87)

Note that writing E_SSE in terms of embedding strengths would involve the quadratic form x^T C^T C x, which is not present in (3.77).
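For non-singular C, the direct solution (3.85) can be checked on a toy problem: solving C x = 1 exactly makes all local potentials equal to one, so the SSE (3.86) vanishes. The hand-rolled 2×2 solver and the numbers below are illustrative assumptions.

```python
def solve_2x2(C, b):
    """Direct solution of C x = b for a 2x2 system by Cramer's rule,
    a stand-in for x = C^{-1} 1 in Eq. (3.85)."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return [(C[1][1] * b[0] - C[0][1] * b[1]) / det,
            (C[0][0] * b[1] - C[1][0] * b[0]) / det]

# toy correlation matrix (illustrative numbers, not from the text)
C = [[1.0, 0.2], [0.2, 1.0]]
x = solve_2x2(C, [1.0, 1.0])
E = [C[0][0] * x[0] + C[0][1] * x[1],
     C[1][0] * x[0] + C[1][1] * x[1]]
sse = 0.5 * sum((1.0 - e) ** 2 for e in E)   # vanishes at the exact solution
```

This is exactly the limit that the iterative Adaline schemes approach for well-conditioned C.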
This consideration also indicates what the behavior of the Adaline will be if E = 1 cannot be satisfied: the algorithm finds an approximate solution by minimizing the SSE.

The sequential algorithm (3.81) in weight space is equivalent to Widrow and Hoff's original LMS (Least Mean Square) method for the Adaline [WH60, Wid12]. The LMS, also referred to as the delta-rule or the Widrow-Hoff algorithm in the literature, can be seen as the most important ancestor of the prominent Backpropagation of Error for multilayered neural networks, cf. Sec. A.4 and Chapter 5. A review of the history and conceptual relations between these classical algorithms is given in [WL90].

3.7.3 The Adaptive Perceptron Algorithm - AdaTron

In Rosenblatt's perceptron algorithm, an update is performed whenever the presented example is misclassified. All non-zero Hebbian learning steps are performed with the same magnitude, independent of the example's distance from the current decision plane. On the contrary, the Adaline learning rule is adaptive in the sense that the magnitude of the update depends on the actual deviation of E^µ from the target value 1. While this appears to make sense and is expected to speed up learning, it also results in negative Hebbian updates ("unlearning") with η(1 − E^µ) < 0 if the example is already correctly classified with large E^µ > 1. In fact, Adaline can yield negative embedding strengths x^µ < 0 in order to enforce E^µ = 1 where the example otherwise could have E^µ > 1.

This somewhat counter-intuitive feature of the Adaline is accounted for in the Adaptive Perceptron (AdaTron) algorithm [AB89, BAK91]. It retains the concept of adaptivity but explicitly takes into account the inequality constraints in problem (3.68). In essence, it performs Adaline-like updates, but prohibits the embedding strengths from assuming negative values.
As we will see, this relatively simple modification yields the perceptron of optimal stability for linearly separable data.

The method of Lagrange multipliers has been extended to the treatment of inequality constraints [Fle00, PAH19], see also Appendix A. Therefore, we can make use of several well-established results from optimization theory in the following. We return to the problem of optimal stability in the form (3.68), which we repeat here for the sake of clarity:

Perceptron of optimal stability (weight space)

    minimize_{w ∈ R^N} (N/2) w^2  subject to the inequality constraints {E^µ ≥ 1}_{µ=1}^P .

Formally, we have to consider the same Lagrange function as for equality constraints, already given in Eq. (3.70):

    L( w, {λ^µ}_{µ=1}^P ) = (N/2) w^2 − Σ_{µ=1}^P λ^µ ( w^T ξ^µ S_T^µ − 1 ).   (3.88)
The Kuhn-Tucker Theorem of optimization theory, see [Fle00, PAH19], provides the first order necessary stationarity conditions for general, non-linear optimization problems with inequality and/or equality constraints. They are known as the so-called Kuhn-Tucker (KT) or Karush-Kuhn-Tucker conditions [Fle00, PAH19], see (A.26) in the Appendix. In our specific case, worked out as an example in the Appendix, they read

Kuhn-Tucker conditions (optimal stability)

    w* = (1/N) Σ_{µ=1}^P λ*^µ ξ^µ S_T^µ        (embedding strengths λ^µ)   (3.89)
    E*^µ = w*^T ξ^µ S_T^µ ≥ 1 for all µ        (linear separability)       (3.90)
    λ*^µ ≥ 0 (not all λ*^µ = 0)               (non-negative multipliers)  (3.91)
    λ*^µ (E*^µ − 1) = 0 for all µ             (complementarity).          (3.92)

The first condition is the same as for equality constraints and follows directly from ∇_w L = 0. As in the case of the Adaline, it implies that the solution of the problem can be written in the familiar form (3.61). Moreover, the Lagrange multipliers λ^µ play the role of the embedding strengths and we can set λ^µ = x^µ in the following considerations. The second condition (3.90) merely reflects the original constraint, i.e. linear separability.

Intuitively, the non-negativity of the multipliers, KT-condition (3.91), reflects the fact that an inequality constraint is only active on one side of the hyperplane defined by E^µ = 1. As long as E^µ > 1, the corresponding multiplier can be set to λ^µ = 0, as the constraint is satisfied in the entire half-space. Since the solution can be written with x^µ = λ^µ due to (3.89), we formally recover the insight from Sec. 3.6.2 which indicates that w_max can always be represented by non-negative embedding strengths. After renaming the Lagrange multipliers λ^µ to x^µ, we refer to a solution x* of the problem (3.68) as a KT-point.
Using our convenient matrix-vector notation, the KT conditions of the simplified problem read

Kuhn-Tucker conditions (optimal stability, simplified)

    E* = C x* ≥ 1                              (linear separability)        (3.93)
    x* ≥ 0 (x* ≠ 0)                            (non-negative embeddings)^12 (3.94)
    x*^µ (E*^µ − 1) = 0 for all µ              (complementarity).           (3.95)

The most interesting condition is that of complementarity, (3.92) and (3.95). It states that at optimal stability, any example with non-zero embedding will

^12 Obviously, x = 0, w = 0 cannot be a solution.
have E*^µ = 1, while input/output pairs with E*^µ > 1 do not contribute to the linear combination (3.61). We will discuss this property in greater detail later. Complementarity also implies that x*^µ E*^µ = x*^µ for all µ and, therefore,

    x*^T C x* = x*^T E* = Σ_{µ=1}^P x*^µ E*^µ = Σ_{µ=1}^P x*^µ = x*^T 1 .   (3.96)

Now consider two potentially different KT-points x_1 and x_2, both satisfying the first order stationarity conditions. By definition, the matrix C is symmetric and positive semi-definite:

    u^T C u ∝ Σ_{µ,ν} u^µ S_T^µ ξ^µ · ξ^ν S_T^ν u^ν = ( Σ_µ u^µ ξ^µ S_T^µ )^2 ≥ 0  for all u ∈ R^P .

Setting u = x_1 − x_2, we obtain with (3.96):

    0 ≤ (x_1 − x_2)^T C (x_1 − x_2) = x_1^T C x_1 + x_2^T C x_2 − x_1^T C x_2 − x_2^T C x_1
                                    = x_1^T ( 1 − C x_2 ) + x_2^T ( 1 − C x_1 ),

where we have used x_1^T C x_1 = x_1^T 1 and x_2^T C x_2 = x_2^T 1. All components of the KT-points x_{1,2} are non-negative, while all components of the vectors (1 − C x_{1,2}) ≤ 0 due to the linear separability condition. Therefore, we obtain 0 ≤ (x_1 − x_2)^T C (x_1 − x_2) ≤ 0 and hence:

    0 = (x_1 − x_2)^T C (x_1 − x_2) ∝ Σ_{µ,ν} (x_1^µ − x_2^µ) S_T^µ ξ^µ · ξ^ν S_T^ν (x_1^ν − x_2^ν)
      = [ Σ_µ (x_1^µ − x_2^µ) ξ^µ S_T^µ ]^2 ∝ [w_1 − w_2]^2  ⇒  w_1 = w_2 .

We conclude that any two KT-points define the same weight vector, which corresponds to the perceptron of optimal stability. Even if x_1 ≠ x_2, which is possible in spite of C x_1 = C x_2 for singular matrices C, the solution is unique in terms of the weights. Moreover, this finding implies that any local solution (KT-point) of the problem (3.68) is indeed a global solution. In contrast to many other cost-function-based learning algorithms, local minima do not play a role in optimal stability. This is a special case of a more general result which applies to all convex optimization problems [Fle00, PAH19].

The stationarity conditions also facilitate a re-formulation of the optimization problem.
Similar to the simplification of the Adaline problem, it essentially amounts to the elimination of the weights and the interpretation of the Lagrange multipliers as embedding strengths. It constitutes a special case of what is known as the Wolfe Dual in optimization theory, see [Fle00] for the general definition and proof of the corresponding Duality Theorem. A brief introduction is given in Appendix A.3.4. Duality is frequently employed to rewrite a given problem in terms of a modified cost function with simplified constraints.
As outlined in Appendix A.3.4, the following formulation is equivalent to problem (3.68):

Perceptron of optimal stability (embedding strengths, dual problem)

    maximize_x f(x) = −(1/2) x^T C x + x^T 1  subject to  x ≥ 0 .   (3.97)

Hence, the resulting Wolfe Dual still comprises constraints, albeit simpler ones. In this simplified formulation of optimal stability, see also Appendix A.3.4, the goal is to maximize the objective function (3.77) under the constraint x ≥ 0 (with x ≠ 0). The search for the optimum is therefore restricted to the positive hyperoctant of R^P.

These particularly simple conditions facilitate the design of a gradient-based active set algorithm, see Appendix A.3. Moreover, it is possible to combine this concept with coordinate ascent as discussed in Appendix A.5.1. The algorithm satisfies the constraint by simply clipping each x^µ to zero whenever the gradient step would lead into the excluded region x^µ < 0:

AdaTron algorithm, sequential updates (repeated presentation of D)
– at time step t, present example µ(t) = 1, 2, 3, ..., P, 1, 2, 3, ...
– update (only) the corresponding embedding strength:

    x^{µ(t)}(t+1) = max { 0, x^{µ(t)}(t) + η̃ ( 1 − E^{µ(t)} ) },   (3.98)

where E^ν = [C x]^ν. The name AdaTron has been coined for this Adaptive Perceptron algorithm [AB89, BAK91]. By comparison with the sequential Adaline algorithm (3.80), we observe that the change from equality constraints (E^µ = 1) to inequalities (E^µ ≥ 1) leads to the restriction of the embedding strengths to non-negative values. Otherwise, the update +η̃ (1 − E^µ) is adaptive in the sense of the discussion of the Adaline. Unlike the Rosenblatt algorithm, Adaline and AdaTron can decrease an embedding strength if E^µ is already large.

A parallel version of the algorithm can be formulated, which performs steps along the direction of ∇_x f, but always ensures x ≥ 0 by limiting the step size where necessary.
Thus, for sufficiently small values of η̃, the cost function will also increase monotonically [AB89, BAK91]. Here, we refrain from giving a more explicit mathematical formulation and discussion of the parallel updates.

Convergence of the sequential AdaTron algorithm

As we show in the following, the sequential AdaTron converges and yields optimal stability if 0 < η̃ < 2/C_{µµ} for all µ, provided the data set D is linearly
separable [AB89, BAK91].^13 We consider the learning step x^µ → x^µ + δ^µ with

    δ^µ = { η̃ (1 − E^µ)   if −x^µ ≤ η̃ (1 − E^µ)
          { −x^µ < 0       if −x^µ > η̃ (1 − E^µ)   (3.99)

and see that in both cases |δ^µ| ≤ η̃ |1 − E^µ|  ⇒  (1 − E^µ)/δ^µ ≥ 1/η̃. Hence, similar to Eq. (3.83) for the Adaline algorithm, we obtain

    f(x + δ) − f(x) = −(1/2) δ^T C δ − δ^T C x + δ^T 1 = −(1/2) (δ^µ)^2 C_{µµ} + δ^µ (1 − E^µ)
                    = (δ^µ)^2 [ (1 − E^µ)/δ^µ − C_{µµ}/2 ] ≥ (δ^µ)^2 [ 1/η̃ − C_{µµ}/2 ].   (3.100)

The r.h.s. is obviously non-negative for 0 < η̃ ≤ 2/C_{µµ}. We recover the same condition (3.84) for the monotonic increase of f as in the sequential Adaline algorithm.

Furthermore, it can be shown that f is bounded from above in the hyperoctant x ≥ 0 if the data set is linearly separable. For general convex problems with linear constraints it has been proven that the Wolfe Dual is bounded if and only if the original problem has a solution [Fle00]. Here, this property is closely related to Farkas' Lemma and Gordan's Theorem of the Alternative (3.37), which we discussed briefly in Sec. 3.3.6. The latter implies for matrices C ∈ R^{P×P} that

    { x | C x ≥ 1 } = ∅  ⇔  C v = 0 for some v ≠ 0 with {v^µ ≥ 0}_{µ=1}^P .

Note that if such a vector v exists, it is straightforward to show that f in Eq. (3.77) cannot be bounded in x ≥ 0: setting x = γ v ≥ 0, we obtain x^T C x = 0 and lim_{γ→∞} f(x) = lim_{γ→∞} γ v^T 1 → ∞.

In summary, we have
a) For linearly separable D, the cost function f(x) is bounded from above in the hyperoctant x ≥ 0.
b) The function f(x) increases monotonically under non-zero, sequential AdaTron updates with 0 < η̃ < 2/C_{µµ}.
c) By construction of the algorithm, any KT-point x* of the problem, cf. Eqs. (3.93–3.95), is a fixed point of the AdaTron algorithm and vice versa.

In combination, (a–c) imply that the algorithm converges to a KT-point, which represents the (unique) perceptron of optimal stability.

^13 In [AB89, BAK91], normalized data with C_{µµ} = 1 was considered, yielding the condition 0 < η̃ < 2.
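Collecting the pieces, the convergent sequential scheme (3.98) amounts to an Adaline step clipped at zero. A minimal sketch in plain Python follows; the toy matrix, learning rate and sweep count are illustrative assumptions.

```python
def adatron(C, eta, n_sweeps):
    """Sequential AdaTron, Eq. (3.98): Adaline-like coordinate update,
    clipped so that the embedding strengths stay non-negative.
    C is the correlation matrix of Eq. (3.74)."""
    P = len(C)
    x = [0.0] * P                         # tabula rasa
    for _ in range(n_sweeps):
        for mu in range(P):               # deterministic order 1, 2, ..., P
            E_mu = sum(C[mu][nu] * x[nu] for nu in range(P))
            x[mu] = max(0.0, x[mu] + eta * (1.0 - E_mu))
    return x

# toy problem where both constraints end up active (E^mu = 1, x^mu > 0)
x = adatron([[1.0, 0.2], [0.2, 1.0]], eta=0.5, n_sweeps=200)
```

At a fixed point, each example satisfies either E^µ = 1 with x^µ ≥ 0, or E^µ > 1 with x^µ = 0, i.e. the KT conditions (3.93–3.95).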
Embedding strengths only

In contrast to the Rosenblatt or Adaline algorithm, it is not straightforward to rewrite the AdaTron in terms of explicit updates in weight space. Obviously, we can compute the new weight vector as w(t + 1) = (1/N) Σ_{µ=1}^P x^µ(t + 1) ξ^µ S_T^µ after each training step. However, a direct iteration of the weights w(t) cannot be provided without explicitly involving the x^µ(t). This is due to the non-linear clipping of embeddings at zero, which has no simple equivalent in weight space.

Non-separable data

The outcome of the AdaTron training (3.98) for data that is not linearly separable is not obvious. Modifications of the SSE (3.86) which take into account that the actual deviation is irrelevant for E^µ > 1 have been considered in the literature, see the discussion in Sec. 4.1. For these, corresponding gradient descent methods in weight space can be derived, which, for suitable choices of the cost function, resemble the AdaTron scheme. However, they are not identical and, in particular, additional constraints have to be imposed in order to achieve optimal stability for linearly separable data.

The concept of optimal stability has been generalized in the so-called soft margin classifier, which tolerates misclassifications to a certain degree. A corresponding extension of the AdaTron algorithm is discussed in Sec. 4.1.2.

Efficient algorithms

The MinOver and AdaTron algorithms are presented here as prototypical approaches to solving the problem of optimal stability. While they are suitable for relatively small data sets, the computational load may become problematic when dealing with large numbers of examples and, consequently, a large number of variables x^µ. A variety of algorithms have been devised aiming at computational efficiency and scalability, mainly in the context of Support Vector Machines, cf. Sec. 4.3. A prominent example is the Sequential Minimal Optimization (SMO) approach [Pla98].
It is the basis of many implementations, see e.g. [svm99] for links and the SVM literature for further discussions and references [SS02, CST00, STC04, Her02].

3.7.4 Support vectors

The treatment of optimal stability as a constrained, convex optimization problem has provided useful insights, from which practical algorithms such as the AdaTron and other schemes emerged, see [Ruj93] for another example. The absence of local minima and the resulting availability of efficient optimization tools is one of the foundations of the Support Vector Machine and has contributed largely to its popularity [SS02, CST00, STC04, Her02, DFO20], see Sec. 4.3.

One of the most important observations with respect to optimal stability is a consequence of the complementarity condition (3.95). It implies that in a
KT-point we have either

{ E^µ = 1, x_µ ≥ 0 }  or  { E^µ > 1, x_µ = 0 } .   (3.101)

Figure 3.18: Support vectors in the perceptron of optimal stability. The arrow represents the normalized weight vector w_max/|w_max|. Class membership is indicated by filled (S_T = +1) and open symbols (S_T = −1), respectively. Support vectors, marked as squares, fall into the two hyperplanes with E^µ = 1, i.e. κ^µ = κ_max. All other examples (circles) display greater stability without being embedded explicitly.

In the geometrical interpretation of linear separability, the former set of feature vectors with x_µ ≥ 0 falls into one of the two hyperplanes with w · ξ = −1 or +1, respectively. Both planes are parallel to the decision boundary and mark the maximum achievable gap between the two classes, see Figure 3.18. Only these examples, marked as squares in the figure, contribute to the linear combination (3.61) with non-zero embedding strength. They can be said to form the support of the weights and are therefore referred to as the set of support vectors. In a sense, they represent the hard cases in D which end up closest to the decision boundary.

The remaining training data display larger distances from the separating hyperplane and do not explicitly contribute to the linear combination (3.61). For the support vectors we observe that w_max is the weight vector that solves the system of linear equations E^µ = 1 with minimal norm |w|. All other examples appear to be stabilized "accidentally". Hence, if we were able to identify the support vectors beforehand in a given D, we could solve the simpler problem (3.77) restricted to them by applying, e.g., the Adaline algorithm. Unfortunately, the set of support vectors is not known a priori and their determination is an integral part of the training process.

It is important to realize that, despite the special role of the support vectors, all examples in the data set are relevant.
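As a numerical illustration of the complementarity condition (3.101), the following sketch (our own toy example, not from the text; the embedding strengths are hard-coded to a KT-consistent solution rather than produced by a training run) reconstructs the weight vector from given embedding strengths and flags the support vectors as the examples with E^µ = 1:

```python
import numpy as np

# Toy data: P = 3 examples in N = 2 dimensions with labels S in {-1, +1}.
xi = np.array([[1.0, 1.0], [1.0, -1.0], [3.0, 2.0]])
S = np.array([+1, -1, +1])

# Embedding strengths x_mu >= 0, chosen to satisfy the KT conditions here.
x = np.array([0.5, 0.5, 0.0])

# Weight vector as the linear combination w = sum_mu x_mu S_mu xi^mu, cf. (3.61).
w = (x * S) @ xi

# Local potentials E^mu = S_mu (w . xi^mu).
E = S * (xi @ w)

# Complementarity (3.101): examples with E^mu = 1 lie in the margin planes
# (support vectors); all others have E^mu > 1 and x_mu = 0.
support = np.isclose(E, 1.0)
print(w, E, support)
```

For this data the first two examples carry all the weight (w = (0, 1), E = (1, 1, 2)), while the third is stabilized "accidentally" with zero embedding strength.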
Even if some of them end up with zero embedding strength, the composition of the entire D implicitly determines the set of support vectors and, thus, the actual weight vector.

Remark: Complementarity in the MinOver algorithm

The MinOver algorithm (3.59, 3.60) as presented in Sec. 3.6.2 yields, in general, non-zero integer x_µ > 0 for all µ. However, the complementarity condition (3.101) is satisfied in the sense that the embedding strengths of the support vectors increase in the training process, while those of the other examples remain
relatively small. In the limit of infinitely many update steps, properly rescaled x_µ satisfy condition (3.92).

3.8 Inhom. lin. sep. functions revisited

So far, following the arguments leading to Eq. (3.4) in Sec. 3.2, we have treated homogeneously and inhomogeneously linearly separable functions on the same footing. However, with respect to the stability criterion we have to take a subtlety into account: if we consider a perceptron with output

S(ξ) = sign[w · ξ − θ]   with adaptive quantities w ∈ R^N and θ ∈ R,   (3.102)

the proper definition of the stabilities is

κ_inh^µ = (w · ξ^µ − θ) S^µ / |w|,   (3.103)

which corresponds to the oriented distance of the feature vectors ξ^µ from the separating hyperplane given by (w · ξ − θ = 0). As a consequence, the corresponding perceptron of optimal stability is given by the solution of the problem

min_{w,θ} w²/2  subject to  [w · ξ^µ − θ] S^µ ≥ 1  for all µ = 1, 2, . . . , P.   (3.104)

Note that if, for suitable θ, w² can be minimized under the constraints, the choice of θ will influence the result, i.e. the achievable maximum stability. Hence, the optimization has to be done with respect to both w and θ.

If we naively exploit the equivalence of inhomogeneous separability in N dimensions with homogeneous separability in N+1 dimensions, cf. Eq. (3.4), we define the modified weight vector w̃ = [w, θ] and augmented feature vectors ξ̃ = [ξ, −1]. The corresponding naive definition of the stabilities reads

κ̃^µ = w̃ · ξ̃^µ S^µ / |w̃|.   (3.105)

Here, w̃² corresponds to (w² + θ²) and the resulting optimal stability problem becomes

min_{w̃} w̃²/2  subject to  w̃ · ξ̃^µ S^µ ≥ 1  for all µ = 1, 2, . . . , P,   (3.106)

where the constraints are equivalent to those in Eq. (3.104). However, treating θ as an additional weight in the spirit of (3.4) has altered the objective function in comparison with (3.104), as it includes a contribution θ² in the squared Euclidean norm w̃².
Consequently, the solution of (3.106) would differ from the (correct) perceptron of optimal stability defined by Eq. (3.104).

For the sake of clarity, we have explicitly limited our discussion of optimal stability to the case of homogeneous separation with θ = 0. This also applies to the Support Vector Machine in Sec. 4.3. The corresponding modifications of the algorithms and other results according to the definitions (3.103) and (3.104) for nonzero local threshold are conceptually straightforward, see e.g. [Str19, HTF01, DFO20].
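The difference between the two objective functions can be made concrete in a few lines (a numerical sketch with made-up values; the variable names are our own):

```python
import numpy as np

# A feature vector and an inhomogeneous perceptron (w, theta).
xi = np.array([2.0, -1.0])
w = np.array([0.6, 0.8])   # |w| = 1
theta = 0.5

# Augmented quantities in the spirit of Eq. (3.4):
w_aug = np.append(w, theta)     # w~  = [w, theta]
xi_aug = np.append(xi, -1.0)    # xi~ = [xi, -1]

# The constraints agree: w~ . xi~ equals w . xi - theta ...
assert np.isclose(w_aug @ xi_aug, w @ xi - theta)

# ... but the norms in the two objective functions differ:
# w~^2 = w^2 + theta^2, so (3.106) penalizes theta while (3.104) does not.
print(w @ w)          # 1.0
print(w_aug @ w_aug)  # 1.25
```

This is exactly the extra θ² term that makes the naive problem (3.106) differ from the correct one (3.104).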
3.9 Some remarks

In this chapter we have considered the simple perceptron as a prototypical machine learning system. It serves as a framework in which to obtain insights into the basic concepts of supervised learning. It also illustrates the importance of optimization techniques and the related theory in machine learning.

The restriction to linearly separable functions is, of course, significant. In the next chapter we consider a number of ways to deal with non-separable data sets and address classification problems beyond linear separability.

It is remarkable that the perceptron still ranks among the most frequently applied machine learning tools. This is due to the fact that the very successful Support Vector Machine is frequently used with a linear kernel, cf. Sec. 4.3. In this case, however, it reduces to a classifier that is equivalent to the simple perceptron of optimal stability or its extension, the soft margin classifier, cf. Sec. 4.1.2.
Chapter 4

Beyond linear separability

Non-linear means it's hard to solve. — Arthur Mattuck

In the previous sections we studied learning in version space as a basic training strategy in terms of the simple perceptron classifier. We obtained important insights into the principles of learning a rule and introduced the concept of optimal stability. As pointed out already, learning in version space only makes sense under a number of conditions, which – unfortunately – are rarely fulfilled in realistic settings. The key assumptions are

(a) The data set D = {ξ^µ, S_T^µ}_{µ=1}^P is perfectly reliable: labels are correct and noise-free.

(b) The unknown rule is linearly separable, or more generally: the student complexity perfectly matches the target task.

In practice, a variety of effects can impair the reliability of the training data: Some examples could be explicitly mislabeled by an unreliable expert to begin with. Class labels could be inverted due to some form of noise in the communication between student and teacher. Alternatively (or additionally), some form of noise or corruption may have distorted the input vectors ξ^µ in D, potentially rendering some of the labels S_T^µ inconsistent.

In fact, real-world training scenarios and data sets will hardly ever meet the conditions (a) and/or (b). From the perspective of perceptron training, i.e. restricting the hypothesis space to linearly separable functions, the following situations are much more likely to occur:
i) The unknown target rule is linearly separable, but D contains mislabeled examples, for instance in the presence of noise. Depending on the number of examples P and the degree of the corruption the following may occur:

i.1) In particular, small data sets D with few mislabeled samples can still be linearly separable and the labels S_T^µ can be reproduced by a perceptron student. While the non-empty version space V is consistent with D, it is not perfectly representative of the target rule and the success of training will be impaired.

i.2) The data set D is not linearly separable and, consequently, a version space of linearly separable functions does not exist: V = ∅. In this case, learning in version space is not even defined for the perceptron. Hence, the data set still contains information (albeit noisy or corrupted) about the rule, but it is not obvious how it can be extracted efficiently.

ii) The target rule itself is not linearly separable and would, ideally, require learning systems of greater complexity than the simple perceptron. Again, the consequences for perceptron training depend on the size of the available data set:

ii.1) Small data sets may be linearly separable and, in this situation, one can expect or hope that a student w ∈ V at least approximates the unknown rule to a certain extent.

ii.2) Larger data sets become non-separable and V = ∅ should signal that the target is, in fact, more complex.

Of course, superpositions of cases i) and ii) can also occur: we could have to deal with a non-separable rule, represented by a data set which in addition is subject to some form of corruption.

As mentioned earlier, it is a difficult task in practice to determine whether a given data set D is linearly separable or not. Not finding a solution by use of the Rosenblatt algorithm, for instance, could simply indicate that a larger number of training steps is required.
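One practical way to decide separability, not discussed in the text, is to treat the constraints S_T^µ w · ξ^µ ≥ 1 as a linear-programming feasibility problem. A sketch for the homogeneous case sign(w · ξ), using SciPy's `linprog` (our own construction, not the [ND91] algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(xi, S):
    """Feasibility check: does some w satisfy S^mu * (w . xi^mu) >= 1 for all mu?

    For homogeneously separable data the margin can be rescaled at will,
    so these constraints are feasible if and only if D is linearly separable.
    """
    P, N = xi.shape
    # linprog solves: min c.w  s.t.  A_ub @ w <= b_ub.  Rewrite each constraint
    # S^mu * (w . xi^mu) >= 1   as   -(S^mu * xi^mu) . w <= -1.
    A_ub = -(S[:, None] * xi)
    res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=-np.ones(P),
                  bounds=[(None, None)] * N, method="highs")
    return res.status == 0  # status 0: a feasible solution was found

# Separable toy set: label = sign of the first component.
xi_sep = np.array([[1.0, 2.0], [2.0, -1.0], [-1.0, 1.0], [-2.0, -2.0]])
S_sep = np.array([+1, +1, -1, -1])

# XOR-like labels are not homogeneously linearly separable.
xi_xor = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
S_xor = np.array([+1, +1, -1, -1])

print(linearly_separable(xi_sep, S_sep))  # True
print(linearly_separable(xi_xor, S_xor))  # False
```

Unlike repeatedly running the Rosenblatt algorithm, this check terminates with a definite answer either way, at the cost of solving a linear program.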
In [ND91], a perceptron-like algorithm is presented which either finds a solution or terminates and thereby establishes non-separability, see also Sec. 3.3.6.

In the following, we discuss a number of basic ideas and strategies for coping with non-separable data sets:

◦ A simple perceptron could be trained to implement D approximately, in the sense that a large (possibly maximal) fraction of the training labels is reproduced by the perceptron. In Sec. 4.1 we present and discuss corresponding training schemes.

◦ More powerful, layered networks for classification can be constructed from perceptron-like units. We will consider the so-called committee machine and parity machine as example two-layer systems in Sec. 4.2. We show that the latter constitutes a universal classifier.
◦ Many perceptrons (or other simple classifiers) can be combined into an ensemble in order to take advantage of a wisdom-of-the-crowd effect [OM99, Urb00]. So-called Random Forests, i.e. ensembles of decision trees [Bre01], currently constitute one of the most prominent examples in the literature. We refrain from a detailed discussion of ensembles and refer to [OM99, Urb00] for further references.

◦ A perceptron-like threshold operation of the form S = sign(. . .) with linear argument can be applied after a non-linear transformation of the feature vectors. Along these lines, the framework of the Support Vector Machine (SVM) was developed. It has been, and continues to be, particularly successful. In Sec. 4.3 we will outline the concept and show that the SVM can be seen as a (highly non-trivial) conceptual extension of the simple perceptron.

◦ Perhaps the most frequently applied strategy is to treat the classification as a regression problem. A network of continuous units (including the output) can be trained by standard methods and, eventually, the output is thresholded. We have seen one example already in terms of the Adaline scheme, see Sec. 3.7. We return to this strategy in Chapter 5.

◦ Prototype-based systems [BHV16] offer another, very powerful framework for classification beyond linear separability. In Chapter 6 we will discuss the example of Learning Vector Quantization (LVQ), which implements, depending on the details, piecewise linear or piecewise quadratic decision boundaries. It is, moreover, a natural tool for multi-class classification.

4.1 Perceptron with errors

Assume that a given data set {ξ^µ, S_T^µ} with ξ^µ ∈ R^N and S_T^µ ∈ {−1, +1} is not linearly separable due to one or several of the reasons discussed above. In the following, we discuss extensions of perceptron training which tolerate misclassification to a certain degree.
Intuitively, this corresponds to the assumption that the data set is nearly linearly separable, or in other words: the unknown rule can be approximated by a linearly separable function.

4.1.1 Minimal number of errors

Aiming at a linearly separable approximation, one might want to minimize the number of misclassifications by choice of the weight vector w ∈ R^N in a simple perceptron student. While at first glance the idea appears plausible, we should realize that the outcome will be very sensitive to individual, misclassified examples in D. Formally, we can consider the following problem:
Minimal number of errors (Perceptron)   (4.1)

For a given D = {ξ^µ, S_T^µ}_{µ=1}^P with ξ^µ ∈ R^N and S_T^µ ∈ {−1, +1},

minimize over w ∈ R^N:   H_err(w) = Σ_{µ=1}^P ε(w, ξ^µ, S_T^µ)

with  ε = 1 if sign(w · ξ^µ) = −S_T^µ  and  ε = 0 if sign(w · ξ^µ) = +S_T^µ.

Note that this cost function does not differentiate between nearly correct examples with E^µ ≈ 0 close to the decision boundary and clear cases where the feature vector falls deep into the incorrect half-space.

The above-defined problem proves very difficult. Note that the number of misclassified examples is obviously integer and can only display discontinuous changes with w. On the other hand, ∇_w H_err = 0 almost everywhere in weight space. As a consequence, gradient based methods cannot be employed for the minimization or approximate minimization of H_err.

A prescription that applies Hebbian update steps, which can be seen as a modification of the Rosenblatt algorithm (3.15), has been introduced by S.I. Gallant [Gal90]. The so-called Pocket Algorithm relies on the stochastic presentation of single examples in combination with the principle of learning from mistakes for a weight vector w. In addition to the randomized, yet otherwise conventional, Rosenblatt updates of the vector w, the so far best (with respect to H_err) weight vector ŵ is "put into the pocket". It is only replaced when the on-going training process yields a vector w with a lower number of errors.

Pocket algorithm   (4.2)

– initialize w(0) = 0 and ŵ = 0 (tabula rasa), set Ĥ(0) = P

– at time step t, select a single feature vector ξ^µ with class label S_T^µ randomly from the data set D with equal probability 1/P

– compute the local potential E^µ(t) = w(t) · ξ^µ S_T^µ

– for w(t), perform an update according to the Rosenblatt algorithm:

w(t+1) = w(t) + (1/N) Θ[−E^µ(t)] ξ^µ S_T^µ

– compute H(t+1) = H_err(w(t+1)) acc. to Eq.
(4.1)

– update the pocket vector ŵ(t) and Ĥ(t) according to

ŵ(t+1) = w(t+1)  if H(t+1) < Ĥ(t)  (improvement),
ŵ(t+1) = ŵ(t)   if H(t+1) ≥ Ĥ(t)  (ŵ unchanged),

Ĥ(t+1) = H_err(ŵ(t+1)).
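The prescription (4.2) can be sketched in a few lines (a simplified, hypothetical implementation; terminating after a fixed number of steps and the Θ(0) = 1 convention are our own choices, and the toy data is made up):

```python
import numpy as np

def errors(w, xi, S):
    """H_err: number of misclassified examples, cf. Eq. (4.1)."""
    return int(np.sum(np.sign(xi @ w) != S))

def pocket(xi, S, t_max=2000, seed=0):
    """Pocket algorithm (4.2): randomized Rosenblatt updates plus a
    'pocket' copy of the best weight vector found so far."""
    rng = np.random.default_rng(seed)
    P, N = xi.shape
    w = np.zeros(N)                        # tabula rasa
    w_pocket, H_pocket = w.copy(), P       # H_hat(0) = P
    for _ in range(t_max):
        mu = rng.integers(P)               # random example, probability 1/P
        E = S[mu] * (w @ xi[mu])           # local potential E^mu
        if E <= 0:                         # learning from mistakes (Theta(0)=1)
            w = w + xi[mu] * S[mu] / N     # Rosenblatt step
        H = errors(w, xi, S)
        if H < H_pocket:                   # improvement: put w into the pocket
            w_pocket, H_pocket = w.copy(), H
    return w_pocket, H_pocket

# Non-separable toy data: one label flipped relative to sign(xi_1).
xi = np.array([[1.0, 0.2], [2.0, -0.5], [0.5, 1.0], [-1.0, 0.3],
               [-2.0, -0.7], [-0.4, -1.0], [1.5, 0.1], [-1.5, 0.2]])
S = np.sign(xi[:, 0])
S[0] = -1.0                                # mislabeled example
w_best, H_best = pocket(xi, S)
print(H_best)   # typically 1: only the flipped example remains misclassified
```

Note that the pocket count H_pocket never increases, in line with the remark below, while the working vector w itself keeps fluctuating.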
Obviously, the number of errors Ĥ(t) of the pocket vector ŵ(t) can never increase under the updates (4.2). Moreover, one can show that the stochastic selection of the training sample guarantees that, in principle, ŵ(t) approaches the minimum of H_err with probability one [Gal90, Roj96]. However, this quite weak convergence in probability does not allow us to make statements about the expected number of updates required to achieve a solution of a particular quality. Certainly, it is not possible to provide upper bounds as in the context of the Perceptron Convergence Theorem.

Several alternative, well-defined approaches have been considered which can be shown to converge in a more conventional sense, at the expense of having to accept sub-optimal H_err. Along the lines discussed towards the end of Sec. 3.7.2, the gradient based minimization of several cost functions similar to Eq. (3.86) has been considered in the literature. For instance, [GG91] discusses objective functions which correspond to

H_[k](w) = Σ_{µ=1}^P (c − E^µ)^k Θ[c − E^µ]   with E^µ = w · ξ^µ S_T^µ

in our notation. Note that the limiting case c = 0, k → 0 would recover the non-differentiable H_err, Eq. (4.1). In [GG91] the above functions are, not quite precisely, referred to as the perceptron cost function for k = 1 and the adatron cost function for k = 2, respectively. We would like to point out that, despite the conceptual similarity, the case c = 1, k = 2 is not strictly equivalent to the AdaTron algorithm for non-separable data sets. Moreover, for linearly separable data, any w ∈ V gives H_[2](w) = 0. Hence, optimal stability would have to be enforced through an additional minimization of the norm w².

4.1.2 Soft margin classifier

An explicit extension of the large margin concept has been suggested which allows for the controlled acceptance of misclassifications. For an introduction in the context of the SVM, see e.g. [SS02, DFO20] and references therein.
The formalism presented there reduces to the soft margin perceptron in the case of a linear kernel function, see Sec. 4.3 for the relation. A corresponding, extended optimization problem similar to (3.68) reads:

Soft margin perceptron (weight space)   (4.3)

minimize over w and β⃗:   (N/2) w² + γ Σ_{µ=1}^P β_µ

subject to  { E^µ ≥ 1 − β_µ }_{µ=1}^P  and  { β_µ ≥ 0 }_{µ=1}^P .

Here, we introduce so-called slack variables β_µ ∈ R with

β_µ = 0 ⇔ E^µ ≥ 1   and   β_µ > 0 ⇔ E^µ < 1.
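For a fixed w, the smallest feasible slack is β_µ = max(0, 1 − E^µ), which turns the constrained problem (4.3) into an unconstrained hinge-loss cost. A small numerical sketch (our own illustration with made-up data, not from the text):

```python
import numpy as np

def soft_margin_objective(w, xi, S, gamma):
    """Objective of (4.3) with the slack variables eliminated: for fixed w,
    the smallest beta_mu satisfying E^mu >= 1 - beta_mu and beta_mu >= 0
    is max(0, 1 - E^mu)."""
    N = len(w)
    E = S * (xi @ w)                    # local potentials E^mu
    beta = np.maximum(0.0, 1.0 - E)     # optimal slacks for this w
    return 0.5 * N * (w @ w) + gamma * np.sum(beta)

xi = np.array([[1.0, 1.0], [1.0, -1.0], [3.0, 2.0]])
S = np.array([+1, -1, +1])

# A weight vector satisfying all constraints with E^mu >= 1 has zero slack:
print(soft_margin_objective(np.array([0.0, 1.0]), xi, S, gamma=1.0))  # 1.0

# A shorter w trades margin violations (beta > 0) against a smaller norm:
print(soft_margin_objective(np.array([0.0, 0.5]), xi, S, gamma=1.0))  # 1.25
```

The parameter γ controls this trade-off: large γ punishes any slack and approaches the hard-margin problem (3.68), while small γ tolerates misclassifications more readily.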