Learning by Combining Memorization and Gradient Descent

John C. Platt
Synaptics, Inc.
2860 Zanker Road, Suite 206
San Jose, CA 95134

ABSTRACT

We have created a radial basis function network that allocates a new computational unit whenever an unusual pattern is presented to the network. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, then a new unit is allocated which memorizes the response to the presented pattern. If the network performs well on a presented pattern, then the network parameters are updated using standard LMS gradient descent. For predicting the Mackey-Glass chaotic time series, our network learns much faster than do those using back-propagation and uses a comparable number of synapses.

1 INTRODUCTION

Currently, networks that perform function interpolation tend to fall into one of two categories: networks that use gradient descent for learning (e.g., back-propagation), and constructive networks that use memorization for learning (e.g., k-nearest neighbors). Networks that use gradient descent for learning tend to form very compact representations, but use many learning cycles to find that representation. Networks that memorize their inputs need only be exposed to examples once, but grow linearly in the training set size. The network presented here strikes a compromise between memorization and gradient descent. It uses gradient descent for the "easy" input vectors and memorization for the "hard" input vectors. If the network performs well on a particular input vector, or the particular input vector is already close to a stored vector, then the network adjusts its parameters using gradient descent. Otherwise, it memorizes the input vector and the corresponding output vector by allocating a new unit.
The explicit storage of an input-output pair means that this pair can be used immediately to improve the performance of the system, instead of merely using that information for gradient descent. The network, called the resource-allocating network (RAN), uses units whose response is localized in input space. A unit with a non-local response needs to undergo gradient descent, because it has a non-zero output for a large fraction of the training data. Because RAN is a constructive network, it automatically adjusts the number of units to reflect the complexity of the function that is being interpolated. Fixed-size networks either use too few units, in which case the network memorizes poorly, or too many, in which case the network generalizes poorly. Parzen windows and k-nearest neighbors both require a number of stored patterns that grows linearly with the number of presented patterns. With RAN, the number of stored patterns grows sublinearly, and eventually reaches a maximum.

1.1 PREVIOUS WORK

Previous workers have used networks with localized basis functions (Broomhead & Lowe, 1988) (Moody & Darken, 1988 & 89) (Poggio & Girosi, 1990). Moody has further extended his work by incorporating a hash table lookup (Moody, 1989). The hash table is a resource-allocating network where the values in the hash table only become non-zero if the entry in the hash table is activated by the corresponding presence of non-zero input probability. The RAN adjusts the centers of the Gaussian units based on the error at the output, like (Poggio & Girosi, 1990). Networks with centers placed on a high-dimensional grid, such as (Broomhead & Lowe, 1988) and (Moody, 1989), or networks that use unsupervised clustering for center placement, such as (Moody & Darken, 1988 & 89), generate larger networks than RAN, because they cannot move the centers to increase the accuracy. Previous workers have created function interpolation networks that allocate fewer units than the size of the training set.
Cascade-correlation (Fahlman & Lebiere, 1990), SONN (Tenorio & Lee, 1989), and MARS (Friedman, 1988) all construct networks by adding additional units. These algorithms work well. The RAN algorithm improves on these algorithms by making the addition of a unit as simple as possible. RAN uses simple algebra to find the parameters of a new unit, while cascade-correlation and MARS use gradient descent and SONN uses simulated annealing.

2 THE ALGORITHM

This section describes a resource-allocating network (RAN), which consists of a network, a strategy for allocating new units, and a learning rule for refining the network.

2.1 THE NETWORK

The RAN is a two-layer radial-basis-function network. The first layer consists of units that respond to only a local region of the space of input values. The second layer linearly aggregates outputs from these units and creates the function that approximates the input-output mapping over the entire space. A simple function that implements a locally tuned unit is a Gaussian:

z_j = Σ_k (c_jk − I_k)²,  x_j = exp(−z_j / w_j²). (1)

We use a C1-continuous polynomial approximation to speed up the algorithm, without loss of network accuracy:

x_j = (1 − z_j / (q w_j²))²  if z_j < q w_j²,  x_j = 0 otherwise; (2)

where q = 2.67 is chosen empirically to make the best fit to a Gaussian. Each output of the network y_i is a sum of the outputs x_j, each weighted by the synaptic strength h_ij, plus a global polynomial. The x_j represent information about local parts of the space, while the polynomial represents global information:

y_i = Σ_j h_ij x_j + Σ_k L_ik I_k + γ_i. (3)

The h_ij x_j term can be thought of as a bump that is added or subtracted to the polynomial term Σ_k L_ik I_k + γ_i to yield the desired function. The linear term is useful when the function has a strong linear component. In the results section, the Mackey-Glass equation was predicted with only a constant term.

2.2 THE LEARNING ALGORITHM

The network starts with a blank slate: no patterns are yet stored.
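As a concrete sketch of equations (1)-(3) (our own paraphrase, not the paper's implementation; all function and variable names are ours):

```python
import numpy as np

q = 2.67  # empirical constant from equation (2)

def unit_response_gaussian(I, c, w):
    """Gaussian response of one locally tuned unit, equation (1)."""
    z = np.sum((c - I) ** 2)
    return np.exp(-z / w ** 2)

def unit_response_poly(I, c, w):
    """C1-continuous polynomial approximation, equation (2)."""
    z = np.sum((c - I) ** 2)
    if z < q * w ** 2:
        return (1.0 - z / (q * w ** 2)) ** 2
    return 0.0

def network_output(I, centers, widths, h, L, gamma):
    """Equation (3): locally tuned bumps plus a global linear term.
    Shape convention (ours): h is (outputs, units), L is (outputs, inputs)."""
    x = np.array([unit_response_poly(I, c, w) for c, w in zip(centers, widths)])
    return h @ x + L @ I + gamma
```

Both unit responses equal 1 at the center and fall off with distance; the polynomial form simply clips to exactly zero outside the radius z_j = q w_j², which is what makes it cheaper than the Gaussian.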
As patterns are presented to it, the network chooses to store some of them. At any given point the network has a current state, which reflects the patterns that have been stored previously. The allocator may allocate a new unit to memorize a pattern. After the new unit is allocated, the network output is equal to the desired output T. Let the index of this new unit be n. The peak of the response of the newly allocated unit is set to the memorized input vector,

c_n = I. (4)

The linear synapses on the second layer are set to the difference between the output of the network and the novel output,

h_ni = T_i − y_i. (5)

The width of the response of the new unit is proportional to the distance from the nearest stored vector to the novel input vector,

w_n = κ ||I − c_nearest||, (6)

where κ is an overlap factor. As κ grows larger, the responses of the units overlap more and more. The RAN uses a two-part memorization condition. An input-output pair (I, T) should be memorized if the input is far away from existing centers,

||I − c_nearest|| > δ(t), (7)

and if the difference between the desired output and the output of the network is large,

||T − y(I)|| > ε. (8)

Typically, ε is a desired accuracy of output of the network. Errors larger than ε are immediately corrected by the allocation of a new unit, while errors smaller than ε are gradually repaired using gradient descent. The distance δ(t) is the scale of resolution that the network is fitting at the t-th input presentation. The learning starts with δ(t) = δ_max, which is the largest length scale of interest, typically the size of the entire input space of non-zero probability density. The distance δ(t) shrinks until it reaches δ_min, which is the smallest length scale of interest. The network will average over features that are smaller than δ_min. We used the function

δ(t) = max(δ_max exp(−t/τ), δ_min), (9)

where τ is a decay constant.
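The allocation logic of equations (4)-(9) can be sketched as follows (a hypothetical Python paraphrase; names and the empty-network default are our assumptions, not Platt's):

```python
import numpy as np

def delta(t, d_max, d_min, tau):
    """Shrinking resolution scale, equation (9)."""
    return max(d_max * np.exp(-t / tau), d_min)

def should_allocate(I, T, y, centers, delta_t, eps):
    """Two-part novelty condition, equations (7) and (8)."""
    if len(centers) == 0:
        return True  # assumption: always memorize the very first pattern
    d_nearest = min(np.linalg.norm(I - c) for c in centers)
    return d_nearest > delta_t and np.linalg.norm(T - y) > eps

def allocate_unit(I, T, y, centers, kappa):
    """Equations (4)-(6): center at the input, second-layer synapses at the
    error, width proportional to the distance to the nearest stored center."""
    d_nearest = min(np.linalg.norm(I - c) for c in centers) if centers else 1.0
    c_new = I.copy()           # equation (4)
    h_new = T - y              # equation (5)
    w_new = kappa * d_nearest  # equation (6), with overlap factor kappa
    return c_new, h_new, w_new
```

The default width of 1.0 for the first unit is arbitrary; the paper does not specify how the very first allocation is initialized.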
At first, the system creates a coarse representation of the function, then refines the representation by allocating units with smaller and smaller widths. Finally, when the system has learned the entire function to the desired accuracy and length scale, it stops allocating new units altogether. The two-part memorization condition is necessary for creating a compact network. If only condition (7) is used, then the network will allocate units instead of using gradient descent to correct small errors. If only condition (8) is used, then fine-scale units may be allocated in order to represent coarse-scale features, which is wasteful. By allocating new units the RAN eventually represents the desired function ever more closely as the network is trained. Fewer units are needed for a given accuracy if the first-layer synapses c_jk, the second-layer synapses h_ij, and the parameters for the global polynomial, γ_i and L_ik, are adjusted to decrease the error E = ||y − T||² (Widrow & Hoff, 1960). We use gradient descent on the second-layer synapses to decrease the error whenever a new unit is not allocated:

Δh_ij = α(T_i − y_i)x_j,  Δγ_i = α(T_i − y_i),  ΔL_ik = α(T_i − y_i)I_k. (10)

In addition, we adjust the centers of the responses of units to decrease the error:

Δc_jk = (2α/w_j²)(I_k − c_jk) x_j Σ_i (T_i − y_i)h_ij. (11)

Equation (11) is derived from gradient descent and equation (1). Empirically, equation (11) also works for the polynomial approximation (2).

3 RESULTS

One application of an interpolating RAN is to predict complex time series. As a test case, a chaotic time series can be generated with a nonlinear algebraic or differential equation. Such a series has some short-range time coherence, but long-term prediction is very difficult. The RAN was tested on a particular chaotic time series created by the Mackey-Glass delay-difference equation:

x(t + 1) = (1 − b)x(t) + a x(t − τ) / (1 + x(t − τ)^10), (12)

for a = 0.2, b = 0.1, and τ = 17.
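A sketch of the combined update step of equations (10) and (11) (our code, not verbatim from the paper; array-shape conventions are our own):

```python
import numpy as np

def lms_step(I, T, y, x, h, L, gamma, centers, widths, alpha):
    """One gradient step, equations (10)-(11); applied only when no new
    unit is allocated. h is (outputs, units), L is (outputs, inputs)."""
    err = T - y
    # Equation (11), evaluated with the pre-update second-layer synapses:
    new_centers = [c + 2.0 * alpha / w ** 2 * (I - c) * x[j] * float(err @ h[:, j])
                   for j, (c, w) in enumerate(zip(centers, widths))]
    h = h + alpha * np.outer(err, x)   # Δh_ij = α(T_i − y_i) x_j
    gamma = gamma + alpha * err        # Δγ_i  = α(T_i − y_i)
    L = L + alpha * np.outer(err, I)   # ΔL_ik = α(T_i − y_i) I_k
    return h, L, gamma, new_centers
```

When the error T − y is zero, every update vanishes, so a well-fit pattern leaves the network untouched, which is exactly the "easy input" branch of the algorithm.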
We trained the network to predict the value x(T + dT), given the values x(T), x(T − 6), x(T − 12), and x(T − 18) as inputs. The network was tested using two different learning modes: off-line learning with a limited amount of data, and on-line learning with a large amount of data. The Mackey-Glass equation has been learned off-line, by other workers, using the back-propagation algorithm (Lapedes & Farber, 1987) and radial basis functions (Moody & Darken, 1989). We used RAN to predict the Mackey-Glass equation with the following parameters: α = 0.02, 400 learning epochs, δ_max = 0.7, κ = 0.87, and δ_min = 0.07 reached after 100 epochs. RAN was simulated using ε = 0.02 and ε = 0.05. In all cases, dT = 85. Figure 1 shows the efficiency of the various learning algorithms: the smallest, most accurate algorithms are towards the lower left. When optimized for size of network (ε = 0.05), the RAN has about as many weights as back-propagation and is just as accurate. The efficiency of RAN is roughly the same as back-propagation, but requires much less computation: RAN takes approximately 8 minutes of SUN-4 CPU time to reach the accuracy listed in figure 1, while back-propagation took approximately 30-60 minutes of Cray X-MP time. The Mackey-Glass equation has been learned using on-line techniques by hashing B-splines (Moody, 1989). We used on-line RAN with the following parameters: α = 0.05, δ_max = 0.7, δ_min = 0.07, κ = 0.87, and δ_min reached after 5000 input presentations. Table 1 compares the on-line error versus the size of network for both RAN and the hashing B-spline (Moody, personal communication). In both cases, dT = 50. The RAN algorithm has similar accuracy to the hashing B-splines, but the number of units allocated is between a factor of 2 and 8 smaller. For more detailed results on the Mackey-Glass equation, see (Platt, 1991).
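For reference, the training series can be generated directly from equation (12); the constant initial history of 1.2 below is our assumption, since the paper does not specify the initialization:

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2):
    """Iterate the delay-difference equation (12):
    x(t+1) = (1-b)*x(t) + a*x(t-tau) / (1 + x(t-tau)**10)."""
    x = [x0] * (tau + 1)  # constant history before t = 0 (an assumption)
    for t in range(tau, tau + n):
        x_tau = x[t - tau]
        x.append((1 - b) * x[t] + a * x_tau / (1 + x_tau ** 10))
    return x[tau + 1:]

# Input/target pairs as used in the off-line experiment (dT = 85):
series = mackey_glass(1000)
inputs = [(series[t], series[t - 6], series[t - 12], series[t - 18])
          for t in range(18, 915)]
targets = [series[t + 85] for t in range(18, 915)]
```

With a = 0.2, b = 0.1, τ = 17 the iterates stay bounded and exhibit the familiar chaotic oscillation around x ≈ 1.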
[Figure 1 legend: the plotted methods are RAN, hashing B-spline, standard RBF, K-means RBF, and back-propagation; horizontal axis: number of weights, 100 to 100,000.]

Figure 1: The error on a test set versus the size of the network. Back-propagation stores the prediction function very compactly and accurately, but takes a large amount of computation to form the compact representation. RAN is as compact and accurate as back-propagation, but uses much less computation to form its representation.

Table 1: Comparison between RAN and hashing B-splines

  Method                                    Number of Units   Normalized RMS Error
  RAN, ε = 0.05                             50                0.071
  RAN, ε = 0.02                             143               0.054
  Hashing B-spline, 1 level of hierarchy    284               0.074
  Hashing B-spline, 2 levels of hierarchy   1166              0.044

4 CONCLUSIONS

There are various desirable attributes for a network that learns: it should learn quickly, it should learn accurately, and it should form a compact representation. Formation of a compact representation is particularly important for networks that are implemented in hardware, because silicon area is at a premium. A compact representation is also important for statistical reasons: a network that has too many parameters can overfit data and generalize poorly. Many previous network algorithms either learned quickly at the expense of a compact representation, or formed a compact representation only after laborious computation. The RAN is a network that can find a compact representation with a reasonable amount of computation.

Acknowledgements

Thanks to Carver Mead, Carl Ruoff, and Fernando Pineda for useful comments on the paper. Special thanks to John Moody, who not only provided useful comments on the paper, but also provided data on the hashing B-splines.

References

Broomhead, D., Lowe, D., 1988, Multivariable function interpolation and adaptive networks, Complex Systems, 2, 321-355. Fahlman, S. E., Lebiere, C., 1990, The Cascade-Correlation Learning Architecture, In: Advances in Neural Information Processing Systems 2, D.
Touretzky, ed., 524-532, Morgan-Kaufmann, San Mateo. Friedman, J. H., 1988, Multivariate Adaptive Regression Splines, Department of Statistics, Stanford University, Tech. Report LCS 102. Lapedes, A., Farber, R., 1987, Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling, Technical Report LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, NM. Moody, J., Darken, C., 1988, Learning with Localized Receptive Fields, In: Proceedings of the 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, T. Sejnowski, eds., 133-143, Morgan-Kaufmann, San Mateo. Moody, J., Darken, C., 1989, Fast Learning in Networks of Locally-Tuned Processing Units, Neural Computation, 1(2), 281-294. Moody, J., 1989, Fast Learning in Multi-Resolution Hierarchies, In: Advances in Neural Information Processing Systems 1, D. Touretzky, ed., 29-39, Morgan-Kaufmann, San Mateo. Platt, J., 1991, A Resource-Allocating Network for Function Interpolation, Neural Computation, 3(2), to appear. Poggio, T., Girosi, F., 1990, Regularization Algorithms for Learning that are Equivalent to Multilayer Networks, Science, 247, 978-982. Powell, M. J. D., 1987, Radial Basis Functions for Multivariable Interpolation: A Review, In: Algorithms for Approximation, J. C. Mason, M. G. Cox, eds., Clarendon Press, Oxford. Tenorio, M. F., Lee, W., 1989, Self-Organizing Neural Networks for the Identification Problem, In: Advances in Neural Information Processing Systems 1, D. Touretzky, ed., 57-64, Morgan-Kaufmann, San Mateo. Widrow, B., Hoff, M., 1960, Adaptive Switching Circuits, In: 1960 IRE WESCON Convention Record, 96-104, IRE, New York.
A Neural Network Approach for Three-Dimensional Object Recognition

Volker Tresp
Siemens AG, Central Research and Development
Otto-Hahn-Ring 6, D-8000 München 83, Germany

Abstract

The model-based neural vision system presented here determines the position and identity of three-dimensional objects. Two stereo images of a scene are described in terms of shape primitives (line segments derived from edges in the scenes) and their relational structure. A recurrent neural matching network solves the correspondence problem by assigning corresponding line segments in right and left stereo images. A 3-D relational scene description is then generated and matched by a second neural network against models in a model base. The quality of the solutions and the convergence speed were both improved by using mean field approximations.

1 INTRODUCTION

Many machine vision systems and, to a large extent, also the human visual system, are model based. The scenes are described in terms of shape primitives and their relational structure, and the vision system tries to find a match between the scene descriptions and 'familiar' objects in a model base. In many situations, such as robotics applications, the problem is intrinsically 3-D. Different approaches are possible. Poggio and Edelman (1990) describe a neural network that treats the 3-D object recognition problem as a multivariate approximation problem. A certain number of 2-D views of the object are used to train a neural network to produce the standard view of that object. After training, new perspective views can be recognized. In the approach presented here, the vision system tries to capture the true 3-D structure of the scene. Two stereo views of a scene are used to generate a 3-D
Figure 1: Match of primitive P_a to P_i. Figure 2: Definitions of r, q, and φ (left); the function μ(·) (right).

description of the scene which is then matched against models in a model base. The stereo correspondence problem and the model matching problem are solved by two recurrent neural networks with very similar architectures. A neuron is assigned to every possible match between primitives in the left and right images or, respectively, the scene and the model base. The networks are designed to find the best matches by obeying certain uniqueness constraints. The networks are robust against the uncertainties in the descriptions of both the stereo images and the 3-D scene (shadow lines, missing lines). Since a partial match is sufficient for a successful model identification, opaque and partially occluded objects can be recognized.

2 THE NETWORK ARCHITECTURE

Here, a general model matching task is considered. The activity of a match neuron m_ai (Figure 1) represents the certainty of a match between a primitive P_a in the model base and P_i in the scene description. The interactions between neurons can be derived from the network's energy function, where the fixed points of the network correspond to the minima of the energy function. The first term in the energy function evaluates the match between the primitives:

E_P = −1/2 Σ_{a,i} κ_ai m_ai. (1)

The function κ_ai is zero if the type of primitive P_a is not equal to the type of primitive P_i. If both types are identical, κ_ai evaluates the agreement between parameters p_a(k) and p_i(k) which describe properties of the primitives. Here, κ_ai = μ(Σ_k |p_a(k) − p_i(k)|/σ_k) is maximum if the parameters of P_a and P_i match (Figures 1 and 2).
The evaluation of the match between the relations of primitives in the scene and data base is performed by the energy term (Mjolsness, Gindi and Anandan, 1989)

E_S = −1/2 Σ_{a,β,i,j} χ_aβij m_ai m_βj. (2)

The function χ_aβij = μ(Σ_k |p_aβ(k) − p_ij(k)|/σ'_k) is maximum if the relation between P_a and P_β matches the relation between P_i and P_j. The constraint that a primitive in the scene should only match to one or no primitive in the model base (column constraint) is implemented by the additional (penalty) energy term (Utans et al., 1989; Tresp and Gindi, 1990)

E_C = Σ_i [((Σ_a m_ai) − 1)² Σ_a m_ai]. (3)

E_C is equal to zero only if, in all columns, the sum over the activations of all neurons is equal to one or zero, and positive otherwise.

2.1 DYNAMIC EQUATIONS AND MEAN FIELD THEORY

2.1.1 MFA1

The neural network should make binary decisions, match or no match, but binary recurrent networks get easily stuck in local minima. Bad local minima can be avoided by using an annealing strategy, but annealing is time-consuming when simulated on a digital computer. Using a mean field approximation, one can obtain deterministic equations while retaining some of the advantages of the annealing process (Peterson and Soderberg, 1989). The network is interpreted as a system of interacting units in thermal contact with a heat reservoir of temperature T. Such a system minimizes the free energy F = E − TS, where S is the entropy of the system. At T = 0 the energy E is minimized. The mean value v_ai = <m_ai> of a neuron becomes v_ai = 1/(1 + e^(−u_ai/T)) with u_ai = −∂E/∂v_ai. These equations can be updated synchronously or asynchronously, or solved iteratively by moving only a small distance from the old value of v_ai in the direction of the new mean field. At high temperatures T, the system is in the trivial solution v_ai = 1/2 for all a, i, and the activations of all neurons are in the linear region of the sigmoid function. The system can be described by linearized equations.
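As an illustrative sketch (ours; the paper gives no pseudocode), the iterative MFA1 relaxation can be written for a toy single-column energy consisting only of the constraint term E_C of equation (3):

```python
import numpy as np

def mfa1_step(v, energy_grad, T, step=0.1):
    """One MFA1 relaxation step: move each mean activation a small distance
    toward the mean field v_ai = 1/(1 + exp(-u_ai/T)), u_ai = -dE/dv_ai."""
    u = -energy_grad(v)
    target = 1.0 / (1.0 + np.exp(-u / T))
    return v + step * (target - v)

def grad_Ec(v):
    """Gradient of the toy energy E = ((sum_a v_a) - 1)**2 * sum_a v_a,
    whose minima have the column summing to 0 or 1 (equation (3))."""
    s = v.sum()
    return np.full_like(v, 2 * (s - 1) * s + (s - 1) ** 2)

v = np.full(4, 0.5)                      # start near the trivial solution
for T in np.linspace(1.0, 0.05, 200):    # a simple annealing schedule (ours)
    v = mfa1_step(v, grad_Ec, T)
```

After annealing, the column activations sum to roughly one, i.e. approximately one match is "on", which is the behavior the penalty term is designed to enforce.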
The magnitudes of all eigenvalues of the corresponding transfer matrix are less than 1. At a critical temperature T_c, the magnitude of at least one of the eigenvalues becomes greater than one and the trivial solution becomes unstable. T_c and favorable weights for the different terms in the energy function can be found by an eigenvalue analysis of the linearized equations (Peterson and Soderberg, 1989).

2.1.2 MFA2

The column constraint is satisfied by states with exactly one neuron or no neuron 'on' in every column. If only these states are considered in the derivation of the mean field equations, one can obtain another set of mean field equations, v_ai = e^(u_ai/T) / (1 + Σ_β e^(u_βi/T)) with u_ai = −∂E/∂v_ai. The column constraint term (Equation 3) drops out of the energy function and the energy surface is simplified. The high temperature fixed point corresponds to v_ai = 1/(N + 1) for all a, i, where N is the number of rows.

3 THE CORRESPONDENCE PROBLEM

To solve the correspondence problem, corresponding lines in left and right images have to be identified. A good assumption is that the appearance of an object in the left image is a distorted and shifted version of the appearance of the object in the right image with approximately the same scale and orientation. The machinery just developed can be applied if the left image is interpreted as the scene and the right image as the model. Figure 3 shows two stereo images of a simple scene and the segmentation of left and right images into line segments, which are the only primitives in this application. Lines correspond to the edges, structure and contours of the objects and shadow lines. The length of a line segment, p_i(1) = l_i, is the descriptive parameter attached to each line segment P_i.
Relations between line segments are only considered if they are in a local neighborhood: χ_aβ,ij is equal to zero if not both a) P_a is attached to line segment P_β and b) line segment P_i is attached to line segment P_j. Otherwise, χ_aβ,ij = μ(|φ_aβ − φ_ij|/σ_φ + |r_aβ − r_ij|/σ_r + |q_aβ − q_ij|/σ_q), where p_ij(1) = φ_ij is the angle between line segments, p_ij(2) = r_ij the logarithm of the ratio of their lengths, and p_ij(3) = q_ij the attachment point (Shumaker et al., 1989) (Figure 2). Here, we have two uniqueness constraints: at most one neuron should be active in each column or each row. The row constraint is enforced by an energy term equivalent to E_C: E_R = Σ_a [((Σ_i m_ai) − 1)² Σ_i m_ai].

4 DESCRIPTION OF THE 3-D OBJECT STRUCTURE

From the last section, we know which endpoints in the left image correspond to endpoints in the right image. If D is the separation of both (parallel mounted) cameras, f the focal length of the cameras, and x_l, y_l, x_r, y_r the coordinates of a particular point in left and right images, the 3-D position of the point in camera coordinates x, y, z becomes z = Df/(x_r − x_l), y = z y_r/f, x = z x_r/f + D/2. This information is used to generate the 3-D description of the visible portion of the objects in the scene. Knowing the true 3-D position of the endpoints of the line segments, the system concludes that the chair and the wardrobe are two distinct and spatially separated objects and that line segments 12 and 13 in the right image and 12 in the left image are not connected to either the chair or the wardrobe. On the other hand, it is not obvious that the shadow lines under the wardrobe are not part of the wardrobe.

Figure 3: Stereo images of a scene and segmented images. The stereo matching network matched all line segments that are present in both images correctly.
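A minimal sketch of the triangulation step (our code; the paper only states the relations):

```python
def triangulate(x_l, y_l, x_r, y_r, D, f):
    """Recover camera coordinates (x, y, z) of a matched point pair from
    parallel-mounted cameras with baseline D and focal length f, following
    Section 4: z = D*f/(x_r - x_l), y = z*y_r/f, x = z*x_r/f + D/2.
    For parallel cameras y_l is redundant (y_l ~ y_r), so it is unused."""
    z = D * f / (x_r - x_l)
    return z * x_r / f + D / 2.0, z * y_r / f, z
```

For example, with D = 1 and f = 1, the matched image points (0.05, 0.3) and (0.15, 0.3) reproject to the world point (2, 3, 10), consistent with the forward projection of that point into both cameras.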
5 MATCHING OBJECTS AND MODELS

The scene description now must be matched with stored models describing the complete 3-D structures of the models in the data base. The model description might be constructed by either explicitly measuring the dimensions of the models or by incrementally assembling the 3-D structure from several stereo views of the models. Descriptive parameters are the (true 3-D) length of line segments l, the (true 3-D) angles φ between line segments and the (true 3-D) attachment points q. The knowledge about the 3-D structure allows a segmentation of the scene into different objects, and the row constraint is only applied to neurons relating to the same object O in the scene: E_R' = Σ_O Σ_a [((Σ_{i∈O} m_ai) − 1)² Σ_{i∈O} m_ai]. Figure 4 shows the network after convergence. Except for the occluded leg, all line segments belonging to the chair could be matched correctly. All non-occluded line segments of the wardrobe could be matched correctly except for its left front leg. The shadow lines in the image did not find a match.

Figure 4: 3-D matching network.

6 3-D POSITION

In many applications, one is also interested in determining the positions of the recognized objects in camera coordinates. In general, the transformation between an object in a standard frame of reference X_O = (x_O, y_O, z_O) and the transformed frame of reference X_S = (x_S, y_S, z_S) can be described by X_S = R X_O, where R is a 4 x 4 matrix describing a rotation followed by a translation. R can be calculated if X_O and X_S are known for at least 4 points using, for example, the pseudo-inverse or an ADALINE.
Knowing the coefficients of R, the object position can be calculated. If an ADALINE is used, the error after convergence is a measure of the consistency of the transformation. A large error can be used as an indication that either a wrong model was matched, or certain primitives were misclassified.

7 DISCUSSION

Both MFA1 and MFA2 were used in the experiments. The same solutions were found in general, but due to the simpler energy surface, MFA2 allowed greater time steps and therefore converged 5 to 10 times faster. For more complex scenes, a hierarchical system could be considered. In the first step, simple objects such as squares, rectangles, and circles would be identified. These would then form the primitives in a second stage which would then recognize complete objects. It might also be possible to combine these two matching nets into one hierarchical net similar to the networks described by Mjolsness, Gindi and Anandan (1989).

Acknowledgements

I would like to acknowledge the contributions of Gene Gindi, Eric Mjolsness and Joachim Utans of Yale University to the design of the matching network. I thank Christian Evers for helping me to acquire the images.

References

Eric Mjolsness, Gene Gindi, P. Anandan. Neural Optimization in Model Matching and Perceptual Organization. Neural Computation, 1, pp. 218-229, 1989. Carsten Peterson, Bo Soderberg. A new method for mapping optimization problems onto neural networks. International Journal of Neural Systems, Vol. 1, No. 1, pp. 3-22, 1989. T. Poggio, S. Edelman. A Network That Learns to Recognize Three-Dimensional Objects. Nature, No. 6255, pp. 263-266, January 1990. Grant Shumaker, Gene Gindi, Eric Mjolsness, P. Anandan. Stickville: A Neural Net for Object Recognition via Graph Matching. Tech. Report No. 8908, Yale University, 1989. Volker Tresp, Gene Gindi. Invariant Object Recognition by Inexact Subgraph Matching with Applications in Industrial Part Recognition. International Neural Network Conference, Paris, pp.
95-98, 1990. Joachim Utans, Gene Gindi, Eric Mjolsness, P. Anandan. Neural Networks for Object Recognition within Compositional Hierarchies, Initial Experiments. Tech. Report No. 8903, Yale University, 1989.
Distributed Recursive Structure Processing

Geraldine Legendre, Department of Linguistics
Yoshiro Miyata, Optoelectronic Computing Systems Center
Paul Smolensky, Department of Computer Science
University of Colorado, Boulder, CO 80309-0430*

Abstract

Harmonic grammar (Legendre, et al., 1990) is a connectionist theory of linguistic well-formedness based on the assumption that the well-formedness of a sentence can be measured by the harmony (negative energy) of the corresponding connectionist state. Assuming a lower-level connectionist network that obeys a few general connectionist principles but is otherwise unspecified, we construct a higher-level network with an equivalent harmony function that captures the most linguistically relevant global aspects of the lower-level network. In this paper, we extend the tensor product representation (Smolensky 1990) to fully recursive representations of recursively structured objects like sentences in the lower-level network. We show theoretically and with an example the power of the new technique for parallel distributed structure processing.

1 Introduction

A new technique is presented for representing recursive structures in connectionist networks. It has been developed in the context of the framework of Harmonic Grammar (Legendre et al. 1990a, 1990b), a formalism for theories of linguistic well-formedness which involves two basic levels: At the lower level, elements of the problem domain are represented as distributed patterns of activity in a network; At the higher level, the elements in the domain are represented locally and connection weights are interpreted as soft rules involving these elements. There are two aspects that are central to the framework.

*The authors are listed in alphabetical order.
First, the connectionist well-formedness measure harmony (or negative "energy"), which we use to model linguistic well-formedness, has the properties that it is preserved between the lower and the higher levels and that it is maximized in the network processing. Our previous work developed techniques for deriving harmonies at the higher level from linguistic data, which allowed us to make contact with existing higher-level analyses of a given linguistic phenomenon. This paper concentrates on the second aspect of the framework: how particular linguistic structures such as sentences can be efficiently represented and processed at the lower level. The next section describes a new method for representing tree structures in a network which is an extension of the tensor product representation proposed in (Smolensky 1990) that allows recursive tree structures to be represented and various tree operations to be performed in parallel.

2 Recursive tensor product representations

A tensor product representation of a set of structures S assigns to each s ∈ S a vector built up by superposing role-sensitive representations of its constituents. A role decomposition of S specifies the constituent structure of s by assigning to it an unordered set of filler-role bindings. For example, if S is the set of strings from the alphabet {a, b, c} and s = cba, then we might choose a role decomposition in which the roles are absolute positions in the string (r_1 = first, r_2 = second, ...) and the constituents are the filler/role bindings {b/r_2, a/r_3, c/r_1}.[1] In a tensor product representation a constituent - i.e., a filler/role binding - is represented by the tensor (or generalized outer) product of vectors representing the filler and role in isolation: f/r is represented by the vector v = f⊗r, which is in fact a second-rank tensor whose elements are conveniently labelled by two subscripts and defined simply by v_φρ = f_φ r_ρ.
Where do the filler and role vectors f and r come from? In the most straightforward case, each filler is a member of a simple set F (e.g., an alphabet) and each role is a member of a simple set R, and the designer of the representation simply specifies vectors representing all the elements of F and R. In more complex cases, one or both of the sets F and R might be sets of structures which in turn can be viewed as having constituents, and which in turn can be represented using a tensor product representation. This recursive construction of tensor product representations leads to tensor products of three or more vectors, creating tensors of rank three and higher, with elements conveniently labelled by three or more subscripts. The recursive structure of trees leads naturally to such a recursive construction of a tensor product representation. (The following analysis builds on Section 3.7.2 of (Smolensky 1990).) We consider binary trees (in which every node has at most two children) since the techniques developed below generalize immediately to trees with higher branching factor, and since the power of binary trees is well attested, e.g., by the success of Lisp, whose basic data structure is the binary tree. Adopting the conventions and notations of Lisp, we assume for simplicity that the terminal nodes of the tree (those with no children), and only the terminal nodes, are labelled by symbols or atoms. The set of structures S we want to represent is the union of a set of atoms and the set of binary trees with terminal nodes labelled by these atoms. ¹The other major kind of role decomposition considered in (Smolensky 1990) is contextual roles; under one such decomposition, one constituent of cba is "b in the role 'preceded by c and followed by a'".
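As a concrete illustration of the binding operation v = f⊗r described above, here is a small Python sketch (our own, not from the paper; the particular filler and role vectors are arbitrary illustrative choices). It builds the tensor product representation of the string cba and recovers a filler by contracting with a role vector, which works exactly because the roles chosen here are orthonormal.

```python
import numpy as np

# Illustrative filler vectors for the alphabet {a, b, c}.
fillers = {"a": np.array([1., 0., 0.]),
           "b": np.array([0., 1., 0.]),
           "c": np.array([0., 0., 1.])}
# Orthonormal positional role vectors r1, r2, r3 (first, second, third).
roles = {1: np.array([1., 0., 0.]),
         2: np.array([0., 1., 0.]),
         3: np.array([0., 0., 1.])}

# Represent s = "cba" as the superposition of bindings {c/r1, b/r2, a/r3}:
# each binding f/r is the outer product f (x) r, a second-rank tensor.
s = sum(np.outer(fillers[f], roles[i])
        for i, f in enumerate("cba", start=1))

# With orthonormal roles, unbinding is contraction with the role vector.
def unbind(tensor, role):
    return tensor @ role

print(unbind(s, roles[2]))  # recovers the filler bound to position 2: b
```

The same contraction with `roles[1]` or `roles[3]` recovers c and a respectively; with merely linearly independent (non-orthonormal) roles, one would contract with the dual-basis vectors instead, as the paper notes later.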
One way to view a binary tree, by analogy with how we viewed strings above, is as having a large number of positions with various locations relative to the root: we adopt positional roles r_x labelled by binary strings (or bit vectors) such as x = 0110, which is the position in a tree accessed by "caddar = car(cdr(cdr(car)))", that is, the left child (0; car) of the right child (1; cdr) of the right child of the left child of the root of the tree. Using this role decomposition, each constituent of a tree is an atom (the filler) bound to some role r_x specifying its location; so if a tree s has a set of atoms {f_i} at respective locations {x_i}, then the vector representing s is s = Σ_i f_i ⊗ r_{x_i}. A more recursive view of a binary tree sees it as having only two constituents: the atoms or subtrees which are the left and right children of the root. In this fully recursive role decomposition, fillers may either be atoms or trees: the set of possible fillers F is the same as the original set of structures S. The fully recursive role decomposition can be incorporated into the tensor product framework by making the vector spaces and operations a little more complex than in (Smolensky 1990). The goal is a representation obeying, ∀s, p, q ∈ S: s = cons(p, q) ⇒ s = p⊗r0 + q⊗r1 (1) Here, s = cons(p, q) is the tree with left subtree p and right subtree q, while on the right-hand side of (1) s, p and q are the vectors representing s, p and q. The only two roles in this recursive decomposition are r0, r1: the left and right children of the root.
These roles are represented by two vectors r0 and r1. A fully recursive representation obeying Equation 1 can actually be constructed from the positional representation, by assuming that the (many) positional role vectors are constructed recursively from the (two) fully recursive role vectors according to: r_{x0} = r_x ⊗ r0, r_{x1} = r_x ⊗ r1. For example, r_{0110} = r0 ⊗ r1 ⊗ r1 ⊗ r0.² Thus the vectors representing positions at depth d in the tree are tensors of rank d (taking the root to be depth 0). As an example, the tree s = cons(A, cons(B, C)) = cons(p, q), where p = A and q = cons(B, C), is represented by s = A⊗r0 + B⊗r_{01} + C⊗r_{11} = A⊗r0 + B⊗r0⊗r1 + C⊗r1⊗r1 = A⊗r0 + (B⊗r0 + C⊗r1)⊗r1 = p⊗r0 + q⊗r1, in accordance with Equation 1. The complication in the vector spaces needed to accomplish this recursive analysis is one that allows us to add together the tensors of different ranks representing different depths in the tree. All we need do is take the direct sum of the spaces of tensors of different rank; in effect, concatenating into a long vector all the elements of the tensors. For example, in s = cons(A, cons(B, C)), the depth-0 component is 0, since s isn't an atom; depth 1 contains A, represented by the tensor S^(1)_{φρ1} = A_φ r0_{ρ1}; and depth 2 contains B and C, represented by S^(2)_{φρ1ρ2} = B_φ r0_{ρ1} r1_{ρ2} + C_φ r1_{ρ1} r1_{ρ2}. The tree as a whole is then represented by the sequence s = {S^(0)_φ, S^(1)_{φρ1}, S^(2)_{φρ1ρ2}, ...}, where the tensor for depth 0, S^(0)_φ, and the tensors for depths d > 2, S^(d)_{φρ1...ρd}, are all zero. We let V denote the vector space of such sequences of tensors of rank 0, rank 1, ..., up to some maximum depth D, which may be infinite. ²By adopting this definition of r_x, we are essentially taking the recursive structure that is implicit in the subscripts x labelling the positional role vectors, and mapping it into the structure of the vectors themselves.
Two elements of V are added (or "superimposed") simply by adding together the tensors of corresponding rank. This is our vector space for representing trees.³ The vector operation cons for building the representation of a tree from that of its two subtrees is given by Equation 1. As an operation on V this can be written: cons: ({P^(0)_φ, P^(1)_{φρ1}, P^(2)_{φρ1ρ2}, ...}, {Q^(0)_φ, Q^(1)_{φρ1}, Q^(2)_{φρ1ρ2}, ...}) ↦ {0, P^(0)_φ r0_{ρ1}, P^(1)_{φρ1} r0_{ρ2}, ...} + {0, Q^(0)_φ r1_{ρ1}, Q^(1)_{φρ1} r1_{ρ2}, ...}. (Here, 0 denotes the zero vector in the space representing atoms.) In terms of matrices multiplying vectors in V, this can be written cons(p, q) = W_cons0 p + W_cons1 q (parallel to Equation 1), where the non-zero elements of the matrix W_cons0 are W_cons0[φρ1ρ2...ρ_d ρ_{d+1}; φρ1ρ2...ρ_d] = r0_{ρ_{d+1}}, and W_cons1 is gotten by replacing r0 with r1. Taking the car or cdr of a tree - extracting the left or right child - in the recursive decomposition is equivalent to unbinding either r0 or r1. As shown in (Smolensky 1990, Section 3.1), if the role vectors are linearly independent, this unbinding can be performed accurately, via a linear operation, specifically a generalized inner product (tensor contraction) of the vector representing the tree with an unbinding vector u0 or u1. In general, the unbinding vectors are the dual basis to the role vectors; equivalently, they are the vectors comprising the inverse matrix to the matrix of all role vectors. If the role vectors are orthonormal (as in the simulation discussed below), the unbinding vectors are the same as the role vectors. The car operation can be written explicitly as an operation on V: car: {S^(0)_φ, S^(1)_{φρ1}, S^(2)_{φρ1ρ2}, ...} ↦ {Σ_{ρ1} S^(1)_{φρ1} u0_{ρ1}, Σ_{ρ2} S^(2)_{φρ1ρ2} u0_{ρ2}, Σ_{ρ3} S^(3)_{φρ1ρ2ρ3} u0_{ρ3}, ...}. ³In the connectionist implementation simulated below, there is one unit for each element of each tensor in the sequence.
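To make the cons/car/cdr machinery concrete, here is a small Python sketch (ours, not the paper's simulation). A tree is stored as a list of tensors, one per depth; cons implements Equation 1 depth by depth, and car/cdr unbind by contracting each tensor's last role index. Since r0 and r1 are chosen orthonormal, the unbinding vectors equal the role vectors. Atom vectors are arbitrary illustrative choices.

```python
import numpy as np

r0 = np.array([1., 0.])   # left-child role vector
r1 = np.array([0., 1.])   # right-child role vector (orthonormal, so u_i = r_i)

def atom(f):
    # An atom's representation: a depth-0 tensor (the filler vector) only.
    return [np.asarray(f, dtype=float)]

def cons(p, q):
    # Equation 1, depth by depth: s = p (x) r0 + q (x) r1.
    out = [np.zeros(3)]   # depth-0 component of a non-atom is zero
    for d in range(max(len(p), len(q))):
        tp = np.multiply.outer(p[d], r0) if d < len(p) else 0.0
        tq = np.multiply.outer(q[d], r1) if d < len(q) else 0.0
        out.append(tp + tq)
    return out

def car(s):
    # Unbind r0: contract each tensor's last role index with u0 = r0.
    return [s[d] @ r0 for d in range(1, len(s))]

def cdr(s):
    return [s[d] @ r1 for d in range(1, len(s))]

A, B, C = atom([1., 0., 0.]), atom([0., 1., 0.]), atom([0., 0., 1.])
s = cons(A, cons(B, C))

print(car(s)[0])        # recovers A's filler vector: [1. 0. 0.]
print(car(cdr(s))[0])   # cadr(s) recovers B's filler: [0. 1. 0.]
```

Compositions such as cadr here are just repeated contractions, which is why the paper can collapse a whole sequence of car/cdr/cons operations into a single matrix on V.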
In the simulation we report, seven atoms are represented by (binary) vectors in a three-dimensional space, so φ = 0, 1, 2; r0 and r1 are vectors in a two-dimensional space, so ρ = 0, 1. The number of units representing the portion of V for depth d is thus 3·2^d, and the total number of units representing depths up to D is 3(2^{D+1} − 1). In tensor product representations, exact representation of deeply embedded structure does not come cheap. (Replacing u0 by u1 gives cdr.) The operation car can be realized as a matrix W_car mapping V to V with non-zero elements W_car[φρ1ρ2...ρ_d; φρ1ρ2...ρ_d ρ_{d+1}] = u0_{ρ_{d+1}}. W_cdr is the same matrix, with u0 replaced by u1.⁴ One of the main points of developing this connectionist representation of trees is to enable massively parallel processing. Whereas in the traditional sequential implementation of Lisp, symbol processing consists of a long sequence of car, cdr, and cons operations, here we can compose together the corresponding sequence of W_car, W_cdr, W_cons0 and W_cons1 operations into a single matrix operation. Adding some minimal nonlinearity allows us to compose more complex operations incorporating the equivalent of conditional branching. We now illustrate this with a simple linguistically motivated example. 3 An example The symbol manipulation problem we consider is that of transforming a tree representation of a syntactic parse of an English sentence into a tree representation of a predicate-calculus expression for the meaning of the sentence. We considered two possible syntactic structures: simple active sentences and passive sentences (each shown as a parse tree in the original figures). Each was to be transformed into a tree representing V(A,P), where the agent A and patient P of the verb V are both arbitrarily complex noun phrase trees. (Actually, the network could handle arbitrarily complex V's as well.) Aux is a marker for passive (e.g., is in is feared).
The network was presented with an input tree of either type, represented as an activation vector using the fully recursive tensor product representation developed in the preceding section. The seven non-zero binary vectors of length three coded seven atoms; the role vectors used were constructed by the technique described above. The desired output was the same tensorial representation of the tree representing V(A,P). The filler vectors for the verb and for the constituent words of the two noun phrases should be unbound from their roles in the input tree and then bound to the appropriate roles in the output tree. This transformation was performed, for an active sentence, by the operation cons(cadr(s), cons(car(s), cddr(s))) on the input tree s, and for a passive sentence, by cons(cdadr(s), cons(cdddr(s), car(s))). These operations were implemented in the network as two weight matrices, W_a and W_p,⁵ connecting the input units to the output units as shown in Figure 1. In addition, the network had a circuit for determining whether the input sentence was active or passive. In this example, it simply computed, by a weight matrix, the caddr of the input tree (where a passive sentence should have an Aux), and if it was the marker Aux, gated (with sigma-pi connections) W_p, and otherwise gated W_a. ⁴Note that in the case when the {r0, r1} are orthonormal, and therefore u0 = r0, W_car = W_cons0^T; similarly, W_cdr = W_cons1^T. ⁵The two weight matrices were constructed from the four basic matrices as W_a = W_cons0 W_car W_cdr + W_cons1 (W_cons0 W_car + W_cons1 W_cdr W_cdr) and W_p = W_cons0 W_cdr W_car W_cdr + W_cons1 (W_cons0 W_cdr W_cdr W_cdr + W_cons1 W_car). Figure 1: Recursive tensor product network processing a passive sentence. Input = cons(cons(A,B), cons(cons(Aux,V), cons(by,C))); Output = cons(V, cons(C, cons(A,B))).
Given this setting, the network was able to process arbitrary input sentences of either type properly, up to a certain depth (4 in this example) limited by the size of the network, and generated correct case role assignments. Figure 1 shows the network processing a passive sentence ((A.B).((Aux.V).(by.C))), as in All connectionists are feared by Minsky, and generating (V.(C.(A.B))) as output. 4 Discussion The formalism developed here for the recursive representation of trees generates quite different representations depending on the choice of the two fundamental role vectors r0 and r1 and the vectors for representing the atoms. At one extreme is the trivial fully local representation in which one connectionist unit is dedicated to each possible atom in each possible position: this is the special case in which r0 and r1 are chosen to be the canonical basis vectors (1 0) and (0 1), and the vectors representing the n atoms are also chosen to be the canonical basis vectors of n-space. The example of the previous section illustrated the case of (a) linearly dependent vectors for atoms and (b) orthonormal vectors for the roles that were "distributed" in that both elements of both vectors were non-zero. Property (a) permits the representation of many more than n atoms with n-dimensional vectors, and could be used to enrich the usual notions of symbolic computation by letting "similar atoms" be represented by vectors that are closer to each other than are "dissimilar atoms." Property (b) contributes no savings in units over the purely local case, amounting to a literal rotation in role space. But it does allow us to demonstrate that fully distributed representations are as capable as fully local ones at supporting massively parallel structure processing.
This point has been denied (often rather loudly) by advocates of local representations and by such critics as (Fodor & Pylyshyn 1988) and (Fodor & McLaughlin 1990), who have claimed that only connectionist implementations that preserve the concatenative structure of language-like representations of symbolic structures could be capable of true structure-sensitive processing. The case illustrated in our example is distributed in the sense that all units corresponding to depth d in the tree are involved in the representation of all the atoms at that depth. But different depths are kept separate in the formalism and in the network. We can go further by allowing the role vectors to be linearly dependent, sacrificing full accuracy and generality in structure processing for representation of greater depth in fewer units. This case is the subject of current research, but space limitations have prevented us from describing our preliminary results here. Returning to Harmonic Grammar, the next question is, having developed a fully recursive tensor product representation for lower-level representation of embedded structures such as those ubiquitous in syntax, what are the implications for well-formedness as measured by the harmony function? A first approximation to the natural language case is captured by context-free grammars, in which the well-formedness of a subtree is independent of its level of embedding. It turns out that such depth-independent well-formedness is captured by a simple equation governing the harmony function (or weight matrix). At the higher level where grammatical "rules" of Harmonic Grammar reside, this has the consequence that the numerical constant appearing in each soft constraint that constitutes a "rule" applies at all levels of embedding. This greatly constrains the parameters in the grammar. References [1] J. A. Fodor and B. P. McLaughlin. Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work.
Cognition, 35:183-204, 1990. [2] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3-71, 1988. [3] G. Legendre, Y. Miyata, and P. Smolensky. Harmonic grammar - a formal multi-level connectionist theory of linguistic well-formedness: Theoretical foundations. In Proceedings of the Twelfth Meeting of the Cognitive Science Society, 1990a. [4] G. Legendre, Y. Miyata, and P. Smolensky. Harmonic grammar - a formal multi-level connectionist theory of linguistic well-formedness: An application. In Proceedings of the Twelfth Meeting of the Cognitive Science Society, 1990b. [5] P. Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist networks. Artificial Intelligence, 46:159-216, 1990.
Dynamics of Learning in Recurrent Feature-Discovery Networks Todd K. Leen Department of Computer Science and Engineering Oregon Graduate Institute of Science & Technology Beaverton, OR 97006-1999 Abstract The self-organization of recurrent feature-discovery networks is studied from the perspective of dynamical systems. Bifurcation theory reveals parameter regimes in which multiple equilibria or limit cycles coexist with the equilibrium at which the networks perform principal component analysis. 1 Introduction Oja (1982) made the remarkable observation that a simple model neuron with a Hebbian adaptation rule develops into a filter for the first principal component of the input distribution. Several researchers have extended Oja's work, developing networks that perform a complete principal component analysis (PCA). Sanger (1989) proposed an algorithm that uses a single layer of weights with a set of cascaded feedback projections to force nodes to filter for the principal components. This architecture singles out a particular node for each principal component. Oja (1989) and Oja and Karhunen (1985) give a related algorithm that projects inputs onto an orthogonal basis spanning the principal subspace, but does not necessarily filter for the principal components themselves. In another class of models, nodes are forced to learn different statistical features by a set of lateral connections. Rubner and Schulten (1990) use cascaded lateral connections; the ith node receives signals from the input and all nodes j with j < i. The lateral connections are modified by an anti-Hebbian learning rule that tends to de-correlate the node responses. Like Sanger's scheme, this architecture singles out a particular node for each principal component. Kung and Diamantaras (1990) propose a different learning rule on the same network topology. Foldiak (1989) simulates a network with full lateral connectivity, but does not discuss convergence.
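Oja's single-neuron result is easy to check numerically. The sketch below is our own illustration (the correlation matrix, learning rate, and iteration count are arbitrary choices, not from the paper): it integrates the averaged rule ẇ = Rw − (w·Rw)w with Euler steps and verifies that w aligns with the top eigenvector e1 and settles at unit norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlation matrix of some zero-mean 2-D input distribution whose
# principal axis is (1, 1)/sqrt(2); an illustrative choice.
R = np.array([[2.0, 1.5],
              [1.5, 2.0]])

w = rng.normal(size=2)
for _ in range(2000):
    # Averaged Oja dynamics: dw/dt = Rw - (w . Rw) w
    w += 0.01 * (R @ w - (w @ R @ w) * w)

e1 = np.linalg.eigh(R)[1][:, -1]   # unit eigenvector of the top eigenvalue
print(abs(w @ e1))                 # ≈ 1: w has aligned with ±e1
print(np.linalg.norm(w))           # ≈ 1: the cubic term bounds the weights
```

The decay of the component along e2 relative to e1 goes as exp(−(λ1 − λ2)t), so the closer the two leading eigenvalues, the slower the filter forms; this is the kind of rate question the bifurcation analysis below makes precise for the multi-node models.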
The goal of this paper is to help form a more complete picture of feature-discovery models that use lateral signal flow. We discuss two models with particular emphasis on their learning dynamics. The models incorporate Hebbian and anti-Hebbian adaptation, and recurrent lateral connections. We give stability analyses and derive bifurcation diagrams for the models. Stability analysis gives a lower bound on the rate of adaptation of the lateral connections, below which the equilibrium corresponding to PCA is unstable. Bifurcation theory provides a description of the behavior near loss of stability. The bifurcation analyses reveal stable equilibria in which the weight vectors from the input are combinations of the eigenvectors of the input correlation. Limit cycles are also found. 2 The Single-Neuron Model In Oja's model the input, x ∈ R^N, is a random vector assumed to be drawn from a stationary probability distribution. The vector of synaptic weights is denoted w and the post-synaptic response is linear: y = x·w. The continuous-time, ensemble-averaged form of the learning rule is ẇ = <xy> − <y²>w = Rw − (w·Rw)w (1) where <...> denotes the average over the ensemble of inputs, and R = <xxᵀ> is the correlation matrix. The unit-magnitude eigenvectors of R are denoted e_i, i = 1...N, and are assumed to be ordered in decreasing magnitude of the associated eigenvalues λ1 > λ2 > ... > λN > 0. Oja shows that the weight vector asymptotically approaches ±e1. The variance of the node's response is thus maximized and the node acts as a filter for the first principal component of the input distribution. 3 Extending the Single Neuron Model To extend the model to a system of M ≤ N nodes we consider a set of linear neurons with weight vectors (called the forward weights) w1 ... wM connecting each to the N-dimensional input.
Without interactions between the nodes in the array, all M weight vectors would converge to ±e1. We consider two approaches to building interactions that force nodes to filter for different statistical features. In the first approach an internode potential is constructed. This formulation results in a non-local model. The model is made local by introducing lateral connections that naturally acquire anti-Hebbian adaptation. For reasons that will become clear, the resulting model is referred to as a minimal coupling scheme. In the second approach, we write equations of motion of the forward weights based directly on (1). The evolution of the lateral connection strengths will follow a simple anti-Hebbian rule. 3.1 Minimal Coupling The response of the ith node in the array is taken to be linear in the input: y_i = w_i · x. (2) The adaptation of the forward weights is derived from the potential U = −(1/2) Σ_i <y_i²> + (C/2) Σ_{i,k; i≠k} <y_i y_k>² = −(1/2) Σ_{j=1}^M (w_j·Rw_j) + (C/2) Σ_{j,k; j≠k} (w_j·Rw_k)², (3) where C is a coupling constant. The first term of U generates the Hebb law, while the second term penalizes correlated node activity (Yuille et al. 1989). The equations of motion are constructed to perform gradient descent on U, with a term added to bound the weight vectors: ẇ_i = −∇_{w_i} U − <y_i²> w_i = <x y_i> − C Σ_{j≠i} <y_i y_j> <x y_j> − <y_i²> w_i = Rw_i − C Σ_{j≠i} (w_i·Rw_j) Rw_j − (w_i·Rw_i) w_i. (4) Note that w_i refers to the weight vector from the input to the ith node, not the ith component of the weight vector. Equation (4) is non-local, as it involves correlations, <y_i y_j>, between nodes. In order to provide a purely local adaptation, we introduce a symmetric matrix of lateral connections η_ij, i, j = 1, ..., M, with η_ii = 0. These evolve according to η̇_ij = −d (η_ij + C <y_i y_j>) = −d (η_ij + C w_i·Rw_j), where d is a rate constant. In the limit of fast adaptation (large d), η_ij → −C <y_i y_j>.
With this limiting behavior in mind, we replace (4) with ẇ_i = <x y_i> + Σ_{j≠i} η_ij <x y_j> − <y_i²> w_i (5) = Rw_i + Σ_{j≠i} η_ij Rw_j − (w_i·Rw_i) w_i. (6) Equations (5) and (6) specify the adaptation of the network. Notice that the response of the ith node is given by (2) and is thus independent of the signals carried on the lateral connections. In this sense the lateral signals affect node plasticity but not node response. This minimal coupling can also be derived as a low-order approximation to the model in §3.2 below. 3.1.1 Stability and Bifurcation By inspection, the weight dynamics given by (5) and (6) have an equilibrium at X0: w_i = ±e_i, η_ij = 0, i = 1, ..., M. (7) At this equilibrium the outputs are the first M principal components of the input vectors. In suitable coordinates the linear part of the equations of motion breaks into block-diagonal form, with any possible instabilities constrained to 3 × 3 sub-blocks. Details of the stability and bifurcation analysis are given in Leen (1991). The principal component subspace is always asymptotically stable. However, the equilibrium X0 is linearly stable if and only if d > d0 ≡ (λ_i − λ_j)² (λ_i + λ_j) / (λ_i² + λ_j²) (8) and C > C0 ≡ 1 / (λ_i + λ_j), 1 ≤ (i, j) ≤ M. (9) At C0 or d0 there is a qualitative change (a bifurcation) in the learning dynamics. If the condition on d is violated, then there is a Hopf bifurcation to oscillating weights. At the critical value C0 there is a bifurcation to multiple equilibria. The bifurcation normal form was found by Liapunov-Schmidt reduction (see e.g. Golubitsky and Schaeffer 1984) performed at the bifurcation point (X0, C0). To deal effectively with the large-dimensional phase space of the network, the calculations were performed on a symbolic algebra program. At the critical point (X0, C0) there is a supercritical pitchfork bifurcation. Two unstable equilibria appear near X0 for C > C0.
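The minimal-coupling dynamics can be illustrated directly. The following sketch is our own (the correlation matrix, C, d, Euler step, and initial condition are arbitrary choices picked to satisfy the stability conditions (8)-(9)): it integrates the forward-weight rule together with the lateral rule for a two-node network started near the PCA equilibrium (7), and checks that the forward weights settle onto the correlation eigenvectors while the lateral connection decays to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
R = np.diag([4.0, 1.0])          # input correlation: lambda1 = 4, lambda2 = 1
C, d, h = 1.0, 6.0, 0.01         # coupling, lateral rate, Euler step size

# Start near the PCA equilibrium (w_i = e_i, eta = 0), slightly perturbed.
W = np.eye(2) + 0.1 * rng.normal(size=(2, 2))
eta = 0.05                       # the single off-diagonal lateral weight

for _ in range(5000):
    rw = W @ R                   # row i is R w_i (R is symmetric)
    dW = np.empty_like(W)
    # dw_i/dt = R w_i + eta R w_j - (w_i . R w_i) w_i
    dW[0] = rw[0] + eta * rw[1] - (W[0] @ R @ W[0]) * W[0]
    dW[1] = rw[1] + eta * rw[0] - (W[1] @ R @ W[1]) * W[1]
    # deta/dt = -d (eta + C w_1 . R w_2)
    deta = -d * (eta + C * (W[0] @ R @ W[1]))
    W += h * dW
    eta += h * deta

print(np.abs(W).round(3))        # ≈ identity: forward weights -> ±e_i
print(round(eta, 5))             # ≈ 0: lateral weight vanishes at PCA
```

With these eigenvalues, (8)-(9) give d0 ≈ 2.65 and C0 = 0.2, so d = 6, C = 1 put the PCA equilibrium comfortably in its stable regime; shrinking d below d0 in this sketch is a direct way to watch the Hopf oscillations described above.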
At these equilibria the forward weights are mixtures of e_M and e_{M−1} and the lateral connection strengths are non-zero. Generically one expects a saddle-node bifurcation. However X0 is an equilibrium for all values of C, and the system has an inversion symmetry. These conditions preclude the saddle-node and transcritical bifurcations, and we are left with the pitchfork. The position of stable equilibria away from (X0, C0) can be found by examining terms of order five and higher in the bifurcation expansion. Alternatively we examine the bifurcation from the homogeneous solution, X_h, in which all weight vectors are proportional to e1. For a system of two nodes this equilibrium is asymptotically stable provided C < C_h. (10) If λ1 < 3λ2, then there is a supercritical pitchfork bifurcation at C_h. Two stable equilibria emerge from X_h for C > C_h. At these stable equilibria, the forward weight vectors are mixtures of the first two correlation eigenvectors and the lateral connection strengths are nonzero. The complete bifurcation diagram for a system of two nodes is shown in Fig. 1. The upper portion of the figure shows the bifurcation at (X0, C0). The horizontal line corresponds to the PCA equilibrium X0. This equilibrium is stable (heavy line) for
C > C0, and unstable (light line) for C < C0. The subsidiary, unstable equilibria that emerge from (X0, C0) lie on the light, parabolic branches of the top diagram. Calculations indicate that the form of this bifurcation is independent of the number of nodes, and of the input dimension. Of course the value of C0 increases with increasing number of nodes, cf. (9). The lower portion of Fig. 1 shows the bifurcation from (X_h, C_h) for a system of two nodes. The horizontal line corresponds to the homogeneous equilibrium X_h. This is stable for C < C_h and unstable for C > C_h. The stable equilibria consisting of mixtures of the correlation eigenvectors lie on the heavy parabolic branches of the diagram. For networks with more nodes, there are presumably further bifurcations along the supercritical stable branches emerging from (X_h, C_h); equilibria with qualitatively different eigenvector mixtures are observed in simulations. Each inset in the figure shows equilibrium forward weight vectors for both nodes in a two-node network. These configurations were generated by numerical integration of the equations of motion (5) and (6). The correlation matrix corresponds to an ensemble of noise vectors with short-range correlations between the components. Simulations of the corresponding discrete, pattern-by-pattern learning rule confirm the form of the weight vectors shown here. Figure 1: Bifurcation diagram for the minimal model. Figure 2: Regions in the (λ1, λ2) plane corresponding to supercritical (shaded) and subcritical (unshaded) Hopf bifurcation. 3.2 Full Coupling In a more conventional coupling scheme, the signals carried on the lateral connections affect the node activities directly. For linear node response, the vector of activities is given by y = (1 − η)⁻¹ w x, (11) where y ∈ R^M, η is the M × M matrix of lateral connection strengths, and w is an M × N matrix whose ith row is the forward weight vector to the ith node. The adaptation rule is ẇ = <y xᵀ> − Diag(<y yᵀ>) w, (12) η̇ = −D η − C <y yᵀ>, η_ii = 0, (13) where D and C are constants and Diag sets the off-diagonal elements of its argument equal to zero. This system also has the PCA equilibrium X0. This is linearly stable if D > 0 (14) and C > C0 D. (15) Equation (14) tells us that the PCA equilibrium is structurally unstable without the Dη term in (13). Without this term, the model reduces to that given by Foldiak (1989). That the latter generally does not converge to the PCA equilibrium is consistent with the condition in (14).
If, on the other hand, the condition on C is violated, then the network undergoes a Hopf bifurcation leading to oscillations. Depending on the eigenvalue spectrum of the input correlation, this bifurcation may be subcritical (with stable limit cycles near X0 for C < C0), or supercritical (with unstable limit cycles near X0 for C > C0). Figure 2 shows the corresponding regions in the (λ1, λ2) plane for a network of two nodes with D = 1. Simulations show that even in the supercritical regime, stable limit cycles are found for C < C0, and for C > C0 sufficiently close to C0. This suggests that the complete bifurcation diagram in the supercritical regime is shaped like the bottom of a wine bottle, with only the indentation shown in Figure 2. Under the approximation (1 − η)⁻¹ ≈ 1 + η, the supercritical regime is significantly narrowed. 4 Discussion The primary goal of this study has been to give a theoretical description of learning in feature-discovery models, in particular models that use lateral interactions to ensure that nodes tune to different statistical features. The models presented here have several different limit sets (equilibria and cycles) whose stability and location in the weight space depend on the relative learning rates in the network, and on the eigenvalue spectrum of the input correlation. We have applied tools from bifurcation theory to qualitatively describe the location and determine the stability of these different limiting solutions. This theoretical approach provides a unifying framework within which similar algorithms can be studied. Both models have equilibria at which the network performs PCA. In addition, the minimal model has stable equilibria for which the forward weight vectors are mixtures of the correlation eigenvectors. Both models have regimes in which the weight vectors oscillate. The model given by Rubner et al. (1990) also loses stability through Hopf bifurcation for small values of the lateral learning rate.
The minimal values of C in (9) and (15) for the stability of the PCA equilibrium can become quite large for small correlation eigenvalues. These stringent conditions can be ameliorated in both models by the replacement d η_ij → (<y_i²> + <y_j²>) η_ij. However in the minimal model, this leads to degenerate bifurcations which have not been thoroughly examined. Finally, it remains to be seen whether the techniques employed here extend to similar systems with non-linear node activation (e.g. Carlson 1991) or to the problem of locating multiple minima in cost functions for supervised learning models. Acknowledgments This work was supported by the Office of Naval Research under contract N00014-90-1349 and by DARPA grant MDA 972-88-J-1004 to the Department of Computer Science and Engineering. The author thanks Bill Baird for stimulating e-mail discussion. References Carlson, A. (1991) Anti-Hebbian learning in a non-linear neural network. Biol. Cybern., 64:171-176. Foldiak, P. (1989) Adaptive network for optimal linear feature extraction. In Proceedings of the IJCNN, pages I 401-405. Golubitsky, Martin and Schaeffer, David (1984) Singularities and Groups in Bifurcation Theory, Vol. I. Springer-Verlag, New York. Kung, S. and Diamantaras, K. (1990) A neural network learning algorithm for adaptive principal component extraction (APEX). In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 861-864. Leen, T. K. (1991) Dynamics of learning in linear feature-discovery networks. Network: Computation in Neural Systems, to appear. Oja, E. (1982) A simplified neuron model as a principal component analyzer. J. Math. Biology, 15:267-273. Oja, E. (1989) Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1:61-68. Oja, E. and Karhunen, J. (1985) On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. J. of Math. Anal. and Appl., 106:69-84.
Rubner, J. and Schulten, K. (1990) Development of feature detectors by self-organization: A network model. Biol. Cybern., 62:193-199. Sanger, T. (1989) An optimality principle for unsupervised learning. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 1. Morgan Kaufmann. Yuille, A. L., Kammen, D. M. and Cohen, D. S. (1989) Quadrature and the development of orientation selective cortical cells by Hebb rules. Biol. Cybern., 61:183-194.
Navigating through Temporal Difference Peter Dayan Centre for Cognitive Science & Department of Physics University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW dayan@cns.ed.ac.uk Abstract Barto, Sutton and Watkins [2] introduced a grid task as a didactic example of temporal difference planning and asynchronous dynamical programming. This paper considers the effects of changing the coding of the input stimulus, and demonstrates that the self-supervised learning of a particular form of hidden unit representation improves performance. 1 INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an agent moving around a finite grid such as the one in figure 1 (the agent is incapable of crossing the barrier), trying to reach a goal whose position it does not know. If it can predict how far away from the goal it is at the current step, and how far away from the goal it is at the next step, after making a move, then it can decide whether or not that move was helpful or harmful. If, in addition, it can record this fact, then it can learn how to navigate to the goal. This generation of actions from predictions is closely related to the mechanism of dynamical programming. TD is used to learn the predictions in the first place. Consider the agent moving around randomly on the grid, receiving a negative reinforcement of -1 for every move it makes apart from moves which take it onto the goal. In this case, if it can estimate, from every location it visits, how much reinforcement (discounted by how soon it arrives) it will get before it next reaches the goal, it will be predicting how far away it is, based on the random method of selecting actions. TD's mechanism of learning is to force the predictions to be consistent; the prediction from location a should be -1 more than the average of the predictions from the locations that can be reached in one step (hence the extra -1 reinforcement) from a.
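The consistency condition just described is exactly the TD(0) update V(a) ← V(a) + α[r + γV(a') − V(a)]. The sketch below is our own minimal reconstruction of the prediction task (without the barrier or cues; the grid size, α, γ, and episode count are arbitrary choices): it trains predictions under a random policy and shows that states nearer the goal predict less negative return.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                              # N x N grid
goal = (N - 1, N - 1)
V = np.zeros((N, N))               # predicted future reinforcement
alpha, gamma = 0.1, 1.0            # learning rate, discount

for _ in range(3000):
    s = (0, 0)
    while s != goal:
        # Random policy: one of four moves; walking off the edge stays put.
        dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][rng.integers(4)]
        s2 = (min(max(s[0] + dx, 0), N - 1),
              min(max(s[1] + dy, 0), N - 1))
        # -1 per move, except the move that lands on the goal.
        r = 0.0 if s2 == goal else -1.0
        # TD(0): push V(s) toward r + gamma * V(s2).
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2

print(V[0, 0], V[N - 1, N - 2])    # far corner is more negative than
                                   # a state adjacent to the goal
```

Because each non-goal move costs -1 and the policy is uniform, the learned V is monotonically related to the expected number of random-walk steps to the goal, which is what makes it usable as a critic of individual moves.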
If the agent initially selects each action with the same probability, then the estimate of future reinforcement from a will be monotonically related to how many steps a is away from the goal. This makes the predictions useful for criticising actions as above. In practice, the agent will modify its actions according to this criticism at the same time as learning the predictions based on those actions.

Barto, Sutton and Watkins [2] develop this example, and show how the TD mechanism coupled with a punctate representation of the stimulus (referred to as R_BSW below) finds the optimal paths to the goal. R_BSW ignores the cues shown in figure 1, and devotes one input unit to each location on the grid, which fires if and only if the agent is at that place. TD methods can however work with more general codes. Section 2 considers alternative representations, including ones that are sensitive to the orientation of the agent as it moves through the grid, and section 3 looks at a restricted form of latent learning - what the agent can divine about its environment in the absence of reinforcement. Both techniques can improve the speed of learning.

2 ALTERNATE REPRESENTATIONS

Stimulus representations, the means by which the agent finds out from the environment where it is, can be classified along two dimensions: whether they are punctate or distributed, and whether they are directionally sensitive or in register with the world. Over most of the grid, a 'sensible' distributed representation, such as a coarse-coded one, would be expected to make learning faster, as information about the value and action functions could be shared across adjacent grid points. There are points of discontinuity in the actions, as in the region above the right hand arm of the barrier, but they are few.
In his PhD thesis [9], Watkins considers a rather similar problem to that in figure 1, and solves it using his variant of TD, Q-learning, based on a CMAC [1] coarse-coded representation of the space. Since his agent moves in a continuous bounded space, rather than being confined merely to discrete grid points, something of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes the agent move ten times more slowly in a closed section of the space. This has a similar effect to the barrier in inducing a discontinuity in the action space. Despite the CMACs forcing the system to share information across such discontinuities, they were able to learn the task quickly.

The other dimension over which representations may vary involves the extent to which they are sensitive to the direction in which the agent is facing. This is of interest if the agent must construe its location from the cues around the grid. In this case, rather than moving North, South, East or West, which are actions registered with the world, the agent should only move Ahead, Left or Right (Behind is disabled as an additional constraint), whose effects are also orientation dependent. This, together with the fact that the representation will be less compact (it having a larger input dimensionality), should make learning slower. Dynamical programming and its equivalents are notoriously subject to Bellman's curse of dimensionality, an engineering equivalent of exponential explosion in search.

Table 1 shows four possible representations classified along these two dimensions:

                               Coarseness
                        Punctate    Distributed
  Directionally
      Sensitive          R_4X          R_A
      Insensitive        R_BSW         R_CMAC

Table 1: Representations. R_BSW is the representation Barto, Sutton and Watkins used. R_4X is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent.
R_CMAC, the equivalent of Watkins' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. R_A is rather different from the other representations; it provides a test of a representation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how R_A works. Various identifiable cues, C_1 ... C_c (c = 7 in the figure), are scattered around the outside of the grid, and the agent has a fictitious 'retina' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the i-th one of which responds if the cue C_i is visible in that bucket. This representation is clearly directionally sensitive (if the agent is facing a different way, then so is its retina, and so no cue will be visible in the same bucket as it was before), and also distributed, since in general more than one cue will be visible from every location. Note that there is no restriction on the number of units that can fire in each bucket at any time - more than one will fire if more than one cue is visible there. Also, under the present system R_A will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that R_A is not biologically plausible.

Figure 3 shows the learning curves for the three representations simulated. Each point is generated by switching off the learning temporarily after a certain number of iterations, starting the agent from everywhere in the grid, and averaging how many steps it takes in getting to the goal over and above the minimum necessary. It is apparent that R_4X is substantially worse, but, surprisingly, that R_A is actually better than R_BSW.
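A minimal sketch of how an R_A-style code might be computed. The cue coordinates and bucket count here are hypothetical (the paper does not specify the geometry of figure 2 numerically):

```python
import math

# Hypothetical layout: 7 identifiable cues scattered around a 10x10 grid,
# and a 'retina' of 8 angular buckets that rotates with the agent.
CUES = [(-1.0, 3.0), (3.0, -1.0), (11.0, 3.0), (3.0, 11.0),
        (-1.0, -1.0), (11.0, 11.0), (5.0, -1.0)]
BUCKETS = 8

def retina_code(agent_pos, heading):
    """Binary code of length BUCKETS * len(CUES): unit (b, i) fires when cue
    C_i falls in angular bucket b of the agent-centred, rotating retina."""
    code = [0] * (BUCKETS * len(CUES))
    for i, (cx, cy) in enumerate(CUES):
        angle = math.atan2(cy - agent_pos[1], cx - agent_pos[0]) - heading
        frac = (angle % (2 * math.pi)) / (2 * math.pi)
        b = min(int(frac * BUCKETS), BUCKETS - 1)
        code[b * len(CUES) + i] = 1   # several units may fire in one bucket
    return code
```

Rotating the agent rotates the retina, so the same location yields a different code under a different heading, which is exactly the directional sensitivity described above.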
This implies that the added advantage of its distributed nature more than outweighs its disadvantages of having more components and being directionally sensitive.

One of the motivations behind studying alternate representations is the experimental findings on place cells in the hippocampi of rats (amongst other species). These are cells that fire only when the rat is at a certain location in its environment. Although their existence has led to many hypotheses about rat cognitive mapping (see [5] for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computationally intensive problem of navigation addressed, in this paper, by TD. R_A, being closely related to the input stimuli, is quite unlike a place cell code - the other representations all bear some similarities.

3 GOAL-FREE LEARNING

One of the problems with the TD system as described is that it is incapable of latent learning in the absence of reinforcement or a goal. If the goal is just taken away, but the -1 reinforcements are still applied at each step, then the values assigned to each location will tend to -∞. If both are removed, then although the agent will wander about its environment with random gay abandon, it will not pick up anything that could be used to speed subsequent learning. Latent learning experiments with rats in dry mazes prove fairly conclusively that rats running mazes in the absence of rewards and punishments learn almost as much as rats that are reinforced.

One way to solve this problem is suggested by Sutton's DYNA architecture [7]. Briefly, this constructs a map of place × action → next place, and takes steps in the fictitious world constructed from its map in-between taking steps in the real world, as a way of ironing out the computational 'bumps' (i.e. inconsistencies) in the value and action functions.
Instead, it is possible to avoid constructing a complete map by altering the representation of the environment used for learning the prediction function and optimal actions. The section on representations concluded that coarse-coded representations are generally better than punctate ones, since information can be shared between neighbouring points. However, not all neighbouring points are amenable to this sharing, because of discontinuities in the value and action functions. If there were a way of generating a coarse-coded representation (generally from a punctate one) that is sensitive to the structure of the task, rather than arbitrarily assigned by the environment, it should provide the base for faster learning still. In this case, neighbouring points should only be coded together if they are not separated by the barrier. The initial exploration would allow the agent to learn this much about the structure of the environment.

Consider a set of units whose job is to predict the future discounted sum of firings of the raw input lines. Using R_BSW during the initial stage of learning, when the actions are still random, if the agent is at location (3,3) of the grid, say, then the discounted prediction of how often it will be in (3,4) (i.e. the frequency with which the single unit representing (3,4) will fire) will be high, since this location is close. However, the prediction for (7,11) will be low, because it is very unlikely to get there quickly. Consider the effect of the barrier: locations on opposite sides of it, e.g. (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid, are far apart in the task. This means that the discounted prediction of how often the agent will be at (1,6), given that it starts at (2,6), will be proportionately lower. Overall, the prediction units should act like a coarse code, sensitive to the structure of the task.
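These prediction units can be trained with the same TD consistency rule, one prediction per raw input line. The sketch below uses a small hypothetical grid with a single barrier edge and purely random exploration; the sizes and constants are illustrative assumptions:

```python
import random

SIZE, GAMMA, ALPHA = 4, 0.8, 0.1
STATES = [(r, c) for r in range(SIZE) for c in range(SIZE)]
IDX = {s: i for i, s in enumerate(STATES)}
BARRIER = {((1, 1), (1, 2)), ((1, 2), (1, 1))}   # one impassable edge

# M[s][j]: discounted prediction of how often input line j will fire,
# starting from state s (the punctate unit for j fires iff the agent is at j).
M = [[0.0] * len(STATES) for _ in STATES]

def neighbours(s):
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        t = (min(max(s[0] + dr, 0), SIZE - 1),
             min(max(s[1] + dc, 0), SIZE - 1))
        if (s, t) not in BARRIER:
            out.append(t)
    return out

def explore(steps=20000):
    s = (0, 0)
    for _ in range(steps):
        t = random.choice(neighbours(s))
        for j in range(len(STATES)):   # one TD update per prediction unit
            fire = 1.0 if j == IDX[s] else 0.0
            M[IDX[s]][j] += ALPHA * (fire + GAMMA * M[IDX[t]][j]
                                     - M[IDX[s]][j])
        s = t

explore()
```

After exploration, a location predicts its blocked neighbour far less strongly than its open neighbours, even though both are equally close on the grid: the learned code reflects the structure of the task, not the raw metric.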
As required, this information about the environment is entirely independent of whether or not the agent is reinforced during its exploration. In fact, the resulting 'map' will be more accurate if it is not, as its exploration will be more random. The output of the prediction units is taken as an additional source of information for the value and action functions. Since their main aim is to create intelligently distributed representations from punctate ones, it is only appropriate to use these prediction units for R_BSW and R_4X.

Figure 4 compares average learning curves for R_BSW with and without these extra mapping units, and with and without 6000 steps of latent learning (LL) in the absence of any reinforcement. A significant improvement is apparent.

Figure 5 shows one set of predictions based on the R_BSW representation¹ after a few un-reinforced iterations. The predictions are clearly fairly well developed and smooth - a predictable exponentially decaying hump. The only deviations from this are at the barrier and along the edges, where the effects of impermeability and immobility are apparent. Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by which time the agent reaches the goal almost optimally. The predictions degenerate from being roughly radially symmetric (bar the barrier) to being highly asymmetric. Once the agent has learnt how to get to the goal from some location, the path it will follow, and so the locations it will visit from there, is largely fixed. The asymptotic values of the predictions will therefore be 0 for units not on the path, and γ^r for those on the path, where r is the number of steps since the agent's start point and γ is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation, since it implies that the topological information present in the early stages of learning evaporates, and with it almost all the benefits of the prediction units.
4 DISCUSSION

Navigation comprises two problems: where the agent and the goals in its environment are, and how it can get to them. Having some form of cognitive map, as is suggested by the existence of place cells, addresses the first, but leaves open the second. For the case of one goal, the simple TD method described here is one solution.

TD planning methods are clearly robust to changes in the way the input stimulus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for R_A, which is sensitive to the orientation of the agent. All these results require each location to have a unique representation - Mozer and Bachrach [4] and Chrisley [3], and references therein, look at how ambiguities can be resolved using information on the sequence of states the agent traverses. Since these TD planning methods are totally general, just like dynamical programming, they are unlikely to scale well. Some evidence for this comes from the relatively poor performance of R_4X, with its quadrupled input dimension. This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation.

Acknowledgements

I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Connectionist Group at Amherst, and a spatial learning workshop at King's College Cambridge for their helpful comments. This work was funded by SERC.

¹ Note that these are normalised to a maximum value of 10, for graphical convenience.

References

[1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227.

[2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential Decision Making.
Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA.

[3] Chrisley, RL (1990). Cognitive map construction and use: A parallel distributed approach. In DS Touretzky, J Elman, TJ Sejnowski & GE Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann.

[4] Mozer, MC & Bachrach, J (1990). Discovering the structure of a reactive environment by exploration. In D Touretzky, editor, Advances in Neural Information Processing Systems, 2, pp 439-446. San Mateo, CA: Morgan Kaufmann.

[5] O'Keefe, J & Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press.

[6] Sutton, RS (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp 9-44.

[7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann.

[8] Sutton, RS & Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel & JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press.

[9] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis, University of Cambridge, England.

Fig 1: The grid task (goal, barrier, and cues C1-C7 around the grid)
Fig 2: The 'retina' for R_A (angular buckets, one unit per cue per bucket)
Fig 3: Different representations (average extra steps to goal vs learning iterations, for R_4X, R_BSW and R_A)
Fig 4: Mapping with R_BSW (no map; map, no LL; map, LL)
Fig 5: Initial predictions from (5,6)
Fig 6: Predictions after 2000 iterations
1990
On The Circuit Complexity of Neural Networks

V. P. Roychowdhury, Information Systems Laboratory, Stanford University, Stanford, CA 94305
A. Orlitsky, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974
K. Y. Siu, Information Systems Laboratory, Stanford University, Stanford, CA 94305
T. Kailath, Information Systems Laboratory, Stanford University, Stanford, CA 94305

Abstract

We introduce a geometric approach for investigating the power of threshold circuits. Viewing n-variable boolean functions as vectors in R^(2^n), we invoke tools from linear algebra and linear programming to derive new results on the realizability of boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal input functions required to implement a threshold function; (3) a necessary condition for an arbitrary set of input functions to implement a threshold function; (4) a lower bound on the error introduced in approximating boolean functions using sparse polynomials; (5) a limit on the effectiveness of the only known lower-bound technique (based on computing correlations among boolean functions) for the depth of threshold circuits implementing boolean functions; and (6) a constructive proof that every boolean function f of n input variables is a threshold function of polynomially many input functions, none of which is significantly correlated with f. Some of these results lead to generalizations of key results concerning threshold circuit complexity, particularly those that are based on the so-called spectral or harmonic analysis approach. Moreover, our geometric approach yields simple proofs, based on elementary results from linear algebra, for many of these earlier results.
1 Introduction

An S-input threshold gate is characterized by S real weights w_1, ..., w_S. It takes S inputs x_1, ..., x_S, each either +1 or -1, and outputs +1 if the linear combination Σ_{i=1}^S w_i x_i is positive and -1 if the linear combination is negative. Threshold gates were recently used to implement several functions of practical interest (including Parity, Addition, Multiplication, Division, and Comparison) with fewer gates and reduced depth than conventional circuits using AND, OR, and NOT gates [12, 4, 11]. This success has led to a considerable amount of research on the power of threshold circuits [1, 10, 9, 11, 3, 13]. However, even simple questions remain unanswered. It is not known, for example, whether there is a function that can be computed by a depth-3 threshold circuit with polynomially many gates but cannot be computed by any depth-2 circuit with polynomially many threshold gates.

Geometric approaches have proven useful for analyzing threshold gates. An S-input threshold gate corresponds to a hyperplane in R^S. This has been used, for example, to count the number of boolean functions computable by a single threshold gate [6], and also to determine functions that cannot be implemented by a single threshold gate. However, threshold circuits of depth two or more do not carry a simple geometric interpretation in R^S. The inputs to gates in the second level are themselves threshold functions, hence the linear combination computed at the second level is a non-linear function of the inputs. Lacking a geometric view, researchers [5, 3] have used indirect approaches, applying harmonic-analysis techniques to analyze threshold gates. These techniques, apart from their complexity, restricted the input functions of the gates to be of very special types: input variables or parities of the input variables, thus not applying even to depth-two circuits.
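As a concrete illustration, a single threshold gate as defined above is just the sign of a weighted sum. A minimal sketch (the undefined-at-zero case is raised as an error here):

```python
def threshold_gate(weights, inputs):
    """An S-input threshold gate over +/-1 inputs: output +1 if the linear
    combination sum_i w_i x_i is positive, -1 if negative (undefined at 0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    if total == 0:
        raise ValueError("gate output is undefined when the sum is exactly 0")
    return 1 if total > 0 else -1
```

With weights (1, 1, 1), for instance, this computes the 3-input Majority function.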
In this paper, we describe a simple geometric relation between the output function of a threshold gate and its set of input functions. This applies to arbitrary sets of input functions. Using this relation, we can prove the following results: (1) upper bounds on (a) the number of threshold functions of any set of input functions, (b) the number of spurious memories in a Hopfield network, and (c) the number of functions implementable by threshold circuits of depth d; (2) a lower bound on the number of orthogonal input functions required to implement a threshold function; (3) a quantifiable necessary condition for a set of functions to implement a threshold function; (4) a lower bound on the error in approximating boolean functions using sparse polynomials; (5) a limit on the effectiveness of the correlation method used in [7] to prove that a certain function cannot be implemented by depth-two circuits with polynomially many gates and polynomially bounded weights; (6) a proof that every function f is a threshold function of polynomially many input functions, none of which is significantly correlated with f.

Special cases of some of these results, where the input functions to a threshold gate are restricted to the input variables, or parities of the input variables, were proven in [5, 3] using harmonic-analysis tools. Our technique shows that these tools are not needed, providing simpler proofs for more general results. Due to space limitations, we cannot present the full details of our results. Instead, we shall introduce the basic definitions followed by a technical summary of the results; the emphasis will be on pointing out the motivation and relating our results with those in the literature. The proofs and other technical details will appear in a complete journal paper.

2 Definitions and Background

An n-variable boolean function is a mapping f : {-1, 1}^n → {-1, 1}. We view f as a (column) vector in R^(2^n).
Each of f's 2^n components is either -1 or +1 and represents f(x) for a distinct value assignment x of the n boolean variables. We view the S weights of an S-input threshold gate as a weight vector w = (w_1, ..., w_S)^T in R^S. Let the functions f_1, ..., f_S be the inputs of a threshold gate w. The gate computes a function f (or f is the output of the gate) if the following vector equation holds:

    f = sgn( Σ_{i=1}^S f_i w_i ),          (1)

where

    sgn(x) = +1 if x > 0,  -1 if x < 0,  undefined if x = 0.

Note that this definition requires that all components of Σ_{i=1}^S f_i w_i be nonzero. It is convenient to write Equation (1) in matrix form: f = sgn(Yw), where the input matrix Y = [f_1 ··· f_S] is a 2^n by S matrix whose columns are the input functions. The function f is a threshold function of f_1, ..., f_S if there exists a threshold gate (i.e., w) with inputs f_1, ..., f_S that computes f.

These definitions form the basis of our approach. Each function, being a ±1 vector in R^(2^n), determines an orthant in R^(2^n). A function f is the output of a threshold gate whose input functions are f_1, ..., f_S if and only if the linear combination Σ_{i=1}^S f_i w_i defined by the gate lies inside the orthant determined by f.

Definition 1 The correlation of two n-variable boolean functions f_1 and f_2 is C_{f_1 f_2} = (f_1^T f_2)/2^n; the two functions are uncorrelated or orthogonal if C_{f_1 f_2} = 0.

Note that C_{f_1 f_2} = 1 - 2 d_H(f_1, f_2)/2^n, where d_H(f_1, f_2) is the Hamming distance between f_1 and f_2; thus, the correlation can be interpreted as a measure of how 'close' the two functions are. Fix the input functions f_1, ..., f_S to a threshold gate. The correlation vector of a function f with the input functions is C_{fY} = (Y^T f)/2^n = (C_{f f_1}, C_{f f_2}, ..., C_{f f_S})^T. Next, we define C̄ as the maximum in magnitude among the correlation coefficients, i.e., C̄ = max{ |C_{f f_i}| : 1 ≤ i ≤ S }.
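The vector view of boolean functions and Definition 1 can be made concrete as follows (a sketch; the enumeration order of the 2^n assignments is a free choice):

```python
from itertools import product

def truth_vector(f, n):
    """View an n-variable +/-1 boolean function as a vector in R^(2^n)."""
    return [f(x) for x in product((-1, 1), repeat=n)]

def correlation(f1, f2):
    """C_{f1 f2} = (f1^T f2) / 2^n; zero means orthogonal (Definition 1)."""
    return sum(a * b for a, b in zip(f1, f2)) / len(f1)

n = 3
x1 = truth_vector(lambda x: x[0], n)                    # the variable x_1
parity = truth_vector(lambda x: x[0] * x[1] * x[2], n)  # 3-bit parity
```

The variable x_1 and the full parity are orthogonal, and the correlation agrees with the Hamming-distance identity C = 1 - 2 d_H / 2^n.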
3 Summary of Results

The correlation between two n-variable functions is a multiple of 2^{-(n-1)}, bounded between -1 and 1, hence can assume 2^n + 1 values. The correlation vector C_{fY} = (C_{f f_1}, ..., C_{f f_S})^T can therefore assume at most (2^n + 1)^S different values. There are 2^(2^n) boolean functions of n boolean variables, hence many share the same correlation vector. However, the next theorem says that a threshold function of f_1, ..., f_S does not share its correlation vector with any other function.

Uniqueness Theorem Let f be a threshold function of f_1, ..., f_S. Then, for all g ≠ f, C_{gY} ≠ C_{fY}.

Corollary 1 There are at most (2^n + 1)^S threshold functions of any set of S input functions.

The special case of the Uniqueness Theorem where the functions f_1, ..., f_S are the input variables had been proven in [5, 9]. The proof used harmonic-analysis tools such as Parseval's theorem. It relied on the mutual orthogonality of the input functions (namely, C_{x_i x_j} = 0 for all i ≠ j). Another special case, where the input functions are parities of the input variables, was proven in [3]. The same proof was used; see e.g. pages 419-422 of [9]. Our proof shows that the harmonic-analysis tools and assumptions are not needed, thereby (1) significantly simplifying the proof, and (2) showing that the functions f_1, ..., f_S need not be orthogonal: the Uniqueness Theorem holds for all collections of functions. The more general result of the Uniqueness Theorem can be applied to obtain the following two new counting results.

Corollary 2 The number of stable states in a Hopfield network with n elements which is programmed by the outer product rule to store s given vectors is ≤ 2^{s log(n+1)}.

Corollary 3 Let F_n(S(n), d) be the number of n-variable boolean functions computed by depth-d threshold circuits with fan-in bounded by S(n) (we assume S(n) ≥ n).
Then, for all d, n ≥ 1,

It follows easily from our geometric framework that if C_{fY} = 0 then f is not a threshold function of f_1, ..., f_S: every linear combination of f_1, ..., f_S is orthogonal to f, hence cannot intersect the orthant determined by f. Next, we consider the case where C_{fY} ≠ 0. Define the generalized spectrum to be the S-dimensional vector β = (β_1, ..., β_S)^T = (Y^T Y)^{-1} Y^T f (the reason for the definition and the name will be clarified soon).

Spectral-Bound Theorem If f is a linear threshold function of f_1, ..., f_S, then

    Σ_{i=1}^S |β_i| ≥ 1,  hence  S ≥ 1/β̄,  where β̄ = max{ |β_i| : 1 ≤ i ≤ S }.

The Spectral-Bound Theorem provides a way of lower bounding the number S of input functions. Specifically, if β_i is exponentially small (in n) for all i ∈ {1, ..., S}, then S must be exponentially large. In the special case where the input functions are parities of the input variables, all input functions are orthogonal; hence Y^T Y = 2^n I_S and β = (1/2^n) Y^T f = C_{fY}. Note that every parity function p is a basis function of the Hadamard transform, hence C_{fp} is the spectral coefficient corresponding to p in the transform (see [8, 2] for more details on spectral representation of boolean functions). Therefore, the generalized spectrum in this case is the real spectrum of f. In that case, the Spectral-Bound Theorem implies that S ≥ 1/max{ |C_{f f_i}| : 1 ≤ i ≤ S }. Therefore, the number of input functions needed is at least the reciprocal of the maximum magnitude among the spectral coefficients (i.e., C̄). This special case was proved in [3]. Again, their proofs used harmonic-analysis tools and assumptions that we prove are unnecessary, thereby generalizing them to arbitrary input functions. Moreover, our geometric approach considerably simplifies the exposition by presenting simple proofs based on elementary results from linear algebra.

In general, we can show that if the input functions f_i are orthogonal (i.e.
, C_{f_i f_j} = 0 for i ≠ j) or asymptotically orthogonal (i.e., lim_{n→∞} C_{f_i f_j} = 0), then the number of input functions S ≥ 1/C̄, where C̄ is the largest (in magnitude) correlation of the output function with any of its input functions.

We can also use the generalized spectrum to derive a lower bound on the error incurred in approximating a boolean function, f, using a set of basis functions. The lower bound can then be applied to show that the Majority function cannot be closely approximated by a sparse polynomial. In particular, it can be shown that if a polynomial of the input variables with only polynomially many (in n) monomials is used to approximate an n-variable Majority function, then the approximation error is Ω(1/(log log n)^{3/2}). This provides a direct spectral approach for proving lower bounds on the approximation error.

The method of proving lower bounds on S in terms of the correlation coefficients C_{f f_i} of f with the possible input functions can be termed the method of correlations. Hajnal et al. [7] used a different aspect of this method¹ to prove a lower bound on the depth of a threshold circuit that computes the Inner-product-mod-2 function.

¹ They did not exactly use the correlation approach introduced in this paper, rather an equivalent framework.

Our techniques can be applied to investigate the method of correlations in more detail and prove some limits to its effectiveness. We can show that the number, S, of input functions need not be inversely proportional to the largest correlation coefficient C̄. In particular, we give two constructive procedures showing that any function f is a threshold function of O(n) input functions, each having an exponentially small correlation with f: |C_{f f_i}| ≤ 2^{-(n-1)}.

Construction 1 Every boolean function f of n variables (for n even) can be expressed as a threshold function of 3n boolean functions f_1, f_2, ..., f_{3n} such that (1) C_{f f_i} = 0 for all 1 ≤ i ≤ 3n - 1, and (2) C_{f f_{3n}} = 2^{-(n-1)}.

Construction 2 Every boolean function f of n variables can be expressed as a threshold function of 2n boolean functions f_1, f_2, ..., f_{2n} such that (1) C_{f f_i} = 0 for all 1 ≤ i ≤ 2n - 2, and (2) C_{f f_{2n-1}} = C_{f f_{2n}} = 2^{-(n-1)}.

The results of the above constructions are surprising. For example, in Construction 1, the output function of the threshold gate is uncorrelated with all but one of the input functions, and the only non-zero correlation is the smallest possible (= 2^{-(n-1)}). Note that f is not a threshold function of a set of input functions, each of which is orthogonal to f. The above results thus provide a comprehensive understanding of the so-called method of correlations. In particular: (1) If the input functions are mutually orthogonal (or asymptotically orthogonal), then the method of correlations is effective even if exponential weights are allowed; i.e., if a function has exponentially small correlation with every function from a pool of possible input functions, then one would require exponentially many inputs to implement the given function using a threshold gate. (2) If the input functions are not mutually orthogonal, then the method of correlations need not be effective; i.e., one can construct examples where the output function has exponentially small correlation with every input function, and yet it can be implemented as a threshold function of polynomially many input functions. Furthermore, the constructive procedures can also be considered as constituting a preliminary answer to the following question: Given an n-variable boolean function f, are there efficient procedures for expressing it as threshold functions of polynomially many (in n) input functions?
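As a numerical sanity check of the Spectral-Bound Theorem and of the orthogonal special case β = C_{fY}, consider the 3-input Majority function, which is a threshold function of its input variables (the example itself is our illustration, not taken from the paper):

```python
import numpy as np
from itertools import product

n = 3
X = np.array(list(product((-1, 1), repeat=n)), dtype=float)  # all 2^n inputs
Y = X                                # input matrix: columns are x_1, x_2, x_3
f = np.sign(X.sum(axis=1))           # 3-input Majority, a threshold function

beta = np.linalg.solve(Y.T @ Y, Y.T @ f)   # generalized spectrum (Y^T Y)^-1 Y^T f
C = (Y.T @ f) / 2 ** n                     # correlation vector C_{fY}
```

Here the columns of Y are mutually orthogonal, so Y^T Y = 2^n I_S and β coincides with the correlation vector; each component is 1/2, so Σ|β_i| = 3/2 ≥ 1, as the theorem requires.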
A procedure for so decomposing a given function f will be referred to as a threshold-decomposition procedure; moreover, a decomposition procedure can be considered efficient if the input functions have simpler threshold implementations than f (i.e., are easier to implement or require less depth/size). Constructions 1 and 2 present two such threshold-decomposition procedures. At present, the efficiency of these constructions is not clear and further work is necessary. We hope, however, that the general methodology introduced here may lead to subsequent work resulting in more efficient threshold-decomposition procedures.

4 Concluding Remarks

We have outlined a new geometric approach for investigating the properties of threshold circuits. In the process, we have developed a unified framework where many of the previous results can be derived simply as special cases, and without introducing too many seemingly difficult concepts. Moreover, we have derived several new results that quantify the input/output relationships of threshold gates, derive lower bounds on the number of input functions required to implement a given function using a threshold gate, and also analyze the limitations of a well-known lower bound technique for threshold circuits.

Acknowledgements

This work was supported in part by the Joint Services Program at Stanford University (US Army, US Navy, US Air Force) under Contract DAAL03-88-C-0011, the SDIO/IST, managed by the Army Research Office under Contract DAAL03-90-G-0108, and the Department of the Navy, NASA Headquarters, Center for Aeronautics and Space Information Sciences under Grant NAGW-419-S6.

References

[1] E. Allender. A note on the power of threshold circuits. IEEE Symp. Found. Comp. Sci., 30, 1989.

[2] Y. Bradman, A. Orlitsky, and J. Hennessy. A spectral lower bound technique for the size of decision trees and two-level AND/OR circuits. IEEE Trans. on Computers, 39, No.
2:282-287, February 1990. [3] J. Bruck. Harmonic Analysis of Polynomial Threshold Functions. SIAM Journal on Discrete Mathematics, May 1990. [4] A. K. Chandra, L. Stockmeyer, and U. Vishkin. Constant depth reducibility. SIAM J. Comput., 13:423-439, 1984. [5] C. K. Chow. On the Characterization of Threshold Functions. Proc. Symp. on Switching Circuit Theory and Logical Design, pages 34-38, 1961. [6] T. M. Cover. Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition. IEEE Trans. on Electronic Computers, EC-14:326-334, 1965. [7] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. IEEE Symp. Found. Comp. Sci., 28:99-110, 1987. [8] R. J. Lechner. Harmonic analysis of switching functions. In A. Mukhopadhyay, editor, Recent Developments in Switching Theory. Academic Press, 1971. [9] P. M. Lewis and C. L. Coates. Threshold Logic. John Wiley & Sons, Inc., 1967. [10] I. Parberry and G. Schnitger. Parallel Computation with Threshold Functions. Journal of Computer and System Sciences, 36(3):278-302, 1988. [11] J. Reif. On Threshold Circuits and Polynomial Computation. In Structure in Complexity Theory Symp., pages 118-123, 1987. [12] K. Y. Siu and J. Bruck. On the Power of Threshold Circuits with Small Weights. To appear in SIAM J. Discrete Math. [13] K. Y. Siu, V. P. Roychowdhury, and T. Kailath. Computing with Almost Optimal Size Threshold Circuits. Submitted to JCSS, 1990. Part XIV Performance Comparisons
Closed-Form Inversion of Backpropagation Networks: Theory and Optimization Issues Michael L. Rossen HNC, Inc. 5501 Oberlin Drive San Diego, CA 92121 rossen@amos.ucsd.edu Abstract We describe a closed-form technique for mapping the output of a trained backpropagation network into input activity space. The mapping is an inverse mapping in the sense that, when the image of the mapping in input activity space is propagated forward through the normal network dynamics, it reproduces the output used to generate that image. When more than one such inverse mapping exists, our inverse mapping is special in that it has no projection onto the nullspace of the activation flow operator for the entire network. An important by-product of our calculation, when more than one inverse mapping exists, is an orthogonal basis set of a significant portion of the activation flow operator nullspace. This basis set can be used to obtain an alternate inverse mapping that is optimized for a particular real-world application. 1 Overview This paper describes a closed-form technique for mapping a particular output of a trained backpropagation network into input activity space. The mapping produced by our technique is an inverse mapping in the sense that, when the image in input space of the mapping of an output activity is propagated forward through the normal network dynamics, it reproduces the output used to generate it.¹ When multiple inverse mappings exist, our inverse mapping is unique in that it has no projection onto the nullspace of the activation flow operator for the entire network. An important by-product of our calculation is an orthogonal basis set of a significant portion of this nullspace. ¹ It is possible that no such inverse mappings exist. This point is addressed in section 4.
Any vector within this nullspace can be added to the image from the inverse mapping, producing a new point in input space that is still an inverse mapping image in the above sense. Using this nullspace, the inverse mapping can be optimized for a particular application by minimizing a cost function over the input elements, relevant to that application, to obtain the vector from the nullspace to add to the original inverse mapping image. For this reason, and because of the closed form we obtain for calculation of the network inverse mapping, our method compares favorably to previously proposed iterative methods of network inversion [Widrow & Stearns, 1985, Linden & Kinderman, 1989]. We now briefly summarize our method of closed-form inversion of a backpropagation network. 2 The Inverse Mapping Operator To outline the calculation of our inverse mapping operator, we start by considering a trained feed-forward backpropagation network with one hidden layer and bipolar sigmoidal activation functions. We calculate this inverse as a sequence of the inverses of the sub-operations constituting the dynamics of activation flow. If we use the subscripts 'I', 'H', 'O' to indicate the input, hidden and output modules of model neurons, respectively, the activation flow from input through hidden module to output module is:

f_(O) = A ⊙ f_(I) = σ ⊙ W_(O,H) ⊙ σ ⊙ W_(H,I) ⊙ f_(I),    (1)

where
σ : bipolar sigmoid function;
W_(dest,source) : matrix operator of connection weights, indexed by 'source' and 'dest' (destination) modules;
f_(k) : vector of activities for module 'k'.

A is defined here as the activation flow operator for the entire network. The symbol ⊙ separates operators sequentially applied to the argument. Since the sub-operators constituting A are applied sequentially, the inverse that we calculate, A⁺, is equal to a composition of inverses of the individual sub-operators, with the order of the composition reversed from the order in activation flow.
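This reversed composition is straightforward to exercise numerically. An illustrative NumPy sketch (the layer sizes, small weight scales, and the use of tanh as the bipolar sigmoid are our own choices for the demo, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W_HI = 0.1 * rng.normal(size=(4, 6))    # input(6) -> hidden(4) weights
W_OH = 0.1 * rng.normal(size=(2, 4))    # hidden(4) -> output(2) weights

sigma, sigma_inv = np.tanh, np.arctanh  # bipolar sigmoid and its inverse

def forward(f_I):
    """Activation flow: f_O = sigma(W_OH sigma(W_HI f_I))."""
    return sigma(W_OH @ sigma(W_HI @ f_I))

def inverse(f_O):
    """Closed-form inverse: pseudo-inverses and inverse sigmoids, order reversed."""
    return np.linalg.pinv(W_HI) @ sigma_inv(np.linalg.pinv(W_OH) @ sigma_inv(f_O))

f_O = forward(0.5 * rng.normal(size=6))   # an output known to be reachable
f_I = inverse(f_O)                        # its least-norm preimage
print(np.allclose(forward(f_I), f_O))     # True: the preimage reproduces the output
```

The small weight scales keep every intermediate value inside the domain of arctanh; for arbitrarily chosen outputs this is exactly the existence question taken up later in the paper.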
The closed-form mapping of a specified output f_(O) to input space is then:

f_(I) = A⁺ ⊙ f_(O) = W⁺_(H,I) ⊙ σ⁻¹ ⊙ W⁺_(O,H) ⊙ σ⁻¹ ⊙ f_(O),    (2)

where
σ⁻¹ : inverse of the bipolar sigmoid;
W⁺_(dest,source) : pseudo-inverse of W_(dest,source).

Subject to the existence conditions discussed in section 4, f_(I) is an inverse mapping of f_(O) in that it reproduces f_(O) when it is propagated forward through the network:

f_(O) = A ⊙ f_(I).    (3)

We use singular value decomposition (SVD), a well-known matrix analysis method (e.g., [Lancaster, 1985]), to calculate a particular matrix inverse, the pseudo-inverse W⁺_(j,i) (also known as the Moore-Penrose inverse) of each connection weight matrix block. In the case of W_(H,I), for example, SVD yields two unitary matrices, S_(H,I) and V_(H,I), and a rectangular matrix D_(H,I), all zero except for the singular values on its diagonal, such that

W_(H,I) = S_(H,I) D_(H,I) V'_(H,I),    (4)
W⁺_(H,I) = V_(H,I) D⁺_(H,I) S'_(H,I),    (5)

where
S'_(H,I), V'_(H,I) : transposes of S_(H,I), V_(H,I), respectively;
D⁺_(H,I) : pseudo-inverse of D_(H,I), which is simply its transpose with each non-zero singular value replaced by its inverse.

3 Uniqueness and Optimization Considerations The pseudo-inverse (calculated by SVD or other methods) is one of a class of solutions to the inverse of a matrix operator that may exist, called generalized inverses. For our purposes, each of these generalized inverses, if they exist, is an inverse in the useful sense that when substituted for W⁺_(j,i) in eq. (2), the resultant f_(I) will be an inverse mapping image as defined by eq. (3). When a matrix operator W does not have a nullspace, the pseudo-inverse is the only generalized inverse that exists. If W does have a nullspace, the pseudo-inverse is special in that its range contains no projection onto the nullspace of W. It follows that if either of the matrix operators W_(H,I) or W_(O,H) in eq. (1) has a nullspace, then multiple inverse mapping operators will exist.
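The SVD construction of the pseudo-inverse, and the nullspace basis it exposes, can be checked directly with a standard SVD routine. A small sketch of our own (the matrix shape is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))      # hidden(3) x input(5): rank 3, 2-dim nullspace

S, d, Vt = np.linalg.svd(W)      # W = S D V', with d the singular values
r = int(np.sum(d > 1e-10))       # rank: number of non-zero singular values

# Pseudo-inverse from the SVD, inverting only the non-zero singular values.
W_pinv = Vt[:r].T @ np.diag(1.0 / d[:r]) @ S[:, :r].T
print(np.allclose(W_pinv, np.linalg.pinv(W)))   # matches NumPy's built-in pinv

# Columns of V beyond the rank span the nullspace of W: adding any combination
# of them to an input vector leaves W @ x unchanged.
null_basis = Vt[r:].T
x = rng.normal(size=5)
shift = null_basis @ rng.normal(size=5 - r)
print(np.allclose(W @ x, W @ (x + shift)))      # True: the flow is unaffected
```

The second check is exactly the freedom exploited below: any nullspace vector can be added to an inverse mapping image without disturbing the reproduced output.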
However, the inverse mapping operator A⁺ calculated using pseudo-inverses will be the only inverse mapping operator that has no projection in the nullspace of A. The derivation of these properties follows in a straightforward manner from the discussion of generalized inverses in [Lancaster, 1985]. An interesting result of using SVD to obtain the pseudo-inverse is that SVD provides a direct method for varying f_(I) within the space of inverse mapping images in input space of f_(O). This becomes clear when we note that if r = ρ(W_(H,I)) is the rank of W_(H,I), only the first r singular values in D_(H,I) are non-zero. Thus, only the first r columns of S_(H,I) and V_(H,I) participate in the activity flow of the network from input module to hidden module. The columns {v_(H,I)(i)}_{i>r} of V_(H,I) span the nullspace of W_(H,I). This nullspace is also the nullspace of A, or at least a significant portion thereof.² If f_(I) is an inverse mapping image of f_(O), then the addition of any vector from the nullspace to f_(I) would still be an inverse mapping image of f_(O), satisfying eq. (3). If an inverse mapping image f_(I) obtained from eq. (2) is unphysical, or somehow inappropriate for a particular application, it could possibly be optimized by combining it with a vector from the nullspace of A. 4 Existence and Stability Considerations There are still implementational issues of importance to address: 1. For a given f_(O), can eq. (2) produce some mapping image f_(I)? 2. For a given f_(O), will the image f_(I) produced by eq. (2) be a true inverse mapping image, i.e., will it satisfy eq. (3)? If not, is it a best approximation in some sense? 3. How stable is an inverse mapping from an f_(O) that produces the answer 'yes' to questions 1 and 2; i.e., if f_(O) is perturbed to produce a new output point, will this new output point satisfy questions 1 and 2? In general, eq.
(2) will produce an image for any output point generated by the forward dynamics of the network, eq. (1). If f_(O) is chosen arbitrarily, however, then whether it is in the domain of A⁺ is purely a function of the network weights. The domain is restricted because the domain of the inverse sigmoid sub-operator is restricted to (−1, +1). Whether an image produced by eq. (2) will be an inverse mapping image, i.e., will satisfy eq. (3), depends on both the network weights and the network architecture. A strong sufficient condition for guaranteeing this is that the network have a convergent architecture; that is: • The dimension of input space is greater than or equal to the dimension of output space. • The rank of W_(H,I) is greater than or equal to the rank of W_(O,H). The stability of inverse mappings of a desired output away from such an actual output depends wholly on the weights of the network. The range of singular values of weight matrix block W_(O,H) can be used to address this issue. If the range is much more than one order of magnitude, then random perturbations about a given point in output space will often be outside the domain of A⁺. This is because the columns of S_(O,H) and V_(O,H) associated with small singular values during forward activity flow are associated with proportionately large inverse singular values in the inverse mapping. Thus, if singular value d_(O,H),i is small, a random perturbation with a projection on column s_(O,H)(i) of S_(O,H) will cause a large magnitude swing in the inverse sub-operator W⁺_(O,H), with the result possibly outside the domain of σ⁻¹. ² Since its first sub-operation is linear, and the sigmoid non-linearity we employ maps zero to zero, the non-linear operator A can still have a nullspace. Subsequent layers of the network might add to this nullspace, however, and the added region may not be a linear subspace. 5 Summary • We have shown that
a closed-form inverse mapping operator of a backpropagation network can be obtained using a composition of pseudo-inverses and inverse sigmoid operators. • This inverse mapping operator, specified in eq. (2), operating on any point in the network's output space, will obtain an inverse image of that point that satisfies eq. (3), if such an inverse image exists. • When many inverse images of an output point exist, an extension of the SVD analyses used to obtain the original inverse image can be used to obtain an alternate inverse image optimized to satisfy the problem constraints of a particular application. • The existence of an inverse image of a particular output point depends on that output point and the network weights. The dependence on the network can be expressed conveniently in terms of the singular values and the singular value vectors of the network weight matrices. • Applications of these techniques include explanation of network operation and process control. References [Lancaster, 1985] Lancaster, P., & Tismenetsky, M. (1985). The Theory of Matrices. Orlando: Academic. [Linden & Kinderman, 1989] Linden, A., & Kinderman, J. (1989). Inversion of multilayer nets. Proceedings of the Third Annual International Joint Conference on Neural Networks, Vol. II, 425-430. [Widrow & Stearns, 1985] Widrow, B., & Stearns, S.D. (1985). Adaptive Signal Processing. Englewood Cliffs: Prentice-Hall. Part XIII Learning and Generalization
An Attractor Neural Network Model of Recall and Recognition Eytan Ruppin Department of Computer Science School of Mathematical Sciences Sackler Faculty of Exact Sciences Tel Aviv University 69978, Tel Aviv, Israel Yechezkel Yeshurun Department of Computer Science School of Mathematical Sciences Sackler Faculty of Exact Sciences Tel Aviv University 69978, Tel Aviv, Israel Abstract This work presents an Attractor Neural Network (ANN) model of Recall and Recognition. It is shown that an ANN model can qualitatively account for a wide range of experimental psychological data pertaining to these two main aspects of memory access. Certain psychological phenomena are accounted for, including the effects of list-length, word-frequency, presentation time, context shift, and aging. Thereafter, the probabilities of successful Recall and Recognition are estimated, in order to possibly enable further quantitative examination of the model. 1 Motivation The goal of this paper is to demonstrate that a Hopfield-based [Hop82] ANN model can qualitatively account for a wide range of experimental psychological data pertaining to the two main aspects of memory access, Recall and Recognition. Recall is defined as the ability to retrieve an item from a list of items (words) originally presented during a previous learning phase, given an appropriate cue (cued Recall), or spontaneously (free Recall). Recognition is defined as the ability to successfully acknowledge that a certain item has or has not appeared in the tutorial list learned before. The main prospect of ANN modeling is that some parameter values, which in former, 'classical' models of memory retrieval (see e.g. [GS84]) had to be explicitly assigned, can now be shown to be emergent properties of the model.
2 The Model The model consists of a Hopfield ANN, in which distributed patterns representing the learned items are stored during the learning phase, and are later presented as inputs during the test phase. In this framework, successful Recall and Recognition is defined. Some additional components are added to the basic Hopfield model to enable the modeling of the relevant psychological phenomena. 2.1 The Hopfield Model The Hopfield model's dynamics are composed of a non-linear, iterative, asynchronous transformation of the network state [Hop82]. The process may include a stochastic noise which is analogous to the 'temperature' T in statistical mechanics. Formally, the Hopfield model is described as follows: Let neuron i's state be a binary variable S_i, taking the values ±1, denoting a firing or a resting state, correspondingly. Let the network's state be a vector S specifying the binary values of all its neurons. Let J_ij be the synaptic strength between neurons i and j. Then h_i, the input 'field' of neuron i, is given by h_i = Σ_{j≠i} J_ij S_j. The neuron's dynamic behavior is described by

S_i(t+1) = +1 with probability ½(1 + tanh(h_i/T)), and −1 with probability ½(1 − tanh(h_i/T)).

Storing a new memory pattern ξ^μ in the network is performed by modifying every ij element of the synaptic connection matrix according to J_ij^new = J_ij^old + (1/n) ξ^μ_i ξ^μ_j. A Hopfield network will always converge to a stable state, and every stored memory is an attractor having an area surrounding it termed its basin of attraction [Hop82]. In addition to the stored memories, other non-memory states also exist as stable states (local minima) of the network [AGS85]. The maximal number m of (randomly generated) memory patterns which can be stored in the basic Hopfield network of n neurons is m = α_c · n, α_c ≈ 0.14 [AGS85].
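The storage rule and the asynchronous dynamics can be sketched in a few lines of NumPy. This is a zero-temperature variant of our own for brevity (at T = 0 the stochastic rule reduces to S_i <- sign(h_i)); the sizes are arbitrary and well below the α_c·n capacity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 3                                   # well below capacity 0.14 * n
patterns = rng.choice([-1, 1], size=(m, n))     # stored memories xi^mu

# Hebbian storage: J_ij = (1/n) sum_mu xi^mu_i xi^mu_j, zero self-coupling.
J = (patterns.T @ patterns) / n
np.fill_diagonal(J, 0)

def recall(cue, sweeps=10):
    """Asynchronous updates at T = 0: S_i <- sign(sum_j J_ij S_j)."""
    S = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            S[i] = 1 if J[i] @ S >= 0 else -1
    return S

# Corrupt 10% of a stored pattern; the cue stays inside the basin of attraction.
cue = patterns[0].copy()
cue[rng.choice(n, size=20, replace=False)] *= -1
print(np.array_equal(recall(cue), patterns[0]))  # the memory is recovered exactly
```

With the load this far below α_c, the crosstalk between patterns is weak and a cue with 90% initial overlap converges back to the stored memory, which is the behavior the psychological modeling below relies on.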
2.2 Recall and Recognition in the model's framework 2.2.1 Recall Recall is considered successful when, upon starting from an initial cue, the network converges to a stable state which corresponds to the learned memory nearest to the input pattern. Inter-pattern distance is measured by the Hamming distance between the input and the learned item encodings. If the network converges to a non-memory stable state, its output will stand for a 'failure of recall' response.¹ ¹ The question of "How do such non-memory states bear the meaning of 'recall failure'?" is outside the scope of this work. However, a possible explanation is that during the learning phase 'meaning' is assigned to the stored patterns via connections formed with external patterns, and since non-memory states lack such associations with external patterns, they are 'meaningless', yielding the 'recall failure' response. Another possible mechanism is that every output pattern generated in the recall process also passes a recognition phase, so that non-memory states are rejected (see the following paragraph describing recognition in our model). 2.2.2 Recognition Recognition is considered successful when the network arrives at a stable state during a time interval Δ, beginning from input presentation. In general, the shorter the distance between an input and its nearest memory, the faster is its convergence [AM88, KP88, RY90]. Since non-memory (non-learned) stable states have higher energy levels and much shallower basins of attraction than memorized stable states [AGS85, LN89], convergence to such states takes significantly longer. Therefore, there exists a range of possible values of Δ that enables successful recognition only of inputs similar to one of the stored memories. 2.3 Other features of the model • The context of the psychological experiments is represented as a substring of the input's encoding.
In order to minimize inter-pattern correlation, the size of the context encoding relative to the total size of the memory encoding is kept small. • The total associational linkage of a learned item is modeled as an external field vector E. When a learned memory pattern ξ^μ is presented to the network, the value of the external field vector generated is E_i = h · ξ^μ_i, where h is an 'orientation' coefficient expressing the association strength. Additional features, including a modified storage equation accounting for learning taking place at the test phase, and a storage decay parameter, are described in [RY90]. 3 The Modeling of experimental data. Regarding every phenomenon discussed, a brief description of the psychological findings is followed by an account of its modeling. We rely on the known results pertaining to Hopfield models to show that, qualitatively, the psychological phenomena reviewed are emergent properties of the model. When such analytical evidence is lacking, simulations were performed in order to account for the experimental data. For a review of the psychological literature supporting the findings modeled, see [GS84]. The List-Length Effect: It is known that the probability of successful Recall or Recognition of a particular item decreases as the length of the list of learned items increases. List length is expressed in memory load. Since it has been shown that the width of the memories' basins of attraction monotonically decreases following an approximately inverse parabolic curve [Wei85], Recall performance should decrease as memory load is increased. We have examined the convergence time of the same set of input patterns at different values of memory load. As demonstrated in Fig. 1, it was found that, as the memory load is increased, successful convergence has occurred (on the average) only after an increasingly growing number of asynchronous iterations.
Hence, convergence takes more time and can result in Recognition failure, although memories' stability is maintained till the critical capacity α_c is reached. Figure 1: Recognition speed (No. of asynchronous iterations) as a function of memory load (No. of stored memories). The network's parameters are n = 500, T = 0.28. The word-frequency effect: The more frequent a word is in language, the probability of recalling it increases, while the probability of recognizing it decreases. A word's frequency in the language is assumed to affect its retrieval through the stored word's semantic relations and associations [Kat85, NCBK87]. It is assumed that, relative to low-frequency words, high-frequency words have more semantic relations and therefore more connections between the patterns representing them and other patterns stored in the memory (i.e., in other networks). This one-to-many relationship is assumed to be reciprocal, i.e., each of the externally stored patterns also has connections projected to several of the stored patterns in the allocated network. The process leading to the formation of the external field E (acting upon the allocated network), generated by an input pattern nearest to some stored memory pattern ξ^μ, is assumed to be characterized as follows: 1. There is a threshold degree of overlap O_min, such that E > 0 only when the allocated network's state overlap H^μ is higher than O_min. 2. At overlap values H^μ which are only moderately larger than O_min, h^μ is monotonically increasing, but as H^μ continues to rise, a certain 'optimal' point is reached, beyond which h^μ is monotonically decreasing. 3. High-frequency words have lower O_min values than low-frequency words.
Recognition tests are characterized by a high initial value of overlap H^μ to some memory ξ^μ. The value of h^μ and E^μ generated is post-optimal and therefore smaller than in the case of low-frequency words, which have higher O_min values. In Recall tests the initial situation is characterized by low values of overlap H^μ to some nearest memory ξ^μ; only the overlap value of high-frequency words is sufficient for activating associated items, i.e., H^μ > O_min. Presentation Time: Increasing the presentation time of learned words is known to improve both their Recall and Recognition. This is explained by the phenomenon of maintenance rehearsal: the memories' basins of attraction get deeper, since the 'energy' E of a given state is determined by Σ_{μ=1}^{m} (H^μ)². Deeper basins of attraction are also wider [HFP83, KPKP90]. Therefore, the probability of successful Recall and Recognition of rehearsed items is increased. The effect of a uniform rehearsal is equivalent to a temperature decrease. Hence, increasing presentation time will attenuate and delay the list-length phenomenon, till a certain limit. In a similar way, the Test Delay phenomenon is accounted for [RY90]. Context Shift: The term Context Shift refers to the change in context from the tutorial period to the test period. Studies examining the effect of context shift have shown a decrement in Recall performance with context shift, but little change in Recognition performance. As demonstrated in [RY90], when a context shift is simulated by flipping some of the context string's bits, Recall performance severely deteriorates while memories' stability remains intact. No significant increase in the time (i.e., number of asynchronous iterations) required for convergence was found, thus maintaining the pre-shift probability of successful Recognition. Age differences in Recall and Recognition: It was found that older people perform more poorly on Recall tasks than they do on Recognition tasks [CM87].
These findings can be accounted for by the assumption that synapses are being weakened and deleted with aging, which, although controversial, has gained some experimental support (see [RY90]). We have investigated the retrieval performance as a function of the input's initial overlap, various levels of synaptic dilution, and memory load. As demonstrated in Fig. 2, when the synaptic dilution is increased, a 'critical' phase is reached where memory retrieval of far-away input patterns is decreased but the retrieval of input patterns with a high level of initial overlap remains intact. As the memory load is increased, this 'critical' phase begins at lower levels of synaptic dilution. On the other hand, only a mild increase (of 15%) in recognition speed was found. Figure 2: The probability of successful retrieval performance as a function of memory load and the input pattern's initial overlap, at two different degrees of synaptic dilution (right-sided and left-sided figures). The network's parameters are n = 500, T = 0.05. The interested reader can find a description of the modeling of additional phenomena, including test position, word fragment completion, and distractor similarity, in [RY90]. 4 On a quantitative test of the model. 4.1 Estimating Recall performance In a given network with n neurons and m memories, the radius r of the basins of attraction of the memories decreases as the memory load parameter α = m/n is increased. According to [MPRV87], n, m, and r are related according to the expression m = (1 − 2r)² · n / (4 log n). The concept of the basins of attraction implies a non-linear probability function, with low probability when input vectors are further than the radius of attraction and high probability otherwise.
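Plugging numbers into this capacity relation shows how quickly the attraction radius collapses as the load grows. A numeric sketch of our own (taking log as the natural logarithm, an assumption):

```python
import math

def radius(m, n):
    """Invert m = (1 - 2r)^2 * n / (4 log n) for the attraction radius r."""
    return (1 - math.sqrt(4 * m * math.log(n) / n)) / 2

n = 500
for m in (5, 10, 20):
    # r shrinks from ~0.25 toward 0 as m approaches n / (4 ln n), about 20 here.
    print(f"m = {m:2d}: r = {radius(m, n):.3f}")
```

For n = 500 the radius is roughly a quarter of the encoding bits at m = 5 but nearly vanishes by m = 20, which matches the qualitative picture of basins narrowing with load.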
The slope of this non-linearity increases as the noise level T is decreased. The probability P_c that a random input vector will converge to one of the stored memories can be estimated by P_c ≈ m · Σ_{i=0}^{r·n} C(n,i) / 2ⁿ. It is interesting to note that the rates of change of r and of P_c have distinct forms; Recall tests beginning from randomly generated cues would yield a very low rate of successful Recall (P_c). Yet, if one examines Recall by picking a stored memory, flipping some of its encoding bits, and presenting it as an input to the network (determining r), 'reasonable' levels of successful Recall can still be obtained even when a 'considerable' number of encoding bits are flipped. P_c can also be estimated by considering the context representation [RY90]. 4.2 Estimating Recognition performance The probability of correct Recognition depends mainly on the length of the interval Δ; assume that after an input pattern is presented to a network of n neurons, during the time interval Δ, k iteration steps of a Monte Carlo simulation are performed: in each such step, a neuron is randomly selected, and then it examines whether or not it should flip its state, according to its input. We show that the probability P_g{d} that an input pattern at distance d will be successfully recognized is bounded by P_g{d} ≥ 1 − d · e^{−k/n}. It can be seen that Recognition's success depends strongly on the initial input proximity to a stored memory, and even more strongly on the number of allowed asynchronous iterations k, determined by the length of Δ. For a selection of k = n(ln(d) + c), one obtains P_g ≥ 1 − e^{−c}. The expected number of iterations (denoted E(X)) till successful convergence is achieved is E(X) = Σ_{i=1}^{d} E(X_i) = n · Σ_{i=1}^{d} (1/i) ≈ n · ln(d). In the more general case, let δ denote the Hamming distance (between the network's state S and a stored memory) below which retrieval is considered successful. Then, the corrected estimations of retrieval performance are P_g ≥ 1 − C(d, δ) ·
e^{−(k/n)·δ}, and E(X) ≈ n · ln(d/δ). In simulations we have performed (n = 500, d = 20, δ = 10), the average number of iterations until successful convergence was in the range of 300-400, in excellent correspondence with the predicted expectation, E(X) = 500 · ln(2). References [AGS85] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett., 55:1530, 1985. [AM88] S. I. Amari and K. Maginu. Statistical neurodynamics of associative memory. Neural Networks, 1:63, 1988. [CM87] F. I. M. Craik and J. M. McDowd. Age differences in recall and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(3):474, 1987. [GS84] G. Gillund and M. Shiffrin. A retrieval model for both recognition and recall. Psychological Review, 91:1, 1984. [HFP83] J. J. Hopfield, D. I. Feinstein, and R. G. Palmer. 'Unlearning' has a stabilizing effect in collective memories. Nature, 304:158, 1983. [Hop82] J. J. Hopfield. Neural networks and physical systems with emergent collective abilities. Proc. Nat. Acad. Sci. USA, 79:2554, 1982. [Kat85] T. Kato. Semantic-memory sources of episodic retrieval failure. Memory & Cognition, 13(5):442, 1985. [KP88] J. Komlos and R. Paturi. Convergence results in an associative memory model. Neural Networks, 1:239, 1988. [KPKP90] B. Kamgar-Parsi and B. Kamgar-Parsi. On problem solving with Hopfield neural networks. Biol. Cybern., 62:415, 1990. [LN89] M. Lewenstein and A. Nowak. Fully connected neural networks with self-control of noise levels. Phys. Rev. Lett., 62(2):225, 1989. [MPRV87] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, IT-33(4):461, 1987. [NCBK87] D. L. Nelson, J. J. Canas, M. T. Bajo, and P. D. Keelan. Comparing word fragment completion and cued recall with letter cues.
Journal of Experimental Psychology: Learning, Memory and Cognition, 13(4):542, 1987. [RY90] E. Ruppin and Y. Yeshurun. Recall and recognition in an attractor neural network model of memory retrieval. Technical report, Dept. of Computer Science, Tel-Aviv University, 1990. [Wei85] G. Weisbuch. Scaling laws for the attractors of Hopfield networks. J. Physique Lett., 46:L-623, 1985.
Training Knowledge-Based Neural Networks to Recognize Genes in DNA Sequences Michiel O. Noordewier Computer Science Rutgers University New Brunswick, NJ 08903 Geoffrey G. Towell Computer Sciences University of Wisconsin Madison, WI 53706 Jude W. Shavlik Computer Sciences University of Wisconsin Madison, WI 53706 Abstract We describe the application of a hybrid symbolic/connectionist machine learning algorithm to the task of recognizing important genetic sequences. The symbolic portion of the KBANN system utilizes inference rules that provide a roughly-correct method for recognizing a class of DNA sequences known as eukaryotic splice-junctions. We then map this "domain theory" into a neural network and provide training examples. Using the samples, the neural network's learning algorithm adjusts the domain theory so that it properly classifies these DNA sequences. Our procedure constitutes a general method for incorporating preexisting knowledge into artificial neural networks. We present an experiment in molecular genetics that demonstrates the value of doing so. 1 Introduction Often one has some preconceived notions about how to perform some classification task. It would be useful to incorporate this knowledge into a neural network, and then use some training examples to refine these approximately-correct rules of thumb. This paper describes the KBANN (Knowledge-Based Artificial Neural Networks) hybrid learning system and demonstrates its ability to learn in the complex domain of molecular genetics. Briefly, KBANN uses a knowledge base of hierarchically-structured rules (which may be both incomplete and incorrect) to form an artificial neural network (ANN). In so doing, KBANN makes it possible to apply neural learning techniques to the empirical improvement of knowledge bases. The task to be learned is the recognition of certain DNA (deoxyribonucleic acid) subsequences important in the expression of genes.
A large governmental research program, called the Human Genome Initiative, has recently been undertaken to determine the sequence of DNA in humans, estimated to be 3 x 10^9 characters of information. This provides a strong impetus to develop genetic-analysis techniques based solely on the information contained in the sequence, rather than in combination with other chemical, physical, or genetic techniques. DNA contains the information by which a cell constructs protein molecules. The cellular expression of proteins proceeds by the creation of a "message" ribonucleic acid (mRNA) copy from the DNA template (Figure 1).

[Figure 1: Steps in the Expression of Genes. DNA -> precursor mRNA -> mRNA (after splicing) -> protein -> folded protein.]

This mRNA is then translated into a protein. One of the most unexpected findings in molecular biology is that large pieces of the mRNA are removed before it is translated further [1]. The utilized sequences (represented by boxes in Figure 1) are known as "exons", while the removed sequences are known as "introns", or intervening sequences. Since the discovery of such "split genes" over a decade ago, the nature of the splicing event has been the subject of intense research. The points at which DNA is removed (the boundaries of the boxes in Figure 1) are known as splice-junctions. The splice-junctions of eukaryotic¹ mRNA precursors contain patterns similar to those in Figure 2.

[Figure 2: Canonical Splice-Junctions. Donor: exon (A/C) A G | G T (A/G) A G T intron; acceptor: intron (C/T)6 X (C/T) A G | G (G/T) exon.]

DNA is represented by a string of characters from the set {A,G,C,T}. In this figure, X represents any character, slashes represent disjunctive options, and subscripts indicate repetitions of a pattern. However, numerous other locations can resemble these canonical patterns. As a result, these patterns do not by themselves reliably imply the presence of a splice-junction.
Evidently, if junctions are to be recognized on the basis of sequence information alone, longer-range sequence information will have to be included in the decision-making criteria. A central problem is therefore to determine the extent to which sequences surrounding splice-junctions differ from sequences surrounding spurious analogues. We have recently described a method [9, 12] that combines empirical and symbolic learning algorithms to recognize another class of genetic sequences known as bacterial promoters. Our hybrid KBANN system was demonstrated to be superior to other empirical learning systems including decision trees and nearest-neighbor algorithms. In addition, it was shown to more accurately classify promoters than the methods currently reported in the biological literature. In this manuscript we describe the application of KBANN to the recognition of splice-junctions, and show that it significantly increases generalization ability when compared to randomly-initialized, single-hidden-layer networks (i.e., networks configured in the "usual" way). The paper concludes with a discussion of related research and the areas which our research is currently pursuing.

¹ Eukaryotic cells contain nuclei, unlike prokaryotic cells such as bacteria and viruses.

2 The KBANN Algorithm

KBANN uses a knowledge base of domain-specific inference rules in the form of PROLOG-like clauses to define what is initially known about a topic. The knowledge base need be neither complete nor correct; it need only support approximately correct reasoning. KBANN translates knowledge bases into ANNs in which units and links correspond to parts of knowledge bases. A detailed explanation of the procedure used by KBANN to translate rules into an ANN can be found in [12]. As an example of the KBANN method, consider the artificial knowledge base in Figure 3a which defines membership in category A.
Figure 3b represents the hierarchical structure of these rules: solid and dotted lines represent necessary and prohibitory dependencies, respectively. Figure 3c represents the ANN that results from the translation of this knowledge base into a neural network. Units X and Y in Figure 3c are introduced into the ANN to handle the disjunction in the knowledge base. Otherwise, units in the ANN correspond to consequents or antecedents in the knowledge base. The thick lines in Figure 3c represent the links in the ANN that correspond to dependencies in the explanation. The weight on thick solid lines is 3, while the weight on thick dotted lines is -3. The lighter solid lines represent the links added to the network to allow refinement of the initial rules. At present, KBANN is restricted to non-recursive, propositional (i.e., variable-free) sets of rules. Numbers beside the unit names in Figure 3c are the biases of the units. These biases are set so that the unit is active if and only if the corresponding consequent in the knowledge base is true. As this example illustrates, the use of KBANN to initialize ANNs has two principal benefits. First, it indicates the features believed to be important to an example's classification. Second, it specifies important derived features; through their deduction the complexity of an ANN's final decision is reduced.

[Figure 3: Translation of a Knowledge Base into an ANN. Panel (a) gives the example knowledge base: A :- B, C. B :- not F, G. B :- not H. C :- I, J. Panel (b) shows its dependency hierarchy over the features F, G, H, I, J, K; panel (c) shows the resulting network.]

3 Problem Definition

The splice-junction problem is to determine into which of the following three categories a specified location in a DNA sequence falls: (1) exon/intron borders, referred to as donors, (2) intron/exon borders, referred to as acceptors, and (3) neither.
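The bias-setting convention just described (antecedent weights of 3 or -3, with the bias chosen so a rule's unit fires exactly when its antecedents are satisfied) can be sketched as follows. The function names and the exact threshold formula are illustrative assumptions, not the actual KBANN code:

```python
# Hypothetical sketch of KBANN's rule-to-network translation, using the
# weight convention from the text: +3 for a positive antecedent, -3 for
# a negated one. Names and the bias formula are illustrative, not KBANN.

def translate_rule(antecedents):
    """Map one propositional rule to input weights and a bias.

    `antecedents` is a list of (name, negated) pairs. The bias sits
    halfway between "all positive antecedents true, all negated false"
    and "one positive antecedent missing", so the unit activates iff
    the rule body is satisfied.
    """
    W = 3.0
    weights = {name: (-W if negated else W) for name, negated in antecedents}
    n_pos = sum(1 for _, negated in antecedents if not negated)
    bias = W * (n_pos - 0.5)
    return weights, bias

def unit_active(weights, bias, truth):
    # Weighted sum over the antecedents that are currently true.
    net = sum(w for name, w in weights.items() if truth.get(name, False))
    return net > bias

# Rule from Figure 3a: B :- not F, G.  (B holds when G is true and F is not)
w, b = translate_rule([("F", True), ("G", False)])
assert unit_active(w, b, {"G": True, "F": False})
assert not unit_active(w, b, {"G": True, "F": True})
```

With these weights, asserting the negated antecedent F drives the net input below the bias, which is the "prohibitory dependency" behavior of the thick dotted links described above.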
To address this problem we provide KBANN with two sets of information: a set of DNA sequences 60 nucleotides long that are classified as to the category membership of their center, and a domain theory that describes when the center of a sequence corresponds to one of these three categories. Table 1 contains the initial domain theory used in the splice-junction recognition task. A special notation is used to specify locations in the DNA sequence. When a rule's antecedents refer to input features, they first state a relative location in the sequence vector, then the DNA symbol that must occur (e.g., @3=A). Positions are numbered negatively or positively depending on whether they occur before or after the possible junction location. By biological convention, position numbers of zero are not used. The set of rules was derived in a straightforward fashion from the biological literature [13]. Briefly, these rules state that a donor or acceptor sequence is present if characters from the canonical sequence (Figure 2) are present and triplets known as stop codons are absent in the appropriate positions. The examples were obtained by taking the documented split genes from all primate gene entries in Genbank release 64.1 [1] that are described as complete. Each training example consists of a window that covers 30 nucleotides before and after each donor and acceptor site. This procedure resulted in 751 examples of acceptors and 745 examples of donors. Negative examples are derived from similarly-sized windows, which did not cross an intron/exon boundary, sampled at random from these sequences. Note that this differs from the usual practice of generating random sequences with base-frequency composition the same as the positive instances. However, we feel that this provides a more realistic training set, since DNA is known to be highly non-random [3].
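The position notation used in the domain theory (e.g., @3=A, with negative positions before the candidate junction and no position zero) can be made concrete with a small interpreter. This is an illustrative sketch with hypothetical helper names, not KBANN's actual rule parser:

```python
# Illustrative interpreter for the @position=symbol notation described
# above: positions run ..., -2, -1, 1, 2, ... around the candidate
# junction, with no position zero. Sketch only; no bounds checking.

# Ambiguity symbols defined by the R/Y/M/K rules in the domain theory.
AMBIGUITY = {"R": "AG", "Y": "CT", "M": "CA", "K": "GT"}

def at(seq, junction, pos):
    """Nucleotide at biological position `pos` relative to `junction`
    (the string index of the first character after the junction)."""
    assert pos != 0, "position zero is not used by convention"
    return seq[junction + pos - 1] if pos > 0 else seq[junction + pos]

def matches(seq, junction, pos, symbol):
    return at(seq, junction, pos) in AMBIGUITY.get(symbol, symbol)

# A fragment of the donor rule: @-1=G, @1=G, @2=T, @3=R.
def donor_core(seq, junction):
    return all(matches(seq, junction, p, s)
               for p, s in [(-1, "G"), (1, "G"), (2, "T"), (3, "R")])

seq = "AAGGTAAGT"   # candidate junction between index 2 and 3: AAG | GTAAGT
assert donor_core(seq, 3)
```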
Although many more negative examples were available, we used approximately as many negative examples as there were donors and acceptors combined. Thus, the total data set we used had 3190 examples. The network created by KBANN for the splice-junction problem has one output unit for each category to be learned, and four input units for each nucleotide in the DNA training sequences, one for each of the four values in the DNA alphabet.

Table 1: Knowledge Base for Splice-Junctions

donor :- @-3=M, @-2=A, @-1=G, @1=G, @2=T, @3=R, @4=A, @5=G, @6=T, not(don-stop).
don-stop :- @-3=T, @-2=A, @-1=A.    don-stop :- @-4=T, @-3=A, @-2=G.
don-stop :- @-3=T, @-2=A, @-1=G.    don-stop :- @-4=T, @-3=G, @-2=A.
don-stop :- @-3=T, @-2=G, @-1=A.    don-stop :- @-5=T, @-4=A, @-3=A.
don-stop :- @-4=T, @-3=A, @-2=A.    don-stop :- @-5=T, @-4=A, @-3=G.
don-stop :- @-5=T, @-4=G, @-3=A.
acceptor :- pyr-rich, @-3=Y, @-2=A, @-1=G, @1=G, @2=K, not(acc-stop).
pyr-rich :- 6 of (@-15=Y, @-14=Y, @-13=Y, @-12=Y, @-11=Y, @-10=Y, @-9=Y, @-8=Y, @-7=Y, @-6=Y).
acc-stop :- @1=T, @2=A, @3=A.    acc-stop :- @2=T, @3=A, @4=A.
acc-stop :- @1=T, @2=A, @3=G.    acc-stop :- @2=T, @3=A, @4=G.
acc-stop :- @1=T, @2=G, @3=A.    acc-stop :- @2=T, @3=G, @4=A.
acc-stop :- @3=T, @4=A, @5=A.    acc-stop :- @3=T, @4=A, @5=G.
acc-stop :- @3=T, @4=G, @5=A.
R :- A.    R :- G.    Y :- C.    Y :- T.    M :- C.    M :- A.    K :- G.    K :- T.

In addition, the rules for acc-stop, don-stop, R, Y, and M are considered definitional. Thus, the weights on the links and biases into these units were frozen. Also, the pyr-rich rule only requires that six of its ten antecedents be true. Finally, there are no rules in Table 1 for recognizing negative examples. So we added four unassigned hidden units and connected them to all of the inputs and to the output for the neither category. The final result is that the network created by KBANN has 286 units: 3 output units, 240 input units, 31 fixed-weight hidden units, and 12 tunable hidden units.
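The input layer size quoted above (four units per nucleotide, hence 240 inputs for a 60-nucleotide window) follows from a one-hot coding of the DNA alphabet. A minimal sketch, with illustrative names:

```python
# One-hot encoding of a DNA window into network inputs, as implied by
# the "four input units per nucleotide" description above. Sketch only.

ALPHABET = "AGCT"

def encode_window(window):
    vec = []
    for base in window:
        # Exactly one of the four units per position is set to 1.0.
        vec.extend(1.0 if base == a else 0.0 for a in ALPHABET)
    return vec

window = "AG" * 30                  # a 60-nucleotide window
x = encode_window(window)
assert len(x) == 240                # 60 nucleotides x 4 units each
assert x[:8] == [1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```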
4 Experimental Results

Figure 4 contains a learning curve plotting the percentage of errors made on a set of "testing" examples by KBANN-initialized networks, as a function of the number of training examples. Training examples were obtained by randomly selecting examples from the population of 3190 examples described above. Testing examples consisted of all examples in the population that were not used for training. Each data point represents the average of 20 repetitions of this procedure. For comparison, the error rate for a randomly-initialized, fully-connected, two-layer ANN with 24 hidden units is also plotted in Figure 4. (This curve is expected to have an error rate of 67% for zero training examples. Test results were slightly better due to statistical fluctuations.) Clearly, the KBANN-initialized networks learned faster than randomly-initialized ANNs, making less than half the errors of the randomly-initialized ANNs when there were 100 or fewer training examples. However, when large numbers of training examples were provided, the randomly-initialized ANNs had a slightly lower error rate (5.5% vs. 6.4% for KBANN). All of the differences in the figure are statistically significant.

[Figure 4: Learning Curve for Splice Junctions. Test-set error rate vs. number of training examples (0 to 2000) for the KBANN network and a randomly-weighted network.]

5 Related and Future Research

Several others have investigated predicting splice-junctions. Staden [10] has devised a weight-matrix method that uses a perceptron-like algorithm to find a weighting function that discriminates two sets (true and false) of boundary patterns in known sequences. Nakata et al.
[7] employ a combination of methods to distinguish between exons and introns, including Fickett's statistical method [5]. When applied to human sequences in the Genbank database, this approach correctly identified 81% of true splice-junctions. Finally, Lapedes et al. [6] also applied neural networks and decision-tree builders to the splice-junction task. They reported neural-network accuracies of 92% and claimed their neural-network approach performed significantly better than the other approaches in the literature at that time. The accuracy we report in this paper represents an improvement over these results. However, it should be noted that these experiments were not all performed under the same conditions.

One weakness of neural networks is that it is hard to understand what they have learned. We are investigating methods for the automatic translation into symbolic rules of trained KBANN-initialized networks [11]. These techniques take advantage of the human-comprehensible starting configuration of KBANN's networks to create a small set of hierarchically-structured rules that accurately reflect what the network learned during training. We are also currently investigating the use of richer splice-junction domain theories, which we hope will improve KBANN's accuracy.

6 Conclusion

The KBANN approach allows ANNs to refine preexisting knowledge, generating ANN topologies that are well-suited to the task they are intended to learn. KBANN does this by using a knowledge base of approximately correct, domain-specific rules to determine the ANN's structure and initial weights. This provides an alternative to techniques that either shrink [2] or grow [4] networks to the "right" size. Our experiments on splice-junctions, and previously on bacterial promoters [12], demonstrate that the KBANN approach can substantially reduce the number of training examples needed to reach a given level of accuracy on future examples.
This research was partially supported by Office of Naval Research Grant N00014-90-J-1941, National Science Foundation Grant IRI-9002413, and Department of Energy Grant DE-FG02-91ER61129.

References

[1] R. J. Breathnach, J. L. Mandel, and P. Chambon. Ovalbumin gene is split in chicken DNA. Nature, 270:314-319, 1977. [2] Y. Le Cun, J. Denker, and S. Solla. Optimal brain damage. Advances in Neural Information Processing Systems 2, pages 598-605, 1990. [3] G. Dykes, R. Bambara, K. Marians, and R. Wu. On the statistical significance of primary structural features found in DNA-protein interaction sites. Nucleic Acids Research, 2:327-345, 1975. [4] S. Fahlman and C. Lebiere. The cascade-correlation learning architecture. Advances in Neural Information Processing Systems 2, pages 524-532, 1990. [5] J. W. Fickett. Recognition of protein coding regions in DNA sequences. Nucleic Acids Research, 10:5303-5318, 1982. [6] A. Lapedes, D. Barnes, C. Burks, R. Farber, and K. Sirotkin. Application of neural networks and other machine learning algorithms to DNA sequence analysis. In Computers and DNA, pages 157-182. Addison-Wesley, 1989. [7] K. Nakata, M. Kanehisa, and C. DeLisi. Prediction of splice junctions in mRNA sequences. Nucleic Acids Research, 13:5327-5340, 1985. [8] M. C. O'Neill. Escherichia coli promoters: I. Consensus as it relates to spacing class, specificity, repeat substructure, and three-dimensional organization. Journal of Biological Chemistry, 264:5522-5530, 1989. [9] J. W. Shavlik and G. G. Towell. An approach to combining explanation-based and neural learning algorithms. Connection Science, 1:233-255, 1989. [10] R. Staden. Computer methods to locate signals in DNA sequences. Nucleic Acids Research, 12:505-519, 1984. [11] G. G. Towell, M. Craven, and J. W. Shavlik. Automated interpretation of knowledge based neural networks. Technical report, University of Wisconsin, Computer Sciences Department, Madison, WI, 1991. [12] G. G. Towell, J. W. Shavlik, and M. O.
Noordewier. Refinement of approximately correct domain theories by knowledge-based neural networks. In Proc. of the Eighth National Conf. on Artificial Intelligence, pages 861-866, Boston, MA, 1990. [13] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. A. Steitz, and A. M. Weiner. Molecular Biology of the Gene, pages 634-647, 1987.
1990
ALCOVE: A Connectionist Model of Human Category Learning
John K. Kruschke
Department of Psychology and Cognitive Science Program
Indiana University, Bloomington IN 47405-4201 USA
e-mail: kruschke@ucs.indiana.edu

Abstract

ALCOVE is a connectionist model of human category learning that fits a broad spectrum of human learning data. Its architecture is based on well-established psychological theory, and is related to networks using radial basis functions. From the perspective of cognitive psychology, ALCOVE can be construed as a combination of exemplar-based representation and error-driven learning. From the perspective of connectionism, it can be seen as incorporating constraints into back-propagation networks appropriate for modelling human learning.

1 INTRODUCTION

ALCOVE is intended to accurately model human, perhaps non-optimal, performance in category learning. While it is a feed-forward network that learns by gradient descent on error, it is unlike standard back propagation (Rumelhart, Hinton & Williams, 1986) in its architecture, its behavior, and its goals. Unlike the standard back-propagation network, which was motivated by generalizing neuron-like perceptrons, the architecture of ALCOVE was motivated by a molar-level psychological theory, Nosofsky's (1986) generalized context model (GCM). The psychologically constrained architecture results in behavior that captures the detailed course of human category learning in many situations where standard back propagation fares less well. And, unlike most applications of standard back propagation, the goal of ALCOVE is not to discover new (hidden-layer) representations after lengthy training, but rather to model the course of learning itself (Kruschke, 1990c), by determining which dimensions of the given representation are most relevant to the task, and how strongly to associate exemplars with categories.
[Figure 1: The architecture of ALCOVE (Attention Learning COVEring map). From bottom to top: stimulus dimension nodes; learned attention strengths; exemplar nodes; learned association weights; category nodes. Exemplar nodes show their activation profile when r = q = 1 in Eqn. 1.]

2 THE MODEL

Like the GCM, ALCOVE assumes that input patterns can be represented as points in a multi-dimensional psychological space, as determined by multi-dimensional scaling algorithms (e.g., Shepard, 1962). Each input node encodes a single psychological dimension, with the activation of the node indicating the value of the stimulus on that dimension. Figure 1 shows the architecture of ALCOVE, illustrating the case of just two input dimensions. Each input node is gated by a dimensional attention strength \alpha_i. The attention strength on a dimension reflects the relevance of that dimension for the particular categorization task at hand, and the model learns to allocate more attention to relevant dimensions and less to irrelevant dimensions. Each hidden node corresponds to a position in the multi-dimensional stimulus space, with one hidden node placed at the position of every training exemplar. Each hidden node is activated according to the psychological similarity of the stimulus to the exemplar represented by the hidden node. The similarity function comes from the GCM and the work of Shepard (1962; 1987): Let the position of the jth hidden node be denoted as (h_{j1}, h_{j2}, ...), and let the activation of the jth hidden node be denoted as a_j^{hid}. Then

    a_j^{hid} = \exp\left[ -c \left( \sum_i \alpha_i \, |h_{ji} - a_i^{in}|^r \right)^{q/r} \right]    (1)

where c is a positive constant called the specificity of the node, where the sum is taken over all input dimensions, and where r and q are constants determining the similarity metric and similarity gradient, respectively. For separable psychological dimensions, the city-block metric (r = 1) is used, while integral dimensions might call for a Euclidean metric (r = 2). An exponential similarity gradient (q = 1) is used here (Shepard, 1987; this volume), but a Gaussian similarity gradient (q = 2) can sometimes be appropriate.

[Figure 2: (a) Increasing attention on the horizontal axis and decreasing attention on the vertical axis causes exemplars of the two categories (denoted by dots and +'s) to have greater between-category dissimilarity and greater within-category similarity. (After Nosofsky, 1986, Fig. 2.) (b) ALCOVE cannot differentially attend to diagonal axes.]

The dimensional attention strengths adjust themselves so that exemplars from different categories become less similar, and exemplars within categories become more similar. Consider a simple case of four stimuli that form the corners of a square in input space, as in Figure 2(a). The two left stimuli are mapped to one category (indicated by dots) and the two right stimuli are mapped to another category (indicated by +'s). ALCOVE learns to increase the attention strength on the horizontal axis, and to decrease the attention strength on the vertical axis. On the other hand, ALCOVE cannot stretch or shrink diagonally, as suggested in Figure 2(b). This constraint is an accurate reflection of human performance, in that categories separated by a diagonal boundary tend to take longer to learn than categories separated by a boundary orthogonal to one dimension. Each hidden node is connected to output nodes that correspond to response categories. The connection from the jth hidden node to the kth category node has a connection weight denoted w_{kj}, called the association weight between the exemplar and the category. The output (category) nodes are activated by the linear rule used in the GCM and the network models of Gluck and Bower (1988a,b):

    a_k^{out} = \sum_j w_{kj} \, a_j^{hid}    (2)

In ALCOVE, unlike the GCM, the association weights are learned and can take on any real value, including negative values. Category activations are mapped to response probabilities using the same choice rule as was used in the GCM and network models. Thus,

    \Pr(K) = \exp(\phi \, a_K^{out}) \Big/ \sum_k \exp(\phi \, a_k^{out})    (3)

where \phi is a real-valued scaling constant. In other words, the probability of classifying the given stimulus into category K is determined by the magnitude of category K's activation relative to the sum of all category activations. The dimensional attention strengths, \alpha_i, and the association weights, w_{kj}, are learned by gradient descent on sum-squared error, as used in standard back propagation (Rumelhart et al., 1986) and in the network models of Gluck and Bower (1988a,b). Details can be found in Kruschke (1990a,b). In fitting ALCOVE to human learning data, there are four free parameters: the fixed specificity c in Equation 1; the probability mapping constant \phi in Equation 3; the association weight learning rate; and the attention strength learning rate. In summary, ALCOVE extends Nosofsky's (1986) GCM by having a learning mechanism and by allowing any positive or negative values for association weights, and it extends Gluck and Bower's (1988a,b) network models by including explicit attention strengths and by using continuous input dimensions. It is a combination of exemplar-based category representations with error-driven learning, as alluded to by Estes et al. (1989; see also Hurwitz, 1990). ALCOVE can also be construed as a form of (non-)radial basis function network, if r = q = 2 in Equation 1. In the form described here, the hidden nodes are placed at positions where training exemplars occur, but another option, described by Kruschke (1990a,b), is to scatter hidden nodes over the input space to form a covering map.
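Equations 1 through 3 can be written out as a small forward pass. This is an illustrative reimplementation under the r = q = 1 setting described in the text, with made-up variable names and parameter values, not Kruschke's original model code:

```python
import math

# Sketch of ALCOVE's forward pass: exemplar-similarity hidden activations
# gated by dimensional attention (Eq. 1), a linear readout into category
# activations (Eq. 2), and a scaled-softmax choice rule (Eq. 3). Shown
# for the city-block metric and exponential gradient (r = q = 1).

def hidden_activation(x, exemplar, attention, c=1.0, r=1, q=1):
    d = sum(a * abs(h - xi) ** r
            for a, h, xi in zip(attention, exemplar, x))
    return math.exp(-c * d ** (q / r))                  # Equation 1

def forward(x, exemplars, attention, weights, phi=2.0):
    a_hid = [hidden_activation(x, ex, attention) for ex in exemplars]
    a_out = [sum(w * a for w, a in zip(row, a_hid))
             for row in weights]                         # Equation 2
    z = [math.exp(phi * a) for a in a_out]
    return [zi / sum(z) for zi in z]                     # Equation 3

exemplars = [[0.0, 0.0], [1.0, 0.0]]   # one stored exemplar per category
weights = [[1.0, 0.0], [0.0, 1.0]]     # weights[k][j]: exemplar j -> category k
attention = [1.0, 1.0]
p = forward([0.1, 0.0], exemplars, attention, weights)
assert abs(sum(p) - 1.0) < 1e-12
assert p[0] > p[1]   # the stimulus is nearer the first category's exemplar
```

Raising an attention strength makes mismatches on that dimension more costly in Equation 1, which is the stretching-and-shrinking behavior illustrated in Figure 2(a).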
Both these methods work well in fitting human data in some situations, but the exemplar-based approach has advantages (Kruschke, 1990a,b). ALCOVE can also be compared to a standard back-propagation network that has adaptive attentional multipliers on its input nodes (cf. Mozer and Smolensky, 1989), but with fixed input-to-hidden weights (Kruschke 1990b, p. 33). Such a network behaves similarly to a covering-map version of ALCOVE. Moreover, such back-prop networks are susceptible to catastrophic retroactive interference (Ratcliff, 1990; McCloskey & Cohen, 1989), unlike ALCOVE.

3 APPLICATIONS

Several applications of ALCOVE to modelling human performance are detailed elsewhere (Kruschke, 1990a,b); a few will be summarized here.

3.1 RELATIVE DIFFICULTY OF CATEGORY STRUCTURES

The classic work of Shepard, Hovland and Jenkins (1961) explored the relative difficulty of learning different category structures. As a simplified example, the linearly separable categories in Figure 2(a) are easier to learn than the exclusive-or problem (which would have the top-left and bottom-right exemplars mapped to one category, and the top-right and bottom-left mapped to the other). Shepard et al. carefully considered several candidate explanations for the varying difficulties, and concluded that some form of attentional learning was necessary to account for their results. That is, people seemed to be able to determine which dimensions were relevant or irrelevant, and they allocated attention to dimensions accordingly. Category structures with fewer relevant dimensions were easier to learn. ALCOVE has just the sort of attentional learning mechanism called for, and can match the relative difficulties observed by Shepard et al.
3.2 BASE-RATE NEGLECT

A recent series of experiments (Gluck & Bower, 1988b; Estes et al., 1989; Shanks, 1990; Nosofsky et al., 1991) investigated category learning when the assignment of exemplars to categories was probabilistic and the base rates of the categories were unequal. In these experiments, there were two categories (one "rare" and the other "common") and four binary-valued stimulus dimensions. The stimulus values were denoted s1 and s1* for the first dimension, s2 and s2* for the second dimension, and so on. The probabilities were arranged such that over the course of training, the normative probability of each category, given s1 alone, was 50%. However, when presented with feature s1 alone, human subjects classified it as the rare category significantly more than 50% of the time. It was as if people were neglecting the base rates of the categories. Gluck and Bower (1988b) and Estes et al. (1989) compared two candidate models to account for the apparent base-rate neglect. One was a simple exemplar-based model that kept track of each training exemplar, and made predictions of categorizations by summing up frequencies of occurrence of each stimulus value for each category. The exemplar-based model was unable to predict base-rate neglect. The second model they considered, the "double-node network," was a one-layer error-driven network that encoded each binary-valued dimension with a pair of input nodes. The double-node model was able to show base-rate neglect. ALCOVE is an exemplar-based model, and so it is challenged by those results. In fact, Kruschke (1990a,b) and Nosofsky et al. (1991) show that ALCOVE fits the trial-by-trial learning and base-rate neglect data as well as or better than the double-node model.

3.3 THREE-STAGE LEARNING OF RULES AND EXCEPTIONS

One of the best-known connectionist models of human learning is Rumelhart and McClelland's (1986) model of verb past tense acquisition.
One of the main phenomena they wished to model was three-stage learning of irregular verbs: first a few high-frequency irregulars are learned; second, many regular verbs are learned with some interference to the previously learned irregulars; and third, the high-frequency irregulars are re-learned.¹ In order to reproduce three-stage learning in their model, Rumelhart and McClelland had to change the training corpus during learning, so that early on the network was trained with ten verbs, 80% of which were irregular, and later the network was trained with 420 verbs, only 20% of which were irregular. It remains a challenge to connectionist models to show three-stage learning of rules and exceptions while keeping the training set constant. While ALCOVE has not been applied to the verb-learning situation (and perhaps should not be, as a multi-dimensional similarity-space might not be a tractable representation for verbs), it can show three-stage learning of rules and exceptions in simpler but analogous situations. Figure 3 shows an arrangement of training exemplars, most of which can be classified by the simple rule, "if it's to the right of the dashed line, then it's in the 'rectangle' category, otherwise it's in the 'oval' category." The rule-following cases are marked with an "R." There are two exceptional cases near the dashed line, marked with an "E."

¹ There is evidence that three-stage learning is only very subtle in verb past tense acquisition (e.g., Marcus, 1990), but whether it exists more robustly in the simpler category learning domains addressed by ALCOVE is still an open question.

[Figure 3: Left panel shows arrangement of rule-following (R) and exceptional (E) cases. Right panel shows the performance of ALCOVE (probability correct vs. learning trial). The ratio of E to R cases and all parameters of the model were fixed throughout training.]
Exceptional exemplars occurred 4 times as often as rule-following exemplars. The right panel of Figure 3 shows that ALCOVE initially learns the E cases better than the R cases, but that later in learning the R cases surpass the E's. The reason is that early in learning, ALCOVE is primarily building up association weights and has not yet shifted much attention away from the irrelevant dimension. Associations from the E cases grow more quickly because they are more frequent. Once the associations are established, then there is a basis for attention to be shifted away from the irrelevant dimension, rapidly improving performance on the R cases. At the time of this writing, these results have the status of a provocative demonstration, but experiments with human subjects in similar learning situations are presently being undertaken.

Acknowledgment

This research was supported in part by Biomedical Research Support Grant RR 7031-25 from the National Institutes of Health.

References

Estes, W. K., Campbell, J. A., Hatsopoulos, N., & Hurwitz, J. B. (1989). Base-rate effects in category learning: A comparison of parallel network and memory storage-retrieval models. J. Exp. Psych. Learning, Memory and Cognition, 15, 556-576. Gluck, M. A. & Bower, G. H. (1988a). Evaluating an adaptive network model of human learning. J. of Memory and Language, 27, 166-195. Gluck, M. A. & Bower, G. H. (1988b). From conditioning to category learning: An adaptive network model. J. Exp. Psych. General, 117, 227-247. Hurwitz, J. B. (1990). A hidden-pattern unit network model of category learning. Doctoral dissertation, Harvard University. Kruschke, J. K. (1990a). A connectionist model of category learning. Doctoral dissertation, University of California at Berkeley. Available from University Microfilms International. Kruschke, J. K. (1990b). ALCOVE: A connectionist model of category learning.
Research Report 19, Cognitive Science Program, Indiana University. Kruschke, J. K. (1990c). How connectionist models learn: The course of learning in connectionist networks. Behavioral and Brain Sciences, 13, 498-499. Marcus, G. F., Ullman, M., Pinker, S., Hollander, M., Rosen, T. J., & Xu, F. (1990). Overregularization. Occasional Paper #41, MIT Center for Cognitive Science. McCloskey, M. & Cohen, N. J. (1989). Catastrophic interference in connectionist networks: the sequential learning problem. In: G. Bower (ed.), The Psychology of Learning and Motivation, Vol. 24. New York: Academic Press. Mozer, M. C., & Smolensky, P. (1989). Skeletonization: A technique for trimming the fat from a network via relevance assessment. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems, I, pp. 107-115. San Mateo, CA: Morgan Kaufmann. Nosofsky, R. M. (1986). Attention, similarity and the identification-categorization relationship. J. Exp. Psych. General, 115, 39-57. Nosofsky, R. M., Kruschke, J. K., & McKinley, S. (1991). Comparisons between adaptive network and exemplar models of classification learning. Research Report 35, Cognitive Science Program, Indiana University. Ratcliff, R. (1990). Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, 97, 285-308. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by back-propagating errors. In: D. E. Rumelhart & J. L. McClelland (eds.), Parallel Distributed Processing, Vol. 1, pp. 318-362. Cambridge, MA: MIT Press. Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of English verbs. In: J. L. McClelland & D. E. Rumelhart (eds.), Parallel Distributed Processing, Vol. 2, pp. 216-271. Cambridge, MA: MIT Press. Shanks, D. R. (1990). Connectionism and the learning of probabilistic concepts. Quarterly J. Exp. Psych., 42A, 209-237. Shepard, R. N. (1962).
The analysis of proximities: Multidimensional scaling with an unknown distance function, I & II. Psychometrika, 27, 125-140, 219-246. Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237, 1317-1323. Shepard, R. N., Hovland, C. 1., & Jenkins, H. M. (1961) . Learning and memoriza.tion of classifications. Psychological Monographs, 75(13), Whole No. 517.
1990
Multi-Layer Perceptrons with B-Spline Receptive Field Functions Stephen H. Lane, Marshall G. Flax, David A. Handelman and Jack J. Gelfand Human Information Processing Group Department of Psychology Princeton University Princeton, New Jersey 08544 ABSTRACT Multi-layer perceptrons are often slow to learn nonlinear functions with complex local structure due to the global nature of their function approximations. It is shown that standard multi-layer perceptrons are actually a special case of a more general network formulation that incorporates B-splines into the node computations. This allows novel spline network architectures to be developed that can combine the generalization capabilities and scaling properties of global multi-layer feedforward networks with the computational efficiency and learning speed of local computational paradigms. Simulation results are presented for the well-known spiral problem of Wieland and of Lang and Witbrock to show the effectiveness of the Spline Net approach. 1. INTRODUCTION Recently, it has been shown that multi-layer feedforward neural networks, such as Multi-Layer Perceptrons (MLPs), are theoretically capable of representing arbitrary mappings, provided that a sufficient number of units are included in the hidden layers (Hornik et al., 1989). Since all network weights are updated with each training exemplar, these networks construct global approximations to multi-input/multi-output function data in a manner analogous to fitting a low-order polynomial through a set of data points. This is illustrated by the cubic polynomial "Global Fit" of the data points in Fig. 1. Figure 1. Global vs. Local Function Approximation Consequently, multi-layer perceptrons are capable of generalizing (extrapolating/
interpolating) their response to regions of the input space where little or no training data is present, using a quantity of connection weights that typically scales quadratically with the number of hidden nodes. The global nature of the weight updating, however, tends to blur the details of local structures, slows the rate of learning, and makes the accuracy of the resulting function approximation sensitive to the order of presentation of the training data. It is well known that many sensorimotor structures in the brain are organized using neurons that possess locally-tuned overlapping receptive fields (Hubel and Wiesel, 1962). Several neural network computational paradigms such as CMACs (Cerebellar Model Articulation Controllers) (Albus, 1975) and Radial Basis Functions (RBFs) (Moody and Darken, 1988) have been quite successful representing complex nonlinear functions using this same organizing principle. These networks construct local approximations to multi-input/multi-output function data that are analogous to fitting a least-squares spline through a set of data points using piecewise polynomials or other basis functions. This is illustrated as the cubic spline "Local Fit" in Fig. 1. The main benefits of using local approximation techniques to represent complex nonlinear functions include fast learning and reduced sensitivity to the order of presentation of training data. In many cases, however, in order to represent the function to the desired degree of smoothness, the number of basis functions required to adequately span the input space can scale exponentially with the number of inputs (Lane et al., 1991a,b). The work presented in this paper is part of a larger effort (Lane et al., 1991a) to develop a general neural network formulation that can combine the generalization capabilities and scaling properties of global multi-layer feedforward networks with the computational efficiency and learning speed of local network paradigms.
It is shown in the sequel that this can be accomplished by incorporating B-Spline receptive fields into the node connection functions of Multi-Layer Perceptrons. 2. MULTI-LAYER PERCEPTRONS WITH B-SPLINE RECEPTIVE FIELD FUNCTIONS Standard Multi-Layer Perceptrons (MLPs) can be represented using node equations of the form,

$$y_i^L = \sigma\Big( \sum_{j=0}^{n_{L-1}} c_{ij}^L(y_j^{L-1}) \Big) \qquad (1)$$

where $n_L$ is the number of nodes in layer $L$ and the $c_{ij}^L$ are linear connection functions between nodes in layers $L$ and $L-1$ such that,

$$c_{ij}^L(y_j^{L-1}) = w_{ij}^L\, y_j^{L-1} \qquad (2)$$

$\sigma(\cdot)$ is the standard sigmoidal nonlinearity, $y_j^{L-1}$ is the output of a node in layer $L-1$, $y_0^{L-1} = 1$, and the $w_{ij}^L$ are adjustable network weights. Some typical linear connection functions are shown in Fig. 2; $c_{i0}^L$ corresponds to a threshold input. Figure 2. Typical MLP Node Connection Functions Incorporating B-Spline receptive field functions (Lane et al., 1991a) into the node computations of eq. (1) allows more general connection functions (e.g. piecewise linear, quadratic, cubic, etc.) to be formulated. The corresponding B-Spline MLP (Spline Net) is derived by redefining the connection functions of eq. (2) such that,

$$c_{ij}^L(y_j^{L-1}) = \sum_k w_{ijk}^L\, B_{nk}^G(y_j^{L-1}) \qquad (3)$$

This enables the construction of a more general neural network architecture that has node equations of the form,

$$y_i^L = \sigma\Big( \sum_{j=0}^{n_{L-1}} \sum_k w_{ijk}^L\, B_{nk}^G(y_j^{L-1}) \Big) \qquad (4)$$

The $B_{nk}^G(y_j^{L-1})$ are B-spline receptive field functions (Lane et al., 1989, 1991a) of order $n$ and support $G$, while the $w_{ijk}^L$ are the spline network weights. The order, $n$, corresponds to the number of coefficients in the polynomial pieces. For example, linear splines are of order $n=2$, whereas cubic splines are of order $n=4$. The advantage of the more general B-Spline connection functions of eq. (3) is that it allows varying degrees of "locality" to be added to the network computations, since network weights are now activated based on the value of $y_j^{L-1}$.
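As a concrete illustration of eq. (3), the sketch below evaluates a piecewise-linear ($n=2$) connection function over $P$ partitions of $[0, 1]$. The function and variable names are our own, not from the paper; the key point is that only $G=2$ hat-shaped receptive fields are nonzero at any input, so only two of the $P+G-1$ weights participate in each evaluation.

```python
def linear_bspline_activations(y, P):
    """Evaluate linear (n=2) B-spline receptive fields with P partitions
    on [0, 1] at input y.  Only G=2 of the P+1 hat functions are nonzero,
    so just the two active (index, value) pairs are returned."""
    y = min(max(y, 0.0), 1.0)
    cell = min(int(y * P), P - 1)   # which partition y falls in
    t = y * P - cell                # position within that partition
    # The hat at knot `cell` ramps down; the hat at `cell+1` ramps up.
    return [(cell, 1.0 - t), (cell + 1, t)]

def connection_function(y, weights, P):
    """c(y) = sum_k w_k B_k(y): a piecewise-linear connection function
    with P partitions, i.e. P+1 knot weights."""
    return sum(weights[k] * b for k, b in linear_bspline_activations(y, P))
```

With $P=4$ and knot weights equal to the knot locations, the connection function reproduces the identity map, and the two active basis values always sum to one (a partition of unity).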
The $w_{ijk}^L$ are modified by back-propagating the output error only to the $G$ weights in each connection function associated with active (i.e. nonzero) receptive field functions. The $L$th-layer weights are updated using the method of steepest-descent learning such that,

$$w_{ijk}^L \leftarrow w_{ijk}^L + \beta\, e_i^L\, y_i^L (1 - y_i^L)\, B_{nk}^G(y_j^{L-1}) \qquad (5)$$

where $e_i^L$ is the output error back-propagated to the $i$th node in layer $L$ and $\beta$ is the learning rate (Lane et al., 1991a). In the more general Spline Net formulation of eqs. (3-5), each node input has $P+G-1$ receptive fields and $P+G-1$ weights associated with it, but only $G$ are active at any one time. $P$ determines the number of partitions in the input space of the connection functions. Standard MLP networks are a degenerate case of the Spline Net architecture, as they can be realized with B-Spline receptive field functions of order $n=2$, with $P=1$ and $G=2$. Due to the connectivity of the B-Spline receptive field functions, for the case when $P>1$, the resulting network architecture corresponds to multiply-connected MLPs, where any given MLP is active within only one hypercube in the input space, but has weights that are shared with MLPs on the neighboring hypercubes. The amount of computation required in each layer of a Spline Net during both learning and function approximation is proportional to $G$, and independent of $P$. Formulating the connection functions of eq. (3) with linear ($n=2$) B-Splines allows connection functions such as those shown in Fig. 3 to be learned. Figure 3. Spline Net Connection Functions Using Linear B-Splines (n=2) The connection functions shown in Fig. 3 have $P=4$ partitions (5 knots) on the interval $y_j^{L-1} \in [0,1]$. The number of input partitions, $P$, determines the degree of locality of the resulting function approximation, since the local shape of the connection function is determined from the current node input activation interval.
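The update of eq. (5) touches only the $G$ active weights of each connection function. A self-contained sketch for the linear-spline case follows; the names are our own illustrative choices, and a sigmoidal node with inputs in $[0, 1]$ is assumed.

```python
def update_spline_weights(w, y_in, y_out, err, beta, P):
    """Apply eq. (5) to one connection function with linear (n=2)
    B-splines: only the G=2 weights whose receptive fields are nonzero
    at the node input y_in receive a gradient step.  `err` is the
    back-propagated error e_i^L and y_out is the node's sigmoidal output."""
    cell = min(int(y_in * P), P - 1)           # active partition
    t = y_in * P - cell                        # position inside it
    grad = beta * err * y_out * (1.0 - y_out)  # shared factor in eq. (5)
    w[cell] += grad * (1.0 - t)                # down-ramping hat function
    w[cell + 1] += grad * t                    # up-ramping hat function
    return w
```

Because only two of the $P+1$ weights are touched, the cost of a learning step is independent of $P$, as the text notes.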
Networks constructed using the Spline Net formulation are reminiscent of the form and function of Kolmogorov-Lorenz networks (Barron and Barron, 1988). A neurobiological interpretation of a Spline Net is that it is composed of neurons that have dendritic branches with synapses that operate as a function of the level of activation at a given node or network input. This is shown in the network architecture of Fig. 4b, where the standard three-layer MLP network of Fig. 4a has been redrawn using B-Spline receptive field functions with n=2, P=4 and G=2. Figure 4. Three-Layer Spline Net Architecture, n=2, P=4, G=2 The horizontal arrows projecting from the right of each network node in Fig. 4b represent the node outputs. The overlapping triangles on the node output represent the receptive field functions of neurons in the next layer. These receptive field functions are summed with weighted connections in the dendritic branches to form the inputs to the next network layer. In the architecture shown in Fig. 4b, only two receptive fields are active for any given value of a node output. Therefore for this single hidden-layer network architecture, given any value for the inputs $(x_1, x_2)$, at most $N_w = 30$ weights will be active, where

$$N_w = G\,\eta\,(s+1) \qquad (6)$$

$s$ is the number of network inputs and $\eta$ is the number of nodes in the hidden layer, which for this case is $2s+1 = 5$. 3. SIMULATION RESULTS In order to evaluate the impact of local computation on MLP performance, the well-known spiral problem of Wieland and of Lang and Witbrock (1988) was chosen as a benchmark. Simulations were conducted using a Spline Net architecture having one hidden layer with 5 hidden nodes and linear B-Splines with support, G=2 (Fig. 4). All trials used the "vanilla" back-prop learning rule of eq. (5) with $\beta = 1/(2P)$.
The connection function weights were initialized in each node such that the resulting connection functions were continuous linear functions with arbitrary slope. From previous experience (Lane et al., 1989), it was known that the number of receptive field partitions can drastically affect network learning and performance. Therefore, the connection function partitions were bifurcated during training to see the effect on network generalization capability and learning speed. The bifurcation consisted of splitting every receptive field in half after increments of 100K (100,000) training points, each time doubling the number of connection function partitions and weights in the network nodes. A more adaptive approach would monitor the slope of the learning curve to determine when to split the partitions. New weights were initialized such that the connection functions before and after the bifurcation retained the same shape. All simulation results presented in Figs. 5-12 were generated using 800K training points. The left-most column of Fig. 5 represents the two learned connection functions that lead to each hidden node depicted in Fig. 4. The elements in the second column are the hidden node response to excitation over the unit square, while the plots in the third column are the connection functions from the hidden layer to the output node. The fourth column shows the hidden node outputs after being passed through their respective connection functions. The network output shown in the fifth column is the algebraic sum of the hidden node responses shown in the fourth column. The Spline Net was initialized as a standard MLP with P=1. Figure 6 shows the evolution of the two connection functions to the third hidden node in Fig. 4 after every 100K training points. Around 400K (P=8) the connection functions start to take on a characteristic shape. For P>8, the creation of additional partitions has little effect on the shape of the connection functions.
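The shape-preserving weight initialization at each bifurcation can be sketched as follows for the linear-spline case (illustrative code, not from the paper): doubling $P$ keeps every old knot value and assigns each new knot the midpoint of its two neighbours, so the piecewise-linear connection function is unchanged by the split.

```python
def bifurcate(weights):
    """Split every receptive field in half (P -> 2P) for a linear-spline
    connection function.  Old knot values are kept, and each new knot is
    initialized to the midpoint of its neighbours, so the piecewise-linear
    shape before and after the bifurcation is identical."""
    new = []
    for a, b in zip(weights, weights[1:]):
        new += [a, 0.5 * (a + b)]   # old knot, then interpolated new knot
    new.append(weights[-1])         # final knot is unchanged
    return new
```

Applying this repeatedly implements the schedule described above: training proceeds, the knot count doubles, and learning continues with the finer partitioning from exactly where the coarser network left off.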
Figure 7 shows the associated learning curve, while Fig. 8 is an enlarged version of the network output. These results indicate that the bifurcation schedule introduces additional degrees of freedom (weights) to the network in such a way as to carve out coarse global features first, then incrementally capture finer and finer localized details later. This is in contrast to the results shown in Figs. 9 and 10, where the training (using the same 800K points as in Figs. 7 and 8) was begun on a network having P=128 initial partitions. Figure 11 shows the Spline Net output after 800K training iterations using 112 discrete points located on the two spirals. Lang and Witbrock (1988) state that similar spiral results could only be obtained using a MLP network with 3 hidden layers (including jump connections) and 50,000,000 training iterations. The use of a Spline Net with a bifurcation schedule enabled the learning to be sped up by almost two orders of magnitude, indicating there is a significant performance advantage in trading off number of hidden layers for node complexity. Figure 5. Spiral Learning with Bifurcation Schedule (columns: hidden node connection functions, hidden node response, connection functions to output node, hidden node outputs after connection functions, output node response) Figure 6. Evolution of Connection Functions to Third Hidden Node (panels P=1 through P=128) Figure 7. Learning Curve with Bifurcation Schedule (Mean Square Error vs. Training Iteration) Figure 9. Learning Curve without Bifurcation Schedule (Mean Square Error vs. Training Iteration, P=128) Figure 11.
Learning Curve with Bifurcation Schedule (Mean Square Error vs. Training Iteration, 112 Discrete Points) Figure 8. Output Node Response with Bifurcation Figure 10. Output Node Response without Bifurcation Figure 12. Output Node Response with Bifurcation (112 Discrete Points) 4. CONCLUSIONS It was shown that the introduction of B-Splines into the node connection functions of Multi-Layer Perceptrons allows more general neural network architectures to be developed. The resulting Spline Net architecture combines the fast learning and computational efficiency of strictly local neural network approaches with the scaling and generalization properties of the more established global MLP approach. Similarity to Kolmogorov-Lorenz networks can be used to suggest an initial number of hidden-layer nodes. The number of node connection function partitions chosen affects both network generalization capability and learning performance. It was shown that use of a bifurcation schedule to determine the number of node input partitions speeds learning and improves network generalization. Results indicate that Spline Nets solve difficult learning problems by trading off number of hidden layers for node complexity. Acknowledgements Stephen H. Lane and David A. Handelman are also employed by Robicon Systems Inc., Princeton, NJ. This research has been supported through a grant from the James S. McDonnell Foundation and a contract from the DARPA Neural Network Program. References Albus, J. (1975) "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)," J. Dyn. Sys. Meas. Control, vol. 97, pp. 270-277. Barron, A.R. and Barron, R.L. (1988) "Statistical Learning Networks: A Unifying View," Proc. 20th Symp. on the Interface - Computing and Statistics, pp. 192-203. Hornik, K., Stinchcombe, M. and White, H. (1989) "Multi-layer Feedforward Networks are Universal Approximators," Neural Networks, vol. 2, pp. 359-366. Hubel, D.
and Wiesel, T.N. (1962) "Receptive Fields, Binocular Interaction and Functional Architecture in Cat's Visual Cortex," J. Physiology, vol. 160, no. 106. Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1989) "Development of Adaptive B-Splines Using CMAC Neural Networks," 1989 IJCNN, Washington, DC, June 1989. Lane, S.H., Flax, M.B., Handelman, D.A. and Gelfand, J.J. (1991a) "Function Approximation in Multi-Layer Neural Networks with B-Spline Receptive Field Functions," Princeton University Cognitive Science Lab Report No. 42, in prep. for J. of Int'l Neural Network Society. Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1991b) "Higher-Order CMAC Neural Networks - Theory and Practice," to appear Amer. Contr. Conf., Boston, MA, June 1991. Lang, K.J. and Witbrock, M.J. (1988) "Learning to Tell Two Spirals Apart," Proc. 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, and T. Sejnowski, Eds. Moody, J. and Darken, C. (1988) "Learning with Localized Receptive Fields," Proc. 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, T. Sejnowski, Eds.
Bumptrees for Efficient Function, Constraint, and Classification Learning Stephen M. Omohundro International Computer Science Institute 1947 Center Street, Suite 600 Berkeley, California 94704 Abstract A new class of data structures called "bumptrees" is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. 1 WHAT IS A BUMPTREE? A bumptree is a new geometric data structure which is useful for efficiently learning, representing, and evaluating geometric relationships in a variety of contexts. They are a natural generalization of several hierarchical geometric data structures including oct-trees, k-d trees, balltrees and boxtrees. They are useful for many geometric learning tasks including approximating functions, constraint surfaces, classification regions, and probability densities from samples. In the function approximation case, the approach is related to radial basis function neural networks, but supports faster construction, faster access, and more flexible modification. We provide empirical data comparing bumptrees with radial basis functions in section 2. A bumptree is used to provide efficient access to a collection of functions on a Euclidean space of interest. It is a complete binary tree in which a leaf corresponds to each function of interest. There are also functions associated with each internal node, and the defining constraint is that each interior node's function must be everywhere larger than each of the functions associated with the leaves beneath it. In many cases the leaf functions will be peaked in localized regions, which is the origin of the name.
A simple kind of bump function is spherically symmetric about a center and vanishes outside of a specified ball. Figure 1 shows the structure of a two-dimensional bumptree in this setting. Figure 1: A two-dimensional bumptree (leaf functions, tree structure, and tree functions). A particularly important special case of bumptrees is used to access collections of Gaussian functions on multi-dimensional spaces. Such collections are used, for example, in representing smooth probability distribution functions as a Gaussian mixture, and arise in many adaptive kernel estimation schemes. It is convenient to represent the quadratic exponents of the Gaussians in the tree rather than the Gaussians themselves. The simplest approach is to use quadratic functions for the internal nodes as well as the leaves as shown in Figure 2, though other classes of internal node functions can sometimes provide faster access. Figure 2: A bumptree for holding Gaussians. Many of the other hierarchical geometric data structures may be seen as special cases of bumptrees by choosing appropriate internal node functions as shown in Figure 3. Regions may be represented by functions which take the value 1 inside the region and which vanish outside of it. The function shown in Figure 3D is aligned along a coordinate axis and is constant on one side of a specified value and decreases quadratically on the other side. It is represented by specifying the coordinate which is cut, the cut location, the constant value (0 in some situations), and the coefficient of quadratic decrease. Such a function may be evaluated extremely efficiently on a data point and so is useful for fast pruning operations. Such evaluations are effectively what is used in (Sproull, 1990) to implement fast nearest neighbor computation. The bumptree structure generalizes this kind of query to allow for different scales for different points and directions.
The empirical results presented in the next section are based on bumptrees with this kind of internal node function. Figure 3: Internal bump functions for A) oct-trees, k-d trees, boxtrees (Omohundro, 1987), B) and C) for balltrees (Omohundro, 1989), and D) for Sproull's higher performance k-d tree (Sproull, 1990). There are several approaches to choosing a tree structure to build over given leaf data. Each of the algorithms studied for balltree construction in (Omohundro, 1989) may be applied to the more general task of bumptree construction. The fastest approach is analogous to the basic k-d tree construction technique (Friedman et al., 1977) and is top down and recursively splits the functions into two sets of almost the same size. This is what is used in the simulations described in the next section. The slowest but most effective approach builds the tree bottom up, greedily deciding on the best pair of functions to join under a single parent node. Intermediate in speed and quality are incremental approaches which allow one to dynamically insert and delete leaf functions. Bumptrees may be used to efficiently support many important queries. The simplest kind of query presents a point in the space and asks for all leaf functions which have a value at that point which is larger than a specified value. The bumptree allows a search from the root to prune any subtrees whose root function is smaller than the specified value at the point. More interesting queries are based on branch and bound and generalize the nearest neighbor queries that k-d trees support. A typical example in the case of a collection of Gaussians is to request all Gaussians in the set whose value at a specified point is within a specified factor (say .001) of the Gaussian whose value is largest at that point.
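A minimal sketch of the simple threshold query follows. The class and function names are our own, and the internal node functions here are just explicit upper bounds on the leaves below them (satisfying the defining bumptree constraint); the point is how the bound lets a whole subtree be pruned without touching its leaves.

```python
class Bumptree:
    """Minimal bumptree node: `f` is the node's function.  For an
    internal node, f(x) must upper-bound every leaf function beneath it;
    a leaf additionally carries a `label` identifying its function."""
    def __init__(self, f, left=None, right=None, label=None):
        self.f, self.left, self.right, self.label = f, left, right, label

def leaves_above(node, x, threshold, out=None):
    """Return labels of all leaf functions whose value at x exceeds
    `threshold`, pruning any subtree whose bounding function is already
    at or below the threshold."""
    if out is None:
        out = []
    if node is None or node.f(x) <= threshold:
        return out                    # bound fails: prune whole subtree
    if node.left is None and node.right is None:
        out.append(node.label)        # leaf passes the test
        return out
    leaves_above(node.left, x, threshold, out)
    leaves_above(node.right, x, threshold, out)
    return out
```

For example, with two 1-D triangular bumps as leaves and their pointwise maximum as the root bound, a query near one bump's center returns only that leaf; the other subtree is pruned at the root comparison.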
The search proceeds down the most promising branches first, continually maintains the largest value found at any point, and prunes away subtrees which are not within the given factor of the current largest function value. 2 THE ROBOT MAPPING LEARNING TASK Figure 4: Robot arm mapping task (kinematic space R3 to visual space R6). Figure 4 shows the setup which defines the mapping learning task we used to study the effectiveness of the balltree data structure. This setup was investigated extensively by (Mel, 1990) and involves a camera looking at a robot arm. The kinematic state of the arm is defined by three angle control coordinates and the visual state by six visual coordinates of highlighted spots on the arm. The mapping from kinematic to visual space is a nonlinear map from three dimensions to six. The system attempts to learn this mapping by flailing the arm around and observing the visual state for a variety of randomly chosen kinematic states. From such a set of random input/output pairs, the system must generalize the mapping to inputs it has not seen before. This mapping task was chosen as fairly representative of typical problems arising in vision and robotics. The radial basis function approach to mapping learning is to represent a function as a linear combination of functions which are spherically symmetric around chosen centers, $f(x) = \sum_j w_j g_j(x - x_j)$. In the simplest form, which we use here, the basis functions are centered on the input points. More recent variations have fewer basis functions than sample points and choose centers by clustering. The timing results given here would be in terms of the number of basis functions rather than the number of sample points for a variation of this type. Many forms for the basis functions themselves have been suggested. In our study both Gaussian and linearly increasing functions gave similar results.
The coefficients of the radial basis functions are chosen so that the sum forms a least-squares best fit to the data. Such fits require a time proportional to the cube of the number of parameters in general. The experiments reported here were done using the singular value decomposition to compute the best fit coefficients. The approach to mapping learning based on bumptrees builds local models of the mapping in each region of the space using data associated with only the training samples which are nearest that region. These local models are combined in a convex way according to "influence" functions which are associated with each model. Each influence function is peaked in the region for which it is most salient. The bumptree structure organizes the local models so that only the few models which have a great influence on a query sample need to be evaluated. If the influence functions vanish outside of a compact region, then the tree is used to prune the branches which have no influence. If a model's influence merely dies off with distance, then the branch and bound technique is used to determine contributions that are greater than a specified error bound. If a set of bump functions sum to one at each point in a region of interest, they are called a "partition of unity". We form influence bumps by dividing a set of smooth bumps (either Gaussians or smooth bumps that vanish outside a sphere) by their sum to form an easily computed partition of unity. Our local models are affine functions determined by a least-squares fit to local samples. When these are combined according to the partition of unity, the value at each point is a convex combination of the local model values. The error of the full model is therefore bounded by the errors of the local models and yet the full approximation is as smooth as the local bump functions.
These results may be used to give precise bounds on the average number of samples needed to achieve a given approximation error for functions with a bounded second derivative. In this approach, linear fits are only done on a small set of local samples, avoiding the computationally expensive fits over the whole data set required by radial basis functions. This locality also allows us to easily update the model online as new data arrives. If the $b_i(x)$ are bump functions such as Gaussians, then

$$n_i(x) = \frac{b_i(x)}{\sum_j b_j(x)}$$

forms a partition of unity. If the $m_i(x)$ are the local affine models, then the final smoothly interpolated approximating function is

$$f(x) = \sum_i n_i(x)\, m_i(x).$$

The influence bumps are centered on the sample points with a width determined by the sample density. The affine model associated with each influence bump is determined by a weighted least-squares fit of the sample points nearest the bump center, in which the weight decreases with distance. Because it performs a global fit, for a given number of sample points, the radial basis function approach achieves a smaller error than the approach based on bumptrees. In terms of construction time to achieve a given error, however, bumptrees are the clear winner. Figure 5 shows how the mean square error for the robot arm mapping task decreases as a function of the time to construct the mapping. Figure 5: Mean square error as a function of learning time. Perhaps even more important for applications than learning time is retrieval time. Retrieval using radial basis functions requires that the value of each basis function be computed on each query input and that these results be combined according to the best fit weight matrix.
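The partition-of-unity interpolation described above can be sketched in one function. This is a 1-D illustration with Gaussian influence bumps of a single fixed width and hand-supplied affine models; the names and simplifications are ours, not the paper's.

```python
import math

def blended_model(x, centers, models, width):
    """f(x) = sum_i n_i(x) m_i(x), where n_i = b_i / sum_j b_j is a
    partition of unity built from Gaussian influence bumps b_i centered
    on the sample points, and each m_i is a local affine model given as
    a (intercept, slope) pair: m_i(x) = a_i + c_i * x."""
    bumps = [math.exp(-((x - c) / width) ** 2) for c in centers]
    total = sum(bumps)
    return sum((b / total) * (a + slope * x)
               for b, (a, slope) in zip(bumps, models))
```

Because the normalized bumps are a convex combination, the blend can never leave the range spanned by the local model values at $x$, which is the error-bounding property noted in the text.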
This time increases linearly as a function of the number of basis functions in the representation. In the bumptree approach, only those influence bumps and affine models which are not pruned away by the bumptree retrieval need perform any computation on an input. Figure 6 shows the retrieval time as a function of number of training samples for the robot mapping task. The retrieval time for radial basis functions crosses that for balltrees at about 100 samples and increases linearly off the graph. The balltree algorithm has a retrieval time which empirically grows very slowly and doesn't require much more time even when 10,000 samples are represented. While not shown here, the representation may be improved in both size and generalization capacity by a best-first merging technique. The idea is to consider merging two local models and their influence bumps into a single model. The pair which increases the error the least is merged first and the process is repeated until no pair is left whose merger wouldn't exceed an error criterion. This algorithm does a good job of discovering and representing linear parts of a map with a single model and putting many higher resolution models in areas with strong nonlinearities. Figure 6: Retrieval time as a function of number of training samples (Gaussian RBF vs. Bumptree). 3 EXTENSIONS TO OTHER TASKS The bumptree structure is useful for implementing efficient versions of a variety of other geometric learning tasks (Omohundro, 1990). Perhaps the most fundamental such task is density estimation, which attempts to model a probability distribution on a space on the basis of samples drawn from that distribution. One powerful technique is adaptive kernel estimation (Devroye and Gyorfi, 1985).
The estimated distribution is represented as a Gaussian mixture in which a spherically symmetric Gaussian is centered on each data point and the widths are chosen according to the local density of samples. A best-first merging technique may often be used to produce mixtures consisting of many fewer non-symmetric Gaussians. A bumptree may be used to find and organize such Gaussians. Possible internal node functions include both quadratics and the faster to evaluate functions shown in Figure 3D. It is possible to efficiently perform many operations on probability densities represented in this way. The most basic query is to return the density at a given location. The bumptree may be used with branch and bound to achieve retrieval in logarithmic expected time. It is also possible to quickly find marginal probabilities by integrating along certain dimensions. The tree is used to quickly identify the Gaussians which contribute. Conditional distributions may also be represented in this form and bumptrees may be used to compose two such distributions. Above we discussed mapping learning and evaluation. In many situations there are not the natural input and output variables required for a mapping. If a probability distribution is peaked on a lower dimensional surface, it may be thought of as a constraint. Networks of constraints which may be imposed in any order among variables are natural for describing many problems. Bumptrees open up several possibilities for efficiently representing and propagating smooth constraints on continuous variables. The most basic query is to specify known external constraints on certain variables and allow the network to further impose whatever constraints it can. Multi-dimensional product Gaussians can be used to represent joint ranges in a set of variables.
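A sketch of the kernel density estimate just described, in one dimension. The rule for adapting each width to the local sample density is not spelled out here, so the widths are passed in directly; the function name is ours.

```python
import math

def kernel_density(x, samples, widths):
    """Adaptive kernel estimate: a Gaussian is centered on each sample
    point, with a per-sample width (supplied by the caller, chosen in
    practice from the local sample density).  The mixture components are
    equally weighted, so the result integrates to one."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / w) ** 2) / (w * math.sqrt(2 * math.pi))
               for s, w in zip(samples, widths)) / n
```

A bumptree over these Gaussians would let the sum skip components whose contribution at $x$ is negligible, which is exactly the density query described in the text.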
The operation of imposing a constraint surface may be thought of as multiplying an external constraint Gaussian by the function representing the constraint distribution. Because the product of two Gaussians is a Gaussian, this operation always produces Gaussian mixtures, and bumptrees may be used to facilitate the operation. A representation of constraints which is more like that used above for mappings constructs surfaces from local affine patches weighted by influence functions. We have developed a local analog of principal components analysis which builds up surfaces from random samples drawn from them. As with the mapping structures, a best-first merging operation may be used to discover affine structure in a constraint surface. Finally, bumptrees may be used to enhance the performance of classifiers. One approach is to directly implement Bayes classifiers using the adaptive kernel density estimator described above for each class's distribution function. A separate bumptree may be used for each class or, with a more sophisticated branch and bound, a single tree may be used for the whole set of classes. In summary, bumptrees are a natural generalization of several hierarchical geometric access structures and may be used to enhance the performance of many neural-network-like algorithms. While we compared radial basis functions against a different mapping learning technique, bumptrees may be used to boost the retrieval performance of radial basis functions directly when the basis functions decay away from their centers. Many other neural network approaches in which much of the network does not perform useful work for every query are also susceptible to sometimes dramatic speedups through the use of this kind of access structure.

References
L. Devroye and L. Gyorfi. (1985) Nonparametric Density Estimation: The L1 View, New York: Wiley.
J. H. Friedman, J. L. Bentley and R. A. Finkel. (1977) An algorithm for finding best matches in logarithmic expected time. ACM Trans.
Math. Software 3:209-226.
B. Mel. (1990) Connectionist Robot Motion Planning: A Neurally-Inspired Approach to Visually-Guided Reaching. San Diego, CA: Academic Press.
S. M. Omohundro. (1987) Efficient algorithms with neural network behavior. Complex Systems 1:273-347.
S. M. Omohundro. (1989) Five balltree construction algorithms. International Computer Science Institute Technical Report TR-89-063.
S. M. Omohundro. (1990) Geometric learning algorithms. Physica D 42:307-321.
R. F. Sproull. (1990) Refinements to Nearest-Neighbor Searching in k-d Trees. Sutherland, Sproull and Associates Technical Report SSAPP #184c, to appear in Algorithmica.
Planning with an Adaptive World Model

Sebastian B. Thrun, German National Research Center for Computer Science (GMD), D-5205 St. Augustin, FRG
Knut Möller, University of Bonn, Department of Computer Science, D-5300 Bonn, FRG
Alexander Linden, German National Research Center for Computer Science (GMD), D-5205 St. Augustin, FRG

Abstract We present a new connectionist planning method [TML90]. By interaction with an unknown environment, a world model is progressively constructed using gradient descent. For deriving optimal actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan which is subsequently optimized by gradient descent with a chain of world models, so that an optimal reinforcement may be obtained when it is actually run. The appropriateness of this method is demonstrated by a robotics application and a pole balancing task.

1 INTRODUCTION Whenever decisions are to be made with respect to some events in the future, planning has proved to be an important and powerful concept in problem solving. Planning is applicable if an autonomous agent interacts with a world, and if a reinforcement is available which measures only the overall performance of the agent. The problem of optimizing actions then yields the temporal credit assignment problem [Sut84], i.e. the problem of assigning particular reinforcements to particular actions in the past. The problem becomes more complicated if no knowledge about the world is available in advance. Many connectionist approaches so far solve this problem directly, using techniques based on the interaction of an adaptive world model and an adaptive controller [Bar89, Jor89, Mun87]. Although such controllers are very fast after training, training itself is rather complex, mainly for two reasons: a) Since the future is not considered explicitly, future effects must be directly encoded into the world model. This complicates model training.
b) Since the controller is trained with the world model, training of the former lags behind the latter. Moreover, if there exist several optimal actions, such controllers will generate at most one of them, regardless of all others, since they represent many-to-one functions. E.g., changing the objective function implies the need for an expensive retraining.

Figure 1: The training of the model network is a system identification task. Internal parameters are estimated by gradient descent, e.g. by backpropagation.

In order to overcome these problems, we applied a planning technique to reinforcement learning problems. A model network which approximates the behavior of the world is used for looking ahead into the future and optimizing actions by gradient descent with respect to future reinforcement. In addition, an experience network is trained in order to accelerate and improve planning.

2 LOOK-AHEAD PLANNING 2.1 SYSTEM IDENTIFICATION Planning needs a world model. Training of the world model is adopted from [Bar89, Jor89, Mun87]. Formally, the world maps actions to subsequent states and reinforcements (Fig. 1). The world model used here is a standard non-recurrent or a recurrent connectionist network which is trained by backpropagation or related gradient descent algorithms [WZ88, TS90]. Each time an action is performed on the world, the resulting state and reinforcement are compared with the corresponding prediction by the model network. The difference is used for adapting the internal parameters of the model in small steps, in order to improve its accuracy. The resulting model approximates the world's behavior. Our planning technique relies mainly on two fundamental steps: Firstly, a plan is proposed either by some heuristic or by a so-called experience network. Secondly, this plan is optimized progressively by gradient descent in action space. First, we will consider the second step.
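The system identification loop just described can be sketched in a few lines: act on the world, compare its response with the model's prediction, and nudge the model parameters down the squared-error gradient. The scalar linear "world" and linear model below are illustrative stand-ins for the connectionist networks of the paper:

```python
import random

# Hypothetical scalar world: the next state depends linearly on (state, action).
def world(state, action):
    return 0.8 * state + 0.5 * action

wa, wb = 0.0, 0.0        # model parameters, initially unknown
lr = 0.05                # gradient descent step size
random.seed(0)
state = 0.0
for _ in range(2000):
    action = random.uniform(-1.0, 1.0)
    target = world(state, action)        # observe the world's response
    pred = wa * state + wb * action      # model's prediction
    err = pred - target
    wa -= lr * err * state               # dE/dwa for E = err^2 / 2
    wb -= lr * err * action              # dE/dwb
    state = target
# wa and wb now approximate the world's true coefficients 0.8 and 0.5.
```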
2.2 PLAN OPTIMIZATION In this section we show the optimization of plans by means of gradient descent. For that purpose, let us assume an initial plan, i.e. a sequence of N actions, is given. The first action of this plan, together with the current state (and, in the case of a recurrent model network, its current context activations), is fed into the model network (Fig. 2). This gives us a prediction for the subsequent state and reinforcement of the world. If we assume that the state prediction is a good estimate of the next state, we can proceed by predicting the immediate next state and reinforcement from the second action of the plan correspondingly. This procedure is repeated for each of the N stages of the plan. The final output is a sequence of N reinforcement predictions, which represents the quality of the plan.

Figure 2: Looking ahead by the chain of model networks (model network 1 through N, each receiving one action of the plan; context units for recurrent networks only).

In order to maximize reinforcement, we establish a differentiable reinforcement energy function E_reinf, which measures the deviation of predicted and desired reinforcement. The problem of optimizing plans is thus transformed into the problem of minimizing E_reinf. Since both E_reinf and the chain of model networks are differentiable, the gradients of the plan with respect to E_reinf can be computed. These gradients are used for changing the plan in small steps, which completes the gradient descent optimization. The whole update procedure is repeated either until convergence is observed or, which makes it more convenient for real-time applications, for a predefined number of iterations - note that in the latter case the computational effort is linear in N.
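The loop just described - unroll the plan through N copies of the model, score the predicted outcomes, and move each action a small step down the gradient - can be sketched as follows. The one-dimensional model s' = s + a, the quadratic energy, and all constants are illustrative assumptions, not the networks of the paper:

```python
# Toy chain-of-models planning: the "model" maps state s and action a to
# s + a, and the plan (a_1..a_N) is improved by gradient descent on the
# summed squared distance of the predicted states from a goal.
N = 4
goal = 1.0
plan = [0.0] * N          # initial plan (all-zero heuristic)
lr = 0.03                 # gradient descent step size

def rollout(s0, actions):
    """One pass through the chain of models; returns the predicted states."""
    states, s = [], s0
    for a in actions:
        s = s + a
        states.append(s)
    return states

for _ in range(500):      # fixed number of iterations, as in the text
    states = rollout(0.0, plan)
    for s_idx in range(N):
        # dE/da_s = sum over later stages tau >= s of 2 * (s_tau - goal)
        grad = sum(2.0 * (states[t] - goal) for t in range(s_idx, N))
        plan[s_idx] -= lr * grad

states = rollout(0.0, plan)   # all predicted states end up near the goal
```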
From the planning procedure we obtain the optimized plan, the first action¹ of which is then performed on the world. Now the whole procedure is repeated. The gradients of the plan with respect to E_reinf can be computed either by backpropagation through the chain of models or by a feed-forward algorithm which is related to [WZ88, TS90]: hand in hand with the activations we also propagate the gradients

e_ij^s(τ) ≡ ∂ activation_j(τ) / ∂ action_i(s)   (1)

through the chain of models. Here i labels all action input units and j all units of the whole model network, τ (1≤τ≤N) is the time associated with the τth model of the chain, and s (1≤s≤τ) is the time of the sth action. Thus, for each action (∀i, s) its influence on later activations (∀j, ∀τ≥s) of the chain of networks, including all predictions, is measured by e_ij^s(τ). It has been shown in an earlier paper that this gradient can easily be propagated forward through the network [TML90]:

e_ij^s(τ) =
  1   if j is the action input unit for action_i(s)
  0   if τ=1 and j is a state/context input unit
  e_ij'^s(τ−1)   if τ>1 and j is a state/context input unit (j' the corresponding output unit of the preceding model)
  logistic'(net_j(τ)) · Σ_{l∈pred(j)} weight_jl · e_il^s(τ)   otherwise   (2)

¹If an unknown world is to be explored, this action might be disturbed by adding a small random variable.

The reinforcement energy to be minimized is defined as

E_reinf = Σ_{τ=1..N} Σ_k g_k(τ) · (reinf*_k − activation_k(τ))²   (3)

(k numbers the reinforcement output units, reinf*_k is the desired reinforcement value, usually ∀k: reinf*_k = 1, and g_k weights the reinforcement with respect to τ and k; in the simplest case g_k(τ) = 1.) Since E_reinf is differentiable, we can compute the gradient of E_reinf with respect to each particular reinforcement prediction. From these gradients and the gradients e_ik^s of the reinforcement prediction units the gradients

∂E_reinf / ∂action_i(s) = −2 Σ_{τ=s..N} Σ_k g_k(τ) · (reinf*_k − activation_k(τ)) · e_ik^s(τ)   (4)

are derived, which indicate how to change the plan in order to minimize E_reinf.

Variable plan lengths: The feed-forward manner of the propagation makes it possible to vary the number of look-ahead steps according to the current accuracy of the model network. Intuitively, if a model network has a relatively large error, looking far into the future makes little sense. A good heuristic is to avoid further look-ahead if the current linear error (due to the training patterns) of the model network is larger than the effect of the first action of the plan on the current predictions. This effect is exactly the gradients e_ij^1(τ). Using variable plan lengths might overcome the difficulty of finding an appropriate plan length N a priori.

2.3 INITIAL PLANS - THE EXPERIENCE NETWORK It remains to show how to obtain initial plans. There are several basic strategies which are more or less problem-dependent, e.g. random, average over previous actions, etc. Obviously, if some planning took place before, the problem of finding an initial plan reduces to the problem of finding a single action, since the rest of the previous plan is a good candidate for the next initial plan. A good way of finding this action is the experience network. This network is trained to predict the result of the planning procedure by observing the world's state and, in the case of recurrent networks, the temporal context information from the model network. The target values are the results of the planning procedure. Although the experience network is trained like a controller [Bar89], it is used in a different way, since outcoming actions are further optimized by the planning procedure. Thus, even if the knowledge of the experience network lags behind the model network's, the derived actions are optimized with respect to the "knowledge" of the model network rather than the experience network.
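The experience network can be sketched as a supervised learner that observes planning outcomes. Here the "planner" is a trivial illustrative stand-in whose optimal first action is known in closed form; in the paper it would be the full gradient-descent procedure above:

```python
import random

# Stand-in planner for a toy model s' = s + a with a fixed goal: the
# optimized first action simply closes the gap to the goal.
goal = 1.0
def planned_first_action(state):
    return goal - state

# Experience network: a single linear unit trained by LMS on
# (observed state -> result of the planning procedure).
w, b = 0.0, 0.0
lr = 0.1
random.seed(1)
for _ in range(1000):
    s = random.uniform(-2.0, 2.0)
    target = planned_first_action(s)   # target = planning result
    err = (w * s + b) - target
    w -= lr * err * s
    b -= lr * err
# The trained unit now proposes near-optimal initial plans directly,
# so subsequent planning needs only a few refinement iterations.
```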
On the other hand, as the optimization is gradually shifted into the experience network, planning can be progressively shortened.

3 APPROACHING A ROLLING BALL WITH A ROBOT ARM We applied planning with an adaptive world model to a simulation of a real-time robotics task: a robot arm in 3-dimensional space was to approach a rolling ball. Both hand position (i.e. x, y, z and hand angle) and ball position (i.e. x', y') were observed by a camera system in workspace. Conversely, actions were defined as angular changes of the robot joints in configuration space. Model and experience networks are shown in Fig. 3a.

Figure 3: (a) The recurrent model network (white) and the experience network (grey) for the robotics task. (b) Planning: starting with the initial plan 1, the optimization finally leads to plan 10. The first action of this plan is then performed on the world.

Note that the ball movement was predicted by a recurrent Elman-type network, since only the current ball position was visible at any time. The arm prediction is mathematically more sophisticated, because kinematics and inverse kinematics are required to solve it analytically. The reason why planning makes sense for this task is that we did not want the robot arm to minimize the distance between hand and ball at each step - this would obviously yield trajectories in which the hand follows the ball, e.g.:

Figure 4: Basic strategy, the arm "follows" the ball.

Instead, we wanted the system to find short cuts by making predictions about the ball's next movement. Thus, the reinforcement measured the distance in workspace. Fig. 3b illustrates a "typical" planning process with look-ahead N = 4, 9 iterations, g_k(τ) = 1.3^τ (cf. (3))², a weighted stepsize η = 0.05 · 0.9^τ, and well-trained model and experience networks.
²This exponential function is crucial for minimizing later distances rather than earlier ones.

Starting with an initial plan 1 from the experience network, the optimization led to plan 10. It is clear to see that the resulting action surpassed the initial plan, which demonstrates the appropriateness of the optimization. The final trajectory was:

Figure 5: Planning: The arm finds the short cut.

We were now interested in modifying the behavior of the arm. Without further learning of either the model or the experience network, we wanted the arm to approach the ball from above. For this purpose we changed the energy function (3): before the arm was to approach the ball, the energy was minimal if the arm reached a position exactly above the ball. Since the experience network was not trained for that task, we doubled the number of iteration steps. This led to:

Figure 6: The arm approaches from above due to a modified energy function.

A first implementation on a real robot arm with a camera system showed similar results.

4 POLE BALANCING Next, we applied our planning method to the pole balancing task adopted from [And89]. One main difference from the task described above is the fact that gradient descent is not applicable with binary reinforcement, since the better the approximation by the world model, the more the gradient vanishes. This effect can be prevented by using a second model network with weight decay, which is trained with the same training patterns. Weight decay smoothes the binary mapping. By using the model network for prediction only and the smoothed network for gradient propagation, the pole balancing problem became solvable. We see this as a general technique for applying gradient descent to binary reinforcement tasks. We were especially interested in the dependency between look-ahead and the duration of balance.
It turned out that for most randomly chosen initial configurations of pole and cart a look-ahead of N = 4 was sufficient to balance the pole for more than 20000 steps. If the cart is moved randomly, the pole falls after on average 10 movements.

5 DISCUSSION The planning procedure presented in this paper has two crucial limitations. By using a bounded look-ahead, effects of actions on reinforcement beyond this bound cannot be taken into account. Even if the plan lengths are kept variable (as described above), each particular planning process must use a finite plan. Moreover, using gradient descent as a search heuristic implies the danger of getting stuck in local minima. It might be interesting to investigate other search heuristics. On the other hand, this planning algorithm overcomes certain problems of adaptive controller networks, namely: a) The training is relatively fast, since the model network does not include temporal effects. b) Decisions are optimized on the basis of the current "knowledge" in the system, and no controller lags behind the model network. c) The incorporation of additional constraints into the objective function at runtime is possible, as demonstrated. d) By using a probabilistic experience network the planning algorithm is able to act as a non-deterministic many-to-many controller. However, we have not yet investigated the latter point.

Acknowledgements The authors thank Jörg Kindermann and Frank Smieja for many fruitful discussions and Michael Contzen and Michael Faßbender for their help with the robot arm.

References
[And89] C. W. Anderson. Learning to control an inverted pendulum using neural networks. IEEE Control Systems Magazine, 9(3):31-37, 1989.
[Bar89] A. G. Barto. Connectionist learning for control: An overview. Technical Report COINS TR 89-89, Dept. of Computer and Information Science, University of Massachusetts, Amherst, MA, September 1989.
[Jor89] M. I. Jordan. Generic constraints on underspecified target trajectories.
In Proceedings of the First International Joint Conference on Neural Networks, Washington, DC, San Diego, 1989. IEEE TAB NN Committee.
[Mun87] P. Munro. A dual backpropagation scheme for scalar-reward learning. In Ninth Annual Conference of the Cognitive Science Society, pages 165-176, Hillsdale, NJ, 1987. Cognitive Science Society, Lawrence Erlbaum.
[Sut84] R. S. Sutton. Temporal Credit Assignment in Reinforcement Learning. PhD thesis, University of Massachusetts, 1984.
[TML90] S. Thrun, K. Möller, and A. Linden. Adaptive look-ahead planning. In G. Dorffner, editor, Proceedings KONNAI/OEGAI, Springer, Sept. 1990.
[TS90] S. Thrun and F. Smieja. A general feed-forward algorithm for gradient descent in connectionist networks. TR 483, GMD, FRG, Nov. 1990.
[WZ88] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. TR ICS Report 8805, Institute for Cognitive Science, University of California, San Diego, CA, 1988.
Simple Spin Models for the Development of Ocular Dominance Columns and Iso-Orientation Patches

J.D. Cowan & A.E. Friedman, Department of Mathematics, Committee on Neurobiology, and Brain Research Institute, The University of Chicago, 5734 S. Univ. Ave., Chicago, Illinois 60637

Abstract Simple classical spin models well-known to physicists as the ANNNI and Heisenberg XY Models, in which long-range interactions occur in a pattern given by the Mexican Hat operator, can generate many of the structural properties characteristic of the ocular dominance columns and iso-orientation patches seen in cat and primate visual cortex.

1 INTRODUCTION In recent years numerous models for the formation of ocular dominance columns (Malsburg, 1979; Swindale, 1980; Miller, Keller, & Stryker, 1989) and of iso-orientation patches (Malsburg, 1973; Swindale, 1982; Linsker, 1986) have been published. Here we show that simple spin models can reproduce many of the observed features. Our work is similar to, but independent of, a recent study employing spin models (Tanaka, 1990).

1.1 OCULAR DOMINANCE COLUMNS We use a one-dimensional classical spin Hamiltonian on a two-dimensional lattice with long-range interactions. Let σ_i be a spin vector restricted to the orientations ↑ and ↓ in the lattice space, and let the spin Hamiltonian be:

H_OD = −Σ_i Σ_{j≠i} w_ij σ_i · σ_j ,   (1)

where w_ij is the well-known "Mexican Hat" distribution of weights:

w_ij = a₊ exp(−|i−j|²/σ₊²) − a₋ exp(−|i−j|²/σ₋²).   (2)

Let s denote retinal fibers from the same eye and o fibers from the opposite eye. Then

H_OD = −Σ_i Σ_{j≠i} w_ij^s + Σ_i Σ_{j≠i} w_ij^o   (3)

represents the "energy" of interactions between fibers from the two eyes.

Figure 1. Pattern of ocular dominance which results from simulated annealing of the energy function H_OD. Light and dark shadings correspond respectively to the two eyes.
It is relatively easy to find a configuration of spins which minimizes H_OD by simulated annealing (Kirkpatrick, Gelatt & Vecchi, 1983). The result is shown in Figure 1. It will be seen that the resulting pattern of right and left eye spins σ_R and σ_L is disordered, but at a constant wavelength determined in large part by the space constants σ₊ and σ₋. Breaking the symmetry of the initial conditions (or letting the lattice grow systematically) results in ordered patterns. If H_OD is considered to be the energy function of a network of spins exhibiting gradient dynamics (Hirsch & Smale, 1974), then one can write equations for the evolution of spin patterns in the form:

dσ_i^α/dt = −∂H_OD/∂σ_i^α = Σ_{j≠i} w_ij^s σ_j^α − Σ_{j≠i} w_ij^o σ_j^β ,   (4)

where α = R or L and β = L or R respectively. Equation (4) will be recognized as that proposed by Swindale in 1979.

1.2 ISO-ORIENTATION PATCHES Now let σ_i represent a vector in the plane of the lattice which runs continuously from ↑ to ↓, without reference to eye class. It follows that

σ_i = |σ_i| (cos θ_i, sin θ_i),   (5)

where θ_i is the orientation of the ith spin vector. The appropriate classical spin Hamiltonian is:

H_IO = −Σ_i Σ_{j≠i} w_ij σ_i · σ_j = −Σ_i Σ_{j≠i} w_ij |σ_i| |σ_j| cos(θ_i − θ_j).   (6)

Physicists will recognize H_OD as a form of the Ising lattice Hamiltonian with long-range alternating next-nearest-neighbor interactions, a type of ANNNI model (Binder, 1986), and H_IO as a similar form of the Heisenberg XY Model for antiferromagnetic materials (Binder, 1986). Again one can find a spin configuration that minimizes H_IO by simulated annealing. The result is shown in Figure 2, in which six differing orientations are depicted, corresponding to 30° increments (note that θ + π is equivalent to θ).
It will be seen that there are long stretches of continuously changing spin vector orientations, with intercalated discontinuities and both clockwise and counter-clockwise singular regions around which the orientations rotate. A one-dimensional slice shows some of these features, and is shown in Figure 3.

Figure 2: Pattern of orientation patches obtained by simulated annealing of the energy function H_IO. Six differing orientations varying from 0° to 180° are represented by the different shadings.

Figure 3: Details of a one-dimensional slice through the orientation map (θ_i from 0° to 180° against cell number, 0 to 50). Long stretches of smoothly changing orientations are evident.

The length of σ_i is also correlated with these details. Figure 4 shows that |σ_i| is large in smoothly changing regions and smallest in the neighborhood of a singularity. In fact this model reproduces most of the details of iso-orientation patches found by Blasdel and Salama (1986).

Figure 4: Variation of |σ_i| along the same one-dimensional slice through the orientation map shown in Figure 3. The amplitude drops only near singular regions.

For example, the change in orientation per unit length, |grad θ_i|, is shown in Figure 5. It will be seen that the lattice is "tiled", just as in the data from visual cortex, with max |grad θ_i| located at singularities.

Figure 5: Plot of |grad θ_i| corresponding to the orientation map of Figure 2. Regions of maximum rate of change of θ_i are shown as shaded. These correspond with the singular regions of Figure 2.

Once again, if H_IO is taken to be the energy of a gradient dynamical system, there results the equation:

dσ_i/dt = −∂H_IO/∂σ_i = Σ_{j≠i} w_ij σ_j ,   (7)

which is exactly the equation introduced by Swindale in 1981 as a model for the structure of iso-orientation patches.
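The annealing recipe behind Figures 1 and 2 can be sketched on a small one-dimensional lattice: ±1 ocular-dominance spins, Mexican-Hat couplings as in (2), and a Metropolis acceptance rule with a slowly decreasing temperature. The lattice size, amplitudes, space constants, and cooling schedule below are all illustrative:

```python
import math, random

L = 40                        # 1-D lattice (the paper uses a 2-D lattice)
a_plus, a_minus = 1.0, 0.6    # illustrative Mexican-Hat amplitudes
s_plus, s_minus = 2.0, 6.0    # illustrative space constants

def w(d):
    """Mexican-Hat coupling as in eq. (2)."""
    return a_plus * math.exp(-d * d / s_plus**2) - a_minus * math.exp(-d * d / s_minus**2)

W = [[0.0 if i == j else w(i - j) for j in range(L)] for i in range(L)]

def energy(spins):
    return -0.5 * sum(W[i][j] * spins[i] * spins[j]
                      for i in range(L) for j in range(L))

random.seed(2)
spins = [random.choice((-1, 1)) for _ in range(L)]   # +1/-1 ~ right/left eye
E = energy(spins)
T = 2.0
for step in range(20000):
    i = random.randrange(L)
    # Energy change for flipping spin i (counts both halves of the double sum).
    dE = 2.0 * spins[i] * sum(W[i][j] * spins[j] for j in range(L))
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]
        E += dE
    T = max(0.01, T * 0.9995)   # slow geometric cooling
# spins now form alternating blocks at a wavelength set by s_plus and s_minus.
```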
There is an obvious relationship between such equations and recent similar treatments (Durbin & Mitchison, 1990; Schulten, 1990, preprint; Cherjnavsky & Moody, 1990).

2 CONCLUSIONS Simple classical spin models well-known to physicists as the ANNNI and Heisenberg XY Models, in which long-range interactions occur in a pattern given by the Mexican Hat operator, can generate many of the structural properties characteristic of the ocular dominance columns and iso-orientation patches seen in cat and primate visual cortex.

Acknowledgements This work is based on lectures given at the Institute for Theoretical Physics (Santa Barbara) Workshop on Neural Networks and Spin Glasses in 1986. We thank the Institute and The University of Chicago Brain Research Foundation for partial support of this work.

References
Malsburg, Ch.v.d. (1979), Biol. Cybern., 32, 49-62.
Swindale, N.V. (1980), Proc. Roy. Soc. Lond. B, 208, 243-264.
Miller, K.D., Keller, J.B. & Stryker, M.P. (1989), Science, 245, 605-611.
Malsburg, Ch.v.d. (1973), Biol. Cybern., 14, 85-100.
Swindale, N.V. (1982), Proc. Roy. Soc. Lond. B, 215, 211-230.
Linsker, R. (1986), PNAS, 83, 7508-7512; 8390-8394; 8779-8783.
Tanaka, S. (1990), Neural Networks, 3, 6, 625-640.
Kirkpatrick, S., Gelatt, C.D. Jr. & Vecchi, M.P. (1983), Science, 220, 671-680.
Hirsch, M.W. & Smale, S. (1974), Differential Equations, Dynamical Systems, and Linear Algebra (Academic Press, NY).
Binder, K. (1986), Monte Carlo Methods in Statistical Physics (Springer, NY).
Blasdel, G.G. & Salama, G. (1986), Nature, 321, 579-587.
Durbin, R. & Mitchison, G. (1990), Nature, 343, 6259, 644-647.
Schulten, K. (1990) (preprint).
Cherjnavsky, A. & Moody, J. (1990), Neural Computation, 2, 3, 334-354.
A Multiscale Adaptive Network Model of Motion Computation in Primates

H. Taichi Wang, Science Center, A18, Rockwell International, 1049 Camino Dos Rios, Thousand Oaks, CA 91360
Bimal Mathur, Science Center, A7A, Rockwell International, 1049 Camino Dos Rios, Thousand Oaks, CA 91360
Christof Koch, Computation & Neural Systems, Caltech, 216-76, Pasadena, CA 91125

Abstract We demonstrate a multiscale adaptive network model of motion computation in primate area MT. The model consists of two stages: (1) local velocities are measured across multiple spatio-temporal channels, and (2) the optical flow field is computed by a network of direction-selective neurons at multiple spatial resolutions. This model embeds the computational efficiency of multigrid algorithms within a parallel network and adaptively computes the most reliable estimate of the flow field across different spatial scales. Our model neurons show the same nonclassical receptive field properties as Allman's type I MT neurons. Since local velocities are measured across multiple channels, the various channels often provide conflicting measurements to the network. We have incorporated a veto scheme for conflict resolution. This mechanism provides a novel explanation for the spatial frequency dependency of the psychophysical phenomenon called Motion Capture.

1 MOTIVATION We previously developed a two-stage model of motion computation in the visual system of primates (i.e. the magnocellular pathway from retina to V1 and MT; Wang, Mathur & Koch, 1989). This algorithm has two deficiencies: (1) the issue of the optimal spatial scale for velocity measurement, and (2) the issue of the optimal spatial scale for the smoothness of the motion field. To address these deficiencies, we have implemented a multiscale motion network based on multigrid algorithms. All methods of estimating optical flow make a basic assumption about the scale of the velocity relative to the spatial neighborhood and to the temporal discretization step or delay.
Thus, if the velocity of the pattern is much larger than the ratio of the spatial to the temporal sampling step, an incorrect velocity value will be obtained (Battiti, Amaldi & Koch, 1991). Battiti et al. proposed a coarse-to-fine strategy for adaptively determining the optimal discretization grid by evaluating the local estimate of the relative error in the flow field due to discretization. The optimal spatial grid is the one minimizing this error. This strategy both leads to a superior estimate of the optical flow field and achieves the speedups associated with multigrid methods. This is important, given the large number of iterations needed for relaxation-based algorithms and the remarkable speed with which humans can reliably estimate velocity (on the order of 10 neuronal time constants). Our previous model was based on the standard regularization approach, which involves smoothing with weight λ. This parameter controls the smoothness of the computed motion field. The scale over which the velocity field is smooth depends on the size of the object: the larger the object, the larger the value of λ has to be. Since a real-life vision system has to deal with objects of various sizes simultaneously, there does not exist an "optimal" smoothness parameter. Our network architecture allows us to circumvent this problem by having the same smoothing weight λ at different resolution grids.

2 NETWORK ARCHITECTURE The overall architecture of the two-stage model is shown in Figure 1. In the first stage, local velocities are measured at multiple spatial resolutions. At each spatial resolution p, the local velocities are represented by a set of direction-selective neurons, u(i,j,k,p), whose preferred direction is θ_k (the component cells; Movshon, Adelson, Gizzi & Newsome, 1985). In the second stage, the optical flow field is computed by a network of direction-selective neurons (pattern cells) at multiple spatial resolutions,
v(i,j,k,p). In the following, we briefly summarize the network. We have used a multiresolution population coding:

v = Σ_{p=0..Nres−1} Σ_{k=0..Nor−1} I^p v(i,j,k,p) Θ_k ,   (1)

where Nor is the number of directions in each grid, Nres is the number of resolutions in the network, Θ_k is the unit vector along direction θ_k, and I is a 2-D linear interpolation operator (Brandt, 1982). In our single-resolution model, the input source, s_0(i,j,k), to a pattern cell v(i,j,k) was:

∂v(i,j,k)/∂t = s_0(i,j,k) = Σ_{k'} cos(θ_k − θ_{k'}) {u(i,j,k') − (û · v(i,j))} e(i,j,k') ,   (2)

where û is the unit vector in the direction of local velocity and e(i,j,k') is the local edge strength. For our multiscale network, we have used a convergent multi-channel source term; the source s_0^p to a pattern cell v(i,j,k,p) is:

s_0^p = Σ_{p'≤p} [ Π_{p''=p'+1..p} R_{p''−1}^{p''} ] s_0^{p'} ,   (3)

where R is a 2-D restriction operator. We use the full weighting operator instead of the injection operator because of the sparse nature of the input data. The computational efficiency of the multigrid algorithms has been embedded in our multiresolution network by a set of spatial-filtering synapses, s_1, written as:

s_1^p = α R_{p−1}^p v^{p−1} − β I_{p+1}^p R_p^{p+1} v^p ,   (4)

where α and β are constants.

Figure 1: The network architecture (retinal input; multichannel normal velocity measurement u(i,j,k,p), e(i,j,k,p); multiresolution motion field v(i,j,k,p)).

Figure 2: A coarse-to-fine veto scheme.

As discussed in section 1, the scale over which the velocity field is smooth depends on the size of the object. Consider, for example, an object of a certain size moving with a given velocity across the field of view. The multiresolution representation and the spatial-frequency-filtering connections will force the velocity field to be represented mostly by a few neurons whose resolution grid matches the size of the object. Therefore, the smoothness constraint should be enforced on the individual resolution grids.
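The two transfer operators used above can be sketched in one dimension: full-weighting restriction R (a 1-2-1 weighted average onto the coarse grid) and linear interpolation I back onto the fine grid. The 2-D operators act separably, and the grid sizes here are illustrative:

```python
def restrict(fine):
    """Full-weighting restriction: coarse[k] = (fine[2k-1] + 2*fine[2k] + fine[2k+1]) / 4."""
    n = (len(fine) - 1) // 2 + 1
    coarse = [0.0] * n
    coarse[0], coarse[-1] = fine[0], fine[-1]      # boundary points copied
    for k in range(1, n - 1):
        coarse[k] = 0.25 * fine[2*k - 1] + 0.5 * fine[2*k] + 0.25 * fine[2*k + 1]
    return coarse

def interpolate(coarse):
    """Linear interpolation back onto the fine grid."""
    fine = [0.0] * (2 * len(coarse) - 1)
    for k, v in enumerate(coarse):
        fine[2*k] = v                               # coincident points copied
    for m in range(1, len(fine), 2):
        fine[m] = 0.5 * (fine[m - 1] + fine[m + 1])  # midpoints averaged
    return fine

v = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]   # fine-grid signal
vc = restrict(v)         # -> [0.0, 2.0, 4.0, 6.0, 8.0]
vf = interpolate(vc)     # linear data survives the round trip exactly
```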
If membrane potential is used, the source for the smoothness term, S_2, at resolution grid p can be written as:

S_2^p(i,j,k) = λ Σ_{k'} cos(θ_k − θ_{k'}) [ v(i−1,j,k',p) + v(i+1,j,k',p) + v(i,j−1,k',p) + v(i,j+1,k',p) − 4v(i,j,k',p) ]   (5)

where λ is the smoothness parameter. The smoothing weight λ in our formulation is the same for each grid and is independent of object sizes. The network equation becomes:

∂v(i,j,k,p)/∂t = S_0^p + S_1^p + S_2^p   (6)

The multiresolution network architecture has a considerably more complicated synaptic connection pattern but only 33% more neurons than the single-resolution model, and the convergence is improved by about two orders of magnitude (as measured by the number of iterations needed).

3 CONFLICT RESOLUTION

The velocity estimated by our (or any other) motion algorithm depends on the spatial (Δx) and temporal (Δt) discretization steps used. Battiti et al. derived the following expression for the relative error in velocity due to incorrect derivative estimation:

δ = |Δu|/u ≈ (2π²/3λ²) [ (Δx)² − (uΔt)² ]   (7)

where u is the velocity and λ is the spatial frequency of the moving pattern. As the velocity u deviates from Δx = uΔt, the velocity measurement becomes less accurate. The scaling factor in (7) depends on the spatial filtering in the retina. Therefore, the choice of spatial discretization and spatial filtering bandwidth has to satisfy the requirements of both the sampling theorem and the velocity measurement accuracy. Even though (7) was derived based on the gradient model, we believe a similar constraint applies to correlation models. We model the receptive field profiles of primate retinal ganglion cells by Laplacian-of-Gaussian (LOG) operators. If we require that the accuracy of velocity measurement be within 10% over the range u = 0 to u = 2 (Δx/Δt), then the standard deviation, σ, of the Gaussian must be greater than or equal to Δx. What happens if velocity measurement at various scales gives inconsistent results? Consider, for example,
an object moving at a speed of 3 pixels/sec across the retina. As shown in Figure 2, channels p=1 and p=2 will give the correct measurement, since the speed is in the reliable ranges of these channels, as depicted by filled circles. The finest channel, p=0, on the other hand, will give an erroneous reading. This suggests a coarse-to-fine veto scheme for conflict resolution. We have incorporated this strategy in our network architecture by implementing a shunting term in Eq. (4). In this way, the erroneous input signals from the component cells at grid p=0 are shunted out (the open circles in Figure 2) by the component cells (the filled circles) at coarser grids.

4 MOTION CAPTURE

How does the human visual system deal with the potential conflicts among the various spatial channels? Is there any evidence for the use of such a coarse-to-fine conflict resolution scheme? We believe that the well-known psychophysical phenomenon of motion capture is the manifestation of this strategy. When human subjects are presented with a sequence of randomly moving random-dot patterns, we perceive random motion. Ramachandran and Anstis (1983) found, surprisingly, that our perception of it can be greatly influenced by the movement of a superimposed low-contrast, low-spatial-frequency grating. They found that human subjects have a tendency to perceive the random dots as moving with the grating, as if the random dots adhere to it. For a given spatial frequency of the grating, the percentage of capture is highest when the phase shift between frames of the grating is about 90°. Even more surprisingly, the lower the spatial frequency of the grating, the higher the percentage of capture. Other researchers (e.g., Yuille & Grzywacz, 1988) and we have attempted to explain this phenomenon based on the smoothness constraint on the velocity field.
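The veto criterion above can be sketched numerically. In this illustrative Python fragment, the reliable range of a channel follows the 10% accuracy criterion stated in the text (u ≤ 2 Δx/Δt); the dyadic grid spacing Δx = 2^p and all function names are our own assumptions, not part of the model:

```python
def reliable(u, dx, dt=1.0):
    """A channel with spatial step dx measures speed u to within ~10%
    only if u <= 2*dx/dt (the accuracy criterion stated in the text)."""
    return u <= 2.0 * dx / dt

def veto(u, n_res=3, dt=1.0):
    """Coarse-to-fine veto: each grid p has spacing dx = 2**p (an assumption
    of this sketch); channels outside their reliable range are shunted out."""
    return [p for p in range(n_res) if reliable(u, 2.0 ** p, dt)]

# An object moving at 3 pixels/frame: the finest channel (p=0) is vetoed,
# while p=1 and p=2 give correct measurements, as in Figure 2.
surviving = veto(3.0)
```

In the network itself this selection is not a discrete test but the shunting term in Eq. (4); the sketch only shows which channels end up contributing.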
However, smoothness alone cannot explain the dependencies on the spatial frequency and the phase shift of the gratings. The coarse-to-fine shunting scheme provides a natural explanation of these dependencies. We have simulated the spatial frequency and phase shift dependency. The results are shown in Figure 3. In these simulations, we plotted the relative uniformity of the motion-captured velocity fields. A uniformity of 1 signifies total capture. As can be seen clearly, for a given spatial frequency, the effect of capture increases with phase shift, and for a given phase shift, the effect of capture also increases as the spatial frequency becomes lower. The lower spatial frequency gratings are more effective because the coarser the channels are, the more finer component cells can be effectively shunted out, as is clear from the receptive field relationship shown in Figure 2.

5 NONCLASSICAL RECEPTIVE FIELD

Traditionally, physiologists use isolated bars and slits to map out the classical receptive field (CRF) of a neuron, which is the portion of the visual field that can be directly stimulated. Recently, there is mounting evidence that in many visual neurons, stimuli presented outside the CRF strongly and selectively influence neural responses to stimuli presented within the CRF. This is termed the nonclassical receptive field. Allman, Miezin & McGuinness (1985) have found that the true receptive field of more than 90% of neurons in the middle temporal (MT) area extends well beyond their CRF. The surrounds commonly have directional and velocity-selective influences that are

Figure 3 (uniformity (%) vs. spatial phase in degrees, for three grating wavelengths).
Spatial frequency dependency of Motion Capture.

Figure 4. Simulation of Allman's type I non-classical receptive field properties. (Top panel: center dots move, background dots stationary; normalized response vs. direction of movement of center dots. Bottom panel: center dots move in the optimum direction while the background direction varies; response vs. direction of movement of background dots. Model neuron compared with Allman's type I neuron.)

antagonistic to the response from the CRF. Based on the surround selectivity, the MT neurons can be classified into three types. Our model neurons show the same type of nonclassical receptive field selectivity as Allman's type I neurons. We have performed a series of simulations similar to Allman's original experiments. After the CRF of a model neuron is determined, the optimal motion stimulus is presented within the CRF. The surrounds are, however, moved by the same amount but in various directions. Clearly, the motion in the surround has a profound effect on the activity of the cell we are monitoring. The effect of the surround motion on the cell, as a function of the direction of surround motion, is plotted in Figure 4 (b). When the surround is moved in a similar direction as the center, the activity of the cell is almost totally suppressed. On the other hand, when the surround is moved opposite to the center, the cell's activity is enhanced. Superimposed on Figure 4 are the similar plots from Allman's paper.
6 CONCLUSION

In conclusion, we have developed a multi-channel, multi-resolution network model of motion computation in primates. The model MT neurons show nonclassical surround properties similar to those of Allman's type I cells. We also proposed a novel explanation of the motion capture phenomenon based on a coarse-to-fine strategy for conflict resolution among the various input channels.

Acknowledgements

CK acknowledges ONR, NSF and the James McDonnell Foundation for supporting this research.

References

Allman, J., Miezin, F., and McGuinness, E. (1985) "Direction- and velocity-specific responses from beyond the classical receptive field in the middle temporal visual area (MT)", Perception, 14, 105-126.

Battiti, R., Koch, C. and Amaldi, E. (1991) "Computing optical flow across multiple scales: an adaptive coarse-to-fine approach", to appear in Intl. J. Computer Vision.

Brandt, A. (1982) "Guide to multigrid development". In: Multigrid Methods, Eds. Dold, A. and Eckmann, B., Springer-Verlag.

Movshon, J.A., Adelson, E.H., Gizzi, M.S., and Newsome, W.T. (1985) "The Analysis of Moving Visual Patterns". In: Pattern Recognition Mechanisms, Eds. Chagas, C., Gattass, R., Gross, C.G., Rome: Vatican Press.

Ramachandran, V.S. and Anstis, S.M. (1983) "Displacement thresholds for coherent apparent motion in random dot-patterns", Vision Res., 23(12), 1719-1724.

Yuille, A.L. and Grzywacz, N.M. (1988) "A computational theory for the perception of coherent visual motion", Nature, 333, 71-74.

Wang, H.T., Mathur, B.P. and Koch, C. (1989) "Computing optical flow in the primate visual system", Neural Computation, 1(1), 92-103.
Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization

Stephen Jose Hanson*
Learning and Knowledge Acquisition Group
Siemens Corporate Research
Princeton, NJ 08540

Mark A. Gluck
Center for Molecular & Behavioral Neuroscience
Rutgers University
Newark, NJ 07102

Abstract

Spherical units can be used to construct dynamic reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power set of cues, (2) consequential regions are continuous rather than discrete, and (3) competition amongst receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian, used in the models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as models of human generalization and learning.

1 The Generalization Problem

Given a favorable or unfavorable consequence, what should an organism assume about the contingent stimuli? If a moving shadow overhead appears prior to a hawk attack, what should an organism assume about other moving shadows, their shapes and positions? If a dense food patch is occasioned by a particular density of certain kinds of shrubbery, what should the organism assume about other shrubbery, vegetation or its spatial density? In a pattern recognition context, given that a character of a certain shape, orientation, noise level, etc.
has been recognized correctly, what should the system assume about other shapes, orientations and noise levels it has yet to encounter?

* Also a member of Cognitive Science Laboratory, Princeton University, Princeton, NJ 08544

Many "generalization" theories assume stimulus similarity represents a "failure to discriminate", rather than a cognitive decision about what to assume is consequential about the stimulus event. In this paper we implement a generalization theory with a multilayer architecture and localized kernel functions (cf. Cooper, 1962; Albus, 1975; Kanerva, 1984; Hanson & Burr, 1987, 1990; Niranjan & Fallside, 1988; Moody & Darken, 1989; Nowlan, 1990; Kruschke, 1990) in which the learning system constructs hypotheses about novel stimulus events.

2 Shepard's (1987) Generalization Theory

Considerable empirical evidence indicates that when stimuli are represented within a multi-dimensional psychological space, similarity, as measured by stimulus generalization, drops off in an approximately exponential-decay fashion with psychological distance (Shepard, 1957, 1987). In comparison to a linear function, a similarity-distance relationship with upward concave curvature, such as an exponential-decay curve, exaggerates the similarity of items which are nearby in psychological space and minimizes the impact of items which are further away. Recently, Roger Shepard (1987) has proposed a "Universal Law of Generalization" for stimulus generalization which derives this exponential-decay similarity-distance function as a "rational" strategy given minimal information about the stimulus domain (see also Shepard & Kannappan, this volume).
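The contrast between a linear and an exponential similarity-distance function can be seen numerically. In this toy Python fragment (the decay constant, the linear cut-off, and the function names are arbitrary choices of ours, not values from the literature), the exponential rule weights a nearby item far more heavily relative to a distant one than the linear rule does:

```python
import math

def exp_similarity(d, tau=1.0):
    """Shepard's exponential-decay generalization gradient."""
    return math.exp(-d / tau)

def linear_similarity(d, d_max=5.0):
    """A linear fall-off reaching zero at d_max, for comparison."""
    return max(0.0, 1.0 - d / d_max)

# How strongly each rule favors a near item (d = 0.5) over a far one (d = 3):
ratio_exp = exp_similarity(0.5) / exp_similarity(3.0)   # e**2.5, about 12.2
ratio_lin = linear_similarity(0.5) / linear_similarity(3.0)  # 0.9 / 0.4 = 2.25
```

The exponential ratio dwarfs the linear one, which is exactly the "exaggerates the similarity of nearby items" property described above.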
To derive the exponential-decay similarity-distance rule, Shepard (1987) begins by assuming that stimuli can be placed within a psychological space such that the response learned to any one stimulus will generalize to another according to an invariant monotonic function of the distance between them. If a stimulus, O, is known to have an important consequence, what is the probability that a novel test stimulus, X, will lead to the same consequence? Shepard shows, through arguments based on probabilistic reasoning, that regardless of the a priori expectations for regions of different sizes, this expectation will almost always yield an approximately exponentially decaying gradient away from a central memory point. In particular, very simple geometric constraints can lead to the exponential generalization gradient. Shepard (1987) assumes (1) that the consequential region overlaps the consequential stimulus event, and (2) bounded, center-symmetric consequential regions of unknown shape and size. In the 1-dimensional case it can be shown that g(x) is robust over a wide variety of assumptions for the distribution of p(s); although for p(s) exactly the Erlangian or discrete Gamma, g(x) is exactly exponential. We now investigate possible ways to implement a model which can learn consequential regions and appropriate generalization behavior (cf. Shepard, 1990).

3 Gluck & Bower's Configural-cue Network Model

The first point of contact is a discrete model due to Gluck and Bower: the configural-cue network model (Gluck & Bower, 1988). The network model adapts its weights (associations) according to Rescorla and Wagner's (1972) model of classical conditioning, which is a special case of Widrow & Hoff's (1960) Least-Mean-Squares (LMS) algorithm for training one-layer networks. Presentation of a stimulus pattern is represented by activating nodes on the input layer which correspond to the pattern's elementary features and pair-wise conjunctions of features.
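The Rescorla-Wagner/LMS learning rule just described can be sketched in a few lines. In this illustrative Python fragment, the learning rate, the binary coding of the features and their conjunction, and all names are our assumptions, not details from the original model:

```python
def lms_update(weights, x, target, lr=0.1):
    """One Rescorla-Wagner / LMS step: every active input node (elementary
    feature or pairwise conjunction) shares the same prediction error."""
    prediction = sum(w * xi for w, xi in zip(weights, x))
    error = target - prediction
    return [w + lr * error * xi for w, xi in zip(weights, x)]

# Two elementary features plus their conjunction (the configural cue):
w = [0.0, 0.0, 0.0]
for _ in range(100):
    w = lms_update(w, [1, 1, 1], 1.0)  # pattern AB paired with the outcome
prediction = sum(wi * xi for wi, xi in zip(w, [1, 1, 1]))  # converges toward 1.0
```

Because all active cues share the error, the cues compete for associative strength, which is the competitive property the text later argues should be retained.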
The configural-cue network model implicitly embodies an exponential generalization (similarity) gradient (Gluck, 1991) as an emergent property of its stimulus representation scheme. This equivalence can be seen by computing how the number of overlapping active input nodes (similarity) changes as a function of the number of overlapping component cues (distance). If a stimulus pattern is associated with some outcome, the configural-cue model will generalize this association to other stimulus patterns in proportion to the number of common input nodes they both activate. Although the configural-cue model has been successful with various categorization data, it has several limitations: (1) it is discrete and cannot deal adequately with continuous stimuli, (2) it possesses a non-adaptable internal representation, and (3) it can involve pre-wiring the power set of possible cues. Nonetheless, several properties that make the configural-cue model successful are important to retain in generalizations of this model: (a) the competitive stimulus properties deriving from the delta rule, and (b) the exponential stimulus generalization property deriving from the successive combinations of higher-order features encoded by hidden units.

4 A Continuous Version of Shepard's Theory

We derive in this section a new model which generalizes the configural-cue model and derives directly from Shepard's generalization theory. Figure 1 shows a one-dimensional depiction of the present theory. Similar to Shepard, we assume there is a consequential

Figure 1: Hypothesis Distributions based on Consequential Region

region associated with a significant stimulus event, O. Also similar to Shepard, we assume the learning system knows that the significant stimulus event is contained in the consequential region, but does not know the size or location of the consequential region.
In the absence of this information, the learning system constructs hypothesis distributions (Figure 1) which may or may not be contained in the consequential region but at least overlap the significant stimulus event with some finite probability measure. In some hypothesis distributions the significant stimulus event is "typical" of the consequential region; in other hypothesis distributions the significant stimulus event is "rare". Consequently, the present model differs from Shepard's approach in that the learning system uses the consequential region to project into a continuous hypothesis space in order to construct the conditional probability of the novel stimulus, X, given the significant stimulus event O. Given no further information on the location and size of the consequential region, the learning system averages over all possible locations (equally weighted) and all possible (equally weighted) variances over the known stimulus dimension:

g(x) = ∫_S ∫_C p(S) p(C) H(x, S, C) dC dS   (1)

In order to derive particular gradients we must assume particular forms for the hypothesis distribution, H(x, s, c). Although we have investigated many different hypothesis distributions and weighting functions (p(c), p(s)), we only have space here to report on two bounding cases, one with very "light tails", the Gaussian, and one with very "heavy tails", the Cauchy (see Figure 2). These two distributions are extremes and provide a test of the robustness of the generalization gradient. At the same time, they represent different commitments to the amount of overlap of hidden unit receptive fields and the consequent amount of stimulus competition during learning.
Figure 2: Gaussian compared to the Cauchy: note the heavier Cauchy tail

The integral in Eq. (1) was numerically integrated (using Mathematica) over a large range of variances and a large range of locations, using uniform densities for the weighting functions and both Gaussian and Cauchy distributions for the hypothesis distributions. Shown in Figure 3 are the results of the integrations for both the Cauchy and Gaussian distributions. The resultant gradients are shown by open circles (Cauchy) or stars (Gaussian), while the solid lines show the best-fitting exponential gradient. We note that they approximate the derived gradients rather closely, in spite of the fact that the underlying forms are quite complex; for example, the curve shown for the Cauchy integration is actually:

−5 arctan(x − c₂) + 0.01[arctan(100(x − c₁))] + 5 arctan(x + c₁) + 0.01[arctan(100(x + c₂))] − ½[(c₂ + x)log(1 − s₁x + x²) + (c₁ − x)log(s₂ − s₁x + x²)] − ½[(c₁ − x)log(1 + s₁x + x²) + (c₂ + x)log(s₂ + s₁x + x²)]   (2)

Consequently, we confirm Shepard's original observation, for a continuous version¹ of his theory, that the exponential gradient is a robust consequence of a minimum-information set of assumptions about generalization to novel stimuli.

Figure 3: Generalization Gradients Compared to Exponential (Solid Lines)

4.1 Cauchy vs Gaussian

As pointed out before, the Cauchy has heavier tails than the Gaussian and thus provides more global support in the feature space. This leads to two main differences in the hypothesis distributions: (1) Global vs. local support: unlike back-propagation with hyperplanes, the Cauchy can be local in the feature space, and unlike the Gaussian it can have a more global effect.
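A minimal re-creation of this numerical integration can be written with only the standard library. This sketch assumes, as our own simplifications, a symmetric uniform grid of candidate locations, a uniform grid of scales, and a weighting of each Cauchy hypothesis by its overlap with the consequential stimulus O at the origin; it checks only the qualitative result that the averaged gradient decays with distance, as in Figure 3:

```python
import math

def cauchy(x, c, s):
    """Cauchy density with location c and scale s."""
    return (1.0 / math.pi) * s / (s * s + (x - c) ** 2)

def g(x, c_max=10.0, s_max=10.0, n=60):
    """Sketch of Eq. (1): average, over uniformly weighted locations c and
    scales s, the hypothesis density at x weighted by its overlap with the
    consequential stimulus O at the origin."""
    total = 0.0
    for i in range(n):
        c = -c_max + 2.0 * c_max * (i + 0.5) / n
        for j in range(n):
            s = s_max * (j + 1.0) / n
            total += cauchy(0.0, c, s) * cauchy(x, c, s)
    return total / (n * n)

gradient = [g(x) for x in (0.0, 5.0, 10.0, 20.0, 40.0)]
# gradient falls off monotonically with distance from O
```

Swapping the Cauchy for a Gaussian density in the inner loop reproduces the "light-tailed" case the text contrasts it with.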
(2) Competition, not dimensional scaling: dimensional "attention" in the configural-cue and Cauchy multilayer network models is based on competition and effective allocation of resources during learning, rather than dimensional contraction or expansion.

¹ N-dimensional versions: we generalize the above continuous 1-d model to an N-dimensional model by assuming that a network of Cauchy units can be used to construct a set of consequential regions, each possibly composed of several Cauchy receptive fields. Consequently, dimensions can be differentially weighted: subsets of Cauchy units acting in concert could produce metrics like the L1 norm in separable (e.g., shape, size of arbitrary forms) dimension cases, while equally weighting dimensions, similar to metrics like the L2 norm, in integral (e.g., lightness, hue in color) dimension cases.

Since the stimulus generalization properties of both hypothesis distributions are indistinguishable (both close to exponential), it is important to compare categorization results based on a multilayer gradient descent model using both the Cauchy and Gaussian as hidden node functions.

5 Comparisons with Human Categorization Performance

We consider in this final section two experiments from the human learning literature which constrain categorization results. The model was a multilayer network using standard gradient descent in the radius, location and second-layer weights of either Cauchy or Gaussian functions in hidden units.

5.1 Shepard, Hovland and Jenkins (1961)

In order to investigate adults' ability to learn simple classifications, SH&J used eight 3-dimensional stimuli (corners of the cube) representing separable stimulus dimensions like shape, color or size. Of the 70 possible 4-exemplar dichotomies there are only six unique 4-exemplar dichotomies which ignore the specific stimulus dimension.
Figure 4: Classification Learning Rate for Gaussian and Cauchy on SHJ stimuli

These dichotomies involve both linearly separable and nonlinearly separable classifications, as well as selective dependence on a specific dimension or dimensions. For both measures, trials to learn and the number of errors made during learning, the order of difficulty was (easiest) I < II < III < IV < V < VI (hardest). In Figure 4, the Cauchy model and the Gaussian model are compared on the SHJ stimuli. Note that the Gaussian model misorders the six classification tasks, I < IV < III < II < V < VI, while the Cauchy model conforms with the human performance.

5.2 Medin and Schwanenflugel (1981)

Data suitable to illustrate the implications of this non-linear stimulus generalization gradient for classification learning are provided by Medin and Schwanenflugel (1981). They contrasted the performance of groups of subjects learning pairs of classification tasks, one of which was linearly separable (LS) and one of which was not (NLS).

Figure 5: Subjects (a), Cauchy (b), Gaussian (c) and Backprop (d) learning performance on the M&S stimuli

An important difference between the tasks lies in how the between-category and within-category distances are distributed. The linearly separable task is composed of many
Medin and Schwanenflugel reported reliable and complete results with a four-dimensional task that embodied the same controls for linear separability and inter-exemplar similarities. To evaluate the relative difficulty of the two tasks, Medin & Schwanenflugel compared the average learning curves of subjects trained on these stimuli. Subjects found the linearly separable task (LS) more difficult than the non-linearly separable task (NLS), as indicated by the reduced percentage of errors for the NLS task at all points during training (see next Figure 5--Subjects, top left) In Figure 5 is shown 10 runs of the Cauchy model (top right) note that it, similar to the human performance, had more difficulty with the LS than the NLS separable task. Below this frame is the results for the Gaussian model (bottom left) which does show a slight advantage of learning the NLS task over the LS task. While in the final frame (bottom right) of this series standard backprop actually reverses the speed of learning of each task relative to human performance. 6 Conclusions A continuous version of Shepard's (1987) generalization theory was derived providing for a specific Mass/Activation function (Cauchy) and receptive field distribution. The Cauchy activation function is shown to account for a range of human learning performance while another Mass/Activation function (Gaussian) does not. The present model also generalizes the Configural Cue model to continuous, dynamic, internal representation. Attention like effects are obtained through competition of Cauchy units as a fixed resource rather than dimensional "shrinking" or "expansion" as in an explicit rescaling of each axes. Cauchy units are a compromise; providing more global support in approximation than gaussian units and more local support than the hyperplane/logistic units in backpropagation models. References Albus, J. S. 
(1975) A new approach to manipulator control: The cerebellar model articulation controller (CMAC). American Society of Mechanical Engineers, Transactions G (Journal of Dynamic Systems, Measurement and Control), 97(3), 220-227.

Cooper, P. (1962) The hypersphere in pattern recognition. Information and Control, 5, 324-346.

Gluck, M. A. (1991) Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2(1), 1-6.

Gluck, M. A. & Bower, G. H. (1988) Evaluating an adaptive network model of human learning. Journal of Memory and Language, 27, 166-195.

Hanson, S. J. and Burr, D. J. (1987) Knowledge representation in connectionist networks. Bellcore Technical Report.

Hanson, S. J. and Burr, D. J. (1990) What connectionist models learn: Learning and representation in neural networks. Behavioral and Brain Sciences.

Kanerva, P. (1984) Self-propagating search: A unified theory of memory. Ph.D. Thesis, Stanford University.

Kruschke, J. (1990) A connectionist model of category learning. Ph.D. Thesis, UC Berkeley.

Medin, D. L. & Schwanenflugel, P. J. (1981) Linear separability in classification learning. Journal of Experimental Psychology: Human Learning and Memory, 7, 355-368.

Moody, J. and Darken, C. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2), 281-294.

Niranjan, M. & Fallside, F. (1988) Neural networks and radial basis functions in classifying static speech patterns. Technical Report CUED/F-INFENG/TR 22, Cambridge University.

Nowlan, S. (1990) Maximum likelihood competition in RBF networks. Technical Report CRG-TR-90-2, University of Toronto.

Rescorla, R. A. & Wagner, A. R. (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory, 64-99. Appleton-Century-Crofts: New York.

R. N.
Shepard (1958) Stimulus and response generalization: Deduction of the generalization gradient from a trace model. Psychological Review, 65, 242-256.

Shepard, R. N. (1987) Toward a universal law of generalization for psychological science. Science, 237.

Shepard, R. N., Hovland, C. I. & Jenkins, H. M. (1961) Learning and memorization of classifications. Psychological Monographs, 75, 1-42.

Widrow, B. & Hoff, M. E. (1960) Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Show and Convention, Convention Record, 4, 96-104.
Speech Recognition using Connectionist Approaches

Khalid Choukri
SPRINT Coordinator
CAP GEMINI INNOVATION
118 rue de Tocqueville, 75017 Paris, France
e-mail: choukri@capsogeti.fr

Abstract

This paper is a summary of the SPRINT project's aims and results. The project focuses on the use of neuro-computing techniques to tackle various problems that remain unsolved in speech recognition. First results concern the use of feedforward nets for phonetic unit classification, isolated word recognition, and speaker adaptation.

1 INTRODUCTION

Speech is a complex phenomenon, but it is useful to divide it into levels of representation. Connectionist paradigms and their particularities are exploited to tackle the major problems related to intra- and inter-speaker variabilities, in order to improve recognizer performance. For that purpose the project has been split into individual tasks, which are depicted below:

Signal -> Parameters -> Phonetic -> Lexicon

The work described herein concerns:

• Parameters-to-Phonetic: classification of speech parameters using a set of "phonetic" symbols and extraction of speech features from the signal.
• Parameters-to-Lexical: classification of a sequence of feature vectors by lexical access (isolated word recognition) in various environments.
• Parameters-to-Parameters: adaptation to new speakers and environments.

The following sections summarize the work carried out within this project. Details, including descriptions of the different nets, are reported in the project deliverables (Choukri, 1990), (Bimbot, 1990), (Varga, 1990).

2 PARAMETERS-TO-PHONETIC

The objectives of this task were to assess various neural network topologies, and to examine the use of prior knowledge in improving results, in the process of acoustic-phonetic decoding of natural speech.
These results were compared with classical pattern classification approaches such as k-nearest neighbour (k-NN) classifiers, dynamic programming, and k-means.

2.1 DATABASES

The speech was uttered by one male speaker in French. Two databases were used: DB_1, made of isolated nonsense words (logatomes), which contains 6672 phonemes, and DB_2, provided by the recording of 200 sentences, which contains 5270 phonemes. DB_2 was split equally into training and test sets (2635 data each). 34 different labels were used: one per phoneme (not per allophone) and one for silence. For each phoneme occurrence, 16 frames of signal (8 on each side of the label) were processed to provide a 16-coefficient Mel-scaled filter-bank vector.

2.2 CLASSICAL CLASSIFIERS

Experiments using k-NN and k-means classifiers were conducted to check the consistency of the data and to obtain some reference scores. A first protocol considered each pattern as a 256-dimension vector and applied k-nearest neighbours with the Euclidean distance between references and tests. A second protocol attempted to decrease the influence of time misalignments by carrying out Dynamic Time Warping (DTW) between references and tests and taking the sum of distances along the best path as a distance measure between patterns. The same data were used in the framework of a k-means classifier, for various values of k (the number of representatives per class). The best results are:

    Method              Score
    k-means (k > 16)    61.3 %
    k-NN (k = 5)        72.2 %
    k-NN + DTW (k = 5)  77.5 %

2.3 NEURAL CLASSIFIERS

2.3.1 LVQ Classifiers

Experiments were conducted using the Learning Vector Quantization (LVQ) technique (Bennani, 1990). The weight initialization procedure proved to be an important parameter for classification performance. We compared three initialization algorithms: k-means, LBG, and Multiedit. With k-means and LBG, tests were conducted with different numbers of reference vectors, while for Multiedit, the algorithm automatically discovers representative vectors in the
With k-means and LBG, tests were conducted with different numbers of reference vectors, while for Multiedit the algorithm discovers representative vectors in the training set automatically, so their number is not specified in advance. Initialization by LBG gave better performance for self-consistency (evaluation on the training database, DB_1), whereas test performance on DB_2 (sentences) was similar for all procedures and very low. Further experiments were carried out on DB_2 both for training and testing. LBG initialization with 16 and 32 classes was tried (since these gave the best performances in the previous experiment). Even though the self-consistency for sentences is slightly lower than for logatomes, the recognition scores are far better, as illustrated here:

16 refs per class: K-means 60.3 %, LBG -> LVQ 62.4 % -> 66.1 %
32 refs per class: K-means 61.3 %, LBG -> LVQ 63.2 % -> 67.2 %

This experiment and some others (not presented here) (Bimbot, 1990) confirm that the failure of the previous experiments is due more to a mismatch between the corpora for this recognition method than to an inadequacy of the classification technique itself. 2.3.2 The Time-Delay Neural Network (TDNN) Classifiers A TDNN, as introduced by A. Waibel (Waibel, 1987), can be described by its set of topological parameters, i.e.: M0xN0/P0,S0 - M1xN1/P1,S1 - M2xN2 - Kx1. In the following, a "TDNN-derived" network has a similar architecture, except that M2 is not constrained to be equal to K, and the connectivity between the last 2 layers is full. Various TDNN-derived architectures were tested on recognizing phonemes from sentences (DB_2) after learning on the logatomes (DB_1). The best results are given below:
Structure / self-consist. / reco score:
16x16/2,1 - 8x15/7,2 - 5x5 - 34x1: 63.9 % / 48.1 %
16x16/2,1 - 16x15/7,4 - 11x3 - 34x1: 75.1 % / 54.8 %
16x16/4,1 - 16x13/5,2 - 16x5 - 34x1: 81.0 % / 60.5 %
16x16/2,1 - 16x15/7,4 - 16x3 - 34x1: 79.8 % / 60.8 %

The first net is clearly not powerful enough for the task, so the number of free parameters had to be increased. This immediately improved the results, as can be seen for the other nets. The third and fourth nets have equivalent performance; they differ in the widths and delays of the local windows. Other tested architectures did not increase this performance. The main difference between training and test sets is certainly the different speaking rate, and therefore the existence of important time distortions. TDNN-derived architectures seem better able to handle this kind of distortion than LVQ, as the generalization performance is significantly higher for similar learning self-consistency, but both fail to remove all temporal misalignment effects. In order to improve classification performance we changed the cost function minimized by the network: the error term corresponding to the desired output is multiplied by a constant H greater than 1, the terms of the error corresponding to the other outputs being left unchanged, to compensate for the deficiency of the simple mean square error procedure. We obtained our best results with the best TDNN-derived net we experimented with, for H = 2:

Database / Net / self-consist. / reco score:
DB_1: 16x16/4,1 - 16x13/5,2 - 16x5 - 34x1: 87.0 % / 63.0 %
DB_2: 16x16/4,1 - 16x13/5,2 - 16x5 - 34x1: 87.0 % / 78.0 %

Too small a number of independent weights (a too low-dimensioned TDNN-derived architecture) makes the problem too constrained. A well-chosen TDNN-derived architecture can perform as well as the best k-nearest neighbours strategy.
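The modified cost function described above can be sketched as follows: a plain squared error in which the term for the desired output is scaled by H > 1 while the other output terms are left unchanged. Variable names are mine, not the project's.

```python
import numpy as np

def weighted_mse(output, target_index, H=2.0):
    """Squared error with the desired-output term weighted by H > 1,
    leaving the terms for the other outputs unchanged."""
    t = np.zeros_like(output)
    t[target_index] = 1.0            # one-of-K target vector
    err = (output - t) ** 2
    err[target_index] *= H           # emphasize the desired output
    return 0.5 * float(err.sum())
```

With H = 1 this reduces to the ordinary mean-square-error criterion; H = 2 was the value that gave the best results above.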
Performance gets lower for data that mainly differ by a significant speaking-rate mismatch, which could indicate that TDNN-derived architectures do not manage to handle all kinds of time distortions. It is therefore encouraging to combine different networks and classical methods to deal with the temporal and sequential aspects of speech. 2.3.3 Combination of TDNN and LVQ A set of experiments using a combined TDNN-derived network and LVQ architecture was conducted. For these experiments, we used the best nets found in the previous experiments. The main parameter of these experiments is the number of hidden cells in the last layer of the TDNN-derived network, which is the input layer of the LVQ (Bennani, 1990). Evaluation on DB_1 with various numbers of references per class gave the following recognition scores:

4 / 8 / 16 refs per class:
TDNN + k-means: 76.2 % / 78.1 % / 79.8 %
TDNN + LBG: 77.7 % / 79.9 % / 81.3 %
TDNN + LVQ (LBG for initialization): 78.4 % / 82.1 % / 81.4 %

The best results were obtained with 8 references per class and the LBG algorithm to initialize the LVQ module. The best performance on the test set (82.1 %) represents a significant increase (4 %) compared to the best TDNN-derived network. Other experiments were performed on TDNN + LVQ using a modified LVQ architecture, presented in (Bennani, 1990), which is an extension of LVQ built to automatically weight the variables according to their importance for the classification. We obtain a recognition score of 83.6 % on DB_2 (training and tests on sentences). We also used low-dimensioned TDNNs for discriminating between phonetic features (Bimbot, 1990), assuming that phonetics will provide a description of speech that will appropriately constrain a neural network a priori, the TDNN structure warranting the desirable property of shift invariance. The feature extraction approach can be considered as another way to use prior knowledge for solving a complex problem with neural networks.
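The LVQ module used in these combinations follows, in its simplest form, the LVQ1 update rule: move the nearest reference vector toward inputs of its own class and away from inputs of other classes. A minimal sketch, with a hand-picked codebook standing in for the k-means or LBG initialization discussed above:

```python
import numpy as np

def lvq1_train(X, y, codebook, labels, lr=0.05, epochs=10):
    """LVQ1: for each input, find the nearest reference vector; pull it
    toward the input if the class matches, push it away otherwise."""
    W = codebook.astype(float).copy()
    for _ in range(epochs):
        for x, c in zip(X, y):
            j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # nearest reference
            if labels[j] == c:
                W[j] += lr * (x - W[j])
            else:
                W[j] -= lr * (x - W[j])
    return W
```

Classification then assigns each test vector the label of its nearest trained reference, exactly as in the nearest-neighbour baseline but with a small learned codebook.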
The results obtained in these experiments are an interesting starting point for designing a large modular network where each module is in charge of a simple task, directly related to a well-defined linguistic phenomenon (Bimbot, 1990). 2.4 CONCLUSIONS Experiments with LVQ alone, a TDNN-derived network alone, and combined TDNN-LVQ architectures proved the combined architecture to be the most efficient with respect to our databases, as summarized below (training and tests on DB_2):

k-means: 61.3 %
LVQ: 67.2 %
k-nn: 72.2 %
k-nn + DTW: 77.5 %
TDNN: 78.0 %
TDNN + LVQ: 83.6 %

3 PARAMETERS-TO-LEXICAL The main objective of this task is to use neural nets for the classification of a sequence of speech frames into lexical items (isolated words). Many factors affect the performance of automatic speech recognition systems. They have been categorized into those relating to speaker-independent recognition mode, the time evolution of speech (time representation of the neural network input), and the effects of noise. The first two topics are described herein, while the third is described in (Varga, 1990). 3.1 USE OF VARIOUS NETWORK TOPOLOGIES Experiments were carried out to examine the performance of several network topologies such as those evaluated in section 2. A TDNN can be thought of as a single Hidden Markov Model state spread out in time. The lower levels of the network are forced to be shift-invariant, and instantiate the idea that the absolute time of an event is not important. Scaly networks are similar to TDNNs in that the hidden units of a scaly network are fed by partially overlapping input windows. As reported in previous sections, LVQ proved to be efficient for the phoneme classification task and an "optimal" architecture was found as a combination of a TDNN and LVQ. It was used herein for isolated word recognition.
From experiments reported in detail in (Varga, 1990), there seems little justification for fully-connected networks with their thousands of weights when TDNNs and Scaly networks with hundreds of weights have very similar performance. This performance is about 83 % (the nearest-class-mean classifier gave 69 %) on the E-set database (a portion of the larger CONNEX alphabet database which British Telecom Research Laboratories have prepared for experiments on neural networks). The first utterance by each speaker of the "E" words "B, C, D, E, G, P, T, V" was used. The database is divided into training and test sets, each consisting of approximately 400 words and 50 speakers. Other experiments were conducted on an isolated-digits recognition task in speaker-independent mode (25 speakers for training and 15 for test), using the networks already introduced. A summary of the best performance obtained (train / test):

K-means: 97.38 / 90.57
TDNN: 98.90 / 94.0
LVQ: 98.26 / 92.57
TDNN+LVQ: 99.90 / 97.50

Performance on training is roughly equivalent for all algorithms. For generalization, the performance of the combined architecture is clearly superior to the other techniques. 3.2 TIME EVOLUTION OF SPEECH In contrast to images, which are patterns of a fixed size, speech signals display a temporal evolution. Approaches have to be developed for how a network with its fixed number of input units can cover word patterns of variable size and also account for the dynamic time variations within words.
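One common projection of this kind, linear resampling of the N' feature vectors of a word onto exactly N network input frames, can be sketched as follows. Interpolating between adjacent frames (rather than a strict duplicate-or-average rule) is an assumption made here for compactness:

```python
import numpy as np

def linear_normalize(frames, N):
    """Map a variable-length (N', M) sequence of feature vectors onto
    exactly N frames by linearly resampling the frame index."""
    frames = np.asarray(frames, dtype=float)
    Np = len(frames)
    idx = np.linspace(0, Np - 1, N)          # fractional source positions
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return (1 - frac) * frames[lo] + frac * frames[hi]
```

Short words are stretched (frames effectively duplicated) and long words are compressed (frames blended), so the network always sees an N x M input field.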
Different projections onto the fixed-size collection of NxM network input elements (number of vectors x number of coefficients per vector) have been tested, such as:
• Linear Normalization: the boundaries of a word are determined by a conventional endpoint detection algorithm and the N' feature vectors are linearly compressed or expanded to N by averaging or duplicating vectors.
• Time Warp: word boundaries are located initially; some parts of a word of length N' are compressed, while others are stretched and some remain constant, with respect to speech characteristics.
• Noise Boundaries: the sequence of N' vectors of a word is placed in the middle of, or at random within, the area of the desired N vectors, and the margins are padded with the noise in the speech pauses.
• Trace Segmentation: the trace followed by the temporal course in the M-dimensional feature vector space is divided into a constant number of new sections of identical length.
These time normalization procedures were used with the scaly neural network (Varga, 1990). It turned out that three methods for time representation - time normalization, trace segmentation with endpoint detection, and trace segmentation with noise boundaries - are well suited to solve the transformation problem for a fixed input network layer. The recognition scores are in the 98.5 % range (with ±1 % deviation) for 10 digits and 99.5 % for a 57-word vocabulary in speaker-independent mode. There is no clear indication that one of these approaches is superior to the others. 3.3 CONCLUSIONS The neural network techniques investigated have delivered performance comparable to classical techniques. It is now well agreed that hybrid systems (integration of Hidden Markov Modeling and MLPs) yield enhanced performance. Initial steps have been made towards the integration of Hidden Markov Models and MLPs. Mathematical formulations are required to unify hybrid models.
The temporal aspect of speech has to be carefully considered and taken into account by the formalism. 4 PARAMETERS-TO-PARAMETERS The main objective of this task was to provide the speech recognizer with a set of parameters adapted to the current user without any training phase. Spectral parameters corresponding to the same sound uttered by two speakers are generally different. Speaker-independent recognizers usually take this variability into account using stochastic models and/or multiple references. An alternative approach consists in learning spectral mappings to transform the original set of parameters into another one better adapted to the characteristics of the current user and the speech acquisition conditions. The procedure can be summed up as follows:
• Load the standard dictionary of the reference speaker.
• Acquire an adaptation vocabulary for the new speaker.
• Time-warp each new utterance against the corresponding reference utterance; temporal variability is thus softened and corresponding feature vectors (input-output pairs) become available.
• Learn the spectral transformations from these associated vectors.
• Apply the adaptation operator to the reference dictionary, leading to an adapted one.
• Evaluate the recognizer using the adapted dictionary.
The mathematical formulation is based on a very important result regarding input-output mappings, demonstrated by Funahashi (Funahashi, 1989) and Hornik, Stinchcombe & White (Hornik, 1989). They proved that a network using a single hidden layer (a net with 3 layers) with an arbitrary squashing function can approximate any Borel measurable function to any desired degree of accuracy. Experiments were conducted (see details in (Choukri, 1990)) on an isolated-word speech database consisting of 20 English words recorded 26 times by 16 different speakers (TI database (Choukri, 1987)).
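The align-then-map steps above can be sketched end to end. The DTW alignment and the least-squares affine map below are generic stand-ins (the project learned the spectral mapping with an MLP, justified by the universal-approximation results just cited), and all function and variable names are illustrative:

```python
import numpy as np

def dtw_pairs(ref, new):
    """Dynamic time warping between two sequences of feature vectors;
    returns aligned (new_frame, ref_frame) index pairs on the best path."""
    n, m = len(ref), len(new)
    d = np.linalg.norm(ref[:, None, :] - new[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    pairs, i, j = [], n, m          # backtrack the optimal path
    while i > 0 and j > 0:
        pairs.append((j - 1, i - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

def learn_mapping(new_frames, ref_frames, pairs):
    """Least-squares affine spectral map learned from aligned frame pairs
    (a linear stand-in for the MLP mapping used in the project)."""
    X = np.array([new_frames[j] for j, _ in pairs])
    Y = np.array([ref_frames[i] for _, i in pairs])
    Xa = np.hstack([X, np.ones((len(X), 1))])
    M, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
    return M

def apply_mapping(frames, M):
    return np.hstack([frames, np.ones((len(frames), 1))]) @ M
```

The learned operator is then applied to the reference dictionary, and recognition proceeds with the adapted templates.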
The first repetition of the 20 words serves as reference templates; tests are conducted on the remaining 25 repetitions. Before adaptation, the cross-speaker score is 68 %. On average, adaptation with the multi-layer perceptron provides a 15 % improvement compared to the non-adapted results. 5 CONCLUSIONS For phonetic classification, sophisticated networks, combinations of TDNNs and LVQ, proved to be more efficient than classical approaches or simple network architectures; their use for isolated word recognition offered comparable performance. Various approaches to cope with temporal distortions were implemented and demonstrate that the combination of sophisticated neural networks and their cooperation with HMMs is a promising research axis. It has also been established that basic MLPs are efficient tools for learning speaker-to-speaker mappings for speaker adaptation procedures. We expect more sophisticated MLPs (recurrent and context-sensitive) to perform better. Acknowledgements: This project is partially supported by the European ESPRIT Basic Research Actions programme (BRA 3228). The partners involved are: CGInn (F), ENST (F), IRIAC (F), RSRE (UK), SEL (FRG), and UPM (Spain). References K. Choukri. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), First deliverable of Task f, June 1990. F. Bimbot. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), First deliverable of Task 9, June 1990. A. Varga. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), First deliverable of Task S, June 1990. A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. (1987) Phoneme recognition using Time-Delay Neural Networks. Technical Report, CMU/ATR, Oct 30, 1987. Y. Bennani, N. Chaourar, P. Gallinari, and A. Mellouk.
(1990) Comparison of Neural Net models on speech recognition tasks, Technical Report, University of Paris Sud, LRI, 1990. Ken-Ichi Funahashi. (1989) On the approximate realization of continuous mappings by neural networks, Neural Networks, 2(2):183-192, March 1989. K. Hornik, M. Stinchcombe, and H. White. (1989) Multilayer feedforward networks are universal approximators, Neural Networks, 2(5):359-366, 1989. K. Choukri. (1987) Several approaches to Speaker Adaptation in Automatic Speech Recognition Systems, PhD thesis, ENST (Telecom Paris), Paris, 1987.

AUTHORS AND CONTRIBUTORS
Y. Bennani, F. Bimbot, J. Bridle, N. Chaourar, K. Choukri, L. Dodd, F. Fogelman, P. Gallinari, D. Howell, M. Immendorfer, A. Krause, K. McNaught, A. Mellouk, C. Montacie, R. Moore, O. Segard, H. Valbret, A. Varga, A. Wallyn
Dynamics of Generalization in Linear Perceptrons Anders Krogh Niels Bohr Institute Blegdamsvej 17 DK-2100 Copenhagen, Denmark John A. Hertz NORDITA Blegdamsvej 17 DK-2100 Copenhagen, Denmark Abstract We study the evolution of the generalization ability of a simple linear perceptron with N inputs which learns to imitate a "teacher perceptron". The system is trained on p = αN binary example inputs and the generalization ability is measured by testing for agreement with the teacher on all 2^N possible binary input patterns. The dynamics may be solved analytically and exhibits a phase transition from imperfect to perfect generalization at α = 1. Except at this point the generalization ability approaches its asymptotic value exponentially, with critical slowing down near the transition; the relaxation time is ∝ (1 − √α)^(−2). Right at the critical point, the approach to perfect generalization follows a power law ∝ t^(−1/2). In the presence of noise, the generalization ability is degraded by an amount ∝ (√α − 1)^(−1) just above α = 1. 1 INTRODUCTION It is very important in practical situations to know how well a neural network will generalize from the examples it is trained on to the entire set of possible inputs. This problem is the focus of a lot of recent and current work [1-11]. All this work, however, deals with the asymptotic state of the network after training. Here we study a very simple model which allows us to follow the evolution of the generalization ability in time under training. It has a single linear output unit, and the weights obey adaline learning. Despite its simplicity, it exhibits nontrivial behaviour: a dynamical phase transition at a critical number of training examples, with power-law decay right at the transition point and critical slowing down as one approaches it from either side. 2 THE MODEL Our simple linear neuron has an output V = N^(−1/2) Σ_i W_i ξ_i, where ξ_i is the ith input.
It learns to imitate a teacher [1] whose weights are U_i by training on p examples of input-output pairs (ξ^μ, ζ^μ), with

ζ^μ = N^(−1/2) Σ_i U_i ξ_i^μ    (1)

generated by the teacher. The adaline learning equation [11] is then

dW_i/dt = N^(−1/2) Σ_{μ=1..p} (ζ^μ − N^(−1/2) Σ_j W_j ξ_j^μ) ξ_i^μ = (1/N) Σ_{μ,j} (U_j − W_j) ξ_j^μ ξ_i^μ.    (2)

By introducing the difference between the teacher and the pupil,

v_i = U_i − W_i,    (3)

and the training input correlation matrix

A_ij = (1/N) Σ_{μ=1..p} ξ_i^μ ξ_j^μ,    (4)

the learning equation becomes

dv_i/dt = −Σ_j A_ij v_j.    (5)

We let the example inputs ξ_i^μ take the values ±1, randomly and independently, but it is straightforward to generalize to any distribution of inputs with ⟨ξ_i^μ ξ_j^ν⟩ ∝ δ_ij δ_μν. For a large number of examples (p = O(N), still much smaller than 2^N), the resulting generalization ability will be independent of just which p of the 2^N possible binary input patterns we choose. All our results will then depend only on the fact that we can calculate the spectrum of the matrix A. 3 GENERALIZATION ABILITY To measure the generalization ability, we test whether the output of our perceptron with weights W_i agrees with that of the teacher with weights U_i on all possible binary inputs. Our objective function, which we call the generalization error, is just the square of the error, averaged over all these inputs:

F = 2^(−N) Σ_σ [N^(−1/2) Σ_i (U_i − W_i) σ_i]^2 = (1/N) Σ_i v_i^2.    (6)

(We used that 2^(−N) Σ_σ σ_i σ_j is zero unless i = j.) That is, F is just proportional to the square of the difference between the teacher and pupil weight vectors. With the N^(−1) normalization factor, F will vary between 1 (tabula rasa) and 0 (perfect generalization) if we normalize the weight vectors to length √N. During learning, W_i and thus v_i depend on time, so F is a function of t. The complementary quantity 1 − F(t) could be called the generalization ability. In the basis where A is diagonal, the learning equation (5) is simply

dv_r/dt = −A_r v_r,    (7)

where A_r are the eigenvalues of A.
This has the solution

v_r(t) = v_r(0) e^(−A_r t) = u_r e^(−A_r t),    (8)

where it is assumed that the weights are zero at time t = 0 (we will come back to the more general case later). Thus we find

F(t) = (1/N) Σ_r v_r^2(t) = (1/N) Σ_r u_r^2 e^(−2 A_r t).    (9)

Averaging over all possible training sets of size p, this can be expressed in terms of the density of eigenvalues of A, ρ(ε):

F(t) = (|u|^2 / N) ∫ dε ρ(ε) e^(−2εt).    (10)

In the following it will be assumed that the length of u is normalized to √N, so the prefactor disappears. For large N, the eigenvalue density is (see, e.g., [11], where it can be obtained simply from the imaginary part of the Green's function in eq. (57))

ρ(ε) = (1/(2πε)) √((ε₊ − ε)(ε − ε₋)) + (1 − α) Θ(1 − α) δ(ε),    (11)

where

ε± = (1 ± √α)^2    (12)

and Θ( ) is the unit step function. The density has two terms: a 'deformed semicircle' between the roots ε₋ and ε₊, and for α < 1 a delta function at ε = 0 with weight 1 − α. The delta-function term appears because no learning takes place in the subspace orthogonal to that spanned by the training patterns. For α > 1 the patterns span the whole space, and therefore the delta function is absent. The results at infinite time are immediately evident. For α < 1 there is a nonzero limit, F(∞) = 1 − α, while F(∞) vanishes for α > 1, indicating perfect generalization (the solid line in Figure 1). While on the one hand it may seem remarkable that perfect generalization can be obtained from a training set which forms an infinitesimal fraction of the entire set of possible examples, the meaning of the result is just that N points are sufficient to determine an (N−1)-dimensional hyperplane in N dimensions. Figure 2 shows F(t) as obtained numerically from (10) and (11). The qualitative form of the approach to F(∞) can be obtained analytically by inspection. For α ≠ 1, the asymptotic approach is governed by the smallest nonzero eigenvalue ε₋.
Thus we have critical slowing down, with a divergent relaxation time

τ = 1/ε₋ = |√α − 1|^(−2)    (13)

as the transition at α = 1 is approached.

[Figure 1: The asymptotic generalization error as a function of α. The full line corresponds to λ = 0, the dashed line to λ = 0.2, and the dotted line to w_0 = 1 and λ = 0.]

Right at the critical point, the eigenvalue density diverges for small ε like ε^(−1/2), which leads to the power law

F(t) ∝ t^(−1/2)    (14)

at long times. Thus, while exactly N examples are sufficient to produce perfect generalization, the approach to this desirable state is rather slow. A little bit above α = 1, F(t) will also follow this power law for times t ≪ τ, going over to (slow) exponential decay at very long times (t > τ). By increasing the training set size well above N, one can achieve exponentially fast generalization. Below α = 1, where perfect generalization is never achieved, there is at least the consolation that the approach to the generalization level the network does reach is exponential (though with the same problem of a long relaxation time just below the transition as just above it). 4 EXTENSIONS In this section we briefly discuss some extensions of the foregoing calculation. We will see what happens if the weights are non-zero at t = 0, discuss weight decay, and finally consider noise in the learning process. Weight decay is a simple and frequently-used way to limit the growth of the weights, which might be desirable for several reasons. It is also possible to approximate the problem with binary weights using a weight decay term (the so-called spherical model, see [11]).
We consider the simplest kind of weight decay, which comes in as an additive term, −λW_i = −λ(U_i − v_i), in the learning equation (2).

[Figure 2: The generalization error as a function of time for a couple of different α.]

The equation (5) for the difference between teacher and pupil is now

dv_i/dt = −Σ_j A_ij v_j + λ(U_i − v_i) = −Σ_j (A_ij + λδ_ij) v_j + λU_i.    (15)

Apart from the last term this just shifts the eigenvalue spectrum by λ. In the basis where A is diagonal we can again write down the general solution to this equation:

v_r(t) = [λ(1 − e^(−(A_r+λ)t)) / (A_r + λ)] u_r + v_r(0) e^(−(A_r+λ)t).    (16)

The square of this is

v_r^2 = u_r^2 [ λ(1 − e^(−(A_r+λ)t)) / (A_r + λ) + e^(−(A_r+λ)t) + (w_r(0)/u_r) e^(−(A_r+λ)t) ]^2.    (17)

As in (10) this has to be integrated over the eigenvalue spectrum to find the averaged generalization error. Assuming that the initial weights are random, so that the mean of w_r(0) is zero, and that they have a relative variance given by

⟨w_r(0)^2⟩ = w_0^2 u_r^2,    (18)

the average of F(t) over the distribution of initial conditions now becomes

F(t) = ∫ dε ρ(ε) [ ( λ(1 − e^(−(ε+λ)t)) / (ε + λ) + e^(−(ε+λ)t) )^2 + w_0^2 e^(−2(ε+λ)t) ].    (19)

(Again it is assumed the length of u is √N.) For λ = 0 we see the result is the same as before except for a factor 1 + w_0^2 in front of the integral. This means that the asymptotic generalization error is now

F(∞) = (1 + w_0^2)(1 − α) for α < 1, and 0 for α > 1,    (20)

which is shown as a dotted line in Figure 1 for w_0 = 1. The excess error can easily be understood as a contribution to the error from the non-relaxing part of the initial weight vector in the subspace orthogonal to the space spanned by the patterns. The relaxation times are unchanged for λ = 0.
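The asymptotic error with weight decay (the dashed curve in Figure 1) follows from (19) at t → ∞, where only the λ/(ε + λ) term survives. A numerical sketch, with quadrature details my own and valid for α ≠ 1:

```python
import numpy as np

def F_infty_decay(alpha, lam, n=200_000):
    """Asymptotic generalization error from Eq. (19) at t -> infinity:
    F(inf) = integral of rho(eps) * (lam / (eps + lam))**2  d eps,
    since all exponentials (including the w0 terms) decay away."""
    em = (1 - np.sqrt(alpha)) ** 2
    ep = (1 + np.sqrt(alpha)) ** 2
    eps = np.linspace(em, ep, n)
    rho = np.sqrt(np.maximum((ep - eps) * (eps - em), 0.0)) / (2 * np.pi * eps)
    F = float(np.sum(rho * (lam / (eps + lam)) ** 2) * (eps[1] - eps[0]))
    if alpha < 1:
        F += 1 - alpha          # delta at eps = 0 contributes (lam/lam)^2 = 1
    return F
```

As λ → 0 this reduces to F(∞) = 1 − α for α < 1 and 0 for α > 1; for λ > 0 and w_0 = 0 the decay term only adds error, while the benefit described in the text appears when the initial weights are large (w_0^2 > 1).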
For λ > 0 the relaxation times become finite even at α = 1, because the smallest eigenvalue is shifted by λ, so (13) is now

τ = 1/(ε₋ + λ) = 1/(|√α − 1|^2 + λ).    (21)

In this case the asymptotic error can easily be obtained numerically from (19), and is shown by the dashed line in Figure 1. It is smaller than for λ = 0 for w_0^2 > 1 at sufficiently small α. This is simply because the weight decay makes the part of w(0) orthogonal to the pattern space decay away exponentially, thereby eliminating the excess error due to large initial weight components in this subspace. This phase transition is very sensitive to noise. Consider adding a noise term η_i(t) to the right-hand side of (2), with

⟨η_i(t) η_j(t′)⟩ = 2T δ_ij δ(t − t′).    (22)

Here we restrict our attention to the case λ = 0. Carrying the extra term through the succeeding manipulations leads, in place of (7), to

dv_r/dt = −A_r v_r + η_r(t).    (23)

The additional term leads to a correction (after Fourier transforming)

δv_r(ω) = η_r(ω) / (−iω + A_r)    (24)

and thus to an extra (time-independent) piece of the generalization error F(t):

δF = (1/N) Σ_r ∫ (dω/2π) ⟨|η_r(ω)|^2⟩ / |−iω + A_r|^2 = (1/N) Σ_r T/A_r.    (25)

For α > 1, where there are no zero eigenvalues, we have

δF = T ∫_{ε₋}^{ε₊} dε ρ(ε)/ε,    (26)

which has the large-α limit T/α, as found in equilibrium analyses (also for threshold perceptrons [2,3,5,6,7,8,9]). Equation (26) gives a generalization error which diverges as one approaches the transition at α = 1:

δF ∝ T ε₋^(−1/2) = T / |√α − 1|.    (27)

Equation (25) blows up for α < 1, where some of the A_r are zero. This divergence just reflects the fact that in the subspace orthogonal to the training patterns, v feels only the noise and so exhibits a random walk whose variance diverges as t → ∞.
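For α > 1 the noise correction (26) can be evaluated by quadrature. In the large-N limit this Marchenko-Pastur moment works out to T/(α − 1), consistent with the quoted T/α behaviour at large α and with the divergence (27) as α → 1 from above. A sketch (quadrature details are mine):

```python
import numpy as np

def delta_F(alpha, T=1.0, n=200_000):
    """Static noise contribution of Eq. (26), valid for alpha > 1:
    delta_F = T * integral of rho(eps)/eps over the semicircle band."""
    em = (np.sqrt(alpha) - 1) ** 2
    ep = (np.sqrt(alpha) + 1) ** 2
    eps = np.linspace(em, ep, n)
    rho = np.sqrt(np.maximum((ep - eps) * (eps - em), 0.0)) / (2 * np.pi * eps)
    return float(T * np.sum(rho / eps) * (eps[1] - eps[0]))
```

The correction grows without bound as α decreases toward 1, the numerical counterpart of Eq. (27).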
Keeping more careful track of the dynamics in this subspace leads to

δF = 2T(1 − α) t + T ∫_{ε₋}^{ε₊} dε ρ(ε)/ε,    (28)

the second term remaining finite for α < 1. 5 CONCLUSION Generalization in the linear perceptron can be understood in the following picture. To get perfect generalization the training pattern vectors have to span the whole input space; N points (in general position) are enough to specify any hyperplane. This means that perfect generalization appears only for α > 1. As α approaches 1 the relaxation time - i.e. learning time - diverges, signaling a phase transition, as is common in physical systems. Noise has a severe effect on this transition. It leads to a degradation of the generalization ability which diverges as one reduces the number of training examples toward the critical number. This model is of course much simpler than most real-life training problems. However, it does allow us to examine in detail the dynamical phase transition separating perfect from imperfect generalization. Further extensions of the model can also be solved and will be reported elsewhere. References [1] Gardner, E. and B. Derrida: Three Unfinished Works on the Optimal Storage Capacity of Networks. Journal of Physics A 22, 1983-1994 (1989). [2] Schwartz, D.B., V.K. Samalam, S.A. Solla, and J.S. Denker: Exhaustive Learning. Neural Computation 2, 371-382 (1990). [3] Tishby, N., E. Levin, and S.A. Solla: Consistent Inference of Probabilities in Layered Networks: Predictions and Generalization. Proc. IJCNN Washington 1989, vol. 2, 403-410, Hillsdale: Erlbaum (1989). [4] Baum, E.B. and D. Haussler: What Size Net Gives Valid Generalization. Neural Computation 1, 151-160 (1989). [5] Gyorgyi, G. and N. Tishby: Statistical Theory of Learning a Rule. In Neural Networks and Spin Glasses, eds. W.K. Theumann and R. Koeberle. Singapore: World Scientific (1990). [6] Hansel, D. and H. Sompolinsky: Learning from Examples in a Single-Layer Neural Network.
Europhysics Letters 11, 687-692 (1990). [7] Vallet, F., J. Cailton and P. Refregier: Linear and Nonlinear Extension of the Pseudo-Inverse Solution for Learning Boolean Functions. Europhysics Letters 9, 315-320 (1989). [8] Opper, M., W. Kinzel, J. Kleinz, and R. Nehl: On the Ability of the Optimal Perceptron to Generalize. Journal of Physics A 23, L581-L586 (1990). [9] Levin, E., N. Tishby, and S.A. Solla: A Statistical Approach to Learning and Generalization in Layered Neural Networks. AT&T Bell Labs, preprint (1990). [10] Gyorgyi, G.: Inference of a Rule by a Neural Network with Thermal Noise. Physical Review Letters 64, 2957-2960 (1990). [11] Hertz, J.A., A. Krogh, and G.I. Thorbergsson: Phase Transitions in Simple Learning. Journal of Physics A 22, 2133-2150 (1989).
FEEDBACK SYNAPSE TO CONE AND LIGHT ADAPTATION Josef Skrzypek Machine Perception Laboratory UCLA - Los Angeles, California 90024 INTERNET: SKRZYPEK@CS.UCLA.EDU Abstract Light adaptation (LA) allows cone vision to remain functional between twilight and the brightest time of day even though, at any one time, the cone's intensity-response (I-R) characteristic is limited to 3 log units of the stimulating light. One mechanism underlying LA has been localized in the outer segment of an isolated cone (1,2). We found that by adding annular illumination, the I-R characteristic of a cone can be shifted along the intensity domain. A neural network involving a feedback synapse from horizontal cells to cones is proposed to keep the cone in register with the ambient light level of the periphery. An equivalent electrical circuit with three different transmembrane channels (leakage, photocurrent, and feedback) was used to model the static behavior of a cone. SPICE simulation showed that interactions between the feedback synapse and the light-sensitive conductance in the outer segment can shift the I-R curves along the intensity domain, provided that the phototransduction mechanism is not saturated during the maximally hyperpolarized light response. 1 INTRODUCTION 1.1 Light response in cones In the vertebrate retina, cones respond to a small spot of light with sustained hyperpolarization which is graded with the stimulus over three log units of intensity [5]. The mechanism underlying this I-R relation has been suggested to result from statistical superposition of invariant single-photon, hyperpolarizing responses involving sodium conductance changes that are gated by cyclic nucleotides (see 6).
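The static behavior mentioned in the abstract reduces, for a single-compartment model, to a conductance-weighted average of the reversal potentials of the three parallel channels. The sketch below is a generic parallel-conductance calculation, not the paper's actual SPICE netlist, and all values are illustrative:

```python
def membrane_potential(g_leak, E_leak, g_photo, E_photo, g_fb, E_fb):
    """Steady-state membrane potential of a single-compartment cone model
    with three parallel conductances (leakage, photocurrent, feedback):
    Vm = sum(g_i * E_i) / sum(g_i)."""
    g_total = g_leak + g_photo + g_fb
    return (g_leak * E_leak + g_photo * E_photo + g_fb * E_fb) / g_total
```

Light closing the photocurrent channels (reducing g_photo, whose reversal potential is depolarized relative to leakage) drives Vm toward the leakage potential, i.e. hyperpolarizes the cone, while a feedback conductance with a depolarized reversal potential pulls Vm back, mirroring the spot/annulus antagonism described below.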
The shape of the response measured in cones depends on the size of the stimulating spot of light, presumably because of peripheral signals mediated by a negative feedback synapse from horizontal cells [7,8]; the hyperpolarizing response to spot illumination in the central portion of the cone receptive field is antagonized by light in the surrounding periphery [11,12,13]. Thus the cone membrane is influenced by two antagonistic effects: 1) feedback, driven by peripheral illumination, and 2) the light-sensitive conductance in the cone outer segment. Although it has been shown that key aspects of adaptation can be observed in isolated cones [1,2,3], the effects of peripheral illumination on adaptation as related to feedback input from horizontal cells have not been examined. It was reported that under appropriate stimulus conditions the resting membrane potential for a cone can be reached at two drastically different intensities for spot/annulus combinations [8,14]. We present here experimental data and modeling results which suggest that the effect of feedback from horizontal cells to cones resembles the neural component of light adaptation in cones. Specifically, peripheral signals mediated via the feedback synapse reset the cone sensitivity by instantaneously shifting the I-R curves to a new intensity domain. The full range of light response potentials is preserved without noticeable compression. 2 RESULTS 2.1 Identification of cones Preparation and the general experimental procedure, as well as criteria for identification of cones, have been detailed in [15,8].
Several criteria were used to distinguish cones from other cells in the OPL, such as: 1) the depth of recording in the retina [11, 13], 2) the sequence of penetrations concomitant with characteristic light responses, 3) spectral response curves [18], 4) receptive field diameter [8], 5) the fastest time from dark potential to the peak of the light response [8, 15], 6) domain of I-R curves and 7) staining with Lucifer Yellow [8, 11, 13]. These values represent averages derived from all intracellular recordings in 37 cones, 84 bipolar cells, more than 1000 horizontal cells, and more than 100 rods. 2.2 Experimental procedure After identifying a cone, its I-R curve was recorded. Then, in the presence of center illumination (diameter = 100 µm) which elicited maximal hyperpolarization from the cone, the periphery of the receptive field was stimulated with an annulus of inner diameter (ID) = 750 µm and outer diameter (OD) = 1500 µm. The annular intensity was adjusted to elicit depolarization of the membrane back to the dark potential level. Finally, the center intensity was increased again in a stepwise manner to antagonize the effect of peripheral illumination, and this new I-R curve was recorded. 2.3 Peripheral illumination shifts the I-R curve in cones Sustained illumination of a cone with a small spot of light evokes a hyperpolarizing response, which after a transient peak gradually repolarizes to some steady level (Fig. 1a). When the periphery of the retina is illuminated with a ring of light in the presence of the center spot, the antagonistic component of the response can be recorded in the form of a sustained depolarization. It has been argued previously that in tiger salamander cones this type of response is mediated via synaptic input from horizontal cells [11, 12].
The significance of this result is that the resting membrane potential for this cone can be reached at two drastically different intensities for spot/annulus combinations. The action of annular illumination is a fast depolarization of the membrane; the whole process is completed in a fraction of a second, unlike previous reports where the course of light adaptation lasted for seconds or even minutes. The response due to a spot of light, measured at the peak of hyperpolarization, increased in magnitude with increasing intensity over three log units (Fig. 1a). The same data is plotted as open circles in Fig. 1b. Initially, an annulus presented during the central illumination did not produce a noticeable response. Its amplitude reached maximum when the center spot intensity was increased to 3 log units. Further increase of center intensity resulted in disappearance of the annulus-elicited depolarization. Feedback action is graded with annular intensity and it depends on the balance between the amount of light falling on the center and the surround of the cone receptive field. The change in the cone's membrane potential due to combined effects of central and annular illumination is plotted as filled circles in Fig. 1b. This new intensity-response curve is shifted along the intensity axis by approximately two log units. Both I-R curves span approximately three log units of intensity. The I-R curve due to combined center and surround illumination can be described by the function V/Vm = I/(I+k) [16], where Vm is the peak hyperpolarization and k is the constant intensity generating the half-maximal response. This relationship [x/(x+k)] was suggested to be an indication of light adaptation [2]. The I-R curve plotted using peak response values (open circles) fits a continuous line drawn according to the equation 1 - exp(-kx). This has been argued previously to indicate absence of light adaptation [2,1].
There is little if any compression or change in gain after the shift of the cone operating point to some new domain of intensity. The results suggest that peripheral illumination can shift the center-spot elicited I-R curve of the cone, thus resetting the response-generating mechanism in cones. 2.4 Simulation of a cone model The results presented in the previous sections imply that maximal hyperpolarization of the cone membrane is not limited by saturation in the phototransduction process alone. It seems reasonable to assume that such a limit may be in part determined by the batteries of the involved ions. Furthermore, it appears that shifting I-R curves along the intensity domain is not dependent solely on the light adaptation mechanism localized in the outer segment of a cone. To test these propositions we developed a simplified compartmental model of a cone (Fig. 2) and we exercised it using SPICE (Vladimirescu et al., 1981). All interactions can be modeled using Kirchhoff's current law; the membrane current is Cm(dV/dt) + I_ionic. The leakage current is I_leak = G_leak*(Vm - E_leak), the light-sensitive current is I_light = G_light*(Vm - E_light) and the feedback current is I_fb = G_fb*(Vm - E_fb). The left branch represents ohmic leakage channels (G_leak) which are associated with a constant battery E_leak (-70 mV). The middle branch represents the light-sensitive conductance (G_light) in series with a +1 mV ionic battery (E_light) [18]. Light adaptation effects could be incorporated here by making G_light time-varying and dependent on the internal concentration of calcium ions. In our preliminary studies we were only interested in examining whether the shift of the I-R curve is possible and whether it would explain the disappearance of the depolarizing FB response with hyperpolarization by the center light. This can be done with passive measurements of membrane potential amplitude. The right-most branch represents ionic channels that are controlled by the feedback synapse.
With E_fb = -65 mV [11], G_fb is a time- and voltage-independent feedback conductance. The input resistance of an isolated cone is taken to be near 500 MOhm (270 MOhm; Attwell et al., 82). Assuming a specific membrane resistance of 5000 Ohm*cm^2 and that a cone is 40 microns long with an 8 micron diameter at the base, we get the leakage conductance G_leak = 1/(1 GOhm). In our studies we assume G_leak to be linear, although there is evidence that the cone membrane rectifies (Skrzypek, 79). G_light and G_fb are assumed to be equal and to add up to 1/(1 GOhm). G_light varies with light intensity in the proportion of two to three log units of intensity for a tenfold change in conductance. This relation was derived empirically, by comparing intensity-response data obtained from a cone {Vm = f(log I)} to {Vm = f(log G_light)} generated by the model. The changes in G_fb have not been calibrated to changes in light intensity of the annulus. However, we assume that G_fb cannot undergo variation larger than G_light. Figure 3 shows the membrane potential changes generated by the model plotted as a function of R_light, at different settings of the "feedback" resistance R_fb. With increasing R_fb, there is a parallel shift along the abscissa without any changes in the shape of the curve. An increase in R_light corresponds to an increase in light intensity and an increasing magnitude of the light response, from 0 mV (E_light) all the way down to -65 mV (E_fb). The increase in R_fb is associated with increasing intensity of the annular illumination, which causes additional hyperpolarization of the horizontal cell and consequently a decrease in "feedback" transmitter released from HC to cones. Since we assume E_fb = -65 mV, a more negative level than the normal resting membrane potential, a decrease in G_fb would cause a depolarizing response in the cone. This can be observed here as a shift of the curve along the abscissa.
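In the static limit (Cm·dV/dt = 0), Kirchhoff's current law for this three-branch circuit gives the membrane potential as the conductance-weighted mean of the branch batteries. The sketch below is our own illustration of that steady state, not the SPICE model itself; the battery values and the 1 GOhm leakage come from the text, while the particular conductance settings are arbitrary assumptions chosen to show the direction of each effect:

```python
# Steady state of the three-branch cone circuit: setting Cm*dV/dt = 0 in
# I_leak + I_light + I_fb = 0 and solving for Vm gives a conductance-weighted
# average of the branch batteries.
E_LEAK, E_LIGHT, E_FB = -70e-3, 1e-3, -65e-3   # branch batteries (volts), from the text
G_LEAK = 1e-9                                  # leakage conductance, 1/(1 GOhm)

def vm(g_light, g_fb):
    """Membrane potential (V) for given light-sensitive and feedback conductances."""
    return ((G_LEAK * E_LEAK + g_light * E_LIGHT + g_fb * E_FB)
            / (G_LEAK + g_light + g_fb))

dark = vm(0.5e-9, 0.5e-9)       # G_light and G_fb equal, summing to 1/(1 GOhm)
light = vm(0.2e-9, 0.5e-9)      # center spot: light-sensitive channels close
annulus = vm(0.2e-9, 0.05e-9)   # annulus: less feedback transmitter, smaller G_fb
# light hyperpolarizes the membrane; shrinking G_fb then depolarizes it part-way
# back, mimicking the shift of the operating point along the intensity axis
```

Note that once Vm falls below E_fb, reducing G_fb no longer depolarizes the membrane, consistent with the disappearance of the annulus-elicited response at high center intensities described above.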
In our model, a hundredfold change in feedback resistance from 0.01 GOhm to 1 GOhm resulted in a shift of the "response-intensity" curve by approximately two log units along the abscissa. The relationship between changes in R_fb and the shift of the "response-intensity" curve is nonlinear, and additional increases in R_fb from 1 GOhm to 100 GOhm result in decreasing shifts. Membrane current undergoes a similar parallel shift with changes in feedback conductance. However, the photocurrent (I_light) and the feedback current (I_fb) show only saturation with increasing G_light (not shown). The limits of either the I_light or I_fb currents are defined by the batteries of the model. Since these currents are associated with batteries of opposite polarities, the difference between them at various settings of the feedback conductance G_fb determines the amount of shift for I_leak along the abscissa. The compression in the shift of "response-intensity" curves at smaller values of G_fb results from smaller and smaller current flowing through the feedback branch of the circuit. Consequently, smaller G_fb changes are required to get a response in the dark than in the light. The shifting of the "response-intensity" curves generated by our model is not due to light adaptation as described by [1,2], although it is possible that feedback effects could be involved in modulating light-sensitive channels. Our model suggests that in order to generate additional light response after the membrane of a cone has been fully hyperpolarized by light, it is insufficient to have a feedback effect alone that would depolarize the cone membrane. Light-sensitive channels that were not previously closed [18] must also be available. 3 DISCUSSION The results presented here suggest that synaptic feedback from horizontal cells to cones could contribute to the process of light adaptation at the photoreceptor level.
A complete explanation of the underlying mechanism requires further studies, but the results seem to suggest that depolarization of the cone membrane by peripheral illumination resets the response-generating process in the cone. This result can be explained within the framework of the current hypothesis of light adaptation, recently summarized by [6]. It is conceivable that feedback transmitter released from horizontal cells in the dark opens channels to ions with a reversal potential near -65 mV [11]. Hence, hyperpolarizing the cone membrane by increasing center spot intensity would reduce the depolarizing feedback response as the cone nears the battery of the involved ions. Additional increase in annular illumination further reduces the feedback transmitter and the associated feedback conductance, thus pushing the cone's membrane potential away from the "feedback" battery. Eventually, at some values of the center intensity, the cone membrane is so close to -65 mV that no change in feedback conductance can produce a depolarizing response. ACKNOWLEDGEMENTS Special gratitude to Prof. Werblin for providing a superb research environment and generous support during the early part of this project. We acknowledge partial support by NSF grant ECS-8307553, ARCO-UCLA Grant #1, UCLA-SEASNET Grant KF-21, MICRO-Hughes grant #541122-57442, ONR grant #N00014-86-K-0395, ARO grant DAAL03-88-K-0052. REFERENCES 1. Nakatani, K. & Yau, K.W. (1988). Calcium and light adaptation in retinal rods and cones. Nature, 334, 69-71. 2. Matthews, H.R., Murphy, R.L.W., Fain, G.L. & Lamb, T.D. (1988). Photoreceptor light adaptation is mediated by cytoplasmic calcium concentration. Nature, 334, 67-69. 3. Normann, R.A. & Werblin, F.S. (1974). Control of retinal sensitivity. I. Light and dark-adaptation of vertebrate rods and cones. J. Physiol. 63, 37-61. 4. Werblin, F.S. & Dowling, J.E. (1969). Organization of the retina of the mudpuppy, Necturus maculosus. II. Intracellular recording. J. Neurophysiol.
32, 315-338. 5. Pugh, E.N. & Altman, J. (1988). Role for calcium in adaptation. Nature, 334, 16-17. 6. O'Bryan, P.M. (1973). Properties of the depolarizing synaptic potential evoked by peripheral illumination in cones of the turtle retina. J. Physiol. Lond. 253, 207-223. 7. Skrzypek, J. (1979). Ph.D. Thesis, University of California at Berkeley. 8. Skrzypek, J. & Werblin, F.S. (1983). Lateral interactions in absence of feedback to cones. J. Neurophysiol. 49, 1007-1016. 9. Skrzypek, J. & Werblin, F.S. (1978). All horizontal cells have center-surround antagonistic receptive fields. ARVO Abstr. 10. Lasansky, A. (1981). Synaptic action mediating cone responses to annular illumination in the retina of the larval tiger salamander. J. Physiol. Lond. 310, 206-214. 11. Skrzypek, J. (1984). Electrical coupling between horizontal cell bodies in the tiger salamander retina. Vision Res. 24, 701-711. 12. Naka, K.I. & Rushton, W.A.H. (1967). The generation and spread of S-potentials in fish (Cyprinidae). J. Physiol. 192, 437-461. 13. Attwell, D., Werblin, F.S. & Wilson, M. (1982a). The properties of single cones isolated from the tiger salamander retina. J. Physiol. 328, 259-283. Fig. 2 Equivalent circuit model of a cone based on three different transmembrane channels. The ohmic leakage channel consists of a constant conductance G_leak in series with a constant battery E_leak. Light-sensitive channels are represented in the middle branch by G_light. The battery E_light represents the reversal potential for the light response at approximately 0 mV. The feedback synapse is shown in the right-most branch as a series combination of G_fb and the battery E_fb = -65 mV, representing the reversal potential for the annulus-elicited, depolarizing response measured in a cone.
Fig. 3 Plot of the membrane potential versus the logarithm of light-sensitive resistance. The data was synthesized with the cone model simulated by SPICE. Both current and voltage curves can be fitted by the x/(x+k) relation (not shown) at all different settings of G_fb (R_fb) indicated in the legend. The shift of the curves, measured at the 1/2 maximal value (k = x), spans about two log units. With increasing settings of R_fb (10 GOhm), curves begin to cross (Vm at -65 mV), signifying decreasing contribution of the "feedback" synapse. Fig. 1 (a) Series of responses to a combination of center spot and annulus. Surround illumination (S) was fixed at -3.2 l.u. throughout the experiment. Center spot intensity (C) was increased in 0.5 l.u. steps as indicated by the numbers near each trace. In the dark (upper-most trace) surround illumination had no measurable effect on the cone membrane potential. The annulus-elicited depolarizing response increased with intensity in the center up to about -3 l.u. Further increase of the spot intensity diminished the surround response. A plot of the peak hyperpolarizing response versus center spot intensity in log units is shown in (b) as open circles. It fits the dashed curve drawn according to the equation 1 - exp(-kx). The curve indicated by filled circles represents the membrane potential measurements taken in the middle of the depolarizing response. This data can be approximated by a continuous curve derived from x/(x+k). All membrane potential measurements are made with respect to the resting level in the dark.
This result shows that in the presence of peripheral illumination, when the feedback is activated, the membrane potential follows the intensity-response curve which is shifted along the log I axis.
1990
Direct memory access using two cues: Finding the intersection of sets in a connectionist model Janet Wiles, Michael S. Humphreys, John D. Bain and Simon Dennis Departments of Psychology and Computer Science University of Queensland QLD 4072 Australia email: janet@psych.psy.uq.oz.au Abstract For lack of alternative models, search and decision processes have provided the dominant paradigm for human memory access using two or more cues, despite evidence against search as an access process (Humphreys, Wiles & Bain, 1990). We present an alternative process to search, based on calculating the intersection of sets of targets activated by two or more cues. Two methods of computing the intersection are presented, one using information about the possible targets, the other constraining the cue-target strengths in the memory matrix. Analysis using orthogonal vectors to represent the cues and targets demonstrates the competence of both processes, and simulations using sparse distributed representations demonstrate the performance of the latter process for tasks involving 2 and 3 cues. 1 INTRODUCTION Consider a task in which a subject is asked to name a word that rhymes with oast. The subject answers "most" (or post, host, toast, boast, ...). Now the subject is asked to find a word that means a mythical being and rhymes with oast. She or he pauses slightly and replies "ghost". The difference between the first and second questions is that the first requires the use of one cue to access memory. The second question requires the use of two cues - either combining them before the access process, or combining the targets they access. There are many experimental paradigms in psychology in which a subject uses two or more cues to perform a task (Rubin & Wallace, 1989). One default assumption underlying many explanations for the effective use of two cues relies on a search process through memory.
Models of human memory based on associative access (using connectionist models) have provided an alternative paradigm to search processes for memory access using a single cue (Anderson, Silverstein, Ritz & Jones, 1977; McClelland & Rumelhart, 1986), and for two cues which have been studied together (Humphreys, Bain & Pike, 1989). In some respects, properties of these models correspond very closely to the characteristics of human memory (Rumelhart, 1989). In addition to the evidence against search processes for memory access using a single cue, there is also experimental evidence against sequential search in some tasks requiring the combination of two cues, such as cued recall with an extra-list cue, cued recall with a part-word cue, lexical access and semantic access (Humphreys, Wiles & Bain, 1990). Furthermore, in some of these tasks it appears that the two cues have never jointly occurred with the target. In such a situation, the tensor product employed by Humphreys et al. to bind the two cues to the target cannot be employed, nor can the co-occurrences of the two cues be encoded into the hidden layer of a three-layer network. In this paper we present the computational foundation for an alternative process to search and decision, based on parallel (or direct) access for the intersection of sets of targets that are retrieved in response to cues that have not been studied together. Definition of an intersection in the cue-target paradigm: Given a set of cue-target pairs, and two (or more) access cues, then the intersection specified by the access cues is defined to be the set of targets which are associated with both cues. If the cue-target strengths are not binary, then they are constrained to lie between 0 and 1, and targets in the intersection are weighted by the product of the cue-target strengths.
A complementary definition for a union process could be the set of targets associated with any one or more of the access cues, weighted by the sum of the target strengths. In the models that are described below, we assume that the access cues and targets are represented as vectors, the cue-target associations are represented in a memory matrix and the set of targets retrieved in response to one or more cues is represented as a linear combination, or blend, of the target vectors associated with that cue or cues. Note that under this definition, if there is more than one target in the intersection, then a second stage is required to select a unique target to output from the retrieved linear combination. We do not address this second stage in this paper. A task requiring intersection: In the rhyming task described above, the rhyme and semantic cues have extremely low separate probabilities of accessing the target, ghost, but a very high joint probability. In this study we do not distinguish between the representation of the semantic and part-word cues, although it would be required for a more detailed model. Instead, we focus on the task of retrieving a target weakly associated with two cues. We simulate this condition in a simple task using two cues, C1 and C2, and three targets, T1, T2 and T3. Each cue is strongly associated with one target, and weakly associated with a second target, as follows (strengths of association are shown above the arrows): C1 --0.9--> T1, C1 --0.1--> T2; C2 --0.1--> T2, C2 --0.9--> T3. The intersection of the targets retrieved to the two cues, C1 and C2, is the target, T2, with a strength of 0.01. Note that in this example, a model based on vector addition would be insufficient to select target T2, which is weakly associated with both cues, in preference to either target T1 or T3, which are strongly associated with one cue each.
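The weighted-intersection definition above can be computed directly over cue-target strength maps. The following is our own minimal sketch (the dictionary representation and function name are ours; the 0.9/0.1 strengths follow the two-cue example):

```python
from math import prod

def intersection(*cue_strengths):
    """Targets associated with every cue, weighted by the product of strengths."""
    common = set.intersection(*(set(s) for s in cue_strengths))
    return {t: prod(s[t] for s in cue_strengths) for t in common}

c1 = {"T1": 0.9, "T2": 0.1}    # C1: strong to T1, weak to T2
c2 = {"T2": 0.1, "T3": 0.9}    # C2: weak to T2, strong to T3
result = intersection(c1, c2)  # only T2 is shared; weight 0.1 * 0.1 = 0.01
```

Vector addition over the same strengths would instead favour T1 or T3 (0.9 each against 0.2 for T2), which is exactly the failure the intersection process avoids.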
2 IMPLEMENTATIONS OF INTERSECTION PROCESSES 2.1 LOCAL REPRESENTATIONS Given a local representation for two sets of targets, their intersection can be computed by multiplying the activations elicited by each cue. This method extends to sparse representations with some noise from cross-product terms, and has been used by Dolan and Dyer (1989) in their tensor model, and Touretzky and Hinton (1989) in the Distributed Connectionist Production System (for further discussion see Wiles, Humphreys, Bain & Dennis, 1990). However, multiplying activation strengths does not extend to fully distributed representations, since multiplication depends on the basis representation (i.e., the target patterns themselves) and the cross-product terms do not necessarily cancel. One strong implication of this for implementing an intersection process is that the choice of patterns is not critical in a linear process (such as vector addition) but can be critical in a non-linear process (which is necessary for computing intersections). An intersection process requires more information about the target patterns themselves. It is interesting to note that the inner product of the target sets (equivalent to the match process in Humphreys et al.'s (1989) Matrix model) can be used to determine whether or not the intersection of targets is empty, if the target vectors are orthogonal, although it cannot be used to find the particular vectors which are in the intersection. 2.2 USING INFORMATION ABOUT TARGET VECTORS A local representation enables multiplication of activation strengths because there is implicit knowledge about the allowable target vectors in the local representation itself.
The first method we describe for computing the intersection of fully distributed vectors uses information about the targets, explicitly represented in an auto-associative memory, to filter out the cross-product terms: In separate operations, each cue is used to access the memory matrix and retrieve a composite target vector (the linear combination of associated targets). A temporary matrix is formed from the outer product of these two composite vectors. This matrix will contain product terms between all the targets in the intersection set as well as noise in the form of cross-product terms. The cross-product terms can be filtered from the temporary matrix by using it as a retrieval cue for accessing a three-dimensional auto-associator (a tensor of rank 3) over all the targets in the original memory. If the target vectors are orthonormal, then this process will produce a vector which contains no noise from cross-product terms, and is the linear combination of all targets associated with both cues (see Box 1). Box 1. Creating a temporary matrix from the product of the target vectors, then filtering out the noise terms: Let the cues and targets be represented by vectors which are mutually orthonormal (i.e., Ci·Ci = Ti·Ti = 1, Ci·Cj = Ti·Tj = 0 for i ≠ j, i, j = 1,2,3).
The memory matrix, M, is formed from cue-target pairs, weighted by their respective strengths, as follows:

M = 0.9 C1 T1' + 0.1 C1 T2' + 0.1 C2 T2' + 0.9 C2 T3'

where T' represents the transpose of T, and Ci Tj' is the outer product of Ci and Tj. In addition, let Z be a three-dimensional auto-associative memory (or tensor of rank 3) created over three orthogonal representations of each target (i.e., Ti is a column vector, Ti' is a row vector which is the transpose of Ti, and Ti'' is the vector in a third direction orthogonal to both, where i = 1,2,3), as follows:

Z = Σi Ti Ti' Ti''

Let a two-dimensional temporary matrix, X, be formed by taking the outer product of the target vectors retrieved to the access cues, as follows:

X = (C1 M) (C2 M)' = (0.9 T1 + 0.1 T2)(0.1 T2 + 0.9 T3)' = 0.09 T1 T2' + 0.81 T1 T3' + 0.01 T2 T2' + 0.09 T2 T3'

Using the matrix X to access the auto-associator Z will produce a vector from which all the cross-product terms have been filtered, as follows:

X Z = (0.09 T1 T2' + 0.81 T1 T3' + 0.01 T2 T2' + 0.09 T2 T3') (Σi Ti Ti' Ti'') = 0.01 T2''

since all terms except the T2 T2' term cancel. This vector is the required intersection of the linear combination of target vectors associated with both the input cues, C1 and C2, weighted by the product of the strengths of associations from the cues to the targets. A major advantage of the above process is that only matrix (or tensor) operations are used, which simplifies both the implementation and the analysis. The behaviour of the system can be analysed either at the level of behaviours of patterns, or using a coordinate system based on individual units, since in a linear system these two levels of description are isomorphic. In addition, the auto-associative target matrix could be created incrementally when the target vectors are first learnt by the system using the matrix memory. The disadvantages include the requirement for dynamic creation and short-term storage of the two-dimensional product-of-targets matrix, and the formation and much longer-term storage of the three-dimensional auto-associative matrix. It is possible, however, that an auto-associator may be part of the output process. 2.3 ADDITIVE APPROXIMATIONS TO MULTIPLICATIVE PROCESSES An alternative approach to using the target auto-associator for computing the intersection is to incorporate a non-linearity at the time of memory storage, rather than memory access. The aim of this transform would be to change the cue-target strengths so that linear addition of vectors could be used for computing the intersection. An operation that is equivalent to multiplication is the addition of logarithms. If the logarithm of each cue-target strength was calculated and stored at the time of association, then an additive access process would retrieve the intersection of the inputs. More generally, it may be possible to use an operation that preserves the same order relations (in terms of strengths) as multiplication. It is always possible to find a restricted range of association strengths such that the sum of a number of weak cue-target associations will produce a stronger target activation than the sum of a smaller number of strong cue-target associations. For example, by scaling the target strengths to the range [(n-1)/n, 1], where n is the number of simultaneously available cues, vector addition can be made to approximate multiplication of target strengths. This method has the advantage of extending naturally to non-orthogonal vectors, and to the combination of three or more cues, with performance limits determined solely by cross-talk between the vectors. Time taken is proportional to the number of cues, and noise is proportional to the product of the set sizes and the cross-correlation between the vectors.
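The Box 1 filtering construction can be checked numerically. In our own NumPy sketch below, the orthonormal cues and targets are taken as standard basis vectors, and for simplicity the same basis serves for the Ti, Ti' and Ti'' representations; contracting X against the rank-3 auto-associator leaves exactly the 0.01-weighted intersection target:

```python
import numpy as np

T = np.eye(3)   # orthonormal targets T1..T3 as rows (standard basis)
C = np.eye(3)   # orthonormal cues C1..C3 as rows

# Memory matrix from Box 1: M = 0.9 C1 T1' + 0.1 C1 T2' + 0.1 C2 T2' + 0.9 C2 T3'
M = (0.9 * np.outer(C[0], T[0]) + 0.1 * np.outer(C[0], T[1])
     + 0.1 * np.outer(C[1], T[1]) + 0.9 * np.outer(C[1], T[2]))

# Rank-3 auto-associator over the targets: Z = sum_i Ti (x) Ti' (x) Ti''
Z = np.einsum('ia,ib,ic->abc', T, T, T)

# Temporary matrix: outer product of the two composite retrievals
X = np.outer(C[0] @ M, C[1] @ M)

out = np.einsum('ab,abc->c', X, Z)   # cross-product terms are filtered out
# out equals 0.01 * T2: the intersection weighted by the product of strengths
```

Only the T2 T2' term of X survives the contraction, mirroring the algebra in Box 1.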
3 SIMULATIONS OF THE ADDITIVE PROCESS Two simulations of the additive process using scaled target strengths were performed to demonstrate the feasibility of the method producing a target weakly associated with two cues, in preference to targets with much higher probabilities of being produced in response to a single cue. As a work-around for the problem of how (and when) to decompose the composite output vector, the target with the strongest correlation with the composite output was selected as the winner. To simulate the addition of some noise, non-orthogonal vectors were used. The first simulation involved two cues, C1 and C2, and three targets, T1, T2 and T3, represented as randomly generated 100-dimensional vectors, 20% 1s, the remainder 0s. Cue C1 was strongly associated with target T1 and weakly associated with target T2; cue C2 was strongly associated with target T3 and weakly associated with target T2. A trial consisted of generating random cue and target vectors, forming a memory matrix from their outer products (multiplied by 0.9 for strong associates and 0.6 for weak associates; note that these strengths have been scaled to the range [0,1]), and then pre-multiplying the memory matrix by the appropriate cue (i.e., either C1 or C2 or C1 + C2). The memory matrix, M, was formed as shown in Box 1. Retrieval to a cue, C1, was as follows: C1 M = 0.9 (C1·C1) T1' + 0.6 (C1·C1) T2' + 0.6 (C1·C2) T2' + 0.9 (C1·C2) T3'. In this case, the cross-product terms, C1·C2, do not cancel since the vectors are not orthogonal, although their expected contribution to the output is small (expected correlation 0.04). The winning target vector was the one that had the strongest correlation (largest normalized dot product) with the resulting output vector. The results are shown in Table 1. Table 1: Number of times each target was retrieved in 100 trials.
        C1    C2    C1+C2
T1      92     0      11
T2       8     9      80
T3       0    91       9

Over 100 trials, the results show that when either cue C1 or C2 was presented alone, the target with which it was most strongly paired was retrieved in over 90% of cases. Target T2 had very low probabilities of recall given either C1 or C2 (8% and 9% respectively); however, it was very likely to be recalled if both cues were presented (80%). The first simulation demonstrated the multi-cue paradigm with the simple two-cue and three-target case. In a second simulation, the system was tested for robustness in a similar case involving three cues, C1 to C3, and four targets, T1 to T4. The results show that T4 had low probabilities of recall given either C1, C2 or C3 (13%, 22% and 18% respectively), medium probabilities of recall given a combination of two cues (36%, 31% and 28%), and was most likely to be recalled if all three cues were presented (44%). For this task, when three cues are presented concurrently, in the ideal intersection only T4 should be produced. The results show that it is produced more often than the other targets (44% compared with 22%, 18% and 16%), each of which is strongly associated with two out of the three cues, but there is considerably more noise than in the two-cue case. (See Wiles, Humphreys, Bain & Dennis, 1990, for further details.) 4 DISCUSSION The simulation results demonstrated the effect of the initial scaling of the cue-target strengths, and non-linear competition between the target outputs. It is important to note the difference between the association strengths from cues to targets and the cued recall probability of each target. In memory research, the association strengths have been traditionally identified with the probability of recall. However, in a connectionist model the association strengths are related to the weights in the network and the cued recall probability is the probability of recall of a given target to a given cue.
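The first simulation of Section 3 can be re-sketched as follows. This is our own reimplementation, not the original code: the vector dimension, sparsity and the 0.9/0.6 scaled strengths follow the text, while the random seed, trial count and helper names are our assumptions. The weakly-but-doubly-associated target T2 should win the majority of joint-cue trials:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse(n=100, p=0.2):
    """Random binary vector with ~20% ones, as in the simulation."""
    return (rng.random(n) < p).astype(float)

def trial():
    c1, c2 = sparse(), sparse()
    t = [sparse() for _ in range(3)]                 # targets T1, T2, T3
    M = (0.9 * np.outer(c1, t[0]) + 0.6 * np.outer(c1, t[1])   # strong / weak
         + 0.6 * np.outer(c2, t[1]) + 0.9 * np.outer(c2, t[2]))
    out = (c1 + c2) @ M                              # additive two-cue access
    corr = [out @ v / (np.linalg.norm(out) * np.linalg.norm(v)) for v in t]
    return int(np.argmax(corr))                      # winner: strongest correlation

wins = [trial() for _ in range(100)]
share_t2 = wins.count(1) / len(wins)   # T2 (index 1) should dominate joint-cue recall
```

Because the scaled weak strengths sum to 1.2 against 0.9 for either strong associate, the composite retrieval favours T2 despite the cross-talk from non-orthogonal vectors.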
This paper builds on the idea that direct access is the default access method for human memory, and that all access processes are cue based. The immediate response from memory is a blend of patterns, which provides a useful intermediate stage. Other processes may act on the blend of patterns before a single target is selected for output in a successive stage. One such process that may act on the intermediate representation is an intersection process that operates over blends of targets. Such a process would provide an alternative to search as a computational technique in psychological paradigms that use two or more cues. We don't claim that we have described the way to implement such a process - much more is required to investigate these issues. The two methods presented here have served to demonstrate that direct access intersection is a viable neural network technique. This demonstration means that more processing can be performed in the network dynamics, rather than by the control structures that surround memory. Acknowledgements Our thanks to Anthony Bloesch, Michael Jordan, Julie Stewart, Michael Strasser and Roland Sussex for discussions and comments. This work was supported by grants from the Australian Research Council, a National Research Fellowship to J. Wiles and an Australian Postgraduate Research Award to S. Dennis. References Anderson, J.A., Silverstein, J.W., Ritz, S.A. and Jones, R.S. Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychological Review, 84, 413-451, 1977. Dolan, C. and Dyer, M.G. Parallel retrieval and application of conceptual knowledge. Proceedings of the 1988 Connectionist Models Summer School, San Mateo, CA: Morgan Kaufmann, 273-280, 1989. Humphreys, M.S., Bain, J.D. and Pike, R. Different ways to cue a coherent memory system: A theory for episodic, semantic and procedural tasks. Psychological Review, 96:2, 208-233, 1989.
Humphreys, M.S., Wiles, J. and Bain, J.D. Direct Access: Cues with separate histories. Paper presented at Attention and Performance 14, Ann Arbor, Michigan, July, 1990. McClelland, J.L. and Rumelhart, D.E. A distributed model of memory. In McClelland, J.L. and Rumelhart, D.E. (eds.) Parallel Distributed Processing: Explorations in the microstructure of cognition, 170-215, MIT Press, Cambridge, MA, 1986. Rubin, D.C. and Wallace, W.T. Rhyme and reason: Analysis of dual retrieval cues. Journal of Experimental Psychology: Learning, Memory and Cognition, 15:4, 698-709, 1989. Rumelhart, D.E. The architecture of mind: A connectionist approach. In Posner, M.I. (ed.) Foundations of Cognitive Science, 133-159, MIT Press, Cambridge, MA, 1989. Touretzky, D.S. and Hinton, G.E. A distributed connectionist production system. Cognitive Science, 12, 423-466, 1988. Wiles, J., Humphreys, M.S., Bain, J.D. and Dennis, S. Control processes and cue combinations in a connectionist model of human memory. Department of Computer Science Technical Report, #186, University of Queensland, October 1990, 40pp.
Flight Control in the Dragonfly: A Neurobiological Simulation William E. Faller and Marvin W. Luttges Aerospace Engineering Sciences, University of Colorado, Boulder, Colorado 80309-0429. ABSTRACT Neural network simulations of the dragonfly flight neurocontrol system have been developed to understand how this insect uses complex, unsteady aerodynamics. The simulation networks account for the ganglionic spatial distribution of cells as well as the physiologic operating range and the stochastic cellular firing history of each neuron. In addition the motor neuron firing patterns, "flight command sequences", were utilized. Simulation training was targeted against both the cellular and flight motor neuron firing patterns. The trained networks accurately resynthesized the intraganglionic cellular firing patterns. These in turn controlled the motor neuron firing patterns that drive wing musculature during flight. Such networks provide both neurobiological analysis tools and first generation controls for the use of "unsteady" aerodynamics. 1 INTRODUCTION Hebb (1949) proposed a theory of inter-neuronal learning, "Hebbian Learning", in which cells acting together as assemblies alter the efficacy of mutual interconnections. These neural "cell assemblies" presumably comprise the information processing "units" of the nervous system. To provide one framework within which to perform detailed analyses of these cellular organizational "rules", a new analytical technique based on neural networks is being explored. The neurobiological data analyzed was obtained from the neural cells of the dragonfly ganglia. The dragonfly's use of unsteady separated flows to generate highly maneuverable flight is governed by the control sequences that originate in the thoracic ganglia flight motor neurons (MN).
To provide this control the roughly 2200 cells of the meso- and metathoracic ganglia integrate environmental cues that include visual input, wind shear, velocity and acceleration. The cellular firing patterns coupled with proprioceptive feedback in turn drive elevator/depressor flight MNs which typically produce a 25-37 Hz wingbeat depending on the flight mode (Luttges 1989; Kliss 1989). The neural networks utilized in the analyses incorporate the spatial distribution of cells, the physiologic operating range of each neuron and the stochastic history of the cellular spike trains (Faller and Luttges 1990). The present work describes two neural networks. The simultaneous single-unit firing patterns at time (t) were used to predict the cellular firing patterns at time (t+Δ). And, the simultaneous single-unit firing patterns were used to "drive" flight-MN firing patterns at a 37 Hz wingbeat frequency. 2 METHODS 2.1 BIOLOGICAL DATA Recordings were obtained from the mesothoracic ganglion of the dragonfly Aeshna in the ganglionic regions known to contain the cell bodies of flight MNs as well as small and large cell bodies (Simmons 1977; Kliss 1989). Multiple-unit recordings from many cells (~40-80) were systematically decomposed to yield simultaneously active single-unit firing patterns. The technique has been described elsewhere (Faller and Luttges in press). During the recording of neural activity spontaneous flight episodes commonly occurred. These events were consistent with typical flight episodes (2-3 secs duration) observed in the tethered dragonfly (Somps and Luttges 1985). For analysis, a 12 second record was obtained from 58 single units, 26 rostral cells and 32 caudal cells. The continuous record was separated into 4 second behavioral epochs: pre-flight, flight and post-flight. A simplified model of one flight mode was assumed. Each forewing is driven by 3 main elevator and 2 main depressor muscles, innervated by 11 and 14 MNs, respectively.
A 37 Hz MN firing frequency, 3-5 spikes per output burst, and 180 degree phase shift between antagonistic MNs was assumed. Given the symmetrical nature of the elevator/depressor output patterns only the 11 elevator MNs were simulated. Prior to analysis the ganglionic spatial distribution of neurons was reconstructed. The importance of this is reserved for later discussion. A method has been described (Faller and Luttges submitted:a) that resolves the spatial distribution based on two distancing criteria: the amplitude ratio across electrodes and the spike angle (width) for each cell. Cells were sorted along a rostral, cell 1, to caudal, cell 58, continuum based on this information. The middle 2 seconds of the flight data was simulated. This was consistent with the known duration of spontaneous flight episodes. Within these 2 seconds, 44 cells remained active, 19 rostral and 25 caudal. The cell numbering (1-58) derived for the biological data was not altered. The remaining 14 inactive cells/units carry zeros in all analyses. 2.2 MIMICKING THE SINGLE CELLS Each neuron was represented by a unique unit that mimicked both the mean firing frequency and dynamic range of the physiologic cell. The activation value ranged from zero to twice the normalized mean firing frequency for each cell. The dynamic range was calculated as a unique thermodynamic profile for each sigmoidal activation function. The technique has been described fully elsewhere (Faller and Luttges 1990). 2.3 SPIKE TRAIN REPRESENTATION The spike trains and MN firing patterns were represented as iteratively continuous "analog" gradients (Faller and Luttges 1990 & submitted:b). Briefly, each spike train was represented in two dimensions based on the following assumptions: (1) the mean firing frequency reflects the inherent physiology of each cell and (2) the interspike intervals encode the information transferred to other cells.
Exponential functions were mapped into the intervals between consecutive spikes and these functions were then discretized to provide the spike train inputs to the neural network. These functions retain the exact spiking times and the temporal modulations (interval code) of cell firing histories. 2.4 ARCHITECTURE The two simulation architectures were as follows:

              Simulation 1               Simulation 2
Input layer   1 cell:1 unit (44 units)   1 cell:1 unit (44 units)
Hidden layer  1 cell:2 units (88 units)  1 cell:2 units (88 units)
Output layer  1 cell:1 unit (44 units)   11 main elevator MNs

The hidden units were recurrently connected and the interconnections between units were based on a 1st order exponential rise and decay. The general architecture has been described elsewhere (Faller and Luttges 1990). For the cell-to-cell simulation no bias units were utilized. Since the MNs fire both synchronously and infrequently, bias units were incorporated in the MN simulation. These units were constrained to function synchronously at the MN firing frequency. This constraining technique permitted the network to be trained despite the sparsity of the MN dataset. Training was performed using a supervised backpropagation algorithm in time. All 44 cells, 2000 points per discretized gradient (Δ=1 msec real-time), were presented synchronously to the network. The results were consistent for Δ=2-5 msec in all cases. The simulation paradigms were as follows:

               Simulation 1                   Simulation 2
Input          Neural activity at time (t)    Neural activity at time (t)
Output/Target  Neural activity at time (t+Δ)  MN activity at time (t)

Initial weights were random, between -0.3 and 0.3, and the learning rate was η=0.2. Training was performed until the temporal reproduction of cell spiking patterns was verified for all cells. Following training, the network was "run" with η = 0.
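The interval-coded "analog" gradient representation can be sketched as follows. This is an assumed reconstruction, not the authors' code: the function name spike_train_to_gradient, the time constant tau_ms, and the convention that each exponential peaks at exactly 1.0 at the next spike time are my own choices; the paper states only that exponential functions are mapped into interspike intervals and then discretized.

```python
import math

def spike_train_to_gradient(spike_times_ms, duration_ms, tau_ms=10.0, dt_ms=1.0):
    """Map each interspike interval onto an exponential that rises toward
    the next spike, peaking at value 1.0 exactly at the spike time, then
    discretize at dt_ms resolution. Exact spike times and the interval
    code are both preserved in the resulting signal."""
    n = int(duration_ms / dt_ms)
    signal = [0.0] * n
    for i in range(n):
        t = i * dt_ms
        # find the next spike at or after time t (None after the last spike)
        nxt = next((s for s in spike_times_ms if s >= t), None)
        if nxt is not None:
            signal[i] = math.exp(-(nxt - t) / tau_ms)
    return signal
```

For example, a train with spikes at 5 ms and 20 ms yields a signal that equals 1.0 at samples 5 and 20, decays to zero after the last spike, and rises exponentially through each interval.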
Sum squared errors for all units were calculated and normalized to an activation value of 0 to 1. The temporal reproduction of the output patterns was verified by linear correlation against the targeted spike trains. The "effective" contribution of each unit to the flight pattern was then determined by "lesioning" individual cells from the network prior to presenting the input pattern. The effects of lesioning were judged by the change in error relative to the unlesioned network. 3 RESULTS 3.1 CELL-TO-CELL SIMULATION Following training the complete pattern set was presented to the network, and the sum squared error was averaged over all units (Fig. 1). Clearly the network has a different "interpretation" of the data at certain time steps. This is due both to the omission/commission of spikes as well as timing errors. However, the data needed to reproduce overall cell firing patterns is clearly available.

[Figure 1: The network error (sum squared error vs. time in milliseconds)]

Unit sum squared errors were also averaged over the 2 second simulation (Fig. 2). Clearly the network predicted some unit/cell firing patterns more easily than others.

[Figure 2: The unit errors (error per unit, shown by cell number)]

The temporal reproduction of the cell firing patterns was verified by linear correlation between the network outputs and the biological spike train representations. If the network accurately reproduces the temporal history of the spike trains these functions should be identical, r=1 (Fig. 3). Clearly the network reproduces the temporal coding inherent within each spike train. The lowest correlation of roughly 0.85 is highly significant (p<0.01).
[Figure 3: The unit temporal errors (correlation per unit)]

One way to measure the relative importance of each unit/cell to the network is to omit/"lesion" each unit prior to presenting the cell firing patterns to the trained network. The data shown was collected by lesioning each unit individually (Fig. 4). The unlesioned network error is shown as the "0" cell. Overall the degradation of the network was minimal. Clearly some units provide more information to the network in reproducing the cell firing histories. Units that caused relatively large errors when "lesioned" were defined as primary units. The other units were defined as secondary units.

[Figure 4: Lesion studies (error per lesioned unit, shown by cell number)]

The primary units (cells) form what might classically be termed a central pattern generator. These units can provide a relatively gross representation of both cellular and MN firing patterns. The generation of dynamic cellular and MN firing patterns, however, is apparently dependent on both primary and secondary units. It appears that the generation of functional activity patterns within the ganglia is largely controlled by the dynamic interactions between large groups of cells, i.e., the "whole" network. This is consistent with other results derived from both neural network and statistical analyses of the biological data (Faller and Luttges 1990 & submitted:b). 3.2 MOTOR NEURON FIRING PATTERNS The 44 cellular firing patterns were then used to drive the MN firing patterns. Following training, the cell firing pattern set was presented to the network and the sum squared error was averaged over the output MNs (Fig. 5). The error in this case oscillates in time at the wingbeat frequency of 37 Hz. As will be shown, however, this is an artifact and the network does accurately drive the MNs.
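The functional lesioning procedure described above can be illustrated with a toy network: clamp one unit's activity to zero, re-run the forward pass, and compare the resulting error against the unlesioned error. Everything here (the linear toy "network" and the function names) is hypothetical scaffolding for illustration, not the paper's trained recurrent network.

```python
def network_output(inputs, weights, lesioned=frozenset()):
    """Toy linear 'network': the output is a weighted sum of unit
    activities. A lesioned unit's activity is clamped to zero first."""
    return sum(w * (0.0 if i in lesioned else x)
               for i, (x, w) in enumerate(zip(inputs, weights)))

def lesion_errors(inputs, weights, target):
    """Squared error of the unlesioned net, then with each unit removed
    in turn. Units whose removal causes a large error increase would be
    labeled 'primary'; the rest 'secondary'."""
    base = (network_output(inputs, weights) - target) ** 2
    per_unit = [
        (network_output(inputs, weights, lesioned={i}) - target) ** 2
        for i in range(len(inputs))
    ]
    return base, per_unit
```

With three equally active units but unequal weights, lesioning the heavily weighted unit degrades the output far more than lesioning the others, which is exactly the primary/secondary distinction drawn in the text.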
[Figure 5: The network error (sum squared error vs. time in milliseconds)]

For each MN the sum squared error was also averaged over the 2 second simulation (Fig. 6). Clearly individual MNs contribute nearly equally to the network error.

[Figure 6: The unit errors (error per motor neuron number)]

The temporal reproduction of the MN firing patterns was verified by linear correlation between the output and targeted MN firing patterns of the network. This is shown in Fig. 7. Clearly the cell inputs to the network have the spiking characteristics needed for driving the temporal firing sequences of the MNs innervating the wing musculature. All correlations are roughly 0.80, highly significant (p<0.01). The output for one MN is shown relative to the targeted MN output in Fig. 8. Clearly the network does drive the MNs correctly.

[Figure 7: The unit temporal errors (correlation per motor neuron number)]

[Figure 8: The MN firing patterns (network output vs. targeted motor neuron output, time in milliseconds)]

3.3 SUMMARY The results indicate that synthetic networks can learn and then synthesize patterns of neural spiking activity needed for biological function. In this case, cell and MN firing patterns occurring in the dragonfly ganglia during a spontaneous flight episode. 4 DISCUSSION Recordings from more than 50 spatially unique cells that reflect the complex network characteristics of a small, intact neural tissue were used to successfully train two neural networks.
Unit sum squared errors were less than 0.003 and spike train temporal histories were accurately reproduced. There was little evidence for unexpected "cellular behavior". Functional lesioning of single units in the network caused minimal degradation of network performance; however, some lesioned cells were more important than others to overall network performance. The capability to lesion cells permitted the contribution of individual cells to the production of the flight rhythm to be determined. The detection of primary and secondary cells underlying the dynamic generation of both cellular and MN firing patterns is one example. Such results may encourage neurobiologists to adopt neural networks as effective analytical tools with which to study and analyze spike train data. Clearly the solution arrived at is not the biological one. However, the networks do accurately predict the future cell firing patterns based on past firing history information. It is asserted that the network must therefore contain the majority of information required to resolve biological cell interactions during flight in the dragonfly. A sample of 58 ganglionic cells was utilized; the remaining cells' functional contributions are presumably statistically accounted for by this small sampling. The inherent "information" of the biological network is presumably stored in the weight matrices as a generalized statistical representation of the "rules" through which cells participate in biological assemblies. Analyses of the weight matrices in turn may permit the operational "rules" of cell assemblies to be defined. Questions about the effects of cell size, the spatial architecture of the network and the temporal interactions between cells as they relate to cell assembly function can be addressed. For this reason the individuality of cells,
the spatial architecture and the stochastic cellular firing histories of the individual cells were retained within the network architectures utilized. Crucial to these analyses will be methods that permit direct, time-incrementing evaluations of the weight matrices following training. Biological nervous system function can now be analyzed from two points of view: direct analyses of the biological data and indirect, but potentially more approachable, analyses of the weight matrices from trained neural networks such as the ones described. REFERENCES Faller WE, Luttges MW (1990) A Neural Network Simulation of Simultaneous Single-Unit Activity Recorded from the Dragonfly Ganglia. ISA Paper #90-033 Faller WE, Luttges MW (in press) Recording of Simultaneous Single-Unit Activity in the Dragonfly Ganglia. J Neurosci Methods Faller WE, Luttges MW (submitted:a) Spatiotemporal Analysis of Simultaneous Single-Unit Activity in the Dragonfly: I. Cellular Activity Patterns. Biol Cybern Faller WE, Luttges MW (submitted:b) Spatiotemporal Analysis of Simultaneous Single-Unit Activity in the Dragonfly: II. Network Connectivity. Biol Cybern Hebb DO (1949) The Organization of Behavior: A Neuropsychological Theory. Wiley, New York; Chapman and Hall, London Kliss MH (1989) Neurocontrol Systems and Wing-Fluid Interactions Underlying Dragonfly Flight. Ph.D. Thesis, University of Colorado, Boulder, pp 70-80 Luttges MW (1989) Accomplished Insect Fliers. In: Gad-el-Hak M (ed) Frontiers in Experimental Fluid Mechanics. Springer-Verlag, Berlin Heidelberg, pp 429-456 Simmons P (1977) The Neuronal Control of Dragonfly Flight I. Anatomy. J Exp Biol 71:123-140 Somps C, Luttges MW (1985) Dragonfly flight: Novel uses of unsteady separated flows. Science 228:1326-1329 Part IX Applications
A Framework for the Cooperation of Learning Algorithms Leon Bottou Patrick Gallinari Laboratoire de Recherche en Informatique Universite de Paris XI 91405 Orsay Cedex France Abstract We introduce a framework for training architectures composed of several modules. This framework, which uses a statistical formulation of learning systems, provides a unique formalism for describing many classical connectionist algorithms as well as complex systems where several algorithms interact. It allows the design of hybrid systems which combine the advantages of connectionist algorithms with those of other learning algorithms. 1 INTRODUCTION Many recent achievements in the connectionist area have been carried out by designing systems where different algorithms interact. For example (Bourlard & Morgan, 1991) have mixed a Multi-Layer Perceptron (MLP) with a Dynamic Programming algorithm. Another impressive application (Le Cun, Boser & al., 1990) uses a very complex multilayer architecture, followed by some statistical decision process. Also, in speech or image recognition systems, input signals are sequentially processed through different modules. Modular systems are the most promising way to achieve such complex tasks. They can be built using simple components and therefore can be easily modified or extended; they also allow structural a priori knowledge about the task decomposition to be incorporated into their architecture. Of course, this is also true for connectionism, and important progress in this field could be achieved if we were able to train multi-module architectures. In this paper, we introduce a formal framework for designing and training such cooperative systems. It provides a unique formalism for describing both the different modules and the global system. We show that it is suitable for many connectionist algorithms, which allows them to cooperate in an optimal way according to the goal of learning.
It also allows the training of hybrid systems where connectionist and classical algorithms interact. Our formulation is based on a probabilistic approach to the problem of learning which is described in section 2. One of the advantages of this approach is to provide a formal definition of the goal of learning. In section 3, we introduce modular architectures where each module can be described using this framework, and we derive explicit formulas for training the global system through a stochastic gradient descent algorithm. Section 4 is devoted to examples, including the case of hybrid algorithms combining MLP and Learning Vector Quantization (Bollivier, Gallinari & Thiria, 1990). 2 LEARNING SYSTEMS The probabilistic formulation of the problem of learning has been extensively studied for three decades (Tsypkin 1971), and applied to control, pattern recognition and adaptive signal processing. We recall here the main ideas and refer to (Tsypkin 1971) for a detailed presentation. 2.1 EXPECTED COST Let x be an instance of the concept to learn. In the case of a pattern recognition problem for example, x would be a pair (pattern, class). The concept is mathematically defined by an unknown probability density function p(x) which measures the likelihood of instance x. We shall use a system parameterized by w to perform some task that depends on p(x). Given an example x, we can define a local cost, J(x,w), that measures how well our system behaves on that example. For instance, for classification J would be zero if the system puts a pattern in the correct class, or positive in case of misclassification. Learning consists in finding a parameter w* that optimizes some functional of the model parameters. For instance, one would like to minimize the expected cost (1):

    C(w) = ∫ J(x,w) p(x) dx    (1)

The expected cost cannot be explicitly computed, because the density p(x) is unknown. Our only knowledge of the process comes from a series of observations {x1 ...
xn} drawn from the unknown density p(x). Therefore, the quality of our system can only be measured through the realisations J(x,w) of the local cost function for the different observations. 2.2 STOCHASTIC GRADIENT DESCENT Gradient descent algorithms are the simplest minimization algorithms. We cannot, however, compute the gradient of the expected cost (1), because p(x) is unknown. Estimating these derivatives on a training set {x1 ... xn} gives the gradient algorithm (2), where ∇J denotes the gradient of J(x,w) with respect to w, and εt a small positive constant, the "learning rate":

    w(t+1) = w(t) - εt (1/n) Σi=1..n ∇J(xi, w(t))    (2)

The stochastic gradient descent algorithm (3) is an alternative to algorithm (2). At each iteration, an example xt is drawn at random, and a new value of w is computed:

    w(t+1) = w(t) - εt ∇J(xt, w(t))    (3)

Algorithm (3) is faster and more reliable than (2); it is the only solution for training adaptive systems like Neural Networks (NN). Such stochastic approximations have been extensively studied in adaptive signal processing (Benveniste, Metivier & Priouret, 1987; Ljung & Soderstrom, 1983). Under certain conditions, algorithm (3) converges almost surely (Bottou, 1991; White, 1991) and allows the system to reach an optimal state. 3 MODULAR LEARNING SYSTEMS Most often, when the goal of learning is complex, it can be achieved more easily by using a decomposition of the global task into several simpler subtasks which for instance reflect some a priori knowledge about the structure of the problem. One can use this decomposition to build modular architectures where each module will correspond to one of the subtasks. Within this framework, we will use the expected risk (1) as the goal of learning. The problem now is to change the analytical formulation of the functional (1) so as to introduce the modular decomposition of the global task.
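A minimal sketch of algorithms (2) and (3), under the assumption of a scalar quadratic local cost J(x,w) = (w - x)^2, whose expected cost is minimized at the mean of the unknown density. The sample data, the decreasing learning-rate schedule and the function names are illustrative choices of mine, not from the paper.

```python
import random

def local_cost_grad(x, w):
    return 2.0 * (w - x)          # dJ/dw for J(x, w) = (w - x)^2

def batch_step(samples, w, lr):
    """Algorithm (2): gradient averaged over the whole training set."""
    g = sum(local_cost_grad(x, w) for x in samples) / len(samples)
    return w - lr * g

def stochastic_step(samples, w, lr, rng):
    """Algorithm (3): one randomly drawn example per parameter update."""
    return w - lr * local_cost_grad(rng.choice(samples), w)

# Run the stochastic version with a decreasing learning rate; w should
# approach the sample mean (here, samples drawn around 3.0).
rng = random.Random(0)
samples = [rng.gauss(3.0, 1.0) for _ in range(500)]
w = 0.0
for t in range(2000):
    w = stochastic_step(samples, w, lr=1.0 / (t + 10), rng=rng)
```

The batch version takes one well-averaged step per pass over the data; the stochastic version takes many noisy steps, which is what makes it practical for adaptive systems.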
In (1), the analytic expression of the local cost J(x,w) has two meanings: it describes a parametric relationship between the inputs and the outputs of the system, and measures the quality of the system. To introduce the decomposition, one may write this local cost J(x,w) as the composition of several functions. One of them will take into account the local error and therefore measure the quality of the system; the others will correspond to the decomposition of the parametric relationship between the inputs and the outputs of the system (Figure 1). Each of the modules will therefore receive some inputs from other modules or the external world and produce some outputs which will be sent to other modules.

[Figure 1: A modular system]

In classical systems, these modules correspond to well defined processing stages like e.g. signal processing, filtering, feature extraction, classification. They are trained sequentially and then linked together to build a complete processing system which takes some inputs (e.g. raw signals) and produces some outputs (e.g. classes). Neither the assumed decomposition, nor the behavior of the different modules, is guaranteed to optimally contribute to the global goal of learning. We will show in the following that it is possible to optimally train such systems. 3.1 TRAINING MODULAR SYSTEMS Each function in the above composition defines a local processing stage or module whose outputs are defined by a parametric function of its inputs (4):

    ∀ j ∈ Y⁻¹(n),  yj = fj( (xk) k∈X⁻¹(n) , (wi) i∈W⁻¹(n) )    (4)

Y⁻¹(n) (resp. X⁻¹(n) and W⁻¹(n)) denotes the set of subscripts associated to the outputs y (resp. inputs x and parameters w) of module n. Conversely, output yj (resp. input xk and parameter wi) belongs to module Y(j) (resp. X(k) and W(i)). Modules are linked so as to build a feed-forward topology which is expressed by a function φ:

    ∀ k,  xk = yφ(k)    (5)

We shall consider that the first module only feeds the system with examples and that the last module only computes ylast = J(x,w). Following (Le Cun, 1988), we can compute the derivatives of J with a Lagrangian method. Let α and β be the Lagrange coefficients for constraints (4) and (5):

    L = J - Σk βk (xk - yφ(k)) - Σj αj ( yj - fj( (xk) k∈X⁻¹Y(j) , (wi) i∈W⁻¹Y(j) ) )    (6)

By equating the derivatives of L with respect to x and y to zero, we get recursive formulas for computing α and β in a single backward pass along the acyclic graph φ.
V'k, xk = Y~(k) (5) We shall consider that the first module only feeds the system with examples and that the last module only computes Ylast = J(x,w). Following (Le Cun. 1988). we can compute the derivatives of J with a Lagrangian method. Let a and ~ be the Lagrange coefficients for constraints (4) and (5). L = J -L ~k(Xk-Y~(k)) - L aj (Yr!j( (Xk) ke X-1Y(j), (Wi) ie W-1Y(j) )) (6) k j By equating the derivatives of L with respect to x and Y to zero. we get recursive formulas for computing a and ~ in a single backward pass along the acyclic graph cj). A Framework for the Cooperation of Learning Algorithms 785 alast = 1, Then, the derivatives of J with respect to the weights are: dJ dL -(w) = -(aRw) = dwi dwi ,.." d I: ~ a' :LJ.. £.J J :l. ... je y-1W{i) UYVI (7) (8) Once we have computed the derivatives of the local cost J(x,w), we can apply the stochastic gradient descent algorithm (3) for minimizing of the expected cost C(w). We shall say that each module is defined by the equations in (7) and (8) that characterize its behavior. These equations are: • a forward equation (F) • a backward equation (B) • a gradient equation (G) Yj = fj( (xl<) keX-1(n) ,(Wi) ieW1(n) ) . Ell ~= L a J dXk jeY-'X(k) dJ a/: ~i=-.= Laj ~ dwl je Y-'W(i) awl The remaining equations do not depend on the nature of the modules. They describe how modules interact during training. Like back-propagation, they address the credit assignment problem between modules by globally minimizing a single cost function. Training such a complex system actually consists in cooperatively training its components. 4 EXAMPLES Most learning algorithms, as well as new algorithms may be expressed as modular learning systems. Here are some simple examples of modules and systems. 4.1 LINEAR AND QUASI-LINEAR SYSTEMS MODULE SYMBOL FORWARD BACKWARD GRADIENT Matrix product Wx Yi-tWikXk ~k'""L<Xjwik ~ik=aixk i Mean square error MSE J. 
t{dk'Xk)2 ~k=-2 (dk-xk) Perceptron error Perceptron J.-t(dk-1 9t+(Xk»Xk ~k-- (dk-19t+(Xk» Sigmoid sigmo'id Yk·f(Xk) ~k,""f'(Xk)ak A few basic modules are defined in the above table. Figure 2 gives examples of linear and quasi linear algorithms derived by combining these modules. 786 Bottou and Gallinari ( W x r.r MSE r J L Examples --1 ~~ perceptroj'" J L Examples ---' ( Wx H sigmo'(d H Wx H sigmoId H MSE r J L 1 ExalT1lles _____________ ..1 Figure 2: An Adaline, a Perceptron, and a 2-Layer Perceptron. Some MLP architectures, Time Delay Networks for instance, use local connections and shared weights. Such complex architectures may be constructed by defining either quasilinear unit modules or complex matrix operations modules like convolutions. The latter solution leads to more efficient implementations. Figure 3 gives an example of convolution module, composed of several matrix products modules. w I I - &.. ( Convolve ) -. Yk Xk Yk I I Figure 3: A convolution module, composed of several matrix product modules. 4 ° 2 EUCLIDIAN DISTANCE BASED ALGORITHMS A wide class of learning systems are based on the measure of euclidian distances. Again, defining an euclidian distance module and some adequate cost functions allows for handling most euclidian distance based algorithms. Here are some examples: MODULE SYMBOL FORWARD BACKWARD GRADIENT Euclidian distance (x-w)2 ~k=-2tUj(Wjk-Xk) Ajk=2Uj(Wjk-Xk) Minimum Min ~ko-1, ~k,oko-O LVQ 1 error LVQ1 If the nearest reference Xk· is associated to the correct class J - Xko =Min{xiJ ~ko .1, ~k,ok·-O else J - -Xko =-Min{xiJ ~ko -1, ~k,ok°.O Combining an euclidian distance module with a "minimum" error module gives a Kmeans algorithm; combining it with a LVQI error module gives the LVQI algorithm (Figure 4). A Framework for the Cooperation of Learning Algorithms 787 t~~J Examples .... J Figure 4: K-Means and Learning Vector Quantization. 
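The forward/backward/gradient module equations can be exercised with a tiny chain of a matrix product module and an MSE module, following the table above. The class and variable names are my own sketch, not the authors' code; only the F/B/G formulas come from the text.

```python
# Minimal sketch of the module formalism: each module implements a
# forward map, a backward map turning output sensitivities (alpha) into
# input sensitivities (beta), and a gradient map for its own weights.

class MatrixProduct:
    def __init__(self, w):          # w[i][k]: weight from input k to output i
        self.w = w

    def forward(self, x):           # (F): y_i = sum_k w_ik x_k
        self.x = x
        return [sum(wi[k] * x[k] for k in range(len(x))) for wi in self.w]

    def backward(self, alpha):      # (B): beta_k = sum_i alpha_i w_ik
        return [sum(alpha[i] * self.w[i][k] for i in range(len(alpha)))
                for k in range(len(self.x))]

    def gradient(self, alpha):      # (G): gamma_ik = alpha_i x_k
        return [[alpha[i] * xk for xk in self.x] for i in range(len(alpha))]

class MSE:
    def __init__(self, d):
        self.d = d

    def forward(self, x):           # (F): J = sum_k (d_k - x_k)^2
        self.x = x
        return sum((dk - xk) ** 2 for dk, xk in zip(self.d, x))

    def backward(self):             # (B): beta_k = -2 (d_k - x_k); alpha_last = 1
        return [-2.0 * (dk - xk) for dk, xk in zip(self.d, self.x)]

# Chain the two modules: forward pass, then one backward pass.
net = MatrixProduct([[0.5, -1.0], [2.0, 0.0]])
cost = MSE(d=[1.0, 0.0])
y = net.forward([1.0, 2.0])
J = cost.forward(y)
alpha = cost.backward()             # sensitivities of J w.r.t. net outputs
beta = net.backward(alpha)          # sensitivities of J w.r.t. net inputs
grad_w = net.gradient(alpha)        # derivatives for the weight update (8)
```

The single backward pass delivers both the input sensitivities (to be passed to an upstream module) and the weight gradient needed by the stochastic descent of algorithm (3).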
4.3 HYBRID ALGORITHMS Hybrid algorithms which may combine classical and connectionist learning algorithms are easily defined by chaining appropriate modules. Figure 5, for instance, depicts an algorithm combining a MLP layer and LVQ1. This algorithm has been described and empirically compared to other pattern recognition algorithms in (Bollivier, Gallinari & Thiria, 1990).

[Figure 5: A hybrid algorithm combining a MLP and LVQ: Wx → sigmoid → (w-x)² → LVQ1, over the examples.]

Cooperative training gives a framework and a possible implementation for such algorithms. Nevertheless, there are still specific problems (e.g. convergence, initialization) which require a careful study. More complex hybrid systems, including combinations of Markov Models and Time Delay Networks, have been described within this framework in (Bottou, 1991). 5 CONCLUSION Cooperative training of modular systems provides a unified view of many learning algorithms, as well as hybrid systems which combine classical or connectionist algorithms. Our formalism provides a way to define specific modules and to combine them into a cooperative system. This allows the design and implementation of complex learning systems which eventually incorporate structural a priori knowledge about the task. Acknowledgements During this work, L.B. was supported by DRET grant no. 87/808/19. References Benveniste A., Metivier M., Priouret P. (1987) Algorithmes adaptatifs et approximations stochastiques, Masson. Bollivier M. de, Gallinari P. & Thiria S. (1990) Cooperation of neural nets for robust classification, Proceedings of IJCNN 90, San Diego, vol 1, 113-120. Bottou L. (1991) Une approche théorique de l'apprentissage connexionniste; applications à la reconnaissance de la parole. PhD Thesis, Universite de Paris XI. Bourlard H., Morgan N. (1991) A Continuous Speech Recognition System Embedding MLP into HMM. In Touretzky D.S., Lippmann R. (eds.)
Advances in Neural Information Processing Systems 3 (this volume), Morgan Kaufmann. Le Cun Y. (1988) A theoretical framework for back-propagation. In Touretzky D., Hinton G. & Sejnowski T. (eds.) Proceedings of the 1988 Connectionist Models Summer School, 21-28, Morgan Kaufmann. Le Cun Y., Boser B., & al. (1990) Handwritten Digit Recognition with a Back-Propagation Network. In D. Touretzky (ed.) Advances in Neural Information Processing Systems 2, 396-404, Morgan Kaufmann. Ljung L. & Soderstrom T. (1983) Theory and Practice of Recursive Identification. MIT Press. Tsypkin Ya. (1971) Adaptation and Learning in Automatic Systems. Mathematics in science and engineering, vol 73, Academic Press. White H. (1991) An Overview of Representation and Convergence Results for Multilayer Feed-forward Networks. In Touretzky D.S., Lippmann R. (eds.) Advances in Neural Information Processing Systems 3 (this volume), Morgan Kaufmann.
1990
Continuous Speech Recognition by Linked Predictive Neural Networks Joe Tebelskis, Alex Waibel, Bojan Petek, and Otto Schmidbauer School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract We present a large vocabulary, continuous speech recognition system based on Linked Predictive Neural Networks (LPNN's). The system uses neural networks as predictors of speech frames, yielding distortion measures which are used by the One Stage DTW algorithm to perform continuous speech recognition. The system, already deployed in a Speech to Speech Translation system, currently achieves 95%, 58%, and 39% word accuracy on tasks with perplexity 5, 111, and 402 respectively, outperforming several simple HMMs that we tested. We also found that the accuracy and speed of the LPNN can be slightly improved by the judicious use of hidden control inputs. We conclude by discussing the strengths and weaknesses of the predictive approach. 1 INTRODUCTION Neural networks are proving to be useful for difficult tasks such as speech recognition, because they can easily be trained to compute smooth, nonlinear, nonparametric functions from any input space to output space. In speech recognition, the function most often computed by networks is classification, in which spectral frames are mapped into a finite set of classes, such as phonemes. In theory, classification networks approximate the optimal Bayesian discriminant function [1], and in practice they have yielded very high accuracy [2, 3, 4]. However, integrating a phoneme classifier into a speech recognition system is nontrivial, since classification decisions tend to be binary, and binary phoneme-level errors tend to confound word-level hypotheses. To circumvent this problem, neural network training must be carefully integrated into word level training [1, 5]. 
An alternative function which can be computed by networks is prediction, where spectral frames are mapped into predicted spectral frames. This provides a simple way to get non-binary distortion measures, with straightforward integration into a speech recognition system. Predictive networks have been used successfully for small vocabulary [6, 7] and large vocabulary [8, 9] speech recognition systems. In this paper we describe our prediction-based LPNN system [9], which performs large vocabulary continuous speech recognition, and which has already been deployed within a Speech to Speech Translation system [10]. We present our experimental results, and discuss the strengths and weaknesses of the predictive approach. 2 LINKED PREDICTIVE NEURAL NETWORKS The LPNN system is based on canonical phoneme models, which can be logically concatenated in any order (using a "linkage pattern") to create templates for different words; this makes the LPNN suitable for large vocabulary recognition. Each canonical phoneme is modeled by a short sequence of neural networks. The number of nets in the sequence, N >= 1, corresponds to the granularity of the phoneme model. These phone modeling networks are nonlinear, multilayered, feedforward, and "predictive" in the sense that, given a short section of speech, the networks are required to extrapolate the raw speech signal, rather than to classify it. Thus, each predictive network produces a time-varying model of the speech signal which will be accurate in regions corresponding to the phoneme for which that network has been trained, but inaccurate in other regions (which are better modeled by other networks). Phonemes are thus "recognized" indirectly, by virtue of the relative accuracies of the different predictive networks in various sections of speech. Note, however, that phonemes are not classified at the frame level.
Instead, continuous scores (prediction errors) are accumulated for various word candidates, and a decision is made only at the word level, where it is finally appropriate. 2.1 TRAINING AND TESTING ALGORITHMS The purpose of the training procedure is both (a) to train the networks to become better predictors, and (b) to cause the networks to specialize on different phonemes. Given a known training utterance, the training procedure consists of three steps: 1. Forward Pass: All the networks make their predictions across the speech sample, and we compute the Euclidean distance matrix of prediction errors between predicted and actual speech frames. (See Figure 1.) 2. Alignment Step: We compute the optimal time-alignment path between the input speech and corresponding predictor nets, using Dynamic Time Warping. 3. Backward Pass: Prediction error is backpropagated into the networks according to the segmentation given by the alignment path. (See Figure 2.) Hence backpropagation causes the nets to become better predictors, and the alignment path induces specialization of the networks for different phonemes. Testing is performed using the One Stage algorithm [11], which is a classical extension of the Dynamic Time Warping algorithm for continuous speech.

Figure 1: The forward pass during training. Canonical phonemes are modeled by sequences of N predictive networks, shown as triangles (here N=3). Words are represented by "linkage patterns" over these canonical phoneme models (shown in the area above the triangles), according to the phonetic spelling of the words. Here we are training on the word "ABA".
In the forward pass, prediction errors (shown as black circles) are computed for all predictors, for each frame of the input speech. As these prediction errors are routed through the linkage pattern, they fill a distance matrix (upper right).

Figure 2: The backward pass during training. After the DTW alignment path has been computed, error is backpropagated into the various predictors responsible for each point along the alignment path. The backpropagated error signal at each such point is the vector difference between the predicted and actual frame. This teaches the networks to become better predictors, and also causes the networks to specialize on different phonemes.

3 RECOGNITION EXPERIMENTS We have evaluated the LPNN system on a database of continuous speech recorded at CMU. The database consists of 204 English sentences using a vocabulary of 402 words, comprising 12 dialogs in the domain of conference registration. Training and testing versions of this database were recorded in a quiet office by multiple speakers for speaker-dependent experiments. Recordings were digitized at a sampling rate of 16 kHz. A Hamming window and an FFT were computed, to produce 16 mel-scale spectral coefficients every 10 msec. In our experiments we used 40 context-independent phoneme models (including one for silence), each of which had a 6-state phoneme topology similar to the one used in the SPICOS system [12].
Figure 3: Actual and predicted spectrograms.

Figure 3 shows the result of testing the LPNN system on a typical sentence. The top portion is the actual spectrogram for this utterance; the bottom portion shows the frame-by-frame predictions made by the networks specified by each point along the optimal alignment path. The similarity of these two spectrograms indicates that the hypothesis forms a good acoustic model of the unknown utterance (in fact the hypothesis was correct in this case). In our speaker-dependent experiments using two male speakers, our system averaged 95%, 58%, and 39% word accuracy on tasks with perplexity 5, 111, and 402 respectively.
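The forward pass and alignment step of Section 2.1 can be sketched in a few lines. This is a minimal illustration on random data: the linear "predictors", the array shapes, and the simple stay-or-advance transition rule are all assumptions for brevity (the real LPNN uses nonlinear networks and the One Stage DTW over word linkage patterns).

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, dim, n_states = 30, 16, 5
speech = rng.normal(size=(n_frames, dim))        # 16 coefficients per frame
W = rng.normal(size=(n_states, dim, dim)) * 0.1  # one linear "predictor" per state

# Forward pass: Euclidean distance matrix of prediction errors.
# dist[t, s] = error of predictor s extrapolating frame t+1 from frame t.
dist = np.stack([((speech[:-1] @ Wk - speech[1:]) ** 2).sum(axis=1) for Wk in W],
                axis=1)                          # shape (n_frames - 1, n_states)

def dtw_align(dist):
    """Alignment step: cheapest monotonic path through the distance matrix
    (each frame either stays in the current state or advances to the next)."""
    T, S = dist.shape
    cost = np.full((T, S), np.inf)
    back = np.zeros((T, S), dtype=int)
    cost[0, 0] = dist[0, 0]
    for t in range(1, T):
        for s in range(S):
            stay = cost[t - 1, s]
            adv = cost[t - 1, s - 1] if s > 0 else np.inf
            cost[t, s] = dist[t, s] + min(stay, adv)
            back[t, s] = int(adv < stay)
    path, s = [], S - 1                          # backtrack from the last state
    for t in range(T - 1, -1, -1):
        path.append(s)
        s -= back[t, s]
    return cost[-1, -1], path[::-1]

total_error, path = dtw_align(dist)              # path[t] = state aligned to frame t
# The backward pass would then backpropagate each aligned frame's prediction
# error into the predictor responsible for that point on the path.
```

The returned path segments the utterance, which is exactly what step 3 needs to decide which network receives the error at each frame.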
In order to confirm that the predictive networks were making a positive contribution to the overall system, we performed a set of comparisons between the LPNN and several pure HMM systems. When we replaced each predictive network by a univariate Gaussian whose mean and variance were determined analytically from the labeled training data, the resulting HMM achieved 44% word accuracy, compared to 60% achieved by the LPNN under the same conditions (single speaker, perplexity 111). When we also provided the HMM with delta coefficients (which were not directly available to the LPNN), it achieved 55%. Thus the LPNN was outperforming each of these simple HMMs.

4 HIDDEN CONTROL EXPERIMENTS In another series of experiments, we varied the LPNN architecture by introducing hidden control inputs, as proposed by Levin [7]. The idea, illustrated in Figure 4, is that a sequence of independent networks is replaced by a single network which is modulated by an equivalent number of "hidden control" input bits that distinguish the state.

Figure 4: A sequence of networks corresponds to a single Hidden Control network.

A theoretical advantage of hidden control architectures is that they reduce the number of free parameters in the system. As the number of networks is reduced, each one is exposed to more training data, and - up to a certain point - generalization may improve. The system can also run faster, since partial results of redundant forward pass computations can be saved. (Notice, however, that the total number of forward passes is unchanged.) Finally, the savings in memory can be significant. In our experiments, we found that by replacing 2-state phoneme models by equivalent Hidden Control networks, recognition accuracy improved slightly and the system ran much faster.
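The hidden control idea can be made concrete with a small sketch: instead of several separate predictor networks, one shared network receives the frame plus one-hot "hidden control" bits that select the state. The function name, all sizes, and the random (untrained) weights below are illustrative assumptions, not the paper's trained system.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hidden, n_states = 16, 20, 3       # assumed sizes

# One shared network; the state is selected by appending one-hot
# "hidden control" bits to the input (weights are random stand-ins).
W1 = rng.normal(size=(dim + n_states, hidden)) * 0.1
W2 = rng.normal(size=(hidden, dim)) * 0.1

def hc_predict(frame, state):
    control = np.eye(n_states)[state]   # hidden control input bits
    x = np.concatenate([frame, control])
    return np.tanh(x @ W1) @ W2         # predicted next frame

frame = rng.normal(size=dim)
preds = [hc_predict(frame, s) for s in range(n_states)]
# one weight set now plays the role of n_states separate state predictors
```

Because the weights are shared, every training frame updates the same parameters, which is exactly the source of the generalization and memory savings discussed above.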
On the other hand, when we replaced all of the phonemic networks in the entire system by a single Hidden Control network (whose hidden control inputs represented the phoneme as well as its state), recognition accuracy degraded significantly. Hence, hidden control may be useful, but only if it is used judiciously.

5 CURRENT LIMITATIONS OF PREDICTIVE NETS While the LPNN system is good at modeling the acoustics of speech, it presently tends to suffer from poor discrimination. In other words, for a given segment of speech, all of the phoneme models tend to make similarly good predictions, rendering all phoneme models fairly confusable. For example, Figure 5 shows an actual spectrogram and the frame-by-frame predictions made by the /eh/ model and the /z/ model. Disappointingly, both models are fairly accurate predictors for the entire utterance.

Figure 5: Actual spectrogram, and corresponding predictions by the /eh/ and /z/ phoneme models.

This problem arises because each predictor receives training in only a small region of input acoustic space (i.e., those frames corresponding to that phoneme). Consequently, when a predictor is shown any other input frames, it will compute an undefined output, which may overlap with the outputs of other predictors. In other words, the predictors are currently only trained on positive instances, because it is not obvious what predictive output target is meaningful for negative instances; and this leads to problematic "undefined regions" for the predictors. Clearly some type of discriminatory training technique should be introduced, to yield better performance in prediction based recognizers.
6 CONCLUSION We have studied the performance of Linked Predictive Neural Networks for large vocabulary, continuous speech recognition. Using a 6-state phoneme topology, without duration modeling or other optimizations, the LPNN achieved an average of 95%, 58%, and 39% accuracy on tasks with perplexity 5, 111, and 402, respectively. This was better than the performance of several simple HMMs that we tested. Further experiments revealed that the accuracy and speed of the LPNN can be slightly improved by the judicious use of hidden control inputs. The main advantages of predictive networks are that they produce non-binary distortion measures in a simple and elegant way, and that by virtue of their nonlinearity they can model the dynamic properties of speech (e.g., curvature) better than linear predictive models [13]. Their main current weakness is that they have poor discrimination, since their strictly positive training causes them all to make confusably accurate predictions in any context. Future research should concentrate on improving the discriminatory power of the LPNN, by such techniques as corrective training, explicit context dependent phoneme modeling, and function word modeling.

Acknowledgements The authors gratefully acknowledge the support of DARPA, the National Science Foundation, ATR Interpreting Telephony Research Laboratories, and NEC Corporation. B. Petek also acknowledges support from the University of Ljubljana and the Research Council of Slovenia. O. Schmidbauer acknowledges support from his employer, Siemens AG, Germany.

References [1] H. Bourlard and C. J. Wellekens. Links Between Markov Models and Multilayer Perceptrons. Pattern Analysis and Machine Intelligence, 12:12, December 1990. [2] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech, and Signal Processing, March 1989.
[3] M. Miyatake, H. Sawai, and K. Shikano. Integrated Training for Spotting Japanese Phonemes Using Large Phonemic Time-Delay Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990. [4] E. McDermott and S. Katagiri. Shift-Invariant, Multi-Category Phoneme Recognition using Kohonen's LVQ2. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1989. [5] P. Haffner, M. Franzini, and A. Waibel. Integrating Time Alignment and Connectionist Networks for High Performance Continuous Speech Recognition. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1991. [6] K. Iso and T. Watanabe. Speaker-Independent Word Recognition Using a Neural Prediction Model. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990. [7] E. Levin. Speech Recognition Using Hidden Control Neural Network Architecture. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990. [8] J. Tebelskis and A. Waibel. Large Vocabulary Recognition Using Linked Predictive Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990. [9] J. Tebelskis, A. Waibel, B. Petek, and O. Schmidbauer. Continuous Speech Recognition Using Linked Predictive Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1991. [10] A. Waibel, A. Jain, A. McNair, H. Saito, A. Hauptmann, and J. Tebelskis. A Speech-to-Speech Translation System Using Connectionist and Symbolic Processing Strategies. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1991. [11] H. Ney. The Use of a One-Stage Dynamic Programming Algorithm for Connected Word Recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32:2, April 1984. [12] H. Ney, A. Noll. Phoneme Modeling Using Continuous Mixture Densities.
In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1988. [13] N. Tishby. A Dynamic Systems Approach to Speech Processing. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990.
1990
A B-P ANN Commodity Trader Joseph E. Collard Martingale Research Corporation 100 Allentown Pkwy., Suite 211 Allen, Texas 75002 Abstract An Artificial Neural Network (ANN) is trained to recognize a buy/sell (long/short) pattern for a particular commodity future contract. The Back-Propagation of errors algorithm was used to encode the relationship between the Long/Short desired output and 18 fundamental variables plus 6 (or 18) technical variables into the ANN. Trained on one year of past data, the ANN is able to predict long/short market positions for 9 months in the future that would have made $10,301 profit on an investment of less than $1000. 1 INTRODUCTION An Artificial Neural Network (ANN) is trained to recognize a long/short pattern for a particular commodity future contract. The Back-Propagation of errors algorithm was used to encode the relationship between the Long/Short desired output and 18 fundamental variables plus 6 (or 18) technical variables into the ANN. 2 NETWORK ARCHITECTURE The ANNs used were simple, feed-forward, single hidden layer networks with no input units, N hidden units and one output unit. See Figure 1. N varied from six (6) through sixteen (16) hidden units.

Figure 1. The Network Architecture

3 TRAINING PROCEDURE Back Propagation of Errors Algorithm: The ANN was trained using the well-known ANN training algorithm called Back Propagation of Errors, which will not be elaborated on here. A Few Mods to the Algorithm: We are using the algorithm above with three changes. The changes, when implemented and tested on the standard exclusive-or problem, resulted in a trained, one hidden unit network after 60-70 passes through the 4 pattern vectors. This compares to the 245 passes cited by Rumelhart [2]. Even with a 32 hidden unit network, Yves found the average number of passes to be 120 [2].
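A back-propagation network of this shape can be sketched on the exclusive-or test, with a momentum term and a crude "minimum slope" floor on the sigmoid derivative. All hyperparameter values here are my guesses, not the paper's, and the tiny 2-hidden-unit net is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

H = 2                                   # hidden units (the paper varied 6-16)
W1, b1 = rng.normal(size=(2, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, 1)), np.zeros(1)
lr, mom, slope_min = 0.2, 0.5, 0.1      # assumed values
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
vb1, vb2 = np.zeros_like(b1), np.zeros_like(b2)
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

losses = []
for _ in range(2000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    losses.append(((out - y) ** 2).mean())
    # "minimum slope" mod: keep the sigmoid derivative away from zero
    d_out = (out - y) * np.maximum(out * (1 - out), slope_min)
    d_h = (d_out @ W2.T) * np.maximum(h * (1 - h), slope_min)
    # momentum mod: mix the previous weight change into the new one
    vW2 = mom * vW2 - lr * (h.T @ d_out);  vb2 = mom * vb2 - lr * d_out.sum(0)
    vW1 = mom * vW1 - lr * (X.T @ d_h);    vb1 = mom * vb1 - lr * d_h.sum(0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1
```

The slope floor keeps learning alive when a unit saturates, which is one plausible reason such mods reduce the number of passes needed on exclusive-or.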
The modifications to standard back propagation are: 1. Minimum Slope Term in the Derivative of the Activation Function [John Denker at Bell Labs]. 2. Using the Optional Momentum Term [2]. 3. Weight change frequency [1].

4 DATA In all cases, the six market technical variables (Open, High, Low, Close, Open Interest and Volume) were that trading day's data for the "front month" commodity contract (roughly speaking, the most active month's commodity contract). The first set of training data consisted of 105 or 143 "high confidence" trading days in 1988. Each trading day had associated with it a twenty-five component pattern vector (25-vector) consisting of eighteen fundamental variables, such as weather indicators and seasonal indicators, plus the six market technical variables for the trading day, and finally, the EXPERT's hindsight long/short position. The test data for these networks was all 253 25-vectors in 1988. The next training data set consisted of all 253 trading days in 1988. Again each trading day had associated with it a 25-vector consisting of the same eighteen fundamental variables plus the six market technical variables and finally, the EXPERT's long/short position. The test set for these networks consisted of 25-vectors from the first 205 trading days in 1989. Finally, the last set of training data consisted of the last 251 trading days in 1988. For this set each trading day had associated with it a 37 component pattern vector (37-vector) consisting of the same eighteen fundamental variables plus six market technical variables for that trading day, six market technical variables for the previous trading day, six market technical variables for the trading day two days previous, and finally, the EXPERT's long/short position. The test set for these networks consisted of 37-vectors from the first 205 trading days in 1989.

5 RESULTS The results for 7 trained networks are summarized in Table 1.

Table 1.
Study Results

  #    Size/In   Train.    % @ E-Xpt     Profit/RTs    Test set   Profit/RTs
  005  6-1/24    105-'88   100 @ .125                  253-'88    76%
  006  6-1/24    143-'88   99 @ .125-1                 253-'88    82%
  Targets >>>    253-'88                 $25,296/10    205-'89    $14,596/6
  009  10-1/24   253-'88   98 @ .25-4    $24,173/14    205-'89    $7,272/6
  010  6-1/24    105-'88   100 @ .1      $17,534/13    253-'88    80%
  Targets >>>    251-'88                 $24,819/10    205-'89    $14,596/6
  011  10-1/36   251-'88   98 @ .25-4    $23,370/14    205-'89    $7,272/6
  012  13-1/36   251-'88   97 @ .25-7    $22,965/12    205-'89    $6,554/14
  013  16-1/36   251-'88   99 @ .25-3    $22,495/12    205-'89    $10,301/19

The column headings for Table 1 have the following meanings:
#: The numerical designation of the network.
Size/In: The hidden-output layer dimensions and the number of inputs to the network.
Train.: The number of days and year of the training set.
% @ E-Xpt: The percent of the training data encoded in the network at less than E error, and the number of days not encoded.
Profit/RTs: The profit computed for the training or test set and how many round turns (RTs) it required for that profit; or, if the profit calculation was not yet available, the percent the network is in agreement with the EXPERT.
Test set: The number of trading days/year of the test set.

Figure 2 shows how well the 013 network agrees with its training set's long/short positions. The NET 19 INPUT curve is the commodities price curve for 1988's 251 trading days.

Figure 2. Trained Network 013

Figure 3 shows the corresponding profit plot on the training data for both the EXPERT and network 013.
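The Profit/RTs accounting can be illustrated with a small sketch. The function name and the simplified treatment of round turns (counted here as position reversals) are my assumptions; commissions, margin, and the contract's point value are ignored.

```python
import numpy as np

def profit_and_round_turns(prices, positions, point_value=1.0):
    """Mark-to-market profit of a daily long/short (+1/-1) position sequence,
    plus the number of round turns, counted here as position reversals."""
    prices = np.asarray(prices, dtype=float)
    positions = np.asarray(positions)
    daily = positions[:-1] * np.diff(prices)   # hold each day's position overnight
    round_turns = int(np.count_nonzero(np.diff(positions)))
    return point_value * float(daily.sum()), round_turns

# long for the +2 and -1 moves, short for the +4 move: 2 - 1 - 4 = -3
profit, rts = profit_and_round_turns([100, 102, 101, 105], [1, 1, -1, -1])
```

An always-increasing profit step function, as reported for network 013, means every reversal in the position sequence is placed on the right side of the subsequent price move.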
Figure 3. Network 013's and EXPERT's Profit for 1988 data

Figure 4 is the profit plot for the network when tested on the first 205 trading days in 1989. Two significant features should be noted in Figure 4. The network's profit step function is almost always increasing. The profit is never negative.

Figure 4. Network 013's and EXPERT's Profit for 1989 data

6 REFERENCES 1. Pao, Y.H., Adaptive Pattern Recognition and Neural Networks, Addison Wesley, 1989. 2. Rumelhart, D., and McClelland, J., Parallel Distributed Processing, MIT Press, 1986.
1990
Statistical Mechanics of Temporal Association in Neural Networks with Delayed Interactions Andreas V.M. Herz Division of Chemistry Caltech 139-74 Pasadena, CA 91125 Zhaoping Li School of Natural Sciences Institute for Advanced Study Princeton, NJ 08540 J. Leo van Hemmen Physik-Department der TU München D-8046 Garching, FRG Abstract We study the representation of static patterns and temporal associations in neural networks with a broad distribution of signal delays. For a certain class of such systems, a simple intuitive understanding of the spatio-temporal computation becomes possible with the help of a novel Lyapunov functional. It allows a quantitative study of the asymptotic network behavior through a statistical mechanical analysis. We present analytic calculations of both retrieval quality and storage capacity and compare them with simulation results. 1 INTRODUCTION Basic computational functions of associative neural structures may be analytically studied within the framework of attractor neural networks where static patterns are stored as stable fixed points for the system's dynamics. If the interactions between single neurons are instantaneous and mediated by symmetric couplings, there is a Lyapunov function for the retrieval dynamics (Hopfield 1982). The global computation corresponds in that case to a downhill motion in an energy landscape created by the stored information. Methods of equilibrium statistical mechanics may be applied and permit a quantitative analysis of the asymptotic network behavior (Amit et al. 1985, 1987). The existence of a Lyapunov function is thus of great conceptual as well as technical importance. Nevertheless, one should be aware that environmental inputs to a neural net always provide information in both space and time. It is therefore desirable to extend the original Hopfield scheme and to explore possibilities for a joint representation of static patterns and temporal associations.
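Hopfield's downhill-motion picture can be checked in a few lines: with symmetric couplings and zero self-interaction, the energy E = -(1/2) Σ_ij J_ij S_i S_j never increases under asynchronous threshold updates. This toy sketch uses random symmetric couplings rather than stored patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                       # symmetric couplings
np.fill_diagonal(J, 0)                  # no self-interaction
S = rng.choice([-1, 1], size=N)

def energy(S, J):
    return -0.5 * S @ J @ S             # Hopfield's Lyapunov function

energies = [energy(S, J)]
for _ in range(200):                    # asynchronous single-neuron updates
    i = rng.integers(N)
    S[i] = 1 if J[i] @ S > 0 else -1    # downhill threshold update
    energies.append(energy(S, J))
# the energy sequence is non-increasing, so the net settles into a fixed point
```

The monotonicity is exactly why statistical mechanics applies: the dynamics relaxes in a fixed energy landscape, and the delayed-interaction networks studied below call for a generalization of this Lyapunov property.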
Signal delays are omnipresent in the brain and play an important role in biological information processing. Their incorporation into theoretical models is therefore rather natural, especially if one includes the distribution of the delay times involved. Kleinfeld (1986) and Sompolinsky and Kanter (1986) proposed models for temporal associations, but they only used a single delay line between two neurons. Tank and Hopfield (1987) presented a feedforward architecture for sequence recognition based on multiple delays, but they only considered information relative to the very end of a given sequence. Besides these deficiencies, both approaches lack the ability to acquire knowledge through a true learning mechanism: synaptic efficacies have to be calculated by hand, which is certainly not satisfactory, both from a neurobiological point of view and for applications in artificial intelligence. This drawback has been overcome by a careful interpretation of the Hebb principle (1949) for neural networks with a broad distribution of transmission delays (Herz et al. 1988, 1989). After the system has been taught stationary patterns and temporal sequences by the same principle, it reproduces them with high precision when triggered suitably.

In the present contribution, we focus on a special class of such delay networks and introduce a Lyapunov (energy) functional for the deterministic retrieval dynamics (Li and Herz 1990). We thus generalize Hopfield's approach to the domain of temporal associations. Through an extension of the usual formalism of equilibrium statistical mechanics to time-dependent phenomena, we analyze the network performance under a stochastic (noisy) dynamics. We derive quantitative results on both the retrieval quality and storage capacity, and close with some remarks on possible generalizations of this approach.
2 DYNAMICS OF THE NEURONS

Throughout what follows, we describe a neural network as a collection of N two-state neurons with activities S_i = 1 for a firing cell and S_i = -1 for a quiescent one. The cells are connected by synapses with modifiable efficacies J_ij(\tau), where \tau denotes the delay for the information transport from j to i. We focus on a soliton-like propagation of neural signals, characteristic for the (axonal) transmission of action potentials, and consider a model where each pair of neurons is linked by several axons with delays 0 \le \tau \le \tau_{\max}. Other architectures with only a single link have been considered elsewhere (Coolen and Gielen 1988; Herz et al. 1988, 1989; Kerszberg and Zippelius 1990). External stimuli are fed into the system via receptors \sigma_i = \pm 1 with input sensitivity \gamma. The postsynaptic potentials are given by

    h_i(t) = (1-\gamma) \sum_{j=1}^{N} \sum_{\tau=0}^{\tau_{\max}} J_{ij}(\tau)\, S_j(t-\tau) + \gamma\, \sigma_i(t) .    (1)

We concentrate on synchronous dynamics (Little 1974) with basic time step \Delta t = 1. Consequently, signal delays take nonnegative integer values. Synaptic noise is described by a stochastic Glauber dynamics with noise level \beta = T^{-1} (Peretto 1984),

    \mathrm{Prob}[S_i(t+1) = \pm 1] = \tfrac{1}{2}\{ 1 \pm \tanh[\beta h_i(t)] \} ,    (2)

where Prob denotes probability. For \beta \to \infty, we arrive at a deterministic dynamics,

    S_i(t+1) = \mathrm{sgn}[h_i(t)] = \begin{cases} 1, & \text{if } h_i(t) > 0 \\ -1, & \text{if } h_i(t) < 0 . \end{cases}    (3)

3 HEBBIAN LEARNING

During a learning session the synaptic strengths may change according to the Hebb principle (1949). We focus on a connection with delay \tau between neurons i and j. According to Hebb, the corresponding efficacy J_ij(\tau) will be increased if cell j takes part in firing cell i. In its physiological context, this rule was originally formulated for excitatory synapses only, but for simplicity, we apply it to all synapses. Due to the delay \tau in (1) and the parallel dynamics (2), it takes \tau+1 time steps until neuron j actually influences the state of neuron i.
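As a concrete illustration of the dynamics (1)-(3), the following sketch runs the synchronous Glauber update over a delay buffer. The sizes are illustrative and the couplings are random placeholders (not the Hebbian couplings of the text); the function names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau_max = 20, 3                       # illustrative network size and maximum delay
J = rng.normal(0.0, 1.0 / N, size=(tau_max + 1, N, N))  # J[tau, i, j]: placeholder couplings

def postsynaptic_potential(history, J, sigma=None, gamma=0.0):
    """Eq. (1): history[k] holds the state S(t - k) for k = 0 .. tau_max."""
    h = (1.0 - gamma) * sum(J[tau] @ history[tau] for tau in range(J.shape[0]))
    if sigma is not None:
        h = h + gamma * sigma            # receptor input sigma_i = +/- 1
    return h

def glauber_step(history, J, beta):
    """Eq. (2): synchronous stochastic update; beta -> infinity recovers Eq. (3)."""
    h = postsynaptic_potential(history, J)
    p_plus = 0.5 * (1.0 + np.tanh(beta * h))
    S_new = np.where(rng.random(h.shape) < p_plus, 1, -1)
    return [S_new] + history[:-1]        # shift the delay buffer by one time step

history = [rng.choice([-1, 1], size=N) for _ in range(tau_max + 1)]
for _ in range(10):
    history = glauber_step(history, J, beta=50.0)   # low noise, nearly deterministic
```

At large \beta the tanh saturates, so the probabilistic rule collapses to the sign rule (3).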
J_ij(\tau) thus changes by an amount proportional to the product of S_j(t-\tau) and S_i(t+1). Starting with J_ij(\tau) = 0, we obtain after P learning sessions, labeled by \mu and each of duration D_\mu,

    J_{ij}(\tau) = \varepsilon(\tau) N^{-1} \sum_{\mu=1}^{P} \sum_{t_\mu=1}^{D_\mu} S_i(t_\mu+1) S_j(t_\mu-\tau) = \varepsilon(\tau) \tilde{J}_{ij}(\tau) .    (4)

The parameters \varepsilon(\tau), normalized by \sum_{\tau=0}^{\tau_{\max}} \varepsilon(\tau) = 1, take morphological characteristics of the delay lines into account; N^{-1} is a scaling factor useful for the theoretical analysis. By (4), synapses act as microscopic feature detectors during the learning sessions and store correlations of the taught sequences in both space (i, j) and time (\tau). In general, they will be asymmetric in the sense that J_{ij}(\tau) \ne J_{ji}(\tau).

During learning, we set T = 0 and \gamma = 1 to achieve a "clamped learning scenario" where the system evolves strictly according to the external stimuli, S_i(t_\mu) = \sigma_i(t_\mu - 1). We study the case where all input sequences \sigma_i(t_\mu) are cyclic with equal periods D_\mu = D, i.e., \sigma_i(t_\mu) = \sigma_i(t_\mu \pm D) for all \mu. In passing we note that one should offer the sequences already \tau_{\max} time steps before allowing synaptic plasticity à la (4) so that both S_i and S_j are in well-defined states during the actual learning sessions. We define patterns \xi_i^{\mu a} \equiv \sigma_i(t_\mu = a) for 0 \le a < D and get

    J_{ij}(\tau) = \varepsilon(\tau) N^{-1} \sum_{\mu=1}^{P} \sum_{a=0}^{D-1} \xi_i^{\mu,a+1} \xi_j^{\mu,a-\tau} .    (5)

Our learning scheme is thus a generalization of outer-product rules to spatio-temporal patterns. As in the following, temporal arguments of the sequence patterns \xi and the synaptic couplings should always be understood modulo D.

4 LYAPUNOV FUNCTIONAL

Using formulae (1)-(5), one may derive equations of motion for macroscopic order parameters (Herz et al. 1988, 1989), but this kind of analysis only applies to the case P \ll \log N. However, note that from (4) and (5), we get \tilde{J}_{ij}(\tau) = \tilde{J}_{ji}(D-(2+\tau)).
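This symmetry of the Hebbian couplings (5) can be checked numerically. The sketch below is a toy under stated assumptions (random cycle patterns, uniform \varepsilon(\tau), one delay line per time step so that \tau_{\max} = D-1); it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, P = 12, 4, 2                       # neurons, cycle length, number of cycles
tau_max = D - 1                          # assumed delays 0 .. D-1
xi = rng.choice([-1, 1], size=(P, D, N)) # xi[mu, a, i]: random cycle patterns
eps = np.full(tau_max + 1, 1.0 / (tau_max + 1))  # normalized a priori weights

# Eq. (5): J_ij(tau) = eps(tau) N^{-1} sum_{mu,a} xi_i^{mu,a+1} xi_j^{mu,a-tau}
J = np.zeros((tau_max + 1, N, N))
for tau in range(tau_max + 1):
    for mu in range(P):
        for a in range(D):
            J[tau] += np.outer(xi[mu, (a + 1) % D], xi[mu, (a - tau) % D])
    J[tau] *= eps[tau] / N

# Extended synaptic symmetry: J_ij(tau) = J_ji((D - 2 - tau) % D) for uniform eps
for tau in range(D):
    assert np.allclose(J[tau], J[(D - 2 - tau) % D].T)
```

The assertion follows from shifting the summation index a in (5) by \tau + 1 modulo D.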
For all networks whose a priori weights \varepsilon(\tau) obey \varepsilon(\tau) = \varepsilon(D-(2+\tau)), we have thus found an "extended synaptic symmetry" (Li and Herz 1990),

    J_{ij}(\tau) = J_{ji}(D-(2+\tau)) ,    (6)

generalizing Hopfield's symmetry assumption J_{ij} = J_{ji} in a natural way to the temporal domain. To establish a Lyapunov functional for the noiseless retrieval dynamics (3), we take \gamma = 0 in (1) and define

    H(t) = -\frac{1}{2} \sum_{i,j=1}^{N} \sum_{a,\tau=0}^{D-1} J_{ij}(\tau)\, S_i(t-a)\, S_j(t-(a+\tau+1)\%D) ,    (7)

where a\%b = a mod b. The functional H depends on all states between t+1-D and t so that solutions with constant H, like D-periodic cycles, need not be static fixed points of the dynamics. By (1), (5) and (6), the difference \Delta H(t) = H(t) - H(t-1) is

    \Delta H(t) = -\sum_{i=1}^{N} [S_i(t)-S_i(t-D)]\, h_i(t-1) - \frac{\varepsilon(D-1)}{2N} \sum_{\mu=1}^{P} \sum_{a=0}^{D-1} \Big\{ \sum_{i=1}^{N} \xi_i^{\mu a} [S_i(t)-S_i(t-D)] \Big\}^2 .    (8)

The dynamics (3) implies that the first term is nonpositive. Since \varepsilon(\tau) \ge 0, the same holds true for the second one. For finite N, H is bounded and \Delta H has to vanish as t \to \infty. The system therefore settles into a state with S_i(t) = S_i(t-D) for all i. We have thus exposed two important facts: (a) the retrieval dynamics is governed by a Lyapunov functional, and (b) the system relaxes to a static state or a limit cycle with S_i(t) = S_i(t-D), i.e., oscillatory solutions with the same period as that of the taught cycles or a period which is equal to an integer fraction of D.

Stepping back for an overview, we notice that H is a Lyapunov functional for all networks which exhibit an "extended synaptic symmetry" (6) and for which the matrix J(D-1) is positive semi-definite. The Hebbian synapses (4) constitute an important special case and will be the main subject of our further discussion.

5 STATISTICAL MECHANICS

We now prove that a limit cycle of the retrieval dynamics indeed resembles a stored sequence. We proceed in two steps.
First, we demonstrate that our task concerning cyclic temporal associations can be mapped onto a symmetric network without delays. Second, we apply equilibrium statistical mechanics to study such "equivalent systems" and derive analytic results for the retrieval quality and storage capacity.

D-periodic oscillatory solutions of the retrieval dynamics can be interpreted as static states in a "D-plicated" system with D columns and N rows of cells with activities S_{ia}. A network state will be written A = (A_0, A_1, ..., A_{D-1}) with A_a = \{S_{ia}; 1 \le i \le N\}. To reproduce the parallel dynamics of the original system, neurons S_{ia} with a = t\%D are updated at time t. The time evolution of the new network therefore has a pseudo-sequential characteristic: synchronous within single columns and sequentially ordered with respect to these columns. Accordingly, the neural activities at time t are given by S_{ia}(t) = S_i(a + n_t) for a \le t\%D and S_{ia}(t) = S_i(a + n_t - D) for a > t\%D, where n_t is defined through t = n_t + t\%D. Due to (6), symmetric efficacies J_{ij}^{ab} = J_{ji}^{ba} may be constructed for the new system by

    J_{ij}^{ab} = J_{ij}((b-a-1)\%D) ,    (9)

allowing a well-defined Hamiltonian, equal to that of a Hopfield net of size ND,

    H = -\frac{1}{2} \sum_{i,j=1}^{N} \sum_{a,b=0}^{D-1} J_{ij}^{ab}\, S_{ia} S_{jb} .    (10)

An evaluation of (10) in terms of the former state variables reveals that it is identical to the Lyapunov functional (7). The interpretation, however, is changed: a limit cycle of period D in the original network corresponds to a fixed point of the new system of size ND. We have thus shown that the time evolution of a delay network with extended symmetry can be understood in terms of a downhill motion in the energy landscape of its "equivalent system".

For Hebbian couplings (5), the new efficacies J_{ij}^{ab} take a particularly simple form if we define patterns \{\xi_{ia}^{\mu\alpha}; 1 \le i \le N, 0 \le \alpha \le D-1\} by \xi_{ia}^{\mu\alpha} \equiv \xi_i^{\mu,(\alpha-a)\%D}, i.e., if we create column-shifted copies of the prototype \xi_i^{\mu 0}.
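A small simulation, under the same illustrative assumptions as before (one random cycle, uniform \varepsilon(\tau), \tau_{\max} = D-1), can confirm both the symmetry of the equivalent couplings (9) and the downhill motion of H; the code below is my own sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 16, 4
xi = rng.choice([-1, 1], size=(1, D, N))           # one stored cycle
eps = np.full(D, 1.0 / D)                          # uniform a priori weights

# Hebbian delay couplings, Eq. (5)
J = np.zeros((D, N, N))
for tau in range(D):
    for a in range(D):
        J[tau] += np.outer(xi[0, (a + 1) % D], xi[0, (a - tau) % D])
    J[tau] *= eps[tau] / N

# Eq. (9): the equivalent couplings are symmetric, J^{ab}_ij = J^{ba}_ji
for a in range(D):
    for b in range(D):
        assert np.allclose(J[(b - a - 1) % D], J[(a - b - 1) % D].T)

def energy(buf):
    """Eq. (7); buf[k] = S(t - k) for k = 0 .. D-1."""
    return -0.5 * sum(buf[a] @ J[tau] @ buf[(a + tau + 1) % D]
                      for a in range(D) for tau in range(D))

buf = [rng.choice([-1, 1], size=N) for _ in range(D)]
energies = [energy(buf)]
for _ in range(30):
    h = sum(J[tau] @ buf[tau] for tau in range(D))  # Eq. (1) with gamma = 0
    buf = [np.where(h >= 0, 1, -1)] + buf[:-1]      # Eq. (3)
    energies.append(energy(buf))

# Downhill motion: H never increases along the deterministic trajectory
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
```

Monotonicity holds here because the Hebbian J(D-1) is a Gram matrix and hence positive semi-definite, as required by the text.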
Setting \varepsilon_{ab} = \varepsilon((b-a-1)\%D) = \varepsilon_{ba} leads to

    J_{ij}^{ab} = \varepsilon_{ab} N^{-1} \sum_{\mu=1}^{P} \sum_{\alpha=0}^{D-1} \xi_{ia}^{\mu\alpha} \xi_{jb}^{\mu\alpha} .    (11)

Storing one cycle \sigma_i(t_\mu) = \xi_i^{\mu 0} in the delay network thus corresponds to memorizing D shifted duplicates \xi^{\mu\alpha}, 0 \le \alpha < D, in the equivalent system, reflecting that a D-cycle can be retrieved in D different time-shifted versions in the original network.

If, in the second step, we now switch to the stochastic dynamics (2), the important question arises whether H also determines the equilibrium distribution p of the system. This need not be true since the column-wise dynamics of the equivalent network differs from both the Little and Hopfield model. An elaborate proof (Li and Herz 1990), however, shows that there is indeed an equilibrium distribution à la Gibbs,

    p(A) = Z^{-1} \exp[-\beta H(A)] ,    (12)

where Z = \mathrm{Tr}_A \exp[-\beta H(A)]. In passing we note that for D = 2 there are only links with zero delay. By (6) we have J_{ij}(0) = J_{ji}(0), i.e., we are dealing with a symmetric Little model. We may introduce a reduced probability distribution \tilde{p} for this special case, \tilde{p}(A_1) = \mathrm{Tr}_{A_0}\, p(A_0 A_1), and obtain \tilde{p}(A_1) = \tilde{Z}^{-1} \exp[-\beta \tilde{H}(A_1)] with

    \tilde{H} = -\beta^{-1} \sum_{i=1}^{N} \ln\Big[ 2 \cosh\Big( \beta \sum_{j=1}^{N} J_{ij} S_j \Big) \Big] .    (13)

We thus have recovered both the effective Hamiltonian of the Little model as derived by Peretto (1984) and the duplicated-system technique of van Hemmen (1986).

We finish our argument by turning to quantitative results. We focus on the case where each of the P learning sessions corresponds to teaching a (different) cycle of D patterns \xi^{\mu a}, each lasting for one time step. We work with unbiased random patterns where \xi_i^{\mu a} = \pm 1 with equal probability, and study our network at a finite storage level \alpha = \lim_{N\to\infty}(P/N) > 0. A detailed analysis of the case where the number of cycles remains bounded as N \to \infty can be found in (Li and Herz 1990). As in the replica-symmetric theory of Amit et al.
(1987), we assume that the network is in a state highly correlated with a finite number of stored cycles. The remaining, extensively many cycles are described as a noise term. We define "partial" overlaps by m_{\alpha a}^{\mu} = N^{-1} \sum_i \xi_{ia}^{\mu\alpha} S_{ia}. These macroscopic order parameters measure how close the system is to a stored pattern \xi^{\mu\alpha} at a specific column a. We consider retrieval solutions, i.e., m_{\alpha a}^{\mu} = m^{\mu} \delta_{\alpha,0}, and arrive at the fixed-point equations (14) and (15) of (Li and Herz 1990); the second of these reads

    q = \langle\langle \tanh^2\big[ \beta \big\{ \textstyle\sum_{\mu} m^{\mu} \xi^{\mu 0} + \sqrt{\alpha r}\, z \big\} \big] \rangle\rangle .    (15)

Double angular brackets represent an average with respect to both the "condensed" cycles and the normalized Gaussian random variable z. The \lambda_k(\varepsilon) are eigenvalues of the matrix \varepsilon. Retrieval is possible when solutions with m^{\mu} > 0 for a single cycle \mu exist, and the storage capacity \alpha_c is reached when such solutions cease to exist. It should be noted that each cycle consists of D patterns so that the storage capacity for single patterns is \tilde{\alpha}_c = D\alpha_c. During the recognition process, however, each of them will trigger the cycle it belongs to and cannot be retrieved as a static pattern. For systems with a "maximally uniform" distribution, \varepsilon_{ab} = (D-1)^{-1}(1-\delta_{ab}), we get

    D        2      3      4      5      ∞
    α_c    0.100  0.110  0.116  0.120  0.138

where the last result is identical to that for the corresponding Hopfield model since the diagonal terms of \varepsilon can be neglected in that case. The above findings agree well with estimates from a finite-size analysis (N \le 3000) of data from numerical simulations, as shown by two examples: for D = 3, we have found \alpha_c = 0.120 \pm 0.015; for D = 4, \alpha_c = 0.125 \pm 0.015. Our results demonstrate that the storage capacity for temporal associations is comparable to that for static memories. As an example, take D = 2, i.e., the Little model.
In the limit of large N, we see that 0.100 N two-cycles of the form \xi^{\mu 0} \leftrightarrow \xi^{\mu 1} may be recalled, as compared to 0.138 N static patterns (Fontanari and Koberle 1987); this leads to a 1.45-fold increase of the information content per synapse. The influence of the weight distribution on the network behavior may be demonstrated by some choices of \varepsilon(\tau) for D = 4:

    τ          0     1     2     3      α_c    m_c
    ε(τ)      1/3   1/3   1/3    0     0.116   0.96
    ε(τ)      1/2    0    1/2    0     0.100   0.93
    ε(τ)       0     1     0     0     0.050   0.93

The storage capacity decreases with a decreasing number of delay lines but, measured per synapse, it does increase. However, networks with only a few delays are less fault-tolerant, as known from numerical simulations (Herz et al. 1989). For all studied architectures, retrieved sequences contain less than 3.5% errors. Our results prove that an extensive number of temporal associations can be stored as spatio-temporal attractors for the retrieval dynamics. They also indicate that dynamical systems with delayed interactions can be programmed in a very efficient manner to perform associative computations in the space-time domain.

6 CONCLUSION

Learning schemes can be successful only if the structure of the learning task is compatible with both the network architecture and the learning algorithm. In the present context, the task is to store simple temporal associations. It can be accomplished in neural networks with a broad distribution of signal delays and Hebbian synapses which, during learning periods, operate as microscopic feature detectors for spatio-temporal correlations within the external stimuli. The retrieval dynamics utilizes the very same delays and synapses, and is therefore rather robust, as shown by numerical simulations and a statistical mechanical analysis. Our approach may be generalized in various directions.
For example, one can investigate more sophisticated learning rules or switch to continuous neurons in "iterated-map networks" (Marcus and Westervelt 1990). A generalization of the Lyapunov functional (7) covers that case as well (Herz, to be published) and allows a direct comparison of theoretical predictions with results from hardware implementations. Finally, one could try to develop a Lyapunov functional for a continuous-time dynamics with delays, which seems to be rather significant for applications as well as for the general theory of functional differential equations and dynamical systems.

Acknowledgements

It is a pleasure to thank Bernhard Sulzer, John Hopfield, Reimer Kuhn and Wulfram Gerstner for many helpful discussions. AVMH acknowledges support from the Studienstiftung des Deutschen Volkes. ZL is partly supported by a grant from the Seaver Institute.

References

Amit D J, Gutfreund H and Sompolinsky H 1985 Phys. Rev. A 32 1007; 1987 Ann. Phys. (N.Y.) 173 30
Coolen A C C and Gielen C C A M 1988 Europhys. Lett. 7 281
Fontanari J F and Koberle R 1987 Phys. Rev. A 36 2475
Hebb D O 1949 The Organization of Behavior Wiley, New York
van Hemmen J L 1986 Phys. Rev. A 34 3435
Herz A V M, Sulzer B, Kuhn R and van Hemmen J L 1988 Europhys. Lett. 7 663; 1989 Biol. Cybern. 60 457
Hopfield J J 1982 Proc. Natl. Acad. Sci. USA 79 2554
Kerszberg M and Zippelius A 1990 Phys. Scr. T33 54
Kleinfeld D 1986 Proc. Natl. Acad. Sci. USA 83 9469
Li Z and Herz A V M 1990 in Lecture Notes in Physics 368 p 287, Springer, Heidelberg
Little W A 1974 Math. Biosci. 19 101
Marcus C M and Westervelt R M 1990 Phys. Rev. A 42 2410
Peretto P 1984 Biol. Cybern. 50 51
Sompolinsky H and Kanter I 1986 Phys. Rev. Lett. 57 2861
Tank D W and Hopfield J J 1987 Proc. Natl. Acad. Sci. USA 84 1896
Cholinergic Modulation May Enhance Cortical Associative Memory Function

Michael E. Hasselmo*, Brooke P. Anderson†, and James M. Bower
Computation and Neural Systems, Caltech 139-74 and 216-76, Pasadena, CA 91125
* e-mail: hasselmo@smaug.cns.caltech.edu    † e-mail: brooke@hope.caltech.edu

Abstract

Combining neuropharmacological experiments with computational modeling, we have shown that cholinergic modulation may enhance associative memory function in piriform (olfactory) cortex. We have shown that the acetylcholine analogue carbachol selectively suppresses synaptic transmission between cells within piriform cortex, while leaving input connections unaffected. When tested in a computational model of piriform cortex, this selective suppression, applied during learning, enhances associative memory performance.

1 INTRODUCTION

A wide range of behavioral studies support a role for the neurotransmitter acetylcholine in memory function (Kopelman, 1986; Hagan and Morris, 1989). However, the role of acetylcholine in memory function has not been linked to the specific neuropharmacological effects of this transmitter within cerebral cortical networks. For several years, we have explored cerebral cortical associative memory function using the piriform cortex as a model system (Wilson and Bower, 1988; Bower, 1990; Hasselmo et al., 1991). The anatomical structure of piriform cortex (represented schematically in figure 1) shows the essential features of more abstract associative matrix memory models (Haberly and Bower, 1989).¹ Afferent fibers in layer Ia provide widely distributed input, while intrinsic fibers in layer Ib provide extensive excitatory connections between cells within the cortex.
Computational models of piriform cortex demonstrate a theoretical capacity for associative memory function (Wilson and Bower, 1988; Bower, 1990; Hasselmo et al., 1991). Recently, we have investigated differences in the physiological properties of the afferent and intrinsic fiber systems, using modeling to test how these differences affect memory function. In the experiments described below, we found a selective cholinergic suppression of intrinsic fiber synaptic transmission. When tested in a simplified model of piriform cortex, this modulation enhances associative memory performance.

[Figure 1: Schematic representation of piriform cortex, showing afferent input A_i (afferent fiber synapses, layer Ia), intrinsic connections B_ij (intrinsic fiber synapses, layer Ib), lateral inhibition via interneurons H_ij, neuron activation a_i(t), and neuron output g(a_i(t)).]

2 EXPERIMENTS

To study differences in the effect of acetylcholine on afferent and intrinsic fiber systems, we applied the pharmacological agent carbachol (a chemical analogue of acetylcholine) to a brain slice preparation of piriform cortex while monitoring changes in the strength of synaptic transmission associated with each fiber system. In these experiments, both extracellular and intracellular recordings demonstrated clear differences in the effects of carbachol on synaptic transmission (Hasselmo and Bower, 1991). The results in figure 2 show that synaptic potentials evoked by activating intrinsic fibers in layer Ib were strongly suppressed in the presence of 100 μM carbachol, while at the same concentration, synaptic potentials evoked by stimulation of afferent fibers in layer Ia showed almost no change.

¹ For descriptions of standard associative memory models, see for example (Anderson et al., 1977; Kohonen et al., 1977).
[Figure 2: Synaptic potentials recorded in layer Ia and layer Ib before, during, and after perfusion with 1×10⁻⁴ M carbachol. Carbachol selectively suppresses layer Ib (intrinsic fiber) synaptic transmission.]

These experiments demonstrate that there is a substantial difference in the neurochemical modulation of synapses associated with the afferent and intrinsic fiber systems within piriform cortex. Cholinergic agents selectively suppress intrinsic fiber synaptic transmission without affecting afferent fiber synaptic transmission. While interesting in purely pharmacological terms, these differential effects are even more intriguing when considered in the context of our computational models of memory function in this region.

3 MODELING

To investigate the effects of cholinergic suppression of intrinsic fiber synaptic transmission on associative memory function, we developed a simplified model of the piriform cortex, shown schematically in figure 1. At each time step, a neuron was picked at random, and its activation was updated as

    a_i(t+1) = A_i(t) + \sum_{j=1}^{N} [(1-c) B_{ij} - H_{ij}]\, g(a_j(t)) ,

where N = the number of neurons; t = time ∈ {0, 1, 2, ...}; c = a parameter representing the amount of acetylcholine present, c ∈ [0, 1]; a_i = the activation or membrane potential of neuron i; g(a_i) = the output or firing frequency of neuron i given a_i; A_i = the input to neuron i, representing the afferent input from the olfactory bulb; B_{ij} = the weight matrix, i.e., the synaptic strength from neuron j to neuron i; and H_{ij} = the inhibition matrix, i.e., the amount that neuron j inhibits neuron i. To account for the local nature of inhibition in the piriform cortex, H_{ij} = 0 for |i - j| > r and H_{ij} = h for |i - j| \le r, where r is the inhibition radius.
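The activation update rule can be sketched directly; the parameter values, the nonnegative placeholder weights, and the particular threshold form of g(·) below are illustrative assumptions of mine, not the authors' exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, h, c = 10, 2, 0.3, 0.5              # illustrative sizes; c = acetylcholine level

# Local inhibition: H_ij = h for 0 < |i - j| <= r, else 0
H = np.array([[h if (i != j and abs(i - j) <= r) else 0.0
               for j in range(N)] for i in range(N)])

B = np.abs(rng.normal(0.0, 0.1, (N, N)))  # placeholder nonnegative intrinsic weights

def g(a, theta_a=1.0, gamma_a=1.0):
    """Thresholded output function (one simple saturating choice)."""
    return np.where(a > theta_a, gamma_a * np.tanh(np.maximum(a - theta_a, 0.0)), 0.0)

def async_step(a, A):
    """Pick one neuron at random and apply the activation update rule."""
    i = rng.integers(N)
    a = a.copy()
    a[i] = A[i] + ((1.0 - c) * B[i] - H[i]) @ g(a)
    return a

A = np.abs(rng.normal(1.0, 0.5, N))       # afferent "odor" input vector
a = np.zeros(N)
for _ in range(200):                      # asynchronous settling
    a = async_step(a, A)
```

Note how the acetylcholine parameter c scales only the intrinsic weights B, matching the experimentally observed selective suppression.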
The function g(a_i) was set to 0 if a_i < \theta_a, where \theta_a is a firing threshold; otherwise, it was set to \gamma_a \tanh(a_i - \theta_a), where \gamma_a is a firing gain. The weights were updated every N time steps according to the following Hebbian learning rule:

    B_{ij} = f(W_{ij})
    \Delta W_{ij} = W_{ij}(t+N) - W_{ij}(t) = (1-c)\, \gamma_l\, (a_i - \theta_l)\, g(a_j)

The function f(\cdot) is a saturating function, similar to g(\cdot), used so that the weights could not become negative or grow arbitrarily large (representing a restriction on how effective synapses could become). \gamma_l is a parameter that adjusts learning speed, and \theta_l is a learning threshold. The weights were updated every N time steps to account for the different time scales between synapse modification and neuron settling.

3.1 TRAINING OF THE MODEL

During learning, the model was presented with various vectors (taken to represent odors) at the input A_i(t). The network was then allowed to run and the weights to adapt. The procedure for creating the set of vectors {A^m | m ∈ {1, ..., M}} was: set A_i^m = max{0, G(\mu, \sigma)}, where G is a gaussian with average \mu and standard deviation \sigma, and normalize the whole vector so that ||A^m||^2 = N(\sigma^2 + \mu^2). M is the number of memories or odors presented to the network during training, and A_i^m is the input to neuron i while odor m is present. During learning, in the asynchronous update equation, A_i(t) = A_i^1 for T time steps, then A_i(t) = A_i^2 for the next T time steps, and so on; i.e., the various odors were presented cyclically.

3.2 PERFORMANCE MEASURE FOR THE MODEL

The piriform cortex gets inputs from the olfactory bulb and sends outputs to other areas of the brain. Assuming that during recall the network receives noisy versions of the learned input patterns (or odors), we presume the piriform cortex performs a useful service if it reduces the chance of error in deciding which odor is present at the input. One way to quantify this is by using the minimum probability of classification error, P_e (from the field of pattern recognition²).
For the case of 2 odors corrupted by gaussian noise, P_e is the area underneath the intersection of the gaussians. For spherically symmetric gaussians with mean vectors \mu_1 and \mu_2 and identical standard deviations \sigma,

    P_e = \frac{1}{\sqrt{\pi}} \int_{d/(2\sqrt{2}\sigma)}^{\infty} e^{-u^2}\, du ,

where d = ||\mu_1 - \mu_2||. Thus, the important parameter is the amount of overlap as quantified by d/\sigma: the larger the d/\sigma, the lower the overlap and P_e.

For more than 2 odors and for non-gaussian noise or non-spherically-symmetric gaussian noise, the equation for P_e becomes less tractable. But keeping with the above calculations, an analogue of d/\sigma was developed as follows. \sigma_i^2 was set equal to \langle ||x - \mu_i||^2 \rangle, and then \beta was defined as

    \beta = \sum_{i<j} \frac{||\mu_i - \mu_j||}{\tfrac{1}{2}(\sigma_i + \sigma_j)} ,

where i, j ∈ {1, ..., M}. Here, \beta is the analogue of d/\sigma in the previous paragraph and is similar to an average over all odor pairs of d/\sigma. For the model, if \beta is larger for the output vectors than for the input vectors, there is less overlap in the outputs, classification of the outputs is easier than classification of the inputs, and the model is serving a useful purpose. Thus, we use \rho = \beta_{out}/\beta_{in} as the performance measure.

3.3 TESTING THE MODEL

The model was designed to show whether the presence of acetylcholine has any influence on learning performance. To that end, the model was allowed to learn for a time with various levels of acetylcholine present, and then acetylcholine was turned off and the model was tested. For testing, weight adaptation was turned off, acetylcholine influence was turned off (c = 0), noisy versions of the various odors presented during learning were presented at the input, and the network was allowed to settle. From these noisy input/output pairs, \sigma's could be estimated, \beta_{in} and \beta_{out} could be calculated, and finally \rho could be calculated.

² See, for example, (Duda and Hart, 1973).
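The two quantities just defined can be written down directly. The helper names below are my own; β is computed as the plain pairwise sum from the text, since any overall normalization over pairs cancels in the ratio ρ = β_out/β_in.

```python
import numpy as np
from math import erfc, sqrt

def p_error(mu1, mu2, sigma):
    """Minimum classification error for two spherically symmetric gaussians with
    equal priors: the area under the intersection of the two densities,
    P_e = (1/sqrt(pi)) * integral_{d/(2*sqrt(2)*sigma)}^{inf} exp(-u^2) du."""
    d = float(np.linalg.norm(np.asarray(mu1, float) - np.asarray(mu2, float)))
    return 0.5 * erfc(d / (2.0 * sqrt(2.0) * sigma))

def beta_measure(mus, sigmas):
    """Pairwise separation statistic: sum_{i<j} ||mu_i - mu_j|| / ((sigma_i + sigma_j)/2)."""
    total = 0.0
    for i in range(len(mus)):
        for j in range(i + 1, len(mus)):
            d = float(np.linalg.norm(np.asarray(mus[i], float) - np.asarray(mus[j], float)))
            total += d / (0.5 * (sigmas[i] + sigmas[j]))
    return total

# Sanity checks: fully overlapping classes give P_e = 1/2; separation lowers P_e.
assert abs(p_error([0, 0], [0, 0], 1.0) - 0.5) < 1e-12
assert p_error([0, 0], [4, 0], 1.0) < p_error([0, 0], [1, 0], 1.0)
```

Here `0.5 * erfc(x)` equals the gaussian tail integral in the P_e formula above.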
Then, the state of the network could either be reset (for a new learning run) or be set to what it was before the test (so that learning could continue as if uninterrupted).

3.4 RESULTS OF TESTING

A typical example of a test run is shown in figure 3. There, c was varied from 0 (no acetylcholine) to 0.9 (a large concentration), and the various other parameters were: N = 10, M = 10, r = 2, h = 0.3, \gamma_a = 1, \theta_a = 1, \gamma_l = 10^{-3}, \theta_l = 1, and T = 10. In the figure, large dark rectangles represent larger values of \rho. Small or nonexistent rectangles represent values of \rho \le 1. Notice that, for a fixed amount of acetylcholine, the model's performance rises and then falls over time. Ideally, the performance should rise and then flatten out, as further learning should not degrade performance. The weight adaptation equation used in the model was not optimized to preclude overlearning (where all of the weights being reinforced have saturated to the largest allowed value). In principle, the function f(\cdot) could be used for this, perhaps in conjunction with a weight decay term. This was not of great concern since the peak performance is what indicates whether or not acetylcholine has a useful effect. Also, the more acetylcholine present, the longer the learning took. This is reasonable as, before saturation, \Delta W \propto (1-c).

Figure 4 shows maximum average performance for various values of acetylcholine. Averages were calculated by doing many tests like the one above. This is useful as the odor inputs and the individual tests are stochastic in nature. Obviously, the larger values of acetylcholine enhance performance.
· ...... ............... '" "'1,111",111111111111111111111111""11111" .. l1li1 •• 1) ••• 1.1. .. .. ... ······· · 11 ·IIIIIIIIII ... U .......... u .... 1 d I I I III. III •• '11,11.11'.11111'11' " ......... I. I.lIlilllllllJlil .. I ...... , •••• 1'1111111'. " · 1 ············1 .......... 111111.11.11111.11 · . '., ............ I .... II ... a ...... I~ •• ql.IIIIII .. I ... I ..•. ··1 111." ... ,11 ..... 1 ·.·!I·I .... 1 1 11 .... 1 1II111n1I11Jt1J1.U. .... t:: . .. . ·· ... IIIIIIIIIIII.I ... U_"·I····· .··1II11411111··I·II.1411110 .. ··1 ." ............ .... ............ .. . ... ··1 .' <l) u t:: •........ • 1011 .. 1111 • • 1,.111 .... 1 ........ 1.' ••• 1111 .• 11 •............................. ............................................ ·········IIIIII14Ulqlll,II ··.·.·IIIII·····.·············· .. ........................ ········· ··1······················· ········· .......... ..... . 0 u ....... ,11111111.11.1 .. ·lil· ·1·· .... .... . ..... ..... .... '" ...... .. ... ..... ... ...... ............ ..................................... .. " ..... . .... ·'·ldl.IIII· .. I·· ···· ····· ··· .. ·················•···· .................................... ······ ··········1··············· ......... '" .. <l) t:: . ........ · ... ·111111111··········· ................... .......................... ............................ ........................•....... ... ....... ·1······ .. ' 11'11'" .... ...... . ............ .. . .. . . . .......................... ............. .......... . 0 ...c: u ..... 11111 ........ ............................... .................................................................................................... . ····11·1·1························· ·········· ········· .............................................................. ····················1·········· ........ >-. ····111························ ·············································1················································1····· ................ . 
[Figure 3: Sample test run, with time (0 to 4.5×10⁵) on the horizontal axis and acetylcholine level on the vertical axis. Larger black rectangles indicate better performance.]

[Figure 4: Maximum average performance vs. acetylcholine concentration (c). Acetylcholine increases the performance level attained.]

4 CONCLUSION

The results from the model show that suppression of connections between cells within the piriform cortex during learning enhances the performance during recall. Thus, acetylcholine released in the cortex during learning may enhance associative memory function. These results may explain some of the behavioral evidence for the role of acetylcholine in memory function and predict that acetylcholine may be released in cortical structures preferentially during learning. Further biological experiments are necessary to confirm this prediction.

Acknowledgements

This work was supported by ONR contracts N00014-88-K-0513 and N00014-87-K-0377 and NIH postdoctoral training grant NS07251.

References

J.A. Anderson, J.W. Silverstein, S.A. Ritz and R.S.
Jones (1977) Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychol. Rev. 84: 413-45l. J .M. Bower (1990) Reverse engineering the nervous system: An anatomical, physiological and computer based approach. In S. Zornetzer, J. Davis and C. Lau (eds.), An Introduction to Neural and Electronic Networks. San Diego: Academic Press. R. Duda and P. Hart (1973), Pattern Classification and Scene Analysis, New York: Wiley. L.B. Haberly and J .M. Bower (1989) Olfactory cortex: Model circuit for study of associative memory? Trends Neurosci. 12: 258-264. J.J. Hagan and R.G.M. Morris (1989) The cholinergic hypothesis of memory: A review of animal experiments. In L.L. Iversen, S.D. Iversen and S.H. Snyder (eds.) Handbook of Psychopharmacology Vol. 20 New York: Plenum Press. M.E. Hasselmo, M.A. Wilson, B.P. Anderson and J .M. Bower (1991) Associative memory function in piriform (olfactory) cortex: Computational modeling and neuropharmacology. In: Cold Spring Harbor Symposium on Quantitative Biology: The Brain. Cold Spring Harbor: Cold Spring Harbor Laboratory. M.E. Hasselmo and J .M. Bower (1991) Cholinergic suppression specific to intrinsic not afferent fiber synapses in piriform (olfactory) cortex. J. Neurophysiol. in press. T. Kohonen, P. Lehtio, J. Rovamo, J. Hyvarinen, K. Bry and L. Vainio (1977) A principle of neural associative memory. Neurosci. 2:1065-1076. M.D. Kopelman (1986) The cholinergic neurotransmitter system in human memory and dementia: A review. Quart. J. Exp. Psych 01. 38A:535-573. M.A. Wilson and J .M. Bower (1988) A computer simulation of olfactory cortex with functional implications for storage and retrieval of olfactory information. In D. Anderson (ed.) Neural Information Processing Systems. AlP Press: New York. Part II N euro-Dynal11.ics
Neural Network Application to Diagnostics and Control of Vehicle Control Systems

Kenneth A. Marko
Research Staff
Ford Motor Company
Dearborn, Michigan 48121

ABSTRACT

Diagnosis of faults in complex, real-time control systems is a complicated task that has resisted solution by traditional methods. We have shown that neural networks can be successfully employed to diagnose faults in digitally controlled powertrain systems. This paper discusses the means we use to develop the appropriate databases for training and testing in order to select the optimum network architectures and to provide reasonable estimates of the classification accuracy of these networks on new samples of data. Recent work applying neural nets to adaptive control of an active suspension system is presented.

1 INTRODUCTION

This paper reports on work performed on the application of artificial neural systems (ANS) techniques to the diagnosis and control of vehicle systems. Specifically, we have examined the diagnosis of common faults in powertrain systems and investigated the problem of developing an adaptive controller for an active suspension system. In our diagnostic investigations we utilize neural networks routinely to establish the standards for diagnostic accuracy we can expect from analysis of vehicle data. Previously we have examined the use of various ANS paradigms for the diagnosis of a wide range of faults in a carefully collected data set from a vehicle operated in a narrow range of speed and load. Subsequently, we have explored the classification of a data set with a more restricted set of faults, drawn from a much broader range of operating conditions. This step was taken as concern about needs for specific, real-time continuous diagnostics superseded the need to develop well-controlled, on-demand diagnostic testing. The impetus arises from recently enacted legislation which dictates that such real-time diagnosis of powertrain systems will be required on cars sold in the U.S.
by the mid-1990's. The difference between the two applications is simple: in the former studies it was presumed that an independent agent has identified that a fault is present; the root cause needs only to be identified. In the real-time problem, the diagnostic task is to detect and identify the fault as soon as it occurs. Consequently, the real-time application is more demanding. In analyzing this more difficult task, we explore some of the complications that arise in developing successful classification schemes for the virtually semi-infinite data streams that are produced in continuous operation of a vehicle fleet. The obstacles to realized applications of neural nets in this area often stem from the sophistication required of the classifier and the complexity of the problems addressed. The limited computational resources on board vehicles will determine the scope of the diagnostic task and how implementations, such as ANS methods, will operate. Finally, we briefly examine an extension of the ANS work to developing trainable controllers for non-linear dynamic systems such as active suspension systems. Preliminary work in this area indicates that effective controllers for non-linear plants can be developed efficiently, despite the exclusion of an accurate plant model from the training process. Although our studies were carried out in simulation, and accurate plant models were therefore available, the capability to develop controllers in the absence of such models is a significant step forward. Such controllers can be developed for existing, unmodeled hardware, and thereby reduce both the efforts required to develop control algorithms by conventional means and the time to program the real-time controllers.

2 NEURAL NET DIAGNOSTICS OF CONTROL SYSTEMS

Our interest in neural networks for diagnosis of faults in control systems stemmed from work on model-based diagnosis of faults in such systems, typically called plants.
In the model-based approach, a model of the system under control is developed and used to predict the dynamic behavior of the system. With the system in operation, the plant performance is observed. The expected behavior and the observed behavior are compared, and if no differences are found, the plant is deemed to be operating normally. If deviations are found, the differences indicate that a fault of some sort is present (failure detection), and an analysis of the differences is used in an attempt to identify the cause (fault identification). Successful implementations (Min, 1987; Liubakka et al, 1988; Rizzoni et al, 1989) of fault detection and identification in complex systems linearized about selected operating points were put together utilizing mathematical constructs called failure detection filters. These filters are simply matrices which transform a set of observations (which become an input vector to the filter) of a plant into another vector space (the output vector or classification space). The form of these filters suggested to us that neural networks could be used to learn similar transforms and thereby avoid the tedious process of model development and validation and a priori identification of the detection filter matrix elements. We showed previously that complex signal patterns from operating internal combustion engines could be examined on a cycle-by-cycle basis (two revolutions of the common four-stroke engine cycle) and used to correctly identify faults present in the engine (Marko et al, 1989). Typical data collected from an operating engine has been shown elsewhere (Marko et al, 1989). This demonstration was focussed on a production engine, limited to a small operating range.
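The matrix view of a failure detection filter described above can be sketched in a few lines. The filter D, the fault signatures, the residual model, and the threshold below are invented for illustration; they are not taken from the cited implementations.

```python
# A linear failure detection filter: a fixed matrix D maps a vector of
# plant observations (here, a residual between observed and predicted
# behavior) into a classification space where each output axis responds
# to one fault signature. D and the threshold are illustrative only.
D = [[1.0, -1.0, 0.0, 0.0],   # output 0 responds to hypothetical fault A
     [0.0, 0.0, 1.0, -1.0]]   # output 1 responds to hypothetical fault B

def diagnose(observed, predicted, threshold=0.5):
    residual = [o - p for o, p in zip(observed, predicted)]
    scores = [sum(d * r for d, r in zip(row, residual)) for row in D]
    if max(abs(s) for s in scores) < threshold:
        return "normal"                    # no significant deviation: no fault
    worst = max(range(len(scores)), key=lambda i: abs(scores[i]))
    return ("fault A", "fault B")[worst]   # fault identification
```

A neural network trained on (observation, fault-label) pairs learns an equivalent transform directly from data, which is the substitution the paper describes.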
One might suppose that a linear model-based diagnostic system could be constructed for such a task, if one wished to expend the time and effort, and therefore this exercise was not a strenuous test of the neural network approach. Additionally, our expert diagnostician could examine the data traces and accurately identify the faults. However, we demonstrated that this problem, which had eluded automated solution by other means up to that time, could easily be handled by neural network classifiers, and this encouraged us to proceed to more difficult problems for which efficient, rigorous procedures did not exist. We were prepared to tolerate developing empirical solutions to our more difficult problems, since we did not expect that a thorough analytic understanding would precede a demonstrated solution. The process outlined here utilized neural network analysis almost exclusively (predominantly back-propagation) on these problems. The understanding of the relationship of neural networks, the structure of the data and the training and testing of the classifiers emerged after acceptable solutions using the neural network methods were obtained. Consequently, the next problem addressed was that of identifying similar faults by observing the system through the multiplex serial communication link resident on the engine control computer. The serial link provides a simple hook-up procedure to the vehicle without severing any links between plant and microcontroller. However, the chief drawback of this approach is that it greatly complicates the recognition task. The complication arises because the data from the plant is sampled too infrequently, is "contaminated" by some processing in the controller, and delivered asynchronously to the serial link with respect to events in the plant (the data output process is not permitted to interrupt the real-time control requirements).
In this case, a test sample of a smaller number of faults was drawn from a vehicle operated in a similar limited range to the first example, and an attempt to detect and identify the faults was made using a variety of networks. Unlike the previous case, it was impossible for any experienced technicians to identify the faults. Again, neural network classifiers were found to develop satisfactory solutions over these limited data sets, which were later verified by a number of careful statistical tests (Marko et al, 1990). This more complex problem also produced a wider range of performance among the various neural net paradigms studied, as shown in Figure 1, where the error rates for various classifiers on these data sets are shown in the graph. These results suggested that not only would data quality and quantity need to be controlled and improved, but that the problem itself would implicitly direct us to the choice of the classifier paradigm. These issues are more thoroughly discussed elsewhere (Marko et al, 1990; Weiss et al, 1990), but the conclusion was that a complete, acceptable solution to the real scope of this problem could not be developed with our group's resources for data collection, data verification and classifier validation. With these two experiences in mind, we could see that the first approach was an effective means of handling the failure detection and identification (FDI) problem, while the latter, although attractive from the standpoint of easy link-up to a vehicle, was for our numerical analysis a very difficult task. It seemed that the appropriate course was to obtain reliable data, by observing the plant directly, and to perform the classification on that data. An effective scheme to accomplish this goal is to perform the classification task in the control microprocessor, which has access to the direct data.
Adopting this strategy, we move the diagnostics from an off-board processor to the on-board processor, and create a new set of possibilities for diagnostics. With diagnostics contained in the controlling processor, diagnostics can be shifted from an on-demand activity, undertaken at predetermined intervals or when the vehicle operator has detected a problem, to a continuous, real-time activity. This change implies that the diagnostic algorithms will, for the most part, be evaluating a properly operating system and only infrequently be required to detect a failure and identify the cause. Additionally, the diagnostic algorithms will have to be very compact, since the current control microprocessors have very limited time and memory for calculation compared to an off-board PC. Furthermore, the classification task will need to be learned from a sample of data which is minuscule compared to the data sets that the deployed diagnostics will have to classify. This fact imposes on the training data set the requirement that it be an accurate statistical sample of the much more voluminous real-world data. This situation must prevail because we cannot anticipate the deployment of a classifier that is undergoing continuous training. A classifier capable of continuous adaptation would require more computational capability, and quite likely a supervised learning environment. The fact is, even for relatively simple diagnostics of operating engines, assembling a large, accurate training data set off-line is a considerable task. This last issue is explored in the next paragraph, but it seems to rule out early deployment of anything other than pretrained classifiers until some experience with much larger data sets from deployed diagnostic systems is obtained.
Figure 1: Comparison of the performance of various neural network paradigms on two static data sets by leave-one-out testing, from measurements performed on vehicles in a service bay. The network paradigms tested are nearest neighbor, Restricted Coulomb Energy (RCE) Single Unit, RCE Multiple Units, Backpropagation, Tree Classifier using hyperplane separation, and Tree Classifier using a Center-Radius decision surface. The 60-Pin data is the data obtained directly from the engine; the DCL (Data Communication Link) data comes through the control microprocessor on a multiplexed two-wire link. Note that RCE-Multiple requires a priori knowledge about the problem which was unavailable for the DCL data, and that the complete statistical testing of backpropagation was impractical due to the length of time required to train each network.

We have examined this issue of real-time diagnostics as it applies to engine misfire detection and identification. Data from normal and misfiring engines was required from a wide range of conditions, a task which consumes hours of test track driving. The set of measurements taken is extensive in order to be certain that the information obtained is a superset of the minimum set of information required. Additionally, great care needed to be exercised in establishing the accuracy of a training set for supervised learning.
Specifically, we needed to be certain that the only samples of misfires included were those intentionally created, and not those which occurred spontaneously and were presumably mislabeled as normal because no intentional fault was being introduced at that time. In order to accomplish this purification of the training set, one must either have an independent detector of misfires (none exists for a production engine operating in a vehicle) or go through an iterative process to remove all the data vectors misclassified as misfire from the data set after the network has completed training. Since the independent assessment of misfire cannot be obtained, we must accept the latter method, which is not altogether satisfactory. The problem with the iterative method is that one must initially exclude from the training set exactly the type of event that the system is being trained to classify. We have to start with the assumption that any additional misfires, beyond the number we introduce, are classification errors. We then reserve the right to amend this judgment in light of further experience as we build up confidence in the classifier. The results of our initial studies are shown in Fig. 2. Here we can see that a backpropagation neural network can classify a broad range of engine operation correctly, and that the network does quite well when we broaden the operating range almost to the performance limits of the engine. The classification errors indicated in the more exhaustive study are misfires detected when no misfire was introduced. At this stage of our investigation we cannot be certain that these are real errors; they may very well be misfires occurring spontaneously or appearing as a result of an additional, unintentional induced misfire in an engine cycle following the one in which the fault was introduced. The results shown in Fig. 2 therefore represent a conservative estimate of the classification errors that can be expected from tests of our engine data.
The backpropagation network we constructed demonstrated that misfire detection and identification is attainable if adequate computation resources are available and appropriate care in obtaining a suitable training set is exercised.

Figure 2: Classification accuracy of a backpropagation neural network trained on misfire data, tabulated as confusion matrices (rows: real class; columns: ANS classification).

    Limited operation:          NORMAL  MISFIRE
        NORMAL                     762        0
        MISFIRE                      1       15

    Extended operation:         NORMAL  MISFIRE
        NORMAL                    7419       13
        MISFIRE                      4      150

Data similar to that shown in Fig. 2 is collected over a modest range of dynamic conditions and then over a very wide range of conditions (potholed roads, severe accelerations and braking, etc.) to estimate the performance limits of classifiers on such data. These misclassification rates are indicators of the best possible performance obtainable from such data, and therefore they are not reasonable estimates of what practical implementations of classifiers should produce.

However, in order to make a neural net a practical means of performing this diagnosis aboard vehicles, we need to eliminate information from the input vector which has no effect on the classification accuracy; otherwise the computational task is hopelessly beyond the capability of the engine's microcontroller. This work is currently underway using a combination of a priori knowledge about the sensor information and principal component analysis of the data sets. Nonetheless, the neural network analysis has once again established that a solution exists and set standards for classification accuracy that we can hope to emulate with more compact forms of classifiers.

3 NEURAL NET CONTROL OF ACTIVE SUSPENSION

The empirical approach to developing solutions for diagnostic problems suggested that a similar tactic might be employed effectively on control problems for which developing acceptable controllers for non-linear dynamic systems by conventional means was a daunting task.
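The error rates implied by the Figure 2 confusion matrices can be recovered with a short calculation; this is a reading aid, not code from the paper, and it assumes rows hold the real class and columns the network's classification.

```python
def rates(matrix):
    """Overall accuracy, false-alarm rate, and miss rate for a 2x2
    confusion matrix ((normal->normal, normal->misfire),
    (misfire->normal, misfire->misfire))."""
    (nn, nm), (mn, mm) = matrix
    accuracy = (nn + mm) / (nn + nm + mn + mm)
    false_alarm = nm / (nn + nm)   # misfire reported when none was introduced
    miss = mn / (mn + mm)          # introduced misfire not detected
    return accuracy, false_alarm, miss

limited = ((762, 0), (1, 15))       # values transcribed from Figure 2
extended = ((7419, 13), (4, 150))
for name, m in (("limited", limited), ("extended", extended)):
    acc, fa, miss = rates(m)
    print(f"{name}: accuracy {acc:.4f}, false alarms {fa:.4f}, misses {miss:.4f}")
```

The 13 extended-range false alarms are exactly the events the text argues may in fact be spontaneous misfires, which is why these figures are described as a conservative bound.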
We wished to explore the application of feed-forward networks to the problem of learning to control a model of a non-linear active suspension system. This problem was of interest because considerable effort had gone into designing controllers by conventional means and a performance comparison could readily be made. In addition, since active suspension systems are being investigated by a number of companies, we wished to examine the possibility of developing model-independent controllers for such systems, since effective hardware systems are usually available before thoroughly validated system models appear. The initial results of this investigation, outlined below, are quite encouraging. A backpropagation network was trained to emulate an existing controller for an active suspension as a first exercise to establish some feel for the complexity of the network required to perform such a task. A complete description of the work can be found elsewhere (Hampo, 1990), but briefly, a network with several hidden nodes was trained to provide performance equivalent to the conventional controller. Since this exercise simply replicated an existing controller, the next step was to develop a controller in the absence of any conventional controller. Therefore, a system model with a novel non-linearity was developed and utilized to train a neural network to control such a plant. The architecture for this control system is similar to that used by Nguyen and Widrow (Nguyen et al, 1990) and is described in detail elsewhere (Hampo et al, 1991). Once again, a backpropagation network, with only 2 hidden nodes, was trained to provide a satisfactory performance in controlling the suspension system simulation running on a workstation. This small network learned the task with fewer than 1000 training vectors, the equivalent of less than 100 feet of bumpy road.
Finally, we examined the performance of the neural network on the same plant, but without explicit use of the plant model in the control architecture. In this scheme, the output error is derived from the difference between the observed performance and the desired performance produced by a cost function based upon conventional measures of suspension performance. In this Cost Function architecture, networks of similar size were readily trained to control non-linear plants and attain performance equivalent to conventional controllers hand-tuned for such plants. Controllers developed in this manner provide a flexible means of approaching the problem of investigating tradeoffs between the conflicting demands made on such suspension systems. These demands include ride quality, vehicle control, and energy management. This control architecture is being applied both to simulations of new systems and to actual, un-modeled hardware rigs to expedite prototype development.

4 CONCLUSIONS

This brief summary of our investigations has shown that neural networks play an important role in the development both of classification systems for diagnosis of faults in control systems and of controllers for practical non-linear plants. In these tasks, neural networks must compete with conventional methods. Conventional methods, although endowed with a more thorough analytic understanding, have usually failed to provide acceptable solutions to the problems we encountered as readily as have the neural network methods. Therefore, the ANS methods have a crucial role in developing solutions. Although neural networks provide these solutions expeditiously, we are just beginning to understand how these solutions arise. The growth of this understanding will determine the role neural networks play in the deployed implementations of these solutions.

References

1. P.S. Min, "Detection of Incipient Failures in Dynamic Systems", Ph.D.
Thesis, University of Michigan, 1987.
2. M.K. Liubakka, G. Rizzoni, W.B. Ribbens and K.A. Marko, "Failure Detection Algorithms Applied to Control System Design for Improved Diagnostics and Reliability", SAE Paper #880726, Detroit, Michigan, 1988.
3. G. Rizzoni, R. Hampo, M.K. Liubakka and K.A. Marko, "Real-Time Detection Filters for On-Board Diagnosis of Incipient Failures", SAE Paper #890763, 1989.
4. K.A. Marko, J. James, J. Dosdall and J. Murphy, "Automotive Control System Diagnostics Using Neural Nets for Rapid Classification of Large Data Sets", Proceedings IJCNN, 11-13, Washington, D.C., 1989.
5. K.A. Marko, L.A. Feldkamp and G.V. Puskorius, "Automotive Diagnostics Using Trainable Classifiers: Statistical Testing and Paradigm Selection", Proceedings IJCNN, 1-33, San Diego, California, 1990.
6. Sholom Weiss and Casimir Kulikowski, "Computer Systems That Learn", Morgan Kaufmann, San Mateo, California, 1990.
7. R.J. Hampo, "Neural Net Control of an Active Suspension System", M.S. Thesis, University of Michigan, 1990.
8. D. Nguyen and B. Widrow, "The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", in Neural Networks for Control, ed. W.T. Miller, MIT Press, Cambridge, Massachusetts, 1990.
9. R.J. Hampo and K.A. Marko, "Neural Net Architectures for Active Suspension Control", paper submitted to IJCNN, Seattle, Washington, 1991.
EVOLUTION AND LEARNING IN NEURAL NETWORKS: THE NUMBER AND DISTRIBUTION OF LEARNING TRIALS AFFECT THE RATE OF EVOLUTION

Ron Keesing and David G. Stork*
Ricoh California Research Center
2882 Sand Hill Road, Suite 115, Menlo Park, CA 94025
stork@crc.ricoh.com
*Dept. of Electrical Engineering, Stanford University
Stanford, CA 94305
stork@psych.stanford.edu

Abstract

Learning can increase the rate of evolution of a population of biological organisms (the Baldwin effect). Our simulations show that in a population of artificial neural networks solving a pattern recognition problem, no learning or too much learning leads to slow evolution of the genes, whereas an intermediate amount is optimal. Moreover, for a given total number of training presentations, fastest evolution occurs if different individuals within each generation receive different numbers of presentations, rather than equal numbers. Because genetic algorithms (GAs) help avoid local minima in energy functions, our hybrid learning-GA systems can be applied successfully to complex, high-dimensional pattern recognition problems.

INTRODUCTION

The structure and function of a biological network derives from both its evolutionary precursors and real-time learning. Genes specify (through development) coarse attributes of a neural system, which are then refined based on experience in an environment containing more information and more unexpected information than the genes alone can represent. Innate neural structure is essential for many high level problems such as scene analysis and language [Chomsky, 1957]. Although the Central Dogma of molecular genetics [Crick, 1970] implies that information learned cannot be directly transcribed to the genes, such information can appear in the genes through an indirect Darwinian process (see below). As such, learning can change the rate of evolution: the Baldwin effect [Baldwin, 1896].
Hinton and Nowlan [1987] considered a closely related process in artificial neural networks, though they used stochastic search and not learning per se. We present here analyses and simulations of a hybrid evolutionary-learning system which uses gradient-descent learning as well as a genetic algorithm to determine network connections. Consider a population of networks for pattern recognition, where initial synaptic weights (weights "at birth") are determined by genes. Figure 1 shows the Darwinian fitness of networks (i.e., how many patterns each can correctly classify) as a function of the weights. Iso-fitness contours are not concentric, in general. The tails of the arrows represent the synaptic weights of networks at birth. In the case of evolution without learning, network B has a higher fitness than does A, and thus would be preferentially selected. In the case of gradient-descent learning before selection, however, network A has a higher after-learning fitness, and would be preferentially selected (tips of arrows). Thus learning can change which individuals will be selected and reproduce, in particular favoring a network (here, A) whose genome is "good" (i.e., initial weights "close" to the optimal), despite its poor performance at birth. Over many generations, the choice of "better" genes for reproduction leads to new networks which require less learning to solve the problem; they are closer to the optimal. The rate of gene evolution is increased by learning (the Baldwin effect).

Figure 1: Iso-fitness contours in synaptic weight space. The black region corresponds to perfect classifications (fitness = 5). The weights of two networks are shown at birth (tails of arrows), and after learning (tips of arrows). At birth, B has a higher fitness score (2) than does A (1); a pure genetic algorithm (without learning) would preferentially reproduce B.
With learning, though, A has a higher fitness score (4) than B (2), and would thus be preferentially reproduced. Since A's genes are "better" than B's, learning can lead to selection of better genes.

SIMULATION APPROACH

Our system consists of a population of 200 networks, each for classifying pixel images of the first five letters of the alphabet. The 9 x 9 input grid is connected to four 7 x 7 sets of overlapping 3 x 3 orientation detectors; each detector is fully connected by modifiable weights to an output layer containing five category units (Fig. 2).

Figure 2: Individual network architecture. The 9x9 pixel input is detected by each of four orientation selective input layers (7x7 unit arrays), which are fully connected by trainable weights to the five category units. The network is thus a simple perceptron with 196 (=4x7x7) inputs and 5 outputs. Genes specify the initial connection strengths.

Each network has a 490-bit gene specifying the initial weights (Figure 3). For each of the 49 filter positions and 5 categories, the gene has two bits which specify which orientation is initially most strongly connected to the category unit (by an arbitrarily chosen factor of 3:1).
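The 490-bit encoding just described can be decoded mechanically. The sketch below assumes a particular bit layout (category-major, two bits per filter position) that the paper does not specify, so treat the ordering as illustrative rather than the authors' exact scheme.

```python
import random

# Decode a 490-bit gene (5 categories x 49 positions x 2 bits) into
# initial weights: the orientation selected by each two-bit field gets
# relative strength 3, the other three get strength 1 (the 3:1 factor
# described in the text). The bit ordering here is an assumption.
CATEGORIES, POSITIONS, ORIENTATIONS = 5, 49, 4

def decode(gene):
    assert len(gene) == CATEGORIES * POSITIONS * 2   # 490 bits
    bits = iter(gene)
    weights = []                                     # weights[c][p][o]
    for _ in range(CATEGORIES):
        row = []
        for _ in range(POSITIONS):
            chosen = 2 * next(bits) + next(bits)     # two bits pick 1 of 4
            row.append([3 if o == chosen else 1 for o in range(ORIENTATIONS)])
        weights.append(row)
    return weights

gene = [random.randint(0, 1) for _ in range(490)]    # a random "birth" genome
initial_weights = decode(gene)
```

Because the gene fixes only which orientation starts strong, every genome decodes to a valid perceptron initialization, and mutation or cross-over on the bit string always yields another valid one.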
During training, the weights from the filters to the output layer are changed by (supervised) perceptron learning. Darwinian fitness is given by the number of patterns correctly classified after training. We use fitness-proportional reproduction and the standard genetic algorithm processes of replication, mutation, and cross-over [Holland, 1975]. Note that while fitness may be measured after training, reproduction is of the genes present at birth, in accord with the Central Dogma. This is not a Lamarckian process.

Figure 3: The genetic representation of a network. For each of the five category units, 49 two-bit numbers describe which of the four orientation units is most strongly connected at each position within the 7x7 grid. This unit is given a relative connection strength of 3, while the other three orientation units at that position are given a relative strength of 1.

For a given total number of teaching presentations, reproductive fitness might be defined in many ways, including categorization score at the end of learning or during learning; such functions will lead to different rates of evolution. We show simulations for two schemes: in uniform learning each network received the same number (e.g., 20) of training presentations; in distributed learning networks received a randomly chosen number (10, 34, 36, 16, etc.) of presentations.

RESULTS AND DISCUSSION

Figure 4 shows the population average fitness at birth.
The lower curve shows the performance of the genetic algorithm alone; the two upper curves represent genotypic evolution (the amount of information within the genes) when the genetic algorithm is combined with gradient-descent learning. Learning increases the rate of evolution: both uniform and distributed learning are significantly better than no learning. The fitness after learning in a generation (not shown) is typically only 5% higher than the fitness at birth. Such a small improvement at a single generation cannot account for the overall high performance at later generations. A network's performance even after learning is more dependent upon its ancestors having learned than upon its having learned the task. [Figure 4 plot: population average fitness at birth vs. generation (0-100) for the different learning schemes.] Figure 4: Learning guides the rate of evolution. In uniform learning, every network in every generation receives 20 learning presentations; in the distributed learning scheme, any network receives a number of patterns randomly chosen between 0 and 40 presentations (mean = 20). Clearly, evolution with learning leads to superior genes (fitness at birth) than evolution without learning. [Figure 5 plot: average fitness at generation 100 vs. learning trials per individual (1 to 1000, log scale), for uniform and distributed learning.] Figure 5: Selectivity of learning-evolution interactions. Too little or too much learning leads to slow evolution (population fitness at birth at generation 100) while an intermediate amount of learning leads to significantly higher such fitness. This effect is significant in both learning schemes. (Each point represents the mean of five simulation runs.)
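The selection scheme underlying these curves (fitness measured after training, but reproduction of the genes present at birth) can be sketched in miniature. Everything below is an illustrative toy, not the paper's system: the 20-bit "genes", the stand-in for perceptron learning, and the mutation rate are all assumptions:

```python
import random

TARGET = [1] * 20          # toy "ideal genes" standing in for the 490-bit gene

def fitness_after_training(genes, n_presentations):
    # Crude stand-in for perceptron learning: each presentation lets the
    # network correct one wrong weight (phenotype only) before being scored.
    learned = list(genes)
    for i in range(len(learned)):
        if n_presentations <= 0:
            break
        if learned[i] != TARGET[i]:
            learned[i] = TARGET[i]
            n_presentations -= 1
    return sum(a == b for a, b in zip(learned, TARGET))

def mutate(genes, rate=0.02):
    return [b ^ 1 if random.random() < rate else b for b in genes]

def evolve(pop_size=50, generations=40, presentations=lambda: 3):
    pop = [[random.randint(0, 1) for _ in range(len(TARGET))]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness_after_training(g, presentations()) for g in pop]
        # Fitness-proportional reproduction of the genes present at BIRTH:
        # what was learned is scored but never written back into the genome.
        pop = [mutate(random.choices(pop,
                                     weights=[s + 1e-9 for s in scores])[0])
               for _ in range(pop_size)]
    return pop
```

Passing `presentations=lambda: 3` mimics uniform learning; `presentations=lambda: random.randint(0, 6)` mimics the distributed scheme. Making `presentations` return the full gene length reproduces the "too much learning" pathology: every network scores perfectly and selection stops discriminating.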
Figure 5 illustrates the tuning of these learning-evolution interactions, as discussed above: too little or too much learning leads to poorer evolution than does an intermediate amount of learning. Given excessive learning (e.g., 500 presentations), all networks perform perfectly. This leads to the slowest evolution, since selection is independent of the quality of the genes. Note too in Fig. 4 that distributed learning leads to significantly faster evolution (higher fitness at any particular generation) than uniform learning. In the uniform learning scheme, once networks have evolved to a point in weight space where they (and their offspring) can identify a pattern after learning, there is no more "pressure" on the genes to evolve. In Figure 6, both A and B are able to identify three patterns correctly after uniform learning, and hence both will reproduce equally. However, in the distributed learning scheme, one of the networks may (randomly) receive a small amount of learning. In such cases, A's reproductive fitness will be unaffected, because it is able to solve the patterns without learning, while B's fitness will decrease significantly. Thus in the distributed learning scheme (and in schemes in which fitness is determined in part during learning), there is "pressure" on the genes to improve at every generation. Diversity is a driving force for evolution. Our distributed learning scheme leads to a greater diversity of fitness throughout a population. [Figure 6 diagram: iso-fitness contours plotted in weight space (Weight 1 axis shown), with learning trajectories for networks A and B.] CONCLUSIONS Figure 6: Distributed learning leads to faster evolution than uniform learning. In uniform learning (shown above), A and B have equal reproductive fitness, even though A has "better" genes. In distributed learning, A will be more likely to reproduce when it (randomly) receives a small amount of learning (shorter arrow) than B will under similar circumstances.
Thus "better" genes will be more likely to reproduce, leading to faster evolution. Evolutionary search via genetic algorithms is a powerful technique for avoiding local minima in complicated energy landscapes [Goldberg, 1989; Peterson, 1990], but is often slow to converge in large problems. Conventional genetic approaches consider only the reproductive fitness of the genes; the slope of the fitness landscape in the immediate vicinity of the genes is ignored. Our hybrid evolutionary-learning approach utilizes the gradient of the local fitness landscape, along with the fitness of the genes, in determining survival and reproduction. We have shown that this technique offers advantages over evolutionary search alone in the single-minimum landscape given by perceptron learning. In a simple pattern recognition problem, the hybrid system performs twice as well as a genetic algorithm alone. A hybrid system with distributed learning, which increases the "pressure" on the genes to evolve at every generation, performs four times as well as a genetic algorithm. In addition, we have demonstrated that there exists an optimal average amount of learning for increasing the rate of evolution: too little or too much learning leads to slower evolution. In the extreme case of too much learning, where all networks are trained to perfect performance, there is no improvement of the genes. The advantages of the hybrid approach in landscapes with multiple minima can be even more pronounced [Stork and Keesing, 1991]. Acknowledgments Thanks to David Rumelhart, Marcus Feldman, and Aviv Bergman for useful discussions. References Baldwin, J. M. "A new factor in evolution," American Naturalist 30, 441-451 (1896) Chomsky, N. Syntactic Structures. The Hague: Mouton (1957) Crick, F. W. "Central Dogma of Molecular Biology," Nature 227, 561-563 (1970) Goldberg, D. E. Genetic Algorithms in Search, Optimization & Machine Learning. Reading, MA: Addison-Wesley (1989). Hinton, G. E.
and Nowlan, S. J. "How learning can guide evolution," Complex Systems 1, 495-502 (1987) Holland, J. H. Adaptation in Natural and Artificial Systems. University of Michigan Press (1975) Peterson, C. "Parallel Distributed Approaches to Combinatorial Optimization: Benchmark Studies on the Traveling Salesman Problem," Neural Computation 2, 261-269 (1990). Stork, D. G. and Keesing, R. "The distribution of learning trials affects evolution in neural networks" (1991, submitted).
English Alphabet Recognition with Telephone Speech Mark Fanty, Ronald A. Cole and Krist Roginski Center for Spoken Language Understanding Oregon Graduate Institute of Science and Technology 19600 N.W. Von Neumann Dr., Beaverton, OR 97006 Abstract A recognition system is reported which recognizes names spelled over the telephone with brief pauses between letters. The system uses separate neural networks to locate segment boundaries and classify letters. The letter scores are then used to search a database of names to find the best scoring name. The speaker-independent classification rate for spoken letters is 89%. The system retrieves the correct name, spelled with pauses between letters, 91% of the time from a database of 50,000 names. 1 INTRODUCTION The English alphabet is difficult to recognize automatically because many letters sound alike; e.g., B/D, P/T, V/Z and F/S. When spoken over the telephone, the information needed to discriminate among several of these pairs, such as F/S, P/T, B/D and V/Z, is further reduced due to the limited bandwidth of the channel. Speaker-independent recognition of spelled names over the telephone is difficult due to variability caused by channel distortions, different handsets, and a variety of background noises. Finally, when dealing with a large population of speakers, dialect and foreign accents alter letter pronunciations. An R from a Boston speaker may not contain an [r]. Human classification performance on telephone speech underscores the difficulty of the problem. 200 Fanty, Cole, and Roginski We presented each of ten listeners with 3,197 spoken letters in random order for identification. The letters were taken from 100 telephone calls in which the English alphabet was recited with pauses between letters, and 100 different telephone calls with first or last names spelled with pauses between letters. Our subjects averaged 93% correct classification of the letters, with performance ranging from 90% to 95%.
This compares to error rates of about 1% for high quality microphone speech [DALY 87]. Over the past three years, our group at OGI has produced a series of letter classification and name retrieval systems. These systems combine speech knowledge and neural network classification to achieve accurate spoken letter recognition [COLE 90, FANTY 91]. Our initial work focused on speaker-independent recognition of isolated letters using high quality microphone speech. By accurately locating segment boundaries and carefully designing feature measurements to discriminate among letters, we achieved 96% classification of letters. We extended isolated letter recognition to recognition of words spelled with brief pauses between the letters, again using high quality speech [FANTY 91, COLE 91]. This task is more difficult than recognition of isolated letters because there are "pauses" within letters, such as the closures in "X," "H," and "W," which must be distinguished from the pauses that separate letters, and because speakers do not always pause between letters when asked to do so. In the system, a neural network segments speech into a sequence of broad phonetic categories. Rules are applied to the segmentation to locate letter boundaries, and the hypothesized letters are re-classified using a second neural network. The letter scores from this network are used to retrieve the best scoring name from a database of 50,000 last names. First choice name retrieval was 95.3%, with 99% of the spelled names in the top three choices. Letter recognition accuracy was 90%. During the past year, with support from US WEST Advanced Technologies, we have extended our approach to recognition of names spelled over the telephone. This report describes the recognition system, some experiments that motivated its design, and its current performance. 1.1 SYSTEM OVERVIEW Data Capture and Signal Processing. Telephone speech is sampled at 8 kHz at 14-bit resolution.
Signal processing routines perform a seventh order PLP (Perceptual Linear Predictive) analysis [HERMANSKY 90] every 3 msec using a 10 msec window. This analysis yields eight coefficients per frame, including energy. Phonetic Classification. Frame-based phonetic classification provides a sequence of phonetic labels that can be used to locate and classify letters. Classification is performed by a fully-connected three-layer feed-forward network that assigns 22 phonetic category scores to each 3 msec time frame. The 22 labels provide an intermediate level of description, in which some phonetic categories, such as [b]-[d], [p]-[t]-[k] and [m]-[n], are combined; these fine phonetic distinctions are performed during letter classification, described below. The input to the network consists of 120 features representing PLP coefficients in a 432 msec window centered on the frame to be classified. The frame-by-frame outputs of the phonetic classifier are converted to a sequence of phonetic segments corresponding to a sequence of hypothesized letters. This is done with a Viterbi search that uses duration and phoneme sequence constraints provided by letter models. For example, the letter model for MN consists of optional glottalization (MN-q), followed by the vowel [eh] (MN-eh), followed by the nasal murmur (MN-mn). Because background noise is often classified as [f]-[s] or [m]-[n], a noise "letter" model was added which consists of either of these phonemes. Letter Classification. Once letter segmentation is performed, a set of 178 features is computed for each letter and used by a fully-connected feed-forward network with one hidden layer to reclassify the letter. Feature measurements are based on the phonetic boundaries provided by the segmentation.
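The Viterbi conversion of frame scores into a left-to-right state sequence, as in the MN model above, can be sketched with a small alignment pass. The state ordering, scores, and function name are illustrative assumptions; the real letter models add duration and phoneme-sequence constraints on top of this:

```python
def viterbi_segment(frame_scores):
    """Align frames to a strictly left-to-right model (e.g. the states
    MN-q -> MN-eh -> MN-mn): each frame may stay in its state or advance
    by one, which enforces the state-sequence constraint.
    frame_scores[t][s] is the log score of state s at frame t, with states
    assumed to be listed in model order.
    """
    T, S = len(frame_scores), len(frame_scores[0])
    NEG = float("-inf")
    dp = [[NEG] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    dp[0][0] = frame_scores[0][0]          # must begin in the first state
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1][s]
            move = dp[t - 1][s - 1] if s > 0 else NEG
            best, prev = (move, s - 1) if move > stay else (stay, s)
            dp[t][s] = best + frame_scores[t][s]
            back[t][s] = prev
    path, s = [S - 1], S - 1               # backtrace from the final state
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return path[::-1]
```

The returned state-per-frame path directly yields the segment boundaries: a boundary is hypothesized wherever the state index advances.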
At present, the features consist of segment durations, PLP coefficients for thirds of the consonant (fricative or stop) before the first sonorant, PLP for sevenths of the first sonorant, PLP for the 200 msecs after the sonorant, PLP slices 6 and 10 msec after the sonorant onset, PLP slices 6 and 30 msec before any internal sonorant boundary (e.g. [eh]/[m]), and zero crossing and amplitude profiles from 180 msec before the sonorant to 180 msec after the sonorant. The outputs of the classifier are the 26 letters plus the category "not a letter." Name Retrieval. The output of the classifier is a score between 0 and 1 for each letter. These scores are treated as probabilities and the most likely name is retrieved from the database of 50,000 last names. The database is stored in an efficient tree structure. Letter deletions and insertions are allowed with a penalty. 2 SYSTEM DEVELOPMENT 2.1 DATA COLLECTION Callers were solicited through local newspaper and television coverage, and notices on computer bulletin boards and news groups. Callers had the choice of using a local phone number or a toll-free 800-number. A Gradient Technology Desklab attached to a UNIX workstation was programmed to answer the phone and record the answers to pre-recorded questions. The first three thousand callers were given the following instructions, designed to generate spoken and spelled names, city names, and yes/no responses: (1) What city are you calling from? (2) What is your last name? (3) Please spell your last name. (4) Please spell your last name with short pauses between letters. (5) Does your last name contain the letter "A" as in apple? (6) What is your first name? (7) Please spell your first name with short pauses between letters. (8) What city and state did you grow up in? (9) Would you like to receive more information about the results of this project?
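The name retrieval step described above (letter scores treated as probabilities, with insertions and deletions penalized) can be sketched as a dynamic program scored against each candidate name. The paper's efficient tree search over 50,000 names is replaced here by a plain per-name loop for clarity; the names, penalty values, and function name are illustrative assumptions:

```python
import math

def best_name(names, letter_probs, insert_penalty=-6.0, delete_penalty=-6.0):
    """Return the dictionary name with the highest total log score.

    letter_probs[i] maps letters to the classifier's probability for the
    i-th detected letter. An edit-distance style DP allows a spurious
    detected letter (insertion) or a missed letter (deletion), each at a
    fixed log penalty.
    """
    def score(name):
        n, m = len(letter_probs), len(name)
        NEG = float("-inf")
        dp = [[NEG] * (m + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                if dp[i][j] == NEG:
                    continue
                if i < n and j < m:   # match detected letter i to name letter j
                    p = max(letter_probs[i].get(name[j], 1e-9), 1e-9)
                    dp[i+1][j+1] = max(dp[i+1][j+1], dp[i][j] + math.log(p))
                if i < n:             # spurious detected letter (insertion)
                    dp[i+1][j] = max(dp[i+1][j], dp[i][j] + insert_penalty)
                if j < m:             # missed letter (deletion)
                    dp[i][j+1] = max(dp[i][j+1], dp[i][j] + delete_penalty)
        return dp[n][m]
    return max(names, key=score)
```

Storing the names in a tree (as the paper does) lets the same scores prune shared prefixes instead of rescoring every name from scratch.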
In order to achieve sufficient coverage of rare letters, the final 1000 speakers were asked to recite the entire English alphabet with brief pauses between letters. The system described here was trained on 800 speakers and tested on 400 speakers. The training set contains 400 English alphabets and 800 first and last names spelled with pauses between letters. The test set consists of 100 alphabets and 300 last names spelled with pauses between letters. A subset of the data was phonetically labeled to train and evaluate the neural network segmenter. Time-aligned phonetic labels were assigned to 300 first and last names and 100 alphabets, using the following labels: cl bcl dcl kcl pcl tcl q aa ax ay b ch d ah eh ey f iy jh k l m n ow p r s t uw v w y z h#. This label set represents a subset of the TIMIT [LAMEL 86] labels sufficient to describe the English alphabet. 2.2 FRAME-BASED CLASSIFICATION Explicit location of segment boundaries is an important feature of our approach. Consider, for example, the letters B and D. They are distinguished by information at the onset of the letter: the spectrum of the release burst of [b] and [d], and the formant transitions during the first 10 or 15 msec of the vowel [iy]. By precisely locating the burst onset and vowel onset, feature measurements can be designed to optimize discrimination. Moreover, the duration of the initial consonant segment can be used to discriminate B from P, and D from T. A large number of experiments were performed to improve segmentation accuracy [ROGINSKI 91]. These experiments focused on (a) determining the appropriate set of phonetic categories, (b) determining the set of features that yield the most accurate classification of these categories, and (c) determining the best strategy for sampling speech frames within the phonetic categories. Phonetic Categories.
Given our recognition strategy of first locating segment boundaries and then classifying letters, it makes little sense to attempt to discriminate [b]-[d], [p]-[t]-[k] or [m]-[n] at this stage. Experiments confirmed that using the complete set of phonetic categories found in the English alphabet did not produce the most accurate frame-based phonetic classification. The actual choice of categories was guided initially by perceptual confusions in the listening experiment, and was refined through a series of experiments in which different combinations of acoustically similar categories were merged. Features Used for Classification. A series of experiments was performed which covaried the amount of acoustic context provided to the network and the number of hidden units in the network. The results are shown in Figure 1. A network with 432 msec of spectral information, centered on the frame to be classified, and 40 hidden units was chosen as the best compromise. Sampling of Speech Frames. The training and test sets contained about 1.7 million 3 msec frames of speech; too many to train on all of them. The manner in which speech frames were sampled was found to have a large effect on performance. It was necessary to sample more speech frames from less frequently occurring categories and those with short durations (e.g., [b]). The location within segments of the speech frames selected was found to have a profound effect on the accuracy of boundary location. Accurate boundary placement required the correct proportion of speech frames sampled near segment boundaries. For example, in order to achieve accurate location of stop bursts, it was necessary to sample a high proportion of speech frames just prior to the burst (within the
closure category). [Figure 1 plot: classifier performance vs. context window (0-600 milliseconds) for networks with 20, 40, and 60 hidden nodes.] Figure 1: Performance of the phonetic classifier as a function of PLP context and number of hidden units. Figure 2 shows the improvement in the placement of the [b]/[iy] boundary after sampling more training frames near that boundary. 2.3 LETTER CLASSIFICATION In order to avoid segmenting training data for letter classification by hand, an automatic procedure was used. Each utterance was listened to and the letter names were transcribed manually. Segmentation was performed as described above, except the Viterbi search was forced to match the transcribed letter sequence. This resulted in very accurate segmentation. One concern with this procedure was that artificially good segmentation for the training data could hurt performance on the test set, where there are bound to be more segmentation errors (since the letter sequence is not known). The letter classifier should be able to recover from segmentation errors (e.g. a B being segmented as V with a long [v] before the burst). To do so, the network must be trained with errorful segmentation. The solution is to perform two segmentations. The forced segmentation finds the letter boundaries so the correct identity is known. A second, unforced, segmentation is performed and these phonetic boundaries are used to generate the features used to train the classifier. Any "letters" found by the unforced search which correspond to noise or silence in the forced search are used as training data for the "not a letter" category. So there are two ways noise can be eliminated: it can match the noise model of the segmenter during the Viterbi search, or it can match a letter during segmentation but be reclassified as "not a letter" by the letter classifier. Both are necessary in the current system.
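One way the forced and unforced segmentations might be paired to harvest "not a letter" training examples is sketched below; the overlap rule, threshold, and function name are our own illustrative assumptions, not details from the paper:

```python
def collect_training_labels(forced_segs, free_segs, overlap=0.5):
    """Label each unforced segment using the forced segmentation.

    forced_segs: [(start, end, letter)] from the transcription-constrained
    search; free_segs: [(start, end)] from the unconstrained search.
    Each free segment takes the forced letter it overlaps most, or
    'not a letter' when it falls in noise/silence between letters.
    """
    labeled = []
    for fs, fe in free_segs:
        best, best_ov = 'not a letter', 0.0
        for s, e, letter in forced_segs:
            # fraction of the free segment covered by this forced letter
            ov = max(0, min(fe, e) - max(fs, s)) / max(fe - fs, 1)
            if ov > best_ov:
                best, best_ov = letter, ov
        if best_ov < overlap:
            best = 'not a letter'
        labeled.append((fs, fe, best))
    return labeled
```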
3 PERFORMANCE Frame-Based Phonetic Classification. The phonetic classifier was trained on selected speech frames from 200 speakers. About 450 speech frames were selected from 50 different occurrences of each phonetic category. Phonetic segmentation performance on 50 alphabets and 150 last names was evaluated by comparing the first-choice of the classifier at each time frame to the label provided by a human expert. The frame-by-frame agreement was 80% before the Viterbi search and 90% after the Viterbi search. Letter Classification and Name Retrieval. The training set consists of 400 alphabets spelled by 400 callers plus first and last names spelled by 400 callers, all with pauses between the letters. When tested on 100 alphabets from new speakers, the letter classification was 89% with less than 1% insertions. When tested on 300 last names from new speakers, the letter classification was 87% with 1.5% insertions. For the 300 callers spelling their last name, 90.7% of the names were correctly retrieved from a list of 50,000 common last names. 95.7% of the names were in the top three. [Figure 2 plots: two histograms of boundary offset from hand labels (frames, from <= -8 to >= 8), before and after adding boundary frames.] Figure 2: Test set improvement in the placement of the [b]/[iy] boundary after sampling more training frames near that boundary. The top histogram shows the difference between hand-labeled boundaries and the system's boundaries in 3 msec frames before adding extra boundary frames. The bottom histogram shows the difference after adding the boundary frames.
4 DISCUSSION The recognition system described in this paper classifies letters of the English alphabet produced by any speaker over telephone lines at 89% accuracy for spelled alphabets and retrieves names from a list of 50,000 with 91% first choice accuracy. The system has a number of characteristic features. We represent speech using an auditory model, Perceptual Linear Predictive (PLP) analysis. We perform explicit segmentation of the speech signal into phonetic categories. Explicit segmentation allows us to use segment durations to discriminate letters, and to extract features from specific regions of the signal. Finally, speech knowledge is used to design a set of features that work best for English letters. We are currently analyzing errors made by our system. The great advantage of our approach is that individual errors can be analyzed, and individual features can be added to improve performance. Acknowledgements Research supported by US WEST Advanced Technologies, APPLE Computer Inc., NSF, ONR, Digital Equipment Corporation and Oregon Advanced Computing Institute. References [COLE 91] R. A. Cole, M. Fanty, M. Gopalakrishnan, and R. D. T. Janssen. Speaker-independent name retrieval from spellings using a database of 50,000 names. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 1991. [COLE 90] R. A. Cole, M. Fanty, Y. Muthusamy, and M. Gopalakrishnan. Speaker-independent recognition of spoken English letters. In Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, 1990. [DALY 87] N. Daly. Recognition of words from their spellings: Integration of multiple knowledge sources. Master's thesis, Massachusetts Institute of Technology, May 1987. [FANTY 91] M. Fanty and R. A. Cole. Spoken letter recognition. In R. P. Lippmann, J. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann, 1991. [HERMANSKY 90] H. Hermansky.
Perceptual Linear Predictive (PLP) analysis of speech. J. Acoust. Soc. Am., 87(4):1738-1752, 1990. [LAMEL 86] L. Lamel, R. Kassel, and S. Seneff. Speech database development: Design and analysis of the acoustic-phonetic corpus. In Proceedings of the DARPA Speech Recognition Workshop, pages 100-110, 1986. [ROGINSKI 91] Krist Roginski. A neural network phonetic classifier for telephone spoken letter recognition. Master's thesis, Oregon Graduate Institute, 1991. PART IV LANGUAGE
Improved Hidden Markov Model Speech Recognition Using Radial Basis Function Networks Elliot Singer and Richard P. Lippmann Lincoln Laboratory, MIT Lexington, MA 02173-9108, USA Abstract A high performance speaker-independent isolated-word hybrid speech recognizer was developed which combines Hidden Markov Models (HMMs) and Radial Basis Function (RBF) neural networks. In recognition experiments using a speaker-independent E-set database, the hybrid recognizer had an error rate of 11.5% compared to 15.7% for the robust unimodal Gaussian HMM recognizer upon which the hybrid system was based. These results and additional experiments demonstrate that RBF networks can be successfully incorporated in hybrid recognizers and suggest that they may be capable of good performance with fewer parameters than required by Gaussian mixture classifiers. A global parameter optimization method designed to minimize the overall word error rather than the frame recognition error failed to reduce the error rate. 1 HMM/RBF HYBRID RECOGNIZER A hybrid isolated-word speech recognizer was developed which combines neural network and Hidden Markov Model (HMM) approaches. The hybrid approach is an attempt to capitalize on the superior static pattern classification performance of neural network classifiers [6] while preserving the temporal alignment properties of HMM Viterbi decoding. Our approach is unique when compared to other studies [2, 5] in that we use Radial Basis Function (RBF) rather than multilayer sigmoidal networks. RBF networks were chosen because their static pattern classification performance is comparable to that of other networks and they can be trained rapidly using a one-pass matrix inversion technique [8]. The hybrid HMM/RBF isolated-word recognizer is shown in Figure 1. 160 Singer and Lippmann [Figure 1 block diagram: the unknown word is scored against the word models and a background noise model, yielding the best word match.] Figure 1: Block diagram of the hybrid recognizer for a two word vocabulary. For each
pattern presented at the input layer, the RBF network produces nodal outputs which are estimates of Bayesian probabilities [9]. The RBF network consists of an input layer, a hidden layer composed of Gaussian basis functions, and an output layer. Connections from the input layer to the hidden layer are fixed at unity while those from the hidden layer to the output layer are trained by minimizing the overall mean-square error between actual and desired output values. Each RBF output node has a corresponding state in a set of HMM word models which represent the words in the vocabulary. HMM word models are left-to-right with no skip states and have a one-state background noise model at either end. The background noise models are identical for all words. In the simplified diagram of Figure 1, the vocabulary consists of 2 E-set words and the HMMs contain 3 states per word model. The number of RBF output nodes (classes) is thus equal to the total number of HMM non-background states plus one to account for background noise. In recognition, Viterbi decoders use the nodal outputs of the RBF network as observation probabilities to produce word likelihood scores. Since the outputs of the RBF network can take on any value, they were initially hard limited to 0.0 and 1.0. The transition probabilities estimated as part of HMM training are retained. The final response of the recognizer corresponds to that word model which produces the highest Viterbi likelihood. Note that the structure of the HMM/RBF hybrid recognizer is identical to that of a tied-mixture HMM recognizer. For a discussion and comparison of the two recognizers, see [10]. Training of the hybrid recognizer begins with the preliminary step of training an HMM isolated-word recognizer. The robust HMM recognizer used provides good recognition performance on many standard difficult isolated-word speech databases [7]. It uses continuous density, unimodal diagonal-covariance Gaussian classifiers for each word state. 
Variances of all states are equal to the grand variance averaged over all words and states. The trained HMM recognizer is used to force an alignment of every training token and assign a label to each frame. Labels correspond to both states of HMM word models and output nodes of the RBF network. The Gaussian centers in the RBF hidden layer are obtained by performing k-means clustering on speech frames and separate clustering on noise frames, where speech and noise frames are distinguished on the basis of the initial Viterbi alignment. The RBF weights from the hidden layer to the output layer are computed by presenting input frames to the RBF network and setting the desired network outputs to 1.0 for the output node corresponding to the frame label and 0.0 for all other nodes. The RBF hidden node outputs and their correlations are accumulated across all training tokens and are used to estimate weights to the RBF output nodes using a fast one-pass algorithm [8]. Unlike the performance of the system reported in [5], additional training iterations using the hybrid recognizer to label frames did not improve performance. 2 DATABASE All experiments were performed using a large, speaker-independent E-set (9 word) database derived from the ISOLET Spoken Letter Database [4]. The training set consisted of 1,080 tokens (120 tokens per word) spoken by 60 female and 60 male speakers for a total of 61,466 frames. The test set consisted of 540 tokens (60 tokens per word) spoken by a different set of 30 female and 30 male speakers for a total of 30,406 frames. Speech was sampled at 16 kHz and had an average SNR of 31.5 dB. Input vectors were based on a mel-cepstrum analysis of the speech waveform as described in [7]. The input analysis window was 20 ms wide and was advanced at 10 ms intervals.
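The one-pass weight computation described above can be illustrated by accumulating hidden-output correlations over all frames and solving the resulting normal equations once. The scalar inputs, tiny hand-rolled solver, and ridge term are simplifications for the sketch, not details from the paper:

```python
import math

def gaussian_outputs(x, centers, var):
    """Hidden-layer outputs for one scalar input frame x."""
    return [math.exp(-(x - c) ** 2 / (2.0 * var)) for c in centers]

def train_rbf_weights(frames, labels, centers, var, n_classes):
    """One-pass estimate of the hidden-to-output weights.

    Accumulates A = H^T H and B = H^T Y over all frames (targets Y are 1.0
    for the labeled node, 0.0 elsewhere), then solves A W = B once by
    Gauss-Jordan elimination -- no iterative gradient descent is needed.
    """
    n = len(centers)
    A = [[1e-8 * (i == j) for j in range(n)] for i in range(n)]  # tiny ridge
    B = [[0.0] * n_classes for _ in range(n)]
    for x, lab in zip(frames, labels):
        h = gaussian_outputs(x, centers, var)
        for i in range(n):
            for j in range(n):
                A[i][j] += h[i] * h[j]
            B[i][lab] += h[i]
    # Gauss-Jordan elimination with partial pivoting reduces A to diagonal.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
                B[r] = [a - f * b for a, b in zip(B[r], B[col])]
    return [[B[i][c] / A[i][i] for c in range(n_classes)] for i in range(n)]
```

Because A and B are only running sums, a second pass over the data is never required, which is what makes this layer so much faster to train than back-propagation.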
Input vectors were created by adjoining the first 12 non-energy cepstral coefficients, the first 13 first-difference cepstral coefficients, and the first 13 second-difference cepstral coefficients. Since the hybrid was based on an 8 state-per-word robust HMM recognizer, the RBF network contained a total of 73 output nodes (72 speech nodes and 1 background node). The error rate of the 8 state-per-word robust HMM recognizer on the speaker-independent E-set task was 15.7%. 3 MODIFICATIONS TO THE HYBRID RECOGNIZER The performance of the baseline HMM/RBF hybrid recognizer described in Section 1 is quite poor. We found it necessary to select the recognizer structure carefully and utilize intermediate outputs properly to achieve a higher level of performance. A full description of these modifications is presented in [10]. Briefly, they include: normalizing the hidden node outputs to sum to 1.0; normalizing the RBF outputs by the corresponding a priori class probabilities as estimated from the initial Viterbi alignment; expanding the RBF network into three individually trained subnetworks corresponding to the cepstrum, first-difference cepstrum, and second-difference cepstrum data streams; setting a lower limit of 10^-5 on the values produced at the RBF output nodes; adjusting a global scaling factor applied to the variances of the RBF centers; and setting the number of centers to 33, 33, and 65 for the first, second, and third subnets, respectively. The structure of the final hybrid recognizer is shown in Figure 2. This recognizer has an error rate of 11.5% (binomial standard deviation = ±1.4) on the E-set test data compared to 15.7% (±1.6) for the 8 state-per-word unimodal Gaussian HMM recognizer, and 9.6% (±1.3) for a considerably more complex tied-mixture HMM recognizer [10]. The final hybrid system contained a total of 131 Gaussians and 9,563 weights.
On a SUN SPARCstation 2, training time for the final hybrid recognizer was about 1 hour and testing time was about 10 minutes. [Figure 2 block diagram: the multiple-subnet hybrid recognizer producing the best word match.] Figure 2: Block diagram of the multiple subnet hybrid recognizer. 4 GLOBAL OPTIMIZATION In the hybrid recognizer described above, discriminative training is performed at the frame level. A preliminary segmentation by the HMM recognizer assigns each speech frame to a specific RBF output node or, equivalently, an HMM word state. The RBF network weights are then computed to minimize the squared error between the network output and the desired output over all input frames. The goal of the recognizer, however, is to classify words. To meet this goal, discriminant training should be performed on word-level rather than frame-level outputs. Recently, several investigators have described techniques that optimize parameters based on word-level discriminant criteria [1, 3]. These techniques seek to maximize a mutual information type of criterion, C = log(L_c / L), where L_c is the likelihood score of the word model corresponding to the correct result and L = sum_w L_w is the sum of the word likelihood scores for all models. By computing dC/d(theta), the gradient of C with respect to a parameter theta, we can optimize any parameter in the hybrid recognizer using the update equation theta_new = theta_old + eta * dC/d(theta), where theta_new is the new value of the parameter, theta_old is the previous value, and eta is a gain term proportional to the learning rate. Following [1], we refer to the word-level optimization technique as "global optimization." To apply global optimization to the HMM/RBF hybrid recognizer, we derived the formulas for the gradient of C with respect to w_ij^k, the weight connecting RBF center i to RBF output node j in subnet k; P_j^k, the RBF output normalization factor for RBF output node j in subnet k; and m_il^k, the l-th element of the mean of center i of subnet k.
For each token of length T frames, the weight gradient is

∂C/∂w_ij^k = (δ_j^c − P_w) Σ_{t=1}^{T} (α_jt β_jt / L_w) (φ_it^k / s_jt^k),

with analogous expressions for ∂C/∂P_j^k and ∂C/∂m_il^k, where

L_w = likelihood score for word model w,
P_w = L_w / Σ_w L_w, the normalized word likelihood,
δ_j^c = 1 if RBF output node j is a member of the correct word model, and 0 otherwise,
α_jt = forward partial probability of HMM state j at time t,
β_jt = backward partial probability of HMM state j at time t,
s_jt^k = unnormalized output of RBF node j of subnet k at time t,
φ_it^k = normalized output of the i-th Gaussian center of subnet k at time t,
x_lt^k = l-th element of the input vector for subnet k at time t,
h^k = global scaling factor for the variances of subnet k,
σ_il^k = l-th component of the standard deviation of the i-th Gaussian center of subnet k,
N_k = number of RBF output nodes in subnet k.

In implementing global optimization, the frame-level training procedure described earlier serves to initialize system parameters, and hill-climbing methods are used to reestimate parameters iteratively. Thus, weights are initialized to the values derived using the one-pass matrix inversion procedure, RBF output normalization factors are initialized to the class priors, and Gaussian means are initialized to the k-means clustering values. Note that while the priors sum to one, no such constraint was placed on the RBF output normalization factors during global optimization. It is worth noting that since the RBF network outputs in the hybrid recognizer are a posteriori probabilities normalized by a priori class probabilities, their values may exceed 1. The accumulation of these quantities in the Viterbi decoders often leads to values of α_jt β_jt and L_w in the range of 10⁸⁰ or greater. Numerical problems with the implementation of the global optimization equations were avoided by using log arithmetic for intermediate operations and working with the quantity β_jt / L_w throughout.
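The log-domain bookkeeping described above can be sketched as follows (a minimal illustration, not the recognizer's actual code; the word scores are invented):

```python
import math

def logsumexp(log_vals):
    # Stable log(sum(exp(v))): accumulates likelihoods whose linear-domain
    # values (e.g. ~1e80) would overflow a double.
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

# Criterion C = log(L_c / L) computed entirely in the log domain,
# without ever forming L = sum_w L_w explicitly.
log_L_words = [190.0, 184.0, 181.5]   # hypothetical log word-model scores
log_C = log_L_words[0] - logsumexp(log_L_words)
```

A parameter θ would then be moved by η ∂C/∂θ per the update equation above.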
Values of η which produced reasonable results were generally in the range of 10⁻¹⁰ to 10⁻⁶. The results of using the global optimization technique to estimate the RBF weights are shown in Figure 3. Figure 3(a) shows the recognition performance on the training and test sets versus the number of training iterations, and Figure 3(b) tracks the value of the criterion C = log(L_c / L) on the training and test sets under the same conditions. It is apparent that the method succeeds in iteratively increasing the value of the criterion and in significantly lowering the error rate on the training data. Unfortunately, this behavior does not extend to improved performance on the test data. This suggests that global optimization is overfitting the hybrid word models to the training data. Using global optimization to estimate the RBF output normalization factors and the Gaussian means produced similar results. Figure 3: (a) Error rates for training and test data. (b) Criterion C for training and test data. 5 ACCURACY OF BAYES PROBABILITY ESTIMATION Three methods were used to determine how well RBF outputs estimate Bayes probabilities. First, since network outputs must sum to one if they are probabilities, we computed the RMS error between the sum of the RBF outputs and unity for all frames of the test data. The average RMS error was low (10⁻⁴ or less for each subnet). Second, the average output of each RBF node was computed, because this should equal the a priori probability of the class associated with the node [9]. This condition was true for each subnet, with an average RMS error on the order of 10⁻⁵. For the final method, we partitioned the outputs into 100 equal-size bins between 0.0 and 1.0.
For each input pattern, we used the output values to select the appropriate bins and incremented the corresponding bin counts by one. In addition, we incremented the correct-class count of the one bin which corresponded to the class of the input pattern. For example, the data indicated that for the 61,466 frames of training tokens, nodal outputs of the cepstra subnet in the range 0.095-0.105 occurred 29,698 times and were correct classifications (regardless of class) 3,067 times. If the outputs of the network were true Bayesian probabilities, we would expect the relative frequency of correct labeling to be close to 0.1. Similarly, relative frequencies measured in other intervals would be expected to be close to the value of the corresponding interval center. Thus, a plot of the relative frequency for each bin versus the bin center should show the measured values lying close to the diagonal. The measured relative frequency data for the cepstra subnet and ±2σ bounds for the binomial standard deviations of the relative frequencies are shown in Figure 4. Outputs below 0.0 and above 1.0 are fixed at 0.0 and 1.0, respectively. Although the relative frequencies tend to be clustered around the diagonal, many values lie outside the bounds. Furthermore, goodness-of-fit measurements using the χ² test indicate that fits fail at significance levels well below .01. We conclude that although the system provides good recognition accuracy, better performance may be obtained with improved estimation of Bayesian probabilities. Figure 4: Relative frequency of correct class labeling and ±2σ bounds for the binomial standard deviation, cepstra subnet.
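The binning procedure above amounts to a reliability check, which can be sketched as follows (our own minimal version, with made-up data; the real check used 61,466 frames and per-bin binomial bounds):

```python
def relative_frequencies(outputs, correct_flags, n_bins=100):
    # Partition outputs into n_bins equal-size bins on [0, 1]; for each
    # bin, count occurrences and correct classifications, and return the
    # relative frequency of correct labeling per bin (None if empty).
    counts = [0] * n_bins
    hits = [0] * n_bins
    for p, ok in zip(outputs, correct_flags):
        b = min(int(p * n_bins), n_bins - 1)   # outputs clipped into [0, 1]
        counts[b] += 1
        hits[b] += 1 if ok else 0
    return [h / c if c else None for h, c in zip(hits, counts)]
```

For well-calibrated Bayesian outputs, the frequency returned for the bin centered at 0.1 should be near 0.1, and likewise for every other bin.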
6 SUMMARY AND CONCLUSIONS This paper describes a hybrid isolated-word speech recognizer which successfully integrates radial basis function neural networks and hidden Markov models. The hybrid's performance is better than that of a tied-mixture recognizer of comparable complexity and near that of a tied-mixture recognizer of considerably greater complexity. The structure of the RBF networks and the processing of network outputs had to be carefully selected to provide this level of performance. A global optimization technique designed to maximize a word discrimination criterion did not succeed in improving performance further. Statistical tests indicated that the accuracy of the Bayesian probability estimation performed by the RBF networks could be improved. We conclude that RBF networks can be used to provide good performance and short training times in hybrid recognizers, and that these systems may require fewer parameters than Gaussian-mixture-based recognizers at comparable performance levels. Acknowledgements This work was sponsored by the Defense Advanced Research Projects Agency. The views expressed are those of the authors and do not reflect the official policy or position of the U.S. Government. References [1] Yoshua Bengio, Renato De Mori, Giovanni Flammia, and Ralf Kompe. Global optimization of a neural network - hidden Markov model hybrid. Technical Report TR-SOCS-90.22, McGill University School of Computer Science, Montreal, Qc., Canada, December 1990. [2] Herve Bourlard and Nelson Morgan. A continuous speech recognition system embedding MLP into HMM. In D. Touretzky, editor, Advances in Neural Information Processing 2, pages 186-193. Morgan Kaufmann, San Mateo, CA, 1990. [3] John S. Bridle. Alpha-nets: A recurrent neural network architecture with a hidden Markov model interpretation. Speech Communication, 9:83-92, 1990. [4] Ron Cole, Yeshwant Muthusamy, and Mark Fanty. The ISOLET spoken letter database.
Technical Report CSE 90-004, Oregon Graduate Institute of Science and Technology, Beaverton, OR, March 1990. [5] Michael Franzini, Kai-Fu Lee, and Alex Waibel. Connectionist Viterbi training: A new hybrid method for continuous speech recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, April 1990. [6] Richard P. Lippmann. Pattern classification using neural networks. IEEE Communications Magazine, 27(11):47-54, November 1989. [7] Richard P. Lippmann and Ed A. Martin. Multi-style training for robust isolated-word speech recognition. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pages 705-708. IEEE, April 1987. [8] Kenney Ng and Richard P. Lippmann. A comparative study of the practical characteristics of neural network and conventional pattern classifiers. In R. P. Lippmann, J. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing 3. Morgan Kaufmann, San Mateo, CA, 1991. [9] Mike D. Richard and Richard P. Lippmann. Neural network classifiers estimate Bayesian a posteriori probabilities. Neural Computation, in press. [10] Elliot Singer and Richard P. Lippmann. A speech recognizer using radial basis function neural networks in an HMM framework. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992.
Green's Function Method for Fast On-line Learning Algorithm of Recurrent Neural Networks Guo-Zheng Sun, Hsing-Hen Chen and Yee-Chun Lee Institute for Advanced Computer Studies and Laboratory for Plasma Research, University of Maryland, College Park, MD 20742 Abstract The two well known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al., Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward path in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used in an on-line manner, its annoying drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N⁴) operations per time step). Therefore, developing a fast forward algorithm is a challenging task. In this paper we propose a forward learning algorithm which is one order faster (only O(N³) operations per time step) than the sensitivity matrix algorithm. The basic idea is that, instead of integrating the high-dimensional sensitivity dynamic equation, we solve forward in time for its Green's function to avoid the redundant computations, and then update the weights whenever the error is to be corrected. A numerical example of classifying state trajectories with a recurrent network is presented; it substantiates the faster speed of the proposed algorithm relative to Williams and Zipser's algorithm. I. Introduction. In order to deal with sequential signals, recurrent neural networks are often put forward as a useful model. A particularly pressing issue concerning recurrent networks is the search for an efficient on-line training algorithm. The error back-propagation method (Rumelhart, Hinton, and Williams [1]) was originally proposed to handle feedforward networks.
This method can be applied to train recurrent networks if one unfolds the time sequence of mappings into a multilayer feed-forward net, each layer with identical weights. Due to the nature of the backward path, it is basically an off-line method. Pineda [2] generalized it to recurrent networks with hidden neurons; however, he is mostly interested in time-independent, fixed-point type behaviors. Pearlmutter [3] proposed a scheme to learn temporal trajectories which involves equations to be solved backward in time. It is essentially a generalized version of error back-propagation applied to the problem of learning a target state trajectory. The viable on-line method to date is the RTRL (Real Time Recurrent Learning) algorithm (Williams and Zipser [4]), which propagates a sensitivity matrix forward in time. The main drawback of this algorithm is its high cost of computation: it needs O(N⁴) operations per time step. Therefore, a faster (less than O(N⁴) operations) on-line algorithm appears to be desirable. Toomarian and Barhen [5] proposed an O(N²) on-line algorithm. They derived the same equations as Pearlmutter's back-propagation using an adjoint-operator approach. They then tried to convert the backward path into a forward path by adding a Delta function to its source term. But this is not correct. The problem is not merely that it "precludes straightforward numerical implementation," as they acknowledged later [6]. Even in theory, the result is not correct. The mistake lies in their use of a not well defined identity of Delta function integration. Briefly speaking, the identity ∫ δ(t − t_f) f(t) dt = f(t_f) does not hold if the function f(t) is discontinuous at t = t_f. The value of the left-side integral depends on the distribution of the function f(t) and is therefore not uniquely defined.
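To make the ambiguity concrete (our notation, not the authors'): represent δ as the limit of a narrow bump δ_ε of unit area centered at t_f. For an f with a jump at t_f,

```latex
\int \delta_\varepsilon(t - t_f)\, f(t)\, dt
\;\longrightarrow\;
\lambda\, f(t_f^-) + (1 - \lambda)\, f(t_f^+)
\qquad (\varepsilon \to 0),
```

where λ ∈ [0, 1] is the fraction of the bump's area lying to the left of t_f. A symmetric bump gives the average of the one-sided limits, while a bump supported entirely on one side gives that side's limit, so the integral has no unique value when f(t_f⁻) ≠ f(t_f⁺).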
If we deal with the discontinuity carefully by splitting the time interval from t₀ to t_f into two segments, t₀ to t_f − ε and t_f − ε to t_f, and let ε → 0, we find that adding a Delta function to the source term does not affect the basic property of the adjoint equation: namely, it still has to be solved backward in time. Recently, Toomarian and Barhen [6] modified their adjoint-operator approach and proposed an alternative O(N³) on-line training algorithm. Although their result is, in nature, very similar to what we present in this paper, it will be seen that our approach is more straightforward and can be easily implemented numerically. Schmidhuber [7] proposed an O(N³) algorithm which is a combination of back propagation (within each data block of size N) and forward propagation (between blocks). It is therefore not truly an on-line algorithm. Sun, Chen and Lee [8] studied this problem using a more general variational approach, in which a constrained optimization problem with Lagrange multipliers was considered. The dynamic equation of the Lagrange multiplier was derived, which is exactly the same as the adjoint equation [5]. By taking advantage of the linearity of this equation, an O(N³) on-line algorithm was derived; but the numerical implementation of the algorithm, especially its numerical instabilities, was not addressed in that paper. In this paper we present a new approach to this problem: the Green's function method. The advantages of this method are its simple mathematical formulation and easy numerical implementation. One numerical example of trajectory classification is presented to substantiate the faster speed of the proposed algorithm. The numerical results are benchmarked against Williams and Zipser's algorithm. II. Green's Function Approach. (a) Definition of the Problem Consider a fully recurrent network with neural activity represented by an N-dimensional vector x(t).
The dynamic equations can be written in general as a set of first-order differential equations:

ẋ(t) = F(x(t), w, I(t))     (1)

where w is a matrix representing the set of weights and all other adjustable parameters and I(t) is a vector representing the neuron units clamped by external input signals at time t. For a simple network connected by first-order weights, the nonlinear function F may look like

F(x(t), w, I(t)) = −x(t) + g(w · x) + I(t)     (2)

where the scalar function g(u) could be, for instance, the sigmoid function g(u) = 1 / (1 + e⁻ᵘ). Suppose that part of the state neurons {xᵢ | i ∈ M} are measurable and part of the neurons {xᵢ | i ∈ H} are hidden. For the measurable units we may have desired outputs x̄(t). In order to train the network, an objective functional (or error measure functional) is often given as

E(x, x̄) = ∫_{t₀}^{t_f} e(x(t), x̄(t)) dt     (3)

where the functional E depends on the weights w implicitly through the measurable neurons {xᵢ | i ∈ M}. A typical error function is

e(x(t), x̄(t)) = (x̄(t) − x(t))²     (4)

Gradient descent learning modifies the weights according to

Δw ∝ −∂E/∂w = −∫_{t₀}^{t_f} (∂e/∂x · ∂x/∂w) dt     (5)

In order to evaluate the integral in Eq. (5) one needs to know both ∂e/∂x and ∂x/∂w. The first term is easily obtained by taking the derivative of the given error function e(x(t), x̄(t)). For the second term one needs to solve the differential equation

d/dt (∂x/∂w) = (∂F/∂x) · (∂x/∂w) + ∂F/∂w     (6)

which is easily derived by taking the derivative of Eq. (1) with respect to w. The well known forward algorithm for recurrent networks [4] solves Equation (6) forward in time and makes the weight correction at the end (t = t_f) of the input sequence.
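For concreteness, the forward integration of the sensitivity equation (6) can be sketched as a toy Euler discretization (our own naming; P[k, i, j] stands for ∂x_k/∂w_ij, and the dynamics are those of Eq. (2) with a sigmoid g):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sensitivity_step(x, P, W, I, dt=0.01):
    # One Euler step of Eq. (1)/(2) and of the sensitivity equation (6):
    #   dP/dt = (dF/dx) P + dF/dw,  which costs O(N^4) work per step.
    n = len(x)
    u = W @ x
    g = sigmoid(u)
    gp = g * (1.0 - g)                      # sigmoid derivative
    dFdx = -np.eye(n) + gp[:, None] * W     # from F = -x + g(Wx) + I
    dFdw = np.zeros((n, n, n))
    for i in range(n):
        dFdw[i, i, :] = gp[i] * x           # nonzero only for k = i
    x_new = x + dt * (-x + g + I)
    P_new = P + dt * (np.einsum('ka,aij->kij', dFdx, P) + dFdw)
    return x_new, P_new
```

The einsum contraction over the full N×N×N sensitivity tensor is exactly the per-step cost the Green's function method below is designed to avoid.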
(This algorithm was developed independently by several researchers; due to the page limitation we cannot cite all related papers, and simply call it Williams and Zipser's algorithm.) On-line learning makes a weight correction whenever an error is to be corrected during the input sequence:

Δw(t) = −η (∂e/∂x · ∂x/∂w)     (7)

The proof of convergence of the on-line learning algorithm will be addressed elsewhere. The main drawback of this forward algorithm is that it requires O(N⁴) operations per time step to update the matrix ∂x/∂w. The goal of our Green's function approach is to find an on-line algorithm which requires a smaller computational load. (b) Green's Function Solution First let us analyze the computational complexity of integrating Eq. (6) directly. Rewrite Eq. (6) as

L · (∂x/∂w) = ∂F/∂w     (8)

where the linear operator L is defined as L = d/dt − ∂F/∂x. Two types of redundancy can be seen in Eq. (8). First, the operator L does not depend on w explicitly, which means that in solving for ∂x/∂w we repeatedly solve the identical differential equation for each component of w. This is redundant, and it is especially wasteful when higher-order connection weights are used. The second redundancy is in the special form of ∂F/∂w for neural computations, where the same activity function (say, the sigmoid) is used for every neuron, so that

∂F_k/∂w_ij = g′(Σ_l w_kl x_l) δ_ki x_j     (9)

where δ_ki is the Kronecker delta. It is seen from Eq. (9) that among the N³ components of this third-order tensor, most of them, namely the N²(N − 1) components with k ≠ i, are zero and need not be computed repeatedly. The original forward learning scheme pays no attention to this redundancy. Our Green's function approach avoids the redundancy by solving for the low-dimensional Green's function, and then constructing the solution of Eq.
(8) as the dot product of ∂F/∂w with the Green's function, which can in turn be reduced to a scalar product due to Eq. (9). The Green's function of the operator L is defined as a dual-time tensor function G(t − τ) which satisfies

d/dt G(t − τ) − (∂F/∂x) · G(t − τ) = δ(t − τ)     (10)

It is well known that if the solution of Eq. (10) is known, the solution of the original equation Eq. (6) (or (8)) can be constructed from the source term ∂F/∂w through the integral

∂x/∂w (t) = ∫_{t₀}^{t} G(t − τ) · (∂F/∂w)(τ) dτ     (11)

To find the Green's function solution we first introduce a tensor function V(t) that satisfies the homogeneous form of Eq. (10),

d/dt V(t) − (∂F/∂x) · V(t) = 0,  V(t₀) = 1     (12)

The solution of Eq. (10), i.e. the Green's function, can then be constructed as

G(t − τ) = V(t) · V⁻¹(τ) H(t − τ)     (13)

where H(t − τ) is the Heaviside function, equal to 1 for t ≥ τ and 0 for t < τ. Using the well known equalities d/dt H(t − τ) = δ(t − τ) and f(t, τ) δ(t − τ) = f(t, t) δ(t − τ), one can easily verify that the Green's function constructed in Eq. (13) is correct, that is, it satisfies Eq. (10). Substituting G(t − τ) from Eq. (13) into Eq. (11), we obtain the solution of Eq. (6) as

∂x/∂w (t) = V(t) · ∫_{t₀}^{t} V⁻¹(τ) · (∂F/∂w)(τ) dτ     (14)

We note that this formal solution not only satisfies Eq. (6) but also satisfies the required initial condition

∂x/∂w (t₀) = 0     (15)

The "on-line" weight correction at time t is obtained easily from Eq. (5):

Δw = −η (∂e/∂x) · (∂x/∂w) = −η (∂e/∂x · V(t)) · ∫_{t₀}^{t} V⁻¹(τ) · (∂F/∂w)(τ) dτ     (16)

(c) Implementation To implement Eq. (16) numerically we introduce two auxiliary memories. First, we define U(t) to be the inverse of the matrix V(t), i.e., U(t) = V⁻¹(t). It is easy to see that the dynamic equation of U(t) is

d/dt U(t) + U(t) · (∂F/∂x) = 0,  U(t₀) = 1     (17)

Secondly, we define a third-order tensor Π_ijk that satisfies

dΠ/dt = U(t) · (∂F/∂w),  Π(t₀) = 0     (18)

Then the weight correction in Eq. (16) becomes
dF dt dw TI (to) = 0 then the weight correction in Eq. (16) becomes Ow = -11 (v(t) . TI(t» where the vector v(t) is the solution of the linear equation de v (t) . U (t) = d x In discrete time, Eqs. (17) - (20) become: ( Utj (t) = Uij (t - 1) + At L Uik(t _l):Fk k QXj U .. (O) = 0" 'J " ( d~ TI"k(t) = TI·Jk (t-l) + (At) U'I(t-l)~ IJ I IJ QWjk TIijk(O) = 0 L de v· (t) . U .. (t) = -d I 'J x i j (17) (18) (19) (20) (21) (22) (23) 338 Sun, Chen, and Lee awi} = -11 (~:>k (t) llkij (1» k (24) To summarize the procedure of the Green's function method, we need to simultaneously integrate Eq. (21) and Eq. (22) for U(I) and n forward in time starting from Ui}{O) = Oij and 0ijk(O) = O. Whenever error message are generated, we shall solve Eq. (23) for v(t) and update weights according to Eq. (24). The memory size required by this algorithm is simply I?+fil for storing U(I) and O(t). The speed of the algorithm is analyzed as follows. From Eq. (21) and Eq. (22) we see that the update of U(I) and IT both need I? operations per time step. To solve for v(t) and update w, we need also NJ operations per time step. So, the on-line Updating of weights needs totally 41? operations per time step. This is one order of magnitude faster than the current forward learning scheme. In Numerical Simulation We present in this section numerical examples to demonstrate the proposed learning algorithm and benchmark it against Williams&Zipser's algorithm. Class 1 Class 2 Class 3 Fig.1 PHASE SPACE TRAJECTORIES Three different shapes of 2-D trajectory, each is shown in one column with three examples. Recurrent neural networks are trained to recognize the different shapes of trajectory. We consider the trajectory c1assitication problem. The input data are the time series of two Green's Function Method for Fast On-line Learning Algorithm of Recurrent Neural Networks 339 dimensional coordinate pairs (x{t), yet)} sampled along three different types of trajectories in the phase space. 
The sampling is taken uniformly with flt=27t160. The trajectory equations are X{I) = sin{I+~)lsin{I)1 X{I) = sin(o.sl+~)sin(l.st) X{I) = sin(t+~)sin{21) {y (I) = cos (I +~) I sin (I) I {y (I) = cos (0.51 +~) sin (l.5t) {y (I) = cos (I +~) sin (21) where ~ is a uniformly distributed random parameter. When J3 is changed, these trajectories are distorted accordingly. Nine examples (three for each class) are shown in Fig.l. The neural net used here is a fully recurrent first-order network with dynamics (: +6 Si(t+l) = Si(t) + (Tanh L Wi/se/)})) }=1 (25) where S and I are vectors of state and input neurons, the symbol e represents concatenation, and N is the number of state. Six input neurons are used to represent the normalized vector {I, x(t), yet), x(t)2, y(t)2, x(t)y(t)}. The neural network structure is shown in Fig. 2. Check state neurons at State {t + 1 '\ ---- the end of input S.fquence. • • •• ~ error = Target - ;} 2 SN State{t) Input{t) Fig.2 Recurre"t Neural Network for Trajectory ClassiflCatio" For recognition, each trajectory data sequence needs to be fed to the input neurons and the state neurons evolve according to the dynamics in Eq. (25). At the end of input series we check the last three state neurons and classify the input trajectory according to the "winner-take-all" rule. For training, we assign the desired final output for the three trajectory classes to (1,0,0), (0,1,0) and (0,0,1) respectively. Meanwhile, we need to simultaneously integrate Eq. (21) for U(t) and Eq. (22) for n. At the end, we calculated the error from Eq. (4) and solve Eq. (23) for vet) using LU decomposition algorithm. Finally, we update weights according to Eq. (24). Since the classification error is generated at the end of input sequence, this learning does not have to be on-line. We present this example only to compare the speeds of the proposed fast algorithm against the Williams and Zipser's. 
We run the two algorithms for the same number of iterations and compare the CPU time used. The results are shown in Table 1, where in each iteration we present 150 training patterns, 50 for each class. These patterns are chosen by randomly selecting β values. It is seen that the CPU time ratio is O(1/N), indicating that the Green's function algorithm is one order faster in N. Another issue to be considered is the error convergence rate (or learning rate, as it is usually called). Although the two algorithms calculate the same weight correction as in Eq. (7), the outcomes may differ due to the different numerical schemes. As a result, the error convergence rates are slightly different even when the same learning rate η is used. In all the numerical simulations we have conducted, the learning results are very good (in testing, recognition is perfect; not a single misclassification was found), but during training the error convergence rates differ. The numerical experiments show that the proposed fast algorithm converges more slowly than Williams and Zipser's for small neural nets but faster for large ones.

  N      Iterations   Fast Algorithm   Williams & Zipser's   Ratio
  N=4    200          1607.4           5020.8                1:3
  N=8    50           1981.7           10807.0               1:5
  N=12   50           5947.6           45503.0               1:8

Table 1. CPU time (in seconds) comparison, implemented on a DEC3100 workstation, for learning the trajectory classification example. IV. Conclusion The Green's function has been used to develop a faster on-line learning algorithm for recurrent neural networks. This algorithm requires O(N³) operations per time step, which is one order faster than Williams and Zipser's algorithm. The memory required is O(N³). One feature of this algorithm is its straightforward formulation, which can be easily implemented numerically.
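As a concreteness check on Eqs. (21)-(24), here is a toy NumPy sketch (our own naming); one sanity test is that U(t) should remain the inverse of the forward fundamental matrix V(t) of Eq. (12):

```python
import numpy as np

def greens_step(U, Pi, dFdx, dFdw, dt):
    # Eq. (21): U(t) = U(t-1) - dt * U(t-1) dF/dx  (from dU/dt + U dF/dx = 0)
    # Eq. (22): Pi(t) = Pi(t-1) + dt * U(t-1) . dF/dw
    U_new = U - dt * (U @ dFdx)
    Pi_new = Pi + dt * np.einsum('il,ljk->ijk', U, dFdw)
    return U_new, Pi_new

def weight_correction(U, Pi, dedx, eta):
    # Eq. (23): solve v . U = de/dx (i.e. U^T v = de/dx), then
    # Eq. (24): dw_ij = -eta * sum_k v_k Pi_kij
    v = np.linalg.solve(U.T, dedx)
    return -eta * np.einsum('k,kij->ij', v, Pi)
```

Each greens_step costs O(N³), versus O(N⁴) for updating the full sensitivity tensor.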
A numerical example of trajectory classification has been used to demonstrate the speed of this fast algorithm compared to Williams and Zipser's algorithm. References [1] D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing: Vol. 1, MIT Press, 1986. P. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. thesis, Harvard University, 1974. [2] F. Pineda, Generalization of back-propagation to recurrent neural networks. Phys. Rev. Letters, 19(59):2229, 1987. [3] B. Pearlmutter, Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2):263, 1989. [4] R. Williams and D. Zipser, A learning algorithm for continually running fully recurrent neural networks. Tech. Report ICS Report 8805, UCSD, La Jolla, CA 92093, November 1988. [5] N. Toomarian, J. Barhen and S. Gulati, "Application of Adjoint Operators to Neural Learning", Appl. Math. Lett., 3(3), 13-18, 1990. [6] N. Toomarian and J. Barhen, "Adjoint-Functions and Temporal Learning Algorithms in Neural Networks", Advances in Neural Information Processing Systems 3, pages 113-120, ed. by R. Lippmann, J. Moody and D. Touretzky, Morgan Kaufmann, 1991. [7] J. H. Schmidhuber, "An O(N3) Learning Algorithm for Fully Recurrent Networks", Tech Report FKI-151-91, Institut für Informatik, Technische Universität München, May 1991. [8] Guo-Zheng Sun, Hsing-Hen Chen and Yee-Chun Lee, "A Fast On-line Learning Algorithm for Recurrent Neural Networks", Proceedings of International Joint Conference on Neural Networks, Seattle, Washington, page 11-13, June 1991.
Constrained Optimization Applied to the Parameter Setting Problem for Analog Circuits David Kirk, Kurt Fleischer, Lloyd Watts*, Alan Barr Computer Graphics 350-74, California Institute of Technology, Pasadena, CA 91125 (*Dept. of Electrical Engineering 116-81) Abstract We use constrained optimization to select operating parameters for two circuits: a simple 3-transistor square root circuit, and an analog VLSI artificial cochlea. This automated method uses computer-controlled measurement and test equipment to choose chip parameters which minimize the difference between the actual circuit's behavior and a specified goal behavior. Choosing the proper circuit parameters is important to compensate for manufacturing deviations or to adjust circuit performance within a certain range. As biologically motivated analog VLSI circuits become increasingly complex, implying more parameters, setting these parameters by hand will become more cumbersome. Thus an automated parameter setting method can be of great value [Fleischer 90]. Automated parameter setting is an integral part of a goal-based engineering design methodology in which circuits are constructed with parameters enabling a wide range of behaviors, and are then "tuned" to the desired behaviors automatically. 1 Introduction Constrained optimization methods are useful for setting the parameters of analog circuits. We present two experiments in which an automated method successfully finds parameter settings which cause our circuit's behavior to closely approximate the desired behavior. These parameter-setting experiments are described in Section 3. The difficult subproblems encountered were (1) building the electronic setup to acquire the data and control the circuit, and (2) specifying the computation of deviation from desired behavior in a mathematical form suitable for the optimization tools.
We describe the necessary components of the electronic setup in Section 2, and we discuss the selection of the optimization technique toward the end of Section 3. Automated parameter setting can be an important component of a system to build accurate analog circuits. The power of this method is enhanced by including appropriate parameters in the initial design of a circuit: we can build circuits with a wide range of behaviors and then "tune" them to the desired behavior. In Section 6, we describe a comprehensive design methodology which embodies this strategy. 2 Implementation We have assembled a system which allows us to test these ideas. The system can be conceptually decomposed into four distinct parts: circuit: an analog VLSI chip intended to compute a particular function. target function: a computational model quantitatively describing the desired behavior of the circuit. This model may have the same parameters as the circuit, or may be expressed in terms of biological data that the circuit is to mimic. error metric: compares the target to the actual circuit function, and computes a difference measure. constrained optimization tool: a numerical analysis tool, chosen based on the characteristics of the particular problem posed by this circuit. [Figure: block diagram in which the constrained optimization tool supplies parameters to the circuit, and the circuit and target function outputs are compared to form the difference measure.] The constrained optimization tool uses the error metric to compute the difference between the performance of the circuit and the target function. It then adjusts the parameters to minimize the error metric, causing the actual circuit behavior to approach the target function as closely as possible.
2.1 A Generic Physical Setup for Optimization A typical physical setup for choosing chip parameters under computer control has the following elements: an analog VLSI circuit, a digital computer to control the optimization process, computer-programmable voltage/current sources to drive the chip, and computer-programmable measurement devices, such as electrometers and oscilloscopes, to measure the chip's response. The combination of all of these elements provides a self-contained environment for testing chips. The setting of parameters can then be performed at whatever level of automation is desirable. In this way, all inputs to the chip and all measurements of the outputs can be controlled by the computer. 3 The Experiments We perform two experiments to set parameters of analog VLSI circuits using constrained optimization. The first experiment is a simple one-parameter system, a 3-transistor "square root" circuit. The second experiment uses a more complex time-varying multi-parameter system, an analog VLSI electronic cochlea. The artificial cochlea is composed of cascaded 2nd-order section filters. 3.1 Square Root Experiment In the first experiment we examine a "square-root" circuit [Mead 89], which actually computes a·xᵅ + b, where α is typically near 0.4. We introduce a parameter (V) into this circuit which varies α indirectly. By adjusting the voltage V in the square root circuit, as shown in Figure 1(a), we can alter the shape of the response curve. Figure 1: (a) Square root circuit. (b) Resulting fit.
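The error metric for this experiment can be sketched as follows (our own code; the paper does not give its exact form): measure I_out over a range of I_in, estimate the response exponent as the least-squares slope in log-log space, and penalize its deviation from the ideal square-root slope of 0.5.

```python
import math

def loglog_slope(i_in, i_out):
    # Least-squares slope of log(I_out) versus log(I_in); for an ideal
    # square-root response the slope is 0.5.
    xs = [math.log(v) for v in i_in]
    ys = [math.log(v) for v in i_out]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def error_metric(i_in, i_out, target=0.5):
    # Scalar measure the optimizer drives toward zero by adjusting V.
    return (loglog_slope(i_in, i_out) - target) ** 2
```

A one-parameter optimizer would repeatedly set V, re-measure the curve, and evaluate this metric.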
We have little control over the values of a and b in this circuit, so we choose an error metric which optimizes a, targeting a curve which has a slope of 0.5 in log-log Iin vs. Iout space. Since b << a√x, we can safely ignore b for the purposes of this parameter-setting experiment. The entire optimization process takes only a few minutes for this simple one-parameter system. Figure 1(b) shows the final results of the square root computation, with the circuit output normalized by a and b.

3.2 Analog VLSI Cochlea

As an example of a more complex system on which to test the constrained optimization technique, we chose a silicon cochlea, as described by [Lyon 88]. The silicon cochlea is a cascade of lowpass second-order filter sections arranged such that the natural frequency of the stages decreases exponentially with distance into the 792 Kirk, Fleischer, Watts, and Barr cascade, while the quality factor Q of the filters is the same for each section (tap). The value of Q determines the peak gain at each tap.

[Figure 2: Cochlea circuit.]

To specify the performance of such a cochlea, we need to specify the natural frequencies of the first and last taps, and the peak gain at each tap. These performance parameters are controlled by bias voltages VTL, VTR, and VQ, respectively. The parameter-setting problem for this circuit is to find the bias voltages that give the desired performance. This optimization task is more lengthy than the square root optimization. Each measurement of the frequency response takes a few minutes, since it is composed of many individual instrument readings.

3.2.1 Cochlea Results

The results of our attempts to set parameters for the analog VLSI cochlea are quite encouraging.

[Figure 3: Error metric trajectories for gradient descent on the cochlea (error vs. optimization step for the first and last taps).]

Figure 3 shows the trajectories of the error metrics for the first and last tap of the cochlea. Most of the progress is made in the early steps, after which the optimization is proceeding along the valley of the error surface, shown in Figure 5.

[Figure 4: Target frequency response and gradient-descent-optimized data for the cochlea (gain vs. frequency for the first-tap and last-tap goals and measured data).]

Figure 4 shows both the target frequency response data and the frequency responses which result from our chosen parameter settings. The curves are quite similar, and the differences are at the scale of measurement noise and instrument resolution in our system.

3.2.2 Cochlea Optimization Strategies

We explored several optimization strategies for finding the best parameters for the electronic cochlea. Of these, two are of particular interest: special knowledge: use a priori knowledge of the effect of each knob to guide the optimization. gradient descent: assume that we know nothing except the input/output relation of the chip. Then we can estimate the gradient for gradient descent by varying the inputs. Robust numerical techniques such as conjugate gradient can also be helpful when the energy landscape is steep. We found the gradient descent technique to be reliable, although it did not converge nearly as quickly as the "special knowledge" optimization. This corresponds with our intuition that any special knowledge we have about the circuit's operation will aid us in setting the parameters.
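For the cochlea taps, one plausible form for such an error metric is a squared difference between the measured and target frequency responses; the sketch below compares gains on a dB scale. All frequencies and gain values are invented for illustration, and the paper does not specify the exact metric used.

```python
import math

def db(gain):
    """Convert a linear gain to decibels."""
    return 20.0 * math.log10(gain)

def response_error(measured, target):
    """Squared dB difference between measured and target gains,
    summed over the probe frequencies common to both."""
    freqs = sorted(set(measured) & set(target))
    return sum((db(measured[f]) - db(target[f])) ** 2 for f in freqs)

# Illustrative tap gains at a few probe frequencies (Hz)
target_resp = {100: 1.0, 300: 1.4, 1000: 2.0, 3000: 0.3}
measured_resp = {100: 1.0, 300: 1.3, 1000: 1.8, 3000: 0.35}
err = response_error(measured_resp, target_resp)
```

An optimizer would call response_error after each new frequency-response measurement and adjust the bias voltages to reduce it.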
4 Choosing An Appropriate Optimization Method

One element of our system which has worked without much difficulty is the optimization. However, more complex circuits may require more sophisticated optimization methods. A wide variety of constrained optimization algorithms exist which are effective on particular classes of problems (gradient descent, quasi-Newton, simulated annealing, etc.) [Platt 89, Gill 81, Press 86, Fleischer 90], and we can choose a method appropriate to the problem at hand.

[Figure 5: The error surface for the error metric for the frequency response of the first tap of the cochlea. Note the narrow valley in the error surface. Our target (the minimum) lies near the far left, at the deepest part of the valley.]

Techniques such as simulated annealing can find optimal parameter combinations for multi-parameter systems with complex behavior, which gives us confidence that our methods will work for more complex circuits. The choice of error metric may also need to be reconsidered for more complex circuits. For systems with time-varying signals, we can use an error metric which captures the time course of the signal. We can deal with hysteresis by beginning at a known state and following the same path for each optimization step. Noisy and non-smooth functions can be handled by averaging data and using robust numeric techniques which are less sensitive to noise.

5 Conclusions

The constrained optimization technique works well when a well-defined goal for chip operation can be specified. We can compare automated parameter setting with adjustment by hand: humans often fail in the same situations where optimization fails (e.g., multiple local minima), and for larger-dimensional spaces, hand adjustment is very difficult, while an optimization technique may succeed.
We expect to integrate the technique into our chip development process, and future developments will move the optimization and learning process gradually into the chip. It is interesting to note that our gradient descent method "learns" the parameters of the chip in a manner similar to backpropagation. Seen from this perspective, this work is a step on the path toward robust on-chip learning. In order to use this technique, there are two moderately difficult problems to address. First, one must assemble and interface the equipment to set parameters and record results from the circuit under computer control (e.g., voltage and current sources, electrometer, digital oscilloscope, etc.). This is a one-time cost, since a similar setup can be used for many different circuits. A more difficult issue is how to specify the target function of a circuit, and how to compute the error metric. For example, in the simple square-root circuit, one might be more concerned about behavior in a particular region, or perhaps along the entire range of operation. Care must be taken to ensure that the combination of the target model and the error metric accurately describes the desired behavior of the circuit. The existence of an automated parameter setting mechanism opens up a new avenue for producing accurate analog circuits. The goal of accurately computing a function differs from the approach of providing a cheap (simple) circuit which loosely approximates the function [Gilbert 68] [Mead 89]. By providing appropriate parameters in the design of a circuit, we can ensure that the desired function is in the domain of possible circuit behaviors (given expected manufacturing tolerances). Thus we define the domain of the circuit in anticipation of the parameter setting apparatus.
The optimization methods will then be able to find the best solution in the domain, which could potentially be accurate to a high degree of precision.

6 The Goal-based Engineering Design Technique

The results of our optimization experiments suggest the adoption of a comprehensive Goal-based Engineering Design Technique that directly affects how we design and test chips. Our results change the types of circuits we will try to build. The optimization techniques allow us to aggressively design and build ambitious circuits and more frequently have them work as expected, meeting our design goals. As a corollary, we can confidently attack larger and more interesting problems. The technique is composed of the following four steps: 1) goal-setting: identify the target function, or behavioral goals, of the design. 2) circuit design: design the circuit with "knobs" (adjustable parameters) in it, attempting to make sure the desired (target) circuit behavior is in the gamut of the actual circuit, given expected manufacturing variation and device characteristics. 3) optimization plan: devise an optimization strategy to explore parameter settings. This includes capabilities such as a digital computer to control the optimization, and computer-driven instruments which can apply voltages/currents to the chip and measure voltage/current outputs. 4) optimization: use the optimization procedure to select parameters to minimize deviation of actual circuit performance from the target function. The optimization may make use of special knowledge about the circuit, such as "I know that this knob has effect x," or interaction, such as "I know that this is a good region, so explore here."

[Figure: design flow diagram. Design goals feed both the circuit design and the optimization plan.]

The goal-setting process produces design goals that influence both the circuit design and the form of the optimization plan.
It is important to produce a match between the design of the circuit and the plan for optimizing its parameters.

Acknowledgements

Many thanks to Carver Mead for ideas, encouragement, and support for this project. Thanks also to John Lemoncheck for help getting our physical setup together. Thanks to Hewlett-Packard for equipment donation. This work was supported in part by an AT&T Bell Laboratories Ph.D. Fellowship. Additional support was provided by NSF (ASC-89-20219). All opinions, findings, conclusions, or recommendations expressed in this document are those of the author and do not necessarily reflect the views of the sponsoring agencies.

References

[Fleischer 90] Fleischer, K., J. Platt, and A. Barr, "An Approach to Solving the Parameter Setting Problem," IEEE/ACM 23rd Intl. Conf. on System Sciences, January 1990.
[Gilbert 68] Gilbert, B., "A Precise Four-Quadrant Multiplier with Sub-nanosecond Response," IEEE Journal of Solid-State Circuits, SC-3:365, 1968.
[Gill 81] Gill, P. E., W. Murray, and M. H. Wright, "Practical Optimization," Academic Press, 1981.
[Lyon 88] Lyon, R. A., and C. A. Mead, "An Analog Electronic Cochlea," IEEE Trans. Acoust., Speech, and Signal Proc., Volume 36, Number 7, July 1988, pp. 1119-1134.
[Mead 89] Mead, C. A., "Analog VLSI and Neural Systems," Addison-Wesley, 1989.
[Platt 89] Platt, J. C., "Constrained Optimization for Neural Networks and Computer Graphics," Ph.D. Thesis, California Institute of Technology, Caltech-CS-TR-89-07, June 1989.
[Press 86] Press, W., Flannery, B., Teukolsky, S., Vetterling, W., "Numerical Recipes: The Art of Scientific Computing," Cambridge University Press, Cambridge, 1986.
1991
Fault Diagnosis of Antenna Pointing Systems using Hybrid Neural Network and Signal Processing Models Padhraic Smyth, Jeff Mellstrom Jet Propulsion Laboratory 238-420 California Institute of Technology Pasadena, CA 91109

Abstract

We describe in this paper a novel application of neural networks to system health monitoring of a large antenna for deep space communications. The paper outlines our approach to building a monitoring system using hybrid signal processing and neural network techniques, including autoregressive modelling, pattern recognition, and Hidden Markov models. We discuss several problems which are somewhat generic in applications of this kind; in particular, we address the problem of detecting classes which were not present in the training data. Experimental results indicate that the proposed system is sufficiently reliable for practical implementation.

1 Background: The Deep Space Network

The Deep Space Network (DSN) (designed and operated by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration (NASA)) is unique in terms of providing end-to-end telecommunication capabilities between earth and various interplanetary spacecraft throughout the solar system. The ground component of the DSN consists of three ground station complexes located in California, Spain and Australia, giving full 24-hour coverage for deep space communications. Since spacecraft are always severely limited in terms of available transmitter power (for example, each of the Voyager spacecraft only use 20 watts to transmit signals back to earth), all subsystems of the end-to-end communications link (radio telemetry, coding, receivers, amplifiers) tend to be pushed to the 667 668 Smyth and Mellstrom absolute limits of performance. The large steerable ground antennas (70m and 34m dishes) represent critical potential single points of failure in the network.
In particular, there is only a single 70m antenna at each complex because of the large cost and calibration effort involved in constructing and operating a steerable antenna of that size; the entire structure (including pedestal support) weighs over 8,000 tons. The antenna pointing systems consist of azimuth and elevation axes drives which respond to computer-generated trajectory commands to steer the antenna in real time. Pointing accuracy requirements for the antenna are such that there is little tolerance for component degradation. Achieving the necessary degree of positional accuracy is rendered difficult by various non-linearities in the gear and motor elements and environmental disturbances such as gusts of wind affecting the antenna dish structure. Off-beam pointing can result in rapid fall-off in signal-to-noise ratios and consequent potential loss of irrecoverable scientific data from the spacecraft. The pointing systems are a complex mix of electro-mechanical and hydraulic components. A faulty component will manifest itself indirectly via a change in the characteristics of observed sensor readings in the pointing control loop. Because of the non-linearity and feedback present, direct causal relationships between fault conditions and observed symptoms can be difficult to establish; this makes manual fault diagnosis a slow and expensive process. In addition, if a pointing problem occurs while a spacecraft is being tracked, the antenna is often shut down to prevent any potential damage to the structure, and the track is transferred to another antenna if possible. Hence, at present, diagnosis often occurs after the fact, when the original fault conditions may be difficult to replicate. An obvious strategy is to design an on-line automated monitoring system.
Conventional control-theoretic models for fault detection are impractical due to the difficulties in constructing accurate models for such a non-linear system; an alternative is to learn the symptom-fault mapping directly from training data, the approach we follow here.

2 Fault Classification over Time

2.1 Data Collection and Feature Extraction

The observable data consists of various sensor readings (in the form of sampled time series) which can be monitored while the antenna is in tracking mode. The approach we take is to estimate the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(ω_i|x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. This hierarchical pattern of information flow, where the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making, is quite generic to systems which must passively sense and monitor their environment in real time. Experimental data was gathered from a new antenna at a research ground-station at the Goldstone DSN complex in California. We introduced hardware faults in a Fault Diagnosis of Antenna Pointing Systems 669 controlled manner by switching faulty components in and out of the control loop. Obtaining data in this manner is an expensive and time-consuming procedure since the antenna is not currently instrumented for sensor data acquisition and is located in a remote location of the Mojave Desert in Southern California.
Sensor variables monitored included wind speed, motor currents, tachometer voltages, estimated antenna position, and so forth, under three separate fault conditions (plus normal conditions). The time series data was segmented into windows of 4 seconds duration (200 samples) to allow reasonably accurate estimates of the various features. The features consisted of order statistics (such as the range) and moments (such as the variance) of particular sensor channels. In addition we also applied an autoregressive-exogenous (ARX) modelling technique to the motor current data, where the ARX coefficients are estimated on each individual 4-second window of data. The autoregressive representation is particularly useful for discriminative purposes (Eggers and Khuon, 1990).

2.2 State Estimation with a Hidden Markov Model

If one applies a simple feed-forward network model to estimate the class probabilities at each discrete time instant t, the fact that faults are typically correlated over time is ignored. Rather than modelling the temporal dependence of features, p(x(t)|x(t-1), ..., x(0)), a simpler approach is to model temporal dependence via the class variable using a Hidden Markov Model (HMM). The m classes comprise the Markov model states. Components of the Markov transition matrix A (of dimension m x m) are specified subjectively rather than estimated from the data, since there is no reliable database of fault-transition information available at the component level from which to estimate these numbers. The hidden component of the HMM model arises from the fact that one cannot observe the states directly, but only indirectly via a stochastic mapping from states to symptoms (the features). For the results reported in this paper, the state probability estimates at time t are calculated using all the information available up to that point in time. The probability state vector is denoted by p(s(t)).
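As a rough illustration of estimating autoregressive coefficients on a window by least squares: the actual system used an ARX model of unspecified order on motor-current data, so the order-2, noiseless example below is purely illustrative.

```python
def ar2_coeffs(x):
    """Least-squares fit of x[t] ~ a1*x[t-1] + a2*x[t-2] via the
    2x2 normal equations, accumulated over one window of data."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        r1, r2, y = x[t - 1], x[t - 2], x[t]
        s11 += r1 * r1
        s12 += r1 * r2
        s22 += r2 * r2
        b1 += r1 * y
        b2 += r2 * y
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# Synthetic window from a known AR(2) process: x[t] = 0.5 x[t-1] - 0.3 x[t-2]
window = [1.0, 0.5]
for _ in range(200):
    window.append(0.5 * window[-1] - 0.3 * window[-2])
a1, a2 = ar2_coeffs(window)
```

On noiseless data the fit recovers the generating coefficients exactly; on real sensor windows the estimated coefficients serve as features for the classifier.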
The probability estimate of state i at time t can be calculated recursively via the standard HMM equations:

u(t) = A p̂(s(t-1)) and p̂(s_i(t)) = u_i(t) y_i(t) / Σ_{j=1..m} u_j(t) y_j(t)

where the estimates are initialised by a prior probability vector p(s(0)), the u_i(t) are the components of u(t), 1 ≤ i ≤ m, and the y_i(t) are the likelihoods p(x|ω_i) produced by the particular classifier being used (which can be estimated to within a normalising constant by p(ω_i|x)/p(ω_i)).

2.3 Classification Results

We compare a feedforward multi-layer perceptron model (single hidden layer with 12 sigmoidal units, trained using the squared error objective function and a conjugate-gradient version of backpropagation) and a simple maximum-likelihood Gaussian classifier (with an assumed diagonal covariance matrix, variances estimated from the data), both with and without the HMM component.

[Figure 1: Stabilising effect of the Markov component. Estimated probability of the true class (Normal) vs. time, for the neural and neural+Markov models.]

Table 1 summarizes the overall classification accuracies obtained for each of the models; these results are for models trained on data collected in early 1991 (450 windows) which were then field-tested in real-time at the antenna site in November 1991 (596 windows). There were 12 features used in this particular experiment, including both ARX and time-domain features. Clearly, the neural-Markov model is the best model in the sense that no samples at all were misclassified; it is significantly better than the simple Gaussian classifier. Without the Markov component, the neural model still classified quite well (0.84% error rate).
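The recursion can be implemented directly; in this sketch the two-state transition matrix, prior, and likelihood values are invented for illustration.

```python
def hmm_update(p_prev, A, y):
    """One step of the recursion: u = A p(s(t-1)), then
    p(s_i(t)) = u_i(t) y_i(t) / sum_j u_j(t) y_j(t)."""
    m = len(p_prev)
    u = [sum(A[i][j] * p_prev[j] for j in range(m)) for i in range(m)]
    z = sum(ui * yi for ui, yi in zip(u, y))
    return [ui * yi / z for ui, yi in zip(u, y)]

# Two states (normal, fault); "sticky" transitions favour staying put.
A = [[0.95, 0.05],
     [0.05, 0.95]]
p = [0.5, 0.5]                        # prior p(s(0))
likelihoods = [[0.9, 0.2]] * 5        # five windows that all look "normal"
for y in likelihoods:
    p = hmm_update(p, A, y)
```

Each update multiplies the propagated state probabilities by the classifier likelihoods and renormalises, which is the source of the stabilising effect over time: single noisy windows cannot flip the state estimate on their own.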
However, all of its errors were false alarms (the classifier decision was a fault label, when in reality conditions were normal), which are highly undesirable from an operational viewpoint; in this context, the Markov model has significant practical benefit. Figure 1 demonstrates the stabilising effect of the Markov model over time. The vertical axis corresponds to the probability estimate of the model for the true class. Note the large fluctuations and general uncertainty in the neural output (due to the inherent noise in the feature data) compared to the stability when temporal context is modelled.

[Table 1: Classification results for different models.]

3 Detecting novel classes

While the neural model described above exhibits excellent performance in terms of discrimination, there is another aspect to classifier performance which we must consider for applications of this nature: how will the classifier respond if presented with data from a class which was not included in the training set? Ideally, one would like the model to detect this situation. For fault diagnosis the chance that one will encounter such novel classes under operational conditions is quite high, since there is little hope of having an exhaustive library of faults to train on.

[Figure 2: Data from a novel class C, lying far from the training data of classes A and B in feature space.]

In general, whether one uses a neural network, decision tree, or other classification method, there are few guarantees about the extrapolation behaviour of the trained classification model. Consider Figure 2, where point C is far away from the "A"s and "B"s on which the model is trained.
The response of the trained model to point C may be somewhat arbitrary, since it may lie on either side of a decision boundary depending on a variety of factors such as initial conditions for the training algorithm, objective function used, particular training data, and so forth. One might hope that for a feedforward multi-layer perceptron, novel input vectors would lead to low response for all outputs. However, if units with non-local response functions are used in the model (such as the commonly used sigmoid function), the tendency of training algorithms such as backpropagation is to generate mappings which have a large response for at least one of the classes as the attributes take on values which extend well beyond the range of the training data values. Leonard and Kramer (1990) discuss this particular problem of poor extrapolation in the context of fault diagnosis of a chemical plant. The underlying problem lies in the basic nature of discriminative models, which focus on estimating decision boundaries based on the differences between classes. In contrast, if one wants to detect data from novel classes, one must have a generative model for each known class, namely one which specifies how the data is generated for these classes. Hence, in a probabilistic framework, one seeks estimates of the probability density function of the data given a particular class, f(x|ω_i), from which one can in turn use Bayes' rule for prediction:

p(ω_i|x) = f(x|ω_i) p(ω_i) / Σ_{j=1..m} f(x|ω_j) p(ω_j)   (1)

4 Kernel Density Estimation

Unless one assumes a particular parametric form for f(x|ω_i), it must be somehow estimated from the data. Let us ignore the multi-class nature of the problem temporarily and simply look at a single-class case. We focus here on the use of kernel-based methods (Silverman, 1986). Consider the 1-dimensional case of estimating the density f(x) given samples {x_i}, 1 ≤ i ≤ N.
The idea is simple enough: we obtain an estimate f̂(x), where x is the point at which we wish to know the density, by summing the contributions of the kernel K((x - x_i)/h) (where h is the bandwidth of the estimator, and K(.) is the kernel function) over all the samples, and normalizing such that the estimate is itself a density, i.e.,

f̂(x) = (1/(N h)) Σ_{i=1..N} K((x - x_i)/h)   (2)

The estimate f̂(x) directly inherits the properties of K(.), hence it is common to choose the kernel shape itself to be some well-known smooth function, such as a Gaussian. For the multi-dimensional case, the product kernel is commonly used:

f̂(x) = (1/(N h_1 ··· h_d)) Σ_{i=1..N} Π_{k=1..d} K((x^k - x_i^k)/h_k)   (3)

where x^k denotes the component in dimension k of vector x, and the h_k represent different bandwidths in each dimension. Various studies have shown that the quality of the estimate is typically much more sensitive to the choice of the bandwidth h than it is to the kernel shape K(.) (Izenman, 1991). Cross-validation techniques are usually the best method to estimate the bandwidths from the data, although this can be computationally intensive and the resulting estimates can have a high variance across particular data sets. A significant disadvantage of kernel models is the fact that all training data points must be stored, and a distance measure between a new point and each of the stored points must be calculated for each class prediction. Another less obvious disadvantage is the lack of empirical results and experience with using these models for real-world applications; in particular there is a dearth of results for high-dimensional problems. In this context we now outline a kernel approximation model which is considerably simpler both to train and implement than the full kernel model.
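Equation (2) with a Gaussian kernel can be sketched in a few lines; the bandwidth here is fixed by hand rather than cross-validated, and the sample values are illustrative.

```python
import math

def kde(x, samples, h):
    """Gaussian-kernel estimate: f(x) = (1/(N h)) * sum_i K((x - x_i)/h)."""
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(gauss((x - xi) / h) for xi in samples) / (len(samples) * h)

samples = [-0.1, 0.0, 0.1, 0.9, 1.0, 1.1]   # two clusters, near 0 and 1
```

The estimate is high near the stored samples and falls off away from them, which is exactly the local behaviour exploited for novelty detection later in the paper.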
5 Kernel Approximation using Mixture Densities

5.1 Generating a kernel approximation

An obvious simplification to the full kernel model is to replace clusters of data points by representative centroids, to be referred to as the centroid kernel model. Intuitively, the sum of the responses from a number of kernels is approximated by a single kernel of appropriate width. Omohundro (1992) has proposed algorithms for bottom-up merging of data points for problems of this nature. Here, however, we describe a top-down approach by observing that the kernel estimate is itself a special case of a mixture density. The underlying density is assumed to be a linear combination of L mixture components, i.e.,

f(x) = Σ_{i=1..L} α_i f_i(x)   (4)

where the α_i are the mixing proportions. The full kernel estimate is itself a special case of a mixture model with α_i = 1/N and f_i(x) = K((x - x_i)/h). Hence, our centroid kernel model can also be treated as a mixture model, but now the parameters of the mixture model (the mixing proportions or weights, and the widths and locations of the centroid kernels) must be estimated from the data. There is a well-known and fast statistical procedure known as the EM (Expectation-Maximisation) algorithm for iteratively calculating these parameters, given some initial estimates (e.g., Redner and Walker, 1984). Hence, the procedure for generating a centroid kernel model is straightforward: divide the training data into homogeneous subsets according to class labels and then fit a mixture model with L components to each class using the EM procedure (initialisation can be based on randomly selected prototypes). Prediction of class labels then follows directly from Bayes' rule (Equation (1)). Note that there is a strong similarity between mixture/kernel models and Radial Basis Function (RBF) networks. However, unlike the RBF models, we do not train the output layer of our network in order to improve discriminative performance, as this would potentially destroy the desired probability estimation properties of the model.

5.2 Experimental results on detecting novel classes

[Figure 3: Likelihoods of the kernel versus the sigmoidal model on novel data. Log-likelihood of the unknown class under the normal hypothesis vs. time for each model, with the lower and upper 1-sigma boundaries computed on the training data.]

In Figure 3 we plot the log-likelihoods, log f(x|ω_i), as a function of time, for both a centroid kernel model (Gaussian kernel, L = 5) and the single-hidden-layer sigmoidal network described in Section 2.2. Both of these models have been trained on only 3 of the original 4 classes (the discriminative performance of the models was roughly equivalent), excluding one of the known classes. The inputs {x_i} to the models are data from this fourth class. The dashed lines indicate the μ ± σ boundaries on the log-likelihood for the normal class as calculated on the training data; this tells us the typical response of each model for class "normal" (note that the absolute values are irrelevant since the likelihoods have not been normalised via Bayes' rule).
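A minimal sketch of this EM procedure, for a one-dimensional, two-centroid model with fixed equal widths: the data, initial centroid locations, and width are illustrative, and a full implementation would also re-estimate the widths.

```python
import math

def em_mixture(data, mu, sigma=0.2, iters=50):
    """EM for a 2-component Gaussian mixture with fixed, equal widths:
    re-estimates mixing weights and centroid locations only."""
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] * math.exp(-0.5 * ((x - mu[j]) / sigma) ** 2)
                 for j in range(2)]
            z = p[0] + p[1]
            resp.append([p[0] / z, p[1] / z])
        # M-step: update weights and centroid locations
        n = [sum(r[j] for r in resp) for j in range(2)]
        w = [n[j] / len(data) for j in range(2)]
        mu = [sum(r[j] * x for r, x in zip(resp, data)) / n[j]
              for j in range(2)]
    return w, mu

data = [-0.05, 0.0, 0.05, 0.95, 1.0, 1.05]   # two tight clusters
w, mu = em_mixture(data, mu=[0.2, 0.8])
```

Each centroid migrates to the mean of the cluster it takes responsibility for, replacing many stored kernels with a few representative ones.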
For both models, the maximum response for the novel data came from the normal class. For the sigmoidal model, the novel response was actually greater than that on the training data; the network is very confident in its erroneous decision that the novel data belongs to class normal. Hence, in practice, the presence of a novel class would be completely masked. On the other hand, for the kernel model, the measured response on the novel data is significantly lower than that obtained on the training data. The classifier can directly calculate that it is highly unlikely that this new data belongs to any of the 3 classes on which the model was trained. In practice, for a centroid kernel model, the training data will almost certainly fit the model better than a new set of test data, even data from the same class. Hence, it is a matter of calibration to determine appropriate levels at which new data is deemed sufficiently unlikely to come from any of the known classes. Nonetheless, the main point is that a local kernel representation facilitates such detection, in contrast to models with global response functions (such as sigmoids). In general, one does not expect a generative model which is not trained discriminatively to be fully competitive in terms of classification performance with discriminative models; on-going research involves developing hybrid discriminative-generative classifiers. In addition, on-line learning of novel classes once they have been detected is an interesting and important problem for applications of this nature. An initial version of the system we have described in this paper is currently undergoing test and evaluation for implementation at DSN antenna sites.

Acknowledgements

The research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration and was supported in part by DARPA under grant number AFOSR-90-0199.

References

M.
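That calibration step can be sketched as a simple threshold on the training-set log-likelihood; the single-Gaussian class model, the mean minus two standard deviations cutoff, and the data are all illustrative choices, not the paper's.

```python
import math

def log_likelihoods(data, mu, sigma):
    """Per-sample log-likelihood under a single-Gaussian class model."""
    c = -math.log(sigma * math.sqrt(2.0 * math.pi))
    return [c - 0.5 * ((x - mu) / sigma) ** 2 for x in data]

def novelty_threshold(train_ll, k=2.0):
    """Threshold at mean - k*std of the training log-likelihoods."""
    m = sum(train_ll) / len(train_ll)
    var = sum((v - m) ** 2 for v in train_ll) / len(train_ll)
    return m - k * math.sqrt(var)

train = [0.1 * i for i in range(-5, 6)]      # class "normal" data around 0
thr = novelty_threshold(log_likelihoods(train, mu=0.0, sigma=0.3))

def is_novel(x, mu=0.0, sigma=0.3):
    """Flag x as novel if its log-likelihood falls below the threshold."""
    return log_likelihoods([x], mu, sigma)[0] < thr
```

Inputs near the training data stay above the threshold; inputs far from every known class fall below it and can be flagged rather than forced into a known class.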
Eggers and T. Khuon, 'Neural network data fusion concepts and application,' in Proceedings of 1990 IJCNN, San Diego, vol. II, pp. 7-16, 1990.
M. A. Kramer and J. A. Leonard, 'Diagnosis using backpropagation neural networks: analysis and criticism,' Computers in Chemical Engineering, vol. 14, no. 12, pp. 1323-1338, 1990.
B. Silverman, Density Estimation for Statistics and Data Analysis, New York: Chapman and Hall, 1986.
A. J. Izenman, 'Recent developments in nonparametric density estimation,' J. Amer. Stat. Assoc., vol. 86, pp. 205-224, March 1991.
S. Omohundro, 'Model-merging for improved generalization,' in this volume.
R. A. Redner and H. F. Walker, 'Mixture densities, maximum likelihood, and the EM algorithm,' SIAM Review, vol. 26, no. 2, pp. 195-239, April 1984.
1991
102
433
NETWORK MODEL OF STATE-DEPENDENT SEQUENCING

Jeffrey P. Sutton,* Adam N. Mamelak† and J. Allan Hobson Laboratory of Neurophysiology and Department of Psychiatry Harvard Medical School 74 Fenwood Road, Boston, MA 02115

Abstract

A network model with temporal sequencing and state-dependent modulatory features is described. The model is motivated by neurocognitive data characterizing different states of waking and sleeping. Computer studies demonstrate how unique states of sequencing can exist within the same network under different aminergic and cholinergic modulatory influences. Relationships between state-dependent modulation, memory, sequencing and learning are discussed.

1 INTRODUCTION

Models of biological information processing often assume only one mode or state of operation. In general, this state depends upon a high degree of fidelity or modulation among the neural elements. In contrast, real neural networks often have a repertoire of processing states that is greatly affected by the relative balances of various neuromodulators (Selverston, 1988; Harris-Warrick and Marder, 1991). One area where changes in neuromodulation and network behavior are tightly and dramatically coupled is in the sleep-wake cycle (Hobson and Steriade, 1986; Mamelak and Hobson, 1989). This cycle consists of three main states: wake, non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep. Each state is characterized by a unique balance of monoaminergic and cholinergic neuromodulation (Hobson and Steriade, 1986; figure 1). In humans, each state also has characteristic cognitive sequencing properties (Foulkes, 1985; Hobson, 1988; figure 1).

* Also in the Center for Biological Information Processing, Whitaker College, E25-201, Massachusetts Institute of Technology, Cambridge, MA 02139
† Currently in the Department of Neurosurgery, University of California, San Francisco, CA 94143
An integration and better understanding of the complex relationships between neuromodulation and information sequencing are desirable from both a computational and a neurophysiological perspective. In this paper, we present an initial approach to this difficult neurocognitive problem using a network model.

[Figure 1 table: for each state (WAKE; NREM SLEEP; REM SLEEP), the relative levels of tonic aminergic and phasic cholinergic (PGO) modulation and the associated type of sequencing: progressive, perseverative, bizarre.]

Figure 1: Overview of the three state model which attempts to integrate aspects of neuromodulation and cognitive sequencing. The aminergic and cholinergic systems are important neuromodulators that filter and amplify, as opposed to initiating or carrying, distributed information embedded as memories (e.g. A1, A2, A3) in neural networks. In the wake state, a relative aminergic dominance exists and the associated network sequencing is logical and progressive. For example, the sequence A1 → A2 transitions to B1 → B2 when an appropriate input (e.g. B1) is present at a certain time. The NREM state is characterized by an intermediate aminergic-to-cholinergic ratio correlated with ruminative and perseverative sequences. Unexpected or "bizarre" sequences are found in the REM state, wherein phasic cholinergic inputs dominate and are prominent in the ponto-geniculo-occipital (PGO) brain areas. Bizarreness is manifest by incongruous or mixed memories, such as A2/B1, and sequence discontinuities, such as A2 → A2/B1 → B2, which may be associated with PGO bursting in the absence of other external input.

2 AMINERGIC AND CHOLINERGIC NEUROMODULATION

As outlined in figure 1, there are unique correlations among the aminergic and cholinergic systems and the forms of information sequencing that exist in the states of waking and NREM and REM sleep.
The following brief discussion, which undoubtedly oversimplifies the complicated and widespread actions of these systems, highlights some basic and relevant principles. Interested readers are referred to the review by Hobson and Steriade (1986) and the article by Hobson et al. in this volume for a more detailed presentation. The biogenic amines, including norepinephrine, serotonin and dopamine, have been implicated as tonic regulators of the signal-to-noise ratio in neural networks (e.g. Mamelak and Hobson, 1989). Increasing (decreasing) the amount of aminergic modulation improves (worsens) network fidelity (figure 2a). A standard means of modeling this property is by a stochastic or gain factor, analogous to the well-known Boltzmann factor β = 1/kT, which is present in the network updating rule. Complex neuromodulatory effects of acetylcholine depend upon the location and types of receptors and channels present in different neurons. One main effect is facilitatory excitation (figure 2b). Mamelak and Hobson (1989) have suggested how the phasic release of acetylcholine, involving the bursting of PGO cells in the brainstem, coupled with tonic aminergic demodulation, could induce bifurcations in information sequencing at the network level. The model described in the next section sets out to test this notion.

[Figure 2: (a) sigmoidal firing-probability curves vs. membrane potential relative to threshold, one per value of β; (b) schematic of EPSP-induced facilitation.]

Figure 2: (a) Plot of neural firing probability as a function of the membrane potential, h, relative to threshold, θ, for values of aminergic modulation β of 0.5, 1.0, 1.5 and 3.0. (b) Schematic diagram of cholinergic facilitation, where EPSPs of magnitude δ only induce a change in firing activity if h is initially in the range (θ − δ, θ).
Modified from Mamelak and Hobson (1989).

3 ASSOCIATIVE SEQUENCING NETWORK

There are several ways to approach the problem of modeling modulatory effects on temporal sequencing. We have chosen to commence with an associative network that is an extension of the work on models resembling elementary motor pattern generators (Kleinfeld, 1986; Sompolinsky and Kanter, 1986; Gutfreund and Mezard, 1988). We consider it to be significant that recent data on brainstem control systems show an overlap between sleep-wake regulators and locomotor pattern generators (Garcia-Rill et al., 1990).

The network consists of N neural elements with binary values $S_i = \pm 1$, $i = 1, \ldots, N$, corresponding to whether they are firing or not firing. The elements are linked together by two kinds of a priori learned synaptic connections. One kind,

$$J_{ij}^{(1)} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu}, \quad i \neq j, \qquad (1)$$

encodes a set of p uncorrelated patterns $\{\xi_i^{\mu}\}_{i=1}^{N}$, $\mu = 1, \ldots, p$, where each $\xi_i^{\mu}$ takes the value ±1 with equal probabilities. These patterns correspond to memories that are stable until a transition to another memory is made. Transitions in a sequence of memories $\mu = 1 \to 2 \to \cdots \to q < p$ are induced by a second type of connection

$$J_{ij}^{(2)} = \frac{\lambda}{N} \sum_{\mu=1}^{q-1} \xi_i^{\mu+1} \xi_j^{\mu}. \qquad (2)$$

Here, λ is a relative weight of the connection types. The average time spent in a memory pattern before transitioning to the next one in a sequence is T. At time t, the membrane potential is given by

$$h_i(t) = \sum_{j=1}^{N} \left[ J_{ij}^{(1)} S_j(t) + J_{ij}^{(2)} S_j(t-T) \right] + \delta_i(t) + I_i(t). \qquad (3)$$

The two terms contained in the brackets reflect intrinsic network interactions, while phasic PGO effects are represented by the $\delta_i(t)$. External inputs, other than PGO inputs, to $h_i(t)$ are denoted by $I_i(t)$. Dynamic evolution of the network follows the updating rule

$$S_i(t+1) = \pm 1 \quad \text{with probability} \quad \left\{ 1 + e^{\mp 2\beta [h_i(t) - \theta_i(t)]} \right\}^{-1}. \qquad (4)$$

In this equation, the amount of aminergic-like modulation is parameterized by β.
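The gain-modulated firing rule and the facilitation window of Figure 2 can be illustrated directly. This is a minimal sketch, assuming the $2\beta$ gain convention of rule (4); the function names are hypothetical.

```python
import math

def p_fire(h, theta, beta):
    """Firing probability vs. membrane potential h with gain beta (cf. Fig. 2a)."""
    return 1.0 / (1.0 + math.exp(-2.0 * beta * (h - theta)))

def facilitated(h, theta, delta):
    """Cholinergic facilitation (cf. Fig. 2b): an EPSP of magnitude delta changes
    the (deterministic) firing decision only if h lies in (theta - delta, theta)."""
    return (h + delta >= theta) and not (h >= theta)
```

Raising beta sharpens the sigmoid toward a step at threshold; lowering it toward zero flattens the curve to 0.5 everywhere, the loss-of-fidelity regime discussed in the text.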
While updating could be done serially, a parallel dynamic process is chosen here for convenience. In the absence of external and PGO-like inputs, and with β > 1.0, the dynamics have the effect of generating trajectories on an adiabatically varying hypersurface that molds in time to produce a path from one basin of attraction to another. For β < 1.0, the network begins to lose this property. Lowering β mostly affects neural elements close to threshold, since the decision to change firing activity centers around the threshold value. However, as β decreases, fluctuations in the membrane potentials increase and a larger fraction of the neural elements remain, on average, near threshold.

4 SIMULATION RESULTS

A network consisting of N = 50 neural elements was examined wherein p = 6 memory patterns (A1, A2, A3, B1, B2 and B3) were chosen at random (p/N = 0.12). These memories were arranged into two loops, A and B, according to equation (2) such that the cyclic sequences A1 → A2 → A3 → A1 → ... and B1 → B2 → B3 → B1 → ... were stored in loops A and B, respectively. For simplicity, δ_i(t) = δ(t) and θ_i(t) = 0, ∀i. The transition parameters were set to λ = 2.5 and T = 8 for all the simulations to ensure reliable pattern generation under fully modulated conditions (large β, δ = 0; Sompolinsky and Kanter, 1986). Variations in β, δ(t) and I_i(t) delineated the individual states that were examined.

In the model wake state, where there was a high degree of aminergic-like modulation (e.g. β = 2.0), the network generated loops of sequential memories. Once cued into one of the two loops, the network would remain in that loop until an external input caused a transition into the other loop (figure 3).
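The wake-state simulation described above can be sketched from equations (1)-(4). This is an illustrative toy, not the authors' code: it stores a single 6-pattern cycle rather than two 3-pattern loops, uses a fixed random seed, zero thresholds and no external or PGO inputs, and assumes the $2\beta$ gain convention.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, lam, T, beta = 50, 6, 2.5, 8, 2.0

# p random ±1 memory patterns xi[mu], arranged in one cyclic sequence
xi = rng.choice([-1.0, 1.0], size=(p, N))

# Eq. (1): symmetric pattern-storing connections, zero diagonal
J1 = (xi.T @ xi) / N
np.fill_diagonal(J1, 0.0)

# Eq. (2): asymmetric transition connections for the cycle mu -> mu+1
J2 = lam * sum(np.outer(xi[(mu + 1) % p], xi[mu]) for mu in range(p)) / N

def step(hist):
    """One parallel update; hist holds past states, newest last (delay T)."""
    h = J1 @ hist[-1] + J2 @ hist[-1 - T]          # Eq. (3), delta = I = 0
    prob_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # Eq. (4), theta = 0
    return np.where(rng.random(N) < prob_plus, 1.0, -1.0)

# cue pattern 0 and let the network sequence through the cycle
hist = [xi[0].copy() for _ in range(T + 1)]
for _ in range(40):
    hist.append(step(hist))

overlaps = xi @ hist[-1] / N   # normalized overlap with each stored memory
```

With large β the state should stay close to one stored pattern at almost every time step, hopping to the next pattern roughly every T steps, as in Figure 3.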
[Figure 3: six panels of overlap vs. time, one per memory A1, A2, A3, B1, B2, B3, for the simulated wake state.]

Figure 3: Plot of overlap as a function of time for each of the six memories A1, A2, A3, B1, B2, B3 in the simulated wake state. The overlap is a measure of the normalized Hamming distance between the instantaneous pattern of the network and a given memory. β = 2.0, δ = 0, λ = 2.5, T = 8. The network is cued in pattern A1 and then sequences through loop A. At t = 75, pattern B1 is inputted to the network and loop B ensues. The dotted lines highlight the transitions between different memory patterns.

[Figure 4: overlap vs. time for each memory in the simulated NREM sleep state.]

Figure 4: Graph of overlap vs. time for each of the six memories in the simulated NREM sleep state. β = 1.1, δ = 0, λ = 2.5, T = 8. Initially, the network is cued in pattern A1 and remains in loop A. Considerable fluctuations in the overlaps are present and external inputs are absent.

As β was decreased (e.g. β = 1.1), partially characterizing conditions of a model NREM state, sequencing within a loop was observed to persist (figure 4). However, decreased stability relative to the wake state was observed and small perturbations could cause disruptions within a loop and occasional bifurcations between loops. Nevertheless, in the absence of an effective mechanism to induce inter-loop transitions, the sequences were basically repetitive in this state. For small β (e.g. 0.8 < β < 1.0) and various PGO-like activities within the simulated REM state, a diverse and rich set of dynamic behaviors was observed, only some of which are reported here. The network was remarkably sensitive to the timing of the PGO-type bursts.
With β = 1.0, inputs of δ = 2.5 units in clusters of 20 time steps occurring with a frequency of approximately one cluster per 50 time steps could induce the following: (a) no or little effect on identifiable intra-loop sequencing; (b) bifurcations between loops; (c) a change from orderly intra-loop sequencing to apparent disorder;¹ (d) a change from apparent disorder to orderly progression within a single loop ("defibrillation" effect); (e) a change from a disorderly pattern to another disorderly pattern. An example of transition types (c) and (d), with the overall effect of inducing a bifurcation between the loops, is shown in figure 5.

¹ On detailed inspection, the apparent disorder actually revealed several sequences in loops A and/or B running out of phase with relative delays generally less than T.

In general, lower intensity (e.g. 2.0 to 2.5 units), longer duration (e.g. >20 time steps) PGO-like bursting was more effective in inducing bifurcations than higher intensity (e.g. 4.0 units), shorter duration (e.g. 2 time steps) bursts. PGO-induced bifurcations were possible in all states and were associated with significant populations of neural elements that were below, but within δ units of, threshold.

[Figure 5: overlap vs. time for each memory in the simulated REM sleep state, with PGO burst clusters marked by asterisks.]

Figure 5: REM sleep state plot of overlap vs. time for each of the six memories. β = 1.0, δ = 2.5, λ = 2.5, T = 8. The network sequences progressively in loop A until a cluster of simulated PGO bursts (asterisks) occurs lasting 40 < t < 60. A complex output involving alternating sequences from loop A and loop B results (note dotted lines). A second PGO burst cluster during the interval 90 < t < 110 yields an output consisting of a single loop B sequence.
Over the time span of the simulation, a bifurcation from loop A to loop B has been induced.

5 STATE-DEPENDENT LEARNING

The connections set up by equations (1) and (2) are determined a priori using a standard Hebbian learning algorithm and are not altered during the network simulations. Since neuromodulators, including the monoamines norepinephrine and serotonin, have been implicated as essential factors in synaptic plasticity (Kandel et al., 1987), it seems reasonable that state changes in modulation may also affect changes in plasticity. This property, when superimposed on the various sequencing features of a network, may yield possibly novel memory and sequence formations, associations and perhaps other unexamined global processes.

6 CONCLUSIONS

The main finding of this paper is that unique states of information sequencing can exist within the same network under different modulatory conditions. This result holds even though the model makes significant simplifying assumptions about the neurophysiological and cognitive processes motivating its construction. Several observations from the model also suggest mechanisms whereby interactions between the aminergic and cholinergic systems can give rise to sequencing properties, such as discontinuities, in different states, especially REM sleep. Finally, the model provides a means of investigating some of the complex and interesting relationships between modulation, memory, sequencing and learning within and between different states.

Acknowledgements

Supported by NIH grant MH 13,923, the HMS/MMHC Research & Education Fund, the Livingston, Dupont-Warren and McDonnell-Pew Foundations, DARPA under ONR contract N00014-85-K-0124, the Sloan Foundation and Whitaker College.

References

Foulkes D (1985) Dreaming: A Cognitive-Psychological Analysis. Hillsdale: Erlbaum.
Garcia-Rill E, Atsuta Y, Iwahara T, Skinner RD (1990) Development of brainstem modulation of locomotion.
Somatosensory Motor Research 7 238-239.
Gutfreund H, Mezard M (1988) Processing of temporal sequences in neural networks. Phys Rev Lett 61 235-238.
Harris-Warrick RM, Marder E (1991) Modulation of neural networks for behavior. Annu Rev Neurosci 14 39-57.
Hobson JA (1988) The Dreaming Brain. New York: Basic.
Hobson JA, Steriade M (1986) Neuronal basis of behavioral state control. In: Mountcastle VB (ed) Handbook of Physiology - The Nervous System, Vol IV. Bethesda: Am Physiol Soc, 701-823.
Kandel ER, Klein M, Hochner B, Shuster M, Siegelbaum S, Hawkins R, et al. (1987) Synaptic modulation and learning: New insights into synaptic transmission from the study of behavior. In: Edelman GM, Gall WE, Cowan WM (eds) Synaptic Function. New York: Wiley, 471-518.
Kleinfeld D (1986) Sequential state generation by model neural networks. Proc Natl Acad Sci USA 83 9469-9473.
Mamelak AN, Hobson JA (1989) Dream bizarreness as the cognitive correlate of altered neuronal behavior in REM sleep. J Cog Neurosci 1(3) 201-222.
Selverston AI (1988) A consideration of invertebrate central pattern generators as computational data bases. Neural Networks 1 109-117.
Sompolinsky H, Kanter I (1986) Temporal association in asymmetric neural networks. Phys Rev Lett 57 2861-2864.
Splines, Rational Functions and Neural Networks

Robert C. Williamson Department of Systems Engineering Australian National University Canberra, 2601 Australia

Peter L. Bartlett Department of Electrical Engineering University of Queensland Queensland, 4072 Australia

Abstract

Connections between spline approximation, approximation with rational functions, and feedforward neural networks are studied. The potential improvement in the degree of approximation in going from single to two hidden layer networks is examined. Some results of Birman and Solomjak regarding the degree of approximation achievable when knot positions are chosen on the basis of the probability distribution of examples rather than the function values are extended.

1 INTRODUCTION

Feedforward neural networks have been proposed as parametrized representations suitable for nonlinear regression. Their approximation theoretic properties are still not well understood. This paper shows some connections with the more widely known methods of spline and rational approximation. A result due to Vitushkin is applied to determine the relative improvement in degree of approximation possible by having more than one hidden layer. Furthermore, an approximation result relevant to statistical regression originally due to Birman and Solomjak for Sobolev space approximation is extended to more general Besov spaces. The two main results are theorems 3.1 and 4.2.

2 SPLINES AND RATIONAL FUNCTIONS

The two most widely studied nonlinear approximation methods are splines with free knots and rational functions. It is natural to ask what connection, if any, these have with neural networks.
It is already known that splines with free knots and rational functions are closely related, as Petrushev and Popov's remarkable result shows:

Theorem 2.1 ([10, chapter 8]) Let

$$R_n(f)_p := \inf\{\|f - r\|_p : r \text{ a rational function of degree } n\},$$
$$S_n^k(f)_p := \inf\{\|f - s\|_p : s \text{ a spline of degree } k-1 \text{ with } n-1 \text{ free knots}\}.$$

If $f \in L_p[a,b]$, $-\infty < a < b < \infty$, $1 < p < \infty$, $k \ge 1$, $0 < \alpha < k$, then $R_n(f)_p = O(n^{-\alpha})$ if and only if $S_n^k(f)_p = O(n^{-\alpha})$.

In both cases the efficacy of the methods can be understood in terms of their flexibility in partitioning the domain of definition: the partitioning amounts to a "balancing" of the error of local linear approximation [4]. There is an obvious connection between single hidden layer neural networks and splines. For example, replacing the sigmoid $(1 + e^{-x})^{-1}$ by the piecewise linear function $(|x+1| - |x-1|)/2$ results in networks that are in one dimension splines, and in d dimensions can be written in "Canonical Piecewise Linear" form [3]:

$$f(x) := a + b^T x + \sum_i c_i |\alpha_i^T x - \beta_i|$$

defines $f: \mathbb{R}^d \to \mathbb{R}$, where $a, c_i, \beta_i \in \mathbb{R}$ and $b, \alpha_i \in \mathbb{R}^d$. Note that canonical piecewise linear representations are unique on a compact domain if we use the form $f(x) := \sum_i c_i |\alpha_i^T x - 1|$. Multilayer piecewise linear nets are not generally canonical piecewise linear: Let $g(x,y) := |x+y-1| - |x+y+1| - |x-y+1| - |x-y-1| + x + y$. Then $g(\cdot)$ is canonical piecewise linear, but $|g(x,y)|$ (a simple two-hidden-layer network) is not. The connection between certain single hidden layer networks and rational functions has been exploited in [13].
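The piecewise-linear sigmoid and the canonical piecewise-linear (CPWL) form discussed above are easy to evaluate directly. This is a small illustrative sketch; the function names and the particular one-unit example are hypothetical.

```python
import numpy as np

def pl_sigmoid(x):
    """Piecewise-linear replacement for the sigmoid: (|x+1| - |x-1|) / 2."""
    return (np.abs(x + 1) - np.abs(x - 1)) / 2

def cpwl(x, a, b, c, alpha, beta):
    """Canonical piecewise-linear form: a + b.x + sum_i c_i |alpha_i.x - beta_i|."""
    x = np.asarray(x, dtype=float)
    return a + b @ x + sum(ci * abs(ai @ x - bi)
                           for ci, ai, bi in zip(c, alpha, beta))

# One "hidden unit" in d = 2: f(x) = 0.5 * |x1 + x2 - 1|
f = lambda x: cpwl(x, 0.0, np.zeros(2), [0.5], [np.array([1.0, 1.0])], [1.0])
```

Each absolute-value term contributes one hyperplane of non-differentiability, which is exactly the knot structure that ties these networks to splines.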
As a first step to understanding the utility of these representations we now consider the degree of approximation of certain smooth function classes via rational functions or compositions of rational functions in the sup-metric. A function $\phi: \mathbb{R} \to \mathbb{R}$ is rational of degree $\pi$ if $\phi$ can be expressed as a ratio of polynomials in $x \in \mathbb{R}$ of degree at most $\pi$. Thus

$$\phi_\theta := \phi_\theta(x) := \frac{\sum_{i=1}^{\pi} \alpha_i x^i}{\sum_{i=1}^{\pi} \beta_i x^i}, \quad x \in \mathbb{R}, \quad \theta := [\alpha, \beta]. \qquad (3.1)$$

Let $\sigma_\pi(f, \phi) := \inf\{\|f - \phi_\theta\| : \deg \phi \le \pi\}$ denote the degree of approximation of f by a rational function of degree $\pi$ or less. Let $\psi := \phi \circ \rho$, where $\phi$ and $\rho$ are rational functions: $\rho: \mathbb{R} \times \Theta_\rho \to \mathbb{R}$, $\phi: \mathbb{R} \times \Theta_\phi \to \mathbb{R}$, both of degree $\pi$. Let $\mathbb{F}$ be some function space (metrized by $\|\cdot\|_\infty$) and let $\sigma_\pi(\mathbb{F}, \cdot) := \sup\{\sigma_\pi(f, \cdot) : f \in \mathbb{F}\}$ denote the degree of approximation of the function class $\mathbb{F}$.

Theorem 3.1 Let $\mathbb{F}_\alpha := W_\infty^\alpha(\Omega)$ denote the Sobolev space of functions from a compact subset $\Omega \subset \mathbb{R}$ to $\mathbb{R}$ with $s := \lfloor \alpha \rfloor$ continuous derivatives and the s-th derivative satisfying a Lipschitz condition with order $\alpha - s$. Then there exist positive constants $C_1$ and $C_2$ not depending on $\pi$ such that for sufficiently large $\pi$

$$\sigma_\pi(\mathbb{F}_\alpha, \rho) \ge C_1 \left(\frac{1}{2\pi}\right)^{\alpha} \qquad (3.2)$$

and

$$\sigma_\pi(\mathbb{F}_\alpha, \psi) \ge C_2 \left(\frac{1}{4\pi \log(\pi+1)}\right)^{\alpha}. \qquad (3.3)$$

Note that (3.2) is tight: it is achievable. Whether (3.3) is achievable is unknown. The proof is a consequence of theorem 3.4. The above result, although only for rational functions of a single variable, suggests that no great benefit in terms of degree of approximation is to be obtained by using multiple hidden layer networks.

3.1 PROOF OF THEOREM

Definition 3.2 Let $\Gamma_d \subset \mathbb{R}^d$. A map $r: \Gamma_d \to \mathbb{R}$ is called a piecewise rational function of degree k with barrier $b_q$ of order q if there is a polynomial $b_q$ of degree q in $x \in \Gamma_d$ such that on any connected component $\Gamma_i \subset \Gamma_d \setminus \{x : b_q(x) = 0\}$, r is a rational function on $\Gamma_i$ of degree k:

$$r := r(x) := \frac{P_{d,i}^k(x)}{Q_{d,i}^k(x)}, \quad P_{d,i}^k, Q_{d,i}^k \in \mathbb{R}[x].$$

Note that at any point $x \in \Gamma_i \cap \Gamma_j$ ($i \neq$
$j$), $r$ is not necessarily single valued.

Definition 3.3 Let $\mathbb{F}$ be some function class defined on a set G metrized with $\|\cdot\|_\infty$ and let $\Theta = \mathbb{R}^v$. Then $F_{\epsilon,v}^{k,q}: G \times \Theta \to \mathbb{R}$, $F_{\epsilon,v}^{k,q}: (x, \theta) \mapsto F(x, \theta)$ where

1. $F(x, \theta)$ is a piecewise rational function of $\theta$ of degree k or less with barrier $b_q^x$ (possibly depending on x) of order q;
2. For all $f \in \mathbb{F}$ there is a $\theta \in \Theta$ such that $\|f - F(\cdot, \theta)\| \le \epsilon$;

is called an $\epsilon$-representation of $\mathbb{F}$ of degree k and order q.

Theorem 3.4 ([12, page 191, theorem 1]) If $F_{\epsilon,v}^{k,q}$ is an $\epsilon$-representation of $\mathbb{F}_\alpha$ of degree k and order q with barrier b not depending on x, then for sufficiently small $\epsilon$

$$v \log[(q+1)(k+1)] \ge C \left(\frac{1}{\epsilon}\right)^{1/\alpha} \qquad (3.4)$$

where C is a constant not dependent on $\epsilon$, v, k or q.

Theorem 3.4 holds for any $\epsilon$-representation F and therefore (by rearrangement of (3.4) and setting $v = 2\pi$)

$$\sigma_\pi(\mathbb{F}, F) \ge c \left(\frac{1}{2\pi \log[(q+1)(k+1)]}\right)^{\alpha}. \qquad (3.5)$$

Now $\phi_\theta$ given by (3.1) is, for any given and fixed $x \in \mathbb{R}$, a piecewise rational function of $\theta$ of degree 1 with barrier of degree 0 (no barrier is actually required). Thus (3.5) immediately gives (3.2). Now consider $\psi_\theta = \phi \circ \rho$, where

$$\phi = \frac{\sum_{i=1}^{\pi_\phi} \alpha_i y^i}{\sum_{i=1}^{\pi_\phi} \beta_i y^i} \quad (y \in \mathbb{R}) \qquad \text{and} \qquad \rho = \frac{\sum_{j=1}^{\pi_\rho} \gamma_j x^j}{\sum_{j=1}^{\pi_\rho} \delta_j x^j} \quad (x \in \mathbb{R}).$$

Direct substitution and rearrangement gives

$$\psi_\theta = \frac{\sum_{i=1}^{\pi_\phi} \alpha_i \left[\sum_{j=1}^{\pi_\rho} \gamma_j x^j\right]^i \left[\sum_{j=1}^{\pi_\rho} \delta_j x^j\right]^{\pi_\phi - i}}{\sum_{i=1}^{\pi_\phi} \beta_i \left[\sum_{j=1}^{\pi_\rho} \gamma_j x^j\right]^i \left[\sum_{j=1}^{\pi_\rho} \delta_j x^j\right]^{\pi_\phi - i}}$$

where we write $\theta = [\alpha, \beta, \gamma, \delta]$ and for simplicity set $\pi_\phi = \pi_\rho = \pi$. Thus $\dim \theta = 4\pi =: v$. For arbitrary but fixed x, $\psi$ is a rational function of $\theta$ of degree $k = \pi$. No barrier is needed so $q = 0$ and hence by (3.4),

$$\sigma_\pi(\mathbb{F}_\alpha, \psi) \ge C_2 \left(\frac{1}{4\pi \log(\pi+1)}\right)^{\alpha}.$$

3.2 OPEN PROBLEMS

An obvious further question is whether results as in the previous section hold for multivariable approximation, perhaps for multivariable rational approximation.
A popular method of d-dimensional nonlinear spline approximation uses dyadic splines [2,5,8]. They are piecewise polynomial representations where the partition used is a dyadic decomposition. Given that such a partition $\Xi$ is a subset of a partition generated by the zero level set of a barrier polynomial of degree $\le |\Xi|$, can Vitushkin's results be applied to this situation? Note that in Vitushkin's theory it is the parametrization that is piecewise rational (PR), not the representation. What connections are there in general (if any) between PR representations and PR parametrizations?

4 DEGREE OF APPROXIMATION AND LEARNING

Determining the degree of approximation for given parametrized function classes is not only of curiosity value. It is now well understood that the statistical sample complexity of learning depends on the size of the approximating class. Ideally the approximating class is small whilst well approximating as large as possible an approximated class. Furthermore, in order to make statements such as in [1] regarding the overall degree of approximation achieved by statistical learning, the classical degree of approximation is required.

For regression purposes the metric used is $L_{p,\mu}$, where $\|f\|_{L_{p,\mu}} := \left(\int |f|^p \, d\mu\right)^{1/p}$ and $\mu$ is a probability measure. Ideally one would like to avoid calculating the degree of approximation for an endless series of different function spaces. Fortunately, for the case of spline approximation (with free knots) this is not necessary because (thanks to Petrushev and others) there now exist both direct and converse theorems characterizing such approximation classes.

Let $S_n(f)_p$ denote the error of n-knot spline approximation in $L_p[0,1]$. Let I denote the identity operator and $T(h)$ the translation operator ($T(h)(f,x) := f(x+h)$) and let $\Delta_h^k := (T(h) - I)^k$, $k = 1, 2, \ldots$, be the difference operators. The modulus of smoothness of order k for $f \in L_p(\Omega)$ is

$$\omega_k(f,t)_p := \sup_{|h| \le t} \|\Delta_h^k f(\cdot)\|_{L_p(\Omega)}.$$

Petrushev [9] has obtained

Theorem 4.1 Let $\tau = (\alpha + 1/p)^{-1}$. Then

$$\sum_{n=1}^{\infty} \frac{1}{n} \left[n^{\alpha} S_n(f)_p\right]^{\tau} < \infty \qquad (4.1)$$

if and only if

$$\|f\|_{B_{\tau,\tau;k}^{\alpha}} < \infty. \qquad (4.2)$$

The somewhat strange quantity in (4.2) is the norm of f in a Besov space $B_{\tau,\tau;k}^{\alpha}$. Note that for $\alpha$ large enough, $\tau < 1$. That is, the smoothness is measured in an $L_p$ ($p < 1$) space. More generally [11], we have (on domain [0,1])

$$\|f\|_{B_{p,q;k}^{\alpha}} := \left(\int_0^1 \left(t^{-\alpha} \omega_k(f,t)_p\right)^q \frac{dt}{t}\right)^{1/q}.$$

Besov spaces are generalizations of classical smoothness spaces such as Sobolev spaces (see [11]). We are interested in approximation in $L_{p,\mu}$ and following Birman and Solomjak [2] ask what degree of approximation in $L_{p,\mu}$ can be obtained when the knot positions are chosen according to $\mu$ rather than f. This is of interest because it makes the problem of determining the parameter values on the basis of observations linear.

Theorem 4.2 Let $f \in L_{p,\mu}$ where $\mu$ is absolutely continuous with $\mu \in L_\lambda$ for some $\lambda > 1$. Choose the n knot positions of a spline approximant v to f on the basis of $\mu$ only. Then for all such f there is a constant c not dependent on n such that

$$\|f - v\|_{L_{p,\mu}} \le c\, n^{-\alpha} \|f\|_{B_{\sigma,\sigma;k}^{\alpha}} \qquad (4.3)$$

where $\sigma = (\alpha + (1 - \lambda^{-1})p^{-1})^{-1}$ and $p < \sigma$. The constant c depends on $\mu$ and $\lambda$.

If $p \ge 1$ and $\sigma \le p$, for any $\alpha < \sigma^{-1}$, for all f under the conditions above, there is a v such that

$$\|f - v\|_{L_{p,\mu}}^p \le c\, n^{-1 + p/\sigma - p\alpha} \|f\|_{B_{\sigma,\sigma;k}^{\alpha}}^p \qquad (4.4)$$

and again c depends on $\mu$ and $\lambda$ but does not depend on n.

Proof First we prove (4.3). Let [0,1] be partitioned by $\Xi$. Thus if v is the approximant to f on [0,1] we have

$$\|f - v\|_{L_{p,\mu}}^p = \sum_{\Delta \in \Xi} \|f - v\|_{L_{p,\mu}(\Delta)}^p = \sum_{\Delta \in \Xi} \int_\Delta |f(x) - v(x)|^p \, d\mu(x).$$

For any $\lambda \ge 1$,

$$\int_\Delta |f(x) - v(x)|^p \, d\mu(x) = \int_\Delta |f - v|^p \left(\frac{d\mu}{dx}\right) dx \le \left[\int_\Delta |f - v|^{p(1-\lambda^{-1})^{-1}} dx\right]^{1-\lambda^{-1}} \left[\int_\Delta \left(\frac{d\mu}{dx}\right)^{\lambda} dx\right]^{1/\lambda} = \|f - v\|_{L_\psi(\Delta)}^p \, \|d\mu/dx\|_{L_\lambda(\Delta)}$$

where $\psi = p(1 - \lambda^{-1})^{-1}$. Now Petrushev and Popov [10, p. 216] have shown that there exists a polynomial of degree k on $\Delta = [r, s]$ such that

$$\|f - v\|_{L_\psi(\Delta)}^p \le c \|f\|_{B(\Delta)}^p$$

where

$$\|f\|_{B(\Delta)} := \left(\int_0^{(s-r)/k} \left(t^{-\alpha} \|\Delta_t^k f(\cdot)\|_{L_\sigma(r, s-kt)}\right)^{\sigma} \frac{dt}{t}\right)^{1/\sigma}$$

and $\sigma := (\alpha + \psi^{-1})^{-1}$, $0 < \psi < \infty$ and $k > 1$.
Let $|\Xi| =: n$ and choose $\Xi = \bigcup_{i=1}^{n} \Delta_i$ ($\Delta_i = [r_i, s_i]$) such that

$$\int_{\Delta_i} \left(\frac{d\mu}{dx}\right)^{\lambda} dx = \frac{1}{n} \|d\mu/dx\|_{L_\lambda(0,1)}^{\lambda}.$$

Thus $\|d\mu/dx\|_{L_\lambda(\Delta)} = n^{-1/\lambda} \|d\mu/dx\|_{L_\lambda(0,1)}$. Hence

$$\|f - v\|_{L_{p,\mu}}^p \le c \|d\mu/dx\|_{L_\lambda} \sum_{\Delta \in \Xi} n^{-1/\lambda} \|f\|_{B(\Delta)}^p. \qquad (4.5)$$

Since (by hypothesis) $p < \sigma$, Hölder's inequality gives

$$\|f - v\|_{L_{p,\mu}}^p \le c \|d\mu/dx\|_{L_\lambda} \left[\sum_{\Delta \in \Xi} \left(\frac{1}{n}\right)^{\frac{1}{\lambda}\left(1-\frac{p}{\sigma}\right)^{-1}}\right]^{1-\frac{p}{\sigma}} \left[\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma}\right]^{\frac{p}{\sigma}}.$$

Now for arbitrary partitions $\Xi$ of [0,1] Petrushev and Popov [10, page 216] have shown

$$\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma} \le \|f\|_{B_{\sigma;k}^{\alpha}}^{\sigma}$$

where $B_{\sigma;k}^{\alpha} = B_{\sigma,\sigma;k}^{\alpha} = B([0,1])$. Hence

$$\|f - v\|_{L_{p,\mu}}^p \le c \|d\mu/dx\|_{L_\lambda} n^{-p\alpha} \|f\|_{B_{\sigma,\sigma;k}^{\alpha}}^p \qquad (4.6)$$

with $\sigma = (\alpha + \psi^{-1})^{-1}$, $\psi = p(1 - \lambda^{-1})^{-1}$; hence $\sigma = (\alpha + (1 - \lambda^{-1})p^{-1})^{-1}$. Thus given $\alpha$ and p, choosing different $\lambda$ adjusts the $\sigma$ used to measure f on the right-hand side of (4.6). This proves (4.3). Note that because of the restriction that $p < \sigma$, $\alpha > 1$ is only achievable for $p < 1$ (which is rarely used in statistical regression [6]). Note also the effect of the term $\|d\mu/dx\|_{L_\lambda}^{1/p}$. When $\lambda = 1$ this is identically 1 (since $\mu$ is a probability measure). When $\lambda > 1$ it measures the departure from uniform distribution, suggesting the degree of approximation achievable under non-uniform distributions is worse than under uniform distributions.

Equation (4.4) is proved similarly. When $\sigma \le p$ with $p \ge 1$, for any $\alpha \le 1/\sigma$, we can set $\lambda := (1 - p/\sigma + p\alpha)^{-1} \ge 1$. From (4.5) we have

$$\|f - v\|_{L_{p,\mu}}^p \le c \|d\mu/dx\|_{L_\lambda} \sum_{\Delta \in \Xi} \left(\frac{1}{n}\right)^{1/\lambda} \|f\|_{B(\Delta)}^p \le c \|d\mu/dx\|_{L_\lambda} \left(\frac{1}{n}\right)^{1/\lambda} \left[\sum_{\Delta \in \Xi} \|f\|_{B(\Delta)}^{\sigma}\right]^{p/\sigma}$$

and therefore

$$\|f - v\|_{L_{p,\mu}}^p \le c \|d\mu/dx\|_{L_\lambda} n^{-1 + p/\sigma - p\alpha} \|f\|_{B_{\sigma,\sigma;k}^{\alpha}}^p.$$

5 CONCLUSIONS AND FURTHER WORK

In this paper a result of Vitushkin has been applied to "multi-layer" rational approximation. Furthermore, the degree of approximation achievable by spline approximation with free knots when the knots are chosen according to a probability distribution has been examined. The degree of approximation of neural networks, particularly multiple layer networks, is an interesting open problem.
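The knot-placement idea behind Theorem 4.2 above can be sketched concretely: choose the knots from $\mu$ alone (here, as equal-probability quantiles of a sample), after which fitting the spline is an ordinary linear least-squares problem. The test function, sampling distribution, knot count and truncated-power basis below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)
x = rng.beta(2.0, 5.0, size=2000)      # non-uniform sampling measure mu
y = f(x)

# n cells => n-1 interior knots at equal-probability-mass quantiles of mu
n = 16
knots = np.quantile(x, np.linspace(0, 1, n + 1)[1:-1])

def design_matrix(x, knots):
    """Truncated-power basis for a continuous piecewise-linear spline:
    columns 1, x, and (x - k)_+ for each knot k."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.stack(cols, axis=1)

# With the knots fixed by mu, the coefficients solve a linear problem.
A = design_matrix(x, knots)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rms = np.sqrt(np.mean((y - A @ coef) ** 2))
```

Regions where $\mu$ puts little mass get wide cells, which is precisely the trade-off the $\|d\mu/dx\|_{L_\lambda}$ term in the theorem quantifies.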
Ideally one would like both direct and converse theorems, completely characterizing the degree of approximation. If it turns out that from an approximation point of view neural networks are no better than dyadic splines (say), then there is a strong incentive to study the PAC-like learning theory (of the style of [7]) for such spline representations. We are currently working on this topic.

Acknowledgements

This work was supported in part by the Australian Telecommunications and Electronics Research Board and OTC. The first author thanks Federico Girosi for providing him with a copy of [4]. The second author was supported by an Australian Postgraduate Research Award.

References

[1] A. R. Barron, Approximation and Estimation Bounds for Artificial Neural Networks, To appear in Machine Learning, 1992.
[2] M. S. Birman and M. Z. Solomjak, Piecewise-Polynomial Approximations of Functions of the Classes $W_p^\alpha$, Mathematics of the USSR Sbornik, 2 (1967), pp. 295-317.
[3] L. Chua and A.-C. Deng, Canonical Piecewise-Linear Representation, IEEE Transactions on Circuits and Systems, 35 (1988), pp. 101-111.
[4] R. A. DeVore, Degree of Nonlinear Approximation, in Approximation Theory VI, Volume 1, C. K. Chui, L. L. Schumaker and J. D. Ward, eds., Academic Press, Boston, 1991, pp. 175-201.
[5] R. A. DeVore, B. Jawerth and V. Popov, Compression of Wavelet Decompositions, To appear in American Journal of Mathematics, 1992.
[6] H. Ekblom, Lp-methods for Robust Regression, BIT, 14 (1974), pp. 22-32.
[7] D. Haussler, Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications, Report UCSC-CRL-90-52, Baskin Center for Computer Engineering and Information Sciences, University of California, Santa Cruz, 1990.
[8] P. Oswald, On the Degree of Nonlinear Spline Approximation in Besov-Sobolev Spaces, Journal of Approximation Theory, 61 (1990), pp. 131-157.
[9] P. P. Petrushev,
Direct and Converse Theorems for Spline and Rational Approximation and Besov Spaces, in Function Spaces and Applications (Lecture Notes in Mathematics 1302), M. Cwikel, J. Peetre, Y. Sagher and H. Wallin, eds., Springer-Verlag, Berlin, 1988, pp. 363-377.
[10] P. P. Petrushev and V. A. Popov, Rational Approximation of Real Functions, Cambridge University Press, Cambridge, 1987.
[11] H. Triebel, Theory of Function Spaces, Birkhauser Verlag, Basel, 1983.
[12] A. G. Vitushkin, Theory of the Transmission and Processing of Information, Pergamon Press, Oxford, 1961. Originally published as Otsenka slozhnosti zadachi tabulirovaniya (Estimation of the Complexity of the Tabulation Problem), Fizmatgiz, Moscow, 1959.
[13] R. C. Williamson and U. Helmke, Existence and Uniqueness Results for Neural Network Approximations, submitted, 1992.
Learning to Make Coherent Predictions in Domains with Discontinuities

Suzanna Becker and Geoffrey E. Hinton
Department of Computer Science, University of Toronto
Toronto, Ontario, Canada M5S 1A4

Abstract

We have previously described an unsupervised learning procedure that discovers spatially coherent properties of the world by maximizing the information that parameters extracted from different parts of the sensory input convey about some common underlying cause. When given random dot stereograms of curved surfaces, this procedure learns to extract surface depth because that is the property that is coherent across space. It also learns how to interpolate the depth at one location from the depths at nearby locations (Becker and Hinton, 1992). In this paper, we propose two new models which handle surfaces with discontinuities. The first model attempts to detect cases of discontinuities and reject them. The second model develops a mixture of expert interpolators. It learns to detect the locations of discontinuities and to invoke specialized, asymmetric interpolators that do not cross the discontinuities.

1 Introduction

Standard backpropagation is implausible as a model of perceptual learning because it requires an external teacher to specify the desired output of the network. We have shown (Becker and Hinton, 1992) how the external teacher can be replaced by internally derived teaching signals. These signals are generated by using the assumption that different parts of the perceptual input have common causes in the external world. Small modules that look at separate but related parts of the perceptual input discover these common causes by striving to produce outputs that agree with each other (see Figure 1a). The modules may look at different modalities (e.g. vision and touch), or the same modality at different times (e.g. the consecutive 2-D views of a rotating 3-D object), or even spatially adjacent parts of the same image.
In previous work, we showed that when our learning procedure is applied to adjacent patches of 2-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random dot stereograms of curved surfaces. A more general version of the method allows the network to discover the best way of interpolating the depth at one location from the depths at nearby locations. We first summarize this earlier work, and then introduce two new models which allow coherent predictions to be made in the presence of discontinuities.

Figure 1: a) Two modules that receive input from corresponding parts of stereo images. The first module receives input from stereo patch A, consisting of a horizontal strip from the left image (striped) and a corresponding strip from the right image (hatched). The second module receives input from an adjacent stereo patch B. The modules try to make their outputs, d_a and d_b, convey as much information as possible about some underlying signal (i.e., the depth) which is common to both patches. b) The architecture of the interpolating network, consisting of multiple copies of modules like those in a) plus a layer of interpolating units. The network tries to maximize the information that the locally extracted parameter d_c and the contextually predicted parameter d̂_c convey about some common underlying signal. We actually used 10 modules, and the central 6 modules tried to maximize agreement between their outputs and contextually predicted values. We used weight averaging to constrain the interpolating function to be identical for all modules.

2 Learning spatially coherent features in images

The simplest way to get the outputs of two modules to agree is to use the squared difference between the outputs as a cost function, and to adjust the weights in each module so as to minimize this cost.
Unfortunately, this usually causes each module to produce the same constant output that is unaffected by the input to the module and therefore conveys no information about it. What we want is for the outputs of two modules to agree closely (i.e. to have a small expected squared difference) relative to how much they both vary as the input is varied. When this happens, the two modules must be responding to something that is common to their two inputs. In the special case when the outputs, d_a and d_b, of the two modules are scalars, a good measure of agreement is:

\[ I = \frac{1}{2}\,\log\frac{V(d_a + d_b)}{V(d_a - d_b)} \qquad (1) \]

where V is the variance over the training cases. If d_a and d_b are both versions of the same underlying Gaussian signal that have been corrupted by independent Gaussian noise, it can be shown that I is the mutual information between the underlying signal and the average of d_a and d_b. By maximizing I we force the two modules to extract as pure a version as possible of the underlying common signal.

2.1 The basic stereo net

We have shown how this principle can be applied to a multi-layer network that learns to extract depth from random dot stereograms (Becker and Hinton, 1992). Each network module received input from a patch of a left image and a corresponding patch of a right image, as shown in Figure 1a). Adjacent modules received input from adjacent stereo image patches, and learned to extract depth by trying to maximize agreement between their outputs. The real-valued depth (relative to the plane of fixation) of each patch of the surface gives rise to a disparity between features in the left and right images; since that disparity is the only property that is coherent across each stereo image, the output units of modules were able to learn to accurately detect relative depth.
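The variance-ratio measure in equation (1) can be sketched numerically. The function name and the toy signal-plus-noise data below are our own illustration, not the paper's stimuli; two outputs that share a common underlying signal score high, while independent outputs score near zero:

```python
import numpy as np

def agreement(da, db):
    """Agreement measure I = 0.5 * log(V(da + db) / V(da - db)),
    where V is the variance over training cases."""
    return 0.5 * np.log(np.var(da + db) / np.var(da - db))

rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)               # common underlying cause
da = signal + 0.1 * rng.normal(size=10_000)    # module A's noisy output
db = signal + 0.1 * rng.normal(size=10_000)    # module B's noisy output

shared = agreement(da, db)                     # large: outputs co-vary
unrelated = agreement(rng.normal(size=10_000),
                      rng.normal(size=10_000)) # near zero: no common cause
```

Maximizing this quantity, rather than minimizing the squared difference alone, removes the incentive for both modules to collapse to a constant output.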
2.2 The interpolating net

The basic stereo net uses a very simple model of coherence in which an underlying parameter at one location is assumed to be approximately equal to the parameter at a neighbouring location. This model is fine for the depth of fronto-parallel surfaces but it is far from the best model of slanted or curved surfaces. Fortunately, we can use a far more general model of coherence in which the parameter at one location is assumed to be an unknown linear function of the parameters at nearby locations. The particular linear function that is appropriate can be learned by the network. We used a network of the type shown in Figure 1b). The depth computed locally by a module, d_c, was compared with the depth predicted by a linear combination d̂_c of the outputs of nearby modules, and the network tried to maximize the agreement between d_c and d̂_c. The contextual prediction, d̂_c, was produced by computing a weighted sum of the outputs of two adjacent modules on either side. The interpolating weights used in this sum, and all other weights in the network, were adjusted so as to maximize agreement between locally computed and contextually predicted depths. To speed the learning, we first trained the lower layers of the network as before, so that agreement was maximized between neighbouring locally computed outputs. This made it easier to learn good interpolating weights. When the network was trained on stereograms of cubic surfaces, it learned interpolating weights of -0.147, 0.675, 0.656, -0.131 (Becker and Hinton, 1992). Given noise-free estimates of local depth, the optimal linear interpolator for a cubic surface is -0.167, 0.667, 0.667, -0.167.

3 Throwing out discontinuities

If the surface is continuous, the depth at one patch can be accurately predicted from the depths of two patches on either side.
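The quoted optimal interpolator for a cubic surface can be verified directly: requiring the contextual prediction to be exact for every cubic polynomial sampled at neighbour offsets -2, -1, +1, +2 (unit patch spacing is our assumption for illustration) recovers the stated weights. A minimal sketch:

```python
import numpy as np

# Weights w must satisfy sum_j w_j * x_j**m = 0**m for m = 0..3,
# i.e. exact reproduction of 1, x, x**2, x**3 at the centre x = 0.
offsets = np.array([-2.0, -1.0, 1.0, 2.0])   # neighbouring patch positions
A = np.vstack([offsets ** m for m in range(4)])
b = np.array([1.0, 0.0, 0.0, 0.0])           # values of x**m at x = 0
w = np.linalg.solve(A, b)
# w is approximately [-0.167, 0.667, 0.667, -0.167], matching the text.
```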
If, however, the training data contains cases in which there are depth discontinuities (see Figure 2), the interpolator will also try to model these cases, and this will contribute considerable noise to the interpolating weights and to the depth estimates. One way of reducing this noise is to treat the discontinuity cases as outliers and to throw them out. Rather than making a hard decision about whether a case is an outlier, we make a soft decision by using a mixture model. For each training case, the network compares the locally extracted depth, d_c, with the depth predicted from the nearby context, d̂_c. It assumes that d_c − d̂_c is drawn from a zero-mean Gaussian if it is a continuity case and from a uniform distribution if it is a discontinuity case. It can then estimate the probability of a continuity case.

[Figure 2 panels: spline curve; left image; right image]

Figure 2: Top: A curved surface strip with a discontinuity, created by fitting 2 cubic splines through randomly chosen control points, 25 pixels apart, separated by a depth discontinuity. Feature points are randomly scattered on each spline with an average of 0.22 features per pixel. Bottom: A stereo pair of "intensity" images of the surface strip formed by taking two different projections of the feature points, filtering them through a gaussian, and sampling the filtered projections at evenly spaced sample points. The sample values in corresponding patches of the two images are used as the inputs to a module. The depth of the surface for a particular image region is directly related to the disparity between corresponding features in the left and right patch. Disparity ranges continuously from -1 to +1 image pixels. Each stereo image was 120 pixels wide and divided into 10 receptive fields 10 pixels wide, separated by 2 pixel gaps, as input for the networks shown in Figure 1.
The receptive field of an interpolating unit spanned 58 image pixels, and discontinuities were randomly located a minimum of 40 pixels apart, so only rarely would more than one discontinuity lie within an interpolator's receptive field.

The probability of a continuity case is estimated as:

\[ p_{\mathrm{cont}} = \frac{\mathcal{N}(d_c - \hat d_c)}{\mathcal{N}(d_c - \hat d_c) + k_{\mathrm{discont}}} \qquad (2) \]

where \(\mathcal{N}\) is a Gaussian, and k_discont is a constant representing a uniform density.¹ We can now optimize the average information d_c and d̂_c transmit about their common cause. We assume that no information is transmitted in discontinuity cases, so the average information depends on the probability of continuity and on the variance of d_c + d̂_c and d_c − d̂_c measured only in the continuity cases:

\[ I^{*} = \bar p_{\mathrm{cont}}\; \frac{1}{2}\,\log\frac{V_{\mathrm{cont}}(d_c + \hat d_c)}{V_{\mathrm{cont}}(d_c - \hat d_c)} \qquad (3) \]

We tried several variations of this mixture approach. The network is quite good at rejecting the discontinuity cases, but this leads to only a modest improvement in the performance of the interpolator. In cases where there is a depth discontinuity between d_a and d_b or between d_d and d_e, the interpolator works moderately well because the weights on d_a or d_e are small. Because of the term p_cont in equation 3 there is pressure to include these cases as continuity cases, so they probably contribute noise to the interpolating weights. In the next section we show how to avoid making a forced choice between rejecting these cases or treating them just like all the other continuity cases.

4 Learning a mixture of expert interpolators

The presence of a depth discontinuity somewhere within a strip of five adjacent patches does not entirely eliminate the coherence of depth across these patches. It just restricts the range over which this coherence operates. So instead of throwing out cases that contain a discontinuity, the network could try to develop a number of different, specialized interpolators, each of which captures the particular type of coherence that remains in the presence of a discontinuity at a particular location.
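The soft continuity decision of equation (2) in the previous section can be sketched as follows; the variance and k_discont values below are illustrative assumptions, not the values used in the experiments:

```python
import math

def p_continuity(residual, var_cont, k_discont):
    """Posterior probability that a case is a continuity case: a zero-mean
    Gaussian density on the residual d_c - d_hat_c competes with a uniform
    density of height k_discont."""
    gauss = (math.exp(-residual ** 2 / (2 * var_cont))
             / math.sqrt(2 * math.pi * var_cont))
    return gauss / (gauss + k_discont)

small = p_continuity(0.05, var_cont=0.01, k_discont=0.5)  # near 1
large = p_continuity(1.00, var_cont=0.01, k_discont=0.5)  # near 0
```

Small residuals are confidently treated as continuity cases, while large residuals are softly rejected as discontinuities rather than being discarded by a hard threshold.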
If, for example, there is a depth discontinuity between d_c and d_d, an extrapolator with weights of -1.0, +2.0, 0, 0 would be an appropriate predictor of d_c. Figure 3 shows the system of five expert interpolators that we used for predicting d_c from the neighboring depths. To allow the system to invoke the appropriate interpolator, each expert has its own "controller" which must learn to detect the presence of a discontinuity at a particular location (or the absence of a discontinuity, in the case of the interpolator for pure continuity cases). The outputs of the controllers are normalized, as shown in Figure 3, so that they form a probability distribution. We can think of these normalized outputs as the probability with which the system selects a particular expert. The controllers get to see all five local depth estimates, and most of them learn to detect particular depth discontinuities by using large weights of opposite sign on the local depth estimates of neighboring patches.

¹ We empirically select a good (fixed) value of k_discont, and we choose a starting value of V_cont(d_c − d̂_c) (some proportion of the initial variance of d_c − d̂_c), and gradually shrink it during learning.

Figure 3: The architecture of the mixture of interpolators and discontinuity detectors. Each expert i produces a prediction d̂_{c,i} and has a controller whose output x_i is normalized as \(p_i = e^{x_i^2} / \sum_j e^{x_j^2}\). We actually used a larger modular network and equality constraints between modules, as described in Figure 1b), with 6 copies of the architecture shown here. Each copy received input from different but overlapping parts of the input. Figure 4 shows the weights learned by the experts and by their controllers.
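The selection mechanism of Figure 3 can be sketched as below. The exponential-of-squared-score normalization is our reading of the figure (a plain softmax would behave similarly), and the particular predictions and scores are invented for illustration:

```python
import numpy as np

def controller_probs(x):
    """Normalize controller scores into a probability distribution,
    p_i = exp(x_i**2) / sum_j exp(x_j**2); max subtracted for stability."""
    e = np.exp(x ** 2 - np.max(x ** 2))
    return e / e.sum()

def mixture_prediction(expert_preds, scores):
    """Blend the experts' predictions d_hat_{c,i} according to their
    controllers' selection probabilities."""
    p = controller_probs(scores)
    return p @ expert_preds, p

preds = np.array([0.2, 0.8, 0.5, 0.1, 0.3])    # five experts' predictions
scores = np.array([0.1, 2.0, 0.3, 0.2, 0.1])   # controller outputs x_i
d_hat, p = mixture_prediction(preds, scores)
# The expert with the strongest controller dominates the blended prediction.
```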
As expected, there is one interpolator (the top one) that is appropriate for continuity cases and four other interpolators that are appropriate for the four different locations of a discontinuity. In interpreting the weights of the controllers it is important to remember that a controller which produces a small x value for a particular case may nevertheless assign high probability to its expert if all the other controllers produce even smaller x values.

4.1 The learning procedure

In the example presented here, we first trained the network shown in Figure 1b) on images with discontinuities. We then used the outputs of the depth extracting layer, d_a, ..., d_e, as the inputs to the expert interpolators and their controllers. The system learned a set of expert interpolators without backpropagating derivatives all the way down to the weights of the local depth extracting modules. So the local depth estimates d_a, ..., d_e did not change as the interpolators were learned. To train the system we used an unsupervised version of the competing experts algorithm described by Jacobs, Jordan, Nowlan and Hinton (1991). The output of the ith expert, d̂_{c,i}, is treated as the mean of a Gaussian distribution with variance σ², and the normalized output of each controller, p_i, is treated as the mixing proportion of that Gaussian. So, for each training case, the outputs of the experts and their controllers define a probability distribution that is a mixture of Gaussians.

Figure 4: a) Typical weights learned by the five competing interpolators and corresponding five discontinuity detectors. Positive weights are shown in white, and negative weights in black. b) The mean probabilities computed by each discontinuity detector (mean output vs. distance to nearest discontinuity) are plotted against the distance from the center of the units' receptive field to the nearest discontinuity. The probabilistic outputs are averaged over an ensemble of 1000 test cases. If the nearest discontinuity is beyond ±30 pixels, it is outside the units' receptive field and the case is therefore a continuity example.

The aim of the learning is to maximize the log probability density of the desired output, d_c, under this mixture of Gaussians distribution. For a particular training case this log probability is given by:

\[ \log P(d_c) = \log \sum_i p_i\, \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\!\left(-\frac{(d_c - \hat d_{c,i})^2}{2\sigma^2}\right) \qquad (4) \]

By taking derivatives of this objective function we can simultaneously learn the weights in the experts and in the controllers. For the results shown here, the network was trained for 30 conjugate gradient iterations on a set of 1000 random dot stereograms with discontinuities. The rationale for the use of a variance ratio in equation 1 is to prevent the variances of d_a and d_b collapsing to zero. Because the local estimates d_a, ..., d_e did not change as the system learned the expert interpolators, it was possible to use (d_c − d̂_{c,i})² in the objective function without worrying about the possibility that the variance of d_c across cases would collapse to zero during the learning. Ideally we would like to
One way to do this would be to generalize equation 3 to handle a mixture of expert interpolators: (5) Alternatively we could modify equation 4 by normalizing the difference (de - de.i )2 by the actual variance of dc, though this makes the derivatives considerably more complicated. 5 Discussion The competing controllers in figure 3 explicitly represent which regularity applies in a particular region. The outputs of the controllers for nearby regions may themselves exhibit coherence at a larger spatial scale, so the same learning technique could be applied recursively. In 2-D images this should allow the continuity of depth edges to be discovered. The approach presented here should be applicable to other domains which contain a mixture of alternative local regularities aCl·OSS space or time. For example, a l·igiJ shape causes a linear constraint between the locations of its parts in an image, so if there are many possible shapes, there are many alternative local regularities (Zemel and Hinton, 1991). Our learning procedure differs from methods that try to capture as much information as possible about the input (Linsker, 1988; Atick and Redlich, 1990) because we ignore information in the input that is not coherent across space. Acknowledgements This research was funded by grants from NSERC and the Ontario Information Technology Research Centre. Hinton is Noranda fellow of the Canadian Institute for Advanced Research. Thanks to John Bridle and Steve Nowlan for helpful discussions. References Atick, J. J. and Redlich, A. N. (1990). Towards a theory of early visual processing. Technical Report IASSNS-HEP-90j10, Institute for Advanced Study, Princeton. Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers surfaces in random-dot stereograms. January 1992 Nature. Jacobs, R. A., Jordan, M. 1., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3(1). Linsker, R. (1988). 
Self-organization in a perceptual network. IEEE Computer, 21(3):105-117.
Zemel, R. S. and Hinton, G. E. (1991). Discovering viewpoint-invariant relationships that characterize objects. In Advances in Neural Information Processing Systems 3, pages 299-305. Morgan Kaufmann Publishers.
Bayesian Model Comparison and Backprop Nets

David J. C. MacKay*
Computation and Neural Systems, California Institute of Technology 139-74, Pasadena CA 91125
mackay@ras.phy.cam.ac.uk

Abstract

The Bayesian model comparison framework is reviewed, and the Bayesian Occam's razor is explained. This framework can be applied to feedforward networks, making possible (1) objective comparisons between solutions using alternative network architectures; (2) objective choice of magnitude and type of weight decay terms; (3) quantified estimates of the error bars on network parameters and on network output. The framework also generates a measure of the effective number of parameters determined by the data. The relationship of Bayesian model comparison to recent work on prediction of generalisation ability (Guyon et al., 1992; Moody, 1992) is discussed.

1 BAYESIAN INFERENCE AND OCCAM'S RAZOR

In science, a central task is to develop and compare models to account for the data that are gathered. Typically, two levels of inference are involved in the task of data modelling. At the first level of inference, we assume that one of the models that we invented is true, and we fit that model to the data. Typically a model includes some free parameters; fitting the model to the data involves inferring what values those parameters should probably take, given the data. This is repeated for each model. The second level of inference is the task of model comparison. Here, we wish to compare the models in the light of the data, and assign some sort of preference or ranking to the alternatives.¹

* Current address: Darwin College, Cambridge CB3 9EU, U.K.

For example, consider the task of interpolating a noisy data set. The data set could be interpolated using a splines model, polynomials, or feedforward neural networks. At the first level of inference, we find for each individual model the best fit interpolant (a process sometimes known as 'learning').
At the second level of inference we want to rank the alternative models and state for our particular data set that, for example, 'splines are probably the best interpolation model', or 'if the interpolant is modelled as a polynomial, it should probably be a cubic', or 'the best neural network for this data set has eight hidden units'. Model comparison is a difficult task because it is not possible simply to choose the model that fits the data best: more complex models can always fit the data better, so the maximum likelihood model choice leads us inevitably to implausible over-parameterised models which generalise poorly. 'Occam's razor' is the principle that states that unnecessarily complex models should not be preferred to simpler ones. Bayesian methods automatically and quantitatively embody Occam's razor (Gull, 1988; Jeffreys, 1939), without the introduction of ad hoc penalty terms. Complex models are automatically self-penalising under Bayes' rule.

Let us write down Bayes' rule for the two levels of inference described above. Assume each model H_i has a vector of parameters w. A model is defined by its functional form and two probability distributions: a 'prior' distribution P(w|H_i) which states what values the model's parameters might plausibly take; and the predictions P(D|w, H_i) that the model makes about the data D when its parameters have a particular value w. Note that models with the same parameterisation but different priors over the parameters are therefore defined to be different models.

1. Model fitting. At the first level of inference, we assume that one model H_i is true, and we infer what the model's parameters w might be given the data D. Using Bayes' rule, the posterior probability of the parameters w is:

\[ P(w|D, H_i) = \frac{P(D|w, H_i)\, P(w|H_i)}{P(D|H_i)} \qquad (1) \]

In words:

Posterior = (Likelihood × Prior) / Evidence.

It is common to use gradient-based methods to find the maximum of the posterior, which defines the most probable value for the parameters, w_MP; it is then common to summarise the posterior distribution by the value of w_MP, and error bars on these best fit parameters. The error bars are obtained from the curvature of the posterior; writing the Hessian A = −∇∇ log P(w|D, H_i) and Taylor-expanding the log posterior with Δw = w − w_MP,

\[ P(w|D, H_i) \simeq P(w_{MP}|D, H_i)\, \exp\!\left(-\tfrac{1}{2}\,\Delta w^{\mathsf T} A\, \Delta w\right) \qquad (2) \]

¹ Note that both levels of inference are distinct from decision theory. The goal of inference is, given a defined hypothesis space and a particular data set, to assign probabilities to hypotheses. Decision theory chooses between alternative actions on the basis of these probabilities so as to minimise the expectation of a 'loss function'.
d EVI ence It is common to use gradient-based methods to find the maximum of the posterior, which defines the most probable value for the parameters, W MP ; it is then common to summarise the posterior distribution by the value of W MP , and error bars on these best fit parameters. The error bars are obtained from the curvature of the posterior; writing the Hessian A = -\7\7 log P(wID, 1ii) and Taylor-expanding the log posterior with ~w = w W MP , (2) 1 Note that both levels of inference are distinct from decision theory. The goal of inference is, given a defined hypothesis space and a particular data set, to assign probabilities to hypotheses. Decision theory chooses between alternative actions on the basis of these probabilities so as to minimise the expectation of a 'loss function'. Bayesian Model Comparison and Backprop Nets 841 w Figure 1: The Occam factor This figure shows the quantities that determine the Occam factor for a hypothesis 1ii havin§ a single parameter w. The prior distribution (dotted line) for the parameter has width Il. w. The posterior distribution (solid line) has a single peak at WMP with characteristic width Il.w. The Occam factor is :b~. we see that the posterior can be locally approximated as a gaussian with covariance matrix (error bars) A -1. 2. Model comparison. At the second level of inference, we wish to infer which model is most plausible given the data. The posterior probability of each model is: P(1ii ID) ex P(DI1ii )P(1ii ) (3) Notice that the objective data-dependent term P(DI1id is the evidence for 1ii, which appeared as the normalising constant in (1). The second term, P(1ii ), is a 'subjective' prior over our hypothesis space. Assuming that we have no reason to assign strongly differing priors P(1ii) to the alternative models, models 1ii are ranked by evaluating the evidence. 
This concept is very general: the evidence can be evaluated for parametric and 'non-parametric' models alike; whether our data modelling task is a regression problem, a classification problem, or a density estimation problem, the evidence is the Bayesian's transportable quantity for comparing alternative models. In all these cases the evidence naturally embodies Occam's razor, as we will now see.

The evidence is the normalising constant for equation (1):

\[ P(D|H_i) = \int P(D|w, H_i)\, P(w|H_i)\, dw \qquad (4) \]

For many problems, including interpolation, it is common for the posterior P(w|D, H_i) ∝ P(D|w, H_i) P(w|H_i) to have a strong peak at the most probable parameters w_MP (Figure 1). Then the evidence can be approximated by the height of the peak of the integrand P(D|w, H_i) P(w|H_i) times its width, Δw:

\[ P(D|H_i) \simeq P(D|w_{MP}, H_i) \times P(w_{MP}|H_i)\, \Delta w \qquad (5) \]

Evidence ≃ Best fit likelihood × Occam factor.

Thus the evidence is found by taking the best fit likelihood that the model can achieve and multiplying it by an 'Occam factor' (Gull, 1988), which is a term with magnitude less than one that penalises H_i for having the parameter w.

Interpretation of the Occam factor

The quantity Δw is the posterior uncertainty in w. Imagine for simplicity that the prior P(w|H_i) is uniform on some large interval Δ⁰w (Figure 1), so that P(w_MP|H_i) = 1/Δ⁰w; then

Occam factor = Δw / Δ⁰w,

i.e. the ratio of the posterior accessible volume of H_i's parameter space to the prior accessible volume (Gull, 1988; Jeffreys, 1939). The log of the Occam factor can be interpreted as the amount of information we gain about the model H_i when the data arrive. Typically, a complex or flexible model with many parameters, each of which is free to vary over a large range Δ⁰w, will be penalised with a larger Occam factor than a simpler model. The Occam factor also penalises models which have to be finely tuned to fit the data.
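Equation (5) can be illustrated on a one-parameter model for which the Gaussian approximation is exact: data y_i ~ N(w, σ²) with prior w ~ N(0, τ²). All numbers below are illustrative; the log evidence is assembled as best-fit log likelihood plus log Occam factor, with the posterior width taken from the curvature A:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, tau2 = 0.5, 4.0                      # noise and prior variances
y = rng.normal(1.0, np.sqrt(sigma2), size=20)
N = len(y)

A = N / sigma2 + 1.0 / tau2                  # curvature of -log posterior
w_mp = (y.sum() / sigma2) / A                # most probable parameter

def log_likelihood(w):
    return (-0.5 * N * np.log(2 * np.pi * sigma2)
            - np.sum((y - w) ** 2) / (2 * sigma2))

def log_prior(w):
    return -0.5 * np.log(2 * np.pi * tau2) - w ** 2 / (2 * tau2)

# Occam factor = prior density at w_mp times posterior width sqrt(2*pi/A).
log_occam = log_prior(w_mp) + 0.5 * np.log(2 * np.pi / A)
log_evidence = log_likelihood(w_mp) + log_occam
```

For this linear-Gaussian model the approximation coincides with the exact marginal likelihood, which makes it a convenient sanity check before applying the same recipe to curved posteriors.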
Which model achieves the greatest evidence is determined by a trade-off between minimising this natural complexity measure and minimising the data misfit.

Occam factor for several parameters

If w is k-dimensional, and if the posterior is well approximated by a gaussian, the Occam factor is given by the determinant of the gaussian's covariance matrix:

\[ P(D|H_i) \simeq P(D|w_{MP}, H_i) \times P(w_{MP}|H_i)\, (2\pi)^{k/2} \det{}^{-\frac{1}{2}} A \qquad (6) \]

where A = −∇∇ log P(w|D, H_i), the Hessian which we already evaluated when we calculated the error bars on w_MP. As the amount of data collected, N, increases, this gaussian approximation is expected to become increasingly accurate on account of the central limit theorem. Thus Bayesian model selection is a simple extension of maximum likelihood model selection: the evidence is obtained by multiplying the best fit likelihood by the Occam factor. To evaluate the Occam factor all we need is the Hessian A, if the gaussian approximation is good. Thus the Bayesian method of model comparison by evaluating the evidence is computationally no more demanding than the task of finding for each model the best fit parameters and their error bars.

2 THE EVIDENCE FOR NEURAL NETWORKS

Neural network learning procedures include a host of control parameters such as the number of hidden units and weight decay rates. These parameters are difficult to set because there is an Occam's razor problem: if we just set the parameters so as to minimise the error on the training set, we would be led to over-complex and under-regularised models which over-fit the data. Figure 2a illustrates this problem by showing the test error versus the training error of a hundred networks of varying complexity, all trained on the same interpolation problem.

Of course if we had unlimited resources, we could compare these networks by measuring the error on an unseen test set or by similar cross-validation techniques.
However these techniques may require us to devote a large amount of data to the test set, and may be computationally demanding. If there are several parameters like weight decay rates, it is preferable if they can be optimised on line. Using the Bayesian framework, it is possible for all our data to have a say in both the model fitting and the model comparison levels of inference. We can rank alternative neural network solutions by evaluating the 'evidence'. Weight decay rates can also be optimised by finding the 'most probable' weight decay rate. Alternative weight decay schemes can be compared using the evidence. The evidence also makes it possible to compare neural network solutions with other interpolation models, for example splines or radial basis functions, and to choose control parameters such as spline order or RBF kernel. The framework can be applied to classification networks as well as the interpolation networks discussed here. For details of the theoretical framework (which is due to Gull and Skilling (1989)) and for more complete discussion and bibliography, MacKay (1991) should be consulted.

2.1 THE PROBABILISTIC INTERPRETATION

Fitting a backprop network to a data set D = {x, t} often involves minimising an objective function M(w) = βE_D(w; D) + αE_W(w). The first term is the data error, for example E_D = Σ ½(y − t)², and the second term is a regulariser (weight decay term), for example E_W = Σ ½w_i². (There may be several regularisers with independent constants {α_c}; the Bayesian framework also covers that case.) A model H has three components {A, N, R}: the architecture A specifies the functional dependence of the input-output mapping on the network's parameters w. The noise model N specifies the functional form of the data error. Within the probabilistic interpretation (Tishby et al., 1989), the data error is viewed as relating to a likelihood, P(D|w, β, A, N) = exp(−βE_D)/Z_D. For example, a quadratic E_D corresponds to the assumption that the distribution of errors between the data and the true interpolant is Gaussian, with variance σ_ν² = 1/β. Lastly, the regulariser R, with associated regularisation constant α, is interpreted as specifying a prior on the parameters w, P(w|α, A, R) = exp(−αE_W)/Z_W. For example, the use of a plain quadratic regulariser corresponds to a Gaussian prior distribution for the parameters.

Given this probabilistic interpretation, interpolation with neural networks can then be decomposed into three levels of inference:

1. Fitting a regularised model:
\[ P(w|D, \alpha, \beta, H_i) = \frac{P(D|w, \beta, H_i)\, P(w|\alpha, H_i)}{P(D|\alpha, \beta, H_i)} \]

2a. Setting regularisation constants and estimating the noise level:
\[ P(\alpha, \beta|D, H_i) = \frac{P(D|\alpha, \beta, H_i)\, P(\alpha, \beta|H_i)}{P(D|H_i)} \]

2. Model comparison.

Figure 2: The evidence solves the neural networks' Occam problem. a) Test error vs. data error. Each point represents the performance of a single trained neural network on the training set and on the test set. This graph illustrates the fact that the best generalisation is not achieved by the models which fit the training data best. b) Log Evidence vs. test error.

Both levels 2a and 2 require Occam's razor. For both levels the key step is to evaluate the evidence P(D|α, β, H), which, within the quadratic approximation
For example, a quadratic ED corresponds to the assumption that the distribution of errors between the data and the true interpolant is Gaussian, with variance u~ = 1/ {3. Lastly, the regulariser 'R, with associated regularisation constant a, is interpreted as specifying a prior on the parameters w, P(wla,A, 'R) = exp( -aEw). For example, the use of a plain quadratic regulariser corresponds to a Gaussian prior distribution for the parameters. Given this probabilistic interpretation, interpolation with neural networks can then be decomposed into three levels of inference: 1 Fitting a regularised model 2a Setting regularisation constants and estimating nOIse level 2 Model comparison P( ID {3 1£.) = P(Dlw, {3, 1£j)P(wla, 1£i) w ,a, , a P(Dla,{3,1id P(a {3ID 1£-) = P(Dla,{3,1£i)P(a, {311£i) , ,a P(DI1ij) Both levels 2a and 2 require Occam's razor. For both levels the key step is to evaluate the evidence P(Dla,{3,1£), which, within the quadratic approximation 844 MacKay eo ... .. I ... I .. . 'i'~ • ... o .. J ... . . .. ,~. ' .. i . . '\ .. :, , J tao • I ... ... • • uo .. b) JOe ... ... ... .. III _.Figure 2: The evidence solves the neural networks' Occam problem a) Test error vs. data error. Each point represents the performance of a single trained neural network on the training set and on the test set. This graph illustrates the fact that the best generalisation is not achieved by the models which fit the training data best. b) Log Evidence vs. test error. around w MP , is given by: 1 k log P(Dla,,B, 1-£) = -aEW -,BE~P -210g det A-log Zw(a)-log ZD (,B) + '2 log 27r. (1) At level 2a we can find the most probable value for the regularisation constant a and noise level 1/,B by differentiating (1) with respect to a and ,B. The result is X!. 
= 2α E_W = γ  and  χ_D^2 = 2β E_D = N - γ,   (8)

where γ is 'the effective number of parameters determined by the data' (Gull, 1989),

γ = k - α Trace A^{-1} = Σ_{a=1}^{k} λ_a / (λ_a + α),   (9)

where λ_a are the eigenvalues of ∇∇βE_D in the natural basis of E_W. Each term in the sum is a number between 0 and 1 which measures how well one parameter is determined by the data rather than by the prior. The expressions (8), or approximations to them, can be used to re-estimate weight decay rates on line.

The central quantity in the evidence and in γ is the inverse Hessian A^{-1}, which describes the error bars on the parameters w. From this we can also obtain error bars on the outputs of a network (Denker and Le Cun, 1991; MacKay, 1991). These error bars are closely related to the predicted generalisation error calculated by Levin et al. (1989). In (MacKay, 1991) the practical utility of these error bars is demonstrated for both regression and classification networks.

Figure 2b shows the Bayesian 'evidence' for each of the solutions in figure 2a against the test error. It can be seen that the correlation between the evidence and the test error is extremely good. This good correlation depends on the model being well-matched to the problem; when an inconsistent weight decay scheme was used (forcing all weights to decay at the same rate), it was found that the correlation between the evidence and the test error was much poorer. Such comparisons between Bayesian and traditional methods are powerful tools for human learning.

Bayesian Model Comparison and Backprop Nets

3 RELATION TO THEORIES OF GENERALISATION

The Bayesian 'evidence' framework assesses within a well-defined hypothesis space how probable a set of alternative models are. However, what we really want to know is how well each model is expected to generalise. Empirically, the correlation between the evidence and generalisation error is surprisingly good. But a theoretical connection linking the two is not yet established.
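The on-line re-estimation that expressions (8) and (9) suggest can be sketched numerically. The fragment below is a hypothetical illustration, not MacKay's code; all names are our own, and for simplicity it holds the eigenvalues λ_a fixed while iterating, although in a full treatment they depend on β:

```python
import numpy as np

def reestimate_hyperparameters(H_data, w_mp, E_D, N, alpha=1.0, n_iter=50):
    """Fixed-point iteration of eqs. (8)-(9): compute gamma from the
    eigenvalues, then set alpha = gamma/(2 E_W) and beta = (N - gamma)/(2 E_D).
    H_data is taken to be the Hessian of beta*E_D at w_MP."""
    lam = np.linalg.eigvalsh(H_data)          # eigenvalues lambda_a of grad grad beta*E_D
    E_W = 0.5 * np.sum(w_mp ** 2)             # quadratic weight-decay regulariser
    for _ in range(n_iter):
        gamma = np.sum(lam / (lam + alpha))   # effective number of parameters, eq. (9)
        alpha = gamma / (2.0 * E_W)           # chi^2_W = 2 alpha E_W = gamma, eq. (8)
    beta = (N - gamma) / (2.0 * E_D)          # chi^2_D = 2 beta E_D = N - gamma, eq. (8)
    return alpha, beta, gamma
```

Each term λ_a/(λ_a + α) contributes nearly 1 for well-determined directions (λ_a ≫ α) and nearly 0 for prior-dominated ones, so the iteration automatically discounts poorly determined parameters when setting the weight decay rate.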
Here, a brief discussion is given of similarities and differences between the evidence and quantities arising in recent work on the prediction of generalisation error.

3.1 RELATION TO MOODY'S 'G.P.E.'

Moody's (1992) 'Generalised Prediction Error' is a generalisation of Akaike's 'F.P.E.' to non-linear regularised models. The F.P.E. is an estimator of generalisation error which can be derived without making assumptions about the distribution of errors between the data and the true interpolant, and without assuming a known class to which the true interpolant belongs. The difference between F.P.E. and G.P.E. is that the total number of parameters k in F.P.E. is replaced by an effective number of parameters, which is in fact identical to the quantity γ arising in the Bayesian analysis (9). If E_D is as defined earlier,

G.P.E. = (E_D + σ_ν^2 γ) / N.   (10)

Like the log evidence, the G.P.E. has the form of the data error plus a term that penalises complexity. However, although the same quantity γ arises in the Bayesian analysis, the Bayesian Occam factor does not have the same scaling behaviour as the G.P.E. term (see discussion below). And empirically, the G.P.E. is not always a good predictor of generalisation. The reason for this is that in the derivation of the G.P.E., it is assumed that the distribution over x values is well approximated by a sum of delta functions at the samples in the training set. This is equivalent to assuming that test samples will be drawn only at the x locations at which we have already received data. This assumption breaks down for over-parameterised and over-flexible models. An additional distinction between the G.P.E. and the evidence framework is that the G.P.E. is defined for regression problems only; the evidence can be evaluated for regression, classification and density estimation models.
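Once γ is known, eq. (10) is inexpensive to evaluate. A minimal sketch, with illustrative function and argument names of our own choosing:

```python
import numpy as np

def gpe(residuals, eigvals, alpha, sigma2):
    """Moody's G.P.E., eq. (10). residuals: y - t at the N training points;
    eigvals: eigenvalues of grad grad beta*E_D; alpha: weight decay rate;
    sigma2: estimated noise variance 1/beta."""
    residuals = np.asarray(residuals)
    eigvals = np.asarray(eigvals)
    N = len(residuals)
    E_D = 0.5 * np.sum(residuals ** 2)               # data error, as in section 2.1
    gamma = np.sum(eigvals / (eigvals + alpha))      # effective parameter count, eq. (9)
    return (E_D + sigma2 * gamma) / N                # eq. (10)
```

The same γ that appears in the evidence thus sets the size of the complexity penalty, even though, as discussed above, the two criteria scale differently.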
3.2 RELATION TO THE EFFECTIVE V-C DIMENSION

Recent work on 'structural risk minimisation' (Guyon et al., 1992) utilises empirical expressions of the form:

E_gen ~ E_D/N + c_1 (γ/N)(log(N/γ) + c_2),   (11)

where γ is the 'effective V-C dimension' of the model, and is identical to the quantity arising in (9). The constants c_1 and c_2 are determined by experiment. The structural risk theory is currently intended to be applied only to nested families of classification models (hence the absence of β: E_D is dimensionless) with monotonic effective V-C dimension, whereas the evidence can be evaluated for any models. However, it is very interesting that the scaling behaviour of this expression (11) is identical to the scaling behaviour of the log evidence (1), subject to the following assumptions. Assume that the value of the regularisation constant satisfies (8). Assume furthermore that the significant eigenvalues (λ_a > α) scale as λ_a ~ Nα/γ. (It can be confirmed that this scaling is obtained, for example, in the interpolation models consisting of a sequence of steps of independent heights, as we vary the number of steps.) Then it can be shown that the scaling of the log evidence is:

-log P(D|α, β, H) ~ β E_D^MP + ½ (γ log(N/γ) + γ).   (12)

(Readers familiar with MDL will recognise the dominant ½ γ log N term; MDL and Bayes are identical.) Thus the scaling behaviour of the log evidence is identical to the structural risk minimisation expression (11), if c_1 = ½ and c_2 = 1. I. Guyon (personal communication) has confirmed that the empirically determined values for c_1 and c_2 are indeed close to these Bayesian values. It will be interesting to try to understand and develop this relationship.

Acknowledgements

This work was supported by studentships from Caltech and SERC, UK.

References

J.S. Denker and Y. Le Cun (1991). 'Transforming neural-net output levels to probability distributions', in Advances in Neural Information Processing Systems 3, ed. R.P.
Lippmann et al., 853-859, Morgan Kaufmann.

S.F. Gull (1988). 'Bayesian inductive inference and maximum entropy', in Maximum Entropy and Bayesian Methods in Science and Engineering, vol. 1: Foundations, G.J. Erickson and C.R. Smith, eds., Kluwer.

S.F. Gull (1989). 'Developments in maximum entropy data analysis', in Maximum Entropy and Bayesian Methods, J. Skilling, ed., 53-71.

I. Guyon, V.N. Vapnik, B.E. Boser, L. Bottou and S.A. Solla (1992). 'Structural risk minimization for character recognition', this volume.

H. Jeffreys (1939). Theory of Probability, Oxford Univ. Press.

E. Levin, N. Tishby and S. Solla (1989). 'A statistical approach to learning and generalization in layered neural networks', in COLT '89: 2nd Workshop on Computational Learning Theory, 245-260.

D.J.C. MacKay (1991). 'Bayesian Methods for Adaptive Models', Ph.D. Thesis, Caltech. Also 'Bayesian interpolation', 'A practical Bayesian framework for backprop networks', 'Information-based objective functions for active data selection', to appear in Neural Computation. And 'The evidence framework applied to classification networks', submitted to Neural Computation.

J.E. Moody (1992). 'Generalization, regularization and architecture selection in nonlinear learning systems', this volume.

N. Tishby, E. Levin and S.A. Solla (1989). 'Consistent inference of probabilities in layered networks: predictions and generalization', in Proc. IJCNN, Washington.
1991
106
437
Networks for the Separation of Sources that are Superimposed and Delayed

John C. Platt, Federico Faggin
Synaptics, Inc.
2860 Zanker Road, Suite 206
San Jose, CA 95134

ABSTRACT

We have created new networks to unmix signals which have been mixed either with time delays or via filtering. We first show that a subset of the Herault-Jutten learning rules fulfills a principle of minimum output power. We then apply this principle to extensions of the Herault-Jutten network which have delays in the feedback path. Our networks perform well on real speech and music signals that have been mixed using time delays or filtering.

1 INTRODUCTION

Recently, there has been much interest in neural architectures to solve the "blind separation of signals" problem (Herault & Jutten, 1986) (Vittoz & Arreguit, 1989). The separation is called "blind," because nothing is assumed known about the frequency or phase of the signals. A concrete example of blind separation of sources is when the pure signals are sounds generated in a room and the mixed signals are the outputs of some microphones. The mixture process would model the delay of the sound to each microphone, and the mixing of the sounds at each microphone. The inputs to the neural network would be the microphone outputs, and the neural network would try to produce the pure signals.

The mixing process can take on different mathematical forms in different situations. To express these forms, we denote the pure signal i as P_i, the mixed signal i as I_i (which is the ith input to the network), and the output signal i as O_i. The simplest form to unmix is linear superposition:

I_i(t) = P_i(t) + Σ_{j≠i} M_ij(t) P_j(t).   (1)

A more realistic, but more difficult form to unmix is superposition with single delays:

I_i(t) = P_i(t) + Σ_{j≠i} M_ij(t) P_j(t - D_ij(t)).
(2)

Finally, a rather general mixing process would be superposition with causal filtering:

I_i(t) = P_i(t) + Σ_{j≠i} Σ_k M_ijk(t) P_j(t - δ_k).   (3)

Blind separation is interesting for many different reasons. The network must adapt on-line and without a supervisor, which is a challenging type of learning. One could imagine using a blind separation network to clean up an input to a speech understanding system. (Jutten & Herault, 1991) uses a blind separation network to deskew images. Finally, researchers have implemented blind separation networks using analog VLSI to yield systems which are capable of performing the separation of sources in real time (Vittoz & Arreguit, 1989) (Cohen, et al., 1992).

1.1 Previous Work

Interest in adaptive systems which perform noise cancellation dates back to the 1960s and 1970s (Widrow, et al., 1975). The first neural network to unmix on-line a linear superposition of sources was (Herault & Jutten, 1986). Further work on off-line blind separation was performed by (Cardoso, 1989). Recently, a network to unmix filtered signals was proposed in (Jutten, et al., 1991), independently of this paper.

2 PRINCIPLE OF MINIMUM OUTPUT POWER

In this section, we apply the mathematics of noise-cancelling networks (Widrow, et al., 1975) to the network in (Herault & Jutten, 1986) in order to generalize to new networks that can handle delays in the mixing process.

2.1 Noise-cancellation Networks

A noise-cancellation network tries to purify a signal which is corrupted by filtered noise (Widrow, et al., 1975). The network has access to the isolated noise signal. The interference equation is

I(t) = P(t) + Σ_j M_j N(t - δ_j).   (4)

The adaptive filter inverts the interference equation, to yield an output:

O(t) = I(t) - Σ_j C_j N(t - δ_j).   (5)

The adaptation of a noise-cancellation network relies on an elegant notion: if a signal is impure, it will have a higher power than a pure signal, because the noise power adds to the signal power.
The true pure signal has the lowest power. This minimum output power principle is used to determine adaptation laws for noise-cancellation networks. Specifically, at any time t, C_j is adjusted by taking a step that minimizes O(t)^2.

Figure 1: The network described in (Herault & Jutten, 1986). The dashed arrows represent adaptation.

2.2 The Herault-Jutten Network

The Herault-Jutten network (see Figure 1) uses a purely additive model of interference. The interference is modeled by

I_i = P_i + Σ_{j≠i} M_ij P_j.   (6)

Notice that the Herault-Jutten network solves a more general problem than previous noise-cancellation networks: the Herault-Jutten network has no access to any pure signal. In (Herault & Jutten, 1986), the authors also propose inverting the interference model:

O_i = I_i - Σ_{j≠i} C_ij O_j.   (7)

The Herault-Jutten network can be understood intuitively by assuming that the network has already adapted so that the outputs are the pure signals (O_j = P_j). Each connection C_ij subtracts just the right amount of the pure signal P_j from the input I_i to yield the pure signal P_i. So, the Herault-Jutten network will produce pure signals if C_ij = M_ij. In (Herault & Jutten, 1986), the authors propose a very general adaptation rule for the C_ij:

ΔC_ij = α f(O_i) g(O_j)   (8)

for some non-linear functions f and g. (Sorouchyari, 1991) proves that the network converges for f(x) = x^3. In this paper, we propose that the same elegant minimization principle that governs the noise-cancellation networks can be used to justify a subset of Herault-Jutten learning algorithms. Let g(x) = x and f(x) be a derivative of some convex function h(x), with a minimum at x = 0. In this case, each output of the Herault-Jutten network independently minimizes a function h(x). A Herault-Jutten network can be made by setting h(x) = x^2.
Unfortunately, this network will not converge, because the update rules for two connections C_ij and C_ji are identical:

ΔC_ij = α O_i O_j = ΔC_ji.   (9)

Under this condition, the two parameters C_ij and C_ji will track one another and not converge to the correct answer. Therefore, a non-linear adaptation rule is needed to break the symmetry between the outputs. The next two sections of the paper describe how the minimum output power principle can be applied to generalizations of the Herault-Jutten architecture.

3 NETWORK FOR UNMIXING DELAYED SIGNALS

Figure 2: Our network for unmixing signals mixed with single delays. The adjustable delay in the feedback path avoids the degeneracy in the learning rule. The dashed arrows represent adaptation: the source of the arrow is the source of the error used by gradient descent.

Our new network is an extension of the Herault-Jutten network (see Figure 2). We assume that the interference is delayed by a certain amount:

I_i(t) = P_i(t) + Σ_{j≠i} M_ij P_j(t - D_ij(t)).   (10)

Compare this to equation (6): our network can handle delayed interference, while the Herault-Jutten network cannot. We introduce an adjustable delay in the feedback path in order to cancel the delay of the interference:

O_i(t) = I_i(t) - Σ_{j≠i} C_ij O_j(t - d_ij(t)).   (11)

We apply the minimum output power principle to adapt the mixing coefficients C_ij and the delays d_ij:

ΔC_ij(t) = α O_i(t) O_j(t - d_ij(t)),
Δd_ij(t) = -β C_ij(t) O_i(t) (dO_j/dt)(t - d_ij(t)).   (12)

By introducing a delay in the feedback, we prevent degeneracy in the learning rule, hence we can use a quadratic power to adjust the coefficients.

Figure 3: The results of the network applied to a speech/music superposition. These curves are short-time averages of the power of the signals. The upper curve shows the power of the pure speech signal.
The lower curve shows the power of the difference between the speech output of the network and the pure speech signal. The gap between the curves is the amount by which the network attenuates the interference between the music and speech: the adaptation of the network tries to drive the lower curve to zero. As you can see, the network quickly isolates the pure speech signal.

For a test of our network, we took two signals, one speech and one music, and mixed them together via software to form two new signals: the first being speech plus delayed, attenuated music; the second being music plus delayed, attenuated speech. Figure 3 shows the results of our network applied to these two signals: the interference was attenuated by approximately 22 dB. One output of the network sounds like speech, with superimposed music which quickly fades away. The other output of the network sounds like music, with a superimposed speech signal which quickly fades away.

Our network can also be extended to more than two sources, like the Herault-Jutten network. If the network tries to separate S sources, it requires S non-identical inputs. Each output connects to one input, and a delayed version of each of the other outputs, for a total of 2S(S - 1) adaptive coefficients.

4 NETWORK FOR UNMIXING FILTERED SIGNALS

Figure 4: A network to unmix signals that have been mixed via filtering. The filters in the feedback path are adjusted to independently minimize the power h(O_i) of each output.

For the mixing process that involves filtering,

I_i(t) = P_i(t) + Σ_{j≠i} Σ_k M_ijk P_j(t - δ_k),   (13)

we put filters in the feedback path of each output:

O_i(t) = I_i(t) - Σ_{j≠i} Σ_k C_ijk O_j(t - δ_k).   (14)

(Jutten, et al., 1991) also independently developed this architecture. We can use the principle of minimum output power to develop a learning rule for this architecture:

ΔC_ijk(t) = α h'(O_i(t)) O_j(t - δ_k)   (15)

for some convex function h. (Jutten, et
al., 1991) suggests using an adaptation rule that is equivalent to choosing h(x) = x^4. Interestingly, neither the choice of h(x) = x^2 nor h(x) = x^4 converges to the correct solution. For both h(x) = x^2 and h(x) = x^4, if the coefficients start at the correct solution, they stay there. However, if the coefficients start at zero, they converge to a solution that is only roughly correct (see Figure 5). These experiments show that the learning algorithm has multiple stable states.

Figure 5: The coefficients for one filter in the feedback path of the network, for three choices of h(x): absolute value, square, and fourth power (horizontal axis: coefficient number). The weights were initialized to zero. Two different speech/music mixtures were applied to the network. The solid line indicates the correct solution for the coefficients. When minimizing either h(x) = x^2 or h(x) = x^4, the network converges to an incorrect solution. Minimizing h(x) = |x| seems to work well.

Experimentally, the spurious stable states seem to perform roughly as well as the true answer. To account for these multiple stable states, we came up with a conjecture: that the different minimizations performed by each output fought against one another and created the multiple stable states. Optimization theory suggests using an exact penalty method to avoid fighting between multiple terms in a single optimization criterion (Gill, 1981). The exact penalty method minimizes a function h(x) that has a non-zero derivative for x close to 0. We tried a simple exact penalty method of h(x) = |x|, and it empirically converged to the correct solution (see Figure 5). The adaptation rule is then

ΔC_ijk(t) = α sgn(O_i(t)) O_j(t - δ_k).   (16)

In this case, the non-linearity of the adaptation rule seems to be important for the network to converge to the true answer. For a speech/music mixture, we achieved a signal-to-noise ratio of 20 dB using the update rule (16).
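As an illustration of eqs. (14) and (16), the following sketch implements a two-source version of the feedback-filter network. It is hypothetical code, not the authors' implementation; the unit delays δ_k = k (k = 1 ... K), the tap count K, the learning rate lr, and all names are assumptions made for concreteness:

```python
import numpy as np

def separate(I, K=8, lr=1e-3, C0=None):
    """I: array of shape (2, T) holding the two mixed signals.
    Returns the two network outputs after one on-line pass."""
    T = I.shape[1]
    # C[i, j, k]: feedback tap of delay k+1 from output j to output i, eq. (14)
    C = np.zeros((2, 2, K)) if C0 is None else C0.astype(float)
    O = np.zeros((2, T))
    for t in range(T):
        for i in range(2):
            j = 1 - i
            past = O[j, max(0, t - K):t][::-1]        # O_j(t-1), O_j(t-2), ...
            past = np.pad(past, (0, K - len(past)))   # zeros before t = 0
            O[i, t] = I[i, t] - C[i, j] @ past        # feedback inversion, eq. (14)
            C[i, j] += lr * np.sign(O[i, t]) * past   # exact-penalty update, eq. (16)
    return O
```

With lr = 0 and the taps preset to the true mixing coefficients, the feedback structure inverts a strictly causal mixture exactly; with taps starting at zero and a small positive lr, the sign update of eq. (16) adapts them on line.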
5 FUTURE WORK

The networks described in the last two sections were found to converge empirically. In the future, proving conditions for convergence would be useful. There are some known pathological cases which cause these networks not to converge. For example, using white noise as the pure signals for the network in section 3 causes it to fail, because there is no sensible way for the network to change the delays.

More exploration of the choice of optimization function needs to be performed in the future. The work in section 4 is just a first step which illustrates the possible usefulness of the absolute value function. Another avenue of future work is to try to express the blind separation problem as a global optimization problem, perhaps by trying to minimize the mutual information between the outputs (Feinstein, Becker, personal communication).

6 CONCLUSIONS

We have found that the minimum output power principle can generate a subset of the Herault-Jutten network learning rules. We use this principle to adapt extensions of the Herault-Jutten network, which have delays in the feedback path. These new networks unmix signals which have been mixed with single delays or via filtering.

Acknowledgements

We would like to thank Kannan Parthasarathy for his assistance in some of the experiments. We would also like to thank David Feinstein, Sue Becker, and David MacKay for useful discussions.

References

Cardoso, J. F., (1989) "Blind Identification of Independent Components," Proceedings of the Workshop on Higher-Order Spectral Analysis, Vail, Colorado, pp. 157-160.

Cohen, M. H., Pouliquen, P. O., Andreou, A. G., (1992) "Analog VLSI Implementation of an Auto-Adaptive Network for Real-Time Separation of Independent Signals," Advances in Neural Information Processing Systems 4, Morgan Kaufmann, San Mateo, CA.

Gill, P. E., Murray, W., Wright, M.
H., (1981) Practical Optimization, Academic Press, London.

Herault, J., Jutten, C., (1986) "Space or Time Adaptive Signal Processing by Neural Network Models," Neural Networks for Computing, AIP Conference Proceedings 151, pp. 207-211, Snowbird, Utah.

Jutten, C., Thi, L. N., Dijkstra, E., Vittoz, E., Caelen, J., (1991) "Blind Separation of Sources: an Algorithm for Separation of Convolutive Mixtures," Proc. Intl. Workshop on High Order Statistics, Chamrousse, France, July 1991.

Jutten, C., Herault, J., (1991) "Blind Separation of Sources, Part I: An Adaptive Algorithm Based on Neuromimetic Architecture," Signal Processing, vol. 24, pp. 1-10.

Sorouchyari, E., (1991) "Blind Separation of Sources, Part III: Stability Analysis," Signal Processing, vol. 24, pp. 21-29.

Vittoz, E. A., Arreguit, X., (1989) "CMOS Integration of Herault-Jutten Cells for Separation of Sources," Proc. Workshop on Analog VLSI and Neural Systems, Portland, Oregon, May 1989.

Widrow, B., Glover, J., McCool, J., Kaunitz, J., Williams, C., Hearn, R., Zeidler, J., Dong, E., Goodlin, R., (1975) "Adaptive Noise Cancelling: Principles and Applications," Proc. IEEE, vol. 63, no. 12, pp. 1692-1716.

PART XI

IMPLEMENTATION
1991
107
438
Burst Synchronization Without Frequency-Locking in a Completely Solvable Network Model

Heinz Schuster
Institut für theoretische Physik
Universität Kiel
Olshausenstraße 40
2300 Kiel 1, Germany

Christof Koch
Computation and Neural Systems Program
California Institute of Technology
Pasadena, California 91125, USA

Abstract

The dynamic behavior of a network model consisting of all-to-all excitatory coupled binary neurons with global inhibition is studied analytically and numerically. We prove that for random input signals, the output of the network consists of synchronized bursts with apparently random intermissions of noisy activity. Our results suggest that synchronous bursts can be generated by a simple neuronal architecture which amplifies incoming coincident signals. This synchronization process is accompanied by dampened oscillations which, by themselves, however, do not play any constructive role in this process and can therefore be considered an epiphenomenon.

1 INTRODUCTION

Recently, synchronization phenomena in neural networks have attracted considerable attention. Gray et al. (1989, 1990) as well as Eckhorn et al. (1988) provided electrophysiological evidence that neurons in the visual cortex of cats discharge in a semi-synchronous, oscillatory manner in the 40 Hz range and that the firing activity of neurons up to 10 mm away is phase-locked with a mean phase-shift of less than 3 msec. It has been proposed that this phase synchronization can solve the binding problem for figure-ground segregation (von der Malsburg and Schneider, 1986) and underly visual attention and awareness (Crick and Koch, 1990). A number of theoretical explanations based on coupled (relaxation) oscillator models have been proposed for burst synchronization (Sompolinsky et al., 1990).
The crucial issue of phase synchronization has also recently been addressed by Bush and Douglas (1991), who simulated the dynamics of a network consisting of bursty, layer V pyramidal cells coupled to a common pool of basket cells inhibiting all pyramidal cells.¹ Bush and Douglas found that excitatory interactions between the pyramidal cells increase the total neural activity, as expected, and that global inhibition leads to synchronized bursts with random intermissions. These population bursts appear to occur in a random manner in their model. The basic mechanism for the observed burst synchronization is hidden in the numerous anatomical and biophysical details of their model. These, and the related observation that to date no strong oscillations have been recorded in the neuronal activity in visual cortex of awake monkeys, prompted us to investigate how phase synchronization can occur in the absence of frequency locking.

2 A COINCIDENCE NETWORK

We consider n excitatory coupled binary McCulloch-Pitts (1943) neurons whose output x_i^{t+1} ∈ {0, 1} at time t + 1 is given by:

x_i^{t+1} = σ[(w/n) Σ_j x_j^t + e_i^t - θ].   (1)

Here w/n is the normalized excitatory all-to-all synaptic coupling, e_i^t represents the external binary input and σ[z] is the Heaviside step function, such that σ[z] = 1 for z > 0 and 0 elsewhere. Each neuron has the same dynamic threshold θ > 0. Next we introduce the fraction m^t of neurons which fire simultaneously at time t:

m^t = (1/n) Σ_i x_i^t.   (2)

In general, 0 ≤ m^t ≤ 1; only if every neuron is active at time t do we have m^t = 1. By summing eq. (1) we obtain the following equation of motion for our simple network:

m^{t+1} = (1/n) Σ_i σ[w m^t + e_i^t - θ].   (3)

The behavior of this (n+1)-state automaton is fully described by the phase-state diagram of Figure 1. If θ > 1 and θ/w > 1, the output of the network m^t will vary with the input until at some time t', m^{t'} = 0.
Since the threshold θ is always larger than the input, the network will remain in this state for all subsequent times. If θ < 1 and θ/w < 1, the network will drift until it comes to the state m^{t'} = 1. Since w m^t is subsequently at all times larger than the threshold, the network remains latched at m^t = 1. If θ > 1, but θ/w < 1, the network can latch in either the m^t = 0 or the m^t = 1 state and will remain there indefinitely. Lastly, if θ < 1, but θ/w > 1, the threshold is by itself not large enough to keep the network latched into the m^t = 1 state. Defining the normalized input activity as

s^t = (1/n) Σ_i e_i^t,   (4)

with 0 ≤ s^t ≤ 1, we see that in this part of phase space m^{t+1} = s^t, and the output activity faithfully reflects the input activity at the previous time step.

¹This model bears similarities to Wilson and Bower's (1992) model describing the origin of phase-locking in olfactory cortex.

Figure 1: Phase diagram for the network described by eq. (3). Different regions correspond to different stationary output states m^t in the long time limit.

Let us introduce an adaptive time-dependent threshold, θ^t. We assume that θ^t remains at its value θ < 1 as long as the total activity remains less than 1. If, however, m^t = 1, we increase θ^t to a value larger than w + 1. This has the effect of resetting the activity of the entire network to 0 in the next time step, i.e., m^{t+1} = (1/n) Σ_i σ(w + e_i^t - (w + 1 + ε)) = 0. The threshold will then automatically reset itself to its old value:

m^{t+1} = (1/n) Σ_i σ[w m^t + e_i^t - θ(m^t)],  with  θ(m^t) = θ for m^t < 1  and  θ(m^t) = w + 1 + ε for m^t = 1.   (5)

Therefore, we are operating in the topmost left part of Fig. 1 but preventing the network from latching to m^t = 1 by resetting it. Such a dynamic threshold bears some similarities to the models of Horn and Usher (1990) and others, but is much simpler.
Note that θ(m^t) exactly mimics the effect of a common inhibitory neuron which is only excited if all neurons fire simultaneously. Our network now acts as a coincidence detector, such that all neurons will "fire" at time t + 2, i.e., m^{t+2} = 1, if at least k neurons receive a "1" as input at time t. Here k is the smallest integer with k > θ n/w. The threshold θ(m^t) is then transiently increased, the network is reset, and the game begins anew. In other words, the network detects coincidences and signals this by a synchronized burst of neuronal activity followed by a brief respite of activity (Figure 2).

Figure 2: Time dependence of the fraction m^t of output neurons which fire simultaneously, compared to the corresponding fraction of input signals s^t, for n = 20 and θ/w = 0.225. The input variables e_i^t are independently distributed according to P(e_i^t) = p δ(e_i^t - 1) + (1 - p) δ(e_i^t) with p = 0.1. If more than five input signals with e_i^t = 1 coincide, the entire population will fire in synchrony two time steps later, i.e., m^{t+2} = 1. Note the "random" appearance of the interburst intervals.

The time dependence of m^t given by eq. (5) can be written as:

m^{t+1} = s^t for 0 ≤ m^t < θ/w,   m^{t+1} = 1 for θ/w ≤ m^t < 1,   m^{t+1} = 0 for m^t = 1.   (6)

By introducing functions A(m), B(m), C(m) which take on the value 1 in the intervals specified for m = m^t in eq. (6), respectively, and zero elsewhere, m^{t+1} can be written as:

m^{t+1} = s^t A(m^t) + 1 · B(m^t) + 0 · C(m^t).   (7)

This equation can be iterated, yielding an explicit expression for m^t as a function of the external inputs s^{t-1}, ..., s^0 and the initial value m^0:

(A(m^t), B(m^t), C(m^t))^T = M(s^{t-1}) ··· M(s^0) (A(m^0), B(m^0), C(m^0))^T,   (8)

with the matrix

        ( A(s)  0  1 )
M(s) =  ( B(s)  0  0 )
        ( C(s)  1  0 )

Eq. (8) shows that the dynamics of the network can be solved explicitly, by iteratively applying M to the initial network configuration.
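Eqs. (1)-(5) are simple enough to simulate directly. The following sketch is our illustration, not the authors' code: the parameters follow Figure 2 (n = 20, θ/w = 0.225, p = 0.1), and the reset value w + 1.1 is one arbitrary instance of a threshold larger than w + 1. A second helper estimates the autocovariance of the resulting burst sequence, the quantity analysed in closed form in section 3:

```python
import numpy as np

def simulate_bursts(n=20, w=4.0, theta=0.9, p=0.1, T=200, seed=1):
    """Simulate the burst fraction m^t of eqs. (1)-(5)."""
    rng = np.random.default_rng(seed)
    m = np.zeros(T)                                   # fraction of firing neurons, eq. (2)
    for t in range(1, T):
        e = rng.random(n) < p                         # binary inputs e_i^t, P(e_i^t = 1) = p
        th = w + 1.1 if m[t - 1] == 1.0 else theta    # dynamic threshold theta(m^t), eq. (5)
        m[t] = np.mean(w * m[t - 1] + e - th > 0.0)   # Heaviside update, eq. (1)
    return m

def autocovariance(m, max_lag=10):
    """Biased sample estimate of the autocovariance of a 1-D series."""
    m = np.asarray(m, dtype=float) - np.mean(m)
    T = len(m)
    return np.array([m[:T - k] @ m[k:] / T for k in range(max_lag + 1)])
```

In the simulated sequence, every full burst (m^t = 1) is followed by silence at the next step, and any m^t ≥ θ/w triggers a full burst, exactly as the coincidence-detection argument above predicts.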
3 DISTRIBUTION OF BURSTS AND TIME CORRELATIONS

The synchronous activity at time t depends on the specific realization of the input signals at different times (eq. 8). In order to get rid of this ambiguity we resort to averaged quantities, where averages are understood over the distribution P{s^t} of inputs s^t = (1/n) Σ_{i=1}^{n} e_i^t. A very useful averaged quantity is the probability P^t(m), describing the fraction m of simultaneously firing neurons at time t. P^t(m) is related to the probability distribution P{s^t} via:

P^t(m) = ⟨δ[m - m^t{s^{t-1}, ..., s^0}]⟩,   (9)

where ⟨...⟩ denotes the average with respect to P{s^t} and m^t{s^{t-1}, ..., s^0} is given by eq. (8). If the input signals e_i^t are uncorrelated in time, m^{t+1} depends according to eq. (7) only on m^t, and the time evolution of P^t(m) can be described by the Chapman-Kolmogorov equation. We then find:

P^t(m) = P^∞(m) + [P^0(m) - P^∞(m)] · f(t),   (10)

where

P^∞(m) = (1/(1 + 2η)) [P̂(m) + η δ(m - 1) + η δ(m)]   (11)

is the limiting distribution which evolves from the initial distribution P^0(m) for large times, because the factor f(t) = η^{t/2} cos(Ω t), where Ω = π - arctan[√(4η - η^2)/η], decays exponentially with time, and η = ∫_0^1 P̂(s) B(s) ds = ∫_{θ/w}^1 P̂(s) ds. Notice that 0 < η < 1 holds (for more details, see Koch and Schuster, 1992).

The limiting equilibrium distribution P^∞(m) evolves from the initial distribution P^0(m) in an oscillatory fashion, with the building up of two delta-functions at m = 1 and m = 0 at the expense of P^0(m). This signals the emergence of synchronous bursts, i.e., m^t = 1, which are always followed at the next time-step by zero activity, i.e., m^{t+1} = 0 (see also Fig. 2). The equilibrium value for the mean fraction of synchronized neurons is

⟨m⟩_∞ = (⟨s⟩ + η)/(1 + 2η),   (12)

which is larger than the initial value ⟨s⟩ = ∫_0^1 ds P̂(s) s, for ⟨s⟩ < ½, indicating an increase in synchronized bursting activity.

It is interesting to ask what type of time correlations will develop in the output of our network if it is stimulated with uncorrelated noise e_i^t. The autocovariance function

C(τ) = ⟨m^t m^{t+τ}⟩ - ⟨m^t⟩^2   (13)

can be computed directly, since m^t and P^∞(m) are known explicitly. We find

C(τ) = δ_{τ,0} C_0 + (1 - δ_{τ,0}) C_1 η^{|τ|/2} cos(Ωτ + φ),   (14)
The autocovariance function is can be computed directly since mt and pOO(m) are known explicitly. We find C( r) = 6T ,oCO + (1 - 6T,0)Cl17ITI/2 cos(Or + cp) (14) 122 Schuster and Koch 1 O. 75 ((1:) 0. 6 0.25 -0.25 0 35 to -0. 5 -0.75 -1 1 0.75 C( 1:) 0.5 0. 26 3 t 5 6 7 8 -0.25 -0.5 lUTe -0.75 -1 Figure 3: Time dependence of the auto-covariance function G( r) for two different 1 A values of.,., = fe/w dsP(s). The top figure corresponds to .,., = 0.8 and a period T = 3.09, while the bottom correlation function is for.,., = 0.2 with an associated T = 3.50. Note the different time-scales. with br,o the Kroneker symbol (br,o = 1 for r = 0 and 0 else). Figure 3 shows that G(r) consists of two parts. A delta peak at r = 0 which reflects random uncorrelated bursting and an oscillatory decaying part which indicates correlations in the output. The period of the oscillations, T = 21r/rl, varies monotonically between 3 < T < 4 as O/w moves from zero to one. Since.,., is given by fe1/w P(s)ds, we see that the strengths of these oscillations increases as the excitatory coupling w increases. The emergence of periodic correlations can be understood in the limit 1 A O/w --+ 0, where the period T becomes three (and.,., = fo P(s)ds = 1), because according to eq. (6), mt = 0 is followed by mt+l = st which leads for O/w --+ 0 always to mt+2 = 1 followed by mt+3 = O. In other words, the temporal dynamics of mt has the form OsI10s41Os7 1Os1olO.... In the opposite case of O/w --+ 1, .,., converges to 0 and the autocovariance function G( r) essentially only contains the peak at r = O. Thus, the output of the network ranges from completely uncorrelated noise for O/w ::::: 1 to correlated periodic bursts for ~ --+ O. The power spectrum of the system is a broad Lorentzian centered at the oscillation frequency, superimposed onto a constant background corresponding to uncorrelated neural activity. It is important to discuss in this context the effect of the size n of the network. 
If the input variables ξ_i^t are distributed independently in time and space with probabilities p_i(ξ_i), then the distribution ρ(s) has a width which decreases as 1/√n as n → ∞. Therefore, in a large system, η = ∫_{θ/w}^1 ρ(s) ds is either 0 if θ/w > ⟨s⟩ or 1 if θ/w < ⟨s⟩, where ⟨s⟩ is the mean value of s, which coincides for n → ∞ with the maximum of ρ(s).
Burst Synchronization without Frequency Locking in a Completely Solvable Network Model
If η = 0 the correlation function is a constant according to eq. (14), while the system will exhibit undamped oscillations with period 3 for η = 1. Therefore, the irregularity of the burst intervals, as shown, for instance, in Fig. 2, is for independent ξ_i^t a finite-size effect. Such synchronized dephasing due to finite size has been reported by Sompolinsky et al. (1989). However, for biologically realistic correlated inputs ξ_i^t, the width of ρ(s) can remain finite for n ≫ 1. For example, if the inputs ξ_1^t, …, ξ_n^t can be grouped into q correlated sets of n/q inputs each, with finite q, then the width of ρ(s) scales like 1/√q. Our model, which then effectively corresponds to a situation with a finite number q of inputs, leads in this case to irregular bursts which mirror and amplify the correlations present in the input signals, with an oscillatory component superimposed due to the dynamical threshold. 4 CONCLUSIONS AND DISCUSSION We here suggest a mechanism for burst synchronization which is based on the fact that excitatorily coupled neurons fire in synchrony whenever a sufficient number of input signals coincide. In our model, common inhibition shuts down the activity after each burst, making the whole process repeatable, without entraining any signals. It is rather satisfying to us that our simple model shows qualitatively similar dynamic behavior to the much more detailed biophysical simulations of Bush and Douglas (1991).
In both models, all-to-all excitatory coupling leads, together with common inhibition, to burst synchronization without frequency locking. In our analysis we updated all neurons in parallel. The same model has been investigated numerically for serial (asynchronous) updating, leading to qualitatively similar results. The output of our network develops oscillatory correlations whose range and amplitude increase as the excitatory coupling is strengthened. However, these oscillations do not depend on the presence of any neuronal oscillators, as in our earlier models (e.g., Schuster and Wagner, 1990; Niebur et al., 1991). The period of the oscillations reflects essentially the delay between the inhibitory response and the excitatory stimulus and varies only little with the amplitude of the excitatory coupling and the threshold. The crucial role of inhibitory interneurons in controlling the 40 Hz neuronal oscillations has been emphasized by Wilson and Bower (1992) in their simulations of olfactory and visual cortex. Our model shows complete synchronization, in the sense that all neurons fire at the same time. This suggests that the occurrence of tightly synchronized firing activity across neurons is more important for feature linking and binding than the locking of oscillatory frequencies. Since the specific statistics of the input noise is, via coincidence detection, mirrored in the burst statistics, we speculate that our network, acting as an amplifier for the input noise, can play an important role in any mechanism for feature linking that exploits common noise correlations of different input signals. Acknowledgements We thank R. Douglas for stimulating discussions and for inspiring us to think about this problem. Our collaboration was supported by the Stiftung Volkswagenwerk. The research of C.K. is supported by the National Science Foundation, the James McDonnell Foundation, and the Air Force Office of Scientific Research. References Bush, P.C.
and Douglas, R.J. "Synchronization of bursting action potential discharge in a model network of neocortical neurons." Neural Computation 3: 19-30, 1991. Crick, F. and Koch, C. "Towards a neurobiological theory of consciousness." Seminars Neurosci. 2: 263-275, 1990. Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M. and Reitboeck, H.J. "Coherent oscillations: a mechanism of feature linking in the visual cortex?" Biol. Cybern. 60: 121-130, 1988. Gray, C.M., Engel, A.K., Konig, P. and Singer, W. "Stimulus-dependent neuronal oscillations in cat visual cortex: Receptive field properties and feature dependence." Eur. J. Neurosci. 2: 607-619, 1990. Gray, C.M., Konig, P., Engel, A.K. and Singer, W. "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties." Nature 338: 334-337, 1989. Horn, D. and Usher, M. "Excitatory-inhibitory networks with dynamical thresholds." Int. J. of Neural Systems 1: 249-257, 1990. Koch, C. and Schuster, H. "A simple network showing burst synchronization without frequency locking." Neural Computation, in press. McCulloch, W.S. and Pitts, W.A. "A logical calculus of the ideas immanent in neural nets." Bull. Math. Biophys. 5: 115-137, 1943. Niebur, E., Schuster, H.G., Kammen, D.M. and Koch, C. "Oscillator-phase coupling for different two-dimensional network connectivities." Phys. Rev. A, in press. Schuster, H.G. and Wagner, P. "A model for neuronal oscillations in the visual cortex: I. Mean-field theory and the derivation of the phase equations." Biol. Cybern. 64: 77-82, 1990. Sompolinsky, H., Golomb, D. and Kleinfeld, D. "Global processing of visual stimuli in a neural network of coupled oscillators." Proc. Natl. Acad. Sci. USA 87: 7200-7204, 1989. von der Malsburg, C. and Schneider, W. "A neural cocktail-party processor." Biol. Cybern. 54: 29-40, 1986. Wilson, M.A. and Bower, J.M.
"Cortical oscillations and temporal interactions in a computer simulation of piriform cortex." J. Neurophysiol., in press.
Incrementally Learning Time-varying Half-planes Anthony Kuh* Dept. of Electrical Engineering University of Hawaii at Manoa Honolulu, HI 96822 Thomas Petsche† Siemens Corporate Research 755 College Road East Princeton, NJ 08540 Ronald L. Rivest‡ Laboratory for Computer Science MIT Cambridge, MA 02139 Abstract We present a distribution-free model for incremental learning when concepts vary with time. Concepts are caused to change by an adversary while an incremental learning algorithm attempts to track the changing concepts by minimizing the error between the current target concept and the hypothesis. For a single half-plane and the intersection of two half-planes, we show that the average mistake rate depends on the maximum rate at which an adversary can modify the concept. These theoretical predictions are verified with simulations of several learning algorithms including back propagation. 1 INTRODUCTION The goal of our research is to better understand the problem of learning when concepts are allowed to change over time. For a dichotomy, concept drift means that the classification function changes over time. We want to extend the theoretical analyses of learning to include time-varying concepts; to explore the behavior of current learning algorithms in the face of concept drift; and to devise tracking algorithms to better handle concept drift. In this paper, we briefly describe our theoretical model and then present the results of simulations in which several tracking algorithms, including an on-line version of back-propagation, are applied to time-varying half-spaces. *kuh@wiliki.eng.hawaii.edu †petsche@learning.siemens.com ‡rivest@theory.lcs.mit.edu For many interesting real world applications, the concept to be learned or estimated is not static, i.e., it can change over time.
For example, a speaker's voice may change due to fatigue, illness, stress or background noise (Galletti and Abbott, 1989), as can handwriting. The output of a sensor may drift as the components age or as the temperature changes. In control applications, the behavior of a plant may change over time and require incremental modifications to the model. Haussler et al. (1987) and Littlestone (1989) have derived bounds on the number of mistakes an on-line learning algorithm will make while learning any concept in a given concept class. However, in that and most other learning theory research, the concept is assumed to be fixed. Helmbold and Long (1991) consider the problem of concept drift, but their results apply to memory-based tracking algorithms while ours apply to incremental algorithms. In addition, we consider different types of adversaries and use different methods of analysis. 2 DEFINITIONS We use much the same notation as most learning theory, but we augment many symbols with a subscript to denote time. As usual, X is the instance space and x_t is an instance drawn at time t according to a fixed, arbitrary distribution P_X. The function c_t : X → {0, 1} is the active concept at time t; that is, at time t any instance is labeled according to c_t. The label of the instance is a_t = c_t(x_t). Each active concept c_t is a member of the concept class C. A sequence of active concepts is denoted c. At any time t, the tracker uses an algorithm L to generate a hypothesis ĉ_t of the active concept. We use a symmetric distance function to measure the difference between two concepts: d(c, c′) = P_X[x : c(x) ≠ c′(x)]. As we alluded to in the introduction, we distinguish between two types of tracking algorithms. A memory-based tracker stores the most recent m examples and chooses a hypothesis based on those stored examples.
Helmbold and Long (1991), for example, use an algorithm that chooses as the hypothesis the concept that minimizes the number of disagreements with the labels of the stored examples. An incremental tracker uses only the previous hypothesis and the most recent examples to form the new hypothesis. In what follows, we focus on incremental trackers. The task for a tracking algorithm is, at each iteration t, to form a "good" estimate ĉ_t of the active concept c_t using the sequence of previous examples. Here "good" means that the probability of a disagreement between the label predicted by the tracker and the actual label is small. In the time-invariant case, this would mean that the tracker would incrementally improve its hypothesis as it collects more examples. In the time-varying case, however, we introduce an adversary whose task is to change the active concept at each iteration. Given the existence of a tracker and an adversary, each iteration of the tracking problem consists of five steps: (1) the adversary chooses the active concept c_t; (2) the tracker is given an unlabeled instance x_t, chosen randomly according to P_X; (3) the tracker predicts a label using the current hypothesis: â_t = ĉ_{t−1}(x_t); (4) the tracker is given the correct label a_t = c_t(x_t); (5) the tracker forms a new hypothesis: ĉ_t = L(ĉ_{t−1}, (x_t, a_t)). It is clear that an unrestricted adversary can always choose a concept sequence (a sequence of active concepts) that the tracker cannot track. Therefore, it is necessary to restrict the changes that the adversary can induce. In this paper, we require that two subsequent concepts differ by no more than γ, that is, d(c_t, c_{t−1}) ≤ γ for all t. We define the restricted concept sequence space C_γ = {c : c_t ∈ C, d(c_t, c_{t+1}) ≤ γ}.
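The distance d(c, c′) used to constrain the adversary can be estimated by straightforward Monte Carlo sampling. The sketch below is illustrative only; it assumes the half-plane-through-the-origin setting analyzed later, where P_X is uniform on the unit circle, so d equals the angle between the two normal vectors divided by π.

```python
import numpy as np

rng = np.random.default_rng(0)

def halfplane(w):
    # Concept c(x) = 1 if w . x >= 0, else 0
    return lambda X: (X @ w >= 0).astype(int)

def distance(c1, c2, samples=200_000):
    # Monte Carlo estimate of d(c, c') = P_X[x : c(x) != c'(x)],
    # with P_X uniform on the unit circle
    ang = rng.uniform(0.0, 2.0 * np.pi, samples)
    X = np.column_stack([np.cos(ang), np.sin(ang)])
    return float(np.mean(c1(X) != c2(X)))

theta = 0.3  # angle between the two normal vectors, in radians
c1 = halfplane(np.array([1.0, 0.0]))
c2 = halfplane(np.array([np.cos(theta), np.sin(theta)]))
print(distance(c1, c2))  # close to theta/pi, about 0.095
```

This is why rotating a half-plane's normal vector by πγ radians (as the adversaries below do) produces exactly d(c_t, c_{t−1}) = γ.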
In the following, we are concerned with two types of adversaries: a benign adversary, which causes changes that are independent of the hypothesis, and a greedy adversary, which always chooses a change that will maximize d(c_t, ĉ_{t−1}), constrained by the upper bound. Since we have restricted the adversary, it seems only fair to restrict the tracker too. We require that a tracking algorithm be: deterministic, i.e., that the process generating the hypotheses be deterministic; prudent, i.e., that the label predicted for an instance be a deterministic function of the current hypothesis: â_t = ĉ_{t−1}(x_t); and conservative, i.e., that the hypothesis is modified only when an example is mislabeled. The restriction that a tracker be conservative rules out algorithms which attempt to predict the adversary's movements and is the most restrictive of the three. On the other hand, when the tracker does update its hypothesis, there are no restrictions on d(ĉ_t, ĉ_{t−1}). To measure performance, we focus on the mistake rate of the tracker. A mistake occurs when the tracker mislabels an instance, i.e., whenever ĉ_{t−1}(x_t) ≠ c_t(x_t). For convenience, we define a mistake indicator function M(x_t, c_t, ĉ_{t−1}), which is 1 if ĉ_{t−1}(x_t) ≠ c_t(x_t) and 0 otherwise. Note that if a mistake occurs, it occurs before the hypothesis is updated: a conservative tracker is always a step behind the adversary. We are interested in the asymptotic mistake rate, μ = lim inf_{t→∞} (1/t) Σ_{τ=0}^{t} M(x_τ, c_τ, ĉ_{τ−1}). Following Helmbold and Long (1991), we say that an algorithm (μ, γ)-tracks a sequence space C if, for all c ∈ C_γ and all drift rates γ′ not greater than γ, the mistake rate μ′ is at most μ. We are interested in bounding the asymptotic mistake rate of a tracking algorithm based on the concept class and the adversary. To derive a lower bound on the mistake rate, we hypothesize the existence of a perfect conservative tracker, i.e., one that is always able to guess the correct concept each time it makes a mistake.
We say that such a tracker has complete side information (CSI). No conservative tracker can do better than one with CSI. Thus, the mistake rate for a tracker with CSI is a lower bound on the mistake rate achievable by any conservative tracker. To upper bound the mistake rate, it is necessary that we hypothesize a particular tracking algorithm when no side information (NSI) is available, that is, when the tracker only knows it mislabeled an instance and nothing else. In our analysis, we study a simple tracking algorithm which modifies the previous hypothesis just enough to correct the mistake. 3 ANALYSIS We consider two concept classes in this paper, half-planes and the intersection of two half-planes, which can be defined by lines in the plane that pass through the origin. We call these classes HS2 and IHS2. In this section, we present our analysis for HS2. Without loss of generality, since the lines pass through the origin, we take the instance space to be the circumference of the unit circle. A half-plane in HS2 is defined by a vector w such that for an instance x, c(x) = 1 if w·x ≥ 0 and c(x) = 0 otherwise. Without loss of generality, as we will show later, we assume that the instances are chosen uniformly.
Figure 1: Markov chain for the greedy adversary and (a) CSI and (b) COVER trackers.
To begin, we assume a greedy adversary as follows: every time the tracker guesses the correct target concept (that is, ĉ_{t−1} = c_{t−1}), the greedy adversary randomly chooses a vector r orthogonal to w, and at every iteration the adversary rotates w by πγ radians in the direction defined by r. We have shown that a greedy adversary maximizes the asymptotic mistake rate for a conservative tracker, but we do not present the proof here.
To lower bound the achievable error rate, we assume a conservative tracker with complete side information, so that the hypothesis is unchanged if no mistake occurs and is updated to the correct concept otherwise. The state of this system is fully described by d(c_t, ĉ_t) and, for γ = 1/K for some integer K, is modeled by the Markov chain shown in Figure 1a. In each state S_i (labeled i in the figure), d(c_t, ĉ_t) = iγ. The asymptotic mistake rate is equal to the probability of state 0, which is lower bounded by l(γ) = √(2γ/π) − 2γ/π. Since l(γ) depends only on γ, which, in turn, is defined in terms of the probability measure, the result holds for all distributions. Therefore, since this result applies to the best of all possible conservative trackers, we can say that: Theorem 1. For HS2, if d(c_t, c_{t−1}) ≤ γ, then there exists a concept sequence c ∈ C_γ such that the mistake rate μ ≥ l(γ). Equivalently, C_γ is not (γ, μ)-trackable whenever μ < l(γ). To upper bound the achievable mistake rate, we must choose a realizable tracking algorithm. We have analyzed the behavior of a simple algorithm we call COVER, which rotates the hypothesized line just far enough to cover the incorrectly labeled instance. Mathematically, if ŵ_{t−1} is the hypothesized normal vector and x_t is the mislabeled instance: ŵ_t = ŵ_{t−1} − (x_t · ŵ_{t−1}) x_t. (1) In this case, a mistake in state S_i can lead to a transition to any state S_j for j ≤ i, as shown in Figure 1b. The asymptotic probability of a mistake is the sum of the equilibrium transition probabilities P(S_j|S_i) for all j ≤ i. Solving for these probabilities leads to an upper bound u(γ) on the mistake rate: u(γ) = √(πγ/2) + 5γ/2. Again this depends only on γ and so is distribution independent, and we can say that: Theorem 2. For HS2, for all concept sequences c ∈ C_γ, the mistake rate for COVER satisfies μ ≤ u(γ). Equivalently, C_γ is (γ, μ)-trackable whenever μ ≥ u(γ).
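The Markov chain argument behind Theorem 1 can be checked with a direct simulation. The sketch below is our illustration, not the authors' code: the state is the drift d(c_t, ĉ_t), which grows by γ each iteration; a mistake occurs with probability equal to the current drift (the measure of the disagreement region under the uniform distribution); and CSI resets the drift to zero after each mistake. The observed rate should sit near, and not below, l(γ) = √(2γ/π) − 2γ/π.

```python
import math, random

def csi_mistake_rate(gamma, iters=1_000_000, seed=1):
    # Simulate the CSI tracker against the greedy adversary on HS2.
    # d = probability mass of the disagreement region between
    # the active concept and the hypothesis.
    rng = random.Random(seed)
    d, mistakes = 0.0, 0
    for _ in range(iters):
        d = min(d + gamma, 1.0)   # adversary drifts the concept by gamma
        if rng.random() < d:      # instance falls in the disagreement region
            mistakes += 1
            d = 0.0               # CSI jumps straight to the true concept
    return mistakes / iters

gamma = 0.001
lower = math.sqrt(2 * gamma / math.pi) - 2 * gamma / math.pi
rate = csi_mistake_rate(gamma)
print(rate, lower)
```

The simulated rate exceeds the lower bound by only a few percent at this γ, consistent with the claim in Section 4 that the bound closely matches CSI simulations for small γ.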
If the adversary is benign, it is as likely to decrease as to increase the probability of a mistake. Unfortunately, although this makes the task of the tracker easier, it also makes the analysis more difficult. So far, we can show that: Theorem 3. For HS2 and a benign adversary, there exists a concept sequence c ∈ C_γ such that the mistake rate μ is O(γ^{2/3}). 4 SIMULATIONS To test the predictions of the theory and explore some areas for which we currently have no theory, we have run simulations for a variety of concept classes, adversaries, and tracking algorithms. Here we will present the results for single half-planes and the intersection of two half-planes; both greedy and benign adversaries; an ideal tracker; and two types of trackers that use no side information. 4.1 HALF-PLANES The simplest concept class we have simulated is the set of all half-planes defined by lines passing through the origin. This is equivalent to the set of classifications realizable with 2-dimensional perceptrons with zero threshold. In other words, if w is the normal vector and x is a point in space, c(x) = 1 if w · x ≥ 0 and c(x) = 0 otherwise. The mistake rate reported for each data point is the average of 1,000,000 iterations. The instances were chosen uniformly from the circumference of the unit circle. We also simulated the ideal tracker using an algorithm called CSI and tested a tracking algorithm called COVER, which is a simple implementation of the tracking algorithm analyzed in the theory. If a tracker using COVER mislabels an instance, it rotates the normal vector in the plane defined by it and the instance so that the instance lies exactly on the new hypothesis line, as described by equation 1. 4.1.1 Greedy adversary Whenever CSI or COVER makes a mistake and then guesses the concept exactly, the greedy adversary uniformly at random chooses a direction orthogonal to the normal vector of the hyperplane.
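The COVER update of equation 1 takes only a few lines. The sketch below is illustrative; the final renormalisation is our addition (it keeps the normal a unit vector and does not change the induced labelling). It verifies that the mislabeled instance lands exactly on the new hypothesis line.

```python
import numpy as np

def cover_update(w, x):
    # Eq. (1): subtract from w its component along the mislabeled
    # unit-length instance x, so the new boundary passes through x.
    w_new = w - np.dot(x, w) * x
    return w_new / np.linalg.norm(w_new)  # renormalise (our assumption)

w = np.array([1.0, 0.0])              # current hypothesis normal vector
a = 0.1
x = np.array([np.cos(a), np.sin(a)])  # a mislabeled unit-circle instance
w2 = cover_update(w, x)
print(np.dot(w2, x))  # ~0: x now lies on the hypothesis boundary
```

Since |x| = 1, the update removes exactly the component of the normal along x, so ŵ_t · x_t = 0 holds identically; this is the "just far enough to cover the instance" rotation.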
Whenever COVER makes a mistake and ŵ_t ≠ w_t, the greedy adversary chooses the rotation direction to be in the plane defined by w_t and ŵ_t and orthogonal to w_t. At every iteration, the adversary rotates the normal vector of the hyperplane in the most recently chosen direction so that d(c_t, c_{t+1}) = γ, or equivalently, w_t · w_{t−1} = cos(πγ). Figure 2 shows that the theoretical lower bound very closely matches the simulation results for CSI when γ is small. For small γ, the simulation results for COVER lie very close to the theoretical predictions for the NSI case. In other words, the bounds predicted in Theorems 1 and 2 are tight, and the mistake rates for CSI and COVER differ by only a factor of π/2. 4.1.2 Benign adversary At every iteration, the benign adversary uniformly at random chooses a direction orthogonal to the normal vector of the hyperplane and rotates the hyperplane in that direction so that d(c_t, c_{t+1}) = γ. Figure 3 shows that CSI behaves as predicted by Theorem 3 when μ = 0.6γ^{2/3}. The figure also shows that COVER performs very well compared to CSI.
Figure 2: The mistake rate, μ, as a function of the rate of change, γ, for HS2 when the adversary is greedy.
Figure 3: The mistake rate, μ, as a function of the rate of change, γ, for HS2 when the adversary is benign. The line is μ = 0.6γ^{2/3}.
4.2 INTERSECTION OF TWO HALF-PLANES The other concept class we consider here is the intersection of two half-spaces defined by lines through the origin.
That is, c(x) = 1 if w_1·x ≥ 0 and w_2·x ≥ 0, and c(x) = 0 otherwise. We tested two tracking algorithms using no side information for this concept class. The first is a variation on the previous COVER algorithm. For each mislabeled instance: if both half-spaces label x_t differently than c_t(x_t), then the line that is closest in Euclidean distance to x_t is updated according to COVER; otherwise, the half-space labeling x_t differently than c_t(x_t) is updated. The second is a feed-forward network with 2 input, 2 hidden and 1 output nodes.
Figure 4: The mistake rate, μ, as a function of the rate of change, γ, for IHS2 when the adversary is greedy.
The thresholds of all the neurons and the weights from the hidden to output layers are fixed, i.e., only the input weights can be modified. The output of each neuron is f(u) = (1 + e^{−10w·u})^{−1}. For classification, the instance was labeled one if the output of the network was greater than 0.5 and zero otherwise. If the difference between the actual and desired outputs was greater than 0.1, back-propagation was run using only the most recent example until the difference was below 0.1. The learning rate was fixed at 0.01 and no momentum was used. Since the model may be updated without making a mistake, this algorithm is not conservative. 4.2.1 Greedy Adversary At each iteration, the greedy adversary rotates each hyperplane in a direction orthogonal to its normal vector. Each rotation direction is based on an initial direction chosen uniformly at random from the set of vectors orthogonal to the normal vector.
At each iteration, both the normal vector and the rotation vector are rotated πγ/2 radians in the plane they define so that d(c_t, c_{t−1}) = γ for every iteration. Figure 4 shows that the simulations match the predictions well for small γ. Non-conservative back-propagation performs about as well as conservative CSI and slightly better than conservative COVER. 4.2.2 Benign Adversary At each iteration, the benign adversary uniformly at random chooses a direction orthogonal to w_i and rotates the hyperplane in that direction such that d(c_t, c_{t−1}) = γ. The theory for the benign adversary in this case is not yet fully developed, but Figure 5 shows that the simulations approximate the optimal performance for HS2 against a benign adversary with c ∈ C_{γ/2}. Non-conservative back-propagation does not perform as well for very small γ, but catches up for γ > 0.001. This is likely due to the particular choice of learning rate.
Figure 5: The mistake rate, μ, as a function of the rate of change, γ, for IHS2 when the adversary is benign. The dashed line is μ = 0.6(γ/2)^{2/3}.
5 CONCLUSIONS We have presented the results of some of our research applied to the problem of tracking time-varying half-spaces. For HS2 and IHS2 presented here, simulation results match the theory quite well. For IHS2, non-conservative back-propagation performs quite well. We have extended the theorems presented in this paper to higher-dimensional input vectors and more general geometric concept classes. In Theorem 3, μ ≤ cγ^{2/3} for some constant c, and we are working to find a good value for that constant.
We are also working to develop an analysis of non-conservative trackers and to better understand the difference between conservative and non-conservative algorithms. Acknowledgments Anthony Kuh gratefully acknowledges the support of the National Science Foundation through grant EET-8857711 and Siemens Corporate Research. Ronald L. Rivest gratefully acknowledges support from NSF grant CCR-8914428, ARO grant N00014-89-J-1988 and a grant from the Siemens Corporation. References Galletti, I. and Abbott, M. (1989). Development of an advanced airborne speech recognizer for direct voice input. Speech Technology, pages 60-63. Haussler, D., Littlestone, N., and Warmuth, M. K. (1987). Expected mistake bounds for on-line learning algorithms. (Unpublished). Helmbold, D. P. and Long, P. M. (1991). Tracking drifting concepts using random examples. In Valiant, L. G. and Warmuth, M. K., editors, Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 13-23. Morgan Kaufmann. Littlestone, N. (1989). Mistake bounds and logarithmic linear-threshold learning algorithms. Technical Report UCSC-CRL-89-11, Univ. of California at Santa Cruz.
Connectionist Optimisation of Tied Mixture Hidden Markov Models Steve Renals Nelson Morgan ICSI Berkeley CA 94704 USA Herve Bourlard L&H Speech Products Ieper B-9800 Belgium Horacio Franco Michael Cohen SRI International Menlo Park CA 94025 USA Abstract Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training. 1 INTRODUCTION In a statistical approach to continuous speech recognition the desired quantity is the posterior probability P(W_1^W | X_1^T, Θ) of a word sequence W_1^W = w_1, …, w_W given the acoustic evidence X_1^T = x_1, …, x_T and the parameters Θ of the speech model used. Typically a set of models is used, to separately model different units of speech. This probability may be re-expressed using Bayes' rule: P(W_1^W | X_1^T, Θ) = p(X_1^T | W_1^W, Θ) P(W_1^W | Θ) / p(X_1^T | Θ) = p(X_1^T | W_1^W, Θ) P(W_1^W | Θ) / Σ_{W′} p(X_1^T | W′, Θ) P(W′ | Θ). (1) The ratio p(X_1^T | W_1^W, Θ) / p(X_1^T | Θ) is the acoustic model. This is the ratio of the likelihood of the acoustic evidence given the sequence of word models to the probability of the acoustic data being generated by the complete set of models. p(X_1^T | Θ) may be regarded as a normalising term that is constant (across models) at recognition time. However, at training time the parameters Θ are being adapted, thus p(X_1^T | Θ) is no longer constant. The prior, P(W_1^W | Θ), is obtained from a language model. The basic unit of speech, typically smaller than a word (here we use phones), is modelled by a hidden Markov model (HMM).
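As a toy illustration of eq. (1), with invented numbers rather than values from the paper, the posterior over candidate word sequences is the product of acoustic likelihood and language-model prior, renormalised over all candidates:

```python
import numpy as np

# Acoustic likelihoods p(X | W, Theta) and language-model priors P(W | Theta)
# for three hypothetical candidate word sequences (made-up values)
likelihoods = np.array([1e-5, 4e-5, 5e-6])
priors = np.array([0.5, 0.2, 0.3])

# Eq. (1): posterior = likelihood * prior, normalised over all candidates
posterior = likelihoods * priors / np.sum(likelihoods * priors)
print(posterior)           # sums to 1
print(posterior.argmax())  # index of the recognised word sequence
```

The denominator is the normalising term p(X|Θ): it is constant across candidates at recognition time (so the argmax ignores it), but it changes whenever the parameters Θ change during training, which is the point made above.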
Word models consist of concatenations of phone HMMs (constrained by pronunciations stored in a lexicon), and sentence models consist of concatenations of word HMMs (constrained by a grammar). The lexicon and grammar together make up a language model, specifying prior probabilities for sentences, words and phones. A HMM is a stochastic automaton defined by a set of states q_i, a topology specifying allowed state transitions and a set of local probability density functions (PDFs) p(x_t, q_i | q_j, X_1^{t−1}). Making the further assumptions that the output at time t is independent of previous outputs and depends only on the current state, we may separate the local probabilities into state transition probabilities P(q_i | q_j) and output PDFs p(x_t | q_i). A set of initial state probabilities must also be specified. The parameters of a HMM are usually set via a maximum likelihood procedure that optimally estimates the joint density p(x, q | Θ). The forward-backward algorithm, a provably convergent algorithm for this task, is extremely efficient in practice. However, in speech recognition we do not wish to make the best model of the data {x, q} given the model parameters; we want to make the optimal discrimination between classes at each time. This can be better achieved by computing a discriminant P(q | x, Θ). Note that in this case we do not model the input density p(x | Θ). We may estimate P(q | x, Θ) using a feed-forward network trained to an entropy criterion (Bourlard & Wellekens, 1989). However, we require likelihoods of the form p(x | q, Θ) as HMM output probabilities. We may convert posterior probabilities to scaled likelihoods p(x | q, Θ) / p(x | Θ) by dividing the network outputs by the relative frequencies of each class¹. Note that we are not using connectionist training to obtain density estimates here; we are obtaining a ratio and not modelling p(x | Θ). This ratio is the quantity that we wish to maximise: this corresponds to maximising p(x | q_c, Θ) and minimising p(x | q_i, Θ),
i ≠ c, where q_c is the correct class. ¹These are the estimates of P(q_i) implicitly used during classifier training. We have used discriminatively trained networks to estimate the output PDFs (Bourlard & Morgan, 1991; Renals et al., 1991, 1992), and have obtained superior results to maximum likelihood training on continuous speech recognition tasks. In this paper, we are mainly concerned with radial basis function (RBF) networks. A RBF network generally has a single hidden layer, whose units may be regarded as computing local (or approximately local) densities, rather than global decision surfaces. The resultant posteriors are obtained by output units that combine these local densities. We are interested in using RBF networks for various reasons: • A RBF network is isomorphic to a tied mixture density model, although the training criterion is typically different. The relationship between the two is explored in this paper. • The locality of RBFs makes them suitable for situations in which the input
Each of these PDFs has its own set of mixture coefficients used to combine the individual Gaussians. If f(x | q_k) is the output PDF of state q_k, and N_j(x | μ_j, Σ_j) are the component Gaussians, then:

(2) f(x | q_k, θ) = ∑_j a_kj N_j(x | μ_j, Σ_j),  ∑_j a_kj = 1

where a_kj is an element of the matrix of mixture coefficients (which may be interpreted as the prior probability P(μ_j, Σ_j | q_k)) defining how much component density N_j(x | μ_j, Σ_j) contributes to output PDF f(x | q_k, θ). Alternatively this may be regarded as "fuzzy" vector quantisation.

3 RADIAL BASIS FUNCTIONS

The radial basis function (RBF) network was originally introduced as a means of function interpolation (Powell, 1985; Broomhead & Lowe, 1988). A set of K approximating functions f_k(x) is constructed from a set of J basis functions φ_j(x):

(3) f_k(x) = ∑_{j=1}^{J} a_kj φ_j(x)

This equation defines a RBF network with J RBFs (hidden units) and K outputs. The output units here are linear, with weights a_kj. The RBFs are typically Gaussians, with means μ_j and covariance matrices Σ_j:

(4) φ_j(x) = R exp(−(x − μ_j)ᵀ Σ_j⁻¹ (x − μ_j)/2)

where R is a normalising constant. The covariance matrix is frequently assumed to be diagonal².

²This is often reasonable for speech applications, since mel or PLP cepstral coefficients are orthogonal.

170 Renals, Morgan, Bourlard, Franco, and Cohen

Such a network has been used for HMM output probability estimation in continuous speech recognition (Renals et al., 1991) and an isomorphism to tied-mixture HMMs was noted. However, there is a mismatch between the posterior probabilities estimated by the network and the likelihoods required for HMM decoding. Previously this was resolved by dividing the outputs by the relative frequencies of each state. It would be desirable, though, to retain the isomorphism to tied mixtures: specifically, we wish to interpret the hidden-to-output weights of an RBF network as the mixture coefficients of a tied mixture likelihood function.
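The tied mixture output PDF of equation 2 (equivalently, the RBF network of equation 3 with Gaussian basis functions) is a weighted sum over one shared codebook of Gaussians, with a per-state row of mixture coefficients. A minimal numerical sketch; the function names and all parameter values are invented toy numbers:

```python
import numpy as np

def gaussian(x, mu, var):
    # Diagonal-covariance Gaussian density N(x | mu, diag(var)).
    x, mu, var = map(np.asarray, (x, mu, var))
    d = x.size
    norm = (2 * np.pi) ** (-d / 2) * np.prod(var) ** -0.5
    return norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var))

def tied_mixture_pdf(x, A, mus, vars_):
    """f(x|q_k) = sum_j a_kj N_j(x): one shared codebook of Gaussians,
    combined by per-state mixture coefficients (rows of A)."""
    dens = np.array([gaussian(x, m, v) for m, v in zip(mus, vars_)])
    return A @ dens   # one likelihood per state

A = np.array([[0.7, 0.3],
              [0.2, 0.8]])          # each row sums to 1
mus = [np.zeros(2), np.ones(2)]     # shared codebook means
vars_ = [np.ones(2), np.ones(2)]    # shared codebook variances
f = tied_mixture_pdf(np.zeros(2), A, mus, vars_)
```

State 0 weights the Gaussian at the origin more heavily, so evaluating at the origin gives it the larger likelihood.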
This can be achieved by defining the transfer functions of the output units to implement Bayes' rule, which relates the posterior g_k(x) to the likelihood f_k(x):

(5) g_k(x) = P(q_k) f_k(x) / ∑_l P(q_l) f_l(x)

Such a transfer function ensures the outputs sum to 1; if f_k(x) is guaranteed non-negative, then the outputs are formally probabilities. The output of such a network is a probability distribution and we are using '1-from-K' training: thus the relative entropy E is simply:

(6) E = −log g_c(x),

where q_c is the desired output class (HMM distribution). Bridle (1990) has demonstrated that minimising this error function is equivalent to maximising the mutual information between the acoustic evidence and HMM state sequence. If we wish to interpret the weights as mixture coefficients, then we must ensure that they are non-negative and sum to 1. This may be achieved using a normalised exponential (softmax) transformation:

(7) a_kj = exp(w_kj) / ∑_l exp(w_kl)

The mixture coefficients a_kj are used to compute the likelihood estimates, but it is the derived variables w_kj that are used in the unconstrained optimisation.

3.1 TRAINING

Steepest descent training specifies that:

(8) Δw_kj ∝ −∂E/∂w_kj

Here E is the relative entropy objective function (6). We may decompose the right hand side of this by a careful application of the chain rule of differentiation:

(9) ∂E/∂w_kj = ∑_l (∂E/∂g_l(x)) (∂g_l(x)/∂f_k(x)) ∑_m (∂f_k(x)/∂a_km) (∂a_km/∂w_kj)

We may write down expressions for each of these partials (where δ_ab is the Kronecker delta and q_c is the desired state):

(10) ∂E/∂g_l(x) = −δ_lc / g_c(x)

(11) ∂g_l(x)/∂f_k(x) = (g_k(x)/f_k(x)) (δ_lk − g_l(x))

(12) ∂f_k(x)/∂a_kj = φ_j(x)

(13) ∂a_kl/∂w_kj = a_kl (δ_lj − a_kj)

Substituting (10), (11), (12) and (13) into (9) we obtain:

(14) ∂E/∂w_kj = (1/f_k(x)) (g_k(x) − δ_kc) a_kj (φ_j(x) − f_k(x))

Apart from the added terms due to the normalisation of the weights, the major difference in the gradient compared with using a sigmoid or softmax transfer function is the 1/f_k(x) factor. To some extent we may regard this as a dimensional term.
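The gradient (14) can be verified numerically against finite differences on equations (3), (5), (6) and (7). The dimensions and random values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
K, J = 3, 4
phi = rng.random(J) + 0.1          # RBF activations, kept positive
w = rng.normal(size=(K, J))        # unconstrained weights
P = np.array([0.5, 0.3, 0.2])      # state priors
c = 1                              # index of the correct class

def forward(w):
    a = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)  # eq (7)
    f = a @ phi                                           # eq (3)
    g = P * f / np.sum(P * f)                             # eq (5)
    return a, f, g

def loss(w):
    return -np.log(forward(w)[2][c])                      # eq (6)

a, f, g = forward(w)
delta = np.zeros(K); delta[c] = 1.0
# analytic gradient, eq (14)
grad = ((g - delta) / f)[:, None] * a * (phi[None, :] - f[:, None])

# central finite-difference check of the same gradient
num = np.zeros_like(w)
eps = 1e-6
for k in range(K):
    for j in range(J):
        wp = w.copy(); wp[k, j] += eps
        wm = w.copy(); wm[k, j] -= eps
        num[k, j] = (loss(wp) - loss(wm)) / (2 * eps)
```

The two gradients agree to numerical precision, confirming the derivation of (14).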
The required gradient is simpler if we construct the network to estimate log likelihoods, replacing f_k(x) with z_k(x) = log f_k(x):

(15) z_k(x) = ∑_j w_kj φ_j(x)

(16) g_k(x) = P(q_k) exp(z_k(x)) / ∑_l P(q_l) exp(z_l(x))

Since this is in the log domain, no constraints on the weights are required. The new gradient we need is:

(17) ∂z_k(x)/∂w_kj = φ_j(x)

Thus the gradient of the error is:

(18) ∂E/∂w_kj = (g_k(x) − δ_kc) φ_j(x)

Since we are in the log domain, the 1/f_k(x) factor is additive and thus disappears from the gradient. This network is similar to Bridle's softmax, except that here uniform priors are not assumed; the gradient is of identical form, though. In this case the weights do not have a simple relationship with the mixture coefficients obtained in tied mixture density modelling. We may also train the means and variances of the RBFs by back-propagation of error; the gradients are straightforward.

3.2 PRELIMINARY EXPERIMENTS

We have experimented with both the Bayes' rule transfer function (5) and the variant in the log domain (16). We used a phoneme classification task, with a database consisting of 160,000 frames of continuous speech. We typically computed the parameters of the RBFs by a k-means clustering process. We found that the gradient resulting from the first transfer function (14) had a tendency to numerical instability, due to the 1/f_k(x) term; thus most of our experiments have used the log domain transfer function. In experiments using 1000 RBFs, we have obtained frame classification rates of 52%. This is somewhat poorer than the frame classification we obtain using a 512 hidden unit MLP (59%). We are investigating improvements to our procedure, including variations to the learning schedule, the use of the EM algorithm to set RBF parameters and the use of priors on the weight matrix.

4 PROBLEMS WITH DISCRIMINATIVE TRAINING

4.1 UNLABELLED DATA

A problem arises from the use of unlabelled or partially labelled data.
When training a speech recogniser, we typically know the word sequence for an utterance, but we do not have a time-aligned phonetic transcription. This is a case of partially labelled data: a training set of data pairs {x_t, q_t} is unavailable, but we do not have purely unlabelled data {x_t} either. Instead, we have the constraining information of the word sequence W. Thus P(q_i | x_t) may be decomposed as:

(19)

We usually make the further approximation that the optimal state sequence is much more likely than any competing state sequence. Thus P(q_c | x_t) = 1, and the probabilities of all other states at time t are 0. This most likely state sequence (which may be computed using a forced Viterbi alignment) is often used as the desired outputs for a discriminatively trained network. Using this alignment implicitly assumes model correctness; however, we use discriminative training because we believe the HMMs are an inadequate speech model. Hence there is a mismatch between the maximum likelihood labelling and alignment, and the discriminative training used for the networks. It may be that this mismatch is responsible for the lack of robustness of discriminative training (compared with pure maximum likelihood training) in vocabulary independent speech recognition tasks (Paul et al., 1991). The assumption of model correctness used to generate the labels may have the effect of further embedding specifics of the training data into the final models. A solution to this problem may be to use a probabilistic alignment, with a distribution over labels at each timestep. This could be computed using the forward-backward algorithm, rather than the Viterbi approximation. This maximum likelihood approach still assumes model correctness, of course. A discriminative approach to this problem would also attempt to infer distributions over labels. A basic goal might be to sharpen the distribution toward the maximum likelihood estimate.
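The probabilistic alignment suggested here replaces hard Viterbi labels with forward-backward state posteriors γ_t(i) = P(q_i | x_1…x_T) used as soft targets. A toy sketch; the 2-state HMM parameters and observation likelihoods are invented:

```python
import numpy as np

# Invented 2-state HMM: transitions, initial probabilities, and
# per-state output likelihoods for a 3-frame observation sequence.
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.6, 0.4])
B = np.array([[0.9, 0.2],     # B[t, i] = P(x_t | q_i)
              [0.4, 0.7],
              [0.1, 0.8]])
T, N = B.shape

# forward-backward: gamma[t, i] = P(q_i | x_1..x_T)
alpha = np.zeros((T, N))
beta = np.zeros((T, N))
alpha[0] = pi * B[0]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[t]
beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[t + 1] * beta[t + 1])
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)   # soft targets per frame

# a hard (Viterbi-style) labelling collapses each row to one state
hard = np.eye(N)[gamma.argmax(axis=1)]
```

Training against `gamma` rather than `hard` keeps the label uncertainty that the forced alignment discards.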
An example of such a method is the 'phantom targets' algorithm introduced by Bridle & Cox (1991). These optimisations are local: the error is not propagated through time. Algorithms for globally optimising discriminative training have been proposed (e.g. Bengio et al., these proceedings), but are not without problems when used with a constraining language model. The problem is that to compute the posterior, the ratio of the probabilities of generating the correct utterance and generating all allowable utterances must be computed.

4.2 THE PRIORS

It has been shown, both theoretically and in practice, that the training and recognition procedures used with standard HMMs remain valid for posterior probabilities (Bourlard & Wellekens, 1989). Why then do we replace these posterior probabilities with likelihoods? The answer to this problem lies in a mismatch between the prior probabilities given by the training data and those imposed by the topology of the HMMs. Choosing the HMM topology also amounts to fixing the priors. For instance, if classes q_k represent phones, prior probabilities P(q_k) are fixed when word models are defined as particular sequences of phone models. This discussion can be extended to different levels of processing: if q_k represents sub-phonemic states and recognition is constrained by a language model, prior probabilities P(q_k) are fixed by (and can be calculated from) the phone models, word models and the language model. Ideally, the topologies of these models would be inferred directly from the training data, by using a discriminative criterion which implicitly contains the priors. Here, at least in theory, it would be possible to start from fully-connected models and to determine their topology according to the priors observed on the training data.
Unfortunately this results in a huge number of parameters that would require an unrealistic amount of training data to estimate significantly. This problem has also been raised in the context of language modelling (Paul et al., 1991). Since the ideal theoretical solution is not accessible in practice, it is usually better to dispose of the poor estimate of the priors obtained using the training data, replacing them with "prior" phonological or syntactic knowledge.

5 CONCLUSION

Having discussed the similarities and differences between RBF networks and tied mixture density estimators, we present a method that attempts to resolve a mismatch between discriminative training and density estimation. Some preliminary experiments relating to this approach were discussed; we are currently performing further speech recognition experiments using these methods. Finally, we raised some important issues pertaining to discriminative training.

Acknowledgement

This work was partially funded by DARPA contract MDA904-90-C-5253.

References

Bellegarda, J. R. & Nahamoo, D. (1990). Tied mixture continuous parameter modeling for speech recognition. IEEE Transactions on Acoustics, Speech and Signal Processing, 38, 2033-2045.

Bourlard, H. & Morgan, N. (1991). Connectionist approaches to the use of Markov models for continuous speech recognition. In Lippmann, R. P., Moody, J. E., & Touretzky, D. S. (Eds.), Advances in Neural Information Processing Systems, Vol. 3, pp. 213-219. Morgan Kaufmann, San Mateo, CA.

Bourlard, H. & Wellekens, C. J. (1989). Links between Markov models and multilayer perceptrons. In Touretzky, D. S. (Ed.), Advances in Neural Information Processing Systems, Vol. 1, pp. 502-510. Morgan Kaufmann, San Mateo, CA.

Bridle, J. S. & Cox, S. J. (1991). RecNorm: Simultaneous normalisation and classification applied to speech recognition. In Lippmann, R. P., Moody, J. E., & Touretzky, D. S.
(Eds.), Advances in Neural Information Processing Systems, Vol. 3, pp. 234-240. Morgan Kaufmann, San Mateo, CA.

Bridle, J. S. (1990). Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Touretzky, D. S. (Ed.), Advances in Neural Information Processing Systems, Vol. 2, pp. 211-217. Morgan Kaufmann, San Mateo, CA.

Broomhead, D. S. & Lowe, D. (1988). Multi-variable functional interpolation and adaptive networks. Complex Systems, 2, 321-355.

Huang, X. D. & Jack, M. A. (1989). Semi-continuous hidden Markov models for speech signals. Computer Speech and Language, 3, 239-251.

Paul, D. B., Baker, J. K., & Baker, J. M. (1991). On the interaction between true source, training and testing language models. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 569-572, Toronto.

Powell, M. J. D. (1985). Radial basis functions for multi-variable interpolation: a review. Tech. rep. DAMTP/NA12, Dept. of Applied Mathematics and Theoretical Physics, University of Cambridge.

Renals, S., McKelvie, D., & McInnes, F. (1991). A comparative study of continuous speech recognition using neural networks and hidden Markov models. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 369-372, Toronto.

Renals, S., Morgan, N., Cohen, M., & Franco, H. (1992). Connectionist probability estimation in the DECIPHER speech recognition system. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, San Francisco. In press.
Oscillatory Neural Fields for Globally Optimal Path Planning

Michael Lemmon
Dept. of Electrical Engineering
University of Notre Dame
Notre Dame, Indiana 46556

Abstract

A neural network solution is proposed for solving path planning problems faced by mobile robots. The proposed network is a two-dimensional sheet of neurons forming a distributed representation of the robot's workspace. Lateral interconnections between neurons are "cooperative", so that the network exhibits oscillatory behaviour. These oscillations are used to generate solutions of Bellman's dynamic programming equation in the context of path planning. Simulation experiments imply that these networks locate globally optimal paths even in the presence of substantial levels of circuit noise.

1 Dynamic Programming and Path Planning

Consider a 2-DOF robot moving about in a 2-dimensional world. A robot's location is denoted by the real vector p. The collection of all locations forms a set called the workspace. An admissible point in the workspace is any location which the robot may occupy. The set of all admissible points is called the free workspace. The free workspace's complement represents a collection of obstacles. The robot moves through the workspace along a path which is denoted by the parameterized curve p(t). An admissible path is one which lies wholly in the robot's free workspace. Assume that there is an initial robot position, p_0, and a desired final position, p_f. The robot path planning problem is to find an admissible path with p_0 and p_f as endpoints such that some "optimality" criterion is satisfied. The path planning problem may be stated more precisely from an optimal control theorist's viewpoint. Treat the robot as a dynamic system which is characterized by a state vector, p, and a control vector, u. For the highest levels in a control hierarchy, it can be assumed that the robot's dynamics are modeled by the differential equation, ṗ = u.
This equation says that the state velocity equals the applied control. To define what is meant by "optimal", a performance functional is introduced:

(1) J(u) = ||p(t_f) − p_f||² + ∫₀^{t_f} c(p(t)) ||u(t)||² dt

where ||x|| is the norm of vector x and where the functional c(p) is unity if p lies in the free workspace and is infinite otherwise. This weighting functional is used to insure that the control does not take the robot into obstacles. Equation 1's optimality criterion minimizes the robot's control effort while penalizing controls which do not satisfy the terminal constraints. With the preceding definitions, the optimal path planning problem states that for some final time, t_f, find the control u(t) which minimizes the performance functional J(u). One very powerful method for tackling this minimization problem is to use dynamic programming (Bryson, 1975). According to dynamic programming, the optimal control, u_opt, is obtained from the gradient of an "optimal return function", J°(p). In other words, u_opt = ∇J°. The optimal return function satisfies the Hamilton-Jacobi-Bellman (HJB) equation. For the dynamic optimization problem given above, the HJB equation is easily shown to be

(2)

This is a first order nonlinear partial differential equation (PDE) with terminal (boundary) condition J°(t_f) = ||p(t_f) − p_f||². Once equation 2 has been solved for J°, the optimal "path" is determined by following the gradient of J°. Solutions to equation 2 must generally be obtained numerically. One solution approach numerically integrates a full discretization of equation 2 backwards in time using the terminal condition, J°(t_f), as the starting point. The proposed numerical solution is attempting to find characteristic trajectories of the nonlinear first-order PDE. The PDE nonlinearities, however, only insure that these characteristics exist locally (i.e., in an open neighborhood about the terminal condition). The resulting numerical solutions are, therefore, only valid in a "local" sense.
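A discrete analogue of this dynamic-programming solution can be sketched directly: on a grid, Bellman's principle becomes "the cost of a cell is the minimum over its neighbours of the neighbour's cost plus the step length", iterated to a fixed point with obstacles held at infinite cost. The grid layout below is an invented example:

```python
import numpy as np

FREE, WALL = 0, 1
grid = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
goal = (0, 3)

J = np.full(grid.shape, np.inf)   # cost-to-go; obstacles stay infinite
J[goal] = 0.0
steps = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
         if (di, dj) != (0, 0)]

changed = True
while changed:            # iterate the Bellman update to a fixed point
    changed = False
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if grid[i, j] == WALL:
                continue
            for di, dj in steps:
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    cand = J[ni, nj] + 1.0   # unit cost per grid step
                    if cand < J[i, j] - 1e-12:
                        J[i, j] = cand
                        changed = True
```

The converged `J` is the discrete optimal return function; following its steepest descent from any free cell traces an optimal path around the obstacles to the goal.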
This is reflected in the fact that truncation errors introduced by the discretization process will eventually result in numerical solutions violating the underlying principle of optimality embodied by the HJB equation. In solving path planning problems, local solutions based on the numerical integration of equation 2 are not acceptable due to the "local" nature of the resulting solutions. Global solutions are required, and these may be obtained by solving an associated variational problem (Benton, 1977). Assume that the optimal return function at time t_f is known on a closed set B. The variational solution for equation 2 states that the optimal return at time t < t_f at a point p in the neighborhood of the boundary set B will be given by

(3) J°(p, t) = min_{y∈B} { J°(y, t_f) + ||p − y||² / (t_f − t) }

where ||p|| denotes the L2 norm of vector p. Equation 3 is easily generalized to other vector norms and only applies in regions where c(p) = 1 (i.e. the robot's free workspace). For obstacles, J°(p, t) = J°(p, t_f) for all t < t_f. In other words, the optimal return is unchanged in obstacles.

2 Oscillatory Neural Fields

The proposed neural network consists of MN neurons arranged as a 2-d sheet called a "neural field". The neurons are put in a one-to-one correspondence with the ordered pairs (i, j), where i = 1, …, N and j = 1, …, M. The ordered pair (i, j) will sometimes be called the (i, j)th neuron's "label". Associated with the (i, j)th neuron is a set of neuron labels denoted by N_{i,j}. The neurons whose labels lie in N_{i,j} are called the "neighbors" of the (i, j)th neuron. The (i, j)th neuron is characterized by two states. The short term activity (STA) state, x_{i,j}, is a scalar representing the neuron's activity in response to the currently applied stimulus. The long term activity (LTA) state, w_{i,j}, is a scalar representing the neuron's "average" activity in response to recently applied stimuli.
Each neuron produces an output, f(x_{i,j}), which is a unit step function of the STA state (i.e., f(x) = 1 if x > 0 and f(x) = 0 if x ≤ 0). A neuron will be called "active" or "inactive" if its output is unity or zero, respectively. Each neuron is also characterized by a set of constants. These constants are either externally applied inputs or internal parameters. They are the disturbance y_{i,j}, the rate constant λ_{i,j}, and the position vector p_{i,j}. The position vector is a 2-d vector mapping the neuron onto the robot's workspace. The rate constant models the STA state's underlying dynamic time constant. The rate constant is used to encode whether or not a neuron maps onto an obstacle in the robot's workspace. The external disturbance is used to initiate the network's search for the optimal path. The evolution of the STA and LTA states is controlled by the state equations. These equations are assumed to change in a synchronous fashion. The STA state equation is

(4) x⁺_{i,j} = G( x_{i,j} + λ_{i,j} y_{i,j} + λ_{i,j} ∑_{(k,l)∈N_{i,j}} D_{kl} f(x_{k,l}) )

where the summation is over all neurons contained within the neighborhood, N_{i,j}, of the (i, j)th neuron. The function G(x) is zero if x < 0 and is x if x ≥ 0. This function is used to prevent the neuron's activity level from falling below zero. The D_{kl} are network parameters controlling the strength of lateral interactions between neurons. The LTA state equation is

(5) w⁺_{i,j} = w_{i,j} + |f(x⁺_{i,j}) − f(x_{i,j})|

Equation 5 means that the LTA state is incremented by one every time the (i, j)th neuron's output changes. Specific choices for the interconnection weights result in oscillatory behaviour. The specific network under consideration is a cooperative field where D_{kl} = 1 if (k, l) ≠ (i, j) and D_{kl} = −A < 0 if (k, l) = (i, j). Without loss of generality it will also be assumed that the external disturbances are bounded between zero and one. It is also assumed that the rate constants λ_{i,j} are either zero or unity.
In the path planning application, rate constants will be used to encode whether or not a given neuron represents an obstacle or a point in the free workspace. Consequently, any neuron where λ_{i,j} = 0 will be called an "obstacle" neuron and any neuron where λ_{i,j} = 1 will be called a "free-space" neuron. Under these assumptions, it has been shown (Lemmon, 1991a) that once a free-space neuron turns active it will be oscillating with a period of 2 provided it has at least one free-space neuron as a neighbor.

3 Path Planning and Neural Fields

The oscillatory neural field introduced above can be used to generate solutions of the Bellman iteration (Eq. 3) with respect to the supremum norm. Assume that all neuron STA and LTA states are zero at time 0. Assume that the position vectors form a regular grid of points, p_{i,j} = (iΔ, jΔ)ᵀ, where Δ is a constant controlling the grid's size. Assume that all external disturbances but one are zero. In other words, for a specific neuron with label (i, j), y_{k,l} = 1 if (k, l) = (i, j) and is zero otherwise. Also assume a neighborhood structure where N_{i,j} consists of the (i, j)th neuron and its eight nearest neighbors, N_{i,j} = {(i+k, j+l) : k = −1, 0, 1; l = −1, 0, 1}. With these assumptions it has been shown (Lemmon, 1991a) that the LTA state for the (k, l)th neuron at time n will be given by G(n − ρ_{kl}), where ρ_{kl} is the length of the shortest path from p_{k,l} to p_{i,j} with respect to the supremum norm. This fact can be seen quite clearly by examining the LTA state's dynamics in a small closed neighborhood about the (k, l)th neuron. First note that the LTA state equation simply increments the LTA state by one every time the neuron's STA state toggles its output. Since a neuron oscillates after it has been initially activated, the LTA state will represent the time at which the neuron was first activated. This time, in turn, will simply be the "length" of the shortest path from the site of the initial disturbance.
In particular, consider the neighborhood set for the (k, l)th neuron and assume that the (k, l)th neuron has not yet been activated. If a neighbor has been activated, with an LTA state of a given value, then the (k, l)th neuron will be activated on the next cycle, and we have

(6) w_{k,l} = max_{(m,n)∈N_{k,l}} ( w_{m,n} − ||p_{k,l} − p_{m,n}||_∞ / Δ )

This is simply a dual form of the Bellman iteration shown in equation 3. In other words, over the free-space neurons, we can conclude that the network is solving the Bellman equation with respect to the supremum norm. In light of the preceding discussion, the use of cooperative neural fields for path planning is straightforward. First apply a disturbance at the neuron mapping onto the desired terminal position, p_f, and allow the field to generate STA oscillations. When the neuron mapping onto the robot's current position is activated, stop the oscillatory behaviour. The resulting LTA state distribution for the (i, j)th neuron equals the negative of the minimum distance (with respect to the sup norm) from that neuron to the initial disturbance. The optimal path is then generated by a sequence of controls which ascends the gradient of the LTA state distribution.

fig 1. STA activity waves
fig 2. LTA distribution

Several simulations of the cooperative neural path planner have been implemented. The most complex case studied by these simulations assumed an array of 100 by 100 neurons. Several obstacles of irregular shape and size were randomly distributed over the workspace. An initial disturbance was introduced at the desired terminal location and STA oscillations were observed. A snapshot of the neuronal outputs is shown in figure 1. This figure clearly shows wavefronts of neuronal activity propagating away from the initial disturbance (neuron (70,10) in the upper right hand corner of figure 1). The "activity" waves propagate around obstacles without any reflections.
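The behaviour just described can be sketched in a few lines: synchronous wavefront propagation from the goal assigns each free cell an activation time equal to its sup-norm shortest-path distance, and the path is then recovered by greedily moving to the neighbour with the largest (negative-distance) LTA value. The grid size, obstacle, and positions below are invented:

```python
import numpy as np
from collections import deque

# Wavefront simulation: a disturbance at the goal activates its
# 8-neighbours on each synchronous step, so a cell's activation time
# is its shortest-path (supremum-norm) distance to the goal.
FREE, WALL = 0, 1
grid = np.zeros((8, 8), int)
grid[2:6, 4] = WALL                     # a vertical obstacle
goal, start = (3, 6), (3, 1)

act = np.full(grid.shape, np.inf)       # activation times
act[goal] = 0
q = deque([goal])
nbrs = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)]
while q:
    i, j = q.popleft()
    for di, dj in nbrs:
        ni, nj = i + di, j + dj
        if (0 <= ni < 8 and 0 <= nj < 8 and grid[ni, nj] == FREE
                and np.isinf(act[ni, nj])):
            act[ni, nj] = act[i, j] + 1
            q.append((ni, nj))

# LTA = negative of the sup-norm distance; greedy ascent recovers
# an optimal path around the obstacle.
lta = -act
path, pos = [start], start
while pos != goal:
    i, j = pos
    pos = max(((i + di, j + dj) for di, dj in nbrs
               if 0 <= i + di < 8 and 0 <= j + dj < 8),
              key=lambda p: lta[p])
    path.append(pos)
```

Each greedy step increases the LTA value by exactly one, so the path length equals the sup-norm distance to the goal and obstacle cells (LTA of −∞) are never selected.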
When the activity waves reach the neuron mapping onto the robot's current position, the STA oscillations were turned off. The LTA distribution resulting from this particular simulation run is shown in figure 2. In this figure, light regions denote areas of large LTA state and dark regions denote areas of small LTA state. The generation of the optimal path can be computed as the robot is moving towards its goal. Let the robot's current position be the (i, j)th neuron's position vector. The robot will then generate a control which takes it to the position associated with one of the (i, j)th neuron's neighbors. In particular, the control is chosen so that the robot moves to the neuron whose LTA state is largest in the neighborhood set, N_{i,j}. In other words, the next position vector to be chosen is p_{k,l} such that its LTA state is

(7) w_{k,l} = max_{(x,y)∈N_{i,j}} w_{x,y}

Because of the LTA distribution's optimality property, this local control strategy is guaranteed to generate the optimal path (with respect to the sup norm) connecting the robot to its desired terminal position. It should be noted that the selection of the control can also be done with an analog neural network. In this case, the LTA states of neurons in the neighborhood set, N_{i,j}, are used as inputs to a competitively inhibited neural net. The competitive interactions in this network will always select the direction with the largest LTA state. Since neuronal dynamics are analog in nature, it is important to consider the impact of noise on the implementation. Analog systems will generally exhibit noise levels with effective dynamic ranges being at most 6 to 8 bits. Noise can enter the network in several ways. The LTA state equation can have a noise term (LTA noise), so that the LTA distribution may deviate from the optimal distribution. In our experiments, we assumed that LTA noise was additive and white. Noise may also enter in the selection of the robot's controls (selection noise).
In this case, the robot's next position is the position vector p_{k,l} such that w_{k,l} = max_{(x,y)∈N_{i,j}} (w_{x,y} + v_{x,y}), where v_{x,y} is an i.i.d. array of stochastic processes. Simulation results reported below assume that the noise processes, v_{x,y}, are positive and uniformly distributed i.i.d. processes. The introduction of noise places constraints on the "quality" of individual neurons, where quality is measured by the neuron's effective dynamic range. Two sets of simulation experiments have been conducted to assess the neural field's dynamic range requirements. In the following simulations, dynamic range is defined by the equation −log₂|v_m|, where |v_m| is the maximum value the noise process can take. The unit for this measure of dynamic range is "bits". The first set of simulation experiments selected robotic controls in a noisy fashion. Figure 3 shows the paths generated by a simulation run where the signal to noise ratio was 1 (0 bits). The results indicate that the impact of "selection" noise is to "confuse" the robot so it takes longer to find the desired terminal point. The path shown in figure 3 represents a random walk about the true optimal path. The important thing to note about this example is that the system is capable of tolerating extremely large amounts of "selection" noise. The second set of simulation experiments introduced LTA noise. These noise experiments had a detrimental effect on the robot's path planning abilities in that several spurious extremals were generated in the LTA distribution. The result of the spurious extremals is to fool the robot into thinking it has reached its terminal destination when in fact it has not. As noise levels increase, the number of spurious states increases. Figure 4 shows how this increase varies with the neuron's effective dynamic range. The surprising thing about this result is that for neurons with as little as 3 bits of effective dynamic range the LTA distribution is free of spurious maxima.
Even with less than 3 bits of dynamic range, the performance degradation is not catastrophic. LTA noise may cause the robot to stop early; but upon stopping, the robot is closer to the desired terminal state. Therefore, the path planning module can easily be run again, and because the robot is closer to its goal there will be a greater probability of success in the second trial.

4 Extensions and Conclusions

This paper reported on the use of oscillatory neural networks to solve path planning problems. It was shown that the proposed neural field can compute dynamic programming solutions to path planning problems with respect to the supremum norm. Simulation experiments showed that this approach exhibited low sensitivity to noise, thereby supporting the feasibility of analog VLSI implementations.

fig 3. Selected Path
fig 4. Dynamic Range (number of spurious states vs. dynamic range, 1-4 bits)

The work reported here is related to resistive grid approaches for solving optimization problems (Chua, 1984). Resistive grid approaches may be viewed as "passive" relaxation methods, while the oscillatory neural field is an "active" approach. The primary virtue of the "active" approach lies in the network's potential to control the optimization criterion by selecting the interconnections and rate constants. In this paper and (Lemmon, 1991a), lateral interconnections were chosen to induce STA state oscillations, and this choice yields a network which solves the Bellman equation with respect to the supremum norm. A slight modification of this model is currently under investigation, in which the neuron's dynamics directly realize the iteration of equation 6 with respect to more general path metrics. This analog network is based on an SIMD approach originally proposed in (Lemmon, 1991). Results for this field are shown in figures 5 and 6.
These figures show paths determined by networks utilizing different path metrics. In figure 5, the network penalizes movement in all directions equally. In figure 6, there is a strong penalty for horizontal or vertical movements. As a result of these penalties (which are implemented directly in the interconnection constants D_{kl}), the two networks' "optimal" paths are different. The path in figure 6 shows a clear preference for making diagonal rather than vertical or horizontal moves. These results clearly demonstrate the ability of an "active" neural field to solve path planning problems with respect to general path metrics. These different path metrics, of course, represent constraints on the system's path planning capabilities, and as a result suggest that "active" networks may provide a systematic way of incorporating holonomic and nonholonomic constraints into the path planning process. A final comment must be made on the apparent complexity of this approach.

fig 5. No Direction Favored

Clearly, if this method is to be of practical significance, it must be extended beyond the 2-DOF problem to arbitrary task domains. This extension, however, is nontrivial due to the "curse of dimensionality" experienced by straightforward applications of dynamic programming. An important area of future research therefore addresses the decomposition of real-world tasks into smaller subtasks which are amenable to the solution methodology proposed in this paper.

Acknowledgements

I would like to acknowledge the partial financial support of the National Science Foundation, grant number NSF-IRI-91-09298.

References

S.H. Benton Jr., (1977) The Hamilton-Jacobi Equation: A Global Approach. Academic Press.

A.E. Bryson and Y.C. Ho, (1975) Applied Optimal Control, Hemisphere Publishing, Washington D.C.

L.O. Chua and G.N. Lin, (1984) Nonlinear programming without computation, IEEE Trans. Circuits Syst., CAS-31:182-188.

M.D.
Lemmon, (1991) Real time optimal path planning using a distributed computing paradigm, Proceedings of the American Control Conference, Boston, MA, June 1991. M.D. Lemmon, (1991a) 2-Degree-of-Freedom Robot Path Planning using Cooperative Neural Fields. Neural Computation 3(3):350-362.
Retinogeniculate Development: The Role of Competition and Correlated Retinal Activity Ron Keesing* Dept. of Physiology U.C. San Francisco San Francisco, CA 94143 keesing@phy.ucsf.edu David G. Stork *Ricoh California Research Center 2882 Sand Hill Rd., Suite 115 Menlo Park, CA 94025 stork@crc.ricoh.com Carla J. Shatz Dept. of Neurobiology Stanford University Stanford, CA 94305 Abstract During visual development, projections from retinal ganglion cells (RGCs) to the lateral geniculate nucleus (LGN) in cat are refined to produce ocular dominance layering and precise topographic mapping. Normal development depends upon activity in RGCs, suggesting a key role for activity-dependent synaptic plasticity. Recent experiments on prenatal retina show that during early development, "waves" of activity pass across RGCs (Meister, et al., 1991). We provide the first simulations to demonstrate that such retinal waves, in conjunction with Hebbian synaptic competition and early arrival of contralateral axons, can account for observed patterns of retinogeniculate projections in normal and experimentally-treated animals. 1 INTRODUCTION During the development of the mammalian visual system, initially diffuse axonal inputs are refined to produce the precise and orderly projections seen in the adult. In the lateral geniculate nucleus (LGN) of the cat, projections arriving from retinal ganglion cells (RGCs) of both eyes are initially intermixed, and they gradually segregate before birth to form alternating layers containing axons from only one eye. At the same time, the branching patterns of individual axons are refined, with increased growth in topographically correct locations.
91 92 Keesing, Stork, and Shatz
Axonal segregation and refinement depends upon presynaptic activity: blocking such activity disrupts normal development (Sretavan, et al., 1988; Shatz & Stryker, 1988).
These findings, and findings in other vertebrates (Cline, et al., 1987), suggest that synaptic plasticity may be an essential factor in segregation and modification of RGC axons (Shatz, 1990). Previous models of visual development based on synaptic plasticity (Miller, et al., 1989; Whitelaw & Cowan, 1981) required an assumption of spatial correlations in RGC activity for normal development. This assumption may have been justified for geniculocortical development, since much of this occurs postnatally: visual stimulation provides the correlations. The assumption was more difficult to justify for retinogeniculate development, since this occurs prenatally, before any optical stimulation. The first strong evidence for correlated activity before birth has recently emerged in the retinogeniculate system: wave-like patterns of synchronized activity pass across the prenatal retina, generating correlations between neighboring cells' activity (Meister, et al., 1991). We believe our model is the first to incorporate these important results. We propose that during visual development, projections from both eyes compete to innervate LGN neurons. Contralateral projections, which reach the LGN earlier, may have a slight advantage in competing to innervate cells of the LGN located farthest from the optic tract. Retinal waves of activity could reinforce this segregation and improve the precision of topographic mapping by causing weight changes within the same eye, and particularly within the same region of the same eye, to be highly correlated. Unlike similar models of cortical development, our model does not require lateral interactions between post-synaptic cells; available evidence suggests that lateral inhibition is not present during early development (Shotwell, et al., 1986). Our model also incorporates axon growth, an essential aspect of retinogeniculate development, since the growth and branching of axons toward their ultimate targets occurs simultaneously with synaptic competition. Moreover,
synaptic competition may provide cues for growth (Shatz & Stryker, 1988). We consider the possibility that diffusing intracellular signals indicating local synaptic strength guide axon growth. Below we present simulations which show that this model can account for development in normal and experimentally-treated animals. We also predict the outcomes of novel experiments currently underway. 2 SIMULATIONS Although the LGN is, of course, three-dimensional, in our model we represent just a single two-dimensional LGN slice, ten cells wide and eight cells high. The retina is then one-dimensional: 50 cells long in our simulations. (This ratio of widths, 50/10, is roughly that found in the developing cat.) In order to eliminate edge effects, we "wrap" the retina into a ring; likewise we wrap the LGN into a cylinder. Development of projections to the LGN is modelled in the following way: projections from all fifty RGCs of the contralateral eye arrive at the base of the LGN before those of the ipsilateral eye. A very rough topographic map is imposed, corresponding to coarse topography which might be supplied by chemical gradients (Wolpert, 1978). Development is then modelled as a series of growth steps, each separated by a period of Hebb-style synaptic competition (Wigstrom & Gustafsson, 1985). During competition, synapses are strengthened when pre- and post-synaptic activity are sufficiently correlated, and they are weakened otherwise. More specifically, for a given RGC cell i with activity a_i, the strength of synapse w_ij to LGN cell j is changed according to: Δw_ij = ε(a_i − α)(a_j − β) [1] where α and β are thresholds and ε is a learning rate. If a "wave" of retinal activity is present, the activity of RGC cells is determined as a probability of firing based on a Gaussian function of the distance from the center of the wavefront.
LGN cell activity is equal to the sum of weighted inputs from RGC cells. After each wave, the total synaptic weight supported by each RGC cell i is renormalized linearly: w_ij(t+1) = w_ij(t) / Σ_k w_ik(t) [2] The weights supported by each LGN cell are also renormalized, gradually driving them toward some target value T: w_ij(t+1) = w_ij(t) + [T − Σ_k w_kj(t)] [3] Renormalization reflects the notion that there is a limited amount of synaptic weight which can be supported by any neuron. During growth steps, connections are modified based on the strength of neighboring synapses from the same RGC cell. After normalization, connections grow or retract according to: w_ij(t+1) = w_ij(t) + γ Σ_{k ∈ neighbors} w_ik(t) [4] where γ is a constant term. Equation 4 shows that weights in areas of high synaptic strength will increase more than those elsewhere. 3 RESULTS Synaptic competition, in conjunction with waves of pre-synaptic activity and early arrival of contralateral axons, can account for patterns of growth seen in normal and experimentally-treated animals. In the presence of synaptic competition, modelled axons from each eye segregate to occupy discrete layers of the LGN, precisely what is seen in normal development. In the absence of competition, as in treatment with the drug TTX, axons arborize throughout the LGN (Figure 1). The segregation and refinement of retinal inputs to the LGN is best illustrated by the formation of ocular dominance patterns and topographic ordering. In simulations of normal development, where retinal waves are combined with early arrival of contralateral inputs, strong ocular dominance layers are formed: LGN neurons farthest from the optic tract receive synaptic inputs solely from the contralateral eye and those closer receive only ipsilateral inputs (Figure 2, Competition). The development of these ocular dominance patterns is gradual: early in development, a majority of LGN neurons receive inputs from both eyes.
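For concreteness, the update rules in equations 1-4 can be sketched in simulation code. This is an illustrative reconstruction rather than the authors' implementation: the sizes (50 RGCs, a 10-cell LGN row, both wrapped) follow the paper, but the learning constants, the thresholds α and β, and a damping factor added to equation 3 for numerical stability are our own assumptions.

```python
# Illustrative sketch of the paper's equations 1-4; constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_RGC, N_LGN = 50, 10                        # retina ring and one LGN row (cylinder)
W = rng.uniform(0.4, 0.6, (N_RGC, N_LGN))    # w_ij: synapse from RGC i to LGN j

def wave_activity(center, sigma=3.0):
    """RGC firing is probabilistic, Gaussian in distance from the wavefront."""
    d = np.abs(np.arange(N_RGC) - center)
    d = np.minimum(d, N_RGC - d)             # retina wrapped into a ring
    return (rng.random(N_RGC) < np.exp(-d**2 / (2 * sigma**2))).astype(float)

def competition_step(W, a_rgc, eps=0.01, alpha=0.5, beta=0.5,
                     T=N_RGC / N_LGN, rate=0.1):
    a_lgn = a_rgc @ W                                      # LGN = weighted input sum
    W = W + eps * np.outer(a_rgc - alpha, a_lgn - beta)    # eq. 1 (Hebb rule)
    W = np.clip(W, 0.0, None)
    W = W / (W.sum(axis=1, keepdims=True) + 1e-12)         # eq. 2 (per-RGC renorm)
    W = W + rate * (T - W.sum(axis=0)) / N_RGC             # eq. 3, damped by `rate`
    return np.clip(W, 0.0, None)

def growth_step(W, gamma=0.01):
    """Eq. 4: a weight grows with the strength of neighboring synapses
    from the same RGC (LGN neighbors, wrapped into a cylinder)."""
    return W + gamma * (np.roll(W, 1, axis=1) + np.roll(W, -1, axis=1))

for t in range(200):                         # waves alternate with growth steps
    W = competition_step(W, wave_activity(rng.integers(N_RGC)))
    if t % 10 == 9:
        W = growth_step(W)
```

Because each wave activates a contiguous patch of the ring, neighboring RGCs make correlated weight changes, which is the mechanism the paper invokes for topographic refinement.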
When synaptic competition is eliminated, there is no segregation into eye-specific layers; LGN neurons receive significant synaptic inputs from both eyes. These results are consistent with labelling studies of cat development (Shatz & Stryker, 1988).
Figure 1: Retinogeniculate projections in vivo (adapted from Sretavan, et al., 1988) (left), and simulation results (right). In the presence of competition (top), arbors are narrow and spatially localized, confined to the appropriate ocular dominance layer. In the absence of such competition (bottom), contralateral and ipsilateral projections are diffuse; there is no discernible ocular dominance pattern. During simulations, projections are represented by synapses throughout the LGN slice, shown as squares; the particular arborization patterns shown above are inferred from related simulations.
Figure 2: Ocular dominance at the end of development. Dark color indicates strongest synapses from the contralateral eye, light indicates strongest synapses from ipsilateral, and gray indicates significant synapses from both eyes. In the presence of competition, LGN cells segregate into eye-specific layers, with the contralateral eye dominating cells which are farthest from the optic tract (base). When competition is eliminated (No Competition), as in the addition of the drug TTX, there is no segregation into layers and LGN cells receive significant inputs from both eyes. These simulations reproduced results from cat development. When inputs from both retinae arrive simultaneously (Simultaneous), ocular dominance "patches" are established, similar to those observed in normal cortical development.
Retinal waves cause the activity of neighboring RGCs to be highly correlated. When combined with synaptic competition, these waves lead to a refinement of topographic ordering of retinogeniculate projections. During development, the coarse topography imposed as RGC axons enter the LGN is refined to produce an accurate, precise mapping of retinal inputs (Figure 3, Competition). Without competition, there is no refinement of topography, and the coarse initial mapping remains.
Figure 3: Topographic mapping with and without competition. The vertical axis represents ten LGN cells within one section, and the horizontal axis 50 RGC cells. The size of each box indicates strength of the synapse connecting corresponding RGC and LGN cells.
If the system is topographically ordered, this connection matrix should contain only connections forming a diagonal from lower left to upper right, as is found in normal development in our model (Competition). When competition is eliminated, the topographic map is coarse and non-contiguous. 4 PREDICTIONS In addition to replicating current experimental findings, our model makes several interesting predictions about the outcome of novel experiments. If inputs from each eye arrive simultaneously, so that contralateral projections have no advantage in competing to innervate specific regions of the LGN, synaptic competition and retinal waves lead to a pattern of ocular dominance "patches" similar to that observed in visual cortex (Figure 2, Simultaneous). Topography is refined, but in this case a continuous map is formed between the two eyes (Figure 4), again similar to patterns observed in visual cortex.
Figure 4: Topographic mapping with synchronous arrival of projections from both eyes. Light boxes represent contralateral inputs, dark boxes represent ipsilateral. Synaptic competition and retinal waves cause ocular segregation and topographic refinement, but in this case the continuous map is formed using both eyes rather than a single eye.
Our model predicts that the width of retinal waves, ie the distribution of activity around the moving wavefront, is an essential factor in determining both the rate of ocular segregation and topographic refinement. Wide waves, which cause many RGCs within the same eye to be active, will lead to most rapid ocular segregation as a result of competition.
However, wide waves can lead to poor topography: RGCs in distant regions of the retina are just as likely to be simultaneously active as neighboring RGCs (Figure 5).
Figure 5: The width of retinal "waves" determines ocular dominance and topography in normal development in our model. Width of retinal waves is represented by the average activity in RGC cells adjacent to the Gaussian wavefront: high activity indicates wide waves. Topographic error (scale at right) represents the average distance from an RGC's target position multiplied by the strength of the synaptic connection. LGN cells are considered ocularly segregated when they receive .9 or more of their total synaptic input from one eye. Wide waves lead to rapid ocular segregation: many RGCs within the same retina are simultaneously active. An intermediate width, however, leads to lower topographic error: wide waves cause spurious correlations, while narrow waves don't provide enough information about neighboring RGCs to significantly refine topography.
5 SUMMARY Our biological model differs from more developed models of cortical development in its inclusion of 1) differences in the time of arrival of RGC axons from the two eyes, 2) lack of intra-target (LGN) inhibitory connections, 3) absence of visual stimulation, and 4) inclusion of a growth rule. The model can account for the development of topography and ocular dominance layering in studies of normal and experimentally-treated cats, and makes predictions concerning the role of retinal waves in both segregation and topography. These neurobiological experiments are currently underway.
Acknowledgements Thanks to Michael Stryker for helpful suggestions and to Steven Lisberger for his generous support of this work. References Cline, H.T., Debski, E.A., & Constantine-Paton, M. (1987) "N-methyl-D-aspartate receptor antagonist desegregates eye-specific stripes." PNAS 84: 4342-4345. Meister, M., Wong, R., Baylor, D., & Shatz, C. (1991) "Synchronous Bursts of Action Potentials in Ganglion Cells of the Developing Mammalian Retina." Science 252: 939-943. Miller, K.D., Keller, J.B., & Stryker, M.P. (1989) "Ocular Dominance Column Development: Analysis and Simulation." Science 245: 605-615. Shatz, C.J. (1990) "Competitive Interactions between Retinal Ganglion Cells during Prenatal Development." J. Neurobiol. 21(1): 197-211. Shatz, C.J., & Stryker, M.P. (1988) "Prenatal Tetrodotoxin Infusion Blocks Segregation of Retinogeniculate Afferents." Science 242: 87-89. Shotwell, S.L., Shatz, C.J., & Luskin, M.B. (1986) "Development of Glutamic Acid Decarboxylase Immunoreactivity in the cat's lateral geniculate nucleus." J. Neurosci. 6(5): 1410-1423. Sretavan, D.W., Shatz, C.J., & Stryker, M.P. (1988) "Modification of Retinal Ganglion Cell Morphology by Prenatal Infusion of Tetrodotoxin." Nature 336: 468-471. Whitelaw, V.A., & Cowan, J.D. (1981) "Specificity and plasticity of retinotectal connections: a computational model." J. Neurosci. 1(12): 1369-1387. Wigstrom, H., & Gustafsson, B. (1985) "Presynaptic and postsynaptic interactions in the control of hippocampal long-term potentiation." In P.W. Landfield & S.A. Deadwyler (Eds.) Long-term Potentiation: From Biophysics to Behavior (pp. 73-107). New York: Alan R. Liss. Wolpert, L. (1978) "Pattern Formation in Biological Development." Sci. Amer. 239(4): 154-164. PART II NEURO-DYNAMICS
A Neural Net Model for Adaptive Control of Saccadic Accuracy by Primate Cerebellum and Brainstem Paul Dean, John E. W. Mayhew and Pat Langdon Department of Psychology and Artificial Intelligence Vision Research Unit, University of Sheffield, Sheffield S10 2TN, England. Abstract Accurate saccades require interaction between brainstem circuitry and the cerebellum. A model of this interaction is described, based on Kawato's principle of feedback-error-learning. In the model a part of the brainstem (the superior colliculus) acts as a simple feedback controller with no knowledge of initial eye position, and provides an error signal for the cerebellum to correct for eye-muscle nonlinearities. This teaches the cerebellum, modelled as a CMAC, to adjust appropriately the gain on the brainstem burst-generator's internal feedback loop and so alter the size of burst sent to the motoneurons. With direction-only errors the system rapidly learns to make accurate horizontal eye movements from any starting position, and adapts realistically to subsequent simulated eye-muscle weakening or displacement of the saccadic target. 1 INTRODUCTION The use of artificial neural nets (ANNs) to control robot movement offers advantages in situations where the relevant analytic solutions are unknown, or where unforeseeable changes, perhaps as a result of damage or wear, are likely to occur. It is also a mode of control with considerable similarities to those used in biological systems. It may thus prove possible to use ideas derived from studies of ANNs in robots to help understand how the brain produces movements. This paper describes an attempt to do this for saccadic eye movements.
595 596 Dean, Mayhew, and Langdon
The structure of the human retina, with its small foveal area of high acuity, requires extensive use of eye-movements to inspect regions of interest.
To minimise the time during which the retinal image is blurred, these saccadic refixation movements are very rapid - too rapid for visual feedback to be used in acquiring the target (Carpenter 1988). The saccadic control system must therefore know in advance the size of control signal to be sent to the eye muscles. This is a function of both target displacement from the fovea and initial eye-position. The latter is important because the eye-muscles and orbital tissues are elastic, so that more force is required to move the eye away from the straight-ahead position than towards it (Collins 1975). Similar rapid movements may be required of robot cameras. Here too the desired control signal is usually a function of both target displacement and initial camera positions. Experiments with a simulated four degree-of-freedom stereo camera rig have shown that appropriate ANN architectures can learn this kind of function reasonably efficiently (Dean et al. 1991), provided the nets are given accurate error information. However, this information is only available if the relevant equations have been solved; how can ANNs be used in situations where this is not the case? A possible solution to this kind of problem (derived in part from analysis of biological motor control systems) has been suggested by Kawato (1990), and was implemented for the simulated stereo camera rig (Fig 1). Two controllers are arranged in parallel.
Fig 1: Control architecture for robot saccades
Target coordinates, together with information about camera positions, are passed to an adaptive feedforward controller in the form of an ANN, which then moves the cameras.
If the movement is inaccurate, the new target coordinates are passed to the second controller. This knows nothing of initial camera position, but issues a corrective movement command that is simply proportional to target displacement. In the absence of the adaptive controller it can be used to home in on the target with a series of saccades: though each individual saccade is ballistic, the sequence is generated by visual feedback, hence the term simple feedback controller. When the adaptive controller is present, however, the output of the simple feedback controller can be used not only to generate a corrective saccade but also as a motor error signal (Fig 1). Although this error signal is not accurate, its imperfections become less important as the ANN learns and so takes on more responsibility for the movement (for proof of convergence see Kawato 1990). The architecture is robust in that it learns on-line, does not require mathematical knowledge, and still functions to some extent when the adaptive controller is untrained or damaged. These qualities are also important for control of saccades in biological systems, and it is therefore of interest that there are similarities between the architecture shown in Fig 1 and the structure of the primate saccadic system (Fig 2).
Fig 2: Schematic diagram of major components of primate saccadic control system (NPH = nucleus prepositus hypoglossi; NRTP = nucleus reticularis tegmenti pontis)
The cerebellum is widely (though not universally) regarded as an adaptive controller, and when the relevant part of it is damaged the remaining brainstem structures function like the simple feedback controller of Fig 1. Saccades can still be made, but (i) they are not accurate; (ii) the degree of inaccuracy depends on initial eye position; (iii) multiple saccades are required to home in on the target; and (iv) the system never recovers (eg Ritchie 1976; Optican and Robinson 1980). These similarities suggest that it is worth exploring the idea that the brainstem teaches the cerebellum to make accurate saccades (cf Grossberg and Kuperstein 1986), just as the simple feedback controller teaches the adaptive controller in the Kawato architecture. A model of the primate system was therefore constructed, using 'off-the-shelf' components wired together in accordance with known anatomy and physiology, and its performance assessed under a variety of conditions. 2 STRUCTURE OF MODEL The overall structure of the model is shown in Fig 3. It has three main components: a simple feedback controller, a burst generator, and a CMAC.
Figure 3: Main components of the model. The corresponding biological structures are shown in italics and dotted lines.
The simple feedback controller sends a signal proportional to target displacement from the fovea to the burst generator.
The function of the burst generator is to translate this signal into an appropriate command for the eye muscles, and it is based here on the model of Robinson (Robinson 1975; van Gisbergen et al. 1981). Its output is a rapid burst of neural impulses, the frequency of which is essentially a velocity command. A crucial feature of Robinson's model is an internal feedback loop, in which the output of the generator is integrated and compared with the input command. The saccade terminates when the two are equal. This system ensures that the generator gives the output matching the input command in the face of disturbances that might alter burst frequency and hence saccade velocity. The simple feedback controller sends to the CMAC (Albus 1981) a copy of its command to the burst generator. The CMAC (Cerebellar Model Arithmetic Computer) is a neural net model of the cerebellum incorporating theories of cerebellar function proposed independently by Marr (1969) and Albus (1971). Its function is to learn a mapping between a multidimensional input and a single-valued output, using a form of lookup table with local interpolation. The entries in the lookup table are modified using the delta rule, by an error signal which is either the difference between desired and actual output or some estimate of that difference. CMACs have been used successfully in a number of applications concerning prediction or control (eg Miller et al. 1987; Hormel 1990). In the present case the function to be learnt is that relating desired saccade amplitude and initial eye position (inputs) to gain adjustment in the internal feedback loop of the burst generator (output). The correspondences between the model structure and the anatomy and physiology of the primate saccadic system are as follows. (1) The simple feedback controller represents the superior colliculus.
(2) The burst generator corresponds to groups of neurons located in the brainstem. (3) The CMAC models a particular region of cerebellar cortex, the posterior vermis. (4) The pathway conveying a copy of the feedback controller's crude command corresponds to the projection from the superior colliculus to the nucleus reticularis tegmenti pontis, which in turn sends a mossy fibre projection to the posterior vermis. Space precludes detailed evaluation of the substantial evidence supporting the above correspondences (see eg Wurtz and Goldberg 1989). The remaining two connections have a less secure basis. (5) The idea that the cerebellum adjusts saccadic accuracy by altering feedback gains in the burst generator is based on stimulation evidence (Keller 1989); details of the projection, including its anatomy, are not known. (6) The error pathway from feedback controller to CMAC is represented by the anatomically identified projection from superior colliculus to inferior olive, and thence via climbing fibres to the posterior vermis. There is considerable debate concerning the functional role of climbing fibres, and in the case of the tecto-olivary projection the relevant physiological evidence appears to be lacking. 3 PERFORMANCE OF MODEL The system shown in Fig 3 was trained to make horizontal movements only. The size of burst ΔI (arbitrary units) required to produce an accurate rightward saccade of Δθ deg was calculated from Van Gisbergen and Van Opstal's (1989) analysis of the nonlinear relationship between eye position and muscle position as ΔI = a[Δθ² + Δθ(b + 2θ)] (1) where θ is initial eye-position (measured in deg from extreme leftward eye-position) and a and b are constants. In the absence of the CMAC, the feedback controller and burst generator produce a burst of size ΔI = x ·
(c/d) (2) where x is the rightward horizontal displacement of the target, c is the gain constant of the feedback controller, and d a constant related to the fixed gain of the internal feedback loop of the burst generator. The kinematics of the eye are such that x (measured in deg of visual angle) is equal to Δθ. The constants were chosen so that the performance of the system without the CMAC resembled that of the primate saccadic system after cerebellar damage (Fig 4A), namely position-dependent overshoot (eg Ritchie 1976; Optican and Robinson 1980).
Fig 4. Performance of model under different conditions before and after training (A: No cerebellum; B: Infant; C: Trained; curves for starting positions -40, -20, 0 and +20 deg right, plotted against saccade amplitude (deg.); overshoot, accurate and undershoot regions marked).
When the CMAC is present, the size of burst changes to ΔI = x · [c/(g + d)] (3) where g is the output of the CMAC. This was initialised to a value that produced a degree of saccadic undershoot (Fig 4B) characteristic of initial performance in human infants (eg Aslin 1987). Training data were generated as 50,000 pairs of random numbers representing the initial position of the eye and the location of the target respectively. Each pair had to satisfy the constraints that (i) both lay within the oculomotor range (45 deg on either side of midline) and (ii) the target lay to the right of the starting position. For the test data the starting position varied from 40 deg left to 30 deg right in 10 degree steps.
For each starting position there was a series of targets, starting at 5 deg to the right of the start and increasing in 5 degree steps up to 40 deg to the right of midline (a subset of the test data was used in Fig 4). The main measure of performance was the absolute gain error (ie the difference between the actual gain and 1.0, always taken as positive) averaged over the test set. The configuration of the CMAC was examined in pilot experiments. The CMAC coarse-codes its inputs, so that for a given resolution r, an input span of s can be represented as a set of m measurement grids each dividing the input span into n compartments, where s/r = m·n. Combinations of m and n were examined, using perfect error feedback. A reasonable compromise between learning speed and asymptotic accuracy was achieved by using 10 coarse-coding grids each with 10x10 resolution (for the two input dimensions), giving a total of 1000 memory cells. The main part of the study investigated first the effects of degrading the quality of the error feedback on learning. The main conclusion was that efficient learning could be obtained if the CMAC were told only the direction of the error, ie overshoot versus undershoot. This information was used to increase by a small fixed amount the weights in the activated cells (thereby producing increased gain in the internal feedback loop) when the saccade was too large, and to decrease them similarly when it was too small. Appropriate choice of learning rate gave a realistic overall error of 5% (Fig 4C) after about 2000 trials. Direct comparison with learning rates of human infants, who take several months to achieve accuracy, is confounded by such factors as the maturation of the retina (Aslin 1987). Learning parameters were then kept constant, and the model tested with simulations of two different conditions that produce saccadic plasticity in adult humans.
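The direction-only learning scheme described above can be sketched in code. This is a reconstruction under stated assumptions, not the authors' program: the CMAC uses 10 offset 10x10 grids as in the paper, but the plant and controller constants A, B, C, D, the grid-offset scheme, the sampling ranges, and the step size are all illustrative.

```python
# Hedged sketch of direction-only CMAC training. Eye position theta is
# measured in deg from the extreme leftward position (span 0..90); all
# numeric constants below are assumptions, not the paper's values.
import math, random

random.seed(0)
GRIDS, RES, SPAN = 10, 10, 90.0
A, B, C, D = 0.02, 10.0, 1.0, 1.0            # assumed plant/controller constants

class CMAC:
    def __init__(self):
        self.w = [[[0.0] * RES for _ in range(RES)] for _ in range(GRIDS)]
    def cells(self, x, theta):
        for g in range(GRIDS):
            off = g * SPAN / (RES * GRIDS)   # each grid shifted a fraction of a cell
            i = min(int((x + off) / SPAN * RES), RES - 1)
            j = min(int((theta + off) / SPAN * RES), RES - 1)
            yield g, i, j
    def output(self, x, theta):
        return sum(self.w[g][i][j] for g, i, j in self.cells(x, theta))
    def nudge(self, x, theta, step):         # direction-only delta rule
        for g, i, j in self.cells(x, theta):
            self.w[g][i][j] += step

def saccade(x, theta, g):
    """Burst via eq. (3), movement by inverting eq. (1)."""
    burst = x * C / max(g + D, 0.1)          # guard keeps the gain positive
    k = B + 2 * theta
    return (-k + math.sqrt(k * k + 4 * burst / A)) / 2

net, STEP = CMAC(), 0.002
for _ in range(20000):
    theta = random.uniform(0, 85)            # rightward start within the range
    x = random.uniform(1, 90 - theta)        # rightward target displacement
    moved = saccade(x, theta, net.output(x, theta))
    net.nudge(x, theta, STEP if moved > x else -STEP)  # overshoot -> raise gain
```

After training, the learned gain compensates for the position-dependent muscle nonlinearity, so saccadic gain stays near 1.0 across starting positions.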
One involved the effects of weakening the rightward-pulling eye muscle in one eye. In people, the weakened eye can be trained by covering the normal eye with a patch, an effect which experiments with monkeys indicate depends on the cerebellum (Optican and Robinson 1980). For the model, eye-weakening was simulated by increasing the constant a in equation (1) such that the trained system gave an average gain of about 0.5. Retraining required about 400-500 trials. Testing the previously normal eye (i.e. with the original value of a) showed that it now overshot, as is also the case in patients and experimental animals. Again normal performance was restored after 400-500 trials. These learning rates compare favourably with those observed in experimental animals. Finally, the second simulation of adult saccadic plasticity concerned the effects of moving the target during a saccade. If the target is moved in the opposite direction to its original displacement the saccade will overshoot, but after a few trials adaptation occurs and the saccade becomes 'accurate' once more. Simulation of the procedure used by Deubel et al. (1986) gave system adaptation rates similar to those observed experimentally in people.

4 CONCLUSIONS
These results indicate that the model can account in general terms for the acquisition and maintenance of saccadic accuracy in primates (at least in one dimension). In addition to its general biologically attractive properties, the model's structure is consistent with current anatomical and physiological knowledge, and offers testable predictions about the functions of the hitherto mysterious projections from superior colliculus to posterior vermis. If these predictions are supported by experimental evidence, it would be appropriate to extend the model to incorporate greater physiological detail, for example concerning the precise location(s) of cerebellar plasticity.
Acknowledgements
This work was supported by the Joint Council Initiative in Cognitive Science.

Dean, Mayhew, and Langdon

References
Albus, J.A. (1971) A theory of cerebellar function. Math. Biosci. 10: 25-61.
Albus, J.A. (1981) Brains, Behavior and Robotics. BYTE Books (McGraw-Hill), Peterborough, New Hampshire.
Aslin, R.N. (1987) Motor aspects of visual development in infancy. In: Handbook of Infant Perception, eds. P. Salapatek and L. Cohen. Academic Press, New York, pp. 43-113.
Collins, C.C. (1975) The human oculomotor control system. In: Basic Mechanisms of Ocular Motility and their Clinical Implications, eds. G. Lennerstrand and P. Bach-y-Rita. Pergamon Press, Oxford, pp. 145-180.
Dean, P., Mayhew, J.E.W., Thacker, T. and Langdon, P. (1991) Saccade control in a simulated robot camera-head system: neural net architectures for efficient learning of inverse kinematics. Biol. Cybern. 66: 27-36.
Deubel, H., Wolf, W. and Hauske, G. (1986) Adaptive gain control of saccadic eye movements. Human Neurobiol. 5: 245-253.
Grossberg, S. and Kuperstein, M. (1986) Neural Dynamics of Adaptive Sensory-Motor Control: Ballistic Eye Movements. Elsevier, Amsterdam.
Hormel, M. (1990) A self-organising associative memory system for control applications. In: Advances in Neural Information Processing Systems 2, ed. D.S. Touretzky. Morgan Kaufmann, San Mateo, California, pp. 332-339.
Kawato, M. (1990) Feedback-error-learning neural network for supervised motor learning. In: Advanced Neural Computers, ed. R. Eckmiller. Elsevier, Amsterdam, pp. 365-372.
Keller, E.L. (1989) The cerebellum. In: The Neurobiology of Saccadic Eye Movements, eds. Wurtz, R.H. and Goldberg, M.E. Elsevier Science Publishers, North Holland, pp. 391-411.
Marr, D. (1969) A theory of cerebellar cortex. J. Physiol. 202: 437-470.
Miller, W.T. III, Glanz, F.H. and Kraft, L.G. III (1987) Application of a general learning algorithm to the control of robotic manipulators. Int. J. Robotics Res. 6: 84-98.
Optican, L.M.
and Robinson, D.A. (1980) Cerebellar-dependent adaptive control of primate saccadic system. J. Neurophysiol. 44: 1058-1076.
Ritchie, L. (1976) Effects of cerebellar lesions on saccadic eye movements. J. Neurophysiol. 39: 1246-1256.
Robinson, D.A. (1975) Oculomotor control signals. In: Basic Mechanisms of Ocular Motility and their Clinical Implications, eds. Lennerstrand, G. and Bach-y-Rita, P. Pergamon Press, Oxford, pp. 337-374.
Van Gisbergen, J.A.M., Robinson, D.A. and Gielen, S. (1981) A quantitative analysis of generation of saccadic eye movements by burst neurons. J. Neurophysiol. 45: 417-442.
Van Gisbergen, J.A.M. and van Opstal, A.J. (1989) Models. In: The Neurobiology of Saccadic Eye Movements, eds. Wurtz, R.H. and Goldberg, M.E. Elsevier Science Publishers, North Holland, pp. 69-101.
Wurtz, R.H. and Goldberg, M.E. (1989) The Neurobiology of Saccadic Eye Movements. Elsevier Science Publishers, North Holland.
1991
112
444
Segmentation Circuits Using Constrained Optimization
John G. Harris*
MIT AI Lab
545 Technology Sq., Rm 767
Cambridge, MA 02139

Abstract
A novel segmentation algorithm has been developed utilizing an absolute-value smoothness penalty instead of the more common quadratic regularizer. This functional imposes a piece-wise constant constraint on the segmented data. Since the minimized energy is guaranteed to be convex, there are no problems with local minima and no complex continuation methods are necessary to find the unique global minimum. By interpreting the minimized energy as the generalized power of a nonlinear resistive network, a continuous-time analog segmentation circuit was constructed.

1 INTRODUCTION
Analog hardware has obvious advantages in terms of its size, speed, cost, and power consumption. Analog chip designers, however, should not feel constrained to mapping existing digital algorithms to silicon. Many times, new algorithms must be adapted or invented to ensure efficient implementation in analog hardware. Novel analog algorithms embedded in the hardware must be simple and obey the natural constraints of physics. Much algorithm intuition can be gained from experimenting with these continuous-time nonlinear systems. For example, the algorithm described in this paper arose from experimentation with existing analog segmentation hardware. Surprisingly, many of these "analog" algorithms may prove useful even if a computer vision researcher is limited to simulating the analog hardware on a digital computer [7].

* A portion of this work is part of a Ph.D. dissertation at Caltech [7].

2 ABSOLUTE-VALUE SMOOTHNESS TERM
Rather than deal with systems that have many possible stable states, a network that has a unique stable state will be studied. Consider a network that minimizes:

E(u) = 1/2 Σ_i (d_i − u_i)² + λ Σ_i |u_{i+1} − u_i|    (2)

The absolute-value
function is used for the smoothness penalty instead of the more familiar quadratic term. There are two intuitive reasons why the absolute-value penalty is an improvement over the quadratic penalty for piece-wise constant segmentation. First, for large values of |u_i − u_{i+1}|, the penalty is not as severe, which means that edges will be smoothed less. Second, small values of |u_i − u_{i+1}| are penalized more than they are in the quadratic case, resulting in a flatter surface between edges. Since no complex continuation or annealing methods are necessary to avoid local minima, this computational model is of interest to vision researchers independent of any hardware implications.

This method is very similar to constrained optimization methods discussed by Platt [14] and Gill [4]. Under this interpretation, the problem is to minimize Σ_i (d_i − u_i)² with the constraint that u_i = u_{i+1} for all i. Equation 1 is an instance of the penalty method: as λ → ∞, the constraint u_i = u_{i+1} is fulfilled exactly. The absolute-value penalty function given in Equation 2 is an example of a nondifferentiable penalty; the constraint u_i = u_{i+1} is fulfilled exactly for a finite value of λ. However, unlike typical constrained optimization methods, this application requires some of these "exact" constraints to fail (at discontinuities) and others to be fulfilled.

This algorithm also resembles techniques in robust statistics, a field pioneered and formalized by Huber [9]. The need for robust estimation techniques in visual processing is clear, since a single outlier may cause wild variations in standard regularization networks which rely on quadratic data constraints [17]. Rather than use quadratic data constraints, robust regression techniques tend to limit the influence of outlier data points.² The absolute-value function is one method commonly used to reduce outlier susceptibility.
In fact, the absolute-value network developed in this paper is a robust method if discontinuities in the data are interpreted as outliers. The line-process or resistive-fuse networks can also be interpreted as robust methods using more complex influence functions.

3 ANALOG MODELS
As pointed out by Poggio and Koch [15], the notion of minimizing power in linear networks implementing quadratic "regularized" algorithms must be replaced by the more general notion of minimizing the total resistor co-content [13] for nonlinear networks. For a voltage-controlled resistor characterized by I = f(V), the co-content is defined as

J(V) = ∫₀^V f(V') dV'    (3)

²Outlier detection techniques have been mapped to analog hardware [8].

[Figure 1: Nonlinear resistive network for piece-wise constant segmentation.]

One-dimensional surface interpolation from dense data will be used as the model problem in this paper, but these techniques generalize to sparse data in multiple dimensions. A standard technique for smoothing or interpolating noisy inputs d_i is to minimize an energy¹ of the form:

E(u) = 1/2 Σ_i (d_i − u_i)² + λ Σ_i (u_{i+1} − u_i)²    (1)

The first term ensures that the solution u_i will be close to the data while the second term implements a smoothness constraint. The parameter λ controls the tradeoff between the degree of smoothness and the fidelity to the data. Equation 1 can be interpreted as a regularization method [1] or as the power dissipated in the linear version of the resistive network shown in Figure 1 [16]. Since the energy given by Equation 1 oversmoothes discontinuities, numerous researchers (starting with Geman and Geman [3]) have modified Equation 1 with line processes and successfully demonstrated piece-wise smooth segmentation. In these methods, the resultant energy is nonconvex, and complex annealing or continuation methods are required to converge to a good local minimum of the energy space.
This problem is solved using probabilistic [11] or deterministic annealing techniques [2, 10]. Line-process discontinuities have been successfully demonstrated in analog hardware using resistive fuse networks [5], but continuation methods are still required to find a good solution [6].

¹The term energy is used throughout this paper as a cost functional to be minimized. It does not necessarily relate to any true energy dissipated in the real world.

[Figure 2: Examples of tiny-tanh network simulation for varying δ. The I-V characteristic of the saturating resistors is I = λ tanh(V/δ). (a) A synthetic 1.0V tower image with additive Gaussian noise of σ = 0.3V, which is input to the network. The network outputs are shown for (b) δ = 100mV, (c) δ = 10mV and (d) δ = 1mV. For all simulations λ = 1.]

[Figure 3: Tiny-tanh circuit. The saturating tanh characteristic is measured between nodes V1 and V2. Controls FR and VG set the conductance and saturation voltage for the device.]

For a linear resistor, I = GV, the co-content is given by (1/2)GV², which is half the dissipated power P = GV². The absolute-value functional in Equation 2 is not strictly convex. Also, since the absolute-value function is nondifferentiable at the origin, hardware and software methods of solution will be plagued with instabilities and oscillations. We approximate Equation 2 with the following well-behaved convex co-content:

E(u) = 1/2 Σ_i (d_i − u_i)² + λδ Σ_i ln cosh((u_{i+1} − u_i)/δ)    (4)

The co-content becomes the absolute-value cost function in Equation 2 in the limiting case as δ → 0.
The derivative of Equation 4 yields Kirchhoff's current equation at each node of the resistive network in Figure 1:

(u_i − d_i) + λ tanh((u_i − u_{i+1})/δ) + λ tanh((u_i − u_{i−1})/δ) = 0    (5)

Therefore, construction of this network requires a nonlinear resistor with a hyperbolic tangent I-V characteristic with an extremely narrow linear region. For this reason, this element is called the tiny-tanh resistor. This saturating resistor is used as the nonlinear element in the resistive network shown in Figure 1. Its I-V characteristic is I = λ tanh(V/δ). It is well known that any circuit made of independent voltage sources and two-terminal resistors with strictly increasing I-V characteristics has a unique stable state.

4 COMPUTER SIMULATIONS
Figure 2a shows a synthetic 1.0V tower image with additive Gaussian noise of σ = 0.3V. Figure 2b shows the simulated result for δ = 100mV and λ = 1. As Mead has observed, a network of saturating resistors has a limited segmentation effect [12]. Unfortunately, as seen in the figure, noise is still evident in the output, and the curves on either side of the step have started to slope toward one another. As λ is increased to further smooth the noise, the two sides of the step will blend together into one homogeneous region. However, as the width of the linear region of the saturating resistor is reduced, network segmentation properties are greatly enhanced. Segmentation performance improves for δ = 10mV, shown in Figure 2c, and further improves for δ = 1mV in Figure 2d. The best segmentation occurs when the I-V curve resembles a step function, and the co-content, therefore, approximates an absolute value. Decreasing δ below 1mV shows no discernible change in the output.³ One drawback of this network is that it does not recover the exact heights of input steps. Rather, it subtracts a constant from the height of each input.
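The relaxation defined by Equations 4 and 5 is easy to simulate in software. The sketch below, with illustrative parameter values and simple in-place gradient descent (boundary nodes have only one neighbor), is an assumption-laden stand-in for the analog network, not the author's code:

```python
import math, random

def segment(d, lam=1.0, delta=0.01, rate=0.002, iters=10000):
    # Gradient descent on E(u) = 1/2*sum((d_i - u_i)^2)
    #                          + lam*delta*sum(ln cosh((u_{i+1} - u_i)/delta)),
    # whose stationarity condition is Kirchhoff's current law, Eq. 5.
    u = list(d)
    n = len(u)
    for _ in range(iters):
        for i in range(n):
            g = u[i] - d[i]                              # data term
            if i + 1 < n:
                g += lam * math.tanh((u[i] - u[i + 1]) / delta)
            if i > 0:
                g += lam * math.tanh((u[i] - u[i - 1]) / delta)
            u[i] -= rate * g
    return u

random.seed(0)
# A 1.0V "tower" on a 0V background with sigma = 0.3V Gaussian noise,
# a small 1-D analog of the input in Figure 2a.
d = [(1.0 if 8 <= i < 16 else 0.0) + random.gauss(0.0, 0.3) for i in range(24)]
u = segment(d)
```

The interior of each region flattens while the 1V discontinuities survive; the recovered tower height is pulled toward the background rather than reproduced exactly, matching the behavior described for the hardware.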
It is straightforward to show that the amount each uniform region is pulled towards the background is given by λ·(perimeter/area) [7]. Significant features with large area/perimeter ratios will retain their original height. Noise points have small area/perimeter ratios and therefore will be pulled towards the background. Typically, the exact values of the heights are less important than the location of the discontinuities. Furthermore, it would not be difficult to construct a two-stage network to recover the exact values of the step heights if desired. In this scheme a tiny-tanh network would control the switches on a second fuse network.

5 ANALOG IMPLEMENTATION
Mead has constructed a CMOS saturating resistor with an I-V characteristic of the form I = λ tanh(V/δ), where δ must be larger than 50mV because of fundamental physical limitations [12]. Simulation results from Section 4 suggest that for a tower of height h to be segmented, h/δ must be at least on the order of 1000. Therefore a network using Mead's saturating resistor (δ = 50mV) could segment a tower on the order of 50V, which is much too large a voltage to input to these chips. Furthermore, since we are typically interested in segmenting images into more than two levels, even higher voltages would be required. The tiny-tanh circuit (shown in Figure 3) builds upon an older version of Mead's saturating resistor [18], using a gain stage to decrease the linear region of the device. This device can be made to saturate at voltages as low as 5mV.

³These simulations were also used to smooth and segment noisy depth data from a correlation-based stereo algorithm run on real images [7].

Figure 4: Measured segmentation performance of the tiny-tanh network for a step (left panel: chip input; right panel: segmented step; vertical scale in volts). The input shown on the left is about
a 1V step. The output shown on the right is a segmented step about 0.5V in height.

By implementing the nonlinear resistors in Figure 1 with the tiny-tanh circuit, a 1D segmentation network was successfully fabricated and tested. Figure 4 shows the segmentation which resulted when a step (about 1V) was scanned into the chip. The segmented step has been reduced to about 0.5V. No special annealing methods were necessary because a convex energy is being minimized.

6 CONCLUSION
A novel energy functional was developed for piece-wise constant segmentation.⁴ This computational model is of interest to vision researchers independent of any hardware implications, because a convex energy is minimized. In sharp contrast to previous solutions of this problem, no complex continuation or annealing methods are necessary to avoid local minima. By interpreting this Lyapunov energy as the co-content of a nonlinear circuit, we have built and demonstrated the tiny-tanh network, a continuous-time segmentation network in analog VLSI.

Acknowledgements
Much of this work was performed at Caltech with the support of Christof Koch and Carver Mead. A Hughes Aircraft graduate student fellowship and an NSF postdoctoral fellowship are gratefully acknowledged.

⁴This work has also been extended to segment piece-wise linear regions, instead of the purely piece-wise constant processing discussed in this paper [7].

References
[1] M. Bertero, T. Poggio, and V. Torre. Ill-posed problems in early vision. Proc. IEEE, 76:869-889, 1988.
[2] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, Cambridge, MA, 1987.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6:721-741, 1984.
[4] P. E. Gill, W. Murray, and M. H. Wright. Practical Optimization. Academic Press, 1981.
[5] J. G. Harris, C. Koch, and J. Luo.
A two-dimensional analog VLSI circuit for detecting discontinuities in early vision. Science, 248:1209-1211, 1990.
[6] J. G. Harris, C. Koch, J. Luo, and J. Wyatt. Resistive fuses: analog hardware for detecting discontinuities in early vision. In C. Mead and M. Ismail, editors, Analog VLSI Implementations of Neural Systems. Kluwer, Norwell, MA, 1989.
[7] J. G. Harris. Analog models for early vision. PhD thesis, California Institute of Technology, Pasadena, CA, 1991. Dept. of Computation and Neural Systems.
[8] J. G. Harris, S. C. Liu, and B. Mathur. Discarding outliers in a nonlinear resistive network. In International Joint Conference on Neural Networks, pages 501-506, Seattle, WA, July 1991.
[9] P. J. Huber. Robust Statistics. J. Wiley & Sons, 1981.
[10] C. Koch, J. Marroquin, and A. Yuille. Analog "neuronal" networks in early vision. Proc. Natl. Acad. Sci. USA, 83:4263-4267, 1987.
[11] J. Marroquin, S. Mitter, and T. Poggio. Probabilistic solution of ill-posed problems in computational vision. J. Am. Statist. Assoc., 82:76-89, 1987.
[12] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[13] W. Millar. Some general theorems for non-linear systems possessing resistance. Phil. Mag., 42:1150-1160, 1951.
[14] J. Platt. Constraint methods for neural networks and computer graphics. Dept. of Computer Science Technical Report Caltech-CS-TR-89-07, California Institute of Technology, Pasadena, CA, 1990.
[15] T. Poggio and C. Koch. An analog model of computation for the ill-posed problems of early vision. Technical report, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1984. AI Memo No. 783.
[16] T. Poggio and C. Koch. Ill-posed problems in early vision: from computational theory to analogue networks. Proc. R. Soc. Lond. B, 226:303-323, 1985.
[17] B. G. Schunck. Robust computational vision. In Robust Methods in Computer Vision Workshop, 1989.
[18] M. A. Sivilotti, M. A. Mahowald, and C. A. Mead.
Real-time visual computation using analog CMOS processing arrays. In 1987 Stanford Conference on Very Large Scale Integration, Cambridge, MA, 1987. MIT Press.
1991
113
445
Rule Induction through Integrated Symbolic and Subsymbolic Processing
Clayton McMillan, Michael C. Mozer, Paul Smolensky
Department of Computer Science and Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

Abstract
We describe a neural network, called RuleNet, that learns explicit, symbolic condition-action rules in a formal string manipulation domain. RuleNet discovers functional categories over elements of the domain, and, at various points during learning, extracts rules that operate on these categories. The rules are then injected back into RuleNet and training continues, in a process called iterative projection. By incorporating rules in this way, RuleNet exhibits enhanced learning and generalization performance over alternative neural net approaches. By integrating symbolic rule learning and subsymbolic category learning, RuleNet has capabilities that go beyond a purely symbolic system. We show how this architecture can be applied to the problem of case-role assignment in natural language processing, yielding a novel rule-based solution.

1 INTRODUCTION
We believe that neural networks are capable of more than pattern recognition; they can also perform higher cognitive tasks which are fundamentally rule-governed. Further, we believe that they can perform higher cognitive tasks better if they incorporate rules rather than eliminate them. A number of well-known cognitive models, particularly of language, have been criticized for going too far in eliminating rules in fundamentally rule-governed domains. We argue that with a suitable choice of high-level, rule-governed task, representation, processing architecture, and learning algorithm, neural networks can represent and learn rules involving higher-level categories while simultaneously learning those categories.
The resulting networks can exhibit better learning and task performance than neural networks that do not incorporate rules, and have capabilities that go beyond those of a purely symbolic rule-learning algorithm.

We describe an architecture, called RuleNet, which induces symbolic condition-action rules in a string mapping domain. In the following sections we describe this domain, the task and network architecture, simulations that demonstrate the potential for this approach, and finally, future directions of the research leading toward more general and complex domains.

2 DOMAIN
We are interested in domains that map input strings to output strings. A string consists of n slots, each containing a symbol. For example, the string abcd contains the symbol c in slot 3. The domains we have studied are intrinsically rule-based, meaning that the mapping function from input to output strings can be completely characterized by explicit, mutually exclusive condition-action rules. These rules are of the general form "if certain symbols are present in the input then perform a certain mapping from the input slots to the output slots." The conditions do not operate directly on the input symbols, but rather on categories defined over the input symbols. Input symbols can belong to multiple categories. For example, the words boy and girl are instances of the higher-level category HUMAN. We denote instances with lowercase bold font, and categories with uppercase bold font. It should be apparent from context whether a letter string refers to a single instance, such as boy, or a string of instances, such as abcd. Three types of conditions are allowed: 1) a simple condition, which states that an instance of some category must be present in a particular slot of the input string, 2) a conjunction of two simple conditions, and 3) a disjunction of two simple conditions.
A typical condition might be that an instance of the category W must be present in slot 1 of the input string and an instance of category Y must be present in slot 3. The action performed by a rule produces an output string in which the content of each slot is either a fixed symbol or a function of a particular input slot, with the additional constraint that each input slot maps to at most one output slot. In the present work, this function of the input slots is the identity function. A typical action might be to switch the symbols in slots 1 and 2 of the input, replace slot 3 with the symbol a, and copy slot 4 of the input to the output string unchanged, e.g., abcd → baad. We call rules of this general form second-order categorical permutation (SCP) rules. The number of rules grows exponentially with the length of the strings and the number of input symbols. An example of an SCP rule for strings of length four is:

if (input1 is an instance of W and input3 is an instance of Y) then (output1 = input2, output2 = input1, output3 = a, output4 = input4)

where input_α and output_β denote input slot α and output slot β, respectively. As a shorthand for this rule, we write [A W_Y_ 21a4], where the square brackets indicate this is a rule, the "A" denotes a conjunctive condition, and the "_" denotes a wildcard symbol. A disjunction is denoted by "v". This formal string manipulation task can be viewed as an abstraction of several interesting cognitive models in the connectionist literature, including case-role assignment (McClelland & Kawamoto, 1986), grapheme-phoneme mapping (Sejnowski & Rosenberg, 1987), and mapping verb stems to the past tense (Rumelhart & McClelland, 1986).
[Figure 1: The RuleNet architecture. Legend: single units, layers of units, complete connectivity, gating connections; m condition units, n pools of v category units, n pools of u hidden units.]

3 TASK
RuleNet's task is to induce a compact set of rules that accurately characterizes a set of training examples. We generate training examples using a predefined rule base. The rules are over strings of length four and alphabets which are subsets of {a, b, c, d, e, f, g, h, i, j, k, l}. For example, the rule [v Y_W_ 4h21] may be used to generate the exemplars hedk→kheh, cldk→khlc, gbdj→jhbg and gdbk→khdg, where category W consists of a, b, c, d, i, and category Y consists of f, g, h. Such exemplars form the corpus used to train RuleNet. Exemplars whose input strings meet the conditions of several rules are excluded. RuleNet's task is twofold: it must discover the categories solely based upon the usage of their instances, and it must induce rules based upon those categories. The rule bases used to generate examples are minimal in the sense that no smaller set of rules could have produced the examples. Therefore, in our simulations the target number of rules to be induced is the same as the number used to generate the training corpus. There are several traditional, symbolic systems, e.g., COBWEB (Fisher, 1987), that induce rules for classifying inputs based upon training examples. It seems likely that, given the correct representation, a system such as COBWEB could learn rules that would classify patterns in our domain. However, it is not clear whether such a system could also learn the action associated with each class. Classifier systems (Booker et al., 1989) learn both conditions and actions, but there is no obvious way to map a symbol in slot α of the input to slot β of the output.
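The rule machinery is easy to make concrete. The sketch below hard-codes the example rule; the exact reading of the shorthand condition (slot 1 in Y, or slot 3 in W) is inferred from the exemplars above, so treat this as an illustration rather than the authors' encoding:

```python
# One SCP rule from the example: categories W and Y, a disjunctive
# condition, and the action "4h21" (output = input4, fixed 'h', input2, input1).

W = set("abcdi")   # category W from the example
Y = set("fgh")     # category Y from the example

def rule_applies(s):
    # Disjunctive condition: slot 1 holds an instance of Y,
    # or slot 3 holds an instance of W (inferred reading).
    return s[0] in Y or s[2] in W

def rule_action(s):
    # Action "4h21": copy input slot 4, write the fixed symbol h,
    # then copy input slots 2 and 1.
    return s[3] + "h" + s[1] + s[0]

exemplars = [("hedk", "kheh"), ("cldk", "khlc"),
             ("gbdj", "jhbg"), ("gdbk", "khdg")]
for inp, out in exemplars:
    assert rule_applies(inp) and rule_action(inp) == out
```

The action is a pure slot permutation plus a fixed symbol, which is exactly the structure the action weight templates of Section 4.1 encode as block matrices.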
We have also devised a greedy combinatoric algorithm for inducing this type of rule, which has a number of shortcomings in comparison to RuleNet. See McMillan (1992) for comparisons of RuleNet and alternative symbolic approaches.

4 ARCHITECTURE
RuleNet can implement SCP rules of the type outlined above. As shown in Figure 1, RuleNet has five layers of units: an input layer, an output layer, a layer of category units, a layer of condition units, and a layer of hidden units. The operation of RuleNet can be divided into three functional components: categorization is performed in the mapping from the input layer to the category layer via the hidden units, the conditions are evaluated in the mapping from the category layer to the condition layer, and actions are performed in
The activation of the output layer, y, is calculated from the input layer, x, as follows: Essentially, the transformation Ai for rule each rule i is applied to the input, and it contributes to the output to the degree that condition i is satisfied. Ideally, just one condition unit will be fully activated by a given input, and the rest will remain inactive. This architecture is based on the local expert architecture of Jacobs, Jordan, Nowlan, and Hinton (1991), but is independently motivated in our work by the demands of the task domain. RuleNet has essentially the same structure as the Jacobs network, where the action substructure of RuleNet corresponds to their local experts and the condition substructure corresponds to their gatillg lIetwork. However, their goal-to minimize crosstalk between logically independent sub tasks-is quite different than ours. 4.1 Weight Templates In order to interpret the weights in RuleNet as symbolic SCP rules, it is necessary to establish a correspondence between regions of weight space and SCP rules. A weight template is a parameterized set of constraints on some weights-a manifold in weight space-that has a direct correspondence to an SCP rule. The strategy behind iterative projection is twofold: constrain gradient descent so that weights stay close to templates in weight space, and periodically project the learned weights to the nearest template, which can then readily be interpreted as a set of SCP rules. For SCP rules, there are three types of weight templates: one dealing with categorization, one with rule conditions, and one with rule actions. Each type of template is defined over a subset of the weights in RuleNet. 
The categorization templates are defined over the weights from input to category units, the condition templates are defined over the weights from category to condition units for each rule i, c_i, and the action templates are defined over the weights from input to output units for each rule i, A_i.

Category templates. The category templates specify that the mapping from each input slot α to category pool α, for 1 ≤ α ≤ n, is uniform. This imposes category invariance across the input string.

Condition templates. The weight vector c_i, which maps category activities to the activity of condition unit i, has vn elements, v being the number of category units per slot and n being the number of slots. The fact that the condition unit should respond to at most one category in each slot implies that at most one weight in each v-element subvector of c_i should be nonzero. For example, assuming there are three categories, N, X, and Y, the vector c_i that detects the simple condition "input2 is an instance of X" is: (000 0φ0 000 000), where φ is an arbitrary parameter. Additionally, a bias is required to ensure that the net input will be negative unless the condition is satisfied. Here, a bias value, b, of −0.5φ will suffice. For disjunctive and conjunctive conditions, weights in two slots should be equal to φ, the rest zero, and the appropriate bias is −0.5φ or −1.5φ, respectively. There is a weight template for each condition type and each combination of slots that takes part in a condition. We generalize these templates further in a variety of ways. For instance, in the case where each input symbol falls into exactly one category, if a constant ε_α is added to all weights of c_i corresponding to slot α and ε_α is also subtracted from b, the net input to condition unit i will be unaffected. Thus, the weight template must include the {ε_α}.

Action templates.
If we wish the actions carried out by the network to correspond to the string manipulations allowed by our rule domain, it is necessary to impose some restrictions on the values assigned to the action weights for rule i, Ai. Ai has an n × n block form, where n is the length of input/output strings. Each block is a k × k submatrix, where k is the number of elements in the representation of each input symbol. The block at block-row β, block-column α of Ai copies inputα to outputβ if it is the identity matrix. Thus, the weight templates restrict each block to being either the identity matrix or the zero matrix. If outputβ is to be a fixed symbol, then block-row β must be all zero except for the output bias weights in block-row β. The weight templates are defined over a submatrix Aiβ, the set of weights mapping the input to an output slot β. There are n+1 templates, one for the mapping of each input slot to the output, and one for the writing of a fixed symbol to the output. An additional constraint that only one block may be nonzero in block-column α of Ai ensures that inputα maps to at most one output slot.

4.2 Constraints on Weight Changes

Recall that the strategy in iterative projection is to constrain weights to be close to the templates described above, in order that they may be readily interpreted as symbolic rules. We use a combination of hard and soft constraints, some of which we briefly describe here. To ensure that during learning every block in Ai approaches the identity or zero matrix, we constrain the off-diagonal terms to be zero and constrain weights along the diagonal of each block to be the same, thus limiting the degrees of freedom to one parameter within each block. All weights in ci except the bias are constrained to positive or zero values.
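The condition and action templates above are easy to instantiate. The helper names below are ours, not the paper's, and the sketch assumes zero-indexed slots and categories:

```python
import numpy as np

def condition_weights(slot, category, n_slots, v, phi=1.0):
    """Simple-condition template: weight phi at (slot, category), bias -0.5*phi,
    so the net input is positive only when that category is active in that slot."""
    c = np.zeros(n_slots * v)
    c[slot * v + category] = phi
    return c, -0.5 * phi

def action_weights(copy_map, n, k):
    """Action template: n x n blocks of size k x k; block (beta, alpha) is the
    identity iff input slot alpha is copied to output slot beta, else zero."""
    A = np.zeros((n * k, n * k))
    for beta, alpha in copy_map.items():   # output slot <- input slot
        A[beta*k:(beta+1)*k, alpha*k:(alpha+1)*k] = np.eye(k)
    return A
```

For the example in the text ("input2 is an instance of X", categories N, X, Y), condition_weights(1, 1, 4, 3, phi) reproduces the vector (000 0φ0 000 000) with bias -0.5φ.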
Two soft constraints are imposed upon the network to encourage all-or-none categorization of input instances: a decay term is used on all weights in ci except the maximum in each slot, and a second cost term encourages binary activation of the category units.

974 McMillan, Mozer, and Smolensky

4.3 Projection

The constraints described above do not guarantee that learning will produce weights that correspond exactly to SCP rules. However, using projection, it is possible to transform the condition and action weights such that the resulting network can be interpreted as rules. The essential idea of projection is to take a set of learned weights, such as ci, and compute values for the parameters in each of the corresponding weight templates such that the resulting weights match the learned weights. The weight template parameters are estimated using a least squares procedure, and the closest template, based upon a Euclidean distance metric, is taken to be the projected weights.

5 SIMULATIONS

We ran simulations on 14 different training sets, averaging the performance of the network over at least five runs with different initial weights for each set. The training data were generated from SCP rule bases containing 2-8 rules and strings of length four. Between four and eight categories were used. Alphabets ranged from eight to 12 symbols. Symbols were represented by either local or distributed activity vectors. Training set sizes ranged from 3-15% of possible examples. Iterative projection involved the following steps: (1) start with one rule (one set of ci-Ai weights), (2) perform gradient descent for 500-5,000 epochs, (3) project to the nearest set of SCP rules and add a new rule. Steps (2) and (3) were repeated until the training set was fully covered. In virtually every run on each data set in which RuleNet converged to a set of rules that completely covered the training set, the rules extracted were exactly the original rules used to generate the training set.
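The projection step of Section 4.3 can be illustrated for the simplest case, where each action block must become either the identity or the zero matrix. The full procedure also fits template parameters such as φ and the {εα} by least squares, which this sketch omits:

```python
import numpy as np

def project_action(A, n, k):
    """Snap each k x k block of a learned action matrix to whichever of the
    two templates (identity or zero matrix) is nearer in Euclidean weight space."""
    P = np.zeros_like(A)
    for beta in range(n):
        for alpha in range(n):
            B = A[beta*k:(beta+1)*k, alpha*k:(alpha+1)*k]
            if np.linalg.norm(B - np.eye(k)) < np.linalg.norm(B):
                P[beta*k:(beta+1)*k, alpha*k:(alpha+1)*k] = np.eye(k)
    return P
```

After projection, each nonzero block can be read off directly as "copy input slot α to output slot β", i.e., as part of an SCP rule.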
In the few remaining runs, RuleNet discovered an equivalent set of rules. It is instructive to examine the evolution of a rule set. The rightmost column of Figure 2 shows a set of five rules over four categories, used to generate 200 exemplars, and the left portion of the Figure shows the evolution of the hypothesis set of rules learned by RuleNet over 20,000 training epochs, projecting every 4000 epochs. At epoch 8000, RuleNet has discovered two rules over two categories, covering 24.5% of the training set. At epoch 12,000, RuleNet has discovered three rules over three categories, covering 52% of the training set. At epoch 20,000, RuleNet has induced five rules over four categories that cover 100% of the training examples.

Figure 2: Evolution of a Rule Set (hypothesis rules and category instances at epochs 8000, 12,000, and 20,000, shown alongside the original rules and categories; the rule and category listings are not legible in this copy)

Table 1: Generalization performance of RuleNet (average of five runs)

                       % of patterns correctly mapped
                      Data Set 1   Data Set 2   Data Set 3   Data Set 4
                      (8 Rules)    (3 Rules)    (3 Rules)    (5 Rules)
Architecture          train test   train test   train test   train test
RuleNet                100  100     100  100     100  100     100  100
Jacobs architecture    100   22     100    7     100   14     100   27
3-layer backprop       100   27     100    7     100   14     100   35
# of patterns in set   120 1635      45 1380      45 1380      75 1995

A close comparison of these rules with the original rules shows that they only differ in the arbitrary labels RuleNet has attached to the categories. Learning rules can greatly enhance generalization.
In cases where RuleNet learns the original rules, it can be expected to generalize perfectly to any pattern created by those rules. We compared the performance of RuleNet to that of a standard three-layer backprop network (with 15 hidden units per rule) and a version of the Jacobs architecture, which in principle has the capacity to perform the task. Four rule bases were tested, and roughly 5% of the possible examples were used for training and the remainder were used for generalization testing. Outputs were thresholded to 0 or 1. The cleaned-up outputs were compared to the targets to determine which were mapped correctly. All three architectures learn the training set perfectly. However, on the test set, RuleNet's ability to generalize is 300% to 2000% better than the other systems (Table 1). Finally, we applied RuleNet to case-role assignment, as considered by McClelland and Kawamoto (1986). Case-role assignment is the problem of mapping syntactic constituents of a sentence to underlying semantic, or thematic, roles. For example, in the sentence "The boy broke the window", boy is the subject at the syntactic level and the agent, or acting entity, at the semantic level. Window is the object at the syntactic level and the patient, or entity being acted upon, at the semantic level. The words of a sentence can be represented as a string of n slots, where each slot is labeled with a constituent, such as subject, and that slot is filled with the corresponding word, such as boy. The output is handled analogously. We used McClelland and Kawamoto's 152 sentences over 34 nouns and verbs as RuleNet's training set. The five categories and six rules induced by RuleNet are shown in Table 2, where S = subject, O = object, and wNP = noun in the with noun-phrase.
We conjecture that RuleNet has induced such a small set of rules in part because it employs implicit conflict resolution, automatically assigning strengths to categories and conditions. These rules cover 97% of the training set and perform the correct case-role assignments on 84% of the 1307 sentences in the test set.

Table 2: SCP Rules Induced by RuleNet in Case-Role Assignment

Rule                                                Sample of Sentences Handled Correctly
if O = VICTIM then wNP-modifier                     The boy ate the pasta with cheese.
if O = THING ∧ wNP = UTENSIL then wNP-instrument    The boy ate the pasta with the fork.
if S = BREAKER then S-instrument                    The rock broke the window.
if S = THING then S-patient                         The window broke. The fork moved.
if V = moved then self-patient                      The man moved.
if S = ANIMATE then food-patient                    The lion ate.

6 DISCUSSION

RuleNet is but one example of a general methodology for rule induction in neural networks. This methodology involves five steps: 1) identify a fundamentally rule-governed domain, 2) identify a class of rules that characterizes that domain, 3) design a general architecture, 4) establish a correspondence between components of symbolic rules and manifolds of weight space (weight templates), and 5) devise a weight-template-based learning procedure. Using this methodology, we have shown that RuleNet is able to perform both category and rule learning. Category learning strikes us as an intrinsically subsymbolic process. Functional categories are often fairly arbitrary (consider the classification of words as nouns or verbs) or have complex statistical structure (consider the classes "liberals" and "conservatives"). Consequently, real-world categories can seldom be described in terms of boolean (symbolic) expressions; subsymbolic representations are more appropriate. While category learning is intrinsically subsymbolic, rule learning is intrinsically a symbolic process. The integration of the two is what makes RuleNet a unique and powerful system.
Traditional symbolic machine learning approaches aren't well equipped to deal with subsymbolic learning, and connectionist approaches aren't well equipped to deal with the symbolic. RuleNet combines the strengths of each approach.

Acknowledgments

This research was supported by NSF Presidential Young Investigator award IRI-9058450, grant 9021 from the James S. McDonnell Foundation, and DEC external research grant 1250 to MM; NSF grants IRI-8609599 and ECE-8617947 to PS; by a grant from the Sloan Foundation's computational neuroscience program to PS; and by the Optical Connectionist Machine Program of the NSF Engineering Research Center for Optoelectronic Computing Systems at the University of Colorado at Boulder.

References

Booker, L.B., Goldberg, D.E., and Holland, J.H. (1989). Classifier systems and genetic algorithms. Artificial Intelligence 40:235-282.
Fisher, D.H. (1987). Knowledge acquisition via incremental concept clustering. Machine Learning 2:139-172.
Jacobs, R., Jordan, M., Nowlan, S., and Hinton, G. (1991). Adaptive mixtures of local experts. Neural Computation 3:79-87.
McClelland, J. and Kawamoto, A. (1986). Mechanisms of sentence processing: assigning roles to constituents. In J.L. McClelland, D.E. Rumelhart, and the PDP Research Group, Parallel Distributed Processing: Explorations in the microstructure of cognition, Vol. 2. Cambridge, MA: MIT Press/Bradford Books.
McMillan, C. (1992). Rule induction in a neural network through integrated symbolic and subsymbolic processing. Unpublished Ph.D. thesis. Boulder, CO: Department of Computer Science, University of Colorado.
Rumelhart, D. and McClelland, J. (1986). On learning the past tense of English verbs. In J.L. McClelland, D.E. Rumelhart, and the PDP Research Group, Parallel Distributed Processing: Explorations in the microstructure of cognition, Vol. 2. Cambridge, MA: MIT Press/Bradford Books.
Sejnowski, T.J. and Rosenberg, C.R. (1987).
Parallel networks that learn to pronounce English text. Complex Systems 1:145-168.
A comparison between a neural network model for the formation of brain maps and experimental data

K. Obermayer, Beckman-Institute, University of Illinois, Urbana, IL 61801
K. Schulten, Beckman-Institute, University of Illinois, Urbana, IL 61801
G.G. Blasdel, Harvard Medical School, Harvard University, Boston, MA 02115

Abstract

Recently, high resolution images of the simultaneous representation of orientation preference, orientation selectivity and ocular dominance have been obtained for large areas in monkey striate cortex by optical imaging [1-3]. These data allow for the first time a "local" as well as "global" description of the spatial patterns and provide strong evidence for correlations between orientation selectivity and ocular dominance. A quantitative analysis reveals that these correlations arise when a five-dimensional feature space (two dimensions for retinotopic space, one each for orientation preference, orientation specificity, and ocular dominance) is mapped into the two available dimensions of cortex while locally preserving topology. These results provide strong evidence for the concept of topology preserving maps which have been suggested as a basic design principle of striate cortex [4-7].

Monkey striate cortex contains a retinotopic map in which are embedded the highly repetitive patterns of orientation selectivity and ocular dominance. The retinotopic projection establishes a "global" order, while maps of variables describing other stimulus features, in particular line orientation and ocularity, dominate cortical organization locally. A large number of pattern models [8-12] as well as models of development [6,7,13-21] have been proposed to describe the spatial structure of these patterns and their development during ontogenesis. However, most models have not been compared with experimental data in detail.
There are two reasons for this: (i) many model studies were not elaborated enough to be experimentally testable, and (ii) a sufficient amount of experimental data obtained from large areas of striate cortex was not available.

83 84 Obermayer, Schulten, and Blasdel

Figure 1: Spatial pattern of orientation preference and ocular dominance in monkey striate cortex (left) compared with predictions of the SOFM-model (right). Iso-orientation lines (gray) are drawn in intervals of 11.25° (left) and 15.0° (right), respectively. Black lines indicate the borders (w5(r) = 0) of ocular dominance bands. The areas enclosed by black rectangles mark corresponding elements of organization in monkey striate cortex and in the simulation result (see text). Left: Data obtained from a 3.1 mm × 4.2 mm patch of the striate cortex of an adult macaque (Macaca nemestrina) by optical imaging [1-3]. The region is located near the border with area 18, close to midline. Right: Model map generated by the SOFM-algorithm. The figure displays a small section of a network of size N = d = 512. The parameters of the simulation were: ε = 0.02, σh = 5, v^max bounds of 20.48 and 15.36, 9 . 101 iterations, with retinotopic initial conditions and periodic boundary conditions.

1 Orientation and ocular dominance columns in monkey striate cortex

Recent advances in optical imaging [1-3,22,23] now make it possible to obtain high resolution images of the spatial pattern of orientation selectivity and ocular dominance from large cortical areas. Prima vista analysis of data from monkey striate cortex reveals that the spatial pattern of orientation preference and ocular dominance is continuous and highly repetitive across cortex. On a global scale orientation preferences repeat along every direction of cortex with similar periods. Locally, orientation preferences are organized as parallel slabs (arrow 1, Fig. 1a) in linear zones, which start and end at singularities (arrow 2, Fig.
1a), point-like discontinuities, around which orientation preferences change by ±180° in a pinwheel-like fashion. Both types of singularities appear in equal numbers (359:354 for maps obtained from four adult macaques) with a density of 5.5/mm² (for regions close to the midline).

A Neural Network Model for the Formation of Brain Maps Compared with Experimental Data 85

Table 1: Quantitative measures used to characterize cortical maps.

Fourier transforms       w̃j(k) = Σr exp(ikr) wj(r)
Correlation functions    Cij(ρ) = ⟨wi(r) wj(r + ρ)⟩r
Feature gradients        |∇r wj(r)| = {(wj(r1+1, r2) − wj(r1, r2))² + (wj(r1, r2+1) − wj(r1, r2))²}^(1/2)
Gabor transforms         g̃j(k, r) = (2πσg²)⁻¹ ∫ d²r′ wj(r′) exp(−(r′ − r)²/(2σg²) + ik(r′ − r))

Figure 1a reveals that the iso-orientation lines cross ocular dominance bands at nearly right angles most of the time (region number 2) and that singularities tend to align with the centers of the ocular dominance bands (region number 1). Where orientation preferences are organized as parallel slabs (region number 2), the iso-orientation contours are often equally spaced and orientation preferences change linearly with distance. These results are confirmed by a quantitative analysis (see Table 1). For the following we denote cortical location by a two-dimensional vector r. At each location we denote the (average) position of receptive field centroids in visual space by (w1(r), w2(r)). Orientation selectivity is described by a two-dimensional vector (w3(r), w4(r)), whose length and direction code for orientation tuning strength and preferred orientation, respectively [1,10]. Ocular dominance is described by a real-valued function w5(r), which denotes the difference in response to stimuli presented to the left and right eye. Data acquisition and postprocessing are described in detail in [1-3]. A Fourier transform of the map of orientation preferences reveals a spectrum which is a nearly circular band (Fig.
2a), showing that orientation preferences repeat with similar periods in every direction in cortex. Neglecting the slight anisotropy in the experimental data¹, a power spectrum can be approximated by averaging amplitudes over all directions of the wave-vector (Fig. 2b, dots)². The location of the peak corresponds to an average period λ0 = 710 µm ± 50 µm and its width to a coherence length of 820 µm ± 130 µm. The coherence length indicates the typical distance over which orientation preferences can change linearly and corresponds to the average size of linear zones in Fig. 1a. The corresponding autocorrelation functions (Fig. 2c) have a Mexican hat shape. The minimum occurs near 300 µm, which indicates that orientation preferences in regions separated by this distance tend to be orthogonal. In summary, the spatial pattern of orientation preference is characterized by local correlation and global "disorder".

1 Along axes parallel to the ocular dominance slabs, orientation preferences repeat on average every 660 µm ± 40 µm; perpendicular to the stripes, every 840 µm ± 40 µm. The slight horizontal elongation reflects the fact that iso-orientation slabs tend to connect the centers of ocular dominance bands.
2 All quantities regarding experimental data are averages over four animals, nm1-nm4, unless stated otherwise. Error margins indicate standard deviations.
Figure 2: Fourier analysis and correlation functions of the orientation map in monkey striate cortex (animal nm2) compared with the predictions of the SOFM-model. Simulation results were taken from the data set described in Fig. 1, right. (a) Fourier spectra of nm2 (left) and simulation results (right). Each pixel represents one mode; location and gray value of the pixel indicate wave-vector and energy, respectively. (b) Approximate power spectrum (normalized) obtained by averaging the Fourier spectra in (a) over all directions of the wave-vector. Peak frequency of 1.0 corresponds to 1.4/mm for nm2. (c) Correlation functions (normalized). A distance of 1.0 corresponds to 725 µm for nm2.

Local properties of the spatial patterns, as well as correlations between orientation preference and ocular dominance, can be quantitatively characterized using Gabor-Helstrom transforms (see Table 1). If the radius σg of the Gaussian function in the Gabor filter is smaller than the coherence length, the Gabor transform of any of the quantities w3(r′), w4(r′) and w5(r′) typically consists of two localized regions of high energy located on opposite sides of the origin. The length |ki| of the vectors ki, i ∈ [3,4,5], which correspond to the centroids of these regions, fluctuates around the characteristic wave-number 2π/λ0 of this pattern, and its direction gives the normal to the ocular dominance bands and iso-orientation slabs at the location r where the Gabor transform was performed.

Figure 3: Gabor-analysis of cortical maps.
The percentage of map locations is plotted against the parameters θ1 and θ2 (see text) for 3,421 locations randomly selected from the cortical maps of four monkeys, nm1-nm4 (left), and for 1,755 locations randomly selected from simulation results (right). Error bars indicate standard deviations. Simulation results were taken from the data set described in Fig. 1. σg was 150 µm for the experimental data and 28 pixels for the SOFM-map.

Results of this analysis are shown in Fig. 3 (left) for 3,434 samples selected randomly from data of four animals. The angle between k3 and k4 is represented along the θ1 axis. Histograms at the back, where θ1 = 0°, represent regions where iso-orientation lines are parallel. Histograms in the front, where θ1 = 90°, represent regions containing singularities. The intersection angle of iso-orientation slabs and ocular dominance bands is represented along the θ2 axis. The proportion of sampled regions increases steadily with decreasing θ1. As θ1 approaches zero, values accumulate at the right, where orientation and ocular dominance bands are orthogonal. Thus linear zones and singularities are important elements of cortical organization, but linear zones (back rows) are the most prominent features in monkey striate cortex³.

2 Topology preserving maps

Recently, topology preserving maps have been suggested as a basic design principle underlying these patterns and it was proposed that these maps are generated by simple and biologically plausible pattern formation processes [4,6,7]. In the following we will test these models against the recent experimental data.
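The Fourier and correlation measures of Table 1 are straightforward to compute for a simulated map. This sketch assumes periodic boundary conditions, as in the simulations, and uses FFTs; the function names are ours:

```python
import numpy as np

def correlation(wi, wj):
    """C_ij(rho) = <w_i(r) w_j(r + rho)>_r for 2-D maps, via the
    cross-correlation theorem; rho = 0 is shifted to the array center."""
    f = np.fft.ifft2(np.conj(np.fft.fft2(wi)) * np.fft.fft2(wj)).real
    return np.fft.fftshift(f / wi.size)

def radial_power(w, n_bins=20):
    """Power spectrum averaged over all directions of the wave-vector,
    as a function of |k| (binned radially)."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(w)))**2
    ny, nx = w.shape
    ky, kx = np.indices((ny, nx))
    k = np.hypot(ky - ny // 2, kx - nx // 2)
    idx = np.clip((k / k.max() * n_bins).astype(int), 0, n_bins - 1)
    return (np.bincount(idx.ravel(), weights=p.ravel(), minlength=n_bins)
            / np.maximum(np.bincount(idx.ravel(), minlength=n_bins), 1))
```

A striped map (e.g., a single sinusoid) produces a single peak in the radially averaged spectrum at the stripe frequency; an isotropic ring spectrum, as in Fig. 2a, produces the same kind of peak regardless of stripe direction.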
We consider a five-dimensional feature space V which is spanned by quantities describing the most prominent receptive field properties of cortical cells: position of a receptive field in retinotopic space (v1, v2), orientation preference and tuning strength (v3, v4), and ocular dominance (v5). If all combinations of these properties are represented in striate cortex, each point in this five-dimensional feature space is mapped onto one point on the two-dimensional cortical surface A.

3 Data from area 17 of the cat indicate that in this species, although both elements are present, singularities are more important [23].

In order to generate these maps we employ the feature map (SOFM) algorithm of Kohonen [15,16], which is known to generate topology preserving maps between spaces of different dimensionality [4,5]⁴. The algorithm describes the development of these patterns as unsupervised learning, i.e., the features of the input patterns determine the features to be represented in the network [4]. Mathematically, the algorithm assigns feature vectors w(r), which are points in the feature space, to cortical units r, which are points on the cortical surface. In our model the surface is divided into N × N small patches, units r, which are arranged on a two-dimensional lattice (network layer) with periodic boundary conditions (to avoid edge effects). The average receptive field properties of neurons located in each patch are characterized by the feature vector w(r), whose components wj(r) are interpreted as receptive field properties of these neurons. The algorithm follows an iterative procedure. At each step an input vector v, which is of the same dimensionality as w(r), is chosen at random according to a probability distribution P(v).
Then the unit s whose feature vector w(s) is closest to the input pattern v is selected, and the components wj(r) of the feature vectors are changed according to the feature map learning rule [15,16]:

wj(r) ← wj(r) + ε h(r, s) (vj − wj(r)),

where the neighborhood function h(r, s) decays with the lattice distance between r and s. P(v) was chosen to be constant within a cylindrical manifold in feature space, bounded by two real constants v^max, and zero elsewhere.

Figure 4 shows a typical map, a surface in feature space, generated by the SOFM-algorithm. For the sake of illustration the five-dimensional feature space is projected onto a three-dimensional subspace spanned by the coordinate axes corresponding to retinotopic location (v1 and v2) and ocular dominance (v5). The locations of feature vectors assigned to the cortical units are indicated by the intersections of a grid in feature space. Preservation of topology requires that the feature vectors assigned to neighboring cortical units must locally have equal distance and must be arranged on a planar square lattice in feature space. Consequently, large changes in one feature, e.g. ocular dominance v5, along a given direction on the network correlate with small changes of the other features, e.g. retinotopic location v1 and v2, along the same direction (crests and troughs of the waves in Fig. 4), and vice versa. Other correlations arise at points where the map exhibits maximal changes in two features. For example, for retinotopic location (v1) and ocular dominance (v5) to vary at a maximal rate, the surface in Fig. 4 must be parallel to the (v1, v5)-plane. Obviously, at such points the directions of maximal change of retinotopic location and ocular dominance are orthogonal on the surface. In order to compare model predictions with experimental data the surface in the five-dimensional feature space has to be projected into the three-dimensional subspace

4 The exact form of the algorithm is not essential, however. Algorithms based on similar principles, e.g. the elastic net algorithm [6], predict similar patterns.
Figure 4: Typical map generated by the SOFM-algorithm. The five-dimensional feature space is projected into the three-dimensional subspace spanned by the three coordinates (v1, v2 and v5). Locations of feature vectors which are mapped to the units in the network are indicated by the intersections of a grid in feature space. Only every fourth vector is shown.

spanned by orientation preferences (v3 and v4) and ocular dominance (v5). This projection cannot be visualized easily because the surface completely fills space, intersecting itself multiple times. However, the same line of reasoning applies: (i) regions where orientation preferences change quickly correlate with regions where ocular dominance changes slowly, and (ii) in regions where orientation preferences change most rapidly along one direction, ocular dominance has to change most rapidly along the orthogonal direction. Consequently we expect discontinuities of the orientation map to be located in the centers of the ocular dominance bands and iso-orientation slabs to intersect ocular dominance bands at steep angles. Figures 1, 2 and 3 show simulation results in comparison with experimental data. The algorithm generates all the prominent features of lateral cortical organization: singularities (arrow 1), linear zones (arrow 2), and parallel ocular dominance bands. Singularities are aligned with the centers of ocular dominance bands (region 1) and iso-orientation slabs intersect ocular dominance stripes at nearly right angles (region 2). The shape of Fourier- and power-spectra as well as of the correlation functions agrees quantitatively with the experimental data (see Fig. 2). Isotropic spectra are the result of the invariance of eqs. (1) and (2) under rotation with respect to cortical coordinates r; global disorder and singularities are a consequence of their invariance under translation.
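The iterative map-generation procedure described above can be sketched as a best-matching-unit search followed by a neighborhood update. The Gaussian form of h(r, s) and the width σh follow the simulation parameters given earlier; lattice wrap-around for the periodic boundary conditions is omitted for brevity, and the function name is ours:

```python
import numpy as np

def sofm_step(w, v, eps=0.02, sigma_h=5.0):
    """One update of the self-organizing feature map: w has shape
    (N, N, 5) for a 2-D lattice of 5-D feature vectors; v is one
    input vector drawn from the feature space."""
    d = np.linalg.norm(w - v, axis=2)
    s = np.unravel_index(np.argmin(d), d.shape)       # best-matching unit
    r1, r2 = np.indices(d.shape)
    h = np.exp(-((r1 - s[0])**2 + (r2 - s[1])**2) / (2 * sigma_h**2))
    return w + eps * h[:, :, None] * (v - w), s
```

Repeated over random draws of v from the cylindrical input manifold, this pulls the lattice into a dimension-reducing, topology-preserving map.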
The emergence of singularities can also be understood from an entropy argument. Since dimension reducing maps which exhibit these features have increased entropy, they are generated with higher probability. Correlations between orientation preference and ocular dominance, however, follow from geometrical constraints and are inherent properties of the topology preserving maps.

3 Conclusions

On the basis of our findings the following picture of orientation and ocular dominance columns in monkey striate cortex emerges. Orientation preferences are organized into linear zones and singularities, but areas where iso-orientation regions form parallel slabs are apparent across most of the cortical surface. In linear zones, iso-orientation slabs indeed intersect ocular dominance slabs at right angles, as initially suggested by Hubel and Wiesel [8]. Orientation preferences, however, are arranged in an orderly fashion only in regions 0.8 mm in size, and the pattern is characterized by local correlation and global disorder. These patterns can be explained as the result of topology-preserving, dimension reducing maps. Local correlations follow from geometrical constraints and are a direct consequence of the principle of dimension reduction. Global disorder and singularities are consistent with this principle but reflect their generation by a local and stochastic self-organizing process.

Acknowledgements

The authors would like to thank H. Ritter for fruitful discussions and comments and the Boehringer-Ingelheim Fonds for financial support by a scholarship to K.O. This research has been supported by the National Science Foundation (grant numbers DIR 90-17051 and DIR 91-22522). Computer time on the Connection Machine CM-2 has been made available by the National Center for Supercomputer Applications at Urbana-Champaign, funded by NSF.

References

[1] Blasdel G.G. and Salama G. (1986), Nature 321, 579-585.
[2] Blasdel G.G. (1992), J. Neurosci.
in press.
[3] Blasdel G.G. (1992), J. Neurosci., in press.
[4] Kohonen T. (1987), Self-Organization and Associative Memory, Springer-Verlag, New York.
[5] Ritter H. and Schulten K. (1988), Biol. Cybern. 60, 59-71.
[6] Durbin R. and Mitchison M. (1990), Nature 343, 644-647.
[7] Obermayer K. et al. (1990), Proc. Natl. Acad. Sci. USA 87, 8345-8349.
[8] Hubel D.H. and Wiesel T.N. (1974), J. Comp. Neurol. 158, 267-294.
[9] Braitenberg V. and Braitenberg C. (1979), Biol. Cybern. 33, 179-186.
[10] Swindale N.V. (1982), Proc. R. Soc. Lond. B 215, 211-230.
[11] Baxter W.T. and Dow B.M. (1989), Biol. Cybern. 61, 171-182.
[12] Rojer A.S. and Schwartz E.L. (1990), Biol. Cybern. 62, 381-391.
[13] Malsburg C. (1973), Kybernetik 14, 85-100.
[14] Takeuchi A. and Amari S. (1979), Biol. Cybern. 35, 63-72.
[15] Kohonen T. (1982a), Biol. Cybern. 43, 59-69.
[16] Kohonen T. (1982b), Biol. Cybern. 44, 135-140.
[17] Linsker R. (1986), Proc. Natl. Acad. Sci. USA 83, 8779-8783.
[18] Soodak R. (1987), Proc. Natl. Acad. Sci. USA 84, 3936-3940.
[19] Kammen D.M. and Yuille A.R. (1988), Biol. Cybern. 59, 23-31.
[20] Miller K.D. et al. (1989), Science 245, 605-615.
[21] Miller K.D. (1989), Soc. Neurosci. Abs. 15, 794.
[22] Grinvald A. et al. (1986), Nature 324, 361-364.
[23] Bonhoeffer T. and Grinvald A. (1991), Nature 353, 429-431.
A Connectionist Learning Approach to Analyzing Linguistic Stress

Prahlad Gupta, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213
David S. Touretzky, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

We use connectionist modeling to develop an analysis of stress systems in terms of ease of learnability. In traditional linguistic analyses, learnability arguments determine default parameter settings based on the feasibility of logically deducing correct settings from an initial state. Our approach provides an empirical alternative to such arguments. Based on perceptron learning experiments using data from nineteen human languages, we develop a novel characterization of stress patterns in terms of six parameters. These provide both a partial description of the stress pattern itself and a prediction of its learnability, without invoking abstract theoretical constructs such as metrical feet. This work demonstrates that machine learning methods can provide a fresh approach to understanding linguistic phenomena.

1 LINGUISTIC STRESS

The domain of stress systems in language is considered to have a relatively good linguistic theory, called metrical phonology¹. In this theory, the stress patterns of many languages can be described concisely, and characterized in terms of a set of linguistic "parameters," such as bounded vs. unbounded metrical feet, left vs. right dominant feet, etc.² In many languages, stress tends to be placed on certain kinds of syllables rather than on others; the former are termed heavy syllables, and the latter light syllables.

1 For an overview of the theory, see [Goldsmith 90, chapter 4].
2 See [Dresher 90] for one such parameter scheme.

225 226 Gupta and Touretzky

Figure 1: Perceptron model used in simulations (a single output unit reading a 2 × 13-unit input layer).

Languages that distinguish
between heavy and light syllables are termed quantity-sensitive (QS), while languages that do not make this distinction are termed quantity-insensitive (QI). In some QS languages, what counts as a heavy syllable is a closed syllable (a syllable that ends in a consonant), while in others it is a syllable with a long vowel. We examined the stress patterns of nineteen QI and QS systems, summarized and exemplified in Table 1. The data were drawn primarily from descriptions in [Hayes 80].

2 PERCEPTRON SIMULATIONS

In separate experiments, we trained a perceptron to produce the stress pattern of each of these languages. Two input representations were used. In the syllabic representation, used for QI patterns only, a syllable was represented as a [1 1] vector, and [0 0] represented no syllable. In the weight-string representation, which was necessary for QS languages, the input patterns used were [1 0] for a heavy syllable, [0 1] for a light syllable, and [0 0] for no syllable. For stress systems with up to two levels of stress, the output targets used in training were 1.0 for primary stress, 0.5 for secondary stress, and 0 for no stress. For stress systems with three levels of stress, the output targets were 0.6 for secondary stress, 0.35 for tertiary stress, and 1.0 and 0 respectively for primary stress and no stress. The input data set for all stress systems consisted of all word-forms of up to seven syllables. With the syllabic input representation there are 7 of these, and with the weight-string representation, there are 255. The perceptron's input array was a buffer of 13 syllables; each word was processed one syllable at a time by sliding it through the buffer (see Figure 1). The desired output at each step was the stress level of the middle syllable of the buffer. Connection weights were adjusted at each step using the back-propagation learning algorithm [Rumelhart 86]. One epoch consisted of one presentation of the entire training set.
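The sliding-buffer encoding described above can be sketched as follows. This is an illustrative sketch only: the helper names are ours, not the authors'; only the two encodings and the 13-syllable buffer come from the text.

```python
BUFFER = 13
SYLLABIC = {"S": [1, 1], None: [0, 0]}                    # QI syllabic coding
WEIGHT_STRING = {"H": [1, 0], "L": [0, 1], None: [0, 0]}  # QS weight-string coding

def training_steps(word, stresses, coding):
    """Slide the word through the buffer; the target at each step is the
    stress level of the syllable currently in the middle slot."""
    pad = [None] * (BUFFER // 2)
    padded = pad + list(word) + pad
    steps = []
    for i in range(len(word)):
        window = padded[i:i + BUFFER]
        vec = [bit for syl in window for bit in coding[syl]]
        steps.append((vec, stresses[i]))
    return steps

# A three-syllable word in a Latvian-like pattern (fixed initial stress):
steps = training_steps("SSS", [1.0, 0.0, 0.0], SYLLABIC)
```

Each step yields a 26-element input vector (2 units x 13 buffer slots) paired with the stress target of the middle syllable, matching the network diagram of Figure 1.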
The network was trained for as many epochs as necessary to ensure that the stress value produced by the perceptron was within 0.1 of the target value, for each syllable of the word, for all words in the training set. A learning rate of 0.05 and momentum of 0.90 were used in all simulations. Initial weights were uniformly distributed random values in the range ±0.5. Each simulation was run at least three times, and the learning times averaged.

Table 1 (part 1 of 2): REF | LANGUAGE | DESCRIPTION OF STRESS PATTERN | EXAMPLES

Quantity-Insensitive Languages:
L1  Latvian     Fixed word-initial stress.  S1 S0 S0 S0 S0 S0 S0
L2  French      Fixed word-final stress.  S0 S0 S0 S0 S0 S0 S1
L3  Maranungku  Primary stress on first syllable, secondary stress on alternate succeeding syllables.  S1 S0 S2 S0 S2 S0 S2
L4  Weri        Primary stress on last syllable, secondary stress on alternate preceding syllables.  S2 S0 S2 S0 S2 S0 S1
L5  Garawa      Primary stress on first syllable, secondary stress on penultimate syllable, tertiary stress on alternate syllables preceding the penult, no stress on second syllable.  S1 S0 S0 S3 S0 S2 S0
L6  Lakota      Primary stress on second syllable.  S0 S1 S0 S0 S0 S0 S0
L7  Swahili     Primary stress on penultimate syllable.  S0 S0 S0 S0 S0 S1 S0
L8  Paiute      Primary stress on second syllable, secondary stress on alternate succeeding syllables.  S0 S1 S0 S2 S0 S2 S0
L9  Warao       Primary stress on penultimate syllable, secondary stress on alternate preceding syllables.  S0 S2 S0 S2 S0 S1 S0

Quantity-Sensitive Languages:
L10 Koya        Primary stress on first syllable, secondary stress on heavy syllables. (Heavy = closed syllable or syllable with long vowel.)  L1 L0 L0 H2 L0 L0 L0 / L1 L0 L0 L0 L0 L0 L0
L11 Eskimo      (Primary) stress on final and heavy syllables. (Heavy = closed syllable.)  L0 L0 L0 H1 L0 L0 L1 / L0 L0 L0 L0 L0 L0 L1
L12 Gurkhali    Primary stress on first syllable except when first syllable light and second syllable heavy. (Heavy = long vowel.)  L1 L0 L0 H0 L0 L0 L0 / L0 H1 L0 H0 L0 L0 L0
L13 Yapese      Primary stress on last syllable except when last is light and penultimate heavy. (Heavy = long vowel.)  L0 L0 L0 H0 L0 L0 L1 / L0 H0 L0 H0 L0 H1 L0
L14 Ossetic     Primary stress on first syllable if heavy, else on second syllable. (Heavy = long vowel.)  H1 L0 L0 H0 L0 L0 L0 / L0 L1 L0 L0 L0 L0 L0
L15 Rotuman     Primary stress on last syllable if heavy, else on penultimate syllable. (Heavy = long vowel.)  L0 L0 L0 H0 L0 L0 H1 / L0 L0 L0 L0 L0 L1 L0
L16 Komi        Primary stress on first heavy syllable, or on last syllable if none heavy. (Heavy = long vowel.)  L0 L0 H1 L0 L0 H0 L0 / L0 L0 L0 L0 L0 L0 L1
L17 Cheremis    Primary stress on last heavy syllable, or on first syllable if none heavy. (Heavy = long vowel.)  L0 L0 H0 L0 L0 H1 L0 / L1 L0 L0 L0 L0 L0 L0
L18 Mongolian   Primary stress on first heavy syllable, or on first syllable if none heavy. (Heavy = long vowel.)  L0 L0 H1 L0 L0 H0 L0 / L1 L0 L0 L0 L0 L0 L0
L19 Mayan       Primary stress on last heavy syllable, or on last syllable if none heavy. (Heavy = long vowel.)  L0 L0 H0 L0 L0 H1 L0 / L0 L0 L0 L0 L0 L0 L1

Table 1: Stress patterns: description and example stress assignment. Examples are of stress assignment in seven-syllable words. Primary stress is denoted by the digit 1 following the syllable symbol (a superscript in the original, e.g., S1), secondary stress by 2, tertiary stress by 3, and no stress by 0. "S" indicates an arbitrary syllable, and is used for the QI stress patterns. For QS stress patterns, "H" and "L" denote heavy and light syllables, respectively.

3 PRELIMINARY ANALYSIS OF LEARNABILITY OF STRESS

The learning times differ considerably for {Latvian, French}, {Maranungku, Weri}, {Lakota, Swahili} and Garawa, as shown in the last column of Table 2. Moreover, Paiute and Warao were unlearnable with this model.³ Differences in learning times for the various stress patterns suggested that the factors ("parameters") listed below are relevant in determining learnability.

1.
Inconsistent Primary Stress (IPS): it is computationally expensive to learn the pattern if neither edge receives primary stress except in mono- and di-syllables; this can be regarded as an index of computational complexity that takes the values {0, 1}: 1 if an edge receives primary stress inconsistently, and 0 otherwise.

2. Stress Clash Avoidance (SCA): if the components of a stress pattern can potentially lead to stress clash⁴, then the language may either actually permit such stress clash, or it may avoid it. This index takes the values {0, 1}: 0 if stress clash is permitted, and 1 if stress clash is avoided.

3. Alternation (Alt): an index of learnability with value 0 if there is no alternation, and value 1 if there is. Alternation refers to a stress pattern that repeats on alternate syllables.

4. Multiple Primary Stresses (MPS): has value 0 if there is exactly one primary stress, and value 1 if there is more than one primary stress. It has been assumed that a repeating pattern of primary stresses will be on alternate, rather than adjacent, syllables. Thus, [Alternation=0] implies [MPS=0]. Some of the hypothetical stress patterns examined below include ones with more than one primary stress; however, as far as is known, no actually occurring QI stress pattern has more than one primary stress.

5. Multiple Stress Levels (MSL): has value 0 if there is a single level of stress (primary stress only), and value 1 otherwise.

Note that it is possible to order these factors with respect to each other to form a five-digit binary string characterizing the ease/difficulty of learning. That is, the computational complexity of learning a stress pattern can be characterized as a 5-bit binary number whose bits represent the five factors above, in decreasing order of significance. Table 2 shows that this characterization captures the learning times of the QI patterns quite accurately.
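The 5-bit characterization above can be made concrete in a few lines. This is a hedged sketch with our own function name; the parameter settings in the example are taken from Table 2, and patterns are compared lexicographically as the analysis describes.

```python
def complexity(ips, sca, alt, mps, msl):
    """Pack the five factors, in decreasing order of significance,
    into a 5-bit binary string."""
    bits = (ips, sca, alt, mps, msl)
    assert all(b in (0, 1) for b in bits)
    return "".join(str(b) for b in bits)

latvian = complexity(0, 0, 0, 0, 0)   # learned in 17 epochs
garawa = complexity(0, 1, 1, 0, 1)    # learned in 165 epochs
lakota = complexity(1, 0, 0, 0, 0)    # learned in 255 epochs
paiute = complexity(1, 0, 1, 0, 1)    # unlearnable by the perceptron

# Lexicographic order on the strings tracks the ordering of observed
# learning times; strings above "10000" were unlearnable.
assert latvian < garawa < lakota < paiute
```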
As an example of how to read Table 2, note that Garawa takes longer to learn than Latvian (165 vs. 17 epochs). This is reflected in the parameter setting for Garawa, "01101", being lexicographically greater than that for Latvian, "00000". A further noteworthy point is that this framework provides an account of the non-learnability of Paiute and Warao, viz., that stress patterns whose parameter string is lexicographically greater than "10000" are unlearnable by the perceptron.

³They were learnable in a three-layer model, which exhibited a similar ordering of learning times [Gupta 92].
⁴Placement of stress on adjacent syllables.

Table 2: Preliminary analysis of learning times for QI stress systems, using the syllabic input representation. IPS=Inconsistent Primary Stress; SCA=Stress Clash Avoidance; Alt=Alternation; MPS=Multiple Primary Stresses; MSL=Multiple Stress Levels. References L1-L9 refer to Table 1.

IPS SCA Alt MPS MSL   QI LANGUAGES   REF   EPOCHS (syllabic)
 0   0   0   0   0    Latvian        L1    17
                      French         L2    16
 0   0   1   0   1    Maranungku     L3    37
                      Weri           L4    34
 0   1   1   0   1    Garawa         L5    165
 1   0   0   0   0    Lakota         L6    255
                      Swahili        L7    254
 1   0   1   0   1    Paiute         L8    **
                      Warao          L9    **

4 TESTING THE QI LEARNABILITY PREDICTIONS

We devised a series of thirty artificial QI stress patterns (each a variation on some language in Table 1) to examine our parameter scheme in more detail. The details of the patterns
are not crucial for present purposes (see [Gupta 92] for details). What is important to note is that the learnability predictions generated by the analytical scheme described in the previous section show good agreement with actual perceptron learning experiments on these patterns. The learning results are summarized in Table 4. It can be seen that the 5-bit characterization fits the learning times of various actual and hypothetical patterns reasonably well (although there are exceptions; for example, the hypothetical stress patterns with reference numbers h21 through h25 have a higher 5-bit characterization than other stress patterns, but lower learning times). Thus, the "complexity measure" suggested here appears to identify a number of factors relevant to the learnability of QI stress patterns within a minimal two-layer connectionist architecture. It also assesses their relative impacts. The analysis is undoubtedly a simplification, but it provides a completely novel framework within which to relate the various learning results.

Table 3: Summary of results and analysis of QI and QS learning (using weight-string input representations). Agg=Aggregative Information; IPS=Inconsistent Primary Stress; SCA=Stress Clash Avoidance; Alt=Alternation; MPS=Multiple Primary Stresses; MSL=Multiple Stress Levels. References index into Table 1. Time is learning time in epochs.

Agg   IPS   SCA Alt MPS MSL   QI LANGS     REF  TIME   QS LANGS    REF  TIME
 0    0     0   0   0   0     Latvian      L1   2
                              French       L2   2
 0    0     0   0   0   1                               Koya       L10  2
 0    0     0   0   1   0                               Eskimo     L11  3
 0    0     0   1   0   1     Maranungku   L3   3
                              Weri         L4   3
 0    0     1   1   0   1     Garawa       L5   7
 0    0.25  0   0   0   0                               Gurkhali   L12  19
                                                        Yapese     L13  19
 0    0.50  0   0   0   0                               Ossetic    L14  30
                                                        Rotuman    L15  29
 0    1     0   0   0   0     Lakota       L6   10
                              Swahili      L7   10
 0    1     0   1   0   1     Paiute       L8   **
                              Warao        L9   **
 1    0     0   0   0   0                               Komi       L16  216
                                                        Cheremis   L17  212
 2    0     0   0   0   0                               Mongolian  L18  2306
                                                        Mayan      L19  2298
The important point to note is that this analytical framework arises from a consideration of (a) the nature of the stress systems, and (b) the learning results from simulations. That is, this framework is empirically based, and makes no reference to abstract constructs of the kind that linguistic theory employs. Nevertheless, it provides a descriptive framework, much as the linguistic theory does. 5 INCORPORATING QS SYSTEMS INTO THE ANALYSIS Consideration of the QS stress patterns led to refinement of the IPS parameter without changing its setting for the QI patterns. This parameter is modified so that its value indicates the proportion of cases in which primary stress is not assigned at the edge of a word. Additionally, through analysis of connection weights for QS patterns, a sixth parameter, Aggregative Information, is added as a further index of computational complexity. 6. Aggregative Information (Agg) : has value 0 if no aggregative information is required (single-positional information suffices); 1 if one kind of aggregative information is required; and 2 if two kinds of aggregative information are required. Detailed discussion of the analysis leading to these refinements is beyond the scope of this paper; the interested reader is referred to [Gupta 92]. The point we wish to make here is that, with these modifications, the same parameter scheme can be used for both the QI and QS language classes, with good learnability predictions within each class, as shown in Table 3. Note that in this table, learning times for all languages are reported in terms of the weight-string representation (255 input patterns) rather than the unweighted syllabic representation (7 input patterns) used for the initial QI studies. Both the QI and QS results fall into a single analysis within this generalized parameter scheme and weight-string representation, but with a less perfect fit than the within-class results. 
6 DISCUSSION

Traditional linguistic analysis has devised abstract theoretical constructs such as "metrical foot" to describe linguistic stress systems. Learnability arguments were then used to determine default parameter settings (e.g., whether feet should by default be assumed to be bounded or unbounded, left or right dominant, etc.) based on the feasibility of logically deducing correct settings from an initial state.

Table 4: Analysis of Quantity-Insensitive learning using the syllabic input representation. IPS=Inconsistent Primary Stress; SCA=Stress Clash Avoidance; Alt=Alternation; MPS=Multiple Primary Stresses; MSL=Multiple Stress Levels. References L1-L9 index into Table 1.

IPS SCA Alt MPS MSL   LANGUAGE                       REF  EPOCHS (syllabic)
 0   0   0   0   0    Latvian                        L1   17
                      French                         L2   16
 0   0   0   0   1    Latvian2stress                 h1   21
                      Latvian3stress                 h2   11
                      French2stress                  h3   23
                      French3stress                  h4   14
 0   0   0   1   0    Latvian2edge                   h5   30
 0   0   0   1   1    Latvian2edge2stress            h6   37
 0   0   1   0   0    (impossible)
 0   0   1   0   1    Maranungku                     L3   37
                      Weri                           L4   34
                      Maranungku3stress              h7   43
                      Weri3stress                    h8   41
                      Latvian2edge2stress-alt        h9   58
                      Garawa-SC                      h10  38
                      Garawa2stress-SC               h11  50
 0   0   1   1   0    Maranungku1stress              h12  61
                      Weri1stress                    h13  65
                      Latvian2edge-alt               h14  78
                      Garawa1stress-SC               h15  88
 0   0   1   1   1    Latvian2edge2stress-1alt       h16  85
 0   1   0   0   0    (impossible)
 0   1   0   0   1    Garawa-non-alt                 h17  164
                      Latvian3stress2edge-SCA        h18  163
 0   1   0   1   0    Latvian2edge-SCA               h19  194
 0   1   0   1   1    Latvian2edge2stress-SCA        h20  206
 0   1   1   0   1    Garawa                         L5   165
                      Garawa2stress                  h21  71
                      Latvian2edge2stress-alt-SCA    h22  91
 0   1   1   1   0    Garawa1stress                  h23  121
                      Latvian2edge-alt-SCA           h24  126
 0   1   1   1   1    Latvian2edge2stress-1alt-SCA   h25  129
 1   0   0   0   0    Lakota                         L6   255
                      Swahili                        L7   254
 1   0   0   0   1    Lakota2stress                  h26  **
 1   0   0   1   0    Lakota2edge                    h27  **
 1   0   0   1   1    Lakota2edge2stress             h28  **
 1   0   1   0   1    Paiute                         L8   **
                      Warao                          L9   **
 1   0   1   1   0    Lakota-alt                     h29  **
 1   0   1   1   1    Lakota2stress-alt              h30  **

As an example, in one analysis [Dresher 90, p.
191], "metrical feet" are taken to be "iterative" by default, since there is evidence that can cause revision of this default if it turns out to be the incorrect setting, but there might not be such disconfirming evidence if the feet were by default taken to be "non-iterative". We provide an alternative to logical deduction arguments for determining "markedness" of parameter values, by measuring learnability (and hence markedness) empirically. The parameters of our novel analysis generate both a partial description of each stress pattern and a prediction of its learnability. Furthermore, our parameters encode linguistically salient concepts (e.g., stress clash avoidance) as well as concepts that have computational significance (single-positional vs. aggregative information). Although our analyses do not explicitly invoke theoretical linguistic constructs such as metrical feet, there are suggestive similarities between such constructs and the weight patterns the perceptron develops [Gupta 91]. In conclusion, this work offers a fresh perspective on a well-studied linguistic domain, and suggests that machine learning techniques in conjunction with more traditional tools might provide the basis for a new approach to the investigation of language.

Acknowledgements

We would like to acknowledge the feedback provided by Deirdre Wheeler throughout the course of this work. The first author would like to thank David Evans for access to exceptional computing facilities at Carnegie Mellon's Laboratory for Computational Linguistics, and Dan Everett, Brian MacWhinney, Jay McClelland, Eric Nyberg, Brad Pritchett and Steve Small for helpful discussion of earlier versions of this paper. Of course, none of them is responsible for any errors. The second author was supported by a grant from Hughes Aircraft Corporation, and by the Office of Naval Research under contract number N00014-86-K-0678.
References

[Dresher 90] Dresher, B., & Kaye, J., A Computational Learning Model for Metrical Phonology, Cognition 34, 137-195.
[Goldsmith 90] Goldsmith, J., Autosegmental and Metrical Phonology, Basil Blackwell, Oxford, England, 1990.
[Gupta 91] Gupta, P. & Touretzky, D., What a perceptron reveals about metrical phonology. Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, 334-339. Lawrence Erlbaum, Hillsdale, NJ, 1991.
[Gupta 92] Gupta, P. & Touretzky, D., Connectionist Models and Linguistic Theory: Investigations of Stress Systems in Language. Manuscript.
[Hayes 80] Hayes, B., A Metrical Theory of Stress Rules, doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA, 1980. Circulated by the Indiana University Linguistics Club, 1981.
[Rumelhart 86] Rumelhart, D., Hinton, G., & Williams, R., Learning Internal Representations by Error Propagation, in D. Rumelhart, J. McClelland & the PDP Research Group, Parallel Distributed Processing, Volume 1: Foundations, MIT Press, Cambridge, MA, 1986.
Stationarity of Synaptic Coupling Strength Between Neurons with Nonstationary Discharge Properties

Mark R. Sydorenko and Eric D. Young
Dept. of Biomedical Engineering & Center for Hearing Sciences, The Johns Hopkins School of Medicine, 720 Rutland Avenue, Baltimore, Maryland 21205

Abstract

Based on a general non-stationary point process model, we computed estimates of the synaptic coupling strength (efficacy) as a function of time after stimulus onset between an inhibitory interneuron and its target postsynaptic cell in the feline dorsal cochlear nucleus. The data consist of spike trains from pairs of neurons responding to brief tone bursts recorded in vivo. Our results suggest that the synaptic efficacy is non-stationary. Further, synaptic efficacy is shown to be inversely and approximately linearly related to average presynaptic spike rate. A second-order analysis suggests that the latter result is not due to non-linear interactions. Synaptic efficacy is less strongly correlated with postsynaptic rate and the correlation is not consistent across neural pairs.

1 INTRODUCTION

The aim of this study was to investigate the dynamic properties of the inhibitory effect of type II neurons on type IV neurons in the cat dorsal cochlear nucleus (DCN). Type IV cells are the principal (output) cells of the DCN and type II cells are inhibitory interneurons (Voigt & Young 1990). In particular, we examined the stationarity of the efficacy of inhibition of neural activity in a type IV neuron by individual action potentials (APs) in a type II neuron. Synaptic efficacy, or effectiveness, is defined as the average number of postsynaptic (type IV) APs eliminated per presynaptic (type II) AP. This study was motivated by the observation that post-stimulus time histograms of type IV neurons often show gradual recovery ("buildup") from inhibition (Rhode et al. 1983; Young & Brownell 1976) which could arise through a weakening of inhibitory input over time.
Correlograms of pairs of DCN units using long duration stimuli are reported to display inhibitory features (Voigt & Young 1980; Voigt & Young 1990) whereas correlograms using short stimuli are reported to show excitatory features (Gochin et al. 1989). This difference might result from nonstationarity of synaptic coupling. Finally, pharmacological results (Caspary et al. 1984) and current source-density analysis of DCN responses to electrical stimulation (Manis & Brownell 1983) suggest that this synapse may fatigue with activity. Synaptic efficacy was investigated by analyzing the statistical relationship of spike trains recorded simultaneously from pairs of neurons in vivo. We adopt a first order (linear) nonstationary point process model that does not impose a priori restrictions on the presynaptic process's distribution. Using this model, estimators of the postsynaptic impulse response to a presynaptic spike were derived using martingale theory and a method of moments approach. To study stationarity of synaptic efficacy, independent estimates of the impulse response were derived over a series of brief time windows spanning the stimulus duration. Average pre- and postsynaptic rate were computed for each window, as well. In this report, we summarize the results of analyzing the dependence of synaptic efficacy (derived from the impulse response estimates) on post-stimulus onset time, presynaptic average rate, postsynaptic average rate, and presynaptic interspike interval.

2 METHODS

2.1 DATA COLLECTION

Data were collected from unanesthetized cats that had been decerebrated at the level of the superior colliculus. We used a posterior approach to expose the DCN that did not require aspiration of brain tissue nor disruption of the local blood supply. Recordings were made using two platinum-iridium electrodes.
The electrodes were advanced independently until a type II unit was isolated on one electrode and a type IV unit was isolated on the other electrode. Only pairs of units with best frequencies (BFs) within 20% were studied. The data consist of responses of the two units to 500-4000 repetitions of a 100-1500 millisecond tone. The frequency of the tone was at the type II BF and the tone level was high enough to elicit activity in the type II unit for the duration of the presentation, but low enough not to inhibit the activity of the type IV unit (usually 5-10 dB above the type II threshold). Driven discharge rates of the two units ranged from 15 to 350 spikes per second. A silent recovery period at least four times longer than the tone burst duration followed each stimulus presentation.

2.3 DATA ANALYSIS

The stimulus duration is divided into 3 to 9 overlapping or non-overlapping time windows ('a' thru 'k' in figure 1). A separate impulse response estimate, presynaptic rate, and postsynaptic rate computation is made using only those type II and type IV spikes that fall within each window. The effectiveness of synaptic coupling during each window is calculated from the area bounded by the impulse response feature and the abscissa (shaded area in figure 1). The effectiveness measure has units of number of spikes. The synaptic impulse response is estimated using a non-stationary method of moments algorithm. The estimation algorithm is based on the model depicted in figure 2. The thick gray line encircles elements belonging to the postsynaptic (type IV) cell. The neural network surrounding the postsynaptic cell is modelled as a J-dimensional multivariate counting process. Each element of the J-dimensional counting process is an input to the postsynaptic cell. One of these input elements is the presynaptic (type II) cell under observation.
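The windowed effectiveness measure described above (area between the impulse response feature and the abscissa) can be sketched in a few lines. This is a minimal illustration with our own function name; the kernel values and bin width below are toy numbers, not data from the paper.

```python
def effectiveness(kernel, bin_width):
    """Area of the inhibitory (negative) feature of an estimated impulse
    response, in spikes.

    kernel: impulse response samples in spikes/sec (negative = inhibition)
    bin_width: digitization bin width in seconds
    Returns the expected number of postsynaptic spikes eliminated per
    presynaptic spike (positive for an inhibitory feature)."""
    return -sum(v for v in kernel if v < 0) * bin_width

# Toy inhibitory trough: four 0.3 ms bins at -50 spikes/sec
kernel = [0.0, -50.0, -50.0, -50.0, -50.0, 0.0]
eff = effectiveness(kernel, 0.0003)  # about 0.06 spikes per presynaptic spike
```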
The input processes modulate the postsynaptic cell's instantaneous rate function, \lambda_j(t). Roughly speaking, \lambda_j(t) is the conditional firing probability of neuron j given the history of the input events up to time t.

[Figure 1: Analysis of non-stationary synaptic coupling. Type II and type IV PST histograms (rate vs. post-stimulus time), with the stimulus duration divided into analysis windows 'a' through 'k' and a separate kernel estimate K1_{12}(t) computed for each window.]

[Figure 2: Model of the postsynaptic cell j: a multivariate counting process N_1, ..., N_J of inputs, one of which is the observed presynaptic cell, modulating the postsynaptic rate.]

The transformation K describes how the input processes influence \lambda_j(t). We model this transformation as a linear sum of an intrinsic rate component and the contribution of all the presynaptic processes:

\lambda_j(t) = K0_j(t) + \sum_{k=1}^{J} \int K1_{jk}(t, u)\, dN_k(u)    (1)

where K0 describes the intrinsic rate and the K1 describe the impulse response of the postsynaptic cell in response to an input event. The output of the postsynaptic neuron is modeled as the integral of this rate function plus a mean-zero noise process, the innovation martingale (Bremaud 1981):

N_j(t) = \int_{T_0}^{t} \lambda_j(u)\, du + M_j(t).    (2)

An algorithm for estimating the first order kernel, K1, was derived without assuming anything about the distribution of the presynaptic process and without assuming stationary first or second order product densities (i.e., without assuming stationary rate or stationary auto-correlation). One or more such assumptions have been made in previous method of moments based algorithms for estimating neural interactions (Chornoboy et al. 1988 describe a maximum likelihood approach that does not require these assumptions).
Since K1 is assumed to be stationary during the windowed interval (figure 1) while the process product densities are non-stationary (see PSTHs in figure 1), K1 is an average of separate estimates of K1 computed at each point in time during the windowed interval:

\hat{K1}_{ij}(\tau) = \frac{1}{n_\Delta} \sum_{t_i^n - t_j^n = \tau,\; t_j^n \in I} \hat{K1}_{ij}(t_i^n, t_j^n)    (3)

where K1 inside the summation is an estimate of the impulse response of neuron i at time t_i^n to a spike from neuron j at time t_j^n (times are relative to stimulus onset); the digitization bin width \Delta (= 0.3 msec in our case) determines the location of the discrete time points as well as the number of separate kernel estimates, n_\Delta, within the windowed interval, I. The time dependent kernel, K1(\cdot,\cdot), is computed by deconvolving the effects of the presynaptic process distribution, described by r_{jj} below, from the estimate of the cross-cumulant density, q_{ij}:

\hat{K1}_{ij}(t_i^n, t_j^n) = \sum_{v^n} \hat{q}_{ij}(v^n, t_j^n)\, \hat{r}_{jj}^{-1}(t_i^n - v^n, t_j^n)\, \Delta    (4)

\hat{q}_{ij}(u^n, v^n) = \hat{p}_{ij}(u^n, v^n) - \hat{p}_i(u^n)\, \hat{p}_j(v^n)    (5)

\hat{r}_{jj}(u^n, v^n) = \hat{q}_{jj}(u^n, v^n) + \delta(u^n - v^n)\, \hat{p}_j(v^n)    (6)

\hat{r}_{jj}^{-1}(u^n, v^n) = \mathcal{F}^{-1}\left[ \frac{1}{\mathcal{F}[\hat{r}_{jj}(u^n, v^n)]} \right]    (7)

\hat{p}_j(t_j^n) = \#\{\text{spike in neuron } j \text{ during } [t_j^n - \Delta/2,\; t_j^n + \Delta/2)\} \,/\, (\#\{\text{trials}\}\, \Delta)    (8)

\hat{p}_{ij}(t_i^n, t_j^n) = \#\{\text{spike in } i \text{ during } [t_i^n - \Delta/2,\; t_i^n + \Delta/2) \text{ and spike in } j \text{ during } [t_j^n - \Delta/2,\; t_j^n + \Delta/2)\} \,/\, (\#\{\text{trials}\}\, \Delta^2)    (9)

where \delta(\cdot) is the Dirac delta function; \mathcal{F} and \mathcal{F}^{-1} are the DFT and inverse DFT, respectively; and \#\{\cdot\} is the number of members in the set described inside the braces. If the presynaptic process is Poisson distributed, expression (4) simplifies to:

\hat{K1}_{ij}(t_i^n, t_j^n) = \frac{\hat{q}_{ij}(t_i^n, t_j^n)}{\hat{p}_j(t_j^n)}    (10)

Under mild (physiologically justifiable) conditions, the estimator given by (3) converges in quadratic mean and yields an asymptotically unbiased estimate of the true impulse response function (in the general, (4), and Poisson presynaptic process, (10), cases).
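The Poisson-case estimator, expression (10), can be sketched from binned spike trains. The function and variable names here are ours, and the toy data are invented; spike trains are 0/1 arrays, one list per stimulus trial.

```python
def k1_poisson(pre_trials, post_trials, t_i, t_j, dt):
    """Estimate K1_ij(t_i, t_j) across trials, per expression (10):
    cross-cumulant density divided by the presynaptic rate."""
    n = len(pre_trials)
    p_j = sum(trial[t_j] for trial in pre_trials) / (n * dt)    # presynaptic rate
    p_i = sum(trial[t_i] for trial in post_trials) / (n * dt)   # postsynaptic rate
    p_ij = sum(post[t_i] * pre[t_j]                             # joint density
               for pre, post in zip(pre_trials, post_trials)) / (n * dt * dt)
    q_ij = p_ij - p_i * p_j                                     # cross-cumulant
    return q_ij / p_j if p_j > 0 else 0.0

# Toy data: a presynaptic spike in bin 0 suppresses the postsynaptic
# spike in bin 1, so the estimated kernel value is negative (inhibitory).
pre = [[1, 0], [1, 0], [0, 0], [0, 0]]
post = [[0, 0], [0, 0], [0, 1], [0, 1]]
k = k1_poisson(pre, post, t_i=1, t_j=0, dt=1.0)
assert k < 0
```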
3 RESULTS

Figure 3 displays estimates of synaptic impulse response functions computed using traditional cross-correlation analysis and compares them to estimates computed using the method of moments algorithms described above. (We use the definition of cross-correlation given by Voigt & Young 1990; equivalent to the function given by dividing expression (10) by expression (9) after averaging across all t_j.) Figure 3A compares estimates computed from the responses of a real type II and type IV unit during the first 15 milliseconds of stimulation (where nonstationarity is greatest). Note that the cross-correlation estimate is distorted due to the nonstationarity of the underlying processes. This distortion leads to an overestimation of the effectiveness measure (shaded area) as compared to that yielded by the method of moments algorithm below. Figure 3B compares estimates computed using a simulated data set where the presynaptic neuron had regular (non-Poisson) discharge properties. Note the characteristic ringing pattern in the cross-correlation estimate as well as the larger feature amplitude in the non-Poisson method of moments estimate.

[Figure 3: Cross-correlogram vs. method of moments impulse response estimates. (A) A real type II / type IV pair. (B) Simulated data with a regular (non-Poisson) presynaptic neuron.]

Results from one analysis of eight different type II / type IV pairs are shown in figure 4. For each pair, the effectiveness and the presynaptic (type II) average rate during each window are plotted and fit with a least squares line. Similar analyses were performed for effectiveness versus postsynaptic rate and for effectiveness versus post-stimulus-onset time.
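The per-pair regression just described can be sketched as a plain least squares fit over the per-window points. The helper name is ours and the data values are invented for illustration; only the analysis (a line fit to effectiveness vs. presynaptic rate) comes from the text.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

rates = [50.0, 100.0, 150.0, 200.0]   # type II rate per window (spikes/sec)
effs = [0.15, 0.11, 0.08, 0.04]       # effectiveness per window (spikes)
slope, intercept = fit_line(rates, effs)
assert slope < 0  # an inverse relationship, as observed in 7 of the 8 pairs
```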
The number of pairs showing a positive or negative correlation of effectiveness with each parameter are tallied in Table 1. The last column shows the average correlation coefficient of the lines fit to the eight sets of data. Note that: synaptic efficacy tends to increase with time; there is no consistent relationship between synaptic efficacy and postsynaptic rate; there is a strong inverse and linear correlation between synaptic efficacy and presynaptic rate in 7 out of 8 pairs. If the data appearing in figure 4 had been plotted as effectiveness versus average interspike interval (reciprocal of average rate) of the presynaptic neuron, the result would suggest that synaptic efficacy increases with average inter-spike interval. This result would be consistent with the interpretation that the effectiveness of an input event is suppressed by the occurrence of an input event immediately before it. The linear model initially used to analyze these data neglects the possibility of such second order effects.

Table 1: Summary of Results

GRAPH                                          PAIRS WITH       PAIRS WITH       AVERAGE LINEAR REGRESSION
                                               POSITIVE SLOPE   NEGATIVE SLOPE   CORRELATION COEFFICIENT
Effectiveness vs. Post Stimulus Onset Time     7/8              1/8              0.83
Effectiveness vs. Average Postsynaptic Rate    5/8              3/8              0.72
Effectiveness vs. Average Presynaptic Rate     1/8              7/8              0.89
[Figure 4: Effectiveness vs. type II rate (spikes/sec), per analysis window, for the eight pairs, each fit with a least squares line.]

[Figure 5: Effectiveness vs. type II inter-spike interval (millisec).]

We used a modification of the analysis described in the methods to investigate second order effects. Rather than window small segments of the stimulus duration as in figure 1, the entire duration was used in this analysis. Impulse response estimates were constructed conditional on presynaptic interspike interval. For example, the first estimate was constructed using presynaptic events occurring after a 1 ms interspike interval, the second estimate was based on events after a 2 ms interval, and so on. The results of the second order analysis are shown in figure 5. Note that there is no systematic relationship between conditioning interspike interval and effectiveness. In fact, lines fitted to these points tend to be horizontal, suggesting that there are no significant second order effects under these experimental conditions. Our results suggest that synaptic efficacy is inversely and roughly linearly related to average presynaptic rate. We have attempted to understand the mechanism of the observed decrease in efficacy in terms of a model that assumes stationary synaptic coupling mechanisms. The model was designed to address the following hypothesis: could the decrease in synaptic efficacy at high input rates be due to an increase in the likelihood of driving the stochastic intensity below zero, and, hence, decreasing the apparent efficacy of the input due to clipping? The answer was pursued by attempting to reproduce the data collected for the 3 best type II / type IV pairs in our data set. Real data recorded from the presynaptic unit are used as input to these models.
The parameters of the models were adjusted so that the first moment of the output process had the same quantitative trajectory as that seen in the real postsynaptic unit. The simulated data were analyzed by the same algorithms used to analyze the real data. Our goal was to compare the simulated results with the real results. If the simulated data showed the same inverse relationship between presynaptic rate and synaptic efficacy as the real data, it would suggest that the phenomenon is due to non-linear clipping by the postsynaptic unit. The simulation algorithm was based on the model described in figure 2 and equation (1), but with the following modifications:

• The experimentally determined type IV PST profile was substituted for K₀ (this term represents the average combined influence of all extrinsic inputs to the type IV cell plus the intrinsic spontaneous rate).

• An impulse response function estimated from the data was substituted for K₁ (this kernel is stationary in the simulation model).

• The convolution of the experimentally determined type II spikes with the first-order kernel was used to perturb the output cell's stochastic intensity:

    λ₁(t) = max[ 0, P₁(t) + Σ_{u_i : dN₂(u_i) = 1} K₁|₂(t − u_i) ],

where dN₂(t) is the real type II cell spike record and P₁(t) is the PST profile of the real type IV cell.

• The output process was simulated as a non-homogeneous Poisson process with λ₁(t) as its parameter. This process was modified by a 0.5 msec absolute dead time.

• The simulated data were analyzed in the same manner as the real data. The dependence of synaptic efficacy on presynaptic rate in the simulated data was compared to the corresponding real data.

In 1 out of the 3 cases, we observed an inverse relationship between input rate and efficacy despite the use of a stationary first order kernel in the simulation.
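A discrete-time sketch of the simulation just described — clipped stochastic intensity, non-homogeneous Poisson output, 0.5 ms absolute dead time — with placeholder profiles and kernels standing in for the experimentally determined quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins for the experimentally determined quantities:
dt = 0.0001                                          # 0.1 ms time step (s)
t = np.arange(0.0, 1.0, dt)                          # 1 s of simulated time
P1 = 50.0 + 20.0 * np.sin(2 * np.pi * 4 * t)         # placeholder PST profile (spikes/s)
tau = 0.002                                          # 2 ms kernel decay
K12 = -80.0 * np.exp(-np.arange(0.0, 0.01, dt) / tau)  # inhibitory first-order kernel

# Placeholder presynaptic (type II) spike train: homogeneous Poisson at 100 spikes/s
pre_spikes = rng.random(t.size) < 100.0 * dt

# Stochastic intensity: PST profile plus kernel convolved with the presynaptic
# spike record, clipped at zero as in the model (the max[0, ...] above)
drive = np.convolve(pre_spikes.astype(float), K12)[: t.size]
lam = np.maximum(0.0, P1 + drive)

# Output as a non-homogeneous Poisson process (Bernoulli approximation on the
# time grid), modified by a 0.5 ms absolute dead time
dead_steps = int(0.0005 / dt)
post = np.zeros(t.size, dtype=bool)
last = -dead_steps
for i in range(t.size):
    if i - last >= dead_steps and rng.random() < lam[i] * dt:
        post[i] = True
        last = i

print(post.sum(), "simulated postsynaptic spikes")
```

In the real analysis the PST profile, kernel, and presynaptic spike record come from the recorded data; here they are synthetic so the sketch runs stand-alone.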
The similarity between the real and simulated results for this one case suggests that the mechanism may be purely statistical rather than physiological (e.g., not presynaptic depletion or postsynaptic desensitization). The other 2 simulations did not yield a strong dependence of effectiveness on input rate and, hence, failed to mimic the experimental results. In these two cases, the results suggest that the mechanism is not due solely to clipping, but involves some additional, possibly physiological, mechanisms.

4 CONCLUSIONS

(1) The amount of inhibition imparted to type IV units by individual presynaptic type II unit action potentials (expressed as the expected number of type IV spikes eliminated per type II spike) is inversely and roughly linearly related to the average rate of the type II unit. (2) There is no evidence for second order synaptic effects at the type II spike rates tested. In other words, the inhibitory effect of two successive type II spikes is simply the linear sum of the inhibition imparted by each individual spike. (3) There is no consistent relationship between type II / type IV synaptic efficacy and postsynaptic (type IV) rate. (4) Simulations, in some cases, suggest that the inverse relationship between presynaptic rate and effectiveness may be reproduced using a simple statistical model of neural interaction. (5) We found no evidence that would explain the discrepancy between Voigt and Young's results and Gochin's results in the DCN. Gochin observed correlogram features consistent with monosynaptic excitatory connections within the DCN when short tone bursts were used as stimuli. We did not observe excitatory features between any unit pairs using short tone bursts.

Acknowledgements

Dr. Alan Karr assisted in developing Eqns. 1-10. E. Nelken provided helpful comments. Research supported by NIH grant DC00115.

References

Bremaud, P. (1981). Point Processes and Queues: Martingale Dynamics. New York, Springer-Verlag.
Caspary, D.M., Rybak, L.P., et al. (1984). "Baclofen reduces tone-evoked activity of cochlear nucleus neurons." Hear Res. 13: 113-22.
Chornoboy, E.S., Schramm, L.P., et al. (1988). "Maximum likelihood identification of neural point process systems." Biol Cybern. 59: 265-75.
Gochin, P.M., Kaltenbach, J.A., et al. (1989). "Coordinated activity of neuron pairs in anesthetized rat dorsal cochlear nucleus." Brain Res. 497: 1-11.
Manis, P.B. & Brownell, W.E. (1983). "Synaptic organization of eighth nerve afferents to cat dorsal cochlear nucleus." J Neurophysiol. 50: 1156-81.
Rhode, W.S., Smith, P.H., et al. (1983). "Physiological response properties of cells labeled intracellularly with horseradish peroxidase in cat dorsal cochlear nucleus." J Comp Neurol. 213: 426-47.
Voigt, H.F. & Young, E.D. (1980). "Evidence of inhibitory interactions between neurons in dorsal cochlear nucleus." J Neurophys. 44: 76-96.
Voigt, H.F. & Young, E.D. (1990). "Cross-correlation analysis of inhibitory interactions in the Dorsal Cochlear Nucleus." J Neurophys. 54: 1590-1610.
Young, E.D. & Brownell, W.E. (1976). "Responses to tones and noise of single cells in dorsal cochlear nucleus of unanesthetized cats." J Neurophys. 39: 282-300.
1991
117
449
Time-Warping Network: A Hybrid Framework for Speech Recognition

Esther Levin    Roberto Pieraccini    Enrico Bocchieri
AT&T Bell Laboratories, Speech Research Department
Murray Hill, NJ 07974 USA

ABSTRACT

Recently, much interest has been generated regarding speech recognition systems based on Hidden Markov Models (HMMs) and neural network (NN) hybrids. Such systems attempt to combine the best features of both models: the temporal structure of HMMs and the discriminative power of neural networks. In this work we define a time-warping (TW) neuron that extends the operation of the formal neuron of a back-propagation network by warping the input pattern to match it optimally to its weights. We show that a single-layer network of TW neurons is equivalent to a Gaussian density HMM-based recognition system, and we propose to improve the discriminative power of this system by using back-propagation discriminative training, and/or by generalizing the structure of the recognizer to a multi-layered net. The performance of the proposed network was evaluated on a highly confusable, isolated word, multi-speaker recognition task. The results indicate that not only does the recognition performance improve, but the separation between classes is enhanced also, allowing us to set up a rejection criterion to improve the confidence of the system.

I. INTRODUCTION

Since their first application in speech recognition systems in the late seventies, hidden Markov models have been established as a most useful tool, mainly due to their ability to handle the sequential dynamical nature of the speech signal. With the revival of connectionism in the mid-eighties, considerable interest arose in applying artificial neural networks for speech recognition. This interest was based on the discriminative power of NNs and their ability to deal with non-explicit knowledge. These two paradigms, namely HMM and NN, inspired by different philosophies, were seen at first as different and competing tools. Recently,
links have been established between these two paradigms, aiming at a hybrid framework in which the advantages of the two models can be combined. For example, Bourlard and Wellekens [1] showed that neural networks with proper architecture can be regarded as non-parametric models for computing "discriminant probabilities" related to HMM. Bridle [2] introduced "Alpha-nets", a recurrent neural architecture that implements the alpha computation of HMM, and found connections between back-propagation [3] training and discriminative HMM parameter estimation. Predictive neural nets were shown to have a statistical interpretation [4], generalizing the conventional hidden Markov model by assuming that the speech signal is generated by nonlinear dynamics contaminated by noise. In this work we establish one more link between the two paradigms by introducing the time-warping network (TWN), which is a generalization of both an HMM-based recognizer and a back-propagation net. The basic element of such a network, a time-warping neuron, generalizes the function of a formal neuron by warping the input signal in order to maximize its activation. For a special case of network parameter values, a single-layered network of time-warping (TW) neurons is equivalent to a recognizer based on Gaussian HMMs. This equivalence of the HMM-based recognizer and single-layer TWN suggests ways of using discriminative neural tools to enhance the performance of the recognizer. For instance, a training algorithm, like back-propagation, that minimizes a quantity related to the recognition performance, can be used to train the recognizer instead of the standard non-discriminative maximum likelihood training. Then, the architecture of the recognizer can be expanded to contain more than one layer of units, enabling the network to form discriminant feature detectors in the hidden layers.
This paper is organized as follows: in the first part of Section 2 we describe a simple HMM-based recognizer. Then we define the time-warping neuron and show that a single-layer network built with such neurons is equivalent to the HMM recognizer. In Section 3 two methods are proposed to improve the discriminative power of the recognizer, namely, adopting neural training algorithms and extending the structure of the recognizer to a multi-layer net. For special cases of such multi-layer architecture, the net can implement a conventional or weighted [5] HMM recognizer. Results of experiments using a TW network for recognition of the English E-set are presented in Section 4. The results indicate that not only does the recognition performance improve, but the separation between classes is enhanced also, allowing us to set up a rejection criterion to improve the confidence of the system. A summary and discussion of this work are included in Section 5.

II. THE MODEL

In this section we first describe the basic HMM-based speech recognition system that is used in many applications, including isolated and connected word recognition [6] and large vocabulary subword-based recognition [7]. Though in this paper we treat the case of isolated word recognition, generalization to connected speech can be made as in [6,7]. In the second part of this section we define a single-layered time-warping network and show that it is equivalent to the HMM-based recognizer when certain conditions constraining the network parameter values apply.

II.1 THE HIDDEN MARKOV MODEL-BASED RECOGNITION SYSTEM

An HMM-based recognition system consists of K N-state HMMs, where K is the vocabulary size (number of words or subword units in the defined task).
The k-th HMM, Θ^k, is associated with the k-th word in the vocabulary and is characterized by a matrix A^k = {a^k_{ij}} of transition probabilities between states,

    a^k_{ij} = Pr(s_t = j | s_{t-1} = i),  0 ≤ i ≤ N, 1 ≤ j ≤ N,    (1)

where s_t denotes the active state at time t (s_0 = 0 is a dummy initial state), and by a set of emission probabilities (one per state):

    Pr(X_t | s_t = i) = (2π)^{-d/2} |Σ^k_i|^{-1/2} exp[ -(1/2) (X_t - μ^k_i)* (Σ^k_i)^{-1} (X_t - μ^k_i) ],  i = 1, ..., N,    (2)

where X_t is the d-dimensional observation vector describing some parametric representation of the t-th frame of the spoken token, and (·)* denotes the transpose operation. For the case discussed here, we concentrate on strictly left-to-right HMMs, where a^k_{ij} ≠ 0 only if j = i or j = i+1, and a simplified case of (2) where all Σ^k_i = I_d, the d-dimensional unit matrix. The system recognizes a speech token of duration T, X = {X_1, X_2, ..., X_T}, by classifying the token into the class k_0 with the highest likelihood L^k(X),

    k_0 = argmax_{1 ≤ k ≤ K} L^k(X).    (3)

The likelihood L^k(X) is computed for the k-th HMM as

    L^k(X) = max_{i_1, ..., i_T} log[ Pr(X | Θ^k, s_1 = i_1, ..., s_T = i_T) ]
           = max_{i_1, ..., i_T} Σ_{t=1}^{T} [ -(1/2) ||X_t - μ^k_{i_t}||² + log a^k_{i_{t-1}, i_t} - (d/2) log 2π ].    (4)

The state sequence that maximizes (4) is found by using the Viterbi [8] algorithm.

II.2 THE EQUIVALENT SINGLE-LAYER TIME-WARPING NETWORK

A single-layer TW network is composed of K TW neurons, one for each word in the vocabulary. The TW neuron is an extension of a formal neuron that can handle dynamic and temporally distorted patterns. The k-th TW neuron, associated with the k-th vocabulary word, is characterized by a bias w^k_0 and a set of weights W^k = {W̃^k_1, W̃^k_2, ..., W̃^k_N}, where W̃^k_j is a column vector of dimensionality d+2. Given an input speech token of duration T, X = {X_1, X_2, ..., X_T}, the output activation y^k of the k-th unit is computed as

    y^k = g( Σ_{t=1}^{T} X̃_t · W̃^k_{i_t} + w^k_0 ) = g( Σ_{j=1}^{N} ( Σ_{t: i_t = j} X̃_t ) · W̃^k_j + w^k_0 ),    (5)
where g(·) is a sigmoidal, smooth, strictly increasing nonlinearity, and X̃_t = [X*_t, 1, 1]* is a (d+2)-dimensional augmented input vector. The corresponding indices i_t, t = 1, ..., T are determined by the following condition:

    {i_1, ..., i_T} = argmax Σ_{t=1}^{T} X̃_t · W̃^k_{i_t} + w^k_0.    (6)

In other words, a TW neuron warps the input pattern to match it optimally to its weights (6) and computes its output using this warped version of the input (5). The time-warping process of (6) is a distinguishing feature of this neural model, enabling it to deal with the dynamic nature of a speech signal and to handle temporal distortions. All TW neurons in this single-layer net recognizer receive the same input speech token X. Recognition is performed by selecting the word class corresponding to the neuron with the maximal output activation. It is easy to show that when

    [W̃^k_j]* = [ [μ^k_j]*, -(1/2) ||μ^k_j||², log a^k_{j,j} ],    (7a)

and

    w^k_0 = Σ_{j=1}^{N} ( log a^k_{j,j-1} - log a^k_{j,j} ),    (7b)

this network is equivalent to an HMM-based recognition system, with K N-state HMMs, as described above.¹ This equivalent neural representation of an HMM-based system suggests ways of improving the discriminative power of the recognizer, while preserving the temporal structure of the HMM, thus allowing generalization to more complicated tasks (e.g., continuous speech, subword units, etc.).

III. IMPROVING DISCRIMINATION

There are two important differences between the HMM-based system and a neural net approach to speech recognition that contribute to the improved discrimination power of the latter, namely, training and structure.

III.1 DISCRIMINATIVE TRAINING

The HMM parameters are usually estimated by applying the maximum likelihood approach, using only the examples of the word represented by the model and disregarding the rival classes completely. This is a non-discriminative approach: the learning criterion is not directly connected to the improvement of recognition accuracy.
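For concreteness, the warped activation of eqs. (5)-(6) can be computed with the same Viterbi-style dynamic programming used for (4). A minimal sketch, assuming the strictly left-to-right warping implied by the HMM equivalence (all names here are illustrative):

```python
import numpy as np

def tw_neuron_activation(X, W, w0):
    """Activation of one time-warping neuron in the spirit of eqs. (5)-(6).

    X  : (T, d) input frames; each frame is augmented to [x, 1, 1].
    W  : (N, d+2) weight vectors, one per state, traversed left to right.
    w0 : scalar bias.
    The optimal left-to-right warping (state index stays or advances by one
    at each frame) is found by Viterbi-style dynamic programming.
    """
    T, d = X.shape
    N = W.shape[0]
    Xa = np.hstack([X, np.ones((T, 2))])   # augmented frames, shape (T, d+2)
    local = Xa @ W.T                        # local[t, j] = Xa_t . W_j
    score = np.full((T, N), -np.inf)
    score[0, 0] = local[0, 0]               # warping must start in the first state
    for t in range(1, T):
        for j in range(N):
            stay = score[t - 1, j]
            advance = score[t - 1, j - 1] if j > 0 else -np.inf
            score[t, j] = local[t, j] + max(stay, advance)
    best = score[T - 1, N - 1] + w0         # warping must end in the last state
    return 1.0 / (1.0 + np.exp(-best))     # sigmoidal g(.)

# Toy usage: 6 frames of 2-d features, 3 states
rng = np.random.default_rng(1)
y = tw_neuron_activation(rng.standard_normal((6, 2)), rng.standard_normal((3, 4)), 0.0)
```

With N = 1 the warping is trivial and the activation reduces to a formal neuron applied to the frame sum, which is a useful sanity check on the recursion.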
Here we propose to enhance the discriminative power of the system by adopting a neural training approach. NN training algorithms are based on minimizing an error function E, which is related to the performance of the network on the training set of labeled examples {X^l, Z^l}, l = 1, ..., L, where Z^l = [z^l_1, ..., z^l_K]* denotes the vector of target neural outputs for the l-th input token. Z^l has +1 only in the entry corresponding to the right word class, and -1 elsewhere. Then,

    E = Σ_{l=1}^{L} E^l(Z^l, Y^l),    (8)

where Y^l = [y^l_1, ..., y^l_K]* is a vector of neural output activations for the l-th input token, and E^l(Z^l, Y^l) measures the distortion between the two vectors. One choice for E^l(Z^l, Y^l) is a quadratic error measure, i.e., E^l(Z^l, Y^l) = ||Z^l - Y^l||². Other choices include the cross-entropy error [9] and the recently proposed discriminative error functions, which measure the misclassification rate more directly [10]. The gradient-based training algorithms (such as back-propagation) modify the parameters of the network after presentation of each training token to minimize the error (8). The change in the j-th weight subvector of the k-th model after presentation of the l-th training token, Δ^l W̃^k_j, is proportional to the negative derivative of the error E^l with respect to this weight subvector,

    Δ^l W̃^k_j = -α ∂E^l/∂W̃^k_j = -α Σ_{m=1}^{K} (∂E^l/∂y^l_m) (∂y^l_m/∂W̃^k_j),  1 ≤ j ≤ N, 1 ≤ k ≤ K,    (9)

where α > 0 is a step-size, resulting in an updated weight vector

    [W̃^k_j]* = [ [W^k_j + ΔW^k_j]*, -(1/2) ||W^k_j + ΔW^k_j||², log a^k_{j,j} ].

To compute the terms ∂y^l_m/∂W̃^k_j we have to consider (5) and (6), which define the operation of the neuron. Equation (6) expresses the dependence of the warping indices i_1, ..., i_T on W̃^k_j.

1. With minor changes we can show equivalence to a general Gaussian HMM, where the covariance matrices are not restricted to be the unit matrix.
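A single gradient-style step of this kind, for the quadratic error and one TW neuron, can be sketched as follows. The warping indices are held fixed during the backward pass (an approximation), and all names are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def tw_weight_update(Xa, warp, W, w0, z, alpha):
    """One update for a single TW neuron under the quadratic error.

    Xa    : (T, d+2) augmented input frames.
    warp  : length-T warping indices i_t (treated as constants here).
    W, w0 : (N, d+2) state weight vectors and scalar bias.
    z     : target output (+1 or -1); alpha > 0 is the step size.
    """
    T = len(warp)
    a = sum(Xa[t] @ W[warp[t]] for t in range(T)) + w0   # pre-activation of eq. (5)
    y = sigmoid(a)
    gprime = y * (1.0 - y)                               # derivative of the sigmoid
    W_new = W.copy()
    for j in range(W.shape[0]):
        # Sum of the frames warped onto state j
        frames = Xa[[t for t in range(T) if warp[t] == j]]
        if frames.size:
            W_new[j] = W_new[j] + alpha * (z - y) * gprime * frames.sum(axis=0)
    return W_new, y
```

Because each state's correction is the frame sum scaled by the common factor α(z − y)g'(·), the pre-activation moves monotonically toward the target, so the squared error (z − y)² cannot increase for this step.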
In the proposed learning rule we compute the gradient for the quadratic error criterion using only (5):

    Δ^l W̃^k_j = α (z^l_k - y^l_k) g'(·) Σ_{t: i_t = j} X̃_t,    (10)

where the values of i_t fulfill condition (6). Although the weights do not change according to the exact gradient descent rule (since (6) is not taken into account for back-propagation), we found experimentally that the error made by the network always decreases after the weight update. This fact also can be proved when certain conditions restricting the step-size α hold, and we conjecture that it is always true for α > 0.

III.2 THE STRUCTURE OF THE RECOGNIZER

When the equivalent neural representation of the HMM-based recognizer is used, there exists a natural way of adaptively increasing the complexity of the decision boundaries and developing discriminative feature detectors. This can be done by extending the structure of the recognizer to a multi-layered net. There are many possible architectures that result from such an extension by changing the number of hidden layers, as well as the number and the type (i.e., standard or TW) of neurons in the hidden layers. Moreover, the role of the TW neurons in the first hidden layer is different now: they are no longer class representatives, as in a single-layered net, but just abstract computing elements with built-in time scale normalization. In this work we investigate only a simple special case of such multi-layered architecture. The multi-layered network we use has a single hidden layer, with N×K TW neurons. Each hidden neuron corresponds to one state of one of the original HMMs, and is characterized by a weight vector W̃^k_j and a bias w^k_j. The output activation h^k_j of the neuron is given as

    h^k_j = Σ_{t: i_t = j} X̃_t · W̃^k_j + w^k_j,    (11)

where the warping indices are determined by

    {i_1, ..., i_T} = argmax Σ_{j=1}^{N} ( Σ_{t: i_t = j} X̃_t · W̃^k_j + w^k_j ).

The output layer is composed of K standard neurons. The activation of output neurons y^k, k = 1, ...
, K, is determined by the hidden layer neuron activations as

    y^k = g( H* V^k + v^k ),    (12)

where V^k is an N·K-dimensional weight vector, H is the vector of hidden neuron activations, and v^k is a bias term. In a special case of parameter values, when the W̃^k_j satisfy the conditions (7a,b) and

    w^k_j = log a^k_{j,j-1} - log a^k_{j,j},    (13)

the activation h^k_j corresponds to an accumulated j-th state likelihood of the k-th HMM, and the network implements a weighted [5] HMM recognizer, where the connection weight vectors V^k determine the relative weights assigned to each state likelihood in the final classification. Such a network can learn to adapt these weights to enhance discrimination by giving large positive weights to states that contain information important for discrimination and ignoring (by forming zero or close-to-zero weights) those states that do not contribute to discrimination. A back-propagation algorithm can be used for training this net.

IV. EXPERIMENTAL RESULTS

To evaluate the effectiveness of the proposed TWN, we conducted several experiments that involved recognition of the highly confusable English E-set (i.e., /b, c, d, e, g, p, t, v, z/). The utterances were collected from 100 speakers, 50 males and 50 females, each speaking every word in the E-set twice, once for training and once for testing. The signal was sampled at 6.67 kHz. We used 12 cepstral and 12 delta-cepstral LPC-derived [11] coefficients to represent each 45 msec frame of the sampled signal. We used a baseline conventional HMM-based recognizer to initialize the TW network and to get a benchmark performance. Each strictly left-to-right HMM in this system has five states, and the observation densities are modeled by four Gaussian mixture components. The recognition rates of this system are 61.7% on the test data and 80.2% on the training data.
Experiment with single-layer TWN: In this experiment the single-layer TW network was initialized according to (7), using the parameters of the baseline HMMs. The four mixture components of each state were treated as a fully connected set of four states, with transition probabilities that reflect the original transition probabilities and the relative weights of the mixtures. This corresponds to the case in which the local likelihood is computed using the dominant mixture component only. The network was trained using the suggested training algorithm (10), with a quadratic error function. The recognition rate of the trained network increased to 69.4% on the test set and 93.6% on the training set.

Experiment with multi-layer TWN: In this experiment we used the multi-layer network architecture described in the previous section. The recognition performance of this network after training was 74.4% on the test set and 91% on the training set. Figures 1, 2, and 3 show the recognition performance of a single-layer TWN initialized by a baseline HMM, the trained single-layer TWN, and the trained multi-layer TWN, respectively. In these figures the activation of the unit representing the correct class is plotted against the activation of the best wrong unit (i.e., the incorrect class with the highest score) for each input utterance. Therefore, the utterances that correspond to the marks above the diagonal line are correctly recognized, and those under it are misclassified. The most interesting observation that can be made from these plots is the striking difference between the multi-layer and the single-layer TWNs. The single-layer TWNs in Figures 1 and 2 (the baseline and the trained) exhibit the same typical behavior, with the utterances concentrated around the diagonal line. For the multi-layer net, the utterances that were recognized correctly tend to concentrate in the upper part of the graph, having the correct unit activation close to 1.0.
This property of a multi-layer net can be used for introducing error rejection criteria: utterances for which the difference between the highest activation and the second highest activation is less than a prescribed threshold are rejected. In Figure 4 we compare the test performance of the multi-layer net and the baseline system, both with such a rejection mechanism, for different values of the rejection threshold. As expected, the multi-layer net outperforms the baseline recognizer, showing a much smaller misclassification rate for the same number of rejections.

V. SUMMARY AND DISCUSSION

In this paper we established a hybrid framework for speech recognition, combining the characteristics of hidden Markov models and neural networks. We showed that an HMM-based recognizer has an equivalent representation as a single-layer network composed of time-warping neurons, and proposed to improve the discriminative power of the recognizer by using back-propagation training and by generalizing the structure of the recognizer to a multi-layer net. Several experiments were conducted for testing the performance of the proposed network on a highly confusable vocabulary (the English E-set). The recognition performance on the test set of a single-layer TW net improved from 61% (when initialized with the baseline HMMs) to 69% after training. Expanding the structure of the recognizer by one more layer of neurons, we obtained a further improvement of recognition accuracy up to 74.4%. Scatter plots of the results indicate that in the multi-layer case there is a qualitative change in the performance of the recognizer, allowing us to set up a rejection criterion to improve the confidence of the system.

References

1. H. Bourlard, C.J. Wellekens, "Links between Markov models and multilayer perceptrons," Advances in Neural Information Processing Systems, pp. 502-510, Morgan Kauffman, 1989.

2. J.S.
Bridle, "Alpha-nets: a recurrent 'neural' network architecture with a hidden Markov model interpretation," Speech Communication, April 1990.

3. D.E. Rumelhart, G.E. Hinton and R.J. Williams, "Learning internal representation by error propagation," Parallel Distributed Processing: Exploration in the Microstructure of Cognition, MIT Press, 1986.

4. E. Levin, "Word recognition using hidden control neural architecture," Proc. of ICASSP, Albuquerque, April 1990.

5. K.-Y. Su, C.-H. Lee, "Speech Recognition Using Weighted HMM and Subspace Projection Approaches," Proc. of ICASSP, Toronto, 1991.

6. L.R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. of IEEE, vol. 77, no. 2, pp. 257-286, February 1989.

7. C.-H. Lee, L.R. Rabiner, R. Pieraccini, J.G. Wilpon, "Acoustic Modeling for Large Vocabulary Speech Recognition," Computer Speech and Language, 1990, no. 4, pp. 127-165.

8. G.D. Forney, "The Viterbi algorithm," Proc. IEEE, vol. 61, pp. 268-278, Mar. 1973.

9. S.A. Solla, E. Levin, M. Fleisher, "Improved targets for multilayer perceptron learning," Neural Networks Journal, 1988.

10. B.-H. Juang, S. Katagiri, "Discriminative Learning for Minimum Error Classification," IEEE Trans. on SP, to be published.

11. B.S. Atal, "Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification," J. Acoust. Soc. Am., vol. 55, no. 6, pp. 1304-1312, June 1974.

Figure 1: Scatter plot for baseline recognizer
Figure 2: Scatter plot for trained single-layer TWN
Figure 3: Scatter plot for multi-layer TWN
Figure 4: Rejection performance of baseline recognizer and the multi-layer TWN
1991
118
450
The Effective Number of Parameters: An Analysis of Generalization and Regularization in Nonlinear Learning Systems

John E. Moody
Department of Computer Science, Yale University
P.O. Box 2158 Yale Station, New Haven, CT 06520-2158
Internet: moody@cs.yale.edu, Phone: (203) 432-1200

Abstract

We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions. The principal result is the following relationship (computed to second order) between the expected test set and training set errors:

    ⟨E_test(λ)⟩ ≈ ⟨E_train(λ)⟩ + 2 σ²_eff p_eff(λ) / n.    (1)

Here, n is the size of the training sample ξ, σ²_eff is the effective noise variance in the response variable(s), λ is a regularization or weight decay parameter, and p_eff(λ) is the effective number of parameters in the nonlinear model. The expectations ⟨ ⟩ of training set and test set errors are taken over possible training sets ξ and training and test sets ξ' respectively. The effective number of parameters p_eff(λ) usually differs from the true number of model parameters p for nonlinear or regularized models; this theoretical conclusion is supported by Monte Carlo experiments. In addition to the surprising result that p_eff(λ) ≠ p, we propose an estimate of (1) called the generalized prediction error (GPE) which generalizes well established estimates of prediction risk such as Akaike's FPE and AIC, Mallows' C_p, and Barron's PSE to the nonlinear setting.¹

1. GPE and p_eff(λ) were previously introduced in Moody (1991).

1 Background and Motivation

Many of the nonlinear learning systems of current interest for adaptive control, adaptive signal processing, and time series prediction are supervised learning systems of the regression type.
Understanding the relationship between generalization performance and training error, and being able to estimate the generalization performance of such systems, is of crucial importance. We will take the prediction risk (expected test set error) as our measure of generalization performance.

2 Learning from Examples

Consider a set of n real-valued input/output data pairs ξ(n) = {ξ^i = (x^i, y^i); i = 1, ..., n} drawn from a stationary density Ξ(ξ). The observations can be viewed as being generated according to the signal plus noise model²

    y^i = μ(x^i) + ε^i,    (2)

where y^i is the observed response (dependent variable), the x^i are the independent variables sampled with input probability density Ω(x), ε^i is independent, identically-distributed (iid) noise sampled with density Φ(ε) having mean 0 and variance σ²,³ and μ(x) is the conditional mean, an unknown function. From the signal plus noise perspective, the density Ξ(ξ) = Ξ(x, y) can be represented as the product of two components, the conditional density Ψ(y|x) and the input density Ω(x):

    Ξ(x, y) = Ψ(y|x) Ω(x) = Φ(y - μ(x)) Ω(x).    (3)

The learning problem is then to find an estimate μ̂(x) of the conditional mean μ(x) on the basis of the training set ξ(n). In many real world problems, few a priori assumptions can be made about the functional form of μ(x). Since a parametric function class is usually not known, one must resort to a nonparametric regression approach, whereby one constructs an estimate μ̂(x) = f(x) for μ(x) from a large class of functions F known to have good approximation properties (for example, F could be all possible radial basis function networks and multilayer perceptrons). The class of approximation functions is usually the union of a countable set of subclasses (specific network architectures)⁴ A ⊂ F for which the elements of each subclass f(w, x) ∈ A are continuously parametrized by a set of p = p(A) weights w = {w_α; α = 1, ..., p}.
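A toy instance of the generating model (2)-(3) can make the setup concrete; the choice of μ(x) and of Gaussian noise below is purely illustrative, since the development only assumes iid noise with mean 0 and finite variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(x):
    # Placeholder for the unknown conditional mean mu(x)
    return np.sin(2.0 * np.pi * x)

n = 100
x = rng.uniform(0.0, 1.0, n)      # x^i sampled from the input density Omega(x)
eps = rng.normal(0.0, 0.3, n)     # iid noise, mean 0, variance sigma^2 = 0.09
y = mu(x) + eps                   # y^i = mu(x^i) + eps^i, as in eq. (2)
```

The learning problem is then to recover an estimate of mu from the pairs (x, y) alone.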
The task of finding the estimate f̂(x) thus consists of two problems: choosing the best architecture A and choosing the best set of weights ŵ given the architecture. Note that in the nonparametric setting, there does not typically exist a function f(w*, x) ∈ F with a finite number of parameters such that f(w*, x) = μ(x) for arbitrary μ(x). For this reason, the estimators μ̂(x) = f(ŵ, x) will be biased estimators of μ(x).⁵

The first problem (finding the architecture A) requires a search over possible architectures (e.g. network sizes and topologies), usually starting with small architectures and then considering larger ones. By necessity, the search is not usually exhaustive and must use heuristics to reduce search complexity. (A heuristic search procedure for two layer networks is presented in Moody and Utans (1992).) The second problem (finding a good set of weights for f(w, x)) is accomplished by minimizing an objective function:

    ŵ_A = argmin_w U(A, w, ξ(n)).    (4)

The objective function U consists of an error function plus a regularizer:

    U(A, w, ξ(n)) = n E_train(w, ξ(n)) + λ S(w).    (5)

Here, the error E_train(w, ξ(n)) measures the "distance" between the target response values y^i and the fitted values f(w, x^i):

2. The assumption of additive noise ε which is independent of x is a standard assumption and is not overly restrictive. Many other conceivable signal/noise models can be transformed into this form. For example, the multiplicative model y = μ(x)(1 + ε) becomes y' = μ'(x) + ε' for the transformed variable y' = log(y).

3. Note that we have made only a minimal assumption about the noise ε, that it has finite variance σ² independent of x. Specifically, we do not need to make the assumption that the noise density Φ(ε) is of known form (e.g. gaussian) for the following development.

4. For example, a "fully connected two layer perceptron with five internal units".
    E_train(w, ξ(n)) = (1/n) Σ_{i=1}^{n} E[y^i, f(w, x^i)],    (6)

and S(w) is a regularization or weight-decay function which biases the solution toward functions with a priori "desirable" characteristics, such as smoothness. The parameter λ ≥ 0 is the regularization or weight decay parameter and must itself be optimized.⁶

The most familiar example of an objective function uses the squared error⁷ E[y^i, f(w, x^i)] = [y^i - f(w, x^i)]² and a weight decay term:

    U(λ, w, ξ(n)) = Σ_{i=1}^{n} (y^i - f(w, x^i))² + λ Σ_{α=1}^{p} g(w_α).    (7)

The first term is the sum of squared errors (SSE) of the model f(w, x) with respect to the training data, while the second term penalizes either small, medium, or large weights, depending on the form of g(w_α). Two common examples of weight decay functions are the ridge regression form g(w_α) = (w_α)² (which penalizes large weights) and the Rumelhart form g(w_α) = (w_α)² / [(w_0)² + (w_α)²] (which penalizes weights of intermediate values near w_0).

5. By biased, we mean that the mean squared bias is nonzero: MSB = ∫ ρ(x) ( ⟨μ̂(x)⟩_ξ - μ(x) )² dx > 0. Here, ρ(x) is some positive weighting function on the input space and ⟨·⟩_ξ denotes an expected value taken over possible training sets ξ(n). For unbiasedness (MSB = 0) to occur, there must exist a set of weights w* such that f(w*, x) = μ(x), and the learned weights ŵ must be "close to" w*. For "near unbiasedness", we must have w* = argmin_w MSB(w) such that MSB(w*) ≈ 0 and ŵ "close to" w*.

6. The optimization of λ will be discussed in Moody (1992).

7. Other error functions, such as those used in generalized linear models (see for example McCullagh and Nelder 1983) or robust statistics (see for example Huber 1981) are more appropriate than the squared error if the noise is known to be non-gaussian or the data contains many outliers.

An example of a regularizer which is not explicitly a weight decay term is:

    S(w) = ∫ dx Ω(x) ||∂_xx f(w, x)||².    (8)
This is a smoothing term which penalizes functional fits with high curvature.

3 Prediction Risk

With μ̂(x) = f(ŵ[ξ(n)], x) denoting an estimate of the true regression function μ(x) trained on a data set ξ(n), we wish to estimate the prediction risk P, which is the expected error of μ̂(x) in predicting future data. In principle, we can either define P for models μ̂(x) trained on arbitrary training sets of size n sampled from the unknown density Ψ(y|x) Ω(x), or for training sets of size n with input density equal to the empirical density defined by the single training set available:

    Ω'(x) = (1/n) Σ_{i=1}^n δ(x − x^i) .    (9)

For such training sets, the n inputs x^i are held fixed, but the response variables y^i are sampled with the conditional densities Ψ(y|x^i). Since Ω'(x) is known, but Ω(x) is generally not known a priori, we adopt the latter approach. For a large ensemble of such training sets, the expected training set error is^8

    ⟨E_train(λ)⟩_ξ = ⟨ (1/n) Σ_{i=1}^n E[y^i, f(ŵ[ξ(n)], x^i)] ⟩_ξ
                   = ∫ (1/n) Σ_{i=1}^n E[y^i, f(ŵ[ξ(n)], x^i)] { Π_{i=1}^n Ψ(y^i|x^i) dy^i }    (10)

For a future exemplar (x, z) sampled with density Ψ(z|x) Ω(x), the prediction risk P is defined as:

    P = ∫ E[z, f(ŵ[ξ(n)], x)] Ψ(z|x) Ω(x) { Π_{i=1}^n Ψ(y^i|x^i) dy^i } dz dx    (11)

Again, however, we don't assume that Ω(x) is known, so computing (11) is not possible. Following Akaike (1970), Barron (1984), and numerous other authors (see Eubank 1988), we can define the prediction risk P as the expected test set error for test sets of size n, ξ'(n) = {ξ'^i = (x^i, z^i); i = 1, ..., n}, having the empirical input density Ω'(x). This expected test set error has form:

    ⟨E_test(λ)⟩_{ξξ'} = ⟨ (1/n) Σ_{i=1}^n E[z^i, f(ŵ[ξ(n)], x^i)] ⟩_{ξξ'}
                      = ∫ (1/n) Σ_{i=1}^n E[z^i, f(ŵ[ξ(n)], x^i)] { Π_{i=1}^n Ψ(y^i|x^i) Ψ(z^i|x^i) dy^i dz^i }    (12)

[Footnote 8: Following the physics convention, we use angled brackets ⟨ ⟩ to denote expected values. The subscripts denote the random variables being integrated over.]

We take (12) as a proxy for the true prediction risk P.
In order to compute (12), it will not be necessary to know the precise functional form of the noise density Φ(ε). Knowing just the noise variance σ² will enable an exact calculation for linear models trained with the SSE error and an approximate calculation correct to second order for general nonlinear models. The results of these calculations are presented in the next two sections.

4 The Expected Test Set Error for Linear Models

The relationship between expected training set and expected test set errors for linear models trained using the SSE error function with no regularizer is well known in statistics (Akaike 1970, Barron 1984, Eubank 1988). The exact relation for test and training sets with density (9) is:

    ⟨E_test⟩_{ξξ'} = ⟨E_train⟩_ξ + 2 σ² (p/n)    (13)

As pointed out by Barron (1984), (13) can also apply approximately to the case of a nonlinear model f(w, x) trained by minimizing the sum of squared errors SSE. This approximation can be arrived at in two ways. First, the model f(w, x) can be treated as locally linear in a neighborhood of ŵ. This approximation ignores the hessian and higher order shape of f(w, x) in parameter space. Alternatively, the model f(w, x) can be assumed to be locally quadratic in parameter space w and unbiased.

However, the extension of (13) as an approximate relation for nonlinear models breaks down if any of the following situations hold:

- The SSE error function is not used. (For example, one may use a robust error measure (Huber 1981) or log likelihood error measure instead.)
- A regularization term is included in the objective function. (This introduces bias.)
- The locally linear approximation for f(w, x) is not good.
- The unbiasedness assumption for f(w, x) is incorrect.

5 The Expected Test Set Error for Nonlinear Models

For neural network models, which are typically nonparametric (thus biased) and highly nonlinear, a new relationship is needed to replace (13).
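The classical relation (13) is straightforward to check numerically for a linear model. The sketch below is our own illustration, not code from the paper: it fits ordinary least squares to noisy responses over a fixed design, measures the expected test error on fresh responses at the same inputs (matching the test-set definition above), and compares the test/train gap with 2σ²(p/n).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 5, 0.5
X = rng.normal(size=(n, p))            # fixed design: empirical input density (9)
w_true = rng.normal(size=p)
mu = X @ w_true                        # noise-free regression values

train_err, test_err = [], []
for _ in range(2000):                  # ensemble of paired training/test sets
    y = mu + sigma * rng.normal(size=n)       # training responses y^i
    z = mu + sigma * rng.normal(size=n)       # test responses z^i at the same x^i
    w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ w_hat
    train_err.append(np.mean((y - fitted) ** 2))
    test_err.append(np.mean((z - fitted) ** 2))

gap = np.mean(test_err) - np.mean(train_err)
predicted_gap = 2.0 * sigma ** 2 * p / n      # 2 * sigma^2 * (p/n), eq. (13)
```

With n = 50, p = 5 and σ = 0.5 the predicted gap is 0.05, and the simulated gap should agree to within Monte Carlo error.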
We have derived such a result, correct to second order, for nonlinear models:

    ⟨E_test(λ)⟩_{ξξ'} ≈ ⟨E_train(λ)⟩_ξ + 2 σ²_eff (P_eff(λ)/n)    (14)

This result differs from the classical result (13) by the appearance of P_eff(λ) (the effective number of parameters), σ²_eff (the effective noise variance in the response variable(s)), and a dependence on λ (the regularization or weight decay parameter).

A full derivation of (14) will be presented in a longer paper (Moody 1992). The result is obtained by considering the noise terms ε^i for both the training and test sets as perturbations to an idealized model fit to noise-free data. The perturbative expansion is computed out to second order in the ε^i subject to the constraint that the estimated weights ŵ minimize the perturbed objective function. Computing expectation values and comparing the expansions for expected test and training errors yields (14). It is important to re-emphasize that deriving (14) does not require knowing the precise form of the noise density Φ(ε). Only a knowledge of σ² is assumed.

The effective number of parameters P_eff(λ) usually differs from the true number of model parameters p and depends upon the amount of model bias, model nonlinearity, and on our prior model preferences (e.g. smoothness) as determined by the regularization parameter λ and the form of our regularizer. The precise form of P_eff(λ) is

    P_eff(λ) = tr G = (1/2) Σ_{iαβ} T_{iα} (U⁻¹)_{αβ} T_{βi} ,    (15)

where G is the generalized influence matrix which generalizes the standard influence or hat matrix of linear regression, T_{iα} is the n × p matrix of derivatives of the training error function

    T_{iα} = (∂/∂y^i)(∂/∂w^α) n E(w, ξ(n)) ,    (16)

and (U⁻¹)_{αβ} is the inverse of the hessian of the total objective function

    U_{αβ} = (∂/∂w^α)(∂/∂w^β) U(λ, w, ξ(n)) .    (17)

In the general case that σ²(x) varies with location in the input space x, the effective noise variance σ²_eff is a weighted average of the noise variances σ²(x^i).
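Given the matrices T of (16) and U of (17), equation (15) reduces to a few lines of linear algebra. A minimal sketch (our own illustration, with assumed names):

```python
import numpy as np

def p_eff(T, U):
    """Effective number of parameters (eq. 15):
    P_eff = tr G = (1/2) sum_{i,a,b} T_{ia} (U^{-1})_{ab} T_{bi},
    with T the n x p matrix of eq. (16) and U the p x p hessian of eq. (17)."""
    return 0.5 * np.trace(T @ np.linalg.solve(U, T.T))
```

As a sanity check: for a linear model f(w, x) = Xw trained on the SSE with λ = 0, T = −2X and U = 2XᵀX, so p_eff gives tr[X(XᵀX)⁻¹Xᵀ] = p, the true number of parameters, as the text states for the unbiased linear case.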
For the uniform signal-plus-noise model we have described above, σ²_eff = σ².

6 The Effects of Regularization

In the neural network community, the most commonly used regularizers are weight decay functions. The use of weight decay is motivated by the intuitive notion that it removes unnecessary weights from the model. An analysis of P_eff(λ) with weight decay (λ > 0) confirms this intuitive notion. Furthermore, whenever σ² > 0 and n < ∞, there exists some λ_optimal > 0 such that the expected test set error (12) is minimized. This is because weight decay methods yield models with lower model variance, even though they are biased. These effects will be discussed further in Moody (1992).

For models trained with squared error (SSE) and quadratic weight decay g(w^α) = (w^α)², the assumptions of unbiasedness or local linearizability lead to the following expression for P_eff(λ), which we call the linearized effective number of parameters P_lin(λ):^9

    P_lin(λ) = Σ_{α=1}^p κ^α / (κ^α + λ)    (18)

[Footnote 9: Strictly speaking, a model with quadratic weight decay is unbiased only if the "true" weights are 0.]

[Figure 1: The full P_eff(λ) (15) agrees with the implied P_imp(λ) (19) to within experimental error, whereas the linearized P_lin(λ) (18) does not (except for very large λ). The figure plots the implied, linearized, and full P-effective against the weight decay parameter λ. These results verify the significance of (14) and (15) for nonlinear models.]

Here, κ^α is the α-th eigenvalue of the p × p matrix K = TᵗT, with T as defined in (16). The form of P_eff(λ) can be computed easily for other weight decay functions, such as the Rumelhart form g(w^α) = (w^α)² / [(w₀)² + (w^α)²]. The basic result for all weight decay or regularization functions, however, is that P_eff(λ) is a decreasing function of λ, with P_eff(0) = p and P_eff(∞) = 0, as is evident in the special case (18).
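A sketch of the linearized form (18), together with the two weight decay penalties discussed in Section 2, is given below. This is illustrative code under the stated form of (18); the function and argument names are our own.

```python
import numpy as np

def ridge_penalty(w):
    # g(w_a) = w_a^2 : penalizes large weights
    return float(np.sum(np.asarray(w) ** 2))

def rumelhart_penalty(w, w0=1.0):
    # g(w_a) = w_a^2 / (w0^2 + w_a^2) : penalizes weights of
    # intermediate values near w0
    w = np.asarray(w)
    return float(np.sum(w ** 2 / (w0 ** 2 + w ** 2)))

def p_lin(T, lam):
    # P_lin(lambda) = sum_a kappa_a / (kappa_a + lambda)   (eq. 18)
    # kappa_a are the eigenvalues of K = T^t T, T as in (16).
    kappa = np.linalg.eigvalsh(T.T @ T)
    return float(np.sum(kappa / (kappa + lam)))
```

Note that p_lin reproduces the limiting behavior stated in the text: P_lin(0) = p and P_lin(λ) → 0 as λ → ∞.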
If the model is nonlinear and biased, then P_eff(0) generally differs from p.

7 Testing the Theory

To test the result (14) in a nonlinear context, we computed the full P_eff(λ) (15), the linearized P_lin(λ) (18), and the implied number of parameters P_imp(λ) (19) for a nonlinear test problem. The value of P_imp(λ) is obtained by computing the expected training and test errors for an ensemble of training sets of size n with known noise variance σ² and solving for P_eff(λ) in equation (14):

    P_imp(λ) = (n / 2σ²) [ ⟨Ê_test(λ)⟩ − ⟨Ê_train(λ)⟩ ]    (19)

The hats indicate Monte Carlo estimates based on computations using a finite ensemble (10 in our experiments) of training sets. The test problem was to fit training sets of size 50 generated as a sum of three sigmoids plus noise, with the noise sampled from the uniform density. The model architecture f(w, x) was also a sum of three sigmoids, and the weights w were estimated by minimizing (7) with quadratic weight decay. See figure 1.

8 GPE: An Estimate of Prediction Risk for Nonlinear Systems

A number of well established, closely related criteria for estimating the prediction risk for linear or linearizable models are available. These include Akaike's FPE (1970), Akaike's AIC (1973), Mallows' Cp (1973), and Barron's PSE (1984). (See also Akaike (1974) and Eubank (1988).) These estimates are all based on equation (13). The generalized prediction error (GPE) generalizes the classical estimators FPE, AIC, Cp, and PSE to the nonlinear setting by estimating (14) as follows:

    GPE(λ) = P̂_GPE = Ê_train(n) + 2 σ̂²_eff (P̂_eff(λ) / n)    (20)

The estimation process and the quality of the resulting GPE estimates will be described in greater detail elsewhere.

Acknowledgements

The author wishes to thank Andrew Barron and Joseph Chang for helpful conversations. This research was supported by AFOSR grant 89-0478 and ONR grant N00014-89-J-1228.

References

H. Akaike. (1970). Statistical predictor identification. Ann. Inst. Stat. Math., 22:203.

H. Akaike. (1973).
Information theory and an extension of the maximum likelihood principle. In 2nd Intl. Symp. on Information Theory, Akademia Kiado, Budapest, 267.

H. Akaike. (1974). A new look at the statistical model identification. IEEE Trans. Auto. Control, 19:716-723.

A. Barron. (1984). Predicted squared error: a criterion for automatic model selection. In Self-Organizing Methods in Modeling, S. Farlow, ed., Marcel Dekker, New York.

R. Eubank. (1988). Spline Smoothing and Nonparametric Regression. Marcel Dekker, New York.

P. J. Huber. (1981). Robust Statistics. Wiley, New York.

C. L. Mallows. (1973). Some comments on Cp. Technometrics, 15:661-675.

P. McCullagh and J. A. Nelder. (1983). Generalized Linear Models. Chapman and Hall, New York.

J. Moody. (1991). Note on Generalization, Regularization, and Architecture Selection in Nonlinear Learning Systems. In B. H. Juang, S. Y. Kung, and C. A. Kamm, editors, Neural Networks for Signal Processing, IEEE Press, Piscataway, NJ.

J. Moody. (1992). Long version of this paper, in preparation.

J. Moody and J. Utans. (1992). Principled architecture selection for neural networks: application to corporate bond rating prediction. In this volume.
1991
Principled Architecture Selection for Neural Networks: Application to Corporate Bond Rating Prediction

John Moody
Department of Computer Science
Yale University
P. O. Box 2158 Yale Station
New Haven, CT 06520

Joachim Utans
Department of Electrical Engineering
Yale University
P. O. Box 2157 Yale Station
New Haven, CT 06520

Abstract

The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. In this paper, we propose the prediction risk as a measure of the generalization ability of multi-layer perceptron networks and use it to select an optimal network architecture from a set of possible architectures. We also propose a heuristic search strategy to explore the space of possible architectures. The prediction risk is estimated from the available data; here we estimate the prediction risk by v-fold cross-validation and by asymptotic approximations of generalized cross-validation or Akaike's final prediction error. We apply the technique to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by the limited availability of the data and by the lack of a complete a priori model which could be used to impose a structure to the network architecture.

1 Generalization and Prediction Risk

The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. Consider a set of observations D = {(x_j, t_j); j = 1 ... N} that are assumed to be generated as

    t_j = μ(x_j) + ε_j    (1)

where μ(x) is an unknown function, the inputs x_j are drawn independently with an unknown stationary probability density function p(x), the ε_j are independent random variables with zero mean (⟨ε⟩ = 0) and variance σ²_ε, and the t_j are the observed target values.
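The data-generating model (1) can be sketched in a couple of lines. This is a minimal illustration of ours; the choice of μ and the noise level are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def make_observations(mu, xs, sigma_eps, rng):
    # t_j = mu(x_j) + eps_j   (eq. 1):
    # eps_j are i.i.d., zero mean, variance sigma_eps^2
    return mu(xs) + sigma_eps * rng.normal(size=len(xs))
```

For example, `make_observations(np.sin, xs, 0.1, rng)` produces noisy targets around the unknown regression function μ = sin.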
The learning or regression problem is to find an estimate μ̂_λ(x; D) of μ(x), given the data set D, from a class of predictors or models μ_λ(x) indexed by λ. In general, λ ∈ Λ = (S, A, W), where S ⊂ X denotes a chosen subset of the set of available input variables X, A is a selected architecture within a class of model architectures A, and W are the adjustable parameters (weights) of architecture A.

The prediction risk P(λ) is defined as the expected performance on future data and can be approximated by the expected performance on a finite test set:

    P(λ) ≈ (1/N) Σ_{j=1}^N (t'_j − μ̂_λ(x'_j))²    (2)

where (x'_j, t'_j) are new observations that were not used in constructing μ̂_λ(x). In what follows, we shall use P(λ) as a measure of the generalization ability of a model. See [4] and [6] for more detailed presentations.

2 Estimates of Prediction Risk

Since we cannot directly calculate the prediction risk P(λ), we have to estimate it from the available data D. The standard method based on test-set validation is not advisable when the data set is small. In this paper we consider such a case: the prediction of corporate bond ratings from a database of only 196 firms. Cross-validation (CV) is a sample re-use method for estimating prediction risk; it makes maximally efficient use of the available data. Other methods are the generalized cross-validation (GCV) and the final prediction error (FPE) criteria, which combine the average training squared error (ASE) with a measure of the model complexity. These will be discussed in the next sections.

2.1 Cross-Validation

Cross-validation is a method that makes minimal assumptions on the statistics of the data. The idea of cross-validation can be traced back to Mosteller and Tukey [7]. For reviews, see Stone [8, 9], Geisser [5] and Eubank [4].
Let μ̂_λ(j)(x) be a predictor trained using all observations except (x_j, t_j), such that μ̂_λ(j)(x) minimizes

    ASE_j = (1/(N−1)) Σ_{k≠j} (t_k − μ̂_λ(j)(x_k))²

Then, an estimator for the prediction risk P(λ) is the cross-validation average squared error

    CV(λ) = (1/N) Σ_{j=1}^N (t_j − μ̂_λ(j)(x_j))²    (3)

This form of CV(λ) is known as leave-one-out cross-validation. However, CV(λ) in (3) is expensive to compute for neural network models; it involves constructing N networks, each trained with N − 1 patterns. For the work described in this paper we therefore use a variation of the method, v-fold cross-validation, that was introduced by Geisser [5] and Wahba et al. [12]. Instead of leaving out only one observation for the computation of the sum in (3), we delete larger subsets of D.

Let the data D be divided into v randomly selected disjoint subsets P_j of roughly equal size: ∪_{j=1}^v P_j = D and ∀ i ≠ j, P_i ∩ P_j = ∅. Let N_j denote the number of observations in subset P_j. Let μ̂_λ(P_j)(x) be an estimator trained on all data except for (x, t) ∈ P_j. Then, the cross-validation average squared error for subset j is defined as

    CV_{P_j}(λ) = (1/N_j) Σ_{(x_k, t_k) ∈ P_j} (t_k − μ̂_λ(P_j)(x_k))² ,

and

    CV_P(λ) = (1/v) Σ_j CV_{P_j}(λ) .    (4)

Typical choices for v are 5 and 10. Note that leave-one-out CV is obtained in the limit v = N.

2.2 Generalized Cross-Validation and Final Prediction Error

For linear models, two useful criteria for selecting a model architecture are generalized cross-validation (GCV) (Wahba [11]) and Akaike's final prediction error (FPE) [1]:

    GCV(λ) = ASE(λ) · 1 / (1 − S(λ)/N)²
    FPE(λ) = ASE(λ) · (1 + S(λ)/N) / (1 − S(λ)/N)

S(λ) denotes the number of weights of model λ. See [4] for a tutorial treatment.
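The estimators of this section can be sketched compactly. This is illustrative code of ours, not from the paper: `train_fn` stands for whatever fitting procedure is used, and the toy no-intercept linear fit in the usage note is our own assumption.

```python
import numpy as np

def v_fold_cv(x, t, train_fn, v=5, seed=0):
    """v-fold cross-validation estimate of the prediction risk (eq. 4).
    train_fn(x_train, t_train) must return a predictor f with f(x) -> t_hat."""
    n = len(t)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, v)              # disjoint subsets P_j
    cv_per_fold = []
    for held_out in folds:
        keep = np.setdiff1d(idx, held_out)
        f = train_fn(x[keep], t[keep])          # train without subset P_j
        cv_per_fold.append(np.mean((t[held_out] - f(x[held_out])) ** 2))
    return float(np.mean(cv_per_fold))          # CV_P(lambda)

def gcv(ase, s, n):
    # GCV(lambda) = ASE(lambda) / (1 - S(lambda)/N)^2
    return ase / (1.0 - s / n) ** 2

def fpe(ase, s, n):
    # FPE(lambda) = ASE(lambda) * (1 + S(lambda)/N) / (1 - S(lambda)/N)
    return ase * (1.0 + s / n) / (1.0 - s / n)
```

For example, with a simple one-parameter fit `fit_linear` that returns the least-squares slope predictor, `v_fold_cv(x, t, fit_linear, v=5)` returns an estimate close to the noise variance when the model class contains the true function. For large N, gcv and fpe agree to O((S/N)²).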
Note that although they are slightly different for small sample sizes, they are asymptotically equivalent for large N:

    P̂(λ) ≡ ASE(λ) (1 + 2 S(λ)/N) ≈ GCV(λ) ≈ FPE(λ)    (5)

We shall use this asymptotic estimate for the prediction risk in our analysis of the bond rating models. It has been shown by Moody [6] that FPE, and therefore P̂(λ), is an unbiased estimate of the prediction risk for the neural network models considered here provided that (1) the noise ε_j in the observed targets t_j is independent and identically distributed, (2) weight decay is not used, and (3) the resulting model is unbiased. (In practice, however, essentially all neural network fits to data will be biased (see Moody [6]).) FPE is a special case of Barron's PSE [2] and Moody's GPE [6]. Although FPE and P̂(λ) are unbiased only under the above assumptions, they are much cheaper to compute than CV_P, since no retraining is required.

3 A Case Study: Prediction of Corporate Bond Ratings

A bond is a debt security which constitutes a promise by the issuing firm to pay a given rate of interest on the original issue price and to redeem the bond at face value at maturity. Bonds are rated according to the default risk of the issuing firm by independent rating agencies such as Standard & Poors (S&P) and Moody's Investor Service. The firm is in default if it is not able to make the promised interest payments.

[Table 1 (Representation of S&P Bond Ratings): Key to S&P bond ratings. We only used the range from 'AAA' or 'very low default risk' to 'CCC' meaning 'very high default risk'. (Note that AAA- is not a standard category; its inclusion was suggested to us by a Wall Street analyst.) Bonds with rating BBB- or better are "investment grade" while "junk bonds" have ratings BB+ or below. For our output representation, we assigned an integer number to each rating as shown.]
S&P and Moody's determine the rating from various financial variables and possibly other information, but the exact set of variables is unknown. It is commonly believed that the rating is at least to some degree judged on the basis of subjective factors and on variables not directly related to a particular firm. In addition, the method used for assigning the rating based on the input variables is unknown. The problem we are considering here is to predict the S&P rating of a bond based on fundamental financial information about the issuer which is publicly available. Since the rating agencies update their bond ratings infrequently, there is considerable value to being able to anticipate rating changes before they are announced. A predictive model which maps fundamental financial factors onto an estimated rating can accomplish this.

The input data for our model consists of 10 financial ratios reflecting the fundamental characteristics of the firms. The database was prepared for us by analysts at a major financial institution. Since we did not attempt to include all information in the input variables that could possibly be related to a firm's bond rating (e.g. all fundamental or technical financial factors, or qualitative information such as quality of management), we can only attempt to approximate the S&P rating.

3.1 A Linear Bond Rating Predictor

For comparison with the neural network models, we computed a standard linear regression model. All input variables were used to predict the rating, which is represented by a number in [0, 1]. The rating varies continuously from one category to the next higher or next lower one, and this "smoothness" is captured in the single output representation and should make the task easier. To interpret the network
response, the output was rescaled from [0, 1] to [2, 19] and rounded to the nearest integer; 19 corresponds to a rating of 'AAA' and 2 to 'CCC' and below (see Table 1). The input variables were normalized to the interval [0, 1] since the original financial ratios differed widely in magnitude. The model predicted the rating of 21.4% of the firms correctly, for 37.2% the error was one notch, and for 21.9% two notches (thus predicting 80.5% of the data within two notches from the correct target). The RMS training error was 1.93 and the estimate of the prediction risk was P̂ = 2.038.

[Figure 1: Cross-validation error CV_P(λ) and P̂(λ) versus number of hidden units.]

3.2 Beyond Linear Regression: Prediction by Two Layer Perceptrons

The class of models we are considering as predictors are two-layer perceptron networks with I_λ input variables, H_λ internal units, and a single output unit, having the form

    μ̂_λ(x) = f( v₀ + Σ_{α=1}^{H_λ} v_α g( w_{α0} + Σ_{β=1}^{I_λ} w_{αβ} x_β ) ) .    (6)

The hidden units have a sigmoidal transfer function, while our single output unit uses a piecewise linear function.

3.3 Heuristic Search over the Space of Perceptron Architectures

Our proposed heuristic search algorithm over the space of perceptron architectures is as follows. First, we select the optimal number of internal units from a sequence of fully connected networks with an increasing number of hidden units. Then, using the optimal fully connected network, we prune weights and input variables in parallel, resulting in two separately pruned networks. Lastly, the methods are combined and the resulting network is retrained to yield the final model.

3.3.1 Selecting the Number of Hidden Units

We initially trained fully connected networks with all 10 available input variables but with the number of hidden units H_λ varying from 2 to 11.
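The two-layer perceptron of equation (6) above corresponds to the following forward pass. This is a sketch of ours: we substitute the identity for the paper's piecewise linear output unit, so the `out` argument is an assumption made for simplicity.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def two_layer_net(x, v0, v, w0, W, out=lambda u: u):
    """mu_hat(x) = f( v0 + sum_a v_a g( w_a0 + sum_b w_ab x_b ) )   (eq. 6)
    g: sigmoidal hidden units; out: output unit (identity here).
    W has shape (H, I); w0 has shape (H,); v has shape (H,)."""
    hidden = sigmoid(w0 + W @ x)       # H hidden-unit activations
    return out(v0 + v @ hidden)
```

With one hidden unit whose net input is zero, the hidden activation is g(0) = 0.5, so the output is v₀ + 0.5 v₁.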
Five-fold cross-validation and P̂(λ) were used to select the number of hidden units. We computed CV_P(λ) according to equation (4); the data set was partitioned into v = 5 subsets. We also computed P̂(λ) according to equation (5). The results of the two methods are consistent, having a common minimum for H_λ = 3 internal units (see figure 1). Table 2 (left) shows the results for the network with H_λ = 3 trained on the entire data set.

    Training Error, 3 Hidden Units
    |error| (notches)   firms      %     cum. %
    0                    67      34.2     34.2
    1                    84      42.9     77.1
    2                    34      17.3     94.4
    >2                   11       5.6    100.0
    number of weights: 37;  standard deviation: 1.206;
    mean absolute deviation: 0.898;  training error: 1.320

    Cross Validation Error, 3 Hidden Units
    |error| (notches)   firms      %     cum. %
    0                    54      28.6     28.6
    1                    77      38.8     67.3
    2                    35      17.3     84.7
    >2                   30      15.3    100.0
    number of weights: 37;  standard deviation: 1.630;
    mean absolute deviation: 1.148;  cross validation error: 1.807

Table 2: Results for the network with 3 hidden units. The standard deviation and the mean absolute deviation are computed after rescaling the output of the network to [2, 19] and rounding to the nearest integer (notches). The RMS training error is computed using the rescaled output of the network before rounding. The table also describes the predictive ability of the network by a histogram; the error column gives the number of rating categories the network was off from the correct target. The network with 3 hidden units significantly outperformed the linear regression model. On the right, cross-validation results for the network with 3 hidden units are shown. In order to predict the rating for a firm, we choose among the networks trained for the cross-validation procedure the one that was not trained using the subset the firm belongs to. Thus the results concerning the predictive ability of the model reflect the expected performance of the model trained on all the data with new data, in the cross-validation sense.
A more accurate description of the performance of the model is shown in Table 2 (right), where the predictive ability is calculated from the hold-out sets of the cross-validation procedure.

3.3.2 Pruning of Input Variables via Sensitivity Analysis

Next, we attempted to further reduce the number of weights of the network by eliminating some of the input variables. To test which inputs are most significant for determining the network output, we perform a sensitivity analysis. We define the "sensitivity" of the network model to variable β as:

    S_β = (1/N) Σ_{j=1}^N [ ASE(x̄_β) − ASE(x_β) ] ,  with  x̄_β = (1/N) Σ_{j=1}^N x_{βj} .

Here, x_{βj} is the βth input variable of the jth exemplar. S_β measures the effect on the training ASE of replacing the βth input x_β by its average x̄_β. Replacement of a variable by its average value removes its influence on the network output.

Again we use 5-fold cross-validation and P̂ to estimate the prediction risk P(λ). We constructed a sequence of models by deleting an increasing number of input variables in order of increasing S_β. For each model, CV_P and P̂ were computed; figure 2 shows the results. A minimum was attained for the model with I_λ = 8 input variables (2 inputs were removed). This reduces the number of weights by 2H_λ = 6.

[Figure 2: P̂(λ) for the sensitivity analysis (plotted against the number of inputs removed) and for "Optimal Brain Damage" (plotted against the number of weights removed). In both cases, the cross-validation error CV_P(λ) has a minimum for the same λ.]

3.3.3 Weight Pruning via "Optimal Brain Damage"

Optimal Brain Damage (OBD) was introduced by Le Cun et al. [3] as a method to reduce the number of weights in a neural network to avoid overfitting. OBD is designed to select those weights in the network whose removal will have a small effect on the training ASE.
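Both pruning criteria, the input sensitivity S_β of section 3.3.2 and the OBD saliency of section 3.3.3, can be sketched in a few lines. This is illustrative code of ours: `model` stands for any fitted predictor, and the diagonal hessian entries for OBD are assumed to be supplied by the training procedure.

```python
import numpy as np

def sensitivity(model, X, t):
    """S_beta: change in training ASE when input beta is replaced by its
    mean over the training set (section 3.3.2). model(X) -> predictions;
    X has one exemplar per row, one input variable per column."""
    ase = np.mean((t - model(X)) ** 2)
    S = []
    for b in range(X.shape[1]):
        Xb = X.copy()
        Xb[:, b] = X[:, b].mean()              # x_beta -> xbar_beta
        S.append(np.mean((t - model(Xb)) ** 2) - ase)
    return np.array(S)

def obd_saliency(weights, hessian_diag):
    """OBD saliency s_i = 0.5 * (d^2 ASE / d w_i^2) * w_i^2 (section 3.3.3),
    in the diagonal approximation; weights with the smallest saliency
    are candidates for removal."""
    return 0.5 * np.asarray(hessian_diag) * np.asarray(weights) ** 2
```

For input pruning one deletes variables in order of increasing S_β; for weight pruning one deletes weights in order of increasing saliency and retrains.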
Assuming that the original network was too large, removing these weights and retraining the now smaller network should improve the generalization performance. The method approximates the ASE at a minimum in weight space by a diagonal quadratic expansion. The saliency

    s_i = (1/2) (∂²ASE/∂w_i²) w_i² ,

computed after training has stopped, is a measure (in the diagonal approximation) of the change in ASE when weight w_i is removed from the network. CV_P and P̂ were computed to select the optimal model. We find that CV_P and P̂ are minimized when 9 weights are deleted from the network using all input variables. However, some overlap exists when compared to the sensitivity analysis described above: 5 of the deleted weights would also have been removed by the sensitivity method.

Table 3 shows the overall performance of our model when the two techniques were combined to yield the final architecture. This architecture is obtained by deleting the union of the sets of weights that were deleted using weight and input pruning separately. Note the improvement in estimated prediction performance (CV error) in Table 3 relative to Table 2.

4 Summary

Our example shows that (1) nonlinear network models can out-perform linear regression models, and (2) substantial benefits in performance can be obtained by the use of principled architecture selection methods. The resulting structured networks
    Training Error, 3 Hidden Units,
    2 Inputs and 9 Connections Removed
    |error| (notches)   firms      %     cum. %
    0                    69      35.2     35.2
    1                    81      41.3     76.5
    2                    32      16.3     92.8
    >2                   14       7.2    100.0
    number of weights: 27;  standard deviation: 1.208;
    mean absolute deviation: 0.882;  training error: 1.356

    Cross Validation Error, 3 Hidden Units,
    2 Inputs and 9 Connections Removed
    |error| (notches)   firms      %     cum. %
    0                    58      29.6     29.6
    1                    76      38.8     68.4
    2                    37      18.9     87.2
    >2                   26      12.8    100.0
    number of weights: 27;  standard deviation: 1.546;
    mean absolute deviation: 1.117;  cross validation error: 1.697

Table 3: Results for the network with 3 hidden units with both sensitivity analysis and OBD applied. Note the improvement in CV error performance relative to Table 2.

are optimized with respect to the task at hand, even though it may not be possible to design them based on a priori knowledge. Estimates of the prediction risk offer a sound basis for assessing the performance of the model on new data and can be used as a tool for principled architecture selection. Cross-validation, GCV and FPE provide computationally feasible means of estimating the prediction risk. These estimates of prediction risk provide very effective criteria for selecting the number of internal units and performing sensitivity analysis and OBD.

References

[1] H. Akaike. Statistical predictor identification. Ann. Inst. Statist. Math., 22:203-217, 1970.

[2] A. Barron. Predicted squared error: a criterion for automatic model selection. In S. Farlow, editor, Self-Organizing Methods in Modeling. Marcel Dekker, New York, 1984.

[3] Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann Publishers, 1990.

[4] Randall L. Eubank. Spline Smoothing and Nonparametric Regression. Marcel Dekker, Inc., 1988.

[5] Seymour Geisser. The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), June 1975.

[6] John Moody. The effective number of parameters: an analysis of generalization and regularization in nonlinear learning systems. Short version in this volume, long version to appear, 1992.

[7] F. Mosteller and J. W. Tukey.
Data analysis, including statistics. In G. Lindzey and E. Aronson, editors, Handbook of Social Psychology, Vol. 2. Addison-Wesley, 1968 (first edition 1954).

[8] M. Stone. Cross-validatory choice and assessment of statistical predictions. J. Roy. Stat. Soc., B36, 1974.

[9] M. Stone. Cross-validation: A review. Math. Operationsforsch. Statist., Ser. Statistics, 9(1), 1978.

[10] Joachim Utans and John Moody. Selecting neural network architectures via the prediction risk: Application to corporate bond rating prediction. In Proceedings of the First International Conference on Artificial Intelligence Applications on Wall Street. IEEE Computer Society Press, Los Alamitos, CA, 1991.

[11] G. Wahba. Spline Models for Observational Data, volume 59 of Regional Conference Series in Applied Mathematics. SIAM Press, Philadelphia, 1990.

[12] G. Wahba and S. Wold. A completely automatic french curve: Fitting spline functions by cross-validation. Communications in Statistics, 4(1):1-17, 1975.
Dual Inhibitory Mechanisms for Definition of Receptive Field Characteristics in Cat Striate Cortex

A. B. Bonds
Dept. of Electrical Engineering
Vanderbilt University
Nashville, TN 37235

Abstract

In single cells of the cat striate cortex, lateral inhibition across orientation and/or spatial frequency is found to enhance pre-existing biases. A contrast-dependent but spatially non-selective inhibitory component is also found. Stimulation with ascending and descending contrasts reveals the latter as a response hysteresis that is sensitive, powerful and rapid, suggesting that it is active in day-to-day vision. Both forms of inhibition are not recurrent but are rather network properties. These findings suggest two fundamental inhibitory mechanisms: a global mechanism that limits dynamic range and creates spatial selectivity through thresholding and a local mechanism that specifically refines spatial filter properties. Analysis of burst patterns in spike trains demonstrates that these two mechanisms have unique physiological origins.

1 INFORMATION PROCESSING IN STRIATE CORTICAL CELLS

The most popular current model of single cells in the striate cortex casts them in terms of spatial and temporal filters. The input to visual cortical cells from lower visual areas, primarily the LGN, is fairly broadband (e.g., Soodak, Shapley & Kaplan, 1987; Maffei & Fiorentini, 1973). Cortical cells perform significant bandwidth restrictions on this information in at least three domains: orientation, spatial frequency and temporal frequency. The most interesting quality of these cells is therefore what they reject from the broadband input signal, rather than what they pass, since the mere passage of the signal adds no information. Visual cortical cells also show contrast-transfer, or amplitude-dependent, nonlinearities which are not seen at lower levels in the visual pathway.
The primary focus of our lab is the study of the cortical mechanisms that support both the band limitations and the nonlinearities that are imposed on the relatively unsullied signals incoming from the LGN. All of our work is done on the cat. 2 THE ROLE OF INHIBITION IN ORIENTATION SELECTIVITY Orientation selectivity is one of the most dramatic demonstrations of the filtering ability of cortical cells. Cells in the LGN are only mildly biased for stimulus orientation, but cells in cortex are completely unresponsive to orthogonal stimuli and have tuning bandwidths that average only about 40-50° (e.g., Rose & Blakemore, 1974). How this happens remains controversial, but there is general consensus that inhibition helps to define orientation selectivity, although the proposed schemes vary. The concept of cross-orientation inhibition suggests that the inhibition is itself orientation selective and tuned in a complementary way to the excitatory tuning of the cell, being smallest at the optimal orientation and greatest at the orthogonal orientation. More recent results, including those from our own lab, suggest that this is not the case. We studied the orientation dependence of inhibition by presenting two superimposed gratings: a base grating at the optimal orientation to provide a steady level of background response activity, and a mask grating of varying orientation which yielded either excitation or inhibition that could supplement or suppress the base-generated response. The response components are confounded when both base and mask generate excitation. In order to separate the response components from each of these stimuli, the two gratings were drifted at differing temporal frequencies. At least in simple cells, the individual contributions to the response from each grating could then be resolved by performing Fourier analysis on the response histograms. Experiments were done on 52 cells, of which about 2/3 showed organized suppression from the mask grating (Bonds, 1989). Fig.
1 shows that while the mask-generated response suppression is somewhat orientation selective, it is by and large much flatter than would be required to account for the tuning of the cell. There is thus some orientation dependence of inhibition, but not specifically at the orthogonal orientation as might be expected. Instead, the predominant component of the suppression is constant with mask orientation, or global. This suggests that virtually any stimulus can result in inhibition, whether or not the recorded cell actually "sees" it. Whatever orientation-dependent component of inhibition does appear is expressed in suppressive side-bands near the limits of the excitatory tuning function, which have the effect of enhancing any pre-existing orientation bias. Thus the concept of cross-orientation inhibition is not particularly correct, since the inhibition is found not just at the "cross" orientation but rather at all orientations. Even without orientation-selective inhibition, a scheme for establishment of true orientation selectivity from orientation-biased LGN input can be derived by assuming that the nonselective inhibition is graded and contrast-dependent and that it acts as a thresholding device (Bonds, 1989). Figure 1: Response suppression by mask gratings of varying orientation. A. Impact of masks of 2 different contrasts on the 2 Hz (base-generated) response, expressed as the decrease (negative imp/sec) from the response level arising from the base stimulus alone. B. Similar example for mask orientations spanning a full 360°.
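The frequency-tagging analysis described above (base and mask gratings drifted at different temporal frequencies, their individual response components recovered by Fourier analysis of the response histogram) might be sketched as follows. The function name and the 2 Hz / 3 Hz example frequencies are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def component_amplitudes(histogram, bin_width_s, freqs_hz):
    """Single-sided amplitude of each requested temporal-frequency
    component of a response histogram (firing rate per time bin)."""
    rate = np.asarray(histogram, dtype=float)
    n = len(rate)
    t = np.arange(n) * bin_width_s
    amps = {}
    for f in freqs_hz:
        # Complex Fourier coefficient at frequency f over the record
        coeff = np.sum(rate * np.exp(-2j * np.pi * f * t)) / n
        amps[f] = 2.0 * np.abs(coeff)
    return amps
```

With the base drifting at, say, 2 Hz and the mask at 3 Hz, the amplitude at each tagged frequency estimates that grating's contribution to the simple cell's response.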
3 THE ROLE OF INHIBITION IN SPATIAL FREQUENCY SELECTIVITY While most retinal and LGN cells are broadly tuned and predominantly low-pass, cortical cells generally have spatial frequency bandpasses of about 1.5-2 octaves (e.g., Maffei & Fiorentini, 1973). We have examined the influence of inhibition on spatial frequency selectivity using the same strategy as the previous experiment (Bauman & Bonds, 1991). A base grating, at the optimal orientation and spatial frequency, drove the cell, and a superimposed mask grating, at the optimal orientation but at different spatial and temporal frequencies, provided response facilitation or suppression. We defined three broad categories of spatial frequency tuning functions: low pass, with no discernible low-frequency fall-off; band pass, with a peak between 0.4 and 0.9 c/deg; and high pass, with a peak above 1 c/deg. About 75% of the cells showed response suppression organized with respect to the spatial frequency of the mask gratings. For example, Fig. 2A shows a low-pass cell with high-frequency suppression and Fig. 2B shows a band-pass cell with mixed suppression, flanking the tuning curve at both low and high frequencies. In each case response suppression was graded with mask contrast, and some suppression was found even at the optimal spatial frequency. Some cells showed no suppression, indicating that the suppression was not merely a stimulus artifact. In all but 2 of 42 cases, the suppression was appropriate to the enhancement of the tuning function (e.g., low-pass cells had high-frequency response suppression), suggesting that the design of the system is more than coincidental. No similar spatial-frequency-dependent suppression was found in LGN cells. Figure 2: Examples of spatial frequency-dependent response suppression. Upper broken lines show excitatory tuning functions and solid lines below zero indicate response reduction at three different contrasts. A. Low-pass cell with high-frequency inhibition. B. Band-pass cell with mixed (low and high frequency) inhibition. Note suppression at the optimal spatial frequency in both cases. 4 NON-STATIONARITY OF CONTRAST TRANSFER PROPERTIES The two experiments described above demonstrate the existence of intrinsic cortical mechanisms that refine the spatial filter properties of the cells. They also reveal a global form of inhibition that is spatially non-specific. Since it is found even with spatially optimal stimuli, it can influence the form of the cortical contrast-response function (usually measured with optimal stimuli). This function is essentially logarithmic, with saturation or even super-saturation at higher contrasts (e.g., Albrecht & Hamilton, 1982), as opposed to the more linear response behavior seen in cells earlier in the visual pathway. Cortical cells also show some degree of contrast adaptation; when exposed to high mean contrasts for long periods of time, the response vs contrast curves move rightward (e.g., Ohzawa, Sclar & Freeman, 1985). We addressed the question of whether contrast-response nonlinearity and adaptation might be causally related. In order to compensate for "intrinsic response variability" in visual cortical cells, experimental stimulation has historically involved presentation of randomized sequences of pattern parameters, the so-called multiple histogram technique (Henry, Bishop, Tupper & Dreher, 1973).
Scrambling presentation order distributes time-dependent response variability across all stimulus conditions, but this procedure can be self-defeating by masking any stimulus-dependent response variation. We therefore presented cortical cells with ordered sequences of contrasts, first ascending then descending in a stepwise manner (Bonds, 1991). This revealed a clear and powerful response hysteresis. Fig. 3A shows a solid line representing the contrast-response function measured in the usual way, with randomized parameter presentation, overlaid on an envelope outlining responses to sequentially increasing or decreasing 3-sec contrast epochs; one sequential presentation set required 54 secs. Across 36 cells measured in this same way, the average response hysteresis corresponded to 0.36 log units of contrast. Some hysteresis was found in every cortical cell and in no LGN cells, so this phenomenon is intrinsically cortical. Figure 3: Dynamic response hysteresis. A. A response function measured in the usual way, with randomized stimulus sequences (filled circles), is overlaid on the function resulting from stimulation with sequential ascending (upper level) and descending (lower level) contrasts. Each contrast was presented for 3 seconds. B. Hysteresis resulting from a peak contrast of 14%; 3 secs per datum. Hysteresis demonstrates a clear dependence of response amplitude on the history of stimulation: at a given contrast, the amplitude is always less if a higher contrast was shown first. This is one manifestation of cortical contrast adaptation, which is well-known. However, adaptation is usually measured after long periods of stimulation with high contrasts, and may not be relevant to normal behavioral vision. Fig.
3B shows hysteresis at a modest response level and low peak contrast (14%), suggesting that it can serve a major function in day-to-day visual processing. The speed of hysteresis also addresses this issue, but it is not so easily measured. Some response histogram waveforms show consistent amplitude loss over a few seconds of stimulation (see also Albrecht, Farrar & Hamilton, 1984), but other histograms can be flat or even show a slight rise over time despite clear contrast adaptation (Bonds, 1991). This suggests the possibility that, in the classical pattern of any well-designed automatic gain control, gain reduction takes place quite rapidly, but its effects linger for some time. The speed of reaction of the gain change is illustrated in the experiment of Fig. 4. A "pedestal" grating of 14% contrast is introduced. After 500 msec, a contrast increment of 14% is added to the pedestal for a variable length of time. The response during the first and last 500 msec of the pedestal presentation is counted and the ratio is taken. In the absence of the increment, this ratio is about 0.8, reflecting the adaptive nature of the pedestal itself. For an increment of even 50 msec duration, this ratio is reduced, and it is reduced monotonically, by up to half the control level, for increments lasting less than a second. The gain control mechanism is thus both sensitive and rapid. Figure 4: Speed of gain reduction. The ratio of spikes generated during the last and first 500 msec of a 2 sec pedestal presentation (Norm. Ampl. = spikes(t2)/spikes(t1)) can be modified by a brief contrast increment (see text). 5 PHYSIOLOGICAL INDEPENDENCE OF TWO INHIBITORY MECHANISMS The experimental observations presented above support two basic phenomena: spatially-dependent and spatially-independent inhibition.
The question remains whether these two types of inhibition are fundamentally different, or if they stem from the same physiological mechanisms. This question can be addressed by examining the structure of a serial spike train generated by a cortical cell. In general, rather than being distributed continuously, cortical spikes are grouped into discrete packets, or bursts, with some intervening isolated spikes. The burst structure can be fundamentally characterized by two parameters: the burst frequency (bursts per second, or BPS) and the burst duration (spikes per burst, or SPB). We have analyzed cortical spike trains for these properties by using an adaptive algorithm to define burst groupings; as a rule of thumb, spike intervals of 8 msec or less were considered to belong to bursts. Both burst frequency (BPS) and structure (SPB) depend strongly on mean firing rate, but once firing rate is corrected for, two basic patterns emerge. Consider two experiments, both yielding firing rate variation over a similar range. In one experiment, firing rate is varied by varying stimulus contrast, while in the other, firing rate is varied by varying stimulus orientation. Burst frequency (BPS) depends only on spike rate, regardless of the type of experiment. In Fig. 5A, no systematic difference is seen between the experiments in which contrast (filled circles) and orientation (open squares) are varied. To quantify the difference between the curves, polynomials were fit to each and the quantity gamma, defined by the (shaded) area bounded by the two polynomials, was calculated; here, it equalled about 0.03. Figure 5: A. Comparison of burst frequency (bursts per second) as a function of firing rate resulting from presentations of varying contrast (filled circles) and varying orientation (open squares). B. Comparison of burst length (spikes per burst) under similar conditions. Note that at a given firing rate, burst length is always shorter for the experiment parametric on orientation. The shaded area (gamma) is a quantitative indicator of the difference between the two curves. Fig. 5B shows that at similar firing rates, burst length (SPB) is markedly shorter when firing rate is controlled by varying orientation (open squares) rather than contrast (filled circles). In this pair of curves, the gamma (of about 0.25) is nearly ten times that found in the upper curve. This is a clear violation of univariance, since at a given spike rate (output level) the structure of the spike train differs depending on the type of stimulation. Analysis of cortical response merely on the basis of overall firing rate thus does not give the signalling mechanisms the respect they are properly due. This result also implies that the strength of signalling between nerve cells can dynamically vary independent of firing rate. Because of post-synaptic temporal integration, bursts of spikes with short interspike intervals will be much more effective in generating depolarization than spikes at longer intervals. Thus, at a given average firing rate, a cell that generates longer bursts will have more influence on a target cell than a cell that distributes its spikes in shorter bursts, all other factors being equal. This phenomenon was consistent across a population of 59 cells.
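The burst grouping described above reduces to a simple interval rule. A minimal sketch follows, with two assumptions flagged: a fixed 8 ms criterion stands in for the paper's adaptive algorithm, and isolated spikes are not counted as bursts (a convention the text does not specify):

```python
def burst_stats(spike_times_s, max_isi_s=0.008, duration_s=1.0):
    """Group spikes into bursts (successive inter-spike intervals of
    max_isi_s or less) and return (bursts per second, mean spikes per
    burst).  Counting only groups of >= 2 spikes as bursts is one
    possible convention; the paper's adaptive algorithm may differ."""
    if not spike_times_s:
        return 0.0, 0.0
    groups, current = [], [spike_times_s[0]]
    for prev, t in zip(spike_times_s, spike_times_s[1:]):
        if t - prev <= max_isi_s:
            current.append(t)          # same burst: interval within criterion
        else:
            groups.append(current)     # gap too long: close the group
            current = [t]
    groups.append(current)
    bursts = [g for g in groups if len(g) >= 2]
    bps = len(bursts) / duration_s
    spb = sum(len(b) for b in bursts) / len(bursts) if bursts else 0.0
    return bps, spb
```

Applied to trains recorded at matched firing rates under contrast-varying and orientation-varying stimulation, the two returned statistics correspond to the BPS and SPB measures compared in Fig. 5.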
Gamma, which reflects the degree of difference between curves measured by variation of contrast and by variation of orientation, averaged zero for curves based on the number of bursts (BPS). For both simple and complex cells, gamma for burst duration (SPB) averaged 0.15. At face value, these results simply mean that when lower spike rates are achieved by use of non-optimal orientations, they result from shorter bursts than when lower spike rates result from reduction of contrast (with the spatial configuration remaining optimal). This means that non-optimal orientations and, from some preliminary results, non-optimal spatial frequencies, result in inhibition that acts specifically to shorten bursts, whereas contrast manipulations for the most part act to modulate both the number and length of bursts. These results suggest strongly that there are at least two distinct forms of cortical inhibition, with unique physiological bases differentiated by the burst organization in cortical spike trains. Recent results from our laboratory (Bonds, Unpub. Obs.) confirm that burst length modulation, which seems to reflect inhibition that depends on the spatial characteristics of the stimulus, is strongly mediated by GABA. Microiontophoretic injection of GABA shortens burst length and injection of bicuculline, a GABA blocker, lengthens bursts. This is wholly consistent with the hypothesis that GABA is central to the definition of the spatial qualities of the cortical receptive field, and suggests that one can indirectly observe GABA-mediated inhibition by spike train analysis. Acknowledgements This work was done in collaboration with Ed DeBruyn, Lisa Bauman and Brian DeBusk. Supported by NIH (R01-EY03778-09). References D. G. Albrecht & D. B. Hamilton. (1982) Striate cortex of monkey and cat: contrast response functions. Journal of Neurophysiology 48, 217-237. D. G. Albrecht, S. B. Farrar & D. B. Hamilton.
(1984) Spatial contrast adaptation characteristics of neurones recorded in the cat's visual cortex. Journal of Physiology 347, 713-739. A. B. Bonds. (1989) The role of inhibition in the specification of orientation selectivity of cells of the cat striate cortex. Visual Neuroscience 2, 41-55. A. B. Bonds. (1991) Temporal dynamics of contrast gain control in single cells of the cat striate cortex. Visual Neuroscience 6, 239-255. L. A. Bauman & A. B. Bonds. (1991) Inhibitory refinement of spatial frequency selectivity in single cells of the cat striate cortex. Vision Research 31, 933-944. G. Henry, P. O. Bishop, R. M. Tupper & B. Dreher. (1973) Orientation specificity of cells in cat striate cortex. Vision Research 13, 1771-1779. L. Maffei & A. Fiorentini. (1973) The visual cortex as a spatial frequency analyzer. Vision Research 13, 1255-1267. I. Ohzawa, G. Sclar & R. D. Freeman. (1985) Contrast gain control in the cat's visual system. Journal of Neurophysiology 54, 651-667. D. Rose & C. B. Blakemore. (1974) An analysis of orientation selectivity in the cat's visual cortex. Experimental Brain Research 20, 1-17. R. E. Soodak, R. M. Shapley & E. Kaplan. (1987) Linear mechanism of orientation tuning in the retina and lateral geniculate of the cat. Journal of Neurophysiology 58, 267-275.
Statistical Reliability of a Blowfly Movement-Sensitive Neuron Rob de Ruyter van Steveninck* Biophysics Group, Rijksuniversiteit Groningen, Groningen, The Netherlands William Bialek NEC Research Institute 4 Independence Way, Princeton, NJ 08540 (*Present address: University Hospital Groningen, Dept. of Audiology, POB 30.001, NL 9700RB Groningen, The Netherlands) Abstract We develop a model-independent method for characterizing the reliability of neural responses to brief stimuli. This approach allows us to measure the discriminability of similar stimuli, based on the real-time response of a single neuron. Neurophysiological data were obtained from a movement-sensitive neuron (H1) in the visual system of the blowfly Calliphora erythrocephala. Furthermore, recordings were made from blowfly photoreceptor cells to quantify the signal-to-noise ratios in the peripheral visual system. As photoreceptors form the input to the visual system, the reliability of their signals ultimately determines the reliability of any visual discrimination task. For the case of movement detection, this limit can be computed and compared to the H1 neuron's reliability. Under favorable conditions, the performance of the H1 neuron closely approaches the theoretical limit, which means that under these conditions the nervous system adds little noise in the process of computing movement from the correlations of signals in the photoreceptor array. 1 INTRODUCTION In the 1940s and 50s, several investigators realized that understanding the reliability of computation in the nervous system posed significant theoretical challenges. Attempts to perform reliable computations with the available electronic computers certainly posed serious practical problems, and the possibility that the problems of natural and artificial computing are related was explored.
Guided by the practical problems of electronic computing, von Neumann (1956) formulated the theoretical problem of "reliable computation with unreliable components". Many authors seem to take as self-evident the claim that this is a problem faced by the nervous system as well, and indeed the possibility that the brain may implement novel solutions to this problem has been at least a partial stimulus for much recent research. The qualitative picture adopted in this approach is of the nervous system as a highly interconnected network of rather noisy cells, in which meaningful signals are represented only by large numbers of neural firing events averaged over numerous redundant neurons. Neurophysiological experiments seem to support this view: if the same stimulus is presented repeatedly to a sensory system, the responses of an individual afferent neuron differ for each presentation. This apparently has led to a widespread belief that neurons are inherently noisy, and ideas of redundancy and averaging pervade much of the literature. Significant objections to this view have been raised, however (cf. Bullock 1970). As emphasized by Bullock (loc. cit.), the issue of the reliability of the nervous system is a quantitative one. Thus, the first problem that should be overcome is to find a way to measure it. This paper focuses on a restricted but basic question, namely the reliability of a single neuron, much in the spirit of previous work (cf. Barlow and Levick 1969, Levick et al. 1983, Tolhurst et al. 1983, Parker and Hawken 1985). Here the methods of analysis used by these authors are extended in an attempt to describe the neuron's reliability in a way that is as model-independent as possible. The second, conceptually more difficult, problem is summarized cogently in Bullock's words: "how reliable is reliable?".
Just quantifying reliability is not enough, and the qualitative question of whether redundancy, averaging, multiplexing, or yet more exotic solutions to von Neumann's problem are relevant to the operation of the nervous system hinges on a quantitative comparison of reliability at the level of single cells with the reliability of the whole system. Broadly speaking, there are two ways to make such a comparison: one can compare the performance of the single cell either with the output or with the input of the whole system. As to the first possibility, if a single cell responds to a certain stimulus as reliably as the animal does in a behavioral experiment, it is difficult to imagine why multiple redundant neurons should be used to encode the same stimulus. Alternatively, if the reliability of a single neuron were to approach the limits set by the sensory periphery, there seems to be little purpose for the nervous system to use functional duplicates of such a cell, and the key theoretical problem would be to understand how such optimal processing is implemented. Here we will use the latter approach. We first quantify the reliability of response of H1, a wide-field movement-sensitive neuron in the blowfly visual system. The method consists essentially of a direct application of signal detection theory to trains of neural impulses generated by brief stimuli, using methods familiar from psychophysics to quantify discriminability. Next we characterize signal transfer and noise in the sensory periphery (the photoreceptor cells of the compound eye) and we compare the reliability of information coded in H1 with the total amount of sensory information available at the input. 2 PREPARATION, STIMULATION AND RECORDING Experiments were performed on female wild-type blowflies Calliphora erythrocephala.
Spikes from H1 were recorded extracellularly with a tungsten microelectrode, their arrival times being digitized with 50 μs resolution. The fly watched a binary random-bar pattern (bar width 0.029° visual angle, total size (30.5°)²) displayed on a CRT. Movement steps of 16 different sizes (integer multiples of 0.12°) were generated by custom-built electronics and presented at 200 ms intervals in the neuron's preferred direction. The effective duration of the experiment was 11 hours, during which time about 10⁶ spikes were recorded over 12552 presentations of the 16-step stimulus sequence. Photoreceptor cells were recorded intracellularly while stimulated by a spatially homogeneous field, generated on the same CRT that was used for the H1 experiments. The CRT's intensity was modulated by a binary pseudo-random waveform, time sampled at 1 ms. The responses to 100 stimulus periods were averaged, and the cell's transfer function was obtained by computing the ratio of the Fourier transform of the averaged response to that of the stimulus signal. The cell's noise power spectrum was obtained by averaging the power spectra of the 100 traces of the individual responses with the average response subtracted. 3 DATA ANALYSIS 3.1 REPRESENTATION OF STIMULUS AND RESPONSE A single movement stimulus consisted of a sudden small displacement of a wide-field pattern. Steps of varying sizes were presented at regular time intervals, long enough to ensure that responses to successive stimuli were independent. In the analysis we consider the stimulus to be a point event in time, parametrized by its step size α. The neuron's signal is treated as a stochastic point process, the parameters of which depend on the stimulus. Its statistical behavior is described by the conditional probability P(r|α) of finding a response r, given that a step of size α was presented. From the experimental data we estimate P(r|α) for each step size separately.
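The photoreceptor transfer-function and noise-spectrum estimates of Section 2 can be sketched directly; a minimal version, assuming the individual response traces are stacked in a trials × samples array (names and shapes are illustrative):

```python
import numpy as np

def transfer_and_noise(stimulus, responses):
    """Transfer function: FFT of the trial-averaged response divided by
    the FFT of the stimulus.  Noise power spectrum: trial-averaged
    power of the individual responses with the average subtracted."""
    responses = np.asarray(responses, dtype=float)
    mean_resp = responses.mean(axis=0)
    transfer = np.fft.rfft(mean_resp) / np.fft.rfft(np.asarray(stimulus, dtype=float))
    residual_spectra = np.fft.rfft(responses - mean_resp, axis=1)
    noise_power = np.mean(np.abs(residual_spectra) ** 2, axis=0)
    return transfer, noise_power
```

Averaging over the 100 stimulus periods before the division suppresses the noise in the transfer estimate, while the residuals isolate the noise itself; this mirrors the two quantities described above, though the paper's exact normalization conventions are not specified.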
To represent a single response r, time is divided into discrete bins of width Δt = 2 ms. Then r is described by a firing pattern, which is just a vector q = [q₀, q₁, ...] of binary digits q_k (k = 0, ..., n−1), where q_k = 1 and q_k = 0 respectively signify the presence or the absence of a spike in time bin k (cf. Eckhorn and Popel 1974). No response is found within a latency time t_lat = 15 ms after stimulus presentation; spikes fired within this interval are due to spontaneous activity and are excluded from analysis, so k = 0 corresponds to 15 ms after stimulus presentation. The probability distribution of firing patterns, P(q|α), is estimated by counting the number of occurrences of each realization of q for a large number of presentations of α. This distribution is described by a tree which results from ordering all recorded firing patterns according to their binary representation, earlier times corresponding to more-significant bits. Graphical representations of two such trees are shown in Fig. 1. In constructing a tree we thus perform two operations on the raw spike data: first, individual response patterns are represented in discrete time bins Δt, and second, a permutation is performed on the set of discretized patterns to order them according to their binary representation. No additional assumptions are made about the way the signal is encoded by the neuron. This approach should therefore be quite powerful in revealing any subtle "hidden code" that the neuron might use. As the number of branches in the tree grows exponentially with the number of time bins n, many presentations are needed to describe the tree over a reasonable time interval, and here we use n = 13. 3.2 COMPUTATION OF DISCRIMINABILITY To quantify the performance of the neuron, we compute the discriminability of two nearly equal stimuli α₁ and α₂, based on the difference in neural response statistics described by P(r|α₁) and P(r|α₂).
The probability of correct decisions is maximized if one uses a maximum likelihood decision rule, so that in the case of equal prior probabilities the outcome is α₁ if P(r_obs|α₁) > P(r_obs|α₂), and vice versa. On average, the probability of correctly identifying step α₁ is then:

Pc(α₁) = Σ_{r} P(r|α₁) · H[P(r|α₁) − P(r|α₂)],   (1)

where H(·) is the Heaviside step function and the summation is over the set of all possible responses {r}. An interchange of indices 1 and 2 in this expression yields the formula for correct identification of α₂. The probability of making correct judgements over an entire experiment in which α₁ and α₂ are equiprobable is then simply Pc(α₁, α₂) = [Pc(α₁) + Pc(α₂)]/2, which from now on will be referred to as Pc. This analysis is essentially that for a "two-alternative forced-choice" psychophysical experiment. For convenience we convert Pc into the discriminability parameter d', familiar from psychophysics (Green and Swets 1966), which is the signal-to-noise ratio (difference in means divided by the standard deviation) in the equivalent equal-variance Gaussian decision problem. Using the firing-pattern representation, r = q, and computing d' for successive subvectors of q with elements m = 0, ..., k and k = 0, ..., n−1, we compute Pc for different values of k and from that obtain d'(k), the discriminability as a function of time. 3.3 THEORETICAL LIMITS TO DISCRIMINATION For the simple stimuli used here it is relatively easy to determine the theoretical limit to discrimination based on the photoreceptor signal quality. For the computation of this limit we use Reichardt's (1957) correlation model of movement detection. This model has been very successful in describing a wide variety of phenomena in biological movement detection, both in the fly (Reichardt and Poggio 1976) and in humans (van Santen and Sperling 1984).
Also, correlation-like operations can be proved to be optimal for the extraction of movement information at low signal-to-noise ratio (Bialek 1990). The measured signal transfer of the photoreceptors, combined with the known geometry of the stimulus and the optics of the visual system, determines the signal input to the model. The noise input is taken directly from the measured photoreceptor noise power spectrum. Details of this computation are given in de Ruyter van Steveninck (1986). Figure 1: Representation of the firing pattern distributions for steps of 0.24° and 0.36°. Here only 11 time bins are shown. 3.4 ERROR ANALYSIS AND DATA REQUIREMENTS The effects of the approximation due to time-discretization can be assessed by varying the binwidth. It turns out that the results do not change appreciably if the bins are made smaller than 2 ms. Furthermore, if the analysis is to make sense, stationarity is required, i.e. the probability distribution from which responses to a certain stimulus are drawn should be invariant over the course of the experiment. Finally, the distributions, being computed from a finite sample of responses, are subject to statistical error. The statistical error in the final result was estimated by partitioning the data and working out the values of Pc for these partitions separately. The statistical variations in Pc were of the order of 0.01 in the most interesting region of values of Pc, i.e. from 0.6 to 0.9. This results in a typical statistical error of 0.05 in d'. In addition, this analysis revealed no significant trends with time, so we may assume stationarity of the preparation.
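As a concrete sketch of the analysis pipeline of Sections 3.1-3.4, the code below bins spike times into binary firing patterns, estimates the pattern distributions by counting, evaluates Eq. (1) under a maximum-likelihood rule, converts the result to d', and estimates the statistical error by partitioning. All names are illustrative; splitting likelihood ties evenly, using the single-observation relation Pc = Φ(d'/2), and partitioning into contiguous blocks are assumptions not fixed by the paper:

```python
import statistics
from collections import Counter
from statistics import NormalDist

def firing_pattern(spike_times_ms, n_bins=13, latency_ms=15.0, bin_ms=2.0):
    """Binary vector q: q[k] = 1 iff a spike falls in time bin k, with
    2 ms bins starting 15 ms after stimulus presentation."""
    q = [0] * n_bins
    for t in spike_times_ms:
        k = int((t - latency_ms) // bin_ms)
        if 0 <= k < n_bins:
            q[k] = 1
    return tuple(q)

def pattern_distribution(trials, n_bins=13):
    """Estimate P(q | alpha) by counting patterns over many trials."""
    counts = Counter(firing_pattern(t, n_bins) for t in trials)
    total = sum(counts.values())
    return {q: c / total for q, c in counts.items()}

def percent_correct(p1, p2):
    """Eq. (1) plus its symmetric term: probability of a correct
    maximum-likelihood choice between two equiprobable stimuli.
    Likelihood ties are split evenly -- an assumed convention."""
    def one_way(pa, pb):
        total = 0.0
        for r, p in pa.items():
            q = pb.get(r, 0.0)
            if p > q:
                total += p
            elif p == q:
                total += 0.5 * p
        return total
    return 0.5 * (one_way(p1, p2) + one_way(p2, p1))

def d_prime(pc):
    """One plausible Pc -> d' conversion: the equal-variance Gaussian
    decision model with Pc = Phi(d'/2), i.e. d' = 2 * Phi^-1(Pc)."""
    return 2.0 * NormalDist().inv_cdf(pc)

def partition_error(trials1, trials2, n_parts=4):
    """Split both trial sets into contiguous blocks, compute Pc per
    block, and return the mean and spread across blocks."""
    pcs = []
    for i in range(n_parts):
        b1 = trials1[i * len(trials1) // n_parts:(i + 1) * len(trials1) // n_parts]
        b2 = trials2[i * len(trials2) // n_parts:(i + 1) * len(trials2) // n_parts]
        pcs.append(percent_correct(pattern_distribution(b1), pattern_distribution(b2)))
    return statistics.mean(pcs), statistics.stdev(pcs)
```

Restricting the patterns to their first k+1 bins before computing Pc yields the time-resolved d'(k) used in the Results section.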
4 RESULTS 4.1 STEP SIZE DISCRIMINATION BY THE H1 NEURON Although 16 different step sizes were used, we limit the presentation here to steps of 0.24° and 0.36°; binary trees representing the two firing-pattern distributions are shown in Fig. 1. The first time bin describes the probabilities of two possible events: either a spike was fired (black) or not (white), and these two probabilities add up to unity. The second time bin describes the four possible combinations of finding or not finding a spike in bin 2 combined with finding or not finding a spike in bin 1, and so on. The figure shows that the probability of firing a spike in time bin 1 is slightly higher for the larger step. From the above we compute Pc, the probability of correct identification, in a task where the choice is between step sizes of 0.24° and 0.36° with equal prior probabilities. The decision rule is simple: if a spike is fired in bin 1, choose the larger, otherwise choose the smaller step. In the same fashion we apply this procedure to the following time bin, with four response categories, and so on. The value of d' computed from Pc for this step size pair as a function of time is given by the fat line at the right in Fig. 2. Figure 2: Left: Discrimination performance of an ideal movement detector. See text for further details. Right: comparison of the theoretical and the measured values of d'(t). Fat line: measured performance of H1. Thin solid line: predicted performance, taken from the left figure. Dashed line: the same curve shifted by 5 ms to account for latency time in the pathway from photoreceptor to H1. This time interval was determined independently with powerful movement stimuli.
4.2 LIMITS SET BY PHOTORECEPTOR SIGNALS

Figure 2 (left) shows the limit to movement detection computed for an array of 2650 Reichardt correlators stimulated with a step size difference of 0.12°, conforming to the experimental conditions. Comparing the performance of H1 to this result (the fat and the dashed lines in Fig. 2, right), we see that the neuron follows the limit set by the sensory periphery from about 18 to 28 ms after stimulus presentation. So, for this time window the randomness of H1's response is determined primarily by photoreceptor noise. Up to about 20 Hz, the photoreceptor signal-to-noise ratio closely approached the limit set by the random arrival of photons at the photoreceptors, at a rate of about 10^4 effective conversions/s. Hence most of the randomness in the spike train was caused by photon shot noise.

5 DISCUSSION

The approach presented here gives us estimates for the reliability of a single neuron in a well-defined, though restricted, experimental context. In addition, the theoretical limits to the reliability of movement detection are computed. Comparing these two results we find that H1 in these conditions uses essentially all of the movement information available over a 10 ms time interval. Further analysis shows that this information is essentially contained in the time of firing of the first spike. The plateau in the measured d'(t) between 28 and 34 ms presumably results from effects of refractoriness, and the subsequent slight rise is due to firing of a second spike. Thus, a step size difference of 0.12° can be discriminated with d' close to unity, using the timing information of just one spike from one neuron. For the blowfly visual system this angular difference is of the order of one-tenth of the photoreceptor spacing, well within the hyperacuity regime (cf. Parker and Hawken 1985).
It should not be too surprising that the neuron performs well only over a short time interval and does not reach the values for d' computed from the model at large delays (Fig. 2, left): the experimental stimulus is not very natural, and in real-life conditions the fly is likely to see movement changing continuously. (Methods for analyzing responses to continuous movement are treated in de Ruyter van Steveninck and Bialek 1988, and in Bialek et al. 1991.) In such circumstances it might be better not to wait very long to get an accurate estimate of the stimulus at one point in time, but rather to update rough estimates as fast as possible. This would favor a coding principle where successive spikes code independent events, which may explain why the plateau in the measured d'(t) starts at about the point where the computed d'(t) has maximal slope. Such a view is supported by behavioral evidence: a chasing fly tracks the leading fly with a delay of about 30 ms (Land and Collett 1974), corresponding to the time at which the measured d'(t) levels off. In conclusion we can say that in the experiment, for a limited time window, the neuron effectively uses all information available at the sensory periphery. Peripheral noise is in turn determined by photon shot noise, so that the reliability of H1's output is set by the physics of its inputs. There is no neuro-anatomical or neurophysiological evidence for massive redundancy in arthropod nervous systems. More specifically, for the fly visual system, it is known that H1 is unique in its combination of visual field and preferred direction of movement (Hausen 1982), and from the results presented here we may begin to understand why: it just makes little sense to use functional duplicates of any neuron that performs almost perfectly when compared to the noise levels inherently present in the stimulus.
It remains to be seen to what extent this conclusion can be generalized, but one should at least be cautious in interpreting the variability of response of a single neuron in terms of noise generated by the nervous system itself.

References

Barlow HB, Levick WR (1969) Three factors limiting the reliable detection of light by retinal ganglion cells of the cat. J Physiol 200:1-24.
Bialek W (1990) Theoretical physics meets experimental neurobiology. In Jen E (ed.) 1989 Lectures in Complex Systems, SFI Studies in the Sciences of Complexity, Lect. Vol. II, pp. 513-595. Addison-Wesley, Menlo Park CA.
Bialek W, Rieke F, de Ruyter van Steveninck RR, Warland D (1991) Reading a neural code. Science 252:1854-1857.
Bullock TH (1970) The reliability of neurons. J Gen Physiol 55:565-584.
Eckhorn R, Popel B (1974) Rigorous and extended application of information theory to the afferent visual system of the cat. I. Basic concepts. Kybernetik 16:191-200.
Green DM, Swets JA (1966) Signal detection theory and psychophysics. Wiley, New York.
Hausen K (1982) Motion sensitive interneurons in the optomotor system of the fly. I. The horizontal cells: structure and signals. Biol Cybern 45:143-156.
Land MF, Collett TS (1974) Chasing behaviour of houseflies (Fannia canicularis). A description and analysis. J Comp Physiol 89:331-357.
Levick WR, Thibos LN, Cohn TE, Catanzaro D, Barlow HB (1983) Performance of cat retinal ganglion cells at low light levels. J Gen Physiol 82:405-426.
von Neumann J (1956) Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Shannon CE and McCarthy J (eds.) Automata Studies, Princeton University Press, Princeton NJ, 43-98.
Parker A, Hawken M (1985) Capabilities of monkey cortical cells in spatial-resolution tasks. J Opt Soc Am A2:1101-1114.
Reichardt W (1957) Autokorrelations-Auswertung als Funktionsprinzip des Zentralnervensystems. Z Naturf 12b:448-457.
Reichardt W, Poggio T (1976) Visual control of orientation behaviour in the fly. Part I: A quantitative analysis. Q Rev Biophys 9:311-375.
de Ruyter van Steveninck RR (1986) Real-time performance of a movement-sensitive neuron in the blowfly visual system. Thesis, Rijksuniversiteit Groningen, the Netherlands.
de Ruyter van Steveninck RR, Bialek W (1988) Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transfer in short spike sequences. Proc R Soc Lond B 234:379-414.
van Santen JPH, Sperling G (1984) Temporal covariance model of human motion perception. J Opt Soc Am A1:451-473.
Tolhurst DJ, Movshon JA, Dean AF (1983) The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res 23:775-785.
Networks with Learned Unit Response Functions

John Moody and Norman Yarvin
Yale Computer Science, 51 Prospect St.
P.O. Box 2158 Yale Station, New Haven, CT 06520-2158

Abstract

Feedforward networks composed of units which compute a sigmoidal function of a weighted sum of their inputs have been much investigated. We tested the approximation and estimation capabilities of networks using functions more complex than sigmoids. Three classes of functions were tested: polynomials, rational functions, and flexible Fourier series. Unlike sigmoids, these classes can fit non-monotonic functions. They were compared on three problems: prediction of Boston housing prices, the sunspot count, and robot arm inverse dynamics. The complex units attained clearly superior performance on the robot arm problem, which is a highly non-monotonic, pure approximation problem. On the noisy and only mildly nonlinear Boston housing and sunspot problems, differences among the complex units were revealed; polynomials did poorly, whereas rationals and flexible Fourier series were comparable to sigmoids.

1 Introduction

A commonly studied neural architecture is the feedforward network in which each unit of the network computes a nonlinear function g(x) of a weighted sum of its inputs x = w^T u. Generally this function is a sigmoid, such as g(x) = tanh x or g(x) = 1/(1 + e^(x-θ)). To these we compared units of a substantially different type: they also compute a nonlinear function of a weighted sum of their inputs, but the unit response function is able to fit a much higher degree of nonlinearity than can a sigmoid. The nonlinearities we considered were polynomials, rational functions (ratios of polynomials), and flexible Fourier series (sums of cosines). Our comparisons were done in the context of two-layer networks consisting of one hidden layer of complex units and an output layer of a single linear unit.
This network architecture is similar to that built by projection pursuit regression (PPR) [1, 2], another technique for function approximation. The one difference is that in PPR the nonlinear function of the units of the hidden layer is a nonparametric smooth. This nonparametric smooth has two disadvantages for neural modeling: it has many parameters, and, as a smooth, it is easily trained only if desired output values are available for that particular unit. The latter property makes the use of smooths in multilayer networks inconvenient. If a parametrized function of a type suitable for one-dimensional function approximation is used instead of the nonparametric smooth, then these disadvantages do not apply. The functions we used are all suitable for one-dimensional function approximation.

2 Representation

A few details of the representation of the unit response functions are worth noting.

Polynomials: Each polynomial unit computed the function

g(x) = a_1 x + a_2 x^2 + ... + a_n x^n

with x = w^T u being the weighted sum of the input. A zeroth order term was not included in the above formula, since it would have been redundant among all the units. The zeroth order term was dealt with separately and only stored in one location.

Rationals: A rational function representation was adopted which could not have zeros in the denominator. This representation used a sum of squares of polynomials, as follows:

g(x) = (a_0 + a_1 x + ... + a_n x^n) / (1 + (b_0 + b_1 x)^2 + (b_2 x + b_3 x^2)^2 + (b_4 x + b_5 x^2 + b_6 x^3 + b_7 x^4)^2 + ...)

This representation has the qualities that the denominator is never less than 1, and that n parameters are used to produce a denominator of degree n. If the above formula were continued, the next terms in the denominator would be of degrees eight, sixteen, and thirty-two.
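As an illustrative sketch (our own code, not the authors'; function and variable names are ours), the rational unit with the squared-polynomial denominator above can be evaluated like this, consuming the denominator coefficients in blocks whose degrees double:

```python
def rational_unit(x, num, den):
    """Evaluate g(x) = (a0 + a1 x + ... + an x^n)
        / (1 + (b0 + b1 x)^2 + (b2 x + b3 x^2)^2 + (b4 x + ... + b7 x^4)^2 + ...)

    num = [a0, ..., an]; den = [b0, b1, b2, ...] is consumed in blocks of
    size 2, 2, 4, 8, ..., so each squared block doubles the degree."""
    numerator = sum(a * x ** k for k, a in enumerate(num))
    denominator = 1.0
    i, block_no = 0, 0
    while i < len(den):
        width = 2 if block_no < 2 else 2 ** block_no
        start = 0 if block_no == 0 else 1   # only the first block has a constant term
        block = den[i:i + width]
        poly = sum(b * x ** (start + j) for j, b in enumerate(block))
        denominator += poly * poly          # squared, so the denominator stays >= 1
        i += width
        block_no += 1
    return numerator / denominator
```

With all b coefficients zero the unit reduces to the bare polynomial numerator, and since every denominator block is squared, the denominator can never fall below 1.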
This powers-of-two sequence was used for the following reason: of the 2(n - m) terms in the square of a polynomial p = a_m x^m + ... + a_n x^n, it is possible by manipulating a_m ... a_n to determine the n - m highest coefficients, with the exception that the very highest coefficient must be non-negative. Thus if we consider the coefficients of the polynomial that results from squaring and adding together the terms of the denominator of the above formula, the highest degree squared polynomial may be regarded as determining the highest half of the coefficients, the second highest degree polynomial may be regarded as determining the highest half of the rest of the coefficients, and so forth. This process cannot set all the coefficients arbitrarily; some must be non-negative.

Flexible Fourier series: The flexible Fourier series units computed

g(x) = Σ_{i=0}^{n} a_i cos(b_i x + c_i)

where the amplitudes a_i, frequencies b_i and phases c_i were unconstrained and could assume any value.

Sigmoids: We used the standard logistic function: g(x) = 1/(1 + e^(x-θ)).

3 Training Method

All the results presented here were trained with the Levenberg-Marquardt modification of the Gauss-Newton nonlinear least squares algorithm. Stochastic gradient descent was also tried at first, but on the problems where the two were compared, Levenberg-Marquardt was much superior both in convergence time and in quality of result. Levenberg-Marquardt required substantially fewer iterations than stochastic gradient descent to converge. However, it needs O(p^2) space and O(p^2 n) time per iteration in a network with p parameters and n input examples, as compared to O(p) space and O(pn) time per epoch for stochastic gradient descent. Further details of the training method will be discussed in a longer paper. With some data sets, a weight decay term was added to the energy function to be optimized. The added term was of the form λ Σ_{k=1}^{p} w_k^2.
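The damped Gauss-Newton update at the heart of Levenberg-Marquardt can be sketched for a single scalar parameter (our own illustrative code, not the authors' implementation; the real algorithm solves a p x p linear system per iteration, which is where the O(p^2) costs above come from):

```python
def lm_step(w, xs, ys, model, dmodel, lam):
    """One Levenberg-Marquardt update for a one-parameter model y = model(x, w).

    dmodel gives d(model)/dw at each x; lam is the damping term that blends
    Gauss-Newton behavior (lam -> 0) with small gradient-like steps (large lam).
    """
    residuals = [y - model(x, w) for x, y in zip(xs, ys)]
    jacobian = [dmodel(x, w) for x in xs]
    jtj = sum(j * j for j in jacobian)                      # J^T J (scalar here)
    jtr = sum(j * r for j, r in zip(jacobian, residuals))   # J^T r
    return w + jtr / (jtj + lam)
```

For a linear model and lam = 0 a single step lands on the least-squares solution; increasing lam shrinks the step, which is how the method stabilizes ill-conditioned iterations.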
When weight decay was used, a range of values of λ was tried for every network trained. Before training, all the data was normalized: each input variable was scaled so that its range was (-1, 1), then scaled so that the maximum sum of squares of input variables for any example was 1. The output variable was scaled to have mean zero and mean absolute value 1. This helped the training algorithm, especially in the case of stochastic gradient descent.

4 Results

We present results of training our networks on three data sets: robot arm inverse dynamics, Boston housing data, and sunspot count prediction. The Boston and sunspot data sets are noisy, but have only mild nonlinearity. The robot arm inverse dynamics data has no noise, but a high degree of nonlinearity. Noise-free problems have low estimation error. Models for linear or mildly nonlinear problems typically have low approximation error. The robot arm inverse dynamics problem is thus a pure approximation problem, while performance on the noisy Boston and sunspots problems is limited more by estimation error than by approximation error. Figure 1a is a graph, as those used in PPR, of the unit response function of a one-unit network trained on the Boston housing data. The x axis is a projection (a weighted sum of inputs w^T u) of the 13-dimensional input space onto 1 dimension, using those weights chosen by the unit in training. The y axis is the fit to data. The response function of the unit is a sum of three cosines. Figure 1b is the superposition of five graphs of the five unit response functions used in a five-unit rational function solution (RMS error less than 2%) of the robot arm inverse dynamics problem. The domain for each curve lies along a different direction in the six-dimensional input space. Four of the five fits along the projection directions are non-monotonic, and thus can be fit only poorly by a sigmoid. Two different error measures are used in the following.
The first is the RMS error, normalized so that an error of 1 corresponds to no training. The second measure is the square of the normalized RMS error, otherwise known as the fraction of unexplained variance. We used whichever error measure was used in earlier work on that data set.

Figure 1: (a) Unit response function (a sum of three cosines) of a one-unit network trained on the Boston housing data. (b) Robot arm fit to data: the five superposed unit response functions of a five-unit rational network.

4.1 Robot arm inverse dynamics

This problem is the determination of the torque necessary at the joints of a two-joint robot arm required to achieve a given acceleration of each segment of the arm, given each segment's velocity and position. There are six input variables to the network, and two output variables. This problem was treated as two separate estimation problems, one for the shoulder torque and one for the elbow torque. The shoulder torque was a slightly more difficult problem, for almost all networks. The 1000 points in the training set covered the input space relatively thoroughly. This, together with the fact that the problem had no noise, meant that there was little difference between training set error and test set error. Polynomial networks of limited degree are not universal approximators, and that is quite evident on this data set; polynomial networks of low degree reached their minimum error after a few units. Figure 2a shows this. If polynomial, cosine, rational, and sigmoid networks are compared as in Figure 2b, leaving out low degree polynomials, the sigmoids have relatively high approximation error even for networks with 20 units. As shown in the following table, the complex units have more parameters each, but still get better performance with fewer parameters total.

Type                 Units   Parameters   Error
degree 7 polynomial  5       65           .024
degree 6 rational    5       95           .027
2 term cosine        6       73           .020
sigmoid              10      81           .139
sigmoid              20      161          .119

Since the training set is noise-free, these errors represent pure approximation error.
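The two error measures can be sketched as follows (our own illustrative code; an nrmse of 1 corresponds to a model that does no better than predicting the target mean, matching the "no training" normalization):

```python
from math import sqrt

def nrmse(y_true, y_pred):
    """RMS error normalized by the RMS deviation of the targets from their
    mean, so that an error of 1 corresponds to no training."""
    mean = sum(y_true) / len(y_true)
    resid = sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
    spread = sqrt(sum((t - mean) ** 2 for t in y_true) / len(y_true))
    return resid / spread

def unexplained_variance(y_true, y_pred):
    """Square of the normalized RMS error: the fraction of variance
    left unexplained by the model."""
    return nrmse(y_true, y_pred) ** 2
```

Predicting the mean gives both measures the value 1; a perfect fit gives 0.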
Figure 2: Robot arm approximation error versus number of units. (a) Low-degree polynomial networks level off after a few units. (b) Degree 7 polynomial, degree 6 rational, cosine, and sigmoid networks compared; the sigmoid error remains comparatively high.

The superior performance of the complex units on this problem is probably due to their ability to approximate non-monotonic functions.

4.2 Boston housing

The second data set is a benchmark for statistical algorithms: the prediction of Boston housing prices from 13 factors [3]. This data set contains 506 exemplars and is relatively simple; it can be approximated well with only a single unit. Networks of between one and six units were trained on this problem. Figure 3a is a graph of training set performance from networks trained on the entire data set; the error measure used was the fraction of unexplained variance. From this graph it is apparent that training set performance does not vary greatly between different types of units, though networks with more units do better. On the test set there is a large difference. This is shown in Figure 3b. Each point on the graph is the average performance of ten networks of that type. Each network was trained using a different permutation of the data into test and training sets, the test set being 1/3 of the examples and the training set 2/3. It can be seen that the cosine nets perform the best, the sigmoid nets a close second, the rationals third, and the polynomials worst (with the error increasing quite a bit with increasing polynomial degree). It should be noted that the distribution of errors is far from a normal distribution, and that the training set error gives little clue as to the test set error.

Figure 3: Boston housing (a) training set and (b) test set error versus number of units, for cosine, sigmoid, polynomial, and rational networks.
The following table of errors, for nine networks of four units using a degree 5 polynomial, is somewhat typical:

Set        Error
training   0.091
test       0.395

Our speculation on the cause of these extremely high errors is that polynomial approximations do not extrapolate well; if the prediction of some data point results in a polynomial being evaluated slightly outside the region on which the polynomial was trained, the error may be extremely high. Rational functions where the numerator and denominator have equal degree have less of a problem with this, since asymptotically they are constant. However, over small intervals they can have the extrapolation characteristics of polynomials. Cosines are bounded, and so, though they may not extrapolate well if the function is not somewhat periodic, at least do not reach large values like polynomials.

4.3 Sunspots

The third problem was the prediction of the average monthly sunspot count in a given year from the values of the previous twelve years. We followed previous work in using as our error measure the fraction of unexplained variance, and in using as the training set the years 1700 through 1920 and as the test set the years 1921 through 1955. This was a relatively easy test set: every network of one unit which we trained (whether sigmoid, polynomial, rational, or cosine) had, in each of ten runs, a training set error between .147 and .153 and a test set error between .105 and .111. For comparison, the best test set error achieved by us or previous testers was about .085. A similar set of runs was done as those for the Boston housing data, but using at most four units; similar results were obtained. Figure 4a shows training set error and Figure 4b shows test set error on this problem.

4.4 Weight Decay

The performance of almost all networks was improved by some amount of weight decay.
Figure 4: Sunspot (a) training set and (b) test set error versus number of units, for polynomial, rational, cosine, and sigmoid networks.

Figure 5 contains graphs of test set error for sigmoidal and polynomial units, using various values of the weight decay parameter λ. For the sigmoids, very little weight decay seems to be needed to give good results, and there is an order of magnitude range (between .001 and .01) which produces close to optimal results. For polynomials of degree 5, more weight decay seems to be necessary for good results; in fact, the highest value of weight decay is the best. Since very high values of weight decay are needed, and at those values there is little improvement over using a single unit, it may be supposed that using those values of weight decay restricts the multiple units to producing a very similar solution to the one-unit solution. Figure 6 contains the corresponding graphs for sunspots. Weight decay seems to help less here for the sigmoids, but for the polynomials, moderate amounts of weight decay produce an improvement over the one-unit solution.

Figure 5: Boston housing test error with various amounts of weight decay (decay values 0, .0001, .001, .01, .1, .3; sigmoids and degree 5 polynomials).

Figure 6: Sunspot test error with various amounts of weight decay (decay values 0, .0001, .001, .01, .1, .3).

Acknowledgements

The authors would like to acknowledge support from ONR grant N00014-89-J1228, AFOSR grant 89-0478, and a fellowship from the John and Fannie Hertz Foundation. The robot arm data set was provided by Chris Atkeson.

References

[1] J. H. Friedman, W. Stuetzle, "Projection Pursuit Regression", Journal of the American Statistical Association, December 1981, Volume 76, Number 376, 817-823.

[2] P. J. Huber, "Projection Pursuit", The Annals of Statistics, 1985, Vol. 13, No. 2, 435-475.

[3] L. Breiman et al., Classification and Regression Trees, Wadsworth and Brooks, 1984, pp. 217-220.
Adaptive Elastic Models for Hand-Printed Character Recognition

Geoffrey E. Hinton, Christopher K. I. Williams and Michael D. Revow
Department of Computer Science, University of Toronto
Toronto, Ontario, Canada M5S 1A4

Abstract

Hand-printed digits can be modeled as splines that are governed by about 8 control points. For each known digit, the control points have preferred "home" locations, and deformations of the digit are generated by moving the control points away from their home locations. Images of digits can be produced by placing Gaussian ink generators uniformly along the spline. Real images can be recognized by finding the digit model most likely to have generated the data. For each digit model we use an elastic matching algorithm to minimize an energy function that includes both the deformation energy of the digit model and the log probability that the model would generate the inked pixels in the image. The model with the lowest total energy wins. If a uniform noise process is included in the model of image generation, some of the inked pixels can be rejected as noise as a digit model is fitting a poorly segmented image. The digit models learn by modifying the home locations of the control points.

1 Introduction

Given good bottom-up segmentation and normalization, feedforward neural networks are an efficient way to recognize digits in zip codes (le Cun et al., 1990). However, in some cases, it is not possible to correctly segment and normalize the digits without using knowledge of their shapes, so to achieve close to human performance on images of whole zip codes it will be necessary to use models of shapes to influence the segmentation and normalization of the digits. One way of doing this is to use a large cooperative network that simultaneously segments, normalizes and recognizes all of the digits in a zip code.
A first step in this direction is to take a poorly segmented image of a single digit and to explain the image properly in terms of an appropriately normalized, deformed digit model plus noise. The ability of the model to reject some parts of the image as noise is the first step towards model-driven segmentation.

2 Elastic models

One technique for recognizing a digit is to perform an elastic match with many different exemplars of each known digit-class and to pick the class of the nearest neighbor. Unfortunately this requires a large number of elastic matches, each of which is expensive. By using one elastic model to capture all the variations of a given digit we greatly reduce the number of elastic matches required. Burr (1981a, 1981b) has investigated several types of elastic model and elastic matching procedure. We describe a different kind of elastic model that is based on splines. Each elastic model contains parameters that define an ideal shape and also define a deformation energy for departures from this ideal. These parameters are initially set by hand but can be improved by learning. They are an efficient way to represent the many possible instances of a given digit. Each digit is modelled by a deformable spline whose shape is determined by the positions of 8 control points. Every point on the spline is a weighted average of four control points, with the weighting coefficients changing smoothly as we move along the spline.[1] To generate an ideal example of a digit we put the 8 control points at their home locations for that model. To deform the digit we move the control points away from their home locations. Currently we assume that, for each model, the control points have independent, radial Gaussian distributions about their home locations.
So the negative log probability of a deformation (its energy) is proportional to the sum of the squares of the departures of the control points from their home locations. The deformation energy function only penalizes shape deformations. Translation, rotation, dilation, elongation, and shear do not change the shape of an object, so we want the deformation energy to be invariant under these affine transformations. We achieve this by giving each model its own "object-based frame". Its deformation energy is computed relative to this frame, not in image coordinates. When we fit the model to data, we repeatedly recompute the best affine transformation between the object-based frame and the image (see section 4). The repeated recomputation of the affine transform during the model fit means that the shape of the digit is influencing the normalization. Although we will use our digit models for recognizing images, it helps to start by considering how we would use them for generating images. The generative model is an elaboration of the probabilistic interpretation of the elastic net given by Durbin, Szeliski & Yuille (1989). Given a particular spline, we space a number of "beads" uniformly along the spline. Each bead defines the center of a Gaussian ink generator. The number of beads on the spline and the variance of the ink generators can easily be changed without changing the spline itself. To generate a noisy image of a particular digit class, run the following procedure:

• Pick an affine transformation from the model's intrinsic reference frame to the image frame (i.e. pick a size, position, orientation, slant and elongation for the digit).

• Pick a deformation of the model (i.e. move the control points away from their home locations).

[1] In computing the weighting coefficients we use a cubic B-spline and we treat the first and last control points as if they were doubled.
The probability of picking a deformation is proportional to e^(-E_deform).

• Repeat many times: Either (with probability π_noise) add a randomly positioned noise pixel, Or pick a bead at random and generate a pixel from the Gaussian distribution defined by the bead.

3 Recognizing isolated digits

We recognize an image by finding which model is most likely to have generated it. Each possible model is fitted to the image and the one that has the lowest cost fit is the winner. The cost of a fit is the negative log probability of generating the image given the model:

- log ∫_{I ∈ model instances} P(I) P(image | I) dI     (1)

We can approximate this by just considering the best fitting model instance[2] and ignoring the fact that the model should not generate ink where there is no ink in the image:[3]

E = λ E_deform - Σ_{inked pixels} log P(pixel | best model instance)     (2)

The probability of an inked pixel is the sum of the probabilities of all the possible ways of generating it from the mixture of Gaussian beads or the uniform noise field:

P(i) = π_noise / N + (π_model / B) Σ_b P_b(i)     (3)

where N is the total number of pixels, B is the number of beads, π is a mixing proportion, and P_b(i) is the probability density of pixel i under Gaussian bead b.

4 The search procedure for fitting a model to an image

Every Gaussian bead in a model has the same variance. When fitting data, we start with a big variance and gradually reduce it as in the elastic net algorithm of Durbin and Willshaw (1987). Each iteration of the elastic matching algorithm involves three steps:

[2] In effect, we are assuming that the integral in equation 1 can be approximated by the height of the highest peak, and so we are ignoring variations between models in the width of the peak or the number of peaks.

[3] If the inked pixels are rare, poor models sin mainly by not inking those pixels that should be inked rather than by inking those pixels that should not be inked.
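The pixel probability of equation 3, which the fitting procedure below uses repeatedly, can be sketched as follows (our own illustrative code, not the authors'; `bead_density` stands in for the Gaussian density P_b(i), and π_model = 1 - π_noise is our assumption about how the mixing proportions are normalized):

```python
from math import exp, pi

def bead_density(pixel, bead, var):
    """Isotropic 2-D Gaussian density of a pixel position under one bead."""
    dx, dy = pixel[0] - bead[0], pixel[1] - bead[1]
    return exp(-(dx * dx + dy * dy) / (2.0 * var)) / (2.0 * pi * var)

def pixel_prob(pixel, beads, var, n_pixels, pi_noise):
    """Equation 3: P(i) = pi_noise/N + (pi_model/B) * sum_b P_b(i),
    mixing a uniform noise field over N pixels with B Gaussian beads."""
    pi_model = 1.0 - pi_noise
    mixture = sum(bead_density(pixel, b, var) for b in beads) / len(beads)
    return pi_noise / n_pixels + pi_model * mixture
```

With π_noise = 1 every pixel gets the uniform probability 1/N, and with π_noise = 0 the probability comes entirely from the bead mixture.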
• Given the current locations of the Gaussians, compute the responsibility that each Gaussian has for each inked pixel. This is just the probability of generating the pixel from that Gaussian, normalized by the total probability of generating the pixel.

• Assuming that the responsibilities remain fixed, as in the EM algorithm of Dempster, Laird and Rubin (1977), we invert a 16 x 16 matrix to find the image locations for the 8 control points at which the forces pulling the control points towards their home locations are balanced by the forces exerted on the control points by the inked pixels. These forces come via the forces that the inked pixels exert on the Gaussian beads.

• Given the new image locations of the control points, we recompute the affine transformation from the object-based frame to the image frame. We choose the affine transformation that minimizes the sum of the squared distances, in object-based coordinates, between the control points and their home locations. The residual squared differences determine the deformation energy.

Some stages in the fitting of a model to data are shown in Fig. 1. This search technique avoids nearly all local minima when fitting models to isolated digits. But if we get a high deformation energy in the best fitting model, we can try alternative starting configurations for the models.

5 Learning the digit models

We can do discriminative learning by adjusting the home positions and variances of the control points to minimize the objective function

C = - Σ_{training cases} log p(correct digit),   p(correct digit) = e^(-E_correct) / Σ_{all digits} e^(-E_digit)     (4)

For a model parameter such as the x coordinate of the home location of one of the control points we need ∂C/∂x in order to do gradient descent learning.
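The probability in equation 4 is a softmax over negative fit energies, which can be sketched as follows (our own illustrative code; the digit labels and the dictionary interface are ours, not the authors'):

```python
from math import exp

def p_correct(energies, correct):
    """Equation 4: probability assigned to the correct digit, a softmax
    over negative fit energies.  energies maps digit label -> E_digit."""
    z = sum(exp(-e) for e in energies.values())
    return exp(-energies[correct]) / z
```

Equal energies split the probability evenly, and the model whose fit has the lowest energy receives the largest share.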
Equation 4 allows us to express ∂C/∂x in terms of ∂E/∂x, but there is a subtle problem: Changing a parameter of an elastic model causes a simple change in the energy of the configuration that the model previously settled to, but the model no longer settles to that configuration. So it appears that we need to consider how the energy is affected by the change in the configuration. Fortunately, derivatives are simple at an energy minimum because small changes in the configuration make no change in the energy (to first order). Thus the inner loop settling leads to simple derivatives for the outer loop learning, as in the Boltzmann machine (Hinton, 1989).

Hinton, Williams, and Revow

Figure 1: The sequence (a) to (d) shows some stages of fitting a model 3 to some data. The grey circles represent the beads on the spline, and the radius of the circle represents the standard deviation of the Gaussian. (a) shows the initial configuration, with eight beads equally spaced along the spline. In (b) and (c) the variance is progressively decreased and the number of beads is increased. The final fit using 60 beads is shown in (d). We use about three iterations at each of five variances on our "annealing schedule". In this example, we used π_noise = 0.3, which makes it cheaper to explain the extraneous noise pixels and the flourishes on the ends of the 3 as noise rather than deforming the model to bring Gaussian beads close to these pixels.

6 Results on the hand-filtered dataset

We are trying the scheme out on a relatively simple task - we have a model of a two and a model of a three, and we want the two model to win on "two" images, and the three model to win on "three" images. We have tried many variations of the character models, the preprocessing, the initial affine transformations of the models, the annealing schedule for the variances, the
mixing proportion of the noise, and the relative importance of deformation energy versus data-fit energy. Our current best performance is 10 errors (1.6%) on a test set of 304 two's and 304 three's. We reject cases if the best-fitting model is highly deformed, but on this test set the deformation energy never reached the rejection criterion. The training set has 418 cases, and we have a validation set of 200 cases to tell us when to stop learning. Figure 2 shows the effect of learning on the models. The initial affine transform is defined by the minimal vertical rectangle around the data.

Figure 2: The two and three models before and after learning. The control points are labelled 1 through 8.

We used maximum likelihood learning in which each digit model is trained only on instances of that digit. After each pass through all those instances, the home location of each control point (in the object-based frame) is redefined to be the average location of the control point in the final fits of the model of the digit to the instances of the digit. Most of the improvement in performance occurred after the first pass, and after five updates of the home locations of the control points, performance on the validation set started to decrease. Similar results were obtained with discriminative training. We could also update the variance of each control point to be its variance in the final fits, though we did not adapt the variances in this simulation.

The images are preprocessed to eliminate variations due to stroke-width and paper and ink intensities. First, we use a standard local thresholding algorithm to make a binary decision for each pixel. Then we pick out the five largest connected components (hopefully digits). We put a box around each component, then thin all the data in the box.
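The maximum-likelihood re-estimation of home locations (and the optional variance update) described above amounts to averaging over the final fits. A hedged sketch with hypothetical array shapes:

```python
import numpy as np

def update_home_locations(final_fits):
    """Re-estimate home locations as the mean over training fits.

    final_fits: (n_cases, 8, 2) object-frame locations of the 8 control
    points after fitting the digit model to each instance of the digit."""
    fits = np.asarray(final_fits, dtype=float)
    return fits.mean(axis=0)

def update_variances(final_fits, new_homes):
    """Optionally set each control point's variance to its variance in
    the final fits (mean squared distance from the new home location)."""
    fits = np.asarray(final_fits, dtype=float)
    return ((fits - new_homes) ** 2).sum(axis=2).mean(axis=0)
```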
If we ourselves cannot recognize the resulting image we eliminate it from the data set. The training, validation and test data is all from the training portion of the United States Postal Service Handwritten ZIP Code Database (1987) which was made available by the USPS Office of Advanced Technology.

7 Discussion

Before we tried using splines to model digits, we used models that consisted of a fixed number of Gaussian beads with elastic energy constraints operating between neighboring beads. To constrain the curvature we used energy terms that involved triples of beads. With this type of energy function, we had great difficulty using a single model to capture topologically different instances of a digit. For example, when the central loop of a 3 changes to a cusp and then to an open bend, the sign of the curvature reverses. With a spline model it is easy to model these topological variants by small changes in the relative vertical locations of the central two control points (see figure 2). This advantage of spline models is pointed out by Edelman, Ullman and Flash (1990), who use a different kind of spline that they fit to character data by directly locating candidate knot points in the image. Spline models also make it easy to increase the number of Gaussian beads as their variance is decreased. This coarse-to-fine strategy is much more efficient than using a large number of beads at all variances, but it is much harder to implement if the deformation energy explicitly depends on particular bead locations, since changing the number of beads then requires a new function for the deformation energy. In determining where on the spline to place the Gaussian beads, we initially used a fixed set of blending coefficients for each bead. These coefficients are the weights used to specify the bead location as a weighted center of gravity of the locations of 4 control points.
Unfortunately this yields too few beads in portions of a digit such as a long tail of a 2 which are governed by just a few control points. Performance was much improved by spacing the beads uniformly along the curve.

By using spline models, we build in a lot of prior knowledge about what characters look like, so we can describe the shape of a character using only a small number of parameters (16 coordinates and 8 variances). This means that the learning is exploring a much smaller space than a conventional feed-forward network. Also, because the parameters are easy to interpret, we can start with fairly good initial models of the characters. So learning only requires a few updates of the parameters.

Obvious extensions of the deformation energy function include using elliptical Gaussians for the distributions of the control points, or using full covariance matrices for neighboring pairs of control points. Another obvious modification is to use elliptical rather than circular Gaussians for the beads. If strokes curve gently relative to their thickness, the distribution of ink can be modelled much better using elliptical Gaussians. However, an ellipse takes about twice as many operations to fit and is not helpful in regions of sharp curvature. Our simulations suggest that, on average, two circular beads are more flexible than one elliptical bead.

Currently we do not impose any penalty on extremely sheared or elongated affine transformations, though this would probably improve performance. Having an explicit representation of the affine transformation of each digit should prove very helpful for recognizing multiple digits, since it will allow us to impose a penalty on differences in the affine transformations of neighboring digits. Presegmented images of single digits contain many different kinds of noise that cannot be eliminated by simple bottom-up operations.
These include descenders, underlines, and bits of other digits; corrections; dirt in recycled paper; smudges and misplaced postal franks. To really understand the image we probably need to model a wide variety of structured noise. We are currently experimenting with one simple way of incorporating noise models. After each digit model has been used to segment a noisy image into one digit instance plus noise, we try to fit more complicated noise models to the residual noise. A good fit greatly decreases the cost of that noise and hence improves this interpretation of the image. We intend to handle flourishes on the ends of characters in this way rather than using more elaborate digit models that include optional flourishes.

One of our main motivations in developing elastic models is the belief that a strong prior model should make learning easier, should reduce confident errors, and should allow top-down segmentation. Although we have shown that elastic spline models can be quite effective, we have not yet demonstrated that they are superior to feedforward nets, and there is a serious weakness of our approach: Elastic matching is slow. Fitting the models to the data takes much more computation than a feedforward net. So in the same number of cycles, a feedforward net can try many alternative bottom-up segmentations and normalizations and select the overall segmentation that leads to the most recognizable digit string.

Acknowledgements

This research was funded by Apple and by the Ontario Information Technology Research Centre. We thank Allan Jepson and Richard Durbin for suggesting spline models.

References

Burr, D. J. (1981a). A dynamic model for image registration. Comput. Graphics Image Process., 15:102-112.
Burr, D. J. (1981b). Elastic matching of line drawings. IEEE Trans. Pattern Analysis and Machine Intelligence, 3(6):708-713.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Proc. Roy. Stat.
Soc., B-39:1-38.
Durbin, R., Szeliski, R., and Yuille, A. L. (1989). An analysis of the elastic net approach to the travelling salesman problem. Neural Computation, 1:348-358.
Durbin, R. and Willshaw, D. (1987). An analogue approach to the travelling salesman problem. Nature, 326:689-691.
Edelman, S., Ullman, S., and Flash, T. (1990). Reading cursive handwriting by alignment of letter prototypes. Internat. Journal of Comput. Vision, 5(3):303-331.
Hinton, G. E. (1989). Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation, 1:143-150.
le Cun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., and Jackel, L. (1990). Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems 2, pages 396-404. Morgan Kaufmann.

PART IX: CONTROL AND PLANNING
Modeling Applications with the Focused Gamma Net

Jose C. Principe, Bert de Vries, Jyh-Ming Kuo and Pedro Guedes de Oliveira*
Department of Electrical Engineering, University of Florida, CSE 447, Gainesville, FL 32611
principe@synapse.ee.ufl.edu
*Departamento de Electronica / INESC, Universidade de Aveiro, Aveiro, Portugal

Abstract

The focused gamma network is proposed as one of the possible implementations of the gamma neural model. The focused gamma network is compared with the focused backpropagation network and TDNN for a time series prediction problem, and with ADALINE in a system identification problem.

1 INTRODUCTION

At NIPS-90 we introduced the gamma neural model, a real time neural net for temporal processing (de Vries and Principe, 1991). This model is characterized by a neural short term memory mechanism, the gamma memory structure, which is implemented as a tapped delay line of adaptive dispersive elements. The gamma model seems to provide an integrative framework to study the neural processing of time varying patterns (de Vries and Principe, 1992). In fact both the memory by delays as implemented in TDNN (Lang et al., 1990) and memory by local feedback (self-recurrent loops) as proposed by Jordan (1986) and Elman (1990) are special cases of the gamma memory structure. The preprocessor utilized in Tank and Hopfield's concentration in time (CIT) network (Tank and Hopfield, 1987) can be shown to be very similar to the dispersive structure utilized in the gamma memory (de Vries, 1991). We studied the gamma memory as an independent adaptive filter structure (Principe et al., 1992), and concluded that it is a special case of a class of IIR (infinite impulse response) adaptive filters, which we called the generalized feedforward structures. For these structures, the well known Wiener-Hopf solution to find the optimal filter weights can be analytically computed. One of the advantages of the gamma memory as an adaptive filter is that, although being a recursive structure,
stability is easily ensured. Moreover, the LMS algorithm can be easily extended to adapt all the filter weights, including the parameter that controls the depth of memory, with the same complexity as the conventional LMS algorithm (i.e. the algorithm complexity is linear in the number of weights). Therefore, we achieved a theoretical framework to study memory mechanisms in neural networks.

In this paper we compare the gamma neural model with other well established neural networks that process time varying signals. Therefore the first step is to establish a topology for the gamma model. To make the comparison easier with respect to TDNN and Jordan's networks, we will present our results based on the focused gamma network. The focused gamma network is a multilayer feedforward structure with a gamma memory plane in the first layer (Figure 1). The learning equations for the focused gamma network and its memory characteristics will be addressed in detail. Examples will be presented for prediction of complex biological signals (electroencephalogram - EEG) and chaotic time series, as well as a system identification example.

2 THE FOCUSED GAMMA NET

The focused neural architecture was introduced by Mozer (1988) and Stornetta et al (1988). It is characterized by a two-stage topology where the input stage stores traces of the input signal, followed by a nonlinear continuous feedforward mapper network (Figure 1). The gamma memory plane represents the input signal in a time-space plane (spatial dimension M, temporal dimension K). The activations in the memory layer are I_ik(t), and the activations in the feedforward network are represented by x_i(t). Therefore the following equations apply respectively for the input memory plane and for the feedforward network,

I_i0(t) = I_i(t)
I_ik(t) = (1 − μ_i) I_ik(t−1) + μ_i I_i,k−1(t−1),   i = 1,...,M; k = 1,...,K.   (1)

x_i(t) = σ( Σ_{j<i} w_ij x_j(t) + Σ_{j,k} w_ijk I_jk(t) ),   i = 1,...,N.   (2)
where μ_i is an adaptive parameter that controls the depth of memory (Principe et al., 1992), and w_ijk are the spatial weights. Notice that the focused gamma network for K=1 is very similar to the focused-backpropagation network of Mozer and Stornetta. Moreover, when μ=1 the gamma memory becomes a tapped delay line, which is the configuration utilized in TDNN, with the time-to-space conversion restricted to the first layer (Lang et al., 1990). Notice also that if the nonlinear feedforward mapper is restricted to one layer of linear elements, and μ=1, the focused gamma memory becomes the adaptive linear combiner - ADALINE (Widrow et al., 1960). In order to better understand the computational properties of the gamma memory we defined two parameters, the mean memory depth D and the memory resolution R, as

D = K/μ,   R = K/D = μ   (3)

(de Vries, 1991). Memory depth measures how far into the past the signal conveys information for the processing task, while resolution quantifies the temporal proximity of the memory traces.

Figure 1: The focused gamma network architecture.

The important aspect in the gamma memory formalism is that μ, which controls both the memory resolution and depth, is an adaptive parameter that is learned from the signal according to the optimization of a performance measure. Therefore the focused gamma network always works with the optimal memory depth/resolution for the processing problem. The gamma memory is an adaptive recursive structure, and as such can go unstable during adaptation. But due to the local feedback nature of G(z), stability is easily ensured by keeping 0 < μ < 2. The focused gamma network is a recurrent neural model, but due to the topology selected, the spatial weights can be learned using regular backpropagation (Rumelhart et al., 1986). However for the adaptation of μ, a recurrent learning procedure is necessary.
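A minimal sketch of the memory recursion (1) and of the RTRL sensitivity recursion used to adapt μ (hypothetical helper functions for one input channel; tap 0 holds the current input, whose sensitivity to μ is zero):

```python
import numpy as np

def gamma_memory_step(I_prev, x_t, mu):
    """One update of the gamma memory taps (equation 1).

    I_prev: (K+1,) tap activations at t-1, with tap 0 the input tap.
    For mu = 1 this reduces to a pure tapped delay line (TDNN)."""
    I = np.empty_like(I_prev)
    I[0] = x_t                                   # I_{i0}(t) = I_i(t)
    # I_{ik}(t) = (1 - mu) I_{ik}(t-1) + mu I_{i,k-1}(t-1)
    I[1:] = (1.0 - mu) * I_prev[1:] + mu * I_prev[:-1]
    return I

def rtrl_mu_step(alpha_prev, I_prev, mu):
    """One RTRL update of alpha_k(t) = dI_{ik}(t)/dmu_i.

    alpha_prev, I_prev: sensitivities and taps at t-1; the input tap's
    sensitivity stays zero because the input does not depend on mu."""
    alpha = np.zeros_like(alpha_prev)
    # alpha_k(t) = (1-mu) alpha_k(t-1) + mu alpha_{k-1}(t-1)
    #              + [I_{i,k-1}(t-1) - I_{i,k}(t-1)]
    alpha[1:] = ((1.0 - mu) * alpha_prev[1:] + mu * alpha_prev[:-1]
                 + (I_prev[:-1] - I_prev[1:]))
    return alpha
```

With μ = 1 the memory degenerates to the TDNN delay line; the sensitivity recursion can be checked against a finite-difference derivative of the taps with respect to μ.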
Since most of the times the order of the gamma memory is small, we recommend adapting μ with direct differentiation using the real time recurrent learning (RTRL) algorithm (Williams and Zipser, 1989), which when applied to the gamma memory yields

∂E/∂μ_i = − Σ_m e_m(t) σ'(net_m(t)) Σ_k w_mik α_i^k(t)

where by definition α_i^k(t) = ∂I_ik(t)/∂μ_i, and

α_i^k(t) = (1 − μ_i) α_i^k(t−1) + μ_i α_i^{k−1}(t−1) + [I_i,k−1(t−1) − I_i,k(t−1)].

However, backpropagation through time (BPTT) (Werbos, 1990) can also be utilized, and will be more efficient when the temporal patterns are short.

3 EXPERIMENTAL RESULTS

The results for prediction that will be presented here utilized the focused gamma network as depicted in Figure 2a, while for the case of system identification, the block diagram is presented in Figure 2b.

Figure 2: Block diagrams for the experiments: (a) prediction; (b) system identification.

Prediction of EEG

We selected an EEG signal segment for our first comparison, because the EEG is notorious for its complexity. The problem was to predict the signal five steps ahead (feedforward prediction). Figure 3 shows a four second segment of sleep stage 2. The topology utilized was K gamma units, a one-hidden layer configuration with 5 units (nonlinear) and one linear output unit. The performance criterion is the mean square error signal. We utilized backpropagation to adapt the spatial weights (w_ijk), and parametrized μ between 0 and 1 in steps of 0.1. Figure 3b displays the curves of minimal mean square error versus μ. One can immediately see that the minimum mean square error is obtained for values of μ different from one, therefore for the same memory order the gamma memory outperforms the tapped delay line as utilized in TDNN (which once again is equivalent to the gamma memory for μ=1). For the case of the EEG it seems that the advantage of the gamma memory diminishes when the order of the memory is
increased. However, the case of K=2, μ=0.6 produces performance equivalent to a TDNN with 4 memory taps (K=4). Since in experimental conditions there is always noise, experience has shown that a smaller number of adaptive parameters yields better signal fitting and simplifies training, so the focused gamma network is preferable.

Figure 3: Prediction error (5 steps) with the gamma filter as a function of μ for the EEG. The best MSE is obtained for μ < 1. The dot shows the performance of a first order context unit.

Notice also that the case of networks with a first order context unit is obtained for K=1, so even if the time constant is chosen right (μ=0.2 in this case), the performance can be improved if higher order memory kernels are utilized. It is also interesting to note that the optimal memory depth for the EEG prediction problem seems to be around 4, as this is the value of K/μ at the optimum. The information regarding the "optimal memory depth" is not obtainable with conventional models.

Prediction of Mackey-Glass time series

The Mackey-Glass system is a delay differential equation that becomes chaotic for some values of the parameters and delays (Mackey and Glass, 1977). The results that will be presented here regard the Mackey-Glass system with delay D=30. The time series was generated using a fourth order Runge-Kutta algorithm. The table in Figure 4 shows the performance of TDNN and the focused gamma network with the same number of free parameters. The number of hidden units was kept the same in both networks, but TDNN utilized 5 input units, while the focused gamma network had 4 input units, and the adaptive memory depth parameter μ.
The two systems were trained with the same number of samples and training epochs. For TDNN this was the value that gave the best training when cross validation was utilized (the error in the test set increased after 100 epochs). For this example μ was adapted on-line using RTRL, with the initial value set at μ=1, and with the same step size as for the spatial weights. As the Table shows, the MSE in the training for the gamma network is substantially lower than for TDNN. Figure 4 shows the behavior of μ during the training epochs. It is interesting to see that the value of μ changes during training and settles around a value of 0.92. In terms of learning curve (the MSE as a function of epoch) notice that there is an intersection of the learning curves for the TDNN and gamma network around epoch 42, when the value of μ=1, as we could expect from our analysis. The gamma network starts outperforming TDNN when the correct value of μ is approached.

This example shows that μ can be learned on line, and that once again having the freedom to select the right value of memory depth helps in terms of prediction performance. For both these cases the required memory depth is relatively shallow, which we can expect since a chaotic time series has positive Lyapunov exponents, so the important information to predict the next point is in the short-term past. The same argument applies to the EEG, which can also be modeled as a chaotic time series (Lo and Principe, 1989). Cases where the long-term past is important for the task should enhance the advantage of the gamma memory.

Figure 4: Training and test learning curves for TDNN and the gamma network with the same number of free parameters (gamma net architecture (1+K=4)-12-1). Notice that the learning curves intersect around epoch 42, exactly when the μ of the gamma network was 1.
Figure 4b also shows that the gamma network is able to achieve a smaller error in this problem.

Linear System Identification

The last example is the identification of a third order linear lowpass elliptic transfer function with poles and zeros, given by

H(z) = (b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}) / (1 − 2.8653 z^{-1} + 2.7505 z^{-2} − 0.8843 z^{-3})

The cutoff frequency of this filter was selected such that the impulse response was long, effectively creating the need for a deep memory for good identification. For this case the focused gamma network was reduced to an ADALINE(μ) (de Vries et al., 1991), i.e. the feedforward mapper was a one layer linear network. The block diagram of Figure 2b was utilized to train the gamma network, and I(t) was chosen to be white gaussian noise. Figure 5 shows the MSE as a function of μ for gamma memory orders up to K=3. Notice that the information gained from Figure 5 agrees with our speculations. The optimal value of the memory is K/μ ≈ 17 samples. For this value the third order ADALINE performs very poorly because there is not enough information in 3 delays to identify the transfer function with small error. The gamma memory, on the other hand, can choose μ small to encompass the required length, even for a third order memory. The price paid is reduced resolution, but the performance is still much better than the ADALINE of the same order (a factor of 10 improvement).

4 CONCLUSIONS

In this paper we propose a specific topology of the gamma neural model, the focused gamma network. Several important neural networks become special cases of the focused gamma network. This allowed us to compare the advantages of having a more versatile memory structure than any of the networks under comparison.

Figure 5: MSE vs. μ for H(z).
The error achieved with μ=0.18 is 10 times smaller than for the ADALINE.

The conclusion is that the gamma memory is computationally more powerful than fixed delays or first order context units. The major advantage is that the gamma model formalism allows the memory depth to be optimally set for the problem at hand. In the case of the chaotic time series, where the information to predict the future is concentrated in the neighborhood of the present sample, the gamma memory selected the most appropriate value, but its performance is similar to TDNN. However, in cases where the required depth of memory is much larger than the size of the tapped delay line, the gamma memory outperforms the fixed depth topologies with the same number of free parameters. The price paid for this optimal performance is insignificant. As a matter of fact, μ can be adapted in real-time with RTRL (or BPTT), and since it is a single global parameter the complexity of the algorithm is still O(K) with RTRL. The other possible problem, instability, is easily controlled by requiring that the value of μ be limited to 0 < μ < 2.

The focused gamma memory is just one of the possible neural networks that can be implemented with the gamma model. The use of gamma memory planes in the hidden or output processing elements will enhance the computational power of the neural network. Notice that in these cases the short term mechanism is not only utilized to store information of the signal past, but will also be utilized to store the past values of the neural states. We can expect great savings in terms of network size with these other structures, mainly in cases where the information of the long-term past is important for the processing task.

Acknowledgments

This work has been partially supported by NSF grant DDM-8914084.

References

De Vries B. and Principe J.C. (1991). A Theory for Neural Nets with Time Delays. In Lippmann R., Moody J., and Touretzky D.
(eds.), NIPS90 Proceedings, San Mateo, CA, Morgan Kaufmann.
De Vries B., Principe J., Oliveira P. (1991). Adaline with Adaptive Recursive Memory. Proc. IEEE Workshop on Neural Nets for Sig. Proc., Princeton, 101-110, IEEE Press.
De Vries B. and Principe J.C. (1992). The gamma neural net - A new model for temporal processing. Accepted for publication, Neural Networks.
De Vries B. (1991). Temporal Processing with Neural Networks - The Development of the Gamma Model. Ph.D. Dissertation, University of Florida.
Elman J. (1988). Finding structure in time. CRL Technical Report 8801.
Jordan M. (1986). Attractor dynamics and parallelism in a connectionist sequential machine. Proc. Cognitive Science 1986.
Lang et al. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, vol. 3 (1).
Lo P.C. and Principe J.C. (1989). Dimensionality analysis of EEG segments: experimental considerations. Proc. IJCNN 89, vol. I, 693-698.
Mackey M., Glass L. (1977). Oscillation and Chaos in Physiological Control Systems. Science, 197, 287.
Mozer M.C. (1989). A Focused Backpropagation Algorithm for Temporal Pattern Recognition. Complex Systems 3, 349-381.
Principe J.C., De Vries B., Oliveira P. (1992). The Gamma Filter - A New Class of Adaptive IIR Filters with Restricted Feedback. Accepted for publication in IEEE Transactions on Signal Processing.
Rumelhart D.E., Hinton G.E. and Williams R.J. (1986). Learning Internal Representations by Error Back-propagation. In Rumelhart D.E., McClelland J.L. (eds.), Parallel Distributed Processing, vol. 1, ch. 8, MIT Press.
Stornetta W.S., Hogg T. and Huberman B.A. (1988). A Dynamical Approach to Temporal Pattern Processing. In Anderson D.Z. (ed.), Neural Information Processing Systems, 750-759.
Tank D. and Hopfield J. (1987). Concentrating information in time: analog neural networks with applications to speech recognition problems. 1st Int. Conf. on Neural Networks, IEEE.
Werbos P. (1990).
Backpropagation through time: what it does and how to do it. Proc. IEEE, vol. 78, no. 10, 1550-1560.
Widrow B., Hoff M. (1960). Adaptive Switching Circuits. IRE WESCON Conv. Rec., pt. 4.
Williams R. and Zipser D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, vol. 1, no. 2, pp. 270-281, MIT Press.
Towards Faster Stochastic Gradient Search

Christian Darken and John Moody
Yale Computer Science, P.O. Box 2158, New Haven, CT 06520
Email: darken@cs.yale.edu

Abstract

Stochastic gradient descent is a general algorithm which includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate η (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of "search then converge" learning rate schedules (Darken and Moody, 1990) display the theoretically optimal asymptotic convergence rate and a superior ability to escape from poor local minima. However, the user is responsible for setting a key parameter. We propose here a new methodology for creating the first completely automatic adaptive learning rates which achieve the optimal rate of convergence.

Introduction

The stochastic gradient descent algorithm is

ΔW(t) = −η ∇_W E(W(t), X(t))

where η is the learning rate, t is the "time", and X(t) is the independent random exemplar chosen at time t. The purpose of the algorithm is to find a parameter vector W which minimizes a function G(W), which for learning algorithms has the form E_X[E(W, X)], i.e. G is the average of an objective function over the exemplars, labeled E and X respectively. We can rewrite ΔW(t) in terms of G as

ΔW(t) = −η [∇_W G(W(t)) + ξ(t, W(t))],

where the ξ are independent zero-mean noises. Stochastic gradient descent may be preferable to deterministic gradient descent when the exemplar set is increasing in size over time or large, making the average over exemplars expensive to compute. Additionally, the noise in the gradient can help the system escape from local minima. The fundamental algorithmic issue is: how best to adjust η as a function of time and the exemplars?

State of the Art Schedules

The usual non-adaptive choices of η (i.e. η depends on the time only) often yield poor performance.
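The generic update can be sketched as follows (a hypothetical interface; `grad_E` and the schedule `eta` are supplied by the particular learning algorithm — LMS, on-line backpropagation and adaptive k-means all fit this template):

```python
import numpy as np

def stochastic_gradient_descent(grad_E, W0, exemplars, eta):
    """Generic stochastic gradient descent.

    grad_E(W, x): gradient of the per-exemplar objective E(W, x).
    eta(t):       learning-rate schedule as a function of the step index."""
    W = np.array(W0, dtype=float)
    for t, x in enumerate(exemplars, start=1):
        W -= eta(t) * grad_E(W, x)    # Delta W(t) = -eta grad_W E(W, x)
    return W
```

For example, with E(W, x) = (W − x)²/2 and η(t) = 1/t, the iterate is exactly the running mean of the exemplars seen so far.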
The simple expedient of taking η to be constant results in persistent residual fluctuations whose magnitude, and the resulting degradation of system performance, are difficult to anticipate (see fig. 3). Taking a smaller constant η reduces the magnitude of the fluctuations, but seriously slows convergence and causes problems with metastable local minima. Taking η(t) = c/t, the common choice in the stochastic approximation literature of the last forty years, typically results in slow convergence to bad solutions for small c, and parameter blow-up for small t if c is large (Darken and Moody, 1990).

The available adaptive schedules (i.e. η depends on the time and on previous exemplars) have problems as well. Classical methods which involve estimating the hessian of G are often unusable because they require O(N²) storage and computation for each update, which is too expensive for large N (many-parameter systems, e.g. large neural nets). Methods such as those of Fabian (1960) and Kesten (1958) require the user to specify an entire function and thus are not practical methods as they stand. The delta-bar-delta learning rule, which was developed in the context of deterministic gradient descent (Jacobs, 1988), is often useful in locating the general vicinity of a solution in the stochastic setting. However it hovers about the solution without converging (see fig. 4). A schedule developed by Urasiev is proven to converge in principle, but in practice it converges slowly if at all (see fig. 5). The literature is widely scattered over time and disciplines; however, to our knowledge no published O(N) technique attains the optimal convergence speed.

Search-Then-Converge Schedules

Our recently proposed solution is the "search then converge" learning rate schedule.
η is chosen to be a fixed function of time such as the following:

η(t) = η₀ · (1 + (c/η₀)(t/τ)) / (1 + (c/η₀)(t/τ) + τ t²/τ²)

This function is approximately constant with value η₀ at times small compared to τ (the "search phase"). At times large compared with τ (the "converge phase"), the function decreases as c/t. See for example the eta vs. time curves for figs. 6 and 7. This schedule has demonstrated a dramatic improvement in convergence speed and quality of solution as compared to the traditional fixed learning rate schedule for k-means clustering (Darken and Moody, 1990). However, these benefits apply to supervised learning as well. Compare the error curve of fig. 3 with those of figs. 6 and 7. This schedule yields optimally fast asymptotic convergence if c > c*, c* = 1/(2α), where α is the smallest eigenvalue of the hessian of the function G (defined above) at the pertinent minimum (Fabian, 1968; Major and Revesz, 1973; Goldstein, 1987). Figure 1: Two contrasting parameter vector trajectories ("little drift" vs. "much drift") illustrating the notion of drift. The penalty for choosing c < c* is that the ratio of the excess error given c too small to the excess error with c large enough gets arbitrarily large as training time grows, i.e.

lim_{t→∞} E_{c<c*} / E_{c>c*} = ∞,

where E is the excess error above that at the minimum. The same holds for the ratio of the two distances to the location of the minimum in parameter space. While the above schedule works well, its asymptotic performance depends upon the user's choice of c. Since neither η₀ nor τ affects the asymptotic behavior of the system, we will discuss their selection elsewhere. Setting c > c*, however, is vital. Can such a c be determined automatically? Directly estimating α with conventional methods (by calculating the smallest eigenvalue of the hessian at our current estimate of the minimum) is too computationally demanding.
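The schedule above can be written directly; the function name and test constants below are ours.

```python
def search_then_converge(t, eta0, c, tau):
    """Darken-Moody 'search then converge' learning rate:
    approximately eta0 for t << tau (search phase), approximately c/t
    for t >> tau (converge phase)."""
    r = (c / eta0) * (t / tau)
    return eta0 * (1.0 + r) / (1.0 + r + tau * (t / tau) ** 2)
```

At t = 0 the rate is exactly η₀, and for large t the product t·η(t) approaches c, which is what makes the c > c* condition the asymptotically decisive choice.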
This would take at least O(N²) storage and computation time for each estimate, and would have to be done repeatedly (N is the number of parameters). We are investigating the possibility of a low-complexity direct estimation of α by performing a second optimization. However, here we take a more unusual approach: we shall determine whether c is large enough by observing the trajectory of the parameter (or "weight") vector. On-line Determination of Whether c < c* We propose that excessive correlation in the parameter change vectors (i.e. "drift") indicates that c is too small (see fig. 1). We define the drift as

D(t) = Σ_k d_k²(t),  d_k(t) = √T ⟨δ_k(t)⟩_T / ⟨(δ_k(t) − ⟨δ_k(t)⟩_T)²⟩_T^{1/2},

where δ_k(t) is the change in the kth component of the parameter vector at time t and the angled brackets ⟨·⟩_T denote an average over T parameter changes. We take T = at, where a ≪ 1. Notice that the numerator is the average parameter step while the denominator is the standard deviation of the steps. Figure 2: (Left) An Ornstein-Uhlenbeck process in one dimension. This process is zero-mean, gaussian, and stationary (in fact strongly ergodic). It may be thought of as a random walk with a restoring force towards zero. (Right) Measurement of the drift for the runs c = 0.1c* and c = 10c*, which are discussed in figs. 7 and 8 below. As a point of reference, if the δ_k were independent normal random variables, then the d_k would be "T-distributed" with T degrees of freedom, i.e. approximately unit-variance normals for moderate to large T. We find that δ_k may also be taken to be the kth component of the noisy gradient to the same effect. Asymptotically, we will take the learning rate to go as c/t. Choosing c too small results in a slow drift of the parameter vector towards the solution in a relatively linear trajectory. When c > c*, however, the trajectory is much more jagged. Compare figs. 7 and 8.
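A direct computation of the drift statistic is a one-liner per component. The sketch below follows the definition above, up to the exact normalization conventions; the synthetic inputs in the usage example are our own.

```python
import numpy as np

def drift(deltas):
    """Drift statistic D = sum_k d_k^2 over the last T parameter changes.
    deltas: array of shape (T, N) holding the last T parameter-change vectors.
    d_k = sqrt(T) * mean(delta_k) / std(delta_k): the numerator is the average
    step, the denominator the standard deviation of the steps."""
    T = deltas.shape[0]
    d = np.sqrt(T) * deltas.mean(axis=0) / deltas.std(axis=0)
    return float((d ** 2).sum())

rng = np.random.default_rng(1)
uncorrelated = rng.standard_normal((400, 3))          # no drift: D stays O(N)
drifting = 1.0 + 0.1 * rng.standard_normal((400, 3))  # strong correlation: D blows up
alternating = np.tile([[1.0], [-1.0]], (200, 1))      # mean step exactly zero
```

Correlated steps (a consistent direction of motion) give a huge D, while pure noise or symmetric oscillation gives a small one, which is exactly the signal proposed for detecting c < c*.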
More precisely, we find that D(t) blows up like a power of t when c is too small, but remains finite otherwise. Our experiments confirm this (for an example, see fig. 2). This provides us with a signal to use in future adaptive learning rate schemes for ensuring that c is large enough. The bold-printed statement above implies that an arbitrarily small change in c which moves it to the opposite side of c* has dramatic consequences for the behavior of the drift. The following rough argument outlines how one might prove this statement, focusing on the source of this interesting discontinuity in behavior. We simplify the argument by taking the δ_k's to be gradient measurements as mentioned above. We consider a one-dimensional problem, and modify d₁ to be √T⟨δ₁⟩_T (i.e. we ignore the denominator). Then, since T = at as stated above, we approximate

d₁ = √T⟨δ₁(t)⟩_T ≈ ⟨√t δ₁(t)⟩_T = ⟨√t [∇G(t) + ξ(t)]⟩_T.

Recall the definitions of G and ξ from the introduction above. As t → ∞, ∇G(t) → K[W(t) − W₀] for the appropriate K, by the Taylor expansion of G around W₀, the location of the local minimum. Thus

lim_{t→∞} d₁ ≈ ⟨K √t [W(t) − W₀]⟩_T + ⟨√t ξ(t)⟩_T.

Define X(t) = √t [W(t) − W₀]. Now, according to (Kushner, 1978), X(eᵗ) converges in distribution to the well-known Ornstein-Uhlenbeck process (fig. 2) when c > c*. By extending this work, one can show that X(t) converges in distribution to a deterministic power law t^p with p > 0 when c < c*. Since the ξ's are independent and have uniformly bounded variances for smooth objective functions, the second term converges in distribution to a finite-variance random variable. Figure 3: The constant η schedule (here η = 0.1), commonly used in training backpropagation networks, does not converge in the stochastic setting.
The first term converges to a finite-variance random variable if c > c*, but to a power of t if c < c*. Qualitative Behavior of Schedules We compare several fixed and adaptive learning rate schedules on a toy stochastic problem. Notice the difficulties that are encountered by some schedules even on a fairly easy problem, due to noise in the gradient. The problem is learning a two-parameter adaline in the presence of independent uniformly distributed [−0.5, 0.5] noise on the exemplar labels. Exemplars were independently uniformly distributed on [−1, 1]. The objective function has a condition number of 10, indicating the presence of the narrow ravine indicated by the elliptical isopleths in the figures. All runs start from the same parameter (weight) vector and receive the same sequence of exemplars. The misadjustment is defined as the Euclidean distance in parameter space to the minimum. Multiples of this quantity bound the usual sum-of-squares error measure above and below, i.e. sum-of-squares error is roughly proportional to the misadjustment. Results are presented in figs. 3-8. Conclusions Our empirical tests agree with our theoretical expectations that drift can be used to determine whether the crucial parameter c is large enough. Using this statistic, it will be possible to produce the first fully automatic learning rates which converge at optimal speed. We are currently investigating candidate schedules which we expect to be useful for large-scale LMS, backpropagation, and clustering applications. Figure 4: Delta-bar-delta (Jacobs, 1988) was apparently developed for use with deterministic gradient descent. It is also useful for stochastic problems with little noise, which is however not the case for this test problem. In this example η increases from its initial value, and then stabilizes.
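A small reconstruction of this kind of experiment is easy to run. The sketch below is our own (the exact parameters, schedules, and noise seed are not the paper's): a two-parameter adaline with condition number 10 and uniform [−0.5, 0.5] label noise, comparing a constant rate against a decreasing schedule with c above c*.

```python
import numpy as np

w_true = np.array([1.0, -1.0])
scale = np.array([1.0, 1.0 / np.sqrt(10.0)])  # Hessian eigenvalue ratio (condition) 10

def run(schedule, steps=50_000, tail=200):
    """LMS on the noisy adaline; returns mean misadjustment over the last `tail` steps."""
    rng = np.random.default_rng(0)            # same exemplar sequence for every schedule
    w = np.zeros(2)
    dists = []
    for t in range(1, steps + 1):
        x = rng.uniform(-1.0, 1.0, 2) * scale
        y = w_true @ x + rng.uniform(-0.5, 0.5)   # noisy exemplar label
        w += schedule(t) * (y - w @ x) * x        # LMS update
        if t > steps - tail:
            dists.append(np.linalg.norm(w - w_true))  # misadjustment
    return float(np.mean(dists))

c_star = 1.0 / (2.0 * (1.0 / 30.0))               # 1/(2*lambda_min) = 15 for this problem
mis_stc = run(lambda t: min(0.5, 2.0 * c_star / t))  # search-then-converge-like, c > c*
mis_const = run(lambda t: 0.5)                       # constant eta: persistent fluctuations
```

As expected, the decreasing schedule with c > c* ends up much closer to the minimum than the constant rate, whose misadjustment stalls at a noise floor.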
We use the algorithm exactly as it appears in Jacobs' paper, with noisy gradients substituted for the true gradient (which is unavailable in the stochastic setting). Parameters used were η₀ = 0.1, θ = 0.3, κ = 0.01, and φ = 0.1. Figure 5: Urasiev's technique (Urasiev, 1988) varies η erratically over several orders of magnitude. The large fluctuations apparently cause η to completely stop changing after a while due to finite-precision effects. Parameters used were D = 0.2, R = 2, and U = 1. Figure 6: The fixed search-then-converge schedule with c = c* gives excellent performance. However, if c* is not known, you may get performance as in the next two examples. An adaptive technique is called for. Figure 7: (c = 10c*.) Note that taking c > c* slows convergence a bit as compared to the c = c* example in fig. 6, though it could aid escape from bad local minima in a nonlinear problem. Figure 8: (c = 0.1c*.) This run illustrates the penalty to be paid if c < c*. References C. Darken and J. Moody. (1990) Note on learning rate schedules for stochastic optimization. Advances in Neural Information Processing Systems 3. 832-838. V. Fabian. (1960) Stochastic approximation methods. Czechoslovak Math. J. 10(85):123-159. V. Fabian. (1968) On asymptotic normality in stochastic approximation. Ann. Math. Stat. 39(4):1327-1332. L. Goldstein. (1987) Mean square optimality in the continuous time Robbins-Monro procedure. Technical Report DRB-306. Department of Mathematics, University of Southern California. R. Jacobs.
(1988) Increased rates of convergence through learning rate adaptation. Neural Networks. 1:295-307. H. Kesten. (1958) Accelerated stochastic approximation. Annals of Mathematical Statistics. 29:41-59. H. Kushner. (1978) Rates of convergence for sequential Monte Carlo optimization methods. SIAM J. Control and Optimization. 16:150-168. P. Major and P. Revesz. (1973) A limit theorem for the Robbins-Monro approximation. Z. Wahrscheinlichkeitstheorie verw. Geb. 27:79-86. S. Urasiev. (1988) Adaptive stochastic quasigradient procedures. In Numerical Techniques for Stochastic Optimization, Y. Ermoliev and R. Wets, Eds. Springer-Verlag.
1991
125
458
Gradient Descent: Second-Order Momentum and Saturating Error Barak Pearlmutter Department of Psychology P.O. Box 11A Yale Station New Haven, CT 06520-7447 pearlmutter-barak@yale.edu Abstract Batch gradient descent, Δw(t) = −η dE/dw(t), converges to a minimum of quadratic form with a time constant no better than (1/4) λmax/λmin, where λmin and λmax are the minimum and maximum eigenvalues of the Hessian matrix of E with respect to w. It was recently shown that adding a momentum term, Δw(t) = −η dE/dw(t) + αΔw(t−1), improves this to approximately √(λmax/λmin), although only in the batch case. Here we show that second-order momentum, Δw(t) = −η dE/dw(t) + αΔw(t−1) + βΔw(t−2), can lower this no further. We then regard gradient descent with momentum as a dynamic system and explore a nonquadratic error surface, showing that saturation of the error accounts for a variety of effects observed in simulations and justifies some popular heuristics. 1 INTRODUCTION Gradient descent is the bread-and-butter optimization technique in neural networks. Some people build special-purpose hardware to accelerate gradient descent optimization of backpropagation networks. Understanding the dynamics of gradient descent on such surfaces is therefore of great practical value. Here we briefly review the known results on the convergence of batch gradient descent; show that second-order momentum does not give any speedup; simulate a real network and observe some effects not predicted by theory; and account for these effects by analyzing gradient descent with momentum on a saturating error surface. 1.1 SIMPLE GRADIENT DESCENT First, let us review the bounds on the convergence rate of simple gradient descent without momentum to a minimum of quadratic form [11, 1]. Let w* be the minimum of E, the error, H = d²E/dw²(w*), and λᵢ, vᵢ be the eigenvalues and eigenvectors of H.
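The quadratic-form setting is easy to probe numerically. The toy Hessian below is our own example; it checks that batch gradient descent is stable exactly for η < 2/λmax, and that the slow mode decays roughly like exp(−η λmin t).

```python
import numpy as np

# Batch gradient descent on E = 0.5 * w^T H w (toy quadratic; our own example).
H = np.diag([1.0, 0.01])       # lambda_max = 1, lambda_min = 0.01, so s = 0.01

def descend(eta, steps=500):
    """Run batch gradient descent and return the final distance to the minimum."""
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - eta * (H @ w)  # dE/dw = H w
    return np.linalg.norm(w)
```

With η just under 2/λmax the iterate shrinks; just over, the λmax mode blows up; and at moderate η the λmin mode dominates the (slow) time constant.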
The weight change equation

Δw = −η dE/dw,  (1)

where Δf(t) = f(t+1) − f(t), is limited by

0 < η < 2/λmax.  (2)

We can substitute η = 2/λmax into the weight change equation to obtain convergence that tightly bounds any achievable in practice, getting a time constant of convergence of −1/log(1 − 2s) = (2s)⁻¹ + O(1), or

E − E* ≳ exp(−4st),  (3)

where we use s = λmin/λmax for the inverse eigenvalue spread of H, and ≳ is read "asymptotically converges to zero more slowly than." 1.2 FIRST-ORDER MOMENTUM Sometimes a momentum term is used, the weight update (1) being modified to incorporate a momentum term α < 1 [5, equation 16]:

Δw(t) = −η dE/dw(t) + αΔw(t−1).  (4)

The Momentum LMS algorithm, MLMS, has been analyzed by Shynk and Roy [6], who have shown that the momentum term can not speed convergence in the on-line, or stochastic gradient, case. In the batch case, which we consider here, Tugay and Tanik [9] have shown that momentum is stable when

α < 1 and 0 < η < 2(α+1)/λmax,  (5)

which speeds convergence to

E − E* ≳ exp(−(4√s + O(s)) t)  (6)

by

α* = (2 − 4√(s(1−s)))/(1 − 2s)² − 1 = 1 − 4√s + O(s),  η* = 2(α* + 1)/λmax.  (7)

2 SECOND-ORDER MOMENTUM The time constant of asymptotic convergence can be changed from O(λmax/λmin) to O(√(λmax/λmin)) by going from a first-order system, (1), to a second-order system, (4). Making a physical analogy, the first-order system corresponds to a circuit with a resistor, and the second-order system adds a capacitor to make an RC oscillator. Figure 1: Second-order momentum converges if ηλmax is less than the value plotted as "eta," as a function of α and β. The region of convergence is bounded by four smooth surfaces: three planes and one hyperbola. One of the planes is parallel to the η axis, even though the sampling of the plotting program makes it appear slightly sloped. Another is at η = 0 and thus hidden. The peak is at 4.
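The speedup from (4) is easy to see on a toy quadratic. The sketch below is our own illustration: it uses the classical heavy-ball optimum, which agrees with (7) to leading order in √s, and compares it against the best plain gradient descent.

```python
import numpy as np

H = np.diag([1.0, 0.01])     # toy Hessian: lambda_max = 1, lambda_min = 0.01
lam_max, lam_min = 1.0, 0.01
s = lam_min / lam_max

def run(eta, alpha, steps=300):
    """First-order momentum, eq. (4), on E = 0.5 * w^T H w."""
    w, dw = np.array([1.0, 1.0]), np.zeros(2)
    for _ in range(steps):
        dw = -eta * (H @ w) + alpha * dw   # momentum accumulates past steps
        w = w + dw
    return np.linalg.norm(w)

alpha = ((1 - np.sqrt(s)) / (1 + np.sqrt(s))) ** 2   # = 1 - 4*sqrt(s) + O(s), cf. (7)
eta = 4.0 / (lam_max * (1 + np.sqrt(s)) ** 2)
mom = run(eta, alpha)
plain = run(2.0 / (lam_max + lam_min), 0.0)          # best plain gradient descent
```

After the same number of steps, the momentum run is many orders of magnitude closer to the minimum: rate ≈ exp(−4√s t) versus ≈ exp(−2st) for plain descent.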
One might ask whether further gains can be had by going to a third-order system,

Δw(t) = −η dE/dw + αΔw(t−1) + βΔw(t−2).  (8)

For convergence, all the eigenvalues of the matrix

Mᵢ = ( 0 1 0 ; 0 0 1 ; −β β−α 1−ηλᵢ+α )

in (cᵢ(t−1), cᵢ(t), cᵢ(t+1))ᵀ = Mᵢ (cᵢ(t−2), cᵢ(t−1), cᵢ(t))ᵀ must have absolute value less than or equal to 1, which occurs precisely when

−1 ≤ β ≤ 1,
0 ≤ η ≤ 4(β+1)/λᵢ,
ηλᵢ/2 − (1−β) < α ≤ βηλᵢ/2 + (1−β).

For β ≤ 0 this is most restrictive for λmax, but for β > 0, λmin also comes into play. Taking the limit as λmin → 0, this gives convergence conditions for gradient descent with second-order momentum of

−1 < β,  β−1 ≤ α ≤ 1−β,
when α ≤ 3β+1:  0 < η ≤ (2/λmax)(1 + α − β),  (9)
when α ≥ 3β+1:  0 < η ≤ 4(β+1)/λmax and η ≤ (2/(βλmax))(α + β − 1),

a region shown in figure 1. Fastest convergence for λmin within this region lies along the ridge α = 3β+1, η = 2(1 + α − β)/λmax. Unfortunately, although convergence is slightly faster than with first-order momentum, the relative advantage tends to zero as s → 0, giving negligible speedup when λmax ≫ λmin. For small s, the optimal settings of the parameters are

α = 1 − (9/4)√s + O(s),
β = −(3/4)√s + O(s),  (10)
ηλmax = 4(1 − √s) + O(s)

(compare the first-order optimum α* in (7)). 3 SIMULATIONS We constructed a standard three-layer backpropagation network with 10 input units, 3 sigmoidal hidden units, and 10 sigmoidal output units. 15 associations between random 10-bit binary input and output vectors were constructed, and the weights were initialized to uniformly chosen random values between -1 and +1. Training was performed with a square error measure, batch weight updates, targets of 0 and 1, and a weight decay coefficient of 0.01. To get past the initial transients, the network was run at η = 0.45, α = 0 for 150 epochs, and at η = 0.3, α = 0.9 for another 200 epochs. The weights were then saved, and the network run for 200 epochs for η ranging from 0 to 0.5 and α ranging from 0 to 1 from that starting point.
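The eigenvalue condition on Mᵢ can be checked numerically by forming the companion matrix and computing its spectral radius for each Hessian eigenvalue. The helper and test eigenvalues below are our own sketch.

```python
import numpy as np

def converges(eta, alpha, beta, lams):
    """Second-order momentum, eq. (8), converges on a quadratic iff every
    companion matrix M_i (one per Hessian eigenvalue lambda) has all of its
    eigenvalues strictly inside the unit circle."""
    for lam in lams:
        M = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [-beta, beta - alpha, 1.0 - eta * lam + alpha]])
        if np.max(np.abs(np.linalg.eigvals(M))) >= 1.0:
            return False
    return True
```

The special cases β = 0 reproduce the earlier bounds: plain descent needs η < 2/λmax, and first-order momentum needs η < 2(α+1)/λmax.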
Figure 3 shows that the region of convergence has the shape predicted by theory. Calculation of the eigenvalues of d²E/dw² confirms that the location of the boundary is correctly predicted. Figure 2 shows that momentum speeded convergence by the amount predicted by theory. Figure 3 shows that the parameter settings that give the most rapid convergence in practice are the settings predicted by theory. However, within the region that does not converge to the minimum, there appear to be two regimes: one characterized by apparently chaotic fluctuations of the error, and one which slopes up gradually from the global minimum. Since this phenomenon is so atypical of a quadratic minimum in a linear system, which either converges or diverges, and this phenomenon seems important in practice, we decided to investigate a simple system to see if this behavior could be replicated and understood, which is the subject of the next section. 4 GRADIENT DESCENT WITH SATURATING ERROR The analysis of the sections above may be objected to on the grounds that it assumes the minimum to have quadratic form and then performs an analysis in the neighborhood of that minimum, which is equivalent to analyzing a linear unit. Surely our nonlinear backpropagation networks are richer than that. Figure 2: Error plotted as a function of time (epochs) for two settings of the learning parameters, both determined empirically: the one that minimized the error the most, and the one with α = 0 that minimized the error the most. There exists a less aggressive setting of the parameters that converges nearly as fast as the quickly converging curve but does not oscillate. A clue that this might be the case was shown in figure 3.
The region where the system converges to the minimum is of the expected shape, but rather than simply diverging outside of this region, as would a linear system, more complex phenomena are observed, in particular a sloping region. Acting on the hypothesis that this region is caused by λmax being maximal at the minimum, and gradually decreasing away from it (it must decrease to zero in the limit, since the hidden units saturate and the squared error is thus bounded), we decided to perform a dynamic systems analysis of the convergence of gradient descent on a one-dimensional nonquadratic error surface. We chose

E = 1 − 1/(1 + w²),  (11)

which is shown in figure 4, as this results in a bounded E. Letting

f(w) = w − ηE′(w) = w(1 − 2η + 2w² + w⁴)/(1 + w²)²  (12)

be our transfer function, a local analysis at the minimum gives λmax = E″(0) = 2, which limits convergence to η < 1. Since the gradient towards the minimum is always less than predicted by a second-order series at the minimum, such η are in fact globally convergent. As η passes 1, the fixed point bifurcates into the limit cycle

w = ±(√η − 1)^{1/2},  (13)

which remains stable until η → 16/9 = 1.77777..., at which point the single symmetric binary limit cycle splits into two asymmetric limit cycles, each still of period two. These in turn remain stable until η → 2.0732261475⁻, at which point repeated period doubling to chaos occurs. This progression is shown in figure 7. Figure 3: (Left) the error at epoch 550 as a function of the learning regime (η versus the momentum α). Shading is based on the height, but most of the vertical scale is devoted to nonconvergent networks in order to show the mysterious nonconvergent sloping region. The minimum, corresponding to the most darkly shaded point, is on the plateau of convergence at the location predicted by the theory.
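The map (12) and the cycle (13) can be verified directly by iteration; this quick numerical sketch (our own) reproduces both regimes.

```python
import numpy as np

def iterate(eta, w0=1.0, burn=300, keep=4):
    """Iterate gradient descent w <- w - eta * E'(w) on E = 1 - 1/(1 + w^2),
    i.e. the transfer function f of eq. (12); return the last `keep` iterates."""
    w = w0
    traj = []
    for t in range(burn + keep):
        w = w - eta * 2.0 * w / (1.0 + w * w) ** 2   # E'(w) = 2w / (1 + w^2)^2
        if t >= burn:
            traj.append(w)
    return traj
```

For η < 1 the iterates settle at the minimum w = 0; for 1 < η < 16/9 they settle onto the symmetric period-two cycle w = ±(√η − 1)^{1/2} of eq. (13).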
(Center) the region in which the network is convergent, as measured by a strictly monotonically decreasing error. Learning parameter settings for which the error was strictly decreasing have a low value, while those for which it was not have a high one. The lip at η = 0 has a value of 0, since there the error did not change. The rim at α = 1 corresponds to damped oscillation caused by η > 4αλ/(1 − α)². (Right) contour plot of the convergent plateau shows that the regions of equal error have linear boundaries in the nonoscillatory region in the center, as predicted by theory. As usual in a bifurcation, w rises sharply as η passes 1. But recall that figure 3, with the smooth sloping region, plotted the error E rather than the weights. The analogous graph here is shown in figure 6, where we see the same qualitative feature of a smooth gradual rise, which first begins to jitter as the limit cycle becomes asymmetric, and then becomes more and more jagged as the period doubles its way to chaos. From figure 7 it is clear that for higher η the peak error of the attractor will continue to rise gently until it saturates. Next, we add momentum to the system. This simple one-dimensional system duplicates the phenomena we found earlier, as can be seen by comparing figure 3 with figure 5. We see that momentum delays the bifurcation of the fixed-point attractor at the minimum by the amount predicted by (5), namely until η approaches 1 + α. At this point the fixed point bifurcates into a symmetric limit cycle of period 2,

w = ±((η/(1+α))^{1/2} − 1)^{1/2},  (14)

a formula of which (13) is a special case. This limit cycle is stable for

η < (16/9)(1 + α),  (15)

but as η reaches this limit, which happens at the same time that w reaches ±1/√3 (the inflection point of E, where E = 1/4), the limit cycle becomes unstable.
However, for α near 1 the cycle breaks down more quickly in practice, as it becomes haloed by more complex attractors which make it progressively less likely that a sequence of iterations will actually converge to the limit cycle in question. Both boundaries of this strip, η = 1 + α and η = (16/9)(1 + α), are visible in figure 5, particularly since in the region between them E obeys

E = 1 − √((1 + α)/η).  (16)

Figure 4: A one-dimensional tulip-shaped nonlinear error surface E = 1 − (1 + w²)⁻¹. Figure 5: E after 50 iterations from a starting point of 0.05, as a function of η and α. Figure 6: E as a function of η with α = 0. When convergent, the final value is shown; otherwise, E after 100 iterations from a starting point of w = 1.0. This is a more detailed graph of a slice of figure 5 at α = 0. Figure 7: The attractor in w as a function of η is shown, with the progression from a single attractor at the minimum of E to a limit cycle of period two, which bifurcates and then doubles to chaos; α = 0 (left) and α = 0.8 (right). For the numerical-simulation portions of the graphs, iterations 100 through 150 from a starting point of w = 1 or w = 0.05 are shown. The bifurcation and subsequent transition to chaos with momentum is shown for α = 0.8 in figure 7. This α is high enough that the limit cycle fails to be reached by the iteration procedure long before it actually becomes unstable. Note that this diagram was made with w started near the minimum. If it had been started far from it, the system would usually not reach the attractor at w = 0 but instead enter a halo attractor. This accounts for the policy of backpropagation experts, who gradually raise momentum as the optimization proceeds. 5 CONCLUSIONS The convergence bounds derived assume that the learning parameters are set optimally.
Finding these optimal values in practice is beyond the scope of this paper, but some techniques for achieving nearly optimal learning rates are available [4, 10, 8, 7, 3]. Adjusting the momentum feels easier to practitioners than adjusting the learning rate, as too high a value leads to small oscillations rather than divergence, and techniques from control theory can be applied to the problem [2]. However, because error surfaces in practice saturate, techniques for adjusting the learning parameters automatically as learning proceeds can not be derived under the quadratic minimum assumption, but must take into account the bifurcation and limit cycle and the sloping region of the error, or they may mistake this regime of stable error for convergence, leading to premature termination. References [1] S. Thomas Alexander. Adaptive Signal Processing. Springer-Verlag, 1986. [2] H. S. Dabis and T. J. Moir. Least mean squares as a control system. International Journal of Control, 54(2):321-335, 1991. [3] Yan Fang and Terrence J. Sejnowski. Faster learning for dynamic recurrent backpropagation. Neural Computation, 2(3):270-273, 1990. [4] Robert A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4):295-307, 1988. [5] David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, and the PDP research group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986. [6] J. J. Shynk and S. Roy. The LMS algorithm with momentum updating. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 2651-2654, June 6-9, 1988. [7] F. M. Silva and L. B. Almeida. Acceleration techniques for the backpropagation algorithm. In L. B. Almeida and C. J. Wellekens, editors, Proceedings of the 1990 EURASIP Workshop on Neural Networks. Springer-Verlag, February 1990.
(Lecture Notes in Computer Science series). [8] Tom Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3(5):561-573, 1990. [9] Mehmet Ali Tugay and Yalçin Tanik. Properties of the momentum LMS algorithm. Signal Processing, 18(2):117-127, October 1989. [10] T. P. Vogl, J. K. Mangis, A. K. Zigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59:257-263, September 1988. [11] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson Jr. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE, 64:1151-1162, 1979.
1991
126
459
Active Exploration in Dynamic Environments Sebastian B. Thrun School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 E-mail: thrun@cs.cmu.edu Knut Möller University of Bonn Dept. of Computer Science Römerstr. 164 D-5300 Bonn, Germany Abstract Whenever an agent learns to control an unknown environment, two opposing principles have to be combined, namely: exploration (long-term optimization) and exploitation (short-term optimization). Many real-valued connectionist approaches to learning control realize exploration by randomness in action selection. This might be disadvantageous when costs are assigned to "negative experiences". The basic idea presented in this paper is to make an agent explore unknown regions in a more directed manner. This is achieved by a so-called competence map, which is trained to predict the controller's accuracy, and is used for guiding exploration. Based on this, a bistable system enables smoothly switching attention between two behaviors, exploration and exploitation, depending on expected costs and knowledge gain. The appropriateness of this method is demonstrated by a simple robot navigation task. INTRODUCTION The need for exploration in adaptive control has been recognized by various authors [MB89, Sut90, Moo90, Sch90, BBS91]. Many connectionist approaches (e.g. [Mel89, MB89]) distinguish a random exploration phase, at which a controller is constructed by generating actions randomly, and a subsequent exploitation phase. Random exploration usually suffers from three major disadvantages: • Whenever costs are assigned to certain experiences, which is the case for various real-world tasks such as autonomous robot learning, chemical control, flight control etc., exploration may become unnecessarily expensive. Intuitively speaking, a child would burn itself again and again simply because it is Figure 1: The training of the model network is a system identification task.
Weights and biases of the network are estimated by gradient descent using the backpropagation algorithm. in its random phase. • Random exploration is often inefficient in terms of learning time, too [Whi91, Thr92]. Random actions usually make an agent waste plenty of time in already well-explored regions in state space, while other regions may still be poorly explored. Exploration happens by chance and is thus undirected. • Once the exploitation phase begins, learning is finished and the system is unable to adapt to time-varying, dynamic environments. However, more efficient exploration techniques rely on knowledge about the learning process itself, which is used for guiding exploration. Rather than selecting actions randomly, these exploration techniques select actions such that the expected knowledge gain is maximal. In discrete domains, this may be achieved by preferring states (or state-action pairs) that have been visited less frequently [BS90], or less recently [Sut90], or have previously shown a high prediction error [Moo90, Sch91].¹ For various discrete deterministic domains, such exploration heuristics have been proved to prevent exponential learning time [Thr92] (exponential in the size of the state space). However, such techniques require a variable associated with each state-action pair, which is not feasible if states and actions are real-valued. A novel real-valued generalization of these approaches is presented in this paper. A so-called competence map estimates the controller's accuracy. Using this estimation, the agent is driven into regions in state space with low accuracy, where the resulting learning effect is assumed to be maximal. This technique defines a directed exploration rule. In order to minimize costs during learning, exploration is combined with an exploitation mechanism using selective attention, which allows for switching between exploration and exploitation.
INDIRECT CONTROL USING FORWARD MODELS In this paper we focus on an adaptive control scheme adopted from Jordan [Jor89]: System identification (Fig. 1): Observing the input-output behavior of the unknown world (environment), a model is constructed by minimizing the difference between the observed outcome and its corresponding predictions. This is done with backpropagation. Action search using the model network (Fig. 2): Let an actual state s and a goal state s* be given. Optimal actions are searched using gradient descent in action space: starting with an initial action (e.g. randomly chosen), the next state ¹Note that these two approaches [Moo90, Sch91] are real-valued. Figure 2: Using the model for optimizing actions (exploitation). Starting with some initial action, gradient descent through the model network progressively improves actions. s is predicted with the world model. The exploitation energy function E_exploit = (s* − s)ᵀ(s* − s) measures the LMS deviation of the predicted and the desired state. Since the model network is differentiable, gradients of E_exploit can be propagated back through the model network. Using these gradients, actions are optimized progressively by gradient descent in action space, minimizing E_exploit. The resulting actions exploit the world. THE COMPETENCE MAP The general principle of many enhanced exploration schemes [BS90, Sut90, Moo90, TM91, Sch91, Thr92] is to select actions such that the resulting observations are expected to optimally improve the controller. In terms of the above control scheme, this may be realized by driving the agent into regions in state-action space where the accuracy of the model network is assumed to be low, and thus the knowledge gain by visiting these regions is assumed to be high. In order to estimate the accuracy of the model network, we introduce the notion of a competence network [Sch91, TM91].
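Jordan-style action search can be sketched with a trivial stand-in for the model network. Below, a fixed linear map M plays the role of the trained forward model (this is our own illustration, not the paper's implementation); the action is improved by gradient descent on E_exploit = (s* − Ma)ᵀ(s* − Ma), with the gradient "backpropagated" through the model analytically.

```python
import numpy as np

M = np.array([[1.0, 0.5],       # stand-in forward model: s' = M a
              [0.0, 2.0]])
s_goal = np.array([1.0, -1.0])  # goal state s*

a = np.zeros(2)                 # initial action (e.g. zeros or random)
for _ in range(2000):
    s_pred = M @ a                           # forward pass through the "model"
    grad_a = -2.0 * M.T @ (s_goal - s_pred)  # gradient of E_exploit w.r.t. the action
    a -= 0.05 * grad_a                       # gradient descent in action space
```

After the search, the predicted next state matches the goal: the found action is M⁻¹s*, which is what exploitation alone would select.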
Basically, this map estimates an upper bound on the LMS-error of the model network. This estimate is used for exploring the world by selecting actions which minimize the expected competence of the model, and thus maximize the resulting learning effect. However, training the competence map is not straightforward, since it is impossible to exactly predict the accuracy of the model network for regions of state space not visited for some time. The training procedure for the competence map is based on the assumption that the error increases (and thus competence decreases) slowly for such regions due to relearning and environmental dynamics:

1. At each time tick, backpropagation learning is applied using the last state-action pair as input and the observed LMS-prediction error of the model as target value (c.f. Fig. 3), normalized to (0, c_max) (0 <= c_max <= 1; so far we used c_max = 1).

2. For some² randomly generated state-action pairs, the competence map is subsequently trained with target 1.0 (~ largest possible error c_max) [ACL+90]. This training step establishes a heuristic, realizing the loss of accuracy in unvisited regions: over time, the output values of the competence map increase for these regions.

534 Thrun and Moller

Figure 3: Training the competence map to predict the error of the model by gradient descent (see text).

Actions are now selected with respect to an energy function E which combines both exploration and exploitation:

E = (1 - Γ) · Eexplore + Γ · Eexploit    (1)

with gain parameter Γ (0 < Γ < 1). Here the exploration energy Eexplore = 1 - competence(action) is evaluated using the competence map; minimizing Eexplore is equivalent to maximizing the predicted model error. Since both the model net and the competence net are differentiable, gradient descent in action space may be used for minimizing Eq. (1).

² In our simulations: five, with a small learning rate.
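For a fixed gain Γ, the combined energy of Eq. (1) can be sketched directly. The stand-in for the competence map below is an invented closed-form error estimate (the paper trains a network for this), and the toy world model is likewise hypothetical:

```python
import numpy as np

def e_exploit(a, s_goal, model):
    # LMS-deviation of the predicted state from the goal state.
    r = s_goal - model(a)
    return float(r @ r)

def predicted_error(a):
    # Stand-in for the competence map: assume the model has been trained
    # mostly near a = 0, so its predicted error grows with |a|.
    return 1.0 - float(np.exp(-np.sum(a ** 2)))

def e_explore(a):
    # Eexplore = 1 - competence(action); minimizing it maximizes
    # the predicted model error.
    return 1.0 - predicted_error(a)

def combined_energy(a, s_goal, model, gamma):
    # Eq. (1): E = (1 - gamma) * Eexplore + gamma * Eexploit
    return (1.0 - gamma) * e_explore(a) + gamma * e_exploit(a, s_goal, model)

model = lambda a: 0.5 * a          # toy differentiable world model
s_goal = np.array([1.0, 0.0])

a_goal = np.array([2.0, 0.0])      # action reaching the goal exactly
a_far = np.array([5.0, 5.0])       # far-out action with high predicted error
```

With gamma near 1 the goal-reaching action has the lower energy; with gamma near 0 the poorly modelled action does, so the same descent procedure serves both behaviors.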
E combines exploration with exploitation: on the one hand, minimizing Eexploit serves to avoid costs (short-term optimization); on the other hand, minimizing Eexplore ensures exploration (long-term optimization). Γ determines the portion of both target functions (which can be viewed as representing behaviors) in the action selection process. Note that c_max determines the character of exploration: if c_max is large, the agent is attracted by regions in state space which have previously shown a high prediction error. The smaller c_max is, the more the agent is attracted by rarely-visited regions.

EXPLORATION AND SELECTIVE ATTENTION

Clearly, exploration and exploitation are often conflicting and can hinder each other. E.g., if exploration and exploitation pull a mobile robot in opposite directions, the system will stay where it is. It therefore makes sense not to keep Γ constant during learning, but sometimes to focus more on exploration and sometimes more on exploitation, depending on expected costs and improvements. In our approach, this is achieved by determining the focus of attention Γ using the following bistable recursive function, which allows for smoothly switching attention between both policies. At each step of action search, let e_exploit = ΔEexploit(a) and e_explore = ΔEexplore(a) denote the expected change of both energy functions by action a. With f(·) being a positive and monotonically increasing function³,

K = Γ · f(e_exploit) - (1 - Γ) · f(e_explore)    (2)

compares the influence of action a on both energy functions under the current focus of attention Γ. The new Γ is then derived by squashing K (with c > 0):

Γ = 1 / (1 + e^(-c·K))    (3)

³ We chose f(x) = e^x in our simulations.

Figure 4: (a) Robot world. Note that there are two equally good paths leading around the obstacle.
(b) Potential field: In addition to the x-y state vector, the environment returns for each state a potential field value (the darker the color, the larger the value). Gradient ascent in the potential field yields both optimal paths depicted. Learning this potential field function is part of the system identification task.

If K > 0, the learning system is in exploitation mood and Γ > 0.5. Likewise, if K < 0, the system is in exploration mood and Γ < 0.5. Since the actual attention Γ weighs both competing energy functions, in most cases Eqs. (2) and (3) establish two stable points (fixpoints), close to 0 and 1, respectively. Attention is switched only if K changes its sign. The scalar c serves as a stability factor: the larger c is, the closer Γ is to its extremal values and the larger the switching factors Γ(1 - Γ)⁻¹ (taken from Eq. (2)).

A ROBOT NAVIGATION TASK

We will now demonstrate the benefits of active exploration using a competence map with selective attention on a simple robot navigation example. The environment is a 2-dimensional room with one obstacle and walls (see Fig. 4a), and x-y states are evaluated by a potential field function (Fig. 4b). The goal is to navigate the robot from the start to the goal position without colliding with the obstacle or a wall. Using a model network without hidden units for state prediction, and a model with two hidden layers (10 units with Gaussian activation functions in the first hidden layer, and 8 logistic units in the second) for potential field value prediction, we compared the following exploration techniques (Table 1 summarizes the results):

• Pure random exploration. In Fig. 5a the best result out of 20 runs is shown. The dark color in the middle indicates that the obstacle was touched extremely often. Moreover, the resulting controller (exploitation phase) did not find a path to the goal.

• Pure exploitation (see Fig. 5b).
(With a bit of randomness in the beginning.) This exploration technique found one of the two paths, but failed both in finding the other path and in performing proper system identification. The number of crashes during learning was significantly smaller than with random exploration.

Figure 5: Resulting models of the potential field function. (a) Random exploration. The dark color in the middle indicates the high number of crashes against the obstacle. Note that the agent is restarted whenever it crashes against a wall or the obstacle; the probability of reaching the goal is 0.0007. (b) Pure exploitation: The resulting model is accurate along the path, but inaccurate elsewhere. Only one of the two paths is identified.

Figure 6: Active exploration. (a) Resulting model of the potential field function. This model is the most accurate, and the number of crashes during training is the smallest. Both paths are found about equally often. (b) "Typical" competence map: The arrows indicate actions which maximize Eexplore (pure exploration).

Table 1: Results (averaged over 20 runs). The L2-model error is measured in relation to its initial value (= 100%).

                        # runs   # crashes   # paths found   L2-model error
random exploration       10000        9993               0            2.5 %
pure exploitation        15000       11000               1            0.7 %
active exploration       15000        4000               2            0.4 %

Figure 7: Three examples of trajectories during learning demonstrate the switching attention mechanism described in the paper. Thick lines indicate exploration mode (Γ < 0.2), and thin lines indicate exploitation (Γ > 0.8). The arrows mark some points where exploration is switched off due to a predicted collision.

• Directed exploration with selective attention. Using a competence network with two hidden layers (6 units in each hidden layer), a proper model was found in all simulations we performed (Fig. 6a), and the number of collisions was the smallest.
An intermediate state of the competence map is depicted in Fig. 6b, and three exploration runs are shown in Fig. 7.

DISCUSSION

We have presented an adaptive strategy for efficient exploration in non-discrete environments. A so-called competence map is trained to estimate the competence (error) of the world model, and is used for driving the agent to less familiar regions. In order to avoid unnecessary exploration costs, a selective attention mechanism switches between exploration and exploitation. The resulting learning system is dynamic in the sense that whenever one particular region of state space is preferred for several runs, sooner or later the exploration behavior forces the agent to leave this region. The benefits of this exploration technique have been demonstrated on a robot navigation task.

However, it should be noted that the exploration method presented seeks to explore more or less the whole state-action space. This may be reasonable for the above robot navigation task, but many state spaces, e.g. those typically found in traditional AI, are too large to be exhaustively explored even once. In order to deal with such spaces, this method should be extended by some mechanism for cutting off exploration in irrelevant regions of state-action space, which may be determined by some notion of "relevance".

Note that the technique presented here does not depend on the particular control scheme at hand. E.g., some exploration techniques in the context of reinforcement learning may be found in [Sut90, BBS91], and are surveyed and compared in [Thr92].

Acknowledgements

The authors wish to thank Jonathan Bachrach, Andy Barto, Jörg Kindermann, Long-Ji Lin, Alexander Linden, Tom Mitchell, Andy Moore, Satinder Singh, Don Sofge, Alex Waibel, and the reinforcement learning group at CMU for interesting and fruitful discussions. S.
Thrun gratefully acknowledges the support of the German National Research Center for Computer Science (GMD), where part of the research was done, and also the financial support from Siemens Corp.

References

[ACL+90] L. Atlas, D. Cohn, R. Ladner, M.A. El-Sharkawi, R.J. Marks, M.E. Aggoune, and D.C. Park. Training connectionist networks with queries and selective sampling. In D. Touretzky (ed.), Advances in Neural Information Processing Systems 2, San Mateo, CA, 1990. Morgan Kaufmann.

[BBS91] A.G. Barto, S.J. Bradtke, and S.P. Singh. Real-time learning and control using asynchronous dynamic programming. Technical Report COINS 91-57, Department of Computer Science, University of Massachusetts, MA, Aug. 1991.

[BS90] A.G. Barto and S.P. Singh. On the computational economics of reinforcement learning. In D.S. Touretzky et al. (eds.), Connectionist Models, Proceedings of the 1990 Summer School, San Mateo, CA, 1990. Morgan Kaufmann.

[Jor89] M.I. Jordan. Generic constraints on underspecified target trajectories. In Proceedings of the First International Joint Conference on Neural Networks, Washington, DC, IEEE TAB Neural Network Committee, San Diego, 1989.

[MB89] M.C. Mozer and J.R. Bachrach. Discovering the structure of a reactive environment by exploration. Technical Report CU-CS-451-89, Dept. of Computer Science, University of Colorado, Boulder, Nov. 1989.

[Mel89] B.W. Mel. Murphy: A neurally-inspired connectionist approach to learning and performance in vision-based robot motion planning. Technical Report CCSR-89-17A, Center for Complex Systems Research, Beckman Institute, University of Illinois, 1989.

[Moo90] A.W. Moore. Efficient Memory-based Learning for Robot Control. PhD thesis, Trinity Hall, University of Cambridge, England, 1990.

[Sch90] J.H. Schmidhuber. Making the world differentiable: On using supervised learning fully recurrent neural networks for dynamic reinforcement learning and planning in non-stationary environments.
Technical Report, Technische Universität München, Germany, 1990.

[Sch91] J.H. Schmidhuber. Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91, Technische Universität München, Germany, 1991.

[Sut90] R.S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, June 1990.

[TM91] S.B. Thrun and K. Möller. On planning and exploration in non-discrete environments. Technical Report 528, GMD, St. Augustin, FRG, 1991.

[Thr92] S.B. Thrun. Efficient exploration in reinforcement learning. Technical Report CMU-CS-92-102, Carnegie Mellon University, Pittsburgh, Jan. 1992.

[Whi91] S.D. Whitehead. A study of cooperative mechanisms for faster reinforcement learning. Technical Report 365, University of Rochester, Computer Science Department, Rochester, NY, March 1991.
Data Analysis using G/SPLINES

David Rogers*
Research Institute for Advanced Computer Science
MS T041-5, NASA/Ames Research Center
Moffett Field, CA 94035
INTERNET: drogers@riacs.edu

Abstract

G/SPLINES is an algorithm for building functional models of data. It uses genetic search to discover combinations of basis functions, which are then used to build a least-squares regression model. Because it produces a population of models which evolve over time, rather than a single model, it allows analysis not possible with other regression-based approaches.

1 INTRODUCTION

G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm (Friedman, 1990) with Holland's Genetic Algorithm (Holland, 1975). G/SPLINES has advantages over MARS in that it requires fewer least-squares computations, is easily extendable to non-spline basis functions, may discover models inaccessible to local-variable selection algorithms, and allows significantly larger problems to be considered. These issues are discussed in (Rogers, 1991). This paper begins with a discussion of linear regression models, followed by a description of the G/SPLINES algorithm, and finishes with a series of experiments illustrating its performance, robustness, and analysis capabilities.

* Currently at Polygen/Molecular Simulations, Inc., 796 N. Pastoria Ave., Sunnyvale, CA 94086, INTERNET: drogers@msi.com.

2 LINEAR MODELS

A common assumption used in data modeling is that the data samples are derived from an underlying function:

y_i = f(X_i) + error = f(x_i1, ..., x_in) + error

The goal of analysis is to develop a model F(X) which minimizes the least-squares error:

LSE(F) = (1/N) * sum_{i=1}^{N} (y_i - F(X_i))^2

The function F(X) can then be used to estimate the underlying function f at previously-seen data samples (recall) or at new data samples (prediction).
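The recall/prediction distinction can be sketched directly: fit F by least squares on a training set and evaluate LSE(F) on both the training and a held-out test set. All data and the underlying function below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples y_i = f(X_i) + error, with f linear for simplicity.
X_train = rng.uniform(0.0, 1.0, size=(200, 3))
X_test = rng.uniform(0.0, 1.0, size=(200, 3))
f = lambda X: 2.0 * X[:, 0] - 1.0 * X[:, 2]
y_train = f(X_train) + 0.1 * rng.normal(size=200)
y_test = f(X_test) + 0.1 * rng.normal(size=200)

def fit(X, y):
    # Least-squares fit of F(X) = a0 + a1 x1 + ... + an xn.
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def lse(coeffs, X, y):
    # LSE(F) = (1/N) * sum_i (y_i - F(X_i))^2
    A = np.column_stack([np.ones(len(X)), X])
    return float(np.mean((y - A @ coeffs) ** 2))

coeffs = fit(X_train, y_train)
recall_error = lse(coeffs, X_train, y_train)    # previously-seen samples
prediction_error = lse(coeffs, X_test, y_test)  # new samples
```

With the model family matching the underlying function, both errors settle near the noise variance; the interesting modeling question, taken up next, is how to choose the basis of F when the structure of f is unknown.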
Samples used to construct the function F(X) are in the training set; samples used to test prediction are in the test set. In constructing F(X), if we assume the model F can be written as a linear combination of basis functions {phi_k}:

F(X) = a_0 + sum_{k=1}^{M} a_k phi_k(X)

then standard least-squares regression can find the optimal coefficients {a_k}. However, selecting an appropriate set of basis functions for high-dimensional models can be difficult. G/SPLINES is primarily a method for selecting this set.

3 G/SPLINES

Many techniques develop a regression model by incremental addition or deletion of basis functions to a single model. The primary idea of G/SPLINES is to keep a collection of models, and to use the genetic algorithm to recombine among these models. G/SPLINES begins with a collection of models containing randomly-generated basis functions:

F1: {phi_1 phi_2 phi_3 ... phi_14}
F2: {theta_1 theta_2 theta_3 ... theta_11}
...
FK: {theta_1 theta_2 theta_3 ... theta_12}

The basis functions are functions which use a small number of the variables in the data set, such as SIN(X2 - 1) or (X4 - A)(X5 - .1). The model coefficients {a_k} are determined using least-squares regression. Each model is scored using Friedman's "lack of fit" (LOF) measure, which is a penalized least-squares measure of goodness of fit; this measure takes into account factors such as the number of data samples, the least-squares error, and the number of model parameters.

At this point, we repeatedly perform the genetic crossover operation:

• Two good models are probabilistically selected as "parents". The likelihood of being chosen is inversely proportional to a model's LOF score.

• Each parent is randomly "cut" into two sections, and a new model is created using a piece from each parent (first parent, second parent, new model).

• Optional mutation operators may alter the newly-created model.
• The model with the worst LOF score is replaced by this new model.

This process ends when the average fitness of the population stops improving. Some features of the G/SPLINES algorithm are significantly different from MARS: unlike incremental search, full-sized models are tested at every step; the algorithm automatically determines the proper size for models; many fewer models are tested than with MARS; and a population of models offers information not available from single-model methods.

4 MUTATION OPERATORS

Additional mutation operators were added to the system to counteract some negative tendencies of a purely crossover-based algorithm.

Problem: genetic diversity is reduced as the process proceeds (fewer basis functions in the population).
NEW: creates a new basis function by randomly choosing a basis function type and then randomly filling in the parameters.

Problem: need a process for constructing useful multidimensional basis functions.
MERGE: takes a random basis function from each parent, and creates a new basis function by multiplying them together.

Problem: models contain "hitchhiking" basis functions which contribute little.
DELETION: ranks the basis functions in order of their contribution to the approximation. It removes one or more of the least-contributing basis functions.

5 EXPERIMENTAL

Experiments were conducted on data derived from a function used by Friedman (1988):

f(X) = SIN(pi X1 X2) + 20 (X3 - 1/2)^2 + 10 X4 + 5 X5

Standard experimental conditions are as follows. Experiments used a training set containing 200 samples, and a test set containing 200 samples. Each sample contained 10 predictor variables (5 informative, 5 noninformative) and a response. Sample points were randomly selected from within the unit hypercube. The signal/noise ratio was 4.8/1.0. The G/SPLINES population consisted of 100 models. Linear truncated-power splines were used as basis functions.
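To make the last sentence concrete: a linear truncated-power spline is zero on one side of a knot and linear on the other, and products of such terms (as produced by MERGE) give multidimensional basis functions. A minimal sketch, with knot values chosen purely for illustration:

```python
def truncated_power(x, knot, sign=1):
    """Linear truncated-power spline basis term: (sign * (x - knot))_+ .
    Zero on one side of the knot, linear on the other."""
    v = sign * (x - knot)
    return v if v > 0 else 0.0

# A product of such terms gives a multidimensional basis function, e.g.
# phi(x) = (x4 - 0.4)_+ * (0.1 - x5)_+ for a hypothetical pair of knots.
def phi(x4, x5):
    return truncated_power(x4, 0.4) * truncated_power(x5, 0.1, sign=-1)
```

Each such basis function is nonzero only on a rectangular region of the input space, which is what lets a linear combination of them approximate local structure.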
After each crossover, a model had a 50% chance of getting a new basis function created by operator NEW or MERGE, and the least-contributing 10% of its basis functions deleted using operator DELETION. The standard training phase involved 10,000 crossover operations. After training, the models were tested against a set of 200 previously-unseen test samples.

5.1 G/SPLINES VS. MARS

Question: is G/SPLINES competitive with MARS?

Figure 1. Test least-squares scores versus number of least-squares regressions for G/SPLINES and MARS.

The MARS algorithm was close to convergence after 50,000 least-squares regressions, and showed no further improvement after 80,000. The G/SPLINES algorithm was close to convergence after 4,000 least-squares regressions, and showed no further improvement after 10,000. [Note: the number of least-squares regressions is not a direct measure of the computational efficiency of the algorithms, as MARS uses a technique (applicable only to linear truncated-power splines) to greatly reduce the cost of doing least-squares regression.]

To complete the comparison, we need results on the quality of the discovered models:

Final average least-squared error of the best 4 G/SPLINES models was: ~1.17
Final least-squared error of the MARS model was: ~1.12
The "best" model has a least-squared error (from the added noise) of: ~1.08

Using only linear truncated-power splines, G/SPLINES builds models comparable (though slightly inferior) to MARS. However, by using basis functions other than linear truncated-power splines, G/SPLINES can build improved models. If we repeat the experiment with additional basis function types of step functions, linear splines, and quadratic splines, we get improved results:

With additional basis functions, the final average least-squared error was: ~1.095.
I suggest that by including basis functions which reflect the underlying structure of f, the quality of the discovered models is improved.

5.2 VARIABLE ELIMINATION

Question: does variable usage in the population reflect the underlying function? (Recall that the data samples contained 10 variables; only the first 5 were used to calculate f.)

Figure 2. # of basis functions using a variable vs. # of crossover operations.

G/SPLINES correctly focuses on basis functions which use the first five variables. The relative usage of these five variables reflects the complexity of the relationship between an input variable and the response in a given dimension.

Question: is the rate of elimination of variables affected by sample size?

Figure 3. Close-up of Figure 2, showing the five variables not affecting the response. The left graph is the standard experiment; the right is from a training with 50 samples.

The left graph plots the number of basis functions containing a variable versus the number of genetic operations for the five noninformative variables in the standard experiment. The variables are slowly eliminated from consideration. The right graph plots the same information, using a training set size of 50 samples. The variables are rapidly eliminated. Smaller training sets force the algorithm to work with the most predictive variables, causing a faster elimination of less predictive variables.
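The usage curves above amount to counting, for each input variable, how many basis functions across the population reference it. A minimal sketch, with basis functions represented just by the sets of variable indices they use (population contents hypothetical):

```python
from collections import Counter

def variable_usage(population):
    """Count how many basis functions across all models use each variable.
    Each model is a list of basis functions; each basis function is
    represented here by the set of variable indices it uses."""
    usage = Counter()
    for model in population:
        for basis_vars in model:
            usage.update(basis_vars)
    return usage

# Hypothetical two-model population over variables 1..10.
population = [
    [{1, 2}, {3}, {4}],        # model F1
    [{1}, {5}, {2, 5}, {7}],   # model F2
]
usage = variable_usage(population)
```

Tracking this count over generations is what reveals the elimination of noninformative variables: their counts decay toward zero while informative variables remain in use.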
Question: Is variable elimination effective with increased numbers of noninformative variables? This experiment used the standard conditions but increased the number of predictor variables in the training and test sets to 100 (5 informative, 95 noninformative).

Figure 4. Number of basis functions which used a variable vs. variable index, after 10,000 genetic operations.

Figure 4 shows that elimination behavior was still apparent in this high-dimensional data set. The five informative variables were the first five in order of use.

5.3 MODEL SIZE

Question: What is the effect of the genetic algorithm on model size?

Figure 5. Model scores on the training set and average function length.

The left graph plots the best and average LOF score for the training set versus the number of genetic operations. The right graph plots the average number of basis functions in a model versus the number of genetic operations. Even after the LOF error is minimized, the average model length continues to decrease. This is likely due to pressure from the genetic algorithm; a compact representation is more likely to survive the crossover operation without loss. (In fact, due to the nature of the LOF function, the least-squared error of the best models is slightly increased by this procedure. The system considers the increase a fair trade-off for smaller model size.)

5.4 RESISTANCE TO OVERFITTING

Question: Does Friedman's LOF function resist overfitting with small training sets? Training was conducted with data sets of two sizes: 200 and 50.
The left graph in Figure 6 plots the population average least-squared error for the training set and the test set versus the number of genetic operations, using a training set size of 200 samples. The right graph plots the same information, but for a system using a training set size of 50 samples. In both cases, little overfitting is seen, even when the algorithm is allowed to run long after the point where improvement ceases. Training with a small number of samples still leads to models which resist overfitting.

Figure 6. LS error vs. # of operations for training with 200 and 50 samples.

Question: What is the effect of additive noise on overfitting?

Figure 7. LS error vs. # of operations for low- and high-noise data sets.

Training was conducted with training sets having a signal/noise ratio of 1.0/1.0. The left graph plots the least-squared error for the training and test set versus the number of genetic operations. The right graph plots the same information, but with a higher setting of Friedman's smoothing parameter. Noisy data results in a higher risk of overfitting. However, this can be accommodated if we set a higher value for Friedman's smoothing parameter.

5.5 ADDITIONAL BASIS FUNCTION TYPES AND TRAINING SET SIZES

Question: What is the effect of changes in training set size on the type of basis functions selected?
The experiment in Figure 8 used the standard conditions, but with many additional basis function types. The left graph plots the use of different types of basis functions using a training set of size 50. The right graph plots the same information using a training set size of 200. Simply put, different training set sizes lead to significant changes in preferences among function types. A detailed analysis of these graphs can give insight into the nature of the data and the best components for model construction.

Figure 8. # of basis functions of a given type vs. # of genetic operations, for training sets of 50 and 200 samples.

6 CONCLUSIONS

G/SPLINES is a new algorithm related to state-of-the-art statistical modeling techniques such as MARS. The strengths of this algorithm are that G/SPLINES builds models that are comparable in quality to MARS, with a greatly reduced number of intermediate model constructions; it is capable of building models from data sets that are too large for the MARS algorithm; and it is easily extendable to basis functions that are not spline-based. Weaknesses of this algorithm include the ad-hoc nature of the mutation operators; the lack of studies of the real-time performance of G/SPLINES vs. other model builders such as MARS; the need for theoretical analysis of the algorithm's convergence behavior; and the need to change the LOF function to reflect additional basis function types.

The WOLF program source code, which implements G/SPLINES, is available free to other researchers in either Macintosh or UNIX/C formats. Contact the author (drogers@riacs.edu) for information.
Acknowledgments

This work was supported in part by Cooperative Agreements NCC 2-387 and NCC 2-408 between the National Aeronautics and Space Administration (NASA) and the Universities Space Research Association (USRA). Special thanks to my domestic partner Doug Brockman, who shared my enthusiasm even though he didn't know what the hell I was up to; and my father, Philip, who made me want to become a scientist.

References

Friedman, J., "Multivariate Adaptive Regression Splines," Technical Report No. 102, Laboratory for Computational Statistics, Department of Statistics, Stanford University, November 1988 (revised August 1990).

Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.

Rogers, David, "G/SPLINES: A Hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) Algorithm with Holland's Genetic Algorithm," in Proceedings of the Fourth International Conference on Genetic Algorithms, San Diego, July 1991.
A Contrast Sensitive Silicon Retina with Reciprocal Synapses

Kwabena A. Boahen
Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91125

Andreas G. Andreou
Electrical and Computer Engineering
Johns Hopkins University
Baltimore, MD 21218

Abstract

The goal of perception is to extract invariant properties of the underlying world. By computing contrast at edges, the retina reduces incident light intensities spanning twelve decades to a twentyfold variation. In one stroke, it solves the dynamic range problem and extracts relative reflectivity, bringing us a step closer to the goal. We have built a contrast-sensitive silicon retina that models all major synaptic interactions in the outer-plexiform layer of the vertebrate retina using current-mode CMOS circuits: namely, reciprocal synapses between cones and horizontal cells, which produce the antagonistic center/surround receptive field, and cone and horizontal cell gap junctions, which determine its size. The chip has 90 x 92 pixels on a 6.8 x 6.9 mm die in 2 μm n-well technology and is fully functional.

1 INTRODUCTION

Retinal cones use both intracellular and extracellular mechanisms to adapt their gain to the input intensity level and hence remain sensitive over a large dynamic range. For example, photochemical processes within the cone modulate the photocurrents, while shunting inhibitory feedback from the network adjusts its membrane conductance. Adaptation makes the light sensitivity inversely proportional to the recent input level and the membrane conductance proportional to the background intensity. As a result, the cone's membrane potential is proportional to the ratio between the input and its spatial or temporal average, i.e. contrast. We have developed a contrast-sensitive silicon retina using shunting inhibition.
This silicon retina is the first to include variable inter-receptor coupling, allowing one to trade off resolution for enhanced signal-to-noise ratio, thereby revealing low-contrast stimuli in the presence of large transistor mismatch. In the vertebrate retina, gap junctions between photoreceptors perform this function [5]. At these specialized synapses, pores in the cell membranes are juxtaposed, allowing ions to diffuse directly from one cell to another [6]. Thus, each receptor's response is a weighted average over a local region. The signal-to-noise ratio increases for features larger than this region, in direct proportion to the space constant [5]. Our chip achieves a four-fold improvement in density over previous designs [2]. We use innovative current-mode circuits [7] that provide very high functionality while faithfully modeling the neurocircuitry. A bipolar phototransistor models the photocurrents supplied by the outer segment of the cone. We use a novel single-transistor implementation of gap junctions that exploits the physics of MOS transistors. Chemical synapses are also modeled very efficiently with a single device.

Mahowald and Mead's pioneering silicon retina [2] coded the logarithm of contrast. However, a logarithmic encoding degrades the signal-to-noise ratio because large signals are compressed more than smaller ones. Mead et al. have subsequently improved this design by including network-level adaptation [4] and adaptive photoreceptors [3, 4], but do not implement shunting inhibition. Our silicon retina was designed to encode contrast directly using shunting inhibition.

The remainder of this paper is organized as follows. The neurocircuitry of the distal retina is described in Section 2. Diffusors and the contrast-sensitive silicon retina circuit are featured in Section 3. We show that a linearized version of this circuit computes the regularized solution for edge detection.
Responses from a one-dimensional retina showing receptive field organization and contrast sensitivity, and images from the two-dimensional chip showing spatial averaging and edge enhancement, are presented in Section 4. Section 5 concludes the paper.

Figure 1: Neurocircuitry of the outer-plexiform layer. The white and black triangles are excitatory and inhibitory chemical synapses, respectively. The grey regions between adjacent cells are electrical gap junctions.

2 THE RETINA

The outer-plexiform layer of the retina produces the well-known antagonistic center/surround receptive field organization first described in detail by Kuffler in the cat [11]. The functional neurocircuitry, based on the red cone system in the turtle [10, 8, 6], is shown in Figure 1. Cones and horizontal cells are coupled by gap junctions, forming two syncytia within which signals diffuse freely. The gap junctions between horizontal cells are larger in area (larger number of elementary pores), so signals diffuse relatively far in the horizontal cell syncytium. On the other hand, signals diffuse poorly in the cone syncytium and therefore remain relatively strong locally. When light falls on a cone, its activity increases and it excites adjacent horizontal cells, which reciprocate with inhibition. Due to the way signals spread, the excitation received by nearby cones is stronger than the inhibition from horizontal cells, producing net excitation in the center. Beyond a certain distance, however, the reverse is true and so there is net inhibition in the surround. The inhibition from horizontal cells is of the shunting kind, and this gives rise to contrast sensitivity. Horizontal cells depolarize the cones by closing chloride channels, while light hyperpolarizes them by closing sodium channels [9, 1].
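The shunting interaction just described can be made concrete with a small numeric sketch (conductance values are made up for illustration, not taken from the chip or the biology): in a conductance divider, scaling the input-driven conductance and the background-driven conductance together leaves the membrane potential unchanged, which is exactly the contrast-invariance property.

```python
def cone_potential(g_na, g_cl, g_d=0.0, e_na=1.0, v_net=0.0):
    """Conductance-divider membrane potential, voltages referred to the
    chloride reversal potential: V = (g_Na*E_Na + g_D*V_net) / (g_Na + g_Cl + g_D)."""
    return (g_na * e_na + g_d * v_net) / (g_na + g_cl + g_d)

# Scaling the input (g_Na) and the background (g_Cl) by 10x leaves V unchanged:
v_dim    = cone_potential(g_na=1.0,  g_cl=5.0)   # dim scene
v_bright = cone_potential(g_na=10.0, g_cl=50.0)  # 10x brighter, same contrast
```

Because only the ratio of input to background enters, the response saturates neither in dim nor in bright scenes.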
The cone's membrane potential is given by

    V = (g_Na E_Na + g_D V_net) / (g_Na + g_Cl + g_D)    (1)

where the conductances are proportional to the number of channels that are open and voltages are referred to the reversal potential for chloride. g_D and V_net describe the effect of gap junctions to neighboring cones. Since the horizontal cells pool signals over a relatively large area, g_Cl will depend on the background intensity. Therefore, the membrane voltage will be proportional to the ratio between the input, which determines g_Na, and the background.

Figure 2: (a) Diffusor circuit. (b) Resistor circuit. The diffusor circuit simulates the currents in this linear resistive network.

3 SILICON MODELS

In the subthreshold region of operation, a MOS transistor mimics the behavior of a gap junction. Current flows by diffusion: the current through the channel is linearly proportional to the difference in carrier concentrations across it [2]. Therefore, the channel is directly analogous to a porous membrane, and carrier concentration is the analog of ionic species concentration. In conformity with the underlying physics, we call transistors in this novel mode of operation diffusors. The gate modulates the carrier concentrations at the drain and the source multiplicatively and therefore sets the diffusivity. In addition to offering a compact gap junction with electronically adjustable 'area,' the diffusor has a large dynamic range: at least five decades. A current-mode diffusor circuit is shown in Figure 2a. The currents through the diode-connected well devices M1 and M2 are proportional to the carrier concentrations at either end of the diffusor M3. Consequently, the diffusor current is proportional to the current difference between M1 and M2. Starting with the equation describing subthreshold conduction [2, p.
36], we obtain an expression for the current I_PQ in terms of the currents I_P and I_Q, the reference voltage V_ref, and the bias voltage V_L:

    I_PQ = exp(κ V_L − V_ref) (I_Q − I_P)    (2)

For simplicity, voltages and currents are in units of V_T = kT/q and I_0, the zero-bias current, respectively; all devices are assumed to have the same κ and I_0. The ineffectiveness of the gate in controlling the channel potential, measured by κ ≈ 0.75, introduces a small nonideality. There is a direct analogy between this circuit and the resistive circuit shown in Figure 2b, for which I_PQ = (C2/C1)(I_Q − I_P). The currents in these circuits are identical if C2/C1 = exp(κ V_L − V_ref). Increasing V_L or reducing V_ref has the same effect as increasing C2 or reducing C1. Chemical synapses are also modeled using a single MOS transistor. Synaptic inputs to the turtle cone have a much higher resistance, typically 0.6 GΩ or more [1], than the input conductance of a cone in the network, which is 50 MΩ or less [8]. Thus the synaptic inputs are essentially current sources. This also holds true for horizontal cells, which are even more tightly coupled. Accordingly, chemical synapses are modeled by a MOS transistor in saturation. In this regime, it behaves like a current source driving the postsynapse, controlled by a voltage in the presynapse. The same applies to the light-sensitive input supplied by the cone outer segment; its peak conductance is about 0.4 GΩ in the tiger salamander [9]. Therefore, the cone outer segment is modeled by a bipolar phototransistor, also in saturation, which produces a current proportional to incident light intensity. Shunting inhibition is not readily realized in silicon because the 'synapses' are current sources. However, to first order, we achieve the same effect by modulating the gap junction diffusivity g_D (see Equation 1). In the silicon retina circuit, we set V_L globally for a given diffusivity and control V_ref locally to implement shunting inhibition.
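In normalized units the diffusor relation is a one-liner. The sketch below (function name and sample currents are mine, for illustration) shows the property the text emphasizes: raising the gate bias V_L or lowering V_ref strengthens the coupling, like increasing C2 or reducing C1 in the resistor circuit of Figure 2b.

```python
import math

def diffusor_current(i_p, i_q, v_l, v_ref, kappa=0.75):
    """Diffusor current in normalized units (voltages in V_T, currents in I_0):
    I_PQ = exp(kappa*V_L - V_ref) * (I_Q - I_P)."""
    return math.exp(kappa * v_l - v_ref) * (i_q - i_p)

# Coupling strength is set electronically by the bias voltages:
weak   = diffusor_current(1.0, 3.0, v_l=0.0, v_ref=0.0)  # unit coupling
strong = diffusor_current(1.0, 3.0, v_l=1.0, v_ref=0.0)  # stronger coupling
```

Note the current depends only on the difference I_Q − I_P, so the diffusor behaves like a linear resistor in current space despite the exponential device physics.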
A one-dimensional version of the current-mode silicon retina circuit is shown in Figure 3. This is a direct mapping of the neurocircuitry of the outer-plexiform layer (shown in Figure 1) onto silicon, using one transistor per chemical synapse/gap junction. Devices M1 and M2 model the reciprocal synapses. M4 and M5 model the gap junctions; their diffusivities are set globally by the bias voltages V_G and V_F.

Figure 3: Current-mode outer-plexiform circuit.

The phototransistor M6 models the light-sensitive input from the cone outer segment. The transistor M3, with a fixed gate bias V_u, is analogous to a leak in the horizontal cell membrane that counterbalances synaptic input from the cone. The circuit operation is as follows. The currents I_C and I_H represent the responses of the cone and the horizontal cell, respectively. These signals are actually in the post-synaptic circuit; the nodes with voltages V_C and V_H correspond to the presynaptic signals, but they encode the logarithm of the response. Increasing the photocurrent will cause V_C to drop, turning on M2 and increasing its current I_C; this is excitation. I_C pulls V_H down, turning on M1 and increasing its current I_H; another excitatory effect. I_H, in turn, pulls V_C up, turning off M2 and reducing its current I_C; this is inhibition. The diffusors in this circuit behave just like those in Figure 2, although the well devices are not diode-connected. The relationship between the currents given by Equation 2 still holds because the voltages across the diffusor are determined by the currents through the well devices. However, the reference voltage for the diffusors between 'cones' (M4) is not fixed but depends on the 'horizontal cell' response. Since I_H = exp(V_DD − κ V_H), the diffusivity in the cone network will be proportional to the horizontal cell response. This produces shunting inhibition.
3.1 RELATION TO LINEAR MODELS

Assuming the horizontal cell activities are locally very similar due to strong coupling, we can replace the cone network diffusivity by ḡ = ⟨I_H⟩ g, where ⟨I_H⟩ is the local average. Now we treat the diffusors between the 'cones' as if they had a fixed diffusivity ḡ; the diffusivity in the 'horizontal cell' network is denoted by h. Then the equations describing the full two-dimensional circuit on a square grid are:

    I_H(x_m, y_n) = I(x_m, y_n) + ḡ Σ_{i=m±1, j=n±1} [I_C(x_i, y_j) − I_C(x_m, y_n)]    (3)
    I_C(x_m, y_n) = I_u + h Σ_{i=m±1, j=n±1} [I_H(x_m, y_n) − I_H(x_i, y_j)]    (4)

This system is a special case of the dual-layer outer-plexiform model proposed by Yagi [12]; we have the membrane admittances set to zero and the synaptic strengths set to unity. Using the second-difference approximation for the Laplacian, we obtain the continuous versions of these equations:

    I_H(x, y) = I(x, y) + ḡ ∇² I_C(x, y)    (5)
    I_C(x, y) = I_u − h ∇² I_H(x, y)    (6)

with the internode distance normalized to unity. Solving for I_H(x, y), we find

    λ ∇²∇² I_H(x, y) + I_H(x, y) = I(x, y)    (7)

This is the biharmonic equation used in computer vision to find an optimally smooth interpolating function I_H(x, y) for the noisy, discrete data I(x, y) [13]. The coefficient λ = ḡh is called the regularizing parameter; it determines the trade-off between smoothing and fitting the data. In this context, the function of the horizontal cells is to compute a smoothed version of the image, while the cones perform edge detection by taking the Laplacian of the smoothed image, as given by Equation 6. The space constant of the solutions is λ^{1/4} [13]. This predicts that the receptive field size of our retina circuit will be weakly dependent on the input intensity, since ḡ is proportional to the horizontal cell activity.

4 CHIP PERFORMANCE

Data from the one-dimensional chip showing receptive field organization is in Figure 4.
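The linearized model of Section 3.1 can be simulated directly. The sketch below is my own illustrative code, not the chip: it relaxes the two coupled equations (3)-(4) on a 1-D grid by damped fixed-point iteration (the parameter values are chosen only so the iteration converges) and reproduces the center/surround signature: net excitation at the stimulus, net inhibition immediately beside it.

```python
import numpy as np

def outer_plexiform(I, g=0.25, h=0.25, I_u=0.0, iters=2000, lr=0.2):
    """Relax I_H = I + g*lap(I_C) and I_C = I_u - h*lap(I_H) (Equations 3-4)
    on a 1-D grid with zero-flux boundaries."""
    def lap(x):  # discrete Laplacian (second difference)
        xp = np.pad(x, 1, mode='edge')
        return xp[2:] - 2.0 * x + xp[:-2]
    I_H = I.astype(float).copy()
    I_C = np.zeros_like(I_H)
    for _ in range(iters):
        I_H = (1 - lr) * I_H + lr * (I + g * lap(I_C))
        I_C = (1 - lr) * I_C + lr * (I_u - h * lap(I_H))
    return I_C, I_H

# An impulse on a dark background: the 'cone' output I_C is positive at the
# center and negative just beside it (the antagonistic surround), while the
# 'horizontal cell' output I_H is a smoothed copy of the input.
stimulus = np.zeros(21)
stimulus[10] = 1.0
I_C, I_H = outer_plexiform(stimulus)
```

Here λ = gh = 0.0625, giving a space constant λ^{1/4} ≈ 0.5 internode spacings; larger g or h widens both the smoothing and the surround, as in the chip measurements.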
As the 'cone' coupling increases, the gain decreases and the excitatory and inhibitory subregions of the receptive field become larger. Increasing the 'horizontal cell' coupling also enlarges the receptive field, but in this case the gain increases. This is because stronger diffusion results in weaker signals locally, and so the inhibition decreases. Figure 5(a) shows the variation of receptive field size with intensity: it roughly doubles for each decade. This indicates a one-third power dependence, which is close to the theoretical prediction of one-fourth for the linear model. The discrepancy is due to the body effect on transistor M2 (see Figure 3), which makes the diffusor strength increase with a power of 1/κ². Contrast sensitivity measurements are shown in Figure 5(b). The S-shaped curves are plots of the Michaelis-Menten equation used by physiologists to fit responses of cones [6]:

    V = V_max I^n / (I^n + σ^n)    (8)

Figure 4: Receptive fields measured for the 25 x 1 pixel chip; arrows indicate increasing diffusor gate voltages. The inputs were 50 nA at the center and 10 nA elsewhere, and the output current I_u was set to 20 nA. (a) Increasing inter-receptor diffusor voltages in 15 mV steps. (b) Increasing inter-horizontal cell diffusor voltages in 50 mV steps.

Figure 5: (a) Dependence of receptive field on intensity; arrows indicate increasing intensity. Center inputs were 500 pA, 5 nA, 15 nA, 50 nA, and 500 nA. The background input was always one-fifth of the center input. (b) Contrast sensitivity measurements at two background intensity levels.
Lines are fits of the Michaelis-Menten equation.
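For reference, the Michaelis-Menten (Naka-Rushton) form used for these fits, V = V_max I^n / (I^n + σ^n), has a convenient half-saturation property that makes σ easy to read off a fitted curve. A minimal sketch (function name and sample values are mine):

```python
def michaelis_menten(i, v_max, sigma, n=1.0):
    """Equation (8): V = V_max * I**n / (I**n + sigma**n)."""
    return v_max * i ** n / (i ** n + sigma ** n)

# At I = sigma the response is exactly half of V_max, for any exponent n:
half = michaelis_menten(2.0, v_max=1.0, sigma=2.0, n=1.4)
```

The exponent n controls the steepness of the S-shape on a log-intensity axis, while σ shifts the curve horizontally, which is how the two background intensity levels in Figure 5(b) appear as laterally displaced copies of the same curve.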
1991
Extracting and Learning an Unknown Grammar with Recurrent Neural Networks

C.L. Giles*, C.B. Miller, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, giles@research.nj.nec.com
D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee, *Institute for Advanced Computer Studies, Dept. of Physics and Astronomy, University of Maryland, College Park, MD 20742

Abstract

Simple second-order recurrent networks are shown to readily learn small known regular grammars when trained with positive and negative string examples. We show that similar methods are appropriate for learning unknown grammars from examples of their strings. The training algorithm is an incremental real-time, recurrent learning (RTRL) method that computes the complete gradient and updates the weights at the end of each string. After or during training, a dynamic clustering algorithm extracts the production rules that the neural network has learned. The methods are illustrated by extracting rules from unknown deterministic regular grammars. For many cases the extracted grammar outperforms the neural net from which it was extracted in correctly classifying unseen strings.

1 INTRODUCTION

For many reasons, there has been a long interest in "language" models of neural networks; see [Elman 1991] for an excellent discussion. The orientation of this work is somewhat different. The focus here is on what are good measures of the computational capabilities of recurrent neural networks. Since currently there is little theoretical knowledge, what problems would be "good" experimental benchmarks? For discrete inputs, a natural choice would be the problem of learning formal grammars - a "hard" problem even for regular grammars [Angluin, Smith 1982]. Strings of grammars can be presented one character at a time, and strings can be of arbitrary length.
However, the strings themselves would be, for the most part, feature independent. Thus, the learning capabilities would be, for the most part, feature independent and, therefore, insensitive to feature extraction choice. The learning of known grammars by recurrent neural networks has shown promise, for example [Cleeremans, et al 1989], [Giles, et al 1990, 1991, 1992], [Pollack 1991], [Sun, et al 1990], [Watrous, Kuhn 1992a,b], [Williams, Zipser 1988]. But what about learning unknown grammars? We demonstrate in this paper that not only can unknown grammars be learned, but it is possible to extract the grammar from the neural network, both during and after training. Furthermore, the extraction process requires no a priori knowledge about the grammar, except that the grammar's representation can be regular, which is always true for a grammar of bounded string length, which is the grammatical "training sample."

2 FORMAL GRAMMARS

We give a brief introduction to grammars; for a more detailed explanation see [Hopcroft & Ullman, 1979]. We define a grammar as a 4-tuple (N, V, P, S), where N and V are nonterminal and terminal vocabularies, P is a finite set of production rules, and S is the start symbol. All grammars we discuss are deterministic and regular. For every grammar there exists a language - the set of strings the grammar generates - and an automaton - the machine that recognizes (classifies) the grammar's strings. For regular grammars, the recognizing machine is a deterministic finite automaton (DFA). There exists a one-to-one mapping between a DFA and its grammar. Once the DFA is known, the production rules are the ordered triples (node, arc, node). Grammatical inference [Fu 1982] is defined as the problem of finding (learning) a grammar from a finite set of strings, often called the training sample. One can interpret this problem as devising an inference engine that learns and extracts the grammar; see Figure 1.
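To make the DFA terminology concrete, here is a minimal sketch (a hypothetical helper class, not the authors' code) in which the transition table is exactly the set of production rules (node, arc, node). The example encodes the Tomita-4 grammar used later in the paper: reject any string containing more than three 0's in a row.

```python
class DFA:
    """Deterministic finite automaton: delta maps (state, symbol) -> state."""
    def __init__(self, start, accept, delta):
        self.start, self.accept, self.delta = start, accept, delta

    def accepts(self, string):
        state = self.start
        for symbol in string:
            state = self.delta[(state, symbol)]
        return state in self.accept

# Tomita-4: states 0-3 count the current run of 0's; state 4 is a dead state.
delta = {(s, '1'): 0 for s in range(4)}          # a 1 resets the run
delta.update({(s, '0'): s + 1 for s in range(4)})  # a 0 extends the run
delta.update({(4, '0'): 4, (4, '1'): 4})           # dead state is absorbing
tomita4 = DFA(start=0, accept={0, 1, 2, 3}, delta=delta)
```

Each entry of `delta`, read as (state, symbol, next state), is one production rule; minimizing the DFA merges states with identical futures without changing the accepted language.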
Figure 1: Grammatical inference. Labelled strings from the unknown grammar enter the inference engine (the neural network); an extraction process yields the inferred grammar.

For a training sample of positive and negative strings and no knowledge of the unknown regular grammar, the problem is NP-complete (for a summary, see [Angluin, Smith 1982]). It is possible to construct an inference engine that consists of a recurrent neural network and a rule extraction process that yields an inferred grammar.

3 RECURRENT NEURAL NETWORK

3.1 ARCHITECTURE

Our recurrent neural network is quite simple and can be considered as a simplified version of the model by [Elman 1991]. For an excellent discussion of recurrent networks full of references that we don't have room for here, see [Hertz, et al 1991]. A fairly general expression for a recurrent network (which has the same computational power as a DFA) is:

    S^{t+1} = F(S^t, I^t; W)

where F is a nonlinearity that maps the state neurons S^t and the input neurons I^t at time t to the next state S^{t+1} at time t+1. The weight matrix W parameterizes the mapping and is usually learned (however, it can be totally or partially programmed). A DFA has an analogous mapping but does not use W. For a recurrent neural network, we define the mapping F and the order of the mapping in the following manner [Lee, et al 1986]. For a first-order recurrent net:

    S_i^{t+1} = σ( Σ_j W_ij S_j^t + Σ_j Y_ij I_j^t )

where N is the number of hidden state neurons and L the number of input neurons; W_ij and Y_ij are the real-valued weights for, respectively, the state and input neurons; and σ is a standard sigmoid discriminant function. The values of the hidden state neurons S^t are defined in the finite N-dimensional space [0,1]^N. Assuming all weights are connected and the net is fully recurrent, the weight space complexity is bounded by O(N² + NL). Note that the input and state neurons are not the same neurons. This representation has the capability,
assuming sufficiently large N and L, to represent any state machine. Note that there are nontrainable unit weights on the recurrent feedback connections. The natural second-order extension of this recurrent net is:

    S_i^{t+1} = σ( Σ_{j,k} W_ijk S_j^t I_k^t )

where certain state neurons become input neurons. Note that the weights W_ijk modify a product of the hidden S_j and input I_k neurons. This quadratic form directly represents the state transition diagram of a state automata process - (input, state) ⇒ (next state) - and thus makes the state transition mapping very easy to learn. It also permits the net to be directly programmed to be a particular DFA. Unpublished experiments comparing first- and second-order recurrent nets confirm this ease-in-learning hypothesis. The space complexity (number of weights) is O(LN²). For L << N, both first- and second-order are of the same complexity, O(N²).

3.2 SUPERVISED TRAINING & ERROR FUNCTION

The error function is defined by a special recurrent output neuron which is checked at the end of each string presentation to see if it is on or off. By convention, this output neuron should be on if the string is a positive example of the grammar and off if negative. In practice an error tolerance decides the on and off criteria; see [Giles, et al 1991] for detail. [If a multiclass recognition is desired, another error scheme using many output neurons can be constructed.] We define two error cases: (1) the network fails to reject a negative string (the output neuron is on); (2) the network fails to accept a positive string (the output neuron is off). This accept or reject occurs at the end of each string - we define this problem as inference versus prediction. There is no prediction of the next character in the string sequence. As such, inference is a more difficult problem than prediction. If knowledge of the classification of every substring of every string exists and alphabetical training order is preserved, then the prediction and inference problems are equivalent.
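The second-order state update is a single tensor contraction. A minimal sketch (NumPy; the helper names are mine) of one time step of the net:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def second_order_step(S, I, W):
    """One time step of the second-order net:
    S_i^{t+1} = sigmoid( sum_{j,k} W[i,j,k] * S[j] * I[k] )."""
    return sigmoid(np.einsum('ijk,j,k->i', W, S, I))

# N = 3 state neurons, L = 2 input symbols (one-hot encoded):
W = np.zeros((3, 3, 2))            # all-zero weights for illustration
S = np.array([1.0, 0.0, 0.0])      # initial state
I = np.array([1.0, 0.0])           # one-hot input symbol '0'
S_next = second_order_step(S, I, W)
```

With a one-hot input, the contraction over k selects one N x N slice of W, so each input symbol effectively picks its own state-transition matrix, which is why this form maps so directly onto a DFA's transition table.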
The training method is real-time recurrent learning (RTRL). For more details see [Williams, Zipser 1988]. The error function is defined as:

    E = (1/2) (Target − S_0^f)²

where S_0^f is the output neuron value at the final time step t = f, when the final character is presented, and Target is the desired value of (1, 0) for (positive, negative) examples. Using gradient descent training, the weight update rule for a second-order recurrent net becomes:

    ΔW_lmn = −α ∇_lmn E = α (Target − S_0^f) ∂S_0^f/∂W_lmn

where α is the learning rate. From the recursive network state equation we obtain the relationship between the derivatives of S^t and S^{t+1}:

    ∂S_i^{t+1}/∂W_lmn = σ' · [ δ_il S_m^t I_n^t + Σ_{j,k} W_ijk I_k^t ∂S_j^t/∂W_lmn ]

where σ' is the derivative of the discriminant function. This permits on-line learning, with partial derivatives calculated iteratively at each time step. Let ∂S_i^{t=0}/∂W_lmn = 0. Note that the space complexity is O(L²N⁴), which can be prohibitive for large N and full connectivity. It is important to note that for all training discussed here, the full gradient is calculated as given above.

3.3 PRESENTATION OF TRAINING SAMPLES

The training data consists of a series of stimulus-response pairs, where the stimulus is a string of 0's and 1's, and the response is either "1" for positive examples or "0" for negative examples. The positive and negative strings are generated by an unknown source grammar (created by a program that creates random grammars) prior to training. At each discrete time step, one symbol from the string activates one input neuron; the other input neurons are zero (one-hot encoding). Training is on-line and occurs after each string presentation; there is no total error accumulation as in batch learning; contrast this to the batch method of [Watrous, Kuhn 1992]. An extra end symbol is added to the string alphabet to give the network more power in deciding the best final neuron state configuration.
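The RTRL recursion above can be sketched compactly. The code below is my own illustrative implementation (the choice of initial state and the test string are assumptions): it carries the full table of partials ∂S_i/∂W_lmn forward through a string and applies one gradient step at the string's end, as the text describes.

```python
import numpy as np

def rtrl_string_update(W, inputs, target, alpha=0.5):
    """Forward a string through the second-order net while propagating
    dS_i/dW_lmn, then update W once at the end of the string."""
    N, _, L = W.shape
    S = np.zeros(N); S[0] = 1.0            # assumed initial state
    dS = np.zeros((N, N, N, L))            # dS_i/dW_lmn, zero at t = 0
    for I in inputs:                       # I: one-hot symbol vector, length L
        S_new = 1.0 / (1.0 + np.exp(-np.einsum('ijk,j,k->i', W, S, I)))
        g = S_new * (1.0 - S_new)          # sigmoid derivative
        # dS_i^{t+1} = g_i * ( delta_il * S_m I_n + sum_{j,k} W_ijk I_k dS_j^t )
        dS_next = np.einsum('ijk,k,jlmn->ilmn', W, I, dS)
        for i in range(N):
            dS_next[i, i] += np.outer(S, I)
        dS = g[:, None, None, None] * dS_next
        S = S_new
    # end-of-string gradient step on E = (1/2)(target - S_0)^2
    W = W + alpha * (target - S[0]) * dS[0]
    return W, S[0]

# One two-symbol string through a tiny net (N = 2, L = 2), positive example:
W0 = np.zeros((2, 2, 2))
one_hot = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
W1, out = rtrl_string_update(W0, one_hot, target=1.0)
```

The derivative table has N * N²L entries, which is the source of the unfavorable space complexity noted above; backpropagation through time trades this space for storing the state trajectory instead.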
This requires another input neuron and does not increase the complexity of the DFA (only N² more weights). The sequence of strings presented during training is very important and certainly gives a bias in learning. We have performed many experiments that indicate that training in alphabetical order with an equal distribution of positive and negative examples is much faster and converges more often than random order presentation. The training algorithm is on-line and incremental. A small portion of the training set is preselected and presented to the network. The net is trained at the end of each string presentation. Once the net has learned this small set or reaches a maximum number of epochs (set before training; 1000 for the experiments reported), a small number of strings (10) classified incorrectly are chosen from the rest of the training set and added to the preselected set. This small string increment prevents the training procedure from driving the network too far towards any local minima that the misclassified strings may represent. Another cycle of epoch training begins with the augmented training set. If the net correctly classifies all the training data, the net is said to converge. The total number of cycles that the network is permitted to run is also limited, usually to about 20.

4 RULE EXTRACTION (DFA GENERATION)

As the network is training (or after training), we apply a procedure we call dynamic state partitioning (dsp) for extracting the network's current conception of the DFA it is learning or has learned. The rule extraction process has the following steps: 1) clustering of DFA states, 2) constructing a transition diagram by connecting these states together with the alphabet-labelled transitions, 3) putting these transitions together to make the full digraph, forming cycles, and 4) reducing the digraph to a minimal representation.
The hypothesis is that during training, the network begins to partition (or quantize) its state space into fairly well-separated, distinct regions or clusters, which represent corresponding states in some DFA. See [Cleeremans, et al 1989] and [Watrous, Kuhn 1992a] for other clustering methods. A simple way of finding these clusters is to divide each neuron's range [0,1] into q partitions of equal size; for N state neurons, this gives q^N partitions. For example, for q = 2, the values S^t ≥ 0.5 are 1 and S^t < 0.5 are 0, and there are 2^N regions with 2^N possible values. Thus for N hidden neurons, there exist q^N possible regions. The DFA is constructed by generating a state transition diagram - associating an input symbol with the set of hidden neuron partitions that it is currently in and the set of neuron partitions it activates. This ordered triple is also a production rule. The initial partition, or start state of the DFA, is determined from the initial value of S^{t=0}. If the next input symbol maps to the same partition, we assume a loop in the DFA. Otherwise, a new state in the DFA is formed. This constructed DFA may contain a maximum of q^N states; in practice it is usually much less, since not all neuron partition sets are ever reached. This is basically a tree pruning method, and different DFA could be generated based on the choice of branching order. The extracted DFA can then be reduced to its minimal size using standard minimization algorithms (an O(N²) algorithm, where N is the number of DFA states) [Hopcroft, Ullman 1979]. [This minimization procedure does not change the grammar of the DFA; the unminimized DFA has the same time complexity as the minimized DFA. The process just rids the DFA of redundant, unnecessary states and reduces the space complexity.] Once the DFA is known, the production rules are easily extracted. Since many partition values of q are available, many DFA can be extracted.
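The partition-and-follow idea can be sketched as a short breadth-first search (my own illustrative implementation, with the trained network abstracted as a one-step function):

```python
def extract_dfa(net_step, S0, alphabet, q=2, max_states=100):
    """Quantize each analog state vector into q bins per neuron; each distinct
    bin tuple is a DFA state. Follow the net from the start state, recording
    the production rules (state, symbol, next state)."""
    quantize = lambda S: tuple(min(int(s * q), q - 1) for s in S)
    start = quantize(S0)
    reps = {start: S0}                 # one analog representative per partition
    delta = {}
    frontier = [start]
    while frontier and len(reps) <= max_states:
        p = frontier.pop(0)
        for sym in alphabet:
            S_next = net_step(reps[p], sym)
            p_next = quantize(S_next)
            delta[(p, sym)] = p_next
            if p_next not in reps:
                reps[p_next] = S_next
                frontier.append(p_next)
    return start, set(reps), delta

# Toy stand-in for a trained net: one state neuron that flips on '1'
# (a parity machine), with analog values 0.1 and 0.9 for the two clusters.
flip = lambda S, sym: [1.0 - S[0]] if sym == '1' else list(S)
start, states, delta = extract_dfa(flip, [0.1], alphabet='01')
```

The resulting `delta` is the extracted transition diagram; a standard minimization pass would then merge equivalent partitions, and the choice of which representative analog state to follow corresponds to the branching-order choice mentioned above.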
How is the q that gives the best DFA chosen? Or, viewed in another way: using different q, which DFA gives the best representation of the grammar of the training set? One approach is to use different q's (starting with q = 2), different branching orders, and different runs with different numbers of neurons and different initial conditions, and see if any similar sets of DFA emerge. Choose the DFA whose similarity set has the smallest number of states and appears most often - an Occam's razor assumption. Define this guess of the DFA as DFA_g. This method seems to work fairly well. Another is to see which of the DFA gives the best performance on the training set, assuming that the training set is not perfectly learned. We have little experience with this method, since we usually train to perfection on the training set. It should be noted that this DFA extraction method may be applied to any discrete-time recurrent net, regardless of network order or number of hidden layers. Preliminary results on first-order recurrent networks show that the same DFA are extracted as for second-order, but the first-order nets are less likely to converge and take longer to converge than second-order.

5 SIMULATIONS - GRAMMARS LEARNED

Many different small (< 15 states) regular known grammars have been learned successfully with both first-order [Cleeremans, et al 1989] and second-order recurrent models [Giles, et al 1991] and [Watrous, Kuhn 1992a]. In addition, [Giles, et al 1990 & 1991] and [Watrous, Kuhn 1992b] show how the corresponding DFA and production rules can be extracted. However, for all of the above work, the grammars to be learned were already known. What is more interesting is the learning of unknown grammars. In Figure 2b is a randomly generated minimal 10-state regular grammar created by a program whose only inputs are the number of states of the unminimized DFA and the alphabet size p. (A good estimate of the number of possible unique DFA is n 2^n n^{pn}/n!
[Alon, et al 1991], where n is the number of DFA states.) The shaded state is the start state, filled and dashed arcs represent 1 and 0 transitions, and all final states have a shaded outer circle. This unknown (honestly, we didn't look) DFA was learned with both 6 and 10 hidden-state-neuron second-order recurrent nets, using the first 1000 strings in alphabetical training order (we could ask the unknown grammar for strings). Of two runs each for 10 and 6 neurons, both of the 10-neuron runs and one of the 6-neuron runs converged in less than 1000 epochs. (The initial weights were all randomly chosen between [−1, 1], and the learning rate and momentum were both 0.5.) Figure 2a shows one of the unminimized DFA that was extracted for a partition parameter of q = 2. The minimized 10-state DFA, Figure 2b, appeared for q = 2 for one 10-neuron net and for q = 2, 3, 4 of the converged 6-neuron net. Consequently, using our previous criteria, we chose this DFA as DFA_g, our guess at the unknown grammar. We then asked the program what the grammar was and discovered we were correct in our guess.

Figures 2a & 2b: Unminimized and minimized 10-state random grammar.

The other minimized DFA for different q's were all unique and usually very large (number of states > 100). The trained recurrent nets were then checked for generalization errors on all strings up to length 15. All made a small number of errors, usually less than 1% of the total of 65,535 strings. However, the correct extracted DFA was perfect and, of course, makes no errors on strings of any length. Again [Giles, et al 1991, 1992], the extracted DFA outperforms the trained neural net from which the DFA was extracted. In Figures 3a and 3b, we see the dynamics of DFA extraction as a 4-hidden-neuron neural network is learning, as a function of epoch and partition size. This is for grammar Tomita-4 [Giles, et al 1991, 1992] - a 4-state grammar that rejects any string which has more than three 0's in a row.
The number of states of the extracted DFA starts out small, then increases, and finally decreases to a constant value as the grammar is learned. As the partition q of the neuron space increases, the number of minimized and unminimized states increases. When the grammar is learned, the number of minimized states becomes constant and, as expected, the number of minimized states, independent of q, becomes the number of states in the grammar's DFA - 4.

Figures 3a & 3b: Size of the number of states (unminimized and minimized) of the extracted DFA versus training epoch, for different partition parameters q (triangles: q = 4). The correct state size is 4.

6 CONCLUSIONS

Simple recurrent neural networks are capable of learning small regular unknown grammars rather easily and generalize fairly well on unseen grammatical strings. The training results are fairly independent of the initial values of the weights and the number of neurons. For a well-trained neural net, the generalization performance on long unseen strings can be perfect. A heuristic algorithm called dynamic state partitioning was created to extract deterministic finite state automata (DFA) from the neural network, both during and after training. Using a standard DFA minimization algorithm, the extracted DFA can be reduced to an equivalent minimal-state DFA, which has reduced space (not time) complexity. When the source or generating grammar is unknown, a good guess of the unknown grammar, DFA_g, can be obtained from the minimal DFA that is most often extracted from different runs with different numbers of neurons and initial conditions. From the extracted DFA, minimal or not, the production rules of the learned grammar are evident.
There are some interesting aspects of the extracted DFA. Each of the unminimized DFA seems to be unique, even those with the same number of states. For recurrent nets that converge, it is often possible to extract DFA that are perfect, i.e. that give the grammar of the unknown source grammar. For these cases, all unminimized DFA whose minimal sizes have the same number of states constitute a large equivalence class of neural-net-generated DFA and have the same performance on string classification. This equivalence class extends across neural networks which vary both in size (number of neurons) and initial conditions. Thus, the extracted DFA gives a good indication of how well the neural network learns the grammar. In fact, for most of the trained neural nets, the extracted DFA_g outperforms the trained neural networks in classification of unseen strings. (By definition, a perfect DFA will correctly classify all unseen strings.) This is not surprising, due to the possibility of error accumulation as the neural network classifies long unseen strings [Pollack 1991]. However, when the neural network has learned the grammar well, its generalization performance can be perfect on all strings tested [Giles, et al 1991, 1992]. Thus, the neural network can be considered as a tool for extracting a DFA that is representative of the unknown grammar. Once DFA_g is obtained, it can be used independently of the trained neural network. The learning of small DFA using second-order techniques and the full gradient computation reported here and elsewhere [Giles, et al 1991, 1992], [Watrous, Kuhn 1992a, 1992b] gives a strong impetus to using these techniques for learning DFA. The question of DFA state capacity and scalability is unresolved. Further work must show how well these approaches can model grammars with large numbers of states and establish a theoretical and experimental relationship between DFA state capacity and neural net size.
Acknowledgments

The authors acknowledge useful and helpful discussions with E. Baum, M. Goudreau, G. Kuhn, K. Lang, L. Valiant, and R. Watrous. The University of Maryland authors gratefully acknowledge partial support from AFOSR and DARPA.

References

N. Alon, A.K. Dewdney, and T.J. Ott, Efficient Simulation of Finite Automata by Neural Nets, Journal of the ACM, Vol 38, p. 495 (1991).
D. Angluin, C.H. Smith, Inductive Inference: Theory and Methods, ACM Computing Surveys, Vol 15, No 3, p. 237 (1983).
A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite State Automata and Simple Recurrent Networks, Neural Computation, Vol 1, No 3, p. 372 (1989).
J.L. Elman, Distributed Representations, Simple Recurrent Networks, and Grammatical Structure, Machine Learning, Vol 7, No 2/3, p. 91 (1991).
K.S. Fu, Syntactic Pattern Recognition and Applications, Prentice-Hall, Englewood Cliffs, NJ, Ch. 10 (1982).
C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, D. Chen, Higher Order Recurrent Networks & Grammatical Inference, Advances in Neural Information Processing Systems 2, D.S. Touretzky (ed), Morgan Kaufmann, San Mateo, Ca, p. 380 (1990).
C.L. Giles, D. Chen, C.B. Miller, H.H. Chen, G.Z. Sun, Y.C. Lee, Grammatical Inference Using Second-Order Recurrent Neural Networks, Proceedings of the International Joint Conference on Neural Networks, IEEE 91CH3049-4, Vol 2, p. 357 (1991).
C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks, Neural Computation, accepted for publication (1992).
J. Hertz, A. Krogh, R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, Redwood City, Ca., Ch. 7 (1991).
J.E. Hopcroft, J.D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, Reading, Ma. (1979).
Y.C. Lee, G. Doolen, H.H. Chen, G.Z. Sun, T. Maxwell, H.Y. Lee, C.L.
Giles, Machine Learning Using a Higher Order Correlational Network, Physica D, Vol 22-D, No 1-3, p. 276 (1986).
J.B. Pollack, The Induction of Dynamical Recognizers, Machine Learning, Vol 7, No 2/3, p. 227 (1991).
G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, D. Chen, Connectionist Pushdown Automata that Learn Context-Free Grammars, Proceedings of the International Joint Conference on Neural Networks, Washington D.C., Lawrence Erlbaum Pub., Vol 1, p. 577 (1990).
R.L. Watrous, G.M. Kuhn, Induction of Finite-State Languages Using Second-Order Recurrent Networks, Neural Computation, accepted for publication (1992a) and these proceedings (1992b).
R.J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, Neural Computation, Vol 1, No 2, p. 270 (1989).
1991
Markov Random Fields Can Bridge Levels of Abstraction

Paul R. Cooper
Institute for the Learning Sciences
Northwestern University
Evanston, IL
cooper@ils.nwu.edu

Peter N. Prokopowicz
Institute for the Learning Sciences
Northwestern University
Evanston, IL
prokopowicz@ils.nwu.edu

Abstract

Network vision systems must make inferences from evidential information across levels of representational abstraction, from low-level invariants, through intermediate scene segments, to high-level behaviorally relevant object descriptions. This paper shows that such networks can be realized as Markov Random Fields (MRFs). We show first how to construct an MRF functionally equivalent to a Hough transform parameter network, thus establishing a principled probabilistic basis for visual networks. Second, we show that these MRF parameter networks are more capable and flexible than traditional methods. In particular, they have a well-defined probabilistic interpretation, intrinsically incorporate feedback, and offer richer representations and decision capabilities.

1 INTRODUCTION

The nature of the vision problem dictates that neural networks for vision must make inferences from evidential information across levels of representational abstraction. For example, local image evidence about edges might be used to determine the occluding boundary of an object in a scene. This paper demonstrates that parameter networks [Ballard, 1984], which use voting to bridge levels of abstraction, can be realized with Markov Random Fields (MRFs). We show two main results. First, an MRF is constructed with functionality formally equivalent to that of a parameter net based on the Hough transform. Establishing this equivalence provides a sound probabilistic foundation for neural networks for vision. This is particularly important given the fundamentally evidential nature of the vision problem.
Second, we show that parameter networks constructed from MRFs offer a more flexible and capable framework for intermediate vision than traditional feedforward parameter networks with threshold decision making. In particular, MRF parameter nets offer a richer representational framework, the potential for more complex decision surfaces, an integral treatment of feedback, and probabilistically justified decision and training procedures. Implementation experiments demonstrate these features. Together, these results establish a basis for the construction of integrated network vision systems with a single well-defined representation and control structure that intrinsically incorporates feedback.

2 BACKGROUND

2.1 HOUGH TRANSFORM AND PARAMETER NETS

One approach to bridging levels of abstraction in vision is to combine local, highly variable evidence into segments which can be described compactly by their parameters. The Hough transform offers one method for obtaining these high-level parameters. Parameter networks implement the Hough transform in a parallel feedforward network. The central idea is voting: local low-level evidence casts votes via the network for compatible higher-level parameterized hypotheses. The classic Hough example finds lines from edges. Here local evidence about the direction and magnitude of image contrast is combined to extract the parameters of lines (e.g. slope-intercept), which are more useful scene segments. The Hough transform is widely used in computer vision (e.g. [Bolle et al., 1988]) to bridge levels of abstraction.

2.2 MARKOV RANDOM FIELDS

Markov Random Fields offer a formal foundation for networks [Geman and Geman, 1984] similar to that of the Boltzmann machine. MRFs define a prior joint probability distribution over a set X of discrete random variables. The possible values for the variables can be interpreted as possible local features or hypotheses.
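The voting idea can be made concrete with a toy accumulator. The sketch below uses the (theta, rho) line parameterization rather than slope-intercept (to keep the parameter grid bounded); the point set and grid resolution are invented for illustration.

```python
import math
from collections import Counter

def hough_vote(edge_points, n_theta=4, rho_res=1.0):
    """Each edge point casts one vote for every discretized line
    (theta, rho) that passes through it, where rho = x*cos(t) + y*sin(t)."""
    acc = Counter()
    for x, y in edge_points:
        for i in range(n_theta):
            t = math.pi * i / n_theta
            rho = round((x * math.cos(t) + y * math.sin(t)) / rho_res)
            acc[(i, rho)] += 1
    return acc

# Ten points on the horizontal line y = 5 plus one stray point: the cell
# (i = 2 of 4, i.e. theta = 90 degrees; rho = 5) collects all ten votes.
points = [(x, 5) for x in range(10)] + [(3, 7)]
(theta_i, rho), votes = max(hough_vote(points).items(), key=lambda kv: kv[1])
print(theta_i, rho, votes)   # 2 5 10
```

The winning accumulator cell is the high-level hypothesis supported by the most low-level evidence, which is exactly the feedforward sum that the MRF construction of section 3 reproduces with clique potentials.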
Each variable is associated with a node s in an undirected graph (or network), and can be written as X_s. An assignment of values to all the variables in the field is called a configuration, and is denoted ω; an assignment of a single variable is denoted ω_s. Each fully-connected neighborhood C in a configuration of the field has a weight, or clique potential, V_C. We are interested in the probability distributions P over the random field X. Markov Random Fields have a locality property:

    P(X_s = ω_s | X_r = ω_r, r ∈ S, r ≠ s) = P(X_s = ω_s | X_r = ω_r, r ∈ N_s)    (1)

that says roughly that the state of a site is dependent only upon the state of its neighbors N_s. MRFs can also be characterized in terms of an energy function U with a Gibbs distribution:

    P(ω) = e^(−U(ω)/T) / Z    (2)

where T is the temperature, and Z is a normalizing constant. If we are interested only in the prior distribution P(ω), the energy function U is defined as:

    U(ω) = Σ_{c∈C} V_c(ω)    (3)

where C is the set of cliques defined by the neighborhood graph, and the V_c are the clique potentials. Specifying the clique potentials thus provides a convenient way to specify the global joint prior probability distribution P, i.e. to encode prior domain knowledge about plausible structures. Suppose we are instead interested in the distribution P(ω|O) on the field after an observation O, where an observation constitutes a combination of spatially distinct observations at each local site. The evidence from an observation at a site is denoted P(O_s|ω_s) and is called a likelihood. Assuming likelihoods are local and spatially distinct, it is reasonable to assume that they are conditionally independent. Then, with Bayes' Rule we can derive:

    P(ω|O) = e^(−U(ω|O)/T) / Z_O,   where   U(ω|O) = U(ω) − T Σ_s log P(O_s|ω_s)    (4)

The MRF definition, together with evidence from the current problem, leaves a probability distribution over all possible configurations.
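These definitions can be checked numerically on a tiny two-site field, taking the posterior energy to be the prior energy minus T times the summed log-likelihoods (the standard Bayes combination). The clique potentials and likelihood tables below are invented purely for illustration.

```python
import math
from itertools import product

T = 1.0
labels = (0, 1)

def V(w):
    """Clique potentials: one binary clique that prefers equal labels,
    plus a unary potential on the first site (illustrative values)."""
    pair = -1.0 if w[0] == w[1] else +1.0
    unary = -0.5 if w[0] == 1 else 0.0
    return pair + unary

def U(w, lik=None):
    """Prior energy; with observations, posterior energy
    U(w|O) = U(w) - T * sum_s log P(O_s | w_s)."""
    u = V(w)
    if lik is not None:
        u -= T * sum(math.log(lik[s][w[s]]) for s in range(len(w)))
    return u

lik = [{0: 0.2, 1: 0.8},   # P(O_1 | w_1): site 1's observation favours label 1
       {0: 0.6, 1: 0.4}]   # P(O_2 | w_2): weak evidence favouring label 0

def gibbs(energy):
    """Gibbs distribution over all configurations of the two-site field."""
    Z = sum(math.exp(-energy(w) / T) for w in product(labels, repeat=2))
    return {w: math.exp(-energy(w) / T) / Z for w in product(labels, repeat=2)}

prior = gibbs(U)
post = gibbs(lambda w: U(w, lik))
best = max(post, key=post.get)     # maximal probability = minimal energy
print(best)                        # (1, 1)
```

The equal-label prior and the strong evidence at site 1 together pull the maximum-posterior configuration to (1, 1), even though site 2's evidence weakly prefers 0.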
An algorithm is then used to find a solution, normally the configuration of maximal probability, or equivalently, minimal energy as expressed in equation 4. The problem of minimizing non-convex energy functions, especially those with many local minima, has been the subject of intense scrutiny recently (e.g. [Kirkpatrick et al., 1983; Hopfield and Tank, 1985]). In this paper we focus on developing MRF representations wherein the minimum energy configuration defines a desirable goal, not on methods of finding the minimum. In our experiments we have used the deterministic Highest Confidence First (HCF) algorithm [Chou and Brown, 1990]. MRFs have been widely used in computer vision applications, including image restoration, segmentation, and depth reconstruction [Geman and Geman, 1984; Marroquin, 1985; Chellapa and Jain, 1991]. All these applications involve flat representations at a single level of abstraction. A novel aspect of our work is the hierarchical framework which explicitly represents visual entities at different levels of abstraction, so that these higher-order entities can serve as an interpretation of the data as well as play a role in further constraint satisfaction at even higher levels.

3 CONSTRUCTING MRFS EQUIVALENT TO PARAMETER NETWORKS

Here we define a Markov Random Field that computes a Hough transform; i.e. it detects higher-order features by tallying weighted votes from low-level image components and thresholding the sum.

[Figure 1: Left: Hough-transform parameter net (input nodes, linear sum and threshold). Input determines confidence f_i in each low-level feature; these confidences are weighted (w_i), summed, and thresholded. Right: Equivalent MRF. Circles show variables with possible labels and non-zero unary clique potentials; lines show neighborhoods. The unary potential of label ¬E is −θ; the potentials for the four labellings of the binary cliques are E e_i: −w_i f_max, E ¬e_i: K2, ¬E e_i: K1, ¬E ¬e_i: 0.]

The MRF has one discrete variable for the higher-order feature, whose possible values are exists and doesn't exist, and one discrete variable for each voting element, with the same two possible values. Such a field could be replicated in space to compute many features simultaneously. The construction follows from two ideas: first, the clique potentials of the network are defined such that only two of the many configurations need be considered, the other configurations being penalized by high clique potentials (i.e. low a priori probability). One configuration encodes the decision that the higher-order feature exists, the other that it doesn't exist. The second point is that the energy of the "doesn't exist" configuration is independent of the observation, while the energy of the "exists" configuration improves with the strength of the evidence. Consider a parameter net for the Hough transform that represents only a single parameterized image segment (e.g. a line segment) and a set of low-level features (e.g. edges) which vote for it (Figure 1, left). The variables, labels, and neighborhoods of the equivalent MRF are defined in the right side of Figure 1. The clique potentials, which depend on the Hough parameters, are shown in the right side of the figure for a single neighborhood of the graph (there are four ways to label this clique). Unspecified unary potentials are zero. Evidence applies only to the labels e_i; it is the likelihood of making a local observation O_i:

    P(O_i | e_i) = e^(w_i (f_i − f_max))    (5)

In lemma 1, we show that the configuration ω_E = E e_1 e_2 … e_n has an energy equal to the negated weighted sum of the feature inputs, and configuration ω_¬E = ¬E ¬e_1 ¬e_2 … ¬e_n has a constant energy equal to the negated Hough threshold.
Then, in lemma 2, we show that the clique potentials restrict the possible configurations to only these two, so that the network must have its minimum energy in a configuration whose high-level feature has the correct label.

Lemma 1:
    U(ω_E | O) = −Σ_{i=1}^n w_i f_i
    U(ω_¬E | O) = −θ

Proof: The energy contributed by the clique potentials in ω_E is Σ_{i=1}^n −w_i f_max. Defining W = Σ_{i=1}^n w_i, this simplifies to −W f_max. The evidence also contributes to the energy of ω_E, in the form −Σ_{i=1}^n log P(O_i | e_i). Substituting from 5 into 4 and simplifying gives the total posterior energy of ω_E:

    U(ω_E | O) = −W f_max + W f_max − Σ_{i=1}^n w_i f_i = −Σ_{i=1}^n w_i f_i    (6)

The energy of the configuration ω_¬E does not depend on evidence derived from the Hough features. It has only one clique with a non-zero potential, the unary clique of label ¬E. Hence U(ω_¬E | O) = −θ. □

Lemma 2:
    (∀ω)(ω = E … ¬e_k …) ⟹ U(ω | O) > U(ω_E | O)
    (∀ω)(ω = ¬E … e_k …) ⟹ U(ω | O) > U(ω_¬E | O)

Proof: For a mixed configuration ω = E … ¬e_k …, changing label ¬e_k to e_k adds energy because of the evidence associated with e_k. This is at most w_k f_max. It also removes energy because of the potential of the clique E e_k, which is −w_k f_max. Because the clique potential K2 from E ¬e_k is also removed, if K2 > 0, then changing this label always reduces the energy. For a mixed configuration ω = ¬E … e_k …, changing the low-level label e_k to ¬e_k cannot add to the energy contributed by evidence, since ¬e_k has no evidence associated with it. There is no binary clique potential for ¬E ¬e_k, but the potential K1 for clique ¬E e_k is removed. Therefore, again, choosing any K1 > 0 reduces energy and ensures that compatible labels are preferred. □

From lemma 2, there are two configurations that could possibly have minimal posterior energy.
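Both lemmas can be verified by exhaustive enumeration on a small field. The weights, confidences, and threshold below are arbitrary test values; the potentials follow the Figure 1 table (unary −θ on ¬E, −w_i·f_max on each E e_i clique, penalties K1, K2 > 0 on the mixed pairs), and the evidence term is the negative log-likelihood of equation (5).

```python
import itertools

w     = [1.0, 2.0, 1.5]      # Hough weights (arbitrary test values)
f     = [0.9, 0.2, 0.7]      # observed confidences f_i in [0, f_max]
f_max = 1.0
theta = 2.0                  # Hough threshold
K1 = K2 = 5.0                # penalties for mixed configurations (> 0)

def energy(E, e):
    """Posterior energy of a configuration (E, e_1..e_n): clique potentials
    from Figure 1 plus the evidence term -log P(O_i|e_i) = w_i(f_max - f_i)."""
    u = 0.0 if E else -theta                       # unary potential of label ¬E
    for wi, fi, ei in zip(w, f, e):
        if E and ei:   u += -wi * f_max            # clique  E e_i
        elif E:        u += K2                     # clique  E ¬e_i
        elif ei:       u += K1                     # clique ¬E e_i
        if ei:         u += wi * (f_max - fi)      # evidence attached to e_i
    return u

configs = [(E, e) for E in (True, False)
                  for e in itertools.product((True, False), repeat=len(w))]
best_E, best_e = min(configs, key=lambda c: energy(*c))
vote = sum(wi * fi for wi, fi in zip(w, f))
print(best_E, vote > theta)   # the MRF decision matches the Hough threshold
```

With these values the weighted vote is 2.35 > θ = 2.0, and the all-exists configuration ω_E indeed has the minimum energy −Σ w_i f_i, while ω_¬E sits at the constant −θ, exactly as lemma 1 states.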
From lemma 1, the configuration which represents the existence of the higher-order feature is preferred if and only if the weighted sum of the evidence exceeds the threshold, as in the Hough transform. Often it is desirable to find the mode in a high-level parameter space rather than those elements which surpass a fixed threshold. Finding a single mode is easy to do in a Hough-like MRF: add lateral connections between the exists labels of the high-level features to form a winner-take-all network. If the potentials for these cliques are large enough, it is not possible for more than one variable corresponding to a high-level feature to be labeled exists.

4 BEYOND HOUGH TRANSFORMS: MRF PARAMETER NETS

The essentials of a parameter network are a set of variables representing low-order features, a set of variables representing high-order features, and the appropriate weighted connections between them.

[Figure 2: Noisy image data.]

[Figure 3: Three parameter-net MRF experiments: white dots in the lower images indicate the decision that a horizontal or vertical local edge is present. Upper images show the horizontal and vertical lines found. The left net is a feedforward Hough transform; the middle net uses positive feedback from lines to edges; the right net uses negative feedback, from non-existing lines to non-existing edges.]

This section explores the characteristics of more "natural" MRF parameter networks, still based on the same variables and connections, but not limited to binary label sets and sum/threshold decision procedures.

4.1 EXPERIMENTS WITH FEEDBACK

The Hough transform and its parameter net instantiation are inherently feedforward. In contrast, all MRFs intrinsically incorporate feedback. We experimented with a network designed to find lines from edges. Horizontal and vertical edge inputs are represented at the low level, and horizontal and vertical lines which span the image at the high level.
The input data look like Figure 2. Probabilistic evidence for the low-level edges is generated from pixel data using a model of edge-image formation [Sher, 1987]. The edges vote for compatible lines. In Figure 3, the decision of the feedforward, Hough transform MRF is shown at the left: edges exist where the local evidence is sufficient; lines exist where enough votes are received. Keeping the same topology, inputs, and representations in the MRF, we added top-down feedback by changing binary clique potentials so that the existence of a line at the high level is more strongly compatible with the existence of its edges. Missing edges are filled in (middle). By making non-existent lines strongly incompatible with the existence of edges, noisy edges are substantially removed (right). Other MRFs for segmentation [Chou and Brown, 1990; Marroquin, 1985] find collinear edges, but cannot reason about lines and therefore cannot exploit top-down feedback.

4.2 REPRESENTATION AND DECISION MAKING

Both parameter nets and MRFs represent confidence in local hypotheses, but here the MRF framework has intrinsic advantages. MRFs can simultaneously represent independent beliefs for and against the same hypotheses. In an active vision system, which must reason about gathering as well as interpreting evidence, one could extend this to include the label don't know, allowing explicit reasoning about the condition in which the local evidence insufficiently supports any decision. MRFs can also express higher-order constraints as more than a set of pairs. The exploitation of appropriate 3-cliques, for example, has been shown to be very useful [Cooper, 1990]. Since the potentials in an MRF are related to local conditional probabilities, there is a principled way to obtain them. Observations can be used to estimate local joint probabilities, which can be converted to the clique potentials defining the prior distribution on the field [Pearl, 1988; Swain, 1990].
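The usefulness of 3-cliques can be illustrated directly: a single ternary clique potential lets the minimum-energy label of a high-level variable compute XOR of two clamped low-level labels, a decision surface that no weighted sum and threshold can produce. The potential values below are invented.

```python
import itertools

def V(h, x1, x2):
    """A single 3-clique potential rewarding agreement between the
    high-level label h and XOR of the two low-level labels (values invented)."""
    return -1.0 if h == (x1 ^ x2) else +1.0

def infer_h(x1, x2):
    """Clamp the low-level sites to (noise-free) evidence and pick the
    minimum-energy label for the high-level variable."""
    return min((0, 1), key=lambda h: V(h, x1, x2))

truth = {(a, b): infer_h(a, b) for a, b in itertools.product((0, 1), repeat=2)}
print(truth)   # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

The same truth table is unreachable for a binary-clique-free linear vote, which can only carve the input space with a single plane.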
Most evidence integration schemes require, in addition to the network topology and parameters, the definition of a decision-making process (e.g. thresholding) and a theory of parameter acquisition for that process, which is often ad hoc. To estimate the maximum posterior probability of an MRF, on the other hand, is intrinsically to make a decision among the possibilities embedded in the chosen variables and labels. The space of possible decisions (interpretations of problem input) is also much richer for MRFs than for parameter networks. For both nets, the nodes for which evidence is available define an n-dimensional problem input space. The weights divide this space into regions defined by the one best interpretation (configuration) for all problems in that region. With parameter nets, these regions are separated by planes, since only the sum of the inputs matters. In MRFs, the energy depends on the log-product of the evidence and the sum of the potentials, allowing more general decision surfaces. Non-linear decisions such as AND or XOR are easy to encode, whereas they are impossible for the linear Hough transform.

5 CONCLUSION

This paper has shown that parameter networks can be constructed with Markov Random Fields. MRFs can thus bridge representational levels of abstraction in network vision systems. Furthermore, it has been demonstrated that MRFs offer the potential for a significantly more powerful implementation of parameter nets, even if their topological architecture is identical to traditional Hough networks. In short, at least one method is now available for constructing intermediate vision solutions with Markov Random Fields. It may thus be possible to build entire integrated vision systems with a single well-justified formal framework: Markov Random Fields. Such systems would have a unified representational scheme, constraints and evidence with well-defined semantics, and a single control structure.
Furthermore, feedback and feedforward flow of information, crucial in any complete vision system, is intrinsic to MRFs. Of course, the task still remains to build a functioning vision system for some domain. In this paper we have said nothing about the definition of specific "features" and the constraints between them that would constitute a useful system. But providing essential tools implemented in a well-defined formal framework is an important step toward building robust, functioning systems.

Acknowledgements

Support for this research was provided by NSF grant #IRI-9110492 and by Andersen Consulting, through their founding grant to the Institute for the Learning Sciences. Patrick Yuen wrote the MRF simulator that was used in the experiments.

References

[Ballard, 1984] D.H. Ballard, "Parameter Networks," Artificial Intelligence, 22(3):235-267, 1984.
[Bolle et al., 1988] Ruud M. Bolle, Andrea Califano, Rick Kjeldsen, and R.W. Taylor, "Visual Recognition Using Concurrent and Layered Parameter Networks," Technical Report RC-14249, IBM Research Division, T.J. Watson Research Center, Dec 1988.
[Chellapa and Jain, 1991] Rama Chellapa and Anil Jain, editors, Markov Random Fields: Theory and Application, Academic Press, 1991.
[Chou and Brown, 1990] Paul B. Chou and Christopher M. Brown, "The Theory and Practice of Bayesian Image Labeling," International Journal of Computer Vision, 4:185-210, 1990.
[Cooper, 1990] Paul R. Cooper, "Parallel Structure Recognition with Uncertainty: Coupled Segmentation and Matching," In Proceedings of the Third International Conference on Computer Vision ICCV '90, Osaka, Japan, December 1990.
[Geman and Geman, 1984] Stuart Geman and Donald Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," PAMI, 6(6):721-741, November 1984.
[Hopfield and Tank, 1985] J. J. Hopfield and D. W.
Tank, "'Neural' Computation of Decisions in Optimization Problems," Biological Cybernetics, 52:141-152, 1985.
[Kirkpatrick et al., 1983] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, "Optimization by Simulated Annealing," Science, 220:671-680, 1983.
[Marroquin, 1985] Jose Luis Marroquin, "Probabilistic Solution of Inverse Problems," Technical report, MIT Artificial Intelligence Laboratory, September 1985.
[Pearl, 1988] Judea Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann, 1988.
[Sher, 1987] David B. Sher, "A Probabilistic Approach to Low-Level Vision," Technical Report 232, Department of Computer Science, University of Rochester, October 1987.
[Swain, 1990] Michael J. Swain, "Parameter Learning for Markov Random Fields with Highest Confidence First Estimation," Technical Report 350, Dept. of Computer Science, University of Rochester, August 1990.
1991
Information Measure Based Skeletonisation

Sowmya Ramachandran
Department of Computer Science
University of Texas at Austin
Austin, TX 78712-1188

Lorien Y. Pratt*
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903

*This work was partially supported by DOE #DE-FG02-91ER61129, through subcontract #097P753 from the University of Wisconsin.

Abstract

Automatic determination of proper neural network topology by trimming over-sized networks is an important area of study, which has previously been addressed using a variety of techniques. In this paper, we present Information Measure Based Skeletonisation (IMBS), a new approach to this problem where superfluous hidden units are removed based on their information measure (IM). This measure, borrowed from decision tree induction techniques, reflects the degree to which the hyperplane formed by a hidden unit discriminates between training data classes. We show the results of applying IMBS to three classification tasks and demonstrate that it removes a substantial number of hidden units without significantly affecting network performance.

1 INTRODUCTION

Neural networks can be evaluated based on their learning speed, the space and time complexity of the learned network, and generalisation performance. Pruning over-sized networks (skeletonisation) has the potential to improve networks along these dimensions as follows:

• Learning Speed: Empirical observation indicates that networks which have been constrained to have fewer parameters lack flexibility during search, and so tend to learn slower. Training a network that is larger than necessary and trimming it back to a reduced architecture could lead to improved learning speed.

• Network Complexity: Skeletonisation improves both space and time complexity by reducing the number of weights and hidden units.
• Generalisation: Skeletonisation could constrain networks to generalise better by reducing the number of parameters used to fit the data.

Various techniques have been proposed for skeletonisation. One approach [Hanson and Pratt, 1989, Chauvin, 1989, Weigend et al., 1991] is to add a cost term or bias to the objective function. This causes weights to decay to zero unless they are reinforced. Another technique is to measure the increase in error caused by removing a parameter or a unit, as in [Mozer and Smolensky, 1989, Le Cun et al., 1990]. Parameters that have the least effect on the error may be pruned from the network. In this paper, we present Information Measure Based Skeletonisation (IMBS), an alternate approach to this problem, in which superfluous hidden units in a single hidden-layer network are removed based on their information measure (IM). This idea is somewhat related to that presented in [Sietsma and Dow, 1991], though we use a different algorithm for detecting superfluous hidden units. We also demonstrate that when IMBS is applied to a vowel recognition task, to a subset of the Peterson-Barney 10-vowel classification problem, and to a heart disease diagnosis problem, it removes a substantial number of hidden units without significantly affecting network performance.

2 IM AND THE HIDDEN LAYER

Several decision tree induction schemes use a particular information-theoretic measure, called IM, of the degree to which an attribute separates (discriminates between the classes of) a given set of training data [Quinlan, 1986]. IM is a measure of the information gained by knowing the value of an attribute for the purpose of classification. The higher the IM of an attribute, the greater the uniformity of class data in the subsets of feature space it creates.
A useful simplification of the sigmoidal activation function used in back-propagation networks [Rumelhart et al., 1986] is to reduce this function to a threshold by mapping activations greater than 0.5 to 1 and less than 0.5 to 0. In this simplified model, the hidden units form hyperplanes in the feature space which separate data. Thus, they can be considered analogous to binary-valued attributes, and the IM of each hidden unit can be calculated as in decision tree induction [Quinlan, 1986]. Figure 1 shows the training data for a fabricated two-feature, two-class problem and a possible configuration of the hyperplanes formed by each hidden unit at the end of training. Hyperplane h1's higher IM corresponds to the fact that it separates the two classes better than h2.

[Figure 1: Hyperplanes and their IM. Arrows indicate regions where hidden units have activations > 0.5.]

3 IM TO DETECT SUPERFLUOUS HIDDEN UNITS

One of the important goals of training is to adjust the set of hyperplanes formed by the hidden layer so that they separate the training data.¹ We define superfluous units as those whose corresponding hyperplanes are not necessary for the proper separation of training data. For example, in Figure 1, hyperplane h2 is superfluous because:

1. h1 separates the data better than h2 and
2. h2 does not separate the data in either of the two regions created by h1.

The IMBS algorithm to identify superfluous hidden units, shown in Figure 2, recursively finds hidden units that are necessary to separate the data and classifies the rest as superfluous. It is similar to the decision tree induction algorithm in [Quinlan, 1986]. The hidden layer is skeletonised by removing the superfluous hidden units. Since the removal of these units perturbs the inputs to the output layer, the network will have to be trained further after skeletonisation to recover lost performance.
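Treating a thresholded hidden unit as a binary attribute, its IM is the information-gain criterion of [Quinlan, 1986]: the entropy of the class labels minus the activation-weighted entropy of the two sides of the hyperplane. A sketch, with invented toy activations and labels:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_measure(activations, labels, threshold=0.5):
    """IM of one hidden unit: information gained about the class by knowing
    on which side of the unit's hyperplane (activation vs. threshold) a
    training pattern falls."""
    on  = [y for a, y in zip(activations, labels) if a >  threshold]
    off = [y for a, y in zip(activations, labels) if a <= threshold]
    n = len(labels)
    remainder = sum(len(s) / n * entropy(s) for s in (on, off) if s)
    return entropy(labels) - remainder

# Toy data: unit h1 separates the two classes perfectly, h2 barely at all.
labels = [0, 0, 0, 1, 1, 1]
h1 = [0.1, 0.2, 0.3, 0.8, 0.9, 0.7]
h2 = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]
print(information_measure(h1, labels), information_measure(h2, labels))
# h1 scores the full 1.0 bit; h2 scores close to zero
```

A perfect separator like h1 recovers the entire class entropy (1 bit for a balanced two-class problem), which is why IMBS greedily keeps the highest-IM unit in each region.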
4 RESULTS

We have tested IMBS on three classification problems, as follows:

1. Train a network to an acceptable level of performance.
2. Identify and remove superfluous hidden units.
3. Train the skeletonised network further to an acceptable level of performance.

We will refer to the stopping point of training at step 1 as the skeletonisation point (SP); further training will be referred to in terms of SP + number of training epochs.

¹This again is not strictly true for hidden units with sigmoidal activation, but holds for the approximate model.

Input:  Training data.
        Hidden unit activations for each training data pattern.
Output: List of superfluous hidden units.
Method:

    main ident-superfluous-hu
    begin
        data-set ← training data
        useful-hu-list ← nil
        pick-best-hu(data-set, useful-hu-list)
        output hidden units that are not in useful-hu-list
    end

    procedure pick-best-hu(data-set, useful-hu-list)
    begin
        if all the data in data-set belong to the same class then return
        calculate IM of each hidden unit
        h1 ← hidden unit with best IM
        add h1 to useful-hu-list
        ds1 ← all the data in data-set for which h1 has an activation of > 0.5
        ds2 ← all the data in data-set for which h1 has an activation of <= 0.5
        pick-best-hu(ds1, useful-hu-list)
        pick-best-hu(ds2, useful-hu-list)
    end

Figure 2: IMBS: An Algorithm for Identifying Superfluous Hidden Units

For each problem, data was divided into a training set and a test set. Several networks were run for a few epochs with different back-propagation parameters η (learning rate) and α (momentum) to determine their locally optimal values. For each problem, we chose an initial architecture and trained 10 networks with different random initial weights for the same number of epochs. The performances of the original (i.e.
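A direct transcription of Figure 2 into Python might look like the sketch below. The three-unit network, toy activations, and labels are invented; a small guard also stops the recursion if a chosen unit fails to split its subset, a degenerate case Figure 2 leaves implicit.

```python
import math
from collections import Counter

def entropy(ys):
    n = len(ys)
    return -sum(c / n * math.log2(c / n) for c in Counter(ys).values())

def im(h, data):
    """IM of hidden unit h on `data`: information gained by knowing on which
    side of h's hyperplane (activation vs. 0.5) each pattern falls."""
    ys  = [y for _, y in data]
    on  = [y for a, y in data if a[h] > 0.5]
    off = [y for a, y in data if a[h] <= 0.5]
    return entropy(ys) - sum(len(s) / len(ys) * entropy(s)
                             for s in (on, off) if s)

def ident_superfluous_hu(data, n_hidden):
    """Figure 2: recursively keep the best-IM unit for each impure region;
    every unit never kept is reported as superfluous."""
    useful = set()

    def pick_best_hu(subset):
        if len({y for _, y in subset}) <= 1:      # region is pure: stop
            return
        h1 = max(range(n_hidden), key=lambda h: im(h, subset))
        useful.add(h1)
        ds1 = [(a, y) for a, y in subset if a[h1] > 0.5]
        ds2 = [(a, y) for a, y in subset if a[h1] <= 0.5]
        if ds1 and ds2:                           # guard: h1 actually splits
            pick_best_hu(ds1)
            pick_best_hu(ds2)

    pick_best_hu(data)
    return set(range(n_hidden)) - useful

# Toy case: each pattern is (hidden activations, class); unit 0 alone
# separates the two classes, so units 1 and 2 come out superfluous.
data = [((0.9, 0.6, 0.2), 1), ((0.8, 0.1, 0.7), 1),
        ((0.2, 0.9, 0.3), 0), ((0.1, 0.4, 0.8), 0)]
print(ident_superfluous_hu(data, n_hidden=3))   # {1, 2}
```

As in the text, the units returned here would then be pruned, with a short retraining phase compensating for the perturbed inputs to the output layer.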
the network before skeletonisation) and the skeletonised networks, measured as the number of correct classifications of the training and test sets, were measured both at SP and after further training. The retrained skeletonised network was compared with the original network at SP as well as the original network that had been trained further for the same number of weight updates.² All training was via the standard back-propagation algorithm with a sigmoidal activation function and updates after every pattern presentation [Rumelhart et al., 1986]. A paired T-test [Siegel, 1988] was used to measure the significance of the difference in performance between the skeletonised and original networks. Our experimental results are summarised in Figure 3, and Tables 1 and 2; detailed experimental conditions are given below.

²This was ensured by adjusting the number of epochs a network was trained after skeletonisation according to the number of hidden units in the network. Thus, a network with 10 hidden units was trained on twice as many epochs as one with 20 hidden units.

[Figure 3: Summary of experimental results for the PB Vowel, Robinson vowel, and Heart disease tasks (performance versus weight updates). Circles represent skeletonised networks; triangles represent unskeletonised networks for comparison. Note that when performance drops upon skeletonisation, the original performance level is recovered within a few weight updates. In all cases, hidden unit count is reduced.]

4.1 PETERSON-BARNEY DATA

IMBS was first evaluated on a 3-class subset of the Peterson-Barney 10-vowel classification data set, originally described in [Peterson and Barney, 1952], and recreated by [Watrous, 1991].
This data consists of the formant values F1 and F2 for each of two repetitions of each of ten vowels by 76 speakers (1520 utterances). The vowels were pronounced in isolated words consisting of the consonant "h", followed by a vowel, followed by "d". This set was randomly divided into a 2/3, 1/3 training/test split, with 298 and 150 patterns, respectively. Our initial architecture was a fully connected network with 2 input units, one hidden layer with 20 units, and 3 output units. We trained the networks with η = 1.0 and α = 0.001 until the TSS (total sum of squared error) scores seemed to reach a plateau. The networks were trained for 2000 epochs and then skeletonised. The skeletonisation procedure removed an average of 10.1 (50.5%) hidden units. Though the average performance of the skeletonised networks was worse than that of the original, this difference was not statistically significant at the p = 0.001 level.

4.2 ROBINSON VOWEL RECOGNITION

Using data from [Robinson, 1989], we trained networks to perform speaker-independent recognition of the 11 steady-state vowels of British English using a training set of LPC-derived log area ratios. Training and test sets were as used by [Robinson, 1989], with 528 and 462 patterns, respectively. The initial network architecture was fully connected, with 10 input units, 11 output units, and 30 hidden units. Networks were trained with η = 1.0 and α = 0.01 until the performance on the training set exceeded 95%. The networks were trained for 1500 epochs and then skeletonised. The skeletonisation procedure removed an average of 5.8 (19.3%) hidden units. The difference in performance was not statistically significant at the p = 0.001 level.

Table 1: Performance of unskeletonised networks

Table 2: Mean difference in the number of correct classifications between the original and skeletonised networks. Positive differences indicate that the original network did better after further training.
The numbers in parentheses indicate the 99.9% confidence intervals for the mean.

                  comparison points           mean difference
                  Original   Skeletonised     Training set            Test set
Peterson-Barney   SP         SP                3.10 [-0.83,  7.03]    -0.10 [-2.05,  1.84]
                  SP         SP+1010          -0.10 [-1.76,  1.56]     0.70 [-0.73,  2.13]
                  SP+500     SP+1010           0.20 [-1.52,  1.91]     0.30 [-1.30,  1.90]
Robinson Vowel    SP         SP                1.70 [-2.40,  5.80]     2.40 [-2.39,  7.19]
                  SP         SP+620           -8.20 [-20.33, 3.93]    -4.40 [-18.26, 9.46]
                  SP+500     SP+620           -0.30 [-3.15,  2.55]    -0.30 [-8.36,  7.76]
Heart Disease     SP         SP               20.80 [-5.66, 47.26]    12.20 [-1.65, 26.05]
                  SP         SP+33             0.00 [-4.28,  4.28]     0.00 [-2.85,  2.85]
                  SP+14      SP+33             0.60 [-4.55,  5.75]     0.40 [-3.03,  3.83]

4.3 HEART DISEASE DATA

Using a 14-attribute set of diagnosis information, we trained networks on a heart disease diagnosis problem [Detrano et al., 1989]. Training and test data were chosen randomly in a 2/3, 1/3 split of 820 and 410 patterns, respectively. The initial networks were fully connected, with 25 input units, one hidden layer with 20 units, and 2 output units. The networks were trained with α = 1.25 and η = 0.005. Training was stopped when the TSS scores seemed to reach a plateau. The networks were trained for 300 epochs and then skeletonised. The skeletonisation procedure removed an average of 9.6 (48%) hidden units. Here, removing superfluous units degraded performance by an average of 2.5% on the training set and 3.0% on the test set. However, after being trained further for only 30 epochs, the skeletonised networks recovered to do as well as the original networks.

5 CONCLUSION AND EXTENSIONS

We have introduced an algorithm, called IMBS, which uses an information measure borrowed from decision tree induction schemes to skeletonise over-sized back-propagation networks. Empirical tests showed that IMBS removed a substantial percentage of hidden units without significantly affecting network performance.
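The recursive unit-identification procedure of Figure 2 can be sketched in Python. This is a minimal sketch, assuming a Quinlan-style information gain over the 0.5-thresholded activations as the information measure IM; the function names and the guard against degenerate splits are illustrative, not the paper's exact implementation:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def information_measure(activations, labels):
    """Information gain of thresholding one hidden unit at 0.5."""
    hi = [c for a, c in zip(activations, labels) if a > 0.5]
    lo = [c for a, c in zip(activations, labels) if a <= 0.5]
    n = len(labels)
    remainder = sum(len(s) / n * entropy(s) for s in (hi, lo) if s)
    return entropy(labels) - remainder

def pick_best_hu(data, useful):
    """Recursively add the most informative hidden units to `useful`.

    `data` is a list of (activation_vector, class_label) pairs, where
    activation_vector[i] is hidden unit i's output for that pattern.
    """
    labels = [c for _, c in data]
    if len(set(labels)) <= 1:          # all patterns in one class: stop
        return
    n_units = len(data[0][0])
    best = max(range(n_units),
               key=lambda i: information_measure([a[i] for a, _ in data],
                                                 labels))
    useful.add(best)
    ds1 = [(a, c) for a, c in data if a[best] > 0.5]
    ds2 = [(a, c) for a, c in data if a[best] <= 0.5]
    if ds1 and ds2:                    # only recurse on a proper split
        pick_best_hu(ds1, useful)
        pick_best_hu(ds2, useful)
```

Hidden units never added to `useful` are the superfluous ones to remove.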
Potential extensions to this work include:

• Using decision tree reduction schemes to allow for trimming not only superfluous hyperplanes, but also those responsible for overfitting the training data, in an effort to improve generalisation.
• Extending IMBS to better identify superfluous hidden units under conditions of less than 100% performance on the training data.
• Extending IMBS to work for networks with more than one hidden layer.
• Performing more rigorous empirical evaluation.
• Making IMBS less sensitive to the hyperplane-as-threshold assumption. In particular, a model with variable-width hyperplanes (depending on the sigmoidal gain) may be effective.

Acknowledgements

Our thanks to Haym Hirsh and Tom Lee for insightful comments on earlier drafts of this paper, to Christian Roehr for an update to the IMBS algorithm, and to Vince Sgro, David Lubinsky, David Loewenstern and Jack Mostow for feedback on later drafts. Matthias Pfister, M.D., of University Hospital in Zurich, Switzerland was responsible for collection of the heart disease data. We used software distributed with [McClelland and Rumelhart, 1988] for many of our simulations.

References

[Chauvin, 1989] Chauvin, Y. 1989. A back-propagation algorithm with optimal use of hidden units. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 1. Morgan Kaufmann, San Mateo, CA. 519-526.

[Detrano et al., 1989] Detrano, R.; Janosi, A.; Steinbrunn, W.; Pfisterer, M.; Schmid, J.; Sandhu, S.; Guppy, K.; Lee, S.; and Froelicher, V. 1989. International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology 64:304-310.

[Hanson and Pratt, 1989] Hanson, Stephen Jose and Pratt, Lorien Y. 1989. Comparing biases for minimal network construction with back-propagation. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 1.
Morgan Kaufmann, San Mateo, CA. 177-185.

[Le Cun et al., 1990] Le Cun, Yann; Denker, John; Solla, Sara A.; Howard, Richard E.; and Jackel, Lawrence D. 1990. Optimal brain damage. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann, San Mateo, CA.

[McClelland and Rumelhart, 1988] McClelland, James L. and Rumelhart, David E. 1988. Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises. The MIT Press, Cambridge, MA.

[Mozer and Smolensky, 1989] Mozer, Michael C. and Smolensky, Paul 1989. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 1. Morgan Kaufmann, San Mateo, CA. 107-115.

[Peterson and Barney, 1952] Peterson, G. E. and Barney, H. L. 1952. Control methods used in a study of the vowels. J. Acoust. Soc. Am. 24(2):175-184.

[Quinlan, 1986] Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1(1):81-106.

[Robinson, 1989] Robinson, Anthony John 1989. Dynamic Error Propagation Networks. Ph.D. Dissertation, Cambridge University Engineering Department.

[Rumelhart et al., 1986] Rumelhart, D.; Hinton, G.; and Williams, R. 1986. Learning representations by back-propagating errors. Nature 323:533-536.

[Siegel, 1988] Siegel, Andrew F. 1988. Statistics and Data Analysis: An Introduction. John Wiley and Sons. Chapter 15, 336-339.

[Sietsma and Dow, 1991] Sietsma, Jocelyn and Dow, Robert J. F. 1991. Creating artificial neural networks that generalize. Neural Networks 4:67-79.

[Watrous, 1991] Watrous, Raymond L. 1991. Current status of Peterson-Barney vowel formant data. Journal of the Acoustical Society of America 89(3):2459-2460.

[Weigend et al., 1991] Weigend, Andreas S.; Rumelhart, David E.; and Huberman, Bernardo A. 1991. Generalization by weight-elimination with application to forecasting. In Lippmann, R. P.; Moody, J. E.; and Touretzky, D.
S., editors, Advances in Neural Information Processing Systems 3. Morgan Kaufmann, San Mateo, CA. 875-882.
Interpretation of Artificial Neural Networks: Mapping Knowledge-Based Neural Networks into Rules

Geoffrey Towell  Jude W. Shavlik
Computer Sciences Department
University of Wisconsin
Madison, WI 53706

Abstract

We propose and empirically evaluate a method for the extraction of expert-comprehensible rules from trained neural networks. Our method operates in the context of a three-step process for learning that uses rule-based domain knowledge in combination with neural networks. Empirical tests using real-world problems from molecular biology show that the rules our method extracts from trained neural networks: closely reproduce the accuracy of the network from which they came, are superior to the rules derived by a learning system that directly refines symbolic rules, and are expert-comprehensible.

1 Introduction

Artificial neural networks (ANNs) have proven to be a powerful and general technique for machine learning [1, 11]. However, ANNs have several well-known shortcomings. Perhaps the most significant of these is that determining why a trained ANN makes a particular decision is all but impossible. Without the ability to explain their decisions, it is hard to be confident in the reliability of a network that addresses a real-world problem. Moreover, this shortcoming makes it difficult to transfer the information learned by a network to the solution of related problems. Therefore, methods for the extraction of comprehensible, symbolic rules from trained networks are desirable.

Figure 1: Rule refinement using neural networks.

Our approach to understanding trained networks uses the three-link chain illustrated by Figure 1. The first link inserts domain knowledge, which need be neither complete nor correct, into a neural network using KBANN [13] (see Section 2). (Networks created using KBANN are called KNNs.) The second link trains the KNN using a set of classified
training examples and standard neural learning methods [9]. The final link extracts rules from trained KNNs. Rule extraction is an extremely difficult task for arbitrarily-configured networks, but is somewhat less daunting for KNNs due to their initial comprehensibility. Our method (described in Section 3) takes advantage of this property to efficiently extract rules from trained KNNs. Significantly, when evaluated in terms of the ability to correctly classify examples not seen during training, our method produces rules that are equal or superior to the networks from which they came (see Section 4). Moreover, the extracted rules are superior to the rules resulting from methods that act directly on the rules (rather than their re-representation as a neural network). Also, our method is superior to the most widely-published algorithm for the extraction of rules from general neural networks.

2 The KBANN Algorithm

The KBANN algorithm translates symbolic domain knowledge into neural networks, defining the topology and connection weights of the networks it creates. It uses a knowledge base of domain-specific inference rules to define what is initially known about a topic. A detailed explanation of this rule-translation appears in [13]. As an example of the KBANN method, consider the sample domain knowledge in Figure 2a that defines membership in category A. Figure 2b represents the hierarchical structure of these rules: solid and dotted lines represent necessary and prohibitory dependencies, respectively. Figure 2c represents the KNN that results from the translation of this domain knowledge into a neural network. Units X and Y in Figure 2c are introduced into the KNN to handle the disjunction in the rule set. Otherwise, each unit in the KNN corresponds to a consequent or an antecedent in the domain knowledge. The thick lines in Figure 2c represent heavily-weighted links in the KNN that correspond to dependencies in the domain knowledge.
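The translation of one conjunctive rule into such heavily-weighted links can be sketched in Python. The weight magnitude ω = 4 and the hard-threshold stand-in for a sigmoidal unit are illustrative assumptions, not KBANN's exact constants (those are given in [13]):

```python
OMEGA = 4.0  # illustrative weight for links that encode dependencies

def translate_rule(positive, negated):
    """Map one conjunctive rule to link weights and a bias threshold.

    The consequent unit should come on only when every positive
    antecedent is on and every negated antecedent is off, so the
    threshold sits halfway between the satisfied net input and the
    nearest unsatisfied one.
    """
    weights = {a: OMEGA for a in positive}
    weights.update({a: -OMEGA for a in negated})
    bias = (len(positive) - 0.5) * OMEGA
    return weights, bias

def unit_active(weights, bias, inputs):
    """Hard-threshold stand-in for the network's sigmoidal unit."""
    net = sum(w * inputs.get(name, 0.0) for name, w in weights.items())
    return net > bias

# The rule "C :- I, J." from Figure 2:
w, b = translate_rule(positive=["I", "J"], negated=[])
```

With both antecedents on, `unit_active(w, b, {"I": 1, "J": 1})` holds; dropping either antecedent, or turning on a negated one such as H in "B :- not H.", pushes the net input below the bias.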
The thin lines represent the links added to the network to allow refinement of the domain knowledge. Weights and biases in the network are set so that, prior to learning, the network's response to inputs is exactly the same as that of the domain knowledge. This example illustrates the two principal benefits of using KBANN to initialize KNNs. First, the algorithm indicates the features that are believed to be important to an example's classification. Second, it specifies important derived features, thereby guiding the choice of the number and connectivity of hidden units.

(a)  A :- B, C.
     B :- not H.
     B :- not F, G.
     C :- I, J.

[Panels (b) and (c) show the rules' dependency structure and the resulting network over the input features F, G, H, I, J, K.]

Figure 2: Translation of domain knowledge into a KNN.

3 Rule Extraction

Almost every method of rule extraction makes two assumptions about networks. First, that training does not significantly shift the meaning of units. By making this assumption, the methods are able to attach labels to rules that correspond to terms in the domain knowledge upon which the network is based. These labels enhance the comprehensibility of the rules. The second assumption is that the units in a trained KNN are always either active (≈ 1) or inactive (≈ 0). Under this assumption each non-input unit in a trained KNN can be treated as a Boolean rule. Therefore, the problem for rule extraction is to determine the situations in which the "rule" is true. Examination of trained KNNs validates both of these assumptions. Given these assumptions, the simplest method for extracting rules we call the SUBSET method. This method operates by exhaustively searching for subsets of the links into a unit such that the sum of the weights of the links in the subset guarantees that the total input to the unit exceeds its bias. In the limit, SUBSET extracts a set of rules that reproduces the behavior of the network. However, the combinatorics of this method render it impossible to implement.
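The exhaustive search can be sketched as follows. This is a minimal sketch that considers only positively weighted links, for brevity; handling negated antecedents would add a second search over negative links:

```python
from itertools import combinations

def subset_rules(weights, bias):
    """Exhaustively find minimal sets of positively weighted input
    links whose combined weight alone exceeds the unit's bias.

    weights: dict mapping antecedent name -> link weight.
    Returns each minimal qualifying antecedent set as a frozenset;
    supersets of an already-found set are skipped.
    """
    names = [n for n, w in weights.items() if w > 0]
    found = []
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            total = sum(weights[n] for n in combo)
            if total > bias and not any(f <= set(combo) for f in found):
                found.append(frozenset(combo))
    return found
```

For example, a unit with three links of weight 6.1 and a bias of 10.9 yields exactly the three 2-link subsets, i.e. the behavior "2 of the 3 antecedents suffice". The `range` over all subset sizes is what makes the method combinatorially explosive on realistic fan-ins.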
Heuristics can be added to reduce the complexity of the search at some cost in the accuracy of the resulting rules. Using heuristic search, SUBSET tends to produce repetitive rules whose preconditions are difficult to interpret. (See [10] or [2] for more detailed explanations of SUBSET.) Our algorithm, called NOFM, addresses both the combinatorial and presentation problems inherent to the SUBSET algorithm. It differs from SUBSET in that it explicitly searches for rules of the form: "If (N of these M antecedents are true) ..." This method arose because we noticed that rule sets discovered by the SUBSET method often contain N-of-M style concepts. Further support for this method comes from experiments that indicate neural networks are good at learning N-of-M concepts [1] as well as experiments that show a bias towards N-of-M style concepts is useful [5]. Finally, note that purely conjunctive rules result if N = M, while a set of disjunctive rules results when N = 1; hence, using N-of-M rules does not restrict generality. The idea underlying NOFM (summarized in Table 1) is that individual antecedents (links) do not have unique importance. Rather, groups of antecedents form equivalence classes in which each antecedent has the same importance as, and is interchangeable with, other members of the class. This equivalence-class idea allows NOFM to consider groups of links without worrying about particular links within the group. Unfortunately, training using backpropagation does not naturally bunch links into equivalence classes. Hence, the first step of NOFM groups links into equivalence classes. This grouping can be done using standard clustering methods [3] in which clustering is stopped when no clusters are closer than a user-set distance (we use 0.25). After clustering, the links to the unit in the upper-right corner of Figure 3 form two groups, one of four links with weight near one and one of three links with weight near six.
(The effect of this grouping is very similar to the training method suggested by Nowlan and Hinton [7].)

Table 1: The NOFM algorithm for rule extraction.

(1) With each hidden and output unit, form groups of similarly-weighted links.
(2) Set link weights of all group members to the average of the group.
(3) Eliminate any groups that do not affect whether the unit will be active or inactive.
(4) Holding all link weights constant, optimize biases of hidden and output units.
(5) Form a single rule for each hidden and output unit. The rule consists of a threshold given by the bias and weighted antecedents specified by remaining links.
(6) Where possible, simplify rules to eliminate superfluous weights and thresholds.

[Figure 3 traces one unit Z through the algorithm: the initial links A-G (weights near 1.0 and 6.1) are grouped and averaged (steps 1 and 2), the low-weight group is eliminated (step 3), the unit is re-expressed as "if 6.1 * NumberTrue(A, C, F) > 10.9 then Z" (steps 4 and 5), and finally simplified to "if 2 of {A, C, F} then Z" (step 6). NumberTrue returns the number of true antecedents.]

Figure 3: Rule extraction using NOFM.

Once the groups are formed, the procedure next attempts to identify and eliminate groups that do not contribute to the calculation of the consequent. In the extreme case, this analysis is trivial; clusters can be eliminated solely on the basis of their weight. In Figure 3 no combination of the cluster of links with weight 1.1 can cause the summed weights to exceed the bias on unit Z. Hence, links with weight 1.1 are eliminated from Figure 3 after step 3. More often, the assessment of a cluster's utility uses heuristics. The heuristic we use is to scan each training example and determine which groups can be eliminated while leaving the example correctly categorized. Groups not required by any example are eliminated. With unimportant groups eliminated, the next step of the procedure is to optimize the bias on each unit.
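Steps 1 and 2 (grouping and averaging) can be sketched in Python. The single-linkage pass below is a simple stand-in for the standard clustering methods the paper cites [3], using the 0.25 cut-off from the text:

```python
def group_links(weights, tol=0.25):
    """NOFM steps 1-2: cluster link weights into equivalence classes
    and replace each weight by its class average.

    Single pass over the sorted weights; a new cluster starts
    wherever adjacent weights differ by more than `tol`.
    """
    items = sorted(weights.items(), key=lambda kv: kv[1])
    groups, current = [], [items[0]]
    for name, w in items[1:]:
        if w - current[-1][1] <= tol:
            current.append((name, w))
        else:
            groups.append(current)
            current = [(name, w)]
    groups.append(current)
    averaged = {}
    for g in groups:
        mean = sum(w for _, w in g) / len(g)
        for name, _ in g:
            averaged[name] = mean
    return averaged
```

Applied to the Figure 3 unit (four links with weights near 1.0 and three near 6.1), this yields exactly two classes with averaged weights 1.1 and 6.1, setting up the elimination of the weak group in step 3.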
Optimization is required to adjust the network so that it accurately reflects the assumption that units are boolean. This can be done by freezing link weights (so that the groups stay intact) and retraining the bias terms in the network. After optimization, rules are formed that simply re-express the network. Note that these rules are considerably simpler than the trained network; they have fewer antecedents, and those antecedents tend to fall into a few weight classes. Finally, rules are simplified whenever possible to eliminate the weights and thresholds. Simplification is accomplished by a scan of each restated rule to determine combinations of clusters that exceed the threshold. In Figure 3 the result of this scan is a single N-of-M style rule. When a rule has more than one cluster, this scan may return multiple combinations, each of which has several N-of-M predicates. In such cases, rules are left in their original form of weights and a threshold.

4 Experiments in Rule Extraction

This section presents a set of experiments designed to determine the relative strengths and weaknesses of the two rule-extraction methods described above. Rule-extraction techniques are compared using two measures: quality, which is measured by the accuracy of the rules; and comprehensibility, which is approximated by analysis of extracted rule sets.

4.1 Testing Methodology

Following Weiss and Kulikowski [14], we use repeated 10-fold cross-validation1 for testing learning on two tasks from molecular biology: promoter recognition [13] and splice-junction determination [6]. Networks are trained using the cross-entropy error function. Following Hinton's [4] suggestion for improved network interpretability, all weights "decay" gently during training.

4.2 Accuracy of Extracted Rules

Figure 4 addresses the issue of the accuracy of extracted rules.
It plots the percentage of errors on the testing and training sets, averaged over eleven repetitions of 10-fold cross-validation, for both the promoter and splice-junction tasks. For comparison, Figure 4 includes the accuracy of the trained KNNs prior to rule extraction (the bars labeled "Network"). Also included in Figure 4 is the accuracy of the EITHER system, an "all symbolic" method for the empirical adaptation of rules [8]. (EITHER has not been applied to the splice-junction problem.) The initial rule sets for promoter recognition and splice-junction determination correctly categorized 50% and 61%, respectively, of the examples. Hence, each of the systems plotted in Figure 4 improved upon the initial rules. Comparing only the systems that result in refined rules, the NOFM method is the clear winner. On training examples, the error rate for rules extracted by NOFM is slightly worse than that of EITHER but superior to that of the rules extracted using SUBSET. On the testing examples the NOFM rules are more accurate than both EITHER and SUBSET. (One-tailed, paired-sample t-tests indicate that for both domains the NOFM rules are superior to the SUBSET rules with 99.5% confidence.) Perhaps the most significant result in this paper is that, on the testing set, the error rate of the NOFM rules is equal or superior to that of the networks from which the rules were extracted. Conversely, the error rate of the SUBSET rules on testing examples is statistically worse than that of the networks in both problem domains. The discussion at the end of this paper

1 In N-fold cross-validation, the set of examples is partitioned into N sets of equal size. Networks are trained using N - 1 of the sets and tested using the remaining set. This procedure is repeated N times so that each set is used as the testing set once. We actually used only N - 2 of the sets for training. One set was used for testing and the other to stop training to prevent overfitting of the training set.
[Figure 4 plots training-set and testing-set error rates in the promoter and splice-junction domains for the Network, NOFM, and SUBSET systems.]

Figure 4: Error rates of extracted rules.

analyses the reasons why NOFM's rules can be superior to the networks from which they came.

4.3 Comprehensibility

To be useful, the extracted rules must not only be accurate, they must also be understandable. To assess rule comprehensibility, we looked at rule sets extracted by the NOFM method. Table 3 presents the rules extracted by NOFM for promoter recognition. The rules extracted by NOFM for splice-junction determination are not shown because they have much the same character as those of the promoter domain. While Table 3 is somewhat murky, it is vastly more comprehensible than the network of 3000 links from which it was extracted. Moreover, the rules in this table can be rewritten in a form very similar to one used in the biological community [12], namely weight matrices. One major pattern in the extracted rules is that the network learns to disregard a major portion of the initial rules. These same rules are dropped by other rule-refinement systems (e.g., EITHER). This suggests that the deletion of these rules is not merely an artifact of NOFM, but instead reflects an underlying property of the data. Hence, we demonstrate that machine learning methods can provide valuable evidence about biological theories. Looking beyond the dropped rules, the rules NOFM extracts confirm the importance of the bases identified in the initial rules (Table 2). However, whereas the initial rules required matching every base, the extracted rules allow a less than perfect match. In addition, the extracted rules point to places in which changes to the sequence are important. For instance, in the first minus-10 rule, a 'T' in position 11 is a strong indicator that the rule is true. However, replacing the 'T' with either a 'G' or an 'A' prevents the rule from being satisfied.
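The positional matching just described, and the nt() counting notation that appears in the extracted rules below, can be sketched in Python. The `start` argument anchoring `sequence[0]` to a biological position is an illustrative assumption (the promoter examples are 57-base-pair windows whose exact origin is not restated here):

```python
def nt(sequence, offset, pattern, start=-50):
    """Count positions where `pattern` matches the DNA `sequence`.

    `offset` is the biological position of the pattern's first
    character; `start` is the assumed position of sequence[0].
    A '-' in the pattern matches any base.
    """
    i = offset - start
    return sum(1 for p, s in zip(pattern, sequence[i:])
               if p != '-' and p == s)

def n_of_m(n, antecedents):
    """An N-of-M predicate: true when at least n antecedents hold."""
    return sum(bool(a) for a in antecedents) >= n
```

This reproduces the worked example in the table footnote: nt(@-14 '---C--G--') returns 1 against the sequence @-14 'AAACAAAAA' (only the 'C' matches).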
5 Discussion and Conclusions

Our results indicate not only that the NOFM method can extract meaningful, symbolic rules from trained KNNs, but also that the extracted rules can be superior to the networks from which they came at classifying examples not seen during training. Additionally, the NOFM method produces rules whose accuracy is substantially better than that of EITHER, an approach that directly modifies the initial set of rules [8]. While the rule set produced by the NOFM algorithm is

Table 2: Partial set of original rules for promoter-recognition.

promoter     :- contact, conformation.
contact      :- minus-35, minus-10.
minus-35     :- @-37 'CTTGAC'.     --- three additional rules
minus-10     :- @-14 'TATAAT'.     --- three additional rules
conformation :- @-45 'AA--A'.      --- three additional rules

Examples are 57-base-pair-long strands of DNA. Rules refer to bases by stating a sequence location followed by a subsequence. So, @-37 'CT' indicates a 'C' in position -37 and a 'T' in position -36.

Table 3: Promoter rules NOFM extracts.

Promoter :- Minus-35, Minus-10.

Minus-10 :- 2 of @-14 '---CA---T' and not 1 of @-14 '---RB---S'.

Minus-35 :- 10 < 4.0 * nt(@-37 '--TTGAT-') + 1.5 * nt(@-37 '----TCC-')
               + 0.5 * nt(@-37 '---MC---') + 1.5 * nt(@-37 '--GGAGG-').

Minus-35 :- 10 < 5.0 * nt(@-37 '--T-G--A') + 3.1 * nt(@-37 '---GT---')
               + 1.9 * nt(@-37 '----C-CT') + 1.5 * nt(@-37 '---C--A-')
               + 1.5 * nt(@-37 '------GC') + 1.9 * nt(@-37 '--CAW---')
               + 3.1 * nt(@-37 '--A----C').

Minus-10 :- 10 < 3.0 * nt(@-14 '--TAT--T-') + 1.8 * nt(@-14 '-----GA--')
               + 0.7 * nt(@-14 '----GAT--') + 0.7 * nt(@-14 '--GKCCCS-').

Minus-10 :- 10 < 3.8 * nt(@-14 '--TA-A-T-') + 3.0 * nt(@-14 '--G--C---')
               + 1.0 * nt(@-14 '---T---A-') + 1.0 * nt(@-14 '--CS-G-S-')
               + 3.0 * nt(@-14 '--A--T---').

Minus-35 :- @-37 '-C-TGAC-'.
Minus-10 :- @-14 '-TAWA-T--'.
Minus-35 :- @-37 '--TTD-CA'.

"nt()" returns the number of antecedents enclosed in the parentheses that match the given sequence.
So, nt(@-14 '---C--G--') would return 1 when matched against the sequence @-14 'AAACAAAAA'.

Table 4: Standard nucleotide ambiguity codes.

Code  Meaning        Code  Meaning        Code  Meaning
M     A or C         R     A or G         W     A or T
S     C or G         K     G or T         D     A or G or T
B     C or G or T

slightly larger than that produced by EITHER, the sets of rules produced by both of these algorithms are small enough to be easily understood. Hence, although weighing the tradeoff between accuracy and understandability is problem- and user-specific, the NOFM approach combined with KBANN offers an appealing mixture. The superiority of the NOFM rules over the networks from which they are extracted may occur because the rule-extraction process reduces overfitting of the training examples. The principal evidence in support of this hypothesis is that the difference in ability to correctly categorize testing and training examples is smaller for NOFM rules than for trained KNNs. Thus, the rules extracted by NOFM sacrifice some training set accuracy to achieve higher testing set accuracy. Additionally, in earlier tests this effect was more pronounced; the NOFM rules were superior to the networks from which they came on both datasets (with 99% confidence according to a one-tailed t-test). Modifications to training to reduce overfitting improved generalization by networks without significantly affecting NOFM's rules. The result of the change in training method is that the differences between the network and NOFM are not statistically significant in either dataset. However, the result is significant in that it supports the overfitting hypothesis. In summary, the NOFM method extracts accurate, comprehensible rules from trained KNNs. The method is currently limited to KNNs; randomly-configured networks violate its assumptions. New training methods [7] may broaden the applicability of the method.
Even without different methods for training, our results show that NOFM provides a mechanism through which networks can make expert-comprehensible explanations of their behavior. In addition, the extracted rules allow for the transfer of learning to the solution of related problems.

Acknowledgments

This work is partially supported by Office of Naval Research Grant N00014-90-J-1941, National Science Foundation Grant IRI-9002413, and Department of Energy Grant DE-FG02-91ER61129.

References

[1] D. H. Fisher and K. B. McKusick. An empirical comparison of ID3 and back-propagation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 788-793, Detroit, MI, August 1989.

[2] L. M. Fu. Rule learning by searching on adapted nets. In Proceedings of the Ninth National Conference on Artificial Intelligence, pages 590-595, Anaheim, CA, 1991.

[3] J. A. Hartigan. Clustering Algorithms. Wiley, New York, 1975.

[4] G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185-234, 1989.

[5] P. M. Murphy and M. J. Pazzani. ID2-of-3: Constructive induction of N-of-M concepts for discriminators in decision trees. In Proceedings of the Eighth International Machine Learning Workshop, pages 183-187, Evanston, IL, 1991.

[6] M. O. Noordewier, G. G. Towell, and J. W. Shavlik. Training knowledge-based neural networks to recognize genes in DNA sequences. In Advances in Neural Information Processing Systems 3, Denver, CO, 1991. Morgan Kaufmann.

[7] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. In Advances in Neural Information Processing Systems 4, Denver, CO, 1991. Morgan Kaufmann.

[8] D. Ourston and R. J. Mooney. Changing the rules: A comprehensive approach to theory refinement. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 815-820, Boston, MA, August 1990.

[9] D. E. Rumelhart, G. E. Hinton, and R. J. Williams.
Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations, pages 318-363. MIT Press, Cambridge, MA, 1986.

[10] K. Saito and R. Nakano. Medical diagnostic expert system based on PDP model. In Proceedings of the IEEE International Conference on Neural Networks, volume 1, pages 255-262, 1988.

[11] J. W. Shavlik, R. J. Mooney, and G. G. Towell. Symbolic and neural net learning algorithms: An empirical comparison. Machine Learning, 6:111-143, 1991.

[12] G. D. Stormo. Consensus patterns in DNA. In Methods in Enzymology, volume 183, pages 211-221. Academic Press, Orlando, FL, 1990.

[13] G. G. Towell, J. W. Shavlik, and M. O. Noordewier. Refinement of approximately correct domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 861-866, Boston, MA, 1990.

[14] S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn. Morgan Kaufmann, San Mateo, CA, 1990.
Competitive Anti-Hebbian Learning of Invariants

Nicol N. Schraudolph
Computer Science & Engr. Dept.
University of California, San Diego
La Jolla, CA 92093-0114
nici@cs.ucsd.edu

Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
La Jolla, CA 92186-5800
tsejnowski@ucsd.edu

Abstract

Although the detection of invariant structure in a given set of input patterns is vital to many recognition tasks, connectionist learning rules tend to focus on directions of high variance (principal components). The prediction paradigm is often used to reconcile this dichotomy; here we suggest a more direct approach to invariant learning based on an anti-Hebbian learning rule. An unsupervised two-layer network implementing this method in a competitive setting learns to extract coherent depth information from random-dot stereograms.

1 INTRODUCTION: LEARNING INVARIANT STRUCTURE

Many connectionist learning algorithms share with principal component analysis (Jolliffe, 1986) the strategy of extracting the directions of highest variance from the input. A single Hebbian neuron, for instance, will come to encode the input's first principal component (Oja and Karhunen, 1985); various forms of lateral interaction can be used to force a layer of such nodes to differentiate and span the principal component subspace; cf. (Sanger, 1989; Kung, 1990; Leen, 1991), among others. The same type of representation also develops in the hidden layer of backpropagation autoassociator networks (Baldi and Hornik, 1989). However, the directions of highest variance need not always be those that yield the most information, or, as the case may be, the information we are interested in (Intrator, 1991). In fact, it is sometimes desirable to extract the invariant structure of a stimulus instead, learning to encode those aspects that vary the least.
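For contrast with the invariant-structure goal just described, the variance-maximizing behavior of a single Hebbian neuron can be sketched with Oja's rule (Oja and Karhunen, 1985, cited above). The learning rate and the inputs confined to the direction (1, 1) are illustrative choices:

```python
import random

def oja_step(w, x, lr=0.05):
    """One step of Oja's rule: Hebbian growth plus a decay term that
    keeps the weight vector bounded; w converges to the input's first
    principal component, at unit length."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Illustrative inputs along the direction (1, 1), the axis of
# highest variance; w should converge to +-(1, 1)/sqrt(2).
random.seed(1)
w = [0.3, 0.9]
for _ in range(3000):
    t = random.uniform(-1.0, 1.0)
    w = oja_step(w, [t, t])
```

This is the baseline the paper argues against for invariant learning: the neuron ends up encoding the direction in which the stimuli vary the most, not the direction in which they are constant.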
The problem, then, is how to achieve this within a connectionist framework that is so closely tied to the maximization of variance. In (Földiák, 1991), spatial invariance is turned into a temporal feature by presenting transformation sequences within invariance classes as a stimulus. A built-in temporal smoothness constraint enables Hebbian neurons to learn these transformations, and hence the invariance classes. Although this is an efficient and neurobiologically attractive strategy, it is limited by its strong assumptions about the nature of the stimulus. A more general approach is to make information about invariant structure available in the error signal of a supervised network. The most popular way of doing this is to require the network to predict the next patch of some structured input from the preceding context, as in (Elman, 1990); the same prediction technique can be used across space as well as time. It is also possible to explicitly derive an error signal from the mutual information between two patches of structured input (Becker and Hinton, 1992), a technique which has been applied to viewpoint-invariant object recognition (Zemel and Hinton, 1991). 2 METHODS 2.1 ANTI-HEBBIAN FEEDFORWARD LEARNING In most formulations of the covariance learning rule it is quietly assumed that the learning rate be positive. By reversing the sign of this constant in a recurrent autoassociator, Kohonen constructed a "novelty filter" that learned to be insensitive to familiar features in its input (Kohonen, 1989). More recently, such anti-Hebbian synapses have been used for lateral decorrelation of feature detectors (Barlow and Földiák, 1989; Leen, 1991) as well as, in differential form, removal of temporal variations from the input (Mitchison, 1991).
We suggest that in certain cases the use of anti-Hebbian feedforward connections to learn invariant structure may eliminate the need to bring in the heavy machinery of supervised learning algorithms required by the prediction paradigm, with its associated lack of neurobiological plausibility. Specifically, this holds for linear problems, where the stimuli lie near a hyperplane in the input space: the weight vector of an anti-Hebbian neuron will move into a direction normal to that hyperplane, thus characterizing the invariant structure. Of course a set of Hebbian feature detectors whose weight vectors span the hyperplane would characterize the associated class of stimuli just as well. The anti-Hebbian learning algorithm, however, provides a more efficient representation when the dimensionality of the hyperplane is more than half that of the input space, since fewer normal vectors than spanning vectors are required for unique characterization in this case. Since they remove rather than extract the variance within a stimulus class, anti-Hebbian neurons also present a very different output representation to subsequent layers. Unfortunately it is not sufficient to simply negate the learning rate of a layer of Hebbian feature detectors in order to turn them into working anti-Hebbian invariance detectors: although such a change of sign does superficially achieve the intended effect, many of the subtleties that make Hebb's rule work in practice do not survive the transformation. In what follows we address some of the problems thus introduced. Like the Hebb rule, anti-Hebbian learning requires weight normalization, in this case to prevent weight vectors from collapsing to zero.
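The geometric claim above is easy to check numerically. In the following sketch (an illustration of ours, not the authors' simulation; all names and constants are assumptions), a single anti-Hebbian neuron trained with the negated Hebb rule plus explicit renormalization recovers the normal of the hyperplane on which the stimuli lie:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimuli lie on the hyperplane z = x + y, whose unit normal is (1, 1, -1)/sqrt(3)
n_true = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)
ab = rng.normal(size=(5000, 2))
X = np.column_stack([ab, ab.sum(axis=1)])   # each row satisfies z = x + y

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x
    w -= eta * y * x            # anti-Hebbian: Hebb rule with negated learning rate
    w /= np.linalg.norm(w)      # explicit L2 normalization prevents collapse to zero

# w is now aligned (up to sign) with the hyperplane normal: abs(w @ n_true) ~ 1
```

Note that once w reaches the normal, y = 0 for every stimulus on the plane and the updates vanish, so the normal direction is an exact fixed point of the rule.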
Oja's active decay rule (Oja, 1982) is a popular local approximation to explicit weight normalization: $\Delta\vec{w} = \eta(\vec{x}y - \vec{w}y^2)$, where $y = \vec{w}^T\vec{x}$ (1). Here the first term in parentheses represents the standard Hebb rule, while the second is the active decay. Unfortunately, Oja's rule can not be used for weight growth in anti-Hebbian neurons since it is unstable for negative learning rates ($\eta < 0$), as is evident from the observation that the growth/decay term is proportional to $\vec{w}$. In our experiments, explicit L2-normalization of weight vectors was therefore used instead. Hebbian feature detectors attain maximal activation for the class of stimuli they represent. Since the weight vectors of anti-Hebbian invariance detectors are normal to the invariance class they represent, membership in that class is signalled by a zero activation. In other words, linear anti-Hebbian nodes signal violations of the constraints they encode rather than compliance. While such an output representation can be highly desirable for some applications¹, it is unsuitable for others, such as the classification of mixtures of invariants described below. We therefore use a symmetric activation function that responds maximally for a zero net input, and decays towards zero for large net inputs. More specifically, we use Gaussian activation functions, since these allow us to interpret the nodes' outputs as class membership probabilities. Soft competition between nodes in a layer can then be implemented simply by normalizing these probabilities (i.e. dividing each output by the sum of outputs in a layer), then using them to scale weight changes (Nowlan, 1990). 2.2 AN ANTI-HEBBIAN OBJECTIVE FUNCTION The magnitude of weight change in a Hebbian neuron is proportional to the cosine of the angle between input and weight vectors.
This means that nodes that best represent the current input learn faster than those which are further away, thus encouraging differentiation among weight vectors. Since anti-Hebbian weight vectors are normal to the hyperplanes they represent, those that best encode a given stimulus will experience the least change in weights. As a result, weight vectors will tend to clump together unless weight changes are rescaled to counteract this deficiency. In our experiments, this is done by the soft competition mechanism; here we present a more general framework towards this end. A simple Hebbian neuron maximizes the variance of its output $y$ through stochastic approximation by performing gradient ascent in $\frac{1}{2}y^2$ (Oja and Karhunen, 1985): $\Delta w_i \propto \frac{\partial}{\partial w_i}\frac{1}{2}y^2 = y\frac{\partial y}{\partial w_i} = x_i y$ (2). As seen above, it is not sufficient for an anti-Hebbian neuron to simply perform gradient descent in the same function. Instead, an objective function whose derivative has inverse magnitude to the above at every point is needed, as given by $\Delta w_i \propto \frac{\partial}{\partial w_i}\frac{1}{2}\log(y^2) = \frac{1}{y}\frac{\partial y}{\partial w_i} = \frac{x_i}{y}$ (3). ¹ Consider the subsumption architecture of a hierarchical network in which higher layers only receive information that is not accounted for by earlier layers. Figure 1: Possible objective functions for anti-Hebbian learning (see text). Unfortunately, the pole at $y = 0$ presents a severe problem for simple gradient descent methods: the near-infinite derivatives in its vicinity lead to catastrophically large step sizes.
More sophisticated optimization methods deal with this problem by explicitly controlling the step size; for plain gradient descent we suggest reshaping the objective function at the pole such that its partials never exceed the input in magnitude: $\Delta w_i \propto \frac{\partial}{\partial w_i}\,\varepsilon\log(y^2 + \varepsilon^2) = \frac{2\varepsilon x_i y}{y^2 + \varepsilon^2}$ (4), where $\varepsilon > 0$ is a free parameter determining at which point the logarithmic slope is abandoned in favor of a quadratic function which forms an optimal trapping region for simple gradient descent (Figure 1). 3 RESULTS ON RANDOM-DOT STEREOGRAMS In random-dot stereograms, stimuli of a given stereo disparity lie on a hyperplane whose dimensionality is half that of the input space plus the disparity in pixels. This is easily appreciated by considering that given, say, the left half-image and the disparity, one can predict the right half-image except for the pixels shifted in at the edge. Thus stereo disparities that are small compared to the receptive field width can be learned equally well by Hebbian and anti-Hebbian algorithms; when the disparity approaches receptive field width, however, anti-Hebbian neurons have a distinct advantage. 3.1 SINGLE LAYER NETWORK: LOCAL DISPARITY TUNING Our training set consisted of stereo images of 5,000 frontoparallel strips at uniformly random depth covered densely with Gaussian features of random location, width, polarity and power. The images were discretized by integrating over pixel bins in order to allow for sub-pixel disparity acuity. Figure 2 shows that a single cluster of five anti-Hebbian nodes with soft competition develops near-perfect tuning curves for local stereo disparity after 10 sweeps through this training set. This disparity tuning is achieved by learning to have corresponding weights (at the given disparity) be of equal magnitude but opposite sign, so that any stimulus pattern at that disparity yields a zero net input and thus maximal response.
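The bound provided by the reshaped objective in equation (4) can be checked directly: since $y^2 + \varepsilon^2 \ge 2\varepsilon|y|$, the factor $2\varepsilon y/(y^2 + \varepsilon^2)$ never exceeds 1 in magnitude, so each partial is bounded by $|x_i|$. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def reshaped_anti_hebbian_grad(x, w, eps=0.1):
    """Gradient of eps * log(y^2 + eps^2) with respect to w: logarithmic
    slope away from y = 0, with a quadratic trapping region near the pole."""
    y = w @ x
    return 2.0 * eps * x * y / (y ** 2 + eps ** 2)

# Each partial stays bounded by |x_i|, even for net inputs near zero:
rng = np.random.default_rng(3)
x = rng.normal(size=4)
w = 1e-6 * rng.normal(size=4)   # w ~ 0 makes y ~ 0, where eq. (3) would blow up
g = reshaped_anti_hebbian_grad(x, w)
```

Unlike the raw gradient $x_i/y$ of equation (3), this expression is finite and well-behaved at $y = 0$.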
Figure 2: Sliding window average response of first-layer nodes after presentation of 50,000 stereograms as a function of stimulus disparity: strong disparity tuning is evident. Figure 3: Architecture of the network (see text): left and right input images feed two first-layer clusters of anti-Hebbian nodes (5/7 per half-image) through random connectivity; a fully connected second-layer cluster with Gaussian nonlinearities and soft competition monitors their output. Note, however, that this type of detector suffers from false positives: input patterns that happen to yield near-zero net input even though they have a different stereo disparity. Although the individual response of a tuned node to an input pattern of the wrong disparity is therefore highly idiosyncratic, the sliding window average of each response with its 250 closest neighbors (with respect to disparity) shown in Figure 2 is far more well-behaved. This indicates that the average activity over a number of patterns (in a "moving stereogram" paradigm) or, alternatively, over a population of nodes tuned to the same disparity allows discrimination of disparities with sub-pixel accuracy. 3.2 TWO-LAYER NETWORK: COHERENT DISPARITY TUNING In order to investigate the potential for hierarchical application of this architecture, it was extended to two layers as shown in Figure 3.
The two first-layer clusters with nonoverlapping receptive fields extract local stereo disparity as before; their output is monitored by a second-layer cluster. Note that there is no backpropagation of derivatives: all three clusters use the same unsupervised learning algorithm. This network was trained on coherent input, i.e. stimuli for which the stereo disparity was identical across the receptive field boundary of first-layer clusters. As shown in Figure 4, the second layer learns to preserve the first layer's disparity tuning for coherent patterns, albeit in somewhat degraded form. Each node in the second layer learns to pick out exactly the two corresponding nodes in the first-layer clusters, again by giving them weights of equal magnitude but opposite sign. However, the second layer represents more than just a noisy copy of the first layer: it meaningfully integrates coherence information from the two receptive fields. This can be demonstrated by testing the trained network on non-coherent stimuli which exhibit a depth discontinuity between the receptive fields of first-layer clusters. The overall response of the second layer is tuned to the coherent stimuli it was trained on (Figure 5). 4 DISCUSSION Although a negation of the learning rate introduces various problems to the Hebb rule, feedforward anti-Hebbian networks can pick up invariant structure from the input. We have demonstrated this in a competitive classification setting; other applications of this framework are possible. We find the subsumption aspect of anti-Hebbian learning particularly intriguing: the real world is so rich in redundant data that a learning rule which can adaptively ignore much of it must surely be an advantage. From this point of view, the promising first experiments we have reported here use quite impoverished inputs; one of our goals is therefore to extend this work towards real-world stimuli.
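The learning scheme of Section 2 — anti-Hebbian updates with explicit L2-normalization, Gaussian activations, and soft competition scaling the weight changes — might be sketched for a single layer as follows (a simplified illustration of ours, not the authors' code; the function name and the fixed Gaussian width sigma are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_competition_step(W, x, eta=0.01, sigma=1.0):
    """One anti-Hebbian update for a layer of linear nodes with Gaussian
    activations; outputs are normalized to class membership probabilities
    (soft competition), which scale each node's weight change."""
    y = W @ x                               # net inputs, one per node
    g = np.exp(-0.5 * (y / sigma) ** 2)     # Gaussian: maximal at zero net input
    p = g / g.sum()                         # normalize outputs -> probabilities
    W -= eta * (p * y)[:, None] * x         # anti-Hebbian step, scaled by responsibility
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # explicit L2 normalization
    return p

W = rng.normal(size=(5, 7))
W /= np.linalg.norm(W, axis=1, keepdims=True)
p = soft_competition_step(W, rng.normal(size=7))   # p sums to 1
```

The responsibility scaling counteracts the clumping tendency noted in Section 2.2: nodes whose hyperplanes already fit the stimulus (near-zero net input) receive most of the learning.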
Acknowledgements We would like to thank Geoffrey Hinton, Sue Becker, Tony Bell and Steve Nowlan for the stimulating and helpful discussions we had. Special thanks to Sue Becker for permission to use her random-dot stereogram generator early in our investigation. This work was supported by a fellowship stipend from the McDonnell-Pew Center for Cognitive Neuroscience at San Diego to the first author, who also received a NIPS travel grant enabling him to attend the conference. Figure 4: Sliding window average response of second-layer nodes after presentation of 250,000 coherent stereograms as a function of stimulus disparity: disparity tuning is preserved in degraded form. Figure 5: Sliding window average of total second-layer response to non-coherent input as a function of stimulus discontinuity: second layer is tuned to coherent patterns. References Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2:53-58. Barlow, H. B. and Földiák, P. (1989). Adaptation and decorrelation in the cortex. In Durbin, R. M., Miall, C., and Mitchison, G. J., editors, The Computing Neuron, chapter 4, pages 54-72. Addison-Wesley, Wokingham. Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, to appear. Elman, J. (1990). Finding structure in time. Cognitive Science, 14:179-211. Földiák, P. (1991). Learning invariance from transformation sequences. Neural Computation, 3:194-200. Intrator, N.
(1991). Exploratory feature extraction in speech signals. In (Lippmann et al., 1991), pages 241-247. Jolliffe, I. (1986). Principal Component Analysis. Springer-Verlag, New York. Kohonen, T. (1989). Self-Organization and Associative Memory. Springer-Verlag, Berlin, 3rd edition. Kung, S. Y. (1990). Neural networks for extracting constrained principal components. Submitted to IEEE Trans. Neural Networks. Leen, T. K. (1991). Dynamics of learning in linear feature-discovery networks. Network, 2:85-105. Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors (1991). Advances in Neural Information Processing Systems, volume 3, Denver 1990. Morgan Kaufmann, San Mateo. Mitchison, G. (1991). Removing time variation with the anti-Hebbian differential synapse. Neural Computation, 3:312-320. Nowlan, S. J. (1990). Maximum likelihood competitive learning. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems, volume 2, pages 574-582, Denver 1989. Morgan Kaufmann, San Mateo. Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15:267-273. Oja, E. and Karhunen, J. (1985). On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106:69-84. Sanger, T. D. (1989). Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2:459-473. Zemel, R. S. and Hinton, G. E. (1991). Discovering viewpoint-invariant relationships that characterize objects. In (Lippmann et al., 1991), pages 299-305.
3D Object Recognition Using Unsupervised Feature Extraction Nathan Intrator Center for Neural Science, Brown University Providence, RI 02912, USA Heinrich H. Bülthoff Dept. of Cognitive Science, Brown University, and Center for Biological Information Processing, MIT, Cambridge, MA 02139 USA Josh I. Gold Center for Neural Science, Brown University Providence, RI 02912, USA Shimon Edelman Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel Abstract Intrator (1990) proposed a feature extraction method that is related to recent statistical theory (Huber, 1985; Friedman, 1987), and is based on a biologically motivated model of neuronal plasticity (Bienenstock et al., 1982). This method has been recently applied to feature extraction in the context of recognizing 3D objects from single 2D views (Intrator and Gold, 1991). Here we describe experiments designed to analyze the nature of the extracted features, and their relevance to the theory and psychophysics of object recognition. 1 Introduction Results of recent computational studies of visual recognition (e.g., Poggio and Edelman, 1990) indicate that the problem of recognition of 3D objects can be effectively reformulated in terms of standard pattern classification theory. According to this approach, an object is represented by a few of its 2D views, encoded as clusters in multidimensional space. Recognition of a novel view is then carried out by interpolating among the stored views in the representation space. A major characteristic of the view interpolation scheme is its sensitivity to viewpoint: the farther the novel view is from the stored views, the lower the expected recognition rate.
This characteristic performance in the recognition of novel views of synthetic 3D stimuli was indeed found in human subjects by Bülthoff and Edelman (1991), who also replicated it in simulated psychophysical experiments that involved a computer implementation of the view interpolation model. Because of the high dimensionality of the raster images seen by the human subjects, it was impossible to use them directly for classification in the simulated experiments. Consequently, the simulations were simplified, in that the views presented to the model were encoded as lists of vertex locations of the objects (which resembled 3D wire frames). This simplification amounts to what is referred to in the psychology of recognition as the feature extraction step (LaBerge, 1976). The discussion of the issue of features of recognition in recent psychological literature is relatively scarce, probably because of the abandonment of invariant feature theories (which postulate that objects are represented by clusters of points in multidimensional feature spaces (Duda and Hart, 1973)) in favor of structural models (see review in (Edelman, 1991)). Although some attempts have been made to generate and verify specific psychophysical predictions based on the feature space approach (see especially (Shepard, 1987)), current feature-based theories of perception seem to be more readily applicable to lower-level visual tasks than to the problem of object recognition. In the present work, our aim was to explore a computationally tractable model of feature extraction conceived as dimensionality reduction, and to test its psychophysical validity. This work was guided by previous successful applications in pattern recognition of dimensionality reduction by a network model implementing Exploratory Projection Pursuit (Intrator, 1990; Intrator and Gold, 1991).
We were also motivated by results of recent psychophysical experiments (Edelman and Bülthoff, 1990; Edelman et al., 1991) that found improvement in subjects' performance with increasing stimulus familiarity. These results are compatible with a feature-based recognition model which extracts problem-specific features in addition to universal ones. Specifically, the subjects' ability to discern key elements of the solution appears to increase as the problem becomes more familiar. This finding suggests that some of the features used by the visual system are based on the task-specific data, and therefore raises the question of how such features can be extracted. It was our conjecture that features found by the EPP model would turn out to be similar to the task-specific features in human vision. 1.1 Unsupervised Feature Extraction - The BCM Model The feature extraction method briefly described below emphasizes dimensionality reduction, while seeking features of a set of objects that would best distinguish among the members of the set. This method does not rely on a general pre-defined set of features. This is not to imply, however, that the features are useful only in recognition of the original set of images from which they were extracted. In fact, the potential importance of these features is related to their invariance properties, or their ability to generalize. Invariance properties of features extracted by this method have been demonstrated previously in speech recognition (Intrator and Tajchman, 1991; Intrator, 1992). From a mathematical viewpoint, extracting features from gray level images is related to dimensionality reduction in a high dimensional vector space, in which an n x k pixel image is considered to be a vector of length n x k.
The dimensionality reduction is achieved by replacing each image (or its high dimensional equivalent vector) by a low dimensional vector in which each element represents a projection of the image onto a vector of synaptic weights (constructed by a BCM neuron). Figure 1: The stable solutions for a two-dimensional two-input problem are m1 and m2 (left), and similarly for two-cluster data (right). The feature extraction method we used (Intrator and Cooper, 1991) seeks multimodality in the projected distribution of these high dimensional vectors. A simple example is illustrated in Figure 1. For a two-input problem in two dimensions, the stable solutions (projection directions) are m1 and m2, each of which has the property of being orthogonal to one of the inputs. In a higher dimensional space, for n linearly independent inputs, a stable solution is one that is orthogonal to all but one of the inputs. In case of noisy but clustered inputs, a stable solution will be orthogonal to all but one of the cluster centers. As is seen in Figure 1 (right), this leads to a bimodal, or, in general, multi-modal, projected distribution. Further details are given in (Intrator and Cooper, 1991). In the present study, the features extracted by the above approach were used for classification as described in (Intrator and Gold, 1991; Intrator, 1992). 1.2 Experimental paradigm We have studied the features extracted by the BCM model by replicating the experiments of Bülthoff and Edelman (1991), designed to test generalization from familiar to novel views of 3D objects. As in the psychophysical experiments, images of novel wire-like computer-generated objects (Bülthoff and Edelman, 1991; Edelman and Bülthoff, 1990) were used as stimuli. These objects proved to be easily manipulated, and yet complex enough to yield interesting results.
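The two-cluster case of Figure 1 can be illustrated directly: a projection direction orthogonal to one cluster center but not the other yields the bimodal projected distribution described above. This toy construction of ours shows the geometry of a stable solution, not the BCM learning dynamics themselves:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two noisy clusters around centers c1 and c2
c1, c2 = np.array([3.0, 0.0]), np.array([0.0, 3.0])
X = np.vstack([c + 0.2 * rng.normal(size=(500, 2)) for c in (c1, c2)])

# A stable projection direction is orthogonal to all but one cluster center:
m = np.array([0.0, 1.0])      # orthogonal to c1, but not to c2
proj = X @ m

# Cluster 1 projects near 0 and cluster 2 near m . c2 = 3,
# giving the bimodal projected distribution of Figure 1 (right)
```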
Using wires also simplified the problem for the feature extractor, as they provided little or no occlusion of the key features from any viewpoint. The objects were generated by the Symbolics S-Geometry™ modeling package, and rendered with a visualization graphics tool (AVS, Stardent, Inc.). Each object consisted of seven connected equal-length segments, pointing in random directions and distributed equally around the origin (for further details, see Edelman and Bülthoff, 1990). In the psychophysical experiments of Bülthoff and Edelman (1991), subjects were
The six wires used Figure 2: The six wires used in the computational experiments, as seen from a single view point. in the experiments are depicted in Figure 2. Given the task of recognizing the six wires, the network extracted features that corresponded to small patches of the different images, namely areas that either remained relatively invariant under the rotation performed during training, or represented distinctive features of specific wires (Intrator and Gold, 1991). The classification results were in good agreement with the psychophysical data of Biilthoff and Edelman (1991): (1) the error rate was the lowest in the INTER condition, (2) recognition deteriorated to chance level with increased misorientation in the EXTRA and ORTHO conditions, and (3) horizontal training led to a better performance in the INTER condition than did vertical training. 1 The first two points were interpreted as resulting from the ability of the BCM network to extract rotation-invariant features. Indeed, features appearing in all the training views would be expected to correspond to the INTER condition. EXTRA and ORTHO views, on the other hand, are less familiar and therefore yield worse performance, and also may require features other than the rotation-invariant ones extracted by the model. lThe horizontal-vertical asymmetry might be related to an asymmetric structure of the visual field in humans (Hughes, 1977). This asymmetry was modeled by increasing the resolution along the horizontal axis. 464 Imrator, Gold, Bulthoff, and Edelman 2 Examining the Features of Recognition To understand the meaning of the features extracted by the BCM network under the various conditions, and to establish a basis for further comparison between the psychophysical experiments and computational models, we developed a method for occluding key features from the images and examining the subsequent effects on the various recognition tasks. 
2.1 The Occlusion Experiment In this experiment, some of the features previously extracted by the network could be occluded during training and/or testing. Each input to a BCM neuron in our model corresponds to a particular point in the 2D input image, while "features" correspond to combinations of excitatory and inhibitory inputs. Assuming that inputs with strong positive weights constitute a significant proportion of the features, we occluded (set to 0) input pixels whose previously computed synaptic weight exceeded a preset threshold. Figure 3 shows a synaptic weight matrix defining a set of features, and the set of wires with the corresponding features occluded. The main hypothesis we tested concerns the general utility of the extracted features for recognition. If the features extracted by the BCM network do capture rotation-invariant aspects of the object and can support recognition across a variety of rotations, then occluding those features during training should lead to a pronounced and general decline in recognition performance of the model. In particular, recognition should deteriorate most significantly in the INTER and EXTRA cases, since they lie in the plane of rotation during training and therefore can be expected to rely to a larger extent on rotation-invariant features. Little change should be seen in the ORTHO condition, on the other hand, because recognition of ORTHO views, situated outside the plane of rotation defined by the training phase, does not benefit from rotation-invariant features. 2.2 Results and Discussion When there was no occlusion, the pattern of the model's performance replicated the results of the psychophysical experiments of (Bülthoff and Edelman, 1991). Specifically, the best performance was achieved for INTER views, with progressive deterioration under EXTRA and ORTHO conditions (Intrator and Gold, 1991; see Figure 4).
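The pixel-occlusion procedure of Section 2.1 amounts to a simple threshold mask over the learned weights. A minimal sketch (the helper name and threshold value are ours, not the authors'):

```python
import numpy as np

def occlude_features(image, weights, threshold):
    """Zero out the pixels whose learned synaptic weight exceeds a preset
    threshold, hiding the features the network relied on."""
    masked = image.copy()
    masked[weights > threshold] = 0.0
    return masked

image = np.arange(9.0).reshape(3, 3)
weights = np.eye(3)            # pretend the diagonal pixels carried strong weights
occluded = occlude_features(image, weights, threshold=0.5)
# diagonal pixels are set to zero; all other pixels are untouched
```

The same mask can be applied during training, during testing, or both, which is exactly the manipulation varied in the experiment.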
The results of simulations involving occlusion of key features during training and no occlusion during testing are illustrated in Figure 5. Essentially the same results were obtained when occlusion was done during either training or testing. Occlusion of the key features led to a number of interesting results. First, when features in the training image were occluded, occluding the same features during testing made little difference. This is not unexpected, since these features were not used to build the internal representation of the objects. Second, there was a general decline in performance within the plane of rotation used during training (especially in the INTER condition) when the extracted features were occluded. This is a strong indication that the features initially chosen by the network were in fact those features which best described the object across a range of rotations. Figure 3: Wires occluded with a feature extracted by BCM network (left). Figure 4: Misclassification performance, regular training. Figure 5: Misclassification performance, training on occluded images. Third, there was little degradation of performance in the ORTHO condition when features were occluded during training. This result lends further support to the notion that the extracted features emphasized rotation-invariant characteristics of the objects, as abstracted in the training phase. Finally, we mention that the occlusion of the same features in a new psychophysical experiment caused the same selective deterioration found in the simulations to appear in the human subjects' performance.
Specifically, the subjects' error rate was elevated in the INTER condition more than in the other conditions, and this effect was significantly stronger for occlusion masks obtained from the extracted features than for other, randomized, masks (Sklar et al., 1991). To summarize, this work was undertaken to elucidate the nature of the features of recognition of 3D objects. We were especially interested in the features extracted by an unsupervised BCM network, and in their relation to computational and psychophysical findings concerning object recognition. We compared recognition performance of our model following training that involved features extracted by the BCM network with performance in the absence of these features. We found that the model's performance was affected by the occlusion of key features in a manner consistent with their predicted computational role. This method of testing the relative importance of features has also been applied in psychophysical experiments. Preliminary results of those experiments show that feature-derived masks have a stronger effect on human performance compared to other masks that occlude the same proportion of the image, but are not obtained via the BCM model. Taken together, these results demonstrate the strength of the dimensionality reduction approach to feature extraction, and provide a foundation for examining the link between computational and psychophysical studies of the features of recognition. Acknowledgements Research was supported by the National Science Foundation, the Army Research Office, and the Office of Naval Research. References Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2:32-48. Bülthoff, H. H. and Edelman, S. (1991). Psychophysical support for a 2D interpolation theory of object recognition.
Proceedings of the National Academy of Science. To appear. Duda, R. O. and Hart, P. E. (1973). Pattern Classification and Scene Analysis. John Wiley, New York. Edelman, S. (1991). Features of recognition. CS-TR 10, Weizmann Institute of Science. Edelman, S. and Bülthoff, H. H. (1990). Viewpoint-specific representations in three-dimensional object recognition. A.I. Memo No. 1239, Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Edelman, S., Bülthoff, H. H., and Sklar, E. (1991). Task and object learning in visual recognition. CBIP Memo No. 63, Center for Biological Information Processing, Massachusetts Institute of Technology. Friedman, J. H. (1987). Exploratory projection pursuit. Journal of the American Statistical Association, 82:249-266. Huber, P. J. (1985). Projection pursuit (with discussion). The Annals of Statistics, 13:435-475. Hughes, A. (1977). The topography of vision in mammals of contrasting life style: Comparative optics and retinal organisation. In Crescitelli, F., editor, The Visual System in Vertebrates, Handbook of Sensory Physiology VII/5, pages 613-756. Springer Verlag, Berlin. Intrator, N. (1990). Feature extraction using an unsupervised neural network. In Touretzky, D. S., Elman, J. L., Sejnowski, T. J., and Hinton, G. E., editors, Proceedings of the 1990 Connectionist Models Summer School, pages 310-318. Morgan Kaufmann, San Mateo, CA. Intrator, N. (1992). Feature extraction using an unsupervised neural network. Neural Computation, 4:98-107. Intrator, N. and Cooper, L. N. (1991). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks. To appear. Intrator, N. and Gold, J. I. (1991). Three-dimensional object recognition of gray level images: The usefulness of distinguishing features. Submitted. Intrator, N. and Tajchman, G. (1991).
Supervised and unsupervised feature extraction from a cochlear model for speech recognition. In Juang, B. H., Kung, S. Y., and Kamm, C. A., editors, Neural Networks for Signal Processing: Proceedings of the 1991 IEEE Workshop, pages 460-469. LaBerge, D. (1976). Perceptual learning and attention. In Estes, W. K., editor, Handbook of Learning and Cognitive Processes, volume 4, pages 237-273. Lawrence Erlbaum, Hillsdale, New Jersey. Poggio, T. and Edelman, S. (1990). A network that learns to recognize three-dimensional objects. Nature, 343:263-266. Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237:1317-1323. Sklar, E., Intrator, N., Gold, J. I., Edelman, S. Y., and Bülthoff, H. H. (1991). A hierarchical model for 3D object recognition based on 2D visual representation. In Neurosci. Soc. Abs. PART VIII OPTICAL CHARACTER RECOGNITION
1991
134
468
Information Processing to Create Eye Movements David A. Robinson Departments of Ophthalmology and Biomedical Engineering The Johns Hopkins University School of Medicine Baltimore, MD 21205 ABSTRACT Because eye muscles never cocontract and do not deal with external loads, one can write an equation that relates motoneuron firing rate to eye position and velocity - a very uncommon situation in the CNS. The semicircular canals transduce head velocity in a linear manner by using a high background discharge rate, imparting linearity to the premotor circuits that generate eye movements. This has allowed deducing some of the signal processing involved, including a neural network that integrates. These ideas are often summarized by block diagrams. Unfortunately, they are of little value in describing the behavior of single neurons - a finding supported by neural network models. 1 INTRODUCTION The neural networks in our studies are quite simple. They differ from other applications in that they attempt to model real neural subdivisions of the oculomotor system which have been extensively studied with microelectrodes. Thus, we can ask the extent to which neural networks succeed in describing the behavior of hidden units that is already known. A major benefit of using neural networks in the oculomotor system is to illustrate clearly the shortcomings of block diagram models which tell one very little about what one may expect if one pokes a microelectrode inside one of its boxes. Conversely, single unit behavior is so loosely coupled to system behavior that, although the simplicity of the oculomotor system allows the relationships to be understood, one fears that, in a more complicated system, the behavior of single (hidden) units will give little information about what a system is trying to do, never mind how.
2 SIMPLIFICATIONS IN OCULOMOTOR CONTROL Because it is impossible to cocontract our eye muscles and because their viscoelastic load never varies, it is possible to write an equation that uniquely relates the discharge rates of their motoneurons and the position of the load (eye position). This cannot be done in the case of, for example, limb muscles. Moreover, this system is well-approximated by a first-order, linear differential equation. Linearity comes about from the design of the semicircular canals, the origin of the vestibulo-ocular reflex (VOR). This reflex creates eye movements that compensate for head movements to stabilize the eyes in space for clear vision. The canals primarily transduce head velocity, neurally encoded into the discharge rates of its afferents. These rates modulate above and below a high background rate (typically 100 spikes/sec) that keeps them well away from cutoff and provides a wide linear range. The core of this reflex is only three neurons long and the canals impose their properties - linear modulation around a high background rate - onto all down-stream neurons including the motoneurons. In addition to linearity, the functions of the various oculomotor subsystems are clear. There is no messy stretch reflex, the muscle fibers are straight and parallel, and there is only one "joint." All these features combine to help us understand the premotor organization of oculomotor signals in the caudal pons, a system that has enjoyed much block-diagram modelling and now, neural network modelling. 3 DISTRIBUTION OF OCULOMOTOR SIGNALS The first application of neural networks to the oculomotor system was a study of Anastasio and Robinson (1989). The problem addressed concerned the convergence of diverse oculomotor signals in the caudal pons.
There are three major oculomotor subsystems: the VOR; the saccadic system that causes the eyes to jump rapidly from one target to another; and the smooth pursuit system that allows the eyes to track a moving target. Each appears in the caudal pons as a velocity command. The canals, via the vestibular nuclei, provide an eye-velocity command, Ev, for compensatory vestibular eye movements. Burst neurons in the nearby pontine reticular formation provide a signal, Es, for the desired eye velocity for a saccade. Purkinje cells in the cerebellum carry an eye-velocity signal, Ep, for pursuit eye movements. Thus, three eye-velocity commands converge in the region of the motoneurons. When one records from cells in this region one finds a discharge rate R of:

R = R0 + rp Ep + rv Ev + rs Es    (1)

where R0 is the high background rate previously described and rp, rv and rs are coefficients that can assume any values, in a seemingly random way, for any one neuron (e.g. Tomlinson and Robinson, 1984). Now a block-diagram model need show only the three velocity commands converging on the motoneurons and would not suggest the existence of neurons carrying complicated signals like that of Equ. (1). On the other hand, such behavior has a nice, messy, biological flavor. Somehow, it would seem odd if such signals did not exist. What is clearly happening is that the signals Ep, Ev and Es are being distributed over the interneurons and then reassembled in the correct amount on the motoneurons. This is just a simple, specific example of distributed parallel processing in the nervous system. A neural network model is merely an explicit statement of such a distribution. Initial randomization of the synaptic weights followed by error-driven learning creates hidden units that conform to Equ. (1). We concluded that a neural network model was entirely appropriate for this neural system.
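As a toy numerical illustration of Equ. (1) (my own sketch, not the authors' simulation; the unit count and coefficient distribution are invented), a population of units mixing the three commands with random coefficients still lets a linear readout reassemble the commands exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "interneuron" mixes the three eye-velocity commands with seemingly
# random coefficients, on top of a high background rate, as in Equ. (1).
R0 = 100.0                                   # background rate (spikes/sec)
n_units = 50
coef = rng.normal(0.0, 1.0, (n_units, 3))    # [rp, rv, rs] per unit

Ep, Ev, Es = 4.0, -7.0, 12.0                 # pursuit, vestibular, saccadic
rates = R0 + coef @ np.array([Ep, Ev, Es])   # discharge rate of each unit

# The commands can be reassembled exactly from the population by a linear
# readout (least squares), even though no single unit carries a clean signal.
recovered, *_ = np.linalg.lstsq(coef, rates - R0, rcond=None)
print(recovered)   # recovers [4, -7, 12]
```

The point mirrors the text: single-unit coefficients look random, yet the population as a whole encodes the three commands losslessly.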
This exercise also brought home, although in a simple way, the obvious, but often overlooked, message that block-diagram models can be misleading about how their conceptual functions are realized by neurons. We next examined distribution of the spatial properties of the interneurons of the VOR (Anastasio and Robinson, 1990). We used only the vertical VOR to keep things simple. The inputs were the primary afferents of the four vertical semicircular canals that sense head rotations in all combinations of pitch and roll. The output layer was the four motoneurons of the vertical recti and oblique muscles that move the eye vertically and in cyclotorsion. The model was trained to perform compensatory eye movements in all combinations of pitch and roll. The sensitivity axis is that axis around which rotation of the head or eye produces maximum modulation in discharge rate. The sensitivity axis of a canal unit is perpendicular to the plane in which the canal lies. That of a motoneuron is that axis around which its muscle will rotate the eye. What were the sensitivity axes of the hidden units? A block diagram of the spatial manipulations of the VOR consists of matrices. The geometry of the canals can be described by a 3 x 3 matrix that converts a head-velocity vector into its neurally encoded representation on canal nerves. The geometry of the muscles can be described as another matrix that converts the neurally-encoded motoneuron vector into a physical eye-rotation vector. The brain-stem matrix describes how the canal neurons must project to the motoneurons (Robinson, 1982). In this scheme, interneurons would have only fixed sensitivity axes lying somewhere between that of a canal unit and a motoneuron. In our model, however, sensitivity axes are distributed in the network; those of the hidden units point in a variety of directions. This has also been confirmed by microelectrode recordings (Fukushima et al., 1990).
Thus, spatial aspects of transformations, just like temporal aspects, are distributed over the interneurons. Again, block-diagrams, in this case in the form of a matrix, are misleading about what one will find with a microelectrode. Again, recording from single units tells one little about what a network is trying to do. There is much talk in motor physiology about coordinate systems and transformations from one to another. The question is asked "What coordinate system is this neuron working in?" In this example, individual hidden units do not behave as if they belonged to any coordinate system and this raises the problem of whether this is really a meaningful question. 4 THE NEURAL INTEGRATOR Muscles are largely position actuators; against a constant load, position is proportional to innervation. The motoneurons of the extraocular muscles also need a signal proportional to desired eye position as well as velocity. Since eye-movement commands enter the caudal pons as eye-velocity commands, the necessary eye-position command is obtained by integrating the velocity signals (see Robinson, 1989, for a review). The location of the neural network has been discovered in the caudal pons and it is intriguing to speculate how it might work. Hardwired networks, based on positive feedback, have been proposed utilizing lateral inhibition (Cannon et al., 1983) and more recently a learning neural network (dynamic) has been proposed for the VOR (Arnold and Robinson, 1991). The hidden units are freely connected, the input is from two canal units in push-pull, and the output is two motoneurons, also in push-pull, which operate on the plant transfer function, 1/(sTc + 1) (Tc is the plant time constant), to create an eye position which should be the time integral of the input head velocity. The error is retinal image slip (the difference between actual and ideal eye velocity).
Its rms value over a trial interval is used to change synaptic weights in a steepest descent method until the error is negligible. To compensate the plant lag, the network must produce a combination output of eye velocity plus its integral, eye position, and these two signals, with various weights, are seen on all hidden units which, thus, look remarkably like the integrator neurons that we record from. This exercise raises several issues. The block-diagram model of this network is a box marked 1/s in parallel with the direct velocity feedforward path given the gain Tc. The parallel combination is (sTc + 1)/s. The zero cancels the pole of the plant leaving 1/s, so that eye position is the perfect integral of head velocity. While such a diagram is conceptually very useful in diagnosing disorders (Zee and Robinson, 1979), it contains no hint of how neurons might effect integration and so is useless in this regard. Moreover, Galiana and Outerbridge (1984) have pointed out, although in a more complex context, that a direct feedforward path of gain Tc with a positive feedback path around it containing a model of the plant produces exactly the same transfer function. Should we worry about which is correct - feedforward or feedback? Perhaps we should, but note that the neural network model of the integrator just described contains both feedback and feedforward pathways and relies on positive feedback. There is a suspicion that the latter network may subsume both block diagrams, making questions about which is correct irrelevant. One thing is certain: at this level of organization, so close to the neuron level, block-diagrams, while having conceptual value, are not only useless but can be misleading if one is interested in describing real neural networks. Finally, how does one test a model network such as that proposed for the neural integrator? It involves the microcircuitry with which small sets of circumscribed cells talk to each other and process signals.
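Before returning to that question, note that the pole-zero cancellation described above is easy to verify numerically. The following is my own sketch with invented constants (Tc, step size, input waveform), not the authors' simulation: the plant obeys Tc*dE/dt + E = u, and driving it with u = Tc*v + integral(v) makes eye position track the integral of head velocity.

```python
import numpy as np

Tc = 0.2                           # assumed plant time constant (seconds)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * t)          # head-velocity command

P = np.cumsum(v) * dt              # neural-integrator output: integral of v
u = Tc * v + P                     # velocity feedforward plus position signal

# Simulate the plant Tc*dE/dt = u - E with forward-Euler steps.
E = np.zeros_like(t)
for k in range(1, len(t)):
    E[k] = E[k - 1] + dt * (u[k - 1] - E[k - 1]) / Tc

# The zero (sTc + 1) cancels the plant pole, so E should equal P.
print(np.max(np.abs(E - P)))       # small residual (discretization error only)
```

With the combination signal the plant's lag disappears and eye position is, to within discretization error, the perfect integral of the head-velocity input.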
The technology is not yet available to allow us to answer this question. I know of no real, successful examples. This, I believe, is a true roadblock in neurophysiology. If we cannot solve it, we must forever be content to describe what cell groups do but not how they do it. Acknowledgements This research is supported by Grant 5 R37 EY00598 from the National Eye Institute of the National Institutes of Health. References T.J. Anastasio & D.A. Robinson. (1989) The distributed representation of vestibuloocular signals by brain-stem neurons. Biol. Cybern., 61:79-88. T.J. Anastasio & D.A. Robinson. (1990) Distributed parallel processing in the vertical vestibulo-ocular reflex: Learning networks compared to tensor theory. Biol. Cybern., 63:161-167. D.B. Arnold & D.A. Robinson. (1991) A learning network model of the neural integrator of the oculomotor system. Biol. Cybern., 64:447-454. S.C. Cannon, D.A. Robinson & S. Shamma. (1983) A proposed neural network for the integrator of the oculomotor system. Biol. Cybern., 49:127-136. K. Fukushima, S.I. Perlmutter, J.F. Baker & B.W. Peterson. (1990) Spatial properties of second-order vestibulo-ocular relay neurons in the alert cat. Exp. Brain Res., 81:462-478. H.L. Galiana & J.S. Outerbridge. (1984) A bilateral model for central neural pathways in vestibuloocular reflex. J. Neurophysiol., 51:210-241. D.A. Robinson. (1982) The use of matrices in analyzing the three-dimensional behavior of the vestibulo-ocular reflex. Biol. Cybern., 46:53-66. D.A. Robinson. (1989) Integrating with neurons. Ann. Rev. Neurosci., 12:33-45. R.D. Tomlinson & D.A. Robinson. (1984) Signals in vestibular nucleus mediating vertical eye movements in the monkey. J. Neurophysiol., 51:1121-1136. D.S. Zee & D.A. Robinson. (1979) Clinical applications of oculomotor models. In H.S. Thompson (ed.), Topics in Neuro-Ophthalmology, 266-285. Baltimore, MD: Williams & Wilkins.
1991
135
469
Propagation Filters in PDS Networks for Sequencing and Ambiguity Resolution Ronald A. Sumida Michael G. Dyer Artificial Intelligence Laboratory Computer Science Department University of California Los Angeles, CA, 90024 sumida@cs.ucla.edu Abstract We present a Parallel Distributed Semantic (PDS) Network architecture that addresses the problems of sequencing and ambiguity resolution in natural language understanding. A PDS Network stores phrases and their meanings using multiple PDP networks, structured in the form of a semantic net. A mechanism called Propagation Filters is employed: (1) to control communication between networks, (2) to properly sequence the components of a phrase, and (3) to resolve ambiguities. Simulation results indicate that PDS Networks and Propagation Filters can successfully represent high-level knowledge, can be trained relatively quickly, and provide for parallel inferencing at the knowledge level. 1 INTRODUCTION Backpropagation has shown considerable potential for addressing problems in natural language processing (NLP). However, the traditional PDP [Rumelhart and McClelland, 1986] approach of using one (or a small number) of backprop networks for NLP has been plagued by a number of problems: (1) it has been largely unsuccessful at representing high-level knowledge, (2) the networks are slow to train, and (3) they are sequential at the knowledge level. A solution to these problems is to represent high-level knowledge structures over a large number of smaller PDP networks. Reducing the size of each network allows for much faster training, and since the different networks can operate in parallel, more than one knowledge structure can be stored or accessed at a time.
In using multiple networks, however, a number of important issues must be addressed: how the individual networks communicate with one another, how patterns are routed from one network to another, and how sequencing is accomplished as patterns are propagated. In previous papers [Sumida and Dyer, 1989] [Sumida, 1991], we have demonstrated how to represent high-level semantic knowledge and generate dynamic inferences using Parallel Distributed Semantic (PDS) Networks, which structure multiple PDP networks in the form of a semantic network. This paper discusses how Propagation Filters address communication and sequencing issues in using multiple PDP networks for NLP. 2 PROPAGATION FILTERS Propagation Filters are inspired by the idea of skeleton filters, proposed by [Sejnowski, 1981, Hinton, 1981]. They are composed of: (1) sets of filter ensembles that gate the connection from a source to a destination and (2) a selector ensemble that decides which filter group to enable. Each filter group is sensitive to a particular pattern over the selector. When the particular pattern occurs, the source pattern is propagated to its destination. Figure 1 is an example of a propagation filter where the "01" pattern over units 2 and 3 of the selector opens up filter group1, thus permitting the pattern to be copied from source1 to destination1. The units of filter group2 do not respond to the "01" pattern and remain well below threshold, so the activation pattern over the source2 ensemble is not propagated.
Figure 1: A Propagation Filter architecture. The small circles indicate PDP units within an ensemble (oval), the black arrows represent full connectivity between two ensembles, and the dotted lines connecting units 2 and 3 of the selector to each filter group oval indicate total connectivity from selector units to filter units.
The jagged lines are suggestive of temporary patterns of activation over an ensemble. The units in a filter group receive input from units in the selector. The weights on these input connections are set so that when a specific pattern occurs over the selector, every unit in the filter group is driven above threshold. The filter units also receive input from the source units and provide output to the destination units. The weights on both these i/o connections can be set so that the filter merely copies the pattern from the source to the destination when its units exceed threshold (as in Figure 1). Alternatively, these weights can be set (e.g. using backpropagation) so that the filter transforms the source pattern to a desired destination pattern. 3 PDS NETWORKS PDS Networks store syntactic and semantic information over multiple PDP networks, with each network representing a class of concepts and with related networks connected in the general manner of a semantic net. For example, Figure 2 shows a network for encoding a basic sentence consisting of a subject, verb and direct object. The network is connected to other PDP networks, such as HUMAN, VERB and ANIMAL, that store information about the content of the subject role (s-content), the filler for the verb role, and the content of the direct-object role (do-content). Each network functions as a type of encoder net, where: (1) the input and output layers have the same number of units and are presented with exactly the same pattern, (2) the weights of the network are modified so that the input pattern will recreate itself as output, and (3) the resulting hidden unit pattern represents a reduced description of the input. In the networks that we use, a single set of units is used for both the input and output layers.
The net can thus be viewed as an encoder with the output layer folded back onto the input layer and with two sets of connections: one from the single input/output layer to the hidden layer, and one from the hidden layer back to the i/o layer. In Figure 2 for example, the subject-content, verb, and direct-object-content role-groups collectively represent the input/output layer, and the BASIC-S ensemble represents the hidden layer.
Figure 2: The network that stores information about a basic sentence. The black arrows represent links from the input layer to the hidden layer and the grey arrows indicate links from the hidden layer to the output layer. The thick lines represent links between networks that propagate a pattern without changing it.
A network stores information by learning to encode the items in its training set. For each item, the patterns that represent its features are presented to the input role groups, and the weights are modified so that the patterns recreate themselves as output. For example, in Figure 2, the MAN-"hit"-DOG pattern is presented to the BASIC-S network by propagating the MAN pattern from the HUMAN network to the s-content role, the "hit" pattern from the VERB network to the verb-content role, and the DOG pattern from the ANIMAL network to the do-content role. The BASIC-S network is then trained on this pattern by modifying the weights between the input/output role groups and the BASIC-S hidden units so that the MAN-"hit"-DOG pattern recreates itself as output. The network automatically generalizes by having the hidden units become sensitive to common features of the training patterns. When the network is tested on a new concept (i.e., one that is not in the training set), the pattern over the hidden units reflects its similarity to the items seen during training.
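The encoder idea above can be sketched with a linear stand-in (my own illustration, not the paper's trained network; a linear encoder minimizing reconstruction error is given by the top principal components, so SVD plays the role of backprop training here, and all sizes are invented): role patterns are concatenated, squeezed through a small hidden layer, and the hidden pattern serves as a reduced description of the whole phrase.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: each row stands for [s-content | verb | do-content] patterns.
X = rng.normal(size=(20, 12))
X = X - X.mean(axis=0)

# Top principal components give the optimal linear encoder/decoder pair.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W_enc = Vt[:4].T                 # 12 input/output units -> 4 hidden units
W_dec = Vt[:4]                   # 4 hidden units, "folded back" onto i/o layer

hidden = X @ W_enc               # reduced description, e.g. of MAN-"hit"-DOG
recon = hidden @ W_dec           # reconstruction on the shared i/o layer
print(np.linalg.norm(X - recon) / np.linalg.norm(X))  # relative recon error
```

As in the paper's encoder nets, similar inputs land near each other in the hidden space, so a novel input's hidden pattern reflects its similarity to training items.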
3.1 SEQUENCING PHRASES To illustrate how Propagation Filters sequence the components of a phrase, consider the following sentence, whose constituents occur in the standard subject-verb-object order: S1. The man hit the dog. We would like to recognize that the BASIC-S network of Figure 2 is applicable to the input by binding the roles of the network to the correct components. In order to generate the proper role bindings, the system must: (1) recognize the components of the sentence in the correct order (e.g. "the man" should be recognized as the subject, "hit" as the verb, and "the dog" as the direct object), and (2) associate each phrase of the input with its meaning (e.g. reading the phrase "the man" should cause the pattern for the concept MAN to appear over the HUMAN units). Figure 3 illustrates how Propagation Filters properly sequence the components of the sentence. First, the phrase "the man" is read by placing the pattern for "the" over the determiner network (Step 1) and the pattern for "man" over the noun network (Step 2). The "the" pattern is then propagated to the np-determiner input role units of the NP network (Step 3) and the "man" pattern to the np-noun role input units (Step 4). The pattern that results over the hidden NP units is then used to represent the entire phrase "the man" (Step 5). The filters connecting the NP units with the subject and direct object roles are not enabled, so the pattern is not yet bound to any role. Next, the word "hit" is read and a pattern for it is generated over the VERB units (Step 6). The BASIC-S network is now applicable to the input (for simplicity of exposition, we ignore passive constructions here). Since there are no restrictions (i.e., no filter) on the connection between the VERB units and the verb role of BASIC-S, the "hit" pattern is bound to the verb role (Step 7). The verb role units act as the selector of the Propagation Filter that connects the NP units to the subject units.
The filter is constructed so that whenever any of the verb role units receive non-zero input (i.e., whenever the role is bound) it opens up the filter group connecting NP with the subject role (Step 8). Thus, the pattern for "the man" is copied from NP to the subject (Step 9) and deleted from the NP units. Similarly, the subject units act as the selector of a filter that connects NP with the direct object. Since the subject was just bound, the connection from the NP to direct object is enabled (Step 10). At this point, the system has generated the expectation that an NP will occur next. The phrase "the dog" is now read and its pattern is generated over the NP units (Steps 11-15). Finally, the pattern for "the dog" is copied across the open connection from NP to direct-object (Step 16).
Figure 3: The figure shows how Propagation Filters sequence the components of the sentence "The man hit the dog". The numbers indicate the order of events. The dotted arrows indicate Propagation Filter connections from a selector to an open filter group (indicated by a black circle) and the dark arrows represent the connections from a source to a destination.
3.2 ASSOCIATING PHRASES WITH MEANINGS The next task is to associate lexical patterns with their corresponding semantic patterns and bind semantic patterns to the appropriate roles in the BASIC-S network. Figure 4 indicates how Propagation Filters: (1) transform the phrase "the man" into its meaning (i.e., MAN), and (2) bind MAN to the s-content role of BASIC-S. Reading the word "man", by placing the "man" pattern into the noun units (Step 2), opens the filter connecting N to HUMAN (Step 5), while leaving the filters connecting N to other networks (e.g. ANIMAL) closed.
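Stepping back to the sequencing of Sec. 3.1, the gating order of Figure 3 can be caricatured in a few lines. This is a hand-rolled sketch of mine, not the authors' implementation; the dictionary and queue are my own stand-ins for ensembles and filters:

```python
# Toy sketch of the Figure 3 logic: a role's filter opens only after the
# role acting as its selector has been bound.

def bind_sentence(np_patterns, verb_pattern):
    """Bind subject, verb, and direct object in the order the filters allow."""
    roles = {"subject": None, "verb": None, "direct_object": None}
    np_queue = list(np_patterns)       # NP patterns in reading order

    roles["verb"] = verb_pattern       # no filter gates the verb connection
    if roles["verb"] is not None:      # verb selector opens NP -> subject
        roles["subject"] = np_queue.pop(0)
    if roles["subject"] is not None:   # subject selector opens NP -> d.o.
        roles["direct_object"] = np_queue.pop(0)
    return roles

print(bind_sentence(["the man", "the dog"], "hit"))
```

The first NP read is routed to the subject role once the verb is bound, and the next NP to the direct object, reproducing the subject-verb-object order without any central sequencer.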
The opened filter transforms the lexical pattern "man" over N into the semantic pattern MAN over HUMAN (Step 7). Binding "the man" to subject (Step 8) by the procedure shown in Figure 3 opens the filter connecting HUMAN to the s-content role of BASIC-S (Step 9). The s-content role is then bound to MAN (Step 10). The do-content role is bound by a procedure similar to that shown in Figure 4. When "dog" is read, the filter connecting N with ANIMAL is opened while filters to other networks (e.g. HUMAN) remain closed. The "dog" pattern is then transformed into the semantic pattern DOG over the ANIMAL units. When "the dog" is bound to direct-object as in Figure 3, the filter from ANIMAL to do-content is opened, and DOG is propagated from ANIMAL to the do-content role of BASIC-S.
Figure 4: The figure illustrates how the concept MAN is bound to the s-content role of BASIC-S, given the phrase "the man" as input. Black (white) circles indicate open (closed) filters.
3.3 AMBIGUITY RESOLUTION AND INFERENCING There are two forms that inference and ambiguity resolution can take: (1) routing patterns (e.g. propagation of role bindings) to the appropriate subnets and (2) pattern reconstruction from items seen during training. (1) Pattern Routing: Propagation Filters help resolve ambiguities by having the selector only open connections to the network containing the correct interpretation. As an example, consider the following sentence: S2. The singer hit the note. Both S2 and S1 (Sec. 3.1) have the same syntactic structure and are therefore represented over the BASIC-S ensemble of Figure 2. However, the meaning of the word "hit" in S1 refers to physically striking an object while in S2 it refers to singing a musical note. The pattern over the BASIC-S units that represents S1 differs significantly from the pattern that represents S2, due to the differences in the s-content and do-content roles.
A Propagation Filter with the BASIC-S units as its selector uses the differences in the two patterns to determine whether to open connections to the HIT network or to the PERFORM-MUSIC network (Figure 5).
Figure 5: The pattern over BASIC-S acts as a selector that determines whether to open the connections to HIT or to PERFORM-MUSIC. Since the input here is MAN-"hit"-DOG, the filters to HIT are opened while the filters to PERFORM-MUSIC remain closed. The black and grey arrows indicating connections between the input/output and hidden layers have been replaced by a single thin line.
During training, the BASIC-S network was presented with sentences of the general form <MUSIC-PERFORMER "hit" MUSICAL-NOTE> and <ANIMATE "hit" OBJECT>. The BASIC-S hidden units generalize from the training sentences by developing a distinct pattern for each of the two types of "hit" sentences. The Propagation Filter is then constructed so that the hidden unit pattern for <MUSIC-PERFORMER "hit" MUSICAL-NOTE> opens up connections to PERFORM-MUSIC, while the pattern for <ANIMATE "hit" OBJECT> opens up connections to HIT. Thus, when S1 is presented, the BASIC-S hidden units develop the pattern classifying it as <ANIMATE "hit" OBJECT>, which enables connections to HIT. For example, Figure 5 shows how the MAN pattern is routed from the s-content role of BASIC-S to the actor role of HIT and the DOG pattern is routed from the do-content role of BASIC-S to the object role of HIT. If S2 is presented instead, the hidden units will classify it as <MUSIC-PERFORMER "hit" MUSICAL-NOTE> and open the connections to PERFORM-MUSIC. The technique of using propagation filters to control pattern routing can also be applied to generate inferences. Consider the sentence, "Douglas hit Tyson". Since both are boxers, it is plausible they are involved in a competitive activity. In S1, however, punishing the dog is a more plausible motivation for HIT.
The proper inference is generated in each case by training the HIT network (Figure 5) on a number of instances of boxers hitting one another and of people hitting dogs. The network learns two distinct sets of hidden unit patterns: <BOXER-HIT-BOXER> and <HUMAN-HIT-DOG>. A Propagation Filter (like that shown in Figure 5), with the HIT units as its selector, uses the differences in the two classes of patterns to route to either the network that stores competitive activities or to the network that stores punishment acts. (2) Pattern Reconstruction: The system also resolves ambiguities by reconstructing patterns that were seen during training. For example, the word "note" in sentence S2 is ambiguous and could refer to a message, as in "The singer left the note". Thus, when the word "note" is read in S2, the do-content role of BASIC-S can be bound to MESSAGE or to MUSICAL-NOTE. To resolve the ambiguity, the BASIC-S network uses the information that SINGER is bound to the s-content role and "hit" to the verb role to: (1) reconstruct the <MUSIC-PERFORMER "hit" MUSICAL-NOTE> pattern that it learned during training and (2) predict that the do-content will be MUSICAL-NOTE. Since the prediction is consistent with one of the possible meanings for the do-content role, the ambiguity is resolved. Similarly, if the input had been "The singer left the note", BASIC-S would use the binding of a human to the s-content role and the binding of "left" to the verb role to reconstruct the pattern <HUMAN "left" MESSAGE> and thus resolve the ambiguity. 4 CURRENT STATUS AND CONCLUSIONS PDS Networks and Propagation Filters are implemented in OCAIN, a natural language understanding system that: (1) takes each word of the input sequentially, (2) binds the roles of the corresponding syntactic and semantic structures in the proper order, and (3) resolves ambiguities.
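The class-based routing of Sec. 3.3 amounts to a nearest-prototype choice over the BASIC-S hidden pattern. A toy numerical sketch (mine; the patterns, prototypes, and distance measure are invented for illustration):

```python
import numpy as np

# The selector compares the hidden pattern against the class prototypes
# formed in training and opens connections to the closer subnet.
prototypes = {
    "HIT": np.array([1.0, 0.0, 1.0, 0.0]),            # <ANIMATE "hit" OBJECT>
    "PERFORM-MUSIC": np.array([0.0, 1.0, 0.0, 1.0]),  # <PERFORMER "hit" NOTE>
}

def open_subnet(hidden):
    """Return the name of the subnet whose prototype is nearest."""
    return min(prototypes, key=lambda k: np.linalg.norm(hidden - prototypes[k]))

s1_hidden = np.array([0.9, 0.1, 0.8, 0.2])   # hidden pattern for MAN-"hit"-DOG
s2_hidden = np.array([0.2, 0.8, 0.1, 0.9])   # hidden pattern for SINGER-"hit"-NOTE
print(open_subnet(s1_hidden))    # HIT
print(open_subnet(s2_hidden))    # PERFORM-MUSIC
```

In the actual model the "comparison" is implicit in the filter weights rather than an explicit distance computation, but the routing outcome is the same: each sentence class enables only its matching subnet.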
In our simulations with OCAIN, we successfully represented high-level knowledge by structuring individual PDP networks in the form of a semantic net. Because the system's knowledge is spread over multiple subnetworks, each one is relatively small and can therefore be trained quickly. Since the subnetworks can operate in parallel, OCAIN is able to store and retrieve more than one knowledge structure simultaneously, thus achieving knowledge-level parallelism. Because PDP ensembles (versus single localist units) are used, the generalization, noise and fault-tolerance properties of the PDP approach are retained. At the same time, Propagation Filters provide control over the way patterns are routed (and transformed) between subnetworks. The PDS architecture, with its Propagation Filters, thus provides significant advantages over traditional PDP models for natural language understanding. References [Hinton, 1981] G. E. Hinton. Implementing Semantic Networks in Parallel Hardware. In Parallel Models of Associative Memory, Lawrence Erlbaum, Hillsdale, NJ, 1981. [Rumelhart and McClelland, 1986] D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing, Volume 1. MIT Press, Cambridge, Massachusetts, 1986. [Sejnowski, 1981] T. J. Sejnowski. Skeleton Filters in the Brain. In Parallel Models of Associative Memory, Lawrence Erlbaum, Hillsdale, NJ, 1981. [Sumida and Dyer, 1989] R. A. Sumida and M. G. Dyer. Storing and Generalizing Multiple Instances while Maintaining Knowledge-Level Parallelism. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, 1989. [Sumida, 1991] R. A. Sumida. Dynamic Inferencing in Parallel Distributed Semantic Networks. In Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Chicago, IL, 1991.
1991
Neural Network Routing for Random Multistage Interconnection Networks Mark W. Goudreau Princeton University and NEC Research Institute, Inc. 4 Independence Way Princeton, NJ 08540 C. Lee Giles NEC Research Institute, Inc. 4 Independence Way Princeton, NJ 08540 Abstract A routing scheme that uses a neural network has been developed that can aid in establishing point-to-point communication routes through multistage interconnection networks (MINs). The neural network is a network of the type that was examined by Hopfield (Hopfield, 1984 and 1985). In this work, the problem of establishing routes through random MINs (RMINs) in a shared-memory, distributed computing system is addressed. The performance of the neural network routing scheme is compared to two more traditional approaches - exhaustive search routing and greedy routing. The results suggest that a neural network router may be competitive for certain RMINs. 1 INTRODUCTION A neural network has been developed that can aid in establishing point-to-point communication routes through multistage interconnection networks (MINs) (Goudreau and Giles, 1991). Such interconnection networks have been widely studied (Huang, 1984; Siegel, 1990). The routing problem is of great interest due to its broad applicability. Although the neural network routing scheme can accommodate many types of communication systems, this work concentrates on its use in a shared-memory, distributed computing system. Figure 1: The communication system with a neural network router. The input ports (processors) are on the left, while the output ports (memory modules) are on the right. Neural networks have sometimes been used to solve certain interconnection network 
problems, such as finding legal routes (Brown, 1989; Hakim and Meadows, 1990) and increasing the throughput of an interconnection network (Brown and Liu, 1990; Marrakchi and Troudet, 1989). The neural network router that is the subject of this work, however, differs significantly from these other routers and is specially designed to handle parallel processing systems that have MINs with random interstage connections. Such random MINs are called RMINs. RMINs tend to have greater fault-tolerance than regular MINs. The problem is to allow a set of processors to access a set of memory modules through the RMIN. A picture of the communication system with the neural network router is shown in Figure 1. There are m processors and n memory modules. The system is assumed to be synchronous. At the beginning of a message cycle, some set of processors may desire to access some set of memory modules. It is the job of the router to establish as many of these desired connections as possible in a non-conflicting manner. Obtaining the optimal solution is not critical. Stymied processors may attempt communication again during the subsequent message cycle. It is the combination of speed and the quality of the solution that is important. The object of this work was to discover if the neural network router could be competitive with other types of routers in terms of quality of solution, speed, and resource utilization. Figure 2: Three random multistage interconnection networks. The blocks that are shown are crossbar switches, for which each input may be connected to each output. To this end, the neural network routing scheme was compared to two other schemes for routing in RMINs - namely, exhaustive search routing and greedy routing. 
So far, the results of this investigation suggest that the neural network router may indeed be a practicable alternative for routing in RMINs that are not too large. 2 EXHAUSTIVE SEARCH ROUTING The exhaustive search routing method is optimal in terms of the ability of the router to find the best solution. There are many ways to implement such a router. One approach is described here. For a given interconnection network, every route from each input to each output was stored in a database. (The RMINs that were used as test cases in this paper always had at least one route from each processor to each memory module.) When a new message cycle began and a new message set was presented to the router, the router would search through the database for a combination of routes for the message set that had no conflicts. A conflict was said to occur if more than one route in the set of routes used a single bus in the interconnection network. In the case where every combination of routes for the message set had a conflict, the router would find a combination of routes that could establish the largest possible number of desired connections. If there are k possible routes for each message, this algorithm needs a memory of size Θ(mnk) and, in the worst case, takes exponential time with respect to the size of the message set. Consequently, it is an impractical approach for most RMINs, but it provides a convenient upper bound for the performance of other routers. 3 GREEDY ROUTING When greedy routing is applied, message connections are established one at a time. Once a route is established in a given message cycle, it may not be removed. Greedy routing does not always provide the optimal routing solution. The greedy routing algorithm that was used required the same route database as the exhaustive search router did. However, it selects a combination of routes in the following manner. 
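The exhaustive scheme described above can be sketched in a few lines. The data layout is an assumption for illustration: each message has a list of candidate routes, and each route is represented by the set of buses it occupies.

```python
from itertools import product

def exhaustive_route(message_routes):
    """Try every combination of candidate routes (one per message) and keep
    the combination that establishes the most messages without any two
    routes sharing a bus. Sketch only; within a combination, conflicting
    messages are dropped in order, which is one of several possible policies."""
    best, best_count = None, -1
    for combo in product(*message_routes):
        used, count, established = set(), 0, []
        for route in combo:
            if used.isdisjoint(route):   # no bus conflict with routes so far
                used |= route
                count += 1
                established.append(route)
            else:
                established.append(None) # this message is stymied
        if count > best_count:
            best, best_count = established, count
    return best, best_count
```

As the text notes, the loop over `product(*message_routes)` visits k^M combinations for M messages, which is why the method is impractical for large message sets.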
When a new message set is presented, the router chooses one desired message and looks at the first route on that message's list of routes. The router then establishes that route. Next, the router examines a second message (assuming a second desired message was requested) and sees if one of the routes in the second message's route list can be established without conflicting with the already established first message. If such a route does exist, the router establishes that route and moves on to the next desired message. In the worst case, the speed of the greedy router is quadratic with respect to the size of the message set. 4 NEURAL NETWORK ROUTING The focal point of the neural network router is a neural network of the type that was examined by Hopfield (Hopfield, 1984 and 1985). The problem of establishing a set of non-conflicting routes can be reduced to a constraint satisfaction problem. The structure of the neural network router is completely determined by the RMIN. When a new set of routes is desired, only certain bias currents in the network change. The neural network routing scheme also has certain fault-tolerant properties that will not be described here. The neural network calculates the routes by converging to a legal routing array. A legal routing array is 3-dimensional. Therefore, each element of the routing array will have three indices. If element a_{i,j,k} is equal to 1, then message i is routed through output port k of stage j. We say a_{i,j,k} and a_{l,m,n} are in the same row if i = l and k = n. They are in the same column if i = l and j = m. Finally, they are in the same rod if j = m and k = n. A legal routing array will satisfy the following three constraints: 1. one and only one element in each column is equal to 1. 2. the elements in successive columns that are equal to 1 represent output ports that can be connected in the interconnection network. 3. no more than one element in each rod is equal to 1. 
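The three legality constraints on the routing array can be checked directly. The sketch below assumes a 0/1 array indexed `[message][stage][port]` and a hypothetical `connected()` helper describing the RMIN's interstage wiring; both are our own conventions, not the paper's.

```python
def is_legal_routing(array, connected):
    """array[m][s][p] in {0, 1}: message m uses output port p at stage s.
    connected(s, p_prev, p_next) -> True if port p_prev of stage s-1 can
    reach port p_next of stage s (assumed helper)."""
    M, S, P = len(array), len(array[0]), len(array[0][0])
    # Constraint 1: exactly one port per column (per message, per stage).
    for m in range(M):
        for s in range(S):
            if sum(array[m][s]) != 1:
                return False
    # Constraint 2: successive ports must be wired together in the RMIN.
    for m in range(M):
        for s in range(1, S):
            p_prev = [p for p in range(P) if array[m][s - 1][p]][0]
            p_next = [p for p in range(P) if array[m][s][p]][0]
            if not connected(s, p_prev, p_next):
                return False
    # Constraint 3: at most one message per rod (same stage and port).
    for s in range(S):
        for p in range(P):
            if sum(array[m][s][p] for m in range(M)) > 1:
                return False
    return True
```

A conflict-free array for two messages over two stages passes; two messages claiming the same port at the same stage violate the rod constraint.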
The first restriction ensures that each message will be routed through one and only one output port at each stage of the interconnection network. The second restriction ensures that each message will be routed through a legal path in the interconnection network. The third restriction ensures that any resource contention in the interconnection network is resolved. In other words, only one message can use a certain output port at a certain stage in the interconnection network. When all three of these constraints are met, the routing array will provide a legal route for each message in the message set. Like the routing array, the neural network router will naturally have a 3-dimensional structure. Each a_{i,j,k} of a routing array is represented by the output voltage of a neuron, V_{i,j,k}. At the beginning of a message cycle, the neurons have a random output voltage. If the neural network settles in one of the global minima, the problem will have been solved. A continuous time mode network was chosen. It was simulated digitally. The neural network has N neurons. The input to neuron i is u_i, its input bias current is I_i, and its output is V_i. The input u_i is converted to the output V_i by a sigmoid function, g(x). Neuron i influences neuron j by a connection represented by T_{ji}. Similarly, neuron j affects neuron i through connection T_{ij}. In order for the Liapunov function (Equation 5) to be constructed, T_{ij} must equal T_{ji}. We further assume that T_{ii} = 0. For the synchronous updating model, there is also a time constant, denoted by τ. The equations which describe the output of a neuron i are:

\frac{du_i}{dt} = -\frac{u_i}{\tau} + \sum_{j=1}^{N} T_{ij} V_j + I_i \qquad (1)

\tau = RC \qquad (2)

V_i = g(u_i) \qquad (3)

g(x) = \frac{1}{1 + e^{-x}} \qquad (4)

The equations above force the neural net into stable states that are the local minima of this approximate energy equation:

E = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} T_{ij} V_i V_j - \sum_{i=1}^{N} V_i I_i \qquad (5)

For the neural network, the weights (the T_{ij}'s) are set, as are the bias currents (the I_i's). 
It is the output voltages (the V_i's) that vary to minimize E. Let M be the number of messages in a message set, let S be the number of stages in the RMIN, and let P be the number of ports per stage (P may be a function of the stage number). Below are the energy functions that implement the three constraints discussed above:

E_1 = \frac{A}{2} \sum_{m=1}^{M} \sum_{s=1}^{S-1} \sum_{p=1}^{P} V_{m,s,p} \left( -V_{m,s,p} + \sum_{i=1}^{P} V_{m,s,i} \right) \qquad (6)

E_2 = \frac{B}{2} \sum_{s=1}^{S-1} \sum_{p=1}^{P} \sum_{m=1}^{M} V_{m,s,p} \left( -V_{m,s,p} + \sum_{i=1}^{M} V_{i,s,p} \right) \qquad (7)

E_3 = \frac{C}{2} \sum_{m=1}^{M} \sum_{s=1}^{S-1} \sum_{p=1}^{P} \left( -2 V_{m,s,p} + V_{m,s,p} \left( -V_{m,s,p} + \sum_{i=1}^{P} V_{m,s,i} \right) \right) \qquad (8)

E_4 = D \sum_{m=1}^{M} \left[ \sum_{s=2}^{S-1} \sum_{p=1}^{P} \sum_{i=1}^{P} d(s,p,i) V_{m,s-1,p} V_{m,s,i} + \sum_{j=1}^{P} \left( d(1,\alpha_m,j) V_{m,1,j} + d(S,j,\beta_m) V_{m,S-1,j} \right) \right] \qquad (9)

A, B, C, and D are arbitrary positive constants.¹ E_1 and E_3 handle the first constraint in the routing array. E_4 deals with the second constraint. E_2 ensures the third. From the equation for E_4, the function d(s1, p1, p2) represents the "distance" between output port p1 from stage s1 - 1 and output port p2 from stage s1. If p1 can connect to p2 through stage s1, then this distance may be set to zero. If p1 and p2 are not connected through stage s1, then the distance may be set to one. Also, α_m is the source address of message m, while β_m is the destination address of message m. The entire energy function is:

E = E_1 + E_2 + E_3 + E_4 \qquad (10)

Solving for the connection and bias current values as shown in Equation 5 results in the following equations:

T_{(m1,s1,p1),(m2,s2,p2)} = -(A + C)\,\delta_{m1,m2}\,\delta_{s1,s2}\,(1 - \delta_{p1,p2}) - B\,\delta_{s1,s2}\,\delta_{p1,p2}\,(1 - \delta_{m1,m2}) - D\,\delta_{m1,m2}\left[ \delta_{s1+1,s2}\, d(s2,p1,p2) + \delta_{s1,s2+1}\, d(s1,p2,p1) \right] \qquad (11)

I_{m,s,p} = C - D\left[ \delta_{s,1}\, d(1,\alpha_m,p) + \delta_{s,S-1}\, d(S,p,\beta_m) \right] \qquad (12)

δ_{i,j} is a Kronecker delta (δ_{i,j} = 1 when i = j, and 0 otherwise). Essentially, this approach is promising because the neural network is acting as a parallel computer. The hope is that the neural network will generate solutions much faster than conventional approaches for routing in RMINs. 
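For readers who want to experiment, the dynamics of Equations (1)-(4) are straightforward to integrate numerically. The sketch below is a generic Euler integration of a continuous Hopfield network; the step size, duration, and the two-neuron winner-take-all example are our own assumptions, not values from the paper.

```python
import numpy as np

def run_hopfield(T, I, tau=1.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate du/dt = -u/tau + T @ V + I with V = 1/(1 + exp(-u)),
    starting from small random inputs, and return the final outputs V."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-0.1, 0.1, size=len(I))
    for _ in range(steps):
        V = 1.0 / (1.0 + np.exp(-u))
        u += dt * (-u / tau + T @ V + I)   # Equation (1)
    return 1.0 / (1.0 + np.exp(-u))
```

With mutual inhibition T = [[0, -6], [-6, 0]] and biases I = [3, 1], the network settles with the first neuron's output high and the second's near zero, a one-dimensional analogue of the column constraint "one and only one element equal to 1".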
The neural network that is used here has the standard problem - namely, a global minimum is not always reached. But this is not a serious difficulty. Typically, when the globally minimal energy is not reached by the neural network, some of the desired routes will have been calculated while others will not have. Even a locally minimal solution may partially solve the routing problem. Consequently, this would seem to be a particularly encouraging type of application for this type of neural network. For this application, the traditional problem of not reaching the global minimum may not hurt the system's performance very much, while the expected speed of the neural network in calculating the solution will be a great asset. ¹For the simulations, τ = 1.0, A = C = D = 3.0, and B = 6.0. These values for A, B, C, and D were chosen empirically.

Table 1: Routing results for the RMINs shown in Figure 2. The * entries were not calculated due to their computational complexity.

          RMIN1                 RMIN2                 RMIN3
M    E_es  E_gr  E_nn     E_es  E_gr  E_nn     E_es  E_gr  E_nn
1    1.00  1.00  1.00     1.00  1.00  1.00     1.00  1.00  1.00
2    1.86  1.83  1.87     1.97  1.97  1.98     1.99  1.88  1.94
3    2.54  2.48  2.51     2.91  2.91  2.93     2.99  2.71  2.87
4    3.08  2.98  2.98     3.80  3.79  3.80     3.94  3.49  3.72
5    3.53  3.38  3.24     4.65  4.62  4.61     *     4.22  4.54
6    3.89  3.67  3.45     5.44  5.39  5.36     *     4.90  5.23
7    4.16  3.91  3.66     6.17  6.13  6.13     *     5.52  5.80
8    4.33  4.10  3.78     6.86  6.82  6.80     *     6.10  6.06

The neural network router uses a large number of neurons. If there are m input ports, and m output ports for each stage of the RMIN, an upper bound on the number of neurons needed is m²S. Often, however, the number of neurons actually required is much smaller than this upper bound. It has been shown empirically that neural networks of the type used here can converge to a solution in essentially constant time. For example, this claim is made for the neural network described in (Takefuji and Lee, 1991), which is a slight variation of the model used here. 
5 SIMULATION RESULTS Figure 2 shows three RMINs that were examined. The routing results for the three routing schemes are shown in Table 1. E_es represents the expected number of messages to be routed using exhaustive search routing. E_gr is for greedy routing, while E_nn is for neural network routing. These values are functions of the size of the message set, M. Only message sets that did not have obvious conflicts were examined. For example, no message set could have two processors trying to communicate with the same memory module. The table shows that, for at least these three RMINs, the three routing schemes produce solutions that are of similar virtue. In some cases, the neural network router appears to outperform the supposedly optimal exhaustive search router. That is because the E_es and E_gr values were calculated by testing every message set of size M, while E_nn was calculated by testing 1,000 randomly generated message sets of size M. For the neural network router to appear to perform best, it must have gotten message sets that were easier to route than average. In general, the performance of the neural network router degenerates as the size of the RMIN increases. It is felt that the neural network router in its present form will not scale well for large RMINs. This is because other work has shown that large neural networks of the type used here have difficulty converging to a valid solution (Hopfield, 1985). 6 CONCLUSIONS The results show that there is not much difference, in terms of quality of solution, for the three routing methodologies working on these relatively small sample RMINs. The exhaustive search approach is clearly not a practical approach since it is too time consuming. 
But when considering the asymptotic analyses for these three methodologies, one should keep in mind the performance degradation of the greedy router and the neural network router as the size of the RMIN increases. Greedy routing and neural network routing would appear to be valid approaches for RMINs of moderate size. But since asymptotic analysis has a very limited significance here, the best way to compare the speeds of these two routing schemes would be to build actual implementations. Since the neural network router essentially calculates the routes in parallel, it can reasonably be hoped that a fast, analog implementation of the neural network router may find solutions faster than the exhaustive search router and even the greedy router. Thus, the neural network router may be a viable alternative for RMINs that are not too large. References Brown, T. X., (1989), "Neural networks for switching," IEEE Commun. Mag., Vol. 27, pp. 72-81, Nov. 1989. Brown, T. X. and Liu, K. H., (1990), "Neural network design of a banyan network controller," IEEE J. on Selected Areas of Comm., pp. 1428-1438, Oct. 1990. Goudreau, M. W. and Giles, C. L., (1991), "Neural network routing for multiple stage interconnection networks," Proc. IJCNN 91, Vol. II, p. A-885, July 1991. Hakim, N. Z. and Meadows, H. E., (1990), "A neural network approach to the setup of the Benes switch," in Infocom 90, pp. 397-402. Hopfield, J. J., (1984), "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 3088-3092, May 1984. Hopfield, J. J., (1985), "Neural computation on decisions in optimization problems," Biol. Cybern., Vol. 52, pp. 141-152, 1985. Huang, K. and Briggs, F. A., (1984), Computer Architecture and Parallel Processing, McGraw-Hill, New York, 1984. Marrakchi, A. M. and Troudet, T., (1989), "A neural net arbitrator for large crossbar packet-switches," IEEE Trans. on Circ. and Sys., Vol. 36, pp. 
1039-1041, July 1989. Siegel, H. J., (1990), Interconnection Networks for Large Scale Parallel Processing, McGraw-Hill, New York, 1990. Takefuji, Y. and Lee, K. C., (1991), "An artificial hysteresis binary neuron: a model suppressing the oscillatory behaviors of neural dynamics", Biological Cybernetics, Vol. 64, pp. 353-356, 1991.
Direction Selective Silicon Retina that uses Null Inhibition Ronald G. Benson and Tobi Delbrück Computation and Neural Systems Program, 139-74 California Institute of Technology Pasadena CA 91125 email: benson@cns.caltech.edu and tdelbruck@caltech.edu Abstract Biological retinas extract spatial and temporal features in an attempt to reduce the complexity of performing visual tasks. We have built and tested a silicon retina which encodes several useful temporal features found in vertebrate retinas. The cells in our silicon retina are selective to direction, highly sensitive to positive contrast changes around an ambient light level, and tuned to a particular velocity. Inhibitory connections in the null direction perform the direction selectivity we desire. This silicon retina is on a 4.6 x 6.8 mm die and consists of a 47 x 41 array of photoreceptors. 1 INTRODUCTION The ability to sense motion in the visual world is essential to survival in animals. Visual motion processing is indispensable; it tells us about predators and prey, our own motion and image stabilization on the retina. Many algorithms for performing early visual motion processing have been proposed [HK87] [Nak85]. A key salient feature of motion is direction selectivity, i.e., the ability to detect the direction of moving features. We have implemented Barlow and Levick's model, [BHL64], which hypothesizes inhibition in the null direction to accomplish direction selectivity. In contrast to our work, Boahen, [BA91], in these proceedings, describes a silicon retina that is specialized to do spatial filtering of the image. Mahowald, [Mah91], describes a silicon retina that has surround interactions and adapts over multiple time scales. Her silicon retina is designed to act as an analog preprocessor and 
so the gain of the output stage is rather low. In addition there is no rectification into on- and off-pathways. Figure 1: Barlow and Levick model of direction selectivity (DS). (a) Shows how two cells are connected in an inhibitory fashion and (b) a mosaic of such cells. This and earlier work on silicon early vision systems have stressed spatial processing performed by biological retinas at the expense of temporal processing. The work we describe here and the work described by Delbrück, [DM91], emphasizes temporal processing. Temporal differentiation and separation of intensity changes into on- and off-pathways are important computations performed by vertebrate retinas. Additionally, specialized vertebrate retinas, [BHL64], have cells which are sensitive to moving stimuli and respond maximally to a preferred direction; they have almost zero response in the opposite or null direction. We have designed and tested a silicon retina that models these direction selective velocity tuned cells. These receptors excite cells which respond to positive contrast changes only and are selective for a particular direction of stimuli. Our silicon retina may be useful as a preprocessor for later visual processing and certainly as an enhancement for the already existing spatial retinas. It is a striking demonstration of the perceptual saliency of contrast changes and directed motion in the visual world. 2 INHIBITION IN THE NULL DIRECTION Barlow and Levick, [BHL64], described a mechanism for direction selectivity found in the rabbit retina which postulates inhibitory connections to achieve the desired direction selectivity. Their model is shown in Figure 1(a). As a moving edge passes over the photoreceptors from left to right, the left photoreceptor is excited first, causing its direction selective (DS) cell to fire. 
The right photoreceptor fires when the edge reaches it and, since it has an inhibitory connection to the left DS cell, the right photoreceptor retards further output from the left DS cell. If an edge is moving in the opposite or null direction (right to left), the activity evoked in the right photoreceptor completely inhibits the left DS cell from firing, thus creating a direction selective cell. Figure 2: Photoreceptor and direction selective (DS) cell. The output of the high-gain, adaptive photoreceptor is fed capacitively to the input of the DS cell. The output of the photoreceptor sends inhibition to the left. Inhibition from the right photoreceptors connects to the input of the DS cell. In the above explanation with the edge moving in the preferred direction (left to right), as the edge moves faster, the inhibition from leading photoreceptors truncates the output of the DS cell ever sooner. In fact, it is this inhibitory connection which leads to velocity tuning in the preferred direction. By tiling these cells as shown in Figure 1(b), it is possible to obtain an array of directionally tuned cells. This is the architecture we used in our chip. Direction selectivity is inherent in the connections of the mosaic, i.e., the hardwiring of the inhibitory connections leads to directionally tuned cells. 3 PIXEL OPERATION A pixel consists of a photoreceptor, a direction selective (DS) cell and inhibition to and from other pixels as shown in Figure 2. The photoreceptor has high gain and is adaptive [Mah91, DM91]. The output from this receptor, V_p, is coupled into the DS cell which acts as a rectifying gain element, [MS91], that is only sensitive to positive-going transitions due to increases in light intensity at the receptor input. 
Additionally, the output from the photoreceptor is capacitively coupled to the inhibitory synapses, which send their inhibition to the left and are coupled into the DS cell of the neighboring cells. A more detailed analysis of the DS cell yields several insights into this cell's functionality. A step increase of ΔV at V_p, caused by a step increase in light intensity incident upon the phototransistor, results in a charge injection of C_c ΔV at V_i. This charge is leaked away by Q_τ at a rate I_τ, set by voltage V_τ. Hence, to first order, the output pulse width T is simply

T = \frac{C_c \, \Delta V}{I_\tau}

There is also a threshold minimum step input size that will result in enough change in V_i to pull V_out all the way to ground. This threshold is set by C_c and the gain of the photoreceptor. Figure 3: Pixel response to intensity step. Bottom trace is intensity; top trace is pixel output. When the input to the rectifying gain element is not a step, but instead a steady increase in voltage, the current I_in flowing into node V_i is I_in = C_c \dot{V}_p. When this current exceeds I_τ there is a net increase in the voltage V_i, and the output V_out will quickly go low. The condition I_in = I_τ defines the threshold limit for stimuli detection, i.e. input stimuli resulting in an I_in < I_τ are not perceptible to the pixel. For a changing intensity I, the adaptive photoreceptor stage outputs a voltage V_p proportional to \dot{I}/I, where I is the input light intensity. This photoreceptor behavior means that the pixel threshold will occur at whatever \dot{I}/I causes C_c \dot{V}_p to exceed the constant current I_τ. The inhibitory synapses (shown as Inhibition from right in Figure 2) provide additional leakage from V_i, resulting in a shortened response width from the DS cell. 
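The first-order pulse-width and threshold relations above are simple enough to check numerically. The component values in the example are illustrative assumptions, not the chip's actual parameters.

```python
def pulse_width(c_c, delta_v, i_tau):
    """Output pulse width T = C_c * dV / I_tau from the first-order analysis
    (coupling capacitance in farads, step in volts, leak current in amps)."""
    return c_c * delta_v / i_tau

def is_visible(c_c, dvp_dt, i_tau):
    """A ramping input is perceptible only when the injected current
    C_c * dVp/dt exceeds the leak current I_tau."""
    return c_c * dvp_dt > i_tau
```

For instance, with an assumed 1 pF coupling capacitor, a 0.5 V step, and a 10 pA leak, the pulse lasts 50 ms; a ramp of 5 V/s through the same capacitor injects only 5 pA and falls below threshold.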
This analysis suggests that a characterization of the pixel should investigate both the response amplitude, measured as pulse width versus input intensity step size, and the response threshold, measured with temporal intensity contrast. In the next section we show such measurements. 4 CHARACTERIZATION OF THE PIXEL We have tested both an isolated pixel and a complete 2-dimensional retina of 47 x 41 pixels. Both circuits were fabricated in a 2 μm p-well CMOS double poly process available through the MOSIS facility. The retina is scanned out onto a monitor using a completely integrated on-chip scanner [MD91]. The only external components are a video amplifier and a crystal. We show a typical response of the isolated pixel to an input step of intensity in Figure 3. In response to the input step increase of intensity, the pixel output goes low and saturates for a time set by the bias V_τ in Figure 2. Eventually the pixel recovers and the output returns to its quiescent level. In response to the step decrease of intensity there is almost no response, as seen in Figure 3. Figure 4: (a) Pulse width of response as a function of input contrast step size. The abscissa is measured in units of ratio-intensity, i.e., a value of 1 means no intensity step, a value of 1.1 means a step from a normalized intensity of 1 to a normalized intensity of 1.1, and so forth. The different curves show the response at different absolute light levels; the number in the figure legend is the log of the absolute intensity. (b) Receptor threshold measurements. At each temporal frequency, we determined the minimum necessary amplitude of triangular intensity variations to make the pixel respond. The different curves were taken at different background intensity levels, shown to the left of each curve. 
For example, the bottom curve was taken at a background level of 1 unit of intensity; at 8 Hz, the threshold occurred at a variation of 0.2 units of intensity. The output from the pixel is essentially quantized in amplitude, but the resulting pulse has a finite duration related to the input intensity step. The analysis in Section 3 showed that the output pulse width, T, should be linear in the input intensity contrast step. In Figure 4(a), we show the measured pulse width as a function of input contrast step. To show the adaptive nature of the receptor, we did this same measurement at several different absolute intensity levels. Our silicon retina sees some features of a moving image and not others. Detection of a moving feature depends on its contrast and velocity. To characterize this behavior, we measured a receptor's thresholds for intensity variations, as a function of temporal frequency. These measurements are shown in Figure 4(b); the curves define "zones of visibility": if stimuli lie below a curve, they are visible; if they fall above a curve, they are not. (The different curves are for different absolute intensity levels.) For low temporal frequencies, stimuli are visible only if they are high contrast; at higher temporal frequencies, but still below the photoreceptor cutoff frequency, lower contrast stimuli are visible. Simply put, if the input image has low contrast and is slowly moving, it is not seen. Only high contrast or quickly moving features are salient stimuli. More precisely, for temporal frequencies below the photoreceptor cutoff frequency, the threshold occurs at a constant value of the temporal intensity contrast \dot{I}/I. Figure 5: (a) shows the basic connectivity of the tested cell. 
(b) top trace is the response due to an edge moving in the preferred direction (left to right); bottom trace is the response due to an edge moving in the null direction (right to left). 5 NULL DIRECTION INHIBITION PROPERTIES We performed a series of tests to characterize the inhibition for various orientations and velocities. The data in Figure 5(b) shows the outputs of two photoreceptors, the inhibitory signal and the output of a DS cell. The top panel in Figure 5(b) shows the outputs in the preferred direction and the bottom panel shows them in the null direction. Notice that the output of the left photoreceptor (L in Figure 5(b), top panel) precedes the right (R). The output of the DS cell is quite pronounced, but is truncated by the inhibition from the right photoreceptor. On the other hand, the bottom panel shows that the output of the DS cell is almost completely truncated by the inhibitory input. A DS cell receives most inhibition when the stimulus is travelling exactly in the null direction. As seen in Figure 6(a), as the angle of stimulus is rotated, the maximum response from the DS cell is obtained when the stimulus is moving in the preferred direction (directly opposite to the null direction). As the bar is rotated toward the null direction, the response of the cell is reduced due to the increasing amount of inhibition received from the neighboring photoreceptors. If a bar is moving in the preferred direction with varying velocity, there is a velocity, V_max, for which the DS cell responds maximally, as shown in Figure 6(b). As the bar is moved faster than V_max, inhibition arrives at the cell sooner, thus truncating the response. As the bar is moved slower than V_max, less input is provided to the DS cell as described in Section 3. In the null direction (negative in Figure 6(b)) the cell does not respond, as expected, until the bar is travelling fast enough to beat the inhibition's onset (recall the delay from Figure 5). 
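The null-direction veto described in Section 2 can be reproduced with a toy discrete-time simulation. This is a sketch under simplifying assumptions (one-hot photoreceptor activity, a one-frame inhibitory delay, and a hard subtractive veto), not the chip's analog dynamics.

```python
def ds_response(stimulus, delay=1):
    """Summed DS-cell output over a stimulus movie. Each DS cell is excited
    by its own photoreceptor and vetoed by its right-hand neighbour's
    slightly delayed activity, so left-to-right motion is 'preferred'.
    stimulus: list of time frames, each a list of photoreceptor activations."""
    n = len(stimulus[0])
    total = 0
    for t, frame in enumerate(stimulus):
        inhib_frame = stimulus[max(0, t - delay)]  # delayed inhibitory signal
        for i in range(n - 1):
            excite = frame[i]
            veto = inhib_frame[i + 1]        # inhibition from the right neighbour
            total += max(0, excite - veto)   # output only when not vetoed
    return total
```

Sweeping a single active receptor left to right yields a strong summed response, while the right-to-left sweep is fully vetoed, mirroring the preferred/null asymmetry of Figures 5 and 6.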
In Figure 7 we show the response of the entire silicon retina to a rotating fan. When the fan blades are moving to the left the retina does not respond, but when they are moving to the right there is a large response. The response is largest when the blades are moving exactly in the preferred direction.

Benson and Delbruck

Figure 6: (a) polar plot which shows the pixels are directionally tuned. (b) shows velocity tuning of the DS cell (positive velocities are in the preferred direction).

Figure 7: (a) Rotating fan used as stimulus to the retina. (b) Output of the retina.

6 CONCLUSION

We have designed and tested a silicon retina that detects temporal changes in an image. The salient image features are sufficiently high contrast stimuli, relatively fast increases in intensity (measured with respect to the recent past history of the intensity), and the direction and velocity of moving stimuli. These saliency measures result in a large compression of information, which will be useful in later processing stages.

Acknowledgments

Our thanks to Carver Mead and John Hopfield for their guidance and encouragement, to the Office of Naval Research for their support under grant NAV N00014-89-J-1675, and, of course, to the MOSIS fabrication service.
Software for ANN training on a Ring Array Processor

Phil Kohn, Jeff Bilmes, Nelson Morgan, James Beck
International Computer Science Institute, 1947 Center St., Berkeley CA 94704, USA

Abstract

Experimental research on Artificial Neural Network (ANN) algorithms requires either writing variations on the same program or making one monolithic program with many parameters and options. By using an object-oriented library, the size of these experimental programs is reduced while making them easier to read, write and modify. An efficient and flexible realization of this idea is the Connectionist Layered Object-oriented Network Simulator (CLONES). CLONES runs on UNIX¹ workstations and on the 100-1000 MFLOP Ring Array Processor (RAP) that we built with ANN algorithms in mind. In this report we describe CLONES and show how it is implemented on the RAP.

1 Overview

As we continue to experiment with Artificial Neural Networks (ANNs) to generate phoneme probabilities for speech recognition (Bourlard & Morgan, 1991), two things have become increasingly clear:

1. Because of the diversity and continuing evolution of ANN algorithms, the programming environment must be both powerful and flexible.
2. These algorithms are very computationally intensive when applied to large databases of training patterns.

Ideally we would like to implement and test ideas at about the same rate that we come up with them.
We have approached this goal both by developing application-specific parallel hardware, the Ring Array Processor (RAP) (Morgan et al., 1990; Beck, 1990; Morgan et al., 1992), and by building an object-oriented software environment, the Connectionist Layered Object-oriented Network Simulator (CLONES) (Kohn, 1991). By using an object-oriented library, the size of experimental ANN programs can be greatly reduced while making them easier to read, write and modify. CLONES is written in C++ and utilizes libraries previously written in C and assembler.

¹UNIX is a trademark of AT&T

Figure 1: Hardware and software configurations. (The original figure tabulates the platforms — SPARCstation 2, desktop RAP with Sun 4/330 host, networked RAP with 1-10 boards, SPERT board, and CNS-1 system — against their performance and the languages supported: assembler, C, C++, Sather, pSather.)

Our ANN research currently encompasses two hardware platforms and several languages, shown in Figure 1. Two new hardware platforms, the SPERT board (Asanovic et al., 1991) and the CNS-1 system, are in design and will support source code compatibility with the existing machines. The SPERT design is a custom VLSI parallel processor installed on an SBUS card plugged into a SPARC workstation. Using variable precision fixed point arithmetic, a single SPERT board will have performance comparable to a 10 board RAP system with 40 processors. The CNS-1 system is based on multiple VLSI parallel processors interconnected by high speed communication rings. Because the investment in software is generally large, we insist on source level compatibility across hardware platforms at the level of the system libraries.
These libraries include matrix and vector classes that free the user from concern about the hardware configuration. It is also considered important to allow routines in different languages to be linked together. This includes support for Sather, an object-oriented language that has been developed at ICSI for workstations. The parallel version of Sather, called pSather, will be supported on the CNS-1. CLONES is seen as the ANN researcher's interface to this multi-platform, multi-language environment. Although CLONES is an application written specifically for ANN algorithms, its object orientation gives it the ability to easily include previously developed libraries. CLONES currently runs on UNIX workstations and the RAP; this paper focuses on the RAP implementation.

2 RAP hardware

The RAP consists of cards that are added to a UNIX host machine (currently a VME based Sun SPARC). A RAP card has four 32 MFlop Digital Signal Processor (DSP) chips (TI TMS320C30), each with its own local 256KB or 1MB of fast static RAM and 16MB of DRAM. Instead of sharing memory, the processors communicate on a high speed ring that shifts data in a single machine cycle. For each board, the peak transfer rate between 4 nodes is 64 million words/sec (256 Mbytes/second). This is a good balance to the 64 million multiply-accumulates per second (128 MFLOPS) peak performance of the computational elements. Up to 16 of these boards can be interconnected and used as one Single Program operating on Multiple Data stream (SPMD) machine. In this style of parallel computation, all the processors run the same program and are doing the same operations to different pieces of the same matrix or vector². The RAP can run other styles of parallel computation, including pipelines where each processor is doing a different operation on different data streams.
However, for fully connected back-propagation networks, SPMD parallelism works well and is also much easier to program, since there is only one flow of control to worry about. A reasonable design for networks in which all processors need all unit outputs is a single broadcast bus. However, this design is not appropriate for other related algorithms such as the backward phase of the back-propagation learning algorithm. By using a ring, back-propagation can be efficiently parallelized without the need to have the complete weight matrix on all processors. The number of ring operations required for each complete matrix update cycle is of the same order as the number of units, not the square of the number of units. It should also be noted that we are using a stochastic or on-line learning algorithm. The training examples are not divided among the processors with the weights batch-updated after a complete pass; all weights are updated for each training example. This procedure greatly decreases the training time for large redundant training sets, since more steps are being taken in weight-space per training example. We have empirically derived formulae that predict the performance improvement on back-propagation training as a function of the number of boards. Theoretical peak performance is 128 MFlops/board, with sustained performance of 30-90% for back-propagation problems of interest to us. Systems with up to 40 nodes have been tested, for which throughputs

²The hardware does not automatically keep the processors in lock step; for example, they may become out of sync because of branches conditioned on the processor's node number or on the data. However, when the processors must communicate with each other through the ring, hardware synchronization automatically occurs. A node that attempts to read before data is ready, or to write when there is already data waiting, will stop executing until the data can be moved.
of up to 574 Million Connections Per Second (MCPS) have been measured, as well as learning rates of up to 106 Million Connection Updates Per Second (MCUPS) for training. Practical considerations such as workstation address space and clock skew restrict current implementations to 64 nodes, but in principle the architecture scales to about 16,000 nodes for back-propagation. We now have considerable experience with the RAP as a day-to-day computational tool for our research. With the aid of the RAP hardware and software, we have done network training studies that would have taken over a century on a UNIX workstation such as the SPARCstation-2. We have also used the RAP to simulate variable precision arithmetic to guide us in the design of higher performance hardware such as SPERT. The RAP hardware remains very flexible because of the extensive use of programmable logic arrays. These parts are automatically downloaded when the host machine boots up. By changing the download files, the functionality of the communications ring and the host interface can be modified or extended without any physical changes to the board.

3 RAP software

The RAP DSP software is built in three levels (Kohn & Bilmes, 1990; Bilmes & Kohn, 1990). At the lowest level are hand coded assembler routines for matrix, vector and ring operations. Many standard matrix and vector operations are currently supported, as well as some operations specialized for efficient back-propagation. These matrix and vector routines do not use the communications ring or split up data among processing nodes. There is also a UNIX compatible library including most standard C functions for file, math and string operations. All UNIX kernel calls (such as file input or output) cause requests to be made to the host SPARC over the VMEbus. A RAP daemon process running under UNIX has all of the RAP memory mapped into its virtual address space.
It responds to the RAP system call interrupts (from the RAP device driver) and can access RAP memory with a direct memory copy function or assignment statement. An intermediate level consists of matrix and vector object classes coded in C++. A programmer writing at this level or above can program the RAP as if it were a conventional serial machine. These object classes divide the data and processing among the available processing nodes, using the communication ring to redistribute data as needed. For example, to multiply a matrix by a vector, each processor would have its own subset of the matrix rows that must be multiplied. This is equivalent to partitioning the output vector elements among the processors. If the complete output vector is needed by all processors, a ring broadcast routine is called to redistribute the part of the output vector from each processor to all the other processors. The top level of RAP software is the CLONES environment. CLONES is an object-oriented library for constructing, training and utilizing connectionist networks. It is designed to run efficiently on data parallel computers as well as uniprocessor workstations. While efficiency and portability to parallel computers are the primary goals, there are several secondary design goals:

1. minimize the learning curve for using CLONES;
2. minimize the additional code required for new experiments;
3. maximize the variety of artificial neural network algorithms supported;
4. allow heterogeneous algorithms and training procedures to be interconnected and trained together;
5. allow the trained network to be easily embedded into other programs.

The size of experimental ANN programs is greatly reduced by using an object-oriented library; at the same time these programs are easier to read, write and evolve.
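The row partitioning used by the intermediate-level matrix classes can be sketched in a toy single-process form. This is an illustration of the partitioning idea only, not the actual RAP library code: each simulated node computes its slice of y = A·x, and the final loop plays the role of the ring broadcast that hands every node the complete output vector.

```cpp
#include <cassert>
#include <vector>
using Matrix = std::vector<std::vector<double>>;

// y = A*x with the rows of A dealt out among num_nodes simulated nodes.
// Each output element crosses the ring once during the broadcast, so ring
// traffic per update cycle grows with the number of units, not its square.
std::vector<double> spmd_matvec(const Matrix& A, const std::vector<double>& x,
                                int num_nodes) {
    int n = static_cast<int>(A.size());
    std::vector<std::vector<double>> slice(num_nodes);  // per-node output slices
    for (int p = 0; p < num_nodes; ++p)
        for (int i = p; i < n; i += num_nodes) {        // interleaved row partition
            double dot = 0.0;
            for (std::size_t j = 0; j < x.size(); ++j) dot += A[i][j] * x[j];
            slice[p].push_back(dot);
        }
    // "ring broadcast": reassemble the full vector from every node's slice
    std::vector<double> y(n);
    for (int p = 0; p < num_nodes; ++p)
        for (std::size_t k = 0; k < slice[p].size(); ++k)
            y[p + static_cast<int>(k) * num_nodes] = slice[p][k];
    return y;
}
```

Changing `num_nodes` changes only how the rows are dealt out, never the result, which is exactly the "program it as if it were a serial machine" property claimed for this level.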
Researchers often generate either a proliferation of versions of the same basic program, or one giant program with a large number of options and many potential interactions and side-effects. Some simulator programs include (or worse, evolve) their own language for describing networks. We feel that a modern object-oriented language (such as C++) has all the functionality needed to build and train ANNs. By using an object-oriented design, we attempt to make the most frequently changed parts of the program very small and well localized. The parts that rarely change are in a centralized library. One of the many advantages of an object-oriented library for experimental work is that any part can be specialized by making a new class of object that inherits the desired operations from a library class.

4 CLONES overview

To make CLONES easier to learn, we restrict ourselves to a subset of the many features of C++. Excluded features include multiple inheritance, operator overloading (however, function overloading is used) and references. Since the multiple inheritance feature of C++ is not used, CLONES classes can be viewed as a collection of simple inheritance trees. This means that all classes of objects in CLONES either have no parent class (top of a class tree) or inherit the functions and variables of a single parent class. CLONES consists of a library of C++ classes that represent networks (Net), their components (Net_part) and training procedures. There are also utility classes used during training such as: databases of training data (Database), tables of parameters and arguments (Param), and performance statistics (Stats). Database and Param do not inherit from any other class. Their class trees are independent of the rest of CLONES and each other. The Stats class inherits from Net_behavior. The top level of the CLONES class tree is a class called Net_behavior. It defines function interfaces for many general functions including file save or restore and debugging.
It also contains behavior functions that are called during different phases of running or training a network. For example, there are functions that are called before or after a complete training run (pre_training, post_training), before or after a pass over the database (pre_epoch, post_epoch) and before or after a forward or backward run of the network (pre_forw_pass, post_forw_pass, pre_back_pass, post_back_pass). The Net, Net_part and Stats classes inherit from this class. All network components used to construct ANNs are derived from the two classes Layer and Connect. Both of these inherit from class Net_part. A CLONES network can be viewed as a graph where the nodes are Layer objects and the arcs are Connect objects. Each Connect connects a single input Layer with a single output Layer. A Layer holds the data for a set of units (such as an activation vector), while a Connect transforms the data as it passes between Layers. Data flows along Connects between the pair of Layers by calling forw_propagate (input to output) or back_propagate (output to input) behavior functions in the Connect object. CLONES does not have objects that represent single units (or artificial neurons). Instead, Layer objects are used to represent a set of units. Because arrays of units are passed down to the lowest level routines, most of the computation time is focused into a few small assembly coded loops that easily fit into the processor instruction cache. Time spent in all of the levels of control code that call these loops becomes less significant as the size of the Layer is increased. The Layer class does not place any restrictions on the representation of its internal information. For example, the representation for activations may be a floating point number for each unit (Analog_layer), or it may be a set of unit indices, indicating which units are active (Binary_layer).
Analog_layer and Binary_layer are built into the CLONES library as subclasses of the class Layer. The Analog_layer class specifies the representation of activations, but it still leaves open the procedures that use and update the activation array. BP_analog_layer is a subclass of Analog_layer that specifies these procedures for the back-propagation algorithm. Subclasses of Analog_layer may also add new data structures to hold extra internal state, such as the error vector in the case of BP_analog_layer. The BP_analog_layer class has subclasses for various transfer functions, such as BP_sigmoid_layer and BP_linear_layer. Layer classes also have behavior functions that are called in the course of running the network. For example, one of these functions (pre_forw_propagate) initializes the Layer for a forward pass, perhaps by clearing its activation vector. After all of the connections coming into it are run, another Layer behavior function (post_forw_propagate) is called that computes the activation vector from the partial results left by these connections. For example, this function may apply a transfer function such as the sigmoid to the accumulated sum of all the input activations. These behavior functions can be changed by making a subclass. BP_analog_layer leaves open the activation transfer function (or squashing function) and its derivative. Subclasses define new transfer functions to be applied to the activations.
A new class of back-propagation layer with a customized transfer function (instead of the default sigmoid) can be created with the following C++ code:

    class My_new_BP_layer_class : public BP_analog_layer {
    public:
        My_new_BP_layer_class(int number_of_units)
            : BP_analog_layer(number_of_units) {}   // constructor
        void transfer(Fvec *activation) {
            /* apply forward transfer function to my activation vector */
        }
        void d_transfer(Fvec *activation, Fvec *err) {
            /* apply backward error transfer to err (given activation) */
        }
    };

A Connect class includes two behavior functions: one that transforms activations from the incoming Layer into partial results in the outgoing Layer (forw_propagate) and one that takes outgoing errors and generates partial results in the incoming Layer (back_propagate).
ANNs are built up by creating Layer objects and passing them to the create functions of the desired Connect classes. Changing the interconnection pattern does not require any changes to the Layer classes or objects and vice-versa. At the highest level, a Net object delineates a subset of a network and controls its training. Operations can be performed on these subsets by calling functions on their Net objects. The Layers of a Net are specified by calling one of new_inputJayer, new_hidden.Jayer, or new_outputJayer on the Net object for each Layer. Given the Layers, the Connects that belong to the Net are deduced by the Net-order objects (see below). Layer and Connect objects can belong to any number of Nets. The Net labels all of its Layers as one of input, output or hidden. These labels are used by the NeLorder objects to determine the order in which the behavior functions of the NeLparts are called. For example, a Net object contains NeLorder objects called forward_pass_order and backward_pass_order that control the execution sequence for a forward or backward pass. The Net object also has functions that call a function by the same name on all of its component parts (for example set.Jearning-.rate). When a Net-order object is built it scans the connectivity of the Net. The rules that relate topology to order of execution are centralized and encapsulated in subclasses of NeLorder. Changes to the structure of the Net are localized to just the code that creates the Layers and Connects; one does not need to update separate code that contains explict knowledge about the order of evaluation for running a forward or backward pass. The training procedure is divided into a series of steps, each of which is a call to a function in the Net object. At the top level, calling run_training on a Net performs a complete training run. 
In addition to calling the pre_training and post_training behavior functions, it calls run_epoch in a loop until the next_learning_rate function returns zero. The run_epoch function calls run_forward and run_backward. At a lower level there are functions that interface the database(s) of the Net object to the Layers of the Net. For example, set_input sets the activations of the input Layers for a given pattern number of the database. Another of these sets the error vector of the output layer (set_error). Some of these functions, such as is_correct, evaluate the performance of the Net on the current pattern. In addition to database related functions, the Net object also contains useful global variables for all of its components. A pointer to the Net object is always passed to all behavior functions of its Layers and Connects when they are called. One of these variables is a Param object that contains a table of parameter names, each with a list of values. These parameters usually come from the command line and/or parameter files. Other variables include: the current pattern, the correct target output, the epoch number, etc.

5 Conclusions

CLONES is a useful tool for training ANNs, especially when working with large training databases and networks. It runs efficiently on a variety of parallel hardware as well as on UNIX workstations.

Acknowledgements

Special thanks to Steve Renals for daring to be the first CLONES user and making significant contributions to the design and implementation. Others who provided valuable input to this work were: Krste Asanović, Steve Omohundro, Jerry Feldman, Heinz Schmidt and Chuck Wooters. Support from the International Computer Science Institute is gratefully acknowledged.

References

Asanović, K., Beck, J., Kingsbury, B., Kohn, P., Morgan, N., & Wawrzynek, J. (1991). SPERT: A VLIW/SIMD Microprocessor for Artificial Neural Network Computations. Tech. rep. TR-91-072, International Computer Science Institute.
Beck, J. (1990). The Ring Array Processor (RAP): Hardware. Tech. rep. TR-90-048, International Computer Science Institute.

Bilmes, J. & Kohn, P. (1990). The Ring Array Processor (RAP): Software Architecture. Tech. rep. TR-90-050, International Computer Science Institute.

Bourlard, H. & Morgan, N. (1991). Connectionist approaches to the use of Markov models for continuous speech recognition. In Touretzky, D. S. (Ed.), Advances in Neural Information Processing Systems, Vol. 3. Morgan Kaufmann, San Mateo, CA.

Kohn, P. & Bilmes, J. (1990). The Ring Array Processor (RAP): Software Users Manual Version 1.0. Tech. rep. TR-90-049, International Computer Science Institute.

Kohn, P. (1991). CLONES: Connectionist Layered Object-oriented NEtwork Simulator. Tech. rep. TR-91-073, International Computer Science Institute.

Morgan, N., Beck, J., Kohn, P., Bilmes, J., Allman, E., & Beer, J. (1990). The RAP: a ring array processor for layered network calculations. In Proceedings IEEE International Conference on Application Specific Array Processors, pp. 296-308, Princeton, NJ.

Morgan, N., Beck, J., Kohn, P., & Bilmes, J. (1992). Neurocomputing on the RAP. In Przytula, K. W. & Prasanna, V. K. (Eds.), Digital Parallel Implementations of Neural Networks. Prentice-Hall, Englewood Cliffs, NJ.
Network activity determines spatio-temporal integration in single cells

Öjvind Bernander, Christof Koch*
Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125, USA.

Rodney J. Douglas
Anatomical Neuropharmacology Unit, Dept. Pharmacology, Oxford, UK.

Abstract

Single nerve cells with static properties have traditionally been viewed as the building blocks for networks that show emergent phenomena. In contrast to this approach, we study here how the overall network activity can control single cell parameters such as input resistance, as well as time and space constants, parameters that are crucial for excitability and spatiotemporal integration. Using detailed computer simulations of neocortical pyramidal cells, we show that the spontaneous background firing of the network provides a means for setting these parameters. The mechanism for this control is through the large conductance change of the membrane that is induced by both non-NMDA and NMDA excitatory and inhibitory synapses activated by the spontaneous background activity.

1 INTRODUCTION

Biological neurons display a complexity rarely heeded in abstract network models. Dendritic trees allow for local interactions, attenuation, and delays. Voltage- and time-dependent conductances can give rise to adaptation, burst-firing, and other non-linear effects. The extent of temporal integration is determined by the time constant, and spatial integration by the "leakiness" of the membrane. It is unclear which cell properties are computationally significant and which are not relevant for information processing, even though they may be important for the proper functioning of the cell. However, it is crucial to understand the function of the component cells in order to make relevant abstractions when modeling biological systems.

*To whom all correspondence should be addressed.
In this paper we study how the spontaneous background firing of the network as a whole can strongly influence some of the basic integration properties of single cells.

1.1 Controlling parameters via background synaptic activity

The input resistance, Rin, is defined as dV/dI, where dV is the steady state voltage change in response to a small current step of amplitude dI. Rin will vary throughout the cell, and is typically much larger in a long, narrow dendrite than in the soma. However, the somatic input resistance is more relevant to the spiking behavior of the neuron, since spikes are initiated at or close to the soma, and hence Rin,soma (henceforth simply referred to as Rin) will tell us something of the sensitivity of the cell to charge reaching the soma. The time constant, τm, for a passive membrane patch is Rm · Cm, the membrane resistance times the membrane capacitance. For membranes containing voltage-dependent non-linearities, exponentials are fitted to the step response and the largest time constant is taken to be the membrane time constant. A large time constant implies that any injected charge leaks away very slowly, and hence the cell has a longer "memory" of previous events. The parameters discussed above (Rin, τm) clearly have computational significance and it would be convenient to be able to change them dynamically. Both depend directly on the membrane conductance Gm = 1/Rm, so any change in Gm will change the parameters. Traditionally, however, Gm has been viewed as static, so these parameters have also been considered static. How can we change Gm dynamically? In traditional models, Gm has two components: active (time- and voltage-dependent) conductances and a passive "leak" conductance. Synapses are modeled as conductance changes, but if only a few are activated, the cable structure of the cell will hardly change at all.
However, it is well known that neocortical neurons spike spontaneously, in the absence of sensory stimuli, at rates from 0 to 10 Hz. Since neocortical neurons receive on the order of 5,000 to 15,000 excitatory synapses (Larkman, 1991), this spontaneous firing is likely to add up to a large total conductance (Holmes & Woody, 1989). This synaptic conductance becomes crucial if the non-synaptic conductance components are small. Recent evidence shows indeed that the non-synaptic conductances are relatively small (when the cell is not spiking) (Anderson et al., 1990). Our model uses a leak Rm = 100,000 Ωcm², instead of more conventional values in the range of 2,500-10,000 Ωcm². These two facts, high Rm and synaptic background activity, allow Rin and τm to change by more than ten-fold, as described below in this paper.

2 MODEL

A typical layer V pyramidal cell (fig. 2) in striate cortex was filled with HRP during in vivo experiments in the anesthetized, adult cat (Douglas et al., 1991). The 3-D coordinates and diameters of the dendritic tree were measured by a computer-assisted method and each branch was replaced by a single equivalent cylinder. This morphological data was fed into a modified version of NEURON, an efficient single cell simulator developed by Hines (1989). The dendrites were passive, while the soma contained seven active conductances, underlying spike generation, adaptation, and slow onset for weak stimuli. The model included two sodium conductances (a fast spiking current and a slower non-inactivating current), one calcium conductance, and four potassium conductances (delayed rectifier, slow 'M' and 'A' type currents, and a calcium-dependent current). The active conductances were modeled using a Hodgkin-Huxley-like formalism. The model used a total of 5,000 synapses. The synaptic conductance change in time was modeled with an alpha function, g(t) = gpeak (t/tpeak) e^(1 - t/tpeak).
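The alpha-function time course can be written out explicitly. The normalization below assumes the common convention that the conductance reaches exactly gpeak at t = tpeak; the parameter values in the note are the AMPA-type values given for the model.

```cpp
#include <cassert>
#include <cmath>

// Alpha-function synaptic conductance, normalized (by assumption) so that
// g(t_peak) = g_peak:  g(t) = g_peak * (t/t_peak) * exp(1 - t/t_peak).
double alpha_g(double t, double g_peak, double t_peak) {
    if (t <= 0.0) return 0.0;   // conductance is off before synapse onset
    return g_peak * (t / t_peak) * std::exp(1.0 - t / t_peak);
}
```

With the AMPA-type parameters (tpeak = 1.5 msec, gpeak = 0.5 nS), alpha_g rises to 0.5 nS at 1.5 msec and then decays; thousands of such overlapping events per second sum to the tens of nanosiemens of background conductance discussed in the Results.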
4,000 synapses were fast excitatory non-NMDA or AMPA-type (t_peak = 1.5 msec, g_peak = 0.5 nS, E_rev = 0 mV), 500 were medium-slow inhibitory GABA_A (t_peak = 10 msec, g_peak = 1.0 nS, E_rev = -70 mV), and 500 were slow inhibitory GABA_B (t_peak = 40 msec, g_peak = 0.1 nS, E_rev = -95 mV). The excitatory synapses were less concentrated towards the soma, while the inhibitory ones were more so. For a more detailed description of the model, see Bernander et al. (1991).

Figure 1: Input resistance and time constant as a function of background frequency. In (a), the solid line corresponds to the "standard" model with passive dendrites, while the dashed line includes active NMDA synapses as described in the text.

Bernander, Koch, and Douglas

3 RESULTS

3.1 R_in and τ_m change with background frequency

Fig. 1 illustrates what happens to R_in and τ_m when the synaptic background activities of all synaptic types are varied simultaneously. In the absence of any synaptic input, R_in = 110 MΩ and τ_m = 80 msec. At 1 Hz background activity, on average 5 synaptic events are impinging on the cell every msec, contributing a total of 24 nS to the somatic input conductance G_in. Because of the reversal potential of the excitatory synapses (0 mV), the membrane potential throughout the cell is pulled towards more depolarizing potentials, activating additional active currents. Although these trends continue as f is increased, the largest change can be observed between 0 and 2 Hz.

Figure 2: Spatial integration as a function of background frequency. Each dendrite has been "stretched" so that its apparent length corresponds to its electrotonic length. The synaptic background frequency was 0 Hz (left) and 2 Hz (right).
The scale bar corresponds to 1 λ (length constant). Activating synaptic input has two distinct effects: the conductance of the postsynaptic membrane increases and the membrane is depolarized. The system can, at least in principle, independently control these two effects by differentially varying the spontaneous firing frequencies of excitatory versus inhibitory inputs. Thus, increasing f selectively for the GABA_B inhibition will further increase the membrane conductance but move the resting potential towards more hyperpolarizing potentials. Note that the 0 Hz case corresponds to experiments made with in vitro slice preparations or culture. In this case incoming fibers have been cut off and the spontaneous firing rate is very small. Careful studies have shown very large values for R_in and τ_m under these circumstances (e.g. Spruston & Johnston, 1991). In vivo preparations, on the other hand, leave the cortical circuitry intact, and much smaller values of R_in and τ_m are usually recorded.

3.2 Spatial integration

Varying synaptic background activity can have a significant impact on the electrotonic structure of the cell (fig. 2). We plot the electrotonic distance of any particular point from the cell body, that is, the sum of the electrotonic lengths L_i = Σ_j (l_j / λ_j) associated with each dendritic segment, where λ_j = sqrt((d_j/4) · (R_m/R_i)) is the electrotonic length constant of compartment j, l_j its anatomical length, and the sum is taken over all compartments between the soma and compartment i. Increasing the synaptic background activity from f = 0 to f = 2 Hz has the effect of stretching the "distance" of any particular synapse to the soma by a factor of about 3, on average. Thus, while a distal synapse has an associated L value of about 2.6 at 2 Hz, it shrinks to 1.2 if all network activity is shut off, while for a synapse at the tip of a basal dendrite, L shrinks from 0.7 to 0.2.
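The electrotonic distance just defined can be computed from cable parameters. The sketch below uses the standard cable expression λ = sqrt((d/4)(R_m/R_i)); the path dimensions and the axial resistivity R_i = 200 Ωcm are assumptions for illustration, not values from the paper. It shows the key effect described in the text: lowering the effective membrane resistance, as background synaptic conductance does, stretches the electrotonic distance of a synapse from the soma.

```python
import math

def lambda_cm(d_um, Rm_ohm_cm2, Ri_ohm_cm):
    """Electrotonic length constant (cm) of a cylinder of diameter d (um)."""
    d_cm = d_um * 1e-4
    return math.sqrt((d_cm / 4.0) * (Rm_ohm_cm2 / Ri_ohm_cm))

def electrotonic_distance(path, Rm_ohm_cm2, Ri_ohm_cm=200.0):
    """L = sum over compartments of l_j / lambda_j, soma -> synapse.

    `path` is a list of (length_um, diameter_um) per compartment.
    """
    return sum((l_um * 1e-4) / lambda_cm(d_um, Rm_ohm_cm2, Ri_ohm_cm)
               for l_um, d_um in path)

# Illustrative dendritic path: five 100 um segments, 2 um diameter.
path = [(100.0, 2.0)] * 5
L_quiet = electrotonic_distance(path, 100_000.0)  # high Rm: no background
L_busy = electrotonic_distance(path, 10_000.0)    # effective Rm reduced 10x
```

Because λ scales as sqrt(R_m), a ten-fold drop in effective membrane resistance stretches L by a factor of sqrt(10) ≈ 3.2, consistent in spirit with the roughly three-fold stretch reported above.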
In fact, the EPSP induced by a single excitatory synapse at that location goes from 39 to 151 μV, a change of about a factor of 4. Thus, when the overall network activity is low, synapses in the superficial layers of cortex could have a significant effect on somatic discharge, while having only a weak modulatory effect on the soma if the overall network activity is high. Note that basal dendrites, which receive a larger number of synapses, stretch more than apical dendrites.

3.3 Temporal integration

That the synaptic background activity can also modify the temporal integration behavior of the cell is demonstrated in fig. 3. At any particular background frequency f, we compute the minimal number of additional excitatory synapses (at g_peak = 0.5 nS) necessary to barely generate one action potential. These synapses were chosen randomly from among all excitatory synapses throughout the cell. We compare the case in which all synapses are activated simultaneously (solid line) with the case in which the inputs arrive asynchronously, smeared out over 25 msec (dashed line). If f = 0, it requires 115 synapses firing simultaneously to generate a single action potential, while 145 are needed if the input is desynchronized. This small difference between inputs arriving synchronized and at random is due to the long integration period of the cell. If the background activity increases to f = 1 Hz, 113 synchronized synaptic inputs, spread out all over the cell, are sufficient to fire the cell. If, however, the synaptic input is spread out over 25 msec, 202 synapses are now needed in order to trigger a response from the cell. This is mainly due to the much smaller value of τ_m relative to the period over which the synaptic input is spread out. Note that the difference in the number of simultaneous synaptic inputs needed to fire the cell for f = 0 compared to f = 1 is small (i.e. 113 vs. 115), in spite of the more than five-fold decrease in somatic input resistance.
The effect of the smaller size of the individual EPSP at higher values of f is compensated for by the fact that the resting potential of the cell has been shifted towards the firing threshold of the cell (about -49 mV).

Figure 3: Phase detection. A variable number of excitatory synapses were fired superimposed onto a constant background frequency of 1 Hz. They fired either simultaneously (solid line) or spread out in time uniformly during a 25 msec interval (dashed line). The y axis shows the minimum number of synapses necessary to cause the cell to fire.

3.4 NMDA synapses

Fast excitatory synaptic input in cortex is mediated by both AMPA or non-NMDA as well as NMDA receptors (Miller et al., 1989). As opposed to the AMPA synapse, the NMDA conductance change depends not only on time but also on the postsynaptic voltage:

g(t, V) = g_peak · (e^(-t/τ₁) - e^(-t/τ₂)) / (1 + η[Mg²⁺]e^(-γV))    (1)

where τ₁ = 40 msec, τ₂ = 0.335 msec, η = 0.33 mM⁻¹, [Mg²⁺] = 1 mM, γ = 0.06 mV⁻¹. During spontaneous background activity many inputs impinge on the cell and we can time-average the equation above. We will then be left with a purely voltage-dependent conductance. We measured the somatic input resistance, R_in, by injecting a small current pulse in the soma (fig. 4) in the standard model. All synapses fired at a 0.5 Hz background frequency. Next we added 4,000 NMDA synapses in addition to the 4,000 non-NMDA synapses, also at 0.5 Hz, and again injected a current pulse. The voltage response is now larger by about 65%, corresponding to a smaller input conductance, even though we are adding the positive NMDA conductance. This seeming paradox depends on two effects. First, the input conductance is, by definition, dI/dV = G(V) + (dG/dV) · (V - E_rev), where G(V) is the conductance specified in eq. (1). For the NMDA synapse this derivative is negative below about -35 mV. Second, due to the excitation the membrane voltage has drifted towards more depolarized values. This will cause a change in the activation of the other voltage-dependent currents. Even though the summed conductance of these active currents will be larger at the new voltage, the derivative dI/dV will be smaller at that point. In other words, activation of NMDA synapses gives a negative contribution to the input conductance, even though more conductances have opened up. Next we replaced 2,000 of the 4,000 non-NMDA synapses in the old model with 2,000 NMDA synapses and recomputed the input resistance as a function of synaptic background activity. The result is overlaid in figure 1a (dashed line). The curve shifts toward larger values of R_in for most values of f. This shift varies between 50% and 200%. The cell is more excitable than before.

Figure 4: Negative input conductance from NMDA activation. At times t = 250 msec and t = 750 msec a 0.05 nA current pulse was injected at the soma and the somatic voltage response was recorded. At t = 500 msec, one NMDA synapse was activated for each non-NMDA synapse, for a total of 8,000 excitatory synaptic inputs. The background frequency was 0.5 Hz for all synapses.

4 DISCUSSION

We have seen that parameters such as R_in, τ_m, and L are not static, but can vary over about one order of magnitude under network control. The potential computational possibilities could be significant. For example, if a low-contrast stimulus is presented within the receptive field of the cell, the synaptic input rate will be small and the signal-to-noise ratio (SNR) low. In this case, to make the cell more sensitive to the inputs we might want to increase R_in.
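The sign of the NMDA contribution to the input conductance can be checked numerically from the voltage dependence in eq. (1). The sketch below time-averages out the kinetics and keeps only the magnesium-block factor, with the constants given in the text; the absolute normalization of g is arbitrary, so only the sign of the slope conductance dI/dV is meaningful.

```python
import math

ETA, MG, GAMMA = 0.33, 1.0, 0.06   # mM^-1, mM, mV^-1 (constants from eq. 1)
E_REV = 0.0                         # mV, NMDA reversal potential

def g_nmda(v_mv):
    """Time-averaged NMDA conductance factor: the Mg2+ block 1/(1 + eta*Mg*e^(-gamma*V))."""
    return 1.0 / (1.0 + ETA * MG * math.exp(-GAMMA * v_mv))

def slope_conductance(v_mv, dv=0.01):
    """dI/dV of the NMDA current I = g(V) * (V - E_rev), by central difference.

    Equals G(V) + (dG/dV)*(V - E_rev); negative at hyperpolarized voltages.
    """
    current = lambda u: g_nmda(u) * (u - E_REV)
    return (current(v_mv + dv) - current(v_mv - dv)) / (2.0 * dv)
```

At a typical resting potential the slope conductance is negative (adding NMDA synapses *reduces* the measured input conductance), while near the reversal potential it is positive, which is the "seeming paradox" resolved in the text.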
This would automatically be achieved as the total network activation is low. We can improve the SNR by integrating over a longer time period, i.e. by increasing τ_m. This would also be a consequence of the reduced network activity. The converse argument can be made for high-contrast stimuli, associated with high overall network activity and low R_in and τ_m values. Many cortical cells are tuned for various properties of the stimulus, such as orientation, direction, and binocular disparity. As the effective membrane conductance, G_m, changes, the tuning curves are expected to change. Depending on the exact circuitry and implementation of the tuning properties, this change in background frequency could take many forms. One example of phase-tuning was given above. In this case the temporal tuning increases with background frequency.

Acknowledgements

This work was supported by the Office of Naval Research, the National Science Foundation, the James McDonnell Foundation and the International Human Frontier Science Program Organization. Thanks to Tom Tromey for writing the graphics software and to Mike Hines for providing us with NEURON.

References

P. Anderson, M. Raastad & J. F. Storm. (1990) Excitatory synaptic integration in hippocampal pyramids and dentate granule cells. Symp. Quant. Biol. 55, Cold Spring Harbor Press, pp. 81-86.

O. Bernander, R. J. Douglas, K. A. C. Martin & C. Koch. (1991) Synaptic background activity influences spatiotemporal integration in single pyramidal cells. P.N.A.S. USA 88: 11569-11573.

R. J. Douglas, K. A. C. Martin & D. Whitteridge. (1991) An intracellular analysis of the visual responses of neurones in cat visual cortex. J. Physiol. 440: 659-696.

M. Hines. (1989) A program for simulation of nerve equations with branching geometries. Int. J. Biomed. Comput. 24: 55-68.

W. R. Holmes & C. D. Woody. (1989) Effects of uniform and non-uniform synaptic activation-distributions on the cable properties of modeled cortical pyramidal neurons.
Brain Research 505: 12-22.

A. U. Larkman. (1991) Dendritic morphology of pyramidal neurones of the visual cortex of the rat: III. Spine distributions. J. Comp. Neurol. 306: 332-343.

K. D. Miller, B. Chapman & M. P. Stryker. (1989) Responses of cells in cat visual cortex depend on NMDA receptors. P.N.A.S. 86: 5183-5187.

N. Spruston & D. Johnston. (1992) Perforated patch-clamp analysis of the passive membrane properties of three classes of hippocampal neurons. J. Neurophysiol., in press.
1991
The Clusteron: Toward a Simple Abstraction for a Complex Neuron

Bartlett W. Mel
Computation and Neural Systems
Division of Biology
Caltech, 216-76
Pasadena, CA 91125
mel@cns.caltech.edu

Abstract

Are single neocortical neurons as powerful as multi-layered networks? A recent compartmental modeling study has shown that voltage-dependent membrane nonlinearities present in a complex dendritic tree can provide a virtual layer of local nonlinear processing elements between synaptic inputs and the final output at the cell body, analogous to a hidden layer in a multi-layer network. In this paper, an abstract model neuron is introduced, called a clusteron, which incorporates aspects of the dendritic "cluster-sensitivity" phenomenon seen in these detailed biophysical modeling studies. It is shown, using a clusteron, that a Hebb-type learning rule can be used to extract higher-order statistics from a set of training patterns, by manipulating the spatial ordering of synaptic connections onto the dendritic tree. The potential neurobiological relevance of these higher-order statistics for nonlinear pattern discrimination is then studied within a full compartmental model of a neocortical pyramidal cell, using a training set of 1000 high-dimensional sparse random patterns.

1 INTRODUCTION

The nature of information processing in complex dendritic trees has remained an open question since the origin of the neuron doctrine 100 years ago. With respect to learning, for example, it is not known whether a neuron is best modeled as a pseudo-linear unit, equivalent in power to a simple Perceptron, or as a general nonlinear learning device, equivalent in power to a multi-layered network.
In an attempt to characterize the input-output behavior of a whole dendritic tree containing voltage-dependent membrane mechanisms, a recent compartmental modeling study in an anatomically reconstructed neocortical pyramidal cell (anatomical data from Douglas et al., 1991; "NEURON" simulation package provided by Michael Hines and John Moore) showed that a dendritic tree rich in NMDA-type synaptic channels is selectively responsive to spatially clustered, as opposed to diffuse, patterns of synaptic activation (Mel, 1992). For example, 100 synapses which were simultaneously activated at 100 randomly chosen locations about the dendritic arbor were less effective at firing the cell than 100 synapses activated in groups of 5, at each of 20 randomly chosen dendritic locations. The cooperativity among the synapses in each group is due to the voltage dependence of the NMDA channel: each activated NMDA synapse becomes up to three times more effective at injecting synaptic current when the post-synaptic membrane is locally depolarized by 30-40 mV from the resting potential. When synapses are activated in a group, the depolarizing effects of each helps the others (and itself) to move into this more efficient voltage range. This work suggested that the spatial ordering of afferent synaptic connections onto the dendritic tree may be a crucial determinant of cell responses to specific input patterns. The nonlinear interactions among neighboring synaptic inputs further lent support to the idea that two or more afferents that form closely grouped synaptic connections on a dendritic tree may be viewed as encoding higher-order input-space "features" to which the dendrite is sensitive (Feldman & Ballard, 1982; Mel, 1990; Durbin & Rumelhart, 1990). The more such higher-order features are present in a given input pattern, the more the spatial distribution of active synapses will be clustered, and hence the more the post-synaptic cell will be inclined to fire in response.
In a demonstration of this idea through direct manipulation of synaptic ordering, dendritic cluster-sensitivity was shown to allow the model neocortical pyramidal cell to reliably discriminate 50 training images of natural scenes from untrained control images (see Mel, 1992). Since all presented patterns activated the same number of synapses of the same strength, and with no systematic variation in their dendritic locations, the underlying dendritic "discriminant function" was necessarily nonlinear. A crucial question remains as to whether other, e.g. non-synaptic, membrane nonlinearities, such as voltage-dependent calcium channels in the dendritic shaft membrane, could enhance, abolish, or otherwise alter the dendritic cluster-sensitivity phenomenon seen in the NMDA-only case. Some of the simulations presented in the remainder of this paper include voltage-dependent calcium channels and/or an anomalous rectification in the dendritic membrane. However, detailed discussions of these channels and their effects will be presented elsewhere.

2 THE CLUSTERON

2.1 MOTIVATION

This paper deals primarily with an important extension to the compartmental modeling experiments and the hand-tuned demonstrations of nonlinear pattern discrimination capacity presented in (Mel, 1992).

Figure 1: The Clusteron. Active input lines are designated by arrows; shading of synapses reflects synaptic activation a_i when x_i ∈ {0, 1} and weights are set to 1.

If the manipulation of synaptic ordering is necessary for neurons to make effective use of their cluster-sensitive dendrites, then a learning mechanism capable of appropriately manipulating synaptic ordering must also be present in these neurons. An abstract model neuron called a clusteron is presented here, whose input-output relation was inspired by the idea of dendritic cluster-sensitivity, and whose learning rule is a variant of simple Hebbian learning.
The clusteron is a far simpler and more convenient model for the study of cluster-sensitive learning than the full-scale compartmental model described in (Mel, 1992), whose solutions under varying stimulus conditions are computed through numerical integration of a system of several hundred coupled nonlinear differential equations (Hines, 1989). However, once the basic mathematical and algorithmic issues have been better understood, more biophysically detailed models of this type of learning in dendritic trees, as has been reported in (Brown et al., 1990), will be needed.

2.2 INPUT-OUTPUT BEHAVIOR

The clusteron is a particular second-order generalization of the thresholded linear unit (TLU), exemplified by the common Perceptron. It consists of a "cell body" where the globally thresholded output of the unit is computed, and a dendritic tree, which for present purposes will be visualized as a single long branch attached to the cell body (fig. 1). The dendritic tree receives a set of N weighted synaptic contacts from a set of afferent "axons". All synaptic contacts are excitatory. The output of the clusteron is given by

y = g(Σ_{i=1}^{N} a_i),    (1)

where a_i is the net excitatory input at synapse i and g is a thresholding nonlinearity. Unlike the TLU, in which the net input due to a single input line i is w_i x_i, the net input at a clusteron synapse i with weight w_i is given by

a_i = w_i x_i (Σ_{j ∈ D_i} w_j x_j),    (2)

where x_i is the direct input stimulus intensity at synapse i, as for the TLU, and D_i = {i - r, ..., i, ..., i + r} represents the neighborhood of radius r around synapse i. It may be noted that the weight on each second-order term is constrained to be the product of elemental weights w_i w_j, such that the clusteron has only N underlying degrees of freedom as compared to the N² possible in a full second-order model.
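Equations 1 and 2 are easy to state in code. The sketch below is an assumed minimal implementation (the paper does not specify how neighborhoods are truncated at the ends of the "dendrite", so simple clipping is used here). It demonstrates the key property: a clustered pattern of active synapses yields a larger total activation than a diffuse pattern with the same number of active inputs.

```python
def clusteron_output(x, w, r, theta=0.0):
    """Clusteron response per eqs. (1)-(2).

    a_i = w_i * x_i * sum_{j in D_i} w_j * x_j, with D_i the radius-r
    neighborhood of synapse i; output is thresholded at theta.
    Returns (y, total_activation).
    """
    n = len(x)
    total = 0.0
    for i in range(n):
        neighborhood = sum(w[j] * x[j]
                           for j in range(max(0, i - r), min(n, i + r + 1)))
        total += w[i] * x[i] * neighborhood
    return (1 if total > theta else 0), total

# Same number of active inputs (4 of 20), weights all 1, radius r = 1:
x_clustered = [1, 1, 1, 1] + [0] * 16
x_diffuse = [1 if i % 5 == 0 else 0 for i in range(20)]
_, act_clustered = clusteron_output(x_clustered, [1] * 20, 1)
_, act_diffuse = clusteron_output(x_diffuse, [1] * 20, 1)
```

With all weights 1, each isolated active synapse contributes 1, while synapses inside a cluster contribute 2 or 3 each, so the clustered layout dominates even though both patterns have identical "synaptic drive" in the linear sense.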
For the simplest case of x_i ∈ {0, 1} and all weights set to 1, equation 2 says that the excitatory contribution of each active synapse is equal to the number of coactive synapses within its neighborhood. A synapse that is activated alone in its neighborhood thus provides a net excitatory input of a_i = 1; two synapses activated near to each other each provide a net excitatory input of a_i = a_j = 2, etc. The biophysical inspiration for the "multiplicative" relation in (2) is that the net injected current through a region of voltage-dependent dendritic membrane can, under many circumstances, grow faster than linearly with increasing synaptic input to that region. Unlike the dendritic membrane modeled at the biophysical level, however, the clusteron in its current definition does not contain any saturating nonlinearities in the dendrites.

2.3 THE LEARNING PROBLEM

The learning problem of present interest is that of two-category classification. A pattern is a sparse N-element vector, where each component is a boolean random variable equal to 1 with probability p, and 0 otherwise. Let T = {t_1, t_2, ..., t_P} be a training set consisting of P randomly chosen patterns. The goal of the classifier is to respond with y = 1 to any pattern in T, and y = 0 to all other "control" patterns with the same average bit density p. Performance at this task is measured by the probability of correct classification on a test set consisting of equal numbers of training and control patterns.

2.4 THE LEARNING RULE

Learning in the clusteron is the process by which the ordering of synaptic connections onto the dendrite is manipulated. Second-order features that are statistically prominent in the training set, i.e. pairs of pattern components that are coactivated in the training set more often than average, can become encoded in the clusteron as pairs of synaptic connections within the same dendritic neighborhood. Learning proceeds as follows.
Each pattern in T is presented once to the clusteron in a random sequence, constituting one training epoch. At the completion of each training epoch, each synapse i whose activation averaged over the training set,

⟨a_i⟩ = (1/P) Σ_{p=1}^{P} a_i^(p),

falls below a threshold θ is switched with another randomly chosen subthreshold synapse. The threshold can, for example, be chosen as θ = (1/N) Σ_{j=1}^{N} ⟨a_j⟩, i.e. the averaged synaptic activation across all synapses and training patterns. Each synapse whose average activation exceeds threshold θ is left undisturbed. Thus, if a synapse is often coactivated with its neighbors during learning, its average activation is high, and its connection is stabilized. If it is only rarely coactivated with its neighbors during learning, it loses its current connection, and is given the opportunity to stabilize a new connection at a new location.

Figure 2: Distribution of 100 active synapses for a trained pattern (A) vs. a random control pattern (B); synapse locations are designated by black dots. Layout A is statistically more "clustery" than B, as evidenced by the presence of several clusters of 5 or more active synapses not found in B. While the total synaptic conductance activated in layout A was 20% less than that in layout B (linked to local variations in input resistance), layout A generated 5 spikes at the soma, while layout B generated none.

The dynamics of clusteron learning may be caricatured as follows. At the start of learning, each "poor performing" synaptic connection improves its average activation level when switched to a new dendritic location where, by definition, it is expected to be an "average performer". The average global response y to training patterns is thus also expected to increase during early training epochs.
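The learning rule above can be sketched in a few lines. The code is a hedged interpretation: details such as how subthreshold synapses are paired for swapping are not fully specified in the text, so random pairwise swaps are assumed, and activations are computed with unit weights and binary inputs. `positions[k]` holds the dendritic location of input line k; one call performs one epoch's bookkeeping and swaps.

```python
import random

def clusteron_epoch(positions, patterns, r, rng):
    """One epoch of the clusteron learning rule (a simplified sketch).

    Computes <a_k> for each input line k over the training patterns
    (unit weights, binary inputs), then swaps the dendritic locations
    of subthreshold synapses pairwise at random. Returns `positions`.
    """
    n = len(positions)
    mean_act = [0.0] * n                       # <a_k>, indexed by input line
    for x in patterns:
        for k in range(n):
            if not x[k]:
                continue
            loc = positions[k]
            # a_k = number of coactive synapses within radius r of loc
            mean_act[k] += sum(1 for m in range(n)
                               if x[m] and abs(positions[m] - loc) <= r)
    mean_act = [a / len(patterns) for a in mean_act]
    theta = sum(mean_act) / n                  # global mean as threshold
    losers = [k for k in range(n) if mean_act[k] < theta]
    rng.shuffle(losers)
    for k1, k2 in zip(losers[::2], losers[1::2]):   # swap locations pairwise
        positions[k1], positions[k2] = positions[k2], positions[k1]
    return positions
```

Note that an epoch only permutes locations: the mapping from input lines to dendritic sites remains a bijection, which is the structural invariant the learning rule manipulates.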
The average response to random controls remains unchanged, however, since there is no systematic structure in the ordering of synaptic connections relevant to any untrained pattern. This relative shift in the mean responses to training vs. control patterns is the basis for discrimination between them. The learning process approaches its asymptote as each pair of synapses switched, on average, disturbs the optimized clusteron neighborhood structure as much as it improves it.

3 RESULTS

The clusteron learning rule leads to a permutation of synaptic input connections having the property that the distribution of activated synapses in the dendritic tree associated with the presentation of a typical training pattern is statistically more "clustery" than the distribution of activated synapses associated with the presentation of a random control pattern. For a given training set size, however, it is crucial to establish that the clustery distributions of active synapses associated with training patterns are in fact of a type that can be reliably discriminated, within the detailed biophysical model, from diffuse stimulation of the dendritic tree corresponding to unfamiliar stimulus patterns. In order to investigate this question, a clusteron with 17,000 synapses was trained with 1000 training patterns. This number of synapses was chosen in order that a direct map exist between clusteron synapses and dendritic spines, which were assumed to lie at 1 μm intervals along the approximately 17,000 μm of total dendritic length of the model neocortical neuron (from Douglas et al., 1991). In these runs, exactly 100 of the 17,000 bits were randomly set in each of the training and control patterns, such that every pattern activated exactly 100 synapses. After 200 training epochs, 100 training patterns and 100 control patterns were selected as a test set.
For each test pattern, the locations of its 100 active clusteron synapses were mapped onto the dendritic tree in the biophysical model by traversing the latter in depth-first order. For example, training pattern #36 activated synapses as shown in fig. 2A, with synapse locations indicated by black dots. The layout in B was due to a control pattern. It may be perceived that layout A contains several clear groupings of 5 or more synapses that are not observed in layout B. Within the biophysical model, the conductance of each synapse, containing both NMDA and non-NMDA components, was scaled inversely with the input resistance measured locally at the dendritic spine head. Membrane parameters were similar to those used in (Mel, 1992); a high-threshold non-inactivating calcium conductance and an anomalous rectifier were used in these experiments as well, and were uniformly distributed over most of the dendritic tree. In the simulation run for each pattern, each of the 100 activated synapses was driven at 100 Hz for 100 ms, asynchronously, and the number of action potentials generated at the soma was counted. The total activated synaptic conductance in fig. 2A was 20% less than that activated by control layout B. However, layout A generated 5 somatic spikes while layout B generated none. Fig. 3 shows the cell responses averaged over training patterns, four types of degraded training patterns, and control patterns. Most saliently, the average spike count in response to a training pattern was 3 times the average response to a control pattern. Not surprisingly, degraded training patterns gave rise to degraded responses. It is crucial to reiterate that all patterns, regardless of category, activated an identical number of synapses, with no average difference in their synaptic strengths or in dendritic eccentricity. Only the spatial distributions of active synapses were different among categories.
Figure 3: Average cell responses to training patterns, degraded training patterns (T/T, T/T/T, 20% noise, 50% noise), and control patterns. Categories designated T/T and T/T/T represented feature composites of 2 or 3 training patterns, respectively. Degraded responses to these categories of stimulus patterns were evidence for the underlying nonlinearity of the dendritic discriminant function.

4 CONCLUSION

These experiments within the clusteron model neuron have shown that the assumptions of (1) dendritic cluster-sensitivity, (2) a combinatorially rich interface structure that allows every afferent axon potential access to many dendritic loci, and (3) a local Hebb-type learning rule for stabilizing newly formed synapses, are sufficient in principle to allow the learning of nonlinear input-output relations within a single dendritic tree. The massive rearrangement of synapses seen in these computational experiments is not strictly necessary; much of the work could be done instead through standard Hebbian synaptic potentiation, if a larger set of post-synaptic neurons is assumed to be available to each afferent instead of a single neuron as used here. Architectural issues relevant to this issue have been discussed at length in (Mel, 1990; Mel & Koch, 1990). An analysis of the storage capacity of the clusteron will be presented elsewhere.

Acknowledgements

This work was supported by the Office of Naval Research, the James McDonnell Foundation, and the National Institute of Mental Health. Thanks to Christof Koch for providing an excellent working environment, Ken Miller for helpful discussions, and to Rodney Douglas for discussions and use of his neurons.

References

Brown, T.H., Mainen, Z.F., Zador, A.M., & Claiborne, B.J. Self-organization of hebbian synapses in hippocampal neurons. In Advances in Neural Information Processing Systems, vol. 3, R.
Lippmann, J. Moody, & D. Touretzky (Eds.), Palo Alto: Morgan Kaufmann, 1991.

Douglas, R.J., Martin, K.A.C., & Whitteridge, D. An intracellular analysis of the visual responses of neurones in striate visual cortex. J. Physiol., 1991, 440, 659-696.

Durbin, R. & Rumelhart, D.E. Product units: a computationally powerful and biologically plausible extension to backpropagation networks. Neural Computation, 1989, 1, 133.

Feldman, J.A. & Ballard, D.H. Connectionist models and their properties. Cognitive Science, 1982, 6, 205-254.

Hines, M. A program for simulation of nerve equations with branching geometries. Int. J. Biomed. Comput., 1989, 24, 55-68.

Mel, B.W. The sigma-pi column: a model for associative learning in cerebral neocortex. CNS Memo #6, Computation and Neural Systems Program, California Institute of Technology, 1990.

Mel, B.W. NMDA-based pattern classification in a modeled cortical neuron. Neural Computation, 1992, in press.

Mel, B.W. & Koch, C. Sigma-pi learning: On radial basis functions and cortical associative learning. In Advances in Neural Information Processing Systems, vol. 2, D.S. Touretzky (Ed.), San Mateo, CA: Morgan Kaufmann, 1990.
1991
Application of Neural Network Methodology to the Modelling of the Yield Strength in a Steel Rolling Plate Mill

Ah Chung Tsoi
Department of Electrical Engineering
University of Queensland, St Lucia, Queensland 4072, Australia.

Abstract

In this paper, a tree-based neural network, viz. MARS (Friedman, 1991), for the modelling of the yield strength of a steel rolling plate mill is described. The inputs to the time series model are temperature, strain, strain rate, and interpass time, and the output is the corresponding yield stress. It is found that the MARS-based model reveals which variables' functional dependence is nonlinear and significant. The results are compared with those obtained by using a Kalman filter based online tuning method and other classification methods, e.g. CART, C4.5, and Bayesian classification. It is found that the MARS-based method consistently outperforms the other methods.

1 Introduction

Hot rolling of steel slabs into flat plates is a common process in a steel mill. This technology has been in use for many years. The process of rolling hot slabs into plates is relatively well understood [see, e.g., Underwood, 1950]. But with intense international market competition, there is more and more demand on the quality of the finished plates. This demand for quality fuels the search for a better understanding of the underlying mechanisms of the transformation of hot slabs into plates, and a better control of the parameters involved. Hopefully, a better understanding of the controlling parameters will lead to a more optimal setting of the controls on the process, which will ultimately lead to a better quality final product. In this paper, we consider the problem of modelling the plate yield stress in a hot steel rolling plate mill.
Rolling is a process of plastic deformation, and its objective is achieved by subjecting the material to forces of such a magnitude that the resulting stresses produce a permanent change of shape. Apart from the obvious dependence on the materials used, the characteristics of the material undergoing plastic deformation are described by stress, strain and temperature, if the rolling is performed on hot slabs. In addition, the interpass time, i.e., the time between passes of the slab through the rollers (an indirect measure of the rolling velocity), directly influences the metallurgical structure of the metal during rolling. There is considerable evidence that the yield stress is also dependent on the strain rate. In fact, it is observed that as the strain rate increases, the initial yield point increases appreciably, but after an extension is achieved, the effect of strain rate on the yield stress is very much reduced [see, e.g., Underwood, 1950]. The effect of temperature on the yield stress is important. It is shown that the resistance to deformation increases with a decrease in temperature. The resistance to deformation versus temperature diagram shows a "hump" in the curve, which corresponds to the temperature at which the structure of the material changes fundamentally [see, e.g., Underwood, 1950, Hodgson & Collinson, 1990]. Using, e.g., an energy method, it is possible to formulate a theoretical model of the dependence of deformation resistance on temperature, strain, strain rate, and velocity (indirectly, the interpass time). One may then validate the theoretical model by performing a rolling experiment on a piece of material, perhaps under laboratory conditions [see, e.g., Horihata & Motomura, 1988, for consideration of a three roller system]. It is difficult to apply the derived theoretical model to a practical situation, due to the fact that in a practical process, the measurements of strain and strain rate are not accurate.
Secondly, one cannot possibly perform a rolling experiment on each new piece of material to be rolled. Thus, though the theoretical model may serve as a guide to our understanding of the process, it is not suitable for controller design purposes. There are empirical models relating the resistance to deformation to temperature, strain and strain rate [see, e.g., Underwood, 1950, for an account of older models]. These models are often obtained by fitting the observed data to a general data model. The following model has been found useful in fitting the observed practical data:

k_m = a ε^b sinh^{-1}( c ε̇ exp(d/T) )^f    (1)

where k_m is the yield stress, ε is the strain, ε̇ is the corresponding strain rate, and T is the temperature; a, b, c, d and f are unknown constants. It is claimed that this model will give a good prediction of the yield stress, especially at lower temperatures and for thin plate passes [Hodgson & Collinson, 1990]. This model does not always give good predictions over all temperatures, as mill conditions vary with time and the model is only "tuned" on a limited set of data. In order to overcome this problem, McFarlane, Telford, and Petersen [1991] have experimented with a recursive model based on the Kalman filter in control theory (see, e.g., Anderson & Moore, 1980) to update the parameters a, b, c, d, f in the above model. To better describe the material behaviour at different temperatures, the model explicitly incorporates two separate sub-models with a temperature dependence: 1. Full recrystallisation (T < T_upper):

k_m = a ε^b sinh^{-1}( c ε̇ exp(d/T) )^f    (2)

where the constants a, b, c, d, f are model coefficients. 2. Partial recrystallisation (T_lower ≤ T ≤ T_upper):
k_m = a (ε + ε*)^b sinh^{-1}( c ε̇ exp(d/T) )^f    (3)
t_0.5 = j (λ_{i-1} ε_{i-1} + ε_i)^g f_1( q(T_{i-1}, T_i) )^h    (4)
λ_i = f_2(t, t_0.5)    (5)

where λ is the fractional retained strain; ε*, expressed as a Taylor series expansion of λ_{i-1} ε_{i-1}, is the retained strain; t is the interpass time; t_0.5 is the 50% recrystallisation time; q(T_{i-1}, T_i) is a prescribed nonlinear function of T_{i-1} and T_i; f_1(.) and f_2(.) are pre-specified nonlinear functions; i is the roll pass number; j, h, g are model coefficients; T_upper is an experimentally determined temperature at which the material undergoes a permanent change in structure; and T_lower is a temperature below which the material does not exhibit any plastic behaviour. Model coefficients a, b, c, d, f, g, h, j are either estimated in a batch mode (i.e., all the past data are assumed to be available simultaneously) or adapted recursively on-line (i.e., only a limited number of the past data is available) using a Kalman filter algorithm in order to provide the best model predictions [McFarlane, Telford, Petersen, 1991]. It is noted that these models are motivated by the desire to fit a nonlinear model of a special type, i.e., one which has an inverse hyperbolic sine function. But, since the basic operation is data fitting, i.e., to fit a model to the set of given data, it is possible to consider more general nonlinear models. These models may not have any ready interpretation in metallurgical terms, but they may be better at fitting a nonlinear model to the given data set, in the sense that they may give a better prediction of the output. It has been shown (see, e.g., Hornik et al., 1989) that a class of artificial neural networks, viz., a multilayer perceptron with a single hidden layer, can approximate any arbitrary input-output function to an arbitrary degree of accuracy.
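As a concrete illustration, the hyperbolic-sine law of equations (1)-(3) can be evaluated directly. The sketch below uses made-up coefficients purely for illustration; the real values of a, b, c, d, f are mill-specific and are not given in the paper.

```python
import math

def yield_stress(strain, strain_rate, temperature, a, b, c, d, f):
    """Evaluate the empirical yield stress law of equation (1):
    k_m = a * strain**b * asinh(c * strain_rate * exp(d / T)) ** f
    All coefficient values here are illustrative, not fitted ones."""
    z = c * strain_rate * math.exp(d / temperature)
    return a * strain ** b * math.asinh(z) ** f

# With any positive coefficients, stress rises as temperature falls,
# matching the qualitative behaviour described in the text.
k_hot = yield_stress(0.3, 10.0, 1300.0, a=50.0, b=0.2, c=0.01, d=4000.0, f=1.0)
k_cold = yield_stress(0.3, 10.0, 900.0, a=50.0, b=0.2, c=0.01, d=4000.0, f=1.0)
assert k_cold > k_hot
```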
Thus it is reasonable to experiment with different classes of artificial neural network or induction tree structures for fitting the set of given data, and to examine which structure gives the best performance. The structure of the paper is as follows: in section 2, a brief review of a special class of neural networks is given. In section 3, results in applying the neural network model to the plate mill data are given. 2 A Tree Based Neural Network Model Friedman [1991] introduced a new class of neural network architecture which is called MARS (Multivariate Adaptive Regression Splines). This class of methods can be interpreted as a tree of neurons, in which each leaf of the tree consists of a neuron. The model of the neuron may be a piecewise linear polynomial, or a cubic polynomial, with the knot as a variable. In view of the lack of space, we refer the interested reader to Friedman's paper [1991] for details on this method. 3 Results MARS has been applied to the plate mill data. We have used the data in the following manner. We concatenate different runs of the plate mill into a single time series. This consists of 2877 points corresponding to 180 individual plates with approximately 16 passes on each plate. There are 4 independent variables, viz., interpass time, temperature, strain, and strain rate. The desired output variable is the yield stress. A plot of the individual variables, viz., temperature, strain, strain rate, interpass time and stress, versus time reveals that the variables can vary rather considerably over the entire time series. In addition, plots of stress versus temperature, stress versus strain, stress versus strain rate and stress versus interpass time reveal that the functional dependence could be highly nonlinear.
We have chosen to use an additive model (Friedman [1991]), instead of the more general multivariate model, as this allows us to observe any possible nonlinear functional dependencies of the output as a function of the inputs:

k_m = Σ_{i=1}^{4} k_i f_i(x_i)    (6)

where x_1, ..., x_4 are the four input variables, k_i, i = 1, 2, 3, 4 are gains, and f_i, i = 1, 2, 3, 4 are piecewise nonlinear polynomial models found by MARS. The results are as follows. Both the piecewise linear polynomial and the piecewise cubic polynomial were used to study this set of data. It is found that the cubic polynomial gives a better fit than the linear polynomial fit. Figure 1(a) shows the error plot between the estimated output from a cubic spline fit and the training data. It is observed that the error is very small; the maximum error is about -0.07. Figure 1(b) shows the plot of the predicted yield stress and the original yield stress over the set of training data. These figures indicate that the cubic polynomial fit has captured most of the variation of the data. It is interesting to note that in this model, the interpass time plays no significant part. This feature may be a peculiar aspect of this set of data points; it is not true in general. It is found that the strain rate has the most influence on the data, followed by temperature, and then by strain.

Figure 1: (a) The prediction error on the training data set (b) The prediction and the training data set superimposed

The model, once obtained, can be used to predict the yield stress from a given set of temperature, strain, and strain rate. Figure 2(a) shows the prediction error between the yield stress and the predicted yield stress on a set of testing data, i.e. data which is not used to train the model, and Figure 2(b) shows a plot of the predicted value of yield stress superimposed on the original yield stress. It is observed that the prediction on the set of testing data is reasonable.
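MARS builds each univariate function f_i in the additive model of equation (6) from truncated "hinge" basis functions of the form max(0, ±(x − knot)) (Friedman, 1991). The toy sketch below is not the fitted model from the paper; the knots and gains are hypothetical, with the temperature breakpoints borrowed from the values reported in the conclusion (1017 and 1129 deg C) and a zero gain standing in for the insignificant interpass time.

```python
def hinge(x, knot, sign=1):
    """MARS basis function: max(0, sign * (x - knot))."""
    return max(0.0, sign * (x - knot))

def additive_yield_stress(temperature, strain, strain_rate, interpass):
    """Toy additive model in the spirit of equation (6): a sum of
    univariate piecewise-linear terms, one per input variable.
    All knots and gains are made up for illustration."""
    f_temp = 0.02 * hinge(temperature, 1017.0, -1) + 0.005 * hinge(temperature, 1129.0, -1)
    f_strain = 1.5 * hinge(strain, 0.1)
    f_rate = 0.8 * hinge(strain_rate, 5.0)
    f_interpass = 0.0 * hinge(interpass, 10.0)  # interpass time found insignificant
    return f_temp + f_strain + f_rate + f_interpass
```

With hinges oriented downward in temperature, the model reproduces the qualitative finding that resistance to deformation increases as temperature falls below the breakpoints.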
This indicates that the MARS model has captured most of the dynamics underlying the original training data, and is capable of extending this captured knowledge to a set of hitherto unseen data. 4 Comparison with the results obtained by conventional approaches In order to compare the artificial neural network approach to more conventional methods for model tuning, the same data set was processed using: 1. A MARS model with cubic polynomials 2. An inverse hyperbolic sine law model using least squares batch parameter tuning 3. An inverse hyperbolic sine law model using recursive least squares tuning 4. CART based classification [Breiman et al., 1984] 5. C4.5 based method [Quinlan, 1986, 1987] 6. Bayesian classification [Buntine, 1990]

Figure 2: (a) The prediction error on the testing data set (b) The prediction and the testing data set superimposed

In each case, we used a training data set of 78 plates (1242 passes) and a testing data set of 16 plates (252 passes). In the cases of the CART, C4.5, and Bayesian classification methods, the yield stress variable is divided equally into 10 classes, and this is used as the desired output instead of the original real values.
The comparison of the results between MARS and the Kalman filter based approach is shown in the following table:

             B11     B12     A11     A12     C11     C12
mean %      -0.64    1.69   -0.64    2.38   -0.2     4.5
mean abs %   4.61    4.22    4.61    5.3     3.5     5.3
std %        6.26    5.11    6.26    6.25    4.7     4.9

where B11 = batch tuning: tuning the model (forgetting factor = 1 in adaptation) on the training data; B12 = batch tuning: running the tuned model on the testing data; A11 = adaptation on the training data; A12 = adaptation on the testing data; C11 = MARS on the training data; C12 = MARS on the testing data; and mean% = mean((k_meas - k_pred)/k_meas), mean abs% = mean(abs((k_meas - k_pred)/k_meas)), std% = stdev((k_meas - k_pred)/k_meas), where mean and stdev stand for the mean and the standard deviation respectively, and k_meas, k_pred represent the measured and predicted values of the yield stress respectively. It is found that the MARS based model performs extremely well compared with the other methods. The standard deviation of the prediction errors in the MARS model is considerably less than the corresponding standard deviation of prediction errors in a Kalman filter type batch or online tuning model on the testing data set. We have also compared MARS with both the CART based method and the C4.5 based method. As both CART and C4.5 operate only on an output category, rather than a continuous output value, it is necessary to convert the yield stress into a category type of variable. We have chosen to divide the yield stress equally into 10 classes. With this modification, the CART and C4.5 methods are readily applicable. The following table summarises the results of this comparison. The values given are the percentage of prediction error on the testing data set for the various methods. In the case of MARS, we have converted the prediction error from a continuous variable into the corresponding classes as used in the CART and C4.5 methods.
Bayes    CART     C4.5     MARS
65.4     12.99    16.14    6.2

It is found that the MARS model is more consistent in predicting the output classes than either the CART method, the C4.5 based method, or the Bayesian classifier. The fact that the MARS model performs better than the CART model can be seen as a confirmation that the MARS model is a generalisation of the CART model (see Friedman [1991]). But it is rather surprising to see that the MARS model outperforms a Bayesian classifier. The results are similar over a number of other typical data sets, e.g., when the interpass time variable becomes significant. 5 Conclusion It is found that MARS can be applied to model the plate mill data with very good accuracy. In terms of predictive power on unseen data, it performs better than the more traditional methods, e.g., Kalman filter batch or online tuning methods, CART, C4.5 or the Bayesian classifier. It is almost impossible to convert the MARS model into one given in section 1. The Hodgson-Collinson model places a breakpoint at a temperature of 925 deg C, while in the MARS model, the temperature breakpoints are found to be at 1017 deg C and 1129 deg C respectively. Hence it is difficult to convert the MARS model into those given by the Hodgson-Collinson model or the Kalman filter type models, or vice versa. A possible improvement to the current MARS technique would be to restrict the breakpoints so that they must lie within a temperature region where microstructural changes are known to occur. 6 Acknowledgement The author acknowledges the assistance given by the staff at the BHP Melbourne Research Laboratory in providing the data, as well as the background material in this paper. He specially thanks Dr D McFarlane for giving his generous time in assisting in the understanding of the more traditional approaches, and also for providing the results on the Kalman filtering approach.
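The mean%, mean abs% and std% measures defined in the comparison above are straightforward to compute. A minimal sketch, using made-up measured and predicted values rather than the paper's data:

```python
def error_stats(measured, predicted):
    """Relative-error summaries as defined in the text:
    mean%, mean abs%, and std% of (k_meas - k_pred) / k_meas,
    each returned as a percentage."""
    rel = [(m - p) / m for m, p in zip(measured, predicted)]
    n = len(rel)
    mean = sum(rel) / n
    mean_abs = sum(abs(r) for r in rel) / n
    # sample standard deviation of the relative errors
    var = sum((r - mean) ** 2 for r in rel) / (n - 1)
    return 100.0 * mean, 100.0 * mean_abs, 100.0 * var ** 0.5

# Illustrative data only (not the plate mill measurements):
m_pct, ma_pct, s_pct = error_stats([100.0, 110.0, 95.0, 120.0],
                                   [98.0, 115.0, 96.0, 118.0])
```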
Also, he is indebted to Dr W Buntine, RIACS, NASA Ames Research Center, for providing an early version of the induction tree based programs. 7 References Anderson, B.D.O., Moore, J.B., (1980). Optimal Filtering. Prentice Hall, Englewood Cliffs, NJ. Breiman, L., Friedman, J., Olshen, R.A., Stone, C.J., (1984). Classification and Regression Trees. Wadsworth, Belmont, CA. Buntine, W., (1990). A Theory of Learning Classification Rules. PhD Thesis, University of Technology, Sydney. Friedman, J., (1991). "Multivariate Adaptive Regression Splines". Annals of Statistics, to appear. (The implication of the paper for neural network models was also presented orally at the 1990 NIPS Conference.) Hodgson, Collinson, (1990). Manuscript under preparation (authors are with BHP Research Lab., Melbourne, Australia). Horihata, M., Motomura, M., (1988). "Theoretical analysis of 3-roll rolling process by the energy method". Transactions of the Iron and Steel Institute of Japan, 28:6, 434-439. Hornik, K., Stinchcombe, M., White, H., (1989). "Multilayer Feedforward Networks are Universal Approximators". Neural Networks, 2, 359-366. McFarlane, D., Telford, A., Petersen, I., (1991). Manuscript under preparation. Quinlan, R., (1986). "Induction of Decision Trees". Machine Learning, 1, 81-106. Quinlan, R., (1987). "Simplifying Decision Trees". International Journal of Man-Machine Studies, 27, 221-234. Underwood, L.R., (1950). The Rolling of Metals. Chapman & Hall, London.
Reverse TDNN: An Architecture for Trajectory Generation Patrice Simard AT&T Bell Laboratories 101 Crawford Corner Rd Holmdel, NJ 07733 Yann Le Cun AT&T Bell Laboratories 101 Crawford Corner Rd Holmdel, NJ 07733 Abstract The backpropagation algorithm can be used for both recognition and generation of time trajectories. When used as a recognizer, it has been shown that the performance of a network can be greatly improved by adding structure to the architecture. The same is true in trajectory generation. In particular, a new architecture corresponding to a "reversed" TDNN is proposed. Results show dramatic improvement of performance in the generation of hand-written characters. A combination of TDNN and reversed TDNN for compact encoding is also suggested. 1 INTRODUCTION Trajectory generation finds interesting applications in the fields of robotics, automation, filtering, and time series prediction. Neural networks, with their ability to learn from examples, have been proposed very early on for solving non-linear control problems adaptively. Several neural net architectures have been proposed for trajectory generation, most notably recurrent networks, either with discrete time and external loops (Jordan, 1986), or with continuous time (Pearlmutter, 1988). Aside from being recurrent, these networks are not specifically tailored for trajectory generation. It has been shown that specific architectures, such as the Time Delay Neural Networks (Lang and Hinton, 1988), or convolutional networks in general, are better than fully connected networks at recognizing time sequences such as speech (Waibel et al., 1989), or pen trajectories (Guyon et al., 1991). We show that special architectures can also be devised for trajectory generation, with dramatic performance improvement. Two main ideas are presented in this paper. The first one rests on the assumption that most trajectory generation problems deal with continuous trajectories.
Following (Pearlmutter, 1988), we present the "differential units", in which the total input to the neuron controls the rate of change (time derivative) of that unit's state, instead of directly controlling its state. As will be shown, the "differential units" can be implemented in terms of regular units. The second idea comes from the fact that trajectories usually come from a plan, resulting in the execution of a "motor program". Executing a complete motor program will typically involve executing a hierarchy of sub-programs, modified by the information coming from sensors. For example, drawing characters on a piece of paper involves deciding which character to draw (and what size), then drawing each stroke of the character. Each stroke involves particular sub-programs which are likely to be common to several characters (straight lines of various orientations, curved lines, loops...). Each stroke is decomposed into precise motor patterns. In short, a plan can be described in a hierarchical fashion, starting from the most abstract level (which object to draw), which changes every half second or so, down to the lowest level (the precise muscle activation patterns), which changes every 5 or 10 milliseconds. It seems that this scheme can be particularly well embodied by an "Oversampled Reverse TDNN" (ORTDNN), a multilayer architecture in which the states of the units in the higher layers are updated at a faster rate than the states of units in lower layers. The ORTDNN resembles a Subsampled TDNN (Bottou et al., 1990) (Guyon et al., 1991), or a subsampled weight-sharing network (Le Cun et al., 1990a), in which all the connections have been reversed, and the input and output have been interchanged. The advantage of using the ORTDNN, as opposed to a table lookup, or a memory intensive scheme, is the ability to generalize the learned trajectories to unseen inputs (plans).
With this new architecture, it is shown that trajectory generation problems of large complexity can be solved with relatively small resources. 2 THE DIFFERENTIAL UNITS In a time continuous network, the forward propagation can be written as:

T dx(t)/dt = -x(t) + g(w x(t)) + I(t)    (1)

where x(t) is the activation vector for the units, T is a diagonal matrix such that T_ii is the time constant for unit i, I(t) is the input vector at time t, w is a weight matrix such that w_ij is the connection from unit j to unit i, and g is a differentiable (multi-valued) function. A reasonable discretization of this equation is:

x^{t+1} = x^t + Δt T^{-1} ( -x^t + g(w x^t) + I^t )    (2)

where Δt is the time step used in the discretization, and the superscript t means at time tΔt (i.e. x^t = x(tΔt)). x^0 is the starting point and is a constant. t ranges from 0 to M, with I^0 = 0. The cost function to be minimized is:

E = (1/2) Σ_{t=1}^{M} (S^t x^t - D^t)^T (S^t x^t - D^t)    (3)

where D^t is the desired output, and S^t is a rectangular matrix which has a 0 if the corresponding x^t_i is unconstrained and 1 otherwise. Each pattern is composed of pairs (I^t, D^t) for t in [1..M]. To minimize equation 3 with the constraints given by equation 2, we express the Lagrange function (Le Cun, 1988):

L = (1/2) Σ_{t=1}^{M} (S^t x^t - D^t)^T (S^t x^t - D^t) + Σ_{t=0}^{M-1} (b^{t+1})^T ( -x^{t+1} + x^t + Δt T^{-1} ( -x^t + g(w x^t) + I^t ) )    (4)

where the b^{t+1} are Lagrange multipliers (for t in [1..M]). The superscript T means that the corresponding matrix is transposed. If we differentiate with respect to x^t, we get:

(∂L/∂x^t)^T = 0 = (S^t x^t - D^t) - b^t + b^{t+1} - Δt T^{-1} b^{t+1} + Δt T^{-1} w^T g'(w x^t) b^{t+1}    (5)

for t in [1..M-1], and ∂L/∂x^M = 0 = (S^M x^M - D^M) - b^M for the boundary condition; here g' is a diagonal matrix containing the derivatives of g (g'(w x) w is the Jacobian of g). From this an update rule for b^t can be derived:

b^M = S^M x^M - D^M
b^t = (S^t x^t - D^t) + (1 - Δt T^{-1}) b^{t+1} + Δt T^{-1} w^T g'(w x^t) b^{t+1}    for t in [1..M-1]    (6)

This is the rule used to compute the gradient (backpropagation).
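The adjoint recursion of equation (6) can be sketched for a single unit. This scalar version assumes g = tanh (the paper does not fix g) and is illustrative only; the paper's version is the vector/matrix form.

```python
import math

def backward(xs, w, S, D, T, dt=1.0):
    """Scalar sketch of the adjoint recursion of equation (6):
    b[M] = S[M]*x[M] - D[M]
    b[t] = (S[t]*x[t] - D[t]) + (1 - dt/T)*b[t+1]
           + (dt/T) * w * g'(w*x[t]) * b[t+1]
    Index 0 of the returned list is unused (the paper defines b for
    t in [1..M])."""
    M = len(xs) - 1
    b = [0.0] * (M + 1)
    b[M] = S[M] * xs[M] - D[M]
    for t in range(M - 1, 0, -1):
        gprime = 1.0 - math.tanh(w * xs[t]) ** 2  # derivative of tanh
        b[t] = (S[t] * xs[t] - D[t]) + (1.0 - dt / T) * b[t + 1] \
               + (dt / T) * w * gprime * b[t + 1]
    return b

# With w = 0 the recursion reduces to pure exponential decay of the
# terminal error through the (1 - dt/T) factor.
b = backward([0.0, 0.5, 1.0], w=0.0, S=[0, 0, 1], D=[0.0, 0.0, 0.0], T=2.0)
```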
If the Lagrangian is differentiated with respect to w_ij, the standard updating rule for the weights is obtained:

∂L/∂w_ij = Δt T_i^{-1} Σ_{t=0}^{M-1} b_i^{t+1} x_j^t g_i'( Σ_k w_ik x_k^t )    (7)

If the Lagrangian is differentiated with respect to T, we get:

∂L/∂T = -T^{-1} Σ_{t=0}^{M-1} (x^{t+1} - x^t) b^{t+1}    (8)

From the last two equations, we can derive a learning algorithm by gradient descent:

Δw = -η_w ∂L/∂w    (9)
ΔT = -η_T ∂L/∂T    (10)

where η_w and η_T are respectively the learning rates for the weights and the time constants (in practice, better results are obtained by having different learning rates η_{w_ij} and η_{T_ii} per connection). The constant η_T must be chosen with caution, since if any time constant T_ii were to become less than one, the system would be unstable. Performing gradient descent in T_ii^{-1} instead of in T_ii is preferable for numerical stability reasons. Equation 2 is implemented with a feed-forward backpropagation network. It should first be noted that this equation can be written as a linear combination of x^t (the activation at the previous time step), the input, and a non-linear function g of w x^t. Therefore, it can be implemented with two linear units and one nonlinear unit with activation function g. To keep the time constraint, the network is "unfolded" in time, with the weights shared from one time step to another. For instance, a simple fully connected two-unit network with no threshold can be implemented as in Fig. 1 (only the layer between time t and t+1 is shown). The network repeats itself vertically for each time step with the weights shared between time steps.

Figure 1: A backpropagation implementation of equation 2 for a two-unit network between time t and t+1. This figure repeats itself vertically for every time step from t = 0 to t = M. The quantities x_1^{t+1}, x_2^{t+1}, d_1^t = -x_1^t + g_1(w x^t) + I_1^t and d_2^t = -x_2^t + g_2(w x^t) + I_2^t are computed with linear units.
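One unfolded forward step of equation (2) can be sketched as follows, again assuming g = tanh (the paper does not fix g):

```python
import math

def step(x, w, I, T, dt=1.0):
    """One forward-Euler step of equation (2):
    x[t+1] = x[t] + (dt / T_i) * (-x[t] + g(w x[t]) + I[t]).
    A large time constant T_i makes unit i change slowly, which is
    what produces smooth output trajectories."""
    n = len(x)
    wx = [sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]
    g = [math.tanh(v) for v in wx]
    return [x[i] + (dt / T[i]) * (-x[i] + g[i] + I[i]) for i in range(n)]

# A unit with T = 10 moves a tenth as far per step as one with T = 1.
x0 = [0.0, 0.0]
w = [[0.0, 0.0], [0.0, 0.0]]
x1 = step(x0, w, I=[1.0, 1.0], T=[1.0, 10.0])
```

Unfolding this step M times, with the same w reused at every step, gives exactly the vertically repeated network of Figure 1.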
The main advantage of this implementation is that equations 6, 7 and 8 are all implemented implicitly by the back-propagation algorithm. 3 CHARACTER GENERATION: LEARNING TO GENERATE A SINGLE LETTER In this section we describe a simple experiment designed to 1) illustrate how trajectory generation can be implemented with a recurrent network, 2) show the advantages of using differential units instead of the traditional non-linear units, and 3) show how the fully connected architecture (with differential units) severely limits the learning capacity of the network. The task is to draw the letter "A" with a pen.

Figure 2: Top left: Trajectory representing the letter "A". Bottom left: Trajectory produced by the network after learning. The dots correspond to the target points of the original trajectory. The curve is produced by drawing output unit 2 as a function of output unit 1, using output unit 0 for deciding when the pen is up or down. Right: Trajectories of the three output units (pen-up/pen-down, X coordinate of the pen and Y coordinate of the pen) as a function of time. The dots correspond to the target points of the original trajectory.

The network has 3 output units, two for the X and Y position of the pen, and one to code whether the pen is up or down. The network has a total of 21 units: no input units, 18 hidden units and 3 output units. The network is fully connected. Character glyphs are obtained from a tablet which records points at successive instants of time.
The data therefore is a sequence of triplets indicating the time and the X and Y positions. When the pen is up, or if there is no constraint for some specific time steps (misreadings of the tablet), the activation of the unit is left unconstrained. The letter to be learned is taken from a handwritten letter database and is displayed in figure 2 (top left). The letter trajectory covers a maximum of 90 time stamps. The network is unfolded 135 steps (10 unconstrained steps are left at the beginning to allow the network to settle and 35 additional steps are left at the end to monitor the network activity). The learning rate η_w is set to 1.0 (the actual learning rate is per connection and is obtained by dividing the global learning rate by the fan-in of the destination unit, and by the number of connections sharing the same weight). The time constants are set to 10 to produce a smooth trajectory on the output. The learning rate η_T is equal to zero (no learning on the time constants). The initial values for the weights are picked from a uniform distribution between -1 and +1. The trajectories of units 0, 1 and 2 are shown in figure 2 (right). The top graph represents the state of the pen as a function of time. The straight lines are the desired positions (1 means pen down, -1 means pen up). The middle and bottom graphs are the X and Y positions of the pen respectively. The network is unconstrained after time step 100. Even though the time constants are large, the output units reach the right values before time step 10. The top trajectory (pen-up/pen-down), however, is difficult to learn with time constants as large as 10 because it is not smooth. The letter drawn by the network after learning is shown in figure 2 (bottom left). The network successfully learned to draw the letter on the fully connected network. Different fixed time constants were tried.
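The per-connection learning-rate scaling described in the parenthesis above amounts to:

```python
def per_connection_lr(global_lr, fan_in, n_shared):
    """Effective learning rate for one connection, as described in the
    text: the global rate divided by the fan-in of the destination unit
    and by the number of connections sharing the same weight."""
    return global_lr / (fan_in * n_shared)

# E.g. a global rate of 1.0, destination fan-in of 21, and a weight
# shared across 135 unfolded time steps (numbers from the experiment above).
lr = per_connection_lr(1.0, 21, 135)
```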
For a small time constant (like 1.0), the network was unable to learn the pattern for any learning rate η_w we tried. This is not surprising, since the (vertical) weight sharing makes the trajectories very sensitive to any variation of the weights. This fact emphasizes the importance of using differential units. Larger time constants allow a larger learning rate for the weights. Of course, if they are too large, fast trajectories cannot be learned. The error can be further improved by letting the time constants adapt as well. However, the gain in doing so is minimal. If the learning rate η_T is small, the gain over η_T = 0 is negligible. If η_T is too big, learning quickly becomes unstable. This simulation was done with no input, and the target trajectories were for the drawing of a single letter. In the next section, the problem is extended to that of learning to draw multiple letters, depending on an input vector. 4 LEARNING TO GENERATE MULTIPLE LETTERS: THE REVERSE TDNN ARCHITECTURE In a first attempt, the fully connected network of the previous section was used to try to generate the first eight letters of the alphabet. Eight units were used for the input, 3 for the output, and various numbers of hidden units were tried. Every time, all the units, visible and hidden, were fully interconnected. Each input unit was associated with one letter, and the input patterns consisted of a +1 at the unit corresponding to the letter, and -1/7 for all other input units. No success was achieved for any of the sets of parameters which were tried. The error curves reached plateaus, and the letter glyphs were not recognizable. Even bringing the number of letters down to two (one "A" and one "B") was unsuccessful. In all cases the network was acting as if it were ignoring its input: the activations of the output units were almost identical for all input patterns. This was attributed to the network architecture.
A new kind of architecture was then used, which we call the "Oversampled Reverse TDNN" because of its resemblance to a Subsampled TDNN with input and output interchanged. Subsampled TDNNs have been used in speech recognition (Bottou et al., 1990) and on-line character recognition (Guyon et al., 1991). They can be seen as one-dimensional versions of locally-connected, weight-sharing networks (Le Cun, 1989) (Le Cun et al., 1990b). Time delay connections allow units to be connected to units at an earlier time. Weight sharing in time implements a convolution of the input layer. In the Subsampled TDNN, the rate at which the unit states are updated decreases gradually with the layer index. The subsampling provides a gradual reduction of the time resolution.

Figure 3: Architecture of a simple reverse TDNN. Time goes from bottom to top, data flows from left to right. The left module is the input and has 2 units. The next module (hidden1) has 3 units and is undersampled every 4 time steps. The following module (hidden2) has 4 units and is undersampled every 2 time steps. The right module is the output, has 3 units and is not undersampled. All modules have time delay connections from the preceding module. Thus hidden1 is connected to hidden2 over a window of 5 time steps, and hidden2 to the output over a window of 3 time steps. For each pattern presented on the 2 input units, a trajectory of 8 time steps is produced by the network on each of the 3 units of the output.

In a reverse TDNN, the subsampling starts from the units of the output (which have no subsampling) toward the input. Equivalently, each layer is oversampled when compared to the previous layer. This is illustrated in Figure 3, which shows a small reverse TDNN. The input is applied to the 2 units in the lower left. The next layer is unfolded in time two steps and has time delay connections toward step zero of the input. The layer after that is unfolded in time 4 steps (again with time delay connections), and finally the output is completely unfolded in time. The advantage of such an architecture is its ability to generate trajectories progressively, starting with the lower frequency components at each layer. This parallels recognition TDNNs, which extract features progressively. Since the weights are shared between time steps, the network in the figure has only 94 free weights. With the reverse TDNN architecture, it was easy to learn the 26 letters of the alphabet. We found that learning is easier if all the weights are initialized to 0 except those with the shortest time delay. As a result, the network initially only sees its fastest connections. The influence of the remaining connections starts at zero and increases as the network learns. The glyphs drawn by the network after 10,000 training epochs are shown in figure 4.

Figure 4: Letters drawn by the reverse TDNN network after 10,000 iterations of learning.

To avoid ambiguity, we give subsampling rates with respect to the output, although it would be more natural to mention oversampling rates with respect to the input.
The network has 26 input units, 30 hidden units in the first layer subsampled at every 27 time steps, 25 units at the next layer subsampled at every 9 time steps, and 3 output units with no subsampling. Every layer has time delay connections from the previous layer, and is connected with 3 different updates of the previous layer. The time constants were not subject to learning and were initialized to 10 for the x and y output units, and to 1 for the remaining units. No effort was made to optimize these values. Big initial time constants prevent the network from making fast variations on the output units and in general slow down the learning process. On the other hand, small time constants make learning more difficult. The correct strategy is to adapt the time constants to the intrinsic frequencies of the trajectory. With all the time constants equal to one, the network was not able to learn the alphabet (as was the case in the experiment of the previous section). Good results are obtained with time constants of 10 for the two x-y output units and time constants of 1 for all other units.

5 VARIATIONS OF THE ORTDNN

Many variations of the Oversampled Reverse TDNN architecture can be imagined. For example, recurrent connections can be added: connections can go from right to left on Figure 3, as long as they go up. Recurrent connections become necessary when information needs to be stored for an arbitrarily long time. Another variation would be to add sensor inputs at various stages of the network, to allow adjustment of the trajectory based on sensor data, either on a global scale (first layers) or locally (last layers). Tasks requiring recurrent ORTDNNs and/or sensor input include dynamic robot control or speech synthesis. Another interesting variation is an encoder network consisting of a Subsampled TDNN and an Oversampled Reverse TDNN connected back to back.
The Subsampled TDNN encodes the time sequence shown on its input, and the ORTDNN reconstructs a time sequence from the output of the TDNN. The main application of this network would be the compact encoding of time series. This network can be trained to reproduce its input on its output (auto-encoder), in which case the state of the middle layer can be used as a compact code of the input sequence.

6 CONCLUSION

We have presented a new architecture capable of learning to generate trajectories efficiently. The architecture is designed to favor hierarchical representations of trajectories in terms of subtasks. The experiment shows how the ORTDNN can produce different letters as a function of the input. Although this application does not have practical consequences, it shows the learning capabilities of the model for generating trajectories. The task presented here was particularly difficult because there is no correlation between the patterns. The inputs for an A or a Z only differ on 2 of the 26 input units. Yet, the network produces totally different trajectories on the output units. This is promising since typical neural net applications have very correlated patterns which are in general much easier to learn.

References

Bottou, L., Fogelman, F., Blanchet, P., and Lienard, J. S. (1990). Speaker independent isolated digit recognition: Multilayer perceptron vs Dynamic Time Warping. Neural Networks, 3:453-465.

Guyon, I., Albrecht, P., Le Cun, Y., Denker, J. S., and Hubbard, W. (1991). Design of a neural network character recognizer for a touch terminal. Pattern Recognition, 24(2):105-119.

Jordan, M. I. (1986). Serial Order: A Parallel Distributed Processing Approach. Technical Report ICS-8604, Institute for Cognitive Science, University of California at San Diego, La Jolla, CA.

Lang, K. J. and Hinton, G. E. (1988). A Time Delay Neural Network Architecture for Speech Recognition. Technical Report CMU-CS-88-152, Carnegie-Mellon University, Pittsburgh, PA.
Le Cun, Y. (1988). A theoretical framework for Back-Propagation. In Touretzky, D., Hinton, G., and Sejnowski, T., editors, Proceedings of the 1988 Connectionist Models Summer School, pages 21-28, CMU, Pittsburgh, PA. Morgan Kaufmann.

Le Cun, Y. (1989). Generalization and Network Design Strategies. In Pfeifer, R., Schreter, Z., Fogelman, F., and Steels, L., editors, Connectionism in Perspective, Zurich, Switzerland. Elsevier. An extended version was published as a technical report of the University of Toronto.

Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990a). Handwritten digit recognition with a back-propagation network. In Touretzky, D., editor, Advances in Neural Information Processing Systems 2 (NIPS*89), Denver, CO. Morgan Kaufmann.

Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990b). Back-Propagation Applied to Handwritten Zipcode Recognition. Neural Computation.

Pearlmutter, B. (1988). Learning State Space Trajectories in Recurrent Neural Networks. Neural Computation, 1(2).

Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. (1989). Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:328-339.
1991
Principles of Risk Minimization for Learning Theory

V. Vapnik
AT&T Bell Laboratories, Holmdel, NJ 07733, USA

Abstract

Learning is posed as a problem of function estimation, for which two principles of solution are considered: empirical risk minimization and structural risk minimization. These two principles are applied to two different statements of the function estimation problem: global and local. Systematic improvements in prediction power are illustrated in application to zip-code recognition.

1 INTRODUCTION

The structure of the theory of learning differs from that of most other theories for applied problems. The search for a solution to an applied problem usually requires the three following steps: 1. State the problem in mathematical terms. 2. Formulate a general principle to look for a solution to the problem. 3. Develop an algorithm based on such a general principle. The first two steps of this procedure offer in general no major difficulties; the third step requires most effort, in developing computational algorithms to solve the problem at hand. In the case of learning theory, however, many algorithms have been developed, but we still lack a clear understanding of the mathematical statement needed to describe the learning procedure, and of the general principle on which the search for solutions should be based. This paper is devoted to these first two steps, the statement of the problem and the general principle of solution. The paper is organized as follows. First, the problem of function estimation is stated, and two principles of solution are discussed: the principle of empirical risk minimization and the principle of structural risk minimization. A new statement is then given: that of local estimation of function, to which the same principles are applied. An application to zip-code recognition is used to illustrate these ideas.

2 FUNCTION ESTIMATION MODEL

The learning process is described through three components: 1.
A generator of random vectors x, drawn independently from a fixed but unknown distribution P(x). 2. A supervisor which returns an output vector y to every input vector x, according to a conditional distribution function P(y|x), also fixed but unknown. 3. A learning machine capable of implementing a set of functions f(x, w), w ∈ W. The problem of learning is that of choosing from the given set of functions the one which best approximates the supervisor's response. The selection is based on a training set of ℓ independent observations:

(x_1, y_1), ..., (x_ℓ, y_ℓ).   (1)

The formulation given above implies that learning corresponds to the problem of function approximation.

3 PROBLEM OF RISK MINIMIZATION

In order to choose the best available approximation to the supervisor's response, we measure the loss or discrepancy L(y, f(x, w)) between the response y of the supervisor to a given input x and the response f(x, w) provided by the learning machine. Consider the expected value of the loss, given by the risk functional

R(w) = ∫ L(y, f(x, w)) dP(x, y).   (2)

The goal is to minimize the risk functional R(w) over the class of functions f(x, w), w ∈ W. But the joint probability distribution P(x, y) = P(y|x)P(x) is unknown and the only available information is contained in the training set (1).

4 EMPIRICAL RISK MINIMIZATION

In order to solve this problem, the following induction principle is proposed: the risk functional R(w) is replaced by the empirical risk functional

E(w) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w))   (3)

constructed on the basis of the training set (1). The induction principle of empirical risk minimization (ERM) assumes that the function f(x, w*) which minimizes E(w) over the set w ∈ W results in a risk R(w*) which is close to its minimum. This induction principle is quite general; many classical methods such as least squares or maximum likelihood are realizations of the ERM principle.
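The empirical risk functional (3) can be written out in a few lines. The linear machine f(x, w) = w·x and the squared loss below are our own illustrative choices; the ERM principle itself is agnostic to both.

```python
import numpy as np

def empirical_risk(w, X, Y, loss):
    """E(w) = (1/l) * sum_i L(y_i, f(x_i, w)), here with the illustrative
    linear machine f(x, w) = w . x (not mandated by the ERM principle)."""
    preds = X @ w
    return np.mean([loss(y, p) for y, p in zip(Y, preds)])

sq_loss = lambda y, p: (y - p) ** 2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([1.0, 2.0, 3.0])
w_star = np.array([1.0, 2.0])   # fits this toy training set exactly
print(empirical_risk(w_star, X, Y, sq_loss))
```

ERM simply selects the w minimizing this quantity over the admissible set W; the sections that follow ask when that minimizer also has small actual risk.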
The evaluation of the soundness of the ERM principle requires answers to the following two questions: 1. Is the principle consistent? (Does R(w*) converge to its minimum value on the set w ∈ W as ℓ → ∞?) 2. How fast is the convergence as ℓ increases? The answers to these two questions have been shown (Vapnik et al., 1989) to be equivalent to the answers to the following two questions: 1. Does the empirical risk E(w) converge uniformly to the actual risk R(w) over the full set f(x, w), w ∈ W? Uniform convergence is defined as

Prob{ sup_{w ∈ W} |R(w) − E(w)| > ε } → 0 as ℓ → ∞.   (4)

2. What is the rate of convergence? It is important to stress that uniform convergence (4) for the full set of functions is a necessary and sufficient condition for the consistency of the ERM principle.

5 VC-DIMENSION OF THE SET OF FUNCTIONS

The theory of uniform convergence of empirical risk to actual risk developed in the 70's and 80's includes a description of necessary and sufficient conditions as well as bounds for the rate of convergence (Vapnik, 1982). These bounds, which are independent of the distribution function P(x, y), are based on a quantitative measure of the capacity of the set of functions implemented by the learning machine: the VC-dimension of the set. For simplicity, these bounds will be discussed here only for the case of binary pattern recognition, for which y ∈ {0, 1} and f(x, w), w ∈ W is the class of indicator functions. The loss function takes only two values: L(y, f(x, w)) = 0 if y = f(x, w) and L(y, f(x, w)) = 1 otherwise. In this case, the risk functional (2) is the probability of error, denoted by P(w). The empirical risk functional (3), denoted by ν(w), is the frequency of errors in the training set. The VC-dimension of a set of indicator functions is the maximum number h of vectors which can be shattered in all possible 2^h ways using functions in the set.
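The shattering condition in the definition above can be checked by brute force for tiny examples. The grid-sampled family of one-dimensional linear threshold rules below is our own construction for illustration; it is not from the paper.

```python
from itertools import product
import numpy as np

def shatters(points, family):
    """Brute-force check: can `family` realize all 2^h labelings of `points`?
    Feasible only for very small h; illustrative, not from the paper."""
    n = len(points)
    for labels in product([0, 1], repeat=n):
        if not any(all(f(p) == y for p, y in zip(points, labels))
                   for f in family):
            return False
    return True

# Linear decision rules on the line, f(x) = [a*x + b > 0], on a coarse grid.
family = [lambda x, a=a, b=b: int(a * x + b > 0)
          for a in np.linspace(-2, 2, 41) for b in np.linspace(-2, 2, 41)]
print(shatters([0.0], family))              # 1 point: shatterable
print(shatters([-1.0, 1.0], family))        # 2 points: shatterable
print(shatters([-1.0, 0.0, 1.0], family))   # 3 points: not shatterable
```

For this family the check reports that two points can be shattered but three cannot, in line with h = n + 1 for linear rules in n = 1 dimension.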
For instance, h = n + 1 for linear decision rules in n-dimensional space, since they can shatter at most n + 1 points.

6 RATES OF UNIFORM CONVERGENCE

The notion of VC-dimension provides a bound on the rate of uniform convergence. For a set of indicator functions with VC-dimension h, the following inequality holds:

Prob{ sup_{w ∈ W} |P(w) − ν(w)| > ε } < (2ℓe/h)^h exp{−ε^2 ℓ}.   (5)

It then follows that with probability 1 − η, simultaneously for all w ∈ W,

P(w) < ν(w) + C_0(ℓ/h, η),   (6)

with confidence interval

C_0(ℓ/h, η) = √((h(ln(2ℓ/h) + 1) − ln η)/ℓ).   (7)

This important result provides a bound on the actual risk P(w) for all w ∈ W, including the w* which minimizes the empirical risk ν(w). The deviation |P(w) − ν(w)| in (5) is expected to be maximal for P(w) close to 1/2, since it is this value of P(w) which maximizes the error variance σ(w) = √(P(w)(1 − P(w))). The worst-case bound for the confidence interval (7) is thus likely to be controlled by the worst decision rule. The bound (6) is achieved for the worst case P(w) = 1/2, but not for small P(w), which is the case of interest. A uniformly good approximation to P(w) follows from considering

Prob{ sup_{w ∈ W} (P(w) − ν(w))/σ(w) > ε }.   (8)

The variance of the relative deviation (P(w) − ν(w))/σ(w) is now independent of w. A bound for the probability (8), if available, would yield a uniformly good bound on the actual risks for all P(w). Such a bound has not yet been established. But for P(w) ≪ 1, the approximation σ(w) ≈ √P(w) holds, and the following inequality applies:

Prob{ sup_{w ∈ W} (P(w) − ν(w))/√P(w) > ε } < (2ℓe/h)^h exp{−ε^2 ℓ/4}.   (9)

It then follows that with probability 1 − η, simultaneously for all w ∈ W,

P(w) < ν(w) + C_1(ℓ/h, ν(w), η),   (10)

with confidence interval

C_1(ℓ/h, ν(w), η) = 2 ((h(ln(2ℓ/h) + 1) − ln η)/ℓ) (1 + √(1 + ν(w)ℓ/(h(ln(2ℓ/h) + 1) − ln η)))
(11)

Note that the confidence interval now depends on ν(w), and that for ν(w) = 0 it reduces to C_1(ℓ/h, 0, η) = 4C_0^2(ℓ/h, η), which provides a more precise bound for real-case learning.

7 STRUCTURAL RISK MINIMIZATION

The method of ERM can be theoretically justified by considering the inequalities (6) or (10). When ℓ/h is large, the confidence intervals C_0 or C_1 become small and can be neglected. The actual risk is then bounded by the empirical risk alone, and the probability of error on the test set can be expected to be small when the frequency of errors in the training set is small. However, if ℓ/h is small, the confidence interval cannot be neglected, and even ν(w) = 0 does not guarantee a small probability of error. In this case the minimization of P(w) requires a new principle, based on the simultaneous minimization of ν(w) and the confidence interval. It is then necessary to control the VC-dimension of the learning machine. To do this, we introduce a nested structure of subsets S_p = {f(x, w), w ∈ W_p}, such that S_1 ⊂ S_2 ⊂ ... ⊂ S_n. The corresponding VC-dimensions of the subsets satisfy h_1 < h_2 < ... < h_n. The principle of structural risk minimization (SRM) requires a two-step process: the empirical risk has to be minimized for each element of the structure, and the optimal element S* is then selected to minimize the guaranteed risk, defined as the sum of the empirical risk and the confidence interval. This process involves a trade-off: as h increases, the minimum empirical risk decreases, but the confidence interval increases.

8 EXAMPLES OF STRUCTURES FOR NEURAL NETS

The general principle of SRM can be implemented in many different ways. Here we consider three different examples of structures built for the set of functions implemented by a neural network. 1. Structure given by the architecture of the neural network.
Consider an ensemble of fully connected neural networks in which the number of units in one of the hidden layers is monotonically increased. The sets of implementable functions make a structure as the number of hidden units is increased. 2. Structure given by the learning procedure. Consider the set of functions S = {f(x, w), w ∈ W} implementable by a neural net of fixed architecture. The parameters {w} are the weights of the neural network. A structure is introduced through S_p = {f(x, w), ||w|| ≤ C_p} and C_1 < C_2 < ... < C_n. For a convex loss function, the minimization of the empirical risk within the element S_p of the structure is achieved through the minimization of

E(w, γ_p) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w)) + γ_p ||w||^2

with appropriately chosen Lagrange multipliers γ_1 > γ_2 > ... > γ_n. The well-known "weight decay" procedure refers to the minimization of this functional. 3. Structure given by preprocessing. Consider a neural net with fixed architecture. The input representation is modified by a transformation z = K(x, β), where the parameter β controls the degree of degeneracy introduced by this transformation (for instance β could be the width of a smoothing kernel). A structure is introduced in the set of functions S = {f(K(x, β), w), w ∈ W} through β ≥ c_p, with c_1 > c_2 > ... > c_n.

9 PROBLEM OF LOCAL FUNCTION ESTIMATION

The problem of learning has been formulated as the problem of selecting, from the class of functions f(x, w), w ∈ W, that which provides the best available approximation to the response of the supervisor. Such a statement of the learning problem implies that a unique function f(x, w*) will be used for prediction over the full input space X. This is not necessarily a good strategy: the set f(x, w), w ∈ W might not contain a good predictor for the full input space, but might contain functions capable of good prediction on specified regions of input space.
In order to formulate the learning problem as a problem of local function approximation, consider a kernel K(x − x_0, b) ≥ 0 which selects a region of input space of width b, centered at x_0. For example, consider the rectangular kernel

K_r(x − x_0, b) = 1 if |x − x_0| < b, and 0 otherwise,

and a more general continuous kernel, such as the Gaussian

K_g(x − x_0, b) = exp{−(x − x_0)^2 / b^2}.

The goal is to minimize the local risk functional

R(w, b, x_0) = ∫ L(y, f(x, w)) (K(x − x_0, b)/K(x_0, b)) dP(x, y).   (12)

The normalization is defined by

K(x_0, b) = ∫ K(x − x_0, b) dP(x).   (13)

The local risk functional (12) is to be minimized over the class of functions f(x, w), w ∈ W, and over all possible neighborhoods b ∈ (0, ∞) centered at x_0. As before, the joint probability distribution P(x, y) is unknown, and the only available information is contained in the training set (1).

10 EMPIRICAL RISK MINIMIZATION FOR LOCAL ESTIMATION

In order to solve this problem, the following induction principle is proposed: for fixed b, the local risk functional (12) is replaced by the empirical risk functional

E(w, b, x_0) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w)) (K(x_i − x_0, b)/K(x_0, b)),   (14)

constructed on the basis of the training set. The empirical risk functional (14) is to be minimized over w ∈ W. In the simplest case, the class of functions is that of constant functions, f(x, w) = C(w). Consider the following examples: 1. K-Nearest Neighbors Method: For the case of binary pattern recognition, the class of constant indicator functions contains only two functions: either f(x, w) = 0 for all x, or f(x, w) = 1 for all x. The minimization of the empirical risk functional (14) with the rectangular kernel K_r(x − x_0, b) leads to the K-nearest neighbors algorithm. 2. Watson-Nadaraya Method: For the case y ∈ R, the class of constant functions contains an infinite number of elements, f(x, w) = C(w), C(w) ∈ R.
The minimization of the empirical risk functional (14) for a general kernel and a quadratic loss function L(y, f(x, w)) = (y − f(x, w))^2 leads to the estimator

f(x_0) = Σ_{i=1}^{ℓ} y_i K(x_i − x_0, b) / Σ_{j=1}^{ℓ} K(x_j − x_0, b),

which defines the Watson-Nadaraya algorithm. These classical methods minimize (14) with a fixed b over the class of constant functions. The supervisor's response in the vicinity of x_0 is thus approximated by a constant, and the characteristic size b of the neighborhood is kept fixed, independent of x_0. A truly local algorithm would adjust the parameter b to the characteristics of the region of input space centered at x_0. Further improvement is possible by allowing for a richer class of predictor functions f(x, w) within the selected neighborhood. The SRM principle for local estimation provides a tool for incorporating these two features.

11 STRUCTURAL RISK MINIMIZATION FOR LOCAL ESTIMATION

The arguments that lead to the inequality (6) for the risk functional (2) can be extended to the local risk functional (12), to obtain the following result: with probability 1 − η, simultaneously for all w ∈ W and all b ∈ (0, ∞),

R(w, b, x_0) < E(w, b, x_0) + C_2(ℓ/h, b, η).   (15)

The confidence interval C_2(ℓ/h, b, η) reduces to C_0(ℓ/h, η) in the b → ∞ limit. As before, a nested structure is introduced in the class of functions, and the empirical risk (14) is minimized with respect to both w ∈ W and b ∈ (0, ∞) for each element of the structure. The optimal element is then selected to minimize the guaranteed risk, defined as the sum of the empirical risk and the confidence interval. For fixed b this process involves the already discussed trade-off: as h increases, the empirical risk decreases but the confidence interval increases. A new trade-off appears by varying b at fixed h: as b increases, the empirical risk increases, but the confidence interval decreases. The use of b as an additional free parameter allows us to find deeper minima of the guaranteed risk.
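The Watson-Nadaraya estimator above can be written directly. The Gaussian kernel matches the one given in the text; the data and bandwidth are our own illustrative choices.

```python
import numpy as np

def nadaraya_watson(x0, X, Y, b):
    """Watson-Nadaraya estimate, as in the text:
    f(x0) = sum_i y_i K(x_i - x0, b) / sum_j K(x_j - x0, b),
    with the Gaussian kernel K(x - x0, b) = exp(-(x - x0)^2 / b^2)."""
    k = np.exp(-((X - x0) ** 2) / b ** 2)
    return np.sum(Y * k) / np.sum(k)

X = np.linspace(0.0, 1.0, 101)
Y = X ** 2                              # noise-free illustrative target
print(nadaraya_watson(0.5, X, Y, b=0.05))
```

As the text notes, the bandwidth b is fixed here; a truly local method would adapt b to the region around each x_0.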
12 APPLICATION TO ZIP-CODE RECOGNITION

We now discuss results for the recognition of the handwritten and printed digits in the US Postal database, containing 9709 training examples and 2007 testing examples. Human recognition of this task results in approximately 2.5% prediction error (Sackinger et al., 1991). The learning machine considered here is a five-layer neural network with shared weights and limited receptive fields. When trained with a back-propagation algorithm for the minimization of the empirical risk, the network achieves 5.1% prediction error (Le Cun et al., 1990). Further performance improvement with the same network architecture has required the introduction of a new induction principle. Methods based on SRM have achieved prediction errors of 4.1% (training based on a double-back-propagation algorithm which incorporates a special form of weight decay (Drucker, 1991)) and 3.95% (using a smoothing transformation in input space (Simard, 1991)). The best result achieved so far, of 3.3% prediction error, is based on the use of SRM for local estimation of the predictor function (Bottou, 1991). It is obvious from these results that dramatic gains cannot be achieved through minor algorithmic modifications, but require the introduction of new principles.

Acknowledgements

I thank the members of the Neural Networks research group at Bell Labs, Holmdel, for supportive and useful discussions. Sara Solla, Leon Bottou, and Larry Jackel provided invaluable help in rendering my presentation more clear and accessible to the neural networks community.

References

V. N. Vapnik (1982), Estimation of Dependences Based on Empirical Data, Springer-Verlag (New York).

V. N. Vapnik and A. Ja.
Chervonenkis (1989), 'Necessary and sufficient conditions for consistency of the method of empirical risk minimization' [in Russian], Yearbook of the Academy of Sciences of the USSR on Recognition, Classification, and Forecasting, 2, 217-249, Nauka (Moscow) (English translation in preparation).

E. Sackinger and J. Bromley (1991), private communication.

Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel (1990), 'Handwritten digit recognition with a back-propagation network', Neural Information Processing Systems 2, 396-404, ed. by D. S. Touretzky, Morgan Kaufmann (California).

H. Drucker (1991), private communication.

P. Simard (1991), private communication.

L. Bottou (1991), private communication.
1991
Networks with Learned Unit Response Functions

John Moody and Norman Yarvin
Yale Computer Science, 51 Prospect St., P.O. Box 2158 Yale Station, New Haven, CT 06520-2158

Abstract

Feedforward networks composed of units which compute a sigmoidal function of a weighted sum of their inputs have been much investigated. We tested the approximation and estimation capabilities of networks using functions more complex than sigmoids. Three classes of functions were tested: polynomials, rational functions, and flexible Fourier series. Unlike sigmoids, these classes can fit non-monotonic functions. They were compared on three problems: prediction of Boston housing prices, the sunspot count, and robot arm inverse dynamics. The complex units attained clearly superior performance on the robot arm problem, which is a highly non-monotonic, pure approximation problem. On the noisy and only mildly nonlinear Boston housing and sunspot problems, differences among the complex units were revealed; polynomials did poorly, whereas rationals and flexible Fourier series were comparable to sigmoids.

1 Introduction

A commonly studied neural architecture is the feedforward network in which each unit of the network computes a nonlinear function g(x) of a weighted sum of its inputs x = w^T u. Generally this function is a sigmoid, such as g(x) = tanh x or g(x) = 1/(1 + e^(x−θ)). To these we compared units of a substantially different type: they also compute a nonlinear function of a weighted sum of their inputs, but the unit response function is able to fit a much higher degree of nonlinearity than can a sigmoid. The nonlinearities we considered were polynomials, rational functions (ratios of polynomials), and flexible Fourier series (sums of cosines). Our comparisons were done in the context of two-layer networks consisting of one hidden layer of complex units and an output layer of a single linear unit.
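The two-layer architecture just described (hidden units computing g(w^T u), plus a single linear output unit) can be sketched generically, with the unit response function left pluggable. The function names, weights, and the toy cosine response below are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def two_layer_net(u, W, response, out_weights, out_bias):
    """Two-layer network: each hidden unit computes g(w_k . u) for a
    pluggable response function g; the output unit is linear."""
    projections = W @ u                  # one weighted sum per hidden unit
    hidden = np.array([response(x) for x in projections])
    return out_weights @ hidden + out_bias

sigmoid = lambda x: 1.0 / (1.0 + np.exp(x))  # logistic form used in the text (theta = 0)
cosine2 = lambda x: np.cos(x) + 0.5 * np.cos(2 * x + 1.0)  # toy 2-term cosine unit

u = np.array([0.2, -0.4, 0.1])
W = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])  # two hidden units
y = two_layer_net(u, W, cosine2, np.array([0.5, -0.5]), 0.1)
print(float(y))
```

Swapping `response` between sigmoid, polynomial, rational, and cosine functions is exactly the comparison the paper carries out.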
This network architecture is similar to that built by projection pursuit regression (PPR) [1, 2], another technique for function approximation. The one difference is that in PPR the nonlinear function of the units of the hidden layer is a nonparametric smooth. This nonparametric smooth has two disadvantages for neural modeling: it has many parameters, and, as a smooth, it is easily trained only if desired output values are available for that particular unit. The latter property makes the use of smooths in multilayer networks inconvenient. If a parametrized function of a type suitable for one-dimensional function approximation is used instead of the nonparametric smooth, then these disadvantages do not apply. The functions we used are all suitable for one-dimensional function approximation.

2 Representation

A few details of the representation of the unit response functions are worth noting.

Polynomials: Each polynomial unit computed the function g(x) = a_1 x + a_2 x^2 + ... + a_n x^n, with x = w^T u being the weighted sum of the inputs. A zeroth-order term was not included in the above formula, since it would have been redundant among all the units. The zeroth-order term was dealt with separately and only stored in one location.

Rationals: A rational function representation was adopted which could not have zeros in the denominator. This representation used a sum of squares of polynomials in the denominator, as follows:

g(x) = (a_0 + a_1 x + ... + a_n x^n) / (1 + (b_0 + b_1 x)^2 + (b_2 x + b_3 x^2)^2 + (b_4 x + b_5 x^2 + b_6 x^3 + b_7 x^4)^2 + ...)

This representation has the qualities that the denominator is never less than 1, and that n parameters are used to produce a denominator of degree n. If the above formula were continued, the next terms in the denominator would be of degrees eight, sixteen, and thirty-two.
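The rational representation above can be sketched directly. The grouping of the b-coefficients (sizes 2, 2, 4, 8, ..., the first group starting at degree 0 and the later groups at degree 1) is our own reading of the formula; the paper shows only the first three groups.

```python
def rational_unit(x, a, b):
    """Illustrative sketch of the rational response function:
    g(x) = (a0 + a1 x + ...) / (1 + (b0 + b1 x)^2 + (b2 x + b3 x^2)^2
           + (b4 x + ... + b7 x^4)^2 + ...).
    The squared polynomials keep the denominator >= 1, so it has no zeros."""
    num = sum(ai * x ** i for i, ai in enumerate(a))
    den = 1.0
    # Group layout (our assumption): (start_degree, group_size) pairs.
    groups = [(0, 2), (1, 2), (1, 4), (1, 8)]
    idx = 0
    for start_deg, size in groups:
        coeffs = b[idx:idx + size]
        if not coeffs:
            break
        p = sum(c * x ** (start_deg + j) for j, c in enumerate(coeffs))
        den += p ** 2
        idx += size
    return num / den

# With all b's zero the unit reduces to a plain polynomial:
print(rational_unit(2.0, a=[1.0, 3.0], b=[0.0] * 4))   # (1 + 3*2)/1 = 7.0
```

Because the denominator is a sum of 1 and squares, the unit is defined for every x, which is the whole point of this parametrization.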
This powers-of-two sequence was used for the following reason: of the 2(n − m) terms in the square of a polynomial p = a_m x^m + ... + a_n x^n, it is possible by manipulating a_m ... a_n to determine the n − m highest coefficients, with the exception that the very highest coefficient must be non-negative. Thus if we consider the coefficients of the polynomial that results from squaring and adding together the terms of the denominator of the above formula, the highest degree squared polynomial may be regarded as determining the highest half of the coefficients, the second highest degree polynomial may be regarded as determining the highest half of the rest of the coefficients, and so forth. This process cannot set all the coefficients arbitrarily; some must be non-negative.

Flexible Fourier series: The flexible Fourier series units computed

g(x) = Σ_{i=0}^{n} a_i cos(b_i x + c_i)

where the amplitudes a_i, frequencies b_i, and phases c_i were unconstrained and could assume any value.

Sigmoids: We used the standard logistic function: g(x) = 1/(1 + e^(x−θ)).

3 Training Method

All the results presented here were trained with the Levenberg-Marquardt modification of the Gauss-Newton nonlinear least squares algorithm. Stochastic gradient descent was also tried at first, but on the problems where the two were compared, Levenberg-Marquardt was much superior both in convergence time and in quality of result. Levenberg-Marquardt required substantially fewer iterations than stochastic gradient descent to converge. However, it needs O(p^2) space and O(p^2 n) time per iteration in a network with p parameters and n input examples, as compared to O(p) space and O(pn) time per epoch for stochastic gradient descent. Further details of the training method will be discussed in a longer paper. With some data sets, a weight decay term was added to the energy function to be optimized. The added term was of the form λ Σ_{i=1}^{p} w_i^2.
When weight decay was used, a range of values of λ was tried for every network trained. Before training, all the data was normalized: each input variable was scaled so that its range was (−1, 1), then scaled so that the maximum sum of squares of input variables for any example was 1. The output variable was scaled to have mean zero and mean absolute value 1. This helped the training algorithm, especially in the case of stochastic gradient descent.

4 Results

We present results of training our networks on three data sets: robot arm inverse dynamics, Boston housing data, and sunspot count prediction. The Boston and sunspot data sets are noisy, but have only mild nonlinearity. The robot arm inverse dynamics data has no noise, but a high degree of nonlinearity. Noise-free problems have low estimation error. Models for linear or mildly nonlinear problems typically have low approximation error. The robot arm inverse dynamics problem is thus a pure approximation problem, while performance on the noisy Boston and sunspot problems is limited more by estimation error than by approximation error. Figure 1a is a graph, as those used in PPR, of the unit response function of a one-unit network trained on the Boston housing data. The x axis is a projection (a weighted sum of inputs w^T u) of the 13-dimensional input space onto 1 dimension, using those weights chosen by the unit in training. The y axis is the fit to the data. The response function of the unit is a sum of three cosines. Figure 1b is the superposition of five graphs of the five unit response functions used in a five-unit rational function solution (RMS error less than 2%) of the robot arm inverse dynamics problem. The domain for each curve lies along a different direction in the six-dimensional input space. Four of the five fits along the projection directions are non-monotonic, and thus can be fit only poorly by a sigmoid. Two different error measures are used in the following.
The first is the RMS error, normalized so that error of 1 corresponds to no training. The second measure is the ~ .; 2 o ~ o ! .. c o -2 -2.0 Figure 1: a . . " . . ' . . ' Networks with Learned Unit Response Functions 1051 Robot arm fit to data 40 20 o -zo -40 1.0 -4 b square of the normalized RMS error, otherwise known as the fraction of explained varIance. We used whichever error measure was used in earlier work on that data set. 4.1 Robot arm inverse dynamics This problem is the determination of the torque necessary at the joints of a twojoint robot arm required to achieve a given acceleration of each segment of the arm, given each segment's velocity and position. There are six input variables to the network, and two output variables. This problem was treated as two separate estimation problems, one for the shoulder torque and one for the elbow torque. The shoulder torque was a slightly more difficult problem, for almost all networks. The 1000 points in the training set covered the input space relatively thoroughly. This, together with the fact that the problem had no noise, meant that there was little difference between training set error and test set error. Polynomial networks of limited degree are not universal approximators, and that is quite evident on this data set; polynomial networks of low degree reached their minimum error after a few units. Figure 2a shows this. If polynomial, cosine, rational, and sigmoid networks are compared as in Figure 2b, leaving out low degree polynomials, the sigmoids have relatively high approximation error even for networks with 20 units. As shown in the following table, the complex units have more parameters each, but still get better performance with fewer parameters total. Type Units Parameters Error degree 7 polynomial 5 65 .024 degree 6 rational 5 95 .027 2 term cosine 6 73 .020 sigmoid 10 81 .139 sigmoid 20 161 .119 Since the training set is noise-free, these errors represent pure approximation error. 
Figure 2: Error vs. number of units on the robot arm problem: (a) including low-degree polynomial networks, which reach their minimum error after a few units; (b) comparison of polynomial (degree 7), rational (degree 8), cosine (3 and 4 term), and sigmoid networks.

The superior performance of the complex units on this problem is probably due to their ability to approximate non-monotonic functions.

4.2 Boston housing

The second data set is a benchmark for statistical algorithms: the prediction of Boston housing prices from 13 factors [3]. This data set contains 506 exemplars and is relatively simple; it can be approximated well with only a single unit. Networks of between one and six units were trained on this problem. Figure 3a is a graph of training set performance from networks trained on the entire data set; the error measure used was the fraction of explained variance. From this graph it is apparent that training set performance does not vary greatly between different types of units, though networks with more units do better. On the test set there is a large difference. This is shown in Figure 3b.

Figure 3: (a) Training set error and (b) test set error on the Boston housing data, for cosine, sigmoid, polynomial, and rational networks.

Each point on the graph is the average performance of ten networks of that type. Each network was trained using a different permutation of the data into test and training sets, the test set being 1/3 of the examples and the training set 2/3. It can be seen that the cosine nets perform the best, the sigmoid nets a close second, the rationals third, and the polynomials worst (with the error increasing quite a bit with increasing polynomial degree). It should be noted that the distribution of errors is far from a normal distribution, and that the training set error gives little clue as to the test set error.
The following table of errors, for nine networks of four units using a degree 5 polynomial, is somewhat typical:

Set       Error
training  0.091
test      0.395

Our speculation on the cause of these extremely high errors is that polynomial approximations do not extrapolate well; if the prediction of some data point results in a polynomial being evaluated slightly outside the region on which the polynomial was trained, the error may be extremely high. Rational functions where the numerator and denominator have equal degree have less of a problem with this, since asymptotically they are constant. However, over small intervals they can have the extrapolation characteristics of polynomials. Cosines are bounded, and so, though they may not extrapolate well if the function is not somewhat periodic, at least do not reach large values like polynomials.

4.3 Sunspots

The third problem was the prediction of the average monthly sunspot count in a given year from the values of the previous twelve years. We followed previous work in using as our error measure the fraction of variance explained, and in using as the training set the years 1700 through 1920 and as the test set the years 1921 through 1955. This was a relatively easy test set: every network of one unit which we trained (whether sigmoid, polynomial, rational, or cosine) had, in each of ten runs, a training set error between .147 and .153 and a test set error between .105 and .111. For comparison, the best test set error achieved by us or previous testers was about .085. A similar set of runs to those for the Boston housing data was done, but using at most four units; similar results were obtained. Figure 4a shows training set error and Figure 4b shows test set error on this problem.

4.4 Weight Decay

The performance of almost all networks was improved by some amount of weight decay.
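Weight decay as used here adds a penalty λ‖w‖² to the squared error being minimized; the sketch below (a generic linear-model illustration in our own notation, not the authors' training code) shows the resulting gradient step:

```python
import numpy as np

def sgd_step(w, x, y, lam, lr=0.01):
    # Gradient step on (w.x - y)^2 + lam * |w|^2; the lam term
    # shrinks the weights toward zero on every step (weight decay).
    err = w @ x - y
    return w - lr * (2 * err * x + 2 * lam * w)

# With decay, the weight fitting a constant target settles below the
# least-squares value 1.0, at 1/(1 + lam).
w = np.zeros(2)
for _ in range(2000):
    w = sgd_step(w, np.array([1.0, 0.0]), 1.0, lam=0.1)
print(w[0])  # ≈ 0.909
```

Larger λ pulls the solution harder toward zero, which is why, as noted above, very strong decay drives a multi-unit network toward the one-unit solution.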
Figure 4: (a) Training set error and (b) test set error on the sunspot problem.

Figure 5 contains graphs of test set error for sigmoidal and polynomial units, using various values of the weight decay parameter λ. For the sigmoids, very little weight decay seems to be needed to give good results, and there is an order of magnitude range (between .001 and .01) which produces close to optimal results. For polynomials of degree 5, more weight decay seems to be necessary for good results; in fact, the highest value of weight decay is the best. Since very high values of weight decay are needed, and at those values there is little improvement over using a single unit, it may be supposed that using those values of weight decay restricts the multiple units to producing a very similar solution to the one-unit solution. Figure 6 contains the corresponding graphs for sunspots. Weight decay seems to help less here for the sigmoids, but for the polynomials, moderate amounts of weight decay produce an improvement over the one-unit solution.

Acknowledgements

The authors would like to acknowledge support from ONR grant N00014-89-J-1228, AFOSR grant 89-0478, and a fellowship from the John and Fannie Hertz Foundation. The robot arm data set was provided by Chris Atkeson.

References

[1] J. H. Friedman, W. Stuetzle, "Projection Pursuit Regression", Journal of the American Statistical Association, December 1981, Volume 76, Number 376, 817-823.
[2] P. J. Huber, "Projection Pursuit", The Annals of Statistics, 1985, Vol. 13, No. 2, 435-475.
[3] L.
Breiman et al., Classification and Regression Trees, Wadsworth and Brooks, 1984, pp. 217-220.

Figure 5: Boston housing test error with various amounts of weight decay (λ = 0, .0001, .001, .01, .1, .3).

Figure 6: Sunspot test error with various amounts of weight decay.

Perturbing Hebbian Rules

Peter Dayan, CNL, The Salk Institute, PO Box 85800, San Diego CA 92186-5800, USA (dayan@helmholtz.sdsc.edu)
Geoffrey Goodhill, COGS, University of Sussex, Falmer, Brighton BN1 9QN, UK (geoffg@cogs.susx.ac.uk)

Abstract

Recently Linsker [2] and MacKay and Miller [3,4] have analysed Hebbian correlational rules for synaptic development in the visual system, and Miller [5,8] has studied such rules in the case of two populations of fibres (particularly two eyes). Miller's analysis has so far assumed that each of the two populations has exactly the same correlational structure. Relaxing this constraint by considering the effects of small perturbative correlations within and between eyes permits study of the stability of the solutions. We predict circumstances in which qualitative changes are seen, including the production of binocularly rather than monocularly driven units.

1 INTRODUCTION

Linsker [2] studied how a Hebbian correlational rule could predict the development of certain receptive field structures seen in the visual system. MacKay and Miller [3,4] pointed out that the form of this learning rule meant that it could be analysed in terms of the eigenvectors of the matrix of time-averaged presynaptic correlations.
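As an illustration of this point (a toy sketch with a made-up 2x2 correlation matrix, not a model from the papers cited), a linear Hebbian rule amplifies the eigenvector of the correlation matrix with the largest eigenvalue:

```python
import numpy as np

# For the linear Hebbian rule dw/dt = Q w, the weight vector is pulled toward
# the eigenvector of the correlation matrix Q with the largest eigenvalue.
rng = np.random.default_rng(0)

# Hypothetical symmetric presynaptic correlation matrix.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

w = rng.standard_normal(2)
dt = 0.01
for _ in range(2000):
    w = w + dt * Q @ w
    w = w / np.linalg.norm(w)   # normalisation, as in constrained Hebbian models

vals, vecs = np.linalg.eigh(Q)
principal = vecs[:, np.argmax(vals)]
print(abs(principal @ w))       # close to 1: w has aligned with the top eigenvector
```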
Miller [5,8,7] independently studied a similar correlational rule for the case of two eyes (or more generally two populations), explaining how cells develop in V1 that are ultimately responsive to only one eye, despite starting off as responsive to both. This process is again driven by the eigenvectors and eigenvalues of the developmental equation, and Miller [7] relates Linsker's model to the two-population case. Miller's analysis so far assumes that the correlations of activity within each population are identical. This special case simplifies the analysis, enabling the projections from the two eyes to be separated out into sum and difference variables. In general, one would expect the correlations to differ slightly, and for correlations between the eyes to be not exactly zero. We analyse how such perturbations affect the eigenvectors and eigenvalues of the developmental equation, and are able to explain some of the results found empirically by Miller [6]. Further details on this analysis and on the relationship between Hebbian and non-Hebbian models of the development of ocular dominance and orientation selectivity can be found in Goodhill [1].

2 THE EQUATION

MacKay and Miller [3,4] study Linsker's [2] developmental equation in the form:

    ẇ = (Q + k_2 J) w + k_1 n

where w = [w_i], i in [1, n], are the weights from the units in one layer R to a particular unit in the next layer S, Q is the covariance matrix of the activities of the units in layer R, J is the matrix J_ij = 1 for all i, j, and n is the 'DC' vector n_i = 1 for all i. The equivalent for two populations of cells is:

    (ẇ_1, ẇ_2)^T = [ Q_1 + k_2 J   Q_c + k_2 J ] (w_1, w_2)^T + k_1 (n, n)^T
                   [ Q_c + k_2 J   Q_2 + k_2 J ]

where Q_1 gives the covariance between cells within the first population, Q_2 gives that between cells within the second, and Q_c (assumed symmetric) gives the covariance between cells in the two populations. Define Q* as this full, two-population development matrix.
Miller studies the case in which Q_1 = Q_2 = Q and Q_c is generally zero or slightly negative. Then the development of w_1 - w_2 (which Miller calls S^D) and w_1 + w_2 (S^S) separates; for Q_c = 0, these go like:

    Ṡ^D = Q S^D   and   Ṡ^S = (Q + 2 k_2 J) S^S + 2 k_1 n

and, up to various forms of normalisation and/or weight saturation, the patterns of dominance between the two populations are determined by the initial value and the fastest growing components of S^D. If upper and lower weight saturation limits are reached at roughly the same time (Berns, personal communication), the conventional assumption that the fastest growing eigenvectors of S^D dominate the terminal state is borne out. The starting condition Miller adopts has w_1 - w_2 = ε′a and w_1 + w_2 = b, where ε′ is small, and a and b are O(1). Weights are constrained to be positive, and saturate at some upper limit. Also, additive normalisation is applied throughout development, which affects the growth of the S^S (but not the S^D) modes. As discussed by MacKay and Miller [3,4], this is approximately accommodated in the k_2 J component.

MacKay and Miller analyse the eigendecomposition of Q + k_2 J for general and radially symmetric covariance matrices Q and all values of k_2. It turns out that the eigendecomposition of Q* for the case Q_1 = Q_2 = Q and Q_c = 0 (that studied by Miller) is given in table form by:

E-vector        E-value   Conditions
(x_i,  x_i)     λ_i       Q x_i = λ_i x_i,   n·x_i = 0
(x_i, -x_i)     λ_i       Q x_i = λ_i x_i,   n·x_i = 0
(y_i, -y_i)     μ_i       Q y_i = μ_i y_i,   n·y_i ≠ 0
(z_i,  z_i)     ν_i       (Q + 2 k_2 J) z_i = ν_i z_i,   n·z_i ≠ 0

Figure 1 shows the matrix and the two key (y, -y) and (x, -x) eigenvectors. The details of the decomposition of Q* in this table are slightly obscured by degeneracy in the eigendecomposition of Q + k_2 J. Also, for clarity, we write (x_i, x_i) for (x_i, x_i)^T. A consequence of the first two rows in the table is that (α x_i, β x_i) is an eigenvector for any α and β; this becomes important later.
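This decomposition can be checked numerically (a sketch using an arbitrary symmetric 3x3 matrix Q of our choosing, not Miller's correlation functions): for Q_1 = Q_2 = Q and Q_c = 0, the difference modes (u, -u) grow with the eigenvalues of Q and the sum modes (u, u) with those of Q + 2 k_2 J.

```python
import numpy as np

n, k2 = 3, -0.2
Q = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.4, 1.0]])
J = np.ones((n, n))
# Full two-population development matrix with Q1 = Q2 = Q and Qc = 0.
Qstar = np.block([[Q + k2 * J, k2 * J],
                  [k2 * J,     Q + k2 * J]])

# Difference modes (u, -u): eigenvalues of Q itself.
mu, Y = np.linalg.eigh(Q)
for i in range(n):
    v = np.concatenate([Y[:, i], -Y[:, i]])
    assert np.allclose(Qstar @ v, mu[i] * v)

# Sum modes (u, u): eigenvalues of Q + 2 k2 J.
nu, Z = np.linalg.eigh(Q + 2 * k2 * J)
for i in range(n):
    v = np.concatenate([Z[:, i], Z[:, i]])
    assert np.allclose(Qstar @ v, nu[i] * v)
```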
That the development of S^D and S^S separates can be seen in the (u, u) and (u, -u) forms of the eigenvectors. In Miller's terms the onset of dominance of one of the two populations is seen in the (u, -u) eigenvectors: dominance requires that μ_j for the eigenvector whose elements are all of the same sign (one such exists for Miller's Q) is larger than the μ_i and the λ_i for all the other such eigenvectors. In particular, on pages 296-300 of [6], he shows various cases for which this does and one in which it does not happen. To understand how this comes about, we can treat the latter as a perturbed version of the former.

3 PERTURBATIONS

Consider the case in which there are small correlations between the projections and/or small differences between the correlations within each projection. For instance, one of Miller's examples indicates that small within-eye anti-correlations can prevent the onset of dominance. This can be perturbatively analysed by setting Q_1 = Q + εE_1, Q_2 = Q + εE_2 and Q_c = εE_c. Call the resulting matrix Q*_ε. Two questions are relevant. Firstly, are the eigenvectors stable to this perturbation, i.e. are there vectors a_1 and a_2 such that (u_1 + εa_1, u_2 + εa_2) is an eigenvector of Q*_ε if (u_1, u_2) is an eigenvector of Q* with eigenvalue φ? Secondly, how do the eigenvalues change? One way to calculate this is to consider the equation the perturbed eigenvector must satisfy:¹

    Q*_ε (u_1 + εa_1, u_2 + εa_2)^T = (φ + εψ)(u_1 + εa_1, u_2 + εa_2)^T

and look for conditions on u_1 and u_2 and the values of a_1, a_2 and ψ by equating the O(ε) terms. We now consider a specific example. Using the notation of the table above, (y_i + εa_1, -y_i + εa_2) is an eigenvector with eigenvalue μ_i + εψ_i if

    (Q - μ_i I) a_1 + k_2 J (a_1 + a_2) = -(E_1 - E_c - ψ_i I) y_i, and
    (Q - μ_i I) a_2 + k_2 J (a_1 + a_2) = -(E_c - E_2 + ψ_i I) y_i.

Subtracting these two implies that

    (Q - μ_i I)(a_1 - a_2) = -(E_1 - 2E_c + E_2 - 2ψ_i I) y_i.

¹This is a standard method for such linear systems, e.g. in quantum mechanics.
However, y_i^T (Q - μ_i I) = 0, since Q is symmetric and y_i is an eigenvector with eigenvalue μ_i, so multiplying on the left by y_i^T, we require that

    2 ψ_i y_i^T y_i = y_i^T (E_1 - 2E_c + E_2) y_i

which sets the value of ψ_i. Therefore (y_i, -y_i) is stable in the required manner. Similarly (z_i, z_i) is stable too, with an equivalent perturbation to its eigenvalue. However the pair (x_i, x_i) and (x_i, -x_i) are not stable: the degeneracy from their having the same eigenvalue is broken, and two specific eigenvectors, (α_i x_i, β_i x_i) and (-β_i x_i, α_i x_i), are stable, for particular values α_i and β_i. This means that to first order, S^D and S^S no longer separate, and the full, two-population matrix must be solved.

To model Miller's results, call Q*_{ε,m} the special case of Q*_ε for which E_1 = E_2 = E and E_c = 0. Also, assume that the x_i, y_i and z_i are normalised, let e_1(u) = u^T E_1 u etc., and define γ(u) = (e_1(u) - e_2(u))/2e_c(u), for e_c(u) ≠ 0, and γ_i = γ(x_i). Then we have (1), and the eigenvalues are:

E-vector             Q*    Q*_{ε,m}        Q*_ε
(α_i x_i, β_i x_i)   λ_i   λ_i + ε e(x_i)  λ_i + ε[e_1(x_i) + e_2(x_i) + Ξ_i]/2
(-β_i x_i, α_i x_i)  λ_i   λ_i + ε e(x_i)  λ_i + ε[e_1(x_i) + e_2(x_i) - Ξ_i]/2
(y_i, -y_i)          μ_i   μ_i + ε e(y_i)  μ_i + ε[e_1(y_i) + e_2(y_i) - 2e_c(y_i)]/2
(z_i, z_i)           ν_i   ν_i + ε e(z_i)  ν_i + ε[e_1(z_i) + e_2(z_i) + 2e_c(z_i)]/2

where Ξ_i = √([e_1(x_i) - e_2(x_i)]² + 4 e_c(x_i)²). For the case Miller treats, since E_1 = E_2, the degeneracy in the original solution is preserved, i.e. the perturbed versions of (x_i, x_i) and (x_i, -x_i) have the same eigenvalues. Therefore the S^D and S^S modes still separate. This perturbed eigendecomposition suffices to show how small additional correlations affect the solutions. We will give three examples. The case mentioned above on page 299 of [6] shows how small same-eye anti-correlations within the radius of the arbor function cause a particular (y_i, -y_i) eigenvector (i.e.
one for which all the components of y_i have the same sign) to change from growing faster than an (x_i, -x_i) eigenvector (for which some components of x_i are positive and some negative, to ensure that n·x_i = 0) to growing slower than it, converting a monocular solution to a binocular one. In our terms, this is the Q*_{ε,m} case, with E_1 a negative matrix. Given the conditions on the signs of their components, e_1(y_i) is more negative than e_1(x_i), and so the eigenvalue for the perturbed (y_i, -y_i) would be expected to decrease more than that for the perturbed (x_i, -x_i). This is exactly what is found. Different binocular eigensolutions are affected by different amounts, and it is typically a delicate issue as to which will ultimately prevail. Figure 2 shows a sample perturbed matrix for which dominance will not develop. If the change in the correlations is large (O(1)), then the eigenfunctions can change shape (e.g. 1s becomes 2s in the notation of [4]). We do not address this here, since we are considering only changes of O(ε).

Figure 1: Unperturbed two-eye correlation matrix and (y, -y), (x, -x) eigenvectors. Eigenvalues are 7.1 and 6.4 respectively.

Figure 2: Same-eye anti-correlation matrix and eigenvectors. (y, -y), (x, -x) eigenvalues are 4.8 and 5.4 respectively, and so the order has swapped.

Positive opposite-eye correlations can have exactly the same effect. This time e_c(y_i) is greater than e_c(x_i), and so, again, the eigenvalue for the perturbed (y_i, -y_i) would be expected to decrease more than that for the perturbed (x_i, -x_i). Figure 3 shows an example which is infelicitous for dominance. The third case is for general perturbations in Q*_ε. Now the mere signs of the components of the eigenvectors are not enough to predict which will be affected more. Figure 4 gives an example for which ocular dominance will still occur. Note that the (x_i, -x_i) eigenvector is no longer stable, and has been replaced by one of the form (α_i x_i,
β_i x_i). If general perturbations of the same order of magnitude as the difference between w_1 and w_2 (i.e. ε′ ≈ ε) are applied, the α_i and β_i terms complicate Miller's S^D analysis to first order. Let w_1(0) - w_2(0) = εa and apply Q*_ε as an iteration matrix. Then w_1(n) - w_2(n), the difference between the projections after n iterations, has no O(1) component, but two sets of O(ε) components: {μ_i^n (a·y_i) y_i}, and

    { λ_i^n [1 + ε(T_i + Ξ_i)/2λ_i]^n (α_i x_i·w_1(0) + β_i x_i·w_2(0)) (α_i - β_i) x_i,
      λ_i^n [1 + ε(T_i - Ξ_i)/2λ_i]^n (α_i x_i·w_2(0) - β_i x_i·w_1(0)) (α_i + β_i) x_i }

where T_i = e_1(x_i) + e_2(x_i). Collecting the terms in this expression, and using equation 1, we derive

    { λ_i^n [ (α_i² + β_i²) x_i·a + 2n (Ξ_i γ_i / 2λ_i) α_i β_i x_i·b ] x_i }

where b = w_1(0) + w_2(0). The second part of this expression depends on n, and is substantial because w_1(0) + w_2(0) is O(1). Such a term does not appear in the unperturbed system, and can bias the competition between the y_i and the x_i eigenvectors, in particular towards the binocular solutions. Again, its precise effects will be sensitive to the unperturbed eigenvalues.

4 CONCLUSIONS

Perturbation analysis applied to simple Hebbian correlational learning rules reveals the following:

• Introducing small anti-correlations within each eye causes a tendency toward binocularity. This agrees with the results of Miller.
• Introducing small positive correlations between the eyes (as will inevitably occur once they experience a natural environment) has the same effect.
• The overall eigensolution is not stable to small perturbations that make the correlational structure of the two eyes unequal. This also produces interesting effects on the growth rates of the eigenvectors concerned, given the initial conditions of approximately equivalent projections from both eyes.

Acknowledgements

We are very grateful to Ken Miller for helpful discussions, and to Christopher Longuet-Higgins for pointing us in the direction of perturbation analysis.
Support was from the SERC and a Nuffield Foundation Science travel grant to GG. GG is grateful to David Willshaw and the Centre for Cognitive Science for their hospitality. GG's current address is The Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, Scotland, and correspondence should be directed to him there.

Figure 3: Opposite-eye positive correlation matrix and eigenvectors. Eigenvalues of (y, -y), (x, -x) are 4.8 and 5.4, so ocular dominance is again inhibited.

Figure 4: The effect of random perturbations to the matrix. Although the order is restored (eigenvalues are 7.1 and 6.4), note the (αx, βx) eigenvector.

References

[1] Goodhill, GJ (1991). Correlations, Competition and Optimality: Modelling the Development of Topography and Ocular Dominance. PhD Thesis, Sussex University.
[2] Linsker, R (1986). From basic network principles to neural architecture (series). Proc. Nat. Acad. Sci., USA, 83, pp 7508-7512, 8390-8394, 8779-8783.
[3] MacKay, DJC & Miller, KD (1990). Analysis of Linsker's simulations of Hebbian rules. Neural Computation, 2, pp 169-182.
[4] MacKay, DJC & Miller, KD (1990). Analysis of Linsker's application of Hebbian rules to linear networks. Network, 1, pp 257-297.
[5] Miller, KD (1989). Correlation-based Mechanisms in Visual Cortex: Theoretical and Empirical Studies. PhD Thesis, Stanford University Medical School.
[6] Miller, KD (1990). Correlation-based mechanisms of neural development. In MA Gluck & DE Rumelhart, editors, Neuroscience and Connectionist Theory. Hillsborough, NJ: Lawrence Erlbaum.
[7] Miller, KD (1990). Derivation of linear Hebbian equations from a nonlinear Hebbian model of synaptic plasticity. Neural Computation, 2, pp 321-333.
[8] Miller, KD, Keller, JB & Stryker, MP (1989). Ocular dominance column development: Analysis and simulation. Science, 245, pp 605-615.
Image Segmentation with Networks of Variable Scales

Hans P. Graf, Craig R. Nohl, Jan Ben
AT&T Bell Laboratories
Crawfords Corner Road
Holmdel, NJ 07733, USA

ABSTRACT

We developed a neural net architecture for segmenting complex images, i.e., to localize two-dimensional geometrical shapes in a scene, without prior knowledge of the objects' positions and sizes. A scale variation is built into the network to deal with varying sizes. This algorithm has been applied to video images of railroad cars, to find their identification numbers. Over 95% of the characters were located correctly in a data base of 300 images, despite a large variation in lighting conditions and often a poor quality of the characters. A part of the network is executed on a processor board containing an analog neural net chip (Graf et al. 1991), while the rest is implemented as a software model on a workstation or a digital signal processor.

1 INTRODUCTION

Neural nets have been applied successfully to the classification of shapes, such as characters. However, typically, these networks do not tolerate large variations of an object's size. Rather, a normalization of the size has to be done before the network is able to perform a reliable classification. But in many machine vision applications an object's size is not known in advance and may vary over a wide range. If the objects are part of a complex image, finding their positions plus their sizes becomes a very difficult problem. Traditional techniques to locate objects of variable scale include the generalized Hough transform (Ballard 1981) and constraint search techniques through a feature space (Grimson 1990), possibly with some relaxation mechanisms. These techniques start with a feature representation and then try to sort features into groups that may represent an object.
Searches through feature maps tend to be very time consuming, since the number of comparisons that need to be made grows fast, typically exponentially, with the number of features. Therefore, practical techniques must focus on ways to minimize the time required for this search.

Our solution can be viewed as a large network, divided into two parts. The first layer of the network provides a feature representation of the image, while the second layer locates the objects. The key element for this network to be practical is a neural net chip (Graf et al. 1991) which executes the first layer. The high compute power of this chip makes it possible to extract a large number of features. Hence features specific to the objects to be found can be extracted, reducing drastically the amount of computation required in the second layer.

The output of our network is not necessarily the final solution of a problem. Rather, its intended use is as part of a modular system, combined with other functional elements. Figure 1 shows an example of such a system that was used to read the identification numbers on railroad cars. In this system the network's outputs are the positions and sizes of characters. These are then classified in an additional network (LeCun et al. 1990), specialized for reading characters. The net described here is not limited to finding characters. It can be combined with other classifiers and is applicable to a wide variety of object recognition tasks. Details of the network, for example the types of features that are extracted, are task specific and have to be optimized for the problem to be solved. But the overall architecture of the network and the data flow remain the same for many problems. Beside the application described here, we used this network for reading the license plates of cars, locating the address blocks on mail pieces, and for page layout analysis of printed documents.
Figure 1: Schematic of the recognition system for reading the identification numbers on railroad cars: camera, preprocessing (digital signal processor), segmentation (neural net processor), and classification (digital signal processor). The network described here performs the part in the middle box, segmenting the image into characters and background.

2 THE NETWORK

2.1 THE ARCHITECTURE

The network consists of two parts, the input layer extracting features and the second layer, which locates the objects. The second layer is not rigidly coupled through connections to the first one. Before data move from the first layer to the second, the input fields of the neurons in the second layer are scaled to an appropriate size. This size depends on the data and is dynamically adjusted.

Figure 2: Schematic of the network: from the bottom, the input image, simple features (edges, corners), and feature maps; from the top, the model and its feature representation; in the middle, the model is matched with the image.

Figure 2 shows a schematic of this whole network. The input data pass through the first layer of connections. From the other end of the net the model of the object is entered, and in the middle, model and image are matched by scaling the input fields of the neurons in the second layer. In this way a network architecture is obtained that can handle a large variation of sizes. In the present paper we consider only scale variations, but other transformations, such as rotations, can be integrated into this architecture as well. And how can a model representation be scaled to the proper size before one knows an object's size? With a proper feature representation of the image, this can be done in a straightforward and time-efficient way. Distances between pairs of features are measured and used to scale the input fields.
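The dynamic scaling of an input field can be sketched as follows (an illustrative reconstruction in our own notation, not the chip's or the authors' code): each of a neuron's 3x3 inputs pools a rectangular cell of a feature map whose size is set by the estimated scale, so one trained neuron can be applied at any hypothesized object size.

```python
import numpy as np

def scaled_input_field(feature_map, top, left, scale):
    # Each of the 3x3 inputs is the OR (here, the max) over a (scale x scale)
    # cell, so each input corresponds to a rectangular area of the feature map.
    field = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            field[i, j] = feature_map[top + i*scale:top + (i+1)*scale,
                                      left + j*scale:left + (j+1)*scale].max()
    return field.ravel()  # 9 inputs from this map; eight such maps give 72 inputs

fm = np.zeros((12, 12))
fm[5, 5] = 1                                       # one active feature
print(scaled_input_field(fm, 0, 0, 4).reshape(3, 3))  # the centre cell sees it
```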
In section 4 it is described in detail how the distances between corners provide a robust estimate of the sizes of characters. There is no need to determine an object's size with absolute certainty here. The goal is to limit the further search to just a few possible sizes, in order to reduce the amount of computation. The time to evaluate the second layer of the network is reduced further by determining "areas-of-interest" and searching only these. Areas without any features, or without characteristic combinations of features, are excluded from the search. In this way, the neurons of the second layer have to analyze only a small part of the whole image. The key for the size estimates and the "area-of-interest" algorithm to work reliably is a good feature representation. Thanks to the neural net chip, we can search an image for a large number of geometric features and have great freedom in choosing their shapes.

2.2 KERNELS FOR EXTRACTING FEATURES

The features extracted in the first layer have to be detectable regardless of an object's size. Many features, for example corners, are in principle independent of size. In practice, however, one uses two-dimensional detectors of a finite extent. These detectors introduce a scale and tend to work best for a certain range of sizes. Hence, it may be necessary to use several detectors of different sizes for one feature. Simple features tend to be less sensitive to scale than complex ones. In the application described below, a variation of a factor of five in the characters' sizes is covered with just a single set of edge and corner detectors. Figure 3 shows a few of the convolution kernels used to extract these features.

Figure 3: Examples of kernels for detecting edges and corners. Each of the kernels is stored as the connection weights of a neuron. These are ternary kernels with a size of 16 x 16 pixels. The values of the pixels are: black = -1, white = 0, hatched = +1.
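The convolution of such a ternary kernel with a binary image can be sketched as follows (a toy 4x4 corner detector of our own design and an illustrative threshold; the paper's kernels are 16x16 and run on the chip):

```python
import numpy as np

def feature_map(image, kernel, threshold):
    # Scan the ternary kernel over the binary image; the neuron tied to the
    # kernel "turns on" wherever the correlation reaches the threshold.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=bool)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel) >= threshold
    return out

# Lower-left corner detector: +1 along the corner's edges, -1 just outside.
kernel = np.array([[-1,  1,  0,  0],
                   [-1,  1,  0,  0],
                   [-1,  1,  1,  1],
                   [-1, -1, -1, -1]])

image = np.zeros((8, 8), dtype=int)
image[2:5, 2:7] = 1                       # a bright rectangle
fmap = feature_map(image, kernel, threshold=5)
print(np.argwhere(fmap))                  # → [[2 1]]: the rectangle's lower-left corner
```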
A total of 32 kernels of this size can be scanned simultaneously over an image with the neural net chip. Wherever an edge or a corner of the proper orientation is located, the neuron tied to that kernel turns on. In this way, the chip creates 32 feature maps. The kernels containing the feature detectors have a size of 16 x 16 pixels. With kernels of such a large size, it is possible to create highly selective detectors. Moreover, a high noise immunity is obtained.

2.3 THE SECOND LAYER

The neurons of the second layer have a rectangular receptive field with 72 inputs: 3 x 3 inputs from each of eight feature maps. These neurons are trained with feature representations of shapes, normalized in size. The 3 x 3 input field of a neuron does not mean that only an area of 9 pixels in a feature map is used as input. Before a neuron is scanned over a part of a feature map, its input field is scaled to the size indicated by the size estimator. Therefore, each input corresponds to a rectangular area in a feature map. For finding objects in an image, the input fields, scaled to the proper size, are then scanned over the areas marked by the "area-of-interest" algorithm. If an output of a neuron is high, the area is marked as the position of an object and is passed along to the classifier. The second layer of the network requires only relatively few computations, typically a few hundred evaluations of neurons with 72 inputs. Therefore, this can be handled easily by a workstation or a digital signal processor. The same is true for the area-of-interest algorithm. The computationally expensive part is the feature extraction. On an image with 512 x 512 pixels this requires over 2 billion connection updates. In fact, on a workstation this takes typically about half an hour. Therefore,
here a special purpose chip is crucial to provide a speed-up that makes this approach useful for practical applications.

3 THE HARDWARE

Figure 4: Schematic of the neural net board: the NET32K analog neural net chip and a DSP32C digital signal processor with EEPROM and 256k of fast SRAM, connected through address and data buses and a VME interface to the VME bus.

A schematic of the neural net board used for these applications is shown in Figure 4. The board contains an analog neural net chip, combined with a digital signal processor (DSP) and 256k of fast static memory. On the board, the DSP controls the data flow and the operation of the neural net chip. This board is connected over a VME bus to the host workstation. Signals, such as images, are sent from the host to the neural net board, where a local program operates on the data. The results are then sent back to the host for further processing and display. The time it takes to process an image of 512 x 512 pixels is one second, where the transfer of the data from the workstation to the board and back requires two thirds of this time.

The chip does a part of the computation in analog form. But analog signals are used only inside the chip, while all the input and the output data are digital. This chip works only with a low dynamic range of the signals. Therefore, the input signals are typically binarized before they are transferred to the chip. In the case of gray level images, the pictures are half-toned first and then the features are extracted from the binarized images. This is possible, since the large kernel sizes suppress the noise introduced by the half-toning process.

4 APPLICATION

This network was integrated into a system to read the identification numbers on railroad cars. Identifying a rail car by its number has to be done before a train enters the switching yard, where the cars are directed to different tracks. Today this is handled by human operators reading the numbers from video screens.
The present investigation is a study to determine whether this process can be automated. The pictures represent very difficult segmentation tasks, since the size of the characters varies by more than a factor of five and they are often of poor quality, with parts rusted away or covered by dirt. Moreover, the positions of the characters can be almost anywhere in the picture, and they may be arranged in various ways, in single or in multiple lines. Also, they are written in many different fonts, and the contrast between characters and background varies substantially from one car to the next. Despite these difficulties, we were able to locate the characters correctly in over 95% of the cases, on a database of 300 video images of railroad cars.

As mentioned in section 2, in the first layer feature maps are created from which areas of interest are determined. Since the characters are arranged in horizontal lines, the first step is to determine where lines of characters might be present in the image. For that purpose the feature maps are projected onto a vertical line. Rows of characters produce strong responses of the corner detectors and are therefore detected as maxima in the projected densities. The orientation of a corner indicates whether it resulted from the lower end of a character or from the upper end. The simultaneous presence of maxima in the densities of lower and upper ends is therefore a strong indication of the presence of a row of characters. In this way, bands within the image are identified that may contain characters. A band not only indicates the presence of a row of characters, but also provides a good guess of their heights. This simple heuristic proved to be very effective for the rail car images. It was made more robust by also taking into account the outputs of the vertical edge detectors. Characters produce strong responses of vertical edge detectors, while distractors, such as dirt, create fewer and weaker responses.
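The band-finding heuristic described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' implementation; the function name and the normalized threshold are assumptions:

```python
import numpy as np

def find_character_bands(lower_corners, upper_corners, threshold=0.5):
    """Locate horizontal bands likely to contain a row of characters.

    lower_corners, upper_corners: 2-D binary feature maps (rows x cols)
    holding responses of corner detectors tuned to the lower and upper
    ends of characters.
    """
    # Project each feature map onto a vertical line: one density per row.
    lower_density = lower_corners.sum(axis=1).astype(float)
    upper_density = upper_corners.sum(axis=1).astype(float)

    # Normalize so the threshold is independent of image width.
    lower_density /= max(lower_density.max(), 1.0)
    upper_density /= max(upper_density.max(), 1.0)

    # A row of characters shows a density maximum of upper-end corners
    # with a maximum of lower-end corners below it; the simultaneous
    # presence of both bounds a candidate band.
    upper_rows = np.flatnonzero(upper_density > threshold)
    lower_rows = np.flatnonzero(lower_density > threshold)

    bands = []
    for top in upper_rows:
        below = lower_rows[lower_rows > top]
        if below.size:
            bottom = below[0]
            bands.append((top, bottom))  # band height ~ character height
    return bands
```

The band height directly yields the size estimate used to scale the second-layer input fields.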
At this stage we do not attempt to identify a character. All we need is a yes/no answer whether a part of the image should be analyzed by the classifier or not. The whole alphabet is grouped into five classes, and only one neuron is created to recognize any member within a class. A high output of one of these neurons therefore means that any character of its class may be present. Figures 5 and 6 show two examples produced by the segmentation network. The time required for the whole segmentation is less than three seconds, of which one second is spent on the feature extraction and the rest on the "focus-of-attention" algorithm and the second layer of the network.

Figure 5: Image of a tank car. The crosses mark where corner detectors gave a strong response. The inset shows an enlarged section around the identification number. The result of the segmentation network is indicated by the black lines.

5 CONCLUSION

The algorithm described combines neural net techniques with heuristics to obtain a practical solution for segmenting complex images reliably and fast. Clearly, a "conventional" neural net with a fixed architecture lacks the flexibility to handle the scale variations required in many machine vision applications. To extend the use of neural nets, transformations have to be built into the architecture. We demonstrated the network's use for locating characters, but the same strategy works for a wide variety of other objects. Some details need to be adjusted to the objects to be found. In particular, the features extracted by the first layer are task specific. Their choice is critical, as they determine to a large extent the computational requirements for finding the objects in the second layer. The use of a neural net chip is crucial to make this approach feasible, since it provides the computational power needed for the feature extraction.
The extraction of geometrical features for pattern recognition applications has been studied extensively. However, its use is not widespread, since it is computationally very demanding. The neural net chip opens the possibility of extracting large numbers of features in a short time. The large size of the convolution kernels, 16 x 16 pixels, provides great flexibility in choosing the feature detectors' shapes. Their large size is also the main reason for the good noise suppression and the high robustness of the described network.

Figure 6: The result of the network on an image of high complexity. The white horizontal lines indicate the result of the "area-of-interest" algorithm. The final result is shown by the vertical white lines.

References

H.P. Graf, R. Janow, C.R. Nohl, and J. Ben, (1991), "A Neural-Net Board System for Machine Vision Applications", Proc. Int. Joint Conf. Neural Networks, Vol. I, pp. 481-486.

D.H. Ballard, (1981), "Generalizing the Hough transform to detect arbitrary shapes", Pattern Recognition, Vol. 13, p. 111.

W.H. Grimson, (1990), "The Combinatorics of Object Recognition in Cluttered Environments Using Constraint Search", Artificial Intelligence, Vol. 44, p. 121.

Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and L.D. Jackel, (1990), "Handwritten Digit Recognition with a Back-Propagation Network", in: Neural Information Processing Systems, Vol. 2, D. Touretzky (ed.), Morgan Kaufmann, pp. 396-404.
1991
Decoding of Neuronal Signals in Visual Pattern Recognition

Emad N. Eskandar, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD 20892, USA
Barry J. Richmond, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD 20892, USA
John A. Hertz, NORDITA, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark
Troels Kjær, NORDITA, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark
Lance M. Optican, Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD 20892, USA

Abstract

We have investigated the properties of neurons in inferior temporal (IT) cortex in monkeys performing a pattern matching task. Simple backpropagation networks were trained to discriminate the various stimulus conditions on the basis of the measured neuronal signal. We also trained networks to predict the neuronal response waveforms from the spatial patterns of the stimuli. The results indicate that IT neurons convey temporally encoded information about both current and remembered patterns, as well as about their behavioral context.

1 INTRODUCTION

Anatomical and neurophysiological studies suggest that there is a cortical pathway specialized for visual object recognition, beginning in the primary visual cortex and ending in the inferior temporal (IT) cortex (Ungerleider and Mishkin, 1982). Studies of IT neurons in awake behaving monkeys have found that visually elicited responses depend on the pattern of the stimulus and on the behavioral context of the stimulus presentation (Richmond and Sato, 1987; Miller et al, 1991). Until now, however, no attempt had been made to quantify the temporal pattern of firing in the context of a behaviorally complex task such as pattern recognition. Our goal was to examine the information present in IT neurons about visual stimuli and their behavioral context.
We explicitly allowed for the possibility that this information was encoded in the temporal pattern of the response. To decode the responses, we used simple feed-forward networks trained by back propagation. In work reported elsewhere (Eskandar et al, 1991) this information is calculated another way, with similar results.

2 THE EXPERIMENT

Two monkeys were trained to perform a sequential nonmatch-to-sample task using a complete set of 32 black-and-white patterns based on 2-D Walsh functions. While the monkey fixated and grasped a bar, a sample pattern appeared for 352 msec; after a pause of 500 msec a test stimulus appeared for 352 msec. The monkey indicated whether the test stimulus failed to match the sample stimulus by releasing the bar. (If the test matched the stimulus, the monkey waited for a third stimulus, different from the sample, before releasing the bar; see Fig. 1.)

Figure 1: The nonmatch-to-sample task.

The type of trial (match or nonmatch) and the pairings of sample stimuli with nonmatch stimuli were selected randomly. A single experiment usually contained several thousand trials; thus each of the 32 patterns appeared repeatedly under the three conditions (sample, match, and nonmatch). Single-neuron recordings from IT cortex were carried out while the monkeys were performing the task.

Figure 2: Responses produced by two stimuli under three behavioural conditions (sample, match, and nonmatch).

Fig. 2 shows the neuronal signals produced by two different stimulus patterns in the three behavioural conditions: sample, match and nonmatch.
The lower parts of the figure show single-trial spike trains, while the upper parts show the effective time-dependent firing probabilities, inferred from the spike trains by convolving each spike with a Gaussian kernel, adding these up for each trial, and averaging the resulting continuous signals over trials. It is evident that for a given stimulus pattern the average signals produced in different behavioural conditions are different. In what follows, we proceed further to show that there is information about behavioural condition in the signal produced in a single trial. We will compute its average value explicitly.

3 DECODING NETWORKS

To compute this information we trained networks to decode the measured signal. The form of the network is shown in Fig. 3.

Figure 3: Network to decode neuronal signals for information about behavioural condition (spike trains -> principal components -> hidden units -> output).

The first two layers of the network shown preprocess the spike trains as follows. We begin with the spikes measured in an interval starting 90 msec after the stimulus onset and lasting 255 msec. First each spike is convolved with a Gaussian kernel to produce a continuous signal. This signal is sampled at 4-msec intervals, giving a 54-dimensional input vector. In the second step this input vector is compressed by throwing out all but a small number of its principal components (PC's). The PC basis was obtained by diagonalizing the 54 x 54 covariance matrix of the inputs computed over all trials in the experiment. The remaining PC's are then the input to the rest of the network, which is a standard one with one further hidden layer. Earlier work showed that the first five PC's transmit most of the pattern information in a neuronal response (Richmond et al, 1987). Furthermore, the first PC is highly correlated with the spike count.
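The preprocessing just described, Gaussian smoothing of the spike trains followed by projection onto the leading principal components, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the kernel width `sigma` and the function names are assumptions:

```python
import numpy as np

def smooth_spike_train(spike_times, t_start=90.0, n_samples=54, dt=4.0, sigma=8.0):
    """One trial: convolve spikes (times in ms after stimulus onset)
    with a Gaussian kernel and sample every dt ms, giving the
    54-dimensional input vector described in the text.  sigma is an
    assumed kernel width in ms."""
    t = t_start + dt * np.arange(n_samples)
    spikes = np.asarray(spike_times, dtype=float)
    if spikes.size == 0:
        return np.zeros(n_samples)
    # Sum of Gaussian bumps centred on the spike times.
    d = t[:, None] - spikes[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)

def pca_compress(X, n_components=5):
    """Project trial vectors (rows of X) onto the leading principal
    components of their 54 x 54 covariance matrix, as in the decoding
    network's first two layers."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    basis = eigvecs[:, ::-1][:, :n_components]  # keep the leading PCs
    return Xc @ basis
```

The compressed vectors (first one or five PCs) then feed the single hidden layer of the decoding network.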
Thus, our subsequent analysis was performed either on the first PC alone, as a measure of spike count, or on the first five PC's, as a measure that incorporates temporal modulation.

We trained the networks to make pairwise discriminations between responses measured under different conditions (sample-match, sample-nonmatch, or match-nonmatch). Thus there is a single output unit, and the target is a 1 or 0 according to the behavioural condition under which that spike train was measured. The final two layers of the network were trained by standard backpropagation of errors for the cross-entropy cost function

E = -\sum_{\mu} \left[ T^{\mu} \log O^{\mu} + (1 - T^{\mu}) \log(1 - O^{\mu}) \right],   (1)

where T^{\mu} is the target and O^{\mu} the network output produced by the input vector x^{\mu} for training example \mu. The output of the network with the weights that result from this training is then the optimal estimate (given the chosen architecture) of the probability of a behavioural condition, given the measured neuronal signal used as input. The number of hidden units was adjusted to minimize the generalization error, which was computed on one quarter of the data that was reserved for this purpose. We then calculated the mean equivocation,

\epsilon = -\langle O(x) \log O(x) + [1 - O(x)] \log[1 - O(x)] \rangle_x,   (2)

where O(x) is the value of the output unit for input x and the average is over all inputs. (We calculated this by averaging over the test or training sets; the results were not sensitive to which one we chose.) The equivocation is a measure of the neuron's uncertainty with respect to a given discrimination. From it we can compute the transmitted information

I = I_{a\,priori} - \epsilon = 1 - \epsilon.   (3)

The last equality follows because in our data sets the two conditions always occur equally often. It is evident from Fig. 2 that if we already know that our signal is produced by a particular stimulus pattern, the discrimination of the behavioural condition will be easier than if we do not possess this a priori knowledge.
This is because the signal varies with stimulus as well as behavioural condition (more strongly, in fact), and the dependence on the latter has to be sorted out from that on the former. To get an idea of the effect of this "distraction", we performed four separate calculations for each of the three behavioural-condition discriminations, using 1, 4, 8, and all 32 stimulus patterns, respectively. The results are summarized in Fig. 4, which shows the transmitted information about the three different behavioural-condition discriminations at the various levels of distraction, averaged over five cells. It also indicates how much of the transmitted information in each case is contained in the spike count alone (i.e., the first PC of the signal). It is apparent that measurable information about behavioural condition is present in a single neuronal response, even in the total absence of a priori information about the stimulus pattern. It is also evident that most of this information is contained in the time-dependence of the firing: the information contained in the first PC of the signal is significantly less (paired t-test, p < 0.001) and was barely out of the noise.

Figure 4: Transmitted information for the three behavioural discriminations (sample-nonmatch, sample-match, match-nonmatch) with 1, 4, 8, or 32 patterns. The lower white region on each bar shows the information transmitted in the first PC alone.

A finite data set can lead to a biased estimate of the transmitted information (Optican et al, 1991). In order to control for this we made a preliminary study of the dependence of the calculated equivocation on training set size. We varied the number of trials available to the network in a range (64 - 1024) for one pair of discriminations (sample vs. nonmatch).
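Equations (2) and (3) reduce to a short computation over the trained network's output probabilities. A sketch, assuming base-2 logarithms so that information is measured in bits:

```python
import numpy as np

def equivocation(outputs, eps=1e-12):
    """Mean equivocation of eq. (2): the average binary entropy of the
    network's output probabilities O(x) over a set of inputs.
    Base-2 logs (bits) are an assumption."""
    o = np.clip(np.asarray(outputs, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-(o * np.log2(o) + (1.0 - o) * np.log2(1.0 - o))))

def transmitted_information(outputs):
    """Eq. (3): with equiprobable conditions the a priori information
    is 1 bit, so I = 1 - equivocation."""
    return 1.0 - equivocation(outputs)
```

Confident outputs (near 0 or 1) give low equivocation and hence high transmitted information; outputs near 0.5 give one full bit of equivocation and no information.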
The calculated apparent equivocation increased with the sample size N, indicating a small-sample bias. The best correlation (Pearson r = -0.86) was obtained with a fit of the form

\epsilon(N) = \epsilon_{\infty} - c N^{-1/2}   (c > 0).   (4)

This gives us a systematic way to estimate the small-sample bias and thus provide an improved estimate \epsilon_{\infty} of the true equivocation. Details will be reported elsewhere.

4 PREDICTING NEURONAL RESPONSES

In a second set of analyses, we examined the neuronal encoding of both current and recalled patterns. The networks were trained to predict the neuronal response (as represented by its first 5 PC's) from the spatial pattern of the current nonmatch stimulus, that of the immediately preceding sample stimulus, or both. The inputs were the pixel values of the patterns. The network is shown in Fig. 5. In order to avoid having different architectures for predictions from one and two input patterns, we always used a number of input units equal to twice the number of pixels in the input. In the case where the prediction was to be made on the basis of both previous and current patterns, each pattern was fed into half the input units. For prediction from just one pattern (either the current or previous one), the single input pixel array was loaded separately onto both halves of the input array. As in the previous analyses, the number of hidden units was fixed by testing on a quarter of the data held out of the training set for this purpose.

Figure 5: Network for predicting neuronal responses from the stimulus. The inputs are pixel values of the stimuli (see text), and the targets are the first 5 PC's of the measured response.

We performed this analysis on data from 6 neurons. Not surprisingly, the predictions were better when the input was the current pattern (normalized mean square error (mse) = 0.482) than when it was the previous pattern (mse = 0.589).
However, the best prediction was obtained when the input reflected both the current and previous patterns (mse = 0.422). Thus the neurons we analyzed conveyed information about both remembered and current stimuli.

5 CONCLUSION

The results presented here demonstrate the utility of connectionist networks in analyzing neuronal information processing. We have shown that temporally modulated responses in IT cortical neurons convey information about both spatial patterns and behavioral context. The responses also convey information about the patterns of remembered stimuli. Based on these results, we hypothesize that inferior temporal neurons play a role in comparing visual patterns with those presented at an earlier time.

Acknowledgements

This work was supported by NATO through Collaborative Research Grant CRG 900189. EE received support from the Howard Hughes Medical Institute as an NIH Research Scholar.

References

E.N. Eskandar et al (1991): Inferior temporal neurons convey information about stimulus patterns and their behavioral relevance, Soc. Neurosci. Abstr. 17, 443; Role of inferior temporal neurons in visual memory, submitted to J. Neurophysiol.

E.K. Miller et al (1991): A neural mechanism for working and recognition memory in inferior temporal cortex, Science 253.

L.M. Optican et al (1991): Unbiased measures of transmitted information and channel capacity from multivariate neuronal data, Biol. Cybernetics 65, 305-310.

B.J. Richmond and T. Sato (1987): Enhancement of inferior temporal neurons during visual discrimination, J. Neurophysiol. 56, 1292-1306.

B.J. Richmond et al (1987): Temporal encoding of two-dimensional patterns by single units in primate inferior temporal cortex, J. Neurophysiol. 57, 132-178.

L.G. Ungerleider and M. Mishkin (1982): Two cortical visual systems, in Analysis of Visual Behavior, ed. D. Ingle, M.A. Goodale and R.J.W. Mansfield, pp. 549-586. Cambridge: MIT Press.
1991
Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA

Abstract

We present experimental data from an analog LSI chip that implements the Herault-Jutten auto-adaptive neural network. The test procedures and results are described in both the time and frequency domains. These include weight convergence trajectories, extraction of a signal in noise, and separation of statistically complex signals such as speech.

1 INTRODUCTION

In its most general form, the Herault-Jutten (HJ) independent component analyzer (Jutten 1987, 1991) can be used to solve the following classical signal processing problem: given only physically distinct measurements of unknown linear combinations of independent signal sources, the network auto-adaptively extracts the independent signals. The HJ network consists of a set of N simple processing units interconnected by inhibitory synapses (see Figure 1). Each processing unit computes its output from its input and the weighted sum of the outputs of the remaining N-1 units. The weights are updated using a modified Hebbian learning rule (Herault 1985, Jutten 1987, 1991). This architecture has led to various LSI implementations (Vittoz 1989, Cohen 1991).

We have implemented three different LSI designs using different learning rules, circuits, and design methodologies. The particulars of the circuits and learning rules employed in our implementations have been described in detail elsewhere (Cohen 1991); this paper concentrates on the test procedures and results obtained with different types of input signals. In section 2 we describe the test procedure used to observe the evolution of the weights in time, from reset to convergence of the network. In section 3 we describe tests designed to observe the frequency-domain characteristics of the network. In section 4 we describe more ambitious tests involving speech signals and other audio-band signals. All results presented here were obtained from the first design, a chip that used the learning rules and design techniques of Vittoz and Arreguit (Vittoz 1989). Our improvements on their original implementation were mostly in the details of the circuits, and resulted in a system with smaller systematic offsets in the individual components (Cohen 1991). However, similar tests were performed on the other two designs, with similar results.

The weight-dynamics test was chosen to match conditions used for digital simulations of the network, and to compare the evolution of the weights in the weight space. Two sine waves of approximately 1 kHz were mixed in two different ratios, and the mixed signals x1 and x2 were presented to the chip. The chip's output signals y1 and y2 and the weights w12 and w21 were digitized and plotted; phase plots of the network's input signals and of its output signals after convergence show the separation.

A further experiment was designed to test an implementation's ability to extract a signal which is buried in background noise. The input, a sinusoid, is to be extracted from band-pass filtered white noise whose spectrum is centered around the frequency of the signal. The magnitude spectra of the original signals, of the input signals to the network, and of the output signals of the network after convergence show that the chip is able to reduce the background noise and extract the sinusoid. Larger networks were also tested with a mixture of sinusoidal signals spaced at regular intervals around 1 kHz. The networks successfully separated each pure sinusoid into a separate output channel and suppressed the adjacent sinusoids; no convergence problems were encountered with these larger networks.

Finally, the networks were tested using music and speech, signals that have more complex statistical properties than those usually synthesized in a signal laboratory. For instance, a recording of a segment of text read in English and a segment of text read in Greek by the same speaker were mixed in two slightly different ratios to produce unintelligible input signals for a 2 x 2 network. The network easily recovered the two original recordings, as seen in the spectrograms of the mixed inputs and of the separated outputs.
¹ ]   ^ J – _ 'a` \b r ] c ] w aed O Y 8 f g h jiPk Ò l Ãnm € o â O – O © Tprq 8's O tvu u } r w ] = ¾ r x ¥ ë ‘y w z { |}.~ O  €  O O  ‚ ] ]Mƒ í 5 8 „ … † ‡ ˆ  @ Њ‰ 5 ‹ 5Œ »  Ž 5 O ^ ] a  ’‘”“ • ‘ – r . O˜— » T™›š œ  T žMŸ   T O ] 5 5 › ¡¢ O 1  ‘ a r 5 ] £ O ¤ O ¥§¦ ¨”© r 8 ª «<¬ ­ ®  ] ¯ } ] ° }  ± ² ÿM³ ´ € û .µ wv¶ · ¬ O O”¸ ¹ 5 r ºM» ' Y r ¼ ½¾À¿ ' O ] inÁ Âà Ä.Å J Æ Ç È ì É Ê Ë Ì JÍÏÎ ]ÑÐ € r Á w Ò 8 . r Ó 5 w a r f T . j BÕÔ  Á ] w š w’Ö ÿ  ) a O × 3  Á ' O } Ø Ù Ñ j ÚFÛ Ü”Ýßޒàâá+ãÀä Úæå’ç áéèëê àíì’îðïòñðó ìôõöáø÷ðù§áéú’ûöü ì ôý§Û þ ÿ á  õŠáâánè Þ õíá ú  ޛû íì û á ú›á û ì’ï õ  ì  á ïÑï á ñ Û  ñ û Û  ì ú ì ô û  á áøû  ì’ï"!$# á&%('  Û )á+*ßá , û. /021 ñ43 á5 ñ ú76 ñ ç ç 098 :<; ñ û Û ) ì>= õ4? û ì úA@CB á DFE û ñ ôMá HG I J ûK ï áná FL :NM ánú õ ONP7Q(@4R ì %>S , áøè û ï áøè ì ú›õUT ï Þ è û O ì +ôV ì 5 ü ' ï áXW õíè ì  Y ; *Z8 E õaÛ [ ì = ? J ï á M ì3 \ úF] ; ïíì õíõaû ñ<^ Û Oú û á  á `_ ì ú ába  8 ) ]  û @41 è ì 5c5 Þ = : ; ñ 8 deW = 12 f úFgøü ? ñ ú  J õíáh ñ ï ñ û O ì ú ì ô á 3 ì> ái ç ì '  û j ñk õ Û O Ý ú ñ 1 õ ôV ì ùml ñ è  ÷ ï ì Þ ú nonqp ñ   nsr ptQ ì þ õ ' . u õ Û ve ]xwzy|{ Töánè K ú ì  ì ]>} ñ ú  5c  è ï P>~(P  ' ï T è ú›Û €N Þ áøõ ï ' ñ‚ û E 5§á õaá4ƒ ñ>„ ñ†… O ì úˆ‡‰<õ 8 dNŠ Q`@ 0 ü Û )  û  á ñ Þ  ! ì õ  áøè ûíï Þ ù ñ ú  Þ ç ûöì ñ % ì ÞÀûs‹ wxŒ  Û : õ ç ‡ßüöõíÛ d l  'Ž‘ ï V ' = û“’ B ì`” •Tgéè _ = :  Þ ' õ –˜— ì  áXš™4›7œŸž Þ üaÛ € ú Ý l`  ç ì 1 @  á 3 Û þ ; áaõ ? ñ ú £¢ ¤ Ý K áXV ; E ïöï ᥠ1 á 3 á 1 õ õU¦ ‡`§ ¨ ánú ñ %F1 ácV'@ k© û Û : ù§á  V ì èaánõ õ Ü ú ] ì ô õ8 ª Ý ú ñ 1õ P†« ñ ôMá   Þ ú  ï á r¬  , ­ Þ û Þ ï á 'ì ï ^¡ Û [  § Û  ú7* ì 1 3 á  á 3 '+1 ì ç Û O ú Ý û K á è ñ ç ñ® 8 ,N1 8 , û } û ì  @¯   á õU ° Ý ú ñ 1  g¥0<±6’õFÛ : ú … V ì  Þ ; ' i l } û K áH5Ñá  d Þ ù û  ï P E ] K  _ , ;  û  áPõíÛ , Ý = ñ 1\õ²h ïíì  @ Ý @ û á ® á ‰ WŸV ' Vg ñ ; ³ ú Ý û K áPõíá›õU‡ V õ – @Hü ´ µ ¶ · ¸ ¹        !   #" $ &%  (' *) ,+-/. 
0 1 &23 546 7988 :<; =?> @BA?C D E F G H IJ K L L MNPO Q RSPTVU WX Y#Z []\^ 0_ X `(a b ced T b fg H h T L i j k l m L?n oqpBr b L s o T b Y o o t u v wBx o y{z | } ~ X T €  ‚/ƒ„#…‡†ˆ _ Y T zЉ<‹ L ^BŒ  Ž   } ‘ ’ “•” – — } ˜ T ™ š y › TBœ  ž Ÿ Y z   ¡¢ £¥¤  ¦ T § _ ¨ z © T ªP« ¬ m m ­ o b ®°¯ zB± o ‰ ² Z ³ T Y ´ µ ¶ T } · T Z ¸ T ž ¹qº T L » T T ˜ ¼ T b ½ T ¾‡¿ À Á T  T T à «?Ä ÅÆ Ç ‚ Y#b T ÈÉ ÊBËqÌÍ Y L Î L L Ï ÐÒÑÓ ÔÖÕ¥× TØ Y Ù Ú Û T m T Ü } Ý9Þ ßáà T ž T°âäã T « } Œ  ž å ®Šæ ç } oè T?Têé ë ì L í } m î b Y z ï ð Ü ˜ b T j T Tqñ òBó zô z L T Y T ¨ õ]ö _ Ó ˜ L J z ÷ X ¬ø ù úüû b ˆþý ÿ  Ñ 0 A  L  }  Í   o   o   M ^ T b     ž       0   W  X  ˜ }  _ j `! X o " # _ Y Y ® } o o } o L%$ ¾ b & T ' L Ù Z ¾ ( m )+* b+, } z T m -. T /10 T Z Z ˆ Z j32 ¬ } Z o 4 T b 576 ¾ @ 8:9 ; < T  Y =?> L @ ZBA ú } "DC +E o Y o F oHG k I J 0 K j L T b Œ M!N3OQP R  S T m P } U ¾WV « _ X Y T ¬ o ú Z _ L } [ ¬ T \3] L ^`_ a _ T _ ^cb û T Z d a e _ _f g hji Zlknm o b m L žp Trq X } } " sWtvu w L T:x T y{z |!}~€‚ ® ƒ „ T }… L L ^‡† T Ù ˆ z } L }Љ z TŒ‹  b Ž!  T ‘“’ T T ” ²–• — ¾™˜ š 'Œ›lœ  j W j ž TŸ 0  ¡ ¬ ¢ T £ ¬{¤¦¥¨§ © ª z« 5¬®­ ˜ ¯l° L ± ‰ }W² _ T T T´³ k T µ o ' X _ ¶ } « ·¹¸ k º L T ® T»!¼ ž T } } ® ½ ' ¾ ¿+À } úÂÁQÃlÄ z Å b Æ b ½ T z z } L “ L Z ˜ } T T TÈÇ _ _ T o ¬ o–É b b z z Ê  Y j b Ë Ì™Í Î Ï A{Ð Ñ H Ò Ó Ô Õ _ Ü T ˆ “ Ö T " } × Ø ¨¨Ù _ “ Ú Û1Ü žlÝ Ù Þ ¾ ß à _já!â ã T ä À 广 ç !è z é ê ë W™ì í î ï X ð ž Tlñ } _ ˜ } ò1ó1ô´õ ö « S÷ X+ø o T ‹ m T ù ú lû T ž T üný ˜ Ç`þ ˜  ÿ «  " ¾    T  Y ˜ b Z m T T X    T/T L T T L b J b ¾ T T z     z LäL X   Z  L z É   } 5 L o y ½ ! " } T # $ o% & T  ' ' ( ) * Y  + b  , -/. 0 _ b ¾ Z } 1 } 2 _ L y Í ‘ 3 ¬ 465 7 8 T:9; <>=@? _ L TBA ì b C D T E 6F bG _ T _ ' kIH J b ® b ‚LK>M Z } L N T “ o O/P . 
Q R _ T S z T TVU W Z Ñ L T T Í X L z Y L Z [ L A ' \^] b`_   W Ü X _ a 6 X 0 0 0 b c d:ef gih jlk A  ¬ m T  n T T po6q b 0 X b  Z r os ^ ¾ t L ¾ Ä b } L  E u Í } L ' ‹wvyx z { |  m } o ˜ T z ~ }   € j z ¬ G z @  5 ‚ ƒ L ì } z TáT T „ … † ‡ û ˆ _  Í T ‰ ˆ 2 L LŠ ‹ é b Ü ^ @ bŒ Ž 0 ˆ L b>‘ _ T Ž T m ˆ " T T“’ S ” L T _ L L L–•  ¬ _ — b ˜ z b T g š™ › L œ _ b ž Ÿ Ü z ¡ ‘ _¢ 2 L £ ^ ¤¦¥ $ § }¨ T "ª© 2 } T ž L «­¬ ¾¯® } ° ± }–² } ³/´ b } µ·¶ b Z  0 Ù L L ¸ 0 _ T ˜ b ow¹ Ù Ü L m z Z ì “ } T z } o X o j o § ² Ë º» ¼`½¿¾IÀLÁÃÂIÄ ÅÇÆšÈ ÁÊÉÌËLÀ>ÍÏΚÀLÐÏÑÒÍ¿ÓÕÔÁ ½ÏÖØ×ÚÙÜÛ Ô Í¿ÓÇÝLÞÁàßÜá Ûãâ ¾IËäÍ¿Óæåèçêé ç ëIÁ·ËíìîÍ¿ïð Ôòñ ÍÚóyô Å Ùõ Ô Á È ÐÊö À å Ë>» ¼ Í Ùø÷ Ó“Ë ÞÁ Ô:ù Á¯ÁúÉÞøÔû» üý½ Ù ÐÚþ Ô ¼ ÿ  Þ ×  ÉòÍ   å »  ù пÀ Ë   ÀãÍIþlÁ Ö ÅøÆ ¾IÉ Ñ Í  Ð Û ÿ Í Ù Ô Í ËLÞÁøÐ! #"%$¿ïL» &  ñ Ñ ñ Ð(' ×*) Á¯Á·ë + Àãß  Í Ô Á-,/. 10 ¾IË Ý × Ù32 0 á ËãËãÁ Ù46578:9 å <; ÀLÁÌÉòÁ Ù ÝLþ . 3= þ å ÝË Ð Ù ?> Ð Î õ@ ABDCFE пËãËGIHJ 4  KMLNOLP%LRQTS-LU VXW ñ × MY Z & C G J:J GÐ: []\à٠пþ Í^`_baÕÆRced f È þ × f Á Ù ÝãÐ ËOg A Í Ù ÍÊÓ Ð  \ih  Í!j \  Ð ù Û ô ­ ' ×lk B Û Þm! ÿ É n Á·¾IÀLÐ-o n Á ÛOpî÷ ÀiÓ ÍšÀiq × ÐÏþrFsy» A f ×t Á È Ð¿ÀзËL» &­ß:uÍ¿Óvd Ù Á  Á Ù Rm ÙRw Æ » x/Î Ù<y!zÕÆ ß{Iï |6Á Ô-} ]~€  Ƃ„ƒl… † Þ ×6‡ » A ‡‰ˆŠ » A ÷:‹ × Iô ·Ð- ŒÇë^Ü» & ë ×Ê× ï @ ë ^ ˆ † ñ Á 0W  ٠ԏŽ Í È »  둏’yë @ “” Á ïãÔ» •`Ë ­ – Í—¯ë Y™˜š› ~ = ͚¾ » &œžI× ë = ­RŸ€  Ð Ù<¡€\ ëï × W á \ ü:¢  2 4 H J 4  `]£ t ô ¤ý» ¥ ÷¦ d ÖiÈ ¤§— Ö m Ù Ý å Ë ô ¨/÷ ë ÷%© п뙪yá Ý W‰« ªIÐ È Ýòô ¬ ” —­yÁ Û ì ÷ À D© ÷ ï1® × Ð‰¯§°²± ô  Öè×M³×(´ пïLÐÜË ÿ ÷ÏÙ W-µ c Ù ¶ È Á Ù ¡ —ÊëÝ t ô ¸·¹ å ¤¥º-» ¼ ¼ ½¾¿!À Á ÁÃÂ-Ä o ÅÆÈÇ*ÉFÊbËFÌ Á€Í:ÎÎÍMÏ Å%Ë Á(ÐÒÑ ËOÓ o É Å Ñ-ÔÖÕØ×-ÙbÚ É Ç Ä _ Û Ù É!ÅÝÜ Ä ¬ ¾ÞÀ Û Ä o ËFÇ Ñ Å ÂMÕ × Ç Ë Á(ß ÇÃà t » & ë ½ Ð ° á ß@ï × ˆ çJ%8G âRã:ä!å-æ x çèWé × ëê ¨ šë ˆ E Í@¾ z¥ì &œ ¾R—Ìëí îM зë  ª ë ï — W ¾€ª€ï ¢ C G ä!54   [ðt » ­ ô &  ÷ òñ`ó kôeôãÖ È oõ— Öi× ë r ö ÷ ø ù ú û ü         ! 
" #%$ & ' ('*) +-,.0/1(.325476*8 2:9;( <='*> ?@BADCEAF*GH,JI*KL1M,JI 'ON A:P A QRTSVUE> WXZYD[\A ] Q I QB'*^ _`,ab,1"cOd;e\A < AfdgehAai' [\> jlkmgnop[\,qgI r A s tvuwuyx!z { | } } ~ € h‚ ƒ {…„‡†Vˆ }Љ‹ †VˆŒ h;Ž\E‘ {  ’ }”“}  | }0{ –•f ’ {—  †  ˜ { š™=|  ˜ › œ  | œ ƒD ~ ™gž ƒ † } Ÿ:ƒ  =¡£¢ A¥¤B,N;aJ¦5§E, <h¨\> ©lm\ª¬«Ea\> ­@ A I*ª > ®l'M¯J°;±p( ²%'*> ³µ´Z,iI*A uh¶ QI ¯ R Qdh9°g·¸B¹ ºh·¹h»B¼ ½ ,i¾hAfa ¶¿­!ÀLÁ °E ,B4\Ã-Ä ÅÇÆÈ AfmÊÉÌË!Í + (ahÎÏ2£m=Ð\IÑA ,B4Ò2LÓ!Ô ÅHÕ » ÖB×\»fÎhؖÙwÙÚ2¬my2¬4h'ÛL25ÎhQ < ' Ä Ü @BA [ ¯ mh'…NhA '*> j r!ÝEA 4 IQÃ\Þ A'Oß, I Kp1V, IÌP AQ Ãlà…¡£á Ü ´ZA![\Af]hâ IãQ' Ä ä ,Bm5,1hc…mh9\A < A”a;Î\A aJ'å[ Ä ælç a;( à [\, 4 I*r Afªè uÚé êHë {ì œ } ~ í îh‚ ƒ { ’"†Mˆ }HïhððïE•  † }ñ  †  ò {  hó {  ò iô ‘ { î ’ }õ  | } { Lö5} Ž z B ö5} †O÷ { zùø ƒ  Jú A QB'' à A ° û 2 °hcüSMý;»B» þ ý »fÿ½  NhAfd _ À Qa;Ζ2¥m;ÎgI…A”, È 2  _ Õ » ×× ý Ø Ù ½ q II*A aJ' ¶ , 97A [ 4 h'ON;IA ª , à 9 Í [¿c*´ < o AZA”m â F á Ë , m  1D'*NhA–§EA IQ q F*S q ' A m q 'Û 9 Q < '*> j @ A CEA ' ß , I K j u u •! " # ó { Ž z h {%$ ™ {    ~'& ™hôя ô } ‘  ò z | ŽJ ( † ƒ ý*) Õ+-, u ¶ Q ¯/.× × ý ® § A10243 t Í ä65 » ×*7iר898;: ˆ }=<pz ‚ B> ? 
ƒ A@%B € {  { ’DC }!E\*F   {AGH û > ©JI-A ¯ ° Ý ALKNM!,iIãK Á § A I Q4OP ¤RQ QBah9 ¤ 4h'*'*Afm ½S Õ »× ·¹ Ø 8 Ùµ[ < QUTA ,I ¡EÄ t ´ A 2¥9;Q < 'Ñ> Á @BA[i> ÁWV ahQ ÃYX I ,TùA ªª > © a V 2 ¯ C£A4hIQ  CEA ' ß ,[Z K\ ,h9\A à ª tµu u ö œ Ž z *] ö5} †…÷ {Bz ø ƒE’ { ë ‘ { Ÿ_^ Ža`… (  ‚   2¥c*É ½ , mb AIA aT AŠÉ IÑ,ac r”AA 9h> d a ç ª »¸\» ° [\a7, ß >  I 9 ° « ¡ je Î á d '*A 9f ¯ ¤ ,N7m3[åt 3 A mhKAI ¼ ¤-g ' '*AfmNh QYiù» × · ) Ø 8 Ù ½ Q à r 4 ²j A”4hI,lkfm ­  A nV'*> Ü Æ 4 A A 'po QB> ©'*A  A a ' 9h4 [ > Q ViahQ*I ° 25a;Q oT¯ ª A3Aa ½ ,[q < ,ªfQAr ' A ¦5c…msut v < A m;Ðwm\'ÑA ª + uwu É yxJz|{} NhA ª > t ª °6~¬a Ä Ó @BA I€‚ t *A ƒÌ[ T > æA m7'*> j…„‡†J4\A:A '  A ˆ 9 Ä t r wAI A Ӊ S c…m ª '> t ' È ' j Q*Š Ä Ó ,Bm (1‹ÌÉ  I ¯ '%Œ*T  a\Ä +Ž ÈhA °|€‘l’“- à AJ°” I(aT AJÓ ¤ È ' 'OA a ½ Ó Qm Î À A IQ4 à  ¤‡• Õ » ×B× » Ø0ÙÚÙ9–—I-> t ah9š[iAf<gQBIQ'*Ä Å ,ia¿, 1¥[i,J4hI‚T A ª u ÉQ I!5c Ó Á 2¬m¿2 9g(< '*> ˜ @ Œ‡2 ‹ ç , I > ™' ¾ ´›š‡œ A9¿,ia¿CpA4hIO,[=ž Ÿ A '*> ˜ TD25I T ¾ >  'OA'T '*4hI…A  uwu ™y> ¡ ‚ g ê z { ì } ƒ ƒ  í î¢ ý ÿ ž tÇ»!£ »'¤ ° ¥  ª€ŒA¦ > j Œ I¨§6T > ? AaT A:É 4© à > ªª  A I ¦*« É Ã Q'' ¤­¬ (ah9 ” Q ç V Ä ® ap¯ Á 898 2°j A ' ß , I K_±ü,iI"' N AY² A < Q I Q '…>  ,d ,1Ì[  4\I‚TfA”ªp' N QB'H( I A [ 4i<=AIO> Á³q| <=,UA 9LQa\9 3 AfR%( ¯A 9 Å H u Œ¥~*F;| } ƒ  ´  ö5} Ž z aµ •  ’ {z ŸLA`O ò {  ê z { ì } ƒ!¶    ¢E™ žƒ‚`}”Ÿ:ƒ¸· ¶ , I ç QBa ¹ Q 4B1ü´ Q dhd É 4 º%> t ª N7A I ª ° ² (a ¶ Q' A ,\°Ì»!»i× ý¼ ½ > ³-'Ñ' ,[¾ eY¿ 2 Å Q m;9D25I*I A ç 47> #ÀDtÁ » × · × , 8 Ù ½ ¶ Í [ cÑai'*A ç IQB' Ä ¿ ,Ja  1 § AfI”Q 4 à 'OS ¤ q 'Ñ'A”m ½ A ÃTà ªÌ1ü,I [\A < Q I”QAÑ> Âl,Bm  1E[ , È I TA”ª j H uÃ0{z ø ƒ ˆ {Ä {  Œ5h { ¢ÆÅyÇ ™È  ~ ö¥} Ž z  ™;žƒ † œ ŸDƒ¥É ,iI*' à Q ahÐ ° Í I AVB, d j ¹ ‹ 4 ß A IH2¥r Q 9 A ´Z> Ó T5É"IOAªª °6j ,iI ßHA!%à °6 2 j
1991
17
482
Learning in Feedforward Networks with Nonsmooth Functions

Nicholas J. Redding*
Information Technology Division
Defence Science and Tech. Org.
P.O. Box 1600 Salisbury
Adelaide SA 5108 Australia

T. Downs
Intelligent Machines Laboratory
Dept of Electrical Engineering
University of Queensland
Brisbane Q 4072 Australia

Abstract

This paper is concerned with the problem of learning in networks where some or all of the functions involved are not smooth. Examples of such networks are those whose neural transfer functions are piecewise-linear and those whose error function is defined in terms of the ℓ∞ norm. Up to now, networks whose neural transfer functions are piecewise-linear have received very little consideration in the literature, but the possibility of using an error function defined in terms of the ℓ∞ norm has received some attention. In this latter work, however, the problems that can occur when gradient methods are used for nonsmooth error functions have not been addressed. In this paper we draw upon some recent results from the field of nonsmooth optimization (NSO) to present an algorithm for the nonsmooth case. Our motivation for this work arose out of the fact that we have been able to show that, in backpropagation, an error function based upon the ℓ∞ norm overcomes the difficulties which can occur when using the ℓ₂ norm.

1 INTRODUCTION

This paper is concerned with the problem of learning in networks where some or all of the functions involved are not smooth. Examples of such networks are those whose neural transfer functions are piecewise-linear and those whose error function is defined in terms of the ℓ∞ norm.

*The author can be contacted via email at internet address redding@itd.dsto.oz.au.

Up to now, networks whose neural transfer functions are piecewise-linear have received very little consideration in the literature,
but the possibility of using an error function defined in terms of the ℓ∞ norm has received some attention [1]. In the work described in [1], however, the problems that can occur when gradient methods are used for nonsmooth error functions have not been addressed. In this paper we draw upon some recent results from the field of nonsmooth optimization (NSO) to present an algorithm for the nonsmooth case. Our motivation for this work arose out of the fact that we have been able to show [2]¹ that an error function based upon the ℓ∞ norm overcomes the difficulties which can occur when using backpropagation's ℓ₂ norm [4].

The framework for NSO is the class of locally Lipschitzian functions [5]. Locally Lipschitzian functions are a broad class of functions that include, but are not limited to, "smooth" (completely differentiable) functions. (Note, however, that this framework does not include step-functions.) We here present a method for training feedforward networks (FFNs) whose behaviour can be described by a locally Lipschitzian function y = f(w, x), where the input vector x = (x₁, ..., xₙ) is an element of the set of patterns X ⊂ Rⁿ, w is the weight vector, and y ∈ Rᵐ is the m-dimensional output. The possible networks that fit within the locally Lipschitzian framework include any network that has a continuous, piecewise differentiable description, i.e., continuous functions with nondifferentiable points ("nonsmooth functions"). Training a network involves the selection of a weight vector w* which minimizes an error function E(w). As long as the error function E is locally Lipschitzian, then it can be trained by the procedure that we will outline, which is based upon a new technique for NSO [6].

In Section 2, a description of the difficulties that can occur when gradient methods are applied to nonsmooth problems is presented. In Section 3, a short overview of the Bundle-Trust algorithm [6] for NSO is presented.
And in Section 4, details of applying an NSO procedure to training networks with an ℓ∞-based error function are presented, along with simulation results that demonstrate the viability of the technique.

2 FAILURE OF GRADIENT METHODS

Two difficulties which arise when gradient methods are applied to nonsmooth problems will be discussed here. The first is that gradient descent sometimes fails to converge to a local minimum, and the second relates to the lack of a stopping criterion for gradient methods.

2.1 THE "JAMMING" EFFECT

We will now show that gradient methods can fail to converge to a local minimum (the "jamming" effect [7,8]). The particular example used here is taken from [9]. Consider the following function, which has a minimum at the point w* = (0, 0):

    f₁(w) = 3(w₁² + 2w₂²).   (1)

If we start at the point w₀ = (2, 1), it is easily shown that a steepest descent algorithm² would generate the sequence w₁ = (2, −1)/3, w₂ = (2, 1)/9, ..., so that the sequence {w_k} oscillates between points on the two half-lines w₁ = 2w₂ and w₁ = −2w₂ for w₁ ≥ 0, converging to the optimal point w* = (0, 0). Next, from the function f₁, create a new function f₂ in the following manner:

    f₂(w) = √(f₁(w)) = √(3(w₁² + 2w₂²)).   (2)

The gradient at any point of f₂ is proportional to the gradient at the same point on f₁, so the sequence of points generated by a gradient descent algorithm starting from (2, 1) on f₂ will be the same as the case for f₁, and will again converge³ to the optimal point, again w* = (0, 0). Lastly, we shift the optimal point away from (0, 0), but keep a region including the sequence {w_k} unchanged, to create a new function f₃(w):

    f₃(w) = √(3(w₁² + 2w₂²))     if 0 ≤ |w₂| ≤ 2w₁,
    f₃(w) = (√3/3)(w₁ + 4|w₂|)   elsewhere.   (3)

¹This is quite simple, using a theorem due to Krishnan [3].
²This is achieved by repeatedly performing a line search along the steepest descent direction.

Figure 1: A contour plot of the function f₃ (showing its nondifferentiable half-line).

The new function f₃, depicted in fig.
1, is continuous, has a discontinuous derivative only on the half-line w₁ ≤ 0, w₂ = 0, and is convex with a "minimum" as w₁ → −∞. In spite of this, the steepest descent algorithm still converges to the now nonoptimal "jamming" point (0, 0). A multitude of possible variations to f₁ exist that will achieve a similar result, but the point is clear: gradient methods can lead to trouble when applied to nonsmooth problems. This lesson is important, because the backpropagation learning algorithm is a smooth gradient descent technique, and as such will have the difficulties described when it, or an extension (e.g., [1]), is applied to a nonsmooth problem.

2.2 STOPPING CRITERION

The second significant problem associated with smooth descent techniques in a nonsmooth context occurs with the stopping criterion. In normal smooth circumstances, a stopping criterion is determined using

    ‖∇f‖ ≤ ε,   (4)

where ε is a small positive quantity determined by the required accuracy. However, it is frequently the case that the minimum of a nonsmooth function occurs at a nondifferentiable point or "kink", and the gradient is of little value around these points. For example, the gradient of f(w) = |w| has a magnitude of 1 no matter how close w is to the optimum at w = 0.

³Note that for this new sequence of points, the gradient no longer converges to 0 at (0, 0), but oscillates between the values √2(1, ±1).

3 NONSMOOTH OPTIMIZATION

For any locally Lipschitzian function f, the generalized directional derivative always exists, and can be used to define a generalized gradient or subdifferential, denoted by ∂f, which is a compact convex set⁴ [5]. A particular element g ∈ ∂f(w) is termed a subgradient of f at w [5,10]. In situations where f is strictly differentiable at w, the generalized gradient of f at w is equal to the gradient, i.e., ∂f(w) = ∇f(w).
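The jamming behaviour described in Section 2.1 can be reproduced numerically. The sketch below is an illustration, not code from the paper: it implements a piecewise form assumed from (3), takes exact-line-search steepest descent steps from (2, 1), and shows the iterates collapsing onto the nonoptimal point (0, 0) even though f₃ decreases without bound as w₁ goes to minus infinity.

```python
import math

def f3(w):
    # Piecewise form assumed from (3): smooth inside the wedge 0 <= |w2| <= 2*w1,
    # linear elsewhere (the constant sqrt(3)/3 makes the two pieces match).
    w1, w2 = w
    if 0.0 <= abs(w2) <= 2.0 * w1:
        return math.sqrt(3.0 * (w1 ** 2 + 2.0 * w2 ** 2))
    return (math.sqrt(3.0) / 3.0) * (w1 + 4.0 * abs(w2))

def steepest_descent_step(w):
    # Inside the wedge, grad f3 is parallel to grad f1 = (6*w1, 12*w2), and the
    # exact line search along -grad f1 has the closed-form step size t below.
    w1, w2 = w
    g1, g2 = 6.0 * w1, 12.0 * w2
    t = (g1 * w1 + 2.0 * g2 * w2) / (g1 ** 2 + 2.0 * g2 ** 2)
    return (w1 - t * g1, w2 - t * g2)

w = (2.0, 1.0)
print(steepest_descent_step(w))            # approximately (2/3, -1/3), the first iterate
for _ in range(40):
    w = steepest_descent_step(w)
print(w)                                   # jammed near the nonoptimal (0, 0)
print(f3((-10.0, 0.0)) < f3((0.0, 0.0)))   # True: f3 keeps decreasing for w1 < 0
```

Any method restricted to strictly local gradient information exhibits the same behaviour here; this is exactly the failure the bundle machinery of the next section is designed to avoid.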
We will now discuss the basic aspects of NSO and in particular the Bundle-Trust (BT) algorithm [6]. Quite naturally, subgradients in NSO provide a substitute for the gradients in standard smooth optimization using gradient descent. Accordingly, in an NSO procedure, we require the following to be satisfied:

    At every w, we can compute f(w) and any g ∈ ∂f(w).   (5)

To overcome the jamming effect, however, it is not sufficient to replace the gradient with a subgradient in a gradient descent algorithm: the strictly local information that this provides about the function's behaviour can be misleading. For example, an approach like this will not change the descent path taken from the starting point (2, 1) on the function f₃ (see fig. 1). The solution to this problem is to provide some "smearing" of the gradient information by enriching the information at w with knowledge of its surroundings. This can be achieved by replacing the strictly local subgradients g ∈ ∂f(w) by g ∈ ∪_{v∈B} ∂f(v), where B is a suitable neighbourhood of w, and then defining the ε-generalized gradient ∂_ε f(w) as

    ∂_ε f(w) ≜ co{ ∪_{v∈B(w,ε)} ∂f(v) },   (6)

where ε > 0 and small, and co denotes a convex hull. These ideas were first used by [7] to overcome the lack of continuity in minimax problems, and have become the basis for extensive work in NSO.

In an optimization procedure, points in a sequence {w_k, k = 0, 1, ...} are visited until a point is reached at which a stopping criterion is satisfied. In an NSO procedure, this occurs when a point w_k is reached that satisfies the condition 0 ∈ ∂_ε f(w_k), and the point is said to be ε-optimal. That is, in the case of convex f, the point w_k is ε-optimal if

    f(w_k) ≤ f(w) + ε‖w − w_k‖ + ε for all w,   (7)

⁴In other words, a set of vectors will define the generalized gradient of a nonsmooth function at a single point, rather than a single vector in the case of smooth functions.

and in the case of nonconvex f,
    f(w_k) ≤ f(w) + ε‖w − w_k‖ + ε for all w ∈ B,   (8)

where B is some neighbourhood of w_k of nonzero dimension. Obviously, as ε → 0, then w_k → w* at which 0 ∈ ∂f(w*), i.e., w_k is "within ε" of the local minimum w*.

Usually the ε-generalized gradient is not available, and this is why the bundle concept is introduced. The basic idea of a bundle concept in NSO is to replace the ε-generalized gradient by some inner approximating polytope P which will then be used to compute a descent direction. If the polytope P is a sufficiently good approximation to f, then we will find a direction along which to descend (a so-called serious step). In the case where P is not a sufficiently good approximation to f to yield a descent direction, then we perform a null step, staying at our current position w, and try to improve P by adding another subgradient from ∂f(v) at some nearby point v to our current position w.

A natural way of approximating f is by using a cutting plane (CP) approximation. The CP approximation of f(w) at the point w_k is given by the expression [6]

    max_{1≤i≤k} { gᵢᵀ(w − wᵢ) + f(wᵢ) },   (9)

where gᵢ is a subgradient of f at the point wᵢ. We see then that (9) provides a piecewise linear approximation of convex⁵ f from below, which will coincide with f at all points wᵢ. For convenience, we redefine the CP approximation in terms of d = w − w_k, the vector difference of the point of approximation, w, and the current point in the optimization sequence, w_k, giving the CP approximation f_CP of f:

    f_CP(w_k, d) = max_{1≤i≤k} { gᵢᵀd + gᵢᵀ(w_k − wᵢ) + f(wᵢ) }.   (10)

Now, when the CP approximation is minimized to find a descent direction, there is no reason to trust the approximation far away from w_k. So, to discourage a large step size, a stabilizing term (1/(2t_k)) dᵀd, where t_k is positive, is added to the CP approximation.
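The two key properties of the CP model in (9), namely that it underestimates a convex f everywhere and coincides with f at the bundle points, are easy to check numerically. The sketch below is illustrative only; the convex test function f(w) = |w − 1| + w²/2 and the bundle points are assumptions, not taken from the paper.

```python
def f(w):
    # An assumed convex nonsmooth test function with a kink at w = 1.
    return abs(w - 1.0) + 0.5 * w * w

def subgrad(w):
    # One element of the subdifferential of f at w.
    return (1.0 if w >= 1.0 else -1.0) + w

def f_cp(bundle, w):
    # CP approximation (9): the upper envelope of the linearizations.
    return max(g * (w - wi) + fi for wi, fi, g in bundle)

pts = [-2.0, 0.0, 1.0, 3.0]
bundle = [(wi, f(wi), subgrad(wi)) for wi in pts]

# For convex f, each linearization lies below f (subgradient inequality), so
# the model underestimates f everywhere and is exact at the bundle points.
underestimates = all(f_cp(bundle, x / 10.0) <= f(x / 10.0) + 1e-12
                     for x in range(-40, 41))
tight = all(abs(f_cp(bundle, wi) - f(wi)) < 1e-12 for wi in pts)
print(underestimates, tight)   # True True
```

For nonconvex f the lower-bound property fails, which is why extra tolerance parameters are needed in that case (see footnote 5).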
If the CP approximation at w_k of f is good enough, then the d_k given by

    d_k = arg min_d  f_CP(w_k, d) + (1/(2t_k)) dᵀd   (11)

will produce a descent direction such that a line search along w_k + λd_k will find a new point w_{k+1} at which f(w_{k+1}) < f(w_k) (a serious step). It may happen that f_CP is such a poor approximation of f that a line search along d_k is not a descent direction, or yields only a marginal improvement in f. If this occurs, a null step is taken and one enriches the bundle of subgradients from which the CP approximation is computed by adding a subgradient from ∂f(w_k + λd_k) for small λ > 0. Each serious step guarantees a decrease in f, and a stopping criterion is provided by terminating the algorithm as soon as d_k in (11) satisfies the ε-optimality criterion, at which point w_k is ε-optimal. These details are the basis of bundle methods in NSO [9,10].

The bundle method described suffers from a weak point: its success depends on the delicate selection of the parameter t_k in (11) [6]. This weakness has led to the incorporation of a "trust region" concept [11] into the bundle method to obtain the BT (bundle-trust) algorithm [6].

⁵In the nonconvex f case, (9) is not an approximation to f from below, and additional tolerance parameters must be considered to accommodate this situation [6].

To incorporate a trust region, we define a "radius" that defines a ball in which we can "trust" that f_CP is a good approximation of f. In the BT algorithm, by following trust region concepts, the choice of t_k is not made a priori and is determined during the algorithm by varying t_k in a systematic way (trust part) and improving the CP approximation by null steps (bundle part) until a satisfactory CP approximation f_CP is obtained along with a ball (in terms of t_k) on which we can trust the approximation. Then the d_k in (11) will lead to a substantial decrease in f.
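A toy version of the whole loop, serious steps, null steps, and the stabilized subproblem (11), can be written in a few lines. The sketch below is an illustration under simplifying assumptions (a fixed t rather than the BT trust-region update, a one-dimensional f(w) = |w|, and a ternary search in place of a QP solver for (11)); it is not the BT algorithm itself.

```python
def bundle_step(w, bundle, t):
    # Minimize the CP model (10) plus the stabilizer from (11) over the trial
    # step d. The objective is convex in d, so a ternary search suffices here.
    def objective(d):
        cp = max(g * d + g * (w - wi) + fi for wi, fi, g in bundle)
        return cp + d * d / (2.0 * t)
    lo, hi = -100.0, 100.0
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if objective(m1) < objective(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

f = abs
subgrad = lambda w: 1.0 if w >= 0.0 else -1.0   # an element of the subdifferential

w = 5.0
bundle = [(w, f(w), subgrad(w))]
for _ in range(50):
    d = bundle_step(w, bundle, t=5.0)
    if abs(d) < 1e-6:
        break                                   # approximate optimality criterion
    v = w + d
    bundle.append((v, f(v), subgrad(v)))        # enrich the bundle in either case
    if f(v) < f(w):
        w = v                                   # serious step: descent achieved
    # otherwise: null step, stay at w with a richer model

print(w)   # close to 0, the minimizer of |w|
```

A real implementation solves (11) as a quadratic program and adjusts t_k as in the BT algorithm; the fixed t and one-dimensional search here are only for illustration.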
The full details of the BT algorithm can be found in [6], along with convergence proofs.

4 EXAMPLES

4.1 A SMOOTH NETWORK WITH NONSMOOTH ERROR FUNCTION

The particular network example we consider here is a two-layer FFN (i.e., one with a single layer of hidden units) where each output unit's value y_i is computed from its discriminant function α_{o_i} = w_{i0} + Σ_{j=1}^h w_{ij} z_j, by the transfer function y_i = tanh(α_{o_i}), where z_j is the output of the j-th hidden unit. The j-th hidden unit's output z_j is given by z_j = tanh(α_{h_j}), where α_{h_j} = v_{j0} + Σ_{k=1}^n v_{jk} x_k is its discriminant function. The ℓ∞ error function (which is locally Lipschitzian) is defined to be

    E(w) = max_{x∈X} max_{1≤i≤m} |α_{o_i}(x) − t_i(x)|,   (12)

where t_i(x) is the desired output of output unit i for the input pattern x ∈ X.

To make use of the BT algorithm described in the previous section, it is necessary to obtain an expression from which a subgradient at w for E(w) in (12) can be computed. Using the generalized gradient calculus in [5, Proposition 2.3.12], a subgradient g ∈ ∂E(w) is given by the expression⁶

    g = sgn(α_{o_{i'}}(x') − t_{i'}(x')) ∇_w α_{o_{i'}}(x')  for some i', x' ∈ J,   (14)

where J is the set of patterns and output indices for which E(w) in (12) obtains its maximum value, and the gradient ∇_w α_{o_{i'}}(x') is given by

    ∇_w α_{o_{i'}}(x') =  1                        w.r.t. w_{i'0}
                          z_j                      w.r.t. w_{i'j}
                          (1 − z_j²) w_{i'j}       w.r.t. v_{j0}
                          x_k (1 − z_j²) w_{i'j}   w.r.t. v_{jk}
                          0                        elsewhere       (15)

(note that here j = 1, 2, ..., h and k = 1, ..., n).

The BT technique outlined in the previous section was applied to the standard XOR and 8-3-8 encoder problems using the ℓ∞ error function in (12) and subgradients from (14,15).

⁶Note that for a function f(w) = |w| = max{w, −w}, the generalized gradient is given by the expression

    ∂f(w) =  1           w > 0
             co{1, −1}   w = 0
             −1          w < 0       (13)

and a suitable subgradient g ∈ ∂f(w) can be obtained by choosing g = sgn(w).
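A subgradient computed from (14) and (15) can be sanity-checked against finite differences of E wherever the maximum in (12) is attained by a single pattern (there E is locally differentiable and the subgradient equals the gradient). The sketch below does this for an assumed tiny 2-2-1 network on the XOR patterns; the weight layout and values are arbitrary choices for illustration, not taken from the paper.

```python
import math

# A 2-2-1 network: alpha_o(x) = w0 + w1*z1 + w2*z2, z_j = tanh(v_j0 + sum_k v_jk x_k).
# Weights packed as w = [w0, w1, w2, v10, v11, v12, v20, v21, v22] (assumed layout).

def alpha_o(w, x):
    z1 = math.tanh(w[3] + w[4] * x[0] + w[5] * x[1])
    z2 = math.tanh(w[6] + w[7] * x[0] + w[8] * x[1])
    return w[0] + w[1] * z1 + w[2] * z2

def E(w, patterns):
    # The l-infinity error function (12) for a single output unit.
    return max(abs(alpha_o(w, x) - t) for x, t in patterns)

def subgrad_E(w, patterns):
    # Equations (14)-(15): pick a pattern attaining the max in (12).
    x, t = max(patterns, key=lambda p: abs(alpha_o(w, p[0]) - p[1]))
    s = 1.0 if alpha_o(w, x) - t >= 0.0 else -1.0
    z1 = math.tanh(w[3] + w[4] * x[0] + w[5] * x[1])
    z2 = math.tanh(w[6] + w[7] * x[0] + w[8] * x[1])
    grad = [1.0, z1, z2,
            (1 - z1 ** 2) * w[1], (1 - z1 ** 2) * w[1] * x[0], (1 - z1 ** 2) * w[1] * x[1],
            (1 - z2 ** 2) * w[2], (1 - z2 ** 2) * w[2] * x[0], (1 - z2 ** 2) * w[2] * x[1]]
    return [s * g for g in grad]

w = [0.1, 0.2, -0.3, 0.05, 0.4, -0.2, 0.3, -0.1, 0.25]   # arbitrary weights
xor = [((0, 0), -1.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), -1.0)]

g = subgrad_E(w, xor)
for i in range(len(w)):   # central finite differences of E
    wp, wm = list(w), list(w)
    wp[i] += 1e-6
    wm[i] -= 1e-6
    fd = (E(wp, xor) - E(wm, xor)) / 2e-6
    assert abs(fd - g[i]) < 1e-4
print("subgradient agrees with finite differences of E")
```

Where the maximum in (12) is attained by several patterns at once, E is generally nondifferentiable and the finite-difference check no longer applies, but (14) still yields a valid subgradient there.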
In all test runs, the BT algorithm was run until convergence to a local minimum of the ℓ∞ error function occurred, with ε set at 10⁻⁴. On the XOR problem, over 20 test runs using a randomly initialized 2-2-1 network, an average of 52 function and subgradient evaluations were required. The minimum number of function and subgradient evaluations required in the test runs was 23 and the maximum was 126. On the 8-3-8 encoder problem, over 20 test runs using a randomly initialized 8-3-8 network, an average of 334 function and subgradient evaluations were required. For this problem, the minimum number of function and subgradient evaluations required in the test runs was 221 and the maximum was 512.

4.2 A NONSMOOTH NETWORK AND NONSMOOTH ERROR FUNCTION

In this section we will consider a particular example that employs a network function that is nonsmooth as well as a nonsmooth error function (the ℓ∞ error function of the previous example). Based on the piecewise-linear network employed by [12], let the i-th output of the network be given by the expression

    y_i = Σ_{k=1}^n u_{ik} x_k + Σ_{j=1}^h w_{ij} |Σ_{k=1}^n v_{jk} x_k + v_{j0}| + w_{i0},   (16)

with an ℓ∞-based error function

    E(w) = max_{x∈X} max_{1≤i≤m} |y_i(x) − t_i(x)|.   (17)

Once again using the generalized gradient calculus from [5, Proposition 2.3.12], a single subgradient g ∈ ∂E(w) is given by the expression g = sgn(y_{i'}(x') − t_{i'}(x')) ∇_w y_{i'}(x') for some i', x' ∈ J, where

    ∇_w y_{i'}(x') =  x_k                                          w.r.t. u_{i'k}
                      1                                            w.r.t. w_{i'0}
                      |Σ_k v_{jk} x_k + v_{j0}|                    w.r.t. w_{i'j}
                      w_{i'j} sgn(Σ_k v_{jk} x_k + v_{j0})         w.r.t. v_{j0}
                      x_k w_{i'j} sgn(Σ_k v_{jk} x_k + v_{j0})     w.r.t. v_{jk}
                      0                                            elsewhere      (18)

(note that j = 1, 2, ..., h and k = 1, 2, ..., n).

In all cases the ε-stopping criterion is set at 10⁻⁴. On the XOR problem, over 20 test runs using a randomly initialized 2-2-1 network, an average of 43 function and subgradient evaluations were required. The minimum number of function and subgradient evaluations required in the test runs was 30 and the maximum was 60. On the 8-3-8 encoder problem, over 20 test runs using a randomly initialized 8-3-8 network, an average of 445 function and subgradient evaluations were required.
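The same finite-difference check works for the piecewise-linear network, provided the weights are chosen so that no hidden-unit argument sits exactly on a kink. The sketch below is illustrative only: it assumes absolute-value hidden units for the piecewise-linear nonlinearity in (16) (a common canonical choice) and arbitrary weight values, neither of which is taken from the paper.

```python
# Single-output piecewise-linear network (assumed form of (16)):
# y(x) = u1*x1 + u2*x2 + w0 + w1*|s1(x)| + w2*|s2(x)|, s_j = v_j0 + sum_k v_jk x_k.
# Weights packed as w = [u1, u2, w0, w1, w2, v10, v11, v12, v20, v21, v22].

def y(w, x):
    s1 = w[5] + w[6] * x[0] + w[7] * x[1]
    s2 = w[8] + w[9] * x[0] + w[10] * x[1]
    return w[0] * x[0] + w[1] * x[1] + w[2] + w[3] * abs(s1) + w[4] * abs(s2)

def E(w, patterns):
    # The l-infinity error function (17) for a single output.
    return max(abs(y(w, x) - t) for x, t in patterns)

def sgn(a):
    return 1.0 if a >= 0.0 else -1.0

def subgrad_E(w, patterns):
    # A subgradient in the style of (18): take a pattern attaining the max, then
    # a generalized gradient of y, using sgn for the absolute-value units.
    x, t = max(patterns, key=lambda p: abs(y(w, p[0]) - p[1]))
    s = sgn(y(w, x) - t)
    s1 = w[5] + w[6] * x[0] + w[7] * x[1]
    s2 = w[8] + w[9] * x[0] + w[10] * x[1]
    grad = [x[0], x[1], 1.0, abs(s1), abs(s2),
            w[3] * sgn(s1), w[3] * sgn(s1) * x[0], w[3] * sgn(s1) * x[1],
            w[4] * sgn(s2), w[4] * sgn(s2) * x[0], w[4] * sgn(s2) * x[1]]
    return [s * g for g in grad]

w = [0.3, -0.2, 0.1, 0.4, -0.5, 0.2, 0.6, -0.3, -0.4, 0.1, 0.5]   # arbitrary
xor = [((0, 0), -1.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), -1.0)]

g = subgrad_E(w, xor)
for i in range(len(w)):   # central finite differences of E, away from any kink
    wp, wm = list(w), list(w)
    wp[i] += 1e-6
    wm[i] -= 1e-6
    fd = (E(wp, xor) - E(wm, xor)) / 2e-6
    assert abs(fd - g[i]) < 1e-4
print("piecewise-linear subgradient agrees with finite differences")
```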
For this problem, the minimum number of function and subgradient evaluations required in the test runs was 386 and the maximum was 502.

5 CONCLUSIONS

We have demonstrated the viability of employing NSO for training networks in the case where standard procedures, with their implicit smoothness assumption, would have difficulties or find the task impossible. The particular nonsmooth examples we considered involved an error function based on the ℓ∞ norm, for the case of a network with sigmoidal characteristics and a network with a piecewise-linear characteristic.

Nonsmooth optimization problems can be dealt with in many different ways. A possible alternative approach to the one presented here (that works for most NSO problems) is to express the problem as a composite function and then solve it using the exact penalty method (termed composite NSO) [11]. Fletcher [11, p. 358] states that in practice this can require a great deal of storage or be too complicated to formulate. In contrast, the BT algorithm solves the more general basic NSO problem and so can be more widely applied than techniques based on composite functions. The BT algorithm is simpler to set up, but this can be at the cost of algorithm complexity and a computational overhead. The BT algorithm, however, does retain the gradient descent flavour of backpropagation because it uses the generalized gradient concept along with a chain rule for computing these (generalized) gradients. Nongradient-based and stochastic methods for NSO do exist, but they were not considered here because they do not retain the gradient-based deterministic flavour. It would be useful to see if these other techniques are faster for practical problems. The message should be clear, however: smooth gradient techniques should be treated with suspicion when a nonsmooth problem is encountered, and in general the more complicated nonsmooth methods should be employed.

References

[1] P.
Burrascano, "A norm selection criterion for the generalized delta rule," IEEE Transactions on Neural Networks 2 (1991), 125-130.
[2] N. J. Redding, "Some Aspects of Representation and Learning in Artificial Neural Networks," University of Queensland, PhD Thesis, June 1991.
[3] T. Krishnan, "On the threshold order of a Boolean function," IEEE Transactions on Electronic Computers EC-15 (1966), 369-372.
[4] M. L. Brady, R. Raghavan & J. Slawny, "Backpropagation fails to separate where perceptrons succeed," IEEE Transactions on Circuits and Systems 36 (1989).
[5] F. H. Clarke, Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons, New York, NY, 1983.
[6] H. Schramm & J. Zowe, "A version of the bundle idea for minimizing a nonsmooth function: conceptual ideas, convergence analysis, numerical results," SIAM Journal on Optimization (1991), to appear.
[7] V. F. Dem'yanov & V. N. Malozemov, Introduction to Minimax, John Wiley & Sons, New York, NY, 1974.
[8] P. Wolfe, "A method of conjugate subgradients for minimizing nondifferentiable functions," in Nondifferentiable Optimization, M. L. Balinski & P. Wolfe, eds., Mathematical Programming Study #3, North-Holland, Amsterdam, 1975, 145-173.
[9] C. Lemarechal, "Nondifferentiable Optimization," in Optimization, G. L. Nemhauser, A. H. G. Rinnooy Kan & M. J. Todd, eds., Handbooks in Operations Research and Management Science #1, North-Holland, Amsterdam, 1989, 529-572.
[10] K. C. Kiwiel, Methods of Descent for Nondifferentiable Optimization, Lecture Notes in Mathematics #1133, Springer-Verlag, New York-Heidelberg-Berlin, 1985.
[11] R. Fletcher, Practical Methods of Optimization, second edition, John Wiley & Sons, New York, NY, 1987.
[12] R. Batruni, "A multilayer neural network with piecewise-linear structure and backpropagation learning," IEEE Transactions on Neural Networks 2 (1991), 395-403.
Constructing Proofs in Symmetric Networks Gadi Pinkas Computer Science Department Washington University Campus Box 1045 St. Louis, MO 63130 Abstract This paper considers the problem of expressing predicate calculus in connectionist networks that are based on energy minimization. Given a first-order-logic knowledge base and a bound k, a symmetric network is constructed (like a Boltzmann machine or a Hopfield network) that searches for a proof for a given query. If a resolution-based proof of length no longer than k exists, then the global minima of the energy function that is associated with the network represent such proofs. The network that is generated is of size cubic in the bound k and linear in the knowledge size. There are no restrictions on the type of logic formulas that can be represented. The network is inherently fault tolerant and can cope with inconsistency and nonmonotonicity. 1 Introduction The ability to reason from acquired knowledge is undoubtedly one of the basic and most important components of human intelligence. Among the major tools for reasoning in the area of AI are deductive proof techniques. However, traditional methods are plagued by intractability, inability to learn and adjust, as well as by inability to cope with noise and inconsistency. A connectionist approach may be the missing link: fine-grain, massively parallel architecture may give us real-time approximation; networks are potentially trainable and adjustable; and they may be made tolerant to noise as a result of their collective computation. Most connectionist reasoning systems that implement parts of first-order logic (see for examples: [Holldobler 90], [Shastri et al. 90]) use the spreading activation paradigm and usually trade expressiveness for time efficiency.
In contrast, this paper uses the energy minimization paradigm (like [Derthick 88], [Ballard 86] and [Pinkas 91c]), representing an intractable problem but trading time for correctness; i.e., as more time is given, the probability of converging to a correct answer increases. Symmetric connectionist networks used for constraint satisfaction are the target platform [Hopfield 84b], [Hinton, Sejnowski 86], [Peterson, Hartman 89], [Smolensky 86]. They are characterized by a quadratic energy function that should be minimized. Some of the models in the family may be seen as performing a search for a global minimum of their energy function. The task is therefore to represent logic deduction that is bound by a finite proof length as energy minimization (without a bound on the proof length, the problem is undecidable). When a query is clamped, the network should search for a proof that supports the query. If a proof to the query exists, then every global minimum of the energy function associated with the network represents a proof. If no proof exists, the global minima represent the lack of a proof. The paper elaborates the propositional case; however, due to space limitations, the first-order (FOL) case is only sketched. For more details and full treatment of FOL see [Pinkas 91j]. 2 Representing proofs of propositional logic I'll start by assuming that the knowledge base is propositional. The proof area: A proof is a list of clauses ending with the query such that every clause used is either an original clause, a copy (or weakening) of a clause that appears earlier in the proof, or a result of a resolution step of the two clauses that appeared just earlier. The proof emerges as an activation pattern on special unit structures called the proof area, and is represented in reverse to the common practice (the query appears first).
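The quadratic-energy search described here can be illustrated in miniature. The sketch below is an assumption-laden toy, not the paper's network: it relaxes a small symmetric network by asynchronous unit updates, which never increase the quadratic energy when the weight matrix is symmetric with zero diagonal.

```python
import random

def energy(W, theta, s):
    # Quadratic energy of a symmetric (Hopfield-style) network:
    # E(s) = -1/2 * sum_ij W[i][j] s_i s_j - sum_i theta[i] s_i
    n = len(s)
    quad = sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(theta[i] * s[i] for i in range(n))

def relax(W, theta, s, sweeps=50, seed=0):
    # Asynchronous updates: turn unit i on iff its net input is positive.
    # With symmetric W and zero diagonal, each update never increases E.
    rng = random.Random(seed)
    n = len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):   # random update order per sweep
            net = sum(W[i][j] * s[j] for j in range(n)) + theta[i]
            s[i] = 1 if net > 0 else 0
    return s
```

With two mutually inhibitory units and positive biases (a winner-take-all pair, the same building block used for the WTA constraints later in this paper), relaxation settles into a state with exactly one unit on, at lower energy than the all-on start.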
For example: given a knowledge base of the following clauses: 1) A 2) ¬A ∨ B ∨ C 3) ¬B ∨ D 4) ¬C ∨ D we would like to prove the query D, by generating the following list of clauses: 1) D (obtained by resolution of clauses 2 and 3 by canceling A); 2) A (original clause no. 1); 3) ¬A ∨ D (obtained by resolution of clauses 4 and 5 by canceling C); 4) ¬C ∨ D (original clause no. 4); 5) ¬A ∨ C ∨ D (obtained by resolution of clauses 6 and 7 by canceling B); 6) ¬B ∨ D (original clause no. 3); 7) ¬A ∨ B ∨ C (original clause no. 2). Each clause in the proof is either an original clause, a copy of a clause from earlier in the proof, or a resolution step. The matrix C in figure 1 functions as a clause list. This list represents an ordered set of clauses that form the proof. The query clauses are clamped onto this area and activate hard constraints that force the rest of the units of the matrix to form a valid proof (if it exists). Figure 1: The proof area for a propositional case. Variable binding is performed by dynamic allocation of instances using a technique similar to [Anandan et al. 89] and [Barnden 91]. In this technique, if two symbols need to be bound together, an instance is allocated from a pool of general purpose instances, and is connected to both symbols. An instance can be connected to a literal in a clause, to a predicate type, to a constant, to a function or to a slot of another instance (for example, a constant that is bound to the first slot of a predicate). The clauses that participate in the proof are represented using a 3-dimensional matrix (C±,i,j) and a 2-dimensional matrix (P_A,j) as illustrated in figure 1.
The rows of C represent clauses of the proof, while the rows of P represent atomic propositions. The columns of both matrices represent the pool of instances used for binding propositions to clauses. A clause is a list of negative and positive instances that represent literals. The instance thus behaves as a two-way pointer that binds composite structures like clauses with their constituents (the atomic propositions). A row i in the matrix C represents a clause which is composed of pairs of instances. If the unit C+,i,j is set, then the matrix represents a positive literal in clause i. If P_A,j is also set, then C+,i,j represents a positive literal of clause i that is bound to the atomic proposition A. Similarly, C-,i,j represents a negative literal. The first row of matrix C in the figure is the query clause D. It contains only one positive literal that is bound to atomic proposition D via instance 4. For another example consider the third row of C, which represents a clause of two literals: a positive one that is bound to D via instance 4, and a negative one bound to A via instance 1 (it is the clause ¬A ∨ D, generated as a result of a resolution step). Participation in the proof: The vector IN represents whether clauses in C participate in the proof. In our example, all the clauses are in the proof; however, in the general case some of the rows of C may be meaningless. When IN_i is on, it means that the clause i is in the proof and must be proved as well. Every clause that participates in the proof is either a result of a resolution step (RES_i is set), a copy of some clause (CPY_i is set), or an original clause from the knowledge base (KB_i is set). The second clause of C in figure 1, for example, is an original clause of the knowledge base. If a clause j is copied, it must be in the proof itself and therefore IN_j is set.
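As a data-structure sketch, the binding of clauses to atomic propositions through instances can be mimicked with ordinary sets. This is an illustrative reconstruction; the container choices and function names are mine, not the paper's.

```python
# Hypothetical encoding of the proof area: a clause row of C is given as two
# sets of instance numbers (positive and negative literals), and P[A] is the
# set of instances bound to proposition A.  Instances: 1->A, 2->B, 3->C, 4->D.
P = {'A': {1}, 'B': {2}, 'C': {3}, 'D': {4}}

def decode_clause(c_plus_row, c_minus_row, P):
    # Recover a clause, as a set of signed proposition names, from one row of C.
    name = {j: a for a, js in P.items() for j in js}   # instance -> proposition
    return {name[j] for j in c_plus_row} | {'~' + name[j] for j in c_minus_row}

# Row 3 of C in the example: positive literal on instance 4 (D) and negative
# literal on instance 1 (A), i.e. the clause ~A v D.
row3 = decode_clause({4}, {1}, P)
```

The instance numbers act as the two-way pointers of the text: row 1 of C, with only instance 4 positive, decodes to the query clause D.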
Similarly, if clause i is a result of a resolution step, then the two resolved clauses must also be in the proof (IN_{i+1} and IN_{i+2}) and therefore must themselves be resolvents, copies or originals. This chain of constraints continues until all constraints are satisfied and a valid proof is generated. Posting a query: The user posts a query by clamping its clauses onto the first rows of C and setting the appropriate IN units. This indicates that the query clauses participate in the proof and should be proved by either a resolution step, a copy step or an original clause. Figure 1 represents the complete proof for the query D. We start by allocating an instance (4) for D in the P matrix, and clamping a positive literal D in the first row of C (C+,1,4); the rest of the first row's units are clamped zero. The unit IN_1 is biased (to have the value of one), indicating that the query is in the proof; this causes a chain of constraints to be activated that are satisfied only by a valid proof. If no proof exists, the IN_1 unit will become zero; i.e., the global minimum is obtained by setting IN_1 to zero despite the bias. Representing resolution steps: The vector RES is a structure of units that indicates which are the clauses in C that are obtained by a resolution step. If RES_i is set, then the i-th row is obtained by resolving row i + 1 of C with row i + 2. Thus, the unit RES_1 in figure 1 indicates that the clause D of the first row of C is a resolvent of the second and the third rows of C, representing A and ¬A ∨ D respectively. Two literals cancel each other if they have opposite signs and are represented by the same instance. In figure 1, literal A of the second row of C and literal ¬A of the third row cancel each other, generating the clause of the first row. The rows of matrix R represent literals canceled by resolution steps.
If row i of C is the result of a resolution step, there must be one and only one instance j such that both clause i + 1 and clause i + 2 include it with opposite signs. For example (figure 1): clause D in the first row of C is the result of resolving clause A with clause ¬A ∨ D, which are in the second and third rows of C respectively. Instance 1, representing atomic proposition A, is the one that is canceled; R_{1,1} is therefore set, indicating that clause 1 is obtained by a resolution step that cancels the literals of instance 1. Copied and original clauses: The matrix D indicates which clauses are copied to other clauses in the proof area. Setting D_{i,j} means that clause i is obtained by copying (or weakening) clause j into clause i (the example does not use copy steps). The matrix K indicates which original knowledge-base clauses participate in the proof. The unit K_{i,j} indicates that clause i in the proof area is an original clause, and the syntax of the j-th clause in the knowledge base must be imposed on the units of clause i. In figure 1 for example, clause 2 in the proof (the second row in C) assumes the identity of clause number 1 in the knowledge base and therefore K_{2,1} is set. 3 Constraints We are now ready to specify the constraints that must be satisfied by the units so that a proof is found. The constraints are specified as well-formed logic formulas. For example, the formula (A ∨ B) ∧ C imposes a constraint over the units (A,B,C) such that the only possible valid assignments to those units are (011), (101), (111). A general method to implement an arbitrary logical constraint on connectionist networks is shown in [Pinkas 90b]. Most of the constraints specified in this section are hard constraints; i.e., they must be satisfied for a valid proof to emerge. Towards the end of this section, some soft constraints are presented.
In-proof constraints: If a clause participates in the proof, it must be either a result of a resolution step, a copy step or an original clause. In logic, the constraints may be expressed as: ∀i : IN_i → RES_i ∨ CPY_i ∨ KB_i. The three units (per clause i) constitute a winner-take-all subnetwork (WTA). This means that only one of the three units is actually set. The WTA constraints may be expressed as: RES_i → ¬CPY_i ∧ ¬KB_i; CPY_i → ¬RES_i ∧ ¬KB_i; KB_i → ¬RES_i ∧ ¬CPY_i. The WTA property may be enforced by inhibitory connections between every pair of the three units. Copy constraints: If CPY_i is set then clause i must be a copy of another clause j in the proof. This can be expressed as ∀i : CPY_i → ∨_j (D_{i,j} ∧ IN_j). The rows of D are WTAs, allowing i to be a copy of only one j. In addition, if clause j is copied or weakened into clause i then every unit set in clause j must also be set in clause i. This may be specified as: ∀i,j,l : D_{i,j} → ((C+,i,l ← C+,j,l) ∧ (C-,i,l ← C-,j,l)). Resolution constraints: If a clause i is a result of resolving the two clauses i + 1 and i + 2, then there must be one and only one instance (j) that is canceled (represented by R_{i,j}), and C_i is obtained by copying both the instances of C_{i+1} and C_{i+2}, without the instance j. These constraints may be expressed as: ∀i : RES_i → ∨_j R_{i,j} (at least one instance is canceled); ∀i,j,j', j' ≠ j : R_{i,j} → ¬R_{i,j'} (only one instance is canceled, WTA); ∀i,j : R_{i,j} → (C+,i+1,j ∧ C-,i+2,j) ∨ (C-,i+1,j ∧ C+,i+2,j) (cancel literals with opposite signs).
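A quick way to see what the in-proof and WTA constraints permit is to enumerate the assignments of the four units for one clause and count violations. This is a brute-force sketch for checking the constraints, not the connectionist implementation; the function names are assumptions.

```python
from itertools import product

def violations(IN, RES, CPY, KB):
    # Hard constraints from the text, counted as violations (0 = valid):
    # IN -> (RES or CPY or KB), and RES/CPY/KB mutually exclusive (WTA).
    v = 0
    if IN and not (RES or CPY or KB):
        v += 1
    v += (RES and CPY) + (RES and KB) + (CPY and KB)
    return v

valid = [bits for bits in product((0, 1), repeat=4) if violations(*bits) == 0]
```

Of the 16 assignments of (IN, RES, CPY, KB), exactly 7 survive: the four with IN off and at most one justification unit on, and the three with IN on and exactly one justification.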
∀i : RES_i → IN_{i+1} ∧ IN_{i+2} (the two resolvents are also in the proof); ∀i,j : RES_i → (C+,i,j ↔ (C+,i+1,j ∨ C+,i+2,j) ∧ ¬R_{i,j}) (copy positive literals); ∀i,j : RES_i → (C-,i,j ↔ (C-,i+1,j ∨ C-,i+2,j) ∧ ¬R_{i,j}) (copy negative literals). Clause-instance constraints: The sign of an instance in a clause should be unique; therefore, any instance pair in the matrix C is WTA: ∀i,j : C+,i,j → ¬C-,i,j. The columns of matrix P are WTAs, since an instance is allowed to represent only one atomic proposition: ∀A,i,B ≠ A : P_{A,i} → ¬P_{B,i}. The rows of P may also be WTAs: ∀A,i,j ≠ i : P_{A,i} → ¬P_{A,j} (this constraint is not imposed in the FOL case). Knowledge base constraints: If a clause i is an original knowledge base clause, then there must be a clause j (out of the m original clauses) whose syntax is forced upon the units of the i-th row of matrix C. This constraint can be expressed as: ∀i : KB_i → ∨_j K_{i,j}. The rows of K are WTA networks, so that only one original clause is forced on the units of clause i: ∀i,j,j' ≠ j : K_{i,j} → ¬K_{i,j'}. The only hard constraints that are left are those that force the syntax of a particular clause from the knowledge base. Assume for example that K_{i,4} is set, meaning that clause i in C must have the syntax of the fourth clause in the knowledge base of our example (¬C ∨ D). Instances j and j' must be allocated to the atomic propositions C and D respectively, and must appear also in clause i as the literals C-,i,j and C+,i,j'. The following constraints capture the syntax of (¬C ∨ D): ∀i : K_{i,4} → ∨_j (C-,i,j ∧ P_{C,j}) (there exists a negative literal that is bound to C); ∀i : K_{i,4} → ∨_j (C+,i,j ∧ P_{D,j}) (there exists a positive literal that is bound to D). FOL extension: In first-order predicate logic (FOL), instead of atomic propositions we must deal with predicates (see [Pinkas 91j] for details).
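The resolution constraints can also be checked procedurally on clauses encoded as sets of signed instance numbers. The following is an illustrative sketch of what the constraints demand of a valid resolution step, not the network itself.

```python
def is_resolvent(c, c1, c2):
    # Clauses are sets of signed instance numbers: +j is a positive literal on
    # instance j, -j a negative one.  Checks the text's resolution constraints:
    cancelable = {abs(l) for l in c1 if -l in c2}   # opposite signs, same instance
    if len(cancelable) != 1:                        # one and only one canceled
        return False
    j = cancelable.pop()
    rest = {l for l in (c1 | c2) if abs(l) != j}    # copy all remaining literals
    return c == rest

# The example's first step, with instances 1 -> A and 4 -> D:
# D is the resolvent of A and ~A v D, canceling instance 1.
ok = is_resolvent({+4}, {+1}, {-1, +4})
```

A pair of clauses with no opposite-sign shared instance, or a resolvent carrying an extra literal, fails the check, mirroring the hard constraints above.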
As in the propositional case, a literal in a clause is represented by a positive or negative instance; however, the instance must now be allocated to a predicate name and may have slots to be filled by other instances (representing functions and constants). To accommodate such complexity a new matrix (NEST) is added, and the role of matrix P is revised. The matrix P must now accommodate function names, predicate names and constant names instead of just atomic propositions. Each row of P represents a name, and the columns represent instances that are allocated to those names. The rows of P that are associated with predicates and functions may contain several different instances of the same predicate or function; thus, they are not WTA anymore. In order to represent compound terms and predicates, instances may be bound to slots of other instances. The new matrix (NEST_{i,j,p}) is capable of representing such bindings. If NEST_{i,j,p} is set, then instance i is bound to the p-th slot of instance j. The columns of NEST are WTA, allowing only one instance to be bound to a certain slot of another instance. When a clause i is forced to have the syntax of some original clause l, syntactic constraints are triggered so that the literals of clause i become instantiated by the relevant predicates, functions, constants and variables imposed by clause l. Unification is implicitly obtained if two predicates are represented by the same instance while still satisfying all the constraints (imposed by the syntax of the two clauses). When a resolution step is needed, the network tries to allocate the same instance to the two literals that need to cancel each other. If the syntactic constraints on the literals permit such sharing of an instance, then the attempt to share the instance is successful and a unification occurs (the occur check is done implicitly, since the matrix NEST allows only finite trees to be represented).
Minimizing the violation of soft constraints: Among the valid proofs some are preferable to others. By means of soft constraints and optimization it is possible to encourage the network to search for preferred proofs. Theorem-proving is thus viewed as a constraint optimization problem. A weight may be assigned to each of the constraints [Pinkas 91c] and the network tries to minimize the weighted sum of the violated constraints, so that the set of the optimized solutions is exactly the set of the preferred proofs. For example, preference of proofs with most general unification is obtained by assignment of small penalties (negative bias) to every binding of a function to a position of another instance (in NEST). Using similar techniques, the network can be made to prefer shorter, more parsimonious or more reliable proofs, low-cost plans or even more specific arguments as in nonmonotonic reasoning. 4 Summary Given a finite set T of m clauses, where n is the number of different predicates, functions and constants, and given also a bound k over the proof length, we can generate a network that searches for a proof with length no longer than k, for a clamped query Q. If a global minimum is found then an answer is given as to whether there exists such a proof, and the proof (with MGUs) may be extracted from the state of the visible units. Among the possible valid proofs the system prefers some "better" proofs by minimizing the violation of soft constraints. The concept of "better" proofs may apply to applications like planning (minimize the cost), abduction (parsimony) and nonmonotonic reasoning (specificity). In the propositional case the generated network is of O(k^2 + km + kn) units and O(k^3 + km + kn) connections. For predicate logic there are O(k^3 + km + kn) units and connections, and we need to add O(lm) connections and hidden units, where l is the complexity-level of the syntactic constraints [Pinkas 91j].
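The constraint-optimization view can be sketched by brute force: score every state by the weighted sum of its violated constraints and keep the minima, which play the role of the preferred proofs. The weights, names and toy constraints below are illustrative assumptions, not the paper's energy function.

```python
from itertools import product

def best_states(n, constraints):
    # constraints: list of (weight, predicate over the state tuple).
    # Minimize the weighted sum of violated constraints over all 0/1 states;
    # the argmin set corresponds to the network's preferred solutions.
    def cost(s):
        return sum(w for w, ok in constraints if not ok(s))
    states = list(product((0, 1), repeat=n))
    m = min(cost(s) for s in states)
    return [s for s in states if cost(s) == m]

# A hard constraint (large weight) plus a small soft penalty, analogous to the
# small negative bias that discourages unnecessary bindings in NEST.
prefs = best_states(2, [
    (100, lambda s: s[0] | s[1]),   # hard: at least one unit must be on
    (1,   lambda s: s[1] == 0),     # soft: prefer unit 1 off
])
```

The hard constraint rules out the all-off state, and the soft penalty breaks the remaining tie, leaving the single preferred state with only unit 0 on.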
The results improve an earlier approach [Ballard 86]: There are no restrictions on the rules allowed; every proof no longer than the bound is allowed; the network is compact and the representation of bindings (unifications) is efficient; nesting of functions and multiple uses of rules are allowed; only one relaxation phase is needed; inconsistency is allowed in the knowledge base, and the query does not need to be negated and pre-wired (it can be clamped during query time). The architecture discussed has a natural fault-tolerance capability: When a unit becomes faulty, it simply cannot assume a role in the proof, and other units are allocated instead. Acknowledgment: I wish to thank Dana Ballard, Bill Ball, Rina Dechter, Peter Haddawy, Dan Kimura, Stan Kwasny, Ron Loui and Dave Touretzky for helpful comments. References [Anandan et al. 89] P. Anandan, S. Letovsky, E. Mjolsness, "Connectionist variable binding by optimization," Proceedings of the 11th Cognitive Science Society, 1989. [Ballard 86] D. H. Ballard, "Parallel Logical Inference and Energy Minimization," Proceedings of the 5th National Conference on Artificial Intelligence, Philadelphia, pp. 203-208, 1986. [Barnden 91] J. A. Barnden, "Encoding complex symbolic data structures with some unusual connectionist techniques," in J. A. Barnden and J. B. Pollack, Advances in Connectionist and Neural Computation Theory 1, High-level connectionist models, Ablex Publishing Corporation, 1991. [Derthick 88] M. Derthick, "Mundane reasoning by parallel constraint satisfaction," PhD thesis, CMU-CS-88-182, Carnegie Mellon University, Sept. 1988. [Hinton, Sejnowski 86] G. E. Hinton and T. J. Sejnowski, "Learning and re-learning in Boltzmann Machines," in J. L. McClelland and D. E. Rumelhart, Parallel Distributed Processing: Explorations in The Microstructure of Cognition I, pp. 282-317, MIT Press, 1986. [Holldobler 90] S.
Holldobler, "CHCL, a connectionist inference system for Horn logic based on the connection method and using limited resources," International Computer Science Institute TR-90-042, 1990. [Hopfield 84b] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences 81, pp. 3088-3092, 1984. [Peterson, Hartman 89] C. Peterson, E. Hartman, "Explorations of the mean field theory learning algorithm," Neural Networks 2, no. 6, 1989. [Pinkas 90b] G. Pinkas, "Energy minimization and the satisfiability of propositional calculus," Neural Computation 3, no. 2, 1991. [Pinkas 91c] G. Pinkas, "Propositional Non-Monotonic Reasoning and Inconsistency in Symmetric Neural Networks," Proceedings of IJCAI, Sydney, 1991. [Pinkas 91j] G. Pinkas, "First-order logic proofs using connectionist constraint relaxation," technical report, Department of Computer Science, Washington University, WUCS-91-54, 1991. [Shastri et al. 90] L. Shastri, V. Ajjanagadde, "From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings," technical report, University of Pennsylvania, Philadelphia, MS-CIS-90-05, 1990. [Smolensky 86] P. Smolensky, "Information processing in dynamic systems: Foundations of harmony theory," in J. L. McClelland and D. E. Rumelhart, Parallel Distributed Processing: Explorations in The Microstructure of Cognition I, MIT Press, 1986.
Recognizing Overlapping Hand-Printed Characters by Centered-Object Integrated Segmentation and Recognition Gale L. Martin* & Mosfeq Rashid MCC Austin, Texas 78759 USA Abstract This paper describes an approach, called centered-object integrated segmentation and recognition (COISR), for integrating object segmentation and recognition within a single neural network. The application is hand-printed character recognition. Two versions of the system are described. One uses a backpropagation network that scans exhaustively over a field of characters and is trained to recognize whether it is centered over a single character or between characters. When it is centered over a character, the net classifies the character. The approach is tested on a dataset of hand-printed digits. Very low error rates are reported. The second version, COISR-SACCADE, avoids the need for exhaustive scans. The net is trained as before, but also is trained to compute ballistic 'eye' movements that enable the input window to jump from one character to the next. The common model of visual processing includes multiple, independent stages. First, filtering operations act on the raw image to segment or isolate and enhance to-be-recognized clumps. These clumps are normalized for factors such as size, and sometimes simplified further through feature extraction. The results are then fed to one or more classifiers. The operations prior to classification simplify the recognition task. Object segmentation restricts the number of features considered for classification to those associated with a single object, and enables normalization to be applied at the individual object level. Without such pre-processing, recognition may be an intractable problem. However, a weak point of this sequential stage model is that recognition and segmentation decisions are often inter-dependent.
Not only does a correct recognition decision depend on first making a correct segmentation decision, but a correct segmentation decision often depends on first making a correct recognition decision. This is a particularly serious problem in character recognition applications. OCR systems use intervening white space and related features to segment a field of characters into individual characters, so that classification can be accomplished one character at a time. This approach fails when characters touch each other or when an individual character is broken up by intervening white space. Some means of integrating the segmentation and recognition stages is needed. This paper describes an approach, called centered-object integrated segmentation and recognition (COISR), for integrating character segmentation and recognition within one neural network. (* Also with Eastman Kodak Company.) Figure 1: The COISR Exhaustive Scan Approach. The general approach builds on previous work in pre-segmented character recognition (LeCun, Boser, Denker, Henderson, Howard, Hubbard, & Jackel, 1990; Martin & Pittman, 1990) and on the sliding window conception used in neural network speech applications, such as NETtalk (Sejnowski & Rosenberg, 1986) and Time Delay Neural Networks (Waibel, Sawai, & Shikano, 1988). Two versions of the approach are described. In both cases, a net is trained to recognize what is centered in its input window as it slides along a character field. The window size is chosen to be large enough to include more than one character. 1 COISR VERSION 1: EXHAUSTIVE SCAN As shown in Figure 1, the net is trained on an input window, and a target output vector representing what is in the center of the window. The top half of the figure shows the net's input window scanning successively across the field.
Sometimes the middle of the window is centered on a character, and sometimes it is centered on a point between two characters. The target output vector consists of one node per category, and one node corresponding to a NOT-CENTERED condition. This latter node has a high target activation value when the input window is not centered over any character. A temporal stream of output vectors is created (shown in the bottom half of the figure) as the net scans the field. There is no need to explicitly segment characters, during training or testing, because recognition is defined as identifying what is in the center of the scanning window. The net learns to extract regularities in the shapes of individual characters even when those regularities occur in the context of overlapping and broken characters. The final stage of processing involves parsing the temporal stream generated as the net scans the field to yield an ascii string of recognized characters. 1.1 IMPLEMENTATION DETAILS The COISR approach was tested using the National Institute of Standards and Technology (NIST) database of hand-printed digit fields, using fields 6-30 of the form, which correspond to five different fields of length 2, 3, 4, 5, or 6 digits each. The training data included roughly 80,000 digits (800 forms, 20,000 fields), and came from forms labeled f0000-f0499 and f1500-f1799 in the dataset. The test data consisted of roughly 20,000 digits (200 forms, 5,000 fields) and came from forms labeled f1800-f1899 and f2000-f2099 in the dataset. The large test set was used because considerable variations in test scores occurred with smaller test set sizes. The samples were scanned at a 300 pixel/inch resolution. Each field image was preprocessed to eliminate the white space and box surrounding the digit field. Each field was then size normalized with respect to the vertical height of the digit field to a vertical height of 20 pixels.
Since the input is size normalized to the vertical height of the field of characters, the actual number of characters in the constant-width input window of 36 pixels varies depending on the height-to-width ratio for each character. The scan rate was a 3-pixel increment across the field. A key design principle of the present approach is that highly accurate integrated segmentation and recognition requires training on both the shapes of characters and their positions within the input window. The field images used for training were labeled with the horizontal center positions of each character in the field. The human labeler simply pointed at the horizontal center of each digit in sequence with a mouse cursor and clicked on a mouse button. The horizontal position of each character was then paired with its category label (0-9) in a data file. The labeling process is not unlike a human reading teacher using a pointer to indicate the position of each character as he or she reads aloud the sequence of characters making up the word or sentence. During testing this position information is not used. Position information about character centers is used to generate target output values for each possible position of the input window as it scans a field of characters. When the center position of a window is close to the center of a character, the target value of that character's output node is set at the maximum, with the target value of the NOT-CENTERED node set at the minimum. The activation values of all other characters' output nodes are set at the minimum. When the center position of a window is close to the half-way point between two character centers, the target values of all character output nodes are set to the minimum and the target value of the NOT-CENTERED node is set to a maximum. Between these two extremes, the target values vary linearly with distance, creating a trapezoidal function.
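The trapezoidal target just described might be sketched as follows. The plateau width is an assumption, since the paper does not give exact extents; only the shape (maximal at the character center, minimal at the midpoint between characters, linear in between) is taken from the text.

```python
def target_activation(d, char_half_spacing, plateau=0.25):
    # Hypothetical trapezoidal target for a character's output node, as a
    # function of the distance d between the window center and the character
    # center.  char_half_spacing is the distance from the character center to
    # the midpoint between two characters; plateau widths are assumptions.
    d = abs(d) / char_half_spacing          # normalize: 1.0 = midpoint
    if d <= plateau:
        return 1.0                           # centered on the character
    if d >= 1.0 - plateau:
        return 0.0                           # near the between-characters midpoint
    return (1.0 - plateau - d) / (1.0 - 2 * plateau)   # linear ramp
```

The NOT-CENTERED node's target would then be the mirror image, high where every character node's target is low.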
The neural network is a 2-hidden-layer backpropagation network, with local, shared connections in the first hidden layer, and local connections in the second hidden layer (see Figure 2). The first hidden layer consists of 2016 nodes, or more specifically 18 independent groups of 112 (16x7) nodes, with each group having local, shared connections to the input layer. Figure 2: Architecture for the COISR-Exhaustive Scan Approach. The local, overlapping receptive fields of size 6x8 are offset by 2 pixels, such that the region covered by each group of nodes spans the input layer. The second hidden layer consists of 180 nodes, having local, but NOT shared, receptive fields of size 6x3. The output layer consists of 11 nodes, with each of these nodes connected to all of the nodes in the 2nd hidden layer. The net has a total of 2927 nodes (includes input and output nodes) and 157,068 connections. In a feedforward (non-learning) mode on a DEC 5000 workstation, in which the net is scanning a field of digits, the system processes about two digits per second, which includes image pre-processing and the necessary number of feedforward passes on the net. As the net scans horizontally, the activation values of the 11 output nodes create a trace as shown in Figure 1. To convert this to an ascii string corresponding to the digits in the field, the state of the NOT-CENTERED node is monitored continuously. When its activation value falls below a threshold, a summing process begins for each of the other nodes, and ends when the activation value of the NOT-CENTERED node exceeds the threshold. At this point the system decides that the input window has moved off of a character. The system then classifies the character on the basis of which output node has the highest summed activation for the position just passed over.
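The parsing step just described (monitor the NOT-CENTERED node, sum per-class activations while it is low, and emit the winning class when it rises again) might look like the following. This is an illustrative reconstruction; the data layout, threshold value, and function name are assumptions.

```python
def decode_stream(frames, labels, threshold=0.5):
    # frames: one output vector per window position, with the NOT-CENTERED
    # node's activation as the last entry of each vector.  While NOT-CENTERED
    # is below threshold, per-class activations are summed; when it rises
    # again, the class with the highest sum is emitted.
    out, sums, in_char = [], None, False
    for f in frames:
        if f[-1] < threshold:                   # window is over a character
            if not in_char:
                sums, in_char = [0.0] * (len(f) - 1), True
            sums = [s + a for s, a in zip(sums, f[:-1])]
        elif in_char:                           # just moved off a character
            out.append(labels[max(range(len(sums)), key=sums.__getitem__)])
            in_char = False
    return ''.join(out)
```

On a synthetic trace with two character dips separated by a NOT-CENTERED peak, the decoder emits one label per dip.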
1.2 GENERALIZATION PERFORMANCE As shown in Figure 3, the COISR technique achieves very low field-based error rates, particularly for a single-classifier system. Figure 3: Field-based Test Error and Reject Rates. The error rates are field-based in the sense that if the network mis-classifies one character in the field, the entire field is considered as mis-classified. Error rates pertain to the fields remaining after rejection. Rejections are based on placing a threshold on the acceptable distance between the highest and the next highest running activation total. In this way, by varying the threshold, the error rate can be traded off against the percentage of rejections. Since the reported data apply to fields, the threshold applies to the smallest distance value found across all of the characters in the field. Figure 4 provides examples, from the test set, of fields that the COISR network correctly classifies. Figure 4: Test Set Examples of Touching and Broken Characters Correctly Recognized. The COISR technique is a success in the sense that it does something that conventional character recognition systems can not do. It robustly recognizes character fields containing touching, overlapping, and broken characters. One problem with the approach, however, lies with the exhaustive nature of the scan.
The components needed to recognize a character in a given location are essentially replicated across the length of the to-be-classified input field, at the degree of resolution necessary to recognize the smallest and closest characters. While this has not presented any real difficulties for the present system, which processes 2 characters per second, it is likely to be troublesome when extensions are made to two-dimensional scans and larger vocabularies. A rough analogy with respect to human vision would be to require that all of the computational resources needed for recognizing objects at one point on the retina be replicated for each resolvable point on the retina. This design carries the notion of a compound eye to the ridiculous extreme.

2 COISR VERSION 2: SACCADIC SCAN

Taking a cue from natural vision systems, the second version of the COISR system uses a saccadic scan. The system is trained to make ballistic eye movements, so that it can effectively jump from character to character and over blank areas. This version is similar to the exhaustive scan version in the sense that a backprop net is trained to recognize when its input window is centered on a character, and if so, to classify the character. In addition, the net is trained for navigation control (Pomerleau, 1991). At each point in a field of characters, the net is trained to estimate the distance to the next character on the right, and to estimate the degree to which the center-most character is off-center. The trained net accommodates variations in character width, spacing between characters, writing styles, and other factors. At run-time, the system uses the computed character classification and distances to navigate along a character field. If the character classification judgment, for a given position, has a high degree of certainty, the system accesses the next-character distance information computed by the net for the current position and executes the jump.
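Together with the corrective saccade described next, the run-time navigation can be sketched as a loop. The `net` interface below is an assumption made purely for illustration; the paper does not specify one:

```python
def saccadic_scan(net, field_width, start=0, confidence=0.8, max_steps=50):
    """Sketch of the saccadic run-time loop described in the text.

    `net` is assumed to map a window position to a 4-tuple
    (label, certainty, jump, correction): the classification, its
    certainty, the estimated distance to the next character, and the
    estimated off-center distance. All names are illustrative.
    """
    chars, pos = [], start
    for _ in range(max_steps):
        if pos >= field_width:
            break
        label, certainty, jump, correction = net(pos)
        if certainty >= confidence:
            chars.append(label)
            pos += jump              # ballistic jump to the next character
        else:
            pos += correction        # corrective saccade to re-center
    return chars
```

A real system would obtain these four quantities from forward passes of the trained backprop net at each fixated position, so the number of forward passes scales with the number of characters rather than the width of the field.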
If the system gets off-track, so that a character can not be recognized with a high degree of certainty, it makes a corrective saccade by accessing the off-center character distance computed by the net for the current position. This action corresponds to making a second attempt to center the character within the input window. The primary advantage of this approach, over the exhaustive scan, is improved efficiency, as illustrated in Figure 5. The scanning input windows are shown at the top of the figure, for each approach, and each character-containing input window, shown below the scanned image for each approach, corresponds to a forward pass of the net. The exhaustive scan version requires about 4 times as many forward passes as the saccadic scan version. Greater improvements in efficiency can be achieved with wider input windows and images containing more blank areas. The system is still under development, but accuracy approaches that of the exhaustive scan system.

Figure 5: Number of Forward Passes for Saccadic & Exhaustive Scan Systems

3 COMPARISONS & CONCLUSIONS

In comparing accuracy rates between different OCR systems, one relevant factor that should be reported is the number of classifiers used. For a given system, increasing the number of classifiers typically reduces error rates but increases processing time. The low error rates reported here for the COISR-Exhaustive Scan approach come from a single classifier operating at 2 characters per second on a general purpose workstation. Most OCR systems employ multiple classifiers. For example, at the NIPS workshops this year, Jonathan Hull described the University of Buffalo zip code recognition system that contains five classifiers and requires about one minute to process a character. Keeler and Rumelhart, at this conference, also described a two-classifier neural net system for NIST digit recognition.
The fact that the COISR approach achieved quite low error rates with a single classifier indicates that the approach is a promising one. Clearly, another relevant factor in comparing systems is the ability to recognize touching and broken characters, since this is a dominant stumbling block for current OCR systems. Conventional systems can be altered to achieve integrated segmentation and recognition in limited cases, but this involves a lot of hand-crafting and a significant amount of time-consuming iterative processing (Fenrich, 1991). Essentially, multiple segmenters are used, and classification is performed for each such possible segmentation. The final segmentation and recognition decisions can thus be inter-dependent, but only at the cost of computing multiple segmentations and, correspondingly, multiple classification decisions. The approach breaks down as the number of possible segmentations increases, as would occur, for example, if individual characters are broken or touching in multiple places or if multiple letters in a sequence are connected. The COISR system does not appear to have this problem. The NIPS conference this year has included a number of other neural net approaches to integrated segmentation and recognition in OCR domains. Two approaches similar to the COISR-Exhaustive Scan system were those described by Faggin and by Keeler and Rumelhart. All three achieve integrated segmentation and recognition by convolving a neural network over a field of characters. Faggin described an analog hardware implementation of a neural-network-based OCR system that receives as input a window that slides along the machine-print digit field at the bottom of bank checks. Keeler and Rumelhart described a self-organizing integrated segmentation and recognition (SOISR) system. Initially, it is trained on characters that have been pre-segmented by a labeler effectively drawing a box around each.
Then, in subsequent training, a net with these pre-trained weights is duplicated repetitively across the extent of a fixed-width input field, and is further trained on examples of entire fields that contain connecting or broken characters. All three approaches have the weakness, described previously, of performing essentially exhaustive scans or convolutions over the to-be-classified input field. This complaint is not necessarily directed at the specific applications dealt with at this year's NIPS conference, particularly if operating at the high levels of efficiency described by Faggin. Nor is the complaint directed at tasks that only require the visual system to focus on a few small clusters or fields in the larger, otherwise blank input field. In these cases, low-resolution filters may be sufficient to efficiently remove blank areas and enable efficient integrated segmentation and recognition. However, we use as an example the saccadic scanning behavior of human vision in tasks such as reading this paragraph. In such cases, which require high-resolution sensitivity across a large, dense image and classification of a very large vocabulary of symbols, it seems clear that other, more flexible and efficient scanning mechanisms will be necessary. This high-density image domain is the focus of the COISR-Saccadic Scan approach, which integrates not only the segmentation and recognition of characters, but also control of the navigational aspects of vision.

Acknowledgements

We thank Lori Barski, John Canfield, David Chapman, Roger Gaborski, Jay Pittman, and Dave Rumelhart for helpful discussions and/or development of supporting image handling and network software. I also thank Jonathan Martin for help with the position labeling.

References

Fenrich, R. Segmentation of automatically located handwritten words. Paper presented at the International Workshop on Frontiers in Handwriting Recognition, Chateau de Bonas, France, 23-27 September 1991.

Keeler, J.
D., Rumelhart, David E., & Leow, Wee-Kheng. Integrated segmentation and recognition of hand-printed numerals. In R. P. Lippmann, John E. Moody, & David S. Touretzky (Eds.), Advances in Neural Information Processing Systems 3, pp. 557-563. San Mateo, CA: Morgan Kaufmann, 1991.

LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. Handwritten digit recognition with a backpropagation network. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Martin, G. L. & Pittman, J. A. Recognizing hand-printed letters and digits. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Pomerleau, D. A. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3, 1991, 88-97.

Rumelhart, D. (1989) Learning and generalization in multi-layer networks. Presentation given at the NATO Advanced Research Workshop on Neuro Computing: Algorithms, Architectures and Applications, Les Arcs, France, February 1989.

Sejnowski, T. J. & Rosenberg, C. R. (1986) NETtalk: a parallel network that learns to read aloud. Johns Hopkins University Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01.

Waibel, A., Sawai, H., & Shikano, K. (1988) Modularity and scaling in large phonemic neural networks. ATR Interpreting Telephony Research Laboratories Technical Report TR-I-0034.
1991
A Weighted Probabilistic Neural Network

David Montana
Bolt Beranek and Newman Inc.
10 Moulton Street
Cambridge, MA 02138

Abstract

The Probabilistic Neural Network (PNN) algorithm represents the likelihood function of a given class as the sum of identical, isotropic Gaussians. In practice, PNN is often an excellent pattern classifier, outperforming other classifiers including backpropagation. However, it is not robust with respect to affine transformations of feature space, and this can lead to poor performance on certain data. We have derived an extension of PNN called Weighted PNN (WPNN) which compensates for this flaw by allowing anisotropic Gaussians, i.e. Gaussians whose covariance is not a multiple of the identity matrix. The covariance is optimized using a genetic algorithm, some interesting features of which are its redundant, logarithmic encoding and large population size. Experimental results validate our claims.

1 INTRODUCTION

1.1 PROBABILISTIC NEURAL NETWORKS (PNN)

PNN (Specht 1990) is a pattern classification algorithm which falls into the broad class of "nearest-neighbor-like" algorithms. It is called a "neural network" because of its natural mapping onto a two-layer feedforward network. It works as follows. Let the exemplars from class i be the k-vectors x_j^i for j = 1, ..., N_i. Then, the likelihood function for class i is

    L_i(x) = (1 / (N_i (2 pi sigma)^(k/2))) sum_{j=1}^{N_i} exp(-(x - x_j^i)^2 / sigma)    (1)

and the conditional probability for class i is

    P_i(x) = L_i(x) / sum_{j=1}^{M} L_j(x)    (2)

Note that the class likelihood functions are sums of identical isotropic Gaussians centered at the exemplars.

Figure 1: PNN is not robust with respect to affine transformations of feature space. Originally (a), A2 is closer to its classmate A1 than to B1; however, after a simple affine transformation (b), A2 is closer to B1.
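Equations (1) and (2) can be written out directly. The following sketch (plain Python, not Specht's implementation) computes the PNN posteriors for one feature vector:

```python
import math

def pnn_posteriors(x, exemplars, sigma):
    """PNN of Equations (1) and (2): each class likelihood is a sum of
    identical isotropic Gaussians centered at that class's exemplars.

    exemplars: list of per-class lists of k-dimensional exemplar vectors.
    Returns the conditional probabilities P_i(x), i = 1..M.
    """
    k = len(x)
    likes = []
    for class_ex in exemplars:
        norm = len(class_ex) * (2.0 * math.pi * sigma) ** (k / 2.0)
        s = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, xj)) / sigma)
                for xj in class_ex)
        likes.append(s / norm)                        # Equation (1)
    total = sum(likes)
    return [L / total for L in likes]                 # Equation (2)
```

With one exemplar per class this reduces to a soft nearest-neighbor rule under Euclidean distance, which is where the robustness problem discussed next comes from.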
The single free parameter of this algorithm is sigma, the variance of the Gaussians (the rest of the terms in the likelihood functions are determined directly from the training data). Hence, training a PNN consists of optimizing sigma relative to some evaluation criterion, typically the number of classification errors during cross-validation (see Sections 2.1 and 3). Since the search space is one-dimensional, the search procedure is trivial and is often performed by hand.

1.2 THE PROBLEM WITH PNN

The main drawback of PNN and other "nearest-neighbor-like" algorithms is that they are not robust with respect to affine transformations (i.e., transformations of the form x -> Ax + b) of feature space. (Note that in theory affine transformations should not affect the performance of backpropagation, but the results of Section 3 show that this is not true in practice.) Figures 1 and 2 depict examples of how affine transformations of feature space affect classification performance. In Figures 1a and 2a, the point A2 is closer (using Euclidean distance) to point A1, which is also from class A, than to point B1, which is from class B. Hence, with a training set consisting of the exemplars A1 and B1, PNN would classify A2 correctly. Figures 1b and 2b depict the feature space after affine transformations. In both cases, A2 is closer to B1 than to A1 and would hence be classified incorrectly. For the example of Figure 2, the transformation matrix A is not diagonal (i.e., the principal axes of the transformation are not the coordinate axes), and the adverse effects of this transformation cannot be undone by any affine transformation with diagonal A. This problem has motivated us to generalize the PNN algorithm in such a way that it is robust with respect to affine transformations of the feature space.

Figure 2: The principal axes of the affine transformation do not necessarily correspond with the coordinate axes.
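The effect depicted in Figure 1 is easy to reproduce numerically. The specific points and transformation below are invented for illustration (the paper gives no coordinates); the sketch only shows that a simple axis rescaling flips a Euclidean nearest-neighbor decision:

```python
def euclidean(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def affine(A, b, p):
    """Apply x -> Ax + b to a 2-D point p (A is a 2x2 row-major matrix)."""
    return (A[0][0] * p[0] + A[0][1] * p[1] + b[0],
            A[1][0] * p[0] + A[1][1] * p[1] + b[1])
```

For example, with A1 = (0, 1.5), A2 = (0, 0), and B1 = (2, 0), A2 is nearer its classmate A1; after stretching the second axis by a factor of 3 (a diagonal A), A2 becomes nearer B1 and PNN's decision flips.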
1.3 A SOLUTION: WEIGHTED PNN (WPNN)

This flaw of nearest-neighbor-like algorithms has been recognized before, and there have been a few proposed solutions. They all use what Dasarathy (1991) calls "modified metrics", which are non-Euclidean distance measures in feature space. All the approaches to modified metrics define criteria which the chosen metric should optimize. Some criteria allow explicit derivation of the new metrics (Short and Fukunaga 1981; Fukunaga and Flick 1984). However, the validity of these derivations relies on there being a very large number of exemplars in the training set. A more recent set of approaches (Atkeson 1991; Kelly and Davis 1991) (i) use criteria which measure the performance on the training set using leaving-one-out cross-validation (see (Stone 1974) and Section 2.1), (ii) restrict the number of parameters of the metric to increase statistical significance, and (iii) optimize the parameters of the metric using non-linear search techniques. For his technique of "locally weighted regression", Atkeson (1991) uses an evaluation criterion which is the sum of the squares of the error using leaving-one-out. His metric has the form d^2 = w_1 (x_1 - y_1)^2 + ... + w_k (x_k - y_k)^2, and hence has k free parameters w_1, ..., w_k. He uses Levenberg-Marquardt to optimize these parameters with respect to the evaluation criterion. For their Weighted K-Nearest Neighbors (WKNN) algorithm, Kelly and Davis (1991) use an evaluation criterion which is the total number of incorrect classifications under leaving-one-out. Their metric is the same as Atkeson's, and their optimization is done with a genetic algorithm.

We use an approach similar to that of Atkeson (1991) and Kelly and Davis (1991) to make PNN more robust with respect to affine transformations. Our approach, called Weighted PNN (WPNN), works by using anisotropic Gaussians rather than the isotropic Gaussians used by PNN. An anisotropic Gaussian has the form

    (1 / ((2 pi)^(k/2) (det Sigma)^(1/2))) exp(-(x - x_j^i)^T Sigma^{-1} (x - x_j^i))

The covariance Sigma is a nonnegative-definite k x k symmetric matrix. Note that Sigma enters into the exponent of the Gaussian so as to define a new distance metric, and hence the use of anisotropic Gaussians to extend PNN is analogous to the use of modified metrics to extend other nearest-neighbor-like algorithms. The likelihood function for class i is

    L_i(x) = (1 / (N_i (2 pi)^(k/2) (det Sigma)^(1/2))) sum_{j=1}^{N_i} exp(-(x - x_j^i)^T Sigma^{-1} (x - x_j^i))    (3)

and the conditional probability is still as given in Equation 2. Note that when Sigma is a multiple of the identity, i.e. Sigma = sigma I, Equation 3 reduces to Equation 1. Section 2 describes how we select the value of Sigma. To ensure good generalization, we have so far restricted ourselves to diagonal covariances (and thus metrics of the form used by Atkeson (1991) and Kelly and Davis (1991)). This reduces the number of degrees of freedom of the covariance from k(k+1)/2 to k. However, this restricted set of covariances is not sufficiently general to solve all the problems of PNN (as demonstrated in Section 3), and we therefore in Section 2 hint at some modifications which would allow us to use arbitrary covariances.

2 OPTIMIZING THE COVARIANCE

We have used a genetic algorithm (Goldberg 1988) to optimize the covariance of the Gaussians. The code we used was a non-object-oriented C translation of the OOGA (Object-Oriented Genetic Algorithm) code (Davis 1991). This code preserves the features of OOGA including arbitrary encodings, exponential fitness, steady-state replacement, and adaptive operator probabilities. We now describe the distinguishing features of our genetic algorithm: (1) the evaluation function (Section 2.1), (2) the genetic encoding (Section 2.2), and (3) the population size (Section 2.3).
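For the diagonal-covariance case used in the experiments, the class likelihood of Equation (3) can be sketched as follows (illustrative code, with Sigma passed as its diagonal entries):

```python
import math

def wpnn_likelihood(x, class_exemplars, diag_cov):
    """WPNN class likelihood of Equation (3), restricted (as in the
    text) to a diagonal covariance. diag_cov holds the diagonal of Sigma.
    """
    k = len(x)
    det = 1.0
    for v in diag_cov:
        det *= v
    norm = len(class_exemplars) * (2.0 * math.pi) ** (k / 2.0) * math.sqrt(det)
    total = 0.0
    for xj in class_exemplars:
        # (x - xj)^T Sigma^{-1} (x - xj) for a diagonal Sigma
        quad = sum((a - b) ** 2 / v for a, b, v in zip(x, xj, diag_cov))
        total += math.exp(-quad)
    return total / norm
```

Setting every diagonal entry to sigma recovers Equation (1), as noted above; enlarging the variance assigned to a feature shrinks that feature's influence on the implied distance metric.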
2.1 THE EVALUATION FUNCTION

To evaluate the performance of a particular covariance matrix on the training set, we use a technique called "leaving-one-out", which is a special form of cross-validation (Stone 1974). One exemplar at a time is withheld from the training set, and we then determine how well WPNN with that covariance matrix classifies the withheld exemplar. The full evaluation is the sum of the evaluations on the individual exemplars. For the exemplar x_j^i, let l_q(x_j^i) for q = 1, ..., M denote the class likelihoods obtained upon withholding this exemplar and applying Equation 3, and let P_q(x_j^i) be the probabilities obtained from these likelihoods via Equation 2. Then, we define the performance as

    E = sum_{i=1}^{M} sum_{j=1}^{N_i} ((1 - P_i(x_j^i))^2 + sum_{q != i} (P_q(x_j^i))^2)    (4)

We have incorporated two heuristics to quickly identify covariances which are clearly bad and give them a value of infinity, the worst possible score. This greatly speeds up the optimization process because many of the generated covariances can be eliminated this way (see Section 2.3). The first heuristic identifies covariances which are too "small" based on the condition that, for some exemplar x_j^i and all q = 1, ..., M, l_q(x_j^i) = 0 to within the precision of IEEE double-precision floating-point format. In this case, the probabilities P_q(x_j^i) are not well-defined. (When Sigma is this "small", WPNN is approximately equivalent to WKNN with k = 1, and if such a small Sigma is indeed required, then the WKNN algorithm should be used instead.)

The second heuristic identifies covariances which are too "big" in the sense that too many exemplars contribute significantly to the likelihood functions. Empirical observations and theoretical arguments show that PNN (and WPNN) work best when only a small fraction of the exemplars contribute significantly. Hence, we reject a particular Sigma if, for any exemplar x_j^i, the condition of Equation (5) holds. Here, P is a parameter which we chose for our experiments to equal four.
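Equation (4) scores a candidate covariance from the leave-one-out posteriors. A minimal sketch, with a data layout that is an assumption (the paper does not specify one):

```python
def loo_score(posteriors_per_exemplar):
    """Leaving-one-out evaluation of Equation (4).

    posteriors_per_exemplar: for each exemplar, a pair
    (true_class, [P_1, ..., P_M]) of posteriors computed with that
    exemplar withheld. Lower scores are better (0 is perfect).
    """
    E = 0.0
    for true_class, probs in posteriors_per_exemplar:
        for q, p in enumerate(probs):
            if q == true_class:
                E += (1.0 - p) ** 2      # penalty on the true class
            else:
                E += p ** 2              # penalty on the other classes
    return E
```

A genetic algorithm then minimizes this score over candidate covariances, with the two heuristics above short-circuiting clearly bad candidates to an infinite score before any summation is done.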
Note: If we wish to improve the generalization by discarding some of the degrees of freedom of the covariance (which we will need to do when we allow non-diagonal covariances), we should modify the evaluation function by subtracting off a term which is monotonically increasing with the number of degrees of freedom discarded.

2.2 THE GENETIC ENCODING

Recall from Section 1.3 that we have presently restricted the covariance to be diagonal. Hence, the set of all possible covariances is k-dimensional, where k is the dimension of the feature space. We encode the covariances as k+1 integers (a_0, ..., a_k), where the a_i's are in the ranges (a_0)_min <= a_0 <= (a_0)_max and 0 <= a_i <= a_max for i = 1, ..., k. The decoding map is given by Equation (6). We observe the following about this encoding. First, it is a "logarithmic encoding", i.e. the encoded parameters are related logarithmically to the original parameters. This provides a large dynamic range without the sacrifice of sufficient resolution at any scale and without making the search space unmanageably large. The constants c_1 and c_2 determine the resolution, while the constants (a_0)_min, (a_0)_max, and a_max determine the range. Second, it is possibly a "redundant" encoding, i.e. there may be multiple encodings of a single covariance. We use this redundant encoding, despite the seeming paradox, to reduce the size of the search space. The a_0 term encodes the size of the Gaussian, roughly equivalent to sigma in PNN. The other a_j's encode the relative weighting of the various dimensions. If we dropped the a_0 term, the other a_j terms would have to have larger ranges to compensate, thus making the search space larger.

Note: If we wish to improve the generalization by discarding some of the degrees of freedom of the covariance, we need to allow all the entries besides a_0 to take on the value of infinity in addition to the range of values defined above. When a_j = infinity, its corresponding entry in the covariance matrix is zero and is hence discarded.
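The idea of the logarithmic encoding can be illustrated with an assumed base-2 decoding map. The constants c1, c2 and the functional form below are illustrative only, not the paper's Equation (6); they show a_0 setting the overall scale, a_1..a_k setting relative weightings, and an infinite gene zeroing out (discarding) its covariance entry:

```python
def decode_covariance(genes, c1=0.1, c2=0.5):
    """Illustrative logarithmic decoding of an (a_0, ..., a_k) chromosome
    into the diagonal of Sigma. The exact map and constants used in the
    paper differ; only the logarithmic structure is being demonstrated.
    """
    a0, rest = genes[0], genes[1:]
    scale = c1 * 2.0 ** (c2 * a0)           # overall size, like sigma in PNN
    diag = []
    for ai in rest:
        if ai == float("inf"):              # discarded degree of freedom
            diag.append(0.0)                # zero covariance entry
        else:
            diag.append(scale * 2.0 ** (-c2 * ai))  # relative weighting
    return diag
```

Under any map of this shape, equal increments of a gene multiply the decoded parameter by a constant factor, which is what gives the large dynamic range at uniform relative resolution.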
2.3 POPULATION SIZE

For their success, genetic algorithms rely on having multiple individuals with partial information in the population. The problem we have encountered is that the ratio of the area of the search space with partial information to the entire search space is small. In fact, with our very loose heuristics, on Dataset 1 (see Section 3) about 90% of the randomly generated individuals of the initial population evaluated to infinity. In fact, we estimate very roughly that only 1 in 50 or 1 in 100 randomly generated individuals contain partial information. To ensure that the initial population has multiple individuals with partial information requires a population size of many hundreds, and we conservatively used a population size of 1600. Note that with such a large population it is essential to use a steady-state genetic algorithm (Davis 1991) rather than generational replacement.

3 EXPERIMENTAL RESULTS

We have performed a series of experiments to verify our claims about WPNN. To do so, we have constructed a sequence of four datasets designed to illustrate the shortcomings of PNN and how WPNN in its present form can fix some of these shortcomings but not others. Dataset 1 is a training set we generated during an effort to classify simulated sonar signals. It has ten features, five classes, and 516 total exemplars. Dataset 2 is the same as Dataset 1 except that we supplemented the ten features of Dataset 1 with five additional features, which were random numbers uniformly distributed between zero and one (and hence contained no information relevant to classification), thus giving a total of 15 features. Dataset 3 is the same as Dataset 2 except with ten (rather than five) irrelevant features added and hence a total of 20 features. Like Dataset 3, Dataset 4 has 20 features. It is obtained from Dataset 3 as follows. Pair each of the true features with one of the irrelevant features.
Call the feature values of the ith pair f_i and g_i. Then, replace these feature values with the values 0.5(f_i + g_i) and 0.5(f_i - g_i + 1), thus mixing up the relevant features with the irrelevant features via linear combinations. To evaluate the performance of different pattern classification algorithms on these four datasets, we have used 10-fold cross-validation (Stone 1974). This involves splitting each dataset into ten disjoint subsets of similar size and similar distribution of exemplars by class. To evaluate a particular algorithm on a dataset requires ten training and test runs, where each subset is used as the test set for the algorithm trained on a training set consisting of the other nine subsets. The pattern classification algorithms we have evaluated are backpropagation (with four hidden nodes), PNN (with sigma = 0.05), WPNN and CART. The results of the experiments are shown in Figure 3. Note that the parenthesized quantities denote errors on the training data and are not compensated for the fact that each exemplar of the original dataset is in nine of the ten training sets used for cross-validation. We can draw a number of conclusions from these results. First, the performance of PNN on Datasets 2-4 clearly demonstrates the problems which arise from its lack of robustness with respect to affine transformations of feature space. In each case, there exists an affine transformation which makes the problem essentially equivalent to Dataset 1 from the viewpoint of Euclidean distance, but the performance is clearly very different. Second, WPNN clearly eliminates this problem with PNN for Datasets 2 and 3 but not for Dataset 4. This points out both the progress we have made so far in using WPNN to make PNN more robust and the importance of extending the WPNN algorithm to allow non-diagonal covariances.
Third, although backpropagation is in theory transparent to affine transformations of feature space (because the first layer of weights and biases implements an arbitrary affine transformation), in practice affine transformations affect its performance. Indeed, Dataset 4 is obtained from Dataset 3 by an affine transformation, yet backpropagation performs very differently on them. Backpropagation does better on the training sets for Dataset 3 than on the training sets for Dataset 4 but does better on the test sets of Dataset 4 than the test sets of Dataset 3. This implies that for Dataset 4 during the training procedure backpropagation is not finding the globally optimum set of weights and biases but is missing in such a way that improves its generalization.

Figure 3: Performance on the four datasets of backprop, CART, PNN and WPNN (parenthesized quantities are training set errors).

    Algorithm    Dataset 1    Dataset 2    Dataset 3    Dataset 4
    Backprop     11 (69)      16 (51)      20 (27)      13 (64)
    PNN           9           94          109           29
    WPNN         10           11           11           25
    CART         14           17           18           53

4 CONCLUSIONS AND FUTURE WORK

We have demonstrated through both theoretical arguments and experiments an inherent flaw of PNN, its lack of robustness with respect to affine transformations of feature space. To correct this flaw, we have proposed an extension of PNN, called WPNN, which uses anisotropic Gaussians rather than the isotropic Gaussians used by PNN. Under the assumption that the covariance of the Gaussians is diagonal, we have described how to use a genetic algorithm to optimize the covariance for optimal performance on the training set. Experiments have shown that WPNN can partially remedy the flaw with PNN. What remains to be done is to modify the optimization procedure to allow arbitrary (i.e., non-diagonal) covariances.
The main difficulty here is that the covariance matrix has a large number of degrees of freedom (k(k+1)/2, where k is the dimension of feature space), and we therefore need to ensure that the choice of covariance is not overfit to the data. We have presented some general ideas on how to approach this problem, but a true solution still needs to be developed.

Acknowledgements

This work was partially supported by DARPA via ONR under Contract N00014-89-C-0264 as part of the Artificial Neural Networks Initiative. Thanks to Ken Theriault for his useful comments.

References

C. G. Atkeson. (1991) Using locally weighted regression for robot learning. Proceedings of the 1991 IEEE Conference on Robotics and Automation, pp. 958-963. Los Alamitos, CA: IEEE Computer Society Press.

B. V. Dasarathy. (1991) Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. Los Alamitos, CA: IEEE Computer Society Press.

L. Davis. (1991) Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold.

K. Fukunaga and T. T. Flick. (1984) An optimal global nearest neighbor metric. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6, No. 3, pp. 314-318.

D. Goldberg. (1988) Genetic Algorithms in Machine Learning, Optimization and Search. Redwood City, CA: Addison-Wesley.

J. D. Kelly, Jr. and L. Davis. (1991) Hybridizing the genetic algorithm and the k nearest neighbors classification algorithm. Proceedings of the Fourth International Conference on Genetic Algorithms, pp. 377-383. San Mateo, CA: Morgan Kaufmann.

R. D. Short and K. Fukunaga. (1981) The optimal distance measure for nearest neighbor classification. IEEE Transactions on Information Theory, Vol. IT-27, No. 5, pp. 622-627.

D. F. Specht. (1990) Probabilistic neural networks. Neural Networks, vol. 3, no. 1, pp. 109-118.

M. Stone. (1974) Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, vol. 36, pp. 111-147.
1991
Shooting Craps in Search of an Optimal Strategy for Training Connectionist Pattern Classifiers

J. B. Hampshire II and B. V. K. Vijaya Kumar
Department of Electrical & Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213-3890
hamps@speechl.cs.cmu.edu and kumar@gauss.ece.cmu.edu

Abstract

We compare two strategies for training connectionist (as well as non-connectionist) models for statistical pattern recognition. The probabilistic strategy is based on the notion that Bayesian discrimination (i.e., optimal classification) is achieved when the classifier learns the a posteriori class distributions of the random feature vector. The differential strategy is based on the notion that the identity of the largest class a posteriori probability of the feature vector is all that is needed to achieve Bayesian discrimination. Each strategy is directly linked to a family of objective functions that can be used in the supervised training procedure. We prove that the probabilistic strategy, linked with error measure objective functions such as mean-squared-error and cross-entropy typically used to train classifiers, necessarily requires larger training sets and more complex classifier architectures than those needed to approximate the Bayesian discriminant function. In contrast, we prove that the differential strategy, linked with classification figure-of-merit objective functions (CFMmono) [3], requires the minimum classifier functional complexity and the fewest training examples necessary to approximate the Bayesian discriminant function with specified precision (measured in probability of error). We present our proofs in the context of a game of chance in which an unfair C-sided die is tossed repeatedly. We show that this rigged game of dice is a paradigm at the root of all statistical pattern recognition tasks,
and demonstrate how a simple extension of the concept leads us to a general information-theoretic model of sample complexity for statistical pattern recognition.

1 Introduction

Research on creating a connectionist pattern classifier that generalizes well to novel test data has recently focussed on the process of finding the network architecture with the minimum functional complexity necessary to model the training data accurately (see, for example, the works of Baum, Cover, Haussler, and Vapnik). Meanwhile, relatively little attention has been paid to the effect on generalization of the objective function used to train the classifier. In fact, the choice of objective function used to train the classifier is tantamount to a choice of training strategy, as described in the abstract [2, 3]. We formulate the proofs outlined in the abstract in the context of a rigged game of dice in which an unfair C-sided die is tossed repeatedly. Each face of the die has some probability of turning up. We assume that one face is always more likely than all the others. As a result, all the probabilities may be different, but at most C - 1 of them can be identical. The objective of the game is to identify the most likely die face with specified high confidence. The relationship between this rigged dice paradigm and statistical pattern recognition becomes clear if one realizes that a single unfair die is analogous to a specific point on the domain of the random feature vector being classified. Just as there are specific class probabilities associated with each point in feature vector space, each die has specific probabilities associated with each of its faces. The number of faces on the die equals the number of classes associated with the analogous point in feature vector space.
Identifying the most likely die face is equivalent to identifying the maximum class a posteriori probability for the analogous point in feature vector space: the requirement for Bayesian discrimination. We formulate our proofs for the case of a single die, and conclude by showing how a simple extension of the mathematics leads to general expressions for pattern recognition involving both discrete and continuous random feature vectors.

Authors' Note: In the interest of brevity, our proofs are posed as answers to questions that pertain to the rigged game of dice. It is hoped that the reader will find the relevance of each question/answer to statistical pattern recognition clear. Owing to page limitations, we cannot provide our proofs in full detail; the reader seeking such detail should refer to [1]. Definitions of symbols used in the following proofs are given in Table 1.

1.1 A Fixed-Point Representation

The $M_q$-bit approximation $q_M[x]$ to the real number $x \in (-1, 1]$ is of the form

MSB (most significant bit) = $\mathrm{sign}[x]$
MSB - 1 = $2^{-1}$
...
LSB (least significant bit) = $2^{-(M_q - 1)}$   (1)

with the specific value defined as the mid-point of the $2^{-(M_q - 1)}$-wide interval in which $x$ is located:

$q_M[x] = \mathrm{sign}[x] \cdot \left( \lfloor |x| \cdot 2^{M_q - 1} \rfloor \cdot 2^{-(M_q - 1)} + 2^{-M_q} \right)$, for $|x| < 1$
$q_M[x] = \mathrm{sign}[x] \cdot \left( 1 - 2^{-M_q} \right)$, for $|x| = 1$   (2)

The lower and upper bounds on the quantization interval are

$L_{M_q}[x] < x \le U_{M_q}[x]$   (3)

Table 1: Definitions of symbols used to describe die faces, probabilities, probabilistic differences, and associated estimates.

Symbol | Definition
$\omega_{rj}$ | The true $j$th most likely die face ($\hat\omega_{rj}$ is the estimated $j$th most likely face).
$P(\omega_{rj})$ | The probability of the true $j$th most likely die face.
$k_{rj}$ | The number of occurrences of the true $j$th most likely die face.
$\hat P(\omega_{rj})$ | An empirical estimate of the probability of the true $j$th most likely die face: $\hat P(\omega_{rj}) = k_{rj} / n$.
(n denotes the sample size.)

$\Delta_{ri}$ | The probabilistic difference involving the true rankings and probabilities of the C die faces: $\Delta_{ri} = P(\omega_{ri}) - \sup_{j \ne i} P(\omega_{rj})$   (4)
$\hat\Delta_{ri}$ | The probabilistic difference involving the true rankings but empirically estimated probabilities of the C die faces: $\hat\Delta_{ri} = \hat P(\omega_{ri}) - \sup_{j \ne i} \hat P(\omega_{rj}) = \frac{k_{ri} - \sup_{j \ne i} k_{rj}}{n}$   (5)

The fixed-point representation described by (1)-(5) differs from standard fixed-point representations in its choice of quantization interval. The choice of (2)-(5) represents zero as a negative (more precisely, a non-positive) finite-precision number. See [1] for the motivation of this format choice.

1.2 A Mathematical Comparison of the Probabilistic and Differential Strategies

The probabilistic strategy for identifying the most likely face on a die with C faces involves estimating the C face probabilities. In order for us to distinguish $\hat P(\omega_{r1})$ from $\hat P(\omega_{r2})$, we must choose $M_q$ (i.e., the number of bits in our fixed-point representation of the estimated probabilities) such that

$q_M[P(\omega_{r1})] > q_M[P(\omega_{r2})]$   (6)

The distinction between the differential and probabilistic strategies is made more clear if one considers the way in which the $M_q$-bit approximation $\hat\Delta_{r1}$ is computed from a random sample containing $k_{r1}$ occurrences of die face $\omega_{r1}$ and $k_{r2}$ occurrences of die face $\omega_{r2}$. For the differential strategy

$\hat\Delta_{r1\,\mathrm{differential}} = q_M\!\left[ \frac{k_{r1} - k_{r2}}{n} \right]$   (7)

and for the probabilistic strategy

$\hat\Delta_{r1\,\mathrm{probabilistic}} = q_M\!\left[ \frac{k_{r1}}{n} \right] - q_M\!\left[ \frac{k_{r2}}{n} \right]$   (8)

where

$\Delta_i = P(\omega_i) - \sup_{j \ne i} P(\omega_j), \quad i = 1, 2, \ldots, C$   (9)

Note that when $i = r1$

$\Delta_i = P(\omega_{r1}) - P(\omega_{r2}) > 0$   (10)

and when $i \ne r1$

$\Delta_i = P(\omega_i) - P(\omega_{r1}) < 0$   (11)

Note also that the probabilities sum to one: $\sum_{j=1}^{C} P(\omega_j) = 1$   (12)

Since

$\sum_{i=1}^{C} \Delta_i = 1 - P(\omega_{r2}) - (C - 1)\, P(\omega_{r1})$   (13)

we can show that the C differences of (9) yield the C probabilities by

$P(\omega_{r1}) = \frac{1}{C} \left[ 1 - \sum_{j=2}^{C} \Delta_{rj} \right], \qquad P(\omega_{rj}) = \Delta_{rj} + P(\omega_{r1}) \ \ \forall\, j > 1$   (14)

Thus, estimating the C differences of (9) is equivalent to estimating the C probabilities $P(\omega_1), P(\omega_2), \ldots, P(\omega_C)$.
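Equation (14) can be checked numerically; the sketch below (a small Python illustration, not from the paper) builds the C differences for the five-face die used later in section 2.1 and inverts them back into probabilities:

```python
def probs_from_differences(deltas):
    """Invert the C differences of equation (9) into the C face
    probabilities via equation (14).

    deltas[0]     = P(w_r1) - P(w_r2)          (case i = r1)
    deltas[j - 1] = P(w_rj) - P(w_r1), j > 1   (case i != r1)
    """
    C = len(deltas)
    p1 = (1.0 - sum(deltas[1:])) / C          # P(w_r1), equation (14)
    return [p1] + [d + p1 for d in deltas[1:]]

# the five-face die of section 2.1
p = [0.37, 0.28, 0.20, 0.10, 0.05]
deltas = [p[0] - p[1]] + [pj - p[0] for pj in p[1:]]
recovered = probs_from_differences(deltas)
```

Here `recovered` equals `p` up to floating-point error, confirming that estimating the differences is equivalent to estimating the probabilities.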
Clearly, the sign of $\hat\Delta_{r1}$ in (7) is modeled correctly (i.e., $\hat\Delta_{r1\,\mathrm{differential}}$ can correctly identify the most likely face) when $M_q = 1$, while this is typically not the case for $\hat\Delta_{r1\,\mathrm{probabilistic}}$ in (8). In the latter case, $\hat\Delta_{r1\,\mathrm{probabilistic}}$ is zero when $M_q = 1$ because $q_M[\hat P(\omega_{r1})]$ and $q_M[\hat P(\omega_{r2})]$ are indistinguishable for $M_q$ below some minimal value implied by (6). That minimal value of $M_q$ can be found by recognizing that the number of bits necessary for (6) to hold for asymptotically large n (i.e., for the quantized difference in (8) to exceed one LSB) is

$M_{q\,\min} = \begin{cases} 1 + \lceil -\log_2 \Delta_{r1} \rceil & \text{if } -\log_2 P(\omega_{rj}) \in \mathbb{Z}^+,\ j \in \{1, 2\} \\ 1 + \left( \lceil -\log_2 \Delta_{r1} \rceil + 1 \right) & \text{otherwise} \end{cases}$   (15)

where the leading 1 is the sign bit, the remaining bits are magnitude bits, and $\mathbb{Z}^+$ represents the set of all positive integers. Note that the conditional nature of $M_{q\,\min}$ in (15) prevents the case in which $\lim_{\epsilon \to 0} P(\omega_{r1}) - \epsilon = L_{M_q}[P(\omega_{r1})]$ or $P(\omega_{r2}) = U_{M_q}[P(\omega_{r2})]$; either case would require an infinitely large sample size before the variance of the corresponding estimated probability became small enough to distinguish $q_M[\hat P(\omega_{r1})]$ from $q_M[\hat P(\omega_{r2})]$. The sign bit in (15) is not required to estimate the probabilities themselves in (8), but it is necessary to compute the difference between the two probabilities in that equation, this difference being the ultimate computation by which we choose the most likely die face.

1.3 The Sample Complexity Product

We introduce the sample complexity product (SCP) as a measure of both the number of samples and the functional complexity (measured in bits) required to identify the most likely face of an unfair die with specified probability:

$\mathrm{SCP} = n \cdot M_q$ such that
$P(\text{most likely face correctly ID'd}) \ge \alpha$   (16)

2 A Comparison of the Sample Complexity Requirements for the Probabilistic and Differential Strategies

Axiom 1: We view the number of bits $M_q$ in the finite-precision approximation $q_M[x]$ to the real number $x \in (-1, 1]$ as a measure of the approximation's functional complexity. That is, the functional complexity of an approximation is the number of bits with which it represents a real number on $(-1, 1]$.

Assumption 1: If $\hat P(\omega_{r1}) > \hat P(\omega_{r2})$, then $\hat P(\omega_{r1})$ will be greater than $\hat P(\omega_{rj})\ \forall\, j > 2$ (see [1] for an analysis of cases in which this assumption is invalid).

Question: What is the probability that the most likely face of an unfair die will be empirically identifiable after n tosses?

Answer for the probabilistic strategy:

$P\!\left( q_M[\hat P(\omega_{r1})] > q_M[\hat P(\omega_{rj})],\ \forall\, j > 1 \right) \ge \sum_{k_{r1}=\lambda_1}^{\nu_1} \sum_{k_{r2}=\lambda_2}^{\nu_2} \frac{n!}{k_{r1}!\, k_{r2}!\, (n - k_{r1} - k_{r2})!}\, P(\omega_{r1})^{k_{r1}}\, P(\omega_{r2})^{k_{r2}}\, \left( 1 - P(\omega_{r1}) - P(\omega_{r2}) \right)^{n - k_{r1} - k_{r2}}$   (17)

where

$\lambda_1 = \max\!\left( B + 1,\ \frac{n - k_{r2}}{C - 1} + 1 \right)\ \forall\, C > 2, \qquad \nu_1 = n, \qquad \lambda_2 = 0, \qquad \nu_2 = \min(B,\ n - k_{r1})$   (18)

and $B \in \{B_{M_q}\}$, with $B = k_{U_{M_q}}[P(\omega_{r2})] = k_{L_{M_q}}[P(\omega_{r1})]$. There is a simple recursion in [1] by which every possible boundary for $M_q$-bit quantization leads to itself and two additional boundaries in the set $\{B_{M_q}\}$ for $(M_q + 1)$-bit quantization.

Answer for the differential strategy:

$P\!\left( L_{M_q}[\Delta_{r1}] < \hat\Delta_{r1} \le U_{M_q}[\Delta_{r1}] \right) \ge \sum_{k_{r1}=\lambda_1}^{\nu_1} \sum_{k_{r2}=\lambda_2}^{\nu_2} \frac{n!}{k_{r1}!\, k_{r2}!\, (n - k_{r1} - k_{r2})!}\, P(\omega_{r1})^{k_{r1}}\, P(\omega_{r2})^{k_{r2}}\, \left( 1 - P(\omega_{r1}) - P(\omega_{r2}) \right)^{n - k_{r1} - k_{r2}}$   (19)

where

$\lambda_1 = \max\!\left( k_{L_{M_q}}[\Delta_{r1}],\ \frac{n - k_{r1}}{C - 1} + 1 \right)\ \forall\, C > 2, \qquad \nu_1 = n$
$\lambda_2 = \max\!\left( 0,\ k_{r1} - k_{U_{M_q}}[\Delta_{r1}] \right), \qquad \nu_2 = \min\!\left( k_{r1} - k_{L_{M_q}}[\Delta_{r1}],\ n - k_{r1} \right)$   (20)

Since the multinomial distribution is positive semi-definite, it should be clear from a comparison of (17)-(18) and (19)-(20) that $P(L_{M_q}[\Delta_{r1}] < \hat\Delta_{r1} \le U_{M_q}[\Delta_{r1}])$ is largest (and larger than any possible $P(q_M[\hat P(\omega_{r1})] > q_M[\hat P(\omega_{rj})],\ \forall\, j > 1)$) for a given sample size n when the differential strategy is employed with $M_q = 1$, such that $L_{M_q}[\Delta_{r1}] = 0$ and $U_{M_q}[\Delta_{r1}] = 1$ (i.e., $k_{L_{M_q}}[\Delta_{r1}] = 1$ and $k_{U_{M_q}}[\Delta_{r1}] = n$).
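The mid-point quantizer of equations (1)-(2), which underlies the quantization boundaries used in these limits, can be sketched as follows (a hedged reading of the format; treating x = 0 as non-positive follows the paper's stated convention):

```python
import math

def q_M(x, Mq):
    """Mq-bit mid-point quantizer q_M[x] for x in (-1, 1], per
    equation (2): the value is the mid-point of the 2^-(Mq-1)-wide
    interval containing x. Zero maps to the negative side, per the
    paper's format choice."""
    assert -1.0 < x <= 1.0 and Mq >= 1
    sign = 1.0 if x > 0 else -1.0
    if abs(x) == 1.0:
        return sign * (1.0 - 2.0 ** -Mq)
    step = 2.0 ** -(Mq - 1)
    return sign * (math.floor(abs(x) / step) * step + 2.0 ** -Mq)
```

For example, with Mq = 2 every x in (0, 0.5) quantizes to 0.25, the mid-point of its interval.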
The converse is also true, to wit:

Theorem 1: For a fixed value of n in (19), the 1-bit approximation to $\Delta_{r1}$ yields the highest probability of identifying the most likely die face $\omega_{r1}$.

It can be shown that theorem 1 does not depend on the validity of assumption 1 [1]. Given Axiom 1, the following corollary to theorem 1 holds:

Corollary 1: The differential strategy's minimum-complexity 1-bit approximation of $\Delta_{r1}$ yields the highest probability of identifying the most likely die face $\omega_{r1}$ for a given number of tosses n.

Corollary 2: The differential strategy's minimum-complexity 1-bit approximation of $\Delta_{r1}$ requires the smallest sample size necessary ($n_{\min}$) to identify $P(\omega_{r1})$, and thereby the most likely die face $\omega_{r1}$, correctly with specified confidence.

Thus, the differential strategy requires the minimum SCP necessary to identify the most likely die face with specified confidence.

2.1 Theoretical Predictions versus Empirical Results

Figures 1 and 2 compare theoretical predictions of the number of samples n and the number of bits $M_q$ necessary to identify the most likely face of a particular die versus the actual requirements obtained from 1000 games (3000 tosses of the die in each game). The die has five faces with probabilities $P(\omega_{r1}) = 0.37$, $P(\omega_{r2}) = 0.28$, $P(\omega_{r3}) = 0.2$, $P(\omega_{r4}) = 0.1$, and $P(\omega_{r5}) = 0.05$.
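The 1-bit differential decision (pick the face with the highest count) is easy to simulate; the sketch below mimics the flavor of the 1000-game experiment on this die, though the exact confidence obtained depends on the random draws:

```python
import random

def differential_confidence(probs, n_tosses, n_games, seed=0):
    """Fraction of games in which the truly most likely face
    (index 0) also has the highest empirical count after n_tosses:
    the 1-bit differential decision of theorem 1."""
    rng = random.Random(seed)
    faces = range(len(probs))
    wins = 0
    for _ in range(n_games):
        counts = [0] * len(probs)
        for _ in range(n_tosses):
            counts[rng.choices(faces, weights=probs)[0]] += 1
        wins += max(faces, key=counts.__getitem__) == 0
    return wins / n_games

die = [0.37, 0.28, 0.20, 0.10, 0.05]      # the die of section 2.1
conf_227 = differential_confidence(die, n_tosses=227, n_games=400)
```

With n = 227 tosses per game the estimated fraction should land near the 0.95 confidence predicted for the differential strategy.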
The theoretical predictions for $M_q$ and n (arrows with boxed labels, based on iterative searches employing equations (17) and (19)) that would with 0.95 confidence correctly identify the most likely die face $\omega_{r1}$ are shown to correspond with the empirical results: in figure 1 the empirical 0.95 confidence interval is marked by the lower bound of the dark gray and the upper bound of the light gray; in figure 2 the empirical 0.95 confidence interval is marked by the lower bound of the $\hat P(\omega_{r1})$ distribution and the upper bound of the $\hat P(\omega_{r2})$ distribution.

Figure 1: Theoretical predictions of the number of tosses needed to identify the most likely face $\omega_{r1}$ with 95% confidence (Die 1): differential strategy prediction superimposed on empirical results of 1000 games (3000 tosses each).

Figure 2: Theoretical predictions of the number of tosses needed to identify the most likely face $\omega_{r1}$ with 95% confidence (Die 1): probabilistic strategy prediction superimposed on empirical results of 1000 games (3000 tosses each).

These figures illustrate that the differential strategy's minimum SCP is 227 ($n = 227$, $M_q = 1$) while the minimum SCP for the probabilistic strategy is 2720 ($n = 544$, $M_q = 5$). A complete tabulation of SCP as a function of $P(\omega_{r1})$, $P(\omega_{r2})$, and the worst-case choice for C (the number of classes/die faces) is given in [1].

3 Conclusion

The sample complexity product (SCP) notion of functional complexity set forth herein is closely aligned with the complexity measures of Kolmogorov and Rissanen [4, 6]. We have used it to prove that the differential strategy for learning the Bayesian discriminant function is optimal in terms of its minimum requirements for classifier functional complexity and number of training examples when the classification task is identifying the most likely face of an unfair die.
It is relatively straightforward to extend theorem 1 and its corollaries to the general pattern recognition case in order to show that the expected SCP for the 1-bit differential strategy

$E[\mathrm{SCP}]_{\mathrm{differential}} \ge \int_{\mathbf{x}} n_{\min}\!\left[ P(\omega_{r1} \mid \mathbf{x}),\ P(\omega_{r2} \mid \mathbf{x}) \right] \cdot M_{q\,\min}\!\left[ P(\omega_{r1} \mid \mathbf{x}),\ P(\omega_{r2} \mid \mathbf{x}) \right] \Big|_{M_q = 1}\ p(\mathbf{x})\, d\mathbf{x}$   (21)

(or the discrete random vector analog of this equation) is minimal [1]. This is because $n_{\min}$ is by corollary 2 the smallest sample size necessary to distinguish any and all $P(\omega_{r1})$ from lesser $P(\omega_{r2})$. The resulting analysis confirms that the classifier trained with the differential strategy for statistical pattern recognition (i.e., using a CFMmono objective function) has the highest probability of learning the Bayesian discriminant function when the functional capacity of the classifier and the available training data are both limited.

The relevance of this work to the process of designing and training robust connectionist pattern classifiers is evident if one considers the practical meaning of the terms $n_{\min}[P(\omega_{r1} \mid \mathbf{x}), P(\omega_{r2} \mid \mathbf{x})]$ and $M_{q\,\min}[P(\omega_{r1} \mid \mathbf{x}), P(\omega_{r2} \mid \mathbf{x})]$ in the sample complexity product of (21). Given one's choice of connectionist model to employ as a classifier, the $M_{q\,\min}$ term dictates the minimum necessary connectivity of that model. For example, (21) can be used to prove that a partially connected radial basis function (RBF) network with trainable variance parameters and three hidden-layer "nodes" has the minimum $M_q$ necessary for Bayesian discrimination in the 3-class task described by [5]. However, because both SCP terms are functions of the probabilistic nature of the random feature vector being classified and the learning strategy employed, that minimal RBF architecture will only yield Bayesian discrimination if trained using the differential strategy. The probabilistic strategy requires significantly more functional complexity in the RBF in order to meet the requirements of the probabilistic strategy's SCP [1].
Philosophical arguments regarding the use of the differential strategy in lieu of the more traditional probabilistic strategy are discussed at length in [1].

Acknowledgement

This research was funded by the Air Force Office of Scientific Research under grant AFOSR-89-0551. We gratefully acknowledge their support.

References

[1] J. B. Hampshire II. A Differential Theory of Statistical Pattern Recognition. PhD thesis, Carnegie Mellon University, Department of Electrical & Computer Engineering, Hammerschlag Hall, Pittsburgh, PA 15213-3890, 1992. Manuscript in progress.

[2] J. B. Hampshire II and B. A. Pearlmutter. Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function. In Touretzky, Elman, Sejnowski, and Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School, pages 159-172, San Mateo, CA, 1991. Morgan Kaufmann.

[3] J. B. Hampshire II and A. H. Waibel. A Novel Objective Function for Improved Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Neural Networks, 1(2):216-228, June 1990. A revised and extended version of work first presented at the 1989 International Joint Conference on Neural Networks, vol. I, pp. 235-241.

[4] A. N. Kolmogorov. Three Approaches to the Quantitative Definition of Information. Problems of Information Transmission, 1(1):1-7, Jan.-Mar. 1965. Faraday Press translation of Problemy Peredachi Informatsii.

[5] M. D. Richard and R. P. Lippmann. Neural Network Classifiers Estimate Bayesian a posteriori Probabilities. Neural Computation, 3(4):461-483, 1991.

[6] J. Rissanen. Modeling by Shortest Data Description. Automatica, 14:465-471, 1978.
Multimodular Architecture for Remote Sensing Operations

Sylvie Thiria(1,2), Fouad Badran(1,2), Carlos Mejia(1), Michel Crepon(3)
(1) Laboratoire de Recherche en Informatique, Universite de Paris Sud, B 490, 91405 ORSAY Cedex, France
(2) CEDRIC, Conservatoire National des Arts et Metiers, 292 rue Saint Martin, 75003 PARIS
(3) Laboratoire d'Oceanographie et de Climatologie (LODYC), T14, Universite de PARIS 6, 75005 PARIS (FRANCE)

Abstract

This paper deals with an application of neural networks to satellite remote sensing observations. Because of the complexity of the application and the large amount of data, the problem cannot be solved with a single method. The solution we propose is to build multimodule NN architectures in which several NNs cooperate. Such systems suffer from generic problems, for which we propose solutions. They make it possible to reach accurate performances for multi-valued function approximation and probability estimation. The results are compared with those of six other methods which have been used for this problem. We show that the methodology we have developed is general and can be used for a large variety of applications.

1 INTRODUCTION

Neural networks have been used for many years to solve hard real-world applications which involve large amounts of data. Most of the time, these problems cannot be solved with a single technique and involve successive processing of the input data. Sophisticated NN architectures have thus been designed to provide good performances, e.g., [Le Cun et al. 90]. However, this approach is limited for many reasons: the design of these architectures requires a lot of a priori knowledge about the task and is complicated. Such NNs are difficult to train because of their large size, and are dedicated to a specific problem. Moreover, if the task is slightly modified, these NNs have to be entirely redesigned and retrained.
It is our feeling that complex problems cannot be solved efficiently with a single NN, however sophisticated it is. A more fruitful approach is to use modular architectures in which several simple NN modules cooperate. This methodology is far more general and makes it easy to build very sophisticated architectures able to handle the different processing steps that are necessary, for example, in speech or signal processing. These architectures can be easily modified to incorporate additional knowledge about the problem or changes in its specifications. We have used these ideas to build a multi-module NN for a satellite remote sensing application. This is a hard problem which cannot be solved by a single NN. The different modules of our architecture are thus dedicated to specific tasks and perform successive processing steps on the data. This approach allows different pieces of information about the problem to be taken into account in successive steps. Furthermore, errors which may occur at the output of some modules may be corrected by others, which allows very good performances to be reached. Making these different modules cooperate raises several problems which appear to be generic for these architectures. It is thus interesting to study different solutions for their design, their training, and the efficient exchange of information between modules. In the present paper, we first briefly describe the geophysical problem and its difficulties; we then present the different modules of our architecture and their cooperation; finally, we compare our results to those of several other methods and discuss the advantages of our approach.

2 THE GEOPHYSICAL PROBLEM

Scatterometers are active microwave radars which accurately measure the power of transmitted and backscattered signal radiations in order to compute the normalized radar cross section (σ0) of the ocean surface.
The σ0 depends on the wind speed, the incidence angle θ (the angle between the radar beam and the vertical at the illuminated cell) and the azimuth angle (the horizontal angle χ between the wind and the antenna of the radar). An empirically based relationship between σ0 and the local wind vector can be established, which leads to the determination of a geophysical model function. The model developed by A. Long gives a more precise form to this functional. It has been shown that for an angle of incidence θ, the general expression for σ0 can be satisfactorily represented by a Fourier series:

$\sigma_0 = U \cdot (1 + b_1 \cos \chi + b_2 \cos 2\chi)$   (1)

with $U = A \cdot v^{\gamma}$. Long's model specifies that A and γ depend only on the angle of incidence θ, and that b1 and b2 are functions of both the wind speed v and the angle of incidence θ (Figure 1).

Figure 1: Definition of the different geophysical scales.

For now, the different parameters b1, b2, A and γ used in this model are determined experimentally. Conversely, it becomes possible to compute the wind direction by using several antennas with different orientations with respect to the satellite track. The geophysical model function (1) can then be inverted using the three measurements of σ0 given by the three antennas; the inversion yields the wind vector (direction and speed). Evidence shows that for a given trajectory within the swath (Figure 1), i.e., (θ1, θ2, θ3) fixed, θi being the incidence angle of the beam linked to antenna i, the functional F is of the form presented in Fig. 2. In the absence of noise, the determination of the wind direction would be unique in most cases. Noise-free ambiguities arise due to the bi-harmonic nature of the model function with respect to χ. The functional F presents singular points. At constant wind speed F yields a Lissajous curve; at the singular points the direction is ambiguous with respect to the triplet of measurements (σ1, σ2, σ3), as is seen in Fig. 2.
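Equation (1) is straightforward to evaluate. In the sketch below the harmonic form follows the equation, while the coefficient values (and their independence from θ and v) are purely illustrative assumptions, since the paper's A, γ, b1 and b2 are empirically determined:

```python
import math

def sigma0(v, chi_deg, A=0.01, gamma=1.5, b1=0.3, b2=0.6):
    """Normalized radar cross section per equation (1):
    sigma0 = U * (1 + b1*cos(chi) + b2*cos(2*chi)), with U = A * v**gamma.
    All coefficient values here are illustrative placeholders."""
    chi = math.radians(chi_deg)
    U = A * v ** gamma
    return U * (1.0 + b1 * math.cos(chi) + b2 * math.cos(2.0 * chi))

# a triplet of measurements, one per antenna azimuth, for a 10 m/s wind
triplet = [sigma0(10.0, chi) for chi in (45.0, 90.0, 135.0)]
```

Note that the cosine harmonics make σ0 an even function of χ, which is one source of the directional ambiguities the paper describes.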
At these points F yields two directions differing by 180°. In practice, since the backscattered signal is noisy, the number and the frequency of ambiguities are increased.

Figure 2: (a) Representation of the functional F for a given trajectory. (b) Graphics obtained for a section of (a) at constant wind speed.

The problem is therefore how to construct an accurate (exact) wind map from the observed measurements (σ1, σ2, σ3).

3 THE METHOD

We propose to use multi-layered quasi-linear networks (MLPs) to carry out this inversion phase. Indeed, these nets are able to approximate complex non-linear functional relations; it becomes possible, by using a set of measurements, to determine F and to carry out the inversion. The determination of the wind's speed and direction leads to two problems of different complexity; each of them is solved using a dedicated multi-modular system. The two modules are then linked together to build a two-level architecture. To take into account the strong dependence of the measurements on the trajectory, each module (or level) consists of n distinct but similar systems, a specific system being dedicated to each satellite trajectory (n being the number of trajectories in a swath (Figure 1)). The first level allows the determination of the wind speed at every point of the swath. The results obtained are then supplied to the second level as supplementary data which allow the wind direction to be computed. Thus, we propose a two-level architecture which constitutes an automatic method for the computation of wind maps (Figure 3). The computation is performed sequentially between the different levels, each one supplying the next with the parameters needed. Owing to the spatial variability of the wind, the measurements at a point are closely related to those performed in the neighbourhood.
Taking this context into account must therefore bring important supplementary information to de-alias the ambiguities. At a point, the input data for a given system are therefore the measurements observed at that point and at its eight closest neighbours. All the networks used by the different systems are MLPs trained with the back-propagation algorithm. The successive weight modifications were performed using a second-order stochastic gradient rule, which approximates the Levenberg-Marquardt rule.

Figure 3: The three systems S1, S2 and S3 for a given trajectory (level 1: wind speed computation; level 2: wind direction computation; level 3: correction of ambiguities). One system is dedicated to each trajectory.

As a result, the networks used on the same level of the global architecture are of the same type; only the numerical values of the learning set change from one system to another. Each network's learning set therefore consists of the data measured on its trajectory. We present here the results for the central trajectory; performances for the others are similar.

3.1 THE NETWORK DECODING: FIRST LEVEL

A system (S1) in the first level computes the wind speed (in m/s) along a trajectory. Because the function F1 to be learned (signal → wind speed) is highly nonlinear, each system is made of three networks (see Figure 3): R1 decides the range of the wind speed (4 ≤ v < 12 or 12 ≤ v < 20); according to the R1 output, an accurate value is computed using R2 for the first range and R3 for the other. The first level is built from 10 of these systems (one for each trajectory). Each network (R1, R2, R3) consists of four fully connected layers. For a given point, we have introduced the knowledge of the radar measurements at the neighbouring points.
The same experiments were performed without introducing this notion of vicinity; the learning and test performances were reduced by 17%, which proves the advantage of this approach. The input layer of each network consists of 27 automata: these 9x3 automata correspond to the σ0 values relative to each antenna for the point considered and its eight neighbours. The R1 output layer has two cells: one for 4 ≤ v < 12 and the other for 12 ≤ v < 20; its 4 layers are thus respectively built of 27, 25, 25 and 2 automata. R2 and R3 compute the exact wind speed. Their output layer is a single output automaton which codes the wind speed v at the point considered, scaled to [-1, +1]. The four layers of each network are respectively formed of 27, 25, 25 and 1 automata.

3.2 DECODING THE DIRECTION: SECOND LEVEL

Now the function F2 (signal → wind direction) has to be learned. This level is located after the first one, so the wind speed has already been computed at all points. For each trajectory a system S2 computes the wind direction; it is made of an MLP and a Decision Direction Process (we call it D). As for F1, we used contextual information for each point. Thus, the input layer of the MLP consists of 30 automata: the first 9x3 correspond to the σ0 values for each antenna, and the last three represent three copies of the wind speed computed by the first level. However, because the original function has major ambiguities, it is more convenient to compute, for a given input, several output values with their probabilities. For this reason we have discretized the desired output. It has been coded in degrees and 36 possible classes have been considered, each representing a 10° interval (between 0° and 360°). So, the MLP is four-layered with respectively 30, 25, 25 and 36 automata.
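The first-level routing just described can be sketched with placeholder networks. The layer sizes follow the text, but the random weights and the rescaling of a [0, 1] sigmoid output to a speed are stand-in assumptions (the paper codes the speed on [-1, +1] with trained weights):

```python
import numpy as np

def mlp_forward(x, sizes, rng):
    """Random-weight stand-in for a fully connected sigmoid MLP."""
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.standard_normal((n_out, n_in)) * 0.1
        x = 1.0 / (1.0 + np.exp(-(W @ x)))
    return x

def first_level_speed(sigma0_inputs, rng):
    """System S1: R1 picks the speed range, then R2 or R3 refines it.
    sigma0_inputs holds 27 values: 9 points x 3 antennas."""
    range_out = mlp_forward(sigma0_inputs, [27, 25, 25, 2], rng)  # R1
    if range_out[0] >= range_out[1]:            # range 4 <= v < 12
        y = mlp_forward(sigma0_inputs, [27, 25, 25, 1], rng)      # R2
        return 4.0 + 8.0 * float(y[0])
    y = mlp_forward(sigma0_inputs, [27, 25, 25, 1], rng)          # R3
    return 12.0 + 8.0 * float(y[0])

rng = np.random.default_rng(0)
v = first_level_speed(np.full(27, 0.5), rng)
```

Whatever the (placeholder) weights, the routing guarantees the returned speed lies in the selected range.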
It can be shown, according to the coding of the desired output, that the network approximates the Bayes discriminant function, or the Bayes probability distribution related to the discretized transfer function F2 [White, 89]. The interpretation of the MLP outputs using the D process allows the required function F2 to be computed with accuracy. The network outputs represent the 36 classes corresponding to the 36 10° intervals. For a given input, a computed output is an R^36 vector whose components can be interpreted to predict the wind direction in degrees. Each component, which is an approximation of a Bayes discriminant function, can be used as a coefficient of likelihood for each class. The Decision Direction Process D (see Fig. 3) computes real directions using this information. It performs the interpolation of the peaks' curve. For each peak, D gives its wind direction with its coefficient of likelihood.

Figure 4: Network output. The points on the x-axis correspond to the 36 outputs, each representing an interval of 10° between 0° and 360°. The y-axis gives the computed outputs of the automata. The point indicated by d corresponds to the desired output angle; the most likely solution proposed by D and the second one, p, are also marked.

The wind speed and the most likely wind direction computed by the first two levels allow a complete map to be built, which still includes errors in the directions. As we have seen in section 2, the physical problem has intrinsic ambiguities, and they appear in the results (table 2). The removal of these errors is done by a third level of NN.
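The 10° output coding and the selection of the two best peaks can be sketched as follows (decoding a class to its bin center is an assumption for illustration; the paper's D process interpolates the peaks' curve instead):

```python
def direction_to_class(angle_deg):
    """Map a wind direction in degrees to one of 36 classes of 10 degrees."""
    return int(angle_deg % 360.0) // 10

def top_two_directions(outputs):
    """Return the two most likely (bin-center direction, likelihood)
    pairs from the 36 network outputs, best first."""
    ranked = sorted(range(36), key=lambda c: outputs[c], reverse=True)
    return [(10 * c + 5, outputs[c]) for c in ranked[:2]]
```

For example, a desired direction of 237° falls into class 23, and a 36-vector peaking at classes 7 and 25 decodes to the two candidate directions 75° and 255°.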
The use of different filters taking into account the 5x5 vicinity of the point considered permits detection of the erroneous directions and a choice among the alternative proposed solutions. This method enables up to 99.5% of the errors to be corrected.

4 RESULTS

As actual data do not exist yet, we have tested the method on values computed from real meteorological models. The swaths of the scatterometer ERS-1 were simulated by flying a satellite over wind fields given by the ECMWF forecasting model. The sea roughness values (σ1, σ2, σ3) given by the three antennas were computed by inverting the Long model. Noise was then added to the simulated measurements in order to reproduce the errors made by the scatterometer (a Gaussian noise of zero mean and of standard deviation 9.5% for both lateral antennas and 8.7% for the central antenna was added to each measurement). Twenty-two maps obtained for the southern Atlantic Ocean were used to establish the learning sets. The 22 maps were selected randomly during the 30 days of September 1985, and the nine remaining maps were used for the tests.

4.1 DECODING THE SPEED: FIRST LEVEL

In the results presented in Table 1, a predicted measurement is considered correct if it differs from the desired output by less than 1 m/s. It should be noted that the oceanographers' specification is 2 m/s; the present results illustrate the precision of the method.

Table 1: Performances on the wind speed (accuracy 1 m/s).

          performance   bias
learning  99.3%         0.045 m/s
test      98.4%         0.038 m/s

4.2 DECODING THE DIRECTION: SECOND LEVEL

It is found that good performances are obtained after the interpretation of the best two peaks only. When it is compared to usual methods, which propose up to six possible directions, this method appears to be very powerful. Table 2 shows the performances using one or two peaks.
The function F and its singularities have been recovered with good accuracy; the noise added during the simulations in order to reproduce the noise of the measuring devices has been removed.

Table 2: Performances on the wind direction using the complete system (precision 20°).

          one peak   two peaks
learning  68.0%      99.1%
test      72.0%      99.2%

5 VALIDATION OF THE RESULTS

In order to prove the power of the NN approach, Table 3 compares our results with six classical methods [Chi & Li 88]. Table 3 shows that the NN results are very good compared to other techniques; moreover, all the classical methods are based on the assumption that a precise analytical function ((v, χ) → σ) exists, while the NN method is more general and does not depend on such an assumption. Moreover, the decoding of a point with the NN requires approximately 23 ms on a SUN4 workstation. This time is to be compared with the 0.25 second necessary for the decoding by present methods.

Table 3: Simulation results. Erms (in m/s) for different fixed wind speeds.

Speed    WLSL   ML     LS     WLS    AWLS   L1     LWSS   NN
Low      0.92   0.66   0.67   0.74   0.69   0.63   1.02   0.49
Middle   0.89   0.85   1.10   1.31   0.89   0.98   0.87   0.53
High     3.71   3.44   4.11   5.52   3.52   4.06   3.49   1.18

The wind vector error e is defined as follows: e = v1 - v2, where v1 is the true wind vector and v2 is the estimated wind vector; Erms = E(||e||).

6 CONCLUSION

The performances reached when processing satellite remote sensing observations prove that multi-modular architectures in which simple NN modules cooperate can cope with real-world applications. The methodology we have developed is general and can be used for a large variety of applications; it provides solutions to generic problems arising when dealing with NN cooperation.
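The error measure of section 5 can be computed directly from the wind components (a small helper, assuming winds are given as (u, v) component pairs in m/s):

```python
import math

def erms(true_winds, est_winds):
    """E_rms = E(||e||), with e = v1 - v2, v1 the true wind vector
    and v2 the estimated one, as defined in section 5."""
    norms = [math.hypot(t[0] - e[0], t[1] - e[1])
             for t, e in zip(true_winds, est_winds)]
    return sum(norms) / len(norms)
```

Note that, as defined in the paper, this is the mean error-vector magnitude rather than a root-mean-square of squared magnitudes.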
References

Badran F., Thiria S., Crepon M. (1991): Wind ambiguity removal by the use of neural network techniques. Journal of Geophysical Research, vol. 96, no. C11, pp. 20521-20529, November 15.

Chi C.-Y., Li F. K. (1988): A Comparative Study of Several Wind Estimation Algorithms for Spaceborne Scatterometers. IEEE Transactions on Geoscience and Remote Sensing, vol. 26, no. 2.

Le Cun Y., Boser B., et al. (1990): Handwritten Digit Recognition with a Back-Propagation Network. In D. Touretzky (ed.), Advances in Neural Information Processing Systems 2, 396-404, Morgan Kaufmann.

White H. (1989): Learning in Artificial Neural Networks: A Statistical Perspective. Neural Computation, 1, 425-464.
Learning in the Vestibular System: Simulations of Vestibular Compensation Using Recurrent Back-Propagation

Thomas J. Anastasio
University of Illinois, Beckman Institute
405 N. Mathews Ave., Urbana, IL 61801

Abstract

Vestibular compensation is the process whereby normal functioning is regained following destruction of one member of the pair of peripheral vestibular receptors. Compensation was simulated by lesioning a dynamic neural network model of the vestibulo-ocular reflex (VOR) and retraining it using recurrent back-propagation. The model reproduced the pattern of VOR neuron activity experimentally observed in compensated animals, but only if connections heretofore considered uninvolved were allowed to be plastic. Because the model incorporated nonlinear units, it was able to reconcile previously conflicting, linear analyses of experimental results on the dynamic properties of VOR neurons in normal and compensated animals.

1 VESTIBULAR COMPENSATION

Vestibular compensation is one of the oldest and most well-studied paradigms in motor learning. Although it is neurophysiologically well described, the adaptive mechanisms underlying vestibular compensation, and its effects on the dynamics of vestibular responses, are still poorly understood. The purpose of this study is to gain insight into the compensatory process by simulating it as learning in a recurrent neural network model of the vestibulo-ocular reflex (VOR).

The VOR stabilizes gaze by producing eye rotations that counterbalance head rotations. It is mediated by brainstem neurons in the vestibular nuclei (VN) that relay head velocity signals from vestibular sensory afferent neurons to the motoneurons of the eye muscles (Wilson and Melvill Jones 1979). The VOR circuitry also processes the canal signals, stretching out their time constants by a factor of four before transmitting the signal to the motoneurons. This process of time-constant lengthening is known as velocity storage (Raphan et al. 1979).
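Section 2 below models this circuit as a three-layered recurrent network of sigmoidal units. One update step of such a network can be sketched generically as follows; the sizes match the model's 2 canal inputs, 4 vestibular-nucleus hidden units and 2 motoneuron outputs, but the weights here are random placeholders, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    """Squashing function: bounds unit responses between zero and one."""
    return 1.0 / (1.0 + np.exp(-x))

def vor_step(inputs, hidden_prev, W_in, W_rec, W_out, bias):
    """One time step: hidden (VN) units combine canal input, recurrent
    commissural feedback, and a non-vestibular bias; the outputs are
    the motoneuron commands."""
    hidden = sigmoid(W_in @ inputs + W_rec @ hidden_prev + bias)
    return hidden, sigmoid(W_out @ hidden)

rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 2))    # lhc, rhc -> hidden units
W_rec = rng.standard_normal((4, 4))   # commissural / recurrent weights
W_out = rng.standard_normal((2, 4))   # hidden -> lr, mr motoneurons
bias = rng.standard_normal(4)         # non-vestibular bias inputs
h, out = vor_step(np.array([0.5, 0.5]), np.zeros(4), W_in, W_rec, W_out, bias)
```

Whatever the weights, the squashing function keeps every unit's response strictly between zero and one, which is the nonlinearity the paper exploits.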
The VOR is a bilaterally symmetric structure that operates in push-pull. The VN are linked bilaterally by inhibitory commissural connections. Removal of the vestibular receptors from one side (hemilabyrinthectomy) unbalances the system, resulting in continuous eye movement that occurs in the absence of head movement, a condition known as spontaneous nystagmus. Such a lesion also reduces VOR sensitivity (gain) and eliminates velocity storage. Compensatory restoration of VOR occurs in stages (Fetter and Zee 1988). It begins by quickly eliminating spontaneous nystagmus, and continues by increasing VOR gain. Curiously, velocity storage never recovers. 2 NETWORK ARCHITECTURE The horizontal VOR is modeled as a three-layered neural network (Figure 1). All of the units are nonlinear, passing their weighted input sums through the sigmoidal squashing function. This function bounds unit responses between zero and one. Input units represent afferents from the left (lhc) and right (rhc) horizontal semicircular canal receptors. Output units correspond to motoneurons of the lateral (lr) and medial (mr) rectus muscles of the left eye. Interneurons in the VN are represented by hidden units on the left (lvn1, lvn2) and right (rvn1, rvn2) sides of the model brainstem. Bias units stand for non-vestibular inputs, on the left (lb) and right (rb) sides. Network connectivity reflects the known anatomy of the mammalian VOR (Wilson and Melvill Jones 1979). Vestibular commissures are modeled as recurrent connections between hidden units on opposite sides. All connection weights to the hidden units are plastic, but those to the outputs are initially fixed, because it is generally believed that synaptic plasticity occurs only at the VN level in vestibular compensation (Galiana et al. 1984). Fixed hidden-to-output weights have a crossed, reciprocal pattern.
3 TRAINING THE NORMAL NETWORK The simulations began by training the network shown in Figure 1, with both vestibular inputs intact (normal network), to produce the VOR with velocity storage (Anastasio 1991). The network was trained using recurrent back-propagation (Williams and Zipser 1989). The input and desired output sequences correspond to the canal afferent signals and motoneuron eye-velocity commands that would produce the VOR response to two impulse head rotational accelerations, one to the left and the other to the right. One input (rhc) and desired output (lr) sequence is shown in Figure 2A (dotted and dashed, respectively). Those for lhc and mr (not shown) are identical but inverted. The desired output responses are equal in amplitude to the inputs, producing VOR Figure 1. Recurrent Neural Network Model of the Horizontal Vestibulo-Ocular Reflex (VOR). lhc, rhc: left and right horizontal semicircular canal afferents; lvn1, lvn2, rvn1, rvn2: vestibular nucleus neurons on left and right sides of model brainstem; lr, mr: lateral and medial rectus muscles of left eye; lb, rb: left and right non-vestibular inputs. This and subsequent figures redrawn from Anastasio (in press). eye movements that would perfectly counterbalance head movements. The output responses decay more slowly than the input responses, reflecting velocity storage. Between head movements, both desired outputs have the same spontaneous firing rate of 0.50. With output spontaneous rates (SRs) balanced, no push-pull eye velocity command is given and, consequently, no VOR eye movement would be made. The normal network learns the VOR transformation after about 4,000 training sequence presentations (passes). The network develops reciprocal connections from input to hidden units, as in the actual VOR (Wilson and Melvill Jones 1979). Inhibitory recurrent connections form an integrating (lvn1, rvn1) and a non-integrating (lvn2,
rvn2) pair of hidden units (Anastasio 1991). The integrating pair subserve storage in the network. They have strong mutual inhibition and exert net positive feedback on themselves. The non-integrating pair have almost no mutual inhibition. 4 SIMULATING VESTIBULAR COMPENSATION After the normal network is constructed, with both inputs intact, vestibular compensation can be simulated by removing the input from one side and retraining with recurrent back-propagation. Left hemilabyrinthectomy produces deficits in the model that correspond to those observed experimentally. The responses of output unit lr acutely (i.e. immediately) following left input removal are shown in Figure 2A. The SR of lr (solid) is greatly increased above normal (dashed); that of mr (not shown) is decreased by the same amount. This output SR imbalance would result in eye movement to the left in the absence of head movement (spontaneous nystagmus). The gain of the outputs is greatly decreased. This is due to removal of one half the network input, and to the SR imbalance forcing the output units into the low gain extremes of the squashing function. Velocity storage is also eliminated by left input removal, due to events at the hidden unit level (see below). During retraining, the time course of simulated compensation is similar to that Figure 2. Simulated Compensation in the VOR Neural Network Model. Response of lr (solid) is shown at each stage of compensation: A, acutely (i.e.
immediately) following the lesion; B, after spontaneous nystagmus has been eliminated; C, after VOR gain has been largely restored; D, after full recovery of VOR. Desired response of lr (dashed) shown in all plots. Intact input from rhc (dotted) shown in A only. observed experimentally (Fetter and Zee 1988). Spontaneous nystagmus is eliminated after 200 passes, as the SRs of the output units are brought back to their normal level (Figure 2B). Output unit gain is largely restored by 900 passes, but time constant remains close to that of the inputs (Figure 2C). At this stage, VOR gain would have increased substantially, but its time constant would remain low, indicating loss of velocity storage. This stage approximates the extent of experimentally observed compensation (ibid.). Completely restoring the normal VOR, with full velocity storage, requires over seven times more retraining (Figure 2D). The responses of the hidden units during each stage of simulated compensation are shown in Figure 3A and 3C. Average hidden unit SR and gain are shown as dotted lines in Figure 3A and 3C, respectively. Acutely following left input removal (AC stage), the SRs of left (dashed) and right (solid) hidden units decrease and increase, respectively (Figure 3A). One left hidden unit (lvn1) is actually silenced. Hidden unit gain at AC stage is greatly reduced bilaterally (Figure 3C), as for the outputs. At the point where spontaneous nystagmus is eliminated (NE stage), hidden unit SRs are balanced bilaterally, and none of the units are spontaneously silent (Figure 3A). When VOR gain is largely restored (GR stage, corresponding to experimentally observed compensation), the gains of the hidden units have substantially increased (Figure 3C). At GR stage, average hidden unit SR has also increased but the bilateral SR balance has been strictly maintained (Figure 3A).
A comparison with experimental data (Yagi and Markham 1984; Newlands and Perachio 1990) reveals that the behavior of hidden units in the model does not correspond to that observed for real VN neurons in compensated animals. Rather than having bilateral SR balance, the average SR of VN neurons in compensated animals is lower on the lesion side and higher on the intact side. Moreover, many lesion-side VN neurons are permanently silenced. Also, rather than substantially recovering gain, the gains of VN neurons in compensated animals increase little from their low values acutely following the lesion. The network model adopts its particular (and unphysiological) solution to vestibular compensation because, with fixed connection weights to the outputs, compensation can be brought about only by changes in hidden unit behavior. Thus, output SRs will be balanced only if hidden SRs are balanced, and output gain will increase only if hidden gain increases. The discrepancy between model and actual VN neuron data suggests that compensation cannot rely solely on synaptic plasticity at the VN level. 5 RELAXING CONSTRAINTS A better match between model and experimental VN neuron data can be achieved by rerunning the compensation simulation with modifiable weights at all allowed network connections (Figure 1). Bias-to-output and hidden-to-output synaptic weights, which were previously fixed, are now made plastic. These extra degrees of freedom give the adapting network greater flexibility in achieving compensation, and release it from a strict dependency upon the behavior of the hidden units. The time course of compensation in the all-weights-modifiable example is similar to the previous case (Figure 2), but each stage is reached after fewer passes. Figure 3.
Behavior of Hidden Units at Various Stages of Compensation in the VOR Neural Network Model. Spontaneous rate (SR, A and B) and gain (C and D) are shown for networks with hidden layer weights only modifiable (A and C) or with all weights modifiable (B and D). Normal average SR (A and B) and gain (C and D) shown as dotted lines. NM, normal stage; AC, acutely following lesion; NE, after spontaneous nystagmus is eliminated; GR, after VOR gain is largely restored. The behavior of the hidden units in the all-weights simulation more closely matches that of actual VN neurons in compensated animals (Figure 3B and 3D). At NE stage, even though spontaneous nystagmus is eliminated, there remains a large bilateral imbalance in hidden unit SR, and one lesion-side hidden unit (lvn1) is silenced (Figure 3B). At GR stage, hidden unit gain has increased only modestly from the low acute level (Figure 3D), and the bilateral SR imbalance persists, with lvn1 still essentially spontaneously silent (Figure 3B). This modeling result constitutes a testable prediction that synaptic plasticity is occurring at the motoneuron as well as at the VN level in vestibular compensation. 6 NETWORK DYNAMICS In the all-weights simulation at GR stage, as well as in compensated animals, some lesion-side VN neurons are silenced. Hidden unit lvn1 is silenced by its inhibitory commissural interaction with rvn1, which in the normal network allowed the pair to form an integrating, recurrent loop. Silencing of lvn1 breaks the commissural loop and consequently eliminates velocity storage in the network. VN neuron silencing could also account for the loss of velocity storage in the real, compensated VOR. Loss of velocity storage in the model, in response to step head rotational acceleration stimuli, is shown in Figure 4. The output step response that would be expected given the longer VOR time constant is shown for lr in Figure 4A (dashed).
The response of mr (not shown) is identical but inverted. Instead of expressing the longer VOR time constant, the actual step response of lr in the all-weights compensated network at GR stage (Figure 4A, dotted) has a rise time constant that is equal to the canal time constant, indicating complete loss of velocity storage. This is due to the behavior of the hidden units. The step responses of the integrating pair of hidden units in the compensated network at GR stage are shown in Figure 4B (lvn1, lower dotted; rvn1, upper dotted). Velocity storage is eliminated because lvn1 is silenced, and this breaks the commissural loop that supports integration in the network. Paradoxically, in the normal network with all hidden units spontaneously active, the output step response rise time constant is also equal to that of the canal afferents, again indicating a loss of velocity storage. This is shown for lr from the normal network in Figure 4A (solid). The step responses of the hidden units in the normal network are shown in Figure 4B (lvn1, dashed; rvn1, solid). Unit lvn1, which is spontaneously active in the normal network, is quickly driven into cut-off by the step stimulus. This breaks the commissural loop and eliminates velocity storage, accounting for the short rise time constants of hidden and output units network-wide. This result can explain some conflicting experimental findings concerning the Figure 4. Responses of Units to Step Head Rotational Acceleration Stimuli in VOR Neural Network Model. A, expected response of lr with VOR time constant (dashed), and actual responses of lr in normal (solid) and all-weights compensated (dotted) networks.
B, response of lvn1 (dashed) and rvn1 (solid) in normal network, and of lvn1 (lower dotted) and rvn1 (upper dotted) in all-weights compensated network. dynamics of VN neurons in normal and compensated animals. Using sinusoidal stimuli, the time constants of VN neurons were found to be lower in compensated than in normal gerbils (Newlands and Perachio 1990). In contrast, using step stimuli, no difference in rise time constants was found for VN neurons in normal as compared to compensated cats (Yagi and Markham 1984). Rather than being a species difference, the disagreement may involve the type of stimulus used. Step accelerations are intense stimuli that can drive VN neurons to extreme levels. In response to a step in their off-directions, many VN neurons in normal cats were observed to cut off (ibid.). As shown in Figure 4, this would disrupt commissural interactions and reduce velocity storage and VN neuron rise time constants, just as if these neurons were silenced as they are in compensated animals. In fact, VN neuron rise time constants were observed to be low in both normal and compensated cats (ibid.). In contrast, sinusoidal stimuli at an intensity that does not cause widespread VN neuron cut-off would not be expected to disrupt velocity storage in normal animals. Acknowledgements This work was supported by a grant from the Whitaker Foundation. References Anastasio TJ (1991) Neural network models of velocity storage in the horizontal vestibulo-ocular reflex. Biol Cybern 64:187-196 Anastasio TJ (in press) Simulating vestibular compensation using recurrent back-propagation. Biol Cybern Fetter M, Zee DS (1988) Recovery from unilateral labyrinthectomy in rhesus monkey. J Neurophysiol 59:370-393 Galiana HL, Flohr H, Melvill Jones G (1984) A reevaluation of intervestibular nuclear coupling: its role in vestibular compensation.
J Neurophysiol 51:242-259 Newlands SD, Perachio AA (1990) Compensation of horizontal canal related activity in the medial vestibular nucleus following unilateral labyrinth ablation in the decerebrate gerbil. I. type I neurons. Exp Brain Res 82:359-372 Raphan Th, Matsuo V, Cohen B (1979) Velocity storage in the vestibulo-ocular reflex arc (VOR). Exp Brain Res 35:229-248 Williams RJ, Zipser D (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Comp 1:270-280 Wilson VI, Melvill Jones G (1979) Mammalian vestibular physiology. Plenum Press, New York Yagi T, Markham CH (1984) Neural correlates of compensation after hemilabyrinthectomy. Exp Neurol 84:98-108
1991
23
489
Repeat Until Bored: A Pattern Selection Strategy Paul W. Munro Department of Information Science University of Pittsburgh Pittsburgh, PA 15260 ABSTRACT An alternative to the typical technique of selecting training examples independently from a fixed distribution is formulated and analyzed, in which the current example is presented repeatedly until the error for that item is reduced to some criterion value, β; then, another item is randomly selected. The convergence time can be dramatically increased or decreased by this heuristic, depending on the task, and is very sensitive to the value of β. 1 INTRODUCTION In order to implement the back propagation learning procedure (Werbos, 1974; Parker, 1985; Rumelhart, Hinton and Williams, 1986), several issues must be addressed. In addition to designing an appropriate network architecture and determining appropriate values for the learning parameters, the batch size and a scheme for selecting training examples must be chosen. The batch size is the number of patterns presented for which the corresponding weight changes are computed before they are actually implemented; immediate update is equivalent to a batch size of one. The principal pattern selection schemes are independent selections from a stationary distribution (independent identically distributed, or i.i.d.) and epochal, in which the training set is presented cyclically (here, each cycle through the training set is called an epoch). Under i.i.d. pattern selection, the learning performance is sensitive to the sequence of training examples. This observation suggests that there may exist selection strategies that facilitate learning. Several studies have shown the benefit of strategic pattern selection (e.g., Mozer and Bachrach, 1990; Atlas, Cohn, and Ladner, 1990; Baum and Lang, 1991).
Typically, online learning is implemented by independent identically distributed pattern selection, which cannot (by definition) take advantage of a useful sequencing strategy. It seems likely, or certainly plausible, that the success of learning depends to some extent on the order in which stimuli are presented. An extreme, though negative, example would be to restrict learning to a portion of the available training set; i.e., to reduce the effective training set. Let sampling functions that depend on the state of the learner in a constructive way be termed pedagogical. Determination of a particular input may require information exogenous to the learner; that is, just as training algorithms have been classified as supervised and unsupervised, so can pedagogical pattern selection techniques. For example, selection may depend on the network's performance relative to a desired schedule. The intent of this study is to explore an unsupervised selection procedure (even though a supervised learning rule, backpropagation, is used). The initial selection heuristic investigated was to evaluate the errors across the entire pattern set for each iteration and to present the pattern with the highest error; of course, this technique has a large computational overhead, but the question was whether it would reduce the number of learning trials. The results were quite to the contrary; preliminary trials on small tasks (two- and three-bit parity) show that this scheme performs very poorly, with all patterns maintaining high error. A new unsupervised selection technique is introduced here. The "Repeat-Until-Bored" heuristic is easily implemented and simply stated: if the current training example generates a high error (i.e., greater than a fixed criterion value), it is repeated; otherwise, another one is randomly selected.
This approach was motivated by casual observations of behavior in small children; they seem to repeat seemingly arbitrary tasks several times, and then abruptly stop and move to some seemingly arbitrary alternative (Piaget, 1952). For the following discussion, IID and RUB will denote the two selection procedures to be compared. 2 METHODOLOGY RUB can be implemented by adding a condition to the IID statement; in C, this is simply old (IID): patno = random() % numpats; new (RUB): if (paterror < beta) patno = random() % numpats; where patno identifies the selected pattern, numpats is the number of patterns in the training set, and paterror is the sum squared error on a particular pattern. Thus, an example is presented and repeated until it has been learned by the network to some criterion level, i.e., until the squared error summed across the output units is less than a "boredom" criterion β; then, another pattern is randomly selected. The action of RUB in weight space is illustrated in Figure 1, for a two-dimensional environment consisting of just two patterns. Corresponding to each pattern, there is an isocline (or equilibrium surface), defined by the locus of weight vectors that yield the desired response to that pattern (here, a or b). Since the delta rule drives the weight parallel to the presented pattern, trajectories in weight space are perpendicular to the pattern's isocline. Here, RUB is compared with alternate pattern selection. Figure 1. Effect of pattern selection on weight state trajectory. A linear unit can be trained to give arbitrary responses (A and B) to given stimuli (a and b). The isoclines (bold lines) are defined to be the set of weights that satisfy each stimulus-response pair. Thus, the intersection is the weight state that satisfies both constraints. The delta rule drives the weights toward the isocline that corresponds to the presented pattern.
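Building on the paper's two-line C fragment, a self-contained version of the selection rule might look like this; the `train_on`/`error_on` stubs are hypothetical stand-ins for a real backpropagation step and per-pattern error evaluation, not the author's code:

```c
/* Hedged sketch of Repeat-Until-Bored (RUB) pattern selection, expanded
 * from the paper's fragment.  train_on()/error_on() are illustrative
 * stubs: a real implementation would run one backprop step and return
 * the sum squared error over the output units for that pattern. */
#include <stdlib.h>

#define NUMPATS 8           /* e.g., three-bit parity has 8 patterns */

double err[NUMPATS];        /* per-pattern squared error (stub state) */

/* Stand-in for one backprop step on pattern p: just decay its error. */
void train_on(int p)   { err[p] *= 0.5; }
double error_on(int p) { return err[p]; }

/* RUB: keep the current pattern while its error is at or above beta
 * ("not yet bored"); once it drops below beta, pick a fresh random
 * pattern, exactly as in the paper's C condition. */
int select_next(int patno, double beta)
{
    if (error_on(patno) < beta)
        patno = rand() % NUMPATS;
    return patno;
}
```

Setting beta to the maximum possible squared error recovers plain IID selection, since the condition is then always true; setting it near zero repeats the first pattern indefinitely, matching the two extremes discussed below.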
The RUB procedure repeats a pattern until the state approaches the isocline. The RUB procedure was tested for a broad range of β across several tasks. Two performance measures were used; in both cases, performance was averaged across several trials with different initial random weights. For the parity tasks, performance was measured as the fraction of trials for which the squared error summed over the training set reached a sufficiently low value (usually 0.1) within a specified number of training examples. Since the parity task always converged for sufficiently large β, performance was measured as the number of trials that converged within a pre-specified number of iterations required to reduce the total squared error summed across the pattern set to a low value (typically, 0.1). Note that each iteration of weight modification during a set of repeated examples was explicitly counted in the performance measure, so the comparison between IID and RUB is fair. Also, for each task, the learning rate and momentum were fixed (usually 0.1 and 0.9, respectively). Consideration of RUB (see the above C implementation, for example) indicates that, for very small values of β, the first example will be repeated indefinitely, and the task can therefore not be learned. At the other extreme, for β greater than or equal to the maximum possible squared error (2.0, in this case), performance should match IID. 3 RESULTS 3.1 PARITY While the expected behavior for RUB on the two- and three-bit parity tasks (Figure 2) is observed for low and high values of β, there are some surprises in the intermediate range. Rather than proceeding monotonically from zero to its IID value, the performance curve exhibits an "up-down-up" behavior; it reaches a maximum in the range 0.2 < β < 0.25, then plummets to zero at β = 0.25, remains there for an interval, then partially recovers at its final (IID) level.
This "dead zone" phenomenon is not as pronounced when the momentum parameter is set to zero (Figure 3). Figure 2. Performance profiles for the parity task. Each point is the average number of successful simulations out of 100 trials. A log scale is used so that the behavior for very low values of the error criterion is evident. Note the critical falloff at β ≈ 0.25 for both the XOR task (left) and three-bit parity (right). Figure 3. Performance profiles with zero momentum. For these two tasks, the up-down-up phenomenon is still evident, but there is no "dead zone". Left: XOR. Right: three-bit parity. 3.2 ENCODERS The 4-2-4 encoder shows no significant improvement over IID for any value of β under RUB. Here, performance was measured both in terms of success rate and average number of iterations to success. Even though all simulations converge for β > .001 (i.e., there is no dead zone), the effect of β is reflected in another performance measure: average number of iterations to convergence (Figure 4). However, experiments with the 5-2-5 encoder task show an effect. While backprop converges for all values of β (except very small values), the performance, as measured by number of pattern presentations, does show a pronounced decrement. The 8-3-8 encoder shows a significant, but less dramatic, effect. Figure 4. Encoder performance profiles (4-2-4, 5-2-5, and 8-3-8 encoders). See text. 3.3 THE MESH The mesh (Figure 5, left) is a 2-D classification task that can be solved by a strictly layered net with five hidden units.
Like the encoder and unlike parity, IID is found to converge on 100% of trials; however, there is a critical value of β and a well-defined dead zone (Figure 5, right). Note that the curve depicting average number of iterations to convergence decreases monotonically, interrupted at the dead zone but continuing its apparent trend for higher values of β. Figure 5. The mesh task. Left: the task. Right: performance profile. Number of simulations that converge is plotted along the bold line (left vertical) axis. Average number of iterations are plotted as squares (right vertical axis). 3.4 NONCONVERGENCE Nonconvergence was examined in detail for three values of β, corresponding to high performance, poor performance (the dead zone), and IID, for the three-bit parity task. The error for each of the eight patterns is plotted over time. For trials that do not converge (Figure 6), the patterns interact differently, depending on the value of β. At β = 0.05 (a "good" value of β for this task), the error traces for the four odd-parity patterns are strongly correlated in an irregular oscillatory mode, as are the four even-parity traces, but the two groups are strongly anticorrelated. In the odd-parity group, the error remains low for three of the patterns (001, 010, and 100), but ranges from less than 0.1 to values greater than 0.95 for the fourth (111). Traces for the even-parity patterns correspond almost identically; i.e., not only are they correlated, but all four maintain virtually the same value. At this point, the dead zone phenomenon has only been observed in tasks with a single output unit. This property hints at the following explanation.
Note first that each input/output pair in the training set divides the weight space into two halves, characterized by the sign of the linear activation into the output unit; that is, whether the output is above or below 0.5, and hence whether the magnitude of the difference between the actual and desired responses is above or below 0.5. Since β is the value of the squared error, learning is repeated for β = 0.25 only for examples for which the state is on the wrong half of weight space. Just when it is about to cross the category boundary, which would bring the absolute value of the error below 0.5, RUB switches to another example, and the state is not pushed to the other side of the boundary. This conjecture suggests that for tasks with multiple output units, this effect might be reduced or eliminated, as has been demonstrated in the encoder examples. Figure 6. Error traces for individual patterns. For each of three values of the error criterion, the variation of the error for each pattern is plotted for 100 iterations of the three-bit parity task that did not converge. Note the large amplitude swings for low values (upper left), and the small amplitude oscillations in the "dead zone" (upper right). 4 DISCUSSION Active learning and boredom. The sequence of training examples has an undeniable effect on learning, both in the real world and in simulated learning systems.
While the RUB procedure influences this sequence such that the learning performance is either positively or negatively affected, it is just a minimal instance of active learning; more elaborate learning systems have explored similar notions of "boredom" (e.g., Scott and Markovitch, 1989). Nonconvergence. From Figure 6 it can be seen, for both RUB and IID, that nonconvergence does not correspond to a local minimum in weight space. In situations where the overall error is "stuck" at a non-zero value, the error on the individual patterns continues to change. The weight trajectory is thus "trapped" in a nonoptimal orbit, rather than a nonoptimal equilibrium point. Acknowledgements This research was supported in part by NSF grant 00-8910368 and by Siemens Corporate Research, which kindly provided the author with financial support and a stimulating research environment during the summer of 1990. David Cohn and Rik Belew were helpful in bringing relevant work to my attention. References Baum, E. and Lang, K. (1991) Constructing multi-layer neural networks by searching input space rather than weight space. In: Advances in Neural Information Processing Systems 3. D. S. Touretzky, ed. Morgan Kaufmann. Cohn, D., Atlas, L., and Ladner, R. (1990) Training connectionist networks with queries and selective sampling. In: Advances in Neural Information Processing Systems 2. D. S. Touretzky, ed. Morgan Kaufmann. Mozer, M. and Bachrach, J. (1990) Discovering the structure of a reactive environment by exploration. In: Advances in Neural Information Processing Systems 2. D. S. Touretzky, ed. Morgan Kaufmann. Parker, D. (1985) Learning logic. TR-47. MIT Center for Computational Economics and Statistics. Cambridge, MA. Piaget, J. (1952) The Origins of Intelligence in Children. Norton. Rumelhart D., Hinton G., and Williams R. (1986) Learning representations by back-propagating errors. Nature 323:533-536. Scott, P. D. and Markovitch, S.
(1989) Uncertainty based selection of learning experiences. Sixth International Workshop on Machine Learning, pp. 358-361. Werbos, P. (1974) Beyond regression: new tools for prediction and analysis in the behavioral sciences. Unpublished doctoral dissertation, Harvard University.
1991
24
490
Refining PID Controllers using Neural Networks Gary M. Scott Department of Chemical Engineering 1415 Johnson Drive University of Wisconsin Madison, WI 53706 Jude W. Shavlik Department of Computer Sciences 1210 W. Dayton Street University of Wisconsin Madison, WI 53706 W. Harmon Ray Department of Chemical Engineering 1415 Johnson Drive University of Wisconsin Madison, WI 53706 Abstract The KBANN approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. We extend this idea further by presenting the MANNCON algorithm, by which the mathematical equations governing a PID controller determine the topology and initial weights of a network, which is further trained using backpropagation. We apply this method to the task of controlling the outflow and temperature of a water tank, producing statistically-significant gains in accuracy over both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of the network produces statistically less variation in testset accuracy when compared to networks initialized with small random numbers. 1 INTRODUCTION Research into the design of neural networks for process control has largely ignored existing knowledge about the task at hand. One form this knowledge (often called the "domain theory") can take is embodied in traditional controller paradigms. The recently-developed KBANN (Knowledge-Based Artificial Neural Networks) approach (Towell et al., 1990) addresses this issue for tasks for which a domain theory (written using simple propositional rules) is available. The basis of this approach is to use the existing knowledge to determine an appropriate network topology and initial weights, such that the network begins its learning process at a "good" starting point.
This paper describes the MANNCON (Multivariable Artificial Neural Network Control) algorithm, a method of using a traditional controller paradigm to determine the topology and initial weights of a network. The use of a PID controller in this way eliminates network-design problems such as the choice of network topology (i.e., the number of hidden units) and reduces the sensitivity of the network to the initial values of the weights. Furthermore, the initial configuration of the network is closer to its final state than it would normally be in a randomly-configured network. Thus, the MANNCON networks perform better and more consistently than the standard, randomly-initialized three-layer approach. The task we examine here is learning to control a Multiple-Input, Multiple-Output (MIMO) system. There are a number of reasons to investigate this task using neural networks. One, it usually involves nonlinear input-output relationships, which matches the nonlinear nature of neural networks. Two, there have been a number of successful applications of neural networks to this task (Bhat & McAvoy, 1990; Jordan & Jacobs, 1990; Miller et al., 1990). Finally, there are a number of existing controller paradigms which can be used to determine the topology and the initial weights of the network.

2 CONTROLLER NETWORKS

The MANNCON algorithm uses a Proportional-Integral-Derivative (PID) controller (Stephanopoulos, 1984), one of the simplest of the traditional feedback controller schemes, as the basis for the construction and initialization of a neural network controller. The basic idea of PID control is that the control action u (a vector) should be proportional to the error, the integral of the error over time, and the temporal derivative of the error. Several tuning parameters determine the contribution of these various components. Figure 1 depicts the resulting network topology based on the PID controller paradigm.
The first layer of the network, that from Y_sp (the desired process output, or setpoint) and Y(n-1) (the actual process output at the past time step), calculates the simple error e. A simple vector difference, e = Y_sp - Y, accomplishes this. The second layer, that between e, e(n-1), and ε, calculates the actual error ε to be passed to the PID mechanism. In effect, this layer acts as a steady-state pre-compensator (Ray, 1981), where ε = G_I e, and produces the current error and the error signals at the past two time steps. This compensator is a constant matrix, G_I, with values chosen so that steady-state interactions between the various control loops are eliminated. The final layer, that between ε and u(n) (controller output/plant input), calculates the controller action based on the velocity form of the discrete PID controller:

  u_C(n) = u_C(n-1) + w_C0 ε_C(n) + w_C1 ε_C(n-1) + w_C2 ε_C(n-2)

where w_C0, w_C1, and w_C2 are constants determined by the tuning parameters of the controller for that loop. A similar set of equations and constants (w_H0, w_H1, w_H2) exists for the other controller loop.

Figure 1: MANNCON network showing weights that are initialized using Ziegler-Nichols tuning parameters.

Figure 2 shows a schematic of the water tank (Ray, 1981) that the network controls. This figure also shows the controller variables (F_C and F_H), the tank output variables (F(h) and T), and the disturbance variables (F_d and T_d). The controller cannot measure the disturbances, which represent noise in the system. MANNCON initializes the weights of Figure 1's network with values that mimic the behavior of a PID controller tuned with Ziegler-Nichols (Z-N) parameters (Stephanopoulos, 1984) at a particular operating condition.
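To make the velocity-form update concrete, the following sketch (not the paper's code) computes one controller step for a single loop; the tuning constants Kc, tau_I, tau_D, and the sample time dt are hypothetical values, not the paper's Z-N settings.

```python
# Sketch of one step of the velocity-form discrete PID update:
#   u(n) = u(n-1) + w0*e(n) + w1*e(n-1) + w2*e(n-2)
def pid_velocity_step(u_prev, e_n, e_n1, e_n2, w0, w1, w2):
    """Return the new controller action u(n) for one loop."""
    return u_prev + w0 * e_n + w1 * e_n1 + w2 * e_n2

# The weights follow from the standard discrete PID tuning constants
# (gain Kc, reset time tau_I, derivative time tau_D, sample time dt);
# these particular numbers are hypothetical.
Kc, tau_I, tau_D, dt = 2.0, 5.0, 0.5, 1.0
w0 = Kc * (1 + dt / tau_I + tau_D / dt)   # 3.4
w1 = -Kc * (1 + 2 * tau_D / dt)           # -4.0
w2 = Kc * tau_D / dt                      # 1.0
u = pid_velocity_step(0.0, 0.1, 0.05, 0.0, w0, w1, w2)  # ~0.14
```

Because the update is incremental, a constant zero error leaves the control action unchanged, which is the usual appeal of the velocity form.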
Using the KBANN approach (Towell et al., 1990), it adds weights to the network such that all units in a layer are connected to all units in all subsequent layers, and initializes these weights to small random numbers several orders of magnitude smaller than the weights determined by the PID parameters. We scaled the inputs and outputs of the network to be in the range [0,1]. Initializing the weights of the network in the manner given above assumes that the activation functions of the units in the network are linear, that is, o_j = Σ_i w_ji o_i.

Figure 2: Stirred mixing tank requiring outflow and temperature control. The figure labels the cold stream F_C, the hot stream F_H (at T_H), the disturbances F_d and T_d, and the output F(h), T (T = temperature, F = flow rate).

Table 1: Topology and initialization of networks.
  Network                      Topology                   Weight Initialization
  1. Standard neural network   3-layer (14 hidden units)  random
  2. MANNCON network I         PID topology               random
  3. MANNCON network II        PID topology               Z-N tuning

The strength of neural networks, however, lies in their having nonlinear (typically sigmoidal) activation functions. For this reason, the MANNCON system initially sets the weights (and the biases of the units) so that the linear response dictated by the PID initialization is approximated by a sigmoid over the output range of the unit. For units that have outputs in the range [-1,1], the activation function becomes

  o_j = 2 / (1 + exp(-2.31 Σ_i w_ji o_i)) - 1

where w_ji are the linear weights described above. Once MANNCON configures and initializes the weights of the network, it uses a set of training examples and backpropagation to improve the accuracy of the network. The weights initialized with PID information, as well as those initialized with small random numbers, change during backpropagation training.

3 EXPERIMENTAL DETAILS

We compared the performance of three networks that differed in their topology and/or their method of initialization.
Table 1 summarizes the network topology and weight initialization method for each network. In this table, "PID topology" is the network structure shown in Figure 1. "Random" weight initialization sets all weights to small random numbers centered around zero. We also compare these networks to a (non-learning) PID controller.

Table 2: Range and average duration of setpoints for experiments.
  Experiment   Training Set              Testing Set
  1            [0.1,0.9], 22 instances   [0.1,0.9], 22 instances
  2            [0.1,0.9], 22 instances   [0.1,0.9], 80 instances
  3            [0.4,0.6], 22 instances   [0.1,0.9], 80 instances

We trained the networks using backpropagation over a randomly-determined schedule of setpoint (Y_sp) and disturbance (d) changes that did not repeat. The setpoints, which represent the desired output values that the controller is to maintain, are the temperature and outflow of the tank. The disturbances, which represent noise, are the inflow rate and temperature of a disturbance stream. The magnitudes of the setpoints and the disturbances formed a Gaussian distribution centered at 0.5. The number of training examples between changes in the setpoints and disturbances was exponentially distributed. We performed three experiments in which the characteristics of the training and/or testing set differed. Table 2 summarizes the range of the setpoints, as well as their average duration, for each data set in the experiments. As can be seen, in Experiment 1 the training and testing sets were qualitatively similar; in Experiment 2 the test set had longer-duration setpoints; and in Experiment 3 the training set was restricted to a subrange of the testing set. We periodically interrupted training and tested the network. Results are averaged over 10 runs (Scott, 1991). We used the error at the output of the tank (y in Figure 1) to determine the network error (at u) by propagating the error backward through the plant (Psaltis et al., 1988).
In this method, the error signal at the input to the tank is given by

  δ_ui = f'(net_ui) Σ_j δ_yj (∂y_j / ∂u_i)

where δ_yj represents the simple error at the output of the water tank and δ_ui is the error signal at the input of the tank. Since we used a model of the process and not a real tank, we can calculate the partial derivatives from the process-model equations.

4 RESULTS

Figure 3 compares the performance of the three networks for Experiment 1. As can be seen, the MANNCON networks show an increase in correctness over the standard neural network approach. Statistical analysis of the errors using a t-test shows that they differ significantly at the 99.5% confidence level. Furthermore, while the difference in performance between MANNCON network I and MANNCON network II is not significant, the difference in the variance of the testing error over different runs is significant (99.5% confidence level). Finally, the MANNCON networks perform significantly better (99.95% confidence level) than the non-learning PID controller.

Figure 3: Mean square error of networks on the testset as a function of the number of training instances presented for Experiment 1 (1 = standard neural network, 2 = MANNCON network I, 3 = MANNCON network II, 4 = PID controller, non-learning).

The performance of the standard neural network represents the best of several trials with a varying number of hidden units ranging from 2 to 20. A second observation from Figure 3 is that the MANNCON networks learned much more quickly than the standard neural-network approach. The MANNCON networks required significantly fewer training instances to reach a performance level within 5% of their final error rate. For each of the experiments, Table 3 summarizes the final mean error, as well as the number of training instances required to achieve a performance within 5% of this value.
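The backward-through-the-plant rule above, δ_ui = f'(net_ui) Σ_j δ_yj ∂y_j/∂u_i, can be sketched in a few lines. This is our illustration, not the paper's code; the function and variable names are hypothetical, and the plant Jacobian would come from the process-model equations.

```python
# Sketch of propagating the tank output error backward through a plant
# model: delta_u[i] = f'(net_u[i]) * sum_j delta_y[j] * (dy_j / du_i).
def input_error_signals(net_u, delta_y, jacobian, fprime):
    """jacobian[j][i] holds the plant partial derivative dy_j/du_i."""
    return [
        fprime(net_u[i]) * sum(delta_y[j] * jacobian[j][i]
                               for j in range(len(delta_y)))
        for i in range(len(net_u))
    ]

# With linear output units (f' = 1) and an identity plant Jacobian,
# the output errors pass through to the inputs unchanged.
delta_u = input_error_signals(
    net_u=[0.0, 0.0],
    delta_y=[1.0, 2.0],
    jacobian=[[1.0, 0.0], [0.0, 1.0]],
    fprime=lambda net: 1.0,
)  # [1.0, 2.0]
```

A nonlinear plant simply supplies a state-dependent Jacobian at each time step; the rest of the computation is unchanged.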
In Experiments 2 and 3 we again see a significant gain in correctness of the MANNCON networks over both the standard neural network approach (99.95% confidence level) and the non-learning PID controller (99.95% confidence level). In these experiments, the MANNCON network initialized with Z-N tuning also learned significantly more quickly (99.95% confidence level) than the standard neural network.

5 FUTURE WORK

One question is whether the introduction of extra hidden units into the network would improve the performance by giving the network "room" to learn concepts that are outside the given domain theory. The addition of extra hidden units, as well as the removal of unneeded units, is an area with much ongoing research.

Table 3: Comparison of network performance.

  Method                         Mean Square Error     Training Instances
  Experiment 1
  1. Standard neural network     0.0103 ± 0.0004       25,200 ± 2,260
  2. MANNCON network I           0.0090 ± 0.0006        5,000 ± 3,340
  3. MANNCON network II          0.0086 ± 0.0001          640 ±   200
  4. PID control (Z-N tuning)    0.0109
  5. Fixed control action        0.0190
  Experiment 2
  1. Standard neural network     0.0118 ± 0.00158      14,400 ± 3,150
  2. MANNCON network I           0.0040 ± 0.00014      12,000 ± 3,690
  3. MANNCON network II          0.0038 ± 0.00006       2,080 ±   300
  4. PID control (Z-N tuning)    0.0045
  5. Fixed control action        0.0181
  Experiment 3
  1. Standard neural network     0.0112 ± 0.00013      25,200 ± 2,360
  2. MANNCON network I           0.0039 ± 0.00008      25,000 ± 1,550
  3. MANNCON network II          0.0036 ± 0.00006       9,400 ± 1,180
  4. PID control (Z-N tuning)    0.0045
  5. Fixed control action        0.0181

The "±" indicates that the true value lies within these bounds at a 95% confidence level. The values given for fixed control action (5) represent the errors resulting from fixing the control actions at a level that produces outputs of [0.5, 0.5] at steady state.

"Ringing" (rapid changes in controller actions) occurred in some of the trained networks.
A future enhancement of this approach would be to create a network architecture that prevents this ringing, perhaps by limiting the changes in the controller actions to some relatively small values. Another important goal is the application of this approach to other real-world processes. The water tank in this project, while illustrative of the approach, was quite simple. Much more difficult problems (such as those containing significant time delays) exist and should be explored. There are several other controller paradigms that could be used as a basis for network construction and initialization. There are several different digital controllers, such as Deadbeat or Dahlin's (Stephanopoulos, 1984), that could be used in place of the digital PID controller used in this project. Dynamic Matrix Control (DMC) (Pratt et al., 1980) and Internal Model Control (IMC) (Garcia & Morari, 1982) are also candidates for consideration. Finally, neural networks are generally considered to be "black boxes," in that their inner workings are completely uninterpretable. Since the neural networks in this approach are initialized with information, it may be possible to interpret the weights of the network and extract useful information from the trained network.

6 CONCLUSIONS

We have described the MANNCON algorithm, which uses the information from a PID controller to determine a relevant network topology without resorting to trial-and-error methods. In addition, the algorithm, through initialization of the weights with prior knowledge, gives the backpropagation algorithm an appropriate direction in which to continue learning.
Finally, we have shown that using the MANNCON algorithm significantly improves the performance of the trained network in the following ways:

  • Improved mean testset accuracy
  • Less variability between runs
  • Faster rate of learning
  • Better generalization and extrapolation ability

Acknowledgements

This material is based upon work partially supported under a National Science Foundation Graduate Fellowship (to Scott), Office of Naval Research Grant N00014-90J-1941, and National Science Foundation Grants IRI-9002413 and CPT-8715051.

References

Bhat, N. & McAvoy, T. J. (1990). Use of neural nets for dynamic modeling and control of chemical process systems. Computers and Chemical Engineering, 14, 573-583.
Garcia, C. E. & Morari, M. (1982). Internal model control: 1. A unifying review and some new results. I&EC Process Design & Development, 21, 308-323.
Jordan, M. I. & Jacobs, R. A. (1990). Learning to control an unstable system with forward modeling. In Advances in Neural Information Processing Systems (Vol. 2, pp. 325-331). San Mateo, CA: Morgan Kaufmann.
Miller, W. T., Sutton, R. S., & Werbos, P. J. (Eds.) (1990). Neural networks for control. Cambridge, MA: MIT Press.
Pratt, D. M., Ramaker, B. L., & Cutler, C. R. (1980). Dynamic matrix control method. Patent 4,349,869, Shell Oil Company.
Psaltis, D., Sideris, A., & Yamamura, A. A. (1988). A multilayered neural network controller. IEEE Control Systems Magazine, 8, 17-21.
Ray, W. H. (1981). Advanced process control. New York: McGraw-Hill, Inc.
Scott, G. M. (1991). Refining PID controllers using neural networks. Master's project, University of Wisconsin, Department of Computer Sciences.
Stephanopoulos, G. (1984). Chemical process control: An introduction to theory and practice. Englewood Cliffs, NJ: Prentice Hall, Inc.
Towell, G., Shavlik, J., & Noordewier, M. (1990). Refinement of approximate domain theories by knowledge-based neural networks. In Eighth National Conference on Artificial Intelligence (pp. 861-866).
Menlo Park, CA: AAAI Press.
A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm

N.E. Berthier, S.P. Singh, A.G. Barto, Department of Computer Science, University of Massachusetts, Amherst, MA 01002
J.C. Houk, Department of Physiology, Northwestern University Medical School, Chicago, IL 60611

Abstract

A neurophysiologically-based model is presented that controls a simulated kinematic arm during goal-directed reaches. The network generates a quasi-feedforward motor command that is learned using training signals generated by corrective movements. For each target, the network selects and sets the output of a subset of pattern generators. During the movement, feedback from proprioceptors turns off the pattern generators. The task facing individual pattern generators is to recognize when the arm reaches the target and to turn off. A distributed representation of the motor command that resembles population vectors seen in vivo was produced naturally by these simulations.

1 INTRODUCTION

We have recently begun to explore the properties of sensorimotor networks with architectures inspired by the anatomy and physiology of the cerebellum and its interconnections with the red nucleus and the motor cortex (Houk, 1989; Houk et al., 1990). It is widely accepted that these brain regions are important in the control of limb movements (Kuypers, 1981; Ito, 1984), although relatively little attention has been devoted to probing how the different regions might function together in a cooperative manner. Starting from a foundation of known anatomical circuitry and the results of microelectrode recordings from neurons in these circuits, we proposed the concept of rubrocerebellar and corticocerebellar information processing modules that are arranged in parasagittal arrays and function as adjustable pattern generators (APGs) capable of the storage, recall, and execution of motor programs.
The aim of the present paper is to extend the APG Model to a multiple degree-of-freedom task and to investigate how the motor representation developed by the model compares to the population vector representations seen by Georgopoulos and coworkers (e.g., Georgopoulos, 1988). A complete description of the model and the simulations reported here is contained in Berthier et al. (1991).

2 THE APG ARRAY MODEL

As shown in Figure 1, the model has three parts: a neural network that generates control signals, a muscle model that controls joint angle, and a planar, kinematic arm. The control network is an array of APGs that generate signals that are fed to the limb musculature. Because here we are interested in the basic issue of how a collection of APGs might cooperatively control multiple degree-of-freedom movements, we use a very simplified model of the limb that ignores dynamics. The muscles convert APG activity to changes in muscle length, which determine the changes in the joint angles. Activation of an APG causes movement of the arm in a direction in joint-angle space that is specific to that APG(1), and the magnitude of an APG's activity determines the velocity of that movement. The simultaneous activation of selected APGs determines the arm trajectory as the superposition of these movements. A learning rule, based on long-term depression (e.g., Ito, 1984), adjusts the subsets of APGs that are selected, as well as characteristics of their activity, in order to achieve desired movements. Each APG consists of a positive feedback loop and a set of Purkinje cells (PCs). The positive feedback loop is a highly simplified model of a component of a complex cerebrocerebellar recurrent network. In the simplified model simulated here, each APG has its own feedback loop, and the loops associated with different APGs do not interact. When triggered by sufficiently strong activation, the neurons in these loops fire repetitively in a self-sustaining manner.
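The superposition of APG contributions described above can be sketched in a few lines. This is our illustration, not the paper's code; the activities and direction-of-action vectors are hypothetical.

```python
# Sketch of the superposition rule: each active APG contributes its
# fixed direction of action in joint-angle space, scaled by its
# activity; the arm's joint-angle velocity is the vector sum.
def joint_velocity(activities, directions):
    n_joints = len(directions[0])
    v = [0.0] * n_joints
    for a, d in zip(activities, directions):
        for k in range(n_joints):
            v[k] += a * d[k]
    return v

# Two hypothetical APGs for a two-joint arm: one acting mainly on the
# shoulder angle, one on the elbow angle; activity sets the speed.
v = joint_velocity([1.0, 0.5], [[1.0, 0.0], [0.0, 2.0]])  # [1.0, 1.0]
```

Turning an APG off simply drops its term from the sum, which is why the asynchronous termination of APGs produces piecewise-linear trajectories.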
An APG's motor command is generated through the action of its PCs, which inhibit and modulate the buildup of activity in the feedback loop. The activity of loop cells is conveyed to spinal motor areas by rubrospinal fibers. PCs receive information that specifies and constrains the desired movements via parallel fibers. We hypothesize that the response of PCs to particular parallel fiber inputs is adaptively adjusted through the influence of climbing fibers that respond to corrective movements (Houk & Barto, 1991). The APG array model assumes that climbing fibers and PCs are aligned in a way that climbing fibers provide specialized information to PCs.

(1) To simplify these initial simulations, we ignore changes in muscle moment arms with the posture of the arm.

Figure 1: APG Control of Joint Angles. A collection of APGs (adjustable pattern generators) is connected to a simulated two degree-of-freedom, kinematic, planar arm with antagonistic muscles at each joint. The task is to move the arm in the plane from a central starting location to one of eight symmetrically placed targets. Activation of an APG causes a movement of the arm that is specific to that APG, and the magnitude of an APG's activity determines the velocity of that movement. The simultaneous activation of selected APGs determines the arm trajectory as a superposition of these movements.

Gellman et al. (1985) showed that proprioceptive climbing fibers are inhibited during planned movements, but the data of Gilbert and Thach (1977) suggest that they fire during corrective movements. In the present simulations, we assume that corrective movements are made when a movement fails to reach the target.
These corrective movements stimulate proprioceptive climbing fibers, which provide information to higher centers about the direction of the corrective movement. More detailed descriptions of APGs and the relevant anatomy and physiology can be found in Houk (1989), Houk et al. (1990), and Berthier et al. (1991).

The generation of motor commands occurs in three phases. In the first phase, we assume that all positive feedback loops are off, and inputs provided by teleceptive and proprioceptive parallel fibers and basket cells determine the outputs of the PCs. We call this first phase selection. We assume that noise is present during the selection process, so that individual PCs are turned off (i.e., selected) probabilistically. To begin the second phase, called the execution phase, loop activity is triggered by cortical activity. Once triggered, loop activity is self-sustaining because the loop cells have reciprocal positive connections. The triggering of loop activity causes the motor command to be "read out." The states of the PCs in the selection phase determine the speed and direction of the arm movement. As the movement is being performed, proprioceptive feedback and efference copy gradually depolarize the PCs. When a large proportion of the PCs are depolarized, PC inhibition reaches a critical value and terminates loop activity. In the third phase, the correction phase, corrective movements trigger climbing fiber activity that alters parallel fiber-PC connection weights.

Figure 2: A. Movement Trajectories After Training. The starting point for each movement is the center of the workspace, and the target location is the center of the open square.
The position of the arm at each time step is shown as a dot. Three movements are shown to each target. B. APG Selection. APG selection for movements to a given target is illustrated by a vector plot at the position of the target. An individual APG is represented by a vector, the direction of which is equal to the direction of movement caused by that APG in Cartesian space. The vector length is proportional to the output of the Purkinje cells during the selection phase. The arrow points in the direction of the vector sum.

3 SIMULATIONS

We trained the APG model to control a two degree-of-freedom, kinematic, planar arm. The task was similar to that of Georgopoulos (1988) and required the APGs to move the arm from a central starting point to one of eight radially symmetric, equidistant targets. Each simulated trial started by placing the endpoint of the arm in the central starting location. The selection, execution, and correction phases of operation were then simulated. The task facing each of the selected APGs was to turn off at the proper time so that the movement stopped at the target. Simulations showed that the model could learn to control movements to the eight targets. Training typically required about 700 trials per target until the arm endpoint was consistently moved to within 1 cm of the target. Figure 2 shows sample trajectories and population vectors of APG activity. Performance never resulted in precise movements, due to the probabilistic nature of selection. Movement trajectories tended to follow straight lines in joint-angle space and were thus slightly curved lines in the workspace. About half of the APGs in the model were used to move to an individual target, with population vectors similar to those seen by Georgopoulos (1988).
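As an illustration of how such a population vector is read out, the sketch below draws each APG as a vector along its direction of action with a cosine-tuned length and sums the contributions. This is our own sketch, not the paper's code; in particular, clipping negative cosine values to zero (so that only roughly half the APGs contribute, as in the simulations) is our assumption about the selection rule.

```python
import math

# Sketch of a population-vector readout: each APG contributes a vector
# along its direction of action, with length given by cosine tuning
# against the target direction; negative lengths are clipped to zero
# (our assumption), so unselected APGs contribute nothing.
def population_vector(apg_angles, target_angle):
    px = py = 0.0
    for th in apg_angles:
        length = max(0.0, math.cos(th - target_angle))
        px += length * math.cos(th)
        py += length * math.sin(th)
    return px, py

# Eight hypothetical APGs with evenly spaced directions of action.
angles = [k * math.pi / 4 for k in range(8)]
px, py = population_vector(angles, 0.0)
```

With the target in direction 0, the vector sum (px, py) lies along the positive x axis: the individual off-axis contributions cancel, which is the sense in which the population vector "points at" the target.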
The number of APGs used for each target depended on the sharpness of the climbing fiber receptive fields, with cardioid-shaped receptive fields in joint-angle space giving population vectors that most resembled those experimentally observed.

4 ANALYSIS

In order to understand how the model worked, we undertook a theoretical analysis of its simulated behavior. The analysis indicated that the expected trajectory of a movement was a straight line in joint-angle space from the starting position to the target. This is a special case of a mathematical result by Mussa-Ivaldi (1988). Because selection is probabilistic in the APG Array Model, trajectories in the workspace varied from the expected trajectory. In these cases, trajectories were piecewise linear because of the asynchronous termination of APG activity. Because of the Law of Large Numbers, the more PCs in each APG, the more closely the movement will resemble the expected movement. The expected population of vectors of APG activity can be shown to be cosine-shaped in joint-angle space. That is, the length of the vector representing the activity of APG m is proportional to the cosine of the angle between the direction of action of APG m and the direction of the target in joint-angle space. The shape of the population vectors in Cartesian space depends on the Jacobian of the arm, which is a function of the arm posture. The manner in which the outputs of PCs are set during selection leads to scaling of movement velocity with target distance: for any given movement direction, targets that are farther from the starting location lead to more rapid movements than closer targets. Updating network weights based on the expected corrective movement will, in some cases, change the weights in a way that makes them converge to the correct values. In other cases, however, inappropriate changes are made.
In the current simulations, we could largely avoid this problem by selecting parameter and initial weight values so that movements were initially small in amplitude. Random initialization of the weight values sometimes led to instances from which the learning rule could not recover.

5 DISCUSSION

In general, the present implementation of the model led to adequate control of the kinematic arm and mimicked the general output of the nervous system seen in actual experiments. The network implemented a spatial-to-temporal transformation that converted a target location into a time-varying motor command. The model naturally generated population vectors that were similar to those seen in vivo. Further research is needed to improve the model's robustness and to extend it to more realistic control of a dynamical limb. In the APG array model, APGs control arm movement in parallel, so that the activity of all the modules taken together forms a distributed representation. The APG array executes a distributed motor program because it produces a spatiotemporal pattern of activity in the cerebrocerebellar recurrent network that is transmitted to the spinal cord to comprise a distributed motor command.

5.1 PARAMETRIZED MOTOR PROGRAMS

Certain features of the APG array model relate well to the ideas about parameterized motor programs discussed by Keele (1973), Schmidt (1988), and Adams (1971, 1977). The selection phase of the APG array model provides a feasible neuronal mechanism for preparing a parameterized motor program in advance of movement. The execution phase is also consistent with the open-loop ideas associated with motor programming concepts, except that, like Adams (1977), we explain the termination of the execution phase as a consequence of proprioceptive feedback and efference copy.
In the APG array model, the counterpart of a generalized motor program is a set of parallel fiber weights for proprioceptive, efference copy, and target inputs. Given these weights, a particular constellation of parallel fiber inputs signifies that the desired endpoint of a movement is about to be reached, causing PCs to become depolarized. Once a set of parallel fiber weights corresponding to a desired endpoint is learned, the neuronal architecture and neurodynamics of the cerebellar network function in a manner that parameterizes the motor program. Movement velocity is parameterized in the selection phase of the model's operation. The velocity that is selected is automatically scaled so that velocity increases as the amplitude of the movement increases. While this type of scaling is often observed in motor performance studies, velocity can also be varied independently: velocity scaling can be applied simultaneously to all elements of a motor program to slow down or speed up the entire movement. Although we have not addressed this issue in the present report, simulation of velocity scaling under the control of a neuromodulator can naturally be accomplished in the APG array model. Movements terminate when the endpoint is recognized by PCs, so that movement duration depends on the course of the movement instead of being determined by some internal clock. Movement amplitude is parameterized by the weights of the target inputs, with smaller weights corresponding to larger-amplitude movements.

5.2 CORRECTIVE MOVEMENTS

We assume that the training information conveyed to the APGs is the result of crude corrective movements stimulating proprioceptive receptors. This sensory information is conveyed to the cerebellum by climbing fibers. Learning in the APG array model therefore requires the existence of a low-level system capable of generating movements to spatial targets with at least a ballpark level of accuracy.
Lesion (Yu et al., 1980) and developmental studies (von Hofsten, 1982) support the existence of a low-level system. Other evidence indicates that when limb movements are not proceeding accurately toward their intended targets, corrective components of the movements are generated by an unconscious, automatic control system (Goodale et al., 1986). We assume that collaterals from the corticospinal and rubrospinal systems that convey the motor commands to the spinal cord gate off sensory transmission through the proprioceptive climbing fiber pathway, thus preventing sensory responses to the initial limb movement. As the initial movement proceeds, the low-level system receives proprioceptive feedback from the limb and feedforward information about target location from the gaze control system. The latter information is updated as a consequence of corrective eye movements that typically occur after an initial gaze shift toward a visual target. Updated gaze information causes the spinal processor to generate a corrective component that is superimposed on the original motor command (Gielen & van Gisbergen, 1990; Flash & Henis, 1991). Since climbing fiber pathways would not be gated off by this low-level corrective process, climbing fibers should fire to indicate the direction of the corrective movement. We assume that the network by which climbing fiber activity is generated is specifically wired to provide appropriate training information to the APGs (Houk & Barto, 1991). The training signal provided by a climbing fiber is specialized for the recipient APG in that it provides directional information in joint-angle space that is relative to the direction in which that APG moves the arm. The fact that training information is provided in terms of joint-angle space greatly simplifies the problem of providing errors in the correct frame of reference.
For example, if the network used visual error information, the error information would have to be transformed to joint errors. The specialized training signals provided by the climbing fibers are determined by the structure of the ascending network conveying proprioceptive information. This ascending network has the same structure (but works in the opposite direction) as the network by which the APG array influences joint movement. This is reminiscent of the error backpropagation algorithm (e.g., Rumelhart et al., 1986; Parker, 1985), in that the forward and backward passes of backpropagation are accomplished by the descending and ascending networks of the APG Array Model. This use of the ascending network to transform errors in the workspace to errors that are relative to a particular APG's direction of action is closely related to the use of error backpropagation for "learning with a distal teacher" as suggested by Jordan and Rumelhart (1991). Houk and Barto (1991) suggested that the alignment of the ascending and descending networks might come about through trophic mechanisms stimulated by use-dependent alterations in synaptic efficacy. In the context of the present model, this hypothesis implies that the ascending network to the inferior olive is established first, and that the descending network by which APGs influence motoneurons then changes to align with it. We have not yet simulated this mechanism to see if it could actually generate the kind of alignment we assume in the present model.
Acknowledgements
This research was supported by ONR N00014-88-K-0339, NIMH Center Grant P50 MH48185, and a grant from the McDonnell-Pew Foundation for Cognitive Neuroscience supported by the James S. McDonnell Foundation and the Pew Charitable Trusts.
References
Adams JA (1971) A closed-loop theory of motor learning. J Motor Beh 3: 111-149
Adams JA (1977) Feedback theory of how joint receptors regulate the timing and positioning of a limb.
Psychol Rev 84: 504-523
Berthier NE Singh SP Barto AG Houk JC (1991) Distributed representation of limb motor programs in arrays of adjustable pattern generators. NPB Technical Report 3, Institute for Neuroscience, Northwestern University, Chicago IL
Flash T Henis E (1991) Arm trajectory modifications during reaching towards visual targets. J Cognitive Neurosci 3: 220-230
Gellman R Gibson AR Houk JC (1985) Inferior olivary neurons in the awake cat: Detection of contact and passive body displacement. J Neurophys 54: 40-60
Georgopoulos A (1988) Neural integration of movement: role of motor cortex in reaching. FASEB Journal 2: 2849-2857
Gielen CCAM van Gisbergen JAM (1990) The visual guidance of saccades and fast aiming movements. News in Physiol Sci 5: 58-63
Gilbert PFC Thach WT (1977) Purkinje cell activity during motor learning. Brain Res 128: 309-328
Goodale MA Pelisson D Prablanc C (1986) Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature 320: 748-750
Hofsten von C (1982) Eye-hand coordination in the newborn. Dev Psychol 18: 450-461
Houk JC (1989) Cooperative control of limb movements by the motor cortex, brainstem and cerebellum. In: Cotterill RMJ (ed) Models of Brain Function. Cambridge Univ Press, Cambridge UK, 309-325
Houk JC Barto AG (1991) Distributed sensorimotor learning. NPB Technical Report 1, Institute for Neuroscience, Northwestern University, Chicago IL
Houk JC Singh SP Fisher C Barto AG (1990) An adaptive sensorimotor network inspired by the anatomy and physiology of the cerebellum. In: Miller WT Sutton RS Werbos PJ (eds) Neural Networks for Control. MIT Press, Cambridge MA, 301-348
Ito M (1984) The Cerebellum and Neural Control. Raven Press, New York
Ito M (1989) Long-term depression. Annual Review of Neuroscience 12: 85-102
Jordan MI Rumelhart DE (1991) Forward models: Supervised learning with a distal teacher.
Occasional Paper #40, MIT Center for Cognitive Science
Keele SW (1973) Attention and Human Performance. Goodyear, Pacific Palisades, California
Kuypers HGJM (1981) Anatomy of the descending pathways. In: Brooks VB (ed) Handbook of Physiology, Section I, Volume II, Part 1. American Physiological Society, Bethesda MD, 597-666
Mussa-Ivaldi FA (1988) Do neurons in the motor cortex encode movement direction? An alternative hypothesis. Neurosci Lett 91: 106-111
Parker DB (1985) Learning-Logic. Technical Report TR-47, Massachusetts Institute of Technology, Cambridge MA
Rumelhart DE Hinton GE Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE McClelland JL (eds) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Bradford Books/MIT Press, Cambridge MA
Schmidt RA (1988) Motor Control and Motor Learning. Human Kinetics, Champaign, Illinois
1991
Linear Operator for Object Recognition
Ronen Basri and Shimon Ullman*
M.I.T. Artificial Intelligence Laboratory and Department of Brain and Cognitive Science, 545 Technology Square, Cambridge, MA 02139
* Also, Weizmann Inst. of Science, Dept. of Applied Math., Rehovot 76100, Israel
Abstract
Visual object recognition involves the identification of images of 3-D objects seen from arbitrary viewpoints. We suggest an approach to object recognition in which a view is represented as a collection of points given by their location in the image. An object is modeled by a set of 2-D views together with the correspondence between the views. We show that any novel view of the object can be expressed as a linear combination of the stored views. Consequently, we build a linear operator that distinguishes between views of a specific object and views of other objects. This operator can be implemented using neural network architectures with relatively simple structures.
1 Introduction
Visual object recognition involves the identification of images of 3-D objects seen from arbitrary viewpoints. In particular, objects often appear in images from previously unseen viewpoints. In this paper we suggest an approach to object recognition in which rigid objects are recognized from arbitrary viewpoints. The method can be implemented using neural network architectures with relatively simple structures. In our approach a view is represented as a collection of points given by their location in the image. An object is modeled by a small set of views together with the correspondence between these views. We show that any novel view of the object can be expressed as a linear combination of the stored views. Consequently, we build a linear operator that distinguishes views of a specific object from views of other objects. This operator can be implemented by a neural network. The method has several advantages.
First, it correctly handles rigid objects, but is not restricted to such objects. Second, there is no need in this scheme to explicitly recover and represent the 3-D structure of objects. Third, the computations involved are often simpler than in previous schemes.
2 Previous Approaches
Object recognition involves a comparison of a viewed image against object models stored in memory. Many existing schemes for object recognition accomplish this task by performing a template comparison between the image and each of the models, often after compensating for certain variations due to the different positions and orientations in which the object is observed. Such an approach is called alignment (Ullman, 1989), and a similar approach is used in (Fischler & Bolles 1981, Lowe 1985, Faugeras & Hebert 1986, Chien & Aggarwal 1987, Huttenlocher & Ullman 1987, Thompson & Mundy 1987). The majority of alignment schemes use object-centered representations to model the objects. In these models the 3-D structure of the objects is explicitly represented. The acquisition of models in these schemes therefore requires a separate process to recover the 3-D structure of the objects. A number of recent studies use 2-D viewer-centered representations for object recognition. Abu-Mostafa & Psaltis (1987), for instance, developed a neural network that continuously collects and stores the observed views of objects. When a new view is observed it is recognized if it is sufficiently similar to one of the previously seen views. The system is very limited in its ability to recognize objects from novel views. It does not use information available from a collection of object views to extend the range of recognizable views beyond the range determined by each of the stored views separately. In the scheme below we suggest a different kind of viewer-centered representation to model the objects. An object is modeled by a set of its observed images with the correspondence between points in the images.
We show that only a small number of images is required to predict the appearance of the object from all possible viewpoints. These predictions are exact for rigid objects, but are not confined to such objects. We also suggest a neural network to implement the scheme. A similar representation was recently used by Poggio & Edelman (1990) to develop a network that recognizes objects using radial basis functions (RBFs). The approach presented here has several advantages over this approach. First, by using the linear combinations of the stored views rather than applying radial basis functions to them we obtain exact predictions for the novel appearances of objects rather than an approximation. Moreover, a smaller number of views is required in our scheme to predict the appearance of objects from all possible views. For example, when a rigid object that does not introduce self-occlusion (such as a wire object) is considered, predicting its appearance from all possible views requires only three views under the LC scheme and about sixty views under the RBF scheme.
3 The Linear Combinations (LC) Scheme
In this section we introduce the Linear Combinations (LC) Scheme. Additional details about the scheme can be found in (Ullman & Basri, 1991). Our approach is based on the following observation. For many continuous transformations of interest in recognition, such as 3-D rotation, translation, and scaling, every possible view of a transforming object can be expressed as a linear combination of other views of the object. In other words, the set of possible images of an object undergoing rigid 3-D transformations and scaling is embedded in a linear space, spanned by a small number of 2-D images. We start by showing that any image of an object undergoing rigid transformations followed by an orthographic projection can be expressed as a linear combination of a small number of views. The coefficients of this combination may differ for the x- and y-coordinates.
That is, the intermediate view of the object may be given by two linear combinations, one for the x-coordinates and the other for the y-coordinates. In addition, certain functional restrictions may hold among the different coefficients. We represent an image by two coordinate vectors, one containing the x-values of the object's points and the other containing their y-values. In other words, an image P is described by x = (x_1, ..., x_n) and y = (y_1, ..., y_n), where every (x_i, y_i), 1 ≤ i ≤ n, is an image point. The order of the points in these vectors is preserved in all the different views of the same object, namely, if P and P' are two views of the same object, then (x_i, y_i) ∈ P and (x'_i, y'_i) ∈ P' are in correspondence (or, in other words, they are the projections of the same object point).
Claim: The set of coordinate vectors of an object obtained from all different viewpoints is embedded in a 4-D linear space. (A proof is given in Appendix A.)
Following this claim we can represent the entire space of views of an object by a basis that consists of any four linearly independent vectors taken from the space. In particular, we can construct a basis using familiar views of the object. Two images supply four such vectors and therefore are often sufficient to span the space. By considering the linear combinations of the model vectors we can reproduce any possible view of the object. It is important to note that the set of views of a rigid object does not occupy the entire linear 4-D space. Rather, the coefficients of the linear combinations reproducing valid images follow in addition two quadratic constraints. (See Appendix A.) In order to verify that an object undergoes a rigid transformation (as opposed to a general 3-D affine transformation) the model must consist of at least three snapshots of the object. Many 3-D rigid objects are bounded by smooth curved surfaces.
The contours of such objects change their position on the object whenever the viewing position is changed. The linear combinations scheme can be extended to handle these objects as well. In these cases the scheme gives accurate approximations to the appearance of these objects (Ullman & Basri, 1991). The linear combination scheme assumes that the same object points are visible in the different views. When the views are sufficiently different, this will no longer hold, due to self-occlusion. To represent an object from all possible viewing directions (e.g., both "front" and "back"), a number of different models of this type will be required. This notion is similar to the use of different object aspects suggested by Koenderink & Van Doorn (1979). (Other aspects of occlusion are discussed in the next section.)
4 Recognizing an Object Using the LC Scheme
In the previous section we have shown that the set of views of a rigid object is embedded in a linear space of a small dimension. In this section we define a linear operator that uses this property to recognize objects. We then show how this operator can be used in the recognition process. Let p_1, ..., p_k be the model views, and p be a novel view of the same object. According to the previous section there exist coefficients a_1, ..., a_k such that p = Σ_{i=1}^{k} a_i p_i. Suppose L is a linear operator such that Lp_i = q for every 1 ≤ i ≤ k and some constant vector q; then L transforms p to q (up to a scale factor): Lp = (Σ_{i=1}^{k} a_i) q. If in addition L transforms vectors outside the space spanned by the model to vectors other than q, then L distinguishes views of the object from views of other objects. The vector q then serves as a "name" for the object.
It can either be the zero vector, in which case L transforms every novel view of the object to zero, or it can be a familiar view of the object, in which case L has an associative property, namely, it takes a novel view of an object and transforms it to a familiar view. A constructive definition of L is given in Appendix B. The core of the recognition process we propose includes a neural network that implements the linear operator defined above. The input to this network is a coordinate vector created from the image, and the output is an indication of whether the image is in fact an instance of the modeled object. The operator can be implemented by a simple, one-layer neural network with only feedforward connections, of the type presented by Kohonen, Oja, & Lehtio (1981). It is interesting to note that this operator can be modified to recognize several models in parallel. To apply this network to an image, the image should first be represented by its coordinate vectors. The construction of the coordinate vectors from the image can be implemented using cells with linear response properties, of the type of cells encoding eye positions found by Zipser & Andersen (1988). The positions obtained should be ordered according to the correspondence of the image points with the model points. Establishing the correspondence is a difficult task and an obstacle to most existing recognition schemes. The phenomenon of apparent motion (Marr & Ullman 1981) suggests, however, that the human visual system is capable of handling this problem. In many cases objects seen in the image are partially occluded. Sometimes some of the points cannot be located reliably. To handle these cases the linear operator should be modified to exclude the missing points. The computation of the updated operator from the original one involves computing a pseudo-inverse. A method to compute the pseudo-inverse of a matrix in real time using neural networks has been suggested by Yeates (1991).
5 Summary
We have presented a method for recognizing 3-D objects from 2-D images. In this method, an object model is represented by the linear combinations of several 2-D views of the object. It has been shown that for objects undergoing rigid transformations the set of possible images of a given object is embedded in a linear space spanned by a small number of views. Rigid transformations can be distinguished from more general linear transformations of the object by testing certain constraints placed upon the coefficients of the linear combinations. The method applies to objects with sharp as well as smooth boundaries. We have proposed a linear operator to map the different views of the same object into a common representation, and we have presented a simple neural network that implements this operator. In addition, we have suggested a scheme to handle occlusions and unreliable measurements. One difficulty in this scheme is that it requires finding the correspondence between the image and the model views. This problem is left for future research. The linear combination scheme described above was implemented and applied to a number of objects. Figures 1 and 2 show the application of the linear combinations method to artificially created and real-life objects. The figures show a number of object models, their linear combinations, and the agreement between these linear combinations and actual images of the objects. Figure 3 shows the results of applying a linear operator with associative properties to artificial objects. It can be seen that whenever the operator is fed with a novel view of the object for which it was designed it returns a familiar view of the object.
Figure 1: Top: three model pictures of a pyramid. Bottom: two of their linear combinations.
Appendix A
In this appendix we prove that the coordinate vectors of images of a rigid object lie in a 4-D linear space.
We also show that the coefficients of the linear combinations that produce valid images of the object follow in addition two quadratic constraints.
Figure 2: Top: three model pictures of a VW car. Bottom: a linear combination of the three images (left), an actual edge image (middle), and the two images overlayed (right).
Figure 3: Top: applying an associative pyramidal operator to a pyramid (left) returns a model view of the pyramid (right, compare with Figure 1 top left). Bottom: applying the same operator to a cube (left) returns an unfamiliar image (right).
Let O be a set of object points, and let x = (x_1, ..., x_n), y = (y_1, ..., y_n), and z = (z_1, ..., z_n) such that (x_i, y_i, z_i) ∈ O for every 1 ≤ i ≤ n. Let P be a view of the object, and let x̂ = (x̂_1, ..., x̂_n) and ŷ = (ŷ_1, ..., ŷ_n) such that (x̂_i, ŷ_i) is the position of (x_i, y_i, z_i) in P. We call x, y, and z the coordinate vectors of O, and x̂ and ŷ the corresponding coordinate vectors in P. Assume P is obtained from O by applying a rotation matrix R, a scale factor s, and a translation vector (t_x, t_y) followed by an orthographic projection.
Claim: There exist coefficients a_1, a_2, a_3, a_4 and b_1, b_2, b_3, b_4 such that:
x̂ = a_1 x + a_2 y + a_3 z + a_4 1
ŷ = b_1 x + b_2 y + b_3 z + b_4 1
where 1 = (1, ..., 1) ∈ R^n.
Proof: Simply by assigning:
a_1 = s r_11,  b_1 = s r_21
a_2 = s r_12,  b_2 = s r_22
a_3 = s r_13,  b_3 = s r_23
a_4 = t_x,    b_4 = t_y
Therefore, x̂, ŷ ∈ span{x, y, z, 1} regardless of the viewpoint from which x̂ and ŷ are taken. Notice that the set of views of a rigid object does not occupy the entire linear 4-D space. Rather, the coefficients follow in addition two quadratic constraints:
a_1² + a_2² + a_3² = b_1² + b_2² + b_3²
a_1 b_1 + a_2 b_2 + a_3 b_3 = 0
Appendix B
A "recognition matrix" is defined as follows. Let {p_1, ..., p_k} be a set of k linearly independent vectors representing the model pictures. Let {p_{k+1}, ..., p_n} be a set of vectors such that {p_1, ..., p_n} are all linearly independent.
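The claim and the two rigidity constraints can be checked numerically. The sketch below is my illustration, not code from the paper; the object points, rotation angles, scale, and translations are arbitrary stand-ins. It verifies that a projected view is a linear combination of the object's coordinate vectors, that the coefficients satisfy both quadratic constraints, and that the four vectors supplied by two model views reproduce a novel view.

```python
# Numerical check of Appendix A (illustrative sketch; all numbers are
# arbitrary choices, not values from the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 12
x, y, z = rng.standard_normal((3, n))   # coordinate vectors of the object O
ones = np.ones(n)

def rotation(ax, ay, az):
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# one view: rotation R, scale s, translation (tx, ty), orthographic projection
R, s, tx, ty = rotation(0.4, -0.8, 1.2), 1.3, 2.0, -1.0
a = np.array([s * R[0, 0], s * R[0, 1], s * R[0, 2], tx])   # a_1 .. a_4
b = np.array([s * R[1, 0], s * R[1, 1], s * R[1, 2], ty])   # b_1 .. b_4
B = np.stack([x, y, z, ones], axis=1)
xhat, yhat = B @ a, B @ b               # image coordinate vectors

# the two quadratic constraints hold because the rows of R are orthonormal
print(abs(np.sum(a[:3] ** 2) - np.sum(b[:3] ** 2)))   # ~0
print(abs(np.dot(a[:3], b[:3])))                      # ~0

# two model views supply four vectors spanning the same 4-D space,
# so the view above is a linear combination of them
def view(ax, ay, az, s, tx, ty):
    p = s * rotation(ax, ay, az) @ np.stack([x, y, z])
    return p[0] + tx, p[1] + ty

x1, y1 = view(0.1, -0.2, 0.3, 1.0, 0.5, -0.3)
x2, y2 = view(-0.4, 0.5, 0.1, 0.9, -0.2, 0.4)
M = np.stack([x1, y1, x2, y2], axis=1)
coeffs, *_ = np.linalg.lstsq(M, xhat, rcond=None)
print(np.linalg.norm(M @ coeffs - xhat))              # ~0
```

Note that the model views here carry nonzero translations; without them their four coordinate vectors would all lie in span{x, y, z} and could not account for the translation component of a novel view.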
We define the following matrices:
P = (p_1, ..., p_k, p_{k+1}, ..., p_n)
Q = (q, ..., q, p_{k+1}, ..., p_n)
We require that LP = Q. Therefore L = QP^{-1}. Note that since P is composed of n linearly independent vectors, the inverse matrix P^{-1} exists; therefore L can always be constructed.
Acknowledgments
We wish to thank Yael Moses for commenting on the final version of this paper. This report describes research done at the Massachusetts Institute of Technology within the Artificial Intelligence Laboratory. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124. Ronen Basri is supported by the McDonnell-Pew and the Rothchild postdoctoral fellowships.
References
Abu-Mostafa, Y.S. & Psaltis, D., 1987. Optical neural computing. Scientific American, 256, 66-73.
Chien, C.H. & Aggarwal, J.K., 1987. Shape recognition from single silhouette. Proc. of ICCV Conf. (London), 481-490.
Faugeras, O.D. & Hebert, M., 1986. The representation, recognition and location of 3-D objects. Int. J. Robotics Research, 5(3), 27-52.
Fischler, M.A. & Bolles, R.C., 1981. Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Communications of the ACM, 24(6), 381-395.
Huttenlocher, D.P. & Ullman, S., 1987. Object recognition using alignment. Proc. of ICCV Conf. (London), 102-111.
Koenderink, J.J. & Van Doorn, A.J., 1979. The internal representation of solid shape with respect to vision. Biol. Cybernetics, 32, 211-216.
Kohonen, T., Oja, E., & Lehtio, P., 1981. Storage and processing of information in distributed associative memory systems. In Hinton, G.E. & Anderson, J.A. (eds), Parallel Models of Associative Memory. Hillsdale, NJ: Lawrence Erlbaum Associates, 105-143.
Lowe, D.G., 1985. Perceptual Organization and Visual Recognition. Boston: Kluwer Academic Publishing.
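A minimal numeric sketch of this construction (my illustration; random vectors stand in for actual view vectors, and q is an arbitrary "name" vector):

```python
# Building the recognition matrix L = Q P^{-1} of Appendix B and checking
# that it maps any vector in the model span to a multiple of q.
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 4
p_model = rng.standard_normal((n, k))       # p_1 .. p_k: model view vectors
p_extra = rng.standard_normal((n, n - k))   # completion to a basis of R^n
q = rng.standard_normal(n)                  # the object's "name" vector

P = np.hstack([p_model, p_extra])
Q = np.hstack([np.tile(q[:, None], (1, k)), p_extra])
L = Q @ np.linalg.inv(P)                    # from the requirement L P = Q

coeffs = rng.standard_normal(k)
novel = p_model @ coeffs                    # a "novel view" in the model span
print(np.allclose(L @ novel, coeffs.sum() * q))   # True: Lp = (sum a_i) q

other = rng.standard_normal(n)              # a vector outside the model span
out = L @ other
residual = out - q * (q @ out) / (q @ q)    # component not parallel to q
print(np.linalg.norm(residual))             # nonzero: not a multiple of q
```

With q chosen as one of the model views, the same construction yields the associative variant described in section 4.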
Marr, D. & Ullman, S., 1981. Directional selectivity and its use in early visual processing. Proc. R. Soc. Lond. B, 211, 151-180.
Poggio, T. & Edelman, S., 1990. A network that learns to recognize three-dimensional objects. Nature, 343, 263-266.
Thompson, D.W. & Mundy, J.L., 1987. Three dimensional model matching from an unconstrained viewpoint. Proc. IEEE Int. Conf. on Robotics and Automation, Raleigh, N.C., 208-220.
Ullman, S. & Basri, R., 1991. Recognition by linear combinations of models. IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(10), 992-1006.
Ullman, S., 1989. Aligning pictorial descriptions: An approach to object recognition. Cognition, 32(3), 193-254. Also: 1986, A.I. Memo 931, The Artificial Intelligence Lab., M.I.T.
Yeates, M.C., 1991. A neural network for computing the pseudo-inverse of a matrix and application to Kalman filtering. Tech. Report, California Institute of Technology.
Zipser, D. & Andersen, R.A., 1988. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679-684.
1991
LEARNING UNAMBIGUOUS REDUCED SEQUENCE DESCRIPTIONS
Jürgen Schmidhuber
Dept. of Computer Science, University of Colorado, Campus Box 430, Boulder, CO 80309, USA
yirgan@cs.colorado.edu
Abstract
Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gradient descent (or approximations thereof). Instead, use your sequence learning algorithm (any will do) to implement the following method for history compression. No matter what your final goals are, train a network to predict its next input from the previous ones. Since only unpredictable inputs convey new information, ignore all predictable inputs but let all unexpected inputs (plus information about the time step at which they occurred) become inputs to a higher-level network of the same kind (working on a slower, self-adjusting time scale). Go on building a hierarchy of such networks. This principle reduces the descriptions of event sequences without loss of information, thus easing supervised or reinforcement learning tasks. Alternatively, you may use two recurrent networks to collapse a multi-level predictor hierarchy into a single recurrent net. Experiments show that systems based on these principles can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets. Finally you can modify the above method such that predictability is not defined in a yes-or-no fashion but in a continuous fashion.
1 INTRODUCTION
The following methods for supervised sequence learning have been proposed: Simple recurrent nets [7][3], time-delay nets (e.g.
[2]), sequential recursive auto-associative memories [16], back-propagation through time or BPTT [21][30][33], Mozer's 'focused back-prop' algorithm [10], the IID- or RTRL-algorithm [19][1][34], its accelerated versions [32][35][25], the recent fast-weight algorithm [27], higher-order networks [5], as well as continuous time methods equivalent to some of the above [14][15][4]. The following methods for sequence learning by reinforcement learning have been proposed: Extended REINFORCE algorithms [31], the neural bucket brigade algorithm [22], recurrent networks adjusted by adaptive critics [23] (see also [8]), buffer-based systems [13], and networks of hierarchically organized neuron-like "bions" [18]. With the exception of [18] and [13], these approaches waste resources and limit efficiency by focusing on every input instead of focusing only on relevant inputs. Many of these methods have a second drawback as well: The longer the time lag between an event and the occurrence of a related error, the less information is carried by the corresponding error information wandering 'back into time' (see [6] for a more detailed analysis). [11], [12] and [20] have addressed the latter problem but not the former. The system described by [18], on the other hand, addresses both problems, but in a manner much different from that presented here.
2 HISTORY COMPRESSION
A major contribution of this work is an adaptive method for removing redundant information from sequences. This principle can be implemented with the help of any of the methods mentioned in the introduction. Consider a deterministic discrete time predictor (not necessarily a neural network) whose state at time t of sequence p is described by an environmental input vector x^p(t), an internal state vector h^p(t), and an output vector z^p(t). The environment may be non-deterministic. At time 0, the predictor starts with x^p(0) and an internal start state h^p(0). At time t ≥ 0, the predictor computes z^p(t) = f(x^p(t), h^p(t)).
At time t > 0, the predictor furthermore computes h^p(t) = g(x^p(t-1), h^p(t-1)). All information about the input at a given time t_x can be reconstructed from t_x, f, g, x^p(0), h^p(0), and the pairs (t_s, x^p(t_s)) for which 0 < t_s ≤ t_x and z^p(t_s - 1) ≠ x^p(t_s). This is because if z^p(t) = x^p(t+1) at a given time t, then the predictor is able to predict the next input from the previous ones. The new input is derivable by means of f and g. Information about the observed input sequence can be even further compressed beyond just the unpredicted input vectors x^p(t_s). It suffices to know only those elements of the vectors x^p(t_s) that were not correctly predicted. This observation implies that we can discriminate one sequence from another by knowing just the unpredicted inputs and the corresponding time steps at which they occurred. No information is lost if we ignore the expected inputs. We do not even have to know f and g. I call this the principle of history compression. From a theoretical point of view it is important to know at what time an unexpected input occurs; otherwise there will be a potential for ambiguities: Two different input sequences may lead to the same shorter sequence of unpredicted inputs. With many practical tasks, however, there is no need for knowing the critical time steps (see section 5).
3 SELF-ORGANIZING PREDICTOR HIERARCHY
Using the principle of history compression we can build a self-organizing hierarchical neural 'chunking' system¹. The basic task can be formulated as a prediction task. At a given time step the goal is to predict the next input from previous inputs. If there are external target vectors at certain time steps then they are simply treated as another part of the input to be predicted. The architecture is a hierarchy of predictors, the input to each level of the hierarchy coming from the previous level.
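As a toy illustration of the principle (mine, not the paper's setup), let the predictor be the trivial rule "the next input repeats the current one". Then only the mispredicted inputs and their time steps need to be stored, and the full sequence is exactly recoverable:

```python
# Toy illustration of the principle of history compression: store only the
# (time, value) pairs a fixed predictor gets wrong, then reconstruct.
seq = [0, 0, 0, 1, 1, 2, 2, 2, 2, 0, 0]

def predict(prev):                 # stand-in for the trained predictor f, g
    return prev                    # "next input repeats the current one"

# compress: the start plus every input the predictor fails to anticipate
stored = [(0, seq[0])]
for t in range(1, len(seq)):
    if predict(seq[t - 1]) != seq[t]:
        stored.append((t, seq[t]))

# decompress: rerun the predictor, overriding it at the stored time steps
lookup = dict(stored)
reconstructed = []
for t in range(len(seq)):
    if t in lookup:
        reconstructed.append(lookup[t])
    else:
        reconstructed.append(predict(reconstructed[-1]))

print(stored)                  # [(0, 0), (3, 1), (5, 2), (9, 0)]
print(reconstructed == seq)    # True
```

Four stored events suffice for the eleven-step sequence; a better predictor would leave even fewer unexpected events to store.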
P_i denotes the i-th level network, which is trained to predict its own next input from its previous inputs². We take P_i to be one of the conventional dynamic recurrent neural networks mentioned in the introduction; however, it might be some other adaptive sequence processing device as well³. At each time step the input of the lowest-level recurrent predictor P_0 is the current external input. We create a new higher-level adaptive predictor P_{s+1} whenever the adaptive predictor at the previous level, P_s, stops improving its predictions. When this happens the weight-changing mechanism of P_s is switched off (to exclude potential instabilities caused by ongoing modifications of the lower-level predictors). If at a given time step P_s (s > 0) fails to predict its next input (or if we are at the beginning of a training sequence, which usually is not predictable either), then P_{s+1} will receive as input the concatenation of this next input of P_s plus a unique representation of the corresponding time step⁴; the activations of P_{s+1}'s hidden and output units will be updated. Otherwise P_{s+1} will not perform an activation update. This procedure ensures that P_{s+1} is fed with an unambiguous reduced description⁵ of the input sequence observed by P_s. This is theoretically justified by the principle of history compression. In general, P_{s+1} will receive fewer inputs over time than P_s. With existing learning algorithms, the higher-level predictor should have less difficulty in learning to predict the critical inputs than the lower-level predictor. This is because P_{s+1}'s 'credit assignment paths' will often be short compared to those of P_s. This will happen if the incoming inputs carry global temporal structure which has not yet been discovered by P_s. (See also [18] for a related approach to the problem of credit assignment in reinforcement learning.) This method is a simplification and an improvement of the recent chunking method described by [24]. A multi-level predictor hierarchy is a rather safe way of learning to deal with sequences with multi-level temporal structure (e.g. speech). Experiments have shown that multi-level predictors can quickly learn tasks which are practically unlearnable by conventional recurrent networks, e.g. [6].
¹ See also [18] for a different hierarchical connectionist chunking system based on similar principles.
² Recently I became aware that Don Mathis had some related ideas (personal communication). A hierarchical approach to sequence generation was pursued by [9].
³ For instance, we might employ the more limited feed-forward networks and a 'time window' approach. In this case, the number of previous inputs to be considered as a basis for the next prediction will remain fixed.
⁴ A unique time representation is theoretically necessary to provide P_{s+1} with unambiguous information about when the failure occurred (see also the last paragraph of section 2). A unique representation of the time that went by since the last unpredicted input occurred will do as well.
⁵ In contrast, the reduced descriptions referred to by [11] are not unambiguous.
4 COLLAPSING THE HIERARCHY
One disadvantage of a predictor hierarchy as above is that it is not known in advance how many levels will be needed. Another disadvantage is that levels are explicitly separated from each other. It may be possible, however, to collapse the hierarchy into a single network as outlined in this section. See details in [26]. We need two conventional recurrent networks: the automatizer A and the chunker C, which correspond to a distinction between automatic and attended events. (See also [13] and [17] which describe a similar distinction in the context of reinforcement learning.) At each time step A receives the current external input.
Learning Unambiguous Reduced Sequence Descriptions 295

A's error function is threefold: one term forces it to emit certain desired target outputs at certain times. If there is a target, then it becomes part of the next input. The second term forces A at every time step to predict its own next non-target input. The third (crucial) term will be explained below. If and only if A makes an error concerning the first and second terms of its error function, the unpredicted input (including a potentially available teaching vector), along with a unique representation of the current time step, will become the new input to C. Before this new input can be processed, C (whose last input may have occurred many time steps earlier) is trained to predict this higher-level input from its current internal state and its last input (employing a conventional recurrent net algorithm). After this, C performs an activation update which contributes to a higher-level internal representation of the input history. Note that according to the principle of history compression C is fed with an unambiguous reduced description of the input history. The information deducible by means of A's predictions can be considered as redundant. (The beginning of an episode usually is not predictable, therefore it has to be fed to the chunking level, too.) Since C's 'credit assignment paths' will often be short compared to those of A, C will often be able to develop useful internal representations of previous unexpected input events. Due to the final term of its error function, A will be forced to reproduce these internal representations, by predicting C's state. Therefore A will be able to create useful internal representations by itself in an early stage of processing a given sequence; it will often receive meaningful error signals long before errors of the first or second kind occur. These internal representations in turn must carry the discriminating information for enabling A to improve its low-level predictions.
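The input gating between automatizer and chunker described above can be sketched as follows. This is a sketch, not the paper's code: the mismatch threshold `eps`, the one-hot time code, and the `C_inputs` interface are my assumptions.

```python
import numpy as np

def chunker_automatizer_step(prediction, x_next, t, C_inputs, eps=0.1, T=100):
    """One gating step of the chunker/automatizer scheme: the chunker C
    receives an input only when the automatizer A mispredicts, so C sees an
    unambiguous reduced description of the sequence. Returns True iff C was
    updated on this time step."""
    if np.max(np.abs(prediction - x_next)) > eps:   # A failed: unexpected event
        time_code = np.zeros(T)
        time_code[t % T] = 1.0                      # unique representation of time t
        # chunker input: unpredicted input plus the time representation
        C_inputs.append(np.concatenate([x_next, time_code]))
        return True
    return False                                    # expected input: C stays idle
```

Over a training run, `C_inputs` grows more slowly than the raw sequence, and ideally stops growing once A predicts everything.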
Therefore the chunker will receive fewer and fewer inputs, since more and more inputs become predictable by the automatizer. This is the collapsing operation. Ideally, the chunker will become obsolete after some time. It must be emphasized that, unlike with the incremental creation of a multi-level predictor hierarchy described in section 3, there is no formal proof that the 2-net on-line version is free of instabilities. One can imagine situations where A unlearns previously learned predictions because of the third term of its error function. Relative weighting of the different terms in A's error function represents an ad-hoc remedy for this potential problem. In the experiments below, relative weighting was not necessary.

5 EXPERIMENTS

One experiment with a multi-level chunking architecture involved a grammar which produced strings of many a's and b's such that there was local temporal structure within the training strings (see [6] for details). The task was to differentiate between strings with long overlapping suffixes. The conventional algorithm completely failed to solve the task; it became confused by the great number of input sequences with similar endings. Not so the chunking system: it soon discovered certain hierarchical temporal structures in the input sequences and decomposed the problem such that it was able to solve it within a few hundred-thousand training sequences. The 2-net chunking system (the one with the potential for collapsing levels) was also tested against the conventional recurrent net algorithms. (See details in [26].) With the conventional algorithms, with various learning rates, and with more than 1,000,000 training sequences, performance did not improve in prediction tasks involving even as few as 20 time steps between relevant events. But the 2-net chunking system was able to solve the task rather quickly.
An efficient approximation of the BPTT method was applied to both the chunker and the automatizer: only 3 iterations of error propagation 'back into the past' were performed at each time step. Most of the test runs required fewer than 5000 training sequences. Still, the final weight matrix of the automatizer often resembled what one would hope to get from the conventional algorithm: there were hidden units which learned to bridge the 20-step time lags by means of strong self-connections. The chunking system needed less computation per time step than the conventional method and required many fewer training sequences.

6 CONTINUOUS HISTORY COMPRESSION

The history compression technique formulated above defines expectation-mismatches in a yes-or-no fashion: each input unit whose activation is not predictable at a certain time gives rise to an unexpected event. Each unexpected event provokes an update of the internal state of a higher-level predictor. The updates always take place according to the conventional activation spreading rules for recurrent neural nets. There is no concept of a partial mismatch or of a 'near-miss'. There is no possibility of updating the higher-level net 'just a little bit' in response to a 'nearly expected input'. In practical applications, some 'epsilon' has to be used to define an acceptable mismatch. In reply to the above criticism, continuous history compression is based on the following ideas. In what follows, v_i(t) denotes the i-th component of vector v(t). We use a local input representation. The components of z^p(t) are forced to sum up to 1 and are interpreted as a prediction of the probability distribution of the possible z(t + 1). z^p_j(t) is interpreted as the prediction of the probability that z_j(t + 1) is 1. The output entropy

    - Σ_j z^p_j(t) log z^p_j(t)

can be interpreted as a measure of the predictor's confidence. In the worst case, the predictor will expect every possible event with equal probability.
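The confidence measure above is straightforward to compute; a minimal sketch (the function name and example distributions are mine):

```python
import math

def output_entropy(p):
    """Output entropy -sum_j p_j * log(p_j) of a predicted distribution:
    low entropy means a confident predictor."""
    return -sum(pj * math.log(pj) for pj in p if pj > 0.0)

# Worst case: the uniform distribution, where every possible event is
# expected with equal probability and the entropy is maximal (log n).
uniform = [0.25, 0.25, 0.25, 0.25]
confident = [0.97, 0.01, 0.01, 0.01]
assert output_entropy(uniform) > output_entropy(confident)
```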
How much information (relative to the current predictor) is conveyed by the event z_j(t + 1) = 1, once it is observed? According to [29] it is -log z^p_j(t). [28] defines update procedures based on Mozer's recent update function [12] that let highly informative events have a stronger influence on the history representation than less informative (more likely) events. The 'strength' of an update in response to a more or less unexpected event is a monotonically increasing function of the information the event conveys. One of the update procedures uses Pollack's recursive auto-associative memories [16] for storing unexpected events, thus yielding an entirely local learning algorithm for learning extended sequences.

7 ACKNOWLEDGEMENTS

Thanks to Josef Hochreiter for conducting the experiments. Thanks to Mike Mozer and Mark Ring for useful comments on an earlier draft of this paper. This research was supported in part by NSF PYI award IRI-9058450, grant 90-21 from the James S. McDonnell Foundation, and DEC external research grant 1250 to Michael C. Mozer.

References

[1] J. Bachrach. Learning to represent state, 1988. Unpublished master's thesis, University of Massachusetts, Amherst.
[2] U. Bodenhausen and A. Waibel. The tempo 2 algorithm: Adjusting time-delays by supervised learning. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 155-161. San Mateo, CA: Morgan Kaufmann, 1991.
[3] J. L. Elman. Finding structure in time. Technical Report CRL 8801, Center for Research in Language, University of California, San Diego, 1988.
[4] M. Gherrity. A learning algorithm for analog fully recurrent neural networks. In IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 1, pages 643-644, 1989.
[5] C. L. Giles and C. B. Miller. Learning and extracting finite state automata.
Accepted for publication in Neural Computation, 1992.
[6] J. Hochreiter. Diploma thesis, Institut für Informatik, Technische Universität München, 1991.
[7] M. I. Jordan. Serial order: A parallel distributed processing approach. Technical Report ICS 8604, Institute for Cognitive Science, University of California, San Diego, 1986.
[8] G. Lukes. Review of Schmidhuber's paper 'Recurrent networks adjusted by adaptive critics'. Neural Network Reviews, 4(1):41-42, 1990.
[9] Y. Miyata. An unsupervised PDP learning model for action planning. In Proc. of the Tenth Annual Conference of the Cognitive Science Society, Hillsdale, NJ, pages 223-229. Erlbaum, 1988.
[10] M. C. Mozer. A focused back-propagation algorithm for temporal sequence recognition. Complex Systems, 3:349-381, 1989.
[11] M. C. Mozer. Connectionist music composition based on melodic, stylistic, and psychophysical constraints. Technical Report CU-CS-495-90, University of Colorado at Boulder, 1990.
[12] M. C. Mozer. Induction of multiscale temporal structure. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 4, to appear. San Mateo, CA: Morgan Kaufmann, 1992.
[13] C. Myers. Learning with delayed reinforcement through attention-driven buffering. Technical report, Imperial College of Science, Technology and Medicine, 1990.
[14] B. A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, 1:263-269, 1989.
[15] F. J. Pineda. Time dependent adaptive neural networks. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 710-718. San Mateo, CA: Morgan Kaufmann, 1990.
[16] J. B. Pollack. Recursive distributed representation. Artificial Intelligence, 46:77-105, 1990.
[17] M. A. Ring. PhD proposal: Autonomous construction of sensorimotor hierarchies in neural networks. Technical report, Univ. of Texas at Austin, 1990.
[18] M. A. Ring.
Incremental development of complex behaviors through automatic construction of sensory-motor hierarchies. In L. Birnbaum and G. Collins, editors, Machine Learning: Proceedings of the Eighth International Workshop, pages 343-347. Morgan Kaufmann, 1991.
[19] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
[20] R. Rohwer. The 'moving targets' training method. In J. Kindermann and A. Linden, editors, Proceedings of 'Distributed Adaptive Neural Information Processing', St. Augustin. Oldenbourg, 1989.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1, pages 318-362. MIT Press, 1986.
[22] J. H. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403-412, 1989.
[23] J. H. Schmidhuber. Recurrent networks adjusted by adaptive critics. In Proc. IEEE/INNS International Joint Conference on Neural Networks, Washington, D.C., volume 1, pages 719-722, 1990.
[24] J. H. Schmidhuber. Adaptive decomposition of time. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 909-914. Elsevier Science Publishers B.V., North-Holland, 1991.
[25] J. H. Schmidhuber. A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks. Accepted for publication in Neural Computation, 1992.
[26] J. H. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Accepted for publication in Neural Computation, 1992.
[27] J. H. Schmidhuber. Learning to control fast-weight memories: An alternative to recurrent nets. Accepted for publication in Neural Computation, 1992.
[28] J. H. Schmidhuber, M. C. Mozer, and D. Prelinger.
Continuous history compression. Technical report, Dept. of Comp. Sci., University of Colorado at Boulder, 1992.
[29] C. E. Shannon. A mathematical theory of communication (parts I and II). Bell System Technical Journal, XXVII:379-423, 1948.
[30] P. J. Werbos. Generalization of back propagation with application to a recurrent gas market model. Neural Networks, 1, 1988.
[31] R. J. Williams. Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, College of Comp. Sci., Northeastern University, Boston, MA, 1988.
[32] R. J. Williams. Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report NU-CCS-89-27, College of Computer Science, Northeastern University, Boston, 1989.
[33] R. J. Williams and J. Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 4:491-501, 1990.
[34] R. J. Williams and D. Zipser. Experimental analysis of the real-time recurrent learning algorithm. Connection Science, 1(1):87-111, 1989.
[35] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1992, in press.

PART VI RECURRENT NETWORKS
Dynamically-Adaptive Winner-Take-All Networks

Trent E. Lange
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024

Abstract

Winner-Take-All (WTA) networks, in which inhibitory interconnections are used to determine the most highly-activated of a pool of units, are an important part of many neural network models. Unfortunately, convergence of normal WTA networks is extremely sensitive to the magnitudes of their weights, which must be hand-tuned and which generally only provide the right amount of inhibition across a relatively small range of initial conditions. This paper presents Dynamically-Adaptive Winner-Take-All (DAWTA) networks, which use a regulatory unit to provide the competitive inhibition to the units in the network. The DAWTA regulatory unit dynamically adjusts its level of activation during competition to provide the right amount of inhibition to differentiate between competitors and drive a single winner. This dynamic adaptation allows DAWTA networks to perform the winner-take-all function for nearly any network size or initial condition, using O(N) connections. In addition, the DAWTA regulatory unit can be biased to find the level of inhibition necessary to settle upon the K most highly-activated units, and therefore serve as a K-Winners-Take-All network.

1. INTRODUCTION

Winner-Take-All networks are a fixed group of units which compete by mutual inhibition until the unit with the highest initial activation or input level suppresses the activation of all the others. Winner-take-all selection of the most highly-activated unit is an important part of many neural network models (e.g. McClelland and Rumelhart, 1981; Feldman and Ballard, 1982; Kohonen, 1984; Touretzky, 1989; Lange and Dyer, 1989a,b). Unfortunately, successful convergence in winner-take-all networks is extremely sensitive to the magnitudes of the inhibitory weights between units and other network parameters. For example,
a weight value for the mutually-inhibitory connections allowing the most highly-activated unit to suppress the other units in one initial condition (e.g. Figure 1a) may not provide enough inhibition to select a single winner if the initial input activation levels are closer together and/or higher (e.g. Figure 1b). On the other hand, if the competition involves a larger number of active units, then the same inhibitory weights may provide too much inhibition and either suppress the activations of all units or lead to oscillations (e.g. Figure 1c).

342 Lange

Figure 1. Several plots of activation versus time for different initial conditions in a winner-take-all network in which there is a bidirectional inhibitory connection of weight -0.2 between every pair of units. Unit activation function is that from the interactive activation model of McClelland and Rumelhart (1981). (a) Network in which five units are given an input self bias ranging from 0.10 to 0.14. (b) Network in which five units are given an input self bias ranging from 0.50 to 0.54. Note that the network ended up with three winners because the inhibitory connections of weight -0.2 did not provide enough inhibition to suppress the second and third most-active nodes. (c) Network in which 100 units are given an input self bias ranging from 0.01 to 0.14. The combined activation of all 100 nodes through the inhibitory weight of -0.2 provides far too much inhibition, causing the network to overreact and oscillate wildly.
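The weight sensitivity illustrated in Figure 1 can be reproduced with even a crude mutual-inhibition update. The sketch below is a simplified clipped-linear model with illustrative constants, not the interactive activation function the paper actually uses:

```python
def wta_step(a, bias, w_inh, rate=0.1):
    """One update of a plain mutual-inhibition WTA: each unit is pushed up
    by its input bias and inhibited by w_inh times the other units' total
    activation; activations are clipped to [0, 1]."""
    total = sum(a)
    return [min(1.0, max(0.0, ai + rate * (bi - w_inh * (total - ai))))
            for ai, bi in zip(a, bias)]

def run_wta(bias, w_inh, steps=2000):
    a = list(bias)  # initial activations set to the input biases
    for _ in range(steps):
        a = wta_step(a, bias, w_inh)
    return a

# With w_inh = 0.2, low biases yield a single winner, but the very same
# weight fails to separate higher biases, leaving several units saturated.
low  = run_wta([0.10, 0.11, 0.12, 0.13, 0.14], w_inh=0.2)
high = run_wta([0.50, 0.51, 0.52, 0.53, 0.54], w_inh=0.2)
```

Running both cases with the same inhibitory weight shows the hand-tuning problem directly: the number of surviving "winners" depends on the input levels, not just on the weight.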
Dynamically-Adaptive Winner-Take-All Networks 343

Because of these problems, it is generally necessary to hand-tune network parameters to allow for successful winner-take-all performance in a given neural network architecture having certain expected levels of incoming activations. For complex networks, this can require a detailed mathematical analysis of the model (cf. Touretzky & Hinton, 1988) or a heuristic, computer-assisted trial-and-error search process (cf. Reggia, 1989) to find the values of inhibitory weights, unit thresholds, and other network parameters necessary for clear-cut winner-take-all performance in a given model's input space. In some cases, however, no set of constant network parameters can be found to handle the range of possible initial conditions a model may be faced with (Barnden, Kankanahalli, and Dharmavaratha, 1990), such as when the number of units actually competing in a given network may be two at one time and thousands at another (e.g. Barnden, 1990; Lange, in press). This paper presents a new variant of winner-take-all networks, the Dynamically-Adaptive Winner-Take-All (DAWTA) network. DAWTA networks, using O(N) connections, are able to robustly act as winner-take-all networks for nearly any network initial condition without any hand-tuning of network parameters. In essence, the DAWTA network dynamically "tunes" itself by adjusting the level of inhibition sent to each unit in the network depending upon feedback from the current conditions of the competition. In addition, a biasing activation can be added to the network to allow it to act as a K-Winners-Take-All network (cf. Majani, Erlanson, and Abu-Mostafa, 1989), in which the K most highly-activated units end up active.

2.
DYNAMICALLY-ADAPTIVE WTA NETWORKS

The basic idea behind the Dynamically-Adaptive Winner-Take-All mechanism can be described by looking at a version of a winner-take-all network that is functionally equivalent to a normal winner-take-all network but which uses only O(N) connections. Several researchers have pointed out that the (N²-N)/2 bidirectional inhibitory connections (each of weight -w_I) normally needed in a winner-take-all network can be replaced by an excitatory self-connection of weight w_I for each unit and a single regulatory unit that sums up the activations of all N units and inhibits them each by -w_I times that amount (Touretzky & Hinton, 1988; Majani et al., 1989) (see Figure 2). When viewed in this fashion, the mutually inhibitory connections of winner-take-all networks can be seen as a regulator (i.e. the regulatory unit) that is attempting to provide the right amount of inhibition to the network to allow the winner-to-be unit's activation to grow while suppressing the activations of all others. This is exactly what happens when w_I has been chosen correctly for the activations of the network (as in Figure 1a). However, because the amount of this regulatory inhibition is fixed precisely by that inhibitory weight (i.e. always equal to that weight times the sum of the network activations), there is no way for it to increase when it is not enough (as in Figure 1b) or decrease when it is too much (as in Figure 1c).

Figure 2. Simplification of a standard WTA network using O(N) connections by introduction of a regulatory unit (top node) that sums up the activations of all network units. Each unit has an excitatory connection to itself and an inhibitory connection of weight -w_I from the regulatory unit. Shading of units (darker = higher) represents their levels of activation at a hypothetical time in the middle of network cycling.

2.1. THE DAWTA REGULATORY UNIT

From the point of view of the competing units' inputs, the Dynamically-Adaptive Winner-Take-All network is equivalent to the regulatory-unit simplification of a normal winner-take-all network. Each unit has an excitatory connection to itself and an inhibitory connection from a regulatory unit whose function is to suppress the activations of all but the winning unit¹. However, the regulatory unit itself, and how it calculates the inhibition it provides to the network, is different. Whereas the connections to the regulatory unit in a normal winner-take-all network cause it to produce an inhibitory activation (i.e. the sum of the units' activations) that happens to work if its inhibitory weights were set correctly, the structure of connections to the regulatory unit in a dynamically-adaptive winner-take-all network causes it to continually adjust its level of activation until the right amount of inhibition is found, regardless of the network's initial conditions. As the network cycles and the winner-take-all is being performed, the DAWTA regulatory unit's activation inhibits the network's units, which results in feedback to the regulatory unit that causes it to increase its activation if more inhibition is required to induce a single winner, or decrease its activation if less is required. Accordingly, the DAWTA regulatory unit's activation a_R now includes its previous activation, and is the following:

    a_R(t+1) = a_R(t) - θ,            if net_R(t+1) ≤ -θ
    a_R(t+1) = a_R(t) + net_R(t+1),   if -θ < net_R(t+1) < θ
    a_R(t+1) = a_R(t) + θ,            if net_R(t+1) ≥ θ

where net_R(t+1) is the total net input to the regulator at time t+1, and θ is a small constant (typically 0.05) whose purpose is to stop the regulatory unit's activation from rising or falling too rapidly on any given cycle. Figure 3 shows the actual Dynamically-Adaptive Winner-Take-All network.
As in Figure 2, the regulatory unit is the unit at the top and the competing units are the circular units at the bottom that are inhibited by it and which have connections (of weight w_s) to themselves. However, there are now two intermediate units that calculate the net inputs that increase or decrease the regulatory unit's inhibitory activation depending on the state of the competition.

¹As in all winner-take-all networks, the competing units may also have inputs from outside the network that provide the initial activations driving the competition.

Figure 3. Dynamically-Adaptive Winner-Take-All Network at a hypothetical time in the middle of network cycling. The topmost unit is the DAWTA regulatory unit, whose outgoing connections to all of the competing units at the bottom all have weight -1. The input -k·w_s is a constant self-biasing activation to the regulatory unit whose value determines how many winners it will try to drive. The two middle units are simple linear summation units, each having inputs of unit weight, that calculate the total activation of the competing units at time t and time t-1, respectively.

These inputs cause the regulatory unit to receive a net input net_R(t+1) of:

    net_R(t+1) = w_s·o_T(t-1) - k·w_s + w_d·o_T(t-1) - w_d·o_T(t-2)

which simplifies to:

    net_R(t+1) = w_s(o_T(t-1) - k) + w_d(o_T(t-1) - o_T(t-2))

where o_T(t) is the total summed output of all of the competing units (calculated by the intermediate units shown), w_s and w_d are constant weights, and k is the number of winners the network is attempting to seek (1 to perform a normal winner-take-all). The effect of the above activation function and the connections shown in Figure 3 is to apply two different activation pressures on the regulatory unit,
each of which, combined over time, drive the DAWTA regulatory unit's activation to find the right level of inhibition to suppress all but the winning unit. The most important pressure, and the key to the DAWTA regulatory unit's success, is that the regulatory unit's activation increases by a factor of w_s if there is too much activation in the network, and decreases by a corresponding factor if there is not enough activation in the network. This is the result of the term w_s(o_T(t-1) - k) in its net input function, which simplifies to w_s(o_T(t-1) - 1) when k equals 1. The "right amount" of total activation in the network is simply the total summed activation of the goal state, i.e. the winner-take-all network state in which there is one active unit (having activation 1) and in which all other competing units have been driven down to an activation of 0, leaving the total network activation o_T(t) equal to 1. The factor w_s(o_T(t-1) - 1) of the regulatory unit's net input will therefore tend to increase the regulatory unit's activation if there are too many units active in the network (e.g. if there are three units with activity 0.7, 0.5, and 0.3, since the total output o_T(t) will be 1.5), to decrease its activation if there are not enough active units in the network (e.g. one unit with activation 0.2 and the rest with activation 0.0), and to leave its activation unchanged if the activation is the same as the final goal activation. Note that any temporary coincidence in which the total network activation sums to 1 but which is not the final winner-take-all state (e.g. when one unit has activation 0.6 and another has activation 0.4) will be broken by the competing units themselves, since the winning unit's activation will always rise more quickly than the loser's just by its own activation function (e.g. that of McClelland and Rumelhart, 1981). The other pressure on the DAWTA regulatory unit,
from the w_d(o_T(t-1) - o_T(t-2)) term of net_R(t+1), is to tend to decrease the regulator's activation if the overall network activation is falling too rapidly, or to increase it if the overall network activation is rising too rapidly. This is essentially a dampening term to avoid oscillations in the network in the early stages of the winner-take-all, in which there may be many active units whose activations are falling rapidly (due to inhibition from the regulatory unit), but in which the total network activation is still above the final goal activation. As can be seen, this second term of the regulatory unit's net input will also sum to 0, and therefore leave the regulatory unit's activation unchanged, when the goal state of the network has been reached, since the total activation of the network in the winner-take-all state will remain constant. All of the weights and connections of the DAWTA network are constant parameters that are the same for any size network or set of initial network conditions. Typically we have used w_s = 0.025 and w_d = 0.5. The actual values are not critical, as long as w_d >> w_s, which assures that w_d is high enough to dampen the rapid rise or fall in total network activation sometimes caused by the direct pressure of w_s. The value of the regulatory unit's self bias term -k·w_s that sets the goal total network activation the regulatory unit attempts to reach is determined simply by k, the number of winners desired (1 for a normal winner-take-all network), and w_s.

3. RESULTS

Dynamically-adaptive winner-take-all networks have been tested in the DESCARTES connectionist simulator (Lange, 1990) and used in our connectionist model of short-term sequential memory (Lange, in press). Figures 4a-c show the plots of activation versus time in networks given the same initial conditions as those of the normal winner-take-all network shown in Figures 1a-c.
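The regulatory dynamics of section 2 can be sketched in a few lines. This is a sketch under stated assumptions: the competing units here are simple clipped-linear units driven by their bias minus the regulatory activation (the paper uses the McClelland & Rumelhart activation function), and `rate` is an illustrative constant:

```python
def dawta(bias, k=1, w_s=0.025, w_d=0.5, theta=0.05, rate=0.1, cycles=2000):
    """Sketch of DAWTA dynamics: the regulator integrates
    net_R = w_s*(o_T - k) + w_d*(o_T - o_T_prev), with each cycle's
    change bounded to [-theta, theta], and every competing unit
    receives the same inhibition -a_R."""
    a = list(bias)
    a_R = 0.0
    o_prev = sum(a)
    for _ in range(cycles):
        o_T = sum(a)
        net_R = w_s * (o_T - k) + w_d * (o_T - o_prev)
        a_R += max(-theta, min(theta, net_R))   # bounded rise/fall per cycle
        o_prev = o_T
        # clipped-linear competing units: bias excitation, -a_R inhibition
        a = [min(1.0, max(0.0, ai + rate * (bi - a_R))) for ai, bi in zip(a, bias)]
    return a, a_R
```

Because every unit receives the identical inhibition a_R, the initial ordering of the units is preserved throughout; the regulator only has to find the inhibition level at which k units survive.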
Note that in each case the regulatory unit's activation starts off at zero and increases until it reaches a level that provides sufficient inhibition to start driving the winner-take-all. So whereas the inhibitory weights of -0.2 that worked for inputs ranging from 0.10 to 0.14 in the winner-take-all network in Figure 1a could not provide enough inhibition to drive a single winner when the inputs were over 0.5 (Figure 1b), the DAWTA regulatory unit simply increases its activation level until the inhibition it provides is sufficient to start suppressing the eventual losers (Figures 4a and 4b). As can also be seen in the figures, the activation of the regulatory unit tends to vary over time with different feedback from the network in a process that maximizes differentiation between units while assuring that the group of remaining potential winners stays active and is not over-inhibited.

Figure 4. Plots of activation versus time in a dynamically-adaptive winner-take-all network given the same activation functions and initial conditions of the winner-take-all plots in Figure 1. The grey background plot shows the activation level of the regulatory unit. (a) With five units activated with self-biases from 0.10 to 0.14. (b) With five units activated with self-biases from 0.50 to 0.54.
(c) With 100 units activated with self-biases from 0.01 to 0.14.

Finally, though there is not space to show the graphic results here, the same DAWTA networks have been simulated to drive a successful winner-take-all within 200 cycles on networks ranging in size from 2 to 10,000 units, and on initial conditions where the winning unit has an input of 0.000001 to initial conditions where the winning unit has an input of 0.999, without tuning the network in any way. The same networks have also been successfully simulated to act as K-winners-take-all networks (i.e. to select the K most active units) by simply setting the desired value for k in the DAWTA's self bias term.

4. CONCLUSIONS

We have presented Dynamically-Adaptive Winner-Take-All networks, which use O(N) connections to perform the winner-take-all function. Unlike normal winner-take-all networks, DAWTA networks are able to select the most highly-activated unit out of a group of units for nearly any network size and initial condition without tuning any network parameters. They are able to do so because the inhibition that drives the winner-take-all network is provided by a regulatory unit that is constantly getting feedback from the state of the network and dynamically adjusting its level to provide the right amount of inhibition to differentiate the winning unit from the losers. An important side-feature of this dynamically-adaptive inhibition approach is that it can be biased to select the K most highly-activated units, and therefore serve as a K-winners-take-all network.

References

Barnden, J. (1990). The power of some unusual connectionist data-structuring techniques. In J. A. Barnden and J. B. Pollack (Eds.), Advances in connectionist and neural computation theory. Norwood, NJ: Ablex.
Barnden, J., Kankanahalli, S., & Dharmavaratha, D. (1990). Winner-take-all networks: Time-based versus activation-based mechanisms for various selection tasks.
Proceedings of the IEEE International Symposium on Circuits and Systems, New Orleans, LA.
Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.
Kohonen, T. (1984). Self-organization and associative memory. Berlin: Springer-Verlag.
Lange, T. (1990). Simulation of heterogeneous neural networks on serial and parallel machines. Parallel Computing, 14, 287-303.
Lange, T. (in press). Hybrid connectionist models: Temporary bridges over the gap between the symbolic and the subsymbolic. To appear in J. Dinsmore (Ed.), Closing the Gap: Symbolic vs. Subsymbolic Processing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lange, T., & Dyer, M. G. (1989a). Dynamic, non-local role-bindings and inferencing in a localist network for natural language understanding. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 545-552, Morgan Kaufmann, San Mateo, CA.
Lange, T., & Dyer, M. G. (1989b). High-level inferencing in a connectionist network. Connection Science, 1(2), 181-217.
Majani, E., Erlanson, R., & Abu-Mostafa, Y. (1989). On the k-winners-take-all network. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 634-642, Morgan Kaufmann, San Mateo, CA.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.
Reggia, J. A. (1989). Methods for deriving competitive activation mechanisms. Proceedings of the First Annual International Joint Conference on Neural Networks.
Touretzky, D. (1989). Analyzing the energy landscapes of distributed winner-take-all networks. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 626-633, Morgan Kaufmann, San Mateo, CA.
Touretzky, D., & Hinton, G. (1988). A distributed connectionist production system. Cognitive Science, 12, 423-466.

PART VII VISION
1991
Adaptive Soft Weight Tying using Gaussian Mixtures

Steven J. Nowlan
Computational Neuroscience Laboratory
The Salk Institute, P.O. Box 5800
San Diego, CA 92186-5800

Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, Canada M5S 1A4

Abstract

One way of simplifying neural networks so they generalize better is to add an extra term to the error function that will penalize complexity. We propose a new penalty term in which the distribution of weight values is modelled as a mixture of multiple gaussians. Under this model, a set of weights is simple if the weights can be clustered into subsets so that the weights in each cluster have similar values. We allow the parameters of the mixture model to adapt at the same time as the network learns. Simulations demonstrate that this complexity term is more effective than previous complexity terms.

1 Introduction

A major problem in training artificial neural networks is to ensure that they will generalize well to cases that they have not been trained on. Some recent theoretical results (Baum and Haussler, 1989) have suggested that in order to guarantee good generalization, the amount of information required to directly specify the output vectors of all the training cases must be considerably larger than the number of independent weights in the network. In many practical problems there is only a small amount of labelled data available for training, and this creates problems for any approach that uses a large, homogeneous network with many independent weights. As a result, there has been much recent interest in techniques that can train large networks with relatively small amounts of labelled data and still provide good generalization performance. In order to improve generalization, the number of free parameters in the network must be reduced.
One of the oldest and simplest approaches to removing excess degrees of freedom from a network is to add an extra term to the error function that penalizes complexity:

cost = data-misfit + λ complexity   (1)

During learning, the network is trying to find a locally optimal trade-off between the data-misfit (the usual error term) and the complexity of the net. The relative importance of these two terms can be estimated by finding the value of λ that optimizes generalization to a validation set.

Probably the simplest approximation to complexity is the sum of the squares of the weights, Σ_i w_i². Differentiating this complexity measure leads to simple weight decay (Plaut, Nowlan and Hinton, 1986) in which each weight decays towards zero at a rate that is proportional to its magnitude. This decay is countered by the gradient of the error term, so weights which are not critical to network performance, and hence always have small error gradients, decay away, leaving only the weights necessary to solve the problem.

The use of a Σ_i w_i² penalty term can also be interpreted from a Bayesian perspective.[1] The "complexity" of a set of weights, λ Σ_i w_i², may be described as its negative log probability density under a radially symmetric gaussian prior distribution on the weights. The distribution is centered at the origin and has variance 1/λ. For multilayer networks, it is hard to find a good theoretical justification for this prior, but Hinton (1987) justifies it empirically by showing that it greatly improves generalization on a very difficult task. More recently, MacKay (1991) has shown that even better generalization can be achieved by using different values of λ for the weights in different layers.
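As a plain numerical illustration (not from the paper), the sum-of-squares penalty and the decay term obtained by differentiating it can be sketched as follows; the function name and example weights are invented for the sketch:

```python
import numpy as np

def l2_penalty_and_grad(weights, lam):
    """Sum-of-squares complexity term lam * sum(w^2) and its gradient.

    The gradient 2 * lam * w decays each weight toward zero at a rate
    proportional to its magnitude (simple weight decay)."""
    penalty = lam * np.sum(weights ** 2)
    grad = 2.0 * lam * weights
    return penalty, grad

w = np.array([0.01, -0.5, 2.0])      # a tiny, a small, and a large weight
p, g = l2_penalty_and_grad(w, lam=0.1)
```

During training this gradient is simply added to the usual error gradient, so weights with persistently small error gradients drift to zero.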
2 A more complex measure of network complexity

If we wish to eliminate small weights without forcing large weights away from the values they need to model the data, we can use a prior which is a mixture of a narrow (n) and a broad (b) gaussian, both centered at zero:

p(w) = π_n (1/(√(2π) σ_n)) e^(−w²/2σ_n²) + π_b (1/(√(2π) σ_b)) e^(−w²/2σ_b²)   (2)

where π_n and π_b are the mixing proportions of the two gaussians and are therefore constrained to sum to 1.

Assuming that the weight values were generated from a gaussian mixture, the conditional probability that a particular weight, w_i, was generated by a particular gaussian, j, is called the responsibility of that gaussian for the weight and is:

r_j(w_i) = π_j p_j(w_i) / Σ_k π_k p_k(w_i)   (3)

where p_j(w_i) is the probability density of w_i under gaussian j. When the mixing proportions of the two gaussians are comparable, the narrow gaussian gets most of the responsibility for a small weight. Adopting the Bayesian perspective, the cost of a weight under the narrow gaussian is proportional to w²/2σ_n². As long as σ_n is quite small there will be strong pressure to reduce the magnitude of small weights even further. Conversely, the broad gaussian takes most of the responsibility for large weight values, so there is much less pressure to reduce them. In the limiting case when the broad gaussian becomes a uniform distribution, there is almost no pressure to reduce very large weights because they are almost certainly generated by the uniform distribution. A complexity term very similar to this limiting case is used in the "weight elimination" technique of (Weigend, Huberman and Rumelhart, 1990) to improve generalization for a time series prediction task.[2]

[1] R. Szeliski, personal communication, 1985.
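A minimal sketch of the responsibility computation in equation 3 for the narrow/broad mixture (the function names and example values are our own; the paper gives no code):

```python
import numpy as np

def gauss_pdf(w, sigma):
    """Zero-mean gaussian density evaluated at w."""
    return np.exp(-0.5 * (w / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def responsibilities(w, pi_n, sigma_n, pi_b, sigma_b):
    """Posterior probability (eq. 3) that each weight was generated by
    the narrow (n) or the broad (b) zero-mean gaussian."""
    pn = pi_n * gauss_pdf(w, sigma_n)
    pb = pi_b * gauss_pdf(w, sigma_b)
    total = pn + pb
    return pn / total, pb / total

w = np.array([0.05, 1.5])            # a small and a large weight
r_n, r_b = responsibilities(w, pi_n=0.5, sigma_n=0.1, pi_b=0.5, sigma_b=2.0)
# the narrow gaussian claims the small weight, the broad one the large weight
```

This is what produces the selective pressure described above: only weights the narrow gaussian is responsible for feel the strong pull toward zero.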
3 Adaptive Gaussian Mixtures and Soft Weight-Sharing

A mixture of a narrow, zero-mean gaussian with a broad gaussian or a uniform allows us to favor networks with many near-zero weights, and this improves generalization on many tasks. But practical experience with hand-coded weight constraints has also shown that great improvements can be achieved by constraining particular subsets of the weights to share the same value (Lang, Waibel and Hinton, 1990; Le Cun, 1989). Mixtures of zero-mean gaussians and uniforms cannot implement this type of symmetry constraint. If, however, we use multiple gaussians and allow their means and variances to adapt as the network learns, we can implement a "soft" version of weight-sharing in which the learning algorithm decides for itself which weights should be tied together. (We may also allow the mixing proportions to adapt so that we are not assuming all sets of tied weights are the same size.)

The basic idea is that a gaussian which takes responsibility for a subset of the weights will squeeze those weights together, since it can then have a lower variance and assign a higher probability density to each weight. If the gaussians all start with high variance, the initial division of weights into subsets will be very soft. As the variances shrink and the network learns, the decisions about how to group the weights into subsets are influenced by the task the network is learning to perform.

To make these intuitive ideas a bit more concrete, we may define a cost function of the general form given in (1):

C = (K/2σ_y²) Σ_c (y_c − d_c)² − Σ_i log [ Σ_j π_j p_j(w_i) ]   (4)

where σ_y² is the variance of the squared error, K is a normalizing constant, and each p_j(w_i) is a gaussian density with mean μ_j and standard deviation σ_j.
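The cost in (4) can be sketched numerically as follows; this is our own illustration, with the normalizing factor K taken as 1 and all names invented:

```python
import numpy as np

def mixture_cost(y, d, weights, pi, mu, sigma, sigma_y):
    """Cost (4) with K = 1: squared data misfit plus the negative log
    probability of the weights under the adaptive gaussian mixture."""
    misfit = np.sum((y - d) ** 2) / (2.0 * sigma_y ** 2)
    # density of each weight under each mixture component, shape (n_w, n_comp)
    dens = pi * np.exp(-0.5 * ((weights[:, None] - mu) / sigma) ** 2) \
           / (np.sqrt(2 * np.pi) * sigma)
    complexity = -np.sum(np.log(dens.sum(axis=1)))
    return misfit + complexity

# toy case: perfect fit, one unit-variance component centered at zero
c = mixture_cost(np.array([1.0]), np.array([1.0]),
                 np.array([0.0]), np.array([1.0]),
                 np.array([0.0]), np.array([1.0]), sigma_y=1.0)
```

Squeezing a component's σ_j around a cluster of similar weights lowers the complexity term, which is the "soft tying" pressure described above.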
We optimize this function by adjusting the w_i and the mixture parameters π_j, μ_j, σ_j, and σ_y.[3] The partial derivative of C with respect to each weight is the sum of the usual squared error derivative and a term due to the complexity cost for the weight:

∂C/∂w_i = (K/σ_y²) Σ_c (y_c − d_c) ∂y_c/∂w_i + Σ_j r_j(w_i) (w_i − μ_j)/σ_j²   (5)

The derivative of the complexity cost term is simply a weighted sum of the difference between the weight value and the center of each of the gaussians. The weighting factors are the responsibility measures defined in equation 3, and if over time a single gaussian claims most of the responsibility for a particular weight, the effect of the complexity cost term is simply to pull the weight towards the center of the responsible gaussian. The strength of this force is inversely proportional to the variance of the gaussian.

In the simulations described below, all of the parameters (w_i, μ_j, σ_j, π_j) are updated simultaneously using a conjugate gradient descent procedure. To prevent variances shrinking too fast or going negative, we optimize log σ_j rather than σ_j. To ensure that the mixing proportions sum to 1 and are positive, we optimize x_j where π_j = exp(x_j)/Σ_k exp(x_k). For further details see (Nowlan and Hinton, 1992).

[2] See (Nowlan, 1991) for a precise description of the relationship between mixture models and the model used by (Weigend, Huberman and Rumelhart, 1990).
[3] 1/σ_y² may be thought of as playing the same role as λ in equation 1 in determining a trade-off between the misfit and complexity costs. K is a normalizing factor based on a gaussian error model.

Table 1: Summary of generalization performance of 5 different training techniques on the shift detection problem.

Method                 | Train % Correct | Test % Correct
Vanilla Back Prop.     | 100.0 ± 0.0     | 67.3 ± 5.7
Cross Valid.           | 98.8 ± 1.1      | 83.5 ± 5.1
Weight Elimination     | 100.0 ± 0.0     | 89.8 ± 3.0
Soft-share - 5 Comp.   | 100.0 ± 0.0     | 95.6 ± 2.7
Soft-share - 10 Comp.  | 100.0 ± 0.0     | 97.1 ± 2.1
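A sketch of the complexity-gradient term of (5) and of the two reparameterizations (log σ_j and a softmax over x_j); the function names are our own and this is an illustration, not the authors' code:

```python
import numpy as np

def complexity_grad(weights, pi, mu, sigma):
    """Second term of eq. 5: a responsibility-weighted pull of each
    weight toward each gaussian's center, scaled by 1/sigma_j^2."""
    dens = pi * np.exp(-0.5 * ((weights[:, None] - mu) / sigma) ** 2) \
           / (np.sqrt(2 * np.pi) * sigma)
    resp = dens / dens.sum(axis=1, keepdims=True)        # eq. 3
    return np.sum(resp * (weights[:, None] - mu) / sigma ** 2, axis=1)

# reparameterizations used during conjugate gradient descent:
def sigmas_from(log_sigma):
    """Optimizing log sigma_j keeps every variance positive."""
    return np.exp(log_sigma)

def mixing_from(x):
    """Softmax over x_j keeps mixing proportions positive and summing to 1."""
    e = np.exp(x - x.max())
    return e / e.sum()
```

With a single component at μ = 0 and σ = 1, the pull on a weight of 2.0 is exactly 2.0, i.e. plain weight decay; with several adapted components each weight is pulled toward the center of whichever gaussian claims it.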
4 Simulation Results

We compared the generalization performance of soft weight-tying to other techniques on two different problems. The first problem, a 20 input, one output shift detection network, was chosen because it was a binary problem for which solutions which generalize well exhibit a lot of repeated weight structure. The generalization performance of networks trained using the cost criterion given in equation 4 was compared to networks trained in three other ways: no cost term to penalize complexity; no explicit complexity cost term, but use of a validation set to terminate learning; and weight elimination (Weigend, Huberman and Rumelhart, 1990).[4] The simulation results are summarized in Table 1.

The network had 20 input units, 10 hidden units, and a single output unit, and contained 101 weights. The first 10 input units in this network were given a random binary pattern, and the second group of 10 input units were given the same pattern circularly shifted by 1 bit left or right. The desired output of the network was +1 for a left shift and −1 for a right shift. A data set of 2400 patterns was created by randomly generating a 10 bit string, and choosing with equal probability to shift the string left or right. The data set was divided into 100 training cases, 1000 validation cases, and 1300 test cases. The training set was deliberately chosen to be very small (< 5% of possible patterns) to explore the region in which complexity penalties should have the largest impact. Ten simulations were performed with each

[4] With a fixed value of λ chosen by cross-validation.

Figure 1: Final mixture probability density for a typical solution to the shift detection problem.
Five of the components in the mixture can be seen as distinct bumps in the probability density. Of the remaining five components, two have been eliminated by having their mixing proportions go to zero and the other three are very broad and form the baseline offset of the density function.

method, starting from ten different initial weight sets (i.e. each method used the same ten initial weight configurations). The final weight distributions discovered by the soft weight-tying technique are shown in Figure 1. There is no significant component with mean 0. The classical assumption that the network contains a large number of inessential weights which can be eliminated to improve generalization is not appropriate for this problem and network architecture. This may explain why the weight elimination model used by Weigend et al (Weigend, Huberman and Rumelhart, 1990) performs relatively poorly in this situation.

The second task chosen to evaluate the effectiveness of our complexity penalty was the prediction of the yearly sunspot average from the averages of previous years. This task has been well studied as a time-series prediction benchmark in the statistics literature (Priestley, 1991b; Priestley, 1991a) and has also been investigated by (Weigend, Huberman and Rumelhart, 1990) using a complexity penalty similar to the one discussed in section 2. The network architecture used was identical to the one used in the study by Weigend et al: the network had 11 input units which represented the yearly averages from the preceding 11 years, 8 hidden units, and a single linear output unit which represented the prediction for the average number of sunspots in the current year.
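The input windowing just described (11 lagged yearly averages predicting the current year) can be sketched as follows; this is our own illustration on a toy series, not the authors' preprocessing code:

```python
import numpy as np

def lag_windows(series, n_lags=11):
    """Build (input, target) pairs for one-step prediction: each input
    row holds the preceding n_lags yearly averages; the target is the
    current year's average."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

toy = np.arange(15.0)          # stand-in for the yearly sunspot averages
X, y = lag_windows(toy)
```

Each row of X would be fed to the 11 input units, and y is the scalar target for the single linear output unit.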
Yearly sunspot data from 1700 to 1920 was used to train the network to perform this one-step prediction task, and the evaluation of the network was based on data from 1921 to 1955.[5] The evaluation of prediction performance used the average relative variance (arv) measure discussed in (Weigend, Huberman and Rumelhart, 1990). Simulations were performed using the same conjugate gradient method used for the first problem. Complexity measures based on gaussian mixtures with 3 and 8 components were used, and ten simulations were performed with each (using the same training data but different initial weight configurations).

Table 2: Summary of average relative variance of 5 different models on the one-step sunspot prediction problem.

Method                | Test arv
TAR                   | 0.097
RBF                   | 0.092
WRH                   | 0.086
Soft-share - 3 Comp.  | 0.077 ± 0.0029
Soft-share - 8 Comp.  | 0.072 ± 0.0022

The results of these simulations are summarized in Table 2 along with the best result obtained by Weigend et al (Weigend, Huberman and Rumelhart, 1990) (WRH), the bilinear auto-regression model of Tong and Lim (Tong and Lim, 1980) (TAR),[6] and the multi-layer RBF network of He and Lapedes (He and Lapedes, 1991) (RBF). All figures represent the arv on the test set. For the mixture complexity models, this is the average over the ten simulations, plus or minus one standard deviation. Since the results for the models other than the mixture complexity trained networks are based on a single simulation, it is difficult to assign statistical significance to the differences shown in Table 2. We may note, however, that the difference between the 3 and 8 component mixture complexity models is significant (p > 0.95) and the differences between the 8 component model and the other models are much larger. Figure 2 shows an 8 component mixture model of the final weight distribution.
It is quite unlike the distribution in Figure 1 and is actually quite close to a mixture of two zero-mean gaussians, one broad and one narrow. This may explain why weight elimination works quite well for this task.

Weigend et al point out that for time series prediction tasks such as the sunspot task, a much more interesting measure of performance is the ability of the model to predict more than one time step into the future. One way to approach the multistep prediction problem is to use iterated single-step prediction. In this method, the predicted output is fed back as input for the next prediction and all other input units have their values shifted back one unit. Thus the input typically consists of a combination of actual and predicted values. When predicting more than one step into the future, the prediction error depends both on how many steps into the future one is predicting (I) and on what point in the time series the prediction began. An appropriate error measure for iterated prediction is the average relative I-times iterated prediction variance (Weigend, Huberman and Rumelhart, 1990)

[5] The authors thank Andreas Weigend for providing his version of this data.
[6] This was the model favored by Priestley (Priestley, 1991a) in a recent evaluation of classical statistical approaches to this task.

Figure 2: Typical final mixture probability density for the sunspot prediction problem with a model containing 8 mixture components.

Figure 3: Average relative I-times iterated prediction variance versus number of prediction iterations for the sunspot time series from 1921 to 1955.
Closed circles represent the TAR model, open circles the WRH model, closed squares the 3 component complexity model, and open squares the 8 component complexity model. Ten different sets of initial weights were used for the 3 and 8 component complexity models, and one standard deviation error bars are shown.

which averages predictions I steps into the future over all possible starting points. Using this measure, the performance of various models is shown in Figure 3.

5 Summary

The simulations we have described provide evidence that the use of a more flexible model for the distribution of weights in a network can lead to better generalization performance than weight decay, weight elimination, or techniques that control the learning time. The flexibility of our model is clearly demonstrated in the very different final weight distributions discovered for the two different problems investigated in this paper. The ability to automatically adapt to individual problems suggests that the method should have broad applicability.

Acknowledgements

This research was funded by the Ontario ITRC, the Canadian NSERC and the Howard Hughes Medical Institute. Hinton is the Noranda fellow of the Canadian Institute for Advanced Research.

References

Baum, E. B. and Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1:151-160.

He, X. and Lapedes, A. (1991). Nonlinear modelling and prediction by successive approximation using Radial Basis Functions. Technical Report LA-UR-91-1375, Los Alamos National Laboratory.

Hinton, G. E. (1987). Learning translation invariant recognition in a massively parallel network. In Proc. Conf. Parallel Architectures and Languages Europe, Eindhoven.

Lang, K. J., Waibel, A. H., and Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3:23-43.

Le Cun, Y. (1989).
Generalization and network design strategies. Technical Report CRG-TR-89-4, University of Toronto.

MacKay, D. J. C. (1991). Bayesian Modelling and Neural Networks. PhD thesis, Computation and Neural Systems, California Institute of Technology, Pasadena, CA.

Nowlan, S. J. (1991). Soft Competitive Adaptation: Neural Network Learning Algorithms based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Nowlan, S. J. and Hinton, G. E. (1992). Simplifying neural networks by soft weight-sharing. Neural Computation. In press.

Plaut, D. C., Nowlan, S. J., and Hinton, G. E. (1986). Experiments on learning by back-propagation. Technical Report CMU-CS-86-126, Carnegie-Mellon University, Pittsburgh, PA 15213.

Priestley, M. B. (1991a). Non-linear and Non-stationary Time Series Analysis. Academic Press.

Priestley, M. B. (1991b). Spectral Analysis and Time Series. Academic Press.

Tong, H. and Lim, K. S. (1980). Threshold autoregression, limit cycles, and cyclical data. Journal of the Royal Statistical Society B, 42.

Weigend, A. S., Huberman, B. A., and Rumelhart, D. E. (1990). Predicting the future: A connectionist approach. International Journal of Neural Systems, 1.
1991
Some Approximation Properties of Projection Pursuit Learning Networks

Ying Zhao    Christopher G. Atkeson
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

This paper will address an important question in machine learning: What kind of network architectures work better on what kind of problems? A projection pursuit learning network has a very similar structure to a one hidden layer sigmoidal neural network. A general method based on a continuous version of projection pursuit regression is developed to show that projection pursuit regression works better on angular smooth functions than on Laplacian smooth functions. There exists a ridge function approximation scheme to avoid the curse of dimensionality for approximating functions in L²(φ_d).

1 INTRODUCTION

Projection pursuit is a nonparametric statistical technique to find "interesting" low dimensional projections of high dimensional data sets. It has been used for nonparametric fitting and other data-analytic purposes (Friedman and Stuetzle, 1981; Huber, 1985). Approximation properties have been studied by Diaconis & Shahshahani (1984) and Donoho & Johnstone (1989). It was first introduced into the context of learning networks by Barron & Barron (1988). A one hidden layer sigmoidal feedforward neural network approximates f(x) using the structure (Figure 1(a)):

g(x) = Σ_{j=1}^n α_j σ(β_j θ_j^T x + δ_j)   (1)

Figure 1: (a) A one hidden layer feedforward neural network. (b) A projection pursuit learning network.

where σ is a sigmoidal function, the θ_j are direction parameters with ||θ_j|| = 1, and α_j, β_j, δ_j are function parameters. A projection pursuit learning network based on projection pursuit regression (PPR) (Friedman and Stuetzle, 1981) or a ridge function approximation scheme (RA) has a very similar structure (Figure 1(b)):
" g(x) = Lg;(8Jx) (2) ;=1 where 8; are also direction parameters with 118;11 = 1. The corresponding function parameters are ridge functions 9; which are any smooth function to be learned from the data. Since iT is replaced by a more general smooth function gj, projection pursuit learning networks can be viewed as a generalization of one hidden layer sigmoidal feedforward neural networks. This paper will discuss some approximation properties of PPR: 1. Projection pursuit learning networks work better on angular smooth functions than on Laplacian smooth functions. Here "work better" means that for fixed complexities of hidden unit functions and a certain accuracy, fewer hidden units are required. For the two dimensional case (d = 2), Donoho and Johnstone (1989) show this result using equispaced directions. The equispaced directions may not be available when d > 2. We use a set of directions generated from zeros of orthogonal polynomials and uniformly distributed directions on an unit sphere instead. The analysis method in D "J's paper is limited to two dimensions. We apply the theory of spherical harmonics (Muller, 1966) to develop a continuous ridge function representation of any arbitray smooth functions and then employ different numerical integration sehemes to discretize it for eases when d> 2. 2. The curse of dimensionality can be avoided when a proper ridge function approximation is applied. Once a continuous ridge function representation is established for any function in L2(4)41), a Monte Carlo type of integration scheme can be applied which has a RMS error convergence rate O(N-i) where N is the number of ridge functions in the linear combinations. This is a similar result to Barron's result (Barron, 1991) except that we have less restrictions on the underlying function class. 938 Zhao and Atkeson (a) (b) Figule 2: (a> A radial basil element 1014' (b) a harmonic basis element 1110. 2 SMOOTHNESS CLASSES AND A L'( tPd) BASIS We use L2(4),.) 
as our underlying function space, with Gaussian measure φ_d = Π_{i=1}^d (1/√(2π)) e^(−x_i²/2) and ||f||² = ∫_{R^d} f² φ_d dx. The smoothness classes characterize the rates of convergence. Let Δ_d be the Laplacian operator and Λ_d be the Laplace-Beltrami operator (Muller, 1966). The smoothness classes can be defined as:

Definition 1. The function f ∈ L²(φ_d) will be said to have Cartesian smoothness of order p if it has p derivatives and these derivatives are all in L²(φ_d). It will be said to have angular smoothness of order q if Λ_d^(q/2) f ∈ L²(φ_d). It will be said to have Laplacian smoothness of order r if Δ_d^r f ∈ L²(φ_d). Let F_p be the class of functions with Cartesian smoothness p, A_p^q be the class of functions with Cartesian smoothness p and angular smoothness q, and L_p^r be the class of functions with Cartesian smoothness p and Laplacian smoothness r.

We derive an orthogonal basis in L²(φ_d) from the eigenfunctions of a self-adjoint operator. The basis element is defined as:

J_{njm}(x) = γ_m r^n L_m^α(r²/2) S_{nj}(ξ)   (3)

where x = rξ, n = 0, ..., ∞, m = 0, ..., ∞, j = 1, ..., N(d, n), γ_m = (−2)^m m!, and α = n + (d − 2)/2. The S_{nj}(ξ) are linearly independent spherical harmonics of degree n in d dimensions (Muller, 1966), and L_m^α is a Laguerre polynomial. The advantage of the basis comes from its representation as a product of a spherical harmonic and a radial polynomial. Specifically, J_{0jm} is a radial polynomial for n = 0 and J_{nj0} is a harmonic polynomial for m = 0. Figures 2(a) and (b) show a radial basis element and a harmonic basis element when n + 2m = 8. The basis element J_{njm} has an orthogonality:

E[J_{njm} J_{n'j'm'}] = 0 unless n = n', j = j', m = m'   (4)

where E denotes expectation with respect to φ_d. Since it is a basis in L²(φ_d), any function f ∈ L²(φ_d) has an expansion in terms of the basis elements J_{njm}:

f = Σ_{n,j,m} c_{njm} J_{njm}   (5)

The circular harmonic e^(inθ) is a special case of the spherical harmonic S_{nj}(ξ). In two dimensions, d = 2, N(d, n) = 2 and x = (r cos θ, r sin θ).
The spherical harmonics S_{nj}(ξ) can then be reduced to the following: S_{n1}(ξ) = (1/√π) cos nθ and S_{n2}(ξ) = (1/√π) sin nθ, which are the circular harmonics.

Smoothness classes can also be defined qualitatively from expansions of functions in terms of the basis elements J_{njm}. Since ||f||² = Σ c_{njm}² ||J_{njm}||², one can think of

P_{njm}(f) = c_{njm}² ||J_{njm}||² / Σ_{n,j,m} c_{njm}² ||J_{njm}||²

as representing the distribution of energy in f among the different modes of oscillation J_{njm}. If f is Cartesian smooth, P_{njm}(f) peaks around small n + 2m. If f is angular smooth, P_{njm}(f) peaks around small n. If f is Laplacian smooth, P_{njm}(f) peaks around small m. To explain why PPR works better on angular smooth functions than on Laplacian smooth functions, we first examine how to represent these L²(φ_d) basis elements systematically in terms of ridge functions, and then use the expansion (5) to derive an error bound of RA for general smooth functions.

3 CONTINUOUS RIDGE FUNCTION SCHEMES

There exists a continuous ridge function representation for any function f(x) ∈ L²(φ_d), namely an integral of ridge functions over all possible directions:

f(x) = ∫_{Ω_d} g(x^T η, η) dω_d(η)   (6)

This works intuitively because any object is determined by any infinite set of radiographs. More precisely, any function f(x) ∈ L²(φ_d) can be approximated arbitrarily well by a linear combination of ridge functions Σ_k g(x^T η_k, η_k) provided infinitely many combination units (Jones, 1987). As k → ∞, we have (6). The natural discrete approximation to (6) has the form f_n(x) = Σ_{j=1}^n w_j g(x^T η_j, η_j), which becomes the usual PPR (2). We proved a continuous ridge function representation of the basis elements J_{njm}, which is shown in Lemma 1.

Lemma 1. The continuous ridge function representation of J_{njm} is:

J_{njm}(x) = A_{nmd} ∫_{Ω_d} H_{n+2m}(η^T x) S_{nj}(η) dω_d(η)   (7)

where A_{nmd} is a constant and H_{n+2m}(z) is a Hermite polynomial.
Therefore any function f ∈ L²(φ_d) has a continuous ridge function representation (6) with

g(x^T η, η) = Σ c_{njm} A_{nmd} H_{n+2m}(x^T η) S_{nj}(η)   (8)

Gaussian quadrature and Monte Carlo integration schemes can be used to discretize (6).

4 GAUSSIAN QUADRATURE

Since ∫_{Ω_d} g(x^T η, η) dω_d(η) = ∫_{Ω_{d−1}} ∫_{−1}^{1} g(x^T η, η) (1 − t_{d−1}²)^((d−3)/2) dt_{d−1} dω_{d−1}(η_{d−1}), simple product rules using Gaussian quadrature formulae can be used here. The t_{ij}, i = d−1, ..., 1, j = 1, ..., n, are zeros of orthogonal polynomials with weights (1 − t²)^((i−2)/2). N = n^(d−1) points on the unit sphere Ω_d can be formed from the t_{ij} through

η = ( √(1 − t_{d−1}²) ⋯ √(1 − t_1²), √(1 − t_{d−1}²) ⋯ t_1, ..., √(1 − t_{d−1}²) t_{d−2}, t_{d−1} )   (9)

If g(x^T η, η) is a polynomial of degree at most 2n − 1 (in terms of t_1, ..., t_{d−1}), then N = n^(d−1) points (directions) are sufficient to represent the integral exactly. This can be demonstrated with two examples by taking d = 3.

Figure 3: Directions (a) for a radial polynomial; (b) for a harmonic polynomial.

Example 1: a radial function

x⁴ + y⁴ + z⁴ + 2x²y² + 2x²z² + 2y²z² = c₁ ∫_{Ω₃} (x^T η)⁴ dω₃(η)   (10)

For d = 3 and n = 3, n² = 9 directions from (9) are sufficient to represent this polynomial, with t₂ = 0, √(3/5), −√(3/5) (zeros of a degree 3 Legendre polynomial) and t₁ = 0, √3/2, −√3/2 (zeros of a degree 3 Tschebyscheff polynomial). More directions are needed to represent a harmonic function with exactly the same number of monomial terms but different coefficients.

Example 2: a harmonic function

(1/8)(8x⁴ + 3y⁴ + 3z⁴ − 24x²y² − 24x²z² + 6y²z²) = c₂ ∫_{Ω₃} (x^T η)⁴ S₄ⱼ(η) dω₃(η)   (11)

where S₄ⱼ(η) = (1/8)(35t⁴ − 30t² + 3) and η = t e₃ + √(1 − t²) η₂. Here n = 5, and n² = 25 directions from (9) are sufficient to represent the polynomial, with t₂ = 0, ±0.90618, ±0.53847 (zeros of a degree 5 Legendre polynomial) and t₁ = cos((2j − 1)π/10), j = 1, ..., 5. The distribution of these directions on a unit sphere is shown in Figures 3(a) and (b).

In general, N = (n + m + 1)^(d−1) directions are sufficient to represent J_{njm} exactly by using zeros of orthogonal polynomials. If p = n + 2m (the degree of the basis) is fixed, N = (p − m + 1)^(d−1) is minimized at (p/2 + 1)^(d−1) when n = 0, which corresponds to the radial basis element; N is maximized when m = 0, which is the harmonic element. Using the definitions of the smoothness classes in Section 2, we can show that ridge function approximation works better on angular smooth functions. The basic result is as follows:
If p = ,,+ 2m (the degree of the basis) i. fixed, N = (p - m + 1)4-1 = (~ + 1)4-1 is minimiled when" = 0 which corresponds to the radial basis element. N is maximiled when m = 0 which is the hat monic element. Using definition. of .moothnes. dassel in Section 2. We can show that ridge function approximation works better on angular smooth functions. The basic result i. as follow.: Some Approximation Properties of Projection Pursuit Learning Networks 941 Theorem liE .A"f' let RN 1 denote a .um 01 ridge Junction. which belt approzimate f by tI.ing a .et of direction. generated by zero. of orthogonal polynomial •. Then (12) This error bound says that ridge function approximation does take advantage of angular smoothness. Radial functions are the most angular smooth functions with q = +00 and harmonic functions are the least angular smooth functions when the Cartesian smoothness p is fixed. Therefore ridge function approximation works better on angular smooth functions than on Laplacian smooth functions. Radial and harmonic functions are the two extreme cases. 5 UNIFORMLY DISTRIBUTED DIRECTIONS ON nd Instead of using directions from zeros of orthogonal polynomials, N uniformly distributed directions on nd is an alternative to generalizing equispaced directions. This is a Monte Carlo type of integration scheme on Oct. To approximate the integral (7), N uniformly distributed directions 1/1, 112, ...... , TJN on nd drawn from the density I(TJ) = l/wct on Od are used: N Jnjm(x) = ';; Anmd L Hn+2m (XT TJIe)Snj(TJIe) Ie=l (13) (14) The variance is u 2(x) u1(x) = -(15) N where u2(x) = In. [AnmdWdHn+2m(XTTJ)Snj(TJ) - Jnjm(X)] 2 ..,l.dwd(TJ). Therefore Jnjm(X) approximates Jnjm(x) with a rate u(x)/VN. The difference between a radial basis and a harmonic basis is u(x). 
Let us average σ(x) with respect to φ_d:

||σ(x)||² = ||J_{njm}||² a_{njm}   (16)

where the constant a_{njm} depends on n, m, and d. For a fixed n + 2m = p, ||σ(x)||² is minimized at n = 0 (a radial element) and maximized at m = 0 (a harmonic element). The same justification can be made for a general function f ∈ L²(φ_d) with (6) and (8). The resulting RA scheme is unbiased, E[f̂_N(x)] = f(x), with σ_N²(x) = σ²(x)/N, where σ²(x) = ∫_{Ω_d} ω_d g²(x^T η, η) dω_d(η) − f²(x), and ||σ(x)||² = Σ c_{njm}² ||J_{njm}||² a_{njm}. Since a_{njm} is small when n is small and large when m is small, and recalling the distribution P_{njm}(f) in Section 2, ||σ||²/||f||² is small when f is smooth. Among smooth functions, if f is angular smooth, ||σ||²/||f||² is smaller than if f is Laplacian smooth. The RMS error convergence rate ||f − f̂_N|| = ||σ||/√N is consequently smaller for f angular smooth than for f Laplacian smooth. But both rates are O(N^(−1/2)) no matter what class the underlying function belongs to. The difference is the constant, which is related to the distribution of energy in f among the different modes of oscillation (angular or Laplacian). The radial and harmonic functions are the two extremes.

6 THE CURSE OF DIMENSIONALITY IN RA

More generally, if N directions η₁, η₂, ..., η_N on Ω_d are drawn from any distribution p(η) on the sphere to approximate (6),

f̂(x) = (1/N) Σ_{k=1}^N g(x^T η_k, η_k) / p(η_k)   (17)

then E[f̂(x)] = f(x) and σ_N²(x) = σ²(x)/N, where σ²(x) = ∫_{Ω_d} (1/p(η)) g²(x^T η, η) dω_d(η) − f²(x). Averaging,

||σ(x)||² = ∫_{Ω_d} (1/p(η)) ||g_η||² dω_d(η) − ||f||² = c_f   (18)

Then

||f − f̂_N|| = √(c_f / N)   (19)

That is, f̂(x) → f(x) at a rate O(N^(−1/2)). Equation (19) shows that there is no curse of dimensionality if the ridge function approximation scheme (17) is used for f(x). The same conclusion can be drawn when sigmoidal hidden unit neural networks are applied to Barron's class of underlying functions (Barron, 1991). But our function class here is the class of functions that can be represented by a continuous ridge function (6), which is a much larger function class than Barron's.

Any function f ∈ L²(φ_d) has a representation (6) (Section 3). Therefore, for any function f ∈ L²(φ_d), there exists a node function g(x^T η, η) and a related ridge function approximation scheme (17) that approximates f(x) at a rate O(N^(−1/2)), which has no curse of dimensionality. In other words, if we are allowed to choose a node function g(x^T η, η) according to the properties of the data, which is the characteristic of PPR, then a ridge function approximation scheme can avoid the curse of dimensionality. This is a generalization of Barron's result that the curse of dimensionality goes away if certain types of node function (e.g., cos and σ) are considered. The smoothness of the underlying function determines the size of the constant c_f. As shown in the previous section, if p(η) = 1/ω_d (i.e., uniformly distributed directions), then angular smooth functions have smaller c_f than Laplacian smooth functions do. Choosing a different p(η) does not change this conclusion, but a properly chosen p(η) reduces c_f in general. If f(x) is smooth enough, the node function g(x^T η, η) can be
Any function f ∈ L²(φ_d) has a representation (6) (Section 4). Therefore, for any function f ∈ L²(φ_d), there exists a node function g(x^T η, η) and a related ridge function approximation scheme (17) that approximates f(x) at a rate O(N^{-1/2}), which has no curse of dimensionality. In other words, if we are allowed to choose a node function g(x^T η, η) according to the properties of the data, which is the characteristic of PPR, then a ridge function approximation scheme can avoid the curse of dimensionality. That is a generalization of Barron's result that the curse of dimensionality goes away if certain types of node functions (e.g., cos and σ) are considered. The smoothness of the underlying function determines the size of the constant c_f. As shown in the previous section, if p(η) = 1/ω_d (i.e., uniformly distributed directions), then angular smooth functions have smaller c_f than Laplacian smooth functions do. Choosing a different p(η) does not change this conclusion, but a properly chosen p(η) reduces c_f in general. If f(x) is smooth enough, the node function g(x^T η, η) can be computed from the Radon transform R_η f of f in the direction η, which is defined in (20), and we proved:

    g(x^T η, η) = F^{-1}(F_η(t)|t|^{d-1})|_{t = x^T η},

where F_η(t) is the Fourier transform of R_η f(·) and F^{-1} denotes the inverse Fourier transform. In practice, learning g(x^T η, η) is usually replaced by a smoothing step which seeks a one-dimensional function of x^T η that best fits the residual in this direction (Friedman and Stuetzle, 1981; Zhao and Atkeson, 1991).

7 CONCLUSION

As we showed, PPR works better on angular smooth functions than on Laplacian smooth functions by discretizing a continuous ridge function representation. PPR can avoid the curse of dimensionality by learning node functions from data.

Acknowledgments

Support was provided under Air Force Office of Scientific Research grant AFOSR-89-0500.
Support for CGA was provided by a National Science Foundation Presidential Young Investigator Award, an Alfred P. Sloan Research Fellowship, and the W. M. Keck Foundation Associate Professorship in Biomedical Engineering. Special thanks go to Prof. Zhengfang Zhou and Prof. Peter Huber of the Math Dept. at MIT, who provided useful discussions.

References

Barron, A. R. and Barron, R. L. (1988) "Statistical Learning Networks: A Unifying View." Computing Science and Statistics: Proceedings of the 20th Symposium on the Interface, Ed Wegman, editor, Amer. Statist. Assoc., Washington, D.C., 192-203.
Barron, A. R. (1991) "Universal Approximation Bounds for Superpositions of a Sigmoidal Function." TR 58, Dept. of Statistics, Univ. of Illinois at Urbana-Champaign.
Donoho, D. L. and Johnstone, I. (1989) "Projection-based Approximation, and Duality with Kernel Methods." Ann. Statist., 17, 58-106.
Diaconis, P. and Shahshahani, M. (1984) "On Non-linear Functions of Linear Combinations." SIAM J. Sci. Stat. Comput., 5, 175-191.
Friedman, J. H. and Stuetzle, W. (1981) "Projection Pursuit Regression." J. Amer. Stat. Assoc., 76, 817-823.
Huber, P. J. (1985) "Projection Pursuit (with discussion)." Ann. Statist., 13, 435-475.
Jones, L. (1987) "On a Conjecture of Huber Concerning the Convergence of Projection Pursuit Regression." Ann. Statist., 15, 880-882.
Müller, C. (1966) Spherical Harmonics. Lecture Notes in Mathematics, no. 17.
Zhao, Y. and Atkeson, C. G. (1991) "Projection Pursuit Learning." Proc. IJCNN-91-Seattle.
1991
30
497
Unsupervised learning of distributions on binary vectors using two layer networks

Yoav Freund*
Computer and Information Sciences
University of California Santa Cruz
Santa Cruz, CA 95064

David Haussler
Computer and Information Sciences
University of California Santa Cruz
Santa Cruz, CA 95064

Abstract

We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors. We analyze the class of probability distributions that can be modeled by such machines, showing that for each n ≥ 1 this class includes arbitrarily good approximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e., weights and thresholds) of the model. Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine. The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation. We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.

1 Introduction

Let us suppose that each example in our input data is a binary vector x = (x_1, ..., x_n) ∈ {±1}^n, and that each such example is generated independently at random according to some unknown distribution on {±1}^n. This situation arises, for instance, when each example consists of (possibly noisy) measurements of n different binary attributes of a randomly selected object.
In such a situation, unsupervised learning can be usefully defined as using the input data to find a good model of the unknown distribution on {±1}^n and thereby learning the structure in the data. The process of learning an unknown distribution from examples is usually called density estimation or parameter estimation in statistics, depending on the nature of the class of distributions used as models. Connectionist models of this type include Bayes networks [14], mixture models [3,13], and Markov random fields [14,8]. Network models based on the notion of energy minimization such as Hopfield nets [9] and Boltzmann machines [1] can also be used as models of probability distributions.

* yoav@cis.ucsc.edu

The models defined by Hopfield networks are a special case of the more general Markov random field models in which the local interactions are restricted to symmetric pairwise interactions between components of the input. Boltzmann machines also use only pairwise interactions, but in addition they include hidden units, which correspond to unobserved variables. These unobserved variables interact with the observed variables represented by components of the input vector. The overall distribution on the set of possible input vectors is defined as the marginal distribution induced on the components of the input vector by the Markov random field over all variables, both observed and hidden. While the Hopfield network is relatively well understood, it is limited in the types of distributions that it can model. On the other hand, Boltzmann machines are universal in the sense that they are powerful enough to model any distribution (to any degree of approximation), but the mathematical analysis of their capabilities is often intractable.
Moreover, the standard learning algorithm for the Boltzmann machine, a gradient ascent heuristic to compute the maximum likelihood estimates for the weights and thresholds, requires repeated stochastic approximation, which results in unacceptably slow learning.¹ In this work we attempt to narrow the gap between Hopfield networks and Boltzmann machines by finding a model that will be powerful enough to be universal,² yet simple enough to be analyzable and computationally efficient.³ We have found such a model in a minor variant of the special type of Boltzmann machine defined by Smolensky in his harmony theory [16, Ch. 6]. This special type of Boltzmann machine is defined by a network with a simple bipartite graph structure, which he called a harmonium. The harmonium consists of two types of units: input units, each of which holds one component of the input vector, and hidden units, representing hidden variables. There is a weighted connection between each input unit and each hidden unit, and no connections between input units or between hidden units (see Figure 1). The presence of the hidden units induces dependencies, or correlations, between the variables modeled by input units. To illustrate the kind of model that results, consider the distribution of people that visit a specific coffee shop on Sunday. Let each of the n input variables represent the presence (+1) or absence (-1) of a particular person that Sunday. These random variables are clearly not independent; e.g., if Fred's wife and daughter are there, it is more likely that Fred is there; if you see three members of the golf club, you expect to see other members of the golf club; if Bill is there you are unlikely to see Brenda there; etc. This situation can be modeled by a harmonium model in which each hidden variable represents the presence or absence of a social group.
The weights connecting a hidden unit and an input unit measure the tendency of the corresponding person to be associated with the corresponding group. In this coffee shop situation, several social groups may be present at one time, exerting a combined influence on the distribution of customers. This can be modeled easily with the harmonium, but is difficult to model using Bayes networks or mixture models.⁴

2 The Model

Let us begin by formalizing the harmonium model. To model a distribution on {±1}^n we will use n input units and some number m ≥ 0 of hidden units. These units are connected in a bipartite graph as illustrated in Figure 1. The random variables represented by the input units each take values in {+1, -1}, while the hidden variables, represented by the hidden units, take values in {0, 1}. The state of the machine is defined by the values of these random variables. Define x = (x_1, ..., x_n) ∈ {±1}^n to be the state of the input units, and h = (h_1, ..., h_m) ∈ {0,1}^m to be the state of the hidden units. The connection weights between the input units and the ith hidden unit are denoted⁵ by w^(i) ∈ R^n and the bias of the ith hidden unit is denoted by θ^(i) ∈ R. The parameter vector φ = {(w^(1), θ^(1)), ..., (w^(m), θ^(m))}

¹ One possible solution to this is the mean-field approximation [15], discussed further in section 2 below.
² In [4] we show that any distribution over {±1}^n can be approximated to within any desired accuracy by a harmonium model using 2^n hidden units.
³ See also other work relating Bayes nets and Boltzmann machines [12,7].
⁴ Noisy-OR gates have been introduced in the framework of Bayes networks to allow for such combinations. However, using this in networks with hidden units has not been studied, to the best of our knowledge.
⁵ In [16, Ch. 6], binary connection weights are used. Here we use real-valued weights.
defines the entire network, and thus also the probability model induced by the network.

Figure 1: The bipartite graph of the harmonium (shown with m = 3 hidden units over input units x_1, ..., x_5).

For a given φ, the energy of a state configuration of hidden and input units is defined to be

    E(x, h | φ) = - Σ_{i=1}^{m} (w^(i)·x + θ^(i)) h_i    (1)

and the probability of a configuration is

    Pr(x, h | φ) = (1/Z) e^{-E(x,h|φ)},  where  Z = Σ_{x,h} e^{-E(x,h|φ)}.

Summing over h, it is easy to show that in the general case the probability distribution over possible state vectors on the input units is given by

    Pr(x | φ) = (1/Z) Σ_{h∈{0,1}^m} e^{-E(x,h|φ)} = (1/Z) Σ_{h∈{0,1}^m} exp(Σ_{i=1}^{m} (w^(i)·x + θ^(i)) h_i) = (1/Z) Π_{i=1}^{m} (1 + e^{w^(i)·x + θ^(i)})    (2)

This product form is particular to the harmonium structure, and does not hold for general Boltzmann machines. Product form distribution models have been used for density estimation in Projection Pursuit [10,6,5]. We shall look further into this relationship in section 5.

3 Discussion of the model

The right hand side of Equation (2) has a simple intuitive interpretation. The ith factor in the product corresponds to the hidden variable h_i and is an increasing function of the dot product between x and the weight vector of the ith hidden unit. Hence an input vector x will tend to have large probability when it is in the direction of one of the weight vectors w^(i) (i.e., when w^(i)·x is large), and small probability otherwise. This is the way that the hidden variables can be seen to exert their "influence": each corresponds to a preferred or "prototypical" direction in space. The next to the last formula in Equation (2) shows that the harmonium model can be written as a mixture of 2^m distributions of the form

    (1/Z(h)) exp( Σ_{i=1}^{m} (w^(i)·x + θ^(i)) h_i ),

where h ∈ {0,1}^m and Z(h) is the appropriate normalization factor. It is easily verified that each of these distributions is in fact a product of n Bernoulli distributions on {+1, -1}, one for each input variable x_j.
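The product form can be checked numerically. The sketch below is our own illustration (the weights and biases are random stand-ins, not from the paper): it computes the unnormalized probability of an input vector both by brute-force summation of e^{-E} over all 2^m hidden configurations and via the product over hidden units, for a tiny n and m.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
W = rng.normal(size=(m, n))    # row i holds the weight vector w^(i)
theta = rng.normal(size=m)     # biases theta^(i)

def energy(x, h):
    """E(x, h) = -sum_i (w^(i).x + theta^(i)) h_i, as in Eq. (1)."""
    return -np.sum((W @ x + theta) * h)

def unnorm_prob_brute(x):
    """Sum e^{-E(x, h)} over all 2^m hidden configurations."""
    return sum(np.exp(-energy(x, np.array(h)))
               for h in itertools.product([0, 1], repeat=m))

def unnorm_prob_product(x):
    """Product form: prod_i (1 + e^{w^(i).x + theta^(i)})."""
    return float(np.prod(1.0 + np.exp(W @ x + theta)))

x = np.array([1.0, -1.0, 1.0, 1.0])
brute = unnorm_prob_brute(x)
closed = unnorm_prob_product(x)
# brute and closed agree; dividing either by Z gives Pr(x | phi).
```

The agreement reflects the factorization Σ_h exp(Σ_i a_i h_i) = Π_i (1 + e^{a_i}), which is exactly what the bipartite structure buys.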
Hence the harmonium model can be interpreted as a kind of mixture model. However, the number of components in the mixture represented by a harmonium is exponential in the number of hidden units. It is interesting to compare the class of harmonium models to the standard class of models defined by a mixture of products of Bernoulli distributions. The same bipartite graph described in Figure 1 can be used to define a standard mixture model. Assign each of the m hidden units a weight vector w^(i) and a probability p_i such that Σ_{i=1}^{m} p_i = 1. To generate an example, choose one of the hidden units according to the distribution defined by the p_i's, and then choose the vector x according to P_i(x) = (1/Z_i) e^{w^(i)·x}, where Z_i is the appropriate normalization factor so that Σ_{x∈{±1}^n} P_i(x) = 1. We thus get the distribution

    P(x) = Σ_{i=1}^{m} p_i e^{w^(i)·x} / Z_i    (3)

This form for presenting the standard mixture model emphasizes the similarity between this model and the harmonium model. A vector x will have large probability if the dot product w^(i)·x is large for some 1 ≤ i ≤ m (so long as p_i is not too small). However, unlike the standard mixture model, the harmonium model allows more than one hidden variable to be +1 for any generated example. This means that several hidden influences can combine in the generation of a single example, because several hidden variables can be +1 at the same time. To see why this is useful, consider the coffee shop example given in the introduction. At any moment in time it is reasonable to find several social groups of people sitting in the shop. The harmonium model has a natural representation for this situation, while in order for the standard mixture model to describe it accurately, a hidden variable has to be assigned to each combination of social groups that is likely to be found in the shop at the same time. In such cases the harmonium model is exponentially more succinct than the standard mixture model.
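The contrast with Equation (3) can be made concrete. In the sketch below (ours, not the authors'; the two "social group" weight vectors and all constants are invented), both models are evaluated by brute-force enumeration over {±1}^4, and we compare how much probability each assigns to the pattern where both groups are present at once, relative to only one group being present.

```python
import itertools

import numpy as np

# Two "social group" weight vectors over n = 4 people (invented values).
W = np.array([[2.0, 2.0, 0.0, 0.0],    # group 1: persons 1 and 2
              [0.0, 0.0, 2.0, 2.0]])   # group 2: persons 3 and 4
theta = np.array([-1.0, -1.0])

def harmonium_prob(W, theta):
    """Normalized harmonium distribution over {-1,+1}^n (brute force)."""
    xs = np.array(list(itertools.product([-1, 1], repeat=W.shape[1])), dtype=float)
    un = np.prod(1.0 + np.exp(xs @ W.T + theta), axis=1)
    return xs, un / un.sum()

def mixture_prob(W, p):
    """Standard mixture of Bernoulli products, as in Eq. (3)."""
    xs = np.array(list(itertools.product([-1, 1], repeat=W.shape[1])), dtype=float)
    comp = np.exp(xs @ W.T)        # unnormalized component densities
    comp /= comp.sum(axis=0)       # normalize each component over all xs
    return xs, comp @ p

xs, ph = harmonium_prob(W, theta)
_, pm = mixture_prob(W, np.array([0.5, 0.5]))

both = np.all(xs == 1, axis=1)                                    # all four present
one = (xs[:, :2] == 1).all(axis=1) & (xs[:, 2:] == -1).all(axis=1)  # group 1 only
ratio_h = ph[both].item() / ph[one].item()
ratio_m = pm[both].item() / pm[one].item()
# The harmonium boosts the "both groups at once" pattern far more than
# the two-component mixture can, since two hidden variables fire together.
```

This is the exponential-succinctness point in miniature: the mixture would need an extra component dedicated to the combined pattern to match the harmonium.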
4 Learning by gradient ascent on the log-likelihood

We now suppose that we are given a sample consisting of a set S of vectors in {±1}^n drawn independently at random from some unknown distribution. Our goal is to use the sample S to find a good model for this unknown distribution using a harmonium with m hidden units, if possible. The method we investigate here is the method of maximum likelihood estimation using gradient ascent. The goal of learning is thus reduced to finding the set of parameters for the harmonium that maximizes the (log of the) probability of the set of examples S. In fact, this gives the standard learning algorithm for general Boltzmann machines. For a general Boltzmann machine this would require stochastic estimation of the parameters. As stochastic estimation is very time-consuming, the result is that learning is very slow. In this section we show that stochastic estimation need not be used for the harmonium model. From (2), the log likelihood of a sample of input vectors S = {x^(1), x^(2), ..., x^(N)}, given a particular setting φ = {(w^(1), θ^(1)), ..., (w^(m), θ^(m))} of the parameters of the model, is:

    log-likelihood(φ) = Σ_{x∈S} ln Pr(x | φ) = Σ_{x∈S} Σ_{i=1}^{m} ln(1 + e^{w^(i)·x + θ^(i)}) - N ln Z.    (4)

Taking the gradient of the log-likelihood results in the following formula for the jth component of w^(i):

    (∂/∂w_j^(i)) log-likelihood(φ) = Σ_{x∈S} x_j / (1 + e^{-(w^(i)·x + θ^(i))}) - N Σ_{x∈{±1}^n} Pr(x | φ) x_j / (1 + e^{-(w^(i)·x + θ^(i))})    (5)

A similar formula holds for the derivative of the bias term. The purpose of the clamped and unclamped phases in the Boltzmann machine learning algorithm is to approximate these two terms. In general, this requires stochastic methods. However, here the clamped term is easy to calculate; it requires summing a logistic-type function over all training examples. The same term is obtained by making the mean field approximation for the clamped phase in the general algorithm [15], which is exact in this case.
It is more difficult to compute the sleep phase term, as it is an explicit sum over the entire input space, and within each term of this sum there is an implicit sum over the entire space of configurations of hidden units in the factor Pr(x | φ). However, again taking advantage of the special structure of the harmonium, we can reduce this sleep phase gradient term to a sum only over the configurations of the hidden units, yielding for each component of w^(i)

    (∂/∂w_j^(i)) log-likelihood(φ) = Σ_{x∈S} x_j / (1 + e^{-(w^(i)·x + θ^(i))}) - N Σ_{h∈{0,1}^m} Pr(h | φ) h_i tanh( Σ_{k=1}^{m} h_k w_j^(k) )    (6)

where

    Pr(h | φ) = exp(Σ_{i=1}^{m} h_i θ^(i)) Π_{j=1}^{n} cosh(Σ_{i=1}^{m} h_i w_j^(i)) / [ Σ_{h'∈{0,1}^m} exp(Σ_{i=1}^{m} h'_i θ^(i)) Π_{j=1}^{n} cosh(Σ_{i=1}^{m} h'_i w_j^(i)) ].

Direct computation of (6) is fast for small m, in contrast to the case for general Boltzmann machines (we have performed experiments with m ≤ 10). However, for large m it is not possible to compute all 2^m terms. There is a way to avoid this exponential explosion if we can assume that a small number of terms dominate the sums. If, for instance, we assume that the probability that more than k hidden units are active (+1) at the same time is negligibly small, we can get a good approximation by computing only O(m^k) terms. Alternately, if we are not sure which states of the hidden units have non-negligible probability, we can dynamically search, as part of the learning process, for the significant terms in the sum. This way we get an algorithm that is always accurate, and is efficient when the number of significant terms is small. In the extreme case where we assume that only one hidden unit is active at a time (i.e., k = 1), the harmonium model essentially reduces to the standard mixture model as discussed in section 3. For larger k, this type of assumption provides a middle ground between the generality of the harmonium model and the simplicity of the mixture model.
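For small m, both terms of the gradient in (5)-(6) can be computed exactly: the clamped term as a logistic sum over the sample, and the sleep term by enumerating the 2^m hidden configurations with Pr(h | φ). The sketch below is our own illustration with random stand-in parameters and data, not the authors' code.

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
n, m, N = 6, 3, 50
W = rng.normal(scale=0.5, size=(m, n))       # rows are w^(i)
theta = rng.normal(scale=0.5, size=m)
S = rng.choice([-1.0, 1.0], size=(N, n))     # stand-in training sample

def grad_W(S, W, theta):
    """Exact gradient of the log-likelihood w.r.t. W, per Eqs. (5)-(6).

    Clamped term: a logistic function summed over the sample.
    Sleep term: an explicit sum over the 2^m hidden configurations,
    feasible because m is small."""
    N = S.shape[0]
    # Clamped term: sum_x x_j * sigmoid(w^(i).x + theta^(i)).
    act = 1.0 / (1.0 + np.exp(-(S @ W.T + theta)))   # N x m
    clamped = act.T @ S                              # m x n

    # Sleep term: Pr(h | phi) ~ exp(h.theta) * prod_j cosh((h W)_j).
    hs = np.array(list(itertools.product([0, 1], repeat=m)), dtype=float)
    log_pr = hs @ theta + np.sum(np.log(2.0 * np.cosh(hs @ W)), axis=1)
    pr = np.exp(log_pr - log_pr.max())
    pr /= pr.sum()
    tanh_term = np.tanh(hs @ W)                      # 2^m x n, E[x_j | h]
    sleep = (pr[:, None] * hs).T @ tanh_term         # m x n
    return clamped - N * sleep

g = grad_W(S, W, theta)   # an m x n matrix of exact partial derivatives
```

The sleep term costs O(2^m n) here; under the "at most k active units" assumption in the text, the enumeration over hs would shrink to the O(m^k) configurations with at most k ones.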
5 Projection Pursuit methods

A statistical method that has a close relationship with the harmonium model is the Projection Pursuit (PP) technique [10,6,5]. The use of projection pursuit in the context of neural networks has been studied by several researchers (e.g., [11]). Most of the work is in exploratory projection pursuit and projection pursuit regression. In this paper we are interested in projection pursuit density estimation. Here PP avoids the exponential blowup of the standard gradient ascent technique, and also has the advantage that the number m of hidden units is estimated from the sample as well, rather than being specified in advance. Projection pursuit density estimation [6] is based on several types of analysis, using the central limit theorem, that lead to the following general conclusion. If x ∈ R^n is a random vector for which the different coordinates are independent, and w ∈ R^n is a vector from the n-dimensional unit sphere, then the distribution of the projection w·x is close to Gaussian for most w. Thus searching for those directions w for which the projection of a sample is most non-Gaussian is a way of detecting dependencies between the coordinates in high dimensional distributions. Several "projection indices" have been studied in the literature for measuring the "non-Gaussianity" of a projection, each enhancing different properties of the projected distribution. In order to find more than one projection direction, several methods of "structure elimination" have been devised. These methods transform the sample in such a way that the direction in which non-Gaussianity has been detected appears to be Gaussian, thus enabling the algorithm to detect non-Gaussian projections that would otherwise be obscured. The search for a description of the distribution of a sample in terms of its projections can be formalized in the context of maximum likelihood density estimation [6].
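The projection-index idea can be illustrated as follows. In this sketch (ours, not from the paper; we use the deviation of excess kurtosis from zero as a stand-in index, which is cruder than the indices in the literature), a sample whose first coordinate is bimodal is probed with random directions, and the most non-Gaussian projection aligns with the structured coordinate.

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 5, 4000

# Toy data with structure in one coordinate: the first coordinate is
# bimodal, the rest are independent standard normals (our construction).
X = rng.normal(size=(N, d))
X[:, 0] = rng.choice([-2.0, 2.0], size=N) + 0.3 * rng.normal(size=N)

def nongaussianity(X, w):
    """Illustrative projection index: |excess kurtosis| of X projected
    on w. Gaussian projections score near 0; bimodal ones score high."""
    p = X @ w
    p = (p - p.mean()) / p.std()
    return abs(np.mean(p ** 4) - 3.0)

# Score a bank of random unit directions and keep the most non-Gaussian.
dirs = rng.normal(size=(200, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
best = dirs[np.argmax([nongaussianity(X, w) for w in dirs])]
alignment = abs(best[0])   # closeness to the structured axis e_1
```

In practice the winning direction would then be refined (and its structure removed) before searching again, which is the "structure elimination" loop described above.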
In order to create a formal relation between the harmonium model and projection pursuit, we define a variant of the model that defines a density over R^n instead of a distribution over {±1}^n. Based on this form we devise a projection index and a structure removal method that are the basis of the following learning algorithm (described fully in [4]).

• Initialization. Set S_0 to be the input sample. Set P_0 to be the initial distribution (Gaussian).

• Iteration. Repeat the following steps for i = 1, 2, ... until no single-variable harmonium model has a significantly higher likelihood than the Gaussian distribution with respect to S_i.

1. Perform an estimate-maximize (EM) [2] search on the log-likelihood of a single hidden variable model on the sample S_{i-1}. Denote by θ_i and w^(i) the parameters found by the search, and create a new hidden unit with associated binary r.v. h_i with these weights and bias.

2. Transform S_{i-1} into S_i using the following structure removal procedure. For each example x ∈ S_{i-1}, compute the probability that the hidden variable h_i found in the last step is 1 on this input: P(h_i = 1) = (1 + e^{-(θ_i + w^(i)·x)})^{-1}. Flip a coin that has probability of "heads" equal to P(h_i = 1). If the coin turns out "heads" then add x - w^(i) to S_i; else add x to S_i.

3. Set P_i(x) to be P_{i-1}(x) Z_i^{-1} (1 + e^{θ_i + w^(i)·x}).

6 Experimental work

We have carried out several experiments to test the performance of unsupervised learning using the harmonium model. These are not, at this stage, extensive experimental comparisons, but they do provide initial insights into the issues regarding our learning algorithms and the use of the harmonium model for learning real world tasks. The first set of experiments studies two methods for learning the harmonium model. The first is the gradient ascent method, and the second is the projection pursuit method.
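The structure removal procedure (step 2 of the algorithm in section 5) can be sketched directly; the sample, weights, and bias below are invented for illustration, and the code is ours rather than the authors'.

```python
import numpy as np

rng = np.random.default_rng(4)

def structure_removal(S, w, theta):
    """Step 2: probabilistically remove the structure captured by the
    newly added hidden unit with weights w and bias theta.

    For each example x, P(h = 1) = 1 / (1 + exp(-(theta + w.x)));
    with that probability, replace x by x - w."""
    p = 1.0 / (1.0 + np.exp(-(theta + S @ w)))
    heads = rng.random(len(S)) < p          # the coin flips
    out = S.copy()
    out[heads] -= w                         # shift only the "heads" rows
    return out

# Toy sample and a single learned unit (values invented).
S = rng.choice([-1.0, 1.0], size=(8, 5))
w = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
S_next = structure_removal(S, w, theta=-0.5)
```

Examples that strongly activate the unit are likely to be shifted by -w, so the direction just found looks Gaussian in S_next and the next EM search can uncover a different unit.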
The experiments in this set were performed on synthetically generated data. The input consisted of binary vectors of 64 bits that represent 8 × 8 binary images. The images are synthesized using a harmonium model with 10 hidden units whose weights were set as in Figure 2(c). The ultimate goal of the learning algorithms was to retrieve the model that generated the data. To measure the quality of the models generated by the algorithms we use three different measures: the likelihood of the model,⁶ the fraction of correct predictions the model makes when used to predict the value of a single input bit given all the other bits, and the performance of the model when used to reconstruct the input from the most probable state of the hidden units.⁷ All experiments use a test set and a train set, each containing 1000 examples. The gradient ascent method used a standard momentum term, and typically needed about 1000 epochs to stabilize. In the projection pursuit algorithm, 4 iterations of EM per hidden unit proved sufficient to find a stable solution. The results are summarized in the following table and in Figure 2.
                                        likelihood       single bit prediction   input reconstruction
                                        train    test    train    test           train    test
gradient ascent for 1000 epochs         0.399    0.425   0.098    0.100          0.311    0.338
projection pursuit                      0.799    0.802   0.119    0.114          0.475    0.480
projection pursuit followed by
  gradient ascent for 100 epochs        0.411    0.430   0.091    0.089          0.315    0.334
projection pursuit followed by
  gradient ascent for 1000 epochs       0.377    0.405   0.071    0.082          0.261    0.287
true model                              0.372    0.404   0.062    0.071          0.252    0.283

Looking at the table and Figure 2, and taking into account execution times, it appears that gradient ascent is slow but eventually finds much of the underlying structure in the distribution, although several of the hidden units (see units 1, 2, 6, 7, counting from the left, in Figure 2(a)) have no obvious relation to the true model. In contrast, PP is fast and finds all of the features of the true model, albeit sometimes

⁶ We present the negation of the log-likelihood, scaled so that the uniform distribution will have likelihood 1.0.
⁷ More precisely, for each input unit i we compute the probability p_i that it has value +1. Then for example (x_1, ..., x_n), we measure Σ_{i=1}^{n} log(1/2 + x_i(p_i - 1/2)).
in combinations. However, the error measurements show that something is still missing from the models found by our implementation of PP.

Figure 2: The weight vectors of the models in the synthetic data experiments. Each matrix represents the 64 weights of one hidden unit. The square above the matrix represents the unit's bias. Positive weights are displayed as full squares and negative weights as empty squares; the area of the square is proportional to the absolute value of the weight. (a) The weights in the model found by gradient ascent alone. (b) The weights in the model found by projection pursuit alone. (c) The weights in the model used for generating the data. (d) The weights in the model found by projection pursuit followed by gradient ascent. For this last model we also show the histograms of the projection of the examples on the directions defined by those weight vectors; the bimodality expected from projection pursuit analysis is evident.
Following PP by a gradient ascent phase seems to give the best of both algorithms, finding a good approximation after only 140 epochs (40 PP + 100 gradient) and recovering the true model almost exactly after 1040 epochs. In the second set of experiments we compare the performance of the harmonium model to that of the mixture model. The comparison uses real world data extracted from the NIST handwritten data base.⁸ Examples are 16 × 16 binary images (see Figure 3). We use 60 hidden units to model the distribution in both of the models. Because of the large number of hidden units we cannot use gradient ascent learning and instead use projection pursuit. For the same reason it was not possible to compute the likelihood of the harmonium model, and only the other two measures of error were used. Each test was run several times to get accuracy bounds on the measurements. The results are summarized in the following table.

                   single bit prediction                input reconstruction
                   train           test                 train           test
Mixture model      0.185 ± 0.005   0.258 ± 0.005        0.518 ± 0.002   0.715 ± 0.002
Harmonium model    0.20 ± 0.01     0.21 ± 0.01          0.63 ± 0.05     0.66 ± 0.03

In Figure 4 we show some typical weight vectors found for the mixture model and for the harmonium model. It is clear that while the mixture model finds weights that are some kind of average prototypes of complete digits, the harmonium model finds weights that correspond to local features such as lines and contrasts. There is a small but definite improvement in the errors of the harmonium model with respect to the errors of the mixture model. As the experiments on synthetic data have shown that PP does not reach

⁸ NIST Special Database 1, HWDB Rel 1-1.1, May 1990.

Figure 3: A few examples from the handwritten digits sample.
optimal solutions by itself, we expect the advantage of the harmonium model over the mixture model to increase further by using improved learning methods. Of course, the harmonium model is a very general distribution model and is not specifically tuned to the domain of handwritten digit images; thus it cannot be compared to models specifically developed to capture structures in this domain. However, the experimental results support our claim that the harmonium model is a simple and tractable mathematical model for describing distributions in which several correlation patterns combine to generate each individual example.

Figure 4: Typical weight vectors found by the mixture model (left) and the harmonium model (right).

References

[1] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[2] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. B, 39:1-38, 1977.
[3] B. Everitt and D. Hand. Finite Mixture Distributions. Chapman and Hall, 1981.
[4] Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. Technical Report UCSC-CRL-91-20, Univ. of Calif. Computer Research Lab, Santa Cruz, CA, 1992 (to appear).
[5] J. H. Friedman. Exploratory projection pursuit. J. Amer. Stat. Assoc., 82(397):599-608, Mar. 1987.
[6] J. H. Friedman, W. Stuetzle, and A. Schroeder. Projection pursuit density estimation. J. Amer. Stat. Assoc., 79:599-608, 1984.
[7] H. Geffner and J. Pearl. On the probabilistic semantics of connectionist networks.
Technical Report CSD-870033, UCLA Computer Science Department, July 1987.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6:721-742, 1984.
[9] J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554-2558, Apr. 1982.
[10] P. Huber. Projection pursuit (with discussion). Ann. Stat., 13:435-525, 1985.
[11] N. Intrator. Feature extraction using an unsupervised neural network. In D. Touretzky, J. Elman, T. Sejnowski, and G. Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School, pages 310-318. Morgan Kaufmann, San Mateo, CA, 1990.
[12] R. M. Neal. Learning stochastic feedforward networks. Technical report, Department of Computer Science, University of Toronto, Nov. 1990.
[13] S. Nowlan. Maximum likelihood competitive learning. In D. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 574-582. Morgan Kaufmann, 1990.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[15] C. Peterson and J. R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[16] D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, Cambridge, Mass., 1986.
Best-First Model Merging for Dynamic Learning and Recognition

Stephen M. Omohundro
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, California 94704

Abstract

"Best-first model merging" is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.

1 TOWARD MORE COGNITIVE LEARNING

Standard backpropagation neural networks learn in a way which appears to be quite different from human learning. Viewed as a cognitive system, a standard network always maintains a complete model of its domain. This model is mostly wrong initially, but gets gradually better and better as data appears. The net deals with all data in much the same way and has no representation for the strength of evidence behind a certain conclusion. The network architecture is usually chosen before any data is seen, and the processing is much the same in the early phases of learning as in the late phases. Human and animal learning appears to proceed in quite a different manner. When an organism has not had many experiences in a domain of importance to it, each individual experience is critical. Rather than use such an experience to slightly modify the parameters of a global model, a better strategy is to remember the experience in detail. Early in learning, an organism doesn't know which features of an experience are important unless it has a strong
prior knowledge of the domain. Without such prior knowledge, its best strategy is to generalize on the basis of a similarity measure to individual stored experiences. (Shepard, 1987) shows that there is a universal exponentially decaying form for this kind of similarity-based generalization over a wide variety of sensory domains in several studied species. As experiences accumulate, the organism eventually gets enough data to reliably validate models from complex classes. At this point the animal need no longer remember individual experiences, but rather only the discovered generalities (e.g. as rules). With such a strategy, it is possible for a system to maintain a measure of confidence in its predictions while building ever more complex models of its environment.

Systems based on these two types of learning have also appeared in the neural network, statistics, and machine learning communities. In the learning literature one finds both "table-lookup" or "memory-based" methods and "parameter-fitting" methods. In statistics the distinction is made between "non-parametric" and "parametric" methods. Table-lookup methods work by storing examples and generalize to new situations on the basis of similarity to the old ones. Such methods are capable of one-shot learning and have a measure of the applicability of their knowledge to new situations, but are limited in their generalization capability. Parameter-fitting models choose the parameters of a predetermined model to best fit a set of examples. They usually take longer to train and are susceptible to computational difficulties such as local maxima, but can potentially generalize better by extending the influence of examples over the whole space. Aside from computational difficulties, their fundamental problem is overfitting, i.e. having insufficient data to validate a particular parameter setting as useful for generalization.

2 OVERFITTING IN LEARNING AND RECOGNITION

There have been many recent results (e.g.
based on the Vapnik-Chervonenkis dimension) which identify the number of examples needed to validate choices made from specific parametric model families. We would like a learning system to be able to induce extremely complex models of the world, but we don't want to have to present it with the enormous amount of data needed to validate such a model unless it is really needed. (Vapnik, 1982) proposes a technique for avoiding overfitting while allowing models of arbitrary complexity. The idea is to start with a nested family of model spaces, whose members contain ever more complex models. When the system has only a small amount of data it can only validate models in the smaller model classes. As more data arrives, however, the more complex classes may be considered. If at any point a fit is found to within desired tolerances, however, only the amount of data needed by the smallest class containing the chosen model is needed. Thus there is the potential for choosing complex models without penalizing situations in which the model is simple. The model merging approach may be viewed in these terms, except that instead of a single nested family there is a widely branching tree of model spaces.

Like learning, recognition processes (visual, auditory, etc.) aim at constructing models from data. As such they are subject to the same considerations regarding overfitting. Figure 1 shows a perceptual example where a simpler model (a single segment) is perceptually chosen to explain the data (4 almost collinear dots) over a more complex model (two segments) which fits the data better. An intuitive explanation is that if the dots were generated by two segments, it would be an amazing coincidence that they are almost collinear; if they were generated by one, that fact is easily explained. Many of the Gestalt phenomena can be considered in the same terms. Many of the processes used in recognition (e.g.
segmentation, grouping) have direct analogs in learning and vice versa.

Figure 1: An example of Occam's razor in recognition.

There has been much recent interest in the network community in Bayesian methods for model selection while avoiding overfitting (e.g. Buntine and Weigend, 1992 and MacKay, 1992). Learning and recognition fit naturally together in a Bayesian framework. The Bayesian approach makes explicit the need for a prior distribution. The posterior distribution generated by learning becomes the prior distribution for recognition. The model merging process described in this paper is applicable to both phases, and the knowledge representation it suggests may be used for both processes as well. There are at least three properties of the world that may be encoded in a prior distribution, have a dramatic effect on learning and recognition, and are essential to the model merging approach. The continuity prior is that the world is geometric and, unless there is contrary data, a system should prefer continuous models over discontinuous ones. This prior leads to a wide variety of what may be called "geometric learning algorithms" (Omohundro, 1990). The sparseness prior is that the world is sparsely interacting. This says that probable models naturally decompose into components which only directly affect one another in a sparse manner. The primary origin of this prior is that physical objects usually only directly affect nearby objects in space and time. This prior is responsible for the success of representations such as Markov random fields and Bayesian networks which encode conditional independence relations. Even if the individual models consist of sparsely interacting components, it still might be that the data we receive for learning or recognition depends in an intricate way on all components.
The locality prior prefers models in which the data decomposes into components which are directly affected by only a small number of model components. For example, in the learning setting only a small portion of the knowledge base will be relevant to any specific situation. In the recognition setting, an individual pixel is determined by only a small number of objects in the scene. In geometric settings, a localized representation allows only a small number of model parameters to affect any individual prediction.

3 MODEL MERGING

Based on the above considerations, an ideal learning or recognition system should model the world using a collection of sparsely connected, smoothly parameterized, localized models. This is an apt description of many of the neural network models currently in use. Bayesian methods provide an optimal means for induction with such a choice of prior over models, but are computationally intractable in complex situations. We would therefore like to develop heuristic approaches which approximate the Bayesian solution and avoid overfitting. Based on the idealization of animal learning in the first section, what we would like is a system which smoothly moves from a memory-based regime, in which the models are the data, to ever more complex parameterized models. Because of the locality prior, model components only affect a subset of the data. We can therefore choose the complexity of components which are relevant to different portions of the data space according to the data which has been received there. This allows for reliably validated models of extremely high complexity in some regions of the space while other portions are modeled with low complexity. If only a small number of examples have been seen in some region, these are simply remembered and generalization is based on similarity. As more data arrives, if regularities are found and there is enough data present to justify them,
more complex parameterized models are incorporated. There are many possible approaches to implementing such a strategy. We have investigated a particular heuristic which can be made computationally efficient and appears to work well in a variety of areas. The best-first model merging approach is applicable in a variety of situations in which complex models are constructed by combining simple ones. The idea is to improve a complex model by replacing two of its component models by a single model. This "merged" model may be in the same family as the original components. More interestingly, because the combined data from the merged components is used in determining the parameters of the merged model, it may come from a larger parameterized class. The critical idea is to never allow the system to hypothesize a model which is more complex than can be justified by the data it is based on. The "best-first" aspect is to always choose to merge the pair of models which decrease the likelihood of the data the least. The merging may be stopped according to a variety of criteria which are now applied to individual model components rather than the entire model. Examples of such criteria are those based on cross-validation, Bayesian Occam factors, VC bounds, etc. In experiments in a variety of domains, this approach does an excellent job of discovering regularities and allocating modelling resources efficiently.

3 MODEL MERGING VS. K-MEANS FOR RBF'S

Our first example is the problem of choosing centers in radial basis function networks for approximating functions. In the simplest approach, a radial basis function (e.g. a Gaussian) is located at each training input location. The induced function is a linear combination of these basis functions which minimizes the mean square error of the training examples. Better models may be obtained by using fewer basis functions than data points. Most work on choosing the centers of these functions uses a clustering technique such as k-means (e.g.
Moody and Darken, 1989). This is reasonable because it puts the representational power of the model in the regions of highest density, where errors are more critical. It ignores the structure of the modelled function, however. The model merging approach starts with a basis function at each training point and successively merges pairs which increase the training error the least. We compared this approach with the k-means approach in a variety of circumstances. Figure 2 shows an example where the function on the plane to be learned is a sigmoid in x centered at 0 and is constant in y. Thus the function varies most along the x axis. The data is drawn from a Gaussian distribution which is centered at (-.5, 0). 21 training samples were drawn from this distribution, and from these a radial basis function network with 6 Gaussian basis functions was learned. The X's in the figure show the centers chosen by k-means. As expected, they are clustered near the center of the Gaussian source distribution. The triangles show the centers chosen by best-first model merging. While there is some tendency to focus on the source center, there is also a tendency to represent the region where the modelled function varies the most. The training error is over 10 times less with model merging, and the test error on an independent test set is about 3 times lower. These results were typical in a variety of test runs. This simple example shows one way in which underlying structure is naturally discovered by the merging technique.

[Figure 2 plot annotations: dots are training points, triangles are model merging centers, x's are k-means centers; 21 samples, 6 centers; RBF width .4; Gaussian width .4; Gaussian center -.5; sigmoid width .4.]
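The greedy center-merging procedure just described can be sketched as follows. This is a hedged illustration rather than the paper's implementation: the function names (`rbf_design`, `fit_error`, `merge_rbf_centers`), the exhaustive O(m^2) pair search, the fixed Gaussian width, and the choice of replacing a merged pair by its midpoint are all our assumptions for the sake of a short runnable example.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian basis matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_error(X, y, centers, width):
    """Least-squares fit of the linear output weights; mean squared training error."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.mean((Phi @ w - y) ** 2))

def merge_rbf_centers(X, y, width, n_centers):
    """Start with one basis function per training point; repeatedly merge the
    pair of centers whose replacement (here: by their midpoint) increases the
    training error the least, until n_centers remain."""
    centers = X.copy()
    while len(centers) > n_centers:
        best = None
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                merged = np.delete(centers, [i, j], axis=0)
                merged = np.vstack([merged, (centers[i] + centers[j]) / 2])
                err = fit_error(X, y, merged, width)
                if best is None or err < best[0]:
                    best = (err, merged)
        centers = best[1]
    return centers
```

A more careful implementation would refit each candidate merge only locally and would stop via a validation criterion rather than a fixed center count, but the best-first structure is the same.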
Figure 2: Radial basis function centers in two dimensions chosen by model merging and by k-means. The dots show the 21 training samples. The x's are the centers chosen by k-means, the triangles by model merging. The training error was .008098 for k-means and .000604 for model merging. The test error was .012463 for k-means and .004638 for model merging.

4 APPROXIMATING CURVES AND SURFACES

As a second intuitive example, consider the problem of modelling a curve in the plane by a combination of straight line segments. The error function may be taken as the mean square error over each curve point to the nearest segment point. A merging step in this case consists of replacing two segments by a single segment. We always choose the pair such that the merged segment increases the error the least. Figure 3 shows the approximations generated by this strategy. It does an excellent job of identifying the essentially linear portions of the curve and puts the boundaries between component models at the "corners". The corresponding "top-down" approach would start with a single segment and repeatedly split it. This approach sometimes has to make decisions too early and often misses the corners in the curve. While not shown in the figure, as repeated mergings take place, more data is available for each segment. This would allow us to use more complex models than linear segments, such as Bezier curves. It is possible to reliably induce a representation which is linear in some portions and higher order in others. Such models potentially have many parameters and would be subject to overfitting if they were learned directly rather than by going through merge steps. Exactly the same strategy may be applied to modelling higher-dimensional constraint surfaces by hyperplanes or functions by piecewise linear portions.
The model merging approach naturally complements the efficient mapping and constraint surface representations described in (Omohundro, 1991) based on bumptrees.

Figure 3: Approximation of a curve by best-first merging of segment models. The top row shows the endpoints chosen by the algorithm at various levels of allowed error (Error = 1, 2, 5, 10, 20). The bottom row shows the corresponding approximation to the curve.

Notice, in this example, that we need only consider merging neighboring segments, as the increased error in merging non-adjoining segments would be too great. This imposes a locality on the problem which allows for extremely efficient computation. The idea is to maintain a priority queue with all potential merges on it, ordered by the increase in error caused by the merge. This consists of only the neighboring pairs (of which there are n-1 if there are n segments). The top pair on the queue is removed and the merge operation it represents is performed if it doesn't violate the stopping criteria. The other potential merge pairs which incorporated the merged segments must be removed from the queue, and the new possible mergings with the generated segment must be inserted (alternatively, nothing need be removed and each pair is checked for viability when it reaches the top of the queue). The neighborhood structure allows each of these operations to be performed quickly with the appropriate data structures, and the entire merging process takes time which is linear (or linear times logarithmic) in the number of component models. Complex time-varying curves may easily be processed in real time on typical workstations. In higher dimensions, hierarchical geometric data structures (as in Omohundro, 1987, 1990) allow a similar reduction in computation based on locality.
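The priority-queue scheme sketched in the preceding paragraph, using the lazy "check viability at the top of the queue" variant, can be written in a few lines. This is a hedged sketch under our own assumptions: the polyline is given as sample points, a segment's error is the sum of squared perpendicular distances of its interior samples to the chord, and a merge is the removal of one interior breakpoint; the names `seg_error` and `merge_polyline` are ours.

```python
import heapq
import numpy as np

def seg_error(pts, i, j):
    """Sum of squared perpendicular distances of pts[i..j] to the chord pts[i]->pts[j]."""
    a, b = pts[i], pts[j]
    d = b - a
    length = np.hypot(d[0], d[1])
    rel = pts[i:j + 1] - a
    if length == 0:
        return float((rel ** 2).sum())
    cross = rel[:, 0] * d[1] - rel[:, 1] * d[0]   # 2-D cross product
    return float(((cross / length) ** 2).sum())

def merge_polyline(pts, max_error):
    """Best-first merging of adjacent segments: repeatedly remove the interior
    breakpoint whose removal increases the error the least, stopping when the
    cheapest remaining merge would add more than max_error."""
    n = len(pts)
    prev = list(range(-1, n - 1))          # doubly linked list of breakpoints
    nxt = list(range(1, n + 1))
    alive = [True] * n

    def cost(k):                           # error increase if breakpoint k is removed
        return (seg_error(pts, prev[k], nxt[k])
                - seg_error(pts, prev[k], k) - seg_error(pts, k, nxt[k]))

    heap = [(cost(k), k, (prev[k], nxt[k])) for k in range(1, n - 1)]
    heapq.heapify(heap)
    while heap:
        c, k, nbrs = heapq.heappop(heap)
        if not alive[k] or (prev[k], nxt[k]) != nbrs:
            continue                       # stale entry: viability checked lazily
        if c > max_error:
            break                          # stopping criterion
        alive[k] = False                   # merge the two segments around k
        p, q = prev[k], nxt[k]
        nxt[p], prev[q] = q, p
        for m in (p, q):                   # neighbours get fresh queue entries
            if 0 < m < n - 1:
                heapq.heappush(heap, (cost(m), m, (prev[m], nxt[m])))
    return [i for i in range(n) if alive[i]]
```

On a right-angled polyline such as `[(0,0), (1,0), (2,0), (3,0), (4,0), (4,1), (4,2)]` with a small `max_error`, the collinear interior breakpoints are merged away at zero cost and only the two endpoints and the corner survive, matching the "boundaries at the corners" behaviour described in the text.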
5 BALLTREE CONSTRUCTION

The model merging approach is applicable to a wide variety of adaptive structures. The "balltree" structure described in (Omohundro, 1989) provides efficient access to regions in geometric spaces. It consists of a nested hierarchy of hyper-balls surrounding given leaf balls and efficiently supports queries which test for intersection, inclusion, or nearness to a leaf ball. The balltree construction algorithm itself provides an example of a best-first merge approach in a higher-dimensional space. To determine the best hierarchy we can merge the leaf balls pairwise in such a way that the total volume of all the merged regions is as small as possible. The figure compares the quality of balltrees constructed using best-first merging to those constructed using top-down and incremental algorithms. As in other domains, the top-down approach has to make major decisions too early and often makes suboptimal choices. The merging approach only makes global decisions after many local decisions which adapt well to the structure.

Figure 4: Balltree error as a function of the number of balls for the top-down, incremental, and best-first merging construction methods. Leaf balls have uniformly distributed centers in 5 dimensions with radii uniformly distributed less than .1.

6 CONCLUSION

We have described a simple but powerful heuristic for dynamically building models for both learning and recognition which constructs complex models that adapt well to the underlying structure. We presented three different examples which only begin to touch on the possibilities. To hint at the broad applicability, we will briefly describe several other applications we are currently examining. In (Omohundro,
1991) we presented an efficient structure for modelling mappings based on a collection of local mapping models which were combined according to a partition of unity formed by "influence functions" associated with each model. This representation is very flexible and can be made computationally efficient. While in the experiments of that paper the local models were affine functions (constant plus linear), they may be chosen from any desired class. The model merging approach builds such a mapping representation by successively merging models and replacing them with a new model whose influence function extends over the range of the two original influence functions. Because it is based on more data, the new model can be chosen from a larger complexity class of functions than the originals.

One of the most fundamental inductive tasks is density estimation, i.e. estimating a probability distribution from samples drawn from it. A powerful standard technique is adaptive kernel estimation, in which a normalized Gaussian (or other kernel) is placed at each sample point with a width determined by the local sample density (Devroye and Gyorfi, 1985). Model merging can be applied to improve the generalization performance of this approach by choosing successively more complex component densities once enough data has accumulated by merging. For example, consider a density supported on a curve in a high-dimensional space. Initially the estimate will consist of radially symmetric Gaussians at each sample point. After successive mergings, however, the one-dimensional linear structure can be discovered (and the Gaussian components be chosen from the larger class of extended Gaussians) and the generalization dramatically improved. Other natural areas of application include inducing the structure of hidden Markov models, stochastic context-free grammars, Markov random fields, and Bayesian networks.

References

D. H.
Ballard and C. M. Brown. (1982) Computer Vision. Englewood Cliffs, NJ: Prentice-Hall.

W. L. Buntine and A. S. Weigend. (1992) Bayesian Back-Propagation. To appear in: Complex Systems.

L. Devroye and L. Gyorfi. (1985) Nonparametric Density Estimation: The L1 View. New York: Wiley.

D. J. MacKay. (1992) A Practical Bayesian Framework for Backprop Networks. Caltech preprint.

J. Moody and C. Darken. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation, 1, 281-294.

S. M. Omohundro. (1987) Efficient algorithms with neural network behavior. Complex Systems 1:273-347.

S. M. Omohundro. (1989) Five balltree construction algorithms. International Computer Science Institute Technical Report TR-89-063.

S. M. Omohundro. (1990) Geometric learning algorithms. Physica D 42:307-321.

S. M. Omohundro. (1991) Bumptrees for Efficient Function, Constraint, and Classification Learning. In Lippmann, Moody, and Touretzky (eds.), Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann Publishers.

R. N. Shepard. (1987) Toward a universal law of generalization for psychological science. Science.

V. Vapnik. (1982) Estimation of Dependences Based on Empirical Data. New York: Springer-Verlag.

PART XIII ARCHITECTURES AND ALGORITHMS
Neural Computing with Small Weights

Kai-Yeung Siu
Dept. of Electrical & Computer Engineering
University of California, Irvine
Irvine, CA 92717

Jehoshua Bruck
IBM Research Division
Almaden Research Center
San Jose, CA 95120-6099

Abstract

An important issue in neural computation is the dynamic range of weights in the neural networks. Many experimental results on learning indicate that the weights in the networks can grow prohibitively large with the size of the inputs. Here we address this issue by studying the tradeoffs between the depth and the size of weights in polynomial-size networks of linear threshold elements (LTEs). We show that there is an efficient way of simulating a network of LTEs with large weights by a network of LTEs with small weights. In particular, we prove that every depth-d, polynomial-size network of LTEs with exponentially large integer weights can be simulated by a depth-(2d + 1), polynomial-size network of LTEs with polynomially bounded integer weights. To prove these results, we use tools from harmonic analysis of Boolean functions. Our technique is quite general; it provides insights into some other problems. For example, we are able to improve the best known results on the depth of a network of linear threshold elements that computes the COMPARISON, SUM and PRODUCT of two n-bit numbers, and the MAXIMUM and the SORTING of n n-bit numbers.

1 Introduction

The motivation for this work comes from the area of neural networks, where a linear threshold element is the basic processing element. Many experimental results on learning have indicated that the magnitudes of the coefficients in the threshold elements grow very fast with the size of the inputs and therefore limit the practical use of the network. One natural question to ask is the following: How limited is the computational power of the network if we restrict ourselves to threshold elements with only "small" growth in the coefficients?
We answer this question by showing that we can trade off an exponential growth with a polynomial growth in the magnitudes of coefficients by increasing the depth of the network by a factor of almost two and a polynomial growth in the size.

Linear Threshold Functions: A linear threshold function $f(X)$ is a Boolean function such that
$$ f(X) = \mathrm{sgn}(F(X)) = \begin{cases} 1 & \text{if } F(X) > 0 \\ -1 & \text{if } F(X) < 0 \end{cases} $$
where
$$ F(X) = \sum_{i=1}^{n} w_i \cdot x_i + w_0. $$
Throughout this paper, a Boolean function will be defined as $f : \{1, -1\}^n \to \{1, -1\}$; namely, 0 and 1 are represented by 1 and -1, respectively. Without loss of generality, we can assume $F(X) \neq 0$ for all $X \in \{1, -1\}^n$. The coefficients $w_i$ are commonly referred to as the weights of the threshold function. We denote the class of all linear threshold functions by $LT_1$.

$\widehat{LT}_1$ functions: In this paper, we shall study a subclass of $LT_1$ which we denote by $\widehat{LT}_1$. Each function $f(X) = \mathrm{sgn}(\sum_{i=1}^{n} w_i \cdot x_i + w_0)$ in $\widehat{LT}_1$ is characterized by the property that the weights $w_i$ are integers and bounded by a polynomial in $n$, i.e. $|w_i| \leq n^c$ for some constant $c > 0$.

Threshold Circuits: A threshold circuit [5, 10] is a Boolean network in which every gate computes an $\widehat{LT}_1$ function. The size of a threshold circuit is the number of $\widehat{LT}_1$ elements in the circuit. Let $\widehat{LT}_k$ denote the class of threshold circuits of depth $k$ with size bounded by a polynomial in the number of inputs. We define $LT_k$ similarly, except that we allow each gate in $LT_k$ to compute an $LT_1$ function. Although the definition of an $LT_1$ linear threshold function allows the weights to be real numbers, it is known [12] that we can replace each of the real weights by integers of $O(n \log n)$ bits, where $n$ is the number of input Boolean variables. So in the rest of the paper, we shall assume without loss of generality that all weights are integers. However, this still allows the magnitudes of the weights to increase exponentially fast with the size of the inputs. It is natural to ask if this is necessary.
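The definitions above are easy to make concrete. The following sketch is ours, not the paper's code: the helper name `lte` and the example weight vectors are assumptions, but the $\{1, -1\}$ input convention and the requirement $F(X) \neq 0$ follow the text.

```python
def lte(weights, w0):
    """A linear threshold element over inputs in {1, -1}:
    f(X) = sgn(sum_i w_i * x_i + w_0), assuming the sum is never zero."""
    def f(x):
        s = w0 + sum(w * xi for w, xi in zip(weights, x))
        assert s != 0, "the paper assumes F(X) != 0 for all X"
        return 1 if s > 0 else -1
    return f

# MAJORITY of three +/-1 inputs: unit integer weights, so it lies in the
# small-weight class (weights bounded by a polynomial in n).
maj = lte([1, 1, 1], 0)
assert maj((1, 1, -1)) == 1
assert maj((-1, 1, -1)) == -1
```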
In other words, is there a linear threshold function that must require exponentially large weights? Since there are $2^{\Theta(n^2)}$ linear threshold functions in $n$ variables [8, 14, 15], there exists at least one which requires $\Omega(n^2)$ bits to specify the weights. By the pigeonhole principle, at least one weight of such a function must need $\Omega(n)$ bits, and thus is exponentially large in magnitude; i.e. $\widehat{LT}_1 \subsetneq LT_1$. The above result was proved in [9] using a different method, by explicitly constructing an $LT_1$ function and proving that it is not in $\widehat{LT}_1$. In the following section, we shall show that the COMPARISON function (to be defined later) also requires exponentially large weights. We will refer to this function later on in the proof of
We also indicate how our 'depth'-efficient algorithm can be -applied to show that the product of two n-bit numbers can be computed in LT4 . In addition, we show that the MAXIMUM and SORTING ofn n-bit numbers can be computed in fr3 and LT4 , respectively. 2 Main Results Definition: Let X = (Xl, ... , Xn), Y = (YI, ... , Yn) E {I, _l}n. We consider X and Y as two n-bit numbers representing E?=l Xi· 2' and E?=l Yi . 2i , respectively. The COMPARISON function is defined as C(X, Y) = 1 iff X ~ Y In other words, n C(X, Y) = sgn{L:: 2i(Xi - yd + I} i=l Lemma 1 COMPARISON tt LTI On the other hand, using harmonic analysis [2], we can show the following: Lemma 2 COMPARISON E m Spectral representation of Boolean functions: Recently, harmonic analysis has been found to be a powerful tool in studying the computational complexity of Boolean functions [1, 2, 7]. The idea is that every Boolean function f : {I, _1}n -+ {I, -I} can be represented as a polynomial over the field of rational numbers as follows: f(X) = L aaxa aE{O,l}n Neural Computing with Small Weights 947 h X a al al an were = x 1 x 2 .•. Xn • Such representation is unique and the coefficients of the polynomial, {aal Q E {a, l}n}, are called the spectral coefficients of f. We shall define the Ll spectral norm of f to be IIfll = ~ laal· ae{O,I}n The proof of Lemma 2 is based on the spectral techniques developed in [2]. Using probabilistic arguments, it was proved in [2] that if a Boolean function has Ll spec.tral norm which is polynomially bounded, then the function is computable in LT2 • We observe (together with Noga Alon) that the techniques in [2] can be generalized to show that any Boolean function with polynomially bounded Ll spectral norm can even be closely approximated by a sparse polynomial. This observation is crucial when we extend our result from a single element to networks of elements with large weights. Lemma 3 Let f(X) : {I, _l}n --+ {I, -I} such that IIfll ~ n C for some c. 
Then for any k > 0, there exists a sparse polynomial 1 F(X) = N 2:'.:: wa Xa such that aes IF(X) - f(X)1 ~ n- k , where Wa and N are integers, S c {O, l}n, the size of S, Wa and N are all bounded by a polynomial in n. Hence, f(X) E LT2 • As a consequence of this result, Lemma 2 follows since it can be shown that COM PARISON has a polynQmially bounded Ll spectral norm. Now we are ready to state our main results. Although most linear threshold functions require exponentially large weights, we can always simulate them by 3 layers of in elements. Theorem 1 LTI ~ LT3 The result stated in Theorem 1 implies that a depth-d threshold circuit with exponentially large weights can be simulated by a depth-3d threshold circuit with polynomially large weights. Using the result of Lemma 3, we can actually obtain a more depth-efficient simulation. Theorem 2 As another consequence of Lemma 3, we have the following : 948 Siu and Bruck Corollary 1 Let /1 (X), ... , fm(X) be functions with polynomially bounded Ll spectral norms, and g(/1 (X), ... , fm(X» be an fi\ function with fi(X) 's as inputs, I.e. m g(/1(X), ... , fm(X» = sgn(2: Wdi(X) + wo) i=l Then 9 can be expressed as a sign of a sparse polynomial in X with polynomially many number of monomial terms xcr 's and polynomially bounded integer coeffi--cients. Hence 9 E LT2. If all LTI functions have polynomially bounded Ll spectral norms, then it would follow that LTI C iT2 • However, even the simple MAJORITY function does not have a polynomially bounded Ll spectral norm. We shall prove this fact via the following theorem. (As in Lemma 3, by a sparse polynomial we mean a polynomial with only polynomially many monomial terms xcr's). Theorem 3 The iTl function MAJORITY: n sgn(2: X i) i=l cannot be approximated by a sparse polynomial with an error o( n -1). Other applications of the harmonic analysis techniques and the results of Lemma 3 yields the following theorems: Theorem 4 Let x, y be two n-bit numbers. 
Then $\mathrm{ADDITION}(x, y) \in \widehat{LT}_2$.

Theorem 5 The product of two n-bit integers can be computed in $\widehat{LT}_4$.

Theorem 6 The MAXIMUM of n n-bit numbers can be computed in $\widehat{LT}_3$.

Theorem 7 The SORTING of n n-bit numbers can be computed in $\widehat{LT}_4$.

3 Concluding Remarks

Our main result indicates that for networks of linear threshold elements, we can trade off arbitrary real weights for polynomially bounded integer weights, at the expense of a polynomial increase in the size and a factor of almost two in the depth of the network. The proofs of the results in this paper can be found in [13]. We would like to mention that our results have recently been improved by Goldmann, Hastad, and Razborov [4]. They showed that any polynomial-size depth-d network of linear threshold elements with arbitrary weights can be simulated by a polynomial-size depth-(d+1) network with "small" (polynomially bounded integer) weights. While our construction can be made explicit, only the existence of the simulation is proved in [4]; it is left as an open problem in [4] whether there is an explicit construction of their result.

Acknowledgements

This work was done while Kai-Yeung Siu was a research student associate at IBM Almaden Research Center. It was supported in part by the Joint Services Program at Stanford University (US Army, US Navy, US Air Force) under Contract DAAL03-88-C-0011, and by the Department of the Navy (NAVELEX), NASA Headquarters, Center for Aeronautics and Space Information Sciences, under Grant NAGW-419S6.

References

[1] J. Bruck. Harmonic Analysis of Polynomial Threshold Functions. SIAM Journal on Discrete Mathematics, May 1990.

[2] J. Bruck and R. Smolensky. Polynomial Threshold Functions, $AC^0$ Functions and Spectral Norms. Technical Report RJ 7140, IBM Research, November 1989. Appeared in IEEE Symp. on Found. of Comp. Sci., October 1990.

[3] A. K. Chandra, L. Stockmeyer, and U. Vishkin. Constant depth reducibility. SIAM J. Comput., 13:423-439, 1984.
[4] M. Goldmann, J. Hastad, and A. Razborov. Majority Gates vs. General Weighted Threshold Gates. Unpublished manuscript.

[5] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. IEEE Symp. Found. Comp. Sci., 28:99-110, 1987.

[6] R. J. Lechner. Harmonic analysis of switching functions. In A. Mukhopadhyay, editor, Recent Developments in Switching Theory. Academic Press, 1971.

[7] N. Linial, Y. Mansour, and N. Nisan. Constant Depth Circuits, Fourier Transforms, and Learnability. Proc. 30th IEEE Symp. Found. Comp. Sci., 1989.

[8] S. Muroga and I. Toda. Lower Bound of the Number of Threshold Functions. IEEE Trans. on Electronic Computers, EC-15, 1966.

[9] J. Myhill and W. H. Kautz. On the Size of Weights Required for Linear-Input Switching Functions. IRE Trans. on Electronic Computers, EC-10, 1961.

[10] I. Parberry and G. Schnitger. Parallel Computation with Threshold Functions. Journal of Computer and System Sciences, 36(3):278-302, 1988.

[11] N. Pippenger. The complexity of computations by networks. IBM J. Res. Develop., 31(2), March 1987.

[12] P. Raghavan. Learning in Threshold Networks: A Computation Model and Applications. Technical Report RC 13859, IBM Research, July 1988.

[13] K.-Y. Siu and J. Bruck. On the Power of Threshold Circuits with Small Weights. SIAM J. Discrete Math., 4(3):423-435, August 1991.

[14] D. R. Smith. Bounds on the Number of Threshold Functions. IEEE Trans. on Electronic Computers, EC-15, 1966.

[15] S. Yajima and T. Ibaraki. A Lower Bound on the Number of Threshold Functions. IEEE Trans. on Electronic Computers, EC-14, 1965.
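The spectral tools of Section 2 are easy to experiment with numerically on small examples. The sketch below is our own illustration, not code from the paper: it enumerates $\{1,-1\}^n$ by brute force, so it is exponential in n and usable only for tiny inputs. It computes the spectral coefficients $a_\alpha = 2^{-n} \sum_X f(X) X^\alpha$ and the $L_1$ spectral norm, and implements the COMPARISON threshold formula from the definition in Section 2; all function names are ours.

```python
from itertools import product

def spectral_coeffs(f, n):
    """Spectral coefficients a_alpha = 2^{-n} * sum over X in {1,-1}^n of f(X) * X^alpha.

    Brute force over all 2^n inputs and all 2^n monomials -- tiny n only."""
    coeffs = {}
    for alpha in product((0, 1), repeat=n):
        total = 0
        for X in product((1, -1), repeat=n):
            mono = 1
            for a, x in zip(alpha, X):
                if a:
                    mono *= x  # X^alpha = product of x_i with alpha_i = 1
            total += f(X) * mono
        coeffs[alpha] = total / 2 ** n
    return coeffs

def l1_spectral_norm(f, n):
    """||f|| = sum over alpha of |a_alpha|."""
    return sum(abs(a) for a in spectral_coeffs(f, n).values())

def majority(X):
    """MAJORITY = sgn(sum of x_i); assumes len(X) is odd so the sum is nonzero."""
    return 1 if sum(X) > 0 else -1

def comparison(X, Y):
    """C(X, Y) = sgn(sum_i 2^i (x_i - y_i) + 1), the COMPARISON definition."""
    s = sum(2 ** i * (x - y) for i, (x, y) in enumerate(zip(X, Y), start=1))
    return 1 if s + 1 > 0 else -1

if __name__ == "__main__":
    # COMPARISON agrees with comparing the represented numbers sum_i x_i * 2^i.
    value = lambda Z: sum(2 ** i * z for i, z in enumerate(Z, start=1))
    for X in product((1, -1), repeat=3):
        for Y in product((1, -1), repeat=3):
            assert (comparison(X, Y) == 1) == (value(X) >= value(Y))
    # L1 spectral norm of MAJORITY for small odd n.
    for n in (1, 3, 5, 7):
        print(n, l1_spectral_norm(majority, n))
```

For MAJORITY on 3 variables the expansion is $(x_1 + x_2 + x_3 - x_1 x_2 x_3)/2$, so the norm is exactly 2, and the printed values grow with n. At these tiny sizes the growth is merely suggestive and is of course no substitute for the proof of Theorem 3.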