A Segment-based Automatic Language Identification System

Yeshwant K. Muthusamy & Ronald A. Cole
Center for Spoken Language Understanding
Oregon Graduate Institute of Science and Technology
Beaverton OR 97006-1999

Abstract

We have developed a four-language automatic language identification system for high-quality speech. The system uses a neural network-based segmentation algorithm to segment speech into seven broad phonetic categories. Phonetic and prosodic features computed on these categories are then input to a second network that performs the language classification. The system was trained and tested on separate sets of speakers of American English, Japanese, Mandarin Chinese and Tamil. It currently performs with an accuracy of 89.5% on the utterances of the test set.

1 INTRODUCTION

Automatic language identification is the rapid automatic determination of the language being spoken, by any speaker, saying anything. Despite several important applications of automatic language identification, this area has suffered from a lack of basic research and the absence of a standardized, public-domain database of languages.

It is well known that languages have characteristic sound patterns. Languages have been described subjectively as "singsong", "rhythmic", "guttural", "nasal", etc. The key to solving the problem of automatic language identification is the detection and exploitation of such differences between languages.

We assume that each language in the world has a unique acoustic structure, and that this structure can be defined in terms of phonetic and prosodic features of speech. Phonetic, or segmental, features include the inventory of phonetic segments and their frequency of occurrence in speech. Prosodic information consists of the relative durations and amplitudes of sonorant (vowel-like) segments, their spacing in time, and patterns of pitch change within and across these segments.
To the extent that these assumptions are valid, languages can be identified automatically by segmenting speech into broad phonetic categories, computing segment-based features that capture the relevant phonetic and prosodic structure, and training a classifier to associate the feature measurements with the spoken language.

We have developed a language identification system that uses a neural network to segment speech into a sequence of seven broad phonetic categories. Information about these categories is then used to train a second neural network to discriminate among utterances spoken by native speakers of American English, Japanese, Mandarin Chinese and Tamil. When tested on utterances produced by six new speakers from each language, the system correctly identifies the language being spoken 89.5% of the time.

2 SYSTEM OVERVIEW

The following steps transform an input utterance into a decision about what language was spoken.

Data Capture: The speech is recorded using a Sennheiser HMD 224 noise-canceling microphone, low-pass filtered at 7.6 kHz and sampled at 16 kHz.

Signal Representations: A number of waveform and spectral parameters are computed in preparation for further processing. The spectral parameters are generated from a 128-point discrete Fourier transform computed on a 10 ms Hanning window. All parameters are computed every 3 ms. The waveform parameters consist of estimates of (i) zc8000: the zero-crossing rate of the waveform in a 10 ms window, (ii) ptp700 and ptp8000: the peak-to-peak amplitude of the waveform in a 10 ms window in two frequency bands (0-700 Hz and 0-8000 Hz), and (iii) pitch: the presence or absence of pitch in each 3 ms frame. The pitch estimate is derived from a neural network pitch tracker that locates pitch periods in the filtered (0-700 Hz) waveform [2].
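The frame-based waveform parameters above lend themselves to a direct sketch. The following is a minimal illustration of computing a zero-crossing rate and a peak-to-peak amplitude on a 10 ms sliding window with a 3 ms hop; the exact estimators and band-filtering steps used in the paper are not reproduced here, so treat this as an assumption-laden sketch rather than the authors' implementation:

```python
import numpy as np

def frame_params(x, sr=16000, win_ms=10, hop_ms=3):
    """Per-frame zero-crossing count and peak-to-peak amplitude on a
    sliding window (the names zc/ptp follow the paper; the details of
    the windowing and counting are assumptions)."""
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    zc, ptp = [], []
    for start in range(0, len(x) - win + 1, hop):
        frame = x[start:start + win]
        # zero crossings: count sign changes between adjacent samples
        zc.append(int(np.sum(np.abs(np.diff(np.sign(frame))) > 0)))
        # peak-to-peak amplitude within the window
        ptp.append(float(frame.max() - frame.min()))
    return np.array(zc), np.array(ptp)
```

On a pure 100 Hz tone, each 10 ms window spans one period, so the zero-crossing count stays near 2 and the peak-to-peak amplitude near twice the tone's amplitude.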
The spectral parameters consist of (i) DFT coefficients, (ii) sda700 and sda8000: estimates of averaged spectral difference in two frequency bands, (iii) sdf: spectral difference in adjacent 9 ms intervals, and (iv) cm1000: the center-of-mass of the spectrum in the region of the first formant.

Broad Category Segmentation: Segmentation is performed by a fully-connected, feedforward, three-layer neural network that assigns 7 broad phonetic category scores to each 3 ms time frame of the utterance. The broad phonetic categories are: VOC (vowel), FRIC (fricative), STOP (pre-vocalic stop), PRVS (pre-vocalic sonorant), INVS (inter-vocalic sonorant), POVS (post-vocalic sonorant), and CLOS (silence or background noise). A Viterbi search, which incorporates duration and bigram probabilities, uses these frame-based output activations to find the best scoring sequence of broad phonetic category labels spanning the utterance. The segmentation algorithm is described in greater detail in [3].

Language Classification: Language classification is performed by a second fully-connected feedforward network that uses phonetic and prosodic features derived from the time-aligned broad category sequence. These features, described below, are designed to capture the phonetic and prosodic differences between the four languages.

3 FOUR-LANGUAGE HIGH-QUALITY SPEECH DATABASE

The data for this research consisted of natural continuous speech recorded in a laboratory by 20 native speakers (10 male and 10 female) of each of American English, Mandarin Chinese, Japanese and Tamil. The speakers were asked to speak a total of 20 utterances¹: 15 conversational sentences of their choice, two questions of their choice, the days of the week, the months of the year and the numbers 0 through 10. The objective was to have a mix of unconstrained- and restricted-vocabulary speech.
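The Viterbi search over frame-level category scores can be sketched as follows. This is a simplification: only bigram transition probabilities are modeled, and the duration probabilities the paper incorporates are omitted, so the function below is an illustrative decoder rather than the authors' search:

```python
import numpy as np

def viterbi_labels(log_probs, log_bigram):
    """Best label sequence given per-frame log scores (T x K) and
    label-transition log-probabilities (K x K). Duration modeling,
    used in the paper's search, is omitted here (an assumption)."""
    T, K = log_probs.shape
    delta = np.full((T, K), -np.inf)   # best path score ending in each label
    back = np.zeros((T, K), dtype=int) # backpointers
    delta[0] = log_probs[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_bigram  # K x K candidate scores
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_probs[t]
    # trace back the best-scoring sequence
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

The self-transition weights in `log_bigram` play the role of a crude duration prior: larger diagonal entries discourage rapid label switching between adjacent 3 ms frames.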
The segmentation algorithm was trained on just the conversational sentences, while the language classifier used all utterances from each speaker.

4 NEURAL NETWORK SEGMENTATION

4.1 SEGMENTER TRAINING

4.1.1 Training and Test Sets

Five utterances from each of 16 speakers per language were used to train and test the segmenter. The training set had 50 utterances from 10 speakers (5 male and 5 female) from each of the 4 languages, for a total of 200 utterances. The development test set had 10 utterances from a different set of 2 speakers (1 male and 1 female) from each language, for a total of 40 utterances. The final test set had 20 utterances from yet another set of 4 speakers (2 male and 2 female) from each language, for a total of 80 utterances. The average duration of the utterances in the training set was 4.7 secs and that of the test sets was 5.7 secs.

4.1.2 Network Architecture

The segmentation network was a fully-connected, feed-forward network with 304 input units, 18 hidden units and 7 output units. The number of hidden units was determined experimentally. Figure 1 shows the network configuration and the input features.

[Figure 1: Segmentation Network — 7 output units (VOC, FRIC, CLOS, STOP, PRVS, INVS, POVS), 18 hidden units, and 304 input units: 64 DFT coefficients plus 30 samples each of the waveform and spectral parameters.]

4.1.3 Feature Measurements

The feature measurements used to train the network include the 64 DFT coefficients at the frame to be classified and 30 samples each of zc8000, ptp700, ptp8000, sda700, sda8000, sdf, pitch and cm1000, for a total of 304 features. These samples were taken from a 330 ms window centered on the frame, with more samples being taken in the immediate vicinity of the frame than near the ends of the window.

¹Five speakers in Japanese and one in Tamil provided only 10 utterances each.
4.1.4 Hand-labeling

Both the training and test utterances were hand-labeled with the 7 broad phonetic category labels and checked by a second labeler for correctness and consistency.

4.1.5 Coarse Sampling of Frames

As it was not computationally feasible to train on every 3 ms frame in each utterance, only a few frames were chosen at random from each segment. To ensure an approximately equal number of frames from each category, fewer frames were sampled from the more frequent categories such as vowels and closures.

4.1.6 Network Training

The networks were trained using backpropagation with conjugate gradient optimization [1]. Training was continued until the performance of the network on the development test set leveled off.

4.2 SEGMENTER EVALUATION

Segmentation performance was evaluated on the 80-utterance final test set. The segmenter output was compared to the hand-labels for each 3 ms time frame. First choice accuracy was 85.1% across the four languages. When scored on the middle 80% and middle 60% of each segment, the accuracy rose to 86.9% and 88.0% respectively, pointing to the presence of boundary errors.

5 LANGUAGE IDENTIFICATION

5.1 CLASSIFIER TRAINING

5.1.1 Training and Test Sets

The training set contained 12 speakers from each language, with 10 or 20 utterances per speaker, for a total of 930 utterances. The development test set contained a different group of 2 speakers per language with 20 utterances from each speaker, for a total of 160 utterances. The final test set had 6 speakers per language, with 10 or 20 utterances per speaker, for a total of 440 utterances. The average duration of the utterances in the training set was 5.1 seconds and that of the test sets was 5.5 seconds.

5.1.2 Feature Development

Several passes were needed through the iterative process of feature development and network training before a satisfactory feature set was obtained.
Much of the effort was concentrated on statistical and linguistic analysis of the languages, with the objective of determining the distinguishing characteristics among them. For example, the knowledge that Mandarin Chinese was the only monosyllabic tonal language in the set (the other three being stress languages) led us to design features that attempted to capture the large variation in pitch within and across segments for Mandarin Chinese utterances. Similarly, the presence of sequences of equal-length broad category segments in Japanese utterances led us to design an "inter-segment duration difference" feature.

The final set of 80 features is described below. All the features are computed over the entire length of an utterance and use the time-aligned broad category sequence provided by the segmentation algorithm. The numbers in parentheses refer to the number of values generated.

• Intra-segment pitch variation: average of the standard deviations of the pitch within all sonorant segments (VOC, PRVS, INVS, POVS) (4 values)
• Inter-segment pitch variation: standard deviation of the average pitch in all sonorant segments (4 values)
• Frequency of occurrence (number of occurrences per second of speech) of triples of segments.
The following triples were chosen based on statistical analyses of the training data: VOC-INVS-VOC, CLOS-PRVS-VOC, VOC-POVS-CLOS, STOP-VOC-FRIC, STOP-VOC-CLOS, and FRIC-VOC-CLOS (6 values)
• Frequency of occurrence of each of the seven broad phonetic labels (7 values)
• Frequency of occurrence of all segments (number of segments per second) (1 value)
• Frequency of occurrence of all consonants (STOPs and FRICs) (1 value)
• Frequency of occurrence of all sonorants (4 values)
• Ratio of number of sonorant segments to total number of segments (1 value)
• Fraction of the total duration of the utterance devoted to each of the seven broad phonetic labels (7 values)
• Fraction of the total duration of the utterance devoted to all sonorants (1 value)
• Frequency of occurrence of voiced consonants (1 value)
• Ratio of voiced consonants to total number of consonants (1 value)
• Average duration of the seven broad phonetic labels (7 values)
• Standard deviation of the duration of the seven broad phonetic labels (7 values)
• Segment-pair ratios: conditional probability of occurrence of selected pairs of segments. The segment-pairs were selected based on histogram plots generated on the training set. Examples of selected pairs: POVS-FRIC, VOC-FRIC, INVS-VOC, etc. (27 values)
• Inter-segment duration difference: average absolute difference in durations between successive segments (1 value)
• Standard deviation of the inter-segment duration differences (1 value)
• Average distance between the centers of successive vowels (1 value)
• Standard deviation of the distances between centers of successive vowels (1 value)

5.2 LANGUAGE IDENTIFICATION PERFORMANCE

5.2.1 Single Utterances

During the feature development phase, the 2 speakers-per-language development test set was used. The classifier performed at an accuracy of 90.0% on this small test set.
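Two of the duration-based features in the list above can be sketched directly from a time-aligned segment sequence. The (start, end, label) layout is an assumption; times are in seconds:

```python
def duration_features(segments):
    """Two of the prosodic features listed above: segments per second
    of speech, and the inter-segment duration difference (average
    absolute difference in durations between successive segments)."""
    durs = [end - start for start, end, _ in segments]
    total = segments[-1][1] - segments[0][0]     # utterance span
    seg_rate = len(segments) / total             # segments per second
    diffs = [abs(a - b) for a, b in zip(durs, durs[1:])]
    mean_diff = sum(diffs) / len(diffs)          # inter-segment duration difference
    return seg_rate, mean_diff
```

A small `mean_diff` signals runs of roughly equal-length segments, which is exactly the property of Japanese utterances that motivated this feature.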
For final evaluation, the development test set was combined with the original training set to form a 14 speakers-per-language training set. The performance of the classifier on the 6 speakers-per-language final test set was 79.6%. The individual language performances were English 75.8%, Japanese 77.0%, Mandarin Chinese 78.3%, and Tamil 88.0%. This result was obtained with training and test set utterances that were approximately 5.4 seconds long on the average.

5.2.2 Concatenated Utterances

To observe the effect of training and testing with longer durations of speech per utterance, a series of experiments was conducted in which pairs and triples of utterances from each speaker were concatenated end-to-end (with 350 ms of silence in between to simulate natural pauses) in both the training and test sets. Note that the total duration of speech used in training and testing remained unchanged for all these experiments. Table 1 summarizes the performance of the classifier when trained and tested on different durations of speech per utterance.

Table 1: Percentage Accuracy on Varying Durations of Speech Per Utterance

    Avge. Duration of        Avge. Duration of Test Utts. (sec)
    Training Utts. (sec)      5.7      11.4     17.1
    5.3                       79.6     73.6     71.2
    10.6                      71.8     86.8     85.0
    15.2                      67.9     85.5     89.5

The rows of the table show the effect of testing on progressively longer utterances for a given training set, while the columns show the effect of training on progressively longer utterances for a given test set. Not surprisingly, the best performance is obtained when the classifier is trained and tested on three utterances concatenated together.

6 DISCUSSION

The results indicate that the system performs better on longer utterances. This is to be expected given the feature set, since the segment-based statistical features tend to be more reliable with a larger number of segments.
Also, it is interesting to note that we have obtained an accuracy of 89.5% without using any spectral information in the classifier feature set. All of the features are based on the broad phonetic category segment sequences provided by the segmenter.

It should be noted that approximately 15% of the utterances in the training and test sets consisted of a fixed vocabulary: the days of the week, the months of the year and the numbers zero through ten. It is likely that the inclusion of these utterances inflated classification performance. Nevertheless, we are encouraged by the 10.5% error rate, given the small number of speakers and utterances used to train the system.

Acknowledgements

This research was supported in part by NSF grant No. IRI-9003110, a grant from Apple Computer, Inc., and by a grant from DARPA to the Department of Computer Science & Engineering at the Oregon Graduate Institute. We thank Mark Fanty for his many useful comments.

References

[1] E. Barnard and R. A. Cole. A neural-net training program based on conjugate-gradient optimization. Technical Report CSE 89-014, Department of Computer Science, Oregon Graduate Institute of Science and Technology, 1989.

[2] E. Barnard, R. A. Cole, M. P. Vea, and F. A. Alleva. Pitch detection with a neural-net classifier. IEEE Transactions on Signal Processing, 39(2):298-307, February 1991.

[3] Y. K. Muthusamy, R. A. Cole, and M. Gopalakrishnan. A segment-based approach to automatic language identification. In Proceedings 1991 IEEE International Conference on Acoustics, Speech, and Signal Processing, Toronto, Canada, May 1991.

PART V TEMPORAL SEQUENCES
A Topographic Product for the Optimization of Self-Organizing Feature Maps

Hans-Ulrich Bauer, Klaus Pawelzik, Theo Geisel
Institut für theoretische Physik and SFB Nichtlineare Dynamik
Universität Frankfurt
Robert-Mayer-Str. 8-10
W-6000 Frankfurt 11, Fed. Rep. of Germany
email: bauer@asgard.physik.uni-frankfurt.dbp

Abstract

Optimizing the performance of self-organizing feature maps like the Kohonen map involves the choice of the output space topology. We present a topographic product which measures the preservation of neighborhood relations as a criterion to optimize the output space topology of the map with regard to the global dimensionality DA as well as to the dimensions in the individual directions. We test the topographic product method not only on synthetic mapping examples, but also on speech data. In the latter application our method suggests an output space dimensionality of DA = 3, in coincidence with recent recognition results on the same data set.

1 INTRODUCTION

Self-organizing feature maps like the Kohonen map (Kohonen, 1989; Ritter et al., 1990) not only provide a plausible explanation for the formation of maps in brains, e.g. in the visual system (Obermayer et al., 1990), but have also been applied to problems like vector quantization or robot arm control (Martinetz et al., 1990). The underlying organizing principle is the preservation of neighborhood relations. For this principle to lead to a most useful map, the topological structure of the output space must roughly fit the structure of the input data. However, in technical applications this structure is often not a priori known. For this reason several attempts have been made to modify the Kohonen algorithm such that not only the weights, but also the output space topology itself is adapted during learning (Kangas et al., 1990; Martinetz et al., 1991).
Our contribution is also concerned with optimal output space topologies, but we follow a different approach, which avoids a possibly complicated structure of the output space. First we describe a quantitative measure for the preservation of neighborhood relations in maps, the topographic product P. The topographic product had been invented under the name of "wavering product" in nonlinear dynamics in order to optimize the embeddings of chaotic attractors (Liebert et al., 1991). P = 0 indicates a perfect match of the topologies. P < 0 (P > 0) indicates a folding of the output space into the input space (or vice versa), which can be caused by a too small (resp. too large) output space dimensionality. The topographic product can be computed for any self-organizing feature map, without regard to its specific learning rule. Since judging the degree of twisting and folding by visually inspecting a plot of the map is the only other way of "measuring" the preservation of neighborhoods, the topographic product is particularly helpful if the input space dimensionality of the map exceeds DA = 3 and the map can no longer be visualized. Therefore the derivation of the topographic product is already of value by itself.

In the second part of the paper we demonstrate the use of the topographic product by two examples. The first example deals with maps from a 2D input space with nonflat stimulus distribution onto rectangles of different aspect ratios; the second example deals with the map of 19D speech data onto output spaces of different dimensionality. In both cases we show how the output space topology can be optimized using our method.

2 DERIVATION OF THE TOPOGRAPHIC PRODUCT

2.1 KOHONEN ALGORITHM

In order to introduce the notation necessary to derive the topographic product, we very briefly recall the Kohonen algorithm. It describes a map from an input space V into an output space A. Each node j in A has a weight vector w_j associated with it, which points into V.
A stimulus v is mapped onto that node i in the output space which minimizes the input space distance d^V(w_j, v):

    i :  d^V(w_i, v) = min_j d^V(w_j, v)                                (1)

During a learning step, a random stimulus is chosen in the input space and mapped onto an output node i according to Eq. (1). Then all weights w_j are shifted towards v, with the amount of shift for each weight vector being determined by a neighborhood function h_{i,j}:

    Δw_j = ε h_{i,j} (v − w_j)                                          (2)

(d^A(j, i) measures distances in the output space.) h_{i,j} effectively restricts the nodes participating in the learning step to nodes in the vicinity of i. A typical choice for the neighborhood function is

    h_{i,j} = exp(−d^A(j, i)² / (2σ²))                                  (3)

In this way the neighborhood relations in the output space are enforced in the input space, and the output space topology becomes of crucial importance. Finally it should be mentioned that the learning step size ε as well as the width σ of the neighborhood function are decreased during learning for the algorithm to converge to an equilibrium state. A typical choice is an exponential decrease. For a detailed discussion of the convergence properties of the algorithm, see (Ritter et al., 1988).

2.2 TOPOGRAPHIC PRODUCT

After the learning phase, the topographic product is computed as follows. For each output space node j, the nearest neighbor ordering in input space and output space is computed (n^A_k(j) denotes the k-th nearest neighbor of j in A, n^V_k(j) the one in V). Using these quantities, we define the ratios

    Q1(j, k) = d^V(w_j, w_{n^A_k(j)}) / d^V(w_j, w_{n^V_k(j)})          (4)

    Q2(j, k) = d^A(j, n^A_k(j)) / d^A(j, n^V_k(j))                      (5)

One has Q1(j, k) = Q2(j, k) = 1 only if the k-th nearest neighbors in V and A coincide. Any deviations of the nearest neighbor ordering will result in values for Q1,2 deviating from 1. However, not all differences in the nearest neighbor orderings in V and A are necessarily induced by neighborhood violations.
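A single learning step of the Kohonen algorithm, Eqs. (1)-(3), can be sketched as below. The Euclidean choice for both distance measures and the in-place update are assumptions of this sketch:

```python
import numpy as np

def kohonen_step(weights, grid, v, eps=0.1, sigma=1.0):
    """One Kohonen learning step: find the best-matching node for
    stimulus v, then shift all weights toward v, scaled by a Gaussian
    neighborhood on the fixed output grid. `weights` is (N, d_in);
    `grid` is (N, d_out) output-space coordinates."""
    # Eq. (1): winner minimizes the input-space distance to v
    i = int(np.argmin(np.linalg.norm(weights - v, axis=1)))
    # Eq. (3): Gaussian neighborhood in output-space distance
    dA = np.linalg.norm(grid - grid[i], axis=1)
    h = np.exp(-dA**2 / (2 * sigma**2))
    # Eq. (2): shift every weight toward the stimulus
    weights += eps * h[:, None] * (v - weights)
    return weights, i
```

Decreasing `eps` and `sigma` over successive calls, e.g. exponentially, gives the annealing schedule the text describes.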
Some can be due to locally varying magnification factors of the map, which in turn are induced by spatially varying stimulus densities in V. To cancel out the latter effects, we define the products

    P1(j, k) = ( ∏_{l=1}^{k} Q1(j, l) )^{1/k}                           (6)

    P2(j, k) = ( ∏_{l=1}^{k} Q2(j, l) )^{1/k}                           (7)

For these the relations P1(j, k) ≥ 1 and P2(j, k) ≤ 1 hold. Large deviations of P1 (resp. P2) from the value 1 indicate neighborhood violations when looking from the output space into the input space (resp. from the input space into the output space). In order to get a symmetric overall measure, we further multiply P1 and P2 and find

    P3(j, k) = ( ∏_{l=1}^{k} Q1(j, l) Q2(j, l) )^{1/(2k)}               (8)

Further averaging over all nodes and neighborhood orders finally yields the topographic product

    P = 1/(N(N−1)) Σ_{j=1}^{N} Σ_{k=1}^{N−1} log(P3(j, k))              (9)

The possible values for P are to be interpreted as follows:

    P < 0: output space dimension DA too low,
    P ≈ 0: output space dimension DA o.k.,
    P > 0: output space dimension DA too high.

These formulas suffice to understand how the product is to be computed. A more detailed explanation of the rationale behind each individual step of the derivation can be found in a forthcoming publication (Bauer et al., 1991).

3 EXAMPLES

We conclude the paper with two examples which exemplify how the method works.

3.1 ILLUSTRATIVE EXAMPLE

The first example deals with the mapping from a 2D input space onto rectangles of different aspect ratios. The stimulus distribution is flat in one direction, Gaussian shaped in the other (Fig. 1a). The example demonstrates two aspects of our method at once. First it shows that the method works fine with maps resulting from nonflat stimulus distributions. These induce spatially varying areal magnification factors of the map, which in turn lead to twists in the neighborhood ordering between input space and output space. Compensation for such twists was the purpose of the multiplication in Eqs. (6) and (7).
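The full chain Eqs. (4)-(9) can be transcribed compactly. The sketch below uses Euclidean distances and breaks nearest-neighbor ties arbitrarily (both assumptions not pinned down by the text):

```python
import numpy as np

def topographic_product(weights, grid):
    """Topographic product P of Eq. (9), from learned weights
    (N x d_in) and fixed output-grid coordinates (N x d_out)."""
    N = len(weights)
    dV = np.linalg.norm(weights[:, None] - weights[None], axis=-1)
    dA = np.linalg.norm(grid[:, None] - grid[None], axis=-1)
    P = 0.0
    for j in range(N):
        nA = np.argsort(dA[j])[1:]  # k-th nearest neighbors of j in A
        nV = np.argsort(dV[j])[1:]  # ... and in V (self excluded)
        q1 = dV[j, nA] / dV[j, nV]  # Eq. (4), all k at once
        q2 = dA[j, nA] / dA[j, nV]  # Eq. (5)
        # log P3(j, k) = (1 / 2k) * sum_{l<=k} log(q1 * q2), Eq. (8)
        k = np.arange(1, N)
        logp3 = np.cumsum(np.log(q1 * q2)) / (2 * k)
        P += logp3.sum()
    return P / (N * (N - 1))
```

When input and output embeddings induce identical neighbor orderings, every ratio is 1 and P vanishes, matching the "perfect topology match" reading of P = 0.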
Table 1: Topographic product P for the map from a square input space with a Gaussian stimulus distribution in one direction, onto rectangles with different aspect ratios. The values for P are averaged over 8 networks each. The 43x6 output space matches the input data best, since its topographic product is smallest in magnitude.

    N        aspect ratio    P
    256x1    256             -0.04400
    128x2    64              -0.03099
    64x4     16              -0.00721
    43x6     7.17             0.00127
    32x8     4                0.00224
    21x12    1.75             0.01335
    16x16    1                0.02666

[Figure 1: Self-organizing feature maps of a Gaussian shaped 2-dimensional stimulus distribution (a) onto output spaces with 128 x 2 (b), 43 x 6 (c) and 16 x 16 (d) output nodes. The 43 x 6 output space preserves neighborhood relations best.]

Secondly, the method can be used to optimize not only the overall output space dimensionality, but also the individual dimensions in the different directions (i.e. the different aspect ratios). If the rectangles are too long, the resulting map is folded like a Peano curve (Fig. 1b), and neighborhood relations are severely violated perpendicular to the long side of the rectangle. If the aspect ratio fits, the map has a regular look (Fig. 1c), and neighborhoods are preserved. The zig-zag form at the outer boundary of the rectangle does not correspond to neighborhood violations. If the rectangle approaches a square, the output space is somewhat squashed into the input space, again violating neighborhood relations (Fig. 1d). The topographic product P coincides with this intuitive evaluation (Tab. 1) and picks the 43 x 6 net as the most neighborhood preserving one.
3.2 APPLICATION EXAMPLE

In our second example, speech data is mapped onto output spaces of various dimensionality. The data represent utterances of the ten German digits, given as 19-dimensional acoustical feature vectors (Gramß et al., 1990). The P-values for the different maps are given in Tab. 2. For both the speaker-dependent and the speaker-independent case the method distinguishes the maps with DA = 3 as most neighborhood preserving.

Several points are interesting about these results. First of all, the suggested output space dimensionality exceeds the widely used DA = 2. Secondly, the method does not generally judge larger output space dimensions as more neighborhood preserving, but puts an upper bound on DA. The data seems to occupy a submanifold of the input space which is distinctly lower than four dimensional. Furthermore we see that the transition from one to several speakers does not change the value of DA which is optimal under neighborhood considerations. This contradicts the expectation that the additional interspeaker variance in the data occupies a full additional dimension.

Table 2: Topographic product P for maps from speech feature vectors in a 19D input space onto output spaces of different dimensionality DA.

    DA    N          P (speaker-dependent)    P (speaker-independent)
    1     256        -0.156                   -0.229
    2     16x16      -0.028                   -0.036
    3     7x6x6       0.019                    0.007
    4     4x4x4x4     0.037                    0.034

What do these results mean for speech recognition? Let us suppose that several utterances of the same word lead to closeby feature vector sequences in the input space. If the mapping were not neighborhood preserving, one should expect the trajectories in the output space to be separated considerably. If a speech recognition system compares these output space trajectories with reference trajectories corresponding to reference utterances of the words, the probability of misclassification rises.
So one should expect that a word recognition system with a Kohonen-map preprocessor and a subsequent trajectory classifier should perform better if the neighborhoods in the map are preserved. The results of a recent speech recognition experiment coincide with these heuristic expectations (Brandt et al., 1991). The experiment was based on the same data set, and made use of a Kohonen feature map as a preprocessor and of a dynamic time-warping algorithm as a sequence classifier. The recognition performance of this hybrid system turned out to be better by about 7% for a 3D map, compared to a 2D map with a comparable number of nodes (0.795 vs. 0.725 recognition rate).

Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft through SFB 185 "Nichtlineare Dynamik", TP A10.

References

H.-U. Bauer, K. Pawelzik, Quantifying the Neighborhood Preservation of Self-Organizing Feature Maps, submitted to IEEE TNN (1991).

W. D. Brandt, H. Behme, H. W. Strube, Bildung von Merkmalen zur Spracherkennung mittels Phonotopischer Karten, Fortschritte der Akustik - Proc. of DAGA 91 (DPG GmbH, Bad Honnef), 1057 (1991).

T. Gramß, H. W. Strube, Recognition of Isolated Words Based on Psychoacoustics and Neurobiology, Speech Comm. 9, 35 (1990).

J. A. Kangas, T. K. Kohonen, J. T. Laaksonen, Variants of Self-Organizing Maps, IEEE Trans. Neur. Net. 1, 93 (1990).

T. Kohonen, Self-Organization and Associative Memory, 3rd Ed., Springer (1989).

W. Liebert, K. Pawelzik, H. G. Schuster, Optimal Embeddings of Chaotic Attractors from Topological Considerations, Europhysics Lett. 14, 521 (1991).

T. Martinetz, H. Ritter, K. Schulten, Three-Dimensional Neural Net for Learning Visuomotor Coordination of a Robot Arm, IEEE Trans. Neur. Net. 1, 131 (1990).

T. Martinetz, K. Schulten, A "Neural-Gas" Network Learns Topologies, Proc. ICANN 91 Helsinki, ed. Kohonen et al., North-Holland, I-397 (1991).

K. Obermayer, H. Ritter, K.
Schulten, A Principle for the Formation of the Spatial Structure of Cortical Feature Maps, Proc. Nat. Acad. Sci. USA 87, 8345 (1990).

H. Ritter, K. Schulten, Convergence Properties of Kohonen's Topology Conserving Maps: Fluctuations, Stability and Dimension Selection, Biol. Cyb. 60, 59-71 (1988).

H. Ritter, T. Martinetz, K. Schulten, Neuronale Netze, Addison-Wesley (1990).

PART XIV PERFORMANCE COMPARISONS
Learning How To Teach or Selecting Minimal Surface Data

Davi Geiger
Siemens Corporate Research, Inc.
755 College Rd. East
Princeton, NJ 08540 USA

Ricardo A. Marques Pereira
Dipartimento di Informatica
Universita di Trento
Via Inama 7, Trento, TN 38100 ITALY

Abstract

Learning a map from an input set to an output set is similar to the problem of reconstructing hypersurfaces from sparse data (Poggio and Girosi, 1990). In this framework, we discuss the problem of automatically selecting "minimal" surface data. The objective is to be able to approximately reconstruct the surface from the selected sparse data. We show that this problem is equivalent to the one of compressing information by data removal and the one of learning how to teach. Our key step is to introduce a process that statistically selects the data according to the model. During the process of data selection (learning how to teach) our system (teacher) is capable of predicting the new surface, the approximated one provided by the selected data. We concentrate on piecewise smooth surfaces, e.g. images, and use mean field techniques to obtain a deterministic network that is shown to compress image data.

1 Learning and surface reconstruction

Given dense input data that represents a hypersurface, how could we automatically select very few data points so as to be able to use these fewer data points (sparse data) to approximately reconstruct the hypersurface? We will be using the term surface to refer to hypersurface (a surface in multiple dimensions) throughout the paper. It has been shown (Poggio and Girosi, 1990) that the problem of reconstructing a surface from sparse and noisy data is equivalent to the problem of learning from examples. For instance, to learn how to add numbers can be cast as finding the map from X = {pairs of numbers} to F = {sums} from a set of noisy examples.
The surface is $F(X)$ and the sparse and noisy data are the set of $N$ examples $\{(x_i, d_i)\}$, where $i = 0, 1, \dots, N-1$ and $x_i = (a_i, b_i) \in X$, such that $a_i + b_i = d_i + \eta_i$ ($\eta_i$ being the noise term). Some a priori information about the surface, e.g. smoothness, is necessary for reconstruction. Consider a set of $N$ input-output examples, $\{(x_i, d_i)\}$, and a form $\|Pf\|^2$ for the cost of the deviation of $f$, the approximated surface, from smoothness. $P$ is a differential operator and $\|\cdot\|$ is a norm (usually $L^2$). To find the surface $f$ that best fits (i) the data and (ii) the smoothness criterion is to solve the problem of minimizing the functional

$$V(f) = \sum_{i=0}^{N-1} (d_i - f(x_i))^2 + \mu \|Pf\|^2$$

Different methods of minimizing this functional yield different types of network. In particular, using Green's functions gives supervised backprop-type networks (Poggio and Girosi, 1990), while using optimization techniques (like gradient descent) we obtain unsupervised (with feedback) networks.

2 Learning how to teach arithmetic operations

The problem of learning how to add and multiply is a simple one and yet provides insight into our approach of selecting the minimum set of examples. Learning arithmetic operations The surface given by the addition of two numbers, namely $f(x, y) = x + y$, is a plane passing through the origin. The multiplication surface, $f(x, y) = xy$, is hyperbolic. The a priori knowledge of the addition and multiplication surfaces can be expressed as a minimum of the functional

$$V(f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \| \nabla^2 f(x, y) \|^2 \, dx \, dy, \qquad \nabla^2 f(x, y) = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) f(x, y)$$

Other functions also minimize $V(f)$, like $f(x, y) = x^2 - y^2$, and so a few examples are necessary to learn how to add and multiply given the above prior knowledge. If the prior assumption considers a larger class of basis functions, then more examples will be required.
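The functional above can be minimized directly on a grid. The following sketch (the function name and the 1-D discretization are my own, not from the paper) stands in for the operator $P$ with a second-difference matrix and solves the resulting normal equations:

```python
import numpy as np

def fit_regularized(x_grid, xi_idx, d, mu):
    """Fit f on a 1-D grid by minimizing sum_i (d_i - f(x_i))^2 + mu * ||D2 f||^2.

    xi_idx: indices of the grid points where noisy samples d are given.
    A second-difference matrix D2 stands in for the differential operator P.
    """
    n = len(x_grid)
    # Data-fidelity term: selector matrix picking out the sampled grid points.
    S = np.zeros((len(xi_idx), n))
    S[np.arange(len(xi_idx)), xi_idx] = 1.0
    # Discrete second-derivative operator (rows of [1, -2, 1]).
    D2 = np.zeros((n - 2, n))
    for k in range(n - 2):
        D2[k, k:k + 3] = [1.0, -2.0, 1.0]
    # Normal equations of the quadratic functional.
    A = S.T @ S + mu * D2.T @ D2
    b = S.T @ d
    return np.linalg.solve(A, b)
```

Because the penalty's null space contains linear functions, a handful of exact samples of a plane are enough for the smoothness prior to fill in the rest, which is exactly the point made about the addition surface.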
Given $p$ input-output examples, $\{(x_i, y_i); d_i\}$, the learning problem of adding and multiplying can be cast as the optimization of

$$V(f) = \sum_{i=0}^{p-1} (f(x_i, y_i) - d_i)^2 + \mu \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \| \nabla^2 f(x, y) \|^2 \, dx \, dy$$

We now consider the problem of selecting the examples from the full surface data. A sparse process for selecting data Let us assume that the full set of data is given on a 2-dimensional lattice, so that we have a finite amount of data ($N^2$ data points), with the input-output set being $\{(x_i, y_j); d_{ij}\}$, where $i, j = 0, 1, \dots, N-1$. To select $p$ examples we introduce a sparse process $s$ that selects out data by modifying the cost function according to

$$V = \sum_{i,j} (1 - s_{ij})(f(x_i, y_j) - d_{ij})^2 + \mu \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \| \nabla^2 f(x, y) \|^2 \, dx \, dy + \lambda \Big( p - \sum_{i,j} (1 - s_{ij}) \Big)^2$$

where $s_{ij} = 1$ selects out the data, and we have added the last term to ensure that $p$ examples are selected. The data term forces noisy data to be thrown out first; the second-order smoothness of $f$ reduces the need for many examples ($p \approx 10$) to learn these arithmetic operations. Learning $s$ is equivalent to learning how to select the examples, or learning how to teach. The system (teacher) has to learn a set of examples (sparse data) that contains all the "relevant" information. The redundant information can be "filled in" by the prior knowledge. Once the teacher has learned these selected examples, he, she or it (a machine) presents them to the student, who, with the a priori knowledge about surfaces, is able to approximately learn the full input-output map (surface). 3 Teaching piecewise smooth surfaces We first briefly introduce the weak membrane model, a coupled Markov random field for modeling piecewise smooth surfaces. Then we lay down the framework for learning to teach this surface.
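The idea of keeping only the examples a smooth prior cannot fill in can be illustrated with a crude greedy stand-in for optimizing the sparse process (the function, the piecewise-linear "student", and the 1-D setting are all illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def select_examples(x, d, p):
    """Greedy sketch of learning-to-teach on dense 1-D data (x, d).

    Repeatedly reconstruct the surface from the currently selected examples
    (here with a piecewise-linear interpolant as a cheap smooth prior), then
    add the sample with the largest residual, until p examples are kept.
    Returns the indices of the selected examples in the original arrays.
    """
    order = np.argsort(x)
    x, d = x[order], d[order]
    selected = [0, len(x) - 1]            # always keep the endpoints
    while len(selected) < p:
        sel = sorted(selected)
        f = np.interp(x, x[sel], d[sel])  # the student's reconstruction
        resid = np.abs(d - f)
        resid[sel] = -1.0                 # never reselect a kept sample
        selected.append(int(np.argmax(resid)))
    return order[sorted(selected)]
```

On data with a single kink, the greedy rule immediately picks the kink, matching the intuition that discontinuity-like points carry the relevant information while smooth stretches are redundant.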
3.1 Weak membrane model Within the Bayes approach, the a priori knowledge that surfaces are smooth (first-order smoothness) except at discontinuities has been analyzed by (Geman and Geman, 1984), (Blake and Zisserman, 1987), (Mumford and Shah, 1985) and (Geiger and Girosi, 1991). If we consider the noise to be white Gaussian, the final posterior probability becomes $P(f, l \mid g) = \frac{1}{Z} e^{-\beta V(f,l)}$, where

$$V(f, l) = \sum_{i,j} \left[ (f_{ij} - g_{ij})^2 + \mu \|\nabla f\|^2_{ij} (1 - l_{ij}) + \gamma_{ij} l_{ij} \right] \quad (1)$$

We represent surfaces by $f_{ij}$ at pixel $(i, j)$ and discontinuities by $l_{ij}$. The input data is $g_{ij}$, and $\|\nabla f\|_{ij}$ is the norm of the gradient at pixel $(i, j)$. $Z$ is a normalization constant, known as the partition function. $\beta$ is a global parameter of the model inspired by thermodynamics, and $\mu$ and $\gamma_{ij}$ are parameters to be estimated. This model, when used for image segmentation, has been shown to give a good pattern of discontinuities and to eliminate the noise, suggesting that the piecewise-smooth assumption is valid for images. 3.2 Redundant data We have assumed the surface to be smooth, and therefore there is redundant information within smooth regions. We then propose a model that selects the "relevant" information according to two criteria: 1. Discontinuity data: Discontinuities usually capture relevant information, and it is possible to roughly approximate surfaces using edge data alone (see Geiger and Pereira, 1990). A limitation of using only edge data is that an oversmoothed surface is represented. 2. Texture data: Data points that have significant gradients (not large enough to be a discontinuity) are here considered texture data. Keeping texture data allows us to distinguish between flat surfaces, such as a clean sky in an image, and textured surfaces, such as the leaves of a tree (see figure 2).
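The weak membrane energy (1) is easy to evaluate on a grid. A minimal sketch (the split into horizontal and vertical line processes breaking neighbour bonds is a common discretization choice, assumed here rather than taken from the paper):

```python
import numpy as np

def weak_membrane_energy(f, g, l_h, l_v, mu, gamma):
    """Energy of the weak membrane model (first-order smoothness).

    f, g : 2-D arrays, reconstructed surface and noisy data.
    l_h, l_v : binary line processes breaking horizontal / vertical
               neighbour bonds (shapes (rows, cols-1) and (rows-1, cols)).
    """
    data = np.sum((f - g) ** 2)
    dh = (f[:, 1:] - f[:, :-1]) ** 2      # horizontal gradient terms
    dv = (f[1:, :] - f[:-1, :]) ** 2      # vertical gradient terms
    smooth = mu * (np.sum(dh * (1 - l_h)) + np.sum(dv * (1 - l_v)))
    penalty = gamma * (np.sum(l_h) + np.sum(l_v))
    return data + smooth + penalty
```

On a step image, switching the line process on along the step trades the smoothness cost $\mu$ per broken bond for the penalty $\gamma$, so discontinuities are marked exactly where $\gamma < \mu \|\nabla f\|^2$, which is the model's intended behaviour.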
3.3 The sparse process Again, our proposal is first to extend the weak membrane model by including an additional binary field, the sparse process $s$, that is 1 when data is selected out and 0 otherwise. There are natural connections between the process $s$ and robust statistics (Huber, 1981), as discussed in (Geiger and Yuille, 1990) and (Geiger and Pereira, 1991). We modify (1) by considering (see also Geiger and Pereira, 1990)

$$V(f, l, s) = \sum_{i,j} \left[ (1 - s_{ij})(f_{ij} - g_{ij})^2 + \mu \|\nabla f\|^2_{ij} (1 - l_{ij}) + \eta_{ij} s_{ij} + \gamma_{ij} l_{ij} \right] \quad (2)$$

where we have introduced the term $\eta_{ij} s_{ij}$ to keep some data; otherwise $s_{ij} = 1$ everywhere. If the data term is too large, the process $s = 1$ can suppress it. We will now assume that the data is noise-free, or that the noise has already been smoothed out. We then want to find which data points ($s = 0$) must be kept to reconstruct $f$. 3.4 Mean field equations and unsupervised networks To impose the discontinuity-data constraint we use the hard constraint technique (Geiger and Yuille, 1990 and its references). We do not allow states that throw out data ($s_{ij} = 1$) at an edge location ($l_{ij} = 1$). More precisely, within the statistical framework we reduce the possible states for the processes $s$ and $l$ to those with $s_{ij} l_{ij} = 0$, therefore excluding the state $(s_{ij} = 1, l_{ij} = 1)$. Applying the saddle point approximation, a well known mean field technique (Geiger and Girosi, 1989 and its references), on the field $f$, we can compute the partition function

$$Z = \sum_{f \in \{0,\dots,255\}^{N^2}} \; \sum_{\substack{s, l \in \{0,1\}^{N^2} \\ sl = 0}} e^{-\beta V(f,l,s)} \;\approx\; \sum_{\substack{s, l \\ sl = 0}} e^{-\beta V(\bar f,l,s)} \;\approx\; \prod_{ij} Z_{ij}$$

$$Z_{ij} = e^{-\beta[\gamma_{ij} + (\bar f_{ij} - g_{ij})^2]} + e^{-\beta[\mu \|\nabla \bar f\|^2_{ij} + \eta_{ij}]} + e^{-\beta[\mu \|\nabla \bar f\|^2_{ij} + (\bar f_{ij} - g_{ij})^2]} \quad (3)$$

where $\bar f$ maximizes $Z$.
After applying mean field techniques we obtain the following equations for the processes $\bar l$ and $\bar s$, which are the Boltzmann probabilities of the corresponding states in $Z_{ij}$:

$$\bar l_{ij} = \frac{e^{-\beta[\gamma_{ij} + (\bar f_{ij} - g_{ij})^2]}}{Z_{ij}}, \qquad \bar s_{ij} = \frac{e^{-\beta[\mu \|\nabla \bar f\|^2_{ij} + \eta_{ij}]}}{Z_{ij}} \quad (4)$$

and, using the definition $\|\nabla f\|^2_{ij} = (f_{i,j+1} - f_{i+1,j})^2 + (f_{i+1,j+1} - f_{i,j})^2$, the mean field self-consistent equation (Geiger and Pereira, 1991) becomes

$$(1 - \bar s_{ij})(\bar f_{ij} - g_{ij}) = -\mu \{ K_{ij}(1 - \bar l_{ij}) + K_{i-1,j-1}(1 - \bar l_{i-1,j-1}) + M_{i-1,j}(1 - \bar l_{i-1,j}) + M_{i,j-1}(1 - \bar l_{i,j-1}) \} \quad (5)$$

where $K_{ij} = (\bar f_{i+1,j+1} - \bar f_{i,j})$ and $M_{ij} = (\bar f_{i+1,j} - \bar f_{i,j+1})$. The set of coupled equations (5) and (4) can be mapped to an unsupervised network, which we call a minimal surface representation network (MSRN), and can be solved efficiently on a massively parallel machine. Notice that $\bar s_{ij} + \bar l_{ij} \le 1$, because of the hard constraint, and in the limit $\beta \to \infty$ the processes $s$ and $l$ become either 0 or 1. In order to throw away redundant (smooth) data while keeping some of the texture, we adapt the cost $\eta_{ij}$ according to the gradient of the surface (6), where $(\Delta_i g)^2 = (g_{i+1,j} - g_{i-1,j})^2$ and $(\Delta_j g)^2 = (g_{i,j+1} - g_{i,j-1})^2$. The smoother the data, the lower the cost to discard it ($s_{ij} = 1$). In the limit $\eta \to 0$ only edge data ($l_{ij} = 1$) is kept, since from (4) $\lim_{\eta \to 0} \bar s_{ij} = 1 - \bar l_{ij}$. 3.5 Learning how to teach and the approximated surface With the mean field equations we compute the approximated surface $\bar f$ simultaneously with $\bar s$ and $\bar l$. Thus, while learning the process $s$ (the selected data) the system also predicts the approximated surface $\bar f$ that the student will learn from the selected examples. By changing the parameters, say $\mu$ and $\eta$, the teacher can choose the optimal parameters so as to select less data while preserving the quality of the approximated surface. Once $s$ has been learned, the system feeds only the selected data points to the learner machinery. We actually relax this condition and feed the learner with the selected data and the corresponding discontinuity map ($l$). Notice that in the limit $\eta \to 0$ the selected data points coincide with the discontinuities ($l = 1$).
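The per-pixel bookkeeping behind these mean field equations can be sketched directly: under the hard constraint $s_{ij} l_{ij} = 0$ only three joint states survive, and their Boltzmann weights give the mean field values of $s$ and $l$. The function below is an illustrative scalar per-pixel sketch (the interface and names are mine, not the paper's network):

```python
import numpy as np

def mean_field_sl(f_grad2, resid2, eta, gamma, mu, beta):
    """Mean-field probabilities of the sparse and line processes at one pixel.

    Only three joint states are allowed by the hard constraint s*l = 0:
      (l=1, s=0): cost gamma + resid2
      (s=1, l=0): cost mu*f_grad2 + eta
      (s=0, l=0): cost mu*f_grad2 + resid2
    where resid2 = (f - g)^2 and f_grad2 = ||grad f||^2 at the pixel.
    """
    costs = np.array([gamma + resid2,
                      mu * f_grad2 + eta,
                      mu * f_grad2 + resid2])
    w = np.exp(-beta * (costs - costs.min()))   # stabilised Boltzmann weights
    w /= w.sum()
    l_bar, s_bar = w[0], w[1]
    return s_bar, l_bar
```

At low temperature (large $\beta$) a pixel with a large gradient and small discontinuity penalty is confidently labelled an edge ($\bar l \to 1$, $\bar s \to 0$), matching the behaviour of the deterministic network described in the text.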
4 Results: Image compression We show the results of the algorithm that learns the minimal representation of images. The algorithm is capable of image compression, and one advantage over the cosine transform (the traditional method) is that it does not have the problem of breaking the images into blocks. However, a more careful comparison is needed. 4.1 Learning s, f, and l To analyze the quality of the surface approximation, we show in figure 1 the performance of the network as we vary the threshold $\eta$. We first show a face image and the line process, and then the predicted approximated surfaces together with the corresponding sparse process $s$. 4.2 Reconstruction, generalization or "the student performance" We can now test how the student learns from the selected examples, i.e. how good the surface reconstruction from the selected data is. We reconstruct the approximate surfaces by running (5) again, but with the selected surface data points ($s_{ij} = 0$) and the discontinuities ($l_{ij} = 1$) given from the previous step. We show in figure 2f that we indeed obtain the predicted surfaces (the student has learned).

References:
E. B. Baum and Y. Lyuu. 1991. The transition to perfect generalization in perceptrons. Neural Computation, vol. 3, no. 3, pp. 386-401.
A. Blake and A. Zisserman. 1987. Visual Reconstruction. MIT Press, Cambridge, Mass.
D. Geiger and F. Girosi. 1989. Coupled Markov random fields and mean field theory. Advances in Neural Information Processing Systems 2, D. Touretzky (ed.), Morgan Kaufmann.
D. Geiger and A. Yuille. 1991. A common framework for image segmentation. Int. Jour. Comp. Vis., vol. 6:3, pp. 227-243.
D. Geiger and F. Girosi. 1991. Parallel and deterministic algorithms for MRFs: surface reconstruction. PAMI, May 1991, vol. PAMI-13, no. 5, pp. 401-412.
D. Geiger and R. M. Pereira. 1991. The outlier process. IEEE Workshop on Neural Networks for Signal Processing, Princeton, NJ.
S. Geman and D. Geman. 1984.
Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI, vol. PAMI-6, pp. 721-741.
J. J. Hopfield. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci., 79, pp. 2554-2558.
P. J. Huber. 1981. Robust Statistics. John Wiley and Sons, New York.
D. Mumford and J. Shah. 1985. Boundary detection by minimizing functionals, I. Proc. IEEE Conf. on Computer Vision & Pattern Recognition, San Francisco, CA.
T. Poggio and F. Girosi. 1990. Regularization algorithms for learning that are equivalent to multilayer networks. Science, vol. 247, pp. 978-982.
D. E. Rumelhart, G. Hinton and R. J. Williams. 1986. Learning representations by back-propagating errors. Nature, 323, pp. 533-536.

Figure 1: (a) 8-bit image of 128 x 128 pixels. (b) The edge map for $\mu = 1.0$, $\gamma_{ij} = 100.0$, after 200 iterations and final $\beta = 25$. (c) The approximated image for $\mu = 0.01$, $\gamma_{ij} = 1.0$ and $\eta = 0.0009$. (d) The corresponding sparse process. (e) Approximated image for $\mu = 0.01$, $\gamma_{ij} = 1.0$ and $\eta = 0.0001$. (f) The corresponding sparse process.
Visual Grammars and their Neural Nets Eric Mjolsness Department of Computer Science Yale University New Haven, CT 06520-2158 Abstract I exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. 1 INTRODUCTION I show how to systematically derive optimizing neural networks that represent quantitative visual models and match them to data. This involves a design methodology which starts from first principles, namely a probabilistic model of a visual domain, and proceeds via Bayesian inference to a neural network which performs a visual task. The key problem is to find probability distributions sufficiently intricate to model general visual tasks and yet tractable enough for theory. This is achieved by probabilistic and expressive grammars which model the image-formation process, including heterogeneous sources of noise, each modelled with a grammar rule. In particular these grammars include a crucial "relabelling" rule that removes the undetectable internal labels (or indices) of detectable features and substitutes an uninformed labeling scheme used by the perceiver. This paper is a brief summary of the contents of [Mjolsness, 1991].

Figure 1: Operation of the random-dot grammar. The first arrow illustrates dot placement; the next shows dot jitter; the next arrow shows the pure, un-numbered feature locations; and the final arrow is the uninformed renumbering scheme of the perceiver.

2 EXAMPLE: A RANDOM-DOT GRAMMAR The first example grammar is a generative model of pictures consisting of a number of dots (e.g. a sum of delta functions) whose relative locations are determined by one out of M stored models.
But the dots are subject to unknown independent jitter and an unknown global translation, and the identities of the dots (their numerical labels) are hidden from the perceiver by a random permutation operation. For example, each model might represent an imaginary asterism of equally bright stars whose locations have been corrupted by instrument noise. One useful task would be to recognize which model generated the image data. The random-dot grammar is shown in (1):

$r_0$ (model and location): root $\to$ instance of model $\alpha$ at location $x$, with $E_0(x) = \frac{1}{2\sigma_x^2} |x|^2$.

$r_1$ (dot locations): instance$(\alpha, x) \to \{\mathrm{dotloc}(\alpha, m, x_m = x + u^\alpha_m)\}$, with $E_1(\{x_m\}) = -\log \prod_m \delta(x_m - x - u^\alpha_m) \approx \lim_{\sigma_\delta \to 0} \frac{1}{\sigma_\delta^2} \sum_m |x_m - x - u^\alpha_m|^2 + C(\sigma_\delta)$, where $\langle u^\alpha_m \rangle_m = 0$.

$r_2$ (dot jitter): dotloc$(\alpha, m, x_m) \to$ dot$(m, \tilde{x}_m)$, with $E_2(\tilde{x}_m) = \frac{1}{\sigma_{\mathrm{jit}}^2} |\tilde{x}_m - x_m|^2$.

$r_3$ (scramble all dots): $\{\mathrm{dot}(m, \tilde{x}_m)\} \to \{\mathrm{imagedot}(x_i = \sum_m P_{m,i} \tilde{x}_m)\}$, with $E_3(\{x_i\}) = -\log [\Pr(P) \prod_i \delta(x_i - \sum_m P_{m,i} \tilde{x}_m)]$, where $P$ is a permutation. $\quad(1)$

The final joint probability distribution for this grammar allows recognition and other problems to be posed as Bayesian inference and solved by neural network optimization of the resulting objective. A sum over all permutations has been approximated by an optimization over near-permutations, as usual for Mean Field Theory networks [Yuille, 1990], resulting in a neural network implementable as an analog circuit. The fact that $P$ appears only linearly in $E_{\mathrm{final}}$ makes the optimization problems easier; it is a generalized "assignment" problem. 2.1 APPROXIMATE NEURAL NETWORK WITHOUT MATCH VARIABLES Short of approximating a $P$ configuration sum via Mean Field Theory neural nets, there is a simpler, cheaper, less accurate approximation that we have used on matching problems similar to the model recognition problem (find $\alpha$ and $x$) for the dot-matching grammar. Under this approximation,

$$\operatorname{argmax}_{\alpha,x} \Pi(\alpha, x \mid \{x_i\}) \approx \operatorname{argmax}_{\alpha,x} \sum_{m,i} \exp -\frac{1}{T} \left( \frac{1}{2N\sigma_x^2} |x|^2 + \frac{1}{2\sigma_{\mathrm{jit}}^2} |x_i - x - u^\alpha_m|^2 \right) \quad (3)$$

for $T = 1$.
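The generative side of the grammar, rules $r_0$ through $r_3$, amounts to a short sampling procedure. A sketch in code (the function name and interface are my own; 2-D dots assumed):

```python
import numpy as np

def generate_image_dots(models, sigma_x, sigma_jit,
                        rng=np.random.default_rng(0)):
    """Sample the random-dot grammar: pick a model, translate, jitter, permute.

    models: list of (M_a x 2) arrays of stored relative dot locations u_m.
    Returns (alpha, x, image_dots); the perceiver observes only image_dots.
    """
    alpha = rng.integers(len(models))                     # r0: choose a model
    x = rng.normal(0.0, sigma_x, size=2)                  # r0: global shift
    dots = models[alpha] + x                              # r1: place the dots
    dots = dots + rng.normal(0.0, sigma_jit, dots.shape)  # r2: jitter
    dots = rng.permutation(dots, axis=0)                  # r3: scramble labels
    return alpha, x, dots
```

The permutation in the last step is exactly what makes recognition hard: the perceiver sees an unordered set of positions and must infer both the model $\alpha$ and the translation $x$ without the dot labels.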
This objective function has a simple interpretation when $\sigma_x \to \infty$: it minimizes the Euclidean distance between two Gaussian-blurred images containing the $x_i$ dots and a shifted version of the $u_m$ dots respectively:

$$\operatorname{argmin}_{\alpha,x} \int dz \, |G * I_1(z) - G * I_2(z - x)|^2$$
$$= \operatorname{argmin}_{\alpha,x} \int dz \, \Big| G_{\sigma/\sqrt{2}} * \sum_i \delta(z - x_i) - G_{\sigma/\sqrt{2}} * \sum_m \delta(z - x - u^\alpha_m) \Big|^2$$
$$= \operatorname{argmin}_{\alpha,x} \Big[ C_1 - 2 \sum_{m,i} \int dz \, \exp -\frac{1}{\sigma^2} \big( |z - x_i|^2 + |z - x - u^\alpha_m|^2 \big) \Big]$$
$$= \operatorname{argmax}_{\alpha,x} \sum_{m,i} \exp -\frac{1}{2\sigma^2} |x_i - x - u^\alpha_m|^2 \quad (4)$$

Deterministic annealing from $T = \infty$ down to $T = 1$, which is a good strategy for finding global maxima in equation (3), corresponds to a coarse-to-fine correlation matching algorithm: the global shift $x$ is computed by repeated local optimization while gradually decreasing the Gaussian blur parameter $\sigma$ down to $\sigma_{\mathrm{jit}}$. The approximation (3) has the effect of eliminating the discrete $P_{mi}$ variables, rather than replacing them with continuous versions $V_{mi}$. The same can be said for the "elastic net" method [Durbin and Willshaw, 1987]. Compared to the elastic net, the present objective function is simpler, more symmetric between rows and columns, and has a nicer interpretation in terms of known algorithms (correlation matching in scale space), but it is expected to be less accurate. 3 EXPERIMENTS IN IMAGE REGISTRATION Equation (3) is an objective function for recovering the global two-dimensional (2D) translation of a model consisting of arbitrarily placed dots, to match up with similar dots with jittered positions. We use it instead to find the best 2D rotation and horizontal translation, for two images which actually differ by a horizontal 3D translation with roughly constant camera orientation. The images consist of line segments rather than single dots, some of which are missing or extra data. In addition, there are strong boundary effects due to parts of the scene being translated outside the camera's field of view.
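The coarse-to-fine annealing schedule can be sketched in a few lines. The following 1-D toy (the function, the fixed-point update, and the schedule are illustrative assumptions, not the paper's implementation) maximizes the objective of equation (4) by a mean-shift-style iteration while shrinking the blur $\sigma$:

```python
import numpy as np

def match_translation(xi, um, sigmas=(4.0, 2.0, 1.0, 0.5)):
    """Coarse-to-fine estimate of a 1-D shift x maximising
    sum_{m,i} exp(-|x_i - x - u_m|^2 / (2 sigma^2)),
    shrinking sigma as in deterministic annealing.
    """
    x = 0.0
    for sigma in sigmas:
        for _ in range(50):                      # local fixed-point iterations
            d = xi[:, None] - um[None, :] - x    # all pairwise residuals
            w = np.exp(-d ** 2 / (2 * sigma ** 2))
            # Move x to the weighted mean of residuals; a fixed point is a
            # stationary point of the objective at the current sigma.
            x = x + (w * d).sum() / (w.sum() + 1e-12)
    return x
```

Large $\sigma$ smooths away spurious local maxima so the iteration finds the basin of the global shift; shrinking $\sigma$ then refines the estimate, mirroring the correlation-matching-in-scale-space interpretation above.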
The jitter is replaced by whatever positional inaccuracies come from an actual camera producing a 128 x 128 image [Williams and Hanson, 1988] which is then processed by a high-quality line-segment finding algorithm [Burns, 1986]. Better results would be expected of objective functions derived from grammars which explicitly model more of these noise processes, such as the grammars described in Section 4. We experimented with minimizing this objective function with respect to unknown global translations and (sometimes) rotations, using the continuation method and sets of line segments derived from real images. The results are shown in Figures 2, 3 and 4. 4 MORE GRAMMARS Going beyond the random-dot grammar, we have studied several grammars of increasing complexity. One can add rotation and dot deletion as new sources of noise, or introduce a two-level hierarchy, in which models are sets of clusters of dots. In [Mjolsness et al., 1991] we present a grammar for multiple curves in a single image, each of which is represented in the image as a set of dots that may be hard to group into their original curves. This grammar illustrates how flexible objects can be handled in our formalism. We approach a modest plateau of generality by augmenting the hierarchical version of the random-dot grammar with multiple objects in a single scene. This degree of complexity is sufficient to introduce many interesting features of knowledge representation in high-level vision, such as multiple instances of a model in a scene, as well as requiring segmentation and grouping as part of the recognition process. We have shown [Mjolsness, 1991] that such a grammar can yield neural networks nearly identical to the "Frameville" neural networks we have previously studied as a means of mixing simple Artificial Intelligence frame systems (or semantic networks) with optimization-based neural networks. What is more, the transformation leading to Frameville is very natural.
It simply pushes the permutation matrix as far back into the grammar as possible.

Figure 2: A simple image registration problem. (a) Stair image. (b) Long line segments derived from the stair image. (c) Two unregistered line-segment images derived from two images taken from two horizontally translated viewpoints in three dimensions. The images are a pair of successive frames in an image sequence. (d) Registered versions of the same data: superposed long line segments extracted from two stair images (taken from viewpoints differing by a small horizontal translation in three dimensions) that have been optimally registered in two dimensions.

Figure 3: Continuation method (deterministic annealing). (a) Objective function at $\sigma = 0.0863$. (b) Objective function at $\sigma = 0.300$. (c) Objective function at $\sigma = 0.105$. (d) Objective function at $\sigma = 0.0364$.

Figure 4: Image sequence displacement recovery. Frame 2 is matched to frames 3-8 in the stair image sequence. Horizontal displacements are recovered. Other starting frames yield similar results except for frame 1, which was much worse. (a) Horizontal displacement recovered, assuming no 2-d rotation. Recovered displacement as a function of frame number is monotonic. (b) Horizontal displacement recovered, along with a 2-d rotation, which is found to be small except for the final frame. Displacements are in qualitative agreement with (a), more so for small displacements. (c) Objective function before and after displacement is recovered (upper and lower curves) without rotation. Note the gradual decrease in dE with frame number (and hence with displacement). (d) Objective function before and after displacement is recovered (upper and lower curves) with rotation.

Acknowledgements Charles Garrett performed the computer simulations and helped formulate the line-matching objective function used therein.
References
[Burns, 1986] Burns, J. B. (1986). Extracting straight lines. IEEE Trans. PAMI, 8(4):425-455.
[Durbin and Willshaw, 1987] Durbin, R. and Willshaw, D. (1987). An analogue approach to the travelling salesman problem using an elastic net method. Nature, 326:689-691.
[Mjolsness, 1991] Mjolsness, E. (1991). Bayesian inference on visual grammars by neural nets that optimize. Technical Report YALEU/DCS/TR-854, Yale University Department of Computer Science.
[Mjolsness et al., 1991] Mjolsness, E., Rangarajan, A., and Garrett, C. (1991). A neural net for reconstruction of multiple curves with a visual grammar. In Seattle International Joint Conference on Neural Networks.
[Williams and Hanson, 1988] Williams, L. R. and Hanson, A. R. (1988). Translating optical flow into token matches and depth from looming. In Second International Conference on Computer Vision, pages 441-448. Staircase test image sequence.
[Yuille, 1990] Yuille, A. L. (1990). Generalized deformable models, statistical physics, and matching problems. Neural Computation, 2(1):1-24.
Hierarchical Transformation of Space in the Visual System Alexandre Pouget Stephen A. Fisher Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute La Jolla, CA 92037 Abstract Neurons encoding simple visual features in area V1 such as orientation, direction of motion and color are organized in retinotopic maps. However, recent physiological experiments have shown that the responses of many neurons in V1 and other cortical areas are modulated by the direction of gaze. We have developed a neural network model of the visual cortex to explore the hypothesis that visual features are encoded in head-centered coordinates at early stages of visual processing. New experiments are suggested for testing this hypothesis using electrical stimulation and psychophysical observations. 1 Introduction Early visual processing in cortical areas V1, V2 and MT appears to encode visual features in eye-centered coordinates. This is based primarily on anatomical data and recordings from neurons in these areas, which are arranged in retinotopic maps. In addition, when neurons in the visual cortex are electrically stimulated [9], the direction of the evoked eye movement depends only on the retinotopic position of the stimulation site, as shown in figure 1. Thus, when a position corresponding to the left part of the visual field is stimulated, the eyes move toward the left (left figure), and eye movements in the opposite direction are induced if neurons on the right side are stimulated (right figure).

Figure 1: Eye Movements Evoked by Electrical Stimulations in V1

A variety of psychophysical experiments provide further evidence that simple visual features are organized according to retinal coordinates rather than spatiotopic coordinates [10, 5].
At later stages of visual processing the receptive fields of neurons become very large, and in the posterior parietal cortex, containing areas believed to be important for sensory-motor coordination (LIP, VIP and 7a), the visual responses of neurons are modulated by both eye and head position [1, 2]. A previous model of the parietal cortex showed that the gain fields of the neurons observed there are consistent with a distributed spatial transformation from retinal to head-centered coordinates [14]. Recently, several investigators have found that static eye position also modulates the visual response of many neurons at early stages of visual processing, including the LGN, V1 and V3a [3, 6, 13, 12]. Furthermore, the modulation appears to be qualitatively similar to that previously reported in the parietal cortex and could contribute to those responses. These new findings suggest that coordinate transformations from retinal to spatial representations could be initiated much earlier than previously thought. We have used network optimization techniques to study the spatial transformations in a feedforward hierarchy of cortical maps. The goals of the model were 1) to determine whether the modulation of neural responses with eye position as observed in V1 or V3a is sufficient to provide a head-centered coordinate frame, 2) to help interpret data based on the electrical stimulation of early visual areas, and 3) to provide a framework for designing experiments and testing predictions. 2 Methods 2.1 Network Task The task of the network was to compute the head-centered coordinates of objects. If E is the eye position vector and R is the vector for the retinal position of the object, then the head-centered position P is given by:

$$P = R + E \quad (1)$$

Figure 2: Network Architecture

A two-layer network with linear units can solve this problem.
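The linear solution is worth writing out, since it shows why the interesting question is the intermediate representation rather than the task itself. A minimal sketch (2-D vectors; the names are illustrative, not from the paper):

```python
import numpy as np

# A linear network computing P = R + E is just a weight matrix that sums
# the retinal and eye-position inputs coordinate-wise.
W = np.hstack([np.eye(2), np.eye(2)])   # output weights: P = W @ [R; E]

def head_centered(R, E):
    """Head-centered position of an object from retinal and eye positions."""
    return W @ np.concatenate([R, E])
```

Because this exact map is trivially linear, any structure found in the hidden layers of the trained nonlinear network (gain fields, retinospatiotopic maps) reflects the architectural constraints, not the difficulty of the task.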
However, the goal of our study was not to find the optimal architecture for this task, but to explore the types of intermediate representation developed in a multilayer network of non-linear units and to compare these results with physiological recordings. 2.2 Network Architecture We trained a partially-connected multilayer network to compute the head-centered position of objects from retinal and eye position signals available at the input layer. Weights were shared within each hidden layer [7] and adjusted with the backpropagation algorithm [11]. All simulations were performed with the SN2 simulator developed by Bottou and LeCun. In the hierarchical architecture illustrated in figure 2, the sizes of the receptive fields were restricted in each layer and several hidden units were dedicated to each location, typically 3 to 5 units, depending on the layer. Although weights were shared between locations within a layer, each type of hidden unit was allowed to develop its own receptive field properties. This architecture preserves two essential aspects of the visual cortex: 1) restricted receptive fields organized in retinotopic maps and 2) receptive field sizes that increase with distance from the retina. Training examples consisted of an eye position vector and a gaussian pattern of activity placed at a particular location on the input layer, and these were systematically varied throughout the training. For some trials there were no visual inputs and the output layer was trained to reproduce the eye position.

Figure 3: Spatial Gain Fields: Comparison Between Hidden Units and Cortical Neurons (background activity not shown for V3a neurons)
2.3 Electrical Stimulation Experiments Determining the head-centered position of an object is equivalent to computing the position of the eye required to foveate the object (i.e. for a foveated object R = 0, which, according to equation 1, implies that P = E). Thus, the output of our network can be interpreted as the eye position for an intended saccadic eye movement to acquire the object. For the electrical stimulation experiments we followed the protocol suggested by Goodman and Andersen [4] in an earlier study of the Zipser-Andersen model of parietal cortex [14]. The cortical model was stimulated by clamping the activity of a set of hidden units at a location in one of the layers to 1, their maximum value, and setting all visual inputs to 0. The changes in the activity of the output units were computed and interpreted as an intended saccade. 3 Results We trained several networks with various numbers of hidden units per layer and found that they all converged to a nearly perfect solution in a few thousand sweeps through the training set. 3.1 Comparison Between Hidden Units and Cortical Neurons The influence of eye position on the visual response of a cortical neuron is typically assessed by finding the visual stimulus eliciting the best response and measuring the gain of this response at nine different eye fixations [1]. The responses are plotted as circles with diameters proportional to activity, and the set of nine circles is called the spatial gain field of a unit. We adopted the same procedure for studying the hidden units in the model.

Figure 4: Eye Movements Evoked by Stimulating the Retina

The units in a fully-developed network have properties that are similar to those observed in cortical neurons (figure 3). Despite having restricted receptive fields, the overall activity of most units increased monotonically in one direction of eye position, each unit having a different preferred direction in head-centered space.
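The nine-fixation gain-field protocol is easy to mimic on any model unit. A sketch (the 3 x 3 fixation grid spacing and the toy unit are illustrative assumptions, not the paper's values):

```python
import numpy as np

def spatial_gain_field(unit, best_stimulus):
    """Sample a unit's response to its best stimulus at nine eye fixations
    arranged on a 3 x 3 grid, as in the gain-field measurement protocol."""
    fixations = [(ex, ey) for ey in (-20.0, 0.0, 20.0)
                          for ex in (-20.0, 0.0, 20.0)]
    return {f: unit(best_stimulus, np.array(f)) for f in fixations}

# Toy unit whose gain grows monotonically with horizontal eye position,
# loosely mimicking the planar gain fields described in the text.
def toy_unit(stimulus, eye_pos):
    return stimulus * (1.0 + 0.01 * eye_pos[0])

field = spatial_gain_field(toy_unit, best_stimulus=1.0)
```

Plotting each of the nine responses as a circle with diameter proportional to its value reproduces the gain-field display used for both the hidden units and the cortical neurons.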
Also, the inner and outer circles, corresponding to the visual activity and the overall activity (visual plus background), did not always increase along the same direction, due to the nonlinear sigmoid squashing function of the unit. 3.2 Significance of the Spatial Gain Fields Each hidden layer of the network has a retinotopic map but also contains spatiotopic (i.e. head-centered) information through the spatial gain fields. We call these retinospatiotopic maps. At each location on a map, R is implicitly encoded by the position of a unit on the map, and E is provided by the inputs from the eye position units. Thus, each location contains all the information needed to recover P, the head-centered coordinate. Therefore, all of the visual features in the map, such as orientation or color, are encoded in head-centered coordinates. This suggests that some visual representations in V1 and V3a may be retinospatiotopic. 3.3 Electrical Stimulation Experiments Can electrical stimulation experiments distinguish between a purely retinotopic map, like the retina, and retinospatiotopic maps, like each of the hidden layers? When input units in the retina are stimulated, the direction of the evoked movement is determined by the location of the stimulation site on the map (figure 4), as expected from a purely retinotopic map. For example, stimulating units in the upper left corner of the map produces an output in the upper left direction, regardless of initial eye position. There were several types of hidden units at each spatial position of a hidden layer.

Figure 5: Eye Movements Evoked by Stimulating One Hidden Unit Type
When the hidden units were stimulated independently, the pattern of induced eye movements was no longer a function solely of the location of the stimulation (figure 5). Other factors, such as the preferred head-centered direction of the stimulated cell, were also important. Hence, the intermediate maps were not purely retinotopic. If all the hidden units present at one location in a hidden layer were activated together, the pattern of outputs resembled the one obtained by stimulating the input layer (figure 6). Even though each hidden unit has a different preferred head-centered direction, when simultaneously activated, these directions balanced out and the dominant factor became the location of the stimulation. Strong electrical stimulation in area V1 of the visual cortex is likely to recruit many neurons whose receptive fields share the same retinal location. This might explain why McIlwain [9] observed eye movements in directions that depended only on the position of the stimulation site. In higher visual areas with weaker retinotopy, it might be possible to obtain patterns closer to those produced by stimulating only one type of hidden unit. Such patterns of eye movements have already been observed in parietal area LIP [4]. Figure 6: Eye Movements Evoked by Stimulating all Hidden Unit Types (panels: Hidden layer 2, Hidden layer 3). 4 Discussion and Predictions The analysis of our hierarchical model shows that the gain modulation of visual responses observed at early stages of visual processing is consistent with the hypothesis that low-level visual features are encoded in head-centered coordinates. What experiments could confirm this hypothesis?
Electrical stimulation cannot distinguish between a retinotopic and a retinospatiotopic representation unless the number of neurons stimulated is small or restricted to those with similar gain fields. This might be possible at an intermediate level of processing, such as area V3a. Most psychophysical experiments have been designed to test for purely head-centered maps [10, 5] and not for retinotopic maps receiving a static eye position signal. New experiments are needed that look for interactions between eye position and visual features. For example, it should be possible to obtain motion aftereffects that are dependent on eye position; that is, an aftereffect in which the direction of motion depends on the gaze direction. John Mayhew [8] has already reported this type of gaze-dependent aftereffect for rotation, which is probably represented at later stages of visual processing. Similar experiments with translational motion could probe earlier levels of visual processing. If information on spatial location is already present in area V1, the primary visual area that projects to other areas of the visual cortex in primates, then we need to re-evaluate the representation of objects in visual cortex. In the model presented here, the spatial location of an object was encoded along with its other features in a distributed fashion; hence spatial location should be considered on equal footing with other features of an object. Such early spatial transformations would affect other aspects of visual processing, such as visual attention and object recognition, and may also be important for nonspatial tasks, such as shape constancy (John Mayhew, personal communication). References [1] R.A. Andersen, G.K. Essick, and R.M. Siegel. Encoding of spatial location by posterior parietal neurons. Science, 230:456-458, 1985. [2] P.R. Brotchie and R.A. Andersen. A body-centered coordinate system in posterior parietal cortex.
In Neurosc. Abst., page 1281, New Orleans, 1991. [3] C. Galleti and P.P. Battaglini. Gaze-dependent visual neurons in area V3a of monkey prestriate cortex. J. Neurosc., 9:1112-1125, 1989. [4] S.J. Goodman and R.A. Andersen. Microstimulations of a neural network model for visually guided saccades. J. Cog. Neurosc., 1:317-326, 1989. [5] D.E. Irwin, J.L. Zacks, and J.S. Brown. Visual memory and the perception of a stable visual environment. Perc. Psychophy., 47:35-46, 1990. [6] R. Lal and M.J. Friedlander. Gating of retinal transmission by afferent eye position and movement signals. Science, 243:93-96, 1989. [7] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, and L.D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:540-566, 1990. [8] J.E.W. Mayhew. After-effects of movement contingent on direction of gaze. Vision Res., 13:877-880, 1973. [9] J.T. McIlwain. Saccadic eye movements evoked by electrical stimulation of the cat visual cortex. Visual Neurosc., 1:135-143, 1988. [10] J.K. O'Regan and A. Levy-Schoen. Integrating visual information from successive fixations: does trans-saccadic fusion exist? Vision Res., 23:765-768, 1983. [11] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In D.E. Rumelhart, J.L. McClelland, and the PDP Research Group, editors, Parallel Distributed Processing, volume 1, chapter 8, pages 318-362. MIT Press, Cambridge, MA, 1986. [12] Y. Trotter, S. Celebrini, S.J. Thorpe, and M. Imbert. Modulation of stereoscopic processing in primate visual cortex V1 by the distance of fixation. In Neurosc. Abs., New Orleans, 1991. [13] T.G. Weyand and J.G. Malpeli. Responses of neurons in primary visual cortex are influenced by eye position. In Neurosc. Abs., page 419.7, St Louis, 1990. [14] D. Zipser and R.A. Andersen. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons.
Nature, 331:679-684, 1988.
1991
38
505
Illumination and View Position in 3D Visual Recognition Amnon Shashua M.I.T. Artificial Intelligence Lab., NE43-737 and Department of Brain and Cognitive Science Cambridge, MA 02139 Abstract It is shown that both changes in viewing position and illumination conditions can be compensated for, prior to recognition, using combinations of images taken from different viewing positions and different illumination conditions. It is also shown that, in agreement with psychophysical findings, the computation requires at least a sign-bit image as input; contours alone are not sufficient. 1 Introduction The task of visual recognition is natural and effortless for biological systems, yet the problem of recognition has proven to be very difficult to analyze from a computational point of view. The fundamental reason is that novel images of familiar objects are often not sufficiently similar to previously seen images of that object. Assuming a rigid and isolated object in the scene, there are two major sources for this variability: geometric and photometric. The geometric source of variability comes from changes of view position. A 3D object can be viewed from a variety of directions, each resulting in a different 2D projection. The difference is significant, even for modest changes in viewing positions, and can be demonstrated by superimposing those projections (see Fig. 4, first row, second image). Much attention has been given to this problem in the visual recognition literature ([9], and references therein), and recent results show that one can compensate for changes in viewing position by generating novel views from a small number of model views of the object [10, 4, 8]. 404 Illumination and View Position in 3D Visual Recognition 405 Figure 1: A 'Mooney' image. See text for details. The photometric source of variability comes from changing illumination conditions (positions and distribution of light sources in the scene).
This has the effect of changing the brightness distribution in the image, and the location of shadows and specular reflections. The traditional approach to this problem is based on the notion of edge detection. The idea is that discontinuities in image brightness remain stable under changes of illumination conditions. This invariance is not complete, and furthermore it is an open question whether this kind of contour information is sufficient, or even relevant, for purposes of visual recognition. Consider the image in Fig. 1, adopted from Mooney's Closure Faces Test [6]. Most observers show no difficulty in interpreting the shape of the object from the right-hand image, but cannot identify the object when presented with only the contours. Also, many of the contours are shadow contours and therefore critically rely on the direction of the light source. In Fig. 2 four frontal images of a doll from four different illumination conditions are shown together with their intensity step edges. The change in the contour image is significant and is not limited to shadow contours; some object edges appear or disappear as a result of the change in brightness distribution. Also shown in Fig. 4 is a sign-bit image of the intensity image followed by a convolution with a Difference of Gaussians. As with the Mooney image, it is considerably more difficult to interpret the image of a complex object with only the zero-crossing (or level-crossing) contours than when the sign-bits are added. It seems, therefore, that a successful recognition scheme should be able to cope with changes in illumination conditions, as well as changes in viewing positions, by working with a richer source of information than just contours (for a different point of view, see [1]). The minimal information that seems to be sufficient, at least for coping with the photometric problem, is the sign-bit image.
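Such a sign-bit image is straightforward to compute: filter the intensity image with a Difference of Gaussians and keep only the sign of the result. The sketch below (NumPy only; the random test image and the two sigma values are arbitrary choices, not those used in the paper) builds the DoG from two separable Gaussian blurs:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def dog_filter(img, s1=1.0, s2=2.0):
    """Difference of Gaussians: blur at two scales and subtract."""
    def blur(im, sigma):
        k = gaussian_kernel(sigma, int(3 * sigma))
        pad = len(k) // 2
        im = np.pad(im, pad, mode="reflect")
        im = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, im)
        im = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, im)
        return im
    return blur(img, s1) - blur(img, s2)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
sign_bits = np.sign(dog_filter(img))   # the Mooney-like sign-bit image
```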
The approach to visual recognition in this study is in line with the 'alignment' approach [9] and is also inspired by the work of Ullman and Basri [10], who show that the geometric source of variability can be handled by matching the novel projection to a linear combination of a small number of previously seen projections of that object. A recognition scheme that can handle both the geometric and photometric sources of variability is suggested by introducing three new results: (i) any image of a surface with a linear reflectance function (including Lambertian and Phong's model without point specularities) can be expressed as a linear combination of a fixed set of three images of that surface taken under different illumination conditions, (ii) from a computational standpoint, the coefficients are better recovered using the sign-bit image rather than the contour image, and (iii) one can compensate for both changes in viewing position and illumination conditions by using combinations of images taken from different viewing positions and different illumination conditions. 2 Linear Combination of Images We start by assuming that view position is fixed and the only parameter that is allowed to change is the positions and distribution of light sources. The more general result that includes changes in viewing positions will be discussed in section 4. Proposition 1 All possible images of a surface, with a linear reflectance function, generated by all possible illumination conditions (positions and distribution of light sources) are spanned by a linear combination of images of the surface taken from independent illumination conditions. Proof: Follows directly from the general result that if fj(x), x ∈ R^k, j = 1, ..., k, are k linear functions which are also linearly independent, then for any linear function f(x) we have that f(x) = Σj aj fj(x), for some constants aj.
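Proposition 1 can be checked numerically. In the sketch below (synthetic Lambertian data, no shadowing; all values are made up), the three basis light sources are chosen as the coordinate axes, so the coefficients recovered from three pixels should simply equal the novel source vector:

```python
import numpy as np

rng = np.random.default_rng(2)
n = rng.random((100, 3)) + 0.1           # normal * albedo per pixel (n_r)
s_basis = np.eye(3)                      # three independent point light sources
s_novel = np.array([0.5, 0.3, 0.8])      # the novel illumination s

I_basis = n @ s_basis.T                  # columns: I_j(p) = n_r . s_j
I_novel = n @ s_novel                    # novel image, no shadowing assumed

# Pick three pixels visible to all sources and solve for a1, a2, a3
idx = [0, 1, 2]
a = np.linalg.solve(I_basis[idx], I_novel[idx])
I_pred = I_basis @ a                     # I' = sum_j a_j I_j
```

With axis-aligned basis sources the 3x3 system reduces to solving n_r . s_novel at three pixels, so `a` recovers `s_novel` and `I_pred` matches the novel image everywhere.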
□ The simplest case for which this result holds is the Lambertian reflectance model under a point light source (observed independently by Yael Moses, personal communication). Let r be an object point projecting to p. Let nr represent the normal and albedo at r (direction and magnitude), and s represent the light source and its intensity. The brightness at p under the Lambertian model is I(p) = nr · s, and because s is fixed for all points p, we have I(p) = a1 I1(p) + a2 I2(p) + a3 I3(p), where Ij(p) is the brightness under light source sj and where s1, s2, s3 are linearly independent. This result generalizes, in a straightforward manner, to the case of multiple light sources as well. The Lambertian model is suitable for matte surfaces, i.e. surfaces that diffusely reflect incoming light rays. One can add a 'shininess' component to account for the fact that for non-ideal Lambertian surfaces, more light is reflected in a direction making an equal angle of incidence with reflectance. In Phong's model of reflectance [7] this takes the form of (nr · h)^c, where h is the bisector of s and the viewer's direction v. The power constant c controls the degree of sharpness of the point specularity; therefore, outside that region one can use a linear version of Phong's model by replacing the power constant with a multiplicative constant, to get the following function: I(p) = nr · [s + ρ(v + s)]. As before, the bracketed vector is fixed for all image points and therefore the linear combination result holds. The linear combination result suggests therefore that changes in illumination can be compensated for, prior to recognition, by selecting three points (that are visible to s, s1, s2, s3) to solve for a1, a2, a3 and then match the novel image I with I' = Σj aj Ij. The two images should match along all points p whose object points r are visible to s1, s2, s3 (even if nr · s < 0, i.e. p is attached-shadowed); approximately match along points for which nr ·
sj < 0, for some j (Ij(p) is truncated to zero; geometrically, s is projected onto the subspace spanned by the remaining basis light sources); and not match along points that are cast-shadowed in I (nr · s > 0 but r is not visible to s because of self occlusion). Coping with cast-shadows is an important task, but is not in the scope of this paper. Figure 2: Linear combination of model images taken from the same viewing position and under different illumination conditions. Row 1,2: Three model images taken under a varying point light source, and the input image, and their brightness edges. Row 3: The image generated by the linear combination of the model images, its edges, and the difference edge image between the input and generated image. The linear combination result also implies that, for the purposes of recognition, one does not need to recover shape or light source direction in order to compensate for changes in brightness distribution and attached shadows. Experimental results, on a non-ideal Lambertian surface, are shown in Fig. 2. 3 Coefficients from Contours and Sign-bits Mooney pictures, such as in Fig. 1, demonstrate that humans can cope well with situations of varying illumination by using only limited information from the input image, namely the sign-bits, yet are not able to do so from contours alone. This observation can be predicted from a computational standpoint, as shown below. Proposition 2 The coefficients that span an image I by the basis of three other images, as described in proposition 1, can be solved, up to a common scale factor, Figure 3: Compensating for both changes in view and illumination. Row 1: Three model images, one of which is taken from a different viewing direction (23° apart), and the input image from a novel viewing direction (in between the model images) and illumination condition.
Row 2: The difference image between the edges of the input image (shown separately in Fig. 4) and the edges of the view-transformed first model image (first row, left-hand), the final generated image (linear combination of the three transformed model images), its edges, and the difference image between the edges of the input and generated images. from just the contours of I (zero-crossings or level-crossings). Proof: Let aj be the coefficients that span I by the basis images Ij, j = 1, 2, 3, i.e. I = Σj aj Ij. Let f, fj be the result of applying a Difference of Gaussians (DOG) operator, with the same scale, on the images I, Ij, j = 1, 2, 3. Since DOG is a linear operator, we have that f = Σj aj fj. Since f(p) = 0 along zero-crossing points p of I, then by taking any three zero-crossing points which are not on a cast-shadow border, we get a homogeneous set of equations from which aj can be solved up to a common scale factor. Similarly, let k be an unknown threshold applied to I. Therefore, along level crossings of I we have k = Σj aj Ij; hence 4 level-crossing points, that are visible to all four light sources, are sufficient to solve for aj and k. □ This result is in accordance with what is known from the image compression literature on reconstructing an image, up to a scale factor, from contours alone [2]. In both cases, here and in image compression, this result may be difficult to apply in practice because the contours are required to be given at sub-pixel accuracy. One can relax the accuracy requirement by using the gradients along the contours, a technique that works well in practice. Nevertheless, neither gradients nor contours at sub-pixel accuracy are provided by Mooney pictures, which leaves us with the sign-bits as the source of information for solving for the coefficients. Figure 4: Compensating for changes in viewing position and illumination from a single view (model images are all from a single viewing position).
Model images are the same as in Fig. 2; the input image is the same as in Fig. 3. Row 1: edges of the input image, overlay of the input edge image and the edges of the first model image, overlay with the edges of the 2D affine transformed first model image, and the sign-bit input image with marked 'example' locations (16 of them). Row 2: the linear combination image of the 2D affine transformed model images, the final generated image, its edges, and the overlay with the edges of the input image. Proposition 3 Solving for the coefficients from the sign-bit image of I is equivalent to solving for a separating hyperplane in 3D in which image points serve as 'examples'. Proof: Let z(p) = (I1, I2, I3)^T be a vector function and w = (a1, a2, a3)^T be the unknown weight vector. Given the sign-bit image of I, we have that for every point p, excluding zero-crossings, the scalar product w^T z(p) is either positive or negative. In this respect, one can consider the points of the sign-bit image as 'examples' in 3D space and the coefficients aj as a vector normal to the separating hyperplane. □ A similar result can be obtained for the case of a thresholded image. The separating hyperplane in that case is defined in 4D, rather than 3D. Many schemes for finding a separating hyperplane have been described in the Neural Network literature (see [5] for a review) and in the Discriminant Analysis literature ([3], for example). Experimental results shown in the next section show that 10-20 points, distributed over the entire object, are sufficient to produce results that are indistinguishable from those obtained from an exact solution. By using the sign-bits instead of the zero-crossing contours we are trading a unique (up to a scale factor), but unstable, solution for an approximate, but stable, one. Also, by taking the sample points relatively far away from the contours (in order to minimize the chance of error) the scheme can tolerate a certain degree of misalignment between the basis images and the novel image.
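Both recovery routes can be sketched numerically: from zero-crossing points the coefficients follow (up to scale) as the null vector of a homogeneous system, and from sign-bits they follow from any separating-hyperplane learner, here a plain perceptron. All data below is synthetic and the particular values are illustrative; discarding points near the zero-crossing surface mirrors the remark above about sampling far from the contours:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true = np.array([0.6, -0.2, 1.0])      # the unknown coefficients a1, a2, a3

# Route 1 (Proposition 2): zero-crossings give a homogeneous system F a = 0.
# Fabricate filtered basis values F[p] = (f1, f2, f3) at points lying exactly
# on zero-crossings of f = sum_j a_j f_j, i.e. F @ a_true = 0.
F = rng.normal(size=(50, 3))
F[:, 2] = -(F[:, 0] * a_true[0] + F[:, 1] * a_true[1]) / a_true[2]
_, _, Vt = np.linalg.svd(F)                  # null vector = last right singular vector
a_contour = Vt[-1] / Vt[-1][2] * a_true[2]   # fix the unknown scale for comparison

# Route 2 (Proposition 3): sign-bits give a separating-hyperplane problem.
Z = rng.normal(size=(500, 3))            # z(p) = (I1(p), I2(p), I3(p)) per pixel
m = Z @ a_true
keep = np.abs(m) > 1.0                   # sample far from the zero-crossings
Z, labels = Z[keep], np.sign(m[keep])

w = np.zeros(3)
for _ in range(100):                     # plain perceptron updates
    for z, y in zip(Z, labels):
        if np.sign(z @ w) != y:
            w += y * z

agree = np.mean(np.sign(Z @ w) == labels)
```

The contour route recovers the coefficients exactly (up to scale); the perceptron converges because the margin left after discarding near-contour points keeps the mistake bound small.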
This property will be used in one of the schemes, described below, for combining changes of viewing positions and illumination conditions. 4 Changing Illumination and Viewing Positions In this section, the recognition scheme is generalized to cope with both changes in illumination and viewing positions. Namely, given a set of images of an object as a model and an input image viewed from a novel viewing position and taken under a novel illumination condition, we would like to generate an image, from the model, that is similar to the input image. Proposition 4 Any set of three images, satisfying the conditions of proposition 1, of an object can be used to compensate for both changes in view and illumination. Proof: Any change in viewing position will induce both a change in the location of points in the image, and a change in their brightness (because of the change in viewing angle and the change in angle between light source and surface normal). From proposition 1, the change in brightness can be compensated for provided all the images are in alignment. What remains, therefore, is to bring the model images and the input image into alignment. Case 1: If each of the three model images is viewed from a different position, then the remaining proof follows directly from the result of Ullman and Basri [10], who show that any view of an object with smooth boundaries, undergoing any affine transformation in space, is spanned by three views of the object. Case 2: If only two of the model images are viewed from different positions, then given full correspondence between all points in the two model views and 4 corresponding points with the input image, we can transform all three model images to align with the input image in the following way.
The 4 corresponding points between the input image and one of the model images define three corresponding vectors (taking one of the corresponding points, say o, as an origin) from which a 2D affine transformation, matrix A and vector w, can be recovered. The result, proved in [8], is that for every point p' in the input image which is in correspondence with p in the model image we have that p' = [Ap + o' − Ao] + αp w. The parameter αp is invariant to any affine transformation in space, and therefore is also invariant to changes in viewing position. One can, therefore, recover αp from the known correspondence between two model images and use that to predict the location p'. It can be shown that this scheme also provides a good approximation in the case of objects with smooth boundaries (like an egg or a human head; for details see [8]). Case 3: All three model images are from the same viewing position. The model images are first brought into 'rough alignment' (term adopted from [10]) with the input image by applying the transformation Ap + o' − Ao + w to all points p in each model image. The remaining displacement between the transformed model images and the input image is (αp − 1)w, which can be shown to be bounded by the depth variation of the surface [8]. (In case the object is not sufficiently flat, more than 4 points may be used to define local transformations via a triangulation of those points.) The linear combination coefficients are then recovered using the sign-bit scheme described in the previous section. The three transformed images are then linearly combined to create a new image that is compensated for illumination but is still displaced from the input image. The displacement can be recovered by using a brightness correlation scheme along the direction w to find αp − 1 for each point p (for details, see [8]). □ Experimental results of the last two schemes are shown in Figs. 3 and 4.
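The 2D affine transformation recovered from the 4 corresponding points can be sketched as a small least-squares fit. Below, correspondences are generated from a known affine map (A_true, plus a translation playing the role of o' − Ao); the six parameters are recovered exactly since the αp terms vanish for this planar configuration. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([3.0, -1.0])               # plays the role of o' - A o

p = rng.random((4, 2)) * 10                  # 4 corresponding model points
p_prime = p @ A_true.T + t_true              # their positions in the input image

# Least-squares fit of the 6 affine parameters from the correspondences
M = np.hstack([p, np.ones((4, 1))])          # rows [x, y, 1]
X, *_ = np.linalg.lstsq(M, p_prime, rcond=None)
A_hat, t_hat = X[:2].T, X[2]
```

Four generic points overdetermine the six parameters, so the fit is exact here; with noisy correspondences the same least-squares form simply averages the error.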
The four corresponding points, required for view compensation, were chosen manually at the tips of the eyes, eyebrow and mouth of the doll. The full correspondence that is required between the third model view and the other two in scheme 2 above was established by first taking two pictures of the third view, one from a novel illumination condition and the other from an illumination condition similar to one of the other model images. Correspondence was then determined by using the scheme described in [8]. The extra picture was then discarded. The sample points for the linear combination were chosen automatically by selecting 10 points in smooth brightness regions. The sample points using the sign-bit scheme were chosen manually. 5 Summary It has been shown that the effects of photometry and geometry in visual recognition can be decoupled and compensated for prior to recognition. Three new results were shown: (i) photometric effects can be compensated for using a linear combination of images, (ii) from a computational standpoint, contours alone are not sufficient for recognition, and (iii) geometrical effects can be compensated for from any set of three images, from different illuminations, of the object. Acknowledgments I thank Shimon Ullman for his advice and support. Thanks to Ronen Basri, Tomaso Poggio, Whitman Richards and Daphna Weinshall for many discussions. A.S. is supported by NSF grant IRI-8900267. References [1] Cavanagh, P. Proc. 19th ECVP, Andrei, G. (Ed.), 1990. [2] Curtis, S.R. and Oppenheim, A.V. In Richards, W. and Ullman, S. (eds.), Image Understanding 1989, pp. 92-110, Ablex, NJ, 1990. [3] Duda, R.O. and Hart, P.E. Pattern Classification and Scene Analysis. NY, Wiley, 1973. [4] Edelman, S. and Poggio, T. Massachusetts Institute of Technology, A.I. Memo 1181, 1990. [5] Lippmann, R.P. IEEE ASSP Magazine, pp. 4-22, 1987. [6] Mooney, C.M. Can. J. Psychol. 11:219-226, 1957. [7] Phong, B.T. Comm. ACM, 18, 6:311-317, 1975. [8] Shashua, A.
Massachusetts Institute of Technology, A.I. Memo 1927, 1991. [9] Ullman, S. Cognition, 32:193-254, 1989. [10] Ullman, S. and Basri, R. Massachusetts Institute of Technology, A.I. Memo 1052, 1989.
1991
39
506
Multi-State Time Delay Neural Networks for Continuous Speech Recognition Patrick Haffner CNET Lannion A TSSIRCP 22301 LANNION, FRANCE haffner@lannion.cnet.fr Alex Waibel Carnegie Mellon University Pittsburgh, PA 15213 ahw@cs.cmu.edu Abstract We present the "Multi-State Time Delay Neural Network" (MS-TDNN) as an extension of the TDNN to robust word recognition. Unlike most other hybrid methods, the MS-TDNN embeds an alignment search procedure into the connectionist architecture, and allows for word-level supervision. The resulting system has the ability to manage the sequential order of subword units, while optimizing for the recognizer performance. In this paper we present extensive new evaluations of this approach on speaker-dependent and speaker-independent connected alphabet. 1 INTRODUCTION Classification-based Neural Networks (NN) have been successfully applied to phoneme recognition tasks. Extending those classification capabilities to word recognition is an important research direction in speech recognition. However, connectionist architectures do not model time alignment properly, and they have to be combined with a Dynamic Programming (DP) alignment procedure to be applied to word recognition. Most of these "hybrid" systems (Bourlard, 1989) take advantage of the powerful and well-tried probabilistic formalism provided by Hidden Markov Models (HMM) to make use of a reliable alignment procedure. However, the use of this HMM formalism strongly limits one's choice of word models and classification procedures. MS-TDNNs, which do not use this HMM formalism, suggest new ways to design speech recognition systems in a connectionist framework. Unlike most hybrid systems, where connectionist procedures replace some parts of a pre-existing system, MS-TDNNs are designed from scratch as a global Neural Network that performs word recognition. No bootstrapping is required from an HMM, and we can apply learning procedures that correct the recognizer's errors explicitly.
These networks have been successfully tested on difficult word recognition tasks, such as speaker-dependent connected alphabet recognition (Haffner et al., 1991a) and speaker-independent telephone digit recognition (Haffner and Waibel, 1991b). Section 2 presents an overview of hybrid Connectionist/HMM architectures and training procedures. Section 3 describes the MS-TDNN architecture. Section 4 presents our novel training procedure. In section 5, MS-TDNNs are tested on speaker-dependent and speaker-independent continuous alphabet recognition. 2 HYBRID SYSTEMS HMMs are currently the most efficient and commonly used approach for large speech recognition tasks: their modeling capacity, however limited, fits many speech recognition problems fairly well (Lee, 1988). The main limit to the modeling capacity of HMMs is the fact that trainable parameters must be interpretable in a probabilistic framework to be reestimated using the Baum-Welch algorithm with the Maximum Likelihood Estimation training criterion (MLE). Connectionist learning techniques used in NNs (generally error back-propagation) allow for a much wider variety of architectures and parameterization possibilities. Unlike HMMs, NNs model discrimination surfaces between classes rather than the complete input/output distributions (as in HMMs): their parameters are only trained to minimize some error criterion. This gain in data modeling capacity, associated with a more discriminant training procedure, has permitted improved performance on a number of speech tasks, especially those in which modeling sequential information is not necessary. For instance, Time Delay Neural Networks have been applied, with high performance, to phoneme classification (Waibel et al., 1989). To extend this performance to word recognition, one has to combine a front-end NN with a procedure performing time alignment, usually based on DP.
A variety of alignment procedures and training methods have been proposed for those "hybrid" systems. 2.1 TIME ALIGNMENT To take into account the time distortions that may appear within its boundaries, a word is generally modeled by a sequence of states (1, ..., s, ..., N) that can have variable durations. The score of a word in the vocabulary accumulates frame-level scores, which are a function of the output Y(t) = (Y1(t), ..., YI(t)) of the front-end NN: Score = Σs Σt Score_s(Y(t)) (1) where the inner sum runs over the frames t aligned with state s. The DP algorithm finds the optimal alignment {T1, ..., TN+1} which maximizes this word score. A variety of Score functions have been proposed for Eq. (1). They are most often treated as likelihoods, to apply the probabilistic Viterbi alignment algorithm. 2.1.1 NN outputs probabilities Outputs of classification-based NNs have been shown to approximate Bayes probabilities, provided that they are trained with a proper objective function (Bourlard, 1989). For instance, we can train our front-end NN to output, at each time frame, state probabilities that can be used by a Viterbi alignment procedure (to each state s there corresponds an NN output i(s)). Eq. (1) gives the resulting word log(likelihood) as a sum of frame-level log(likelihoods), which are written¹: Score_s(Y(t)) = log(Y_i(s)(t)) (2) 2.1.2 Comparing NN output to a reference vector The front-end NN can be interpreted as a system remapping the input to a single-density continuous HMM (Bengio, 1991). In the case of identity covariance matrices, Eq. (1) gives the log(likelihood) for the k-th word (after Viterbi alignment) as a sum of distances between the NN frame-level output and a reference vector associated with the current state²: Score_s(Y(t)) = ||Y(t) − y^s||² (3) Here, the reference vectors (y¹, ..., y^s, ..., y^N) correspond to the means of Gaussian PDFs, and can be estimated with the Baum-Welch algorithm. 2.2 TRAINING The first hybrid models that were proposed (Bourlard,
1989; Franzini, 1991) optimized the state-level NN (with gradient descent) and the word-level HMM (with Baum-Welch) separately. Even though each level of the system may have reached a local optimum of its cost function, training is potentially suboptimal for the given complete system. Global optimization of hybrid connectionist/HMM systems requires a unified training algorithm, which makes use of global gradient descent (Bridle, 1990). 3 THE MS-TDNN ARCHITECTURE MS-TDNNs have been designed to extend TDNN classification performance to the word level, within the simplest possible connectionist framework. Unlike the hybrid methods presented in the previous section, the HMM formalism is not taken as the underlying framework here, but many of the models developed within this formalism are applicable to MS-TDNNs. 3.1 FRAME-LEVEL TDNN ARCHITECTURE All the MS-TDNN architectures described in this paper use the front-end TDNN architecture (Waibel et al., 1989), shown in Fig. 1, at the state level. Each unit of the first hidden layer receives input from a 3-frame window of input coefficients. Similarly, each unit in the second hidden layer receives input from a 5-frame window of outputs of the first hidden layer. At this level of the system (2nd hidden layer), the network produces, at each time frame, the scores for the desired phonetic features. Phoneme recognition TDNNs are trained in a time-shift invariant way by integrating over time the output of a single state. 3.2 BASELINE MS-TDNN With MS-TDNNs, we have extended the formalism of TDNNs to incorporate time alignment. The front-end TDNN architecture has I output units, whose activations (ranging 1. State prior probabilities would add a constant term to Eq. (2). 2. State transition probabilities add an offset to Eq. (3).
Input lpeec ••••••••• • •••••••••••••• • •••••••••••••• • •••••••••••••• • •••••••••••••• • ••••••••••• ••• TIME Figure 1: Frame-Level TDNN • • • • • wordllll Figure 2: MS-TDNN t State scores are copied from the 2nd hidden layer oftheTDNN from 0 to 1) represent the trame-level scores. To each state s corresponds a TDNN output i(s). Different states may share the same output (for instance with phone models). The DP procedure, as described in Eq.(1), determines the sequence of states producing the maximum sum of activations3: Scores(Y(t» = Yj(s) (4) The frame-level score used in the MS-TDNN combines the advantages of being simple with that of having a formal description as an extension of the TDNN accumulation process to multiple states. It becomes possible to model the accumulation process as a connectionist word unit that sums the activations from the best sequence of incoming state units, as shown in Fig.2. This is mostly useful during the back-propagation phase: at each time frame, we imagine a virtual connection between the active state unit and the word unit, which is used to backpropagate the error at the word level down to the state level. 4 3.3 EXTENDING MS· TDNNs In the previous section, we presented the baseline MS-TDNN architecture. We now present extensions to the word-level architecture, which provide additional trainable parameters. Eq.(4) is extended as: Scores(Y(t» = Weight;· Y;(s) +Biasj (5) 3. This equation is not very different from Eq.(2) presented in the previous section, however, all attempts to use log(Y,{t)) instead of Y,{t) have resulted in unstable learning runs, that have never con· verged properly. During the test phase, the two approaches may be functionally not very different. Outputs that affect the error rate in a critical way are mostly those of the correct word and the best incorrect word, especially when they are close. 
We have observed that frame-level scores which play a key role in discrimination are close to 1.0; the two scores then become asymptotically equivalent (less 1): log(Y_i(t)) ≈ Y_i(t) - 1.
4. The alignment path found by the DP routine during the forward phase is "frozen", so that it can be represented as a connectionist accumulation unit during the backward phase. The problem is that, after modification of the weights, this alignment path may no longer be the optimal one. Practical consequences of this seem minimal.

Weight_s allows us to weight the importance of each state belonging to the same word differently: we do not have to assume that each part of a speech pattern contains an equal amount of information. Bias_s is analogous to a transition log(probability) in an HMM. However, we have observed that a small variation in the value of these parameters may alter recognition performance considerably, so the choice of a proper training procedure is critical. Our gradient back-propagation algorithm was selected for its efficient training of the parameters of the front-end TDNN; as our training procedure is global, we have also applied it to train Weight_s and Bias_s, but with some difficulty. In section 4.1, we show that they are useful to shift the word scores so that a sigmoid function properly separates the correct words (output 1) from the incorrect ones (output 0).

3.4 SEQUENCE MODELS

We design very simple state sequence models by hand; they may use phonetic knowledge (phone models) or not (word models).

Phone Models: The phonetic representation of a word is transcribed as a sequence of states. In the example shown in Fig.3, the letter 'p' combines 3 phone units. P captures the closure and the burst of this stop consonant. P-IY is a co-articulation unit. The phone IY is recognized in a context-independent way; this phone is shared with all the other E-set letters.
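As a minimal illustration (function and duration values are ours, not the paper's), expanding such a phone model into its DP state sequence, with states duplicated to enforce minimal phone durations, can be sketched as:

```python
def phone_state_sequence(phones, min_dur):
    """Expand a word's phone-unit list into a DP state sequence,
    duplicating each state to enforce a minimal duration.
    Shared units (e.g. IY across E-set letters) simply reappear
    in the sequences of several words."""
    seq = []
    for ph in phones:
        seq.extend([ph] * min_dur.get(ph, 1))  # default: 1 frame minimum
    return seq

# The letter 'p' combines the units P, P-IY and IY:
states_p = phone_state_sequence(['P', 'P-IY', 'IY'], {'P': 2, 'IY': 3})
```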
States are duplicated to enforce minimal phone durations.

Figure 3: Phone model for 'p'

Word Models: No specific phonemic meaning is associated with the states of a word, and those states cannot be shared with other words.

Transition States: One can add specialized transition units that are trained to detect transitions more explicitly; the resulting stabilization in segmentation yields an increase in performance. This method is, however, sensitive to a good bootstrapping of the system on proper phone boundaries, and has so far only been applied to speaker-dependent alphabet recognition.

4 TRAINING

In many speech recognition systems, a large discrepancy is found between the training procedure and the testing procedure. The training criterion, generally Maximum Likelihood Estimation, is very far from the word accuracy the system is expected to maximize. Good performance depends on a large quantity of data and on proper modeling. With MS-TDNNs, we suggest optimization procedures which explicitly attempt to minimize the number of word substitutions; this approach represents a move towards systems in which the training objective is maximum word accuracy. The same global gradient back-propagation is applied to the whole system, from the output word units down to the input units. Each desired word is associated with a segment of speech with known boundaries, and this association represents a learning sample. The DP alignment procedure is applied between the known word boundaries. We now describe three training procedures we have applied to MS-TDNNs.

4.1 STANDARD BACK-PROPAGATION WITH SIGMOIDAL OUTPUTS

Word outputs O'_k = f(W_k · O_k + B_k) are compared to word targets (1 for the desired word, 0 for the other words), and the resulting error is back-propagated. O_k is the DP sum given by Eq.(1) for the k-th word in the vocabulary, f is the sigmoid function, W_k gives the slope of the sigmoid and B_k is a bias term, as shown in Fig.4.
They are trained so that the sigmoid function properly separates the correct word (output 1) from the incorrect words (output 0). When the network is trained with the additional parameters of Eq.(5), Weight_s and Bias_s can account for this sigmoid slope and bias.

Figure 4: The sigmoid function (slope and bias separating the correct word score from the best incorrect one)

MS-TDNNs are applied to word recognition problems where classes are highly confusable. The score of the best incorrect word may be very close to the score of the correct word: in this case, the slope and the bias are parameters which are difficult to tune, and the learning procedure has difficulty attaining the 0 and 1 target values. To overcome these difficulties, we have developed new training techniques which require neither a sigmoid function nor fixed word targets.

4.2 ON-LINE CORRECTION OF CLASSIFICATION ERRORS

The testing procedure recognizes the word (or the string of words) with the largest output, and there is an error when this is not the correct word. As the goal of the training procedure is to minimize the number of errors, the "ideal" procedure would be, each time a classification error has occurred, to observe where it comes from and to modify the parameters of the system so that it no longer happens. The MS-TDNN has to recognize the correct word CoWo. There is a training error if, for an incorrect word InWo, one has O_InWo > O_CoWo - m. No sigmoid function is needed to compare these outputs; m is an additional margin that ensures the robustness of the training procedure. Only in the event of a training error do we modify the parameters of the MS-TDNN. The word targets are moving (for instance, the target score for an incorrect word is O_CoWo - m) instead of fixed (0 or 1). This technique overcomes the difficulties due to the use of an output sigmoid function.
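The selection rule above can be sketched as follows; this is a minimal illustration of which outputs get corrected and toward which moving targets (the actual gradient update through the DP alignment is omitted, and all names are ours):

```python
def margin_update_targets(scores, correct, m=0.1):
    """Select the word outputs to correct under the on-line scheme:
    an incorrect word InWo is in error only if its score exceeds
    O_CoWo - m.  Each violating word gets the moving target
    O_CoWo - m instead of a fixed 0 target; non-violating words
    are left untouched."""
    threshold = scores[correct] - m
    return {word: threshold
            for word, s in scores.items()
            if word != correct and s > threshold}
```

With scores {'B': 0.9, 'D': 0.85, 'E': 0.2} and correct word 'B', only 'D' violates the margin and would be pushed down toward 0.8.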
Moreover, the number of incorrect words whose output is actually modified is greatly reduced: this is very helpful in training under-represented classes, as the numbers of positive and negative examples become much more balanced. Compared to the more traditional training technique (with a sigmoid) presented in the previous section, large increases in training speed and word accuracy were observed.

4.3 FUZZY WORD BOUNDARIES

The training procedures we have presented so far do not take into account the fact that the sample words may come from continuous speech. The main difficulty is that their straightforward extension to continuous speech would not be computationally feasible, as the set of possible training classes would consist of all the possible strings of words. We have adopted a staged approach: we modify the training procedure so that it matches the continuous recognition conditions more and more closely, while remaining computationally feasible. The first step deals with the problem of word boundaries. During training, known word boundaries give additional information that the system uses to help recognition, but this information is not available when testing. To overcome this problem when learning a correct word (noted CoWo), we take as the correct training token the triplet PreWo-CoWo-NexWo (PreWo is the preceding correct word, NexWo is the next correct word in the sentence). All the other triplets PreWo-InWo-NexWo are considered incorrect. These triplets are aligned between the beginning known boundary of PreWo and the ending known boundary of NexWo. What is important is that no precise boundary information is given for CoWo. The word classification training criterion presented here only minimizes word substitutions. In connected speech, one also has to deal with deletion and insertion errors; procedures to describe them as classification errors are currently being developed.
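The triplet construction above can be sketched as follows (a minimal illustration; function names are ours):

```python
def boundary_free_tokens(vocab, prev_word, correct, next_word):
    """Build the fuzzy-boundary training tokens: the correct token
    PreWo-CoWo-NexWo and, for every other vocabulary word InWo, an
    incorrect token PreWo-InWo-NexWo.  Alignment is then performed
    between the outer known boundaries only, so no precise boundary
    information is given for the middle word."""
    positive = (prev_word, correct, next_word)
    negatives = [(prev_word, w, next_word) for w in vocab if w != correct]
    return positive, negatives
```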
5 EXPERIMENTS ON CONNECTED ALPHABET

Recognizing spoken letters is considered one of the most challenging small-vocabulary tasks in speech recognition. The vocabulary, consisting of the 26 letters of the American English alphabet, is highly confusable, especially among subsets like the E-set ('B', 'C', 'D', 'E', 'G', 'P', 'T', 'V', 'Z') or ('M', 'N'). In all experiments, 16 mel-scale filterbank spectrum coefficients, computed at a 10 msec frame rate, are used as input parameters. Phone models are used.

5.1 SPEAKER DEPENDENT ALPHABET

Our database consists of 1000 connected strings of letters, some corresponding to grammatical words and proper names, others simply random. There is an average of five letters per string. The learning procedure described in §4.1 is applied to the extended MS-TDNN (§3.3), with a bootstrapping phase in which phone labels are used to give the alignment of the desired word. During testing, time alignment is performed over the whole sentence. A one-stage DP algorithm (Ney, 1984) for connected words (with no grammar) is used in place of the isolated-word DP algorithm used in the training phase. The additional use of minimal word durations, word entrance penalties and word boundary detectors has reduced the number of word insertions and deletions (in the DP algorithm) to an acceptable level. On two speakers, the word error rates are respectively 2.4% and 10.3%. By comparison, SPHINX achieved error rates of 6% and 21.7%, respectively, when context-independent (as in our MS-TDNN) phone models were used. Using context-dependent models (as described in Lee, 1988), SPHINX achieves 4% and 16.1% error rates, respectively. No comparable results yet exist for the MS-TDNN for this case.

5.2 SPEAKER INDEPENDENT ALPHABET (RMspell)

Our database, a part of the DARPA Resource Management database (RM), consists of 120 speakers, spelling about 15 words each. 109 speakers (about 10,000 spelled letters) are used for training.
11 speakers (about 1000 spelled letters) are used for testing. 57 phone units in the second hidden layer account for the phonemes and the co-articulation units. We apply the training algorithms described in §4.2 and §4.3 to our baseline MS-TDNN architecture (§3.2), without any additional procedure (for instance, no phonetic bootstrapping). An important difference from the experimental conditions described in the previous section is that we have kept training and testing conditions exactly the same (for instance, the same knowledge of the boundaries is used during training and testing).

Table 1: Alphabet classification errors (we do not allow for insertion or deletion errors).

Algorithm                        %Error
Known Word Boundaries (§4.2)     5.7%
Fuzzy Word Boundaries (§4.3)     6.5%

6 SUMMARY

We presented in this paper MS-TDNNs, which extend TDNNs' classification performance to the sequence level. They integrate the DP alignment procedure within a straightforward connectionist framework. We developed training procedures which are computationally reasonable and train the MS-TDNN in a global way; their only supervision is the minimization of the recognizer's error rate. Experiments were conducted on speaker-independent continuous alphabet recognition. The word error rates are 5.7% with known word boundaries and 6.5% with fuzzy word boundaries.

Acknowledgments

The authors would like to express their gratitude to Denis Jouvet and Michael Witbrock for their help writing this paper, and to Cindy Wood for gathering the databases. This work was partially performed at CNET laboratories and at Carnegie Mellon University, under DARPA support.

References

Bengio, Y. "Artificial Neural Networks and their Application to Sequence Recognition", Ph.D. Thesis, McGill University, Montreal, June 1991.

Bourlard, H. and Morgan, N.
"Merging Multilayer Perceptrons and Hidden Markov Models: Some Experiments in Continuous Speech Recognition", TR-89-033, ICSI, Berkeley, CA, July 1989.

Bridle, J.S. "Alphanets: a recurrent 'neural' network architecture with a hidden Markov model interpretation", Speech Communication, "Neurospeech" issue, Feb 1990.

Franzini, M.A., Lee, K.F., and Waibel, A.H. "Connectionist Viterbi Training: A New Hybrid Method for Continuous Speech Recognition", ICASSP, Albuquerque, 1990.

Haffner, P., Franzini, M. and Waibel, A. "Integrating Time Alignment and Neural Networks for High Performance Continuous Speech Recognition", ICASSP, Toronto, 1991a.

Haffner, P. and Waibel, A. "Time-Delay Neural Networks Embedding Time Alignment: a Performance Analysis", Eurospeech'91, Genova, September 1991b.

Lee, K.F. "Large-Vocabulary Speaker-Independent Continuous Speech Recognition: the SPHINX system", PhD Thesis, Carnegie Mellon University, 1988.

Ney, H. "The Use of a One-Stage Dynamic Programming Algorithm for Connected Word Recognition", IEEE Trans. on Acoustics, Speech and Signal Processing, April 1984.

Waibel, A.H., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. "Phoneme Recognition using Time-Delay Neural Networks", IEEE Transactions on Acoustics, Speech and Signal Processing 37(3):328-339, 1989.
Constant-Time Loading of Shallow 1-Dimensional Networks

Stephen Judd
Siemens Corporate Research, 755 College Rd. E., Princeton, NJ 08540
judd@learning.siemens.com

Abstract

The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has), even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e. independent of the size of the network). This holds even when inter-node communication channels are short and local, thus adhering to more biological and VLSI constraints.

1 Introduction

Shallow neural networks are defined in [Jud90]; the definition effectively limits the depth of networks while allowing the width to grow arbitrarily, and it is used as a model of neurological tissue like cortex, where neurons are arranged in arrays tens of millions of neurons wide but only tens of neurons deep. Figure 1 exemplifies a family of networks which are not only shallow but "1-dimensional" as well: we allow the network to be extended as far as one likes in width (i.e. to the right) by repeating the design segments shown. The question we address is how learning time scales with the width. In [Jud88], it was proved that the worst-case time complexity of training this family is linear in the width. But the proof involved an algorithm that was biologically very implausible, and it is this objection that will be somewhat redressed in this paper.
The problem with the given algorithm is that it operates only on a monolithic serial computer; the single-CPU model of computing has no overt constraints on communication capacities and is therefore too liberal a model to be relevant to our neural machinery. Furthermore, the algorithm reveals very little about how to do the processing in a parallel and distributed fashion. In this paper we alter the model of computing to attain a degree of biological plausibility. We allow a linear number of processors and put explicit constraints on the time required to communicate between processors. Both of these changes make the model much more biological (and also closer to the connectionist style of processing). This change alone, however, does not alter the time complexity: the worst-case training time is still linear. But when we change the complexity question being asked, a different answer is obtained. We define a class of tasks (viz. training data) that are drawn at random and then ask for the expected time to load these tasks, rather than the worst-case time. This alteration makes the question much more environmentally relevant. It also leads us into a different domain of algorithms and yields fast loading times.

2 Shallow 1-D Loading

2.1 Loading

The family of example shallow 1-dimensional architectures that we shall examine is characterized solely by an integer, d, which defines the depth of each architecture in the family. An example is shown in Figure 1 for d = 3. The example also happens to have a fixed fan-in of 2 and a very regular structure, but this is not essential. A member of the family is specified by giving the width n, which we will take to be the number of output nodes. A task is a set of pairs of binary vectors, each specifying a stimulus to a net and its desired response. A random task of size t is a set of t pairs of independently drawn random strings; there is no guarantee it is a function.
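A random task of this sort can be sketched as follows (function and parameter names are ours); note that the same stimulus may be drawn twice with different responses, which is exactly why the task need not be a function:

```python
import random

def random_task(t, stim_bits, resp_bits, seed=0):
    """Draw a random task of size t: t independently drawn
    (stimulus, response) pairs of random binary strings."""
    rng = random.Random(seed)

    def draw(n):
        return tuple(rng.randrange(2) for _ in range(n))

    return [(draw(stim_bits), draw(resp_bits)) for _ in range(t)]
```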
Our primary question has to do with the following problem, which is parameterized by some fixed depth d and by a node function set (the collection of different transfer functions that a node can be tuned to perform):

Shallow 1-D Loading:
Instance: An integer n, and a task.
Objective: Find a function (from the node function set) for each node in the shallow 1-D architecture defined by d and n such that the resulting circuit maps all the stimuli in the task to their associated responses.

Figure 1: An Example Shallow 1-D Architecture

2.2 Model of Computation

Our machine model for solving this question is the following: For an instance of shallow 1-D loading of width n, we allow n processors. Each one has access to a piece of the task, namely processor i has access to bits i through i + d of each stimulus, and to bit i of each response. Each processor i has a communication link only to its two neighbours, namely processors i - 1 and i + 1. (The first and nth processors have only one neighbour.) It takes one time step to communicate a fixed amount of data between neighbours. There is no charge for computation, but this is not an unreasonable cheat because we can show that a matrix multiply is sufficient for this problem, and the size of the matrix is a function only of d (which is fixed). This definition accepts the usual connectionist ideal of having the processor closely identified with the network nodes for which it is "finding the weights", and data available at the processor is restricted to the same "local" data that connectionist machines have.
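Under this model, the data visible to a single processor can be sketched as follows (0-indexed here for convenience, unlike the paper's 1-indexed statement):

```python
def local_view(task, i, d):
    """Return what processor i sees: bits i..i+d of each stimulus
    and bit i of each response, for every pair in the task."""
    return [(stim[i:i + d + 1], resp[i]) for stim, resp in task]
```

For example, with one pair ((1, 0, 1, 1, 0), (0, 1, 0)) and d = 2, processor 1 sees only stimulus bits (0, 1, 1) and response bit 1.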
This sort of computation sets the stage for a complexity question.

2.3 Question and Approach

We wish to demonstrate that

Claim 1: This parallel machine solves shallow 1-D loading where each processor is finished in constant expected time.

The constant depends on the depth of the architecture and on the size of the task, but not on the width. The expectation is over the tasks. For simplicity we shall focus on one particular processor (the one at the leftmost end) and we shall further restrict our attention to finding a node function for one particular node. To operate in parallel, it is necessary and sufficient for each processor to make its local decisions in a "safe" manner; that is, it must make choices for its nodes in such a way as to facilitate a global solution. Constant-time loading precludes being able to see all the data, and if only local data is accessible to a processor, then its plight is essentially to find an assignment that is compatible with all nonlocal satisfying assignments.

Theorem 2: The expected communication complexity of finding a "safe" node function assignment for a particular node in a shallow 1-D architecture is a constant dependent on d and t, but not on n.

If decisions about assignments to single nodes can be made easily and essentially without having to communicate with most of the network, then the induced partitioning of the problem admits of fast parallel computation. There are some complications to the details because all these decisions must be made in a coordinated fashion, but we omit these details here and claim they are secondary issues that do not affect the gross complexity measurements. The proof of the theorem comes in two pieces. First, we define a computational problem called path finding and the graph-theoretic notion of domination which is its fundamental core. Then we argue that the loading problem can be reduced to path finding in constant parallel time and give an upper bound for determining domination.
3 Path Finding

The following problem is parameterized by an integer K, which is fixed.

Path finding:
Instance: An integer n defining the number of parts in a partite graph, and a series of K×K adjacency matrices, M_1, M_2, ..., M_{n-1}. M_i indicates connections between the K nodes of part i and the K nodes of part i + 1.
Objective: Find a path of n nodes, one from each part of the n-partite graph.

Define X_h to be the binary matrix representing connectivity between the first part of the graph and the h-th part: X_1 = M_1, and X_h(j, k) = 1 iff there exists m such that X_{h-1}(j, m) = 1 and M_h(m, k) = 1. We say "i includes j at h" if every bit in the i-th row of X_h is 1 whenever the corresponding bit in the j-th row of X_h is 1. We say "i dominates at h" (or "i is a dominator") if, for all rows j, i includes j at h.

Lemma 3: Before an algorithm can select a node i from the first part of the graph to be on the path, it is necessary and sufficient for i to have been proven to be a dominator at some h. □

The minimum h required to prove domination stands as our measure of "communication complexity".

Lemma 4: Shallow 1-D Loading can be reduced to path finding in constant parallel time.

Proof: Each output node in a shallow architecture has a set of nodes leading into it called a support cone (or "receptive field"), and the collection of functions assigned to those nodes will determine whether or not the output bit is correct in each response. Nodes A, B, C, D, E, G in Figure 1 are the support cone for the first output node (node C), and D, E, F, G, H, J are the cone for the second. Construct each part of the graph as a set of points, each corresponding to an assignment over the whole support cone that makes its output bit always correct. This can be done for each cone in parallel, and since the depth (and the fan-in) is fixed, the set of all possible assignments for the support cone can be enumerated in constant time.
Now insert edges between adjacent parts wherever two points correspond to assignments that are mutually compatible. (Note that since the support cones overlap one another, we need to ensure that assignments are consistent with each other.) This also can be done in constant parallel time. We call this construction a compatibility graph. A solution to the loading problem corresponds exactly to a path in the compatibility graph. □

A dominator in this path-finding graph is exactly what was meant above by a "safe" assignment in the loading problem.

4 Proof of Theorem

Since it is possible that there is no assignment to certain cones that correctly maps the stimuli, it is trivial to prove the theorem, but as a practical matter we are interested in the case where the architecture is actually capable of performing the task. We will prove the theorem using a somewhat more satisfying event.

Proof of Theorem 2: For each support cone there is 1 output bit per response and there are t such responses. Given the way they are generated, these responses could all be the same with probability 0.5^(t-1). The probability of two adjacent cones both having to perform such a constant mapping is 0.5^(2(t-1)). Imagine the labelling in Figure 1 to be such that there were many support cones to the left (and right) of the piece shown. Any path through the left side of the compatibility graph that arrived at some point in the part for the cone to the left of C would imply an assignment for nodes A, B, and D. Any path through the right side of the compatibility graph that arrived at some point in the part for the cone of I would imply an assignment for nodes G, H, and J. If cones C and F were both required to merely perform constant mappings, then any and all assignments to A, B, and D would be compatible with any and all assignments to G, H, and J (because nodes C and F could be assigned constant functions themselves, thereby making the others irrelevant).
This ensures that any point on a path to the left will dominate at the part for I. Thus 2^(2(t-1)) (the inverse of the probability of this happening) is an upper bound on the domination distance, i.e. the communication complexity, i.e. the loading time. □

More accurately, the complexity is min(c(d, t), f(t), n), where c and f are some unknown functions. But the operative term here is usually c, because d is unlikely to get so large as to bring f into play (and of course n is unbounded). The analysis in the proof is sufficient, but it is a far cry from complete. The actual Markovian process in the sequence of X's is much richer; there are so many events in the compatibility graph that cause domination to occur that it takes a lot of careful effort to construct a task that will avoid it!

5 Measuring the Constants

Unfortunately, the very complications that give rise to the pleasant robustness of the domination event also make it fiendishly difficult to analyze quantitatively. So to get estimates for the actual constants involved we ran Monte Carlo experiments, for 4 different cases. The first experiment measured the distance one would have to explore before finding a dominating assignment for the node labeled A in Figure 1. The node function set used was the set of linearly separable functions. In all experiments, if domination occurred for the degenerate reason that there were no solutions (paths) at all, then that datum was thrown out and the run was restarted with a different seed. Figure 2 reports the constants for the four cases. There is one curve for each experiment. The abscissa represents t, the size of the task. The ordinate is the number of support cones that must be consulted before domination can be expected to occur. All points given are the average of at least 500 trials. Since t is an integer, the data should not have been interpolated between points, but they are easier to see as connected lines.
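A minimal sketch of the domination-distance computation behind such measurements is given below: the X_h recurrence and the row-inclusion test of Section 3, plus a Monte Carlo wrapper. The wrapper draws random K×K adjacency matrices with i.i.d. edges of probability p; this random-instance model is our own illustration, not the compatibility graphs actually measured in Figure 2:

```python
import random

def domination_distance(mats):
    """Return the smallest h at which some node of the first part
    dominates: X_1 = M_1, X_h(j,k) = 1 iff X_{h-1}(j,m) = 1 and
    M_h(m,k) = 1 for some m; node i dominates at h when row i of
    X_h covers every other row.  Returns None if no dominator
    appears within the instance."""
    K = len(mats[0])
    X = None
    for h, M in enumerate(mats, start=1):
        if X is None:
            X = [row[:] for row in M]          # X_1 = M_1
        else:                                  # boolean composition X_{h-1} o M_h
            X = [[int(any(X[j][m] and M[m][k] for m in range(K)))
                  for k in range(K)] for j in range(K)]
        for i in range(K):                     # does row i include every row j?
            if all(X[i][k] >= X[j][k] for j in range(K) for k in range(K)):
                return h
    return None

def expected_domination_distance(K, n, p, trials, seed=0):
    """Monte Carlo estimate over random n-partite instances;
    instances with no dominator are discarded, loosely mirroring
    the paper's treatment of degenerate runs."""
    rng = random.Random(seed)
    dists = []
    for _ in range(trials):
        mats = [[[int(rng.random() < p) for _ in range(K)]
                 for _ in range(K)] for _ in range(n - 1)]
        d = domination_distance(mats)
        if d is not None:
            dists.append(d)
    return sum(dists) / len(dists) if dists else None
```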
The solid line (labeled LSA) is for the case just described. It has a bell shape, reflecting three facts:

• when the task is very small, almost every choice of node function for one node is compatible with choices for the neighbouring nodes.
• when the task is very large, there are so many constraints on what a node must compute that it is easy to resolve what that should be without going far afield.
• when the task is intermediate-sized, the problem is harder.

Note the very low distances involved: even the peak of the curve is well below 2, so nowhere would you expect to have to pass data more than 2 support cones away. Although this worst-expected-case would surely be larger for deeper nets, current work is attempting to see how badly this would scale with depth (larger d). The curve labeled LUA is for the case where all Boolean functions are used as the node function set. Note that it is significantly higher in the region 6 < t < 12. The implication is that although the node function set being used here is a superset of the linearly separable functions, it takes more computation at loading time to be able to exploit that extra power.

Figure 2: Measured Domination Distances (curves LSA, LSB, LUA, LUB; abscissa t, from 0 to 16)

The curve labeled LSB shows the expected distance one has to explore before finding a dominating assignment for the node labelled B in Figure 1.
The node function set used was the set of linearly separable functions. Note that it is everywhere higher than the LSA curve, indicating that the difficulty of settling on a correct node function for a second-layer node is somewhat higher than that of finding one for a first-layer node. Finally, there is a curve for node B when all Boolean functions are used (LUB). It is generally higher than when just linearly separable functions are used, but not so markedly so as in the case of node A.

6 Conclusions

The model of computation used here is much more biologically relevant than the ones previously used for complexity results, but the algorithm used here runs in an off-line "batch mode" (i.e. it has all the data before it starts processing). This has an unbiological nature, but no more so than the customary connectionist habit of repeating the data many times. A weakness of our analysis is that (as formulated here) it is only for discrete node functions, exact answers, and noise-free data. Extensions for any of these additional difficulties may be possible, and the bell shape of the curves should survive. The peculiarities of the regular 3-layer network examined here may appear restrictive, but it was taken as an example only; what is really implied by the term "1-D" is only that the bandwidth of the SCI graph for the architecture be bounded (see [Jud90] for definitions). This constraint allows several degrees of freedom in choosing the architecture, but domination is such a robust combinatoric event that the essential observation about bell-shaped curves made in this paper will persist even in the face of large changes from these examples. We suggest that whatever architectures and node function sets a designer cares to use, the notion of domination distance will help reveal important computational characteristics of the design.

Acknowledgements

Thanks go to Siemens and CalTech for wads of computer time.

References

[Jud88] J. S. Judd.
On the complexity of loading shallow neural networks. Journal of Complexity, September 1988. Special issue on Neural Computation, in press.

[Jud90] J. Stephen Judd. Neural Network Design and the Complexity of Learning. MIT Press, Cambridge, Massachusetts, 1990.
Combined Neural Network and Rule-Based Framework for Probabilistic Pattern Recognition and Discovery Hayit K. Greenspan and Rodney Goodman Department of Electrical Engineering California Institute of Technology, 116-81 Pasadena, CA 91125 Rama Chellappa Department of Electrical Engineering Institute for Advanced Computer Studies and Center for Automation Research University of Maryland, College Park, MD 20742 Abstract A combined neural network and rule-based approach is suggested as a general framework for pattern recognition. This approach enables unsupervised and supervised learning, respectively, while providing probability estimates for the output classes. The probability maps are utilized for higher level analysis such as a feedback for smoothing over the output label maps and the identification of unknown patterns (pattern "discovery"). The suggested approach is presented and demonstrated in the texture analysis task. A correct classification rate in the 90 percentile is achieved for both unstructured and structured natural texture mosaics. The advantages of the probabilistic approach to pattern analysis are demonstrated. 1 INTRODUCTION In this work we extend a recently suggested framework (Greenspan et al., 1991) for a combined neural network and rule-based approach to pattern recognition. This approach enables unsupervised and supervised learning, respectively, as presented in Fig. 1. In the unsupervised learning phase a neural network clustering scheme is used for the quantization of the input features. A supervised stage follows in which labeling of the quantized attributes is achieved using a rule-based system. This information theoretic technique is utilized to find the most informative correlations between the attributes and the pattern class specification, while providing probability estimates for the output classes.
Ultimately, a minimal representation for a library of patterns is learned in a training mode, following which the classification of new patterns is achieved. The suggested approach is presented and demonstrated in the texture-analysis task. Recent results (Greenspan et al., 1991) have demonstrated a correct classification rate of 95-99% for synthetic (texton) textures and in the 90 percentile for 2-3 class natural texture mosaics. In this paper we utilize the output probability maps for high-level analysis in the pattern recognition process. A feedback based on the confidence measures associated with each class enables a smoothing operation over the output maps to achieve a high degree of classification in more difficult (natural texture) pattern mosaics. In addition, a generalization of the recognition process to identify unknown classes (pattern "discovery"), in itself a most challenging task, is demonstrated. 2 FEATURE EXTRACTION STAGE The initial stage for a classification system is the feature extraction phase through which the attributes of the input domain are extracted and presented towards further processing. The chosen attributes are to form a representation of the input domain, which encompasses information for any desired future task. In the texture-analysis task there is both biological and computational evidence supporting the use of Gabor filters for the feature-extraction phase (Malik and Perona, 1990; Bovik et al., 1990). Gabor functions are complex sinusoidal gratings modulated by 2-D Gaussian functions in the space domain, and shifted Gaussians in the frequency domain. The 2-D Gabor filters form a complete but non-orthogonal basis which can be used for image encoding into multiple spatial frequency and orientation channels. The Gabor filters are appropriate for textural analysis as they have tunable orientation and radial frequency bandwidths, tunable center frequencies, and optimally achieve joint resolution in space and spatial frequency.
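As a concrete illustration of the filters just described, the sketch below evaluates a complex 2-D Gabor function (a sinusoidal grating under a Gaussian envelope) and enumerates one plausible 15-channel bank of 3 scales with 4 orientations plus a non-oriented component per scale. The parameter names and the particular parameterization are our own assumptions, not the authors' implementation.

```python
import math

def gabor(x, y, f0, theta, sigma):
    # Complex 2-D Gabor value at (x, y): a Gaussian envelope times a
    # complex sinusoidal carrier of frequency f0 along orientation theta.
    xr = x * math.cos(theta) + y * math.sin(theta)
    envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    phase = 2.0 * math.pi * f0 * xr
    return envelope * complex(math.cos(phase), math.sin(phase))

# One plausible 15-filter bank: 3 scales x (4 orientations + non-oriented).
bank = [(s, th) for s in range(3) for th in (0.0, 90.0, 45.0, -45.0, None)]
print(len(bank))                        # 15 channels, as in the text
print(gabor(0.0, 0.0, 0.25, 0.0, 2.0))  # at the origin: (1+0j)
```

At the origin the Gaussian envelope is 1 and the carrier has zero phase, so the filter value is exactly (1+0j), a quick sanity check on the parameterization.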
In this work, we use the Log Gabor pyramid, or the Gabor wavelet decomposition, to define an initial finite set of filters. We implement a pyramidal approach in the filtering stage reminiscent of the Laplacian pyramid (Burt and Adelson, 1983). In our simulations a computationally efficient scheme involves a pyramidal representation of the image which is convolved with fixed-spatial-support oriented Gabor filters. Three scales are used with 4 orientations per scale (0, 90, 45, -45 degrees), together with a non-oriented component, to produce a 15-dimensional feature vector for every local window in the original image, as the output of the feature extraction stage. The pyramidal approach allows for a hierarchical, multiscale framework for the image analysis. This is a desirable property as it enables the identification of features at various scales of the image and thus is attractive for scale-invariant pattern recognition. Figure 1: System Block Diagram (window of input image -> feature extraction: N-dimensional continuous feature vector -> unsupervised learning via Kohonen NN: N-dimensional quantized feature vector -> supervised learning via rule system -> texture classes) 3 QUANTIZATION VIA UNSUPERVISED LEARNING The unsupervised learning phase can be viewed as a preprocessing stage for achieving yet another, more compact representation, of the filtered input. The goal is to quantize the continuous valued features which are the result of the initial filtering stage. The need for discretization becomes evident when trying to learn associations between attributes in a statistically-based framework, such as a rule-based system. Moreover, in an extended framework, the network can reduce the dimension of the feature domain. This shift in representation is in accordance with biologically based models.
The output of the filtering stage consists of N (=15) continuous valued feature maps, each representing a filtered version of the original input. Thus, each local area of the input image is represented via an N-dimensional feature vector. An array of such N-dimensional vectors, viewed across the input image, is the input to the learning stage. We wish to detect characteristic behavior across the N-dimensional feature space for the family of textures to be learned. By projecting an input set of samples onto the N-dimensional space, we search for clusters to be related to corresponding code-vectors, and later on, recognized as possible texture classes. A neural-network quantization procedure, based on Kohonen's model (Kohonen, 1984), is utilized for this stage. In this work each dimension, out of the N-dimensional attribute vector, is individually clustered. All samples are thus projected onto each axis of the space and one-dimensional clusters are found; this scalar quantization case closely resembles the K-means clustering algorithm. The output of the preprocessing stage is an N-dimensional quantized vector of attributes which is the result of concatenating the discrete valued codewords of the individual dimensions. Each dimension can be seen to contribute a probabilistic differentiation onto the different classes via the clusters found. As some of the dimensions are more representative than others, it is the goal of the supervised stage to find the most informative dimensions for the desired task (with the higher differentiation capability) and to label the combined clustered domain. 4 SUPERVISED LEARNING VIA A RULE-BASED SYSTEM In the supervised stage we utilize the existing information in the feature maps for higher level analysis, such as input labeling and classification.
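The per-dimension scalar quantization can be sketched as follows. This is a plain 1-D Lloyd/K-means stand-in for the Kohonen-style quantizer described above; the initialization scheme, iteration count, and toy data are our assumptions.

```python
def kmeans_1d(values, k, iters=20):
    # 1-D Lloyd's algorithm: cluster one feature dimension independently,
    # as the scalar quantization stage above does for each axis.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        buckets = [[] for _ in centers]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            buckets[j].append(v)
        centers = [sum(b) / len(b) if b else c
                   for b, c in zip(buckets, centers)]
    return centers

def quantize(v, centers):
    # Codeword = index of the nearest cluster center on this axis.
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

# Two feature dimensions, quantized independently, then concatenated
# into the N-dimensional quantized attribute vector.
dims = [[0.1, 0.2, 0.15, 4.9, 5.0, 5.1],
        [10.0, 10.2, 0.3, 0.1, 9.9, 0.2]]
codebooks = [kmeans_1d(d, k=2) for d in dims]
code = tuple(quantize(v, cb) for v, cb in zip([0.12, 10.1], codebooks))
print(code)  # concatenated per-axis codewords, e.g. (0, 1)
```

The concatenated tuple of per-axis codeword indices plays the role of the quantized attribute vector passed on to the supervised stage.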
In particular we need to learn a classifier which maps the output attributes of the unsupervised stage to the texture class labels. Any classification scheme could be used. However, we utilize a rule-based information theoretic approach which is an extension of a first order Bayesian classifier, because of its ability to output probability estimates for the output classes (Goodman et al., 1992). The classifier defines correlations between input features and output classes as probabilistic rules of the form: If Y = y then X = x with prob. p, where Y represents the attribute vector and X is the class variable. A data driven supervised learning approach utilizes an information theoretic measure to learn the most informative links or rules between the attributes and the class labels. Such a measure was introduced as the J measure (Smyth and Goodman, 1991), which represents the information content of a rule as the average bits of information that attribute values y give about the class X. The most informative set of rules via the J measure is learned in a training stage, following which the classifier uses them to provide an estimate of the probability of a given class being true. When presented with a new input evidence vector, Y, a set of rules can be considered to "fire". The classifier estimates the log posterior probability of each class given the rules that fire as: log p(x | rules that fire) = log p(x) + sum_j W_j, with W_j = log [ p(x|y) / p(x) ], where p(x) is the prior probability of the class x, and W_j represents the evidential support for the class as provided by rule j. Each class estimate can now be computed by accumulating the "weights of evidence" incident on it from the rules that fire. The largest estimate is chosen as the initial class label decision. The probability estimates for the output classes can now be used for feedback purposes and further higher level processing.
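A minimal numerical sketch of the two quantities above, the J measure of a rule and the accumulated weights of evidence, might look like this; the toy probabilities are our own, for illustration only.

```python
import math

def j_measure(p_y, p_x_given_y, p_x):
    # J(X; Y=y): average bits of information that the event Y=y gives
    # about a binary class X (the rule-ranking measure described above).
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log2(a / b)
    return p_y * (term(p_x_given_y, p_x) +
                  term(1.0 - p_x_given_y, 1.0 - p_x))

def log_posterior(p_x, fired_rule_probs):
    # log p(x | rules that fire) = log p(x) + sum_j W_j,
    # with W_j = log(p(x|y)/p(x)) for each rule j that fires.
    return math.log(p_x) + sum(math.log(p / p_x) for p in fired_rule_probs)

print(j_measure(0.3, 0.9, 0.5) > 0)   # informative rule: True
print(j_measure(0.3, 0.5, 0.5))       # rule that tells us nothing: 0.0
print(log_posterior(0.5, [0.9, 0.8]) > log_posterior(0.5, []))  # True
```

A rule whose firing leaves the class distribution unchanged has zero J measure, while each supporting rule that fires adds a positive weight of evidence to the class estimate.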
The rule-based classification system can be mapped into a 3-layer feedforward architecture as shown in Fig. 2. The input layer contains a node for each attribute. The hidden layer contains a node for each rule and the output layer contains a node for each class. Each rule (second layer node j) is connected to a class via the multiplicative weight of evidence W_j. Figure 2: Rule-Based Network (inputs -> rules -> class probability estimates) 5 RESULTS In previous results (Greenspan et al., 1991) we have shown the capability of the proposed system to recognize successfully both artificial ("texton") and natural textures. A classification rate of 95-99% was obtained for 2 and 3 class artificial images. 90-98% was achieved for 2 and 3 class natural texture mosaics. In this work we wish to demonstrate the advantage of utilizing the output probability maps in the pattern recognition process. The probability maps are utilized for higher level analysis such as a feedback for smoothing and the identification of unknown patterns (pattern "discovery"). An example of a 5-class natural texture classification is presented in Fig. 3. The mosaic is comprised of grass, raffia, herring, wood and wool (center square) textures. The input mosaic is presented (top left), followed by the labeled output map (top right) and the corresponding probability maps for a prelearned library of 6 textures (grass, raffia, wood, calf, herring and wool, left to right, top to bottom, respectively). The input poses a very difficult task which is challenging even to our own visual perception. Based on the probability maps (with white indicating strong probability) the very satisfying result of the labeled output map is achieved. The 5 different regions have been identified and labeled correctly (in different shades of gray) with the boundaries between the regions very strongly evident.
A feedback based on the probability maps was used for smoothing over the label map, to achieve the result presented. It is worthwhile noting that the probabilistic framework enables the analysis of both structural textures (such as the wood, raffia and herring) and unstructured textures (such as the grass and wool). Fig. 4 demonstrates the generalization capability of our system to the identification of an unknown class. In this task a presented pattern which is not part of the prelearned library is to be recognized as such and labeled as an unknown area of interest. This task is termed "pattern discovery" and its applications are widespread, from identifying unexpected events to the presentation of areas-of-interest in scene exploratory studies. Learning the unknown is a difficult problem in which the probability estimates prove to be valuable. In the presented example a 3 texture library was learned, consisting of wood, raffia and grass textures. The input consists of wood, raffia and sand (top left). The output label map (top right), which is the result of the analysis of the respective probability maps (bottom), exhibits the accurate detection of the known raffia and wood textures, with the sand area labeled in black as an unknown class. This conclusion was based on the corresponding probability estimates, which are zeroed out in this area for all the known classes. We have thus successfully analyzed the scene based on the existing source of knowledge. Our most recent results pertain to the application of the system to natural scenery analysis. This is a most challenging task as it relates to real-world applications, an example of which are NASA space exploratory goals. Initial simulation results are presented in Fig. 5, which presents a sand-rock scenario. The training examples are presented, followed by two input images and their corresponding output label maps, left to right, respectively.
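The unknown-class decision just described can be sketched as a per-pixel rule: if every class probability is near zero, the pixel is declared unknown instead of being forced into the prelearned library. The threshold value below is our assumption, not a figure from the paper.

```python
def label_with_discovery(class_probs, threshold=0.05):
    # class_probs[c] is this pixel's probability estimate for class c.
    # When all estimates fall below the threshold, declare the pixel
    # unknown ("pattern discovery") rather than assigning the best
    # known class anyway.
    best = max(range(len(class_probs)), key=lambda c: class_probs[c])
    return best if class_probs[best] >= threshold else -1  # -1 = unknown

print(label_with_discovery([0.70, 0.20, 0.10]))  # confident known class: 0
print(label_with_discovery([0.01, 0.00, 0.02]))  # all near zero: -1
```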
Here, white represents rock, gray represents sand and black regions are classified as unknown. The system copes successfully with this challenge. We can see that a distinction between the regions has been made and for a possible mission such as rock avoidance (landing purposes, navigation etc.) reliable results were achieved. These initial results are very encouraging and indicate the robustness of the system to cope with difficult real-world cases. Figure 3: Five class natural texture classification (input, output and per-class probability maps) Figure 4: Identification of an unknown pattern (training set: wood, raffia, grass; input includes sand) Figure 5: Natural scenery analysis (training set: sand, rock; two input examples with output label maps) 6 SUMMARY The proposed learning scheme achieves a high percentage classification rate on both artificial and natural textures. The combined neural network and rule-based framework enables a probabilistic approach to pattern recognition. In this work we have demonstrated the advantage of utilizing the output probability maps in the pattern recognition process. Complicated patterns were analyzed accurately, with an extension to real-imagery applications. The generalization capability of the system to the discovery of unknown patterns was demonstrated. Future work includes research into scale and rotation invariance capabilities of the presented framework. Acknowledgements This work is funded in part by DARPA under the grant AFOSR-90-0199 and in part by the Army Research Office under the contract DAAL03-89-K-0126. Part of this work was done at Jet Propulsion Laboratory. The advice and software support of the image-analysis group there, especially that of Dr. Charlie Anderson, is greatly appreciated. References H. Greenspan, R. Goodman and R. Chellappa. (1991) Texture Analysis via Unsupervised and Supervised Learning.
Proceedings of the 1991 International Joint Conference on Neural Networks, Vol. 1:639-644. R. M. Goodman, C. Higgins, J. Miller and P. Smyth. (1992) Rule-Based Networks for Classification and Probability Estimation. To appear in Neural Computation. P. Smyth and R. M. Goodman. (1991) Rule Induction using Information Theory. In G. Piatetsky-Shapiro, W. Frawley (eds.), Knowledge Discovery in Databases, 159-176. AAAI Press. J. Malik and P. Perona. (1990) Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7(5):923-932. A. C. Bovik, M. Clark and W. S. Geisler. (1990) Multichannel Texture Analysis Using Localized Spatial Filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):55-73. P. J. Burt and E. A. Adelson. (1983) The Laplacian Pyramid as a compact image code. IEEE Trans. Commun., COM-31:532-540. T. Kohonen. (1984) Self-Organization and Associative Memory. Springer-Verlag.
Experimental Evaluation of Learning in a Neural Microsystem Joshua Alspector Anthony Jayakumar Stephan Luna† Bellcore Morristown, NJ 07962-1910 Abstract We report learning measurements from a system composed of a cascadable learning chip, data generators and analyzers for training pattern presentation, and an X-windows based software interface. The 32 neuron learning chip has 496 adaptive synapses and can perform Boltzmann and mean-field learning using separate noise and gain controls. We have used this system to do learning experiments on the parity and replication problems. The system settling time limits the learning speed to about 100,000 patterns per second, roughly independent of system size. 1. INTRODUCTION We have implemented a model of learning in neural networks using feedback connections and a local learning rule. Even though back-propagation [1] (Rumelhart, 1986) networks are feedforward in processing, they have separate, implicit feedback paths during learning for error propagation. Networks with explicit, full-time feedback paths can perform pattern completion [2] (Hopfield, 1982), can learn many-to-one mappings, can learn probability distributions, and can have interesting temporal and dynamical properties in contrast to the single forward pass processing of multilayer perceptrons trained with back-propagation or other means. Because of the potential for complex dynamics, feedback networks require a reliable method of relaxation for learning and retrieval of static patterns. The Boltzmann machine [3] (Ackley, 1985) uses stochastic settling while the mean-field theory version [4] (Peterson, 1987) uses a more computationally efficient deterministic technique. We have previously shown that Boltzmann learning can be implemented in VLSI [5] (Alspector, 1989).
We have also shown, by simulation [6] (Alspector, 1991a), that Boltzmann and mean-field networks can have powerful learning and representation properties just like the more thoroughly studied back-propagation methods. In this paper, we demonstrate these properties using new, expandable parallel hardware for on-chip learning. † Permanent address: University of California, Berkeley; EECS Dep't, Cory Hall; Berkeley, CA 94720 2. VLSI IMPLEMENTATION 2.1 Electronic Model We have implemented these feedback networks in VLSI, which speeds up learning by many orders of magnitude due to the parallel nature of weight adjustment and neuron state update. Our choice of learning technique for implementation is due mainly to the local learning rule, which makes it much easier to cast these networks into electronics than back-propagation. Individual neurons in the Boltzmann machine have a probabilistic decision rule such that neuron i is in state s_i = 1 with probability Pr(s_i = 1) = 1 / (1 + e^(-u_i/T_i)) (1) where u_i = sum_j w_ij s_j is the net input to each neuron, calculated by current summing, and T_i is a parameter that acts like temperature in a physical system and is represented by the noise and gain terms in Eq. (2), which follows. In the electronic model we use, each neuron performs the activation computation s_i = f(beta * (u_i + v_i)) (2) where f is a monotonic non-linear function such as tanh. The noise, v_i, is chosen from a zero mean gaussian distribution whose width is proportional to the temperature. This closely approximates the distribution in Eq. (1) and comes from our hardware implementation, which supplies uncorrelated noise in the form of a binomial distribution [7] (Alspector, 1991b) to each neuron. The noise is slowly reduced as annealing proceeds. For mean-field learning, the noise is zero but the gain, beta, has a finite value proportional to 1/T taken from the annealing schedule. Thus the non-linearity sharpens as 'annealing' proceeds.
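Eqs. (1) and (2) can be sketched in a few lines of software. Python's uniform generator stands in for the chip's binomial noise source in the stochastic decision rule, and the seed and test values below are illustrative only.

```python
import math, random

def boltzmann_state(u, T, rng):
    # Eq. (1): the neuron is +1 with probability 1/(1 + exp(-u/T)),
    # else -1 (using the +/-1 state range assumed on the chip).
    return 1 if rng.random() < 1.0 / (1.0 + math.exp(-u / T)) else -1

def mean_field_state(u, beta):
    # Eq. (2) with zero noise: s = f(beta * u), f = tanh; the gain beta
    # is raised (the non-linearity sharpened) as 'annealing' proceeds.
    return math.tanh(beta * u)

rng = random.Random(0)
# At very low temperature the stochastic rule follows the net input's sign.
print(sum(boltzmann_state(2.0, 0.01, rng) for _ in range(100)))  # 100
# At high gain the mean-field state saturates toward +/-1.
print(round(mean_field_state(2.0, 50.0), 6))                     # 1.0
```

Lowering T in Eq. (1) and raising beta in Eq. (2) have the same qualitative effect: the unit's response sharpens from near-chance toward a hard threshold on its net input.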
The network is annealed in two phases, + and -, corresponding to clamping the outputs in the desired state (teacher phase) and allowing them to run free (student phase) at each pattern presentation. The learning rule which adjusts the weight w_ij from neuron j to neuron i is Delta w_ij = sgn[ (s_i s_j)^+ - (s_i s_j)^- ] (3) Note that this measures the instantaneous correlations after annealing. For both phases each synapse memorizes the correlations measured at the end of the annealing cycle and weight adjustment is then made (i.e., online). The sgn matches our hardware implementation, which changes weights by one each time. 2.2 Learning Microchip Fig. 1 shows the learning microchip which has been fabricated. It contains 32 neurons and 992 connections (496 bidirectional synapses). On the extreme right is a noise generator which supplies 32 uncorrelated pseudo-random noise sources [7] (Alspector, 1991b) to the neurons to their left. These noise sources are summed in the form of current along with the weighted post-synaptic signals from other neurons at the input to each neuron in order to implement the simulated annealing process of the stochastic Boltzmann machine. The neuron amplifiers implement a non-linear activation function which has variable gain to provide for the gain sharpening function of the mean-field technique. Figure 1. Photo of 32-Neuron Cascadable Learning Chip. The range of neuron gain can also be adjusted to allow for scaling in summing currents due to adjustable network size. Most of the area is occupied by the synapse array. Each synapse digitally stores a weight ranging from -15 to +15 as 4 bits plus a sign. It multiplies the voltage input from the presynaptic neuron by this weight to output a current.
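A software sketch of the synapse update of Eq. (3), including the chip's 4-bit-plus-sign weight range, could read as follows (the function and argument names are ours):

```python
def sign(x):
    return (x > 0) - (x < 0)

def update_weight(w, corr_plus, corr_minus, wmin=-15, wmax=15):
    # Eq. (3): step the weight by the sign of the difference between the
    # teacher-phase (+) and student-phase (-) correlations s_i * s_j,
    # clipping to the chip's 4-bit-plus-sign range of -15..+15.
    return max(wmin, min(wmax, w + sign(corr_plus - corr_minus)))

print(update_weight(0, corr_plus=1, corr_minus=-1))   # steps up: 1
print(update_weight(15, corr_plus=1, corr_minus=-1))  # saturates: 15
print(update_weight(0, corr_plus=1, corr_minus=1))    # no change: 0
```

Because only the sign of the correlation difference is used, every synapse moves by at most one count per pattern, matching the hardware's one-step weight change.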
One conductance direction can be disconnected so that we can experiment with asymmetric networks [8] (Allen, 1990). Although the synapses can have their weights set externally, they are designed to be adaptive. They store correlations, in parallel, using the local learning rule of Eq. (3) and adjust their weights accordingly. A neuron state range of -1 to 1 is assumed by the digital learning processor in each synapse on the chip. Fig. 2a shows a family of transfer functions of a neuron, showing how the gain is continually adjustable by varying a control voltage. Fig. 2b shows the transfer function of a synapse as different weights are loaded. The input linear range is about 2 volts. Figure 2. Transfer Functions of Electronic Neuron (2a) and Synapse (2b). Fig. 3 shows waveforms during exclusive-OR learning using the noise annealing of the Boltzmann machine. The top three traces are hidden neurons while the bottom trace is the output neuron which is clamped during the + phase. There are two input patterns presented during the time interval displayed, (-1,+1) and (+1,-1), both of which should output a +1 (note the state clamped to high voltage on the output neuron). Note the sequence of steps involved in each pattern presentation. 1) Outputs from the previous pattern are unclamped. 2) The new pattern is presented to the input neurons. 3) Noise is presented to the network and annealed. 4) The student phase latch captures the correlations. 5) Data from the neuron states is read into the data analyzer. 6) The output neurons are clamped (no annealing is necessary for a three layer network). 7) The teacher phase latch captures the correlations. 8) Weights are adjusted (go to step 1).
Figure 3. Neuron Signals during Learning (see text for steps involved). Fig. 4a shows an expanded view of 4 neuron waveforms during the noise annealing portion of the chip operation during Boltzmann learning. Fig. 4b shows a similar portion during gain annealing. Note that, at low gain, the neuron states start at 2.5 volts and settle to an analog value between 0 and 5 volts. Figure 4. Neuron Signals during Annealing with Noise (4a) and Gain (4b). For the purposes of classification for the digital problems we investigated, neurons are either +1 or -1 depending on whether their voltage is above or below 2.5 volts. This isn't clear until after settling. There are several instances in Figs. 3 and 4 where the neuron state changes after noise or gain annealing. The speed of pattern presentations is limited by the length of the annealing signal for system settling (100 µs in Fig. 3). The rest of the operations can be made negligibly short in comparison. The annealing time could be reduced to 10 µs or so, leading to a rate of about 100,000 patterns/sec.
In comparison, a 10-10-10 replication problem, which fits on a single chip, takes about a second per pattern on a SPARCstation 2. This time scales roughly with the number of weights on a sequential machine, but is almost constant on the learning chip due to its parallel nature. We can do even larger problems in a multiple chip system because the chip is designed to be cascaded with other similar chips in a board-level system which can be accessed by a computer. The nodes which sum current from synapses for net input into a neuron are available externally for connection to other chips and for external clamping of neurons or other external input. We are currently building such a system with a VME bus interface for tighter coupling to our software than is allowed by the GPIB instrument bus we are using at the time of this writing. 2.3 Learning Experiments To study learning as a function of problem size, we chose the parity and replication (identity) problems. This facilitates comparisons with our previous simulations [6] (Alspector, 1991a). The parity problem is the generalization of exclusive-OR for arbitrary input size. It is difficult because the classification regions are disjoint with every change of input bit, but it has only one output. The goal of the replication problem is for the output to duplicate the bit pattern found on the input after being encoded by the hidden layer. Note that the output bits can be shifted or scrambled in any order without affecting the difficulty of the problem. There are as many output neurons as input. For the replication problem, we chose the hidden layer to have the same number of neurons as the input layer, while for parity we chose the hidden layer to have twice the number as the input layer.
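The two benchmark targets can be generated exhaustively as below; note that the full training set is exponential in the number of inputs. States are coded as +/-1 to match the chip's neuron range, and the function names are ours.

```python
from itertools import product

def parity_target(bits):
    # Generalized exclusive-OR: +1 when an odd number of inputs are +1.
    return 1 if sum(b == 1 for b in bits) % 2 else -1

def replication_target(bits):
    # Identity task: the output simply duplicates the input bit pattern.
    return tuple(bits)

patterns = list(product((-1, 1), repeat=6))
print(len(patterns))                           # 64 patterns for 6 inputs
print(parity_target((1, -1, -1, -1, -1, -1)))  # one +1 -> odd -> +1
```

For 6 inputs there are 2^6 = 64 training patterns, which is why the patterns-to-saturation curves in Fig. 7 grow roughly exponentially with input size.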
Figure 5. X-window Display for Learning on Chip (5a) and in Software (5b). Fig. 5 shows the X-window display for 5 mean-field runs for learning the 4 input, 4 hidden, 4 output (4-4-4) replication on the chip (5a) and in the simulator (5b). The user specification is the same for both. Only the learning calculation module is different. Both have displays of the network topology, the neuron states (color and pie-shaped arc of circles) and the network weights (color and size of squares). There are also graphs of percent correct and error (Hamming distance for replication) and one of volatility of neuron states [9] (Alspector, 1992) as a measure of the system temperature. The learning curves look quite similar. In both cases, one of the 5 runs failed to learn to 100%. The boxes representing weights are signed currents (about 4 µA per unit weight) in 5a and integers from -15 to +15 in 5b. Volatility is plotted as a function of time (µsec) in 5a and shows that, in hardware (see Fig. 4), time is needed for a gain decrease at the start of the annealing as well as for the gain increase of the annealing proper. The volatility in 5b is plotted as a function of gain (BETA), which increases logarithmically in the simulator at each anneal step. Figure 6. On-chip Learning for 6 Input Replication (6a) and Parity (6b). Fig. 6a displays data from the average of 10 runs of 6-6-6 replication for both Boltzmann (BZ) and mean-field (MF) learning. While the percent correct saturates at 90% (70% for Boltzmann), the output error as measured by the Hamming distance between input and output is less than 1 bit out of 6.
Boltzmann learning is somewhat poorer in this experiment, probably because circuit parameters have not yet been optimized. We expect that a combination of noise and gain annealing will yield the best results but have not tested this possibility at this writing. Fig. 6b is a similar plot for 6-12-1 parity. We have done on-chip learning experiments using noise and gain annealing for parity and replication up to 8 input bits, nearly utilizing all the neurons on a single chip. To judge scaling behavior in these early experiments, we note the number of patterns required until no further improvement in percent correct is visible by eye. Fig. 7a plots, for an average of 10 runs of the parity problem, the number of patterns required to learn up to the saturation value for percent correct for both Boltzmann and mean-field learning. This scales roughly as an exponential in the number of inputs for learning on chip, just as it did in simulation [6] (Alspector, 1991a), since the training set size is exponential. The final percent correct is indicated on the plot. Fig. 7b plots the equivalent data for the replication problem. Outliers are due to low saturation values. Overall, the training time per pattern on-chip is quite similar to our simulations. However, in real time, it can be about 100,000 times as fast for a single chip and will be even faster for multiple chip systems. The speed for either learning or evaluation is roughly 10^8 connections per second per chip. Figure 7. Scaling of Parity (7a) and Replication (7b) Problem with Input Size. 3.
3. CONCLUSION

We have shown that Boltzmann and mean-field learning networks can be implemented in a parallel, analog VLSI system. While we report early experiments on a single-chip digital system, a multiple-chip VME-based electronic system with analog I/O is being constructed for use on larger problems.

ACKNOWLEDGMENT: This work has been partially supported by AFOSR contract F49620-90-C-0042, DEF.

REFERENCES

1. D.E. Rumelhart, G.E. Hinton & R.J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations, D.E. Rumelhart & J.L. McClelland (eds.), MIT Press, Cambridge, MA (1986), p. 318.
2. J.J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proc. Natl. Acad. Sci. USA, 79, 2554-2558 (1982).
3. D.H. Ackley, G.E. Hinton & T.J. Sejnowski, "A Learning Algorithm for Boltzmann Machines", Cognitive Science 9 (1985), pp. 147-169.
4. C. Peterson & J.R. Anderson, "A Mean Field Theory Learning Algorithm for Neural Networks", Complex Systems, 1, 995-1019 (1987).
5. J. Alspector, B. Gupta & R.B. Allen, "Performance of a Stochastic Learning Microchip", in Advances in Neural Information Processing Systems 1, D. Touretzky (ed.), Morgan-Kaufmann, Palo Alto (1989), pp. 748-760.
6. J. Alspector, R.B. Allen, A. Jayakumar, T. Zeppenfeld & R. Meir, "Relaxation Networks for Large Supervised Learning Problems", in Advances in Neural Information Processing Systems 3, R.P. Lippmann, J.E. Moody & D.S. Touretzky (eds.), Morgan-Kaufmann, Palo Alto (1991), pp. 1015-1021.
7. J. Alspector, J.W. Gannett, S. Haber, M.B. Parker & R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks", IEEE Trans. Circuits & Systems, 38, 109 (Jan., 1991).
8. R.B. Allen & J. Alspector, "Learning of Stable States in Stochastic Asymmetric Networks", IEEE Trans. Neural Networks, 1, 233-238 (1990).
9. J. Alspector, T. Zeppenfeld & S. Luna, "A Volatility Measure for Annealing in Feedback Neural Networks", to appear in Neural Computation (1992).
1991
A Neural Network for Motion Detection of Drift-Balanced Stimuli

Hilary Tunley*
School of Cognitive and Computing Sciences
Sussex University
Brighton, England.

Abstract

This paper briefly describes an artificial neural network for preattentive visual processing. The network is capable of determining image motion in a type of stimulus which defeats most popular methods of motion detection - a subset of second-order visual motion stimuli known as drift-balanced stimuli (DBS). The processing stages of the network described in this paper are integratable into a model capable of simultaneous motion extraction, edge detection, and the determination of occlusion.

1 INTRODUCTION

Previous methods of motion detection have generally been based on one of two underlying approaches: correlation; and gradient-filter. Probably the best known example of the correlation approach is the Reichardt movement detector [Reichardt 1961]. The gradient-filter (GF) approach underlies the work of Adelson and Bergen [Adelson 1985], and Heeger [Heeger 1988], amongst others. These motion-detecting methods cannot track DBS, because DBS lack essential components of information needed by such methods. Both the correlation and GF approaches impose constraints on the input stimuli. Throughout the image sequence, correlation methods require information that is spatiotemporally correlatable; and GF motion detectors assume temporally constant spatial gradients.

*Current address: Experimental Psychology, School of Biological Sciences, Sussex University.

The network discussed here does not impose such constraints. Instead, it extracts motion energy and exploits the spatial coherence of movement (defined more formally in the Gestalt theory of common fate [Koffka 1935]) to achieve tracking.
The remainder of this paper discusses DBS image sequences, then correlation methods, then GF methods in more detail, followed by a qualitative description of this network which can process DBS.

2 SECOND-ORDER AND DRIFT-BALANCED STIMULI

There has been a lot of recent interest in second-order visual stimuli, and DBS in particular ([Chubb 1989, Landy 1991]). DBS are stimuli which give a clear percept of directional motion, yet Fourier analysis reveals a lack of coherent motion energy, or energy present in a direction opposing that of the displacement (hence the term 'drift-balanced'). Examples of DBS include image sequences in which the contrast polarity of edges present reverses between frames. A subset of DBS, which are also processed by the network, are known as microbalanced stimuli (MBS). MBS contain no correlatable features and are drift-balanced at all scales. The MBS image sequences used for this work were created from a random-dot image in which an area is successively shifted by a constant displacement between each frame and simultaneously re-randomised.

3 EXISTING METHODS OF MOTION DETECTION

3.1 CORRELATION METHODS

Correlation methods perform a local cross-correlation in image space: the matching of features in local neighbourhoods (depending upon displacement/speed) between image frames underlies the motion detection. Examples of this method include [Van Santen 1985]. Most correlation models suffer from noise degradation in that any noise features extracted by the edge detection are available for spurious correlation. There has been much recent debate questioning the validity of correlation methods for modelling human motion detection abilities. In addition to DBS, there is also increasing psychophysical evidence ([Landy 1991, Mather 1991]) which correlation methods cannot account for.
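A microbalanced sequence of the kind described in Section 2 is straightforward to generate. The sketch below is illustrative only (the frame size, region geometry and displacement are arbitrary choices, not the paper's exact stimuli):

```python
import random

def random_dot_frame(width, height):
    """A frame of binary random dots (1 = dot, 0 = background)."""
    return [[random.randint(0, 1) for _ in range(width)] for _ in range(height)]

def next_mbs_frame(frame, region, d):
    """Shift a region right by d pixels and re-randomise its texture.

    `region` is (x0, x1), the column extent of the moving area.  The region
    displaces coherently, but because its dots are redrawn every frame there
    is no feature that a correlation method could match between frames.
    Returns the new frame and the new region extent.
    """
    height, width = len(frame), len(frame[0])
    x0, x1 = region
    nx0, nx1 = x0 + d, x1 + d
    new = [row[:] for row in frame]
    for y in range(height):
        for x in range(max(0, nx0), min(width, nx1)):
            new[y][x] = random.randint(0, 1)  # re-randomised interior
    return new, (nx0, nx1)

# A short sequence: a 10-pixel-wide microbalanced bar moving right at 2 px/frame.
frame = random_dot_frame(64, 16)
bar = (20, 30)
sequence = [frame]
for _ in range(4):
    frame, bar = next_mbs_frame(frame, bar, 2)
    sequence.append(frame)
```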
These factors suggest that correlation techniques are not suitable for low-level motion processing where no information is available concerning what is moving (as with MBS). However, correlation is a more plausible method when working with higher-level constructs such as tracking in model-based vision (e.g. [Bray 1990]).

3.2 GRADIENT-FILTER (GF) METHODS

GF methods use a combination of spatial filtering to determine edge positions and temporal filtering to determine whether such edges are moving. A common assumption used by GF methods is that spatial gradients are constant. A recent method by Verri [Verri 1990], for example, argues that flow detection is based upon the notion of tracking spatial gradient magnitude and/or direction, and that any variation in the spatial gradient is due to some form of motion deformation - i.e. rotation, expansion or shear. Whilst for scenes containing smooth surfaces this is a valid approximation, it is not the case for second-order stimuli such as DBS.

Figure 1: The Network (Schematic). R: Receptor Units - detect temporal changes in image intensity (polarity-independent); M: Motion Units - detect distribution of change information; O: Occlusion Units - detect changes in motion distribution; E: Edge Units - detect edges directly from occlusion.

4 THE NETWORK

A simplified diagram illustrating the basic structure of the network (based upon earlier work ([Tunley 1990, Tunley 1991a, Tunley 1991b])) is shown in Figure 1 (the edge detection stage is discussed elsewhere ([Tunley 1990, Tunley 1991b, Tunley 1992])).

4.1 INPUT RECEPTOR UNITS

The units in the input layer respond to rectified local changes in image intensity over time. Each unit has a variable adaption rate, resulting in temporal sensitivity - a fast adaption rate gives a high temporal filtering rate. The main advantages of this temporal averaging processing are:

• Averaging removes the D.C. component of image intensity.
This eliminates problematic gain for motion in high-brightness areas of the image [Heeger 1988].

• The random nature of DBS/MBS generation cannot guarantee that each pixel change is due to local image motion. Local temporal averaging smooths the moving regions, thus creating a more coherently structured input for the motion units.

The input units have a pointwise rectifying response governed by an autoregressive filter of the following form:

R_n = (1 − α)R_{n−1} + α|I_n − I_{n−1}|     (1)

where α ∈ [0, 1] is a variable which controls the degree of temporal filtering of the change in input intensity, n and n−1 are successive image frames, and R_n and I_n are the filter output and input, respectively. The receptor unit responses for two different α values are shown in Figure 2. α can thus be used to alter the amount of motion blur produced for a particular frame rate, effectively producing a unit with differing velocity sensitivity.

Figure 2: Receptor Unit Response: (a) α = 0.3; (b) α = 0.7.

4.2 MOTION UNITS

These units determine the coherence of image changes indicated by corresponding receptor units. First-order motion produces highly-tuned motion activity - i.e. a strong response in a particular direction - whilst second-order motion results in less coherent output. The operation of a basic motion detector can be described by:

M_{ijkd}(n) = |R_n(i', j') − R_{n−1}(i, j)|     (2)

where M is the detector, (i', j') is a point in frame n at a distance d from (i, j), a point in frame n−1, in the direction k. Therefore, for coherent motion (i.e. first-order), in direction k at a speed of d units/frame, as n → ∞:

M_{ijkd}(n) → 0     (3)

The convergence of motion activity can be seen using an example. The stimulus sequence used consists of a bar of re-randomising texture moving to the right in front of a leftward-moving background with the same texture (i.e. random dots).
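A single receptor unit of the form in Eq. (1) can be simulated at one pixel. The function and parameter names below are illustrative, and the rectified-difference input is one plausible reading of the filter rather than the original implementation:

```python
def receptor_response(intensities, alpha):
    """Rectified, temporally averaged response to intensity changes at one pixel.

    `intensities` holds the pixel's value in successive frames; `alpha` in [0, 1]
    sets the adaption rate: small alpha gives heavy temporal averaging (more
    motion blur), large alpha gives fast adaption.
    """
    r = 0.0
    out = []
    for prev, cur in zip(intensities, intensities[1:]):
        change = abs(cur - prev)            # rectified local change in intensity
        r = (1.0 - alpha) * r + alpha * change
        out.append(r)
    return out

# A step change followed by a static image: the response decays toward zero,
# removing the D.C. component of image intensity.
resp = receptor_response([0, 10, 10, 10, 10], alpha=0.5)
```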
The bar motion is second-order as it contains no correlatable features, whilst the background consists of a simple first-order shifting of dots between frames. Figures 3, 4 and 5 show two-dimensional images of the leftward motion activity for the stimulus after 3, 4 and 6 frames respectively. The background, which has coherent leftward movement (at speed d units/frame), is gradually reducing to zero whilst the microbalanced rightwards-moving bar remains active. The fact that a non-zero response is obtained for second-order motion suggests - according to the definition of Chubb and Sperling [Chubb 1989], under which first-order detectors produce no response to MBS - that this detector is second-order with regard to motion detection.

Figure 3: Leftward Motion Response to Third Frame in Sequence.

Figure 4: Leftward Motion Response to Fourth Frame.

Figure 5: Leftward Motion Response to Sixth Frame.

The motion units in this model are arranged on a hexagonal grid. This grid is known as a flow web as it allows information to flow, both laterally between units of the same type, and between the different units in the model (motion, occlusion or edge). Each flow web unit is represented by three variables - a position (a, b) and a direction k, which is evenly spaced between 0 and 360 degrees. In this model each k is an integer between 1 and k_max; the value of k_max can be varied to vary the sensitivity of the units. A way of using first-order techniques to discriminate between first- and second-order motions is through the concept of coherence. At any point in the motion-processed images in Figures 3-5, a measure of the overall variation in motion activity can be used to distinguish between the motion of the micro-balanced bar and its background. The motion energy for a detector with displacement d and orientation k, at position (a, b), can be represented by E_{abkd}.
For each motion unit, responding over distance d, in each cluster the energy present can be defined as:

E_{abkdn} = min_k(M_{abkd}) / M_{abkd}     (4)

where min_k(x_k) is the minimum value of x found searching over k values. If motion is coherent, and of approximately the correct speed for the detector M, then as n → ∞:

M_{abk_m d}(n) → 0     (5)

where k_m is in the actual direction of the motion. In reality n need only approach around 5 for convergence to occur. Also, more importantly, under the same convergence conditions:

E_{abk_m dn} → 1     (6)

This is due to the fact that the minimum activation value in a group of first-order detectors at point (a, b) will be the same as the actual value in the direction k_m. By similar reasoning, for non-coherent motion as n → ∞:

E_{abkdn} → 1  ∀k     (7)

in other words there is no peak of activity in a given direction. The motion energy is ambiguous at a large number of points in most images, except at discontinuities and on well-textured surfaces. A measure of motion coherence used for the motion units can now be defined as:

Mc(abkd) = E_{abkd} / Σ_{k=1}^{k_max} E_{abkd}     (8)

For coherent motion in direction k_m as n → ∞:

Mc(abk_m d) → 1     (9)

Whilst for second-order motion, also as n → ∞:

Mc(abkd) → 1/k_max  ∀k     (10)

Using this approach the total Mc activity at each position - regardless of coherence, or lack of it - is unity. Motion energy is the same in all moving regions; the difference is in the distribution, or tuning, of that energy. Figures 6, 7 and 8 show how motion coherence allows the flow web structure to reveal the presence of motion in microbalanced areas whilst not affecting the easily detected background motion for the stimulus.

Figure 6: Motion Coherence Response to Third Frame

Figure 7: Motion Coherence Response to Fourth Frame

Figure 8: Motion Coherence Response to Sixth Frame

4.3 OCCLUSION UNITS

These units identify discontinuities in second-order motion which are vitally important when computing the direction of that motion.
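The energy and coherence measures of Eqs. (4) and (8) can be sketched numerically. The detector activations below are made-up values in which a small M marks a good match in one direction:

```python
def coherence(m):
    """Motion-coherence tuning for one flow-web position.

    `m` lists, for each of the k_max directions, the activation M of the
    corresponding detector (small M = good match).  Per Eqs. (4) and (8),
    E_k = min_j(M_j) / M_k and Mc_k = E_k / sum_j(E_j), so the Mc values
    at a position always sum to one.
    """
    m_min = min(m)
    e = [m_min / mk for mk in m]
    total = sum(e)
    return [ek / total for ek in e]

coherent   = coherence([0.01, 1.0, 1.0, 1.0])  # near-perfect match in direction 0
incoherent = coherence([1.0, 1.0, 1.0, 1.0])   # microbalanced: no direction stands out
```

For the coherent case nearly all of the unit coherence mass lands on the matched direction; for the incoherent case it spreads evenly as 1/k_max per direction, as in Eqs. (9) and (10).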
They determine spatial and temporal changes in motion coherence and can process single or multiple motions at each image point. Established and newly-activated occlusion units work, through a gating process, to enhance continuously-displacing surfaces, utilising the concept of visual inertia. The implementation details of the occlusion stage of this model are discussed elsewhere [Tunley 1992], but some output from the occlusion units for the above second-order stimulus is shown in Figures 9 and 10. The figures show how the edges of the bar can be determined.

Figure 9: Occluding Motion Information: Occlusion activity produced by an increase in motion coherence activity.

Figure 10: Occluding Motion Information: Occlusion activity produced by a decrease in motion activity at a point. Some spurious activity is produced due to the random nature of the second-order motion information.

References

[Adelson 1985] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. 2, 1985.
[Bray 1990] A.J. Bray. Tracking objects using image disparities. Image and Vision Computing, 8, 1990.
[Chubb 1989] C. Chubb and G. Sperling. Second-order motion perception: Space/time separable mechanisms. In Proc. Workshop on Visual Motion, Irvine, CA, USA, 1989.
[Heeger 1988] D.J. Heeger. Optical flow using spatiotemporal filters. Int. J. Comp. Vision, 1, 1988.
[Koffka 1935] K. Koffka. Principles of Gestalt Psychology. Harcourt Brace, 1935.
[Landy 1991] M.S. Landy, B.A. Dosher, G. Sperling and M.E. Perkins. The kinetic depth effect and optic flow II: First- and second-order motion. Vis. Res. 31, 1991.
[Mather 1991] G. Mather. Personal communication.
[Reichardt 1961] W. Reichardt. Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. In W. Rosenblith, editor, Sensory Communications. Wiley NY, 1961.
[Van Santen 1985] J.P.H. Van Santen and G. Sperling. Elaborated Reichardt detectors. J. Opt. Soc. Am. 2, 1985.
[Tunley 1990] H. Tunley. Segmenting moving images. In Proc. Int. Neural Network Conf. (INNC90), Paris, France, 1990.
[Tunley 1991a] H. Tunley. Distributed dynamic processing for edge detection. In Proc. British Machine Vision Conf. (BMVC91), Glasgow, Scotland, 1991.
[Tunley 1991b] H. Tunley. Dynamic segmentation and optic flow extraction. In Proc. Int. Joint Conf. Neural Networks (IJCNN91), Seattle, USA, 1991.
[Tunley 1992] H. Tunley. Second-order motion processing: A distributed approach. CSRP 211, School of Cognitive and Computing Sciences, University of Sussex (forthcoming).
[Verri 1990] A. Verri, F. Girosi and V. Torre. Differential techniques for optic flow. J. Opt. Soc. Am. 7, 1990.

Recurrent Eye Tracking Network Using a Distributed Representation of Image Motion

P. A. Viola
Artificial Intelligence Laboratory
Massachusetts Institute of Technology

S. G. Lisberger
Department of Physiology
W.M. Keck Foundation Center for Integrative Neuroscience
Neuroscience Graduate Program
University of California, San Francisco

T. J. Sejnowski
Salk Institute, Howard Hughes Medical Institute
Department of Biology
University of California, San Diego

Abstract

We have constructed a recurrent network that stabilizes images of a moving object on the retina of a simulated eye. The structure of the network was motivated by the organization of the primate visual target tracking system. The basic components of a complete target tracking system were simulated, including visual processing, sensory-motor interface, and motor control. Our model is simpler in structure, function and performance than the primate system, but many of the complexities inherent in a complete system are present.
Figure 1: The overall structure of the visual tracking model.

1 Introduction

The fovea of the primate eye has a high density of photoreceptors. Images that fall within the fovea are perceived with high resolution. Perception of moving objects poses a particular problem for the visual system. If the eyes are fixed, a moving image will be blurred. When the image moves out of the fovea, resolution decreases. By moving their eyes to foveate and stabilize targets, primates ensure maximum perceptual resolution. In addition, active target tracking simplifies other tasks, such as spatial localization and spatial coordinate transformations (Ballard, 1991). Visual tracking is a feedback process, in which the eyes are moved to stabilize and foveate the image of a target. Good visual tracking performance depends on accurate estimates of target velocity and a stable feedback controller. Although many visual tracking systems have been designed by engineers, the primate visual tracking system has yet to be matched in its ability to perform in complicated environments, with unrestricted targets, and over a wide variety of target trajectories. The study of the primate oculomotor system is an important step toward building a system that can attain primate levels of performance. The model presented here can accurately and stably track a variety of targets over a wide range of trajectories and is a first step toward achieving this goal. Our model has four primary components: a model eye, a visual processing network, a motor interface network, and a motor control network (see Figure 1). The model eye receives a sequence of images from a changing visual world, synthetically rendered, and generates a time-varying output signal. The retinal signal is sent to the visual processing network, which is similar in function to the motion processing areas of the visual cortex.
The visual processing network constructs a distributed representation of image velocity. This representation is then used to estimate the velocity of the target on the retina. The retinal velocity of the target forms the input to the motor control network that drives the eye. The eye responds by rotating, which in turn affects incoming retinal signals. If these networks function perfectly, eye velocity will match target velocity. Our model generates smooth eye motions to stabilize smoothly moving targets. It makes no attempt to foveate the image of a target. In primates, eye motions that foveate targets are called saccades. Saccadic mechanisms are largely separate from the smooth eye motion system (Lisberger et al. 1987). We do not address them here. In contrast with most engineered systems, our model is adaptive. The networks used in the model were trained using gradient descent.¹ This training process circumvented the need for a separate calibration of the visual tracking system.

Figure 2: The structure of a motion energy unit. Each space-time separable unit has a receptive field that covers 16 pixels in space and 16 steps in time (for a total of 256 inputs). The shaded triangles denote complete projections.

2 Visual Processing

The middle temporal cortex (area MT) contains cells that are selective for the direction of visual motion. The neurons in MT are organized into a retinotopic map, and small lesions in this area lead to selective impairment of visual tracking in the corresponding regions of the visual field (Newsome and Pare, 1988). The visual processing networks in our model contain directionally-selective processing units that are arranged in a retinotopic map.

¹Network simulations were carried out with the SN2 neural network simulator.
The spatio-temporal motion energy filter of Adelson and Bergen (Adelson and Bergen, 1985) has many of the properties of directionally-selective cortical neurons; it is used as the basis for our visual processing network. We constructed a four-layer time-delay neural network that implements a motion energy calculation. A single motion-energy unit can be constructed from four intermediate units having separable spatial and temporal filters. Adelson and Bergen demonstrate that two spatial filters (of even and odd symmetry) and two temporal filters (temporal derivatives for fast and slow speeds) are sufficient to detect motion. The filters are combined to construct 4 intermediate units which project to a single motion energy unit. Because the spatial and temporal properties of the receptive field are separable, they can be computed separately and convolved together to produce the final output. The temporal response is therefore the same throughout the extent of the spatial receptive field. In our model, motion energy units are implemented as backpropagation networks. These units have a receptive field 16 pixels wide over a 16 time step window. Because the input weights are shared, only 32 parameters were needed for each space-time separable unit. Four space-time separable units project through a 16 unit combination layer to the output unit (see Figure 2). The entire network can be trained to approximate a variety of motion-energy filters. We trained the motion energy network in two different ways: as a single multilayered network and in stages. Staged training proceeded first by training intermediate units, then, with the intermediate units fixed, by training the three-layer network that combines the intermediate units to produce a single motion energy output. The output unit is active when a pattern in the appropriate range of spatial frequencies moves through the receptive field with appropriate velocity.
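The opponent combination of four space-time separable units can be illustrated with tiny hand-built filters standing in for the trained 16x16 receptive fields. The filter values, the 2-frame/3-pixel supports, and the exact opponent pairing are toy assumptions in the general Adelson-Bergen style, not the paper's learned weights:

```python
# Even- and odd-symmetric spatial filters, sustained and transient temporal filters.
F_EVEN = [-1, 2, -1]
F_ODD  = [-1, 0, 1]
G_SLOW = [1, 1]    # sustained
G_FAST = [1, -1]   # transient (temporal derivative)

def separable_response(f, g, patch):
    """Response of one space-time separable unit to a 2-frame image patch."""
    return sum(g[t] * sum(f[x] * patch[t][x] for x in range(len(f)))
               for t in range(len(g)))

def motion_energy(patch):
    """Combine four separable units into opponent direction-tuned energies."""
    a = separable_response(F_EVEN, G_SLOW, patch)
    b = separable_response(F_ODD,  G_FAST, patch)
    c = separable_response(F_ODD,  G_SLOW, patch)
    d = separable_response(F_EVEN, G_FAST, patch)
    energy_right = (a - b) ** 2 + (c + d) ** 2
    energy_left  = (a + b) ** 2 + (c - d) ** 2
    return energy_right, energy_left

rightward = [[0, 1, 0], [0, 0, 1]]  # a dot stepping right between two frames
leftward  = [[0, 0, 1], [0, 1, 0]]  # the same dot stepping left

right_energies = motion_energy(rightward)
left_energies  = motion_energy(leftward)
```

With these toy filters, the rightward-tuned energy dominates for the rightward stimulus and the leftward-tuned energy dominates for its mirror image, which is the directional selectivity the trained units approximate.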
Many such units are required for a range of velocities, spatial frequencies, and spatial locations. We use six different types of motion energy units - each tuned to a different temporal frequency - at each of the central 48 positions of a 64 pixel linear retina. The 6 populations form a distributed, velocity-tuned representation of image motion, for a total of 288 motion energy units. In addition to the motion energy filters, static spatial frequency filters are also computed and used in the interface network, one for each band and each position, for a total of 288 units. We chose an adaptive network rather than a direct motion energy calculation because it allows us to model the dynamic nature of the visual signal with greater flexibility. However, this raises complications regarding the set of training images. Assuming 5 bits of information at each retinal position, there are well over 10 to the 100th possible input patterns. We explored sine waves, random spots and a variety of spatial pre-filters, and found low-pass filtered images of moving random spots worked best. Typically we began the training process from a plausible set of weights, rather than from random values, to prevent the network from settling into an initial local minimum. Training proceeded for days until good performance was obtained on a testing set. Krauzlis and Lisberger (1989) have predicted that the visual stimulus to the visual tracking system in the brain contains information about the acceleration and impulse of the target as well as the velocity. Our motion energy networks are sensitive to target acceleration, producing transients for accelerating stimuli.

3 The Interface Network

The function of the interface is to take the distributed representation of the image motion and extract a single velocity estimate for the moving object. We use a relatively simple method that was adequate for tracking single objects without other moving distractors.
The activity level of a single motion energy unit is ambiguous. First, it is necessary for the object to have a feature that is matched to the spatial frequency bandpass of the motion energy unit. Second, there is an array of units for each spatial frequency, and the object will stimulate only a few of these at any given time. For instance, a large white object will have no features in its interior; a unit with its receptive field located in the interior can detect no motion. Conversely, detectors with receptive fields on the border between the object and the background will be strongly stimulated. We use two stages of processing to extract a velocity. In the first stage, the motion energy in each spatial frequency band is estimated by summing the outputs of the motion energy filters across the retina, weighted by the spatial frequency filter at each location. The six populations of spatial frequency units each yield one value. Next, a 6-6-1 feedforward network, trained using backpropagation, predicts target velocity from these values.

4 The Motor Control Network

In comparison with the visual processing network, the motor control network is quite small (see Figure 3). The goal of the network is to move the eye to stabilize the image of the object. The visual processing and interface networks convert images of the moving target into an estimate of the retinal velocity of the target. This retinal velocity can be considered a motor error. One approach to reducing this error is a simple proportional feedback controller, which drives the eye at a velocity proportional to the error. There is a large, 50-100 ms delay that occurs during visual processing in the primate visual system. In the presence of a large delay a proportional controller will either be inaccurate or unstable. For this reason simple proportional feedback is not sufficient to control tracking in the primate.
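The first interface stage (summing motion-energy outputs across the retina, weighted by the static spatial-frequency activity) can be sketched directly. The two bands and four retinal positions below are illustrative stand-ins for the model's 6 bands and 48 positions, and all numbers are invented:

```python
def band_energies(motion_energy, sf_activity):
    """Pool motion energy per spatial-frequency band across the retina.

    Both arguments are indexed [band][retinal_position].  Each band's pooled
    value weights the motion-energy output at every position by that band's
    static spatial-frequency filter there, so only positions where the band
    responds (e.g. the object's borders) contribute.  The pooled values are
    what a small feedforward net (the 6-6-1 net in the model) would map to a
    single target-velocity estimate.
    """
    return [sum(m * w for m, w in zip(me_band, sf_band))
            for me_band, sf_band in zip(motion_energy, sf_activity)]

me = [[0.0, 0.8, 0.8, 0.0],   # band 0: motion energy at the object's edges
      [0.1, 0.1, 0.1, 0.1]]   # band 1: weak, uniform energy
sf = [[0.0, 1.0, 1.0, 0.0],   # band 0's spatial filter responds at the edges
      [0.0, 0.0, 0.0, 0.0]]   # band 1 sees no matching feature
pooled = band_energies(me, sf)
```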
Tracking can be made stable and accurate by including an internal positive feedback pathway to prevent instability while preserving accuracy (Robinson, 1971). The motor control network was based on a model of the primate visual tracking motor control system by Lisberger and Sejnowski (1992). This recurrent artificial neural network includes both the smooth visual tracking system and the vestibulo-ocular system, which is important for compensating head movements. We use a simpler version of that model that does not have vestibular inputs.

Figure 3: The structure of the recurrent network. Each circle is a unit. Units within a box are not interconnected, and all units between boxes were fully interconnected as indicated by the arrows.

The network is constructed from units with continuous smooth temporal responses. The state of a unit is a function of previous inputs and previous state:

B_j(t + Δt) = (1 − Δt/τ)B_j(t) + (Δt/τ)I

where B_j(t) is the state of unit j at time t, τ is a time constant and I is the sigmoided sum of the weighted pre-synaptic activities. The resulting network is capable of smooth responses to inputs. The motor control network has 12 units, each with a time constant of 5 ms (except for a few units with longer delay). There is a time delay of 50 ms between the interface network and the control network (see Figure 3). The input to the network is retinal target velocity; the output is eye velocity. The motor control network is trained to track a target in the presence of the visual delay. The motor control network contains a positive feedback loop that is necessary to maintain accurate tracking even when the error signal falls to zero.
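The unit dynamics above amount to an Euler-discretized leaky integrator. A hypothetical single-unit sketch with a constant drive (the names and values are illustrative, not the model's trained weights):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step_state(state, drive, tau, dt):
    """One Euler step of the unit dynamics: the state leaks toward `drive`,
    the sigmoided sum of weighted pre-synaptic activity, with time constant tau."""
    return (1.0 - dt / tau) * state + (dt / tau) * drive

# Hold a constant net input: the state relaxes smoothly toward sigmoid(input).
tau, dt = 5.0, 1.0          # ms; 5 ms matches the model's unit time constant
target = sigmoid(2.0)
s = 0.0
trace = []
for _ in range(30):
    s = step_state(s, target, tau, dt)
    trace.append(s)
```

The smooth exponential approach to the driven value is what gives the network its continuous temporal responses.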
The overall control network also contains a negative feedback loop, since the output of the network affects subsequent inputs. The gradient descent optimization procedure uses the relationship between the output and the input during training; this relationship can be considered a model of the plant. It should be possible to use the same approach with more complex plants. The control network was trained with the visual processing network frozen. A training example consists of an object trajectory and the goal trajectory for the eye. A standard recurrent network training paradigm is used to adjust the weights to minimize the error between actual outputs and desired outputs for step changes in target velocity.

Figure 4: Response of the eye to a step in target velocity of 30 degrees per second. The solid line is target velocity, the dashed line is eye velocity. This experiment was performed with a target that did not appear in the training set.

5 Performance

After training the network on a set of trajectories for a single target, the tracking performance was equally good on new targets. Tracking is accurate and stable with little tendency to ring (see Figure 4). This good performance is surprising in the presence of a 50 millisecond delay in the visual feedback signal.² Stable tracking is not possible without the positive internal feedback loop in the model (eye velocity signal to the flocculus in Figure 3).

6 Limitations

The system that we have designed is a relatively small one, having a one-dimensional retina only 64 pixels wide. The eye and the target can only move in one dimension - along the length of the retina. The visual analysis that is performed is not, however, limited to one dimension. Motion energy filters are easily generalized to a two-dimensional retina. Our approach should be extendable to the two-dimensional tracking problem.
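The difficulty that the feedback delay poses for naive control can be seen in a toy simulation: a controller that simply integrates delayed retinal error is stable only for small gains, consistent with the discussion in Section 4. Everything here (the gains, the 5-step delay, the 30 deg/s step) is illustrative and is not the paper's trained controller:

```python
def track_step(gain, delay, target=30.0, steps=400):
    """Drive eye velocity from delayed retinal error: v += gain * error(t - delay).

    The controller only sees the error measured `delay` steps ago, standing in
    for the 50 ms visual processing latency.  Returns the velocity trajectory.
    """
    v = [0.0]
    for t in range(steps):
        seen = v[t - delay] if t >= delay else 0.0   # delayed observation of eye velocity
        v.append(v[t] + gain * (target - seen))
    return v[1:]

stable   = track_step(gain=0.1, delay=5)             # settles near the 30 deg/s step
unstable = track_step(gain=1.0, delay=5, steps=60)   # oscillates with growing amplitude
```

The small-gain run converges but slowly; raising the gain to react faster makes the delayed loop oscillate and diverge, which is why an internal (non-visual) feedback pathway is needed for tracking that is both fast and stable.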
The backgrounds of the images that we used for tracking were featureless. The current system cannot distinguish target features from background features. Also, the interface network was designed to track a single object in the absence of moving distractors. The next step is to expand this interface to model the attentional phenomena observed in primate tracking, especially the process of initial target acquisition.

²We selected time constants, delays, and sampling rates throughout the model to roughly approximate the time course of the primate visual tracking response. The model runs on a workstation taking approximately thirty times real-time to complete a processing step.

7 Conclusion

In simulations, our eye tracking model performed well. Many additional difficulties must be addressed, but we feel this system can perform well under real-world real-time constraints. Previous work by Lisberger and Sejnowski (1992) demonstrates that this visual tracking model can be integrated with inertial eye stabilization - the vestibulo-ocular reflex. Ultimately, it should be possible to build a physical system using these design principles. Every component of the system was designed using network learning techniques. The visual processing, for example, had a variety of components that were trained separately and in combinations. The architecture of the networks was based on the anatomy and physiology of the visual and oculomotor systems. This approach to reverse engineering is based on the existing knowledge of the flow of information through the relevant brain pathways. It should also be possible to use the model to develop and test theories about the nature of biological visual tracking.
This is just a first step toward developing a realistic model of the primate oculomotor system, but it has already provided useful predictions for the possible sites of plasticity during gain changes of the vestibulo-ocular reflex (Lisberger and Sejnowski, 1992).

References

[1] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models of the perception of motion. Journal of the Optical Society of America, 2(2):284-299, 1985.
[2] D. H. Ballard. Animate vision. Artificial Intelligence, 48:57-86, 1991.
[3] R. J. Krauzlis and S. G. Lisberger. A control systems model of smooth pursuit eye movements with realistic emergent properties. Neural Computation, 1:116-122, 1992.
[4] S. G. Lisberger, E. J. Morris, and L. Tychsen. Ann. Rev. Neurosci., 10:97-129, 1987.
[5] S. G. Lisberger and T. J. Sejnowski. Computational analysis suggests a new hypothesis for motor learning in the vestibulo-ocular reflex. Submitted for publication, 1992.
[6] W. T. Newsome and E. B. Pare. A selective impairment of motion perception following lesions of the middle temporal visual area (MT). J. Neuroscience, 8:2201-2211, 1988.
[7] D. A. Robinson. Models of oculomotor neural organization. In P. Bach y Rita and C. C. Collins, editors, The Control of Eye Movements, page 519. Academic, New York, 1971.
Against Edges: Function Approximation with Multiple Support Maps

Trevor Darrell and Alex Pentland
Vision and Modeling Group, The Media Lab
Massachusetts Institute of Technology
E15-388, 20 Ames Street
Cambridge MA, 02139

Abstract

Networks for reconstructing a sparse or noisy function often use an edge field to segment the function into homogeneous regions. This approach assumes that these regions do not overlap or have disjoint parts, which is often false. For example, images which contain regions split by an occluding object cannot be properly reconstructed using this type of network. We have developed a network that overcomes these limitations, using support maps to represent the segmentation of a signal. In our approach, the support of each region in the signal is explicitly represented. Results from an initial implementation demonstrate that this method can reconstruct images and motion sequences which contain complicated occlusion.

1 Introduction

The task of efficiently approximating a function is central to the solution of many important problems in perception and cognition. Many vision algorithms, for instance, integrate depth or other scene attributes into a dense map useful for robotic tasks such as grasping and collision avoidance. Similarly, learning and memory are often posed as a problem of generalizing from stored observations to predict future behavior, and are solved by interpolating a surface through the observations in an appropriate abstract space. Many control and planning problems can also be solved by finding an optimal trajectory given certain control points and optimization constraints.

In general, of course, finding solutions to these approximation problems is an ill-posed problem, and no exact answer can be found without the application of some prior knowledge or assumptions.
Typically, one assumes the surface to be fit is either locally smooth or has some particular parametric form or basis function description. Many successful systems have been built to solve such problems in the cases where these assumptions are valid. However, in a wide range of interesting cases where there is no single global model or universal smoothness constraint, such systems have difficulty. These cases typically involve the approximation or estimation of a heterogeneous function whose typical local structure is known, but which also includes an unknown number of abrupt changes or discontinuities in shape.

2 Approximation of Heterogeneous Functions

In order to accurately approximate a heterogeneous function with a minimum number of parameters or interpolation units, it is necessary to divide the function into homogeneous chunks which can be approximated parsimoniously. When there is more than one homogeneous chunk in the signal/function, the data must be segmented so that observations of one object do not intermingle with and corrupt the approximation of another region. One simple approach is to estimate an edge map to denote the boundaries of homogeneous regions in the function, and then to regularize the function within such boundaries. This method was formalized by Geman and Geman (1984), who developed the "line-process" to insert discontinuities in a regularization network. A regularized solution can be efficiently computed by a neural network, either using discrete computational elements or analog circuitry (Poggio et al. 1985; Terzopoulos 1988). In this context, the line-process can be thought of as an array of switches placed between interpolation nodes (Figure 1a). As the regularization proceeds in this type of network, the switches of the line process open and prevent smoothing across suspected discontinuities.
Essentially, these switches are opened when the squared difference between neighboring interpolated values exceeds some threshold (Blake and Zisserman 1987; Geiger and Girosi 1991). In practice a continuation method is used to avoid problems with local minima, and a continuous non-linearity is used in place of a boolean discontinuity. The term "resistive fuse" is often used to describe these connections between interpolation sites (Harris et al. 1990).

3 Limitations of Edge-based Segmentation

An edge-based representation assumes that homogeneous chunks of a function are completely connected, and have no disjoint subregions. For the visual reconstruction task, this implies that the projection of an object onto the image plane will always yield a single connected region. While this may be a reasonable assumption for certain classes of synthetic images, it is not valid for realistic natural images which contain occlusion and/or transparent phenomena. While a human observer can integrate over gaps in a region split by occlusion, the line process will prevent any such smoothing, no matter how close the subregions are in the image plane. When these disjoint regions are small (as when viewing an object through branches or leaves), the interpolated values provided by such a network will not be reliable, since observation noise can not be averaged over a large number of samples.

Figure 1: (a) Regularization network with line-process. Shaded circles represent data nodes, while open circles represent interpolation nodes. Solid rectangles indicate resistors; slashed rectangles indicate "resistive fuses". (b) Regularization network with explicit support maps; support process can be implemented by placing resistive fuses between data and interpolation nodes (other constraints on support are described in text).
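The line-process scheme described above can be sketched numerically as a minimal one-dimensional example; the function name, step size, smoothness weight, and threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

def regularize_with_line_process(data, lam=1.0, theta=0.5, iters=300, lr=0.1):
    """Gradient descent on sum_i (u_i - d_i)^2 + lam * sum_i (1 - l_i) (u_{i+1} - u_i)^2,
    where the line process l_i in {0, 1} opens (l_i = 1) whenever the squared
    difference between neighboring interpolated values exceeds theta."""
    u = data.astype(float).copy()
    for _ in range(iters):
        diff = np.diff(u)                          # u_{i+1} - u_i
        open_switch = diff ** 2 > theta            # suspected discontinuities
        smooth = np.where(open_switch, 0.0, diff)  # fuse "blows": no smoothing there
        grad = 2.0 * (u - data)
        grad[:-1] -= 2.0 * lam * smooth            # derivative of each pair term
        grad[1:] += 2.0 * lam * smooth
        u -= lr * grad
    return u
```

With theta very large the scheme reduces to ordinary membrane regularization; with theta very small every difference is treated as an edge and the data pass through essentially unsmoothed.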
Similarly, an edge-based approach cannot account for the perception of motion transparency, since these stimuli have no coherent local neighborhoods. Human observers can easily interpolate 3-D surfaces in transparent random-dot motion displays (Husain et al. 1989). In this type of display, points only last a few frames, and points from different surfaces are transparently intermingled. With a line-process, no smoothing or integration would be possible, since neighboring points in the image belong to different 3-D surfaces. To represent and process images containing this kind of transparent phenomena, we need a framework that does not rely on a global 2D edge map to make segmentation decisions. By generalizing the regularization/surface interpolation paradigm to use support maps rather than a line-process, we can overcome limitations the discontinuity approach has with respect to transparency.

4 Using Support Maps for Segmentation

Our approach decomposes a heterogeneous function into a set of individual approximations corresponding to the homogeneous regions of the function. Each approximation covers a specific region, and uses a support map to indicate which points belong to that region. Unlike an edge-based representation, the support of an approximation need not be a connected region; in fact, the support can consist of a scattered collection of independent points!

For a single approximation, it is relatively straightforward to compute a support map. Given an approximation, we can find the support it has in the function by thresholding the residual error of that approximation. In terms of analog regularization, the support map (or support "process") can be implemented by placing a resistive fuse between the data and the interpolating units (Figure 1b). A single support map is limited in usefulness, since only one region can be approximated.
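The residual-thresholding step for a single approximation can be sketched as follows, with `theta` standing in for the expected observation variance (the names are illustrative):

```python
import numpy as np

def support_map(residual, theta):
    """Support of one approximation: a point is supported (1) where the
    squared residual error falls below the expected noise level theta."""
    return (residual ** 2 < theta).astype(float)
```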
In fact, it reduces to the "outlier" rejection paradigm of certain robust estimation methods, which are known to have severe theoretical limits on the amount of outlier contamination they can handle (Meer et al. 1991; Li 1985). To represent true heterogeneous stimuli, multiple support maps are needed, with one support map corresponding to each homogeneous (but not necessarily connected) region. We have developed a method to estimate a set of these support maps, based on finding a minimal length description of the function. We adopt a three-step approach: first, we generate a set of candidate support maps using simple thresholding techniques. Second, we find the subset of these maps which minimally describes the function, using a network optimization to find the smallest set of maps that covers all the observations. Finally, we re-allocate the support in this subset, such that only the approximation with the lowest residual error supports a particular point.

4.1 Estimating Initial Support Fields

Ideally, we would like to consider all possible support patterns of a given dimension as candidate support maps. Unfortunately, the combinatorics of the problem makes this impossible; instead, we attempt to find a manageable number of initial maps which will serve as a useful starting point. A set of candidate approximations can be obtained in many ways. In our work we have initialized their surfaces either using a table of typical values or by fitting small fixed regions of the function. We denote each approximation of a homogeneous region as a tuple (a_i, s_i, u_i, r_i), where s_i = {s_ij} is a support map, u_i = {u_ij} is the approximated surface, and r_i = {r_ij} is the residual error computed by taking the difference of u_i with the observed data. (The scalar a_i is used in deciding which subset of approximations are used in the final representation.) The support fields are set by thresholding the residual field based on our expected (or assumed) observation variance θ:
s_ij = 1 if (r_ij)^2 < θ, and s_ij = 0 otherwise.

4.2 Estimating the Number of Regions

Perhaps the most critical problem in recovering a good heterogeneous description is estimating how many regions are in the function. Our approach to this problem is based on finding a small set of approximations which constitutes a parsimonious description of the function. We attempt to find a subset of the candidate approximations whose support maps are a minimal covering of the function, e.g. the smallest subset whose combined support covers the entire function. In non-degenerate cases this will consist of one approximation for each real region in the function.

The quantity a_i indicates if approximation i is included in the final representation. A positive value indicates it is "active" in the representation; a negative value indicates it is excluded from the representation. Initially a_i is set to zero for each approximation; to find a minimal covering, this quantity is dynamically updated as a function of the number of points uniquely supported by a particular support map. A point is uniquely supported in a support map if it is supported by that map and no other. Essentially, we find these points by modulating the support values of a particular approximation with shunting inhibition from all other active approximations. To compute c_ij, a flag that indicates whether or not point j of map i is uniquely supported, we multiply each support map with the product of the inverse of all other maps whose a_k value indicates it is active:

c_ij = s_ij Π_{k≠i} (1 - s_kj σ(a_k))

where σ(·) is a sigmoid function which converts the real-valued a_k into a multiplicative factor in the range (0, 1). The quantity c_ij is close to one at uniquely supported points, and close to zero for all other points. If there are a sufficient number of uniquely supported points in an approximation, we increase a_i; otherwise it is decreased:

d/dt a_i = Σ_j c_ij - α   (1)

where α specifies the penalty for adding another approximation region to the representation. This constant determines the smallest number of points we are willing to have constitute a distinct region in the function. The network defined by these equations has a corresponding Lyapunov function:

E = Σ_i a_i ( - Σ_j σ(s_ij) Π_{k≠i} (1 - σ(s_kj) σ(a_k)) + α )

so it will be guaranteed to converge to a local minimum if we bound the values of a_i (for fixed s_ij and α). After convergence, those approximations with positive a_i are kept, and the rest are discarded. Empirically we have found the local minima found by our network correspond to perceptually salient segmentations.

4.3 Refining Support Fields

Once we have a set of approximations whose support maps minimally cover the function (and presumably correspond to the actual regions of the function), we can refine the support using a more powerful criterion than a local threshold. First, we interpolate the residual error values through unsampled points, so that support can be computed even where there are no observations. Then we update the support maps based on which approximation has the lowest residual error for a given point:

s_ij = 1 if (r_ij)^2 < θ and (r_ij)^2 = min_{k|a_k>0} (r_kj)^2, and s_ij = 0 otherwise.

Figure 2: (a) Function consisting of constant regions with added noise. (b) Same function sparsely sampled. (c) Support maps found to approximate uniformly sampled function. (d) Support maps found for sparsely sampled function.

5 Results

We tested how well our network could reconstruct functions consisting of piecewise constant patches corrupted with random noise of known variance. Figure 2(a) shows the image containing the function used in this experiment. We initialized 256 candidate approximations, each with a different constant surface.
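The covering dynamics of Section 4.2 can be sketched as a small discrete-time simulation; the candidate maps, α, step size, and clipping bounds below are illustrative choices, not values from the paper:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def minimal_covering(s, alpha=2.0, steps=400, dt=0.1):
    """Integrate da_i/dt = sum_j c_ij - alpha with bounded a_i, where
    c_ij = s_ij * prod_{k != i} (1 - s_kj * sigmoid(a_k)) flags the
    points uniquely supported by map i."""
    n = s.shape[0]
    a = np.zeros(n)
    for _ in range(steps):
        c = np.empty_like(s)
        for i in range(n):
            inhib = [1.0 - s[k] * sigmoid(a[k]) for k in range(n) if k != i]
            c[i] = s[i] * np.prod(inhib, axis=0)
        a = np.clip(a + dt * (c.sum(axis=1) - alpha), -5.0, 5.0)
    return a > 0   # keep approximations with positive a_i

# Three candidate maps over 10 points: two real regions plus a
# redundant subset of the first region.
s = np.zeros((3, 10))
s[0, :5] = 1.0   # region 1
s[1, 5:] = 1.0   # region 2
s[2, :3] = 1.0   # redundant subset of region 1
```

With this α, the redundant map never accumulates enough uniquely supported points and is driven negative, while the two real regions become active.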
Since the image consisted of piecewise constant regions, the interpolation performed by each approximation was to compute a weighted average of the data over the supported points. Other experiments have used more powerful shape models, such as thin-plate or membrane Markov random fields, as well as piecewise-quadratic polynomials (Darrell et al. 1990). Using a penalty term which prevented approximations with 10 or fewer support points from being considered (α = 10.0), the network found 5 approximations which covered the entire image; their support maps are shown in Figure 2(c). The estimated surfaces corresponded closely to the values in the constant patches before noise was added. We ran the same experiment on a sparsely sampled version of this function, as shown in Figure 2(b) and (d), with similar results and only slightly reduced accuracy in the recovered shape of the support maps.

Figure 3: (a) First frame from image sequence and (b) recovered regions. (c) First frame from random dot sequence described in text. (d) Recovered parameter values across frames for dots undergoing looming motion; solid line plots Tz, dotted line plots Tx, and circles plot Ty for each frame.

We have also applied our framework to the problem of motion segmentation. For homogeneous data, a simple "direct" method can be used to model image motion (Horn and Weldon 1988). Under this assumption, the image intensities for a region centered at the origin undergoing a translation (Tx, Ty, Tz) satisfy at each point

0 = dI/dt + Tx dI/dx + Ty dI/dy + Tz (x dI/dx + y dI/dy)

where I is the image function. Each approximation computes a motion estimate by selecting a T vector which minimizes the square of the right hand side of this equation over its support map, using a weighted least-squares algorithm. The residual error at each point is then simply this constraint equation evaluated with the particular translation estimate.
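The per-region estimate can be sketched with a least-squares solve over the supported points, using the binary support map as the weights; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def estimate_translation(ix, iy, it, x, y, support):
    """Solve the direct-method constraint
        0 = I_t + Tx*I_x + Ty*I_y + Tz*(x*I_x + y*I_y)
    in the least-squares sense over a region's support map (used
    here as 0/1 weights)."""
    w = support.ravel().astype(bool)
    A = np.stack([ix.ravel(), iy.ravel(),
                  (x * ix + y * iy).ravel()], axis=1)[w]
    b = -it.ravel()[w]
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t  # (Tx, Ty, Tz)

def constraint_residual(ix, iy, it, x, y, t):
    """The constraint evaluated with a translation estimate: the
    per-point residual error."""
    return it + t[0] * ix + t[1] * iy + t[2] * (x * ix + y * iy)
```

Given image derivatives synthesized from a known translation, the solve recovers that translation and the residual vanishes on the supported points.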
Figure 3(a) shows the first frame of one sequence, containing a person moving behind a stationary plant. Our network began with 64 candidate approximations, with the initial motion parameters in each distributed uniformly along the parameter axes. Figure 3(b) shows the segmentation provided by our method. Two regions were found to be needed, one for the person and one for the plant. Most of the person has been correctly grouped together despite the occlusion caused by the plant's leaves. Points that have no spatial or temporal variation in the image sequence are not attributed to any approximation, since they are invisible to our motion model. Note that there is a cast shadow moving in synchrony with the person in the scene, and it is thus grouped with that approximation.

Finally, we ran our system on the finite-lifetime, transparent random dot stimulus described in Section 2. Since our approach recovers a global motion estimate for each region in each frame, we do not need to build explicit pixel-to-pixel correspondences over long sequences. We used two populations of random dots, one undergoing a looming motion and one a rightward shift. After each frame 10% of the dots died off and randomly moved to a new point on the 3-D surface. Ten 128x128 frames were rendered using perspective projection; the first is shown in Figure 3(c). We applied our method independently to each trio of successive frames, and in each case two approximations were found to account for the motion information in the scene. Figure 3(d) shows the parameters recovered for the looming motion. Similar results were found for the translating motion, except that the Tx parameter was nonzero rather than Tz. Since the recovered estimates were consistent, we would be able to decrease the overall uncertainty by averaging the parameter values over successive frames.

References

Geman, S., and Geman, D.
(1984) Stochastic relaxation, Gibbs distribution, and Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell. 6:721-741.
Poggio, T., Torre, V., and Koch, C. (1985) Computational vision and regularization theory. Nature 317(26).
Terzopoulos, D. (1988) The computation of visible surface representations. IEEE Trans. Pattern Anal. Machine Intell. 10:4.
Geiger, D., and Girosi, F. (1991) Parallel and deterministic algorithms from MRF's: surface reconstruction. IEEE Trans. Pattern Anal. Machine Intell. 13:401-412.
Blake, A., and Zisserman, A. (1987) Visual Reconstruction. MIT Press, Cambridge, MA.
Harris, J., Koch, C., Staats, E., and Luo, J. (1990) Analog hardware for detecting discontinuities in early vision. Intl. J. Computer Vision 4:211-233.
Husain, M., Treue, S., and Andersen, R. A. (1989) Surface interpolation in three-dimensional structure-from-motion perception. Neural Computation 1:324-333.
Meer, P., Mintz, D., and Rosenfeld, A. (1991) Robust regression methods for computer vision: A review. Intl. J. Computer Vision 6:60-70.
Li, G. (1985) Robust regression. In D. C. Hoaglin, F. Mosteller and J. W. Tukey (Eds.), Exploring Data, Tables, Trends and Shapes. John Wiley & Sons, N.Y.
Darrell, T., Sclaroff, S., and Pentland, A. P. (1990) Segmentation by minimal description. Proc. IEEE 3rd Intl. Conf. Computer Vision, Osaka, Japan.
Horn, B. K. P., and Weldon, E. J. (1988) Direct methods for recovering motion. Intl. J. Computer Vision 2:51-76.
Induction of Multiscale Temporal Structure

Michael C. Mozer
Department of Computer Science & Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

Abstract

Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time, e.g., relations among notes within a musical phrase, but not structure that occurs over longer time periods, e.g., relations among phrases. To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply can not be learned by standard back propagation.

Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1 can map a sequence of inputs to a sequence of outputs. Learning structure in temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. Thus,

Figure 1: A generic recurrent network architecture for processing input and output sequences.
Each box corresponds to a layer of units, each line to full connectivity between layers.

the context layer must hold on to relevant aspects of the input history until a later point in time at which they can be used. In principle, variants of back propagation for recurrent networks (Rumelhart, Hinton, & Williams, 1986; Williams & Zipser, 1989) can discover an appropriate representation in the context layer for a particular task. In practice, however, back propagation is not sufficiently powerful to discover arbitrary contingencies, especially those that span long temporal intervals or that involve high order statistics (e.g., Mozer, 1989; Rohwer, 1990; Schmidhuber, 1991).

Let me present a simple situation where back propagation fails. It involves remembering an event over an interval of time. A variant of this task was first studied by Schmidhuber (1991). The input is a sequence of discrete symbols: A, B, C, D, ..., X, Y. The task is to predict the next symbol in the sequence. Each sequence begins with either an X or a Y (call this the trigger symbol) and is followed by a fixed sequence such as ABCDE, which in turn is followed by a second instance of the trigger symbol, i.e., XABCDEX or YABCDEY. To perform the prediction task, it is necessary to store the trigger symbol when it is first presented, and then to recall the same symbol five time steps later. The number of symbols intervening between the two triggers (call this the gap) can be varied. By training different networks on different gaps, we can examine how difficult the learning task is as a function of gap. To better control the experiments, all input sequences had the same length and consisted of either X or Y followed by ABCDEFGHIJK. The second instance of the trigger symbol was inserted at various points in the sequence. For example, XABCDXEFGHIJK represents a gap of 4, YABCDEFGHYIJK a gap of 8. Each training set consisted of two sequences, one with X and one with Y.
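The sequence construction can be sketched as follows, writing the two trigger symbols as X and Y (`make_sequence` is an illustrative helper, not from the paper):

```python
def make_sequence(trigger, gap, body="ABCDEFGHIJK"):
    """Build one fixed-length training string: the trigger symbol,
    then `gap` symbols of the body, the trigger again, then the rest."""
    assert trigger in ("X", "Y") and 0 < gap <= len(body)
    return trigger + body[:gap] + trigger + body[gap:]
```

For instance, `make_sequence("X", 4)` gives XABCDXEFGHIJK and `make_sequence("Y", 8)` gives YABCDEFGHYIJK; the prediction target at each step is simply the next character.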
Different networks were trained on different gaps. The network architecture consisted of one input and output unit per symbol, and ten context units. Twenty-five replications of each network were run with different random initial weights. If the training set was not learned within 10000 epochs, the replication was counted as a "failure." The primary result was that training sets with gaps of 4 or more could not be learned reliably, as shown in Table 1.

Table 1: Learning contingencies across gaps

gap   % failures   mean # epochs to learn
2     0            468
4     36           7406
6     92           9830
8     100          10000
10    100          10000

The results are surprisingly poor. My general impression is that back propagation is powerful enough to learn only structure that is fairly local in time. For instance, in earlier work on neural net music composition (Mozer & Soukup, 1991), we found that our network could master the rules of composition for notes within a musical phrase, but not rules operating at a more global level: rules for how phrases are interrelated. The focus of the present work is on devising learning algorithms and architectures for better handling temporal structure at more global scales, as well as multiscale or hierarchical structure. This difficult problem has been identified and studied by several other researchers, including Miyata and Burr (1990), Rohwer (1990), and Schmidhuber (1991).

1 BUILDING A REDUCED DESCRIPTION

The basic idea behind my work involves building a reduced description (Hinton, 1988) of the sequence that makes global aspects more explicit or more readily detectable. The challenge of this approach is to devise an appropriate reduced description. I've experimented with a scheme that constructs a reduced description that is essentially a bird's eye view of the sequence, sacrificing a representation of individual elements for the overall contour of the sequence. Imagine a musical tape played at double the regular speed.
Individual sounds are blended together and become indistinguishable. However, coarser time-scale events become more explicit, such as an ascending trend in pitch or a repeated progression of notes. Figure 2 illustrates the idea. The curve in the left graph, depicting a sequence of individual pitches, has been smoothed and compressed to produce the right graph. Mathematically, "smoothed and compressed" means that the waveform has been low-pass filtered and sampled at a lower rate. The result is a waveform in which the alternating upwards and downwards flow is unmistakable. Multiple views of the sequence are realized using context units that operate with different time constants:

c_i(t) = τ_i c_i(t-1) + (1 - τ_i) f(net_i(t))   (1)

where c_i(t) is the activity of context unit i at time t, net_i(t) is the net input to unit i at time t, including activity both from the input layer and the recurrent context connections, f is the unit's activation function, and τ_i is a time constant associated with each unit that has the range (0,1) and determines the responsiveness of the unit, the rate at which its activity changes. With τ_i = 0, the activation rule reduces to the standard one and the unit can sharply change its response based on a new input. With large τ_i, the unit is sluggish, holding on to much of its previous value and thereby averaging the response to the net input over time. At the extreme of τ_i = 1, the second term drops out and the unit's activity becomes fixed. Thus, large τ_i smooth out the response of a context unit over time. Note, however, that what is smoothed is the activity of the context units, not the input itself as Figure 2 might suggest. Smoothing is one property that distinguishes the waveform in Figure 2b from the original.

Figure 2: (a) A sequence of musical notes. The vertical axis indicates the pitch, the horizontal axis time. Each point corresponds to a particular note. (b) A smoothed, compact view of the sequence.
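The smoothing effect of a large time constant can be sketched in a few lines; here tanh stands in for the unit's squashing function, which is an assumption, and the names are illustrative:

```python
import math

def context_step(c_prev, net, tau):
    """One update of a context unit: c(t) = tau*c(t-1) + (1-tau)*f(net(t)),
    with tanh standing in for the squashing function f (an assumption)."""
    return tau * c_prev + (1.0 - tau) * math.tanh(net)
```

A unit with tau = 0 jumps immediately to f(net), while a unit with tau = 0.8 moves only a fifth of the way per step, averaging its response to the net input over time.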
The other property, compactness, is also achieved by a large τ_i, although somewhat indirectly. The key benefit of the compact waveform in Figure 2b is that it allows a longer period of time to be viewed in a single glance, thereby explicating contingencies occurring in this interval during learning. The context unit activation rule (Equation 1) permits this. To see why this is the case, consider the relation between the error derivative with respect to the context units at time t, ∂E/∂c(t), and the error back propagated to the previous step, t - 1. One contribution to ∂E/∂c_i(t-1), from the first term in Equation 1, is

τ_i ∂E/∂c_i(t).   (2)

This means that when τ_i is large, most of the error signal in context unit i at time t is carried back to time t - 1. Intuitively, just as the activation of units with large τ_i changes slowly forward in time, the error propagated back through these units changes slowly too. Thus, the back propagated error signal can make contact with points further back in time, facilitating the learning of more global structure in the input sequence. Time constants have been incorporated into the activation rules of other connectionist architectures (Jordan, 1987; McClelland, 1979; Mozer, 1989; Pearlmutter, 1989; Pineda, 1987). However, none of this work has exploited time constants to control the temporal responsivity of individual units.

2 LEARNING AABA PHRASE PATTERNS

A simple simulation illustrates the benefits of temporal reduced descriptions. I generated pseudo musical phrases consisting of five notes in ascending chromatic order, e.g., F#2 G2 G#2 A2 A#2 or C4 C#4 D4 D#4 E4, where the first pitch was selected at random.¹ Pairs of phrases (call them A and B) were concatenated to form an AABA pattern, terminated by a special END marker.
The complete melody then consisted of 21 elements (four phrases of five notes followed by the END marker), an example of which is:

Two versions of CONCERT were tested, each with 35 context units. In the standard version, all 35 units had τ = 0; in the reduced description or RD version, 30 had τ = 0 and 5 had τ = 0.8. The training set consisted of 200 examples and the test set another 100 examples. Ten replications of each simulation were run for 300 passes through the training set. See Mozer and Soukup (1991) for details of the network architecture and note representations. Because of the way that the sequences are organized, certain pitches can be predicted based on local structure whereas other pitches require a more global memory of the sequence. In particular, the second through fifth pitches within a phrase can be predicted based on knowledge of the immediately preceding pitch. To predict the first pitch in the repeated A phrases and to predict the END marker, more global information is necessary. Thus, the analysis was split to distinguish between pitches requiring only local structure and pitches requiring more global structure. As Table 2 shows, performance requiring global structure was significantly better for the RD version (F(1,9)=179.8, p < .001), but there was only a marginally reliable difference for performance involving local structure (F(1,9)=3.82, p=.08). The global structure can be further broken down to prediction of the END marker and prediction of the first pitch of the repeated A phrases. In both cases, the performance improvement for the RD version was significant: 88.0% versus 52.9% for the end of sequence (F(1,9)=220, p < .001); 69.4% versus 61.2% for the first pitch (F(1,9)=77.6, p < .001). Experiments with different values of τ in the range .7-.95 yielded qualitatively similar results, as did experiments in which the A and B phrases were formed by random walks in the key of C major.
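The stimulus construction just described can be sketched as follows; the note spelling, octave range, and helper names are illustrative choices, not from the paper:

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def make_phrase(rng):
    """Five notes in ascending chromatic order from a random starting
    pitch, spelled as note name + octave, e.g. 'F#2' (illustrative)."""
    start = rng.randrange(12 * 2, 12 * 5 - 5)  # keep all five notes in range
    return [NOTES[p % 12] + str(p // 12) for p in range(start, start + 5)]

def make_melody(rng):
    """AABA pattern built from two random phrases, plus an END marker."""
    a, b = make_phrase(rng), make_phrase(rng)
    return a + a + b + a + ["END"]
```

Each melody is 21 symbols long, and only the repeated A phrase and the END marker demand a global memory of the sequence.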
¹One need not understand the musical notation to make sense of this example. Simply consider each note to be a unique symbol in a set of symbols having a fixed ordering. The example is framed in terms of music because my original work involved music composition.

Table 2: Performance on AABA phrases

structure   standard version   RD version
local       97.3%              96.7%
global      58.4%              75.6%

3 DETECTING CONTINGENCIES ACROSS GAPS REVISITED

I now return to the prediction task involving sequences containing two X's or Y's separated by a stream of intervening symbols. A reduced description network had no problem learning the contingency across wide gaps. Table 3 compares the results presented earlier for a standard net with ten context units and the results for an RD net having six standard context units (τ = 0) and four units having identical nonzero τ, in the range of .75-.95. More on the choice of τ below, but first observe that the reduced description net had a 100% success rate. Indeed, it had no difficulty with much wider gaps: I tested gaps of up to 25 symbols. The number of epochs to learn scales roughly linearly with the gap. When the task was modified slightly such that the intervening symbols were randomly selected from the set {A,B,C,D}, the RD net still had no difficulty with the prediction task. The bad news here is that the choice of τ can be important. In the results reported above, τ was selected to optimize performance. In general, a larger τ was needed to span larger gaps. For small gaps, performance was insensitive to the particular τ chosen. However, the larger the temporal gap that had to be spanned, the smaller the range of τ values that gave acceptable results. This would appear to be a serious limitation of the approach. However, there are several potential solutions.

1. One might try using back propagation to train the time constants directly.
This does not work particularly well on the problems I've examined, apparently because the path to an appropriate tau is fraught with local optima. Using gradient descent to fine tune tau, once it's in the right neighborhood, is somewhat more successful.

2. One might include a complete range of tau values in the context layer. It is not difficult to determine a rough correspondence between the choice of tau and the temporal interval to which a unit is optimally tuned. If sufficient units are used to span a range of intervals, the network should perform well. The down side, of course, is that this gives the network an excess of weight parameters with which it could potentially overfit the training data. However, because the different tau correspond to different temporal scales, there is much less freedom to abuse the weights here than, say, in a situation where additional hidden units are added to a feedforward network.

Table 3: Learning contingencies across gaps (revisited)

       standard net                   reduced description net
gap    % failures   mean # epochs     % failures   mean # epochs
                    to learn                       to learn
2      0            468               0            328
4      36           7406              0            584
6      92           9830              0            992
8      100          10000             0            1312
10     100          10000             0            1630

Figure 3: A sketch of the Schmidhuber (1991) architecture (a lower net cascaded into an upper net).

3. One might dynamically adjust tau as a sequence is presented based on external criteria. In Section 5, I discuss one such criterion.

4 MUSIC COMPOSITION

I have used music composition as a domain for testing and evaluating different approaches to learning multiscale temporal structure. In previous work (Mozer & Soukup, 1991), we designed a sequential prediction network, called CONCERT, that learns to reproduce a set of pieces of a particular musical style. CONCERT also learns structural regularities of the musical style, and can be used to compose new pieces in the same style. CONCERT was trained on a set of Bach pieces and a set of traditional European folk melodies.
The compositions it produced were reasonably pleasant, but were lacking in global coherence. The compositions tended to wander randomly with little direction, modulating haphazardly from major to minor keys, flip-flopping from the style of a march to that of a minuet. I attribute these problems to the fact that CONCERT had learned only local temporal structure. I have recently trained CONCERT on a third set of examples, waltzes, and have included context units that operate with a range of time constants. There is a consensus among listeners that the new compositions are more coherent. I am presently running more controlled simulations using the same musical training set and versions of CONCERT with and without reduced descriptions, and am attempting to quantify CONCERT's abilities at various temporal scales.

5 A HYBRID APPROACH

Schmidhuber (1991; this volume) has proposed an alternative approach to learning multiscale temporal structure in sequences. His approach, the chunking architecture, basically involves two (or more) sequential prediction networks cascaded together (Figure 3). The lower net receives each input and attempts to predict the next input. When it fails to predict reliably, the next input is passed to the upper net. Thus, once the lower net has been trained to predict local temporal structure, such structure is removed from the input to the upper net. This simplifies the task of learning global structure in the upper net.

Schmidhuber's approach has some serious limitations, as does the approach I've described. We have thus merged the two in a scheme that incorporates the strengths of each approach (Schmidhuber, Prelinger, Mozer, Blumenthal, & Mathis, in preparation). The architecture is the same as depicted in Figure 3, except that all units in the upper net have associated with them a time constant tau_u, and the prediction error in the lower net determines tau_u.
In effect, this allows the upper net to kick in only when the lower net fails to predict. This avoids the problem of selecting time constants, from which my approach suffers. It also avoids the drawback of Schmidhuber's approach that yes-or-no decisions must be made about whether the lower net was successful. Initial simulation experiments indicate robust performance of the hybrid algorithm.

Acknowledgements

This research was supported by NSF Presidential Young Investigator award IRI-9058450, grant 90-21 from the James S. McDonnell Foundation, and DEC external research grant 1250. Thanks to Jürgen Schmidhuber and Paul Smolensky for helpful comments regarding this work, and to Darren Hardy for technical assistance.

References

Hinton, G. E. (1988). Representing part-whole hierarchies in connectionist networks. Proceedings of the Tenth Annual Conference of the Cognitive Science Society.

Jordan, M. I. (1987). Attractor dynamics and parallelism in a connectionist sequential machine. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society (pp. 531-546). Hillsdale, NJ: Erlbaum.

McClelland, J. L. (1979). On the time relations of mental processes: An examination of systems of processes in cascade. Psychological Review, 86, 287-330.

Miyata, Y., & Burr, D. (1990). Hierarchical recurrent networks for learning musical structure. Unpublished manuscript.

Mozer, M. C. (1989). A focused back-propagation algorithm for temporal pattern recognition. Complex Systems, 3, 349-381.

Mozer, M. C., & Soukup, T. (1991). CONCERT: A connectionist composer of erudite tunes. In R. P. Lippmann, J. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems 3 (pp. 789-796). San Mateo, CA: Morgan Kaufmann.

Pearlmutter, B. A. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation, 1, 263-269.

Pineda, F. (1987). Generalization of back propagation to recurrent neural networks. Physical Review Letters, 59, 2229-2232.

Rohwer, R.
(1990). The 'moving targets' training algorithm. In D. S. Touretzky (Ed.), Advances in neural information processing systems 2 (pp. 558-565). San Mateo, CA: Morgan Kaufmann.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 318-362). Cambridge, MA: MIT Press/Bradford Books.

Schmidhuber, J. (1991). Neural sequence chunkers (Report FKI-148-91). Munich, Germany: Technische Universitaet Muenchen, Institut fuer Informatik.

Williams, R. J., & Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270-280.
Locomotion in a Lower Vertebrate: Studies of the Cellular Basis of Rhythmogenesis and Oscillator Coupling

James T. Buchanan
Department of Biology
Marquette University
Milwaukee, WI 53233

Abstract

To test whether the known connectivities of neurons in the lamprey spinal cord are sufficient to account for locomotor rhythmogenesis, a "connectionist" neural network simulation was done using identical cells connected according to experimentally established patterns. It was demonstrated that the network oscillates in a stable manner with the same phase relationships among the neurons as observed in the lamprey. The model was then used to explore coupling between identical oscillators. It was concluded that the neurons can have a dual role as rhythm generators and as coordinators between oscillators to produce the phase relations observed among segmental oscillators during swimming.

1 INTRODUCTION

One approach to analyzing neurobiological systems is to use simpler preparations that are amenable to techniques which can investigate the cellular, synaptic, and network levels of organization involved in the generation of behavior. This approach has yielded significant progress in the analysis of rhythm pattern generators in several invertebrate preparations (e.g., the stomatogastric ganglion of lobster, Selverston et al., 1983). We have been carrying out similar types of studies of locomotor rhythm generation in a vertebrate preparation, the lamprey spinal cord, which offers many of the same technical advantages of invertebrate nervous systems. To aid our understanding of how identified lamprey interneurons might participate in rhythmogenesis and in the coupling of oscillators, we have used neural network models.

2 FICTIVE SWIMMING

The neuronal correlate of swimming can be induced in the isolated lamprey spinal cord by exposure to glutamate, which is considered to be the principal endogenous excitatory neurotransmitter.
As in the intact swimming lamprey, this "fictive" swimming is characterized by periodic bursts of motoneuron action potentials in the ventral roots, and these bursts alternate between sides of the spinal cord and propagate in a head-to-tail direction during forward swimming (Cohen and Wallen, 1980; Wallen and Williams, 1984). Thus, the cellular mechanisms for generating the basic swimming pattern reside within the spinal cord, as has been demonstrated for many other vertebrates (Grillner, 1981).

Figure 1: Lamprey spinal interneurons. A, drawings of three types of interneurons after intracellular dye injections. B, inhibitory and excitatory postsynaptic potentials and the effects of selective antagonists. C, firing frequency of the first, second, and last spike intervals during a 400 ms current injection.

Figure 2: Connectivity and activity patterns. Top: synaptic connectivity among the interneurons and motoneurons (MN). Bottom: histograms summarizing the activity of cells recorded intracellularly during fictive swimming. Timing of activity of neurons is relative to the onset of the ipsilateral ventral root burst.

The swimming rhythm generator is thought to consist of a chain of coupled oscillators distributed throughout the length of the spinal cord.
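The chain-of-coupled-oscillators idea can be illustrated with a deliberately abstract phase model. This is not Buchanan's cell-level simulation; the coupling form, gain K, target lag delta, and intrinsic frequencies below are illustrative assumptions only:

```python
import math

# Toy phase-oscillator chain (an abstraction for illustration, not the
# cell-level network in the text): each segmental oscillator is pulled
# toward a fixed phase lag delta behind its rostral neighbor, so coupling
# can "buffer" intrinsic frequency differences between segments.

def simulate_chain(omegas, K=2.0, delta=0.1, dt=0.001, steps=100000):
    """Euler-integrate d(theta_i)/dt = omega_i + K*sin(theta_{i-1} - theta_i - delta)."""
    theta = [0.0] * len(omegas)
    for _ in range(steps):
        prev = theta[:]
        theta[0] = prev[0] + dt * omegas[0]
        for i in range(1, len(prev)):
            drive = K * math.sin(prev[i - 1] - prev[i] - delta)
            theta[i] = prev[i] + dt * (omegas[i] + drive)
    return theta

# Intrinsic frequencies differ between "segments", yet the coupling
# entrains every follower to the leading oscillator's rhythm.
omegas = [2.0, 2.3, 1.8, 2.1]
end = simulate_chain(omegas)
lags = [end[i] - end[i + 1] for i in range(3)]
print([round(l, 3) for l in lags])  # intersegmental lags settle to constant values
```

In the locked state every segment runs at the leader's frequency and each intersegmental lag settles to delta + arcsin((omega_1 - omega_i)/K), which is the phase-model analogue of the buffering capacity and constant phase lag described for the lamprey cord.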
The isolated spinal cord can be cut into pieces as small as two or three segments in length from any head-to-tail level and still exhibit alternating ventral root bursting upon application of glutamate. The intrinsic swimming frequency in each of these pieces of spinal cord can differ by as much as two-fold, and no consistent relationship between intrinsic frequency and the head-to-tail level from which the piece originated has been observed (Cohen, 1986). Thus, coupling among the oscillators must provide some "buffering capacity" to cope with these intrinsic frequency differences. Another feature of the coupling is the constancy of phase lag, such that over a wide range of swimming cycle periods, the delay of ventral root burst onsets between segments is a constant fraction of the cycle period (Wallen and Williams, 1984). Since the cycle period in swimming lamprey can vary over a ten-fold range, axonal conduction time probably is not a factor in the delay between segments.

3 SPINAL INTERNEURONS

In recent years, many classes of spinal neurons have been characterized using a variety of neurobiological techniques, particularly intracellular recording of membrane potential (Rovainen, 1974; Buchanan, 1982; Buchanan et al., 1989). Several of these classes of neurons are active during fictive swimming. These include the lateral interneurons (LIN), cells with axons projecting contralaterally and caudally (CC), and the excitatory interneurons (EIN). The LINs are large neurons with an ipsilaterally and caudally projecting inhibitory axon (Fig. 1A,B). The CC interneurons are medium-sized inhibitory cells (Fig. 1A). The EINs are small interneurons with ipsilaterally and either caudally or rostrally projecting axons (Fig. 1A,B,C). The axons of all these cell types project at least five segments and interact with neurons in multiple segments. The neurons have similar resting and firing properties.
They are indistinguishable in their resting potentials, their thresholds, and their action potential amplitudes, durations, and after-spike potentials. Their main differences are size-related parameters such as input resistance and membrane time constant. They fire action potentials throughout the duration of long, depolarizing current pulses, showing some adaptation (a declining frequency with successive action potentials). The plots of spike frequency vs. input current for these various cell types are generally monotonic, with a tendency to saturate at higher levels of input current (Fig. 1C) (Buchanan, 1991).

The synaptic connectivities of these cells have been established with simultaneous intracellular recording of pre- and post-synaptic neurons, and the results are summarized in Fig. 2 along with their activity patterns during fictive swimming. All of the cells exhibit oscillating membrane potentials, with depolarizing peaks which tend to occur during the ventral root burst and with repolarizing troughs which occur about one-half cycle later (Buchanan and Cohen, 1982). These oscillations appear to be due in large part to two phases of synaptic input: an excitatory depolarizing phase and an inhibitory repolarizing phase (Kahn, 1982; Russell and Wallen, 1983). The excitatory phase of motoneurons comes from EINs and the inhibitory phase from CCs. However, these interneurons not only interact with motoneurons but with other interneurons as well. So the possibility exists that these interneurons provide the synaptic drive for all neurons of the network, not just motoneurons. Additionally, it is possible that rhythmicity itself originates from the pattern of synaptic connectivity, because the circuit has a basic alternating network of reciprocal inhibition between CC interneurons on opposite sides of the spinal cord.
Reciprocal inhibition as an oscillatory network needs some form of burst termination, and this could be provided by the feedforward inhibition of ipsilateral CC interneurons by the LINs. This inhibition could also account for the early peak observed in many CC interneurons during fictive swimming (Fig. 2).

4 NEURAL NETWORK MODEL

The ability of the network of Fig. 2 to generate the basic oscillatory pattern of fictive swimming was tested using a "connectionist" neural network simulation (Buchanan, 1992). All of the cells of the neural network had identical S-shaped input-output curves and differed only in their excitatory levels and their synaptic connectivity, which was set according to the scheme of Fig. 2. If the excitation of CCs was made larger than that of LINs, the network would oscillate (Fig. 3). These oscillations began fairly promptly and could continue for at least thousands of cycles. The phase relations among the units were similar to those in the lamprey: cells on opposite sides of the spinal cord were anti-phasic while most cells on the same side of the cord were co-active. Significantly, both in the model and in the lamprey, the CCs were phase advanced, presumably due to their inhibition by LINs.

Figure 3: Activity of the neural network model for the lamprey locomotor circuit.

4.1 COUPLING

The neural network model of the lamprey swimming oscillator was further used to explore how the coupling among locomotor oscillators might be achieved. Two identical oscillator networks were coupled using the various pairs of cells in one network connected to pairs of cells in the second network. All nine pairs of possible connections were tested, since all of the interneurons interact with neurons in multiple segments.
The coupling was evaluated by several criteria based on observations of lamprey swimming: 1) the stability of the phase difference between oscillators and the rate of achieving the steady state, 2) the ability of the coupling to tolerate intrinsic frequency differences between oscillators, and 3) the constancy of the phase lag over a wide range of oscillator frequencies.

Figure 4: Coupling between two identical oscillators. A, the connectivity. B, steady-state coupling within a single cycle. C, constancy of phase lag over a range of oscillator periods. D, adding LIN-to-CC coupling from oscillator a to b reverses the phase, simulating backward swimming.

Each of the nine pairs of coupled interneurons between oscillators was capable of producing stable phase locking, although some coupling connections operated over a much wider range of synaptic weights than others. The steady-state phase difference between the oscillators and the rate of reaching it were also dependent on the synaptic weight of the coupling connections. The direction of the phase difference, that is, whether the postsynaptic oscillator was lagging or leading, depended both on the type of postsynaptic cell and the sign of the coupling input to it. If the postsynaptic cell was one which speeds the network (LIN or EIN), then its excitation by the coupling connection produced a lead of the postsynaptic network and its inhibition produced a lag. The opposite pattern held for CCs, which slow the network. An example of a coupling scheme that satisfied several criteria for lamprey-like coupling is shown in Fig. 4.
In this case (Fig. 4A), there was bidirectional, symmetric coupling of EINs in the two oscillators. This gave the network the ability to tolerate intrinsic frequency differences between the oscillators (buffering capacity). To provide a phase lag of oscillator b, EINs were connected to LINs bidirectionally but with greater weight in one direction (b to a). Such coupling reached a steady state within a single cycle (Fig. 4B), and the phase difference was maintained at the same value over a range of cycle periods (Fig. 4C).

4.2 BACKWARD SWIMMING

It has been shown recently that there is rhythmic presynaptic inhibition of interneuronal axons in the lamprey spinal cord (Alford et al., 1990). This type of cycle-by-cycle modulation of synaptic strength could account for shifts in phase coupling in the lamprey, such as occurs when the animal switches to brief bouts of backward swimming. One mechanism for backward swimming might be the inhibitory connection of LINs to CCs. The LINs have axons which descend up to 50 segments (one-half body length). In the neural network model, this descending inhibition of CC interneurons promotes backward swimming, i.e., a phase lead of the postsynaptic oscillators. Thus, presynaptic inhibition of these connections in nonlocal segments would allow forward swimming, while a removal of this presynaptic inhibition would initiate backward swimming (Fig. 4D).

5 CONCLUSIONS

The modeling described here demonstrates that the identified interneurons in the lamprey spinal cord may be multi-functional. They are known to contribute to the synaptic input to motoneurons during fictive swimming and thus to the shaping of the final motor output, but they may also function as components of the rhythm generating network itself. Finally, by virtue of their multi-segmental connections, they may have the additional role of providing the coupling signals among oscillators.
Further experimental work will be required to determine which of these connections are actually used in the lamprey spinal cord for these functions.

References

S. Alford, J. Christenson, & S. Grillner. (1990) Presynaptic GABAA and GABAB receptor-mediated phasic modulation in axons of spinal motor interneurons. Eur. J. Neurosci., 3:107-117.

J.T. Buchanan. (1982) Identification of interneurons with contralateral, caudal axons in the lamprey spinal cord: synaptic interactions and morphology. J. Neurophysiol., 47:961-975.

J.T. Buchanan. (1991) Electrophysiological properties of lamprey spinal neurons. Soc. Neurosci. Abstr., 17:1581.

J.T. Buchanan. (1992) Neural network simulations of coupled locomotor oscillators in the lamprey spinal cord. Biol. Cybern., 74: in press.

J.T. Buchanan & A.H. Cohen. (1982) Activities of identified interneurons, motoneurons, and muscle fibers during fictive swimming in the lamprey and effects of reticulospinal and dorsal cell stimulation. J. Neurophysiol., 47:948-960.

J.T. Buchanan, S. Grillner, S. Cullheim, & M. Risling. (1989) Identification of excitatory interneurons contributing to generation of locomotion in lamprey: structure, pharmacology, and function. J. Neurophysiol., 62:59-69.

A.H. Cohen. (1986) The intersegmental coordinating system of the lamprey: experimental and theoretical studies. In S. Grillner, P.S.G. Stein, D.G. Stuart, H. Forssberg, R.M. Herman (eds.), Neurobiology of Vertebrate Locomotion, 371-382. London: Macmillan.

A.H. Cohen & P. Wallen. (1980) The neuronal correlate of locomotion in fish: "fictive swimming" induced in an in vitro preparation of the lamprey spinal cord. Exp. Brain Res., 41:11-18.

S. Grillner. (1981) Control of locomotion in bipeds, tetrapods, and fish. In V.B. Brooks (ed.), Handbook of Physiology, Sect. 1, The Nervous System, Vol. II, Motor Control, 1179-1236. Maryland: Waverly Press.

J.A. Kahn.
(1982) Patterns of synaptic inhibition in motoneurons and interneurons during fictive swimming in the lamprey, as revealed by Cl- injections. J. Comp. Neurol., 147:189-194.

C.M. Rovainen. (1974) Synaptic interactions of identified nerve cells in the spinal cord of the sea lamprey. J. Comp. Neurol., 154:189-204.

D.F. Russell & P. Wallen. (1983) On the control of myotomal motoneurones during "fictive swimming" in the lamprey spinal cord in vitro. Acta Physiol. Scand., 117:161-170.

A.I. Selverston, J.P. Miller, & M. Wadepuhl. (1983) Cooperative mechanisms for the production of rhythmic movements. Symp. Soc. Exp. Biol., 37:55-88.

P. Wallen & T.L. Williams. (1984) Fictive locomotion in the lamprey spinal cord in vitro compared with swimming in the intact and spinal animal. J. Physiol., 64:862-871.
SINGLE NEURON MODEL: RESPONSE TO WEAK MODULATION IN THE PRESENCE OF NOISE

A. R. Bulsara and E. W. Jacobs
Naval Ocean Systems Center, Materials Research Branch, San Diego, CA 92129

F. Moss
Physics Dept., Univ. of Missouri, St. Louis, MO 63121

ABSTRACT

We consider a noisy bistable single neuron model driven by a periodic external modulation. The modulation introduces a correlated switching between states driven by the noise. The information flow through the system, from the modulation to the output switching events, leads to a succession of strong peaks in the power spectrum. The signal-to-noise ratio (SNR) obtained from this power spectrum is a measure of the information content in the neuron response. With increasing noise intensity, the SNR passes through a maximum, an effect which has been called stochastic resonance. We treat the problem within the framework of a recently developed approximate theory, valid in the limits of weak noise intensity, weak periodic forcing and low forcing frequency. A comparison of the results of this theory with those obtained from a linear system FFT is also presented.

INTRODUCTION

Recently, there has been an upsurge of interest in single- or few-neuron nonlinear dynamics (see e.g. Li and Hopfield, 1989; Tuckwell, 1988; Paulus, Gass and Mandell, 1990; Aihara, Takabe and Toyoda, 1990). However, the precise relationship between the many-neuron connected model and a single effective neuron dynamics has not been examined in detail. Schieve, Bulsara and Davis (1991) have considered a network of N symmetrically interconnected neurons embodied, for example, in the "connectionist" models of Hopfield (1982, 1984) or Shamma (1989) (the latter corresponding to a mammalian auditory network). Through an adiabatic elimination procedure, they have obtained, in closed form, the dynamics of a single neuron from the system of coupled differential equations describing the N-neuron problem.
The problem has been treated both deterministically and stochastically (through the inclusion of additive and multiplicative noise terms). It is important to point out that the work of Schieve, Bulsara, and Davis does not include a priori a self-coupling term, although the inclusion of such a term can be readily implemented in their theory; this has been done by Bulsara and Schieve (1991). Rather, their theory results in an explicit form of the self-coupling term, in terms of the parameters of the remaining neurons in the network. This term, in effect, renormalizes the self-coupling term in the Shamma and Hopfield models. The reduced or "effective" neuron model is expected to reproduce some of the gross features of biological neurons. The fact that simple single neuron models, such as the model to be considered in this work, can indeed reproduce several features observed in biological experiments has been strikingly demonstrated by Longtin, Bulsara and Moss (1991) through their construction of the inter-spike-interval histograms (ISIHs) using a Schmidt trigger to model the neuron. The results of their simple model agree remarkably well with data obtained in two different experiments (on the auditory nerve fiber of squirrel monkey (Rose, Brugge, Andersen and Hind, 1967) and on the cat visual cortex (Siegal, 1990)).

In this work, we consider such a "reduced" neural element subject to a weak periodic external modulation. The modulation introduces a correlated switching between the bistable states, driven by the noise, with the signal-to-noise ratio (SNR) obtained from the power spectrum being taken as a measure of the information content in the neuron response. As the additive noise variance increases, the SNR passes through a maximum. This effect has been called "stochastic resonance" and describes a phenomenon in which the noise actually enhances the information content, i.e., the observability of the signal.
Stochastic resonance has been observed in a modulated ring laser experiment (McNamara, Wiesenfeld and Roy, 1988; Vemuri and Roy, 1989) as well as in electron paramagnetic resonance experiments (Gammaitoni, Martinelli, Pardi and Santucci, 1991) and in a modulated magnetoelastic ribbon (Spano and Ditto, 1991). The introduction of multiplicative noise (in the coefficient of the sigmoid transfer function) tends to degrade this effect.

THE MODEL; STOCHASTIC RESONANCE

The reduced neuron model consists of a single Hopfield-type computational element, which may be modeled as a R-C circuit with nonlinear feedback provided by an operational amplifier having a sigmoid transfer function. The equation (which may be rigorously derived from a fully connected network model as outlined in the preceding section) may be cast in the form,

    dx/dt + a x - b tanh x = x_0 + F(t),    (1)

where F(t) is Gaussian delta-correlated noise with zero mean and variance 2D, x_0 being a dc input (which we set equal to zero for the remainder of this work). An analysis of (1), including multiplicative noise effects, has been given by Bulsara, Boss and Jacobs (1989). For the purposes of the current work, we note that the neuron may be treated as a particle in a one-dimensional potential given by,

    U(x) = a x^2 / 2 - b ln cosh x,    (2)

x being the one-dimensional state variable representing the membrane potential. In general, the coefficients a and b depend on the details of the interaction of our reference neuron with the remaining neurons in the network (Schieve, Bulsara and Davis, 1990). The potential described by (2) is bimodal for b/a > 1, with the extrema occurring at (we set a = 1 throughout the remainder of this work),

    x = 0, +-c,    c = b - b(1 - tanh b)/(1 - b sech^2 b) ~= b tanh b,    (3)

the approximation holding for large b. Note that the N-shaped characteristic inherent in the firing dynamics derived from the Hodgkin-Huxley equations (Rinzel and Ermentrout, 1990) is markedly similar to the plot of dU/dx vs.
x for the simple bistable system (1). For a stationary potential, and for D << U_0, where U_0 is the depth of the deterministic potential, the probability that a switching event will occur in unit time, i.e. the switching rate, is given by the Kramers frequency (Kramers, 1940),

    r_0 = [ (1/D) Int dy exp(U(y)/D) Int dz exp(-U(z)/D) ]^(-1),    (4a)

which, for small noise, may be cast in the form (the local equilibrium assumption of Kramers),

    r_0 ~= (2 pi)^(-1) [ |U''(0)| U''(c) ]^(1/2) exp(-U_0/D),    (4b)

where U''(x) == d^2 U/dx^2. We now include a periodic modulation term eps sin(wt) on the right-hand side of (1) (note that for eps sufficiently small, eps << 2(b-1)^(3/2)/(3 sqrt(b)), one does not observe switching in the noise-free system). This leads to a modulation (i.e. rocking) of the potential (2) with time: an additional term -x eps sin(wt) is now present on the right-hand side of (2). In this case, the Kramers rate (4) becomes time-dependent:

    r(t) ~= r_0 exp(-eps c sin(wt)/D),    (5)

which is accurate only for eps << U_0 and w << [U''(+-c)]^(1/2). The latter condition is referred to as the adiabatic approximation. It ensures that the probability density corresponding to the time-modulated potential is approximately stationary (the modulation is slow enough that the instantaneous probability density can "adiabatically" relax to a succession of quasi-stationary states). We now follow the work of McNamara and Wiesenfeld (1989), developing a two-state model by introducing a probability of finding the system in the left or right well of the potential. A rate equation is constructed based on the Kramers rate r(t) given by (5). Within the framework of the adiabatic approximation, this rate equation may be integrated to yield the time-dependent conditional probability density function for finding the system in a given well of the potential.
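As a quick numerical check on the well positions, the nonzero extrema of the potential (2) (with a = 1) can be computed directly from the fixed-point condition x = b tanh x; the helper name here is mine:

```python
import math

# Numerical check of the extrema of U(x) = x**2/2 - b*ln(cosh x) (a = 1):
# U'(x) = x - b*tanh(x), so the nonzero minima solve x = b*tanh(x). They
# exist only for b > 1, and for large b the root approaches b*tanh(b).

def well_position(b, iters=100):
    """Fixed-point iteration for the positive root of x = b*tanh(x)."""
    x = b                  # start at x = b; the map contracts near the root
    for _ in range(iters):
        x = b * math.tanh(x)
    return x

c = well_position(2.5)
print(abs(c - 2.5 * math.tanh(c)) < 1e-9)    # True: c is a fixed point
print(abs(c - 2.5 * math.tanh(2.5)) < 0.01)  # True: c ~= b*tanh(b) already at b = 2.5
print(well_position(0.8) < 1e-6)             # True: no nonzero well for b < 1
```

The iteration converges quickly because the map x -> b tanh x is a contraction near the root, and it confirms both the bimodality condition and the large-b approximation quoted in the text.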
This leads directly to the autocorrelation function <x(t) x(t + tau)> and finally, via the Wiener-Khinchine theorem, to the power spectral density P(Omega). The details are given by Bulsara, Jacobs, Zhou, Moss and Kiss (1991):

    P(Omega) = [1 - 2 r_0^2 eps^2 c^2 / (D^2 (4 r_0^2 + w^2))] 8 c^2 r_0 / (4 r_0^2 + Omega^2)
               + [4 pi c^4 r_0^2 eps^2 / (D^2 (4 r_0^2 + w^2))] delta(Omega - w),    (6)

where the first term on the right-hand side represents the noise background, the second term being the signal strength. Taking into account the finite bandwidth of the measuring system, we replace (for the purpose of comparison with experimental results) the delta function in (6) by the quantity (Delta w)^(-1), where Delta w is the width of a frequency bin in the (experimental) Fourier transformation. We introduce the signal-to-noise ratio SNR = 10 log R in decibels, where R is given by

    R = 1 + [pi eps^2 c^2 r_0 / (2 D^2)] (Delta w)^(-1) [1 - 2 r_0^2 eps^2 c^2 / (D^2 (4 r_0^2 + w^2))]^(-1).    (7)

In writing down the above expressions, the approximate Kramers rate (4b) has been used. However, in what follows, we discuss the effects of replacing it by the exact expression (4a). The location of the maximum of the SNR is found by differentiating the above equation; it depends on the amplitude eps and the frequency w of the modulation, as well as the additive noise variance D and the parameters a and b in the potential. The SNR computed via the above expression increases as the modulation frequency is lowered relative to the Kramers frequency. Lowering the modulation frequency also sharpens the resonance peak, and shifts it to lower noise values, an effect that has been demonstrated, for example, by Bulsara, Jacobs, Zhou, Moss and Kiss (1991). The above may be readily explained. The effect of the weak modulating signal is to alternately raise and lower the potential well with respect to the barrier height U_0. In the absence of noise and for eps << U_0, the system cannot switch states, i.e. no information is transferred to the output.
In the presence of noise, however, the system can switch states through stochastic activation over the barrier even when the modulation alone is subthreshold. Although the switching process is statistical, the transition probability is periodically modulated by the external signal. Hence, the output will be correlated, to some degree, with the input signal (the modulation "clocks" the escape events, and the whole process will be optimized if the noise by itself produces, on average, two escapes within one modulation cycle). Figure 1 shows the SNR as a function of the noise variance 2D. The potential barrier height is V_0 ≈ 2.4 for the b = 2.5 case considered. Curves corresponding to the adiabatic expression (7), as well as the SNR obtained through an exact (numerical) calculation of the Kramers rate using (4a), are shown, along with the data points obtained via direct numerical simulation of (1). The Kramers rate at the maximum (2D ≈ V_0) of the SNR curve is 0.72. This is much greater than the driving frequency ω = 0.0393 used in this plot. The curve computed using the exact expression (4a) fits the numerically obtained data points better than the adiabatic curve at high noise strengths. This is to be expected in light of the approximations used in deriving (4b) from (4a). Also, the expression (6) has been derived from a two-state theory (taking no account of the potential). At low noise, we expect the two-state theory to agree with the actual system more closely. This is reflected in the resonance curves of figure 1, with the adiabatic curve differing (at the maximum) from the data points by approximately 1 db. We reiterate that the SNR, as well as the agreement between the data points and the theoretical curves, improves as the modulation frequency is lowered relative to the Kramers rate (for a fixed frequency this can be achieved by changing the potential barrier height via the parameters a and b in (2)).
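A direct simulation of the kind used for the data points of figure 1 can be sketched with an Euler-Maruyama integrator; the quartic potential and the parameter values below are again stand-ins for the system (1), chosen so that the modulation alone is subthreshold:

```python
import numpy as np

# Euler-Maruyama integration of a rocked bistable system,
#   dx = (x - x**3 + eps*sin(w*t)) dt + sqrt(2*D*dt) * N(0,1),
# a quartic stand-in for (1); eps = 0.1 is below the static switching threshold.
eps, w, dt, nstep = 0.1, 0.05, 0.01, 100_000

def count_switches(D, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=nstep)
    x, state, flips = -1.0, -1, 0
    for n in range(nstep):
        x += (x - x**3 + eps*np.sin(w*n*dt))*dt + np.sqrt(2*D*dt)*xi[n]
        # two-state bookkeeping with hysteresis thresholds at +-0.5
        if x > 0.5 and state < 0:
            state, flips = 1, flips + 1
        elif x < -0.5 and state > 0:
            state, flips = -1, flips + 1
    return flips
```

Without noise the trajectory stays rocking inside one well; with moderate noise it hops between wells at roughly the Kramers rate, partially synchronized to the drive.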
Bulsara, Jacobs, and Moss

On the same plot (figure 1), we show the SNR obtained by computing directly the Fourier transform of the signal and noise. At very low noise, the "ideal linear filter" yields results that are considerably better than stochastic resonance. However, at moderate-to-high noise, the stochastic resonance, which may be looked upon as a "nonlinear filter", offers at least a 2.5 db improvement for the parameters of the figure. As indicated above, the improvement in performance achieved by stochastic resonance over the "ideal linear filter" may be enhanced by raising the Kramers frequency of the nonlinear filter relative to the modulation frequency ω. In fact, as long as the basic conditions of stochastic resonance are realized, the nonlinear filter will outperform the best linear filter except at very low noise.

Fig 1. SNR using adiabatic theory, eqn. (7), with (b, ω, ε) = (2.5, 0.0393, 0.3) and r_0 given by (4b) (solid curve) and (4a) (dotted curve). Data points correspond to SNR obtained via direct simulation of (1) (frequency resolution = 6.1 x 10^-6 Hz). Dashed curve corresponds to best possible linear filter (see text).

Multiplicative Noise Effects

We now consider the case when the neuron is exposed to both additive and multiplicative noise. In this case, we set b(t) = b_0 + ε(t), where

<ε(t)> = 0,  <ε(t) ε(s)> = 2D_m δ(t - s).  (8)

In a real system such fluctuations might arise through the interaction of the neuron with other neurons in the network or with external fluctuations.
In fact, Schieve, Bulsara and Davis (1991) have shown that when one derives the "reduced" neuron dynamics in the form (1) from a fully connected N-neuron network with fluctuating synaptic couplings, the resulting dynamics contain multiplicative noise terms of the kind being discussed here. Even Langevin noise by itself can introduce a pitchfork bifurcation into the long-time dynamics of such a reduced neuron model under the appropriate conditions (Bulsara and Schieve, 1991). In an earlier publication (Bulsara, Boss and Jacobs, 1989), it was shown that these fluctuations can qualitatively alter the behavior of the stationary probability density function that describes the stochastic response of the neuron. In particular, the multiplicative noise may induce additional peaks or erase peaks already present in the density (see for example Horsthemke and Lefever, 1984). In this work we maintain D_m sufficiently small that such effects are absent. In the absence of modulation, one can write down a Fokker-Planck equation for the probability density function p(x, t) describing the neuron response:

∂p/∂t = -(∂/∂x)[α(x) p] + (1/2)(∂²/∂x²)[β(x) p],  (9)

where α(x) ≡ -x + b_0 tanh x + D_m tanh x sech²x and β(x) ≡ 2(D + D_m tanh²x), D being the additive noise intensity. In the steady state, (9) may be solved to yield the stationary density

p(x) ∝ exp[-U(x)],  (10)

where the "macroscopic potential" U(x), a function analogous to the function U(x) defined in (2), is

U(x) = -2 ∫^x [α(z)/β(z)] dz + ln β(x).  (11)

From (11), one obtains the turning points of the potential through the solution of the transcendental equation

x - b_0 tanh x + D_m tanh x sech²x = 0.  (12)

The modified Kramers rate, r_0m, for this x-dependent diffusion process has been derived by Englund, Snapp and Schieve (1984):

r_0m = (2π)^(-1) [ U^(2)(x_1) |U^(2)(0)| ]^(1/2) exp[U(x_1) - U(0)],  (13)
where the maximum of the potential occurs at x = 0 and the left minimum occurs at x = x_1. If we now assume that a weak sinusoidal modulation ε sin ωt is present, we may once again introduce this term into the potential as in the preceding case, again making the adiabatic approximation. We easily obtain for the modified time-dependent Kramers rate,

r_±(t) = (4π)^(-1) [ U^(2)(x_1) |U^(2)(0)| ]^(1/2) exp[ U(x_1) - U(0) ± 2 ∫_0^x1 (ε sin ωt / β(z)) dz ].  (14)

Following the same procedure as we used in the additive noise case, we can obtain the ratio R = 1 + S/(Δω N) for the case of both noises being present. The result is

R = 1 + (π η_1² / (2η_0)) (Δω)^(-1) [1 - 2η_1² / (4η_0² + ω²)]^(-1),  (15)

where

η_0 = r_0m,  (16a)

and

η_1 = η_0 ∫_0^x1 [2ε/β(z)] dz = η_0 [ε/(D + D_m)] [x_1 + m^(1/2) tan^(-1)(m^(1/2) tanh x_1)],  m ≡ D_m/D.  (16b)

Fig 2. Effect of multiplicative noise, eqn. (15). (b, ω, ε) = (2, 0.31, 0.4) and D_m = 0 (top curve), 0.1 (middle curve) and 0.2 (bottom curve).

In figure 2 we show the effects of both additive and multiplicative noise by plotting the SNR for a fixed external frequency ω = 0.31 with (b_0, ε) = (2, 0.4) as a function of the additive noise intensity D. The curves correspond to different values of D_m, with the uppermost curve corresponding to D_m = 0, i.e., the case of additive noise only. We note that increasing D_m leads to a decrease in the SNR as well as a shift in its maximum to lower values of D. These effects are easily explained using the results of Bulsara, Boss and Jacobs (1989), wherein it was shown that the effect of multiplicative noise is to decrease, on average, the potential barrier height and to shift the locations of the stable steady states. This leads to a degradation of the stochastic resonance effect at large D_m while shifting the location of the maximum toward lower D.

THE POWER SPECTRUM

We turn now to the power spectrum obtained via direct numerical simulation of the dynamics (1).
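Before turning to the spectrum, the claim that multiplicative noise lowers the effective barrier can be checked directly from the macroscopic potential (11) and the turning-point condition (12); the values of b_0, D, and D_m below are illustrative:

```python
import numpy as np

# Macroscopic potential (11) and turning-point condition (12) for b0 = 2:
# multiplicative noise Dm lowers the effective barrier U(0) - U(x1).
b0, D = 2.0, 0.1

def turning(Dm):
    # solve (12): x - b0*tanh(x) + Dm*tanh(x)*sech(x)**2 = 0 by bisection
    f = lambda x: x - b0*np.tanh(x) + Dm*np.tanh(x)/np.cosh(x)**2
    lo, hi = 0.5, 3.0
    for _ in range(60):
        mid = (lo + hi)/2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi)/2

def U(x, Dm):
    # eq. (11), with the alpha/beta integrand evaluated on a grid
    xs = np.linspace(0.0, x, 2001)
    a = -xs + b0*np.tanh(xs) + Dm*np.tanh(xs)/np.cosh(xs)**2
    b = 2*(D + Dm*np.tanh(xs)**2)
    integ = np.sum((a/b)[1:] + (a/b)[:-1])/2*(xs[1] - xs[0])
    return -2*integ + np.log(b[-1])

barrier = lambda Dm: U(0.0, Dm) - U(turning(Dm), Dm)
```

With these numbers, barrier(0.2) comes out well below barrier(0.0), consistent with the shift of the SNR maximum toward lower D in figure 2.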
It is evident that a time series obtained by numerical simulation of (1) would display switching events between the stable states of the potential, the residence time in each state being a random variable. The intrawell motion consists of a random component superimposed on a harmonic component, the latter increasing as the amplitude ε of the modulation increases. In the low noise limit, the deterministic motion dominates. However, the adiabatic theory used in deriving the expressions (6) and (7) is a two-state theory that simply follows the switching events between the states but takes no account of this intrawell motion. Accordingly, in what follows, we draw the distinction between the full dynamics obtained via direct simulation of (1) and the "equivalent two-state dynamics" obtained by passing the output through a two-state filter. Such a filter is realized digitally by replacing the time series obtained from a simulation of (1) with a time series wherein the x variable takes on the values x = ±c, depending on which state the system is in. Figure 3 shows the power spectral density obtained from this equivalent two-state system. The top curve represents the signal-free case and the bottom curve shows the effects of turning on the signal.

Fig 3. Power spectral density via direct simulation of (1). (b, ω, ε, 2D) = (1.6056, 0.03, 0.65, 0.25). Bottom curve: ε = 0 case.

Two features are readily apparent:

1. The power spectrum displays odd harmonics of the modulation; this is a hallmark of stochastic resonance (Zhou and Moss, 1990). If one destroys the symmetry of the potential (1) (through the introduction of a small dc driving term, for example), the even harmonics of the modulation appear.

2. The noise floor is lowered when the signal is turned on. This effect is particularly striking in the two-state dynamics.
It stems from the fact that the total area under the spectral density curves in figure 3 (i.e. the total power) must be conserved (a consequence of Parseval's theorem). The power in the signal spikes therefore grows at the expense of the background noise power. This is a unique feature of weakly modulated bistable noisy systems of the type under consideration in this work, and graphically illustrates the ability of noise to assist information flow to the output (the signal). The effect may be quantified on examining equation (6) above. The noise power spectral density (represented by the first term on the right-hand-side) decreases as the term 2r_0²ε²c² {D²(4r_0² + ω²)}^(-1) approaches unity. This reduction in the noise floor is most pronounced when the signal is of low frequency (compared to the Kramers rate) and large amplitude. A similar effect may be observed in the spectral density corresponding to the full system dynamics. In this case, the total power is only approximately conserved (in a finite bandwidth) and the effect is not so pronounced.

DISCUSSION

In this paper we have presented the details of a cooperative stochastic process that occurs in nonlinear systems subject to weak deterministic modulating signals embedded in a white noise background. The so-called "stochastic resonance" phenomenon may actually be interpreted as a noise-assisted flow of information to the output. The fact that such simple nonlinear dynamic systems (e.g. an electronic Schmitt trigger) are readily realizable in hardware points to the possible utility of this technique (far beyond the application to signal processing in simple neural networks) as a nonlinear filter. We have demonstrated that, by suitably adjusting the system parameters (in effect changing the Kramers rate), we can optimize the response to a given modulation frequency and background noise.
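The power-conservation argument invoked above is easy to verify on any two-state signal: its total power is exactly c², so by Parseval's theorem whatever power appears in the signal spikes must come out of the noise background. A minimal sketch (the trajectory here is a synthetic random walk, not a simulation of (1)):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(0.1*rng.normal(size=4096))   # stand-in trajectory for the full dynamics

# Two-state filter with hysteresis: output is +-c according to the last well visited.
c = 1.0
s = np.empty_like(x)
state = c
for i, xi in enumerate(x):
    if xi > 0.5:
        state = c
    elif xi < -0.5:
        state = -c
    s[i] = state

power_time = np.mean(s**2)                   # exactly c**2 for any +-c signal
F = np.fft.fft(s)
power_freq = np.sum(np.abs(F)**2)/len(s)**2  # Parseval: identical total power
```

Because the time-domain power of the two-state output is pinned at c², the frequency-domain total is fixed as well, regardless of how that power is divided between spikes and background.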
In a practical system, one can move the location and height of the bell-shaped response curve of figure 1 by changing the potential parameters and, possibly, infusing noise into the system. The noise-enhancement of the SNR improves with decreasing frequency. This is a hallmark of stochastic resonance and provides one with a possible filtering technique at low frequency. It is important to point out that all the effects reported in this work have been reproduced via analog simulations (Bulsara, Jacobs, Zhou, Moss and Kiss, 1991; Zhou and Moss, 1990). Recently a new approach to the processing of information in noisy nonlinear dynamic systems, based on the probability density of residence times in one of the stable states of the potential, has been developed by Zhou, Moss and Jung (1990). This technique, which offers an alternative to the FFT, was applied by Longtin, Moss and Bulsara (1991) in their construction of the inter-spike-interval histograms that describe neuronal spike trains in the central nervous system. Their work points to the important role played by noise in the processing of information by the central nervous system. The beneficial role of noise has already been recognized by Buhmann and Schulten (1986, 87). They found that noise, deliberately added to the deterministic equations governing individual neurons in a network, significantly enhanced the network's performance, and concluded that "... the noise ... is an essential feature of the information processing abilities of the neural network and not a mere source of disturbance better suppressed ..."

Acknowledgements

This work was carried out under funding from the Office of Naval Research grant nos. N00014-90-AF-00001 and N00014-90-J-1327.

References

Aihara K., Takake T., and Toyoda M., 1990; "Chaotic Neural Networks", Phys. Lett. A144, 333-340.

Buhmann J., and Schulten K., 1986; "Influence of Noise on the Behavior of an Autoassociative Neural Network", in J.
Denker (ed) Neural Networks for Computing (AIP Conf. Proceedings, vol. 151).

Buhmann J., and Schulten K., 1987; "Influence of Noise on the Function of a 'Physiological' Neural Network", Biol. Cyber. 56, 313-327.

Bulsara A., Boss R. and Jacobs E., 1989; "Noise Effects in an Electronic Model of a Single Neuron", Biol. Cyber. 61, 212-222.

Bulsara A., Jacobs E., Zhou T., Moss F. and Kiss L., 1991; "Stochastic Resonance in a Single Neuron Model: Theory and Analog Simulation", J. Theor. Biol. 154, 531-555.

Bulsara A. and Schieve W., 1991; "Single Effective Neuron: Macroscopic Potential and Noise-Induced Bifurcations", Phys. Rev. A, in press.

Englund J., Snapp R., Schieve W., 1984; "Fluctuations, Instabilities and Chaos in the Laser-Driven Nonlinear Ring Cavity", in E. Wolf (ed) Progress in Optics, vol. XXI (North Holland, Amsterdam).

Gammaitoni L., Martinelli M., Pardi L., and Santucci S., 1991; "Observation of Stochastic Resonance in Bistable Electron Paramagnetic Resonance Systems", preprint.

Hopfield J., 1982; "Neural Networks and Physical Systems with Emergent Computational Capabilities", Proc. Natl. Acad. Sci. 79, 2554-2558.

Hopfield J., 1984; "Neurons with Graded Responses have Collective Computational Abilities like those of Two-State Neurons", Proc. Natl. Acad. Sci. 81, 3088-3092.

Horsthemke W., and Lefever R., 1984; Noise-Induced Transitions (Springer-Verlag, Berlin).

Kramers H., 1940; "Brownian Motion in a Field of Force and the Diffusion Model of Chemical Reactions", Physica 7, 284-304.

Li Z., and Hopfield J., 1989; "Modeling the Olfactory Bulb and its Neural Oscillatory Processings", Biol. Cyber. 61, 379-392.

Longtin A., Bulsara A., and Moss F., 1991; "Time-Interval Sequences in Bistable Systems and the Noise-Induced Transmission of Information by Sensory Neurons", Phys. Rev. Lett. 67, 656-659.

McNamara B., Wiesenfeld K., and Roy R., 1988; "Observation of Stochastic Resonance in a Ring Laser", Phys. Rev. Lett. 60, 2626-2629.
McNamara B., and Wiesenfeld K., 1989; "Theory of Stochastic Resonance", Phys. Rev. A39, 4854-4869.

Paulus M., Gass S., and Mandell A., 1990; "A Realistic Middle-Layer for Neural Networks", Physica D40, 135-155.

Rinzel J., and Ermentrout B., 1989; "Analysis of Neural Excitability and Oscillations", in Methods in Neuronal Modeling, eds. C. Koch and I. Segev (MIT Press, Cambridge, MA).

Rose J., Brugge J., Anderson D., and Hind J., 1967; "Phase-locked Response to Low-frequency Tones in Single Auditory Nerve Fibers of the Squirrel Monkey", J. Neurophysiol. 30, 769-793.

Schieve W., Bulsara A. and Davis G., 1990; "Single Effective Neuron", Phys. Rev. A43, 2613-2623.

Shamma S., 1989; "Spatial and Temporal Processing in Central Auditory Networks", in Methods in Neuronal Modeling, eds. C. Koch and I. Segev (MIT Press, Cambridge, MA).

Siegal R., 1990; "Nonlinear Dynamical System Theory and Primary Visual Cortical Processing", Physica 42D, 385-395.

Spano M., and Ditto W., 1991; "Experimental Observation of Stochastic Resonance in a Magnetoelastic Ribbon", preprint.

Tuckwell H., 1989; Stochastic Processes in the Neurosciences (SIAM, Philadelphia).

Vemuri G., and Roy R., 1990; "Stochastic Resonance in a Bistable Ring Laser", Phys. Rev. A39, 4668-4674.

Zhou T., and Moss F., 1990; "Analog Simulations of Stochastic Resonance", Phys. Rev. A41, 4255-4264.

Zhou T., Moss F., and Jung P., 1991; "Escape-Time Distributions of a Periodically Modulated Bistable System with Noise", Phys. Rev. A42, 3161-3169.
1991
47
515
Fast Learning with Predictive Forward Models

Carlos Brody*
Dept. of Computer Science
IIMAS, UNAM
Mexico D.F. 01000 Mexico.
e-mail: carlos@hope.caltech.edu

Abstract

A method for transforming performance evaluation signals distal both in space and time into proximal signals usable by supervised learning algorithms, presented in [Jordan & Jacobs 90], is examined. A simple observation concerning differentiation through models trained with redundant inputs (as one of their networks is) explains a weakness in the original architecture and suggests a modification: an internal world model that encodes action-space exploration and, crucially, cancels input redundancy to the forward model is added. Learning time on an example task, cart-pole balancing, is thereby reduced about 50 to 100 times.

1 INTRODUCTION

In many learning control problems, the evaluation used to modify (and thus improve) control may not be available in terms of the controller's output: instead, it may be in terms of a spatial transformation of the controller's output variables (in which case we shall term it as being "distal in space"), or it may be available only several time steps into the future (termed as being "distal in time"). For example, control of a robot arm may be exerted in terms of joint angles, while evaluation may be in terms of the endpoint cartesian coordinates; furthermore, we may only wish to evaluate the endpoint coordinates reached after a certain period of time: the coordinates reached at the end of some motion, for instance. In such cases, supervised learning methods are not directly applicable, and other techniques must be used.

*Current address: Computation and Neural Systems Program, California Institute of Technology, Pasadena CA.
Here we study one such technique (proposed for cases where the evaluation is distal in both space and time by [Jordan & Jacobs 90]), analyse a source of its problems, and propose a simple solution for them which leads to fast, efficient learning. We first describe two methods, and then combine them into the "predictive forward modeling" technique with which we are concerned.

1.1 FORWARD MODELING

"Forward modeling" [Jordan & Rumelhart 90] is useful for dealing with evaluations which are distal in space; it involves the construction of a differentiable model to approximate the controller-action → evaluation transformation. Let our controller have internal parameters w, output c, and be evaluated in space e, where e = e(c) is an unknown but well-defined transformation. If there is a desired output in space e, called e*, we can write an "error" function, that is, an evaluation we wish minimised, and differentiate it w.r.t. the controller's weights to obtain

E = (e* - e)²,   ∂E/∂w = (∂c/∂w) · (∂e/∂c) · (∂E/∂e).  (1)

Using a differentiable controller allows us to obtain the first factor in the second equation, and the third factor is also known; but the second factor is not. However, if we construct a differentiable model (called a "forward model") of e(c), then we can obtain an approximation to the second term by differentiating the model, and use this to obtain an estimate of the gradient ∂E/∂w through equation (1); this can then be used for comparatively fast minimisation of E, and is what is known as "forward modeling".

1.2 PREDICTIVE CRITICS

To deal with evaluations which are distal in time, we may use a "critic" network, as in [Barto, Sutton & Anderson 83]. For a particular control policy implemented by the controller network, the critic is trained to predict the final evaluation that will be obtained given the current state - using, for example, Sutton's TD algorithm [Sutton 88].
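The TD update just mentioned can be sketched on a standard toy problem, a five-state random walk with absorbing ends and reward 1 on the right; this example is illustrative only and is not the task considered in the paper. The tabular TD(0) rule is V(s) ← V(s) + α[r + V(s') - V(s)]:

```python
import numpy as np

# Tabular TD(0) on a 5-state random walk; states 0 and 4 are absorbing,
# reward 1 for reaching state 4. True values are V = [., 0.25, 0.5, 0.75, .].
V = np.zeros(5)
alpha = 0.1
rng = np.random.default_rng(0)
for episode in range(3000):
    s = 2
    while 0 < s < 4:
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 == 4 else 0.0
        target = r + (0.0 if s2 in (0, 4) else V[s2])   # bootstrapped prediction target
        V[s] += alpha*(target - V[s])                   # TD(0) update
        s = s2
```

After training, V(s) is available the moment state s is entered, which is exactly what makes the critic's estimate proximal in time.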
Once the critic is trained, the estimated final evaluation is available as soon as we enter a state, and so may in turn be used to improve the control policy. This approach is closely related to dynamic programming [Barto, Sutton & Watkins 89].

1.3 PREDICTIVE FORWARD MODELS

While the estimated evaluation we obtain from the critic is no longer distal in time, it may still be distal in space. A natural proposal in such cases, where the evaluation signal is distal both in space and time, is to combine the two techniques described above: use a differentiable model as a predictive critic [Jordan & Jacobs 90]. If we know the desired final evaluation, we can then proceed as in equation (1) and obtain the gradient of the error w.r.t. the controller's weights. Schematically, this would look like figure 1.

Figure 1: Jordan and Jacobs' predictive forward modeling architecture. Solid lines indicate data paths, the dashed line indicates back propagation.

When using a backprop network for the predictive model, we would backpropagate through it, through its control input, and then into the controller to modify the controller network. We should note that since predictions make no sense without a particular control policy, and the controller is only modified through the predictive model, both networks must be trained simultaneously. [Jordan & Jacobs 90] applied this method to a well-known problem, that of learning to balance an inverted pendulum on a movable cart by exerting appropriate horizontal forces on the cart. The same task, without differentiating the critic, was studied in [Barto, Sutton & Anderson 83]. There, reinforcement learning methods were used instead to modify the controller's weights; these perform a search which in some cases may be shown to follow, on average, the gradient of the expected evaluation w.r.t. the network weights.
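The gradient path of equation (1) can be sketched with scalar stand-ins: a linear controller c = ws and an already-trained differentiable model ê(c), taken here, purely for illustration, to be (c - 2)². Differentiating the model supplies the otherwise unknown ∂e/∂c factor:

```python
# Scalar sketch of equation (1): dE/dw = (dc/dw) * (d ehat/dc).
# ehat plays the role of the trained forward model / predictive critic.
def ehat(c):
    return (c - 2.0)**2          # assumed already-learned evaluation model

def dehat_dc(c):
    return 2.0*(c - 2.0)         # obtained by differentiating the model

s, w, lr = 1.0, 0.0, 0.1         # state, controller weight, learning rate
for _ in range(200):
    c = w*s                      # controller output
    w -= lr*s*dehat_dc(c)        # chain rule: dc/dw = s
```

Gradient descent through the model drives the controller output c toward the model's minimum, here c = 2; whether that is also the *true* optimum depends entirely on the model's gradients being right, which is exactly the issue taken up next.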
Since differentiating the critic allows the gradient of the expected evaluation to be found directly, one would expect much faster learning when using the architecture of figure 1. However, Jordan and Jacobs' results show precisely the opposite: it is surprisingly slow.

2 THE REDUNDANCY PROBLEM

We can explain the above surprising result if we consider the fact that the predictive model network has redundant inputs: the control vector c is a function of the state vector s (call this c = η(s)). Let κ and σ be the number of components of the control and state vectors, respectively. Instead of drawing its inputs from the entire volume of (κ+σ)-dimensional input space, the predictor is trained only with inputs which lie on the σ-dimensional manifold defined by the relation η. Away from the manifold the network is free to produce entirely arbitrary outputs. Differentiation of the model will then provide non-arbitrary gradients only for directions tangential to the manifold; this is a condition that the axes of the control dimensions will not, in general, satisfy.¹ This observation, which concerns any model trained with redundant inputs, is the very simple yet principal point of this paper.

One may argue that since the control policy is continually changing, the redundancy picture sketched out here is not in fact accurate: as the controller is modified, many possible control policies are "seen" by the predictor, so creating volume in input space and leading to correct gradients obtained from the predictor. However, the way in which this modification occurs is significant.

¹Note that if it is single-valued, there is no way the manifold can "fold around" to cover all (or most) of the κ + σ input space.

Figure 2: The evaluation as a function of control action. Curves A, B, C, D represent possible (wrong) estimates of the "real" curve made by the predictive model network.
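The redundancy effect appears already in the simplest linear setting: fit the evaluation from the input pair (s, c) when the training data satisfy c = η(s) = 2s (all values below are illustrative). Any weights with w_s + 2w_c = 2 fit the data exactly; a least-squares routine returns the minimum-norm pair (0.4, 0.8), whose derivative along the control axis (0.8) differs from the true off-manifold sensitivity of 1:

```python
import numpy as np

s = np.linspace(-1.0, 1.0, 50)
c = 2.0*s                       # control is redundant: c = eta(s) on all training data
e = c                           # "true" evaluation depends only on the control
X = np.column_stack([s, c])     # the model sees both inputs; X has rank 1

w, *_ = np.linalg.lstsq(X, e, rcond=None)   # min-norm solution on rank-deficient data
de_dc_model = w[1]              # model's gradient along the control axis
```

The fit is perfect on the training manifold, yet the gradient used to adjust the controller is wrong; which of the infinitely many exact fits the training procedure lands on is an accident of the algorithm, precisely the situation depicted by curves A-D of figure 2.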
An argument based on empirical observations will be made to sustain this claim. Consider the example shown in figure 2. The graph shows what the "real" evaluation at some point in state space is, as a function of a component of the control action taken at that point; this function is what the predictive network should approximate. Suppose the function implemented by the predictive network initially looks like the curve which crosses the "real" evaluation function at point (a); suppose also that the current action taken also corresponds to point (a). Here we see a one-dimensional example of the redundancy problem: though the prediction at this point is entirely accurate, the gradient is not. If we wish to minimise the predicted evaluation, we would change the action in the direction of point (b). Examples of point (a) will no longer be presented to the predictive network, so it could quite plausibly modify itself simply so as to look like the estimated evaluation curve "B" which is shown crossing point (b) (a minimal change necessary to continue being correct). Again, the gradient is wrong and minimising the prediction will change the action in the same direction as before, perhaps to point (c); then to (d), and so on. Eventually, the prediction, though accurate, will have zero gradient, as in curve "D", and no modifications will occur. In practice, we have observed networks "getting stuck" in this fashion. Though the objective was to minimise the evaluation, the system stops "learning" at a point far from optimal. The problem may be solved, as Jordan and Jacobs did, by introducing noise in the controller's output, thus breaking the redundancy. Unfortunately, this degrades signal quality and means that since we are predicting future evaluations, we wish to predict the effects of future noise - a notoriously difficult objective. The predictive network eventually outputs the evaluation's expectation value, but this can take a long time.

Figure 3: The proposed system architecture. Again, solid lines represent data paths while the dashed line represents backpropagation (or differentiation).

3 USING AN INTERMEDIATE MODEL

3.1 AN EXTRA WORLD MODEL

Another way to solve the redundancy problem is through the use of what is here called an "intermediate model": a model of the world the controller is interacting with. That is, if s(t) represents the state vector at time t, and c(t) the controller output at time t, it is a model of the function f, where s(t+1) = f(s(t), c(t)). This model is used as represented schematically in figure 3. It helps in modularising the learning task faced by the predictive model [Chrisley 90], but more interestingly, it need not be trained simultaneously with the controller since its output does not depend on future control policy. Hence, it can be trained separately, with examples drawn from its entire (state x action) input space, providing gradient signals without arbitrary components when differentiated. Once trained, we freeze the intermediate model's weights and insert it into the system as in figure 3; we then proceed to train the controller and predictive model as before. The predictive model will no longer have redundant inputs when trained either, so it too will provide correct gradient signals. Since all arbitrary components have been eliminated, the speedup expected from using differentiable predictive models should now be obtainable.²

3.2 AN EXAMPLE TASK

The intermediate model architecture was tested on the same example task as used by Jordan and Jacobs, that of learning to balance a pole which is attached through a hinge on its lower end to a movable cart.
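The gradient path of figure 3 (controller, then frozen intermediate model, then predictive model) can be sketched with scalar linear stand-ins; the numbers A, B and the quadratic critic below are illustrative assumptions, not the networks used in the paper:

```python
# Scalar sketch of the figure-3 gradient path.
A, B = 0.9, 0.5                  # frozen intermediate (world) model: s' = A*s + B*c

def V(w, s):
    c = w*s                      # controller
    s2 = A*s + B*c               # intermediate model, weights frozen after pretraining
    return s2**2                 # predictive model / critic (assumed trained)

def dV_dw(w, s):
    s2 = A*s + B*(w*s)
    return 2.0*s2*B*s            # chain rule: dV/ds' * ds'/dc * dc/dw

# finite-difference check of the backpropagated gradient
w0, s0, h = 0.3, 1.2, 1e-6
numeric = (V(w0 + h, s0) - V(w0 - h, s0))/(2*h)
```

Because the ∂s'/∂c factor comes from a model trained over the whole (state x action) space, it carries no arbitrary off-manifold component, which is the whole point of inserting the intermediate model.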
²This same architecture was independently proposed in [Werbos 90], but without the explanation as to why the intermediate model is necessary instead of merely desirable.

Figure 4: The evolution of eight different learning networks, using the intermediate model.

The control action is a real-valued force applied to the cart; the evaluation signal is a "0" while the pole has not fallen over and the cart hasn't reached the edge of the finite-sized tracks it is allowed to move on, and a "1" when either of these events happens. A trial is then said to have failed, and terminates.³ We count the number of learning trials needed before a controller is able to keep the pole balanced for a significant amount of time (measured in simulated seconds). Figure 4 shows the evolution of eight networks; most reach balancing solutions within 100 to 300 failures. (These successful networks came from a batch of eleven: the other three never reached solutions.) This is 50 to 100 times faster than without the intermediate model, where 5000 to 30000 trials were needed to achieve similar balancing times [Jordan & Jacobs 90]. We must now take into account the overhead needed to train the intermediate model. This was done in 200 seconds of simulated time, while training the whole system typically required some 400 seconds - the overhead is small compared to the improvement achieved through the use of the intermediate model. However, off-line training of the intermediate model requires an additional agency to organise the selection and presentation of training examples. In the real world, we would either need some device which could initialise the system at any point in state space, or we would have to train through "flailing": applying random control actions, over many trials, so as to eventually cover all possible states and actions. As the dimensionality of the state representation rises for larger problems, intermediate model training will become more difficult.
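For reference, the cart-pole dynamics cited in footnote 3 ([Barto, Sutton & Anderson 83]) can be sketched as follows; the parameter values (g = 9.8, cart mass 1.0, pole mass 0.1, pole half-length 0.5, Euler step 0.02 s) are the commonly used ones and are assumed here rather than taken from this paper:

```python
import math

G, M_CART, M_POLE, L, DT = 9.8, 1.0, 0.1, 0.5, 0.02  # assumed standard parameters

def step(x, xdot, th, thdot, force):
    """One Euler step of the cart-pole equations of motion."""
    cos, sin = math.cos(th), math.sin(th)
    total = M_CART + M_POLE
    tmp = (force + M_POLE*L*thdot**2*sin)/total
    thacc = (G*sin - cos*tmp)/(L*(4.0/3.0 - M_POLE*cos**2/total))
    xacc = tmp - M_POLE*L*thacc*cos/total
    return x + DT*xdot, xdot + DT*xacc, th + DT*thdot, thdot + DT*thacc

# With no applied force a slightly tilted pole falls over,
# at which point the evaluation signal described above would flip to "1".
state = (0.0, 0.0, 0.05, 0.0)   # (x, xdot, theta, thetadot)
max_tilt = 0.0
for _ in range(100):
    state = step(*state, 0.0)
    max_tilt = max(max_tilt, abs(state[2]))
```

A full intermediate model for this task would be trained on (state, force) pairs drawn from the whole admissible region, not just from trajectories generated by the current controller.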
In the real world, we would either need some device which could initialise the system at any point in state space, or we would have to train through "flailing": applying random control actions, over many trials, so as to eventually cover all possible states and actions. As the dimensionality of the state representation rises for larger problems, intermediate model training will become more difficult. 3The differential equations which were used as a model of this system may be found in [Barto, Sutton & Anderson 83]. The parameters of the simulations were identical to those used in [Jordan & Jacobs 90]. Fast Learning with Predictive Forward Models 569 3.3 REMARKS We should note that the need for covering all state space is not merely due to the requirement of training an intermediate model: dynamic-programming based techniques such as the ones mentioned in this paper are guaranteed to lead us to an optimal control solution only if we explore the entire state space during learning. This is due to their generality, since no a priori structure of the state space is assumed. It might be possible to interleave the training of the intermediate model with the training of the controller and predictor networks, so as to achieve both concurrently. High-dimensional problems will still be problematic, but not just due to intermediate model training- the curse of dimensionality is not easily avoided! 4 CONCLUSIONS If we differentiate through a model trained with redundant inputs, we eliminate possible arbitrary components (which are due to the arbitrary mixing of the inputs that the model may use) only if we differentiate tangentially along the manifold defined by the relationship between the inputs. For the architecture presented in [Jordan & Jacobs 90], this is problematic, since the axes of the control vector will typically not be tangential to the manifold. 
Once we take this into account, it is clear why the architecture was not as efficient as expected; and we can introduce an "intermediate" world model to avoid the problems that it had. Using the intermediate model allows us to correctly obtain (through backpropagation, or differentiation) a real-valued vector evaluation on the controller's output. On the example task presented here, this led to a 50- to 100-fold increase in learning speed, and suggests a much better scaling-up performance and applicability to real-world problems than simple reinforcement learning, where real-valued outputs are not permitted, and vector control outputs would train very slowly.

Acknowledgements

Many thanks are due to Richard Rohwer, who supervised the beginning of this project, and to M. I. Jordan and R. Jacobs, who answered questions enlighteningly; thanks are also due to Dr F. Bracho at IIMAS, UNAM, who provided the environment for the project's conclusion. This work was supported by scholarships from CONACYT in Mexico and from Caltech in the U.S.

References

[Ackley 88] D. H. Ackley, "Associative Learning via Inhibitory Search", in D. S. Touretzky, ed., Advances in Neural Information Processing Systems 1, Morgan Kaufmann 1989.

[Barto, Sutton & Anderson 83] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements that can Solve Difficult Control Problems", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, Sept/Oct 1983.

[Barto, Sutton & Watkins 89] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins, "Learning and Sequential Decision Making", University of Massachusetts at Amherst COINS Technical Report 89-95, September 1989.

[Chrisley 90] R. L. Chrisley, "Cognitive Map Construction and Use: A Parallel Distributed Approach", in Touretzky, Elman, Sejnowski, and Hinton, eds., Connectionist Models: Proceedings of the 1990 Summer School, Morgan Kaufmann 1991.

[Jordan & Jacobs 90] M. I. Jordan and R. A.
Jacobs, "Learning to Control an Unstable System with Forward Modeling", in D. S. Touretzky, ed., Advances in Neural Information Processing Systems 2, Morgan Kaufmann, 1990

[Jordan & Rumelhart 90] M. I. Jordan and D. E. Rumelhart, "Supervised Learning with a Distal Teacher", preprint

[Nguyen & Widrow 90] D. Nguyen and B. Widrow, "The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press, 1990

[Sutton 88] R. S. Sutton, "Learning to Predict by the Methods of Temporal Differences", Machine Learning 3: 9-44, 1988

[Werbos 90] P. Werbos, "Architectures for Reinforcement Learning", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press, 1990
1991
Nonlinear Pattern Separation in Single Hippocampal Neurons with Active Dendritic Membrane

Anthony M. Zador†, Brenda J. Claiborne§, Thomas H. Brown†
†Depts. of Psychology and Cellular & Molecular Physiology, Yale University, New Haven, CT 06511, zador@yale.edu
§Division of Life Sciences, University of Texas, San Antonio, TX 78285

ABSTRACT

The dendritic trees of cortical pyramidal neurons seem ideally suited to perform local processing on inputs. To explore some of the implications of this complexity for the computational power of neurons, we simulated a realistic biophysical model of a hippocampal pyramidal cell in which a "cold spot" (a high density patch of inhibitory Ca-dependent K channels and a colocalized patch of Ca channels) was present at a dendritic branch point. The cold spot induced a nonmonotonic relationship between the strength of the synaptic input and the probability of neuronal firing. This effect could also be interpreted as an analog stochastic XOR.

1 INTRODUCTION

Cortical neurons consist of a highly branched dendritic tree that is electrically coupled to the soma. In a typical hippocampal pyramidal cell, over 10,000 excitatory synaptic inputs are distributed across the tree (Brown and Zador, 1990). Synaptic activity results in current flow through a transient conductance increase at the point of synaptic contact with the membrane. Since the primary means of rapid intraneuronal signalling is electrical, information flow can be characterized in terms of the electrical circuit defined by the synapses, the dendritic tree, and the soma. Over a dozen nonlinear membrane channels have been described in hippocampal pyramidal neurons (Brown and Zador, 1990). There is experimental evidence for a heterogeneous distribution of some of these channels in the dendritic tree (e.g. Jones et al., 1989). In the absence of these dendritic channels, the input-output function can sometimes be reasonably approximated by a modified sigmoidal model.
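For contrast with the nonmonotonic behavior reported below, the passive-dendrite baseline just mentioned can be written as a one-line modified sigmoid of total synaptic drive. The gain and threshold values here are invented for illustration, not taken from the paper's simulations.

```python
import math

# Sigmoidal point-neuron approximation (illustrative parameters only):
# the response to total synaptic drive is monotonic -- more input never
# lowers the output.
def sigmoidal_response(total_input, gain=0.1, threshold=40.0):
    return 1.0 / (1.0 + math.exp(-gain * (total_input - threshold)))

responses = [sigmoidal_response(n) for n in range(0, 121, 20)]
```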
Here we report that introducing a cold spot at the junction of two dendritic branches can result in a fundamentally different, nonmonotonic input-output function.

2 MODEL

The biophysical details of the circuit class defined by dendritic trees have been well characterized (reviewed in Rall, 1977; Jack et al., 1983). The fundamental circuit consists of a linear and a nonlinear component. The linear component can be approximated by a set of electrical compartments coupled in series (Fig. 1C), each consisting of a resistor and capacitor in parallel (Fig. 1B). The nonlinear component consists of a set of nonlinear resistors in parallel with the capacitance. The model is summarized in Fig. 1A. Briefly, simulations were performed on a 3000-compartment anatomical reconstruction of a region CA1 hippocampal neuron (Claiborne et al., 1992; Brown et al., 1992). All dendritic membrane was passive, except at the cold spot (Fig. 1A). At the soma, fast K and Na channels (cf. Hodgkin-Huxley, 1952) generated action potentials in response to stimuli. The parameters for these channels were modified from Lytton and Sejnowski (1991; cf. Borg-Graham, 1991).

[Figure 1 not reproduced; its labels indicate the synaptic input pathways, the cold spot, the fast somatic K and Na channels, and radial and longitudinal Ca2+ diffusion.]

Fig. 1 The model. (A) The 3000-compartment electrical model used in these simulations was obtained from a 3-dimensional reconstruction of a hippocampal region CA1 pyramidal neuron (Claiborne et al., 1992). Each synaptic pathway (A-D) consisted of an adjustable number of synapses arrayed along the single branch indicated (see text). Random background activity was generated with a spatially uniform distribution of synapses firing according to Poisson statistics. The neuronal membrane was completely passive (linear), except at the indicated cold spot and at the soma.
(B) In the nonlinear circuit associated with a patch of neuronal membrane containing active channels, each channel is described by a voltage-dependent conductance in series with an ionic battery (see text). In the present model the channels were spatially localized, so no single patch contained all of the nonlinearities depicted in this hypothetical illustration. (C) A dendritic segment is illustrated in which both electrical and Ca2+ dynamics were modelled. Ca2+ buffering, and both radial and longitudinal Ca2+ diffusion, were simulated.

We distinguished four synaptic pathways A-D (see Fig. 1A). Each pathway consisted of a population of synapses activated synchronously. The synapses were of the fast AMPA type (see Brown et al., 1992). In addition, random background synaptic activity distributed uniformly across the dendritic tree fired according to Poisson statistics. The cold spot consisted of a high density of a Ca-activated K channel, the AHP current (Lancaster and Nicoll, 1987; Lancaster et al., 1991), colocalized with a low density patch of N-type Ca channels (Lytton and Sejnowski, 1991; cf. Borg-Graham, 1991). Upon localized depolarization in the region of the cold spot, influx of Ca2+ through the Ca channels resulted in a transient increase in the local [Ca2+]. The model included Ca2+ buffering, and both radial and longitudinal diffusion in the region of the cold spot. The increased [Ca2+] activated the inhibitory AHP current. The interplay between the direct excitatory effect of synaptic input, and its inhibitory effect via the AHP channels, formed the functional basis of the cold spot.

3 RESULTS

3.1 DYNAMIC BEHAVIOR

Representative behavior of the model is illustrated in Fig. 2. The somatic potential is plotted as a function of time in a series of simulations in which the number of activated synapses in pathway A/B was increased from 0 to about 100.
For the first 100 msec of each simulation, background synaptic activity generated a noisy baseline. At t = 100 msec, the indicated number of synapses fired synchronously five times at 100 Hz. Since the background activity was noisy, the outcome of each simulation was a random process. The key effect of the cold spot was to impose a limit on the maximum stimulus amplitude that caused firing, resulting in a window of stimulus strengths that triggered an action potential. In the absence of the cold spot a greater synaptic stimulus invariably increased the likelihood that a spike fired. This limit resulted from the relative magnitude of the AHP current "threshold" to the threshold for somatic spiking.

Fig. 2 Sample runs. The membrane voltage at the soma is plotted as a function of time and synaptic stimulus intensity. At t = 100 msec, a synaptic stimulus consisting of 5 pulses was activated. The noisy baseline resulted from random synaptic input. A single action potential resulted for input intensities within a range determined by the kinetics of the cold spot.

The AHP current required a relatively high level of activity for its activation. This AHP current "threshold" reflected the sigmoidal voltage dependence of N-type Ca current activation (V1/2 = -28 mV), since only as the dendritic voltage approached V1/2 did dendritic [Ca2+] rise enough to activate the AHP current. Because V1/2 was much higher than the threshold for somatic spiking (about -55 mV under current clamp), there was a window of stimulus strengths sufficient to trigger a somatic action potential but insufficient to activate the AHP current. Only within this window of between about 20 and 60 synapses (Fig. 2) did an action potential occur.

3.2 LOCAL NON-MONOTONIC RESPONSE FUNCTION

Because the background activity was random, the outcome of each simulation (e.g. Fig. 2) represented a sample of a random process.
This random process can be used to define many different random variables. One variable of interest is whether a spike fired in response to a stimulus. Although this measure ignores the dynamic nature of neuronal activity, it was still relatively informative because in these simulations no more than one spike fired per experiment. Fig. 3A shows the dependence of firing probability on stimulus strength. It was obtained by averaging over a population of simulations of the type illustrated in Fig. 2. In the absence of AHP current (dotted line), the firing probability was a sigmoidal function of activity. In its presence, the firing probability was a smooth nonmonotonic function of the activity (solid line). The firing probability was maximum at about 35 synapses, and firing occurred only in the range between about 10 and 80 synapses. The statistics illustrated in Fig. 3A quantify the nonmonotonicity that is implied by the single sample shown in Fig. 2. Spikes required the somatic Hodgkin-Huxley-like Na and K channels. To a first approximation, the effect of these channels was to convert a continuous variable, the somatic voltage, into a discrete variable, the presence or absence of a spike. Although this approximation ignores the complex interactions between the soma and the cold spot, it is useful for a qualitative analysis.

Fig. 3 Nonmonotonic input-output relation. (A) Each point represents the probability that at least one spike was fired at the indicated activity level. In the absence of a cold spot, the firing probability increased sharply and monotonically as the number of synapses in pathway C/D increased (dotted line). In contrast, the firing probability reached a maximum for pathway A/B and then decreased (solid line). (B) Each point represents the peak somatic voltage for a single simulation at the indicated activity level in the presence (pathway A/B; solid line) and absence (pathway C/D; dotted line) of a cold spot. Because each point represents the outcome of a single simulation, in contrast to the average used in (A), the points reflect the variance due to the random background activity.

The nonmonotonic dependence of somatic activity on synaptic activity was preserved even when active channels at the soma were eliminated (Fig. 3B). This result emphasizes that the critical nonlinearity was the cold spot itself.

3.3 NONLINEAR PATTERN SEPARATION

So far, we have treated the output as a function of a scalar: the total activity in pathway A/B (or C/D). In Fig. 3, for example, the total activity was defined as the sum of the activities in pathways A and B. The spatial organization of the afferents onto 2 pairs of branches, A & B and C & D (Fig. 1), suggested considering the output as a function of the activity in the separate elements of each pair. The effect of the cold spot can be viewed in terms of the dependence of firing as a function of separate activity in pathways A and B (Fig. 4). Each filled circle indicates that the neuron fired for the indicated input intensity of pathways A and B, while a small dot indicates that it did not fire. As suggested by Fig. 3, the firing probability was highest when the total activity in the two pathways was at some intermediate level. The neuron did not fire when the total activity in the two pathways was too large or too small. In the absence of the cold spot, only a minimum activity level was required. In our model the probability of firing was a continuous function of the inputs.
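The windowed firing behavior can be caricatured with a toy response function. All parameters below are invented for illustration and are not taken from the simulations: a sigmoidal rise past the somatic spike threshold is multiplied by a sigmoidal suppression once the hypothetical AHP "threshold" is crossed.

```python
import math

# Toy nonmonotonic firing-probability curve (illustrative parameters only):
# a rise near ~10 active synapses (spike threshold) times a suppression near
# ~80 synapses (AHP current activation), giving a firing window in between.
def p_fire(n_synapses, lo=10.0, hi=80.0, k=0.3):
    rise = 1.0 / (1.0 + math.exp(-k * (n_synapses - lo)))
    fall = 1.0 / (1.0 + math.exp(-k * (n_synapses - hi)))
    return rise * (1.0 - fall)

# Read over two pathways of 50 synapses each: one active pathway lands inside
# the firing window, none or both lands outside it.
corners = {(a, b): p_fire(50 * a + 50 * b) for a in (0, 1) for b in (0, 1)}
```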
In the presence of the dendritic cold spot, the corners of this function suggested the logical operation XOR. The probability of firing was high if only one input was activated and low if both or neither was activated.

4 DISCUSSION

4.1 ASSUMPTIONS

Neuronal morphology in the present model was based on a precise reconstruction of a region CA1 pyramidal neuron. The main additional assumptions involved the kinetics and distribution of the four membrane channels, and the dynamics of Ca2+ in the neighborhood of influx. The forms assumed for these mechanisms were biophysically plausible, and the kinetic parameters were based on estimates from a collection of experimental studies (listed in Lytton and Sejnowski, 1991; Zador et al., 1990). Variation within the range of uncertainty of these parameters did not alter the main conclusions. The chief untested assumption of this model was the existence of cold spots. Although there is experimental evidence supporting the presence of both Ca and AHP channels in the dendrites, there is at present no direct evidence regarding their colocalization.

Fig. 4 Nonlinear pattern separation. Neuronal firing is represented as a joint function of two input pathways (A/B). Filled circles indicate that the neuron fired for the indicated stimulus parameters. Some indication of the stochastic nature of this function, resulting from the noisy background, is given by the density of interdigitation of points and circles.

4.2 COMPUTATIONS IN SINGLE NEURONS

4.2.1 Neurons and Processing Elements

The limitations of the McCulloch and Pitts (1943) PE as a neuron model have long been recognized.
Their threshold PE, in which the output is the weighted sum of the inputs passed through a threshold, is static, deterministic and treats all inputs equivalently. This model ignores at least three key complexities of neurons: temporal, spatial and stochastic. In subsequent years, augmented models have attempted to capture aspects of these complexities. For example, the leaky integrator (Caianiello, 1961; Hopfield, 1984) incorporates the temporal dynamics implied by the linear RC component of the circuit element pictured in Fig. 1B. We have demonstrated that the input-output function of a realistic neuron model can have qualitatively different behavior from that of a single processing element (PE).

4.2.2 Interactions Within The Dendritic Tree

The early work of Rall (1964) stressed the spatial complexity of even linear dendritic models. He noted that input from different synapses cannot be considered to arrive at a single point, the soma. Koch et al. (1982) extended this observation by exploring the nonlinear interactions between synaptic inputs to different regions of the dendritic tree. They emphasized that these interactions can be local in the sense that they affect subpopulations of synapses, and suggested that the entire dendritic tree can be considered in terms of electrically isolated subunits. They proposed a specific role for these subunits in computing a veto (an analog AND-NOT) that might underlie directional selectivity in retinal ganglion cells. The veto was achieved through inhibitory inputs. The underlying neuron models of Koch et al. (1982) and Rall (1964) were time-varying but linear, so it is not surprising that the resulting nonlinearities were monotonic. Much steeper nonlinearities were achieved by Shepherd and Brayton (1987) in a model that assumed excitable spines with fast Hodgkin-Huxley K and Na channels. These channels alone could implement the digital logic operations AND and OR.
With the addition of extrinsic inhibitory inputs, they showed that a neuron could implement a full complement of digital logic operations, and concluded that a dendritic tree could in principle implement arbitrarily complex logic operations. The emphasis of the present model differs from that of both the purely linear and the digital approaches, although it shares their emphasis on the locality of dendritic computation. Because the cold spot involved strongly nonlinear channels, it implemented a nonmonotonic response function, in contrast to strictly linear dendritic models. At the same time, the present model retained the essentially analog nature of intraneuronal signalling, in contrast to the digital dendritic models. This analog mode seems better suited to processing large numbers of noisy inputs because it preserves the uncertainties rather than making an immediate decision. Focussing on the analog nature of the response eliminated the requirement for operating within the digital range of channel dynamics. The NMDA receptor-gated channel can give rise to an analog AND with a weaker voltage dependence than that induced by fast Na and K channels. Mel (1992) described a model in which synapses mediating increases to both the NMDA and AMPA conductances were distributed across the dendritic tree of a cortical neuron. When the synaptic activity was distributed in appropriately sized clusters, the resulting neuronal response function was reminiscent of that of a sigma-pi unit. With suitable preprocessing of inputs, the neuron could perform complex pattern discrimination. A unique feature of the present model is that functional inhibition arose from purely excitatory inputs. The mechanism underlying this inhibition, the AHP current, was intrinsic to the membrane. In both the Koch et al. (1982) and Brayton and Shepherd (1987) models,
the veto or NOT operation was achieved through extrinsic synaptic inhibition. This requires additional neuronal circuitry. In the case of a dedicated sensory system like the directionally selective retinal ganglion cell, it is not unreasonable to imagine that the requisite neuronal circuitry is hardwired. In the limiting case of the digital model, the requisite circuitry would involve a separate inhibitory interneuron for each NOT-gate.

4.2.3 Adaptive Dendritic Computation

What algorithms can harness the computational potential of the dendritic tree? Adaptive dendritic computation is a very new subject. Brown et al. (1991, 1992) developed a model in which synapses distributed across the dendritic tree showed interesting forms of spatial self-organization. Synaptic plasticity was governed by a local biophysically-motivated Hebb rule (Zador et al., 1990). When temporally correlated but spatially uncorrelated inputs were presented to the neuron, spatial clusters of strengthened synapses emerged within the dendritic tree. The neuron converted a temporal correlation into a spatial correlation. The computational role of clusters of strengthened synapses within the dendritic tree becomes important in the presence of nonlinear membrane. If the dendrites are purely passive, then saturation ensures that the current injected per synapse actually decreases as the clustering increases. If purely regenerative nonlinearities are present (Brayton and Shepherd, 1987; Mel, 1992), then the response increases. The cold spot extends the range of local dendritic computations. What might control the formation and distribution of the cold spot itself? Cold spots might arise from the fortuitous colocalization of Ca and KAHP channels. Another possibility is that some specific biophysical mechanism creates cold spots in a use-dependent manner. Candidate mechanisms might involve local changes in second messengers such as [Ca2+] or longitudinal potential gradients (cf. Poo, 1985).
Bell (1992) has shown that this second mechanism can induce computationally interesting distributions of membrane channels.

4.3 WHY STUDY SINGLE NEURONS?

We have illustrated an important functional difference between a single neuron and a PE. A neuron with cold spots can perform extensive local processing in the dendritic tree, giving rise to a complex mapping between input and output. A neuron may perhaps be likened to a "micronet" of simpler PEs, since any mapping can be approximated by a sufficiently complex network of sigmoidal units (Cybenko, 1989). This raises the objection that, since micronets represent just a subset of all neural networks, there may be little to be gained by studying the properties of the special case of neurons. The intuitive justification for studying single neurons is that they represent a large but highly constrained subset that may have very special properties. Knowledge of the properties general to all sufficiently complex PE networks may provide little insight into the properties specific to single neurons. These properties may have implications for the behavior of circuits of neurons. It is not unreasonable to suppose that adaptive mechanisms in biological circuits will utilize the specific strengths of single neurons.

Acknowledgments

We thank Michael Hines for providing NEURON-MODL and assisting with new membrane mechanisms. This research was supported by grants from the Office of Naval Research, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.

References

Bell, T. (1992) Neural Information Processing Systems 4 (in press).

Borg-Graham, L.J. (1991) In H. Wheal and J. Chad (Eds.) Cellular and Molecular Neurobiology: A Practical Approach. New York: Oxford University Press.

Brown, T.H. and Zador, A.M. (1990). In G. Shepherd (Ed.) The synaptic organization of the brain (Vol. 3, pp. 346-388). New York: Oxford University Press.

Brown, T.H., Mainen, Z.F., Zador, A.M.
and Claiborne, B.J. (1991) Neural Information Processing Systems 3: 39-45.

Brown, T.H., Zador, A.M., Mainen, Z.F., and Claiborne, B.J. (1992). In: Single neuron computation. Eds. T. McKenna, J. Davis, and S.F. Zornetzer. Academic Press (in press).

Caianiello, E.R. (1961) J. Theor. Biol. 1: 209-235.

Claiborne, B.J., Zador, A.M., Mainen, Z.F., and Brown, T.H. (1992). In: Single neuron computation. Eds. T. McKenna, J. Davis, and S.F. Zornetzer. Academic Press (in press).

Cybenko, G. (1989) Math. Control, Signals Syst. 2: 303-314.

Hines, M. (1989). Int. J. Biomed. Comput. 24: 55-68.

Hodgkin, A.L. and Huxley, A.F. (1952) J. Physiol. 117: 500-544.

Hopfield, J.J. (1984) Proc. Natl. Acad. Sci. USA 81: 3088-3092.

Jack, J., Noble, A. and Tsien, R.W. (1975) Electrical current flow in excitable membranes. London: Oxford Press.

Jones, O.T., Kunze, D.L. and Angelides, K.J. (1989) Science 244: 1189-1193.

Koch, C., Poggio, T. and Torre, V. (1982) Proc. R. Soc. London B 298: 227-264.

Lancaster, B. and Nicoll, R.A. (1987) J. Physiol. 389: 187-203.

Lancaster, B., Perkel, D.J., and Nicoll, R.A. (1991) J. Neurosci. 11: 23-30.

Lytton, W.W. and Sejnowski, T.J. (1991) J. Neurophys. 66: 1059-1079.

McCulloch, W.S. and Pitts, W. (1943) Bull. Math. Biophys. 5: 115-137.

Mel, B. (1992) Neural Computation (in press).

Poo, M-m. (1985) Ann. Rev. Neurosci. 8: 369-406.

Rall, W. (1977) In: Handbook of physiology. Eds. E. Kandel and S. Geiger. Washington D.C.: American Physiological Society, pp. 39-97.

Rall, W. (1964) In: Neural theory and modeling. Ed. R.F. Reiss. Stanford Univ. Press, pp. 73-79.

Shepherd, G.M. and Brayton, R.K. (1987) Neuroscience 21: 151-166.

Zador, A., Koch, C. and Brown, T.H. (1990) Proc. Natl. Acad. Sci. USA 87: 6718-6722.
1991
Neural Network - Gaussian Mixture Hybrid for Speech Recognition or Density Estimation

Yoshua Bengio, Dept. Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
Renato De Mori, School of Computer Science, McGill University, Canada
Ralf Kompe, Computer Science, Erlangen University, Erlangen, Germany
Giovanni Flammia, Speech Technology Center, Aalborg University, Denmark

Abstract

The subject of this paper is the integration of multi-layered Artificial Neural Networks (ANN) with probability density functions such as Gaussian mixtures found in continuous density Hidden Markov Models (HMM). In the first part of this paper we present an ANN/HMM hybrid in which all the parameters of the system are simultaneously optimized with respect to a single criterion. In the second part of this paper, we study the relationship between the density of the inputs of the network and the density of the outputs of the network. A few experiments are presented to explore how to perform density estimation with ANNs.

1 INTRODUCTION

This paper studies the integration of Artificial Neural Networks (ANN) with probability density functions (pdf) such as the Gaussian mixtures often used in continuous density Hidden Markov Models. The ANNs considered here are multi-layered or recurrent networks with hyperbolic tangent hidden units. Raw or preprocessed data is fed to the ANN, and the outputs of the ANN are used as observations for a parametric probability density function such as a Gaussian mixture. One may view either the ANN as an adaptive preprocessor for the Gaussian mixture, or the Gaussian mixture as a statistical postprocessor for the ANN. A useful role for the ANN would be to transform the input data so that it can be more efficiently modeled by a Gaussian mixture. An interesting situation is one in which most of the input data points can be described in a lower dimensional space.
In this case, it is desired that the ANN learn the possibly non-linear transformation to a more compact representation.

In the first part of this paper, we briefly describe a hybrid of ANNs and Hidden Markov Models (HMM) for continuous speech recognition. More details on this system can be found in (Bengio 91). In this hybrid, all the free parameters are simultaneously optimized with respect to a single criterion. In recent years, many related combinations have been studied (e.g., Levin 90, Bridle 90, Bourlard & Wellekens 90). These approaches are often motivated by observed advantages and disadvantages of ANNs and HMMs in speech recognition (Bourlard & Wellekens 89, Bridle 90). Experiments of phoneme recognition on the TIMIT database with the proposed ANN/HMM hybrid are reported. The task under study is the recognition (or spotting) of plosive sounds in continuous speech. Comparative results on this task show that the hybrid performs better than the ANN alone, better than the ANN followed by a dynamic programming based postprocessor using duration constraints, and better than the HMM alone. Furthermore, a global optimization of all the parameters of the system also yielded better performance than a separate optimization.

In the second part of this paper, we attempt to extend some of the findings of the first part, in order to use the same basic architecture (ANNs followed by Gaussian mixtures) to perform density estimation. We establish the relationship between the network input and output densities, and we then describe a few experiments exploring how to perform density estimation with this system.

2 ANN/HMM HYBRID

In a HMM, the likelihood of the observations, given the model, depends in a simple continuous way on the observations. It is therefore possible to compute the derivative of an optimization criterion C with respect to the observations of the HMM.
For example, one may use the criterion of the Maximum Likelihood (ML) of the observations, or of the Maximum Mutual Information (MMI) between the observations and the correct sequence. If the observation at each instant is the vector output, $y_i$, of an ANN, then one can use this gradient, $\partial C / \partial y_i$, to optimize the parameters of the ANN with back-propagation. See (Bridle 90, Bottou 91, Bengio 91, Bengio et al. 92) on ways to compute this gradient.

2.1 EXPERIMENTS

A preliminary experiment has been performed using a prototype system based on the integration of ANNs with HMMs. The ANN was initially trained based on a prior task decomposition. The task is the recognition of plosive phonemes pronounced by a large speaker population. The 1988 version of the TIMIT continuous speech database has been used for this purpose. SI and SX sentences from regions 2, 3 and 6 were used, with 1080 training sentences and 224 test sentences, 135 training speakers and 28 test speakers. The following 8 classes have been considered: /p/, /t/, /k/, /b/, /d/, /g/, /dx/, /all other phones/. Speaker-independent recognition of plosive phonemes in continuous speech is a particularly difficult task because these phonemes are made of short and non-stationary events that are often confused with other acoustically similar consonants or may be merged with other unit segments by a recognition system.
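The key coupling, the density layer handing $\partial C / \partial y$ back to the ANN, can be sketched as follows. This is a minimal illustration under invented dimensions: a single diagonal Gaussian stands in for the HMM's output distribution and one linear layer stands in for the ANN; the real system uses the full HMM likelihood and back-propagation through recurrent networks.

```python
import numpy as np

# One frame: the toy ANN produces observation y, the Gaussian scores it,
# and dC/dy is back-propagated into the ANN weights (all sizes invented).
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 12))   # toy linear ANN: y = W x
mu, var = np.zeros(4), np.ones(4)         # one diagonal Gaussian density

def log_lik(y):
    """Diagonal-Gaussian log-likelihood of one observation vector."""
    return float(-0.5 * np.sum((y - mu) ** 2 / var + np.log(2 * np.pi * var)))

x = rng.normal(size=12)                   # one preprocessed speech frame
y = W @ x
dC_dy = -(y - mu) / var                   # gradient handed back by the density
dC_dW = np.outer(dC_dy, x)                # chain rule into the ANN weights
W_new = W + 0.01 * dC_dW                  # one joint ML gradient step
```

A full system would accumulate such gradients over all frames and HMM states before updating, but the flow of the gradient is the same.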
As discussed in (Bengio 91), speech knowledge is used to design the input, output, and architecture of the system and of each one of the networks. The experimental system is based on the scheme shown in Figure 1. The architecture is built on three levels. The approach that we have taken is to select different input parameters and different ANN architectures depending on the phonetic features to be recognized. At levell, two ANNs are initially trained to perform respectively plosive recognition (ANN3) and broad classification of phonemes (ANN2). ANN3 has delays and recurrent connections and is trained to recognize static articulatory features of plosives in a way that depends of the place of articulation of the right context phoneme. ANN2 has delays but no recurrent connections. The design of ANN2 and ANN3 is described in more details in (Bengio 91). At level 2, ANNI acts.as an integrator of parameters generated by the specialized ANNs oflevel 1. ANNI is a linear network that initially computes the 8 principal components of the concatenated output vectors of the lower level networks (ANN2 and ANN3). In the experiment described below, the combined network (ANN1+ANN2+ANN3) has 23578 weights. Level 3 contains the HMMs, in which each distribution is modeled by a Gaussian mixture with 5 densities. See (Bengio et al 92) for more details on the topology of the HMM. The covariance matrix is assumed to be diagonal since the observations are initially principal components and this assumption reduces significantly the number of parameters to be estimated. After one iteration of ML re-estimation of the HMM parameters only, all the parameters of the hybrid system were simultaneously tuned to maximize the ML criterion for the next 2 iterations. Because of the simplicity of the implementation of the hybrid trained with ML, this criterion was used in these experiments. 
Although such an optimization may theoretically worsen performance[1], we observed a marked improvement in performance after the final global tuning. This may be explained by the fact that a nearby local maximum of the likelihood is attained from the initial starting point based on prior and separate training of the ANN and the HMM.

[Footnote 1: In section 3, we consider maximization of the likelihood of the inputs of the network, not the outputs of the network.]

Table 1: Comparative Recognition Results. % recognized = 100 - % substitutions - % deletions. % accuracy = 100 - % substitutions - % deletions - % insertions.

                        % rec   % ins   % del   % subs   % acc
ANNs alone               85     32      0.04    15       53
HMMs alone               76     6.3     2.2     22.3     69
ANNs+DP                  88     16      0.01    11       72
ANNs+HMM                 87     6.8     0.9     12       81
ANNs+HMM+global opt.     90     3.8     1.4     9.0      86

In order to assess the value of the proposed approach as well as the improvements brought by the HMM as a post-processor for time alignment, the performance of the hybrid system was evaluated and compared with that of a simple postprocessor applied to the outputs of the ANNs and with that of a standard dynamic programming postprocessor that models duration probabilities for each phoneme. The simple post-processor assigns a symbol to each output frame of the ANNs by comparing the target output vectors with actual output vectors. It then smoothes the resulting string to remove very short segments and merges consecutive segments that have the same symbol. The dynamic programming (DP) postprocessor finds the sequence of phones that minimizes a cost that imposes durational constraints for each phoneme. In the HMM alone system, the observations are the cepstrum and the energy of the signal, as well as their derivatives. Comparative results for the three systems are summarized in Table 1.

3 DENSITY ESTIMATION WITH AN ANN

In this section, we consider an extension of the system of the previous section. The objective is to perform density estimation of the inputs of the ANN.
Instead of maximizing a criterion that depends on the density of the outputs of an ANN, we maximize the likelihood of the inputs of the ANN. Hence the ANN is more than a preprocessor for the Gaussian mixtures: it is part of the probability density function that is to be estimated. Instead of representing a pdf only with a set of spatially local functions or kernels such as Gaussians (Silverman 86), we explore how to use a global transformation, such as one performed by an ANN, in order to represent a pdf. Let us first define some notation: $f_X(x) \stackrel{\mathrm{def}}{=} p(X = x)$, $f_Y(y) \stackrel{\mathrm{def}}{=} p(Y = y)$, and $f_{X|Y(x)}(x) \stackrel{\mathrm{def}}{=} p(X = x \mid Y = y(x))$.

3.1 RELATION BETWEEN INPUT PDF AND OUTPUT PDF

Theorem. Suppose a random variable Y (e.g., the outputs of an ANN) is a deterministic parametric function y(X) of a random variable X (here, the inputs of the ANN), where y and x are vectors of dimension $n_y$ and $n_x$. Let
$$J = \frac{\partial(y_1, y_2, \ldots, y_{n_y})}{\partial(x_1, x_2, \ldots, x_{n_x})}$$
be the Jacobian of the transformation from X to Y, let $J = U D V^t$ be a singular value decomposition of $J$, and let $s(x) = \left| \prod_{i=1}^{n_y} D_{ii} \right|$ be the product of the singular values. Suppose Y is modeled by a probability density function $f_Y(y)$. Then, for $n_x \geq n_y$ and $s(x) > 0$,
$$f_X(x) = f_Y(y(x)) \, f_{X|Y(x)}(x) \, s(x). \quad (1)$$

Proof. In the case in which $n_x = n_y$, by the change of variable $y \rightarrow x$ in the integral
$$\int f_Y(y) \, dy = 1, \quad (2)$$
we obtain the following result (in this case $|\det(J)| = s$ and $f_{X|Y(x)}(x) = 1$):
$$f_X(x) = f_Y(y(x)) \, |\det(J)|. \quad (3)$$

Let us now consider the case $n_y < n_x$, i.e., the network has fewer outputs than inputs. We introduce an intermediate transformation to a space Z of dimension $n_x$ in which some dimensions directly correspond to Y. Define Z such that $\frac{\partial(z_1, z_2, \ldots, z_{n_x})}{\partial(x_1, x_2, \ldots, x_{n_x})} = V^t$, and decompose Z into Z' and Z'':
$$z' = (z_1, \ldots, z_{n_y}), \qquad z'' = (z_{n_y+1}, \ldots, z_{n_x}). \quad (4)$$

There is a one-to-one mapping $y_z(z')$ between Z' and Y, and its Jacobian is $U D'$, where $D'$ is the matrix composed of the first $n_y$ columns of D. Perform a change of variables $y \rightarrow z'$ in the integral of equation 2:
$$\int f_Y(y_z(z')) \, s \, dz' = 1. \quad (5)$$
In order to make a change of variable to the variable x, we have to specify the conditional pdf $f_{X|Y(x)}(x)$ and the corresponding pdf $p(z'' \mid z') = p(z'', z' \mid z') = p(z \mid y) = f_{X|Y(x)}(x)$ (knowing z' is equivalent to knowing y, and $z = V^t x$ with $\det(V) = 1$). Hence we can write
$$\int p(z'' \mid z') \, dz'' = 1. \quad (6)$$
Multiplying the two integrals in equations 5 and 6, we obtain
$$1 = \int p(z'' \mid z') \, dz'' \int f_Y(y_z(z')) \, s \, dz' = \int f_Y(y_z(z')) \, p(z'' \mid z') \, s \, dz, \quad (7)$$
and substituting $z \rightarrow V^t x$:
$$\int f_Y(y(x)) \, f_{X|Y(x)}(x) \, s(x) \, dx = 1, \quad (8)$$
which yields the general result of equation 1.

Unfortunately, it is not clear how to efficiently evaluate $f_{X|Y(x)}(x)$ and then compute its derivative with respect to the network weights. In the experiments described in the next section we first study empirically the simpler case in which $n_x = n_y$.

Figure 2: First series of experiments on density estimation with an ANN, for data generated on a non-linear input curve. From left to right: input samples; density of the input, X, estimated with ANN+Gaussian; ANN that maps X to Y; density of the output, Y, as estimated by a Gaussian.

3.2 ESTIMATION OF THE PARAMETERS

When estimating a pdf, one can approximate the functions $f_Y(y)$ and $y(x)$ by parameterized functions. For example, we consider for the output pdf the class of densities $f_Y(y; \theta)$ modeled by a Gaussian mixture of a certain number of components, where $\theta$ is a set of means, variances and mixing proportions. For the non-linear transformation $y(x; w)$ from X to Y, we choose an ANN, defined by its architecture and the values of its weights w.
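The one-dimensional case of equation (3), where the Jacobian is just the scalar derivative dy/dx, can be checked numerically. The following sketch uses an invented smooth invertible map and a standard normal output density, then verifies that the implied input density integrates to one; it is an illustration of the change-of-variables identity, not the paper's network.

```python
import math

def f_y(y):
    # Standard normal density, modeling the output variable Y.
    return math.exp(-0.5 * y * y) / math.sqrt(2 * math.pi)

def y_of_x(x):
    # An invented smooth, strictly increasing "network": y = x + 0.5*sin(x).
    return x + 0.5 * math.sin(x)

def dy_dx(x):
    # Derivative of the map; always positive, so the map is invertible.
    return 1.0 + 0.5 * math.cos(x)

def f_x(x):
    # Change of variables (equation 3): f_X(x) = f_Y(y(x)) * |det J|.
    return f_y(y_of_x(x)) * abs(dy_dx(x))

# Riemann-sum check that f_X integrates to (approximately) 1.
dx = 0.001
total = sum(f_x(-8.0 + i * dx) * dx for i in range(16000))
```

Because the map is monotone here, the singular-value product s(x) reduces to |dy/dx| and the conditional factor is identically one, exactly the special case noted in the proof.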
In order to choose values for the Gaussian and ANN parameters, one can maximize the a-posteriori (MAP) probability of these parameters given the data or, if no prior is known or assumed, maximize the likelihood (ML) of the input data given the parameters. In the preliminary experiments described here, the logarithm of the likelihood of the data was maximized, i.e., the optimal parameters are defined as follows:
$$(\hat{\theta}, \hat{w}) = \arg\max_{(\theta, w)} \sum_{x \in \Xi} \log(f_X(x)), \quad (9)$$
where $\Xi$ is the set of input samples. In order to estimate a density with the above described system, one computes the derivative of $p(X = x \mid \theta, w)$ with respect to w. If the output pdf is a Gaussian mixture, we re-estimate its parameters $\theta$ with the EM algorithm (only $f_Y(y)$ depends on $\theta$ in the expression for $f_X(x)$ in equations 3 or 1). Differentiating equation 3 with respect to w yields:
$$\frac{\partial}{\partial w} \left( \log f_X(x) \right) = \frac{\partial}{\partial w} \left( \log f_Y(y(x; w); \theta) \right) + \sum_{i,j} \frac{\partial}{\partial J_{ij}} \left( \log \det(J) \right) \frac{\partial J_{ij}}{\partial w}. \quad (10)$$
The derivative of the logarithm of the determinant can be computed simply as follows (Bottou 91):
$$\frac{\partial}{\partial J_{ij}} \left( \log \det(J) \right) = (J^{-1})_{ji}, \quad (11)$$
since for any matrix A, $\det(A) = \sum_j A_{ij} \, \mathrm{Cofactor}_{ij}(A)$ and $(A^{-1})_{ij} = \mathrm{Cofactor}_{ji}(A) / \det(A)$.

Figure 3: Second series of experiments on density estimation with an ANN. From left to right: input samples, density with non-linear net + Gaussian, output samples after network transformation.

3.3 EXPERIMENTS

The first series of experiments verified that a transformation of the inputs with an ANN could improve the likelihood of the inputs and that gradient ascent in the ML criterion could find a good solution. In these experiments, we attempt to model some two-dimensional data extracted from a speech database. The 1691 training data points are shown in the left of Figure 2. In the first experiment, a diagonal Gaussian is used, with no ANN.
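The identity of equation (11) can be verified with finite differences on a small matrix. This standalone check hand-rolls the 2x2 determinant and inverse, perturbs each entry of J, and compares the numerical gradient of log det(J) against the transposed inverse.

```python
import math

def det2(J):
    # Determinant of a 2x2 matrix.
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

def inv2(J):
    # Inverse of a 2x2 matrix via the adjugate.
    d = det2(J)
    return [[ J[1][1] / d, -J[0][1] / d],
            [-J[1][0] / d,  J[0][0] / d]]

J = [[2.0, 0.3], [0.1, 1.5]]
Jinv = inv2(J)
eps = 1e-6
grads = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        # Forward difference of log det(J) with respect to entry (i, j).
        Jp = [row[:] for row in J]
        Jp[i][j] += eps
        grads[i][j] = (math.log(det2(Jp)) - math.log(det2(J))) / eps
# grads[i][j] should match (J^-1)[j][i], i.e. the transposed inverse.
```

Note the index transposition: the (i, j) derivative equals the (j, i) entry of the inverse, which is what the cofactor argument in the text establishes.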
In the second experiment, a linear network and a diagonal Gaussian are used. In the third experiment, a non-linear network with 4 hidden units and a diagonal Gaussian are used. The average log-likelihoods obtained on a test set of 617 points were -3.00, -2.95 and -2.39, respectively, for the three experiments. The estimated input and output pdfs for the last experiment are depicted in Figure 2, with white indicating high density and black low density.

The second series of experiments addresses the following question: if we use a Gaussian mixture with diagonal covariance matrix and most of the data is on a non-linear hypersurface $\Phi$ of dimension less than $n_x$, can the ANN's outputs separate the dimensions in which the data varies greatly (along $\Phi$) from those in which it almost doesn't (orthogonal to $\Phi$)? Intuitively, it appears that this will be the case, because the variance of outputs which don't vary with the data will be close to zero, while the determinant of the Jacobian is non-zero. The likelihood will correspondingly tend to infinity. The first experiment in this series verified that this was the case for linear networks. For data generated on a diagonal line in 2-dimensional space, the resulting network separated the "variant" dimension from the "invariant" dimension, with one of the output dimensions having near-zero variance, and the transformed data lying on a line parallel to the other output dimension. Experiments with non-linear networks suggest that with such networks, a solution that separates the variant dimensions from the invariant ones is not easily found by gradient ascent. However, it was possible to show that such a solution was at a maximum (possibly local) of the likelihood. A last experiment was designed to demonstrate this. The input data, shown in Figure 3, was artificially generated to make sure that a solution existed. The network had 2 inputs, 3 hidden units and 2 outputs.
The input samples and the input density corresponding to the weights in a maximum of the likelihood are displayed in Figure 3, along with the transformed input data for those weights. The points are projected by the ANN to a line parallel to the first output dimension. Any variation of the weights from that solution, in the direction of the gradient, even with a learning rate as small as $10^{-14}$, yielded either no perceptible improvement or a decrease in likelihood.

4 CONCLUSION

This paper has studied an architecture in which an ANN performs a non-linear transformation of the data to be analyzed, and the output of the ANN is modeled by a Gaussian mixture. The design of the ANN can incorporate prior knowledge about the problem, for example to modularize the task and perform an initial training of the sub-networks. In phoneme recognition experiments, an ANN/HMM hybrid based on this architecture performed better than the ANN alone or the HMM alone. In the second part of the paper, we have shown how the pdf of the inputs of the network relates to the pdf of the outputs of the network. The objective of this work is to perform density estimation with a non-local non-linear transformation of the data. Preliminary experiments showed that such estimation was possible and that it did improve the likelihood of the resulting pdf with respect to using only a Gaussian pdf. We also studied how this system could perform a non-linear analogue to principal components analysis.

References

Bengio Y. 1991. Artificial Neural Networks and their Application to Sequence Recognition. PhD Thesis, School of Computer Science, McGill University, Montreal, Canada.

Bengio Y., De Mori R., Flammia G., and Kompe R. 1992. Phonetically motivated acoustic parameters for continuous speech recognition using artificial neural networks. To appear in Speech Communication.

Bottou L. 1991. Une approche théorique à l'apprentissage connexioniste; applications à la reconnaissance de la parole.
Doctoral Thesis, Université de Paris Sud, France.

Bourlard H. and Wellekens C.J. 1989. Speech pattern discrimination and multilayer perceptrons. Computer, Speech and Language, vol. 3, pp. 1-19.

Bridle J.S. 1990. Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. Advances in Neural Information Processing Systems 2, (ed. D.S. Touretzky) Morgan Kaufmann Publ., pp. 211-217.

Levin E. 1990. Word recognition using hidden control neural architecture. Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Albuquerque, NM, April 90, pp. 433-436.

Silverman B.W. 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, NY.
1991
5
518
A Neural Network for Motion Detection of Drift-Balanced Stimuli

Hilary Tunley* School of Cognitive and Computing Sciences, Sussex University, Brighton, England.

Abstract

This paper briefly describes an artificial neural network for preattentive visual processing. The network is capable of determining image motion in a type of stimulus which defeats most popular methods of motion detection - a subset of second-order visual motion stimuli known as drift-balanced stimuli (DBS). The processing stages of the network described in this paper are integrable into a model capable of simultaneous motion extraction, edge detection, and the determination of occlusion.

1 INTRODUCTION

Previous methods of motion detection have generally been based on one of two underlying approaches: correlation; and gradient-filter. Probably the best known example of the correlation approach is the Reichardt movement detector [Reichardt 1961]. The gradient-filter (GF) approach underlies the work of Adelson and Bergen [Adelson 1985], and Heeger [Heeger 1988], amongst others. These motion-detecting methods cannot track DBS, because DBS lack essential components of information needed by such methods. Both the correlation and GF approaches impose constraints on the input stimuli. Throughout the image sequence, correlation methods require information that is spatiotemporally correlatable, and GF motion detectors assume temporally constant spatial gradients.

*Current address: Experimental Psychology, School of Biological Sciences, Sussex University.

The network discussed here does not impose such constraints. Instead, it extracts motion energy and exploits the spatial coherence of movement (defined more formally in the Gestalt theory of common fate [Koffka 1935]) to achieve tracking.
The remainder of this paper discusses DBS image sequences, then correlation methods, then GF methods in more detail, followed by a qualitative description of this network, which can process DBS.

2 SECOND-ORDER AND DRIFT-BALANCED STIMULI

There has been a lot of recent interest in second-order visual stimuli, and DBS in particular ([Chubb 1989, Landy 1991]). DBS are stimuli which give a clear percept of directional motion, yet Fourier analysis reveals a lack of coherent motion energy, or energy present in a direction opposing that of the displacement (hence the term 'drift-balanced'). Examples of DBS include image sequences in which the contrast polarity of edges present reverses between frames. A subset of DBS, which are also processed by the network, are known as micro-balanced stimuli (MBS). MBS contain no correlatable features and are drift-balanced at all scales. The MBS image sequences used for this work were created from a random-dot image in which an area is successively shifted by a constant displacement between each frame and simultaneously re-randomised.

3 EXISTING METHODS OF MOTION DETECTION

3.1 CORRELATION METHODS

Correlation methods perform a local cross-correlation in image space: the matching of features in local neighbourhoods (depending upon displacement/speed) between image frames underlies the motion detection. Examples of this method include [Van Santen 1985]. Most correlation models suffer from noise degradation in that any noise features extracted by the edge detection are available for spurious correlation. There has been much recent debate questioning the validity of correlation methods for modelling human motion detection abilities. In addition to DBS, there is also increasing psychophysical evidence ([Landy 1991, Mather 1991]) which correlation methods cannot account for.
These factors suggest that correlation techniques are not suitable for low-level motion processing where no information is available concerning what is moving (as with MBS). However, correlation is a more plausible method when working with higher-level constructs such as tracking in model-based vision (e.g. [Bray 1990]).

3.2 GRADIENT-FILTER (GF) METHODS

GF methods use a combination of spatial filtering to determine edge positions and temporal filtering to determine whether such edges are moving. A common assumption used by GF methods is that spatial gradients are constant. A recent method by Verri [Verri 1990], for example, argues that flow detection is based upon the notion of tracking spatial gradient magnitude and/or direction, and that any variation in the spatial gradient is due to some form of motion deformation - i.e. rotation, expansion or shear. Whilst for scenes containing smooth surfaces this is a valid approximation, it is not the case for second-order stimuli such as DBS.

4 THE NETWORK

A simplified diagram illustrating the basic structure of the network (based upon earlier work [Tunley 1990, Tunley 1991a, Tunley 1991b]) is shown in Figure 1 (the edge detection stage is discussed elsewhere [Tunley 1990, Tunley 1991b, Tunley 1992]).

Figure 1: The Network (Schematic). R: Receptor units - detect temporal changes in image intensity (polarity-independent); M: Motion units - detect distribution of change information; O: Occlusion units - detect changes in motion distribution; E: Edge units - detect edges directly from occlusion.

4.1 INPUT RECEPTOR UNITS

The units in the input layer respond to rectified local changes in image intensity over time. Each unit has a variable adaption rate, resulting in temporal sensitivity - a fast adaption rate gives a high temporal filtering rate. The main advantages of this temporal averaging processing are:

• Averaging removes the D.C. component of image intensity.
This eliminates problematic gain for motion in high-brightness areas of the image [Heeger 1988].

• The random nature of DBS/MBS generation cannot guarantee that each pixel change is due to local image motion. Local temporal averaging smooths the moving regions, thus creating a more coherently structured input for the motion units.

The input units have a pointwise rectifying response governed by an autoregressive filter of the following form:

(1)

where $\alpha \in [0,1]$ is a variable which controls the degree of temporal filtering of the change in input intensity, n and n-1 are successive image frames, and $R_n$ and $I_n$ are the filter output and input, respectively. The receptor unit responses for two different $\alpha$ values are shown in Figure 2. $\alpha$ can thus be used to alter the amount of motion blur produced for a particular frame rate, effectively producing a unit with differing velocity sensitivity.

Figure 2: Receptor Unit Response: (a) $\alpha$ = 0.3; (b) $\alpha$ = 0.7.

4.2 MOTION UNITS

These units determine the coherence of image changes indicated by corresponding receptor units. First-order motion produces highly-tuned motion activity - i.e. a strong response in a particular direction - whilst second-order motion results in less coherent output. The operation of a basic motion detector can be described by:

(2)

where M is the detector, (i', j') is a point in frame n at a distance d from (i, j), a point in frame n-1, in the direction k. Therefore, for coherent motion (i.e. first-order) in direction k at a speed of d units/frame, as $n \rightarrow \infty$:

(3)

The convergence of motion activity can be seen using an example. The stimulus sequence used consists of a bar of re-randomising texture moving to the right in front of a leftward-moving background with the same texture (i.e. random dots).
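The body of equation (1) did not survive in this copy, so the receptor recurrence below is an assumed reading consistent with the surrounding description: a rectified frame-to-frame intensity change passed through a leaky temporal average whose smoothing is controlled by alpha. It is a sketch, not the paper's exact filter.

```python
def receptor_response(frames, alpha):
    # Rectified, temporally averaged intensity change at one pixel.
    # alpha in [0, 1] controls the degree of temporal filtering;
    # this recurrence is an assumed reconstruction of equation (1).
    R = 0.0
    out = []
    prev = frames[0]
    for I in frames[1:]:
        change = abs(I - prev)                 # rectified change between frames
        R = alpha * R + (1.0 - alpha) * change # leaky temporal average
        out.append(R)
        prev = I
    return out
```

A constant input produces no response (the D.C. component is removed), while a single intensity step produces a response that decays over subsequent frames at a rate set by alpha.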
The bar motion is second-order as it contains no correlatable features, whilst the background consists of a simple first-order shifting of dots between frames. Figures 3, 4 and 5 show two-dimensional images of the leftward motion activity for the stimulus after 3, 4 and 6 frames respectively. The background, which has coherent leftward movement (at speed d units/frame), is gradually reducing to zero, whilst the micro-balanced rightward-moving bar remains active. The fact that a non-zero response is obtained for second-order motion suggests that this detector is second-order with regard to motion detection, according to the definition of Chubb and Sperling [Chubb 1989] (first-order detectors produce no response to MBS).

Figure 3: Leftward Motion Response to Third Frame in Sequence.

Figure 4: Leftward Motion Response to Fourth Frame.

Figure 5: Leftward Motion Response to Sixth Frame.

The motion units in this model are arranged on a hexagonal grid. This grid is known as a flow web, as it allows information to flow both laterally between units of the same type and between the different units in the model (motion, occlusion or edge). Each flow-web unit is represented by three variables - a position (a, b) and a direction k, which is evenly spaced between 0 and 360 degrees. In this model each k is an integer between 1 and $k_{max}$; the value of $k_{max}$ can be varied to vary the sensitivity of the units. A way of using first-order techniques to discriminate between first- and second-order motions is through the concept of coherence. At any point in the motion-processed images in Figures 3-5, a measure of the overall variation in motion activity can be used to distinguish between the motion of the micro-balanced bar and its background. The motion energy for a detector with displacement d and orientation k, at position (a, b), can be represented by $E_{abkd}$.
For each motion unit, responding over distance d, in each cluster the energy present can be defined as:

$$E_{abkdn} = \frac{\min_k(M_{abkd})}{M_{abkd}} \quad (4)$$

where $\min_k(x_k)$ is the minimum value of x found searching over k values. If motion is coherent, and of approximately the correct speed for the detector M, then as $n \rightarrow \infty$:

(5)

where $k_m$ is in the actual direction of the motion. In reality n need only approach around 5 for convergence to occur. Also, more importantly, under the same convergence conditions:

(6)

This is due to the fact that the minimum activation value in a group of first-order detectors at point (a, b) will be the same as the actual value in the direction $k_m$. By similar reasoning, for non-coherent motion, as $n \rightarrow \infty$:

$$E_{abkdn} \rightarrow 1 \quad \forall k \quad (7)$$

in other words, there is no peak of activity in a given direction. The motion energy is ambiguous at a large number of points in most images, except at discontinuities and on well-textured surfaces. A measure of motion coherence used for the motion units can now be defined as:

$$M_c(abkd) = \frac{E_{abkd}}{\sum_{k=1}^{k_{max}} E_{abkd}} \quad (8)$$

For coherent motion in direction $k_m$, as $n \rightarrow \infty$:

(9)

Whilst for second-order motion, also as $n \rightarrow \infty$:

(10)

Using this approach the total $M_c$ activity at each position - regardless of coherence, or lack of it - is unity. Motion energy is the same in all moving regions; the difference is in the distribution, or tuning, of that energy. Figures 6, 7 and 8 show how motion coherence allows the flow-web structure to reveal the presence of motion in micro-balanced areas whilst not affecting the easily detected background motion for the stimulus.

Figure 6: Motion Coherence Response to Third Frame

Figure 7: Motion Coherence Response to Fourth Frame

Figure 8: Motion Coherence Response to Sixth Frame

4.3 OCCLUSION UNITS

These units identify discontinuities in second-order motion, which are vitally important when computing the direction of that motion.
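The coherence computation of equations (4) and (8) can be sketched as follows. Treating M as a mismatch-style measure that is smallest in the true direction of motion is our reading of the surrounding text, and the activity values below are invented for illustration.

```python
def motion_coherence(M):
    # M[k]: accumulated motion activity for each of the kmax directions,
    # assumed strictly positive. Equation (4): E_k = min_k'(M_k') / M_k.
    # Equation (8) then normalises so the coherences sum to one.
    m = min(M)
    E = [m / Mk for Mk in M]
    total = sum(E)
    return [e / total for e in E]

# Coherent motion: one direction has much lower M, so coherence peaks there.
coherent = motion_coherence([0.1, 5.0, 6.0, 7.0, 5.0, 6.0])
# Non-coherent (second-order) motion: near-equal M, coherence ~ 1/kmax.
uniform = motion_coherence([3.0] * 6)
```

This reproduces the behaviour the text describes: the total coherence at a position is unity regardless of coherence, and only the tuning across directions differs between first- and second-order regions.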
They determine spatial and temporal changes in motion coherence and can process single or multiple motions at each image point. Established and newly-activated occlusion units work, through a gating process, to enhance continuously-displacing surfaces, utilising the concept of visual inertia. The implementation details of the occlusion stage of this model are discussed elsewhere [Tunley 1992], but some output from the occlusion units for the above second-order stimulus is shown in Figures 9 and 10. The figures show how the edges of the bar can be determined.

Figure 9: Occluding Motion Information: Occlusion activity produced by an increase in motion coherence activity.

Figure 10: Occluding Motion Information: Occlusion activity produced by a decrease in motion activity at a point. Some spurious activity is produced due to the random nature of the second-order motion information.

References

[Adelson 1985] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. 2, 1985.

[Bray 1990] A.J. Bray. Tracking objects using image disparities. Image and Vision Computing, 8, 1990.

[Chubb 1989] C. Chubb and G. Sperling. Second-order motion perception: Space/time separable mechanisms. In Proc. Workshop on Visual Motion, Irvine, CA, USA, 1989.

[Heeger 1988] D.J. Heeger. Optical flow using spatiotemporal filters. Int. J. Comp. Vision, 1, 1988.

[Koffka 1935] K. Koffka. Principles of Gestalt Psychology. Harcourt Brace, 1935.

[Landy 1991] M.S. Landy, B.A. Dosher, G. Sperling and M.E. Perkins. The kinetic depth effect and optic flow II: First- and second-order motion. Vis. Res. 31, 1991.

[Mather 1991] G. Mather. Personal communication.

[Reichardt 1961] W. Reichardt. Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. In W. Rosenblith, editor, Sensory Communications. Wiley NY, 1961.
[Van Santen 1985] J.P.H. Van Santen and G. Sperling. Elaborated Reichardt detectors. J. Opt. Soc. Am. 2, 1985.

[Tunley 1990] H. Tunley. Segmenting moving images. In Proc. Int. Neural Network Conf. (INNC90), Paris, France, 1990.

[Tunley 1991a] H. Tunley. Distributed dynamic processing for edge detection. In Proc. British Machine Vision Conf. (BMVC91), Glasgow, Scotland, 1991.

[Tunley 1991b] H. Tunley. Dynamic segmentation and optic flow extraction. In Proc. Int. Joint Conf. Neural Networks (IJCNN91), Seattle, USA, 1991.

[Tunley 1992] H. Tunley. Second-order motion processing: A distributed approach. CSRP 211, School of Cognitive and Computing Sciences, University of Sussex (forthcoming).

[Verri 1990] A. Verri, F. Girosi and V. Torre. Differential techniques for optic flow. J. Opt. Soc. Am. 7, 1990.
1991
50
519
Generalization Performance in PARSEC-A Structured Connectionist Parsing Architecture

Ajay N. Jain* School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3890

ABSTRACT

This paper presents PARSEC - a system for generating connectionist parsing networks from example parses. PARSEC is not based on formal grammar systems and is geared toward spoken language tasks. PARSEC networks exhibit three strengths important for application to speech processing: 1) they learn to parse, and generalize well compared to hand-coded grammars; 2) they tolerate several types of noise; 3) they can learn to use multi-modal input. Presented are the PARSEC architecture and performance analyses along several dimensions that demonstrate PARSEC's features. PARSEC's performance is compared to that of traditional grammar-based parsing systems.

1 INTRODUCTION

While a great deal of research has been done developing parsers for natural language, adequate solutions for some of the particular problems involved in spoken language have not been found. Among the unsolved problems are the difficulty in constructing task-specific grammars, lack of tolerance to noisy input, and inability to effectively utilize non-symbolic information. This paper describes PARSEC - a system for generating connectionist parsing networks from example parses.

*Now with Alliant Techsystems Research and Technology Center (jain@rtc.atk.com).

Figure 1: PARSEC's high-level architecture

PARSEC networks exhibit three strengths:

• They automatically learn to parse, and generalize well compared to hand-coded grammars.
• They tolerate several types of noise without any explicit noise modeling.
• They can learn to use multi-modal input such as pitch in conjunction with syntax and semantics.

The PARSEC network architecture relies on a variation of supervised back-propagation learning.
The architecture differs from some other connectionist approaches in that it is highly structured, both at the macroscopic level of modules and at the microscopic level of connections. Structure is exploited to enhance system performance.1 Conference registration dialogs formed the primary development testbed for PARSEC. A separate speech recognition effort in conference registration provided data for evaluating noise tolerance and also provided an application for PARSEC in speech-to-speech translation (Waibel et al. 1991). PARSEC differs from early connectionist work in parsing (e.g. Fanty 1985; Selman 1985) in its emphasis on learning. It differs from recent connectionist approaches (e.g. Elman 1990; Miikkulainen 1990) in its emphasis on performance issues such as generalization and noise tolerance in real tasks. This paper presents the PARSEC architecture, its training algorithms, and performance analyses that demonstrate PARSEC's features.

2 PARSEC ARCHITECTURE

The PARSEC architecture is modular and hierarchical. Figure 1 shows the high-level architecture. PARSEC can learn to parse complex English sentences including multiple clauses, passive constructions, center-embedded constructions, etc. The input to PARSEC is presented sequentially, one word at a time. PARSEC produces a case-based representation of a parse as the input sentence develops.

1PARSEC is a generalization of a previous connectionist parsing architecture (Jain 1991). For a detailed exposition of PARSEC, please refer to Jain's PhD thesis (in preparation).
Figure 2: Basic structure of a PARSEC module

The parse for the sentence "I will send you a form immediately" is:

([statement]
 ([clause]
  ([agent] I)
  ([action] will send)
  ([recipient] you)
  ([patient] a form)
  ([time] immediately)))

Input words are represented as binary feature patterns (primarily syntactic with some semantic features). These feature representations are hand-crafted. Each module of PARSEC can perform either a transformation or a labeling of its input. The output function of each module is represented across localist connectionist units. The actual transformations are made using non-connectionist subroutines.2 Figure 2 shows the basic structure of a PARSEC module. The bold ovals contain units that learn via back-propagation. There are four steps in generating a PARSEC network: 1) create an example parse file; 2) define a lexicon; 3) train the six modules; 4) assemble the full network. Of these, only the first two steps require substantial human effort, and this effort is small relative to that required for writing a grammar by hand. Training and assembly are automatic.

2.1 PREPROCESSING MODULE

This module marks alphanumeric sequences, which are replaced by a single special marker word. This prevents long alphanumeric strings from overwhelming the length constraint on phrases. Note that this is not always a trivial task, since words such as "a" and "one" are lexically ambiguous.

INPUT: "It costs three hundred twenty one dollars."
OUTPUT: "It costs ALPHANUM dollars."

2These transformations could be carried out by connectionist networks, but at a substantial computational cost for training and a risk of undergeneralization.

2.2 PHRASE MODULE

The Phrase module processes the evolving output of the Prep module into phrase blocks. Phrase blocks are non-recursive contiguous pieces of a sentence.
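The alphanumeric marking of Section 2.1 can be sketched as below. The number-word list is a small illustrative set (not PARSEC's lexicon), and the lexical ambiguity of words like "a" and "one" noted above is not resolved here: this sketch simply treats "one" as always numeric.

```python
# Illustrative number-word vocabulary; the real system's list is larger.
NUMBER_WORDS = {"one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "twenty", "hundred", "thousand"}

def mark_alphanum(words):
    # Collapse each maximal run of number words into one ALPHANUM marker,
    # so long spoken numbers do not overwhelm the phrase length constraint.
    out, i = [], 0
    while i < len(words):
        if words[i] in NUMBER_WORDS:
            j = i
            while j < len(words) and words[j] in NUMBER_WORDS:
                j += 1
            out.append("ALPHANUM")
            i = j
        else:
            out.append(words[i])
            i += 1
    return out
```

For example, "it costs three hundred twenty one dollars" becomes "it costs ALPHANUM dollars", matching the module's input/output behaviour shown above.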
They correspond to simple noun phrases and verb groups.3 Phrase blocks are represented as grouped sets of units in the network. Phrase blocks are denoted by brackets in the following:

INPUT: "I will send you a new form in the morning."
OUTPUT: "[I] [will send] [you] [a new form] [in the morning]."

2.3 CLAUSE MAPPING MODULE

The Clause module uses the output of the Phrase module as input and assigns the clausal structure. The result is an unambiguous bracketing of the phrase blocks that is used to transform the phrase block representation into representations for each clause:

INPUT: "[I] [would like] [to register] [for the conference]."
OUTPUT: "([I] [would like]) ([to register] [for the conference])."

2.4 ROLE LABELING MODULE

The Roles module associates case-role labels with each phrase block in each clause. It also denotes attachment structure for prepositional phrases ("MOD-1" indicates that the current phrase block modifies the previous one):

INPUT:  "([The titles] [of papers] [are printed] [in the forms])"
OUTPUT: "([The titles] [of papers] [are printed] [in the forms])"
           PATIENT      MOD-1       ACTION        LOCATION

2.5 INTERCLAUSE AND MOOD MODULES

The Interclause and Mood modules are similar to the Roles module. They both assign labels to constituents, except they operate at higher levels. The Interclause module indicates, for example, subordinate and relative clause relationships. The Mood module indicates the overall sentence mood (declarative or interrogative in the networks discussed here).

3 GENERALIZATION

Generalization in large connectionist networks is a critical issue. This is especially the case when training data is limited. For the experiments reported here, the training data was limited to twelve conference registration dialogs containing approximately 240 sentences with a vocabulary of about 400 words.
Despite the small corpus, a large number of English constructs were covered (including passives, conditional constructions, center-embedded relative clauses, etc.). A set of 117 disjoint sentences was obtained to test coverage. The sentences were generated by a group of people different from those that developed the 12 dialogs. These sentences used the same vocabulary as the 12 dialogs.

3Abney has described a similar linguistic unit called a chunk (Abney 1991).

Generalization Performance in PARSEC - A Structured Connectionist Parsing Architecture

3.1 EARLY PARSEC VERSIONS

Straightforward training of a PARSEC network resulted in poor generalization performance, with only 16% of the test sentences being parsed correctly. One of the primary sources of error was positional sensitivity acquired during training of the three transformational modules. In the Phrase module, for example, each of the phrase boundary detector units was supposed to learn to indicate a boundary between words in specific positions. Each of the units of the Phrase module is performing essentially the same job, but the network doesn't "know" this and cannot learn it from a small sample set. By sharing the connection weights across positions, the network is forced to be position insensitive (similar to TDNNs, as in Waibel et al. 1989). After modifying PARSEC to use shared weights and localized connectivity in the lower three modules, generalization performance increased to 27%.

The primary source of error shifted to the Roles module. Part of the problem could be ascribed to the representation of phrase blocks. They were represented across rows of units that each define a word. In the phrase block "the big dog," "dog" would have appeared in row 3. This changes to row 2 if the phrase block is just "the dog." A network had to learn to respond to the heads of phrase blocks even though they moved around.
An augmented phrase block representation in which the last word of the phrase block was copied to position 0 solved this problem. With the augmented phrase block representation coupled with the previous improvements, PARSEC achieved 44% coverage.

3.2 PARSEC: FINAL VERSION

The final version of PARSEC uses all of the previous enhancements plus a technique called Programmed Constructive Learning (PCL). In PCL, hidden units are added to a network one at a time as they are needed. Also, there is a specific series of hidden unit types for each module of a PARSEC network. The hidden unit types progress from being highly local in input connectivity to being more broad. This forces the networks to learn general predicates before specializing and using possibly unreliable information. The final version of PARSEC was used to generate another parsing network.4 Its performance was 67% (78% including near-misses). Table 1 summarizes these results.

3.3 COMPARISON TO HAND-CODED GRAMMARS

PARSEC's performance was compared to that of three independently constructed grammars. Two of the grammars were commissioned as part of a contest where the first prize ($700) went to the grammar-writer with the best coverage of the test set and the second prize ($300) went to the other grammar writer.5 The third grammar was independently constructed as part of the JANUS system (described later). The contest grammars achieved 25% and 38% coverage, and the other grammar achieved just 5% coverage of the test set (see Table 1). All of the hand-coded grammars produced NIL parses for the majority of test sentences. In the table, numbers in parentheses include near-misses.

4This final parsing network was not trained all the way to completion. Training to completion hurts generalization performance.

5Some contest participants had 8 weeks to complete their grammars, and they both spent over 60 hours doing so. The grammar writers work in Machine Translation and Computational Linguistics and were quite experienced.

Table 1: PARSEC's comparative performance

            PARSEC V4    Grammar 1    Grammar 2    Grammar 3
Coverage    67% (78%)    38% (39%)    25% (26%)    5% (5%)
Noise       77%          --           --           70%
Ungram.     66%          34%          38%          2%

PARSEC's performance was substantially better than the best of the hand-coded grammars. PARSEC has a systematic advantage in that it is trained on the incremental parsing task and is exposed to partial sentences during training. Also, PARSEC's constructive learning approach coupled with weight sharing emphasizes local constraints wherever possible, and distant variations in input structure do not adversely affect parsing.

4 NOISE TOLERANCE

The second area of performance analysis for PARSEC was noise tolerance. Preliminary comparisons between PARSEC and a rule-based parser in the JANUS speech-to-speech translation system were promising (Waibel et al. 1991). More extensive evaluations corroborated the early observations. In addition, PARSEC was evaluated on synthetic ungrammatical sentences. Experiments on spontaneous speech using DARPA's ATIS task are ongoing.

4.1 NOISE IN SPEECH-TO-SPEECH TRANSLATION

In the JANUS system, speech recognition is provided by an LPNN (Tebelskis et al. 1991), parsing can be done by a PARSEC network or an LR parser, translation is accomplished by processing the interlingual output of the parser using a standard language generation module, and speech generation is provided by off-the-shelf devices. The system can be run using a single (often noisy) hypothesis from the LPNN or a ranked list of hypotheses. When run in single-hypothesis mode, JANUS using PARSEC correctly translated 77% of the input utterances, and JANUS using the LR parser (Grammar 3 in the table) achieved 70%. The PARSEC network was able to parse a number of incorrect recognitions well enough that a successful translation resulted. However, when run in multi-hypothesis mode, the LR parser achieved 86% compared to PARSEC's 80%.
The LR parser utilized a very tight grammar and was able to robustly reject hypotheses that deviated from expectations. This allowed the LR parser to "choose" the correct hypothesis more often than PARSEC. PARSEC tended to accept noisy utterances that produced incorrect translations. Of course, given that the PARSEC network's coverage was so much higher than that of the grammar used by the LR parser, this result is not surprising.

4.2 SYNTHETIC UNGRAMMATICALITY

Using the same set of grammars for comparison, the parsers were tested on ungrammatical input from the CR task. These sentences were corrupted versions of sentences used for training. Training sentences were used to decouple the effects of noise from coverage. Table 1 shows the results. They essentially mirror those of the coverage tests. PARSEC is substantially less sensitive to such effects as subject/verb disagreement, missing determiners, and other non-catastrophic irregularities. Some researchers have augmented grammar-based systems to be more tolerant of noise (e.g. Saito and Tomita 1988). However, the PARSEC network in the test reported here was trained only on grammatical input and still produced a degree of noise tolerance for free. In the same way that one can explicitly build noise tolerance into a grammar-based system, one can train a PARSEC network on input that includes specific types of noise. The result should be some noise tolerance beyond what was explicitly trained.

[Figure 3: Smoothed pitch contours for "Okay." (duration = 409.1 msec, mean freq = 113.2) and "Okay?" (duration = 377.0 msec, mean freq = 137.3).]
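Synthetic corruption of the kind used in Section 4.2 can be sketched as follows. The specific corruption rules are our own guesses at the error types the text mentions (missing determiners, subject/verb disagreement); the paper does not give its exact corruption procedure.

```python
# Sketch of corrupting grammatical training sentences into synthetic
# ungrammatical test input. The rule sets below are illustrative guesses,
# not the paper's actual procedure.

DETERMINERS = {"a", "an", "the"}
DISAGREE = {"is": "are", "are": "is", "has": "have", "have": "has"}

def corrupt(sentence, kind):
    words = sentence.split()
    if kind == "drop_determiners":
        # Missing-determiner errors: delete all determiners.
        words = [w for w in words if w not in DETERMINERS]
    elif kind == "subject_verb_disagreement":
        # Swap verb number to force subject/verb disagreement.
        words = [DISAGREE.get(w, w) for w in words]
    return " ".join(words)

print(corrupt("the titles of the papers are printed", "drop_determiners"))
# -> "titles of papers are printed"
print(corrupt("the form is in the office", "subject_verb_disagreement"))
# -> "the form are in the office"
```

A parser trained only on grammatical sentences can then be scored on such corrupted input, which is the "noise tolerance for free" measurement described above.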
5 MULTI-MODAL INPUT

A somewhat elusive goal of spoken language processing has been to utilize information from the speech signal beyond just word sequences in higher-level processing. It is well known that humans use such information extensively in conversation. Consider the utterances "Okay." and "Okay?" Although semantically distinct, they cannot be distinguished based on word sequence, but pitch contours contain the necessary information (Figure 3). In a grammar-based system, it is difficult to incorporate real-valued vector input in a useful way. In a PARSEC network, the vector is just another set of input units. The Mood module of a PARSEC network was augmented to contain an additional set of units that contained pitch information. The pitch contours were smoothed output from the OGI Neural Network Pitch Tracker (Barnard et al. 1991). PARSEC added another hidden unit to utilize the new information.

The trained PARSEC network was tolerant of speaker variation, gender variation, utterance variation (length and content), and a combination of these factors. Although not explicitly trained to do so, the network correctly processed sentences that were grammatical questions but had been pronounced with the declining pitch of a typical statement.

Within the JANUS system, the augmented PARSEC network brings new functionality. Intonation affects translation in JANUS when using the augmented PARSEC network. The sentence "This is the conference office." is translated to "Kaigi jimukyoku desu." "This is the conference office?" is translated to "Kaigi jimukyoku desuka?" This required no changes in the other modules of the JANUS system. It should also be possible to use other types of information from the speech signal to aid in robust parsing (e.g. energy patterns to disambiguate clausal structure).

6 CONCLUSION

PARSEC is a system for generating connectionist parsing networks from training examples.
Experiments using a conference registration conversational task showed that PARSEC: 1) learns and generalizes well compared to hand-coded grammars; 2) tolerates noise: recognition errors and ungrammaticality; 3) successfully learns to combine intonational information with syntactic/semantic information. Future work with PARSEC will be continued by extending it to new languages, larger English tasks, and speech tasks that involve tighter coupling between speech recognition and parsing. There are numerous issues in NLP that will be addressed in the context of these research directions.

Acknowledgements

The author gratefully acknowledges the support of DARPA, the National Science Foundation, ATR Interpreting Telephony Laboratories, NEC Corp., and Siemens Corp.

References

Abney, S. P. 1991. Parsing by chunks. In Principle-Based Parsing, ed. R. Berwick, S. P. Abney, C. Tenny. Kluwer Academic Publishers.

Barnard, E., R. A. Cole, M. P. Yea, F. A. Alleva. 1991. Pitch detection with a neural-net classifier. IEEE Transactions on Signal Processing 39(2): 298-307.

Elman, J. L. 1989. Representation and Structure in Connectionist Networks. Tech. Rep. CRL 8903. Center for Research in Language, University of California, San Diego.

Fanty, M. 1985. Context Free Parsing in Connectionist Networks. Tech. Rep. TR 174. Computer Science Department, University of Rochester.

Jain, A. N., and A. H. Waibel. 1990. Robust connectionist parsing of spoken language. In Proceedings of the 1990 IEEE International Conference on Acoustics, Speech, and Signal Processing.

Jain, A. N. In preparation. PARSEC: A Connectionist Learning Architecture for Parsing Speech. PhD thesis, School of Computer Science, Carnegie Mellon University.

Miikkulainen, R. 1990. A PDP architecture for processing sentences with relative clauses. In Proceedings of the 13th Annual Conference of the Cognitive Science Society.

Saito, H., and M. Tomita. 1988. Parsing noisy sentences.
In Proceedings of INFO JAPAN '88: International Conference of the Information Processing Society of Japan, 553-59.

Selman, B. 1985. Rule-Based Processing in a Connectionist System for Natural Language Understanding. Ph.D. thesis, University of Toronto. Available as Tech. Rep. CSRI-168.

Tebelskis, J., A. Waibel, B. Petek, and O. Schmidbauer. 1991. Continuous speech recognition using linked predictive neural networks. In Proceedings of the 1991 IEEE International Conference on Acoustics, Speech, and Signal Processing.

Waibel, A., T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. 1989. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing 37(3): 328-339.

Waibel, A., A. N. Jain, A. E. McNair, H. Saito, A. G. Hauptmann, and J. Tebelskis. 1991. JANUS: A speech-to-speech translation system using connectionist and symbolic processing strategies. In Proceedings of the 1991 IEEE International Conference on Acoustics, Speech, and Signal Processing.
1991
Hierarchies of adaptive experts

Michael I. Jordan and Robert A. Jacobs
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

In this paper we present a neural network architecture that discovers a recursive decomposition of its input space. Based on a generalization of the modular architecture of Jacobs, Jordan, Nowlan, and Hinton (1991), the architecture uses competition among networks to recursively split the input space into nested regions and to learn separate associative mappings within each region. The learning algorithm is shown to perform gradient ascent in a log likelihood function that captures the architecture's hierarchical structure.

1 INTRODUCTION

Neural network learning architectures such as the multilayer perceptron and adaptive radial basis function (RBF) networks are a natural nonlinear generalization of classical statistical techniques such as linear regression, logistic regression and additive modeling. Another class of nonlinear algorithms, exemplified by CART (Breiman, Friedman, Olshen, & Stone, 1984) and MARS (Friedman, 1990), generalizes classical techniques by partitioning the training data into non-overlapping regions and fitting separate models in each of the regions. These two classes of algorithms extend linear techniques in essentially independent directions; thus it seems worthwhile to investigate algorithms that incorporate aspects of both approaches to model estimation. Such algorithms would be related to CART and MARS as multilayer neural networks are related to linear statistical techniques. In this paper we present a candidate for such an algorithm. The algorithm that we present partitions its training data in the manner of CART or MARS, but it does so in a parallel, on-line manner that can be described as the stochastic optimization of an appropriate cost functional.
Why is it sensible to partition the training data and to fit separate models within each of the partitions? Essentially this approach enhances the flexibility of the learner and allows the data to influence the choice between local and global representations. For example, if the data suggest a discontinuity in the function being approximated, then it may be more sensible to fit separate models on both sides of the discontinuity than to adapt a global model across the discontinuity. Similarly, if the data suggest a simple functional form in some region, then it may be more sensible to fit a global model in that region than to approximate the function locally with a large number of local models. Although global algorithms such as backpropagation and local algorithms such as adaptive RBF networks have some degree of flexibility in the tradeoff that they realize between global and local representation, they do not have the flexibility of adaptive partitioning schemes such as CART and MARS. In a previous paper we presented a modular neural network architecture in which a number of "expert networks" compete to learn a set of training data (Jacobs, Jordan, Nowlan & Hinton, 1991). As a result of the competition, the architecture adaptively splits the input space into regions, and learns separate associative mappings within each region. The architecture that we discuss here is a generalization of the earlier work and arises from considering what would be an appropriate internal structure for the expert networks in the competing experts architecture. In our earlier work, the expert networks were multilayer perceptrons or radial basis function networks. If the arguments in support of data partitioning are valid, however, then they apply equally well to a region in the input space as they do to the entire input space, and therefore each expert should itself be composed of competing sub-experts.
Thus we are led to consider recursively-defined hierarchies of adaptive experts.

2 THE ARCHITECTURE

Figure 1 shows two hierarchical levels of the architecture. (We restrict ourselves to two levels throughout the paper to simplify the exposition; the algorithm that we develop, however, generalizes readily to trees of arbitrary depth.) The architecture has a number of expert networks that map from the input vector x to output vectors y_{ij}. There are also a number of gating networks that define the hierarchical structure of the architecture. There is a gating network for each cluster of expert networks and a gating network that serves to combine the outputs of the clusters. The output of the ith cluster is given by

    y_i = \sum_j g_{j|i} y_{ij}    (1)

where g_{j|i} is the activation of the jth output unit of the gating network in the ith cluster. The output of the architecture as a whole is given by

    y = \sum_i g_i y_i    (2)

where g_i is the activation of the ith output unit of the top-level gating network.

[Figure 1: Two hierarchical levels of adaptive experts.]

All of the expert networks and all of the gating networks have the same input vector. We assume that the outputs of the gating networks are given by the normalizing softmax function (Bridle, 1989):

    g_i = \frac{e^{s_i}}{\sum_j e^{s_j}}    (3)

and

    g_{j|i} = \frac{e^{s_{j|i}}}{\sum_k e^{s_{k|i}}}    (4)

where s_i and s_{j|i} are the weighted sums arriving at the output units of the corresponding gating networks. The gating networks in the architecture are essentially classifiers that are responsible for partitioning the input space. Their choice of partition is based on the ability of the expert networks to model the input-output functions within their respective regions (as quantified by their posterior probabilities; see below). The nested arrangement of gating networks in the architecture (cf.
Figure 1) yields a nested partitioning much like that found in CART or MARS. The architecture is a more general mathematical object than a CART or MARS tree, however, given that the gating networks have non-binary outputs and given that they may form nonlinear decision surfaces.

3 THE LEARNING ALGORITHM

We derive a learning algorithm for our architecture by developing a probabilistic model of a tree-structured estimation problem. The environment is assumed to be characterized by a finite number of stochastic processes that map input vectors x into output vectors y*. These processes are partitioned into nested collections of processes that have commonalities in their input-output parameterizations. Data are assumed to be generated by the model in the following way. For any given x, collection i is chosen with probability g_i, and a particular process j is then chosen with conditional probability g_{j|i}. The selected process produces an output vector y* according to the probability density f(y* | x; y_{ij}), where y_{ij} is a vector of parameters. The total probability of generating y* is:

    P(y^* \mid x) = \sum_i g_i \sum_j g_{j|i} f(y^* \mid x; y_{ij}),    (5)

where g_i, g_{j|i}, and y_{ij} are unknown nonlinear functions of x. Treating the probability P(y* | x) as a likelihood function in the unknown parameters g_i, g_{j|i}, and y_{ij}, we obtain a learning algorithm by using gradient ascent to maximize the log likelihood. Let us assume that the probability density associated with the residual vector (y* - y_{ij}) is the multivariate normal density, where y_{ij} is the mean of the jth process of the ith cluster (or the (i, j)th expert network) and \Sigma_{ij} is its covariance matrix. Ignoring the constant terms in the normal density, the log likelihood is:

    \ln L = \ln \sum_i g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-\frac{1}{2}} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}.    (6)

We define the posterior probability

    h_i = \frac{g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-\frac{1}{2}} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}{\sum_i g_i \sum_j g_{j|i} |\Sigma_{ij}|^{-\frac{1}{2}} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}},    (7)

which is the posterior probability that a process in the ith cluster generates a particular target vector y*.
We also define the conditional posterior probability:

    h_{j|i} = \frac{g_{j|i} |\Sigma_{ij}|^{-\frac{1}{2}} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}}{\sum_j g_{j|i} |\Sigma_{ij}|^{-\frac{1}{2}} e^{-\frac{1}{2}(y^* - y_{ij})^T \Sigma_{ij}^{-1} (y^* - y_{ij})}},    (8)

which is the conditional posterior probability that the jth expert in the ith cluster generates a particular target vector y*. Differentiating (6), and using Equations 3, 4, 7, and 8, we obtain the partial derivative of the log likelihood with respect to the output of the (i, j)th expert network:

    \frac{\partial \ln L}{\partial y_{ij}} = h_i h_{j|i} \Sigma_{ij}^{-1} (y^* - y_{ij}).    (9)

This partial derivative is a supervised error term modulated by the appropriate posterior probabilities. Similarly, the partial derivatives of the log likelihood with respect to the weighted sums at the output units of the gating networks are given by:

    \frac{\partial \ln L}{\partial s_i} = h_i - g_i    (10)

and

    \frac{\partial \ln L}{\partial s_{j|i}} = h_i (h_{j|i} - g_{j|i}).    (11)

These derivatives move the prior probabilities associated with the gating networks toward the corresponding posterior probabilities. It is interesting to note that the posterior probability h_i appears in the gradient for the experts in the ith cluster (Equation 9) and in the gradient for the gating network in the ith cluster (Equation 11). This ties experts within a cluster to each other and implies that experts within a cluster tend to learn similar mappings early in the training process. They differentiate later in training as the probabilities associated with the cluster to which they belong become larger. Thus the architecture tends to acquire coarse structure before acquiring fine structure. This feature of the architecture is significant because it implies a natural robustness to problems with overfitting in deep hierarchies. We have also found it useful in practice to obtain an additional degree of control over the coarse-to-fine development of the algorithm.
This is achieved with a heuristic that adjusts the learning rate at a given level of the tree as a function of the time-averaged entropy of the gating network at the next higher level of the tree:

    \mu_{\cdot|i}(t+1) = \alpha \mu_{\cdot|i}(t) + \beta \left( M_i + \sum_j g_{j|i} \ln g_{j|i} \right)

where M_i is the maximum possible entropy at level i of the tree. This equation has the effect that the networks at level i + 1 are less inclined to diversify if the superordinate cluster at level i has yet to diversify (where diversification is quantified by the entropy of the gating network).

4 SIMULATIONS

We present simulation results from an unsupervised learning task and two supervised learning tasks. In the unsupervised learning task, the problem was to extract regularities from a set of measurements of leaf morphology. Two hundred examples of maple, poplar, oak, and birch leaves were generated from the data shown in Table 1. The architecture that we used had two hierarchical levels, two clusters of experts, and two experts within each cluster.

Table 1: Data used to generate examples of leaves from four types of trees. The columns correspond to the type of tree; the rows correspond to the features of a tree's leaf. The table's entries give the possible values for each feature for each type of leaf. See Preston (1976).

            Maple       Poplar              Oak          Birch
    Length  3,4,5,6     1,2,3               5,6,7,8,9    2,3,4,5
    Width   3,4,5       1,2                 2,3,4,5      1,2,3
    Flare   0           0,1                 0            1
    Lobes   5           1                   7,9          1
    Margin  Entire      Crenate, Serrate    Entire       Doubly-Serrate
    Apex    Acute       Acute               Rounded      Acute
    Base    Truncate    Rounded             Cuneate      Rounded
    Color   Light       Yellow              Light        Dark

Each expert network was an auto-associator that maps forty-eight input units into forty-eight output units through a bottleneck of two hidden units. Within the experts, backpropagation was used to convert the derivatives in Equation 9 into changes to the weights. The gating networks at both levels were affine.
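The forward pass (Equations 1-4) and the posterior and gradient computations (Equations 7-11) can be sketched in a few lines. This is our own minimal NumPy illustration, with two clusters of two affine experts and identity covariance matrices as simplifying assumptions; it is not the authors' simulation code.

```python
# Two-level hierarchy of affine experts, following Eqs. (1)-(11).
# Identity covariances and all shapes/variable names are our assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
n_clusters, n_experts = 2, 2

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

# Affine parameters: expert weights W[i][j], top and per-cluster gating.
W = rng.normal(size=(n_clusters, n_experts, n_out, n_in + 1))
V_top = rng.normal(size=(n_clusters, n_in + 1))
V_c = rng.normal(size=(n_clusters, n_experts, n_in + 1))

def forward(x):
    xb = np.append(x, 1.0)                        # append bias input
    g = softmax(V_top @ xb)                       # Eq. (3): top gates g_i
    g_ji = np.array([softmax(V_c[i] @ xb)         # Eq. (4): g_{j|i}
                     for i in range(n_clusters)])
    y_ij = W @ xb                                 # expert outputs y_ij
    y_i = np.einsum('ij,ijk->ik', g_ji, y_ij)     # Eq. (1)
    y = g @ y_i                                   # Eq. (2)
    return y, g, g_ji, y_ij

def posteriors(y_star, g, g_ji, y_ij):
    # Gaussian likelihoods with identity covariance (our simplification).
    lik = np.exp(-0.5 * ((y_star - y_ij) ** 2).sum(axis=-1))
    joint = g[:, None] * g_ji * lik
    h_i = joint.sum(axis=1) / joint.sum()             # Eq. (7)
    h_ji = joint / joint.sum(axis=1, keepdims=True)   # Eq. (8)
    return h_i, h_ji

x = rng.normal(size=n_in)
y_star = rng.normal(size=n_out)
y, g, g_ji, y_ij = forward(x)
h_i, h_ji = posteriors(y_star, g, g_ji, y_ij)

# Gradients of the log likelihood (with identity covariance):
d_y = h_i[:, None, None] * h_ji[..., None] * (y_star - y_ij)  # Eq. (9)
d_s_top = h_i - g                                             # Eq. (10)
d_s_c = h_i[:, None] * (h_ji - g_ji)                          # Eq. (11)
```

Note how Eq. (10)'s gradient sums to zero, since both the posteriors h_i and the gates g_i are normalized: gradient ascent only redistributes probability mass among clusters.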
We found that the hierarchical architecture consistently discovers the decomposition of the data that preserves the natural classes of tree species (cf. Preston, 1976). That is, within one cluster of expert networks, one expert learns the maple training patterns and the other expert learns the oak patterns. Within the other cluster, one expert learns the poplar patterns and the other expert learns the birch patterns. Moreover, due to the use of the autoassociator experts, the hidden unit representations within each expert are principal component decompositions that are specific to a particular species of leaf.

We have also studied a supervised learning problem in which the learner must predict the grayscale pixel values in noisy images of human faces based on the values of the pixels in surrounding 5x5 masks. There were 5000 masks in the training set. We used a four-level binary tree, with affine experts (each expert mapped from twenty-five input units to a single output unit) and affine gating networks. We compared the performance of the hierarchical architecture to CART and to backpropagation.1 In the case of backpropagation and the hierarchical architecture, we utilized cross-validation (using a test set of 5000 masks) to stop the iterative training procedure. As shown in Figure 2, the performance of the hierarchical architecture is comparable to backpropagation and better than CART.

Finally, we also studied a system identification problem involving learning the simulated forward dynamics of a four-joint, three-dimensional robot arm. The task was to predict the joint accelerations from the joint positions, sines and cosines of joint positions, joint velocities, and torques. There were 6000 data items in the training set. We used a four-level tree with trinary splits at the top two levels, and binary splits at lower levels.
The tree had affine experts (each expert mapped from twenty input units to four output units) and affine gating networks. We once again compared the performance of the hierarchical architecture to CART and to backpropagation. In the case of backpropagation and the hierarchical architecture, we utilized a conjugate gradient technique, and halted the training process after 1000 iterations. In the case of CART, we ran the algorithm four separate times on the four output variables. Two of these runs produced 100 percent relative error, a third produced 75 percent relative error, and the fourth (the most proximal joint acceleration) yielded 46 percent relative error, which is the value we report in Figure 3. As shown in the figure, the hierarchical architecture and backpropagation achieve comparable levels of performance.

1Fifty hidden units were used in the backpropagation network, making the number of parameters in the backpropagation network and the hierarchical network roughly comparable.

[Figure 2: The results on the image restoration task. The dependent measure is relative error on the test set (cf. Breiman, et al., 1984).]

5 DISCUSSION

In this paper we have presented a neural network learning algorithm that captures aspects of the recursive approach to function approximation exemplified by algorithms such as CART and MARS. The results obtained thus far suggest that the algorithm is computationally viable, comparing favorably to backpropagation in terms of generalization performance on a set of small and medium-sized tasks. The algorithm also has a number of appealing theoretical properties when compared to backpropagation: In the affine case, it is possible to show that (1) no backward propagation of error terms is required to adjust parameters in multi-level trees (cf.
the activation-dependence of the multiplicative terms in Equations 9 and 11), and (2) all of the parameters in the tree are maximum likelihood estimators. The latter property suggests that the affine architecture may be a particularly suitable architecture in which to explore the effects of priors on the parameter space (cf. Nowlan & Hinton, this volume).

[Figure 3: The results on the system identification task.]

Acknowledgements

This project was supported by grant IRI-9013991 awarded by the National Science Foundation, by a grant from Siemens Corporation, by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from the Human Frontier Science Program, and by an NSF Presidential Young Investigator Award to the first author.

References

Breiman, L., Friedman, J.H., Olshen, R.A., & Stone, C.J. (1984) Classification and Regression Trees. Belmont, CA: Wadsworth International Group.

Bridle, J. (1989) Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulie & J. Herault (Eds.), Neuro-computing: Algorithms, Architectures, and Applications. New York: Springer-Verlag.

Friedman, J.H. (1990) Multivariate adaptive regression splines. The Annals of Statistics, 19, 1-141.

Jacobs, R.A., Jordan, M.I., Nowlan, S.J., & Hinton, G.E. (1991) Adaptive mixtures of local experts. Neural Computation, 3, 79-87.

Preston, R.J. (1976) North American Trees (Third Edition). Ames, IA: Iowa State University Press.
1991
Optical Implementation of a Self-Organizing Feature Extractor

Dana Z. Anderson*, Claus Benkert, Verena Hebler, Ju-Seog Jang, Don Montgomery, and Mark Saffman
Joint Institute for Laboratory Astrophysics, University of Colorado, and the Department of Physics, University of Colorado, Boulder, Colorado 80309-0440

Abstract

We demonstrate a self-organizing system based on photorefractive ring oscillators. We employ the system in two ways that can both be thought of as feature extractors; one acts on a set of images exposed repeatedly to the system strictly as a linear feature extractor, and the other serves as a signal demultiplexer for fiber optic communications. Both systems implement unsupervised competitive learning embedded within the mode interaction dynamics between the modes of a set of ring oscillators. After a training period, the modes of the rings become associated with the different image features or carrier frequencies within the incoming data stream.

1 Introduction

Self-organizing networks (Kohonen, Hertz, Domany) discover features or qualities about their input environment on their own; they learn without a teacher making explicit what is to be learned. This property reminds us of several ubiquitous behaviors we see in the physical and natural sciences such as pattern formation, morphogenesis and phase transitions (Domany). While in the natural case one is usually satisfied simply to analyze and understand the behavior of a self-organizing system, we usually have a specific function in mind that we wish a neural network to perform. That is, in the network case we wish to synthesize a system that will perform the desired function. Self-organizing principles are particularly valuable when one does not know ahead of time exactly what to expect from the input to be processed and when it is some property of the input itself that is of interest.
For example, one may wish to determine some quality about the input statistics; this one can often do by applying self-organization principles. However, when one wishes to attribute some meaning to the data, self-organization principles are probably poor candidates for the task.

It is the behavioral similarity between self-organizing network models and physical systems that has led us to investigate the possibility of implementing a self-organizing network function by designing the dynamics for a set of optical oscillators. Modes of sets of oscillators undergo competition (Anderson, Benkert) much like that employed in competitive learning network models. Using photorefractive elements, we have tailored the dynamics of the mode interaction to perform a learning task. A physical optical implementation of self-organized learning serves two functions. Unlike a computer simulation, the physical system must obey certain physical laws just like a biological system does. We have in mind the consequences of energy conservation, finite gain and the effects of noise. Therefore, we might expect to learn something about general principles applicable to biological systems from our physical versions. Second, there are some applications where an optical system serves as an ideal "front end" to signal processing.

Here we take a commonly used supervised approach for extracting features from a stream of images and demonstrate how this task can be done in a self-organizing manner. The conventional approach employs a holographic correlator (Vander Lugt). In this technique, various patterns are chosen for recognition by the optical system and then recorded in holographic media using angle-encoded reference beams.
When a specific pattern is presented to the holographic correlator, the output is determined by the correlation between the presented pattern and the patterns that have been recorded as holograms during the 'learning phase'. The angles and intensities of the reconstructed reference beams identify the features present in the pattern. Because the processing time scale in holographic systems is determined by the time necessary for light to scatter off of the holographic grating, the optical correlation takes place virtually instantaneously. It is the speed of this correlation that makes the holographic approach so interesting. While its speed is an asset, the holographic correlator approach to feature extraction from images is a supervised approach to the problem: an external supervisor must choose the relevant image features to store in the correlator holograms. Moreover, the supervisor must provide an angle-encoded reference beam for each stored feature. For many applications, it is desirable to have an adaptive system that has the innate capacity to discover, in an unsupervised fashion, the underlying structure within the input data. A photorefractive ring resonator circuit that learns to extract spatially orthogonal features from images is illustrated schematically in figure 1. The resonator rings in figure 1 are constructed physically from optical fiber cables. The resonator is self-starting and is pumped by images containing the input data (White). The resonator learns to associate each feature in the input data set with one and only one of the available resonator rings. In other words, when the proper feature is present in the input data, the resonator ring with which it has become associated will light up. When this feature is absent from the input data, the corresponding resonator ring will be dark.
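The correlator's recall step amounts to projecting the presented pattern onto each stored hologram: the intensity reconstructed on each angle-encoded reference beam is proportional to the squared magnitude of the correlation with the corresponding stored pattern. The following is only an illustrative numerical sketch of that idea; the array sizes and the random speckle "patterns" are invented, not taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stored "holograms": complex speckle fields recorded during the learning phase.
stored = [rng.standard_normal(256) + 1j * rng.standard_normal(256) for _ in range(2)]
stored = [f / np.linalg.norm(f) for f in stored]      # normalize each stored field

def correlator_output(pattern):
    """Intensity on each angle-encoded reference beam: |<stored_i, pattern>|^2."""
    return [abs(np.vdot(f, pattern)) ** 2 for f in stored]

# Presenting a stored pattern lights up its own reference beam most strongly.
out = correlator_output(stored[0])
print(out)
```

Because the stored fields are nearly orthogonal, the "own" beam carries almost all of the output intensity, which is the sense in which the correlator identifies features.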
The self-organizing capabilities of this system arise from the nonlinear dynamics of competition between resonator modes for optical energy within the common photorefractive pump crystal (Benkert). Figure 1: Schematic diagram of the self-organizing photorefractive ring resonator. The two signal frequencies, ω1 and ω2, are identical when the circuit is used as a feature extractor and are separated by 280 MHz when the system is used as a frequency demultiplexer. We have used this system to accomplish two optical signal processing tasks. In the first case, the resonator can learn to distinguish between two spatially orthogonal input images that are impressed on the common pump beam in a piece-wise constant fashion. In the second case, frequency demultiplexing of a composite input image constructed from two spatially orthogonal image components of different optical frequencies can be accomplished (Saffman, 1991b). In both cases, the optical system has no a priori knowledge of the input data and self-discovers the important structural elements. 2 A Self-Organizing Photorefractive Ring Resonator The experimental design that realizes an optical self-organizing feature extractor is shown in figure 1. The optical system consists of a two-ring, multimode, unidirectional photorefractive ring resonator in which the rings are spatially distinct. The resonator rings are defined by loops of 100 μm core multimode optical fiber. The gain for both modes is provided by a common BaTiO3 crystal that is pumped by optical images presented as speckle patterns from a single 100 μm multimode optical fiber. The light source is a single-frequency argon-ion laser operating at 514.5 nm. The second BaTiO3 crystal provides reflexive coupling within the resonator, which ensures that each resonator ring becomes associated with only one input feature.
The input images are generated by splitting the source beam and passing it through two acousto-optic modulator cells. The optical signals generated by the acousto-optic modulators are then focused into a single 1.5 meter long step-index, 100 μm core, multimode optical fiber. The difference in the angle of incidence for the two signal beams at the fiber end face is sufficient to ensure that the corresponding speckle pattern images are spatially orthogonal (Saffman, 1991a). The acousto-optic cells are used in a conventional fashion to shift the optical frequency of the carrier signal, and are also used as shutters to impress time-modulated information on the input signals. When the resonator is operating as a feature extractor, both input signals are carried on the same optical frequency, but are presented to the resonator sequentially. The presentation cycle rate of 500 Hz was chosen so that the cycle period would be much smaller than the characteristic time constant of the BaTiO3 pump crystal. When operating as a frequency demultiplexer, the acousto-optic modulators shift the optical carrier frequencies of the input signals such that they are separated by 280 MHz. The two input carrier signals are time modulated and mixed into the optical fiber to form a composite image composed of two spatially orthogonal speckle patterns having different optical frequencies. This composite image is used as the pump beam for the resonator. 3 Unsupervised Competitive Learning Correlations between the optical electric fields in images establish the criterion for a measure of similarity between different image features. The best measure of these correlations is the inner product between the complex-valued spatial electric field distributions across the input images, S12 = ∫ E1*(r) E2(r) dr. When S12 = 0 the images are uncorrelated, and we define such images as spatially orthogonal.
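The orthogonality criterion S12 = 0 is easy to check numerically. In the sketch below, random complex vectors stand in for the discretely sampled speckle fields (an illustration only; the fields and sizes are made up), and the normalized overlap |S12|/(||E1|| ||E2||) plays the role of the similarity measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the complex-valued electric-field distributions of two speckle images.
E1 = rng.standard_normal(512) + 1j * rng.standard_normal(512)
E2 = rng.standard_normal(512) + 1j * rng.standard_normal(512)

def overlap(a, b):
    """Normalized inner product S12 / (|E1| |E2|); zero means spatially orthogonal."""
    return np.vdot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(abs(overlap(E1, E1)))   # identical fields: unit overlap
print(abs(overlap(E1, E2)))   # independent speckle: overlap near zero for large images
```

For statistically independent speckle fields the normalized overlap scales roughly as one over the square root of the number of samples, which is why distinct speckle patterns launched at different angles are very nearly orthogonal.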
When the resonator begins to oscillate, neither resonator ring has any preference for a particular input feature or frequency. The system modes have no internal bias (i.e., no a priori knowledge) for the input data. As the gain for photorefractive two-beam coupling in the common BaTiO3 pump crystal saturates, the two resonator rings begin to compete with each other for the available pump energy. This competitive coupling leads to 'winner-takes-all' dynamics in the resonator, in which each resonator ring becomes associated with one or the other spatially orthogonal input image. In other words, the rings become labels for each spatially orthogonal feature present in the input image set. Phenomenologically, the dynamics of this mode competition can be described by Lotka-Volterra equations (Benkert, Lotka, Volterra), dI_{i,p}/dt = I_{i,p} ( a_{i,p} - β_{i,p} I_{i,p} - Σ_{j,l} g_{i,p;j,l} I_{j,l} ), where I_{i,p} is the intensity of the oscillating energy in ring i due to energy transferred from the input feature p, a_{i,p} is the gain for two-beam coupling between ring i and feature p, β_{i,p} is the self-saturation coefficient, and g_{i,p;j,l} are the cross-saturation coefficients. The self-organizing dynamics are determined by the values of the cross-coupling coefficients. Thus the competitive learning algorithm that drives the self-organization in this optical system is embedded within the nonlinear dynamics of mode competition in the pump crystal. Figure 2: Reflexive gain interaction. A fraction, δ, of the incident intensity is removed from the resonator beam, and then coupled back into itself by photorefractive two-beam coupling. This ensures 'winner-takes-all' competitive dynamics between the resonator rings. Once the system has learned, the spatially orthogonal features in the training set are represented as holograms in the BaTiO3 pump crystal.
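The winner-takes-all character of these equations can be seen in a small simulation. The sketch below integrates the Lotka-Volterra competition for two modes with cross-saturation stronger than self-saturation; the coefficient values are invented for illustration and are not measured parameters of the resonator:

```python
import numpy as np

def compete(I0, a, beta, g, dt=0.01, steps=20000):
    """Euler-integrate dI_m/dt = I_m * (a_m - beta*I_m - g * sum_{n != m} I_n)."""
    I = np.array(I0, dtype=float)
    for _ in range(steps):
        cross = g * (I.sum() - I)      # cross-saturation from all other modes
        I += dt * I * (a - beta * I - cross)
    return I

# Cross-saturation stronger than self-saturation (g > beta) yields winner-takes-all.
a = np.array([1.0, 0.98])              # slightly unequal two-beam-coupling gains
final = compete([0.1, 0.1], a, beta=1.0, g=2.0)
print(final)                           # the higher-gain mode wins; the other decays toward zero
```

With g > beta the coexistence equilibrium is unstable, so even a small gain advantage decides which mode captures the pump energy; with g < beta the modes would instead coexist, which is the pathological regime the reflexive gain element is designed to exclude.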
These holograms act as linear projection operators, and any new image constructed from features in the training set will be projected in a linear fashion onto the learned feature basis set. The relative intensity of light oscillating in each ring corresponds to the fraction of each learned feature in the new image. Thus, the resonator functions as a feature extractor (Kohonen). 4 Reflexive Gain If each resonator ring were single mode, then competitive dynamics in the common pump crystal would be sufficient for feature extraction. However, a multimode ring system allows stability for certain pathological feature-extracting states. The multimode character of each resonator ring can permit simultaneous oscillation of two spatially orthogonal modes within a single ring. Ostensibly, the system is performing feature extraction, but this form of output is not useful for further processing. These pathological states are excluded by introducing reflexive gain into the cavity. Any system that twists back upon itself and closes a loop is referred to as reflexive (Hofstadter, pg. 3). A reflexive gain interaction is achieved by removing a portion of the oscillating energy from each ring and then coupling it back into the same ring by photorefractive two-beam coupling, as illustrated in figure 2. The standard equations for photorefractive two-beam coupling (Kukhtarev, Hall) can be used to derive an expression for the steady-state transmission, T, through the reflexive gain element in terms of the number of spatially orthogonal modes, N, that are oscillating simultaneously within a single ring. Here, exp(G0) is the small-signal gain and δ is the fraction of light removed from the resonator. Figure 3: Time evolution of the intensities within each resonator ring due to ω1 (I1) and ω2 (I2).
After about 30 seconds, the system has learned to demultiplex the two input frequencies. Ring 1 has become associated with ω1 and Ring 2 has become associated with ω2. The contrast ratio between I1 and I2 in each ring is about 40:1. The transmission decreases for N > 1, causing larger cavity losses in the case of simultaneous oscillation of spatially orthogonal modes within a single ring. Therefore, the system favors 'winner-takes-all' dynamics over other pathological feature-extracting states. 5 Experimental Results The self-organizing dynamics within the optical circuit require several seconds to reach steady state. In the case of frequency demultiplexing, the dynamical evolution of the system was observed by detecting the envelopes of the carrier modulation, as shown in figure 3. In the case of the feature extractor, transient system dynamics were observed by synchronizing the observation with the modulation of one feature or the other, as shown in figure 4. The frequency demultiplexing (figure 3) and feature extracting (figure 4) states develop a high contrast ratio and are stable for as long as the pump beam is present. Measurements with a spectrum analyzer show an output contrast ratio of better than 40:1 in the frequency demultiplexing case. Figure 4: Time evolution of the intensities in each resonator ring due to the two input pictures. The system requires about 30 seconds to learn to extract features from the input images. Picture 1 is associated with Ring 1 and picture 2 is associated with Ring 2. The circuit described here extracts spatially orthogonal features while continuously adapting to slow variations in the spatial mode superposition due to drifts in the carrier frequency or perturbations to the fibers. Thus, the system is adaptive as well as unsupervised. 6 Summary An optical implementation of a self-organizing feature extractor that is adaptive has been demonstrated. The circuit exhibits the desirable dynamical property that is often referred to in the parlance of neural networks as 'unsupervised learning'. The essential properties of this system arise from the nonlinear dynamics of mode competition within the optical ring resonator. The learning algorithm is embedded in these dynamics, and they contribute to its capacity to adapt to slow changes in the input signal. The circuit learns to associate different spatially orthogonal images with different rings in an optical resonator. The learned feature set can represent orthogonal basis vectors in an image or different frequencies in a multiplexed optical signal. Because a wide variety of information can be encoded onto the input images presented to the feature extractor described here, it has the potential to find general application in tasks where the speed and adaptability of self-organizing and all-optical processing are desirable. Acknowledgements We are grateful for the support of both the Office of Naval Research, contract #N00014-91-J-1212, and the Air Force Office of Scientific Research, contract #AFOSR-90-0198. Mark Saffman would like to acknowledge support provided by a U.S. Air Force Office of Scientific Research laboratory graduate fellowship. References D.Z. Anderson and R. Saxena, Theory of Multimode Operation of a Unidirectional Ring Oscillator having Photorefractive Gain: Weak Field Limit, J. Opt. Soc. Am. B, 4, 164 (1987). C. Benkert and D.Z.
Anderson, Controlled competitive dynamics in a photorefractive ring oscillator: 'winner-takes-all' and the 'voting-paradox' dynamics, Phys. Rev. A, 44, 4633 (1991). E. Domany, J.L. van Hemmen and K. Schulten, eds., Models of Neural Networks; Springer-Verlag (1991). T.J. Hall, R. Jaura, L.M. Connors and P.D. Foote, The Photorefractive Effect - A Review; Prog. Quant. Electr., 10, 77 (1985). J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation; Addison-Wesley (1991). D.R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern; Bantam Books (1985). T. Kohonen, Self-Organization and Associative Memory, 3rd Edition; Springer-Verlag (1989). N.V. Kukhtarev, V.B. Markov, S.G. Odulov, M.S. Soskin and V.L. Vinetskii, Holographic Storage in Electrooptic Crystals. I. Steady State; Ferroelectrics, 22, 949 (1979). N.V. Kukhtarev, V.B. Markov, S.G. Odulov, M.S. Soskin and V.L. Vinetskii, Holographic Storage in Electrooptic Crystals. II. Beam Coupling - Light Amplification; Ferroelectrics, 22, 961 (1979). A.J. Lotka, Elements of Physical Biology; Baltimore (1925). M. Saffman and D.Z. Anderson, Mode multiplexing and holographic demultiplexing communications channels on a multimode fiber, Opt. Lett., 16, 300 (1991a). M. Saffman, C. Benkert and D.Z. Anderson, Self-organizing photorefractive frequency demultiplexer, Opt. Lett., 16, 1993 (1991b). A. Vander Lugt, Signal Detection by Complex Spatial Filtering; IEEE Trans. Inform. Theory, IT-10, 139 (1964). V. Volterra, Leçons sur la Théorie Mathématique de la Lutte pour la Vie; Gauthier-Villars (1931). J.O. White, M. Cronin-Golomb, B. Fischer, and A. Yariv, Coherent Oscillation by Self-Induced Gratings in the Photorefractive Crystal BaTiO3; Appl. Phys. Lett., 40, 450 (1982). PART XII LEARNING AND GENERALIZATION
1991
A Neurocomputer Board Based on the ANNA Neural Network Chip Eduard Sackinger, Bernhard E. Boser, and Lawrence D. Jackel AT&T Bell Laboratories Crawfords Corner Road, Holmdel, NJ 07733 Abstract A board is described that contains the ANNA neural-network chip and a DSP32C digital signal processor. The ANNA (Analog Neural Network Arithmetic unit) chip performs mixed analog/digital processing. The combination of ANNA with the DSP allows high-speed, end-to-end execution of numerous signal-processing applications, including the preprocessing, the neural-net calculations, and the postprocessing steps. The ANNA board evaluates neural networks 10 to 100 times faster than the DSP alone. The board is suitable for implementing large (million-connection) networks with sparse weight matrices. Three applications have been implemented on the board: a convolver network for slant detection of text blocks, a handwritten digit recognizer, and a neural network for recognition-based segmentation. 1 INTRODUCTION Many researchers have built neural-network chips, but few chips have been installed in board-level systems, even though this next level of integration provides insights and advantages that can't be attained on a chip testing station. Building a board demonstrates whether or not the chip can be effectively integrated into the larger systems required for real applications. A board also exposes bottlenecks in the system data paths. Most importantly, a working board moves the neural-network chip from the realm of a research exercise to that of a practical system, readily available to users whose primary interest is actual applications. An additional bonus of carrying the integration to the board level is that the chip designer can gain the user feedback that will assist in designing new chips with greater utility.
Figure 1: Block Diagram of the ANNA Board 2 ARCHITECTURE The neurocomputer board contains a special-purpose chip called ANNA (Boser et al., 1991) for the parallel evaluation of neuron functions (a squashing function applied to a weighted sum), and a general-purpose digital signal processor, the DSP32C. The board also contains interface and clock synchronization logic as well as 1 MByte of static memory, SRAM (see Fig. 1). Two versions of this board with two different bus interfaces have been built: a double-height VME board (see Fig. 2) and a PC/AT board (see Fig. 3). The ANNA neural network chip is an ALU (Arithmetic and Logic Unit) specialized for neural network functions. It contains a 12-bit wide state-data input, a 12-bit wide state-data output, a 12-bit wide weight-data input, and a 37-bit microinstruction input. The instructions that can be executed by the chip are the following (parameters are not shown): RFSH: Write weight values from the weight-data input into the dynamic on-chip weight storage. SHIFT: Shift the on-chip barrel shifter to the left and load up to four new state values from the state-data input into the right end of the shifter. STORE: Transfer the state vector from the shifter into the on-chip state storage and/or into the state-data latches of the arithmetic unit. CALC: Calculate eight dot-products between on-chip weight vectors and the contents of the above-mentioned data latches; subsequently evaluate the squashing function. OUT: Transfer the results of the calculation to the state-data output. Figure 2: ANNA Board with VME Bus Interface Figure 3: ANNA Board with PC/AT Bus Interface Figure 4: Photo Micrograph of the ANNA Chip Some of the instructions (like SHIFT and CALC) can be executed in parallel.
The barrel shifter at the input, as well as the on-chip state storage, make the ANNA chip very effective for evaluating locally-connected, weight-sharing networks such as feature extraction and time-delay neural networks (TDNN). The ANNA neural network chip, implemented in a 0.9 μm CMOS technology, contains 180,000 transistors on a 4.5 x 7 mm2 die (see Fig. 4). The chip implements 4,096 physical synapses, which can be time multiplexed in order to realize networks with many more than 4,096 connections. The resolution of the synaptic weights is 6 bits and that of the states (input/output of the neurons) is 3 bits. Additionally, a 4-bit scaling factor can be programmed for each neuron to extend the dynamic range of the weights. The weight values are stored as charge packets on capacitors and are periodically refreshed by two on-chip 6-bit D/A converters. The synapses are realized by multiplying 3-bit D/A converters (analog weight times digital state). The analog results of this multiplication are added by means of current summing and then converted back to digital by a saturating 3-bit A/D converter. Although the chip uses analog computing internally, all input/output is digital. This combines the advantages of the high synaptic density, high speed, and low power of analog computation with the ease of interfacing to a digital system like a digital signal processor (DSP). The 32-bit floating-point digital signal processor (DSP32C) on the same board runs at 40 MHz without wait states (100 ns per instruction) and is connected to 1 MByte of static RAM. The DSP has several functions: (1) It generates the microinstructions for the ANNA chip. (2) It is responsible for accessing the pixel, feature, and weight data from the memory and then storing the results of the chip in the memory. (3) If the precision of the ANNA chip is not sufficient, the DSP can do the calculations with 32-bit floating-point precision.
(4) Learning algorithms can be run on the DSP. (5) The DSP is useful as a pre- and post-processor for neural networks. In this way a whole task can be carried out on the board without exchanging intermediate results with the host. As shown in Fig. 1, ANNA instructions are supplied over the DSP address bus, while state and weight data are transferred over the data bus. This arrangement makes it possible to supply or store ANNA data and execute a microinstruction simultaneously, i.e., using only one DSP instruction. The ANNA clock is automatically generated whenever the DSP issues a microinstruction to the ANNA chip. 3 PERFORMANCE Using a DSP for supplying microinstructions as well as accessing the data from the memory makes the board very flexible and fairly simple. Both data and instruction flow to and from the ANNA chip are under software control and can be programmed using the C or DSP32C assembly language. Because of DSP32C features such as one-instruction 32-bit memory-to-memory transfer with auto increment and overhead-free looping, ANNA instruction sequences can be generated at a rate of approximately 5 MIPS. A similar rate of 5 MByte/s is achieved for reading and writing ANNA data from and to the memory. The speed of the board depends on the application and how well it makes use of the chip's parallelism, and ranges between 30 MC/s and 400 MC/s. For concrete examples see the section on Applications. Compared to the DSP32C, which performs at about 3 MC/s (for sparsely connected networks), the board with the ANNA chip is 10 to 100 times faster. The speed of the board is not limited by the ANNA chip but by the above-mentioned data rates. The use of a dedicated hardware sequencer will improve the speed by up to ten times. The board can thus be used for prototyping an application before building more specialized hardware.
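Because the chip computes with reduced precision (6-bit weights, 3-bit states, plus a per-neuron 4-bit scaling factor), it is useful to emulate that precision in software before committing a network to hardware; this is essentially what function (3) of the DSP allows one to compare against. The sketch below is a rough software emulation of such a quantized neuron; the uniform rounding scheme and the tanh squashing function are plausible guesses for illustration, not the chip's documented behavior:

```python
import numpy as np

def quantize(x, bits, max_abs=1.0):
    """Round x onto a signed uniform grid with 2**(bits-1) - 1 positive levels."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / max_abs * levels), -levels, levels) / levels * max_abs

def anna_like_neuron(weights, states, scale_exp=0):
    """Dot product with 6-bit weights and 3-bit states, then a saturating nonlinearity."""
    w = quantize(weights, bits=6) * 2.0 ** scale_exp   # scaling factor extends dynamic range
    s = quantize(states, bits=3)
    return np.tanh(w @ s)                              # stand-in for the chip's squashing function

rng = np.random.default_rng(2)
w = rng.uniform(-1, 1, 64)
s = rng.uniform(-1, 1, 64)
print(anna_like_neuron(w, s))
```

Running a trained network through such an emulation gives an estimate of the accuracy loss due to quantization, analogous to the Full Precision versus ANNA/DSP comparison reported for the digit recognizer.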
4 SOFTWARE To make the board easily usable, we implemented a LISP interpreter on the host computer (a SUN workstation) which allows us to make remote procedure calls (RPC) to the ANNA board. After starting the LISP interpreter on the host, it will download the DSP object code to the board and start the main program on the DSP. Then, the DSP will transfer the addresses of all procedures that are available to the user to the LISP interpreter. From then on, all these procedures can be called as LISP functions of the form (==> anna procedure parameter(s)) from the host. Parameters and return values are handled automatically by the LISP interpreter. Three ways of using the ANNA board are described. The first two methods do not require DSP programming; everything is controlled from the LISP interpreter. The third method requires DSP programming and results in maximum speed for any application. 1. The simplest way to use the board together with this LISP interpreter is to call existing library functions on the board. For example, a neural network for recognizing handwritten digits can be called as follows: (==> anna down-weight weight-matrix) (setq class (==> anna down-rec-up digit-pattern)) The first LISP function activates the down-weight function on the ANNA board that transfers the LISP matrix, weight-matrix, to the board. This function defines all the weights of the network and has to be called only once. The second LISP function calls the down-rec-up function, which takes the digit-pattern (pixel image) as an input, downloads this pattern, runs the recognizer, and uploads the class number (0 ... 9). This method requires no knowledge of the ANNA or DSP instruction set. The library functions are fast since they have been optimized by the implementer. At the moment, library functions for nonlinear convolution, character recognition, and testing are available. 2.
If a function which is not part of the library has to be implemented, an ANNA program must be written. A collection of LISP functions (ANNANAS) supports the translation of symbolic ANNA programs into micro code. The micro code is then run on the ANNA chip by means of a software sequencer implemented on the DSP. Assembling and running a simple ANNA program using ANNANAS looks like this:
(anna-repeat 16)      REPEAT 16       ; start of loop
(anna-shift 4 0)      SHIFT 4,R0      ; ANNA shift instruction
(anna-store 0 'a 2)   STORE R0,A.L2   ; ANNA store instruction
(anna-endrep)         ENDREP          ; end of loop
(anna-stop)           STOP            ; end of program
(anna-run 0)                          ; start sequencer
In this way, all the features of the ANNA chip and board can be used without DSP programming. This mode is also helpful for testing and debugging ANNA programs. Besides the assembler, ANNANAS also provides several monitoring and debugging tools. 3. If maximum speed is imperative, an application-specific sequencer has to be written (as opposed to the slower generic sequencer described above). To do this, a DSP assembler and C compiler are required. A toolbox of assembly macros and C functions helps in implementing this sequencer. Besides the sequencer, pre- and post-processing software can also be implemented on the fast DSP hardware. After successfully testing the program it can be added to the library as a new function. 5 APPLICATIONS 5.1 CONVOLVER NETWORK In this application the ANNA chip is configured for 16 neurons with 256 synapses each. First, each of these neurons connects to the upper left 16 x 16 field of a Table 1: Performance of the Recognizer. Implementation / Error rate / Reject rate for 1% error: Full Precision, 4.9%, 9.1%; ANNA/DSP, 5.3 ± 0.2%, 13.5 ± 0.8%; ANNA/DSP/Retraining, 4.9 ± 0.2%, 11.5 ± 0.8%.
5.3 RECOGNITION-BASED SEGMENTATION Before individual digits can be passed to a recognizer as described in the previous section, they typically have to be isolated (segmented) from a string of characters (e.g. a ZIP code). When characters overlap, segmentation is a difficult problem, and simple algorithms which look for connected components or histograms fail. A promising solution to this problem is to combine recognition and segmentation (Keeler et al., 1992, Matan et al., 1992). For instance, recognizers like the one described above can be replicated horizontally and vertically over the region of interest. This will guarantee that there is a recognizer centered over each character. It is crucial, however, to train the recognizer such that it rejects partial characters. Such a replicated version of the recognizer (at 31 times 6 locations) with approximately 2 million connections has been implemented on the ANNA board and was used to segment ZIP codes. 6 CONCLUSION A board with a neural-network chip and a digital signal processor (DSP) has been built. Large pattern recognition applications have been implemented on the board, giving a speed advantage of 10 to 100 over the DSP alone. Acknowledgements The authors would like to thank Steve Deiss for his excellent job in building the boards and Yann LeCun and Jane Bromley for their help with the digit recognizer. References Bernhard Boser, Eduard Sackinger, Jane Bromley, Yann LeCun, and Lawrence D. Jackel. An analog neural network processor with programmable network topology. IEEE J. Solid-State Circuits, 26(12):2017-2025, December 1991. Yann Le Cun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne Hubbard, and Lawrence D. Jackel. Handwritten digit recognition with a back-propagation network. In David S. Touretzky, editor, Neural Information Processing Systems, volume 2, pages 396-404.
Morgan Kaufmann Publishers, San Mateo, CA, 1990. Eduard Sackinger, Bernhard Boser, Jane Bromley, Yann LeCun, and Lawrence D. Jackel. Application of the ANNA neural network chip to high-speed character recognition. IEEE Trans. Neural Networks, 3(2), March 1992. J. D. Keeler and D. E. Rumelhart. Self-organizing segmentation and recognition neural network. In J. M. Moody, S. J. Hanson, and R. P. Lippman, editors, Neural Information Processing Systems, volume 4. Morgan Kaufmann Publishers, San Mateo, CA, 1992. Ofer Matan, Christopher J. C. Burges, Yann LeCun, and John S. Denker. Multidigit recognition using a space delay neural network. In J. M. Moody, S. J. Hanson, and R. P. Lippman, editors, Neural Information Processing Systems, volume 4. Morgan Kaufmann Publishers, San Mateo, CA, 1992.
1991
Adaptive Synchronization of Neural and Physical Oscillators Kenji Doya University of California, San Diego La Jolla, CA 92093-0322, USA Shuji Yoshizawa University of Tokyo Bunkyo-ku, Tokyo 113, Japan Abstract Animal locomotion patterns are controlled by recurrent neural networks called central pattern generators (CPGs). Although a CPG can oscillate autonomously, its rhythm and phase must be well coordinated with the state of the physical system using sensory inputs. In this paper we propose a learning algorithm for synchronizing neural and physical oscillators with specific phase relationships. Sensory input connections are modified by the correlation between cellular activities and input signals. Simulations show that the learning rule can be used for setting sensory feedback connections to a CPG as well as coupling connections between CPGs. 1 CENTRAL AND SENSORY MECHANISMS IN LOCOMOTION CONTROL Patterns of animal locomotion, such as walking, swimming, and flying, are generated by recurrent neural networks that are located in the segmental ganglia of invertebrates and the spinal cords of vertebrates (Barnes and Gladden, 1985). These networks can produce the basic rhythms of locomotion without sensory inputs and are called central pattern generators (CPGs). The physical systems of locomotion, such as legs, fins, and wings combined with their physical environments, have their own oscillatory characteristics. Therefore, in order to realize efficient locomotion, the frequency and the phase of oscillation of a CPG must be well coordinated with the state of the physical system. For example, the bursting patterns of motoneurons that drive a leg muscle must be coordinated with the configuration of the leg, its contact with the ground, and the state of other legs. The oscillation pattern of a CPG is largely affected by proprioceptive inputs.
It has been shown in crayfish (Sillar et al., 1986) and lamprey (Grillner et al., 1990) that the oscillation of a CPG is entrained by cyclic stimuli to stretch sensory neurons over a wide range of frequencies. Both negative and positive feedback pathways are found in those systems. Elucidation of the function of the sensory inputs to CPGs requires computational studies of neural and physical dynamical systems. Algorithms for the learning of rhythmic patterns in recurrent neural networks have been derived by Doya and Yoshizawa (1989), Pearlmutter (1989), and Williams and Zipser (1989). In this paper we propose a learning algorithm for synchronizing a neural oscillator to rhythmic input signals with a specific phase relationship. It is well known that a coupling between nonlinear oscillators can entrain their frequencies. The relative phase between the oscillators is determined by the parameters of the coupling and the difference of their intrinsic frequencies. For example, either in-phase or anti-phase oscillation results from symmetric coupling between neural oscillators with similar intrinsic frequencies (Kawato and Suzuki, 1980). Efficient locomotion involves subtle phase relationships between physical variables and motor commands. Accordingly, our goal is to derive a learning algorithm that can finely tune the sensory input connections by which the relative phase between physical and neural oscillators is kept at a specific value required by the task. 2 LEARNING OF SYNCHRONIZATION We will deal with the following continuous-time model of a CPG network: τ_i dx_i(t)/dt = -x_i(t) + Σ_{j=1}^{C} w_ij g_j(x_j(t)) + Σ_{k=1}^{S} v_ik y_k(t), (1) where x_i(t) and g_i(x_i(t)) (i = 1, ..., C) represent the states and the outputs of the CPG neurons and y_k(t) (k = 1, ..., S) represent the sensory inputs. We assume that the connection weights W = {w_ij} are already established so that the network oscillates without sensory inputs.
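Equation (1) describes a standard continuous-time recurrent network, which can be integrated directly. As an illustration only (the two-neuron weights below are hand-picked to make the network oscillate; they are not parameters from the paper, and tanh stands in for the output function g):

```python
import numpy as np

def simulate_cpg(W, tau=1.0, dt=0.01, steps=5000, x0=(0.1, 0.0)):
    """Euler-integrate tau * dx/dt = -x + W @ tanh(x): equation (1), no sensory input."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, x.size))
    for n in range(steps):
        x += (dt / tau) * (-x + W @ np.tanh(x))
        traj[n] = x
    return traj

# Rotation-like weights make the origin unstable, producing a stable limit cycle.
W = np.array([[1.5, -3.0],
              [3.0,  1.5]])
traj = simulate_cpg(W)
print(traj[-1])        # the state keeps orbiting; its amplitude stays bounded
```

The antisymmetric off-diagonal weights give the linearization complex eigenvalues with positive real part, so trajectories spiral away from the origin until the saturating nonlinearity bounds them, yielding the autonomous oscillation assumed of the CPG.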
The goal of learning is to find the input connection weights V = {v_ik} that make the network state x(t) = (x_1(t), ..., x_C(t))^T entrained to the input signal y(t) = (y_1(t), ..., y_S(t))^T with a specific relative phase.

2.1 AN OBJECTIVE FUNCTION FOR PHASE-LOCKING

The standard way to derive a learning algorithm is to find an objective function to be minimized. If we can approximate the waveforms of x_i(t) and y_k(t) by sine waves, a linear relationship x(t) = P y(t) specifies a phase-locked oscillation of x(t) and y(t). For example, if we have y_1 = \sin \omega t and y_2 = \cos \omega t, then a matrix P = (1, 1; 1, \sqrt{3}) specifies x_1 = \sqrt{2} \sin(\omega t + \pi/4) and x_2 = 2 \sin(\omega t + \pi/3). Even when the waveforms are not sinusoidal, minimization of the objective function

E(t) = \frac{1}{2} \| x(t) - P y(t) \|^2 = \frac{1}{2} \sum_{i=1}^{C} \left\{ x_i(t) - \sum_{k=1}^{S} p_{ik} y_k(t) \right\}^2   (2)

determines a specific relative phase between x(t) and y(t). Thus we call P = {p_ik} a phase-lock matrix.

2.2 LEARNING PROCEDURE

Using the above objective function, we will derive a learning procedure for phase-locked oscillation of x(t) and y(t). First, an appropriate phase-lock matrix P is identified while the relative phase between x(t) and y(t) changes gradually in time. Then, a feedback mechanism can be applied so that the network state x(t) is kept close to the target waveform P y(t). Suppose we actually have an appropriate phase relationship between x(t) and y(t); then the phase-lock matrix P can be obtained by gradient descent of E(t) with respect to p_ik as follows (Widrow and Stearns, 1985):

\frac{d}{dt} p_{ik} = -\eta \frac{\partial E(t)}{\partial p_{ik}} = \eta \left\{ x_i(t) - \sum_{j=1}^{S} p_{ij} y_j(t) \right\} y_k(t).   (3)

If the coupling between x(t) and y(t) is weak enough, their relative phase changes in time unless their intrinsic frequencies are exactly equal and the systems are completely noiseless.
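The gradient rule (3), applied sample by sample, is the Widrow-Hoff LMS update. A minimal sketch using the sinusoidal example from the text; the learning rate, sampling step, and sample count are arbitrary choices, not values from the paper.

```python
import math

def fit_phase_lock_matrix(xs, ys, eta=0.05):
    """LMS estimate of the phase-lock matrix P, equation (3) in discrete form:
    p_ik <- p_ik + eta * (x_i - sum_j p_ij y_j) * y_k."""
    C, S = len(xs[0]), len(ys[0])
    P = [[0.0] * S for _ in range(C)]
    for x, y in zip(xs, ys):
        for i in range(C):
            err = x[i] - sum(P[i][j] * y[j] for j in range(S))
            for k in range(S):
                P[i][k] += eta * err * y[k]
    return P

# Example from the text: y = (sin wt, cos wt); pick x with a known phase lead.
dt, w_freq = 0.01, 2.0
ts = [n * dt for n in range(5000)]
ys = [[math.sin(w_freq * t), math.cos(w_freq * t)] for t in ts]
# x1 = sqrt(2) sin(wt + pi/4) = sin wt + cos wt, so the true row of P is (1, 1).
xs = [[math.sqrt(2) * math.sin(w_freq * t + math.pi / 4)] for t in ts]
P = fit_phase_lock_matrix(xs, ys)
```

Because the target is exactly representable in the (sin, cos) basis, the LMS iteration recovers the row (1, 1) of the phase-lock matrix to high accuracy.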
By modulating the learning coefficient \eta by some performance index of the total system, for example the speed of locomotion, it is possible to obtain a matrix P that satisfies the requirement of the task.

Once a phase-lock matrix is derived, we can control x(t) close to P y(t) using the gradient of E(t) with respect to the network state:

\frac{\partial E(t)}{\partial x_i(t)} = x_i(t) - \sum_{k=1}^{S} p_{ik} y_k(t).

The simplest feedback algorithm is to add this term to the CPG dynamics as follows:

\tau_i \frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{C} w_{ij} g_j(x_j(t)) - \alpha \left\{ x_i(t) - \sum_{k=1}^{S} p_{ik} y_k(t) \right\}.

The feedback gain \alpha (> 0) must be set small enough so that the feedback term does not destroy the intrinsic oscillation of the CPG. In that case, by neglecting the small additional decay term \alpha x_i(t), we have

\tau_i \frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{C} w_{ij} g_j(x_j(t)) + \sum_{k=1}^{S} \alpha p_{ik} y_k(t),   (4)

which is equivalent to equation (1) with input weights v_ik = \alpha p_ik.

3 DELAYED SYNCHRONIZATION

We tested the above learning scheme on a delayed synchronization task: to find coupling weights between neural oscillators so that they synchronize with a specific time delay. We used the following coupled CPG model:

\tau \frac{d}{dt} x_i^n(t) = -x_i^n(t) + \sum_{j=1}^{C} w_{ij} y_j^n(t) + \alpha \sum_{k=1}^{C} p_{ik} y_k^{3-n}(t),   (5)

y_i^n(t) = g(x_i^n(t)), \quad (i = 1, ..., C),

where superscripts denote the indices of the two CPGs (n = 1, 2). The goal of learning was to synchronize the waveforms y_1^1(t) and y_1^2(t) with a time delay \Delta T. We used z(t) = -|y_1^1(t - \Delta T) - y_1^2(t)| as the performance index. The learning coefficient \eta of equation (3) was modulated by the deviation of z(t) from its running average \bar{z}(t) using the following equations:

\eta(t) = \eta_0 \{ z(t) - \bar{z}(t) \},   (6)

\tau_a \frac{d}{dt} \bar{z}(t) = -\bar{z}(t) + z(t).
Figure 1: Learning of delayed synchronization of neural oscillators. The dotted and solid curves represent y_1^1(t) and y_1^2(t) respectively. a: without coupling. b: \Delta T = 0.0. c: \Delta T = 1.0. d: \Delta T = 2.0. e: \Delta T = 3.0. [Waveform panels not reproduced.]

First, two CPGs were trained independently to oscillate with sinusoidal waveforms of period T_1 = 4.0 and T_2 = 5.0 using continuous-time back-propagation learning (Doya and Yoshizawa, 1989). Each CPG was composed of two neurons (C = 2) with time constants \tau = 1.0 and output functions g() = tanh(). Instead of following the two-step procedure described in the previous section, the network dynamics (5) and the learning equations (3) and (6) were simulated concurrently with parameters \alpha = 0.1, \eta_0 = 0.2, and \tau_a = 20.0. Figure 1a shows the oscillation of the two CPGs without coupling. Figures 1b through e show the phase-locked waveforms after learning for 200 time units with different desired delay times.

4 ZERO-LEGGED LOCOMOTION

Next we applied the learning rule to the simplest locomotion system that involves a critical phase-lock between the state of the physical system and the motor command: a zero-legged locomotion system, as shown in Figure 2a. The physical system is composed of a wheel and a weight that moves back and forth on a track fixed radially in the wheel. It rolls on the ground by changing its balance with the displacement of the weight. In order to move the wheel in a given direction, the weight must be moved at a specific phase with the rotation angle of the wheel. The motion equations are given in the Appendix.

First, a CPG network was trained to oscillate with a sinusoidal waveform of period T = 1.0 (Doya and Yoshizawa, 1989). The network consisted of one output and two hidden units (C = 3) with time constants \tau_i = 0.2 and output functions g_i() = tanh(). Next, the output of the CPG was used to drive the weight with a force f = f_max g_1(x_1(t)).
The position r and the velocity \dot{r} of the weight, the rotation angle (\cos\theta, \sin\theta) of the wheel, and its angular velocity \dot{\theta} were used as sensory feedback inputs y_k(t) (k = 1, ..., 5) after scaling to [-1, 1]. In order to eliminate the effect of biases in x(t) and y(t), we used the following learning equations:

\frac{d}{dt} p_{ik} = \eta \left\{ (x_i(t) - \bar{x}_i(t)) - \sum_{j=1}^{S} p_{ij} (y_j(t) - \bar{y}_j(t)) \right\} (y_k(t) - \bar{y}_k(t)),   (7)

\tau_x \frac{d}{dt} \bar{x}_i(t) = -\bar{x}_i(t) + x_i(t),

\tau_y \frac{d}{dt} \bar{y}_k(t) = -\bar{y}_k(t) + y_k(t).

The rotation speed of the wheel was employed as the performance index z(t) after smoothing by the following equation:

\tau_z \frac{d}{dt} z(t) = -z(t) + \dot{\theta}(t).

The learning coefficient \eta was modulated by equations (6). The time constants were \tau_x = 4.0, \tau_y = 1.0, \tau_z = 1.0, and \tau_a = 4.0. Each training run was started from a random configuration of the wheel and was finished after ten seconds.

Figure 2: Learning of zero-legged locomotion. a: the wheel-and-weight system. b: motion of the wheel without sensory feedback. c: motion of the wheel after training. [Panels showing traces of position, velocity, \cos\theta, \sin\theta, and rotation not reproduced.]

Figure 2b is an example of the motion of the wheel without sensory feedback. The rhythms of the CPG and the physical system were not entrained to each other and the wheel wandered left and right. Figure 2c shows an example of the wheel motion after 40 runs of training with parameters \eta_0 = 0.1 and \alpha = 0.2. At first, the oscillation of the CPG was slowed down by the sensory inputs and then accelerated with the rotation of the wheel in the right direction. We compared the patterns of sensory input connections made after learning with wheels of different sizes. Table 1 shows the connection weights to the output unit.
The positive connection from \sin\theta forces the weight to the right-hand side of the wheel and stabilizes clockwise rotation. The negative connection from \cos\theta with smaller radius hastens the rhythm of the CPG when the wheel rotates too fast and the weight is lifted up. The positive input from r with larger radius makes the weight stickier at both ends of the track and slows down the rhythm of the CPG.

Table 1: Sensory input weights to the output unit (p_{1k}; k = 1, ..., 5).

radius |   r   | \dot{r} | \cos\theta | \sin\theta | \dot{\theta}
 2 cm  | 0.15  | -0.53   |   -1.35    |    1.32    |    0.07
 4 cm  | 0.28  | -0.55   |   -1.09    |    1.22    |    0.01
 6 cm  | 0.67  | -0.21   |   -0.41    |    0.98    |    0.00
 8 cm  | 0.70  | -0.33   |   -0.40    |    0.92    |    0.03
10 cm  | 0.90  | -0.12   |   -0.30    |    0.93    |   -0.02

5 DISCUSSION

The architectures of CPGs in lower vertebrates and invertebrates are supposed to be determined by genetic information. Nevertheless, the way an animal utilizes the sensory inputs must be adaptive to the characteristics of the physical environments and the changing dimensions of its body parts. Back-propagation through forward models of physical systems can also be applied to the learning of sensory feedback (Jordan and Jacobs, 1990). However, learning of the nonlinear dynamics of locomotion systems is a difficult task; moreover, multi-layer back-propagation is not appropriate as a biological model of learning. The learning rule (7) is similar to the covariance learning rule (Sejnowski and Stanton, 1990), which is a biological model of long-term potentiation of synapses.

Acknowledgements

The authors thank Allen Selverston, Peter Rowat, and those who gave comments on our poster at the NIPS Conference. This work was partly supported by grants from the Ministry of Education, Culture, and Science of Japan.

References

Barnes, W. J. P. & Gladden, M. H. (1985) Feedback and Motor Control in Invertebrates and Vertebrates. Beckenham, Britain: Croom Helm.

Doya, K. & Yoshizawa, S. (1989) Adaptive neural oscillator using continuous-time back-propagation learning. Neural Networks, 2, 375-386.

Grillner, S.
& Matsushima, T. (1991) The neural network underlying locomotion in lamprey: synaptic and cellular mechanisms. Neuron, 7(July), 1-15.

Jordan, M. I. & Jacobs, R. A. (1990) Learning to control an unstable system with forward modeling. In Touretzky, D. S. (ed.), Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann.

Kawato, M. & Suzuki, R. (1980) Two coupled neural oscillators as a model of the circadian pacemaker. Journal of Theoretical Biology, 86, 547-575.

Pearlmutter, B. A. (1989) Learning state space trajectories in recurrent neural networks. Neural Computation, 1, 263-269.

Sejnowski, T. J. & Stanton, P. K. (1990) Covariance storage in the hippocampus. In Zornetzer, S. F. et al. (eds.), An Introduction to Neural and Electronic Networks, 365-377. San Diego, CA: Academic Press.

Sillar, K. T., Skorupski, P., Elson, R. C., & Bush, M. H. (1986) Two identified afferent neurones entrain a central locomotor rhythm generator. Nature, 323, 440-443.

Widrow, B. & Stearns, S. D. (1985) Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice Hall.

Williams, R. J. & Zipser, D. (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270-280.

Appendix

The dynamics of the zero-legged locomotion system consist of coupled equations of motion for the radial position r of the weight and the rotation angle \theta of the wheel, driven by the force f = f_max g(x_1(t)) - u r^3 - \mu \dot{r} on the weight, with total inertia I_0 = I + M R^2 + m (r + R\cos\theta)^2.

Parameters: the masses of the weight m = 0.2 [kg] and the wheel M = 0.8 [kg]; the radius of the wheel R = 0.02 through 0.1 [m]; the inertial moment of the wheel I = (1/2) M R^2; the maximum force on the weight f_max = 5 [N]; the stiffness of the limiter of the weight u = 20/R^3 [N/m^3]; the damping coefficients of the weight motion \mu = 0.2/R [N/(m/s)] and the wheel rotation \nu = 0.05(M + m)R [N/(rad/s)].
1991
Recurrent Networks and NARMA Modeling

Jerome Connor, Les E. Atlas
FT-10 Interactive Systems Design Laboratory
Dept. of Electrical Engineering
University of Washington, Seattle, Washington 98195

Douglas R. Martin
B-317 Dept. of Statistics
University of Washington, Seattle, Washington 98195

Abstract

There exist large classes of time series, such as those with nonlinear moving average components, that are not well modeled by feedforward networks or linear models, but can be modeled by recurrent networks. We show that recurrent neural networks are a type of nonlinear autoregressive-moving average (NARMA) model. Practical ability will be shown in the results of a competition sponsored by the Puget Sound Power and Light Company, where the recurrent networks gave the best performance on electric load forecasting.

1 Introduction

This paper will concentrate on identifying types of time series for which a recurrent network provides a significantly better model, and corresponding prediction, than a feedforward network. Our main interest is in discrete time series that are parsimoniously modeled by a simple recurrent network, but for which a feedforward neural network is highly non-parsimonious by virtue of requiring an infinite amount of past observations as input to achieve the same accuracy in prediction. Our approach is to consider predictive neural networks as stochastic models. Section 2 will be devoted to a brief summary of time series theory that will be used to illustrate the differences between feedforward and recurrent networks. Section 3 will investigate some of the problems associated with nonlinear moving average and state space models of time series. In particular, neural networks will be analyzed as
nonlinear extensions of traditional linear models. From the preceding sections, it will become apparent that the recurrent network has advantages over feedforward neural networks in much the same way that ARMA models have over autoregressive models for some types of time series. Finally, in Section 4, the results of a competition in electric load forecasting sponsored by the Puget Sound Power and Light Company will be discussed. In this competition, a recurrent network model gave superior results to feedforward networks and various types of linear models. The advantages of a state space model for multivariate time series will be shown on the Puget Power time series.

2 Traditional Approaches to Time Series Analysis

The statistical approach to forecasting involves the construction of stochastic models to predict the value of an observation x_t using previous observations. This is often accomplished using linear stochastic difference equation models with random inputs. A very general class of linear models used for forecasting purposes is the class of ARMA(p,q) models

x_t = \sum_{l=1}^{p} \phi_l x_{t-l} + \sum_{i=1}^{q} \theta_i e_{t-i} + e_t,

where e_t denotes random noise, independent of past x_t. The conditional mean (minimum mean square error) predictor \hat{x}_t of x_t can be expressed in the recurrent form

\hat{x}_t = \sum_{l=1}^{p} \phi_l x_{t-l} + \sum_{i=1}^{q} \theta_i \hat{e}_{t-i},

where e_k is approximated by \hat{e}_k = x_k - \hat{x}_k, k = t-1, ..., t-q.

The key properties of interest for an ARMA(p,q) model are stationarity and invertibility. If the process x_t is stationary, its statistical properties are independent of time. Any stationary ARMA(p,q) process can be written as a moving average

x_t = \sum_{k=1}^{\infty} h_k e_{t-k} + e_t.

An invertible process can be equivalently expressed in terms of previous observations or residuals. For a process to be invertible, all the poles of the z-transform must lie inside the unit circle of the z plane.
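The recurrent form of the conditional-mean predictor can be sketched directly, with residuals approximated by e_hat_k = x_k - x_hat_k as described above. The coefficients and the toy series below are illustrative, not from the competition data.

```python
def arma_predict(xs, phi, theta):
    """One-step-ahead predictions for an ARMA(p,q) model:
    x_hat_t = sum_l phi_l x_{t-l} + sum_i theta_i e_hat_{t-i},
    with residuals approximated by e_hat_k = x_k - x_hat_k."""
    p, q = len(phi), len(theta)
    preds, resid = [], []
    for t in range(len(xs)):
        ar = sum(phi[l] * xs[t - 1 - l] for l in range(p) if t - 1 - l >= 0)
        ma = sum(theta[i] * resid[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
        x_hat = ar + ma
        preds.append(x_hat)
        resid.append(xs[t] - x_hat)
    return preds

# Noise-free AR(1) series x_t = 0.5 x_{t-1}: the ARMA(1,0) predictor is exact
# once one observation of history is available.
xs = [1.0]
for _ in range(20):
    xs.append(0.5 * xs[-1])
preds = arma_predict(xs, phi=[0.5], theta=[])
```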
An invertible ARMA(p,q) process can be written as an infinite autoregression

x_t = \sum_{k=1}^{\infty} \phi_k x_{t-k} + e_t.

As an example of how the inverse process occurs, let e_t be solved for in terms of x_t, and then substitute previous e_t's into the original process. This can be illustrated with an MA(1) process:

x_t = e_t + \theta e_{t-1},
e_{t-i} = x_{t-i} - \theta e_{t-i-1},
x_t = e_t + \theta (x_{t-1} - \theta e_{t-2}),
x_t = e_t + \sum_{i} (-1)^{i-1} \theta^i x_{t-i}.

Looking at this example, it can be seen that an MA(1) process with |\theta| \geq 1 will depend significantly on observations in the distant past. However, if |\theta| < 1, then the effect of the distant past is negligible. In the nonlinear case, it will be shown that it is not always possible to go back and forth between descriptions in terms of observables (e.g. x_i) and descriptions in terms of unobservables (e.g. e_i), even when e_t = 0. For a review of time series prediction in greater depth, see the works of Box [1] or Harvey [2].

3 Nonlinear ARMA Models

Many types of nonlinear models have been proposed in the literature. Here we focus on feedforward and recurrent neural networks and how they relate to nonlinear ARMA models.

3.1 Nonlinear Autoregressive Models

The simplest generalization to the nonlinear case would be the nonlinear autoregressive (NAR) model

x_t = h(x_{t-1}, x_{t-2}, ..., x_{t-p}) + e_t,

where h() is an unknown smooth function, with the assumption that the best (i.e., minimum mean square error) prediction of x_t given x_{t-1}, ..., x_{t-p} is its conditional mean

\hat{x}_t = E(x_t | x_{t-1}, ..., x_{t-p}) = h(x_{t-1}, ..., x_{t-p}).

Feedforward networks were first proposed as an NAR model for time series prediction by Lapedes and Farber [3]. A feedforward network is a nonlinear approximation to h given by

\hat{x}_t = \hat{h}(x_{t-1}, ..., x_{t-p}) = \sum_{i=1}^{I} W_i f\left( \sum_{j=1}^{p} w_{ij} x_{t-j} \right).

The weight matrix W is lower diagonal and will allow no feedback. Thus the feedforward network is a nonlinear mapping from previous observations onto predictions of future observations.
The function f(x) is a smooth bounded monotonic function, typically a sigmoid. The parameters W_i and w_{ij} are estimated from a training sample x_1, ..., x_N, thereby obtaining an estimate \hat{h} of h. Estimates are obtained by minimizing the sum of the squared residuals \sum_{t=1}^{N} (x_t - \hat{x}_t)^2 by the gradient descent procedure known as "backpropagation" [4].

3.2 NARMA or NMA

A simple nonlinear generalization of ARMA models is

x_t = h(x_{t-1}, x_{t-2}, ..., x_{t-p}, e_{t-1}, ..., e_{t-q}) + e_t.

It is natural to predict \hat{x}_t = \hat{h}(x_{t-1}, x_{t-2}, ..., x_{t-p}, \hat{e}_{t-1}, ..., \hat{e}_{t-q}). If this model is chosen, then a recurrent network can approximate it as

\hat{x}_t = \hat{h}(x_{t-1}, ..., x_{t-p}) = \sum_{i=1}^{I} W_i f\left( \sum_{j=1}^{p} w_{ij} x_{t-j} + \sum_{j=1}^{q} w'_{ij} (x_{t-j} - \hat{x}_{t-j}) \right).

This model is a special case of the fully interconnected recurrent network

\hat{x}_t = \sum_{i=1}^{I} W_i f\left( \sum_{j=1}^{n} w'_{ij} x_{t-j} \right),

where the w'_{ij} are coefficients of a full matrix.

Nonlinear autoregressive models and nonlinear moving average models are not always equivalent for nondeterministic processes, unlike in the linear case. If the probability of the next observation depends on the previous state of the process, a representation built on e_t may not be complete unless some information on the previous state is added [8]. The problem is that if e_t, ..., e_{t-m} are known, there is still not enough information to determine which state the series is in at t - m. Given the lack of knowledge of the initial state, it is impossible to predict future states, and without the state information the best predictions cannot be made. If the moving average representation cannot be made with e_t alone, it still may be possible to express a model in terms of past e_t and state information. It has been shown that for a large class of nondeterministic Markov processes, a model of this form can be constructed [8]. This link is important, because a recurrent network is this type of model. For further details on using recurrent networks for NARMA modeling see Connor et al. [9].
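A minimal sketch of the recurrent NARMA predictor of this section, with the previous residual fed back as the moving-average term. The weights below are illustrative placeholders, not trained values; the point is the structure, in which e_hat_{t-1} = x_{t-1} - x_hat_{t-1} re-enters the hidden layer.

```python
import math

def narma_predict(xs, W_in, W_res, w_out, p=2):
    """Recurrent NARMA(p,1) predictor:
    x_hat_t = sum_i w_i * tanh( sum_j W_ij x_{t-j} + W'_i (x_{t-1} - x_hat_{t-1}) ).
    Feeding back the previous residual supplies the moving-average component."""
    preds = [0.0] * len(xs)
    for t in range(p, len(xs)):
        e_prev = xs[t - 1] - preds[t - 1]        # approximate residual
        hidden = [math.tanh(sum(W_in[i][j] * xs[t - 1 - j] for j in range(p))
                            + W_res[i] * e_prev)
                  for i in range(len(W_in))]
        preds[t] = sum(w * h for w, h in zip(w_out, hidden))
    return preds

# Toy run on a sinusoidal series with illustrative (untrained) weights.
xs = [math.sin(0.3 * t) for t in range(50)]
W_in = [[0.8, -0.2], [0.1, 0.6]]
W_res = [0.5, -0.4]
w_out = [0.5, -0.3]
preds = narma_predict(xs, W_in, W_res, w_out)
```

Since tanh is bounded, every prediction is bounded by the sum of the output-weight magnitudes, mirroring the saturating behavior discussed later for the load forecasts.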
4 Competition on Load Forecasting Data

A fully interconnected recurrent network trained with the Williams and Zipser algorithm [10] was part of a competition to predict the loads of the Puget Sound Power and Light Company from November 11, 1990 to March 31, 1991. The object was to predict the load profile (the demand for electric power) of each day on the previous working day. Because the forecast is made on Friday morning, the Monday prediction is the most difficult. Actual loads and temperatures of the past are available, as well as forecasted temperatures for the day of the prediction.

Neural networks are not parsimonious, and many parameters need to be determined. Seasonality limits the amount of useful data for the load forecasting problem. For example, the load profile in August is not useful for predicting the load profile in January. This limited amount of data severely constrains the number of parameters a model can accurately determine. We avoided seasonality while increasing the size of the training set by including data from the last four winters. In total, 26976 vectors were available when data from August 1 to March 31 for 1986 to 1990 were included. The larger training set enables neural network models to be trained with less danger of overfitting the data. If the network can accurately model load growth over the years, then the network will have the added advantage of being exposed to a larger temperature spectrum on which to base future predictions. The larger temperature spectrum is hypothetically useful for predicting phenomena such as cold snaps, which can result in larger loads than normal. It should be noted that neural networks have been applied to this problem in the past [6].

Initially, five recurrent models were constructed, one for each day of the week, with Wednesday, Thursday and Friday in a single network.
Each network has temperature and load values from a week previous at that hour, the forecasted temperature of the hour to be predicted, and the hour, year and week of the forecast. The week of the forecast was included to allow the network to model the seasonality of the data. Some models have added load and temperature from earlier in the week, depending on the availability of the data. The networks themselves consisted of three to four neurons in the hidden layer. This predictor is of the form

l_t(k) = e_t(k-7) + f(l_t(k-7), e_t(k-7), \hat{T}_t(k), T_s(k-1), t, d, y),

where f() is a nonlinear function, l_t(k) is the load at hour t on day k, e_t is the noise, T is the temperature, \hat{T} is the forecasted temperature, d is the day of the week, and y is the year of the data.

After comparing its performance to the winner of the competition, the linear model in Fig. 1, the poor performance could be attributed to the choice of model rather than a problem with recurrent networks. It should be mentioned that the linear model took as one of its inputs the square of the last available load; this is a parsimonious way of modeling nonlinearities. A second recurrent predictor was then built with the same input and output configuration as the linear model, save the square of the previous load term, which the net's nonlinearities can handle. This net, denoted as the Recurrent Network, had a different recurrent model for each hour of the day; this yielded the best predictions. This predictor is of the form

l_t(k) = e_t(k) + f_t(l_t(k-1), e_t(k-1), \hat{T}_t(k), T_s(k-1), d, y).

All of the models in the figure use the last available load, the forecasted temperature at the hour to be predicted, the maximum forecasted temperature of the day to be predicted, the previous midnight temperatures, and the hour and year of the prediction. The second recurrent network was also trained with the last available load at that hour; this enabled e_{t-1} to be modeled.
The availability of e_{t-1} turned out to be the difference between making superior and average predictions. It should be noted that the use of e_{t-1} did not improve the results of linear models.

The three most important error measures are the weekly morning, afternoon, and total loads, and are listed in Table 1. The A.M. peak is the mean absolute percent error (MAPE) of the summed predictions from 7 A.M. to 9 A.M., the P.M. peak is the MAPE of the summed predictions from 5 P.M. to 7 P.M., and the total is the MAPE of the summed predictions over the entire day.

Table 1: Mean square error of the Recurrent Network over four winters (the last column is the 1990-91 test winter).

Recurrent | .0275 | .0355 | .0218 | .0311

Results of the total-power-for-the-day prediction, for the recurrent network and other predictors, are shown in Fig. 1. The performance on the A.M. and P.M. peaks was similar [9]. The failure of the daily recurrent network to predict accurately is a product of trying to model too complex a problem. When the complexity of the problem was reduced to that of predicting a single hour of the day, results improved significantly [7].

The superior performance of the recurrent network over the feedforward network is time series dependent. A feedforward and a recurrent network with the same input representation were trained to predict the 5 P.M. load on the previous work day. The feedforward network succeeded in modeling the training set with a mean square error of .0153, compared to the recurrent network's .0179. However, when tested on several winters outside the training set, the results, listed in Table 1, varied. For the 1990-91 winter, the recurrent network did better, with a mean square error of .0311 compared to the feedforward network's .0331. For the other winters, from the years before the training set, the results were quite different: the feedforward network won in all cases. The differences in prediction performance can be explained by the inability of the feedforward network to model load growth in the future.
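The MAPE measures defined above can be computed as follows. Summing the hourly predictions over each block before taking the percentage error is our reading of "MAPE of the summed predictions", so treat the exact aggregation as an assumption.

```python
def mape_of_summed(actual_days, pred_days, hours):
    """MAPE of hourly loads summed over a block of hours, averaged over days.
    actual_days / pred_days: lists of 24-element hourly load profiles."""
    errs = []
    for act, pred in zip(actual_days, pred_days):
        a = sum(act[h] for h in hours)
        p = sum(pred[h] for h in hours)
        errs.append(abs(p - a) / a)
    return 100.0 * sum(errs) / len(errs)

AM_PEAK = range(7, 10)    # 7 A.M. through 9 A.M.
PM_PEAK = range(17, 20)   # 5 P.M. through 7 P.M.
TOTAL = range(24)         # entire day
```

For example, a day whose every hourly prediction is 10% high contributes a 10% error to each measure, regardless of the absolute load level.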
The loads experienced in the 1990-91 winter were outside the range of the entire training set. The earlier winters' range of loads was not as far from the training set, and the feedforward network modeled them well.

The effect of the nonlinear nature of neural networks was apparent in the error residuals of the training and test sets. Figs. 2 and 3 are plots of the residuals against the predicted load for the training and test sets, respectively. In Fig. 2, the mean and variance of the residuals are roughly constant as a function of the predicted load; this is indicative of a good fit to the data. However, in Fig. 3, the errors tend to be positive for larger loads and negative for lesser loads. This is a product of the squashing effect of the sigmoidal nonlinearities. The squashing effect becomes acute during the prediction of the peak loads of the winter. These peak loads are caused when a cold spell occurs and the power demand reaches record levels. This is the only measure on which the performance of the recurrent networks is surpassed: human experts outperformed the recurrent network for predictions during cold spells. The recurrent network did outperform all other statistical models on this measure.

Figure 1: Competition performance on total power. [Bar chart comparing the Recurrent Network, Feedforward Network, Best Linear Model, and other entries not reproduced.]

Figure 2: Prediction vs. residual on the training set. [Scatter plot not reproduced.]
Figure 3: Prediction vs. residual on the testing set. [Scatter plot not reproduced.]

5 Conclusion

Recurrent networks are the nonlinear neural network analog of linear ARMA models. As such, they are well suited for time series that possess moving average components, are state dependent, or have trends. Recurrent neural networks can give superior results for load forecasting, but as with linear models, the choice of model is critical to good prediction performance.

6 Acknowledgements

We would like to thank Milan Casey Brace of the Puget Power Corporation, Dr. Seho Oh, Dr. Mohammed El-Sharkawi, Dr. Robert Marks, and Dr. Mark Damborg for helpful discussions. We would also like to thank the National Science Foundation for partially supporting this work.

References

[1] G. Box, Time Series Analysis: Forecasting and Control, Holden-Day, 1976.

[2] A. C. Harvey, The Econometric Analysis of Time Series, MIT Press, 1990.

[3] A. Lapedes and R. Farber, "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling", Technical Report LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, New Mexico, 1987.

[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing, vol. 1, D. E. Rumelhart and J. L. McClelland, eds. Cambridge: MIT Press, 1986, pp. 318-362.

[5] M. C. Brace, "A Comparison of the Forecasting Accuracy of Neural Networks with Other Established Techniques", Proc. of the 1st Int. Forum on Applications of Neural Networks to Power Systems, Seattle, July 23-26, 1991.

[6] L. Atlas, J. Connor, et al., "Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications", Advances in Neural Information Processing Systems 2, pp. 622-629, ed. D.
Touretzky, 1989.

[7] S. Oh et al., "Electric Load Forecasting Using an Adaptively Trained Layered Perceptron", Proc. of the 1st Int. Forum on Applications of Neural Networks to Power Systems, Seattle, July 23-26, 1991.

[8] M. Rosenblatt, Markov Processes: Structure and Asymptotic Behavior, Springer-Verlag, 1971, pp. 160-182.

[9] J. Connor, L. E. Atlas, and R. D. Martin, "Recurrent Neural Networks and Time Series Prediction", to be submitted to IEEE Trans. on Neural Networks, 1992.

[10] R. Williams and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks", Neural Computation, 1, 1989, 270-280.
1991
ANN Based Classification for Heart Defibrillators

M. Jabri, S. Pickard, P. Leong, Z. Chi, B. Flower, and Y. Xie
Sydney University Electrical Engineering
NSW 2006 Australia

Abstract

Current Intra-Cardiac defibrillators make use of simple classification algorithms to determine patient conditions and subsequently to enable proper therapy. The simplicity is primarily due to the constraints on power dissipation and area available for implementation. Sub-threshold implementation of artificial neural networks offers potential classifiers with higher performance than commercially available defibrillators. In this paper we explore several classifier architectures and discuss micro-electronic implementation issues.

1.0 INTRODUCTION

Intra-Cardiac Defibrillators (ICDs) represent an important therapy for people with heart disease. These devices are implanted and perform three types of actions:

1. monitor the heart
2. pace the heart
3. apply a high energy/high voltage electric shock

They sense the electrical activity of the heart through leads attached to the heart tissue. Two types of sensing are commonly used:

Single Chamber: A lead is attached to the Right Ventricular Apex (RVA).
Dual Chamber: An additional lead is attached to the High Right Atrium (HRA).

The actions performed by defibrillators are based on the outcome of a classification procedure based on the heart rhythms of different heart diseases (abnormal rhythms or "arrhythmias").

There are tens of different arrhythmias of interest to cardiologists. They are clustered into three groups according to the three therapies (actions) that ICDs perform. Figure 1 shows an example of a Normal Sinus Rhythm. Note the regularity in the beats. Of interest to us is what is called the QRS complex, which represents the electrical activity in the ventricle during a beat. The R point represents the peak, and the distance between two heart beats is usually referred to as the RR interval.
FIGURE 1. A Normal Sinus Rhythm (NSR) waveform.

Figure 2 shows an example of a Ventricular Tachycardia (more precisely, a Ventricular Tachycardia Slow, or VTS). Note that the beats are faster in comparison with an NSR. Ventricular Fibrillation (VF) is shown in Figure 3. Note the chaotic behavior and the absence of well-defined heart beats.

FIGURE 2. A Ventricular Tachycardia (VT) waveform.

The three waveforms discussed above are examples of Intra-Cardiac Electro-Grams (ICEGs). NSR, VT and VF are representative of the types of action a defibrillator has to take. For an NSR, an action of "continue monitoring" is used. For a VT, an action of "pacing" is performed, whereas for VF a high energy/high voltage shock is issued.

Because they are nearfield signals, ICEGs are different from external Electro-Cardio-Grams (ECGs). As a result, classification algorithms developed for ECG patterns may not necessarily be valuable for ICEG recognition. The difficulties in ICEG classification lie in that many arrhythmias share similar features, and fuzzy situations often need to be dealt with. For instance, many ICDs make use of the heart rate as a fundamental feature in the arrhythmia classification process. But several arrhythmias that require different types of therapeutic action have similar heart rates. For example, a Sinus Tachycardia (ST) is an arrhythmia characterized by a heart rate that is higher than that of an NSR and in the vicinity of a VT. Many classifiers would classify an ST as VT, leading to a therapy of pacing, whereas an ST is supposed to be grouped under an NSR type of therapy. Another example is a fast VT, which may be associated with heart rates that are indicative of VF. In this case the defibrillator would apply a VF type of therapy when only a VT type therapy is required (pacing).

FIGURE 3. A Ventricular Fibrillation (VF) waveform.
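The rate-overlap problem can be made concrete with a toy rate-only rule of the kind the text criticizes. The thresholds below are illustrative only, not taken from the paper or from any ICD; the point is that a sinus tachycardia lands in the VT band by rate alone.

```python
def heart_rate_bpm(rr_intervals_ms):
    """Mean heart rate in beats per minute from R-R intervals in milliseconds."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60000.0 / mean_rr

def rate_only_class(bpm, vt_thresh=120.0, vf_thresh=200.0):
    """Naive rate-only rule: above vt_thresh pace, above vf_thresh shock.
    Thresholds are hypothetical."""
    if bpm >= vf_thresh:
        return "VF"
    if bpm >= vt_thresh:
        return "VT"
    return "NSR"

# A sinus tachycardia at roughly 130 bpm is misclassified as VT by rate alone,
# which would trigger pacing where "continue monitoring" is the correct therapy.
st_rr = [460.0] * 10
```

This is exactly why the paper turns to additional features (rate averages, onset, probability densities) with higher discrimination capability.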
The overlap of the classes when only heart rate is used as the main classification feature highlights the need to consider further features with higher discrimination capabilities. Features that are commonly used in addition to the heart rate are: 1. average heart rate (over a period of time); 2. arrhythmia onset; 3. arrhythmia probability density functions. Because of the limited power budget and area, arrhythmia classifiers in ICDs are kept extremely simple with respect to what could be achieved with more relaxed implementation constraints. As a result, false positives (pacing or defibrillation when none is required) may be frequent, and error rates may reach 13%. Artificial neural network techniques offer the potential of higher classification performance. In order to keep power consumption as low as possible, VLSI micro-power implementation techniques need to be considered. In this paper, we discuss several classifier architectures and sub-threshold implementation techniques. Both single and dual chamber based classifications are considered. 2.0 DATA Data used in our experiments were collected from Electro-Physiological Studies (EPS) at hospitals in Australia and the UK. Altogether, and depending on whether single or dual chamber sensing is considered, data from over 70 patients is available. Cardiologists from our commercial collaborator have labelled this data. All tests were performed on a testing set that was not used in classifier building. Arrhythmias recorded during EPS are produced by stimulation. As a result, no natural transitions are captured. 3.0 SINGLE CHAMBER CLASSIFICATION We have evaluated several approaches for single chamber classification. It is important to note here that in the case of single chamber sensing, not all arrhythmias can be correctly classified (not even by human experts). This is because data from the RVA lead represents mainly the ventricular electrical activity,
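Two of the auxiliary features listed above can be sketched from RR intervals alone. The exact definitions used in ICDs are not given in the paper; the windowed average and the onset ratio below are assumed, illustrative formulations.

```python
# Sketch of two auxiliary features, under assumed definitions:
# - average rate over a recent window of RR intervals (ms), and
# - a simple "onset" measure: ratio of recent to earlier mean RR interval.
#   A ratio well below 1 indicates a sudden rate jump, typical of VT onset.

def average_rate_bpm(rr_ms, window=8):
    recent = rr_ms[-window:]
    return 60000.0 * len(recent) / sum(recent)

def onset_ratio(rr_ms, window=4):
    before = rr_ms[:-window]
    after = rr_ms[-window:]
    return (sum(after) / len(after)) / (sum(before) / len(before))

rr = [800.0] * 8 + [400.0] * 4   # abrupt jump from 75 bpm to 150 bpm
print(round(onset_ratio(rr), 2))
```

An ST typically accelerates gradually (onset ratio near 1), while a VT starts abruptly (ratio well below 1), which is why onset adds discrimination that rate alone lacks.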
and many atrial arrhythmias require atrial information for proper diagnosis. 3.1 MULTI-LAYER PERCEPTRONS Table 1 shows the performance of multi-layer perceptrons (MLP) trained using vanilla backpropagation, conjugate gradient, and a specialized limited precision training algorithm that we call the Combined Search Algorithm (Xie and Jabri, 91). The inputs to the MLP are 21 features extracted from the time domain. There are three outputs representing the three main groupings: NSR, VT and VF. We do not have the space here to elaborate on the choice of the input features. Interested readers are referred to (Chi and Jabri, 91; Leong and Jabri, 91). TABLE 1. Performance of Multi-layer Perceptron Based Classifiers

  Network   Training Algorithm   Precision   Average Performance
  21-5-3    backprop.            unlimited   96%
  21-5-3    conj.-grad.          unlimited   95.5%
  21-5-3    CSA                  6 bits      94.8%

The summary here indicates that high performance single chamber based classification can be achieved for ventricular arrhythmias. It also indicates that limited precision training does not significantly degrade this performance. In the case of the limited precision MLP, 6 bits plus a sign bit are used to represent network activities and weights. 3.2 INDUCTION OF DECISION TREES The same training data used to train the MLP was used to create a decision tree using the C4.5 program developed by Ross Quinlan (a derivative of the ID3 algorithm). The resultant tree was then tested, and the performance was 95% correct classification. In order to achieve this high performance, the whole training data had to be used in the induction process (windowing disabled). This has a negative side effect in that the trees generated tend to be large. The implementation of decision trees in VLSI is not a difficult procedure.
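The "6 bits plus a sign bit" representation can be sketched as a simple fixed-point quantizer. The weight range of [-1, 1) assumed below is an illustration, not a figure from the paper.

```python
# Sketch of a 6-bit-plus-sign fixed-point representation, as used for the
# limited precision MLP above. The magnitude range [0, 1) is an assumption.

def quantize(w, bits=6, max_mag=1.0):
    levels = 2 ** bits                      # 64 magnitude levels
    step = max_mag / levels                 # 1/64 = 0.015625
    q = int(abs(w) / step)                  # truncate to nearest level below
    q = min(q, levels - 1)                  # saturate at the largest magnitude
    return (-1 if w < 0 else 1) * q * step

print(quantize(0.731))   # 46/64 = 0.71875
```

With only 64 magnitude levels the worst-case rounding error per weight is one step (1/64 here), which is consistent with the small accuracy drop reported in Table 1.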
The problem, however, is that because of the binary decision process, the branching thresholds are difficult to implement in digital hardware (for large trees) and even more difficult to implement in micro-power analog hardware. The latter implementation technique would be possible if the induction process could take advantage of the hardware characteristics, in a similar way that "in-loop" training of sub-threshold VLSI MLPs achieves the same objective. 4.0 DUAL CHAMBER BASED CLASSIFIERS Two architectures for dual chamber based classifiers have been investigated: Multi-Module Neural Networks and a hybrid Decision Tree/MLP. The difference between the classifier architectures is a function of which arrhythmia group is being targeted for classification. 4.1 MULTI-MODULE NEURAL NETWORK The multi-module neural network architecture aims at improving the performance with respect to the classification of Supra-Ventricular Tachycardia (SVT). The architecture is shown in Figure 4, with the classifier being the right-most block. FIGURE 4. Multi-Module Neural Network Classifier. The idea behind this architecture is to divide the classification problem into that of discriminating between NSR and SVT on one hand and VF and VT on the other. The details of the operation and the training of this structured classifier can be found in (Chi and Jabri, 91). In order to evaluate the performance of the MMNN classifier, a single large MLP was also developed. The single large MLP makes use of the same input features as the MMNN and targets the same classes.
The performance comparison is shown in Table 2, which clearly shows that a higher performance is achieved using the MMNN. 4.2 HYBRID DECISION TREE/MLP The hybrid decision tree/multi-layer perceptron "mimics" the classification process as performed by cardiologists. The architecture of the classifier is shown in Figure 5. The decision tree is used to produce a judgement on: 1. the rate aspects of the ventricular and atrial channels; 2. the relative timing between the atrial and ventricular beats. In parallel with the decision tree, a morphology based classifier is used to perform template matching. The morphology classifier is a simple MLP whose inputs are signal samples (sampled at half the speed of the normal sampling rate of the signal). The outputs of the timing and morphology classifiers are fed into an arbitrator which produces the class of the arrhythmia being observed. An "X out of Y" classifier is used to smooth out the classification output by the arbitrator and to produce an "averaged" final output class. TABLE 2. Performance of the Multi-Module Neural Network Classifier and comparison with that of a single large MLP.

  Rhythm    MMNN Best %   MMNN Worst %   Single MLP %
  NSR       95.3          93.8           93.4
  ST        98.6          98.6           97.5
  SVT       96.4          93.3           95.4
  AT        95.9          93.2           71.2
  AF        86.7          85.4           77.5
  VT        99.4          99.4           100
  VTF       100           100            80.3
  VF        97            97             99.4
  Average   96.2          95.1           89.3
  SD        4.18          4.8            11.31

Further details on the implementation and the operation of the hybrid classifier can be found in (Leong and Jabri, 91). This classifier achieves high performance classification over several types of arrhythmia. Table 3 shows the performance on a multi-patient database and indicates a performance of over 99% correct classification. FIGURE 5. Architecture of the hybrid decision tree/neural network classifier.
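The paper does not specify the X, Y or arbitration details of the "X out of Y" smoother; one plausible reading is sketched below: the reported class only switches when a candidate has won at least X of the last Y raw decisions, so isolated misclassifications do not trigger a therapy change.

```python
# Sketch of an "X out of Y" smoother (one assumed interpretation): the final
# class switches only when some class wins at least x of the last y raw
# arbitrator decisions.
from collections import Counter, deque

def x_out_of_y(raw_decisions, x=3, y=5, initial="NSR"):
    window = deque(maxlen=y)
    current = initial
    out = []
    for d in raw_decisions:
        window.append(d)
        top, count = Counter(window).most_common(1)[0]
        if count >= x:
            current = top
        out.append(current)
    return out

# A single spurious "VT" raw decision does not flip the smoothed output:
print(x_out_of_y(["NSR", "NSR", "VT", "NSR", "NSR"]))
```

This kind of temporal hysteresis is cheap to implement in hardware (a shift register and counters), which matters under the power and area constraints discussed later.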
5.0 MICROELECTRONIC IMPLEMENTATIONS In all our classifier architecture investigations, micro-electronic implementation considerations were a constant constraint. Many other architectures that can achieve competitive performance are not discussed in this paper because of their unsuitability for low power/small area implementation. The main challenge in a low power/small area VLSI implementation of classifiers similar to those discussed above is how to implement, in very low power, an MLP architecture that can reliably learn and achieve a performance comparable to that of the functional simulations. Several design strategies can achieve the low power and small area objectives. TABLE 3. Performance of the hybrid decision tree/MLP classifier for dual chamber classification.

  SubClass   Class   NSR    SVT    VT    VF
  NSR        NSR     5247   4      2     0
  ST         NSR     1535   24     2     1
  SVT        SVT     0      1022   0     0
  AT         SVT     0      52     0     0
  AF         SVT     0      165    0     0
  VT         VT      0      0      322   0
  VT 1:1     VT      2      0      555   0
  VF         VF      0      0      2     196
  VTF        VF      0      2      0     116

Both digital and analog implementation techniques are being investigated, and we report here on our analog implementation efforts only. Our analog implementations make use of the subthreshold operation mode of MOS transistors in order to maintain a very low power dissipation. 5.1 MASTER PERTURBATOR CHIP The architecture of this chip is shown in Figure 6(a). Weights are implemented using a differential capacitor scheme refreshed from digital RAM. Synapses are implemented as four quadrant Gilbert multipliers (Pickard et al, 92). The chip has been fabricated and is currently being tested. The building blocks have so far been successfully tested. Two networks are implemented: a 7-5-3 network (a total of 50 synapses) and a small single layer network. The single layer network has been successfully trained to perform simple logic operations using the Weight Perturbation algorithm (Jabri and Flower, 91).
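The appeal of weight perturbation for analog VLSI is that gradients are estimated by perturbing one weight at a time and measuring the change in network error, so no backpropagation circuitry is needed on chip. The sketch below illustrates the idea on a toy problem; the network (a single logistic unit learning AND) and all constants are illustrative, not taken from the chip.

```python
# Sketch of the Weight Perturbation idea (after Jabri and Flower, 91):
# finite-difference gradient estimation by perturbing weights one at a time.
# Network, task and learning constants are illustrative assumptions.
import math
import random

def output(w, x):
    # Single logistic unit with bias w[0].
    return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x[0] + w[2] * x[1])))

def error(w, data):
    return sum((output(w, x) - t) ** 2 for x, t in data)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate
random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(3)]
pert, lr = 1e-4, 0.5
for _ in range(4000):
    e0 = error(w, data)
    grads = []
    for i in range(len(w)):
        w[i] += pert
        grads.append((error(w, data) - e0) / pert)  # measured error change
        w[i] -= pert                                # undo the perturbation
    for i in range(len(w)):
        w[i] -= lr * grads[i]
correct = all((output(w, x) > 0.5) == bool(t) for x, t in data)
print(correct)
```

On chip, the "error measurement" is simply re-running the analog forward pass, so the training loop needs only weight storage, a perturbation source and an error comparator.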
5.2 THE BOURKE CHIP The BOURKE chip (Leong and Jabri, 92) makes use of Multiplying Digital to Analog Converters to implement the synapses. Weights are stored in digital registers. All neurons were implemented as external resistors for the sake of evaluation. Figure 6(b) shows the schematics of a synapse. The BOURKE chip contains a small 3-3-1 network and has been successfully tested (it was successfully trained to perform an XOR). A larger version of this chip with a 10-6-4 network is being fabricated. 6.0 CONCLUSIONS We have presented in this paper several architectures for single and dual chamber arrhythmia classifiers. FIGURE 6. (a) Architecture of the Master Perturbator chip. (b) Schematics of the BOURKE chip synapse implementation. In both cases a good classification performance was achieved. In particular, for the case of dual chamber classification, the complexity of the problem calls for more structured classifier architectures. Two microelectronic low power implementations were briefly presented. Progress so far indicates that micro-power VLSI ANNs offer a technology that will enable the use of powerful classification strategies in implantable defibrillators. Acknowledgment Work presented in this paper was supported by the Australian Department of Industry Technology & Commerce, Telectronics Pacing Systems, and the Australian Research Council. References Z. Chi and M. Jabri (1991), "Identification of Supraventricular and Ventricular Arrhythmias Using a Combination of Three Neural Networks", Proceedings of the Computers in Cardiology Conference, Venice, Italy, September 1991. M. Jabri and B. Flower (1991), "Weight Perturbations: An optimal architecture and learning technique for analog VLSI feed-forward and recurrent multi-layer networks", Neural Computation, Vol. 3, No. 4, MIT Press. P.H.
W. Leong and M. Jabri (1991), "Arrhythmia Classification Using Two Intracardiac Leads", Proceedings of the Computers in Cardiology Conference, Venice, Italy. P.H.W. Leong and M. Jabri (1992), "An Analogue Low Power VLSI Neural Network", Proceedings of the Third Australian Conference on Neural Networks, pp. 147-150, Canberra, Australia. S. Pickard, M. Jabri, P.H.W. Leong, B.G. Flower and P. Henderson (1992), "Low Power Analogue VLSI Implementation of A Feed-Forward Neural Network", Proceedings of the Third Australian Conference on Neural Networks, pp. 88-91, Canberra, Australia. Y. Xie and M. Jabri (1991), "Training Algorithms for Limited Precision Feed-forward Neural Networks", submitted to IEEE Transactions on Neural Networks and Neural Computation.
1991
57
526
Adaptive Development of Connectionist Decoders for Complex Error-Correcting Codes Sheri L. Gish Mario Blaum IBM Research Division Almaden Research Center 650 Harry Road San Jose, CA 95120 Abstract We present an approach for development of a decoder for any complex binary error-correcting code (ECC) via training from examples of decoded received words. Our decoder is a connectionist architecture. We describe two separate solutions: a system-level solution (the Cascaded Networks Decoder); and the ECC-Enhanced Decoder, a solution which simplifies the mapping problem which must be solved for decoding. Although both solutions meet our basic approach constraint of simplicity and compactness, only the ECC-Enhanced Decoder meets our second basic constraint of being a generic solution. 1 INTRODUCTION 1.1 THE DECODING PROBLEM An error-correcting code (ECC) is used to identify and correct errors in a received binary vector which is possibly corrupted due to transmission across a noisy channel. In order to use a selected error-correcting code, the information bits, or the bits containing the message, are encoded into a valid ECC codeword by the addition of a set of extra bits, the redundancy, determined by the properties of the selected ECC. To decode a received word, there is a pre-processing step first in which a syndrome is calculated from the word. The syndrome is a vector whose length is equal to the redundancy. If the syndrome is the all-zero vector, then the received 691 692 Gish and Blaum word is a valid codeword (no errors). The non-zero syndromes have a one-to-one relationship with the error vectors, provided the number of errors does not exceed the error-correcting capability of the code. (An error vector is a binary vector equal in length to an ECC codeword, with the error positions having the value 1 while the rest of the positions have the value 0.)
The decoding process is defined as the mapping of a syndrome to its associated error vector. Once an error vector is found, the corrected codeword can be calculated by XORing the error vector with the received word. For more background in error-correcting codes, the reader is referred to any book in the field, such as [2, 9]. ECCs differ in the number of errors which they can correct and also in the distance (measured as a Hamming distance in codespace) which can be recognized between the received word and a true codeword. Codes which can correct more errors and cover greater distances are considered more powerful. However, in practice the difficulty of developing an efficient decoder which can correct many errors prevents the use of most ECCs in the solution of real world problems. Although decoding can be done for any ECC via lookup table, this method quickly becomes intractable as the length of codewords and the number of errors possibly corrected increase. Development of an efficient decoder for a particular ECC is not straightforward. Moreover, it was shown that decoding of a random code is an NP-hard problem [1, 4]. The purpose of our work is to develop an ECC decoder using the trainable machine paradigm; i.e. we develop a decoder via training using examples of decoded received words. To prove our concept, we have selected a binary block code, the (23,12,7) Golay Code, which has "real world" complexity. The Golay Code corrects up to 3 errors and has minimum distance 7. A Golay codeword is 23 bits long (12 information bits, 11 bit redundancy); the syndrome is 11 bits long. There exist many efficient decoding methods for the Golay code [2, 3, 9], but the code complexity represents quite a challenge for our proposed approach.
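The syndrome-to-error-vector pipeline described above can be illustrated on a code much smaller than the Golay code. The sketch below uses a (7,4) Hamming code, where the syndrome directly names the single error position; this stands in for the much harder Golay mapping the paper attacks.

```python
# Illustrative decoding pipeline on a (7,4) Hamming code (not the Golay
# code): compute the syndrome, map it to an error vector, XOR to correct.

# Parity-check matrix H, in the convention where the syndrome, read as a
# binary number, gives the 1-based position of a single-bit error.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

def decode(received):
    s = syndrome(received)
    if s == (0, 0, 0):
        return received                 # valid codeword, no errors
    pos = s[0] + 2 * s[1] + 4 * s[2]    # syndrome -> error position
    corrected = list(received)
    corrected[pos - 1] ^= 1             # XOR out the single-bit error
    return corrected

codeword = [1, 0, 1, 1, 0, 1, 0]        # a valid codeword for this H
received = codeword.copy()
received[2] ^= 1                        # inject a single-bit error
print(decode(received) == codeword)
```

For a single-error-correcting Hamming code this syndrome-to-error map is trivial; for the 3-error-correcting Golay code the map over 2^11 syndromes has no such closed form, which is exactly the mapping the connectionist decoder must learn.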
1.2 A CONNECTIONIST ECC DECODER We use a connectionist architecture as our ECC decoder; the input is a syndrome (we assume that the straightforward step of syndrome calculation is pre-processing) and the output is the portion of the error vector corresponding to the information bits in the received word (we ignore the redundancy). The primary reason for our choice of a connectionist architecture is its inherent simplicity and compactness; a connectionist architecture solution is readily implemented in either hardware or software solutions to complex real world problems. The particular architecture we use is the multi-layer feedforward network with one hidden layer. There are full connections only between adjacent layers. The number of nodes in the input layer is the number of bits in the syndrome, and the number of nodes in the output layer is the number of information bits in the ECC codeword. The number of nodes in the hidden layer is a free parameter, but typically this number is no more than 1 or 2 nodes greater than the number of nodes in the input layer. Our activation function is the logistic function and our training algorithm is backpropagation (see [10] for a description of both). This architectural approach has been demonstrated to be both cost-effective and a superior performer compared to classical statistical alternative methods in the solution of complex mapping problems when it is used as a trainable pattern classifier [6, 7]. There are two basic constraints which we have placed on our trainable connectionist decoder. First, the final connectionist architecture must be simple and contain as few nodes as possible. Second, the method we use to develop our decoder must be able to be generalized to any binary ECC. To meet the second constraint, we ensured that the training dataset
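The shape of this decoder for the Golay case can be sketched directly from the description above: 11 inputs (syndrome bits), a hidden layer of comparable size, 12 logistic outputs thresholded to error bits. The weights below are random placeholders, not a trained decoder.

```python
# Minimal forward pass of the described architecture: one hidden layer,
# logistic activations, full connections between adjacent layers. Sizes
# match the Golay case; weights are random placeholders.
import math
import random

random.seed(1)

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    return [logistic(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

n_in, n_hid, n_out = 11, 12, 12   # syndrome bits -> hidden -> info-bit errors
W1 = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[random.gauss(0, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

syndrome_bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
out = layer(layer(syndrome_bits, W1, b1), W2, b2)
error_bits = [1 if v >= 0.5 else 0 for v in out]   # thresholded error vector
print(len(error_bits))
```

Training such a network with backpropagation on (syndrome, error vector) pairs is the paper's "trainable machine paradigm"; the two solutions that follow differ in how they make that mapping learnable.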
contained only examples of decoded words (i.e. no a priori knowledge of code patterning or existing decoding algorithms was included), and also that the training dataset was as small a subset of the possible error vectors as was required to obtain generalization by trained networks. 2 RESULTS 2.1 THE CASCADED NETWORKS DECODER Using our basic approach, we have developed two separate solutions. One, the Cascaded Networks Decoder (see Figure 1), is a system-level solution which parses the decoding problem into a set of more tractable problems, each addressed by a separate network. These smaller networks each solve either simple classification problems (binary decisions) or are specialized decoders. Performance of the Cascaded Networks Decoder is 95% correct for the Golay code (tested on all 2^11 possible error vectors), and the whole system is small and compact. However, this solution does not meet our constraint that the solution method be generic, since the parsing of the original problem does require some a priori knowledge about the ECC, and the training of each network is done on a separate, self-contained schedule. 2.2 THE ECC-ENHANCED DECODER The approach taken by the Cascaded Networks Decoder simplifies the solution strategy of the decoding problem, while the ECC-Enhanced Decoder simplifies the mapping problem to be solved by the decoder. In the ECC-Enhanced Decoder, both the input syndrome and the output error vector are encoded as codewords of an ECC. Such encoding should serve to separate the inputs in input space and the outputs in output space, creating a "region-to-region" mapping which is much easier than the "point-to-point" mapping required without encoding [8]. In addition, the decoding of the network output compensates for some level of uncertainty in the network's performance; an output vector within a small distance of the target vector will be corrected to the actual target by the ECC.
This enhances training procedures [5, 8]. We have found that the ECC-Enhanced Decoder method meets all of our constraints for a connectionist architecture. However, we also have found that choosing the best ECC for encoding the input and for encoding the output represents two critical and quite separate problems which must be solved in order for the method to succeed. 2.2.1 Choosing the Input ECC Encoding The goal for the chosen ECC into which the input is encoded is to achieve maximum separation of input patterns in code space. The major constraint is the size of the codeword (the number of bits which the length of the redundancy must be), because longer codewords increase the complexity of training and the size (in number of nodes) of the connectionist architecture. Figure 1: Cascaded Networks Decoder. A system-level solution incorporating 5 cascaded neural networks. To determine the effect of different types of ECCs on the separation of input patterns in code space, we constructed a 325 pattern training dataset (mapping an 11 bit syndrome to a 12 bit error vector) and encoded only the inputs using 4 different ECCs. The candidate ECCs (with the size of redundancy required to encode the 11 bit syndrome) were: • Hamming (bit level, 4 bit redundancy) • Extended Hamming (bit level, 5 bit redundancy) • Reed Solomon (4 bit byte level, 2 byte redundancy) • Fire (bit level, 11 bit redundancy) We trained 5 networks (1 with no encoding of input, 1 each with a different ECC encoding) using this training dataset. Empirically, we had determined that this training dataset is slightly too small to achieve generalization for this task; we trained each network until its performance level on a 435 pattern test dataset (different patterns from the training dataset but encoded identically) degraded 20%.
We then analyzed the effect of the input encoding on the patterning of error positions we observed for the output vectors. The results of our analysis are illustrated in Figures 2 and 3. These bar graphs look only at output vectors found to have 2 or more errors, and show the proximity of error positions within an output vector. Each bar corresponds to the maximum distance of error positions within a vector (adjacent positions have a distance of 1). The bar height represents the total frequency of vectors with a given maximum distance; each bar is color-coded to break down the frequency by total number of errors per vector. This type of measurement shows the degree of burst (clustering of error positions) in the errors; knowing whether or not one has burst errors influences the likelihood of correction of those errors by an ECC (for instance, Fire codes are burst correcting codes). Figure 2: Bar Graphs of Output Errors Made by the Decoder. There was no encoding of the input in this instance. Training dataset results are on left, test dataset results are on right. Our analysis shows that the Reed Solomon ECC is the only input encoding which separated the input patterns in a way which made the use of an output pattern ECC encoding effective (it resulted in more burst-type errors, and decreased the total number of error positions in output vectors which had errors). The 11 bit redundancy required by the Fire code for input encoding increased complexity so much that this solution was worse than the others in terms of performance. Thus, we have chosen the Reed Solomon ECC for input encoding in our ECC-Enhanced Decoder.
2.2.2 Choosing the Output ECC Encoding The goal for the chosen ECC into which the output is encoded is correction of the maximum number of errors made by the decoder. Figure 3: Bar Graphs of Effects of Different ECC Input Encodings on Output Errors Made by the Decoder. Training dataset results are on left, test dataset results are on right. Top row is Hamming code encoding, bottom row is Reed Solomon encoding. Like the constraint imposed on the chosen ECC for input encoding, the ECC selected for encoding the output should add as small a redundancy as possible. However, there is another, even more important constraint on the choice of ECC for output encoding: decoding simplicity. The major advantage gained from encoding the output is the correction of slight uncertainty in the performance of the decoder, and this advantage is gained after the output is decoded. Thus, any ECC selected for output encoding should be one which can be decoded efficiently. The error separation results we gained from our analysis of the effects of input encoding were used to guide our choices for an ECC into which the output would be encoded. We chose our ECC from the 4 candidates we considered for the input (these ECCs all can be decoded efficiently). The redundancy cost for encoding a 12 bit error vector was the same as in the 11 bit input case for the Reed Solomon and Fire codes, but was increased by 1 bit for the Hamming codes. Based on the result that
a Reed Solomon encoding of the input both increased the amount of burst errors and decreased the total number of errors per output vector, we chose the Hamming code and the Fire code for our output encoding ECC. Both encodings yielded excellent performance on the Golay code decoding problem; the Fire code output encoding resulted in better generalization by the network and thus better performance (87% correct) than the Hamming code output encoding (84% correct). References [1] E. R. Berlekamp, R. J. McEliece and H. C. A. van Tilborg, "On the Inherent Intractability of Certain Coding Problems," IEEE Trans. on Inf. Theory, Vol. IT-24, pp. 384-386, May 1978. [2] R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, 1983. [3] M. Blaum and J. Bruck, "Decoding the Golay Code with Venn Diagrams," IEEE Trans. on Inf. Theory, Vol. IT-36, pp. 906-910, July 1990. [4] J. Bruck and M. Naor, "The Hardness of Decoding Linear Codes with Preprocessing," IEEE Trans. on Inf. Theory, Vol. IT-36, pp. 381-385, March 1990. [5] T. G. Dietterich and G. Bakiri, "Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs," Oregon State University Computer Science TR 91-30-2, 1991. [6] S. L. Gish and W. E. Blanz, "Comparing a Connectionist Trainable Classifier with Classical Statistical Decision Analysis Methods," IBM Research Report RJ 6891 (65717), June 1989. [7] S. L. Gish and W. E. Blanz, "Comparing the Performance of Connectionist and Statistical Classifiers on an Image Segmentation Problem," in D. S. Touretzky (ed) Neural Information Processing Systems 2, pp. 614-621, Morgan Kaufmann Publishers, 1990. [8] H. Li, T. Kronander and I. Ingemarsson, "A Pattern Classifier Integrating Multilayer Perceptron and Error-Correcting Code," in Proceedings of the IAPR Workshop on Machine Vision Applications, pp.
113-116, Tokyo, November 1990. [9] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Amsterdam, The Netherlands: North-Holland, 1977. [10] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," in D. E. Rumelhart, J. L. McClelland et al. (eds) Parallel Distributed Processing Vol. 1, Chapter 8, MIT Press, 1986.
1991
58
527
Self-organisation in real neurons: Anti-Hebb in 'Channel Space'? Anthony J. Bell AI-lab, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, BELGIUM (tony@arti.vub.ac.be) Abstract Ion channels are the dynamical systems of the nervous system. Their distribution within the membrane governs not only communication of information between neurons, but also how that information is integrated within the cell. Here, an argument is presented for an 'anti-Hebbian' rule for changing the distribution of voltage-dependent ion channels in order to flatten voltage curvatures in dendrites. Simulations show that this rule can account for the self-organisation of dynamical receptive field properties such as resonance and direction selectivity. It also creates the conditions for the faithful conduction within the cell of signals to which the cell has been exposed. Various possible cellular implementations of such a learning rule are proposed, including activity-dependent migration of channel proteins in the plane of the membrane. 1 INTRODUCTION 1.1 NEURAL DYNAMICS Neural inputs and outputs are temporal, but there are no established ways to think about temporal learning and dynamical receptive fields. The currently popular simple recurrent nets have only one kind of dynamical component: a capacitor, or time constant. Though it is possible to create any kind of dynamics using capacitors and static non-linearities, it is also possible to write any program on a Turing machine. 59 60 Bell Biological evolution, it seems, has elected for diversity and complexity over uniformity and simplicity in choosing voltage-dependent ion channels as the 'instruction set' for dynamical computation. 1.2 ION CHANNELS As more ion channels with varying kinetics are discovered, the question of their computational role has become more pertinent.
Figure 1, derived from a model thalamic cell, shows the log time constants of 11 currents, plotted against the voltage ranges over which they activate or inactivate. The variety of available kinetics is probably under-represented here, since a combinatorial number of differences can be obtained by combining different protein sub-domains to make a channel [6]. Given the likelihood that channels are inhomogeneously distributed throughout the dendrites [7], one way to tackle the question of their computational role is to search for a self-organisational principle for forming this distribution. Such a 'learning rule' could be construed as operating during development or dynamically during the life of an organism, and could be considered complementary to learning involving synaptic changes. The resulting distribution and mix of channels would then be, in some sense, optimal for integrating and communicating the particular high-dimensional spatio-temporal inputs which the cell was accustomed to receiving. Figure 1: Diversity of ion channel kinetics. The voltage-dependent equilibrium log time constants of 11 channels are plotted here for the voltage ranges for which their activation (or inactivation) variables go from 0.1 -> 0.9 (or 0.9 -> 0.1). The channel kinetics are taken from a model by W. Lytton [10]. Notice the range of speeds of operation, from the spiking Na+ channel around 0.1 ms, to the K_M channel in the 1 s (cognitive) range. 2 THE BIOPHYSICAL SUBSTRATE The substrate for self-organisation is the standard cable model for a dendrite or axon:

    Ga ∂²V/∂x² = C ∂V/∂t + Σj Gj gj (V − Ej) + Σk Gk gk (V − Ek)    (1)

In this, Ga represents the conductance along the axis of the cable, C is the capacitance, and the two sums represent synaptic (indexed by j) and intrinsic (indexed by k) currents.
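A discretised, passive version of this cable model can be sketched in a few lines: each compartment receives the axial current given by the local voltage curvature, loses leak current, and charges its capacitance. Only a leak channel is kept here, and all parameter values are illustrative, not taken from the paper.

```python
# Minimal sketch of a discretised passive cable (equation 1 with a single
# leak channel): explicit Euler update of V on a chain of compartments.
# Parameter values and the sealed-end boundary condition are assumptions.

def step(V, dt=0.01, Ga=1.0, C=1.0, g_leak=0.1, E_leak=0.0, inject=None):
    n = len(V)
    newV = V[:]
    for x in range(n):
        left = V[x - 1] if x > 0 else V[x]          # sealed (reflective) ends
        right = V[x + 1] if x < n - 1 else V[x]
        curvature = left - 2 * V[x] + right         # discrete d2V/dx2
        current = Ga * curvature - g_leak * (V[x] - E_leak)
        if inject and x == inject[0]:
            current += inject[1]                    # injected current
        newV[x] = V[x] + dt * current / C           # capacitive charging
    return newV

V = [0.0] * 20
for _ in range(2000):
    V = step(V, inject=(0, 1.0))                    # steady current at one end
print(V[0] > V[10] > V[19] >= 0.0)                  # voltage decays with x
```

The decay of the steady-state profile along the cable is exactly the passive signal loss that, later in the paper, the learned active channels compensate for.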
G is a maximum conductance (a channel density or 'weight'), g is the time-varying fraction of the conductance active, and E is a reversal potential. The system can be summarised by saying that the current flow out of a segment of a neuron is equal to the sum of currents input to that segment, plus the capacitive charging of the membrane. This leads to a simpler form:

    i = Σj ḡj ij + Σk ḡk ik    (2)

Here, i = ∂²V/∂x², ḡj = Gj/Ga, ij = gj(V − Ej), and C is considered as an intrinsic conductance whose ḡk and ik are C/Ga and ∂V/∂t respectively. In this form, it is clearer that each part of a neuron can be considered as a 'unit', diffusively coupled to its neighbours, to which it passes its weighted sum of inputs. Figure 2: A compartment of a neuron, shown schematically and as a circuit, with synaptic channels (excitatory and inhibitory), intrinsic (leakage) channels, capacitive membrane charging and electrodiffusive spread. The cable equation is just Kirchoff's Law: current in = current out. The weights ḡk, representing the Ga-normalised densities of channel species k, are considered to span channel space, as opposed to the ḡj weights, which are our standard synaptic strength parameters. Parameters determining the dynamics of the gk's specify points in kinetics space. Neuromodulation [8], a universally important phenomenon in real nervous systems, consists of specific chemicals inducing short-term changes in the kinetics space co-ordinates of a channel type, resulting, for example, in shifts in the curves in Figure 1. 3 THE ARGUMENT FOR ANTI-HEBB Learning algorithms, of the type successful in static systems, have not been considered for these low-level dynamical components (though see [2] for approaches to synaptic learning in realistic systems). Here, we address the issue of unsupervised learning for channel densities. In the neural network literature, unsupervised learning consists of Hebbian-type algorithms and information theoretic approaches based on objective functions [1].
In the absence of a good information theoretic framework for continuous time, non-Gaussian analog systems where noise is undefined, we resort to exploring the implications of the effects of simple local rules. The most obvious rule following from equation 2 would be a correlational one of the following form, with the learning rate \epsilon positive or negative:

\Delta \bar{g}_k = \epsilon i_k i    (3)

While a polarising (or Hebbian) rule (see Figure 3) makes sense for synaptic channels as a method for amplifying input signals, it makes less sense for intrinsic channels. Were it to operate on such channels, statistical fluctuations from the uniform channel distribution would give rise to self-reinforcing 'hot-spots' with no underlying 'signal' to amplify. For this reason, we investigate the utility of a rectifying (or anti-Hebbian) rule.

Figure 3: A schematic display showing contingent positive and negative voltage curvatures (\pm i) above a segment of neuron, and inward and outward currents (\pm i_k) through a particular channel type: (a) \bar{g}_k \uparrow if \epsilon is +ve, i_k -ve; (b) \bar{g}_k \uparrow if \epsilon is +ve; (c) \bar{g}_k \uparrow if \epsilon is -ve, i_k +ve; (d) \bar{g}_k \uparrow if \epsilon is -ve. In situations (a) and (b), a Hebbian version of equation 3 will raise the channel density (\bar{g}_k \uparrow), and in (c) and (d) an anti-Hebbian rule will do this. In the first two cases, the channels are polarising the membrane potential, creating high voltage curvature, while in the latter two they are rectifying (or flattening) it. Depending on the sign of \epsilon, equation 3 attempts to either maximise or minimise (\partial^2 V / \partial x^2)^2.

4 EXAMPLES

For the purposes of demonstration, linear RLC electrical components are used here. These simple 'intrinsic' (non-synaptic) components have the most tractable kinetics of any, and as shown by [11] and [9], the impedances they create capture some of the properties of active membrane. The components are leakage resistances, capacitances and inductances, whose \bar{g}_k's are given by 1/R, C and 1/L respectively.
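The rectifying behaviour of equation (3) with negative \epsilon can be sketched in a few lines. The per-channel currents below are made-up constants, and the single 'compartment' is reduced to a weighted sum, so this is only an illustration of the update rule, not a simulation from the paper.

```python
import numpy as np

# Sketch of the rectifying (anti-Hebbian) rule of equation (3):
# delta gbar_k = eps * i_k * i with eps < 0.  For a single compartment
# whose net current is i = sum_k gbar_k * i_k (the intrinsic part of
# eq. 2), the rule is gradient descent on i**2 / 2, so it should
# flatten the net current.  The channel currents i_k are invented.
i_k = np.array([1.0, -0.5, 0.3, -1.2, 0.8])   # fixed channel currents
gbar = np.ones_like(i_k)                      # channel densities
eps = -0.05                                   # anti-Hebbian learning rate

history = []
for _ in range(200):
    i = gbar @ i_k                   # net membrane current of the unit
    history.append(i * i)
    gbar += eps * i_k * i            # equation (3)
    gbar = np.clip(gbar, 0.0, None)  # densities kept above zero (sec. 4)

# history[0] = 0.16; the squared current decays geometrically toward 0.
```

Flipping the sign of eps turns the same update into the polarising (Hebbian) version, which instead amplifies the net current.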
During learning, all \bar{g}_k's were kept above zero for reasons of stability.

4.1 LEARNING RESONANCE

In this experiment, an RLC 'compartment' with no frequency preference was stimulated at a certain frequency and trained according to equation 3 with \epsilon negative. After training, the frequency response curve of the circuit had a resonant peak at the training frequency (Figure 4). This result is significant since many auditory and tactile sensory cells are tuned to certain frequencies, and we know that a major component of the tuning is electrical, with resonances created by particular balances of ion channel populations [13].

Figure 4: Learning resonance. The curves show the frequency-response curves of the compartment before and after training at a frequency of 0.4 (input sin 0.4t).

4.2 LEARNING CONDUCTION

Another role that intrinsic channels must play within a cell is the faithful transmission of information. Any voltage curvature at a point away from a synapse signifies a net cross-membrane current, which can be seen as distorting the signal in the cable. Thus, by removing voltage curvatures, we preserve the signal. This is demonstrated in the following example: 'learning to be an axon'.

Figure 5: Learning conduction. The cable consists of a chain of compartments, which only conduct the impulse after they acquire active channels.

A non-linear spiking compartment with Morris-Lecar Ca/K kinetics (see [14]) is coupled to a long passive cable. Before learning, the signal decays passively in the cable (Figure 5). The driving compartment's \bar{g}-vector and the capacitances in the cable are then clamped to stop the system from converging on the null solution (\bar{g} \to 0).
All other \bar{g}'s (including spiking conductances in the cable) can then learn. The first thing learnt was that the inward and outward leakage conductances adjusted themselves to make the average voltage curvature in each compartment zero (just as bias units in error correction algorithms adjust to make the average error zero). Then the cable filled out with Morris-Lecar channels (\bar{g}_{Ca} and \bar{g}_K) in exactly the same ratios as the driving compartment, resulting in a cable that faithfully propagated the signal.

4.3 LEARNING PHASE-SHIFTING (DIRECTION SELECTIVITY)

The last example involves 4 'sensory' compartments coupled to a 'somatic' compartment as in Figure 6. All are similar to the linear compartments in the resonance example, except that the sensory ones receive 'synaptic' input in the form of a sinusoidal current source. The relative phases of the input were shifted to simulate left-to-right motion. After training, the 'dendritic' components had learned, using their capacitors and inductors, to cancel the phase shifts so that the inputs were synchronised in their effect on the 'soma'. This creates a large response in the trained direction, and a small one in the 'null' direction, where the phases cancel each other.

Figure 6: Learning direction selectivity. After training on a drifting sine wave, the output compartment oscillates for the trained direction but not for the null direction (see the trace, where the direction of motion is reversed halfway).

5 DISCUSSION

5.1 CELLULAR MECHANISMS

There is substantial evidence in cell biology for targeting of proteins to specific parts of the membrane, but the fact that equation 3 is dependent on the correlation of channel species' activity and local voltages leaves only 4 possible biological implementations: 1.
the cellular targeting machinery knows what kind of channel it is delivering, and thus knows where to put it; 2. channels in the wrong place are degraded faster than those in the right place; 3. channels migrate to the right place while in the membrane; 4. the effective channel density is altered by activity-dependent neuromodulation or channel-blockage.

The third is perhaps the most intriguing. The diffusion of channels in the plane of the membrane, under the influence of induced electric fields, has received both theoretical [4, 12] and empirical [7, 3] attention. To a first approximation, the evolution of channel densities can be described by a Smoluchowski equation:

\frac{\partial \bar{g}_k}{\partial t} = a \frac{\partial^2 \bar{g}_k}{\partial x^2} + b \frac{\partial}{\partial x}\left(\bar{g}_k \frac{\partial V}{\partial x}\right)    (4)

where a is the coefficient of thermal diffusion and b is the coefficient of field-induced motion. This system has been studied previously [4] to explain receptor-clustering in synapse formation, but if the sign of b is reversed, then it fits more closely with the anti-Hebbian rule discussed here. The crucial requirement for true activity-dependence, though, is that b should be different when the channel is open than when it is closed. This may be plausible since channel gating involves movements of charges across the membrane. Coefficients of thermal diffusion have been measured and found not to exceed 10^{-9} cm^2/sec. This would be enough to fine-tune channel distributions, but not to transport them all the way down dendrites. The second method in the list is also an attractive possibility. The half-life of membrane proteins can be as low as several hours [3], and it is known that proteins can be differentially labeled for recycling [5].

5.2 ENERGY AND INFORMATION

The anti-Hebbian rule changes the \bar{g}_k's in order to minimise the square membrane current density, integrated over the cell in units of axial conductance. This corresponds in two senses to a minimisation of energy.
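The drift-diffusion dynamics of equation (4) above can be discretised in a few lines. The explicit scheme below uses periodic (ring) boundaries, and the coefficients and voltage profile are invented for illustration; note that the conservative stencil keeps the total channel number constant while the density redistributes along the voltage gradient.

```python
import numpy as np

# Illustrative explicit finite-difference step for the Smoluchowski-type
# equation (4): dg/dt = a d2g/dx2 + b d/dx (g dV/dx), on a ring of
# compartments.  Coefficients and the voltage profile are made up.
def smoluchowski_step(g, V, a, b, dx, dt):
    gxx = (np.roll(g, -1) - 2 * g + np.roll(g, 1)) / dx**2     # diffusion
    Vx = (np.roll(V, -1) - np.roll(V, 1)) / (2 * dx)           # field
    flux = g * Vx
    drift = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)  # d/dx(g Vx)
    return g + dt * (a * gxx + b * drift)

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
g = np.ones(n)                     # initially uniform channel density
V = np.cos(2 * np.pi * x)          # fixed, invented voltage profile
for _ in range(2000):
    g = smoluchowski_step(g, V, a=1e-3, b=5e-3, dx=1.0 / n, dt=1e-3)

# Total channel number is conserved (periodic boundaries), but the
# density is no longer uniform: it drifts along the voltage gradient.
```

Reversing the sign of b reverses the direction of the field-induced drift, which is the distinction drawn in the text between receptor-clustering and the anti-Hebbian variant.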
From a circuit perspective, the energy dissipated in the axial resistances is minimised. From a metabolic perspective, the ATP used in pumping ions back across the membrane is minimised. The computation consists of minimising the expected value of this energy, given particular spatiotemporal synaptic input (assuming no change in the \bar{g}_j's). More precisely, it searches for:

\min_{\{\bar{g}_k\}} E\left[\int \left(\frac{\partial^2 V}{\partial x^2}\right)^2 dx\right]    (5)

This search creates mutual information between input dynamics and intrinsic dynamics. In addition, since the Laplacian (\nabla^2 V = 0) is what a diffusive system seeks to converge to anyway, the learning rule simply configures the system to speed this convergence on frequently experienced inputs. Simple zero-energy solutions exist for the above, for example the 'ultra-leaky' compartment (\bar{g}_l \to \infty) and the 'point' (or non-existent) compartment (\bar{g}_k \to 0, \forall k), for compartments with and without synapses respectively. The anti-Hebb rule alone will eventually converge to such solutions unless, for example, the leakage or capacitance are prevented from learning. Another solution (which has been successfully used for the direction selectivity example) is to make the total available quantity of each \bar{g}_k finite. The \bar{g}_k can then diffuse about between compartments, following the voltage gradients in a manner suggested by equation 4. The resulting behaviour is a finite-resource version of equation 3.

The next goal of this work is to produce a rigorous information theoretic account of single neuron computation. This is seen as a pre-requisite to understanding both neural coding and the computational capabilities of neural circuits, and as a step on the way to properly dynamical neural nets.

Acknowledgements

This work was supported by a Belgian government IMPULS contract and by ESPRIT Basic Research Action 3234. Thanks to Prof. L. Steels for his support and to Prof. T. Sejnowski for his hospitality at the Salk Institute, where some of this work was done.

References

[1] Becker S. 1990.
Unsupervised learning procedures for neural networks, Int. J. Neur. Sys.
[2] Brown T., Mainen Z. et al. 1990. In NIPS 3, 39-45. Mel B. 1991. Neural Computation, vol. 4, to appear.
[3] Darnell J., Lodish H. & Baltimore D. 1990. Molecular Cell Biology, Scientific American Books.
[4] Fromherz P. 1988. Self-organization of the fluid mosaic of charged channel proteins in membranes, Proc. Natl. Acad. Sci. USA 85, 6353-6357.
[5] Hare J. 1990. Mechanisms of membrane protein turnover, Biochim. Biophys. Acta 1031, 71-90.
[6] Hille B. 1992. Ionic Channels of Excitable Membranes, 2nd edition, Sinauer Associates Inc., Sunderland, MA.
[7] Jones O. et al. 1989. Science 244, 1189-1193. Lo Y-J. & Poo M-M. 1991. Science 254, 1019-1022. Stollberg J. & Fraser S. 1990. J. Neurosci. 10, 1, 247-255. Angelides K. 1990. Prog. in Clin. & Biol. Res. 343, 199-212.
[8] Kaczmarek L. & Levitan I. 1987. Neuromodulation, Oxford Univ. Press.
[9] Koch C. 1984. Cable theory in neurons with active linearized membranes, Biol. Cybern. 50, 15-33.
[10] Lytton W. 1991. Simulations of cortical pyramidal neurons synchronized by inhibitory interneurons, J. Neurophysiol. 66, 3, 1059-1079.
[11] Mauro A., Conti F., Dodge F. & Schor R. 1970. Subthreshold behaviour and phenomenological impedance of the giant squid axon, J. Gen. Physiol. 55, 497-523.
[12] Poo M-M. & Young S. 1990. Diffusional and electrokinetic redistribution at the synapse: a physicochemical basis of synaptic competition, J. Neurobiol. 21, 1, 157-168.
[13] Puil E. et al. J. Neurophysiol. 55, 5. Ashmore J.F. & Attwell D. 1985. Proc. R. Soc. Lond. B 226, 325-344. Hudspeth A. & Lewis R. 1988. J. Physiol. 400, 275-297.
[14] Rinzel J. & Ermentrout G. 1989. Analysis of neural excitability and oscillations, in Koch C. & Segev I. (eds), Methods in Neuronal Modeling, MIT Press.
1991
Threshold Network Learning in the Presence of Equivalences

John Shawe-Taylor
Department of Computer Science
Royal Holloway and Bedford New College
University of London
Egham, Surrey TW20 0EX, UK

Abstract

This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output feedforward threshold networks in which the weights conform to certain equivalences. It is shown that the sample size for reliable learning can be bounded above by a formula similar to that required for single output networks with no equivalences. The best previously obtained bounds are improved for all cases.

1 INTRODUCTION

This paper develops the results of Baum and Haussler [3] bounding the sample sizes required for reliable generalisation of a single output feedforward threshold network. They prove their result using the theory of Probably Approximately Correct (PAC) learning introduced by Valiant [11]. They show that for 0 < \epsilon \le 1/2, if a sample of size

m \ge m_0 = \frac{64W}{\epsilon} \log \frac{64N}{\epsilon}

is loaded into a feedforward network of linear threshold units with N nodes and W weights, so that a fraction 1 - \epsilon/2 of the examples are correctly classified, then with confidence approaching certainty the network will correctly classify a fraction 1 - \epsilon of future examples drawn according to the same distribution. A similar bound was obtained for the case when the network correctly classified the whole sample. The results below will imply a significant improvement to both of these bounds.

In many cases training can be simplified if known properties of a problem can be incorporated into the structure of a network before training begins. One such technique is described by Shawe-Taylor [9], though many similar techniques have been applied, as for example in TDNNs [6]. The effect of these restrictions is to constrain groups of weights to take the same value, and learning algorithms are adapted to respect this constraint.
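The Baum-Haussler bound quoted above is easy to evaluate; the sketch below uses the natural logarithm (following the convention stated later in this paper) and invented network sizes.

```python
import math

# The Baum-Haussler style sample-size bound quoted in the introduction:
# m >= m0 = (64 W / eps) * log(64 N / eps), for a threshold network with
# W weights and N nodes.  Log is taken as natural, following the paper's
# convention; the parameter values below are invented.
def baum_haussler_m0(W, N, eps):
    """Sample size bound for a net with W weights and N threshold nodes."""
    return (64.0 * W / eps) * math.log(64.0 * N / eps)

# e.g. a 1000-weight, 50-node network learned to accuracy eps = 0.1:
m0 = baum_haussler_m0(W=1000, N=50, eps=0.1)
```

Note that the bound grows essentially linearly in the number of weights W and only logarithmically in the number of nodes N, which is why replacing W by the (smaller) number of weight classes, as this paper does, matters.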
In this paper we consider the effect of this restriction on the generalisation performance of the networks, and in particular the sample sizes required to obtain a given level of generalisation. This extends the work described above by Baum and Haussler [3] by improving their bounds, and also improving the results of Shawe-Taylor and Anthony [10], who consider generalisation of multiple-output threshold networks. The remarkable fact is that in all cases the formula obtained is the same, where we now understand the number of weights W to be the number of weight classes, but N is still the number of computational nodes.

2 DEFINITIONS AND MAIN RESULTS

2.1 SYMMETRY AND EQUIVALENCE NETWORKS

We begin with a definition of threshold networks. To simplify the exposition it is convenient to incorporate the threshold value into the set of weights. This is done by creating a distinguished input that always has value 1 and is called the threshold input. The following is a formal notation for these systems.

A network N = (C, I, O, n_0, E) is specified by a set C of computational nodes, a set I of input nodes, a subset O \subseteq C of output nodes and a node n_0 \in I, called the threshold node. The connectivity is given by a set E \subseteq (C \cup I) \times C of connections, with \{n_0\} \times C \subseteq E. With network N we associate a weight function w from the set of connections to the real numbers. We say that the network N is in state w. For an input vector i with values in some subset of the set R of real numbers, the network computes a function F_N(w, i).

An automorphism \gamma of a network N = (C, I, O, n_0, E) is a bijection of the nodes of N which fixes I setwise and \{n_0\} \cup O pointwise, such that the induced action fixes E setwise. We say that an automorphism \gamma preserves the weight assignment w if w_{ji} = w_{\gamma(j)\gamma(i)} for all i \in I \cup C, j \in C. Let \gamma be an automorphism of a network N = (C, I, O, n_0, E) and let i be an input to N. We denote by i^\gamma the input whose value on input k is that of i on input \gamma^{-1}k.
The following theorem is a natural generalisation of part of the Group Invariance Theorem of Minsky and Papert [8] to multi-layer perceptrons.

Theorem 2.1 [9] Let \gamma be a weight preserving automorphism of the network N = (C, I, O, n_0, E) in state w. Then for every input vector i, F_N(w, i) = F_N(w, i^\gamma).

Following this theorem it is natural to consider the concept of a symmetry network [9]. This is a pair (N, \Gamma), where N is a network and \Gamma a group of weight preserving automorphisms of N. We will also refer to the automorphisms as symmetries. For a symmetry network (N, \Gamma), we term the orbits of the connections E under the action of \Gamma the weight classes.

Finally we introduce the concept of an equivalence network. This definition abstracts from the symmetry networks precisely those properties we require to obtain our results. The class of equivalence networks is, however, far larger than that of symmetry networks, and includes many classes of networks studied by other researchers [6, 7].

Definition 2.2 An equivalence network is a threshold network in which an equivalence relation is defined on both weights and nodes. The two relations are required to be compatible in that weights in the same class are connected to nodes in the same class, while nodes in the same class have the same set of input weight connection types. The weights in an equivalence class are at all times required to remain equal.

Note that every threshold network can be viewed as an equivalence network by taking the trivial equivalence relations. We now show that symmetry networks are indeed equivalence networks with the same weight classes, and give a further technical lemma. For both lemmas, proofs are omitted.

Lemma 2.3 A symmetry network (N, \Gamma) is an equivalence network, where the equivalence classes are the orbits of connections and nodes respectively.

Lemma 2.4 Let N be an equivalence network and C be the set of classes of nodes.
Then there is an indexing of the classes, C_i, i = 1, \dots, n, such that nodes in C_i do not have connections from nodes in C_j for j \ge i.

2.2 MAIN RESULTS

We are now in a position to state our main results. Note that throughout this paper log means natural logarithm, while an explicit subscript is used for other bases.

Theorem 2.5 Let N be an equivalence network with W weight classes and N computational nodes. If the network correctly computes a function on a set of m inputs drawn independently according to a fixed probability distribution, where

m \ge m_0(\epsilon, \delta) = \frac{1}{\epsilon(1 - \sqrt{\epsilon})} \left[ \log\left(\frac{1.3}{\delta}\right) + 2W \log\left(\frac{6\sqrt{N}}{\epsilon}\right) \right]

then with probability at least 1 - \delta the error rate of the network will be less than \epsilon on inputs drawn according to the same distribution.

Theorem 2.6 Let N be an equivalence network with W weight classes and N computational nodes. If the network correctly computes a function on a fraction 1 - (1 - \gamma)\epsilon of m inputs drawn independently according to a fixed probability distribution, where

m \ge m_0(\epsilon, \delta, \gamma) = \frac{1}{\gamma^2\epsilon(1 - \sqrt{\epsilon/N})} \left[ 4\log\left(\frac{4}{\delta}\right) + 6W \log\left(\frac{4N}{\gamma^{2/3}\epsilon}\right) \right]

then with probability at least 1 - \delta the error rate of the network will be less than \epsilon on inputs drawn according to the same distribution.

3 THEORETICAL BACKGROUND

3.1 DEFINITIONS AND PREVIOUS RESULTS

In order to present results for binary outputs ({0, 1} functions) and larger ranges in a unified way, we will consider throughout the task of learning the graph of a function. All the definitions reduce to the standard ones when the outputs are binary. We consider learning from examples as selecting a suitable function from a set H of hypotheses, being functions from a space X to a set Y, which has at most countable size. At all times we consider an (unknown) target function

c : X \to Y

which we are attempting to learn. To this end the space X is required to be a probability space (X, \Sigma, \mu), with appropriate regularity conditions so that the sets considered are measurable [4].
In particular the hypotheses should be measurable when Y is given the discrete topology, as should the error sets defined below. The space S = X \times Y is equipped with a \sigma-algebra \Sigma \times 2^Y and measure \nu = \nu(\mu, c), defined by its value on sets of the form U \times \{y\}:

\nu(U \times \{y\}) = \mu(U \cap c^{-1}(y)).

Using this measure the error of a hypothesis is defined to be

er_\nu(h) = \nu\{(x, y) \in S \mid h(x) \ne y\}.

The introduction of \nu allows us to consider samples being drawn from S, as they will automatically reflect the output value of the target. This approach freely generalises to stochastic concepts, though we will restrict ourselves to target functions for the purposes of this paper. The error of a hypothesis h on a sample x = ((x_1, y_1), \dots, (x_m, y_m)) \in S^m is defined to be

er_x(h) = \frac{1}{m} |\{i \mid h(x_i) \ne y_i\}|.

We also define the VC dimension of a set of hypotheses by reference to the product space S. Consider a sample x = ((x_1, y_1), \dots, (x_m, y_m)) \in S^m and the function x^* : H \to \{0, 1\}^m, given by x^*(h)_i = 1 if and only if h(x_i) = y_i, for i = 1, \dots, m. We can now define the growth function B_H(m) as

B_H(m) = \max_{x \in S^m} |\{x^*(h) \mid h \in H\}| \le 2^m.

The Vapnik-Chervonenkis dimension of a hypothesis space H is defined to be infinite if B_H(m) = 2^m for all m, and \max\{m \mid B_H(m) = 2^m\} otherwise.

In the case of a threshold network N, the set of functions obtainable using all possible weight assignments is termed the hypothesis space of N, and we will refer to it as N. For a threshold network N, we also introduce the state growth function S_N(m). This is defined by first considering all computational nodes to be output nodes, and then counting different output sequences:

S_N(m) = \max_{x = (i_1, \dots, i_m) \in X^m} |\{(F_{N'}(w, i_1), F_{N'}(w, i_2), \dots, F_{N'}(w, i_m)) \mid w : E \to R\}|

where X = [0, 1]^{|I|} and N' is obtained from N by setting O = C. We clearly have that for all N and m, B_N(m) \le S_N(m).
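The growth-function definition above can be made concrete by brute force on a toy hypothesis class (the class below is not from the paper; it is chosen only because its growth function is easy to verify by hand).

```python
# Brute-force illustration of the growth-function definition above,
# using the simplest possible hypothesis class: 1-D threshold functions
# h(x) = [x >= theta].  For this toy class B_H(m) = m + 1.
def realized_dichotomies(xs):
    """All distinct label vectors x*(h) as the threshold theta varies."""
    thetas = sorted(xs) + [max(xs) + 1.0]   # one representative per gap
    return {tuple(int(x >= t) for x in xs) for t in thetas}

for m in range(1, 6):
    xs = [float(i) for i in range(m)]       # m distinct sample points
    assert len(realized_dichotomies(xs)) == m + 1
# Since B_H(2) = 3 < 2**2 = 4, the VC dimension of this toy class is 1.
```

The same enumeration idea, applied to the states of every computational node rather than only the outputs, is what the state growth function S_N(m) counts.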
Theorem 3.1 [2] If a hypothesis space H has growth function B_H(m), then for any \epsilon > 0, k > m and

0 < r < 1 - \frac{1}{\sqrt{\epsilon k}}

the probability that there is a function in H which agrees with a randomly chosen m-sample and has error greater than \epsilon is less than

\frac{\epsilon k (1-r)^2}{\epsilon k (1-r)^2 - 1} B_H(m + k) \exp\left(-r\epsilon \frac{km}{m+k}\right).

This result can be used to obtain the following bound on the sample size required for PAC learnability of a hypothesis space with VC dimension d. The theorem improves the bounds reported by Blumer et al. [4].

Theorem 3.2 [2] If a hypothesis space H has finite VC dimension d > 1, then there is m_0 = m_0(\epsilon, \delta) such that if m > m_0, then the probability that a hypothesis consistent with a randomly chosen sample of size m has error greater than \epsilon is less than \delta. A suitable value of m_0 is

m_0 = \frac{1}{\epsilon(1 - \sqrt{\epsilon})} \left[ \log\left(\frac{d}{\delta(d-1)}\right) + 2d \log\left(\frac{6}{\epsilon}\right) \right].

For the case when we allow our hypothesis to incorrectly compute the function on a small fraction of the training sample, we have the following result. Note that we are still considering the discrete metric, and so in the case where we are considering multiple output feedforward networks, a single output in error would count as an overall error.

Theorem 3.3 [10] Let 0 < \epsilon < 1 and 0 < \gamma \le 1. Suppose H is a hypothesis space of functions from an input space X to a possibly countable set Y, and let \nu be any probability measure on S = X \times Y. Then the probability (with respect to \nu^m) that, for x \in S^m, there is some h \in H such that er_\nu(h) > \epsilon and er_x(h) \le (1 - \gamma)er_\nu(h) is at most

4B_H(2m)\exp\left(-\frac{\gamma^2\epsilon m}{4}\right).

Furthermore, if H has finite VC dimension d, this quantity is less than \delta for

m > m_0(\epsilon, \delta, \gamma) = \frac{1}{\gamma^2\epsilon(1 - \sqrt{\epsilon})} \left[ 4\log\left(\frac{4}{\delta}\right) + 6d\log\left(\frac{4}{\gamma^{2/3}\epsilon}\right) \right].

4 THE GROWTH FUNCTION FOR EQUIVALENCE NETWORKS

We will bound the number of output sequences B_N(m) for a number m of inputs by the number of distinct state sequences S_N(m) that can be generated from the m inputs by different weight assignments. This follows the approach taken in [10].
Theorem 4.1 Let N be an equivalence network with W weight equivalence classes and a total of N computational nodes. Then we can bound S_N(m) by

S_N(m) \le \left(\frac{emN}{W}\right)^W.

Idea of Proof: Let C_i, i = 1, \dots, n, be the equivalence classes of nodes, indexed as guaranteed by Lemma 2.4, with |C_i| = c_i and the number of inputs for nodes in C_i being n_i (including the threshold input). Denote by N_j the network obtained by taking only the first j node equivalence classes. We omit a proof by induction that

S_{N_j}(m) \le \prod_{i=1}^{j} B_i(mc_i),

where B_i is the growth function for nodes in the class C_i. Using the well known bound on the growth function of a threshold node with n_i inputs we obtain

S_N(m) \le \prod_{i=1}^{n} \left(\frac{emc_i}{n_i}\right)^{n_i}.

Consider the function f(x) = x \log x. This is a convex function, and so for a set of values x_1, \dots, x_M, the average of the f(x_i) is greater than or equal to f applied to the average of the x_i. Consider taking the x's to be c_i copies of n_i/c_i for each i = 1, \dots, n. We obtain

\frac{1}{N}\sum_{i=1}^{n} n_i \log\frac{n_i}{c_i} \ge \frac{W}{N}\log\frac{W}{N},

and so S_N(m) \le (emN/W)^W, as required.

The bounds we have obtained make it possible to bound the Vapnik-Chervonenkis dimension of equivalence networks. Though we will not need these results, we give them here for completeness.

Proposition 4.2 The Vapnik-Chervonenkis dimension of an equivalence network with W weight classes and N computational nodes is bounded by 2W \log_2 eN.

5 PROOF OF MAIN RESULTS

Using the results of the last section we are now in a position to prove Theorems 2.5 and 2.6.

Proof of Theorem 2.5: (Outline) We use Theorem 3.1, which bounds the probability that a hypothesis with error greater than \epsilon can match an m-sample. Substituting our bound on the growth function of an equivalence network and choosing k and r as in [1], we obtain the following bound on the probability:

\frac{d}{d-1}\left(\frac{4e\epsilon m^2 N}{W^2}\right)^W \exp(-\epsilon m).
By choosing m > m_0, where m_0 is given by

m_0 = m_0(\epsilon, \delta) = \frac{1}{\epsilon(1 - \sqrt{\epsilon})} \left[ \log\left(\frac{1.3}{\delta}\right) + 2W\log\left(\frac{6\sqrt{N}}{\epsilon}\right) \right],

we guarantee that the above probability is less than \delta, as required.

Our second main result can be obtained more directly.

Proof of Theorem 2.6: (Outline) We use Theorem 3.3, which bounds the probability that a hypothesis with error greater than \epsilon can match all but a fraction (1 - \gamma) of an m-sample. The bound on the sample size is obtained from the probability bound by using the inequality for B_H(2m). By adjusting the parameters we will convert the probability expression to that obtained by substituting our growth function. We can then read off a sample size by the corresponding substitution in the sample size formula. Consider setting d = W, \epsilon = \epsilon'/N and m = Nm'. With these substitutions the sample size formula becomes

m' = \frac{1}{\gamma^2\epsilon'(1 - \sqrt{\epsilon'/N})} \left[ 4\log\left(\frac{4}{\delta}\right) + 6W\log\left(\frac{4N}{\gamma^{2/3}\epsilon'}\right) \right],

as required.

6 CONCLUSION

The problem of training feedforward neural networks remains a major hurdle to the application of this approach to large scale systems. A very promising technique for simplifying the training problem is to include equivalences in the network structure, which can be justified by a priori knowledge of the application domain. This paper has extended previous results concerning sample sizes for feedforward networks to cover so-called equivalence networks, in which weights are constrained in this way. At the same time we have improved the sample size bounds previously obtained for standard threshold networks [3] and multiple output networks [10].

The results are of the same order as previous results and imply similar bounds on the Vapnik-Chervonenkis dimension, namely 2W \log_2 eN. They perhaps give circumstantial evidence for the conjecture that the \log_2 eN factor in this expression is real, in that the same expression obtains even if the number of computational nodes is increased by expanding the equivalence classes of weights.
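The size of the improvement can be checked numerically. The sketch below evaluates the bound of Theorem 2.5 against the Baum-Haussler style bound from the introduction, with invented parameter values and natural logarithms throughout.

```python
import math

# Numerical comparison (invented parameter values) of the Theorem 2.5
# bound with the Baum-Haussler style bound quoted in the introduction.
# Here W is the number of weight classes and N the number of nodes.
def m0_theorem_2_5(W, N, eps, delta):
    return (1.0 / (eps * (1.0 - math.sqrt(eps)))) * (
        math.log(1.3 / delta) + 2 * W * math.log(6 * math.sqrt(N) / eps))

def m0_baum_haussler(W, N, eps):
    return (64.0 * W / eps) * math.log(64.0 * N / eps)

W, N, eps, delta = 1000, 50, 0.1, 0.01
new_bound = m0_theorem_2_5(W, N, eps, delta)
old_bound = m0_baum_haussler(W, N, eps)
# For these values the new bound is over an order of magnitude smaller.
```

Both bounds remain linear in W up to log factors; the gain comes from the much smaller constants and from W counting weight classes rather than individual weights.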
Equivalence networks may be a useful area to search for high growth functions, and perhaps show that for certain classes the VC dimension is O(W log N).

References

[1] Martin Anthony, Norman Biggs and John Shawe-Taylor, Learnability and formal concept analysis, RHBNC Department of Computer Science, Technical Report CSD-TR-624, 1990.
[2] Martin Anthony, Norman Biggs and John Shawe-Taylor, The learnability of formal concepts, Proc. COLT '90, Rochester, NY (eds Mark Fulk and John Case) (1990) 246-257.
[3] Eric Baum and David Haussler, What size net gives valid generalization?, Neural Computation, 1 (1) (1989) 151-160.
[4] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler and Manfred K. Warmuth, Learnability and the Vapnik-Chervonenkis dimension, JACM, 36 (4) (1989) 929-965.
[5] David Haussler, preliminary extended abstract, COLT '89.
[6] K. Lang and G.E. Hinton, The development of TDNN architecture for speech recognition, Technical Report CMU-CS-88-152, Carnegie-Mellon University, 1988.
[7] Y. Le Cun, A theoretical framework for back propagation, in D. Touretzky, editor, Connectionist Models: A Summer School, Morgan-Kaufmann, 1988.
[8] M. Minsky and S. Papert, Perceptrons, expanded edition, MIT Press, Cambridge, USA, 1988.
[9] John Shawe-Taylor, Building symmetries into feedforward network architectures, Proceedings of First IEE Conference on Artificial Neural Networks, London, 1989, 158-162.
[10] John Shawe-Taylor and Martin Anthony, Sample sizes for multiple output feedforward networks, Network, 2 (1991) 107-117.
[11] Leslie G. Valiant, A theory of the learnable, Communications of the ACM, 27 (1984) 1134-1142.
1991
Tangent Prop - A formalism for specifying selected invariances in an adaptive network

Patrice Simard, AT&T Bell Laboratories, 101 Crawford Corner Rd, Holmdel, NJ 07733
Yann Le Cun, AT&T Bell Laboratories, 101 Crawford Corner Rd, Holmdel, NJ 07733
Bernard Victorri, Universite de Caen, Caen 14032 Cedex, France
John Denker, AT&T Bell Laboratories, 101 Crawford Corner Rd, Holmdel, NJ 07733

Abstract

In many machine learning applications, one has access not only to training data, but also to some high-level a priori knowledge about the desired behavior of the system. For example, it is known in advance that the output of a character recognizer should be invariant with respect to small spatial distortions of the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that allows a network to learn the derivative of its outputs with respect to distortion operators of our choosing. This not only reduces the learning time and the amount of training data, but also provides a powerful language for specifying what generalizations we wish the network to perform.

1 INTRODUCTION

In machine learning, one very often knows more about the function to be learned than just the training data. An interesting case is when certain directional derivatives of the desired function are known at certain points. For example, an image recognition system might need to be invariant with respect to small distortions of the input image such as translations, rotations, scalings, etc.; a speech recognition system might need to be invariant to time distortions or pitch shifts.

Figure 1: Top: Small rotations of an original digital image of the digit "3" (center). Middle: Representation of the effect of the rotation in the input vector space (assuming there are only 3 pixels). Bottom: Images obtained by moving along the tangent to the transformation curve for the same original digital image (middle).
In other words, the derivative of the system's output should be equal to zero when the input is transformed in certain ways. Given a large amount of training data and unlimited training time, the system could learn these invariances from the data alone, but this is often infeasible. The limitation on data can be overcome by training the system with additional data obtained by distorting (translating, rotating, etc.) the original patterns (Baird, 1990). The top of Fig. 1 shows artificial data generated by rotating a digital image of the digit "3" (with the original in the center). This procedure, called the "distortion model", has two drawbacks. First, the user must choose the magnitude of distortion and how many instances should be generated. Second, and more importantly, the distorted data is highly correlated with the original data. This makes traditional learning algorithms such as back propagation very inefficient. The distorted data carries only a very small incremental amount of information, since the distorted patterns are not very different from the original ones. It may not be possible to adjust the learning system so that learning the invariances proceeds at a reasonable rate while learning of the original points is non-divergent.

The key idea in this paper is that it is possible to directly learn the effect on the output of distorting the input, independently from learning the undistorted patterns.

Figure 2: Learning a given function (solid line) from a limited set of examples (x_1 to x_4). The fitted curves are shown as dotted lines. Top: The only constraint is that the fitted curve goes through the examples. Bottom: The fitted curve not only goes through each example, but its derivatives evaluated at the examples also agree with the derivatives of the given function.

When a pattern P is transformed (e.g.
rotated) with a transformation s that depends on one parameter α (e.g. the angle of the rotation), the set of all the transformed patterns S(P) = {s(α, P) ∀α} is a one-dimensional curve in the vector space of the inputs (see Fig. 1). In certain cases, such as rotations of digital images, this curve must be made continuous using smoothing techniques, as will be shown below. When the set of transformations is parameterized by n parameters α_i (rotation, translation, scaling, etc.), S(P) is a manifold of at most n dimensions. The patterns in S(P) that are obtained through small transformations of P, i.e. the part of S(P) that is close to P, can be approximated by a plane tangent to the manifold S(P) at point P. Small transformations of P can be obtained by adding to P a linear combination of vectors that span the tangent plane (tangent vectors). The images at the bottom of Fig. 1 were obtained by that procedure. More importantly, the tangent vectors can be used to specify high-order constraints on the function to be learned, as explained below.

To illustrate the method, consider the problem of learning a single-valued function F from a limited set of examples. Fig. 2 (top) represents a simple case where the desired function F (solid line) is to be approximated by a function G (dotted line) from four examples {(x_i, F(x_i))}, i = 1, ..., 4. As exemplified in the picture, the fitted function G largely disagrees with the desired function F between the examples. If the functions F and G are assumed to be differentiable (which is generally the case), the approximation G can be greatly improved by requiring that G's derivatives evaluated at the points {x_i} are equal to the derivatives of F at the same points (Fig. 2, bottom). This result can be extended to multidimensional inputs. In this case, we can impose the equality of the derivatives of F and G in certain directions, not necessarily in all directions of the input space.
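The effect illustrated in Fig. 2 can be reproduced numerically by fitting a polynomial to a handful of samples of a known function, once through the values alone and once through the values and the derivatives. The specific choices below (F = sin, a degree-7 polynomial, four sample points) are illustrative assumptions, not taken from the paper:

```python
# Illustration of the idea in Fig. 2: constraining the fitted function's
# derivatives at the examples greatly improves the fit between them.
import numpy as np

F, dF = np.sin, np.cos                  # desired function and its derivative
xs = np.array([0.5, 1.5, 2.5, 3.5])     # four examples, as in Fig. 2
deg = 7                                 # G(x) = sum_k c_k x^k

def design(x):
    # rows: [1, x, x^2, ..., x^deg]
    return np.vander(x, deg + 1, increasing=True)

def ddesign(x):
    # rows for the derivative: [0, 1, 2x, ..., deg * x^(deg-1)]
    cols = [np.zeros_like(x)] + [k * x ** (k - 1) for k in range(1, deg + 1)]
    return np.stack(cols, axis=1)

# Fit through the values only (Fig. 2, top): underdetermined, minimum-norm
c_plain, *_ = np.linalg.lstsq(design(xs), F(xs), rcond=None)

# Fit through values AND derivatives (Fig. 2, bottom)
A = np.vstack([design(xs), ddesign(xs)])
b = np.concatenate([F(xs), dF(xs)])
c_tan, *_ = np.linalg.lstsq(A, b, rcond=None)

grid = np.linspace(xs[0], xs[-1], 200)
err_plain = np.max(np.abs(design(grid) @ c_plain - F(grid)))
err_tan = np.max(np.abs(design(grid) @ c_tan - F(grid)))
print(err_plain, err_tan)
```

Both fits pass through the four examples, but the derivative-constrained (Hermite-style) fit also tracks F between them, exactly as in the bottom panel of Fig. 2.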
Such constraints find immediate use in traditional learning problems. It is often the case that a priori knowledge is available on how the desired function varies with respect to some transformations of the input. It is straightforward to derive the corresponding constraint on the directional derivatives of the fitted function G in the directions of the transformations (previously named tangent vectors). Typical examples can be found in pattern recognition, where the desired classification function is known to be invariant with respect to some transformation of the input such as translation, rotation, scaling, etc.; in other words, the directional derivatives of the classification function in the directions of these transformations are zero.

2 IMPLEMENTATION

The implementation can be divided into two parts. The first part consists of computing the tangent vectors. This part is independent of the learning algorithm used subsequently. The second part consists of modifying the learning algorithm (for instance backprop) to incorporate the information about the tangent vectors.

Part I: Let x be an input pattern and s be a transformation operator acting on the input space and depending on a parameter α. If s is a rotation operator, for instance, then s(α, x) denotes the input x rotated by the angle α. We will require that the transformation operator s be differentiable with respect to α and x, and that s(0, x) = x. The tangent vector is by definition ∂s(α, x)/∂α. It can be approximated by a finite difference, as shown in Fig. 3. In the figure, the input space is a 16 by 16 pixel image and the patterns are images of handwritten digits. The transformations considered are rotations of the digit images. The tangent vector is obtained in two steps.

Figure 3: How to compute a tangent vector for a given transformation (in this case a rotation).
First, the image is rotated by a small amount α. This is done by computing the rotated coordinates of each pixel and interpolating the gray level values at the new coordinates. This operation can be advantageously combined with some smoothing using a convolution. A convolution with a Gaussian provides an efficient interpolation scheme in O(nm) multiply-adds, where n and m are the (Gaussian) kernel and image sizes respectively. The next step is to subtract (pixel by pixel) the rotated image from the original image and to divide the result by the scalar α (see Fig. 3). If k types of transformations are considered, there will be k different tangent vectors per pattern. For most algorithms, these do not require any storage space since they can be generated as needed from the original pattern at negligible cost.

Part II: Tangent prop is an extension of the backpropagation algorithm, allowing it to learn directional derivatives. Other algorithms such as radial basis functions can be extended in a similar fashion. To implement our idea, the usual weight-update rule

    Δw = -η ∂E/∂w    is replaced with    Δw = -η ∂(E + μ E_r)/∂w    (1)

where η is the learning rate, E the usual objective function, E_r an additional objective function (a regularizer) that measures the discrepancy between the actual and desired directional derivatives in the directions of some selected transformations, and μ is a weighting coefficient. Let x be an input pattern and y = G(x) be the input-output function of the network. The regularizer E_r is of the form

    E_r = Σ_{x ∈ training set} E_r(x),    where    E_r(x) = (1/2) Σ_i || K_i(x) - ∂G(s_i(α, x))/∂α |_{α=0} ||²    (2)

Here, K_i(x) is the desired directional derivative of G in the direction induced by transformation s_i applied to pattern x. The second term in the norm symbol is the actual directional derivative, which can be rewritten as G'(x) · ∂s_i(α, x)/∂α |_{α=0},
where G'(x) is the Jacobian of G for pattern x, and ∂s_i(α, x)/∂α is the tangent vector associated with transformation s_i, as described in Part I. Multiplying the tangent vector by the Jacobian involves one forward propagation through a "linearized" version of the network. In the special case where local invariance with respect to the s_i is desired, K_i(x) is simply set to 0.

Composition of transformations: The theory of Lie groups (Gilmore, 1974) ensures that compositions of local (small) transformations s_i correspond to linear combinations of the corresponding tangent vectors (the local transformations s_i have the structure of a Lie algebra). Consequently, if E_r(x) = 0 is verified, the network derivative in the direction of a linear combination of the tangent vectors is equal to the same linear combination of the desired derivatives. In other words, if the network is successfully trained to be locally invariant with respect to, say, horizontal translations and vertical translations, it will be invariant with respect to compositions thereof.

We have derived and implemented an efficient algorithm, "tangent prop", for performing the weight update (Eq. 1). It is analogous to ordinary backpropagation, but in addition to propagating neuron activations, it also propagates the tangent vectors. The equations can be derived from Fig. 4:

    Forward propagation:              a_i^l = Σ_j w_ij^l x_j^(l-1),    x_i^l = σ(a_i^l)         (3)
    Tangent forward propagation:      α_i^l = Σ_j w_ij^l ξ_j^(l-1),    ξ_i^l = σ'(a_i^l) α_i^l  (4)
    Gradient backpropagation:         b_i^l = Σ_k w_ki^(l+1) y_k^(l+1)                          (5)
    Tangent gradient backpropagation: β_i^l = Σ_k w_ki^(l+1) ψ_k^(l+1)                          (6)
    Weight update:                    ∂[E(W, U_p) + μ E_r(W, U_p, T_p)] / ∂w_ij^l = x_j^(l-1) y_i^l + μ ξ_j^(l-1) ψ_i^l    (7)

Figure 4: Forward propagated variables (a, x, α, ξ) and backward propagated variables (b, y, β, ψ) in the regular network (roman symbols) and the Jacobian (linearized) network (greek symbols).
The regularization parameter μ is tremendously important, because it determines the tradeoff between minimizing the usual objective function and minimizing the directional derivative error.

3 RESULTS

Two experiments illustrate the advantages of tangent prop. The first experiment is a classification task, using a small (linearly separable) set of 480 binarized handwritten digits. The training sets consist of 10, 20, 40, 80, 160 or 320 patterns, and the test set contains the remaining 160 patterns. The patterns are smoothed using a Gaussian kernel with a standard deviation of one half pixel. For each of the training set patterns, the tangent vectors for horizontal and vertical translation are computed. The network has two hidden layers with locally connected shared weights, and one output layer with 10 units (5194 connections, 1060 free parameters) (Le Cun, 1989). The generalization performance as a function of the training set size for traditional backprop and tangent prop are compared in Fig. 5.

Figure 5: Generalization performance (% error on the test set) as a function of the training set size for the tangent prop and backprop algorithms.

We have conducted additional experiments in which we implemented not only translations but also rotations, expansions and hyperbolic deformations. This set of 6 generators is a basis for all linear transformations of coordinates for two-dimensional images. It is straightforward to implement other generators, including gray-level shifting, "smooth" segmentation, local continuous coordinate transformations and independent image segment transformations. The next experiment is designed to show that in applications where data is highly
correlated, tangent prop yields a large speed advantage. Since the distortion model implies adding lots of highly correlated data, the advantage of tangent prop over the distortion model becomes clear. The task is to approximate a function that has plateaus at three locations. We want to enforce local invariance near each of the training points (Fig. 6, bottom). The network has one input unit, 20 hidden units and one output unit. Two strategies are possible: either generate a small set of training points covering each of the plateaus (open squares on Fig. 6, bottom), or generate one training point for each plateau (closed squares) and enforce local invariance around them (by setting the desired derivative to 0). The training set of the former method is used to measure the performance of both methods. All parameters were adjusted for approximately optimal performance in all cases. The learning curves for both models are shown in Fig. 6 (top). Each sweep through the training set for tangent prop is a little faster, since it requires only 6 forward propagations, while the distortion model requires 9. As can be seen, stable performance is achieved after 1300 sweeps for tangent prop, versus 8000 for the distortion model.

Figure 6: Comparison of the distortion model (left column) and tangent prop (right column). The top row gives the learning curves (average NMSE versus number of sweeps through the training set). The bottom row gives the final input-output function of the network; the dashed line is the result for unadorned backprop.
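The tangent-vector computation of Part I can be sketched directly: smooth the image, rotate it by a small angle, subtract pixel by pixel, and divide by the angle. The sketch below assumes scipy.ndimage for the rotation and Gaussian convolution, and uses a synthetic 16x16 pattern in place of a digit image; the angle and smoothing width are illustrative choices:

```python
# Finite-difference approximation of the rotation tangent vector
# ds(alpha, x)/dalpha at alpha = 0 (Part I of the implementation).
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def rotation_tangent_vector(image, alpha_deg=1.0, sigma=0.75):
    smoothed = gaussian_filter(image, sigma)      # smoothing / interpolation
    rotated = rotate(smoothed, alpha_deg, reshape=False, order=1)
    return (rotated - smoothed) / np.deg2rad(alpha_deg)

# A 16x16 test pattern (an off-center bar stands in for a digit image).
img = np.zeros((16, 16))
img[4:7, 3:13] = 1.0

tv = rotation_tangent_vector(img)

# Moving along the tangent approximates a small rotation (Fig. 1, bottom):
smoothed = gaussian_filter(img, 0.75)
approx = smoothed + np.deg2rad(3.0) * tv
exact = rotate(smoothed, 3.0, reshape=False, order=1)
print(np.abs(approx - exact).max())   # modest, since 3 degrees is small
```

As the text notes, such a tangent vector need not be stored: it can be regenerated from the original pattern whenever the regularizer is evaluated.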
The overall speedup is therefore about 10. Tangent prop in this example can take advantage of a very large regularization term. The distortion model is at a disadvantage because the only parameter that effectively controls the amount of regularization is the magnitude of the distortions, and this cannot be increased to large values because the right answer is only invariant under small distortions.

4 CONCLUSIONS

When a priori information about invariances exists, this information must be made available to the adaptive system. There are several ways of doing this, including the distortion model and tangent prop. The latter may be much more efficient in some applications, and it permits separate control of the emphasis and learning rate for the invariances, relative to the original training data points. Training a system to have zero derivatives in some directions is a powerful tool for expressing invariances to transformations of our choosing. Tests of this procedure on large-scale applications (handwritten zipcode recognition) are in progress.

References

Baird, H. S. (1990). Document Image Defect Models. In IAPR 1990 Workshop on Syntactic and Structural Pattern Recognition, pages 38-46, Murray Hill, NJ.

Gilmore, R. (1974). Lie Groups, Lie Algebras and Some of Their Applications. Wiley, New York.

Le Cun, Y. (1989). Generalization and Network Design Strategies. In Pfeifer, R., Schreter, Z., Fogelman, F., and Steels, L., editors, Connectionism in Perspective, Zurich, Switzerland. Elsevier. An extended version was published as a technical report of the University of Toronto.
Human and Machine 'Quick Modeling'

Jakob Bernasconi, Asea Brown Boveri Ltd, Corporate Research, CH-5405 Baden, SWITZERLAND
Karl Gustafson, University of Colorado, Department of Mathematics and Optoelectronic Computing Center, Boulder, CO 80309

ABSTRACT

We present here an interesting experiment in 'quick modeling' by humans, performed independently on small samples, in several languages and two continents, over the last three years. Comparisons to decision tree procedures and neural net processing are given. From these, we conjecture that human reasoning is better represented by the latter, but substantially different from both. Implications for the 'strong convergence hypothesis' between neural networks and machine learning are discussed, now expanded to include human reasoning comparisons.

1 INTRODUCTION

Until recently the fields of symbolic and connectionist learning evolved separately. Suddenly in the last two years a significant number of papers comparing the two methodologies have appeared. A beginning synthesis of these two fields was forged at the NIPS '90 Workshop #5 last year (Pratt and Norton, 1990), where one may find a good bibliography of the recent work of Atlas, Dietterich, Omohundro, Sanger, Shavlik, Tsoi, Utgoff and others. It was at that NIPS '90 Workshop that we learned of these studies, most of which concentrate on performance comparisons of decision tree algorithms (such as ID3, CART) and neural net algorithms (such as Perceptrons, Backpropagation). Independently, three years ago, we had looked at Quinlan's ID3 scheme (Quinlan, 1984); intuitively and rather instantly not agreeing with the generalization it obtains when a sample of 8 items is generalized to 12 items, we subjected this example to a variety of human experiments. We report our findings, as compared to the performance of ID3 and also to various neural net computations.
Because our focus on humans was substantially different from that of most of the other mentioned studies, we also briefly discuss some important related issues for further investigation. More details are given elsewhere (Bernasconi and Gustafson, to appear).

2 THE EXPERIMENT

To illustrate his ID3 induction algorithm, Quinlan (1984) considers a set C consisting of 8 objects, with attributes height, hair, and eyes. The objects are described in terms of their attribute values and classified into two classes, "+" and "-", respectively (see Table 1). The problem is to find a rule which correctly classifies all objects in C, and which is in some sense minimal.

Table 1: The set C of objects in Quinlan's classification example.

    Object   Height      Hair        Eyes         Class
    1        (s) short   (b) blond   (bl) blue    +
    2        (t) tall    (b) blond   (br) brown   -
    3        (t) tall    (r) red     (bl) blue    +
    4        (s) short   (d) dark    (bl) blue    -
    5        (t) tall    (d) dark    (bl) blue    -
    6        (t) tall    (b) blond   (bl) blue    +
    7        (t) tall    (d) dark    (br) brown   -
    8        (s) short   (b) blond   (br) brown   -

The ID3 algorithm uses an information-theoretic approach to construct a "minimal" classification rule, in the form of a decision tree, which correctly classifies all objects in the learning set C. In Figure 1, we show two possible decision trees which correctly classify all 8 objects of the set C. Decision tree 1 is the one selected by the ID3 algorithm. As can be seen, "Hair" as root of the tree classifies four of the eight objects immediately. Decision tree 2 requires the same number of tests and has the same number of branches, but "Eyes" as root classifies only three objects at the first level of the tree. Consider now how the decision trees of Figure 1 classify the remaining four possible objects in the set complement C'. Table 2 shows that the two decision trees lead to a different classification of the four objects of sample C'.
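ID3's choice of "Hair" as the root can be checked by computing each attribute's information gain over the eight objects of Table 1. The sketch below uses the standard ID3 gain formula; nothing beyond the Table 1 data is taken from the paper:

```python
# Information gain of each attribute on Quinlan's sample C (Table 1).
# ID3 picks the attribute with the largest gain as the root of the tree.
import math
from collections import Counter

# (height, hair, eyes, class) for objects 1-8, from Table 1
data = [
    ("s", "b", "bl", "+"), ("t", "b", "br", "-"), ("t", "r", "bl", "+"),
    ("s", "d", "bl", "-"), ("t", "d", "bl", "-"), ("t", "b", "bl", "+"),
    ("t", "d", "br", "-"), ("s", "b", "br", "-"),
]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(attr_index):
    labels = [row[3] for row in data]
    total = entropy(labels)
    for value in {row[attr_index] for row in data}:
        subset = [row[3] for row in data if row[attr_index] == value]
        total -= len(subset) / len(data) * entropy(subset)
    return total

gains = {name: gain(i) for i, name in enumerate(["Height", "Hair", "Eyes"])}
print(gains)   # Hair has the largest gain, so ID3 makes it the root
```

"Hair" wins because the red and dark branches are pure (and together classify four objects immediately), while "Eyes" leaves the five blue-eyed objects mixed and "Height" is nearly uninformative.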
We observe that the ID3-preferred decision tree 1 places a large importance on the "red" attribute (which occurs in only one object of sample C), while decision tree 2 puts much less emphasis on this particular attribute.

Figure 1: Two possible decision trees for the classification of sample C (Table 1).

Table 2: The set C' of the remaining four objects, and their classification by the decision trees of Figure 1.

    Object   Attribute values   Tree 1   Tree 2
    9        s d br             -        -
    10       s r bl             +        +
    11       s r br             +        -
    12       t r br             +        -

3 GENERALIZATIONS BY HUMANS AND NEURAL NETS

Curious about these differences in the generalization behavior, we have asked some humans (colleagues, graduate students, undergraduate students, some nonscientists also) to "look" at the original sample C of 8 items, presented to them without warning, and to "use" this information to classify the remaining 4 objects. Over some time, we have accumulated a "human sample" of total size 73 from 3 continents representing 14 languages. The results of this human generalization experiment are summarized in Table 3. We observe that about 2/3 of the test persons generalized in the same manner as decision tree 2, and that less than 10 percent arrived at the generalization corresponding to the ID3-preferred decision tree 1.

Table 3: Classification of objects 9 through 12 by Humans and by a Neural Net. Based on a total sample of 73 humans. Each of the 4 contributing subsamples from different languages and locations gave consistent percentages.

    Object   Attribute values   Classification (A B C D E Other)
    9        s d br
    10       s r bl             + + + +
    11       s r br             + +
    12       t r br             + +

    Humans:       65.8%   8.2%   4.1%   9.6%   12.3%
    Neural Net:   71.4%  12.1%   9.4%   4.2%    2.9%

We also subjected this generalization problem to a variety of neural net computations.
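One such computation can be sketched as follows: a perceptron with one unary input unit per attribute value (seven inputs), trained on the eight objects of sample C and then applied to objects 9 to 12, in the style described in the next paragraph. The random seed, learning rate, and epoch count here are illustrative assumptions:

```python
# Perceptron with unary-coded attributes on Quinlan's example:
# train on objects 1-8, then generalize to the unseen objects 9-12.
import numpy as np

HEIGHT, HAIR, EYES = ["s", "t"], ["b", "r", "d"], ["bl", "br"]

def encode(h, hr, e):
    # one unary input unit per attribute value (7 units total)
    x = np.zeros(7)
    x[HEIGHT.index(h)] = 1
    x[2 + HAIR.index(hr)] = 1
    x[5 + EYES.index(e)] = 1
    return x

train = [(("s","b","bl"), 1), (("t","b","br"), -1), (("t","r","bl"), 1),
         (("s","d","bl"), -1), (("t","d","bl"), -1), (("t","b","bl"), 1),
         (("t","d","br"), -1), (("s","b","br"), -1)]
unseen = [("s","d","br"), ("s","r","bl"), ("s","r","br"), ("t","r","br")]

rng = np.random.default_rng(0)
w, b, eta = rng.uniform(-1, 1, 7), 0.0, 0.5

for _ in range(100):                       # perceptron learning rule
    for attrs, target in train:
        x = encode(*attrs)
        out = 1 if w @ x + b > 0 else -1
        if out != target:
            w += eta * target * x
            b += eta * target

preds = ["+" if w @ encode(*a) + b > 0 else "-" for a in unseen]
print(preds)    # one of the generalizations labelled A-E in Table 3
```

Because sample C is linearly separable in this coding, the perceptron always fits the training set exactly; which of the generalizations A-E it produces depends on the random starting weights, which is what gives the solution profile of Table 3.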
In particular, we analyzed a simple perceptron architecture with seven input units representing a unary coding of the attribute values (i.e., a separate input unit for each attribute value). The eight objects of sample C (Table 1) were used as training examples, and we employed the perceptron learning procedure (Rumelhart and McClelland, 1986) for a threshold output unit. In our initial experiment, the starting weights were chosen randomly in (-1, 1) and the learning parameter h (the magnitude of the weight changes) was varied between 0.1 and 1. After training, the net was asked to classify the unseen objects 9 to 12 of Table 2. Out of the 16 possible classifications of this four-object test set, only 5 were realized by the neural net (labelled A through E in Table 3). The percentage values given in Table 3 refer to a total of 9000 runs (3000 each for h = 0.1, 0.5, and 1.0, respectively). As can be seen, there is a remarkable correspondence between the solution profile of the neural net computations and that of the human experiment.

4 BACKWARD PREDICTION

There exist many different rules which all correctly classify the given set C of 8 objects (Table 1), but which lead to a different generalization behavior, i.e., to a different classification of the remaining objects 9 to 12 (see Tables 2 and 3). From a formal point of view, all of the 16 possible classifications of objects 9 to 12 are equally probable, so that no a priori criterion seems to exist to prefer one generalization over another. We have nevertheless attempted to quantify the obviously ill-defined notion of "meaningful generalization". To estimate the relative "quality" of different classification rules, we propose to analyze the "backward prediction ability" of the respective generalizations. This is evaluated as follows.
An appropriate learning method (e.g., neural nets) is used to construct rules which explain a given classification of objects 9 to 12, and these rules are applied to classify the initial set C of 8 objects. The 16 possible generalizations can then be rated according to their "backward prediction accuracy" with respect to the original classification of the sample C. We have performed a number of such calculations and consistently found that the 5 generalizations chosen by the neural nets in the forward prediction mode (cf. Table 3) have by far the highest backward prediction accuracy (on average between 5 and 6 correct classifications). Their negations ("+" exchanged with "-"), on the other hand, predict only about 2 to 3 of the 8 original classifications correctly, while the remaining 6 possible generalizations all have a backward prediction accuracy close to 50% (4 out of 8 correct). These results, representing averages over 1000 runs, are given in Table 4.

Table 4: Neural Net backward prediction accuracy for the different classifications of objects 9 to 12.

    Classification of objects 9 to 12   Backward prediction accuracy (%)
    +                                   76.0
    + +                                 71.2
    + + +                               71.1
    + +                                 67.9
                                        61.9
    +                                   52.6
    +                                   52.5
    + +                                 52.5
    + + +                               47.4
    + + +                               47.3
    + +                                 47.0
    + + + +                             37.2
    + +                                 31.7
    +                                   30.1
    + +                                 28.3
    + + +                               23.6

In addition to Neural Nets, we have also used the ID3 method to evaluate the backward predictive power of different generalizations. This method generates fewer rules than the Neural Nets (often only a single one), but the resulting tables of backward prediction accuracies all exhibit the same qualitative features. As examples, we show in Figure 2 the ID3 backward prediction trees for two different generalizations: the ID3-preferred generalization, which classifies the objects 9 to 12 as (- + + +), and the Human and Neural Net generalization, (- + - -).
Both trees have a backward prediction accuracy of 75% (provided that "blond hair" in tree (a) is classified randomly).

Figure 2: ID3 backward prediction trees, (a) for the ID3-preferred generalization (- + + +), and (b) for the generalization preferred by Humans and Neural Nets, (- + - -).

The overall backward prediction accuracy is not the only quantity of interest in these calculations. We can, for example, examine how well the original classification of an individual object in the set C is reproduced by predicting backwards from a given generalization. Some examples of such backward prediction profiles are shown in Figure 3. From both the ID3 and the Neural Net calculations, it is evident that the backward prediction behavior of the Human and Neural Net generalization is much more informative than that of the ID3 solution, even though the two solutions have almost the same average backward prediction accuracy.

Figure 3: Individual backward prediction probabilities for the ID3-preferred generalization [graphs (a)], and for the Human and Neural Net generalization [graphs (b)].

Finally, we have recently performed a Human backward prediction experiment. These results are given in Table 5. Details will be given elsewhere (Bernasconi and Gustafson, to appear). Note that the Backward Prediction results are commensurate with the Forward Prediction in both cases.

Table 5: Human backward predictions and accuracy from the two principal forward generalizations A (Neural Nets, Humans) and B (ID3).
    Object   Class   Backward predictions (from A, from B)
    1        +       + + +
    2        -       +
    3        +       + + + +
    4        -       +
    5        -       +
    6        +       + + +
    7        -
    8        -       +

    Humans:     59%   12%    33%   17%
    Accuracy:   75%   100%   75%   75%

5 DISCUSSION AND CONCLUSIONS

Our basic conclusion from this experiment is that the "Strong Convergence Hypothesis" that Machine Learning and Neural Network algorithms are "close" can be sharpened, with the two fields then better distinguished, by comparison to Human Modelling. From the experiment described here, we conjecture a "Stronger Convergence Hypothesis": that Humans and Neural Nets are "closer". Further conclusions related to minimal network size (re Pavel, Gluck, Henkle, 1989), cross-validation (see Weiss and Kulikowski, 1991), sharing over nodes (as in Dietterich, Hild, Bakiri, to appear, and Atlas et al., 1990), and rule extraction (Shavlik et al., to appear) will appear elsewhere (Bernasconi and Gustafson, to appear). Although we have other experiments on other test sets underway, it should be stressed that our investigations, especially toward Human comparisons, are only preliminary and should be viewed as a stimulus to further investigations.

ACKNOWLEDGEMENT

This work was partially supported by the NFP 23 program of the Swiss National Science Foundation and by the US-NSF grant CDR8622236.

REFERENCES

L. Y. Pratt and S. W. Norton, "Neural Networks and Decision Tree Induction: Exploring the Relationship Between Two Research Areas," NIPS '90 Workshop #5 Summary (1990), 7 pp.

J. Ross Quinlan, "Learning Efficient Classification Procedures and Their Application to Chess End Games," in Machine Learning: An Artificial Intelligence Approach, edited by R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Springer-Verlag, Berlin (1984), 463-482.

D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, MA (1986).

J. Bernasconi and K. Gustafson, "Inductive Inference and Neural Nets," to appear.

J. Bernasconi and K.
Gustafson, "Generalization by Humans, Neural Nets, and ID3," IJCNN-91-Seattle.

Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley (1989), Chapter 4.

M. Pavel, M. A. Gluck and V. Henkle, "Constraints on Adaptive Networks for Modelling Human Generalization," in Advances in Neural Information Processing Systems 1, edited by D. Touretzky, Morgan Kaufmann, San Mateo, CA (1989), 2-10.

S. Weiss and C. Kulikowski, Computer Systems that Learn, Morgan Kaufmann (1991).

T. G. Dietterich, H. Hild, and G. Bakiri, "A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping," Machine Learning, to appear.

L. Atlas, R. Cole, J. Connor, M. El-Sharkawi, R. Marks, Y. Muthusamy, E. Barnard, "Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications," in Advances in Neural Information Processing Systems 2, edited by D. Touretzky, Morgan Kaufmann (1990), 622-629.

J. Shavlik, R. Mooney, G. Towell, "Symbolic and Neural Learning Algorithms: An Experimental Comparison (revised)," Machine Learning (1991, to appear).
Practical Issues in Temporal Difference Learning

Gerald Tesauro, IBM Thomas J. Watson Research Center, P. O. Box 704, Yorktown Heights, NY 10598, tesauro@watson.ibm.com

Abstract

This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex nontrivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. The hidden units in these networks have apparently discovered useful features, a longstanding goal of computer games research. Furthermore, when a set of hand-crafted features is added to the input representation, the resulting networks reach a near-expert level of performance, and have achieved good results against world-class human play.

1 INTRODUCTION

We consider the prospects for applications of the TD(λ) algorithm for delayed reinforcement learning, proposed in (Sutton, 1988), to complex real-world problems. TD(λ) is an algorithm for adjusting the weights in a connectionist network which has the following form:

    Δw_t = α (P_{t+1} - P_t) Σ_{k=1}^{t} λ^{t-k} ∇_w P_k    (1)

where P_t is the network's output upon observation of input pattern x_t at time t, w is the vector of weights that parameterizes the network, and ∇_w P_k is the gradient of network output with respect to weights.
Equation 1 basically couples a temporal difference method for temporal credit assignment with a gradient-descent method for structural credit assignment; thus it provides a way to adapt supervised learning procedures such as back-propagation to solve temporal credit assignment problems. The λ parameter interpolates between two limiting cases: λ = 1 corresponds to an explicit supervised pairing of each input pattern x_t with the final reward signal, while λ = 0 corresponds to an explicit pairing of x_t with the next prediction P_{t+1}.

Little theoretical guidance is available for practical uses of this algorithm. For example, one of the most important issues in applications of network learning procedures is the choice of a good representation scheme. However, the existing theoretical analysis of TD(λ) applies primarily to look-up table representations in which the network has enough adjustable parameters to explicitly store the value of every possible state in the state space. This will clearly be intractable for real-world problems, and the theoretical results may be completely inappropriate, as they indicate, for example, that every possible state in the state space has to be visited infinitely many times in order to guarantee convergence.

Another important class of practical issues has to do with the nature of the task being learned, e.g., whether it is noisy or deterministic. In volatile environments with a high step-to-step variance in expected reward, TD learning is likely to be difficult. This is because the value of P_{t+1}, which is used as a heuristic teacher signal for P_t, may have nothing to do with the true value of the state x_t. In such cases it may be necessary to modify TD(λ) by including a lookahead process which averages over the step-to-step noise.

Additional difficulties must also be expected if the task is a combined prediction-control task, in which the predictor network is used to make control decisions, as opposed to a prediction-only task.
As the network's predictions change, its control strategies also change, and this changes the target predictions that the network is trying to learn. In this case, theory does not say whether the combined learning system would converge at all, and if so, whether it would converge to the optimal predictor-controller. It might be possible for the system to get stuck in a self-consistent but non-optimal predictor-controller.

A final set of practical issues are algorithmic in nature, such as convergence, scaling, and the possibility of overtraining or overfitting. TD(λ) has been proven to converge only for a linear network and a linearly independent set of input patterns (Sutton, 1988; Dayan, 1992). In the more general case, the algorithm may not converge even to a locally optimal solution, let alone to a globally optimal solution. Regarding scaling, no results are available to indicate how the speed and quality of TD learning will scale with the temporal length of sequences to be learned, the dimensionality of the input space, the complexity of the task, or the size of the network. Intuitively it seems likely that the required training time might increase dramatically with the sequence length. The training time might also scale poorly with the network or input space dimension, e.g., due to increased sensitivity to noise in the teacher signal. Another potential problem is that the quality of solution found by gradient-descent learning relative to the globally optimal solution might get progressively worse with increasing network size.

Overtraining occurs when continued training of the network results in poorer performance. Overfitting occurs when a larger network does not do as well on a task as a smaller network. In supervised learning, both of these problems are believed to be due to a limited data set.
In the TD approach, training takes place on-line using patterns generated de novo, thus one might hope that these problems would not occur. But both overtraining and overfitting may occur if the error function minimized during training does not correspond to the performance function that the user cares about. For example, in a combined prediction-control task, the user may care only about the quality of control signals, not the absolute accuracy of the predictions.

2 A CASE STUDY: TD LEARNING OF BACKGAMMON STRATEGY

We have seen that existing theory provides little indication of how TD(λ) will behave in practical applications. In the absence of theory, we now examine empirically the above-mentioned issues in the context of a specific application: learning to play the game of backgammon from the outcome of self-play. This application was selected because of its complexity and stochastic nature, and because a detailed comparison can be made with the alternative approach of supervised learning from human expert examples (Tesauro, 1989; Tesauro, 1990).

It seems reasonable that, by watching two fixed opponents play out a large number of games, a network could learn by TD methods to predict the expected outcome of any given board position. However, the experiments presented here study the more interesting question of whether a network can learn from its own play. The learning system is set up as follows: the network observes a sequence of board positions x_1, x_2, ..., x_f leading to a final reward signal z. In the simplest case, z = 1 if White wins and z = 0 if Black wins. In this case the network's output P_t is an estimate of White's probability of winning from board position x_t. The sequence of board positions is generated by setting up an initial configuration, and making plays for both sides using the network's output as an evaluation function. 
In other words, the move selected at each time step is the move which maximizes P_t when White is to play and minimizes P_t when Black is to play. The representation scheme used here contained only a simple encoding of the "raw" board description (explained in detail in figure 2), and did not utilize any additional pre-computed "features" relevant to good play. Since the input encoding scheme contains no built-in knowledge about useful features, and since the network only observes its own play, we may say that this is a "knowledge-free" approach to learning backgammon. While it's not clear that this approach can make any progress beyond a random starting state, it at least provides a baseline for judging other approaches using various forms of built-in knowledge.

The approach described above is similar in spirit to Samuel's scheme for learning checkers from self-play (Samuel, 1959), but in several ways it is a more challenging learning task. Unlike the raw board description used here, Samuel's board description used a number of hand-crafted features which were designed in consultation with human checkers experts. The evaluation function learned in Samuel's study was a linear function of the input variables, whereas multilayer networks learn more complex nonlinear functions. Finally, Samuel found that it was necessary to give the learning system at least one fixed intermediate goal, material advantage, as well as the ultimate goal of the game. The proposed backgammon learning system has no such intermediate goals.

The networks had a feedforward fully-connected architecture with either no hidden units, or a single hidden layer with between 10 and 40 hidden units. The learning algorithm parameters were set, after a certain amount of parameter tuning, at α = 0.1 and λ = 0.7. The average sequence length appeared to depend strongly on the quality of play. 
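The move-selection rule described above (maximize P_t when White is to play, minimize it when Black is to play) is a one-ply greedy search over the network's own evaluation. A minimal sketch, with `value` and `candidate_positions` as illustrative stand-ins for the network and the legal-move generator:

```python
def choose_move(candidate_positions, value, white_to_play):
    """Pick the successor board position using the evaluation function.

    `value(pos)` is the network's estimate of White's probability of
    winning from `pos`; White maximizes it and Black minimizes it.
    """
    if white_to_play:
        return max(candidate_positions, key=value)
    return min(candidate_positions, key=value)
```

Because both sides move through the same evaluation, the trajectory of positions the learner sees is entirely determined by its own current weights, which is what makes this a self-play scheme.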
With decent play on both sides, the average game length is about 50-60 time steps, whereas for the random initial networks, games often last several hundred or even several thousand time steps. This is one of the reasons why the proposed self-learning scheme appeared unlikely to work.

Learning was assessed primarily by testing the networks in actual game play against Sun Microsystems' Gammontool program. Gammontool is representative of the playing ability of typical commercial programs, and provides a decent benchmark for measuring game-playing strength: human beginners can win about 40% of the time against it, decent intermediate-level humans would win about 60%, and human experts would win about 75%. (The random initial networks before training win only about 1%.)

Networks were trained on the entire game, starting from the opening position and going all the way to the end. This is an admittedly naive approach which was not expected to yield any useful results other than a reference point for judging more sensible approaches. However, the rather surprising result was that a significant amount of learning actually took place. Results are shown in figure 1. For comparison purposes, networks with the same input coding scheme were also trained on a massive human expert data base of over 15,000 engaged positions, following the training procedure described in (Tesauro, 1989). These networks were also tested in game play against Gammontool.

Given the complexity of the task, size of input space and length of typical sequences, it seems remarkable that the TD nets can learn on their own to play at a level substantially better than Gammontool. Perhaps even more remarkable is that the TD nets surpass the EP nets trained on a massive human expert data base: the best TD net won 66.2% against Gammontool, whereas the best EP net could only manage 59.4%. This was confirmed in a head-to-head test in which the best TD net played 10,000 games against the best EP net. 
The result was 55% to 45% in favor of the TD net. This confirms that the Gammontool benchmark gives a reasonably accurate measure of relative game-playing strength, and that the TD net really is better than the EP net. In fact, the TD net with no features appears to be as good as Neurogammon 1.0, backgammon champion of the 1989 Computer Olympiad, which does have features, and which wins 65% against Gammontool. A 10,000 game test of the best TD net against Neurogammon 1.0 yielded statistical equality: 50% for the TD net and 50% for Neurogammon.

Figure 1: Plot of game performance against Gammontool vs. number of hidden units for networks trained using TD learning from self-play (TD), and supervised training on human expert preferences (EP). Each data point represents the result of a 10,000 game test, and should be accurate to within one percentage point.

It is also of interest to examine the weights learned by the TD nets, shown in figure 2. One can see a great deal of spatially organized structure in the pattern of weights, and some of this structure can be interpreted as useful features by a knowledgeable backgammon player. For example, the first hidden unit in figure 2 appears to be a race-oriented feature detector, while the second hidden unit appears to be an attack-oriented feature detector. The TD net has apparently solved the longstanding "feature discovery" problem, which was recently stated in (Frey, 1986) as follows: "Samuel was disappointed in his inability to develop a mechanical strategy for defining features. He thought that true machine learning should include the discovery and definition of features. Unfortunately, no one has found a practical way to do this even though more than two and a half decades have passed." 
The training times needed to reach the levels of performance shown in figure 1 were on the order of 50,000 training games for the networks with 0 and 10 hidden units, 100,000 games for the 20-hidden unit net, and 200,000 games for the 40-hidden unit net. Since the number of training games appears to scale roughly linearly with the number of weights in the network, and the CPU simulation time per game on a serial computer also scales linearly with the number of weights, the total CPU time thus scales quadratically with the number of weights: on an IBM RS/6000 workstation, the smallest network was trained in several hours, while the largest net required two weeks of simulation time.

Figure 2: Weights from the input units to two hidden units in the best TD net. Black squares represent negative weights; white squares represent positive weights; size indicates magnitude of weights. Rows represent spatial locations 1-24; the top row represents no. of barmen, men off, and side to move. Columns represent the number of Black and White men as indicated. The first hidden unit has two noteworthy features: a linearly increasing pattern of negative weights for Black blots and Black points, and a negative weighting of White men off and a positive weighting of Black men off. These contribute to an estimate of Black's probability of winning based on his racing lead. The second hidden unit has the following noteworthy features: strong positive weights for Black home board points, strong positive weights for White men on bar, positive weights for White blots, and negative weights for White points in Black's home board. These factors all contribute to the probability of a successful Black attacking strategy.

In qualitative terms, the TD nets have developed a style of play emphasizing running and tactical play, whereas the EP nets favor more quiescent positional play emphasizing blocking rather than racing. This is more in line with human expert play, but it often leads to complex prime vs. prime and back-game situations that are hard for the network to evaluate properly. This suggests one possible advantage of the TD approach over the EP approach: by imitating an expert teacher, the learner may get itself into situations that it can't handle. With the alternative approach of learning from experience, the learner may not reproduce the expert strategies, but at least it will learn to handle whatever situations are brought about by its own strategy.

It's also interesting that the TD net plays well in early phases of play, whereas its play becomes worse in the late phases of the game. This is contrary to the intuitive notion that states far from the end of the sequence should be harder to learn than states near the end. Apparently the inductive bias due to the representation scheme and network architecture is more important than temporal distance to the final outcome.

3 TD LEARNING WITH BUILT-IN FEATURES

We have seen that TD networks with no built-in knowledge are able to reach computer championship levels of performance for this particular application. It is then natural to wonder whether even greater levels of performance might be obtained by adding hand-crafted features to the input representation. In a separate series of experiments, TD nets containing all of Neurogammon's features were trained from self-play as described in the previous section. Once again it was found that the performance improved monotonically by adding more hidden units to the network, and training for longer training times. The best performance was obtained with a network containing 80 hidden units and over 25,000 weights. 
This network was trained for over 300,000 training games, taking over a month of CPU time on an RS/6000 workstation. The resulting level of performance was 73% against Gammontool and nearly 60% against Neurogammon. This is very close to a human expert level of performance, and is the strongest program ever seen by this author.

The level of play of this network was also tested in an all-day match against two-time World Champion Bill Robertie, one of the world's best backgammon players. At the end of the match, a total of 31 games had been played, of which Robertie won 18 and the TD net 13. This showed that the TD net was capable of a respectable showing against world-class human play. In fact, Robertie thinks that the network's level of play is equal to that of the average good human tournament player.

It's interesting to speculate about how far this approach can be carried. Further substantial improvements might be obtained by training much larger networks on a supercomputer or special-purpose hardware. On such a machine one could also search beyond one ply, and there is some evidence that small-to-moderate improvements could be obtained by running the network in two-ply search mode. Finally, the features in Berliner's BKG program (Berliner, 1980) or in some of the top commercial programs are probably more sophisticated than Neurogammon's relatively simple features, and hence might give better performance. The combination of all three improvements (bigger nets, two-ply search, better features) could conceivably result in a network capable of playing at world-class level.

4 CONCLUSIONS

The experiments in this paper were designed to test whether TD(λ) could be successfully applied to complex, stochastic, nonlinear, real-world prediction and control problems. This cannot be addressed within current theory, because current theory cannot answer such basic questions as whether the algorithm converges or how it would scale. 
Given the lack of any theoretical guarantees, the results of these experiments are very encouraging. Empirically the algorithm always converges to at least a local minimum, and the quality of solution generally improves with increasing network size. Furthermore, the scaling of training time with the length of input sequences, and with the size and complexity of the task, does not appear to be a serious problem. This was ascertained through studies of simplified endgame situations, which took about as many training games to learn as the full-game situation (Tesauro, 1992). Finally, the network's move selection ability is better than one would expect based on its prediction accuracy. The absolute prediction accuracy is only at the 10% level, whereas the difference in expected outcome between optimal and non-optimal moves is usually at the level of 1% or less.

The most encouraging finding, however, is a clear demonstration that TD nets with zero built-in knowledge can outperform identical networks trained on a massive data base of expert examples. It would be nice to understand exactly how this is possible. The ability of TD nets to discover features on their own may also be of some general importance in computer games research, and thus worthy of further analysis. Beyond this particular application area, however, the larger and more important issue is whether learning from experience can be useful and practical for more general complex problems. The quality of results obtained in this study indicates that the approach may work well in practice. There may also be some intrinsic advantages over supervised training on a fixed data set. At the very least, for tasks in which the exemplars are hand-labeled by humans, it eliminates the laborious and time-consuming process of labeling the data. Furthermore, the learning system would not be fundamentally limited by the quantity of labeled data, or by errors in the labeling process. 
Finally, preserving the intrinsic temporal nature of the task, and informing the system of the consequences of its actions, may convey important information about the task which is not necessarily contained in the labeled exemplars. More theoretical and empirical work will be needed to establish the relative advantages and disadvantages of the two approaches; this could result in the development of hybrid algorithms combining the best of both approaches.

References

H. Berliner, "Computer backgammon." Sci. Am. 243:1, 64-72 (1980).
P. Dayan, "Temporal differences: TD(λ) for general λ." Machine Learning, in press (1992).
P. W. Frey, "Algorithmic strategies for improving the performance of game playing programs." In: D. Farmer et al. (Eds.), Evolution, Games and Learning. Amsterdam: North Holland (1986).
A. Samuel, "Some studies in machine learning using the game of checkers." IBM J. of Research and Development 3, 210-229 (1959).
R. S. Sutton, "Learning to predict by the methods of temporal differences." Machine Learning 3, 9-44 (1988).
G. Tesauro and T. J. Sejnowski, "A parallel network that learns to play backgammon." Artificial Intelligence 39, 357-390 (1989).
G. Tesauro, "Connectionist learning of expert preferences by comparison training." In D. Touretzky (Ed.), Advances in Neural Information Processing Systems 1, 99-106 (1989).
G. Tesauro, "Neurogammon: a neural network backgammon program." IJCNN Proceedings III, 33-39 (1990).
G. Tesauro, "Practical issues in temporal difference learning." Machine Learning, in press (1992).
1991
A Network of Localized Linear Discriminants

Martin S. Glassman
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
msg@siemens.siemens.com

Abstract

The localized linear discriminant network (LLDN) has been designed to address classification problems containing relatively closely spaced data from different classes (encounter zones [1], the accuracy problem [2]). Locally trained hyperplane segments are an effective way to define the decision boundaries for these regions [3]. The LLD uses a modified perceptron training algorithm for effective discovery of separating hyperplane/sigmoid units within narrow boundaries. The basic unit of the network is the discriminant receptive field (DRF), which combines the LLD function with Gaussians representing the dispersion of the local training data with respect to the hyperplane. The DRF implements a local distance measure [4], and obtains the benefits of networks of localized units [5]. A constructive algorithm for the two-class case is described which incorporates DRFs into the hidden layer to solve local discrimination problems. The output unit produces a smoothed, piecewise linear decision boundary. Preliminary results indicate the ability of the LLDN to efficiently achieve separation when boundaries are narrow and complex, in cases where both the "standard" multilayer perceptron (MLP) and k-nearest neighbor (KNN) yield high error rates on training data.

1 The LLD Training Algorithm and DRF Generation

The LLD is defined by the hyperplane normal vector V and its "midpoint" M (a translated origin [1] near the center of gravity of the training data in feature space). Incremental corrections to V and M accrue for each training token feature vector Y_j in the training set, as illustrated in figure 1 (exaggerated magnitudes). 
The surface of the hyperplane is appropriately moved either towards or away from Y_j by rotating V, and by shifting M along the axis defined by V; M is always shifted towards Y_j in the "radial" direction R_j (which is the component of D_j orthogonal to V, where D_j = Y_j − M).

Figure 1: LLD incremental correction vectors associated with training token Y_j (one panel for a token on the correct side of the hyperplane, one for a token on the wrong side). The corresponding LLD update rules are:

ΔV = μ(n) Σ_j ΔV_j = μ(n) Σ_j (−s_c w_c δ_j) R_j / ||D_j||
ΔM_V = ν(n) Σ_j ΔM_{V,j} = ν(n) Σ_j (−s_c w_c δ_j) V
ΔM_R = β(n) Σ_j ΔM_{R,j} = β(n) Σ_j (w_c δ_j) R_j

The batch-mode summation is over tokens in the local training set, and n is the iteration index. The polarity of ΔV_j and ΔM_{V,j} is set by s_c (c = the class of Y_j), where s_c = 1 if Y_j is classified correctly, and s_c = −1 if not. Corrections for each token are scaled by a sigmoidal error term: δ_j = 1/(1 + exp((s_c η/λ) |V^T D_j|)), a function of the distance of the token to the plane, the sign of s_c, and a data-dependent scaling parameter λ = |V^T (B_1 − B_2)|, where η is a fixed (experimental) scaling parameter. The scaling of the sigmoid is proportional to an estimate of the boundary-region width along the axis of V. B_c is a weighted average of the class-c token vectors: B_c(n+1) = (1 − α) B_c(n) + α w_c Σ_{j∈c} ε_{j,c}(n) Y_j(n), where ε_{j,c} is a sigmoid with the same scaling as δ_j, except that it is centered on B_c instead of M, emphasizing tokens of class c nearest the hyperplane surface. For small η's, B_c will settle near the cluster center of gravity, and for large η's, B_c will approach the tokens closest to the hyperplane surface. (The rate of movement of B_c is limited by the value of α, which is not critical.) 
The inverse of the number of tokens in class c, w_c, balances the weight of the corrections from each class. If a more Bayesian-like solution is required, the slope of δ can be made class dependent (for example, replacing η with η_c ∝ w_c). Since the slope of the sigmoid error term is limited and distribution dependent, the use of w_c, along with the nonlinear weighting of tokens near the hyperplane surface, is important for the development of separating planes in relatively narrow boundaries (the assumption is that the distributions near these boundaries are non-Gaussian). The setting of η simultaneously (for convenience) controls the focus on the "inner edges" of the class clusters and the slope of the sigmoid relative to the distance between the inner edges, with some resultant control over generalization performance. This local scaling of the error also aids the convergence rate. The range of good values for η has been found to be reasonably wide, and identical values have been used successfully with speech, ECG, and synthetic data; it could also be set/optimized using cross-validation.

Separate adaptive learning rates (μ(n), ν(n), and β(n)) are used in order to take advantage of the distinct nature of the geometric function of each component. Convergence is also improved by maintaining M within the local region; this controls the rate at which the hyperplane can sweep through the boundary region, making the effect of ΔV more predictable. The LLD normal-vector update is simply V(n+1) = (V(n) + ΔV)/||V(n) + ΔV||, so that V is always normalized to unit magnitude. The midpoint is just shifted: M(n+1) = M(n) + ΔM_R + ΔM_V.
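One batch iteration of this update can be sketched as follows. This is a simplified reading of the rules above: the class weighting w_c, the B_c edge-point tracking, and the separate adaptive learning rates are omitted, and all parameter names are illustrative rather than the paper's.

```python
import numpy as np

def lld_step(V, M, tokens, s, mu=0.1, nu=0.1, beta=0.1, eta_over_lam=1.0):
    """One batch update of the LLD normal V (unit vector) and midpoint M.

    tokens[j] is a feature vector Y_j; s[j] is +1 if Y_j is currently
    classified correctly and -1 otherwise.
    """
    dV = np.zeros_like(V)
    dMv = np.zeros_like(M)
    dMr = np.zeros_like(M)
    for y, sj in zip(tokens, s):
        d = y - M                       # D_j = Y_j - M
        r = d - (V @ d) * V             # radial component R_j, orthogonal to V
        delta = 1.0 / (1.0 + np.exp(sj * eta_over_lam * abs(V @ d)))
        dV += -sj * delta * r / np.linalg.norm(d)   # rotate V toward/away
        dMv += -sj * delta * V                      # shift M along the normal
        dMr += delta * r                # M always moves toward Y_j radially
    V = V + mu * dV
    V = V / np.linalg.norm(V)           # keep the normal at unit magnitude
    M = M + nu * dMv + beta * dMr
    return V, M
```

Note how the sigmoid term behaves: for a correctly classified token (s = +1), delta shrinks with distance from the plane, so far-away correct tokens contribute little; for a misclassified token (s = −1), delta grows toward 1 with distance, so badly misplaced tokens drive the largest corrections.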
Figure 2: Vectors and parameters associated with the DRF for class c, for LLD k. λ is the estimate of the boundary-region width; σ_V is the dispersion of the training data in the discriminant direction (V); σ_R is the dispersion of the training data in all directions orthogonal to V.

DRFs are used to localize the response of the LLD to the region of feature space in which it was trained, and are constructed after completion of LLD training. Each DRF represents one class, and the localizing component of the DRF is a Gaussian function based on simple statistics of the training data for that class. Two measures of the dispersion of the data are used: σ_V ("normal" dispersion), obtained using the mean average deviation of the lengths of the P_{j,k,c}, and σ_R ("radial" dispersion), obtained correspondingly using the O_{j,k,c}'s. (As shown, P_{j,k,c} is the normal component, and O_{j,k,c} the radial component, of Y_j − B_{k,c}.) The output in response to an input vector Y_j from the class-c DRF associated with LLD k is

φ_{j,k,c} = Θ_{k,c} (δ_{j,k} − 0.5) / exp(d²_{V,j,k,c} + d²_{R,j,k,c})

Two components of the DRF incorporate the LLD discriminant; one is the sigmoid error function used in training the LLD, but shifted down to a value of zero at the hyperplane surface. The other is Θ_{k,c}, which is 1 if Y_j is on the class-c side of LLD k, and zero if not. (In retrospect, for generalization performance, it may not be desirable to introduce this discontinuity to the discriminant component.) The contribution of the Gaussian is based on the normal- and radial-dispersion weighted distances of the input vector to B_{k,c}: d_{V,j,k,c} = ||P_{j,k,c}||/σ_{V,k,c} and d_{R,j,k,c} = ||O_{j,k,c}||/σ_{R,k,c}.

2 Network Construction

Segmentation of the boundary between classes is accomplished by "growing" LLDs within the boundary region. An LLD is initialized using a closely spaced pair of tokens from each class. 
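The DRF response can be computed directly from these quantities. The sketch below folds the k, c indices into the arguments, uses a plain logistic for the sigmoid component, and centers it on B as a simplification; all names are illustrative, not the paper's code.

```python
import numpy as np

def drf_response(y, V, B, sigma_v, sigma_r, eta_over_lam=1.0):
    """Response of one class-c DRF of LLD k to an input vector y.

    Combines the class-side gate (theta), the sigmoid shifted to zero
    at the hyperplane surface, and the Gaussian localizer built from
    the normal and radial dispersions of the local training data.
    """
    d = y - B
    proj = V @ d
    p = proj * V                 # normal component P of (y - B)
    o = d - p                    # radial component O, orthogonal to V
    theta = 1.0 if proj >= 0 else 0.0                 # class-side gate
    sigmoid = 1.0 / (1.0 + np.exp(-eta_over_lam * proj))
    d_v = np.linalg.norm(p) / sigma_v
    d_r = np.linalg.norm(o) / sigma_r
    return theta * (sigmoid - 0.5) / np.exp(d_v**2 + d_r**2)
```

The response is zero on the hyperplane, grows with signed distance on the class side, and is damped by the Gaussian as the input moves away from the local training cluster in either the normal or the radial direction.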
The LLD is grown by adding nearby tokens to the training set, using the k nearest neighbors to the LLD midpoint at each growth stage as candidates for permanent inclusion. Candidate DRFs are generated after incremental training of the LLD to accommodate each new candidate token. Two error measures are used to assess the effect of each candidate: the peak value of δ_j over the local training set, and ω, which is a measure of misclassification error due to the receptive fields of the candidate DRFs extending over the entire training set. The candidate token with the lowest average ω is permanently added, as long as both its δ_j and ω are below fixed thresholds. Growth of the LLD is halted if no candidate has both error measures below threshold. The δ_j and ω thresholds directly affect the granularity of the DRF representation of the data; they need to be set to minimize the number of DRFs generated, while allowing sufficient resolution of local discrimination problems. They should perhaps be adaptive, so as to encourage coarse-grained solutions to develop before fine-grained structure.

Figure 3: Four "snapshots" in the growth of an LLD/DRF pair. The upper two are close-ups. The initial LLD/DRF pair is shown in the upper left, along with the seed pair. Filled rectangles and ellipses represent the tokens from each class in the permanent local training set at each stage. The large markers are the B points, and the cross is the LLD midpoint. The amplitude of the DRF outputs is coded in grey scale.

At this point the DRFs are fixed and added to the network; this represents the addition of two new localized features available for use by the network's output layer in solving the global discrimination problem. In this implementation, the output "layer" is a single LLD used to generate a two-class decision. 
The architecture is shown in figure 4: the input data feed a set of LLDs whose DRF outputs form localized features, which in turn feed the output discriminant function (an LLD with sigmoid); an error measure on training tokens is used to seed new LLDs or halt training.

Figure 4: LLDN architecture for a two-dimensional, two-class problem.

The output unit is completely retrained after addition of a new DRF pair, using the entire training set. The output of the network for the input Y_j is Ψ_j = 1/(1 + exp((η/λ_o) V^T[Φ_j − M])), where λ_o = |V^T[B_0 − B_1]| and Φ_j = [φ_{j,1}, ..., φ_{j,p}] is the p-dimensional vector of DRF outputs presented to the output unit. V is the output LLD normal vector, M the midpoint, and the B_c's the cluster edge points in the internal feature space. The output error for each token is then used to select a new seed pair for development of the next LLD/DRF pair. If all tokens are classified with sufficient confidence, of course, construction of the LLDN is complete. There are three possibilities for insufficient confidence: a token is covered by a DRF of the wrong class, it is not yet covered sufficiently by any DRFs, or it is in a region of "conflict" between DRFs of different classes. A heuristic is used to prevent the repeated selection of the same seed-pair tokens, since there is no guarantee that a given DRF will significantly reduce the error for the data it covers after output-unit retraining. This heuristic alternates between the types of error and the class for selection of the primary seed token. Redundancy in DRF shapes is also minimized by error-weighting the dispersion computations, so that the resultant Gaussian focuses more on the higher-error regions of the local training data. A simple but reasonably effective pruning algorithm was incorporated to further eliminate unnecessary DRFs.

Figure 5: Network response plots illustrating network development. 
The upper two sequences begin with the first LLD/DRF pair, and the bottom two plots show final network responses for these two problems. A solution to a harder version of the nested squares problem is on the lower left.

3 Experimental Results

The first experiment demonstrates comparative convergence properties of the LLD and a single hyperplane trained by the standard generalized delta rule (GDR) method (a "network" with no hidden units and a single output unit is used) on 14 linearly separable, minimal consonant pair data sets. The data is 256-dimensional (time/frequency matrix, described in [6]), with 80 exemplars per consonant. The results compare the best performance obtainable from each technique. The LLD converges roughly 12 times faster in iteration counts. The GDR often fails to completely separate f/th, f/v, and s/sh; in the results in figure 6 it fails on the f/th data set at a plateau of 25% error. In both experiments described in this paper, networks were run for relatively long times to ensure confidence in declaring failure to solve the problem.

Figure 6: Training a single hyperplane: iterations to complete separation for each minimal pair (the GDR does not separate f/th).

Figure 7: Error rates vs. geometries (boundary widths of 29%, 4.4%, and 1%).

The second experiment involves complete networks on synthetic two-dimensional problems. Two examples of the nested squares problem (random distributions of tokens near the surface of squares of alternating class, 400 tokens total) are shown in figure 5. 
Two parameters controlling data-set generation are explored: the relative boundary-region width, and the relative offset from the origin of the data-set center of gravity (while keeping the upper right corner of the outside square near the (1,1) coordinate); all data is kept within the unit square (except for geometry number 2). Relative boundary widths of 29%, 4.4%, and 1% are used with offsets of 0%, 76%, and 94%. The best results over parameter settings are reported for each network for each geometry. Four MLP architectures were used: 2:16:1, 2:32:1, 2:64:1, and 2:16:16:1; all of these converge to a solution for the easiest problem (wide boundaries, no offset), but all eventually fail as the boundaries narrow and/or the offset increases. The worst-performing net (2:64:1) fails on 7/8 problems (maximum error rate of 49%); the best net (2:16:16:1) fails on 3/8 (maximum of 24% error). The LLDN is 1 to 3 orders of magnitude faster in CPU time when the MLP does converge, even though it does not use adaptive learning rates in this experiment. (The average running time for the LLDN was 34 minutes; for the MLPs it was 3481 minutes [Stardent 3040, single CPU], which includes non-converging runs. The 2:16:16:1 net did, however, take 4740 minutes to solve problem 6, which was solved in 7 minutes by the LLDN.) The best LLDNs converge to zero errors over the problem set (fig. 6), and are not too sensitive to parameter variation, which primarily affects convergence time and the number of DRFs generated. In contrast, finding good values for learning rate and momentum for the MLPs for each problem was a time-consuming process. The effect of random weight initialization in the MLP is not known because of the long running times required. The KNN error rate was estimated using the leave-one-out method, and yields error rates of 0%, 10.5%, and 38.75% (for the best k's) respectively for the three values of boundary width. 
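The leave-one-out estimate used for the KNN baseline can be reproduced with a brute-force sketch (O(n²) distance computations; the function and variable names are illustrative):

```python
import math

def loo_knn_error(X, y, k):
    """Leave-one-out error rate of a k-nearest-neighbor classifier.

    Each token is classified by majority vote of its k nearest
    neighbors among all the *other* tokens.
    """
    n, errors = len(X), 0
    for i in range(n):
        # distances from token i to every other token, nearest first
        neighbors = sorted(
            (math.dist(X[i], X[j]), y[j]) for j in range(n) if j != i
        )
        votes = [label for _, label in neighbors[:k]]
        predicted = max(set(votes), key=votes.count)  # majority vote
        errors += predicted != y[i]
    return errors / n
```

Because every token is held out in turn, the estimate uses all of the data without a separate test set, which suits the small synthetic sets used here.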
The LLDN is insensitive to offset and scale (like the KNN) because of the use of the local origin (M) and error scaling (λ). While global offset and scaling problems for the MLP can be ameliorated through normalization and origin translation, this method cannot guarantee elimination of local offset and scaling problems. The LLDN's utilization of DRFs was reasonably efficient, with the smallest networks (after pruning) using 20, 32, and 54 DRFs for the three boundary widths. A simple pruning algorithm, which starts up after convergence, iteratively removes the DRFs with the lowest connection weights to the output unit (which is retrained after each link is removed). A range of roughly 20% to 40% of the DRFs were removed before misclassification errors developed on the training sets.

The LLDN was also tested on the "two-spirals" problem, which is known to be difficult for the standard MLP methods. Because of the boundary segmentation process, solution of the two-spirals problem was straightforward for the LLDN, and could be tuned to converge in as little as 2.5 minutes on an Apollo DN10000. The solution shown in fig. 5 uses 50 DRFs (not pruned). The generalization pattern is relatively "nice" (for training on the sparse version of the data set), and perhaps demonstrates the practical nature of the smoothed piecewise linear boundary for nonlinear problems.

4 Discussion

The effect of LLDN parameters on generalization performance needs to be studied. In the nested squares problem it is clear that the MLPs will have better generalization when they converge; this illustrates the potential utility of a multi-scale approach to developing localized discriminants. A number of extensions are possible. Localized feature selection can be implemented by simply zeroing components of V. 
The DRF Gaussians could model the radial dispersion of the data more effectively (in greater than two dimensions) by generating principal component axes which are orthogonal to V. Extension to the multiclass case can be based on DRF sets developed for discrimination between each class and all other classes, using the DRF's as features for a multi-output classifier. The use of multiple hidden layers offers the prospect of more complex localized receptive fields. Improvement in generalization might be gained by including a procedure for merging neighboring DRF's. While it is felt that the LLD parameters should remain fixed, it may be advantageous to allow adjustment of the DRF Gaussian dispersions as part of the output layer training. A stopping rule for LLD training needs to be developed so that adaptive learning rates can be utilized effectively. This rule may also be useful in identifying poor token candidates early in the incremental LLD training.
Multi-Digit Recognition Using A Space Displacement Neural Network Ofer Matan*, Christopher J.C. Burges, Yann Le Cun and John S. Denker AT&T Bell Laboratories, Holmdel, N.J. 07733 (*Author's current address: Department of Computer Science, Stanford University, Stanford, CA 94305.) Abstract We present a feed-forward network architecture for recognizing an unconstrained handwritten multi-digit string. This is an extension of previous work on recognizing isolated digits. In this architecture a single digit recognizer is replicated over the input. The output layer of the network is coupled to a Viterbi alignment module that chooses the best interpretation of the input. Training errors are propagated through the Viterbi module. The novelty in this procedure is that segmentation is done on the feature maps developed in the Space Displacement Neural Network (SDNN) rather than in the input (pixel) space. 1 Introduction In previous work (Le Cun et al., 1990) we demonstrated a feed-forward backpropagation network that recognizes isolated handwritten digits at state-of-the-art performance levels. The natural extension of this work is towards recognition of unconstrained strings of handwritten digits. The most straightforward solution is to divide the process into two stages: segmentation and recognition. The segmenter divides the original image into pieces (each containing an isolated digit) and passes them to the recognizer for scoring. This approach assumes that segmentation and recognition can be decoupled. Except for very simple cases this is not true. Speech-recognition research (Rabiner, 1989; Franzini, Lee and Waibel, 1990) has demonstrated the power of using the recognition engine to score each segment in a candidate segmentation. The segmentation that gives the best combined score is chosen.
"Recognition driven" segmentation is usually used in conjunction with dynamic programming, which can find the optimal solution very efficiently. Though dynamic programming algorithms save us from exploring an exponential number of segment combinations, they are still linear in the number of possible segments, requiring one call to the recognition unit per candidate segment. In order to solve the problem in reasonable time it is necessary to: 1) limit the number of possible segments, or 2) have a rapid recognition unit. We have built a ZIP code reading system that "prunes" the number of candidate segments (Matan et al., 1991). The candidate segments were generated by analyzing the image's pixel projection onto the horizontal axis. The strength of this system is that the number of calls to the recognizer is small (only slightly over twice the number of real digits). The weakness is that by generating only a small number of candidates one often misses the correct segmentation. In addition, generation of this small set is based on multi-parametric heuristics, making the system difficult to tune. It would be attractive to discard heuristics and generate many more candidates, but then the time spent in the recognition unit would have to be reduced considerably. Reducing the computation of the recognizer usually gives rise to a reduction in recognition rates. However, it is possible to have our segments and eat them too. We propose an architecture which can explore many more candidates without compromising the richness of the recognition engine. 2 The Design Let us describe a simplified and less efficient solution that will lead us to our final design. Consider a de-skewed image such as the one shown in Figure 1. The system will separate it into candidate segments using vertical cuts. A few examples of these are shown beneath the original image in Figure 1.
In the process of finding the best overall segmentation, each candidate segment will be passed to the recognizer described in (Le Cun et al., 1990). The scores are converted to probabilities (Bridle, 1989) that are inserted into the nodes of a directed acyclic graph. Each path on this graph represents a candidate segmentation, where the length of each path is the product of the node values along it. The Viterbi algorithm is used to determine the longest path (which corresponds to the segmentation with the highest combined score). It seems somewhat redundant to process the same pixels numerous times (as part of different, overlapping candidate segments). For this reason we propose to pass a whole size-normalized image to the recognition unit and to segment a feature map, after most of the neural network computation has been done. Since the first four layers in our recognizer are convolutional, we can easily extend the single-digit network by applying the convolution kernels to the multi-digit image. Figure 2 shows the example image (Figure 1) processed by the extended network. We now proceed to segment the top layer. Since the network is convolutional, segmenting this feature-map layer is similar to segmenting the input layer. (Because of overlapping receptive fields and reduced resolution, it is not exactly equivalent.) This gives a speed-up of roughly an order of magnitude.

Figure 1: A sample ZIP code image and possible segmentations.

Figure 2: The example ZIP code processed by 4 layers of a convolutional feedforward network.

In the single digit network, we can view the output layer as a 10-unit column vector that is connected to a zone of width 5 on the last feature layer. If we replicate the single digit network over the input in the horizontal direction, the output layer will be replicated.
Each output vector will be connected to a different zone of width 5 on the feature layer. Since the width of a handwritten digit is highly variable, we construct alternate output vectors that are connected to feature segment zones of widths 4, 3 and 2. The resulting output maps for the example ZIP code are shown in Figure 3. The network we have constructed is a shared weight network reminiscent of a TDNN (Lang and Hinton, 1988). We have termed this architecture a Space Displacement Neural Network (SDNN). We rely on the fact that most digit strings lie on more or less one line; therefore, the network is replicated in the horizontal direction. For other applications it is conceivable to replicate in the vertical direction as well. 3 The Recognition Procedure The output maps are processed by a Viterbi algorithm which chooses the set of output vectors corresponding to the segmentation giving the highest combined score. We currently assume that we know the number of digits in the image; however, this procedure can be generalized to an unknown number of digits. In Figure 3 the five output vectors that combined to give the best overall score are marked by thin lines beneath them. 4 The Training Procedure During training we follow the above procedure and repeat it under the constraint that the winning combination corresponds to the ground truth. In Figure 4 the constrained-winning output vectors are marked by small circles. We perform backpropagation through both the ground truth vectors (reinforcement) and the highest scoring vectors (negative reinforcement). We have trained and tested this architecture on size-normalized 5-digit ZIP codes taken from U.S. Mail. 6000 images were used for training and 3000 were used for testing. The images were cleaned, deskewed and height-normalized according to the assumed largest digit height. The data was not "cleaned" after the automatic preprocessing, leaving non-centered images and non-digits in both the training and test set.
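The recognition procedure above, in which the Viterbi algorithm picks the combination of output vectors with the highest combined score, can be sketched as a dynamic program over segment boundaries; the score function below is a made-up stand-in for the SDNN output probabilities:

```python
import math

def best_segmentation(n, score, min_w=2, max_w=5):
    """Return the segment boundaries (i, j) over positions 0..n that
    maximize the product of per-segment probabilities score(i, j),
    computed by Viterbi-style dynamic programming in log space.
    Segment widths are restricted to [min_w, max_w], mirroring the
    zone widths 2-5 used by the SDNN output vectors."""
    best = [0.0] + [-math.inf] * n   # best log score ending at j
    back = [0] * (n + 1)             # backpointer to segment start
    for j in range(1, n + 1):
        for w in range(min_w, max_w + 1):
            i = j - w
            if i >= 0 and best[i] > -math.inf:
                s = best[i] + math.log(score(i, j))
                if s > best[j]:
                    best[j], back[j] = s, i
    cuts, j = [], n
    while j > 0:                     # trace the winning path back
        cuts.append((back[j], j))
        j = back[j]
    return list(reversed(cuts))

# A score that prefers width-3 segments splits 6 columns into two digits.
print(best_segmentation(6, lambda i, j: 0.9 if j - i == 3 else 0.1))
# [(0, 3), (3, 6)]
```

Working in log space keeps the path product numerically stable; constraining the winning path to the ground truth, as in the training procedure, amounts to restricting the same recursion to label-consistent segments.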
Training was done using stochastic back propagation, with some sweeps using Newton's method for adjusting the learning rates. We tried various methods of initializing the gradient on the last layer:

• Reinforce only units picked by the constrained Viterbi (all other units have a gradient of zero).

• Same as above, but set negative feedback through units chosen by the regular Viterbi that are different from those chosen by the constrained version (push down the incorrect segmentation if it is different from the correct answer). This speeds up the convergence.

• Reinforce units chosen by the constrained Viterbi, and set negative feedback through all other units except those that are "similar" to ones in the correct set ("similar" is defined as corresponding to a close center of frame in the input and responding with the correct class).

As one adds more units that have a non-zero gradient, each training iteration becomes more similar to batch training and is more prone to oscillations. In this case more Newton sweeps are required.

Figure 3: Recognition using the SDNN/Viterbi. The output maps of the SDNN are shown. White indicates a positive activation. The output vectors chosen by the Viterbi alignment are marked by a thin line beneath them. The input regions corresponding to these vectors are shown. One can see that the system centers on the individual digits. Each of the 4 output maps shown is connected to a different size zone in the last feature layer (5, 4, 3 and 2, top to bottom). In order to implement weight sharing between output units connected to different zone sizes, the dangling connections to the output vectors of narrower zones are connected to feature units corresponding to background in the input.

Figure 4: Training using the SDNN/Viterbi. The output vectors chosen by the Viterbi algorithm are marked by a thin line beneath them. The corresponding input regions are shown in the left column. The output vectors chosen by the constrained Viterbi algorithm are marked by small circles and their corresponding input regions are shown to the right. Given the ground truth, the system can learn to center on the correct digit.

5 Results The current raw recognition rates for the whole 5-digit string are 70% correct on the training set and 66% correct on the test set. Additional interesting statistics are the distribution of the number of correct digits across the whole ZIP code and the recognition rates for each digit's position within the ZIP code. These are presented in the tables shown below.

Table 1: Top: Distribution of test images according to the number of correct single digit classifications out of 5. Bottom: Rates of single digit classification according to position. Digits on the edges are classified more easily since one edge is predetermined.

Number of digits correct | Percent of cases
5 | 66.3
4 | 19.7
3 | 7.2
2 | 4.7
1 | 1.4
0 | 0.7

Digit position | Percent correct
1st | 92
2nd | 87
3rd | 87
4th | 86
5th | 90

6 Conclusions and Future Work The SDNN combined with the Viterbi algorithm learns to recognize strings of handwritten digits by "centering" on the individual digits in the string. This is similar in concept to other work in speech (Haffner, Franzini and Waibel, 1991) but differs from (Keeler, Rumelhart and Leow, 1991), where no alignment procedure is used. The current recognition rates are still lower than those of our best system, which uses pixel projection information to guide a recognition-based segmenter. The SDNN is much faster and lends itself to parallel hardware.
Possible improvements to the architecture may be: • Modified constraints on the segmentation rules of the feature layer. • Applying the Viterbi algorithm in the vertical direction as well, which might overcome problems due to height variance. • It might be too hard to segment using local information only; one might try using global information, such as pixel projection, or recognizing doublets or triplets. Though there is still considerable work to be done in order to reach state-of-the-art recognition levels, we believe that this type of approach is the correct direction for future image processing applications. Applying recognition based segmentation at the line, word and character level on high feature maps is necessary in order to achieve fast processing while exploring a large set of possible interpretations. Acknowledgements Support of this work by the Technology Resource Department of the U.S. Postal Service under Task Order 104230-90-C-2456 is gratefully acknowledged. References Bridle, J. S. (1989). Probabilistic Interpretation of Feedforward Classification Network Outputs with Relationships to Statistical Pattern Recognition. In Fogelman-Soulie, F. and Herault, J., editors, Neuro-computing: Algorithms, Architectures and Applications. Springer-Verlag. Franzini, M., Lee, K. F., and Waibel, A. (1990). Connectionist Viterbi Training: A New Hybrid Method For Continuous Speech Recognition. In Proceedings ICASSP 90, pages 425-428. IEEE. Haffner, P., Franzini, M., and Waibel, A. (1991). Integrating Time Alignment and Neural Networks for High Performance Continuous Speech Recognition. In Proceedings ICASSP 91. IEEE. Keeler, J. D., Rumelhart, D. E., and Leow, W. (1991). Integrated Segmentation and Recognition of Handwritten-Printed Numerals. In Lippmann, Moody, and Touretzky, editors, Advances in Neural Information Processing Systems, volume 3. Morgan Kaufmann. Lang, K. J. and Hinton, G. E. (1988).
A Time Delay Neural Network Architecture for Speech Recognition. Technical Report CMU-CS-88-152, Carnegie-Mellon University, Pittsburgh, PA. Le Cun, Y., Matan, O., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., and Baird, H. S. (1990). Handwritten Zip Code Recognition with Multilayer Networks. In Proceedings of the 10th International Conference on Pattern Recognition. IEEE Computer Society Press. Matan, O., Bromley, J., Burges, C. J. C., Denker, J. S., Jackel, L. D., Le Cun, Y., Pednault, E. P. D., Satterfield, W. D., Stenard, C. E., and Thompson, T. J. (1991). Reading Handwritten Digits: A ZIP Code Recognition System (to appear in COMPUTER). Rabiner, L. R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77:257-286.
JANUS: Speech-to-Speech Translation Using Connectionist and Non-Connectionist Techniques Alex Waibel* Ajay N. Jain† Arthur McNair Joe Tebelskis School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Louise Osterholtz Computational Linguistics Program Carnegie Mellon University Hiroaki Saito Keio University Tokyo, Japan Otto Schmidbauer Siemens Corporation Munich, Germany Tilo Sloboda Monika Woszczyna University of Karlsruhe Karlsruhe, Germany ABSTRACT We present JANUS, a speech-to-speech translation system that utilizes diverse processing strategies, including connectionist learning, traditional AI knowledge representation approaches, dynamic programming, and stochastic techniques. JANUS translates continuously spoken English and German into German, English, and Japanese. JANUS currently achieves 87% translation fidelity from English speech and 97% from German speech. We present the JANUS system along with comparative evaluations of its interchangeable processing components, with special emphasis on the connectionist modules. *Also with University of Karlsruhe, Karlsruhe, Germany. †Now with Alliant Techsystems Research and Technology Center, Hopkins, Minnesota. 1 INTRODUCTION In an age of increasing globalization of our economies and ever more efficient communication media, one important challenge is the need for effective ways of overcoming language barriers. Human translation efforts are generally expensive and slow, thus eliminating this possibility between individuals and around rapidly changing material (e.g. newscasts, newspapers). This need has recently led to a resurgence of effort in machine translation, mostly of written language. Much of human communication, however, is spoken, and the problem of spoken language translation must also be addressed. If successful, speech-to-text translation systems could lead to automatic subtitles in TV broadcasts and cross-linguistic dictation.
Speech-to-speech translation could be deployed as an interpreting telephone service in restricted domains such as cross-linguistic hotel/conference reservations, catalog purchasing, travel planning, etc., and eventually in general domains, such as person-to-person telephone calls. Apart from telephone service, speech translation could facilitate multilingual negotiations and collaboration in face-to-face or video-conferencing settings. With the potential applications so promising, what are the scientific challenges? Speech translation systems will need to address three distinct problems: • Speech Recognition and Understanding: A naturally spoken utterance must be recognized and understood in the context of ongoing dialog. • Machine Translation: A recognized message must be translated from one language into another (or several others). • Speech Synthesis: A translated message must be synthesized in the target language. Considerable challenges still face the development of each of the components, let alone the combination of the three. Among them only speech synthesis is mature enough for commercial systems to exist that can synthesize intelligible speech in several languages from text. But even here, to guarantee acceptance of the translation system, research is needed to improve naturalness and to allow for adaptation of the output speech (in the target language) to the voice characteristics of the input speaker. Speech recognition systems to date are generally limited in vocabulary size, and can only accept grammatically well-formed utterances. They require improvement to handle spontaneous unrestricted dialogs. Machine translation systems require considerable development effort to work reasonably well in a given language pair and domain, and generally require syntactically well-formed input sentences. Improvements are needed to handle ill-formed sentences well and to allow for flexibility in the face of changes in domain and language pairs.
Beyond the challenges facing each system component, the combination of the three also introduces extra difficulties. Both the speech recognition and machine translation components must deal with spoken language: ill-formed, noisy input, both acoustically and syntactically. Therefore, the speech recognition component must be concerned less with transcription fidelity than with semantic fidelity, while the MT component must try to capture the meaning or intent of the input sentence without being guaranteed a syntactically legal sequence of words. In addition, non-symbolic prosodic information (intonation, rhythm, etc.) and dialog state must be taken into consideration to properly translate an input utterance. A closer cooperation between traditional signal processing and language-level processing must be achieved.

Figure 1: High-level JANUS architecture.

JANUS is our first attempt at multilingual speech translation. It is the result of a collaborative effort between ATR Interpreting Telephony Research Laboratories, Carnegie Mellon University, Siemens Corporation, and the University of Karlsruhe. JANUS currently accepts continuously spoken sentences from a conference registration scenario, where a fictitious caller attempts to register for an international conference. The dialogs are read aloud from dialog scripts that make use of a vocabulary of approximately 400 words. Speaker-dependent and independent versions of the input recognition systems have been developed. JANUS currently accepts continuously spoken English and German input and produces spoken German, English, and Japanese output as a result. While JANUS has some of the limitations mentioned above, it is the first tri-lingual continuous large-vocabulary speech translation system to date. It is a vehicle toward overcoming some of the limitations described.
A particular focus is the trainability of system components, so that flexible, adaptive, and robust systems may result. JANUS is a hybrid system that uses a blend of computational strategies: connectionist, statistical and knowledge-based techniques. This paper will describe each of JANUS's processing components separately and particularly highlight the relative contributions of connectionist techniques within this ensemble. Figure 1 shows a high-level diagram of JANUS's components. 2 SPEECH RECOGNITION Two alternative speech recognition systems are currently used in JANUS: Linked Predictive Neural Networks (LPNNs) and Learned Vector Quantization networks (LVQ) (Tebelskis et al. 1991; Schmidbauer and Tebelskis 1992). They are both connectionist, continuous-speech recognition systems, and both have vocabularies of approximately 400 English and 400 German words. Each uses statistical bigram or word-pair grammars derived from the conference registration database. The systems are based on canonical phoneme models (states) that can be logically concatenated in any order to create models for different words. The need for training data with labeled phonemes can be reduced by first bootstrapping the networks on a small amount of speech with forced phoneme boundaries, then training on the whole database using only forced word boundaries. In the LPNN system, each phoneme model is implemented by a predictive neural network. Each network is trained to accurately predict the next frame of speech within segments of speech corresponding to its phoneme model. Continuous scores (prediction errors) are accumulated for various word candidates. The LPNN module produces either a single hypothesized sentence or the first N best hypotheses using a modified dynamic-programming beam-search algorithm (Steinbiss 1989). The LPNN system has speaker-dependent word accuracy rates of 93% with first-best recognition, and sentence accuracy of 69%.
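The predictive scoring idea behind the LPNN, in which each phoneme model predicts the next speech frame and the accumulated prediction error scores a word candidate, can be sketched as follows (all names are illustrative, and the frame-to-phoneme alignment is taken as given rather than found by dynamic programming):

```python
import numpy as np

def lpnn_word_score(frames, predictors, alignment):
    """Sum of squared prediction errors for a word candidate.
    `predictors` maps a phoneme label to a function predicting frame
    t+1 from frame t; `alignment` names the phoneme responsible for
    each transition. Lower scores indicate a better match."""
    err = 0.0
    for t, ph in enumerate(alignment):
        predicted = predictors[ph](frames[t])
        err += float(np.sum((frames[t + 1] - predicted) ** 2))
    return err

# A perfect predictor of a constant signal accumulates zero error.
frames = np.ones((3, 8))                  # 3 frames of 8 coefficients
predictors = {"a": lambda f: f}           # trivial stand-in predictor
print(lpnn_word_score(frames, predictors, alignment=["a", "a"]))  # 0.0
```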
LVQ is a vector clustering technique based on neural networks. We have used LVQ to automatically cluster speech frames into a set of acoustic features; these features are fed into a set of output units that compute the emission probability for HMM states. This technique gives speaker-dependent word accuracy rates of 98%, 86%, and 82% for English conference registration tasks of perplexity 7, 61, and 111, respectively. The sentence recognition rate at perplexity 7 is 80%. We are also evaluating other approaches to speech recognition, such as the Multi-State TDNN for continuous speech (Haffner, Franzini, and Waibel 1991) and a neural-network based word spotting system that may be useful for modeling spontaneous speech effects (Zeppenfield and Waibel 1992). The recognition systems' text output serves as input to the alternative parsing modules of JANUS. 3 LANGUAGE UNDERSTANDING AND TRANSLATION 3.1 LANGUAGE ANALYSIS The translation module of JANUS is based on the Universal Parser Architecture (UPA) developed at Carnegie Mellon (Tomita and Carbonell 1987; Tomita and Nyberg 1988). It is designed for efficient multi-lingual translation. Text in a source language is parsed into a language-independent frame-based interlingual representation. From the interlingua, text can be generated in different languages. The system requires hand-written parsing and generation grammars for each language to be processed. The parsing grammars are based on a Lexical Functional Grammar formalism, and are implemented using Tomita's Generalized LR Parsing Algorithm (Tomita 1991). The generation grammars are compiled into LISP functions. Both parsing and generation with UPA approach real time. Figure 2 shows an example of the input, interlingual representation, and output of the JANUS system. 3.2 PARSEC: CONNECTIONIST PARSING JANUS can use a connectionist parser in place of the LR parser to process the output of the speech system.
PARSEC is a structured connectionist parsing architecture that is geared toward the problems found in spoken language (for details, see Jain 1992 (in this volume) and Jain's PhD thesis, in preparation). PARSEC networks exhibit three strengths: • They automatically learn to parse, and generalize well compared to hand-coded grammars. • They tolerate several types of noise without any explicit noise modeling. • They can learn to use multi-modal input such as pitch in conjunction with syntax and semantics. The PARSEC network architecture relies on a variation of supervised back-propagation learning. The architecture differs from some other connectionist approaches in that it is highly structured, both at the macroscopic level of modules, and at the microscopic level of connections.

Figure 2: Example of input, interlingua, and output of JANUS. Input: "Hello is this the office for the conference." Interlingual representation: ((CFNAME *is-this-phone) (MOOD *interrogative) (OBJECT ((NUMBER sg) (DET the) (CFNAME *conf-office))) (SADJUNCT1 ((CFNAME *hello)))). Output: Japanese: MOSHI MOSHI KAIGI JIMUKYOKU DESUKA; German: HALLO IST DIES DAS KONFERENZBUERO.

3.2.1 Learning and Generalization Through exposure to example output parses, PARSEC networks learn parsing behavior. Trained networks generalize well compared to hand-written grammars. In direct tests of coverage for the conference registration domain, PARSEC achieved 67% correct parsing of novel sentences, whereas hand-written grammars achieved just 5%, 25%, and 38% correct. Two of the grammars were written as part of a contest with a large cash prize for best coverage. The process of training PARSEC networks is highly automated, and is made possible through the use of constructive learning coupled with a robust control procedure that dynamically adjusts learning parameters during training.
Novice users of the PARSEC system were able to train networks for parsing a German-language version of the conference registration task and a novel English air-travel reservation task. 3.2.2 Noise Tolerance We have compared PARSEC's performance on noisy input with that of hand-written grammars. On synthetic ungrammatical conference registration sentences, PARSEC produced acceptable interpretations 66% of the time, with the three hand-coded grammars mentioned above performing at 2%, 38%, and 34%, respectively. We have also evaluated PARSEC in the context of noisy speech recognition in JANUS, and this is discussed later. 3.2.3 Multi-Modal Input A somewhat elusive goal of spoken language processing has been to utilize information from the speech signal beyond just word sequences in higher-level processing. It is well known that humans use such information extensively in conversation. Consider the utterances "Okay." and "Okay?" Although semantically distinct, they cannot be distinguished based on word sequence, but pitch contours contain the necessary information (Figure 3).

Figure 3: Smoothed pitch contours for "Okay." (duration = 409.1 msec, mean freq = 113.2 Hz) and "Okay?" (duration = 377.0 msec, mean freq = 137.3 Hz).

In a grammar-based system, it is difficult to incorporate real-valued vector input in a useful way. In a PARSEC network, the vector is just another set of input units. A module of a PARSEC network was augmented to contain an additional set of units that carry pitch information. The pitch contours were smoothed output from the OGI Neural Network Pitch Tracker (Barnard et al. 1991). Within the JANUS system, the augmented PARSEC network brings new functionality. Intonation affects translation in JANUS when using the augmented PARSEC network.
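The contrast in Figure 3, falling pitch for "Okay." versus rising pitch for "Okay?", can be sketched with a simple end-slope test (a toy illustration only; PARSEC learns its use of the pitch units rather than applying a hand-coded rule):

```python
def is_question(pitch):
    """Heuristic: an utterance reads as a question when the mean pitch
    over its final quarter exceeds the mean over the earlier part."""
    k = max(1, len(pitch) // 4)
    tail = sum(pitch[-k:]) / k
    head = sum(pitch[:-k]) / max(1, len(pitch) - k)
    return tail > head

print(is_question([120, 118, 115, 112]))  # falling contour -> False
print(is_question([110, 112, 118, 140]))  # rising contour  -> True
```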
The sentence "This is the conference office." is translated to "Kaigi jimukyoku desu."; "This is the conference office?" is translated to "Kaigi jimukyoku desuka?" This required no changes in the other modules of the JANUS system. It should also be possible to use other types of information from the speech signal to aid in robust parsing (e.g. energy patterns to disambiguate clausal structure). 4 SPEECH SYNTHESIS To generate intelligible speech in the respective target languages, we have predominantly used commercial devices. Most notably, DEC-talk has provided unrestricted English text-to-speech synthesis. DEC-talk has also been used for Japanese and German synthesis. The internal English text-to-phoneme conversion rules and tables of DEC-talk were bypassed by external German and Japanese text-to-phoneme look-up tables that convert the German/Japanese target sentences into phonemic strings for DEC-talk synthesis. The resulting synthesis is limited to the task vocabulary, but the external tables result in intelligible German and Japanese speech, albeit with a pronounced American accent. To allow for greater flexibility in vocabulary and more language-specific synthesis, several alternate devices are currently being integrated. For Japanese, in particular, two high quality speech synthesizers developed separately by NEC and ATR will be used to provide more satisfactory results. In JANUS, no attempt has so far been made to adapt the output speech to the input speaker's voice characteristics. However, this has recently been demonstrated by work with codebook mapping (Abe, Shikano, and Kuwabara 1990) and connectionist mapping techniques (Huang, Lee, and Waibel 1991). 5 IMPLEMENTATION ISSUES AND PERFORMANCE 5.1 Parallel Hardware Neural network forward passes for the speech recognizer were programmed on two general-purpose parallel machines:
a MasPar computer at the University of Karlsruhe, Germany, and an Intel iWarp at Carnegie Mellon. The MasPar is a parallel SIMD machine with 4096 processing elements. The iWarp is a MIMD machine, and a 16 MHz, 64-cell experimental version was used for testing. The use of parallel hardware and algorithms has significantly decreased JANUS's processing time. Compared to forward pass calculations performed by a DecStation 5000, the iWarp is 9 times faster (41.4 million connections per second). The MasPar does the forward pass calculations for a two-second utterance in less than 500 milliseconds. Both the iWarp and MasPar are scalable. Efforts are underway to implement other parts of JANUS on parallel hardware with the goal of near real-time performance. 5.2 Performance Currently, English JANUS using the LR parsing module (JANUS-LR) performs at 87% correct translation using the LPNN speech system with the N-best sentence hypotheses. German JANUS performs at 97% correct translation (on a subset of the conference registration database) using German versions of the LPNN system and LR parsing grammar. English JANUS using PARSEC (JANUS-NN) does not perform as well as the LR parser version in N-best mode, with 80% correct translation. PARSEC is not able to select from a list of ranked candidate utterance hypotheses as robustly as the LR parser using a very tight grammar. However, the grammar used for this comparison achieves only 5% coverage of novel test sentences, compared with PARSEC's 67%. This vast difference in coverage explains some of the N-best performance difference. In first-best mode, however, JANUS-NN does better than JANUS-LR (77% versus 70%). The PARSEC network is able to produce acceptable parses for a number of noisy speech recognition hypotheses, but JANUS-LR tends to reject those hypotheses as unparsable. PARSEC's flexibility, which hurt its N-best performance, enhances its first-best performance.
No performance evaluations were carried out using German PARSEC in German JANUS. 6 CONCLUSION In this paper we have described JANUS, a multi-lingual speech-to-speech translation system. JANUS uses a mixture of connectionist, statistical and rule-based strategies to achieve this goal. Connectionist models have contributed high-performance recognition and parsing as well as greater robustness in the light of task variations and syntactically ill-formed sentences. Connectionist models also provide a mechanism for gracefully merging traditionally distinct symbolic (syntax) and signal-level (intonation) information, and achieve successful disambiguation between grammatical statements whose mood can be affected by intonation. Finally, connectionist sentence analysis appears to offer high flexibility, as the relevant modules can be retrained automatically for new tasks, domains and even languages without laborious recoding. We plan to continue exploring different mixtures of computing paradigms to achieve higher performance. 190 Waibel, et al. Acknowledgements The authors gratefully acknowledge the support of ATR Interpreting Telephony Laboratories, Siemens Corporation, NEC Corporation, and the National Science Foundation. References Abe, M., K. Shikano, and H. Kuwabara. 1990. Cross Language Voice Conversion. In IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Barnard, E., R. A. Cole, M. P. Vea, and F. A. Alleva. 1991. Pitch Detection with a Neural-Net Classifier. IEEE Transactions on Signal Processing 39(2): 298-307. Haffner, P., M. Franzini, and A. Waibel. 1991. Integrating time alignment and neural networks for high performance speech recognition. In IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Huang, X. D., K. F. Lee, and A. Waibel. 1991. In Proceedings of the IEEE-SP Workshop on Neural Networks for Signal Processing. Jain, A. N. 1992.
Generalization performance in PARSEC: A structured connectionist learning architecture. In Advances in Neural Information Processing Systems 4, ed. J. E. Moody, S. J. Hanson, and R. P. Lippmann. San Mateo, CA: Morgan Kaufmann Publishers. Jain, A. N. In preparation. PARSEC: A Connectionist Learning Architecture for Parsing Spoken Language. PhD Thesis, School of Computer Science, Carnegie Mellon University. Schmidbauer, O. and J. Tebelskis. 1992. An LVQ based reference model for speaker-adaptive speech recognition. In IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Steinbiss, V. 1989. Sentence-hypothesis generation in a continuous-speech recognition system. In Proceedings of the 1989 European Conference on Speech Communication and Technology, Vol. 2, 51-54. Tebelskis, J., A. Waibel, B. Petek, and O. Schmidbauer. 1991. Continuous speech recognition by Linked Predictive Neural Networks. In Advances in Neural Information Processing Systems 3, ed. R. Lippmann, J. Moody, and D. Touretzky. San Mateo, CA: Morgan Kaufmann Publishers. Tomita, M. (ed.). 1991. Generalized LR Parsing. Norwell, MA: Kluwer Academic Publishers. Tomita, M. and J. G. Carbonell. 1987. The Universal Parser Architecture for Knowledge-Based Machine Translation. Technical Report CMU-CMT-87-01, Center for Machine Translation, Carnegie Mellon University. Tomita, M. and E. Nyberg. 1988. Generation Kit and Transformation Kit. Technical Report CMU-CMT-88-MEMO, Center for Machine Translation, Carnegie Mellon University. Zeppenfeld, T. and A. Waibel. 1992. A hybrid neural network, dynamic programming word spotter. In IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.
A Parallel Analog CCD/CMOS Signal Processor Charles F. Neugebauer Amnon Yariv Department of Applied Physics California Institute of Technology Pasadena, CA 91125 Abstract A CCD based signal processing IC that computes a fully parallel single quadrant vector-matrix multiplication has been designed and fabricated with a 2 µm CCD/CMOS process. The device incorporates an array of Charge Coupled Devices (CCD) which hold an analog matrix of charge encoding the matrix elements. Input vectors are digital with 1-8 bit accuracy. 1 INTRODUCTION Vector-matrix multiplication (VMM) is often used in neural network theories to describe the aggregation of signals by neurons. An input vector encoding the activation levels of input neurons is multiplied by a matrix encoding the synaptic connection strengths to create an output vector. The analog VLSI architecture presented here has been devised to perform the vector-matrix multiplication using CCD technology. The architecture calculates a VMM in one clock cycle, an improvement over previous semiparallel devices (Agranat et al., 1988), (Chiang, 1990). This architecture is also useful for general signal processing applications where moderate resolution is required, such as image processing. As most neural models have robust behavior in the presence of noise and inaccuracies, analog VLSI offers the potential for highly compact neural circuitry. Analog multiplication circuitry can be made much smaller than its digital equivalent, offering substantial savings in power and IC size at the expense of limited accuracy and programmability. Digital I/O, however, is desirable as it allows the use of standard memory and control circuits at the system level. The device presented here has digital input and analog output and elucidates all relevant performance characteristics including accuracy, speed, power dissipation and charge retention of the VMM. In practice,
on-chip charge-domain A/D converters are used for converting analog output signals to facilitate digital communication with off-chip devices. [Figure 1: Simplified Schematic of CID Vector Matrix Multiplier] 2 ARCHITECTURE DESCRIPTION The vector-matrix multiplier consists of a matrix of CCD cells that resemble Charge Injection Device (CID) imager pixels in that one of the cell's gates is connected vertically from cell to cell forming a column electrode while another gate is connected horizontally forming a row electrode. The charge stored beneath the row and column gates encodes the matrix. A simplified schematic in Figure 1 shows the array organization. 2.1 BINARY VECTOR MATRIX MULTIPLICATION In its most basic configuration, the VMM circuit computes the product of a binary input vector, u_j, and an analog matrix of charge. The computation done by each CID cell in the matrix is a multiply-accumulate in which the charge, Q_ij, is multiplied by a binary input vector element, u_j, encoded on the column line, and this product is summed with other products in the same row to form the vector product, I_i, on the row lines. Multiplication by a binary number is equivalent to adding or not adding the charge at a particular matrix element to its associated row line. The matrix element operation is shown in Figure 2, which displays a cross-section of one of the rows with the associated potential wells at different times in the computation. [Figure 2: CID Cell Operation]
In the initial state, prior to the VMM computation, the matrix of charges Q_ij is moved beneath the column electrodes by placing a positive voltage on all column lines, shown in Figure 2(a). A positive voltage creates a deep potential well for electrons. At this point, the row lines are reset to a reference voltage, V_row, by FETs Q1 and then disconnected from the voltage source, shown in Figure 2(b). The computation occurs when the column lines are pulsed to a negative voltage corresponding to the input vector u_j, shown in Figure 2(c). The binary u_j is represented by a negative pulse on the jth column line if the element u_j is a binary 1; otherwise the column line is kept at the positive voltage. This causes the charges in the columns that correspond to binary 1's in the input vector to be transferred to their respective row electrodes, which thus experience a voltage change given by $\Delta V_i = \frac{1}{C_{\mathrm{row}}} \sum_{j=0}^{N-1} Q_{ij} u_j$ where N is the number of elements in the input vector and C_row is the total capacitance of the row electrode. Once the charge has been transferred, the column lines are reset to their original positive voltages¹, resulting in the potential diagram in Figure 2(d). The voltage changes on the row lines are then sampled and the matrix of charges are returned to the column electrodes in preparation for the next VMM by pulsing the row electrodes negative as in Figure 2(e). In this manner, a complete binary vector is multiplied by an analog matrix of charge in one CCD clock cycle. 3 DESIGN AND OPERATION The implementation of this architecture contains facilities for electronic loading of the matrix. Originally proposed as an optically loaded device (Agranat et al., 1988), the electronically loaded version has proven more reliable and consistent.
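The one-cycle binary multiply described above can be sketched numerically. This is an idealized model of the charge transfer (the charge values and row capacitance are illustrative, not measured device parameters):

```python
import numpy as np

# Idealized sketch of the charge-domain computation: a binary input vector u
# gates which column charges Q[i][j] are dumped onto row i, producing a
# row-voltage change dV_i = (1/C_row) * sum_j Q_ij * u_j.

def cid_vmm(Q, u, C_row):
    """One-cycle binary vector-matrix multiply, returned as row-voltage changes."""
    Q = np.asarray(Q, dtype=float)
    u = np.asarray(u, dtype=float)      # binary {0, 1} input vector
    return Q @ u / C_row

Q = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])         # charge matrix (arbitrary units)
u = np.array([1, 0, 1])                 # binary input vector
print(cid_vmm(Q, u, C_row=2.0))         # rows: (1+3)/2 and (4+6)/2
```

A real device would add noise and finite charge-transfer efficiency; the point here is only the selective summation per row.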
3.1 LOADING THE CCD ARRAY WITH MATRIX ELEMENTS The CCD matrix elements described above can be modified to operate as standard four phase CCD shift registers by simply adding another gate. The matrix cell is shown in Figure 3. The fabricated single quadrant cell size is 24 µm by 24 µm using a 2 µm minimum feature size CCD/CMOS process. More aggressive design rules in the same process can reduce this to 20 µm by 20 µm. These cells, when abutted with each other in a row, form a horizontal shift register which is used to load the matrix. Electronic loading of the matrix is accomplished in a fashion similar to CCD imagers. A fast CCD shift register running vertically is added along one side of the matrix which is loaded with one column of matrix charges from a single external analog data source. Once the fast shift register is loaded, it is transferred into the array by clocking the matrix electrodes to act as an array of horizontal shift registers, shown in Figure 3(a). This process is repeated until the entire matrix has been filled with charge. ¹Returning the column lines to their original voltage levels has the effect of canceling the effect of stray capacitive coupling between the row and column lines, since the net column voltage change is zero. [Figure 3: CID Cell Used to Load Matrix] When the matrix has been loaded, the charge can be used for computation with two of the four gates at each matrix cell kept at constant potentials, shown in Figure 3(b). The computation process moves the charge repeatedly between two electrodes. Incomplete charge transfer, a problem with our previous architecture (Agranat et al., 1990), does not degrade performance since any charge left behind under the column gates during computation is picked up on the next cycle, shown in Figure 2(e).
Only dark current generation degrades the matrix charges during VMM, causing them to increase nonuniformly. In order to limit the effects of dark current generation on the matrix precision, the matrix charge must be refreshed periodically. 3.2 FLOATING GATE ROW AMPLIFIERS In order to achieve better linearity when sensing charge, a floating gate amplifier is often used in CCD circuits. In the scheme described above, the induced voltage change of the row electrode significantly modifies its parasitic capacitance, resulting in a nonlinear voltage versus charge characteristic. To alleviate this problem, an operational amplifier with a capacitor in the feedback loop is added to each row line, shown in Figure 4. When charge is moved underneath the row line in the course of a VMM operation, the row voltage is kept constant by the action of the op-amp, with an output voltage given by $\Delta V_i = \frac{1}{C_f} \sum_{j=0}^{N-1} Q_{ij} u_j$ where C_f is the feedback capacitance. [Figure 4: Linear Charge Sensing] The feedback capacitor is a poly-poly structure with vastly improved linearity compared to the row capacitance. This enhancement also has the effect of speeding the row line summation due to the well known benefits of current mode transmission. In addition, the possibility of digitally selecting a feedback capacitor value by switching power-of-two sized capacitors into the feedback loops creates a practical means of controlling the gain of the output amplifiers, with the potential for significantly extending the dynamic range of the device. 3.3 DIGITAL INPUT BUFFER AND DIVIDE-BY-TWO CIRCUITRY Many applications such as image processing require multilevel input capability. This can easily be implemented by using the VMM circuitry in a bit-serial mode. The operation of the device is identical to the structure described above except that processing n-bit input precision requires n cycles of the device.
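The gain-control idea of Section 3.2 (switching power-of-two sized capacitors into the op-amp feedback loop) can be sketched as follows; the capacitor values and switch encoding are illustrative assumptions, not the actual circuit:

```python
# Sketch of digitally selectable gain: the row output is dV = Q_total / C_f,
# so halving the feedback capacitance C_f doubles the output swing.

def row_output(charge_sum, unit_cap, switches):
    """switches[k] selects a capacitor of value unit_cap * 2**k into the loop."""
    C_f = sum(unit_cap * (2 ** k) for k, on in enumerate(switches) if on)
    return charge_sum / C_f

q = 8.0                                           # total transferred charge (arb. units)
print(row_output(q, 1.0, [True, False, False]))   # C_f = 1 -> output 8.0
print(row_output(q, 1.0, [False, False, True]))   # C_f = 4 -> output 2.0
```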
Digital shift registers are added to each input column line that sequentially present the column lines with successively more significant bits of the input vector, shown in Figure 5. Using the notation $u_j^{(n-1)}$, which represents the binary vector formed by taking the nth bits of all the input elements, the first VMM done by the circuit is given by $\Delta V_i^{(0)} = \frac{1}{C_f} \sum_{j=0}^{N-1} Q_{ij} u_j^{(0)}$ where $\Delta V_i^{(0)}$ is the output vector represented as voltage changes on the row lines. The row voltages are stored on large capacitors, C1, which are allowed to share charge with another set of equally sized capacitors, C2, effectively dividing the output vector by two. [Figure 5: Switched Capacitor Divide-By-Two Circuit] The next most significant bit input vector, $u_j^{(1)}$, is then multiplied and creates another set of row voltage changes which are stored and shared to add another charge to the previously divided charge, giving $V_i^{\mathrm{out}}(1) = \frac{1}{C_f} \sum_{j=0}^{N-1} Q_{ij} u_j^{(1)} + \frac{1}{2 C_f} \sum_{j=0}^{N-1} Q_{ij} u_j^{(0)}$ where $V_i^{\mathrm{out}}(1)$ is the voltage on C2 after two clock cycles. The process is repeated n times, effectively weighting each successive bit's data by the proper power of two factor, giving a total output voltage of $V_i^{\mathrm{out}}(n-1) = \frac{1}{C_f} \sum_{j=0}^{N-1} Q_{ij} \sum_{k=1}^{n} 2^{k-n} u_j^{(k-1)} = \frac{1}{C_f} \sum_{j=0}^{N-1} Q_{ij} D_j$ after n clock cycles, where D_j now represents the multivalued digital input vector. In this manner, multivalued input of n-bit precision can be processed, where n is only limited by the analog accuracy of the components². 4 EXPERIMENTAL RESULTS A number of VMM circuits have been fabricated implementing the architecture described above in a 2 µm double-poly CCD/CMOS process. The largest circuit contains a 128x128 array of matrix elements. The matrix is loaded electronically through a single pin using the CCD shift register mode of the CID cell, shown in Figure 3.
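The bit-serial scheme above can be checked with a small simulation. This is an idealized model (ideal charge sharing, no noise), presenting bits least-significant first so that the repeated halving leaves the most significant bit with unit weight, as in the equations above:

```python
import numpy as np

# Idealized sketch of the bit-serial scheme: after each binary VMM the
# accumulated row charge is halved by capacitor sharing, so the bit
# presented at cycle k ends up weighted 2**(k - (n_bits - 1)).

def bit_serial_vmm(Q, D, n_bits, C_f=1.0):
    Q = np.asarray(Q, dtype=float)
    acc = np.zeros(Q.shape[0])
    for k in range(n_bits):                   # LSB first, MSB last
        bits = (np.asarray(D) >> k) & 1       # u^(k): k-th bit of each input
        acc = acc / 2.0 + Q @ bits / C_f      # share charge, then add
    return acc

Q = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.array([3, 5])                          # 3-bit digital inputs
out = bit_serial_vmm(Q, D, n_bits=3)
ref = Q @ D / 2 ** (3 - 1)                    # full-precision product, scaled
print(np.allclose(out, ref))                  # True
```

The scaled reference also illustrates the footnote's point: because the scaling is divisive, the MSB always carries the same weight regardless of word length.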
Matrix element mismatches due to threshold variations are avoided since all matrix elements are created by the same set of electrodes. A list of relevant system characteristics is given in Table 1. ²If 4-bit input is required, the device is simply clocked four times. Since the power of two scaling is divisive, the most significant bit is always given the same weighting regardless of the input word length. The matrix of charge is loaded in 4 ms and needs to be refreshed every 20 ms to retain acceptable weight accuracy at room temperature, giving a refresh overhead of 20%. A simple linear filter bank was loaded with a sinusoidal matrix and multiplied with a slowly chirped input signal to determine the linearity and noise limits. Table 1: Experimental Results
Charge Transfer Efficiency: 0.99995
Cell Size: 24 µm x 24 µm
Bit Rate: 4 MHz
Refresh Time: 4 ms
Noise Limits: 7 bits
Linearity: 5 bits
Power Consumption (excluding output drivers): <100 mW
Connections Per Second (binary input vectors): 6.4 x 10^10
5 SUMMARY A CCD based vector matrix multiplication scheme has been developed that offers high speed and low power in addition to provisions for digital I/O. Intended for neural network and image processing applications, the architecture is designed to integrate well into digital environments. Acknowledgements This work was supported by a grant from the U.S. Army Center for Signals Warfare. References A. Agranat, C. F. Neugebauer and A. Yariv. (1988) Parallel Optoelectronic Realization of Neural Network Models Using CID Technology. Applied Optics 27:4354-4355. A. Agranat, C. F. Neugebauer, R. D. Nelson and A. Yariv. (1990) The CCD Neural Processor: A Neural Integrated Circuit with 65,536 Programmable Analog Synapses. IEEE Trans. on Circuits and Systems 37:1073-1075. A. M. Chiang. (1990) A CCD Programmable Signal Processor. IEEE Journal of Solid State Circuits 25:1510-1517.
Iterative Construction of Sparse Polynomial Approximations Terence D. Sanger Massachusetts Institute of Technology Room E25-534 Cambridge, MA 02139 tds@ai.mit.edu Richard S. Sutton GTE Laboratories Incorporated 40 Sylvan Road Waltham, MA 02254 sutton@gte.com Christopher J. Matheus GTE Laboratories Incorporated 40 Sylvan Road Waltham, MA 02254 matheus@gte.com Abstract We present an iterative algorithm for nonlinear regression based on construction of sparse polynomials. Polynomials are built sequentially from lower to higher order. Selection of new terms is accomplished using a novel look-ahead approach that predicts whether a variable contributes to the remaining error. The algorithm is based on the tree-growing heuristic in LMS Trees, which we have extended to approximation of arbitrary polynomials of the input features. In addition, we provide a new theoretical justification for this heuristic approach. The algorithm is shown to discover a known polynomial from samples, and to make accurate estimates of pixel values in an image-processing task. 1 INTRODUCTION Linear regression attempts to approximate a target function by a model that is a linear combination of the input features. Its approximation ability is thus limited by the available features. We describe a method for adding new features that are products or powers of existing features. Repeated addition of new features leads to the construction of a polynomial in the original inputs, as in (Gabor 1961). Because there is an infinite number of possible product terms, we have developed a new method for predicting the usefulness of entire classes of features before they are included. The resulting nonlinear regression will be useful for approximating functions that can be described by sparse polynomials. [Figure 1: Network depiction of linear regression on a set of features x_i.]
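The linear model of Figure 1 amounts to an ordinary least-squares fit of f on the features; a minimal sketch with illustrative data:

```python
import numpy as np

# Minimal sketch of the linear model in Figure 1: f_hat = sum_i c_i x_i,
# with coefficients fit by least squares. The data here are illustrative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))          # 200 samples of 3 features
f = X @ np.array([2.0, -1.0, 0.5])         # a target that IS linear in X
c, *_ = np.linalg.lstsq(X, f, rcond=None)
print(np.round(c, 6))                      # recovers [2, -1, 0.5]
```

When f is not expressible in the current features, the residual e = f - X @ c is what the construction algorithm below tries to explain with new product features.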
2 THEORY Let $\{x_i\}_{i=1}^{n}$ be the set of features already included in a model that attempts to predict the function f. The output of the model is a linear combination $\hat f = \sum_{i=1}^{n} c_i x_i$ where the c_i's are coefficients determined using linear regression. The model can also be depicted as a single-layer network as in Figure 1. The approximation error is $e = f - \hat f$, and we will attempt to minimize $E[e^2]$ where E is the expectation operator. The algorithm incrementally creates new features that are products of existing features. At each step, the goal is to select two features x_p and x_q already in the model and create a new feature x_p x_q (see Figure 2). Even if x_p x_q does not decrease the approximation error, it is still possible that x_p x_q x_r will decrease it for some x_r. So in order to decide whether to create a new feature that is a product with x_p, the algorithm must "look ahead" to determine if there exists any polynomial a in the x_i's such that inclusion of a x_p would significantly decrease the error. If no such polynomial exists, then we do not need to consider adding any features that are products with x_p. Define the inner product between two polynomials a and b as $\langle a | b \rangle = E[ab]$ where the expected value is taken with respect to a probability measure µ over the (zero-mean) input values. The induced norm is $\|a\|^2 = E[a^2]$, and let P be the set of polynomials with finite norm. {P, ⟨·|·⟩} is then an infinite-dimensional linear vector space. The Weierstrass approximation theorem proves that P is dense in the set of all square-integrable functions over µ, and thus justifies the assumption that any function of interest can be approximated by a member of P. Assume that the error e is a polynomial in P. In order to test whether $a x_p$ participates in e for any polynomial a in P, we write $e = a_p x_p + b_p$ [Figure 2: Incorporation of a new product term into the model.]
where a_p and b_p are polynomials, and a_p is chosen to minimize $\|a_p x_p - e\|^2 = E[(a_p x_p - e)^2]$. The orthogonality principle then shows that $a_p x_p$ is the projection of the polynomial e onto the linear subspace of polynomials $x_p P$. Therefore, b_p is orthogonal to $x_p P$, so that $E[b_p g] = 0$ for all g in $x_p P$. We now write $E[e^2] = E[a_p^2 x_p^2] + 2E[a_p x_p b_p] + E[b_p^2] = E[a_p^2 x_p^2] + E[b_p^2]$ since $E[a_p x_p b_p] = 0$ by orthogonality. If $a_p x_p$ were included in the model, it would thus reduce $E[e^2]$ by $E[a_p^2 x_p^2]$, so we wish to choose x_p to maximize $E[a_p^2 x_p^2]$. Unfortunately, we have no direct measurement of a_p. 3 METHODS Although $E[a_p^2 x_p^2]$ cannot be measured directly, Sanger (1991) suggests choosing x_p to maximize $E[e^2 x_p^2]$ instead, which is directly measurable. Moreover, note that $E[e^2 x_p^2] = E[a_p^2 x_p^4] + 2E[a_p x_p^3 b_p] + E[x_p^2 b_p^2] = E[a_p^2 x_p^4] + E[x_p^2 b_p^2]$ and thus $E[e^2 x_p^2]$ is related to the desired but unknown value $E[a_p^2 x_p^2]$. Perhaps better would be to use $E[e^2 x_p^2] / E[x_p^2]$, which can be thought of as the regression of $(a_p^2 x_p^2) x_p$ against x_p. More recently, (Sutton and Matheus 1991) suggest using the regression coefficients of $e^2$ against $x_i^2$ for all i as the basis for comparison. The regression coefficients w_i are called "potentials", and lead to a linear approximation of the squared error: $e^2 \approx \sum_i w_i x_i^2$ (1). If a new term $a_p x_p$ were included in the model of f, then the squared error would be $b_p^2$, which is orthogonal to any polynomial in $x_p P$ and in particular to $x_p^2$. Thus the coefficient of $x_p^2$ in (1) would be zero after inclusion of $a_p x_p$, and $w_p E[x_p^2]$ is an approximation to the decrease in mean-squared error $E[e^2] - E[b_p^2]$ which we can expect from inclusion of $a_p x_p$. We thus choose x_p by maximizing $w_p E[x_p^2]$. This procedure is a form of look-ahead which allows us to predict the utility of a high-order term $a_p x_p$ without actually including it in the regression.
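The potentials heuristic can be illustrated with a small simulation (the data and target function are illustrative): a purely linear fit of f = 2·x0·x1 leaves essentially all of f in the residual, yet regressing e² on the squared features flags x0 and x1 as worth looking ahead on:

```python
import numpy as np

# Sketch of the "potentials" heuristic: regress the squared residual e^2
# against the squared features x_i^2; the coefficient w_p times E[x_p^2]
# predicts how much error an optimal term a_p * x_p could remove.

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal((n, 3))
f = 2.0 * x[:, 0] * x[:, 1]                 # target needs the product x0*x1

coef, *_ = np.linalg.lstsq(x, f, rcond=None)
e = f - x @ coef                            # linear fit leaves e ~ f

X2 = np.column_stack([x ** 2, np.ones(n)])  # regress e^2 on x_i^2 (+ constant)
w, *_ = np.linalg.lstsq(X2, e ** 2, rcond=None)
potentials = w[:3] * (x ** 2).mean(axis=0)
print(np.round(potentials, 1))              # x0 and x1 score high, x2 near zero
```

Note the look-ahead at work: neither x0 nor x1 alone reduces the error, but their squared-error potentials are large, so products involving them stay under consideration.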
This is perhaps most useful when the term is predicted to make only a small contribution for the optimal a_p, because in this case we can drop from consideration any new features that include x_p. We can choose a different variable x_q similarly, and test the usefulness of incorporating the product x_p x_q by computing a "joint potential" w_pq, which is the regression of the squared error against the model including a new term $x_p^2 x_q^2$. The joint potential attempts to predict the magnitude of the term $E[a_{pq}^2 x_p^2 x_q^2]$. We now use this method to choose a single new feature x_p x_q to include in the model. For all pairs x_i x_j such that x_i and x_j individually have high potentials, we perform a third regression to determine the joint potentials of the product terms x_i x_j. Any term with a high joint potential is likely to participate in f. We choose to include the new term x_p x_q with the largest joint potential. In the network model, this results in the construction of a new unit that computes the product of x_p and x_q, as in Figure 2. The new unit is incorporated into the regression, and the resulting error e will be orthogonal to this unit and all previous units. Iteration of this technique leads to the successive addition of new regression terms and the successive decrease in mean-squared error $E[e^2]$. The process stops when the residual mean-squared error drops below a chosen threshold, and the final model consists of a sparse polynomial in the original inputs. We have implemented this algorithm both in a non-iterative version that computes coefficients and potentials based on a fixed data set, and in an iterative version that uses the LMS algorithm (Widrow and Hoff 1960) to compute both coefficients and potentials incrementally in response to continually arriving data. In the iterative version, new terms are added at fixed intervals and are chosen by maximizing over the potentials approximated by the LMS algorithm.
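A minimal sketch of the iterative version, assuming plain LMS updates for both the model coefficients and the potentials (the learning rate and target function are illustrative, not the paper's settings):

```python
import numpy as np

# Sketch of the iterative variant: LMS (Widrow & Hoff 1960) updates both
# the model coefficients c and the potentials w from a stream of samples.
rng = np.random.default_rng(2)
n_feat, lr = 3, 0.01
c = np.zeros(n_feat)          # model coefficients
w = np.zeros(n_feat)          # potentials: LMS regression of e^2 on x_i^2

for _ in range(20000):
    x = rng.standard_normal(n_feat)
    f = 1.5 * x[0] + 2.0 * x[1] * x[2]    # only x0 is linearly explainable
    e = f - c @ x
    c += lr * e * x                        # LMS step for the linear model
    e2_err = e ** 2 - w @ (x ** 2)
    w += lr * e2_err * x ** 2              # LMS step for the potentials

print(np.round(c, 1))          # c0 near 1.5; c1, c2 near 0
print(w[1] > w[0], w[2] > w[0])            # x1 and x2 carry the potential
```

In the full algorithm, a new product term would be created at fixed intervals by maximizing over these running potential estimates.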
The growing polynomial is efficiently represented as a tree structure, as in (Sanger 1991a). Although the algorithm involves three separate regressions, each is over only O(n) terms, and thus the iterative version of the algorithm is only of O(n) complexity per input pattern processed. 4 RELATION TO OTHER ALGORITHMS Approximation of functions over a fixed monomial basis is not a new technique (Gabor 1961, for example). However, it performs very poorly for high-dimensional input spaces, since the set of all monomials (even of very low order) can be prohibitively large. This has led to a search for methods which allow the generation of sparse polynomials. A recent example and bibliography are provided in (Grigoriev et al. 1990), which describes an algorithm applicable to finite fields (but not to real-valued random variables). [Figure 3: Products of hidden units in a sigmoidal feedforward network lead to a polynomial in the hidden units themselves.] The GMDH algorithm (Ivakhnenko 1971, Ikeda et al. 1976, Barron et al. 1984) incrementally adds new terms to a polynomial by forming a second (or higher) order polynomial in 2 (or more) of the current terms, and including this polynomial as a new term if it correlates with the error. Since GMDH does not use look-ahead, it risks avoiding terms which would be useful at future steps. For example, if the polynomial to be approximated is xyz where all three variables are independent, then no polynomial in x and y alone will correlate with the error, and thus the term xy may never be included. However, $x^2 y^2$ does correlate with $x^2 y^2 z^2$, so the look-ahead algorithm presented here would include this term, even though the error did not decrease until a later step.
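The xyz example above is easy to verify numerically: with an empty model the residual e = xyz is uncorrelated with the candidate term xy, while e² clearly correlates with (xy)² (simulation with illustrative Gaussian inputs):

```python
import numpy as np

# Numeric check of the look-ahead argument: GMDH's linear-correlation test
# would never pick up xy, but the squared-error test does.
rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 100_000))
e = x * y * z                                      # residual for target xyz

corr_linear = np.corrcoef(e, x * y)[0, 1]          # ~0: xy looks useless
corr_squared = np.corrcoef(e ** 2, (x * y) ** 2)[0, 1]  # clearly positive
print(round(corr_linear, 3), round(corr_squared, 2))
```

For independent standard normals the population correlation between e² and (xy)² works out to 8/√(26·8) ≈ 0.55, while the linear correlation is exactly zero.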
Although GMDH can be extended to test polynomials of more than 2 variables, it will always be testing a finite-order polynomial in a finite number of variables, so there will always exist target functions which it will not be able to approximate. Although look-ahead avoids this problem, it is not always useful. For practical purposes, we may be interested in the best Nth-order approximation to a function, so it may not be helpful to include terms which participate in monomials of order greater than N, even if these monomials would cause a large decrease in error. For example, the best 2nd-order approximation to $x^2 + y^{1000} + z^{1000}$ may be $x^2$, even though the other two terms contribute more to the error. In practice, some combination of both infinite look-ahead and GMDH-type heuristics may be useful. 5 APPLICATION TO OTHER STRUCTURES These methods have a natural application to other network structures. The inputs to the polynomial network can be sinusoids (leading to high-dimensional Fourier representations), Gaussians (leading to high-dimensional Radial Basis Functions) or other appropriate functions (Sanger 1991a, Sanger 1991b). Polynomials can even be applied with sigmoidal networks as input, so that $x_i = \sigma(\sum_j s_{ij} z_j)$ where the z_j's are the original inputs, and the s_ij's are the weights to a sigmoidal hidden unit whose value is the polynomial term x_i. The last layer of hidden units in a multilayer network is considered to be the set of input features x_i to a linear output unit, and we can compute the potentials of these features to determine the hidden unit x_p that would most decrease the error if $a_p x_p$ were included in the model (for the optimal polynomial a_p). But a_p can now be approximated using a subnetwork of any desired type. This subnetwork is used to add a new hidden unit $\hat a_p x_p$ that is the product of x_p with the subnetwork output $\hat a_p$, as in Figure 3.
In order to train the $\hat a_p$ subnetwork iteratively using gradient descent, we need to compute the effect of changes in $\hat a_p$ on the network error $\varepsilon = E[(f - \hat f)^2]$. We have $\partial \varepsilon / \partial \hat a_p = -2 E[(f - \hat f)\, x_p\, s_{\hat a_p x_p}]$, where $s_{\hat a_p x_p}$ is the weight from the new hidden unit to the output. Without loss of generality we can set $s_{\hat a_p x_p} = 1$ by including this factor within $\hat a_p$. Thus the error term for iteratively training the subnetwork $\hat a_p$ is $e\, x_p$, which can be used to drive a standard backpropagation-type gradient descent algorithm. This gives a method for constructing new hidden nodes and a learning algorithm for training these nodes. The same technique can be applied to deeper layers in a multilayer network. 6 EXAMPLES We have applied the algorithm to approximation of known polynomials in the presence of irrelevant noise variables, and to a simple image-processing task. Figure 4 shows the results of applying the algorithm to 200 samples of the polynomial $2 + 3 x_1 x_2 + 4 x_3 x_4 x_5$ with 4 irrelevant noise variables. The algorithm correctly finds the true polynomial in 4 steps, requiring about 5 minutes on a Symbolics Lisp Machine. Note that although the error did not decrease after cycle 1, the term $x_4 x_5$ was incorporated since it would be useful in a later step to reduce the error as part of $x_3 x_4 x_5$ in cycle 2. The image processing task is to predict a pixel value on the succeeding scan line from a 2x5 block of pixels on the preceding 2 scan lines. If successful, the resulting polynomial can be used as part of a DPCM image coding strategy. The network was trained on random blocks from a single face image, and tested on a different image. Figure 5 shows the original training and test images, the pixel predictions, and remaining error. Figure 6 shows the resulting 55-term polynomial. Learning this polynomial required less than 10 minutes on a Sun SPARCstation 1.
[Figure 4: A simple example of polynomial learning. 200 samples of $y = 2 + 3 x_1 x_2 + 4 x_3 x_4 x_5$ with 4 additional irrelevant inputs, x_6-x_9. Original MSE: 1.0.
Cycle 1: MSE 0.967; coefficients (x_1..x_9): -0.19 0.14 0.24 0.31 0.17 0.48 0.03 0.05 0.58; potentials: 0.22 0.24 0.25 0.32 0.33 0.01 0.08 0.01 0.05; top pairs: (5 4) (5 3) (4 3) (4 4); new term: x_10 = x_4 x_5.
Cycle 2: MSE 0.966; coefficients (x_1..x_10): -0.19 0.14 0.24 0.30 0.18 0.48 0.03 0.05 0.57 0.05; potentials: 0.25 0.22 0.25 0.05 0.02 0.03 0.08 0.02 0.03 0.47; top pairs: (10 3) (10 1) (10 2) (10 10); new term: x_11 = x_10 x_3 = x_3 x_4 x_5.
Cycle 3: MSE 0.349; coefficients (x_1..x_11): 0.04 -0.26 0.09 0.37 -0.04 0.27 0.10 0.22 0.42 -0.26 4.07; potentials: 0.52 0.59 0.03 0.02 -0.08 0.03 -0.05 -0.06 0.05 -0.05 0.05; top pairs: (2 1) (2 9) (2 2) (1 9); new term: x_12 = x_1 x_2.
Cycle 4: MSE 0.000; coefficients (x_1..x_12): all near 0.00 except x_11 = 4.00 and x_12 = 3.00. Solution: $2 + 3 x_1 x_2 + 4 x_3 x_4 x_5$.]
[Figure 5: Original, predicted, and error images. The top row is the training image (RMS error 8.4), and the bottom row is the test image (RMS error 9.4).]
[Figure 6: The 55-term polynomial used to generate Figure 5.]
Acknowledgments We would like to thank Richard Brandau for his helpful comments and suggestions on an earlier draft of this paper.
This report describes research done both at GTE Laboratories Incorporated, in Waltham MA, and at the laboratory of Dr. Emilio Bizzi in the Department of Brain and Cognitive Sciences at MIT. T. Sanger was supported during this work by a National Defense Science and Engineering Graduate Fellowship, and by NIH grants 5R37AR26710 and 5R01NS09343 to Dr. Bizzi.

References

Barron R. L., Mucciardi A. N., Cook F. J., Craig J. N., Barron A. R., 1984, Adaptive learning networks: Development and application in the United States of algorithms related to GMDH, In Farlow S. J., ed., Self-Organizing Methods in Modeling, pages 25-65, Marcel Dekker, New York.

Gabor D., 1961, A universal nonlinear filter, predictor, and simulator which optimizes itself by a learning process, Proc. IEE, 108B:422-438.

Grigoriev D. Y., Karpinski M., Singer M. F., 1990, Fast parallel algorithms for sparse polynomial interpolation over finite fields, SIAM J. Computing, 19(6):1059-1063.

Ikeda S., Ochiai M., Sawaragi Y., 1976, Sequential GMDH algorithm and its application to river flow prediction, IEEE Trans. Systems, Man, and Cybernetics, SMC-6(7):473-479.

Ivakhnenko A. G., 1971, Polynomial theory of complex systems, IEEE Trans. Systems, Man, and Cybernetics, SMC-1(4):364-378.

Sanger T. D., 1991a, Basis-function trees as a generalization of local variable selection methods for function approximation, In Lippmann R. P., Moody J. E., Touretzky D. S., eds., Advances in Neural Information Processing Systems 3, pages 700-706, Morgan Kaufmann, Proc. NIPS'90, Denver CO.

Sanger T. D., 1991b, A tree-structured adaptive network for function approximation in high dimensional spaces, IEEE Trans. Neural Networks, 2(2):285-293.

Sutton R. S., Matheus C. J., 1991, Learning polynomial functions by feature construction, In Proc. Eighth Intl. Workshop on Machine Learning, Chicago.

Widrow B., Hoff M. E., 1960, Adaptive switching circuits, In IRE WESCON Conv. Record, Part 4, pages 96-104.
A Computational Mechanism To Account For Averaged Modified Hand Trajectories

Ealan A. Henis* and Tamar Flash
Department of Applied Mathematics and Computer Science
The Weizmann Institute of Science
Rehovot 76100, Israel

Abstract

Using the double-step target displacement paradigm the mechanisms underlying arm trajectory modification were investigated. Using short (10-110 msec) inter-stimulus intervals the resulting hand motions were initially directed in between the first and second target locations. The kinematic features of the modified motions were accounted for by the superposition scheme, which involves the vectorial addition of two independent point-to-point motion units: one for moving the hand toward an internally specified location and a second one for moving between that location and the final target location. The similarity between the inferred internally specified locations and previously reported measured end-points of the first saccades in double-step eye-movement studies may suggest similarities between perceived target locations in eye and hand motor control.

1 INTRODUCTION

The generation of reaching movements toward unexpectedly displaced targets involves more complicated planning and control problems than reaching toward stationary ones, since the planning of the trajectory modification must be performed before the initial plan is entirely completed. One possible scheme to modify a trajectory plan is to abort the rest of the original motion plan, and replace it with a new one for moving toward the new target location. Another possible modification scheme is to superimpose a second plan on the initial one, without aborting it. Both schemes are discussed below. Earlier studies of reaching movements toward static targets have shown that point-to-point reaching hand motions follow a roughly straight path, having a typical bell-shaped velocity profile.

*Current address: IRCS/GRASP, University of Pennsylvania.
The kinematic features of these movements were successfully accounted for (Figure 1, left) by the minimum-jerk model (Flash & Hogan, 1985). In that model the X-components of hand motions (and analogously the Y-components) were represented by:

x(t) = x_A + (x_B - x_A)(10τ^3 - 15τ^4 + 6τ^5), where τ = t/t_f, (1)

where t_f is the movement duration, and x_B - x_A is the X-component of movement amplitude.

Figure 1: The minimum-jerk model and the non-averaged superposition scheme.

Figure 2: The experimental setup and the initial movement direction vs. D.

In a previous study (Henis & Flash, 1989; Flash & Henis, 1991) we have used the double-step target displacement paradigm (see below) with inter-stimulus intervals (ISIs) of 50-400 msec. Many of the resulting motions were found to be initially directed toward the first target location (non-averaged); for larger ISIs a larger percentage of the motions were non-averaged. The kinematic features of these modified motions were successfully accounted for (Figure 1, right) by a superposition modification scheme that involves the vectorial addition of two time-shifted independent point-to-point motion units (Equation (1)) whose amplitudes correspond to the two target displacements. In the present study shorter ISIs of 10-110 msec were used; hence, all target displacements occurred before movement initiation. Most of the resulting hand motions were found to be initially directed in between the first and second target locations (averaged motions). For increasing values of D, where D = RT1 - ISI (RT1 is the reaction time), the initial motion direction gradually shifted from the direction of the first toward the direction of the second stimulus (Figure 2, right).
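As a numerical check of Equation (1), the sketch below (plain NumPy; the 10 cm amplitude and 500 msec duration are illustrative values, not data from the paper) evaluates the minimum-jerk profile and its bell-shaped velocity, whose peak is 15/8 times the average speed.

```python
import numpy as np

def min_jerk(x_a, x_b, t_f, t):
    # Equation (1): x(t) = x_A + (x_B - x_A)(10 tau^3 - 15 tau^4 + 6 tau^5)
    tau = t / t_f
    return x_a + (x_b - x_a) * (10*tau**3 - 15*tau**4 + 6*tau**5)

t = np.linspace(0.0, 0.5, 501)    # 500 msec movement (illustrative)
x = min_jerk(0.0, 10.0, 0.5, t)   # 10 cm point-to-point motion (illustrative)
v = np.gradient(x, t)             # numerical velocity: bell-shaped profile
```

The profile starts at x_A and ends at x_B with near-zero velocity at both ends, and the speed peaks at mid-movement at (x_B - x_A)/t_f * 15/8 = 37.5 cm/s here.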
The averaging phenomenon has been previously reported for hand (Van Sonderen et al., 1988) and eye (Aslin & Shea, 1987; Van Gisbergen et al., 1987) motions. In this work we wished to account for the kinematic features of averaged trajectories as well as for the dependence of their initial direction on D. It was observed (Van Sonderen et al., 1988) that when the first target displacement was toward the left and the second one was obliquely downwards and to the right, most of the resulting motions were averaged. Averaged motions were also induced when the first target displacement was downwards and the second one was obliquely upwards and to the left. In this study we have used similar target displacements. Four naive subjects participated in the experiments. The motions were performed in the absence of visual feedback from the moving limb. In a typical trial, initially the hand was at rest at a starting position A (Figure 2, left). At t = 0 a visual target was presented at one of two equally probable positions B. It either remained lit (control condition, probability 0.4) or was shifted again, following an ISI, to one of two equally probable positions C (double-step condition, probability 0.3 each). In a block of trials one target configuration was used. Each block consisted of five groups of 56 trials, and within each group one ISI pair was used. The five ISI pairs were: 10 and 80, 20 and 110, 30 and 150, 40 and 200, and 50 and 300 msec. The target presentation sequence was randomized, and included appropriate control trials.

2 MODELING RATIONALE AND ANALYSIS

2.1 THE SUPERPOSITION SCHEME

The superposition scheme for averaged modified motions is based on the vectorial addition of two time-shifted independent elemental point-to-point hand motions that obey Equation (1). The first elemental trajectory plan is for moving between the initial hand location and an intermediate location Bi, internally specified. This plan continues unmodified until its intended completion.
The second elemental trajectory plan is for moving between Bi and the final location of the target. The durations of the elemental motions may vary among trials, and are therefore a priori unknown. With short ISIs the elemental motion plans may be added (to give the modified plan) preceding movement initiation. Several possibilities for Bi were examined: a) the first location of the stimulus, b) an a priori unknown position, c) same as (b) with Bi constrained to lie on the line connecting the first and second locations of the stimulus, and d) same as (b) with Bi constrained to lie on the line of initial movement direction. Version (a) is equivalent to the superposition scheme that successfully accounted for non-averaged modified trajectories (Flash & Henis, 1991). In versions (b), (c) and (d) it was assumed that due to the quick displacement of the target, the specification of the end-point for the first motion plan may differ from the actual first location of the target. The first motion unit was represented by:

x1(t) = x_A + (x_Bi - x_A)(10τ^3 - 15τ^4 + 6τ^5), where τ = t/T1. (2)

In (2), (x_Bi - x_A) is the X-component of the first unit amplitude. The duration of this unit is denoted by T1, a priori unknown. The expression for y1(t) was analogous to Equation (2). The X-component of the second motion unit was taken to be:

x2(t) = (x_C - x_Bi)(10τ^3 - 15τ^4 + 6τ^5), where τ = (t - t_s)/(t_f - t_s) = (t - t_s)/T2. (3)

In (3), (x_C - x_Bi) is the X-component of the amplitude of the second trajectory unit. The start and end times of the second unit are denoted by t_s and t_f, respectively. The duration of the second motion unit, T2 = t_f - t_s, is a priori unknown. The X-component of the entire modified motion (and similarly for the Y-component) was represented by:

x(t) = x1(t) + x2(t). (4)

The unknown parameters T1, T2, B_ix and B_iy that can best describe the entire measured trajectory were determined by using least-squares best-fit methods (Marquardt, 1963).
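Equations (2)-(4) can be sketched numerically. The fragment below (illustrative NumPy, with hypothetical values for A, Bi, C, T1, T2 and t_s; none are taken from the paper's data) superimposes the two units and checks that the summed motion starts at A, initially heads toward Bi, and ends at the final target C.

```python
import numpy as np

def unit(p0, p1, T, t):
    # Minimum-jerk unit; clipping tau makes the unit hold p0 before onset
    # and p1 after completion, so units can be summed over one time axis.
    tau = np.clip(t / T, 0.0, 1.0)[:, None]
    return p0 + (p1 - p0) * (10*tau**3 - 15*tau**4 + 6*tau**5)

# Hypothetical 2-D geometry (cm) and timing (s); not the paper's data.
A  = np.array([0.0, 0.0])    # starting hand position
Bi = np.array([4.0, 8.0])    # internally specified end-point of the first unit
C  = np.array([10.0, 6.0])   # final target
T1, T2, t_s = 0.37, 0.59, 0.12

t = np.linspace(0.0, t_s + T2, 400)
# Equation (4): vectorial superposition of the two units.
x = unit(A, Bi, T1, t) + unit(np.zeros(2), C - Bi, T2, t - t_s)
```

Because the second unit is expressed as a displacement from Bi to C, the sum reaches C exactly once both units are complete, while the initial direction is that of the first unit alone.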
This procedure minimized the sum of the position errors between the simulated and measured data points, taking into account (in versions (a), (c) and (d)) the assumed constraints on the location Bi.

2.2 THE ABORT-REPLAN SCHEME

In the abort-replan scheme it was assumed that initially a point-to-point trajectory plan is generated for moving toward an initial target (Equation (2)). The same four possibilities for the end-point of the initial motion plan were examined. It was assumed that at some time instant t_s the initial plan is aborted and replaced by a new plan for moving between the expected hand position at t = t_s and the final target location. The new motion plan was assumed to be represented by:

x_NEW(t) = Σ_{i=0}^{5} a_i t^i. (5)

The coefficients a3, a4 and a5 were derived by using the measured values of position, velocity and acceleration at t = t_f. For versions (b), (c) and (d) the analysis was performed simultaneously for the X and Y components of the trajectory. Choosing a trial Bi and T1, the initial motion plan (Equation (2)) was calculated. Choosing a trial t_s, the remaining three unknown coefficients a0, a1 and a2 of Equation (5) were calculated using the continuity conditions of the initial and new position, velocity and acceleration at t = t_s (method I). Alternatively, these three coefficients were calculated using the corresponding measured values at t = t_s (method II). To determine the best choice of the unknown parameters B_ix, B_iy, T1 and t_s, the same least-squares methods (Marquardt, 1963) were used as described above. For version (a), for each cartesian component, a point-to-point minimum-jerk trajectory AB was speed-scaled to coincide with the initial part of the measured velocity profile. The time t_s of its deviation from the measured speed profile was extracted. From t_s on, the trajectory was represented by Equation (5).
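Determining the coefficients of Equation (5) amounts to a small linear solve. The sketch below (plain NumPy, with made-up boundary values; unlike the paper, which splits a3-a5 from measured end-state values and a0-a2 from continuity, here all six coefficients are obtained in one solve from six boundary conditions) builds the replacement quintic from position, velocity and acceleration continuity at t_s and rest conditions at the final target.

```python
import numpy as np

def quintic(ts, tf, xs, vs, acs, xf):
    """Coefficients a0..a5 of Equation (5), x(t) = sum_i a_i t^i, matching
    position/velocity/acceleration (xs, vs, acs) at t = ts and arriving
    at rest at xf at t = tf."""
    def rows(t):
        return np.array([[1.0, t,   t**2,   t**3,    t**4,    t**5],
                         [0.0, 1.0, 2*t,  3*t**2,  4*t**3,  5*t**4],
                         [0.0, 0.0, 2.0,  6*t,    12*t**2, 20*t**3]])
    M = np.vstack([rows(ts), rows(tf)])
    b = np.array([xs, vs, acs, xf, 0.0, 0.0])
    return np.linalg.solve(M, b)

# Made-up abort state: plan aborted at ts = 0.1 s with (x, v, a) = (0.8, 12, 40),
# new target x = 10 reached at rest at tf = 0.6 s.
a = quintic(0.1, 0.6, 0.8, 12.0, 40.0, 10.0)
poly = np.polynomial.Polynomial(a)   # coefficients in increasing order
vel = poly.deriv()
```

The solved polynomial joins the aborted plan smoothly at t_s and brings the hand to rest at the new target at t_f.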
The values of a0, a1 and a2 were derived by using the same least-squares methods (method I). Alternatively, these values were determined by using the measured position, velocity and acceleration at t = t_s (method II).

3 RESULTS

The motions recorded in the control trials were roughly straight with bell-shaped speed profiles. The mean reaction time in these control trials was 367.1 ± 94.6 msec (N = 120). The mean movement time was 574.1 ± 127.0 msec (N = 120). The change in target location elicited a graded movement toward an intermediate direction in between the two target locations, followed by a subsequent motion toward the final target (Figure 3, middle). Occasionally the hand went directly toward the final target location (Figure 3, right). For values of D less than 100 msec the movements were found to be initially directed toward the first target (Figure 3, left). As D increased, the initial direction of the motions gradually shifted (Figure 2, right) from the direction of the first (non-averaged) toward the direction of the second (direct) target locations; the initial direction depended on D rather than on ISI. The mean reaction time to the first stimulus (RT1) was 350.4 ± 93.5 msec (N = 192). The mean reaction time to the second stimulus (RT2), inferred from superposition version (b), was 382.8 ± 119.9 msec (N = 192). This value is much smaller than that predicted by successive processing of information, RT2 = 2RT1 - ISI (Poulton, 1981), and might be indicative of the presence of parallel planning. The mean durations T1 and T2 of the two trajectory units (of superposition version (b)) were 373.0 ± 112.2 and 592.1 ± 98.1 msec (N = 192), respectively.

3.1 MODIFICATION SCHEMES

The most statistically successful model (Table 1) in accounting for the measured motions was superposition version (b), which involves an internally specified location (a priori unknown) for the end-point of the first motion unit.
In particular, the averaged initial direction of the measured motions was accounted for. Superposition version (d) was equivalent to version (b). The velocities simulated on the basis of the other tested schemes substantially deviated from the measured ones (Table 1 and Figure 4). It should be noted that in both the superposition and abort-replan versions (b), (c) and (d) there were 4, 3 and 3 unknown parameters, respectively. In the abort-replan versions (a) there were 3 unknown parameters, compared to 2 in superposition version (a). Hence the relative success of superposition version (b) in accounting for the data was not due to a larger number of free parameters.

Table 1: Normalized velocity deviations and the t-score with respect to SP(b) (in parentheses).

SP(a)   18.60 ± 50.16 (4.711)
SP(b)    0.035 ± 0.036 (0.000)
SP(c)    0.126 ± 0.132 (8.465)
SP(d)    0.042 ± 0.045 (1.546)
AB(aI)   0.083 ± 0.093 (6.126)
AB(aII)  0.084 ± 0.088 (6.559)
AB(bI)   0.081 ± 0.101 (5.460)
AB(bII)  0.078 ± 0.102 (5.050)
AB(cI)   0.084 ± 0.108 (5.478)
AB(cII)  0.083 ± 0.096 (5.959)
AB(dI)   0.082 ± 0.097 (5.782)
AB(dII)  0.085 ± 0.101 (5.935)

Figure 3: Types of modified trajectories: non-averaged, averaged, and direct.

Figure 4: Representative simulated vs. measured trajectories.

3.2 THE END-POINTS INFERRED FROM SUPERPOSITION (b)

The mean locations Bi resulting from different trials performed by the same subject were computed by pooling together the Bi of movements with the same D ± 15 msec (Figure 5, left). For D < 100 msec, the measured motions were non-averaged and the inferred Bi were in the vicinity of the first target.
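The parameter-fitting step behind these comparisons can be illustrated with SciPy's least_squares in place of the paper's Marquardt routine. The sketch below fits only Bi (with the durations and onset held fixed; all numbers are hypothetical) to a noise-free synthetic superposition trajectory and recovers the end-point that generated it.

```python
import numpy as np
from scipy.optimize import least_squares

def unit(p0, p1, T, t):
    # Minimum-jerk unit held at its endpoints outside [0, T].
    tau = np.clip(t / T, 0.0, 1.0)[:, None]
    return p0 + (p1 - p0) * (10*tau**3 - 15*tau**4 + 6*tau**5)

A, C = np.array([0.0, 0.0]), np.array([10.0, 6.0])   # hypothetical start/target (cm)
T1, T2, t_s = 0.37, 0.59, 0.12                       # fixed here; fitted in the paper
t = np.linspace(0.0, t_s + T2, 120)

def model(bi):
    bi = np.asarray(bi, dtype=float)
    return unit(A, bi, T1, t) + unit(np.zeros(2), C - bi, T2, t - t_s)

true_bi = np.array([4.0, 8.0])
measured = model(true_bi)   # noise-free synthetic "measured" trajectory

# Minimize the summed position error over Bi, as in the fitting procedure.
fit = least_squares(lambda b: (model(b) - measured).ravel(), x0=np.array([5.0, 5.0]))
```

With real data the residual would not vanish, and the recovered Bi is the internally specified end-point reported in Section 3.2.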
For increasing values of D, Bi gradually shifted from the first toward the second target location, following a typical path that curved toward the initial hand position. For D ≥ 400 msec, Bi were in the vicinity of the second target location. Since initially the motions are directed toward Bi, this gradual shift of Bi as a function of D may account for the observed dependence of the initial direction of motion on D. The locations Bi obtained on the basis of the other tested schemes did not show any regular behavior as functions of D.

4 DISCUSSION

This paper presents explicit possible mechanisms to account for the kinematic features of averaged modified trajectories. The most statistically successful scheme in accounting for the measured movements involves the vectorial addition of two independent point-to-point motion units: one for moving between the initial hand position and an internally specified location, and a second one for moving between that location and the final target location. Taken together with previous results for non-averaged modified trajectories (Flash & Henis, 1991), it was shown that the same superposition principle may account for both modified trajectory types. The differences between the observed types stem from differences in the time available to modify the end-point of the first unit. Our simulations have enabled us to infer the intermediate target locations, which were found to be similar to previously reported (Aslin & Shea, 1987) experimentally measured end-points of the first saccades, obtained by using the double-step paradigm (Figure 5, right)¹. This result may suggest underlying similarities between internally perceived target locations in eye and hand motor control, and may suggest a common "where" command (Gielen et al., 1984; 1990) for both systems.
Figure 5: Inferred first unit end-points and measured eye positions.

¹Reprinted with permission from Vision Res., Vol. 27, No. 11, 1925-1942, Aslin, R.N. and Shea, S.L.: The Amplitude and Angle of Saccades to Double-Step Target Displacements, Copyright [1987], Pergamon Press plc.

Why is the internally specified location dependent on D, which is a parameter associated with both sensory information and motor execution? One possible explanation is that following the target displacement the effect of the first stimulus on the motion planning decays, and that of the second stimulus becomes larger. These changes may occur in the transformations from the visual to the motor system. A purely sensory change in the perceived target location was also proposed (Van Sonderen et al., 1988; Becker & Jurgens, 1979). Another possibility is that the direction of hand motion is internally coded in the motor system (Georgopoulos et al., 1986), and it gradually rotates (within the motor system) from the direction of the first to the direction of the second target. It is not clear which of these possibilities provides a better explanation for the observations. In the superposition scheme there is no need to keep track of the actual or planned kinematic state of the hand. Hence, in contrast to the abort-replan scheme, an efference copy of the planned motion is not required. The successful use of motion plans expressed in extrapersonal coordinates provides support to the idea that arm movements are internally represented in terms of hand motion through external space. The construction of complex movements from simpler elementary building blocks is consistent with a hierarchical organization of the motor system. The independence of the elemental trajectories allows them to be planned in parallel.

Acknowledgements

This research was supported by grant no. 8800141 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel.
Tamar Flash is incumbent of the Corinne S. Koshland career development chair.

References

Aslin, R.N. and Shea, S.L. (1987). The Amplitude and Angle of Saccades to Double-Step Target Displacements. Vision Res., Vol. 27, No. 11, 1925-1942.

Becker, W. and Jurgens, R. (1979). An Analysis of the Saccadic System by Means of Double-Step Stimuli. Vision Res., 19, 967-983.

Flash, T. and Henis, E. (1991). Arm Trajectory Modification During Reaching Towards Visual Targets. Journal of Cognitive Neuroscience, Vol. 3, No. 3, 220-230.

Flash, T. & Hogan, N. (1985). The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci., 7, 1688-1703.

Georgopoulos, A.P., Schwartz, A.B. & Kettner, R.E. (1986). Neuronal population coding of movement direction. Science, 233, 1416-1419.

Gielen, C.C.A.M., Van den Heuvel, P.J.M. & Denier Van der Gon, J.J. (1984). Modification of muscle activation patterns during fast goal-directed arm movements. J. Motor Behavior, 16, 2-19.

Gielen, C.C.A.M. & Van Gisbergen, J.A.M. (1990). The visual guidance of saccades and fast aiming movements. News in Physiological Sciences, Vol. 5, 58-63.

Henis, E. and Flash, T. (1989). Mechanisms Subserving Arm Trajectory Modification. Perception, 18(4):495.

Marquardt, D.W. (1963). An algorithm for least-squares estimation of non-linear parameters. J. SIAM, 11, 431-441.

Van Gisbergen, J.A.M., Van Opstal, A.J. & Roebroek, J.G.H. (1987). Stimulus-induced midflight modification of saccade trajectories. In J.K. O'Regan & A. Levy-Schoen (Eds.), Eye Movements: From Physiology to Cognition, Amsterdam: Elsevier, 27-36.

Van Sonderen, J.F., Denier Van Der Gon, J.J. & Gielen, C.C.A.M. (1988). Conditions determining early modification of motor programmes in response to change in target location. Exp. Brain Res., 71, 320-328.
Learning Global Direct Inverse Kinematics

David DeMers*
Computer Science & Eng.
UC San Diego
La Jolla, CA 92093-0114

Kenneth Kreutz-Delgado†
Electrical & Computer Eng.
UC San Diego
La Jolla, CA 92093-0407

Abstract

We introduce and demonstrate a bootstrap method for construction of an inverse function for the robot kinematic mapping using only sample configuration-space/workspace data. Unsupervised learning (clustering) techniques are used on pre-image neighborhoods in order to learn to partition the configuration space into subsets over which the kinematic mapping is invertible. Supervised learning is then used separately on each of the partitions to approximate the inverse function. The ill-posed inverse kinematics function is thereby regularized, and a global inverse kinematics solution for the wristless Puma manipulator is developed.

1 INTRODUCTION

The robot forward kinematics function is a continuous mapping f : C ⊆ Θ^n → W ⊆ X^m which maps a set of n joint parameters from the configuration space, C, to the m-dimensional task space, W. If m < n, the robot has redundant degrees-of-freedom (dof's). In general, control objectives such as the positioning and orienting of the end-effector are specified with respect to task space co-ordinates; however, the manipulator is typically controlled only in the configuration space. Therefore, it is important to be able to find some θ ∈ C such that f(θ) is a particular target value x₀ ∈ W. This is the inverse kinematics problem.

*e-mail: demers@cs.ucsd.edu
†e-mail: kreutz@ece.ucsd.edu

The inverse kinematics problem is ill-posed. If there are redundant dof's then the problem is locally ill-posed, because the solution is non-unique and consists of a non-trivial manifold¹ in C.
With or without redundant dof's, the problem is generally globally ill-posed because of the existence of a finite set of solution branches: there will typically be multiple configurations which result in the same task space location. Thus computation of a direct inverse is problematic due to the many-to-one nature (and therefore non-invertibility) of the map f. The inverse problem can be solved explicitly, that is, in closed form, for only certain kinds of manipulators. E.g. six-dof elbow manipulators with separable wrist (where the first three joints are used for positioning and the last three have a common origin and are used for orientation), such as the Puma 560, are solvable; see (Craig, 1986). The alternative to a closed-form solution is a numerical solution, usually either using the inverse of the Jacobian, which is a Newton-style approach, or using gradient descent (also a Jacobian-based method). These methods are iterative and require expensive Jacobian or gradient computation at each step, thus they are not well suited for real-time control. Neural networks can be used to find an inverse by implementing either direct inverse modeling (estimating the explicit function f⁻¹) or differential methods. Implementations of the direct inverse approach typically fail due to the non-linearity of the solution set³, or resolve this problem by restriction to a single solution a priori. However, such a prior restriction of the solutions may not be possible or acceptable in all circumstances, and may drastically reduce the dexterity and manipulability of the arm. The differential approaches either find only the nearest local solution, or resolve the multiplicity of solutions at training time, as with Jordan's forward modeling (Jordan & Rumelhart, 1990) or the approach of (Nguyen & Patel, 1990).
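The many-to-one character is easy to exhibit on a hypothetical planar two-link (2-R) arm, not the Puma of the paper; its textbook closed-form inverse has an elbow-down and an elbow-up branch that map to the same end-effector position.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # hypothetical planar 2-R link lengths

def fwd(theta):
    """Forward kinematics f: (theta1, theta2) -> end-effector (x, y)."""
    t1, t2 = theta
    return np.array([L1*np.cos(t1) + L2*np.cos(t1 + t2),
                     L1*np.sin(t1) + L2*np.sin(t1 + t2)])

def inv(x, y):
    """Textbook closed-form inverse: elbow-down and elbow-up branches."""
    c2 = (x*x + y*y - L1*L1 - L2*L2) / (2.0 * L1 * L2)
    sols = []
    for s2 in (np.sqrt(1.0 - c2*c2), -np.sqrt(1.0 - c2*c2)):
        t2 = np.arctan2(s2, c2)
        t1 = np.arctan2(y, x) - np.arctan2(L2*s2, L1 + L2*c2)
        sols.append((t1, t2))
    return sols

sols = inv(1.2, 0.5)   # two distinct joint solutions for one target point
```

Both returned joint configurations are distinct, yet the forward map sends each back to (1.2, 0.5): f is not invertible globally, only branch by branch.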
We seek to regularize the mapping in such a way that all possible solutions are available at run-time, and can be computed efficiently as a direct constant-time inverse rather than approximated by slower iterative differential methods. To achieve the fast run-time solution, a significant cost in training time must be paid; however, it is not unreasonable to invest resources in off-line learning in order to attain on-line advantages. Thus we wish to gain the run-time computational efficiency of a direct inverse solution while also achieving the benefits of the differential approaches. This paper introduces a method for performing global regularization; that is, identifying the complete, finite set of solutions to the inverse kinematics problem for a non-redundant manipulator. This will provide the ability to choose a particular solution at run time. Resolving redundancy is beyond the scope of this paper; however, preliminary work on a method which may be integrated with the work presented here is shown in (DeMers & Kreutz-Delgado, 1991). In the remainder of this paper it will be assumed that the manipulator does not have redundant dof's. It will also be assumed that all of the joints are revolute, thus the configuration space is a subset of the n-torus, T^n.

¹Generically of dimensionality equal to n - m.
²The target values are assumed to be in the range of f, x̄ ∈ W = f(C), so the existence of a solution is not an issue in this paper.
³Training a network to minimize mean squared error with multiple target values for the same input value results in a "learned" response of the average of the targets. Since the targets lie on a number of non-linear manifolds (for the redundant case) or consist of a finite number of points (for the non-redundant case), the average of multiple targets will typically not be a correct target.
2 TOPOLOGY OF THE KINEMATICS FUNCTION

The kinematics mapping is continuous and smooth and, generically, neighborhoods in configuration space map to neighborhoods in the task space⁴. The configuration space, C, is made up of a finite number of disjoint regions or partitions, separated by (n-1)-dimensional surfaces where the Jacobian loses rank (called critical surfaces); see (Burdick, 1988; Burdick, 1991). Let f : T^n → R^n be the kinematic mapping. Then

W = f(C) = ∪_{i=1}^{k} f_i(C_i),

where f_i is the restriction of f to C_i, f_i : C_i → R^n, and each region C_i ⊂ T^n is locally diffeomorphic to R^n. The C_i are each a connected region such that

∀ θ ∈ C_i, det(J(θ)) ≠ 0,

where J is the Jacobian of f, J ≡ ∂f/∂θ. Define W_i as f(C_i). Generically, f_i is one-to-one and onto open neighborhoods of W_i⁵; thus, by the inverse function theorem,

∃ g_i(x) = f_i⁻¹ : W_i → C_i, such that f ∘ g_i(x) = x, ∀ x ∈ W_i.

In the general case, with redundant dof's, the kinematics over a single configuration-space region can be viewed as a fiber bundle, where the fibers are homeomorphic to T^(n-m). The base space is the reachable workspace (the image of C_i under f). Solution branch resolution can be done by identifying distinct connected open coordinate neighborhoods of the configuration space which cover the workspace. Redundancy resolution can be done by a consistent parameterization of the fibers within each neighborhood. In the case at hand, without redundant dof's, the "fibers" are singleton sets and no resolution is needed. In the remainder of this paper, we will use input/output data to identify the individual regions, C_i, of a non-redundant manipulator, over which the mapping f_i : C_i → W_i is invertible. The input/output data will then be partitioned modulo the configuration regions C_i, and each f_i⁻¹ approximated individually.
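The region structure can be made concrete on a hypothetical planar 2-R arm (again, not the Puma of the paper): its Jacobian determinant is L1·L2·sin(θ2), so the critical surfaces θ2 = 0 and θ2 = π separate configuration space into regions on which det J keeps one sign.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # hypothetical planar 2-R link lengths

def jacobian(t1, t2):
    """Analytic Jacobian J = df/dtheta; det(J) = L1*L2*sin(t2)."""
    return np.array([[-L1*np.sin(t1) - L2*np.sin(t1 + t2), -L2*np.sin(t1 + t2)],
                     [ L1*np.cos(t1) + L2*np.cos(t1 + t2),  L2*np.cos(t1 + t2)]])

d_regular  = np.linalg.det(jacobian(0.3, 1.0))   # interior of a region C_i
d_critical = np.linalg.det(jacobian(0.3, 0.0))   # on the critical surface t2 = 0
```

Inside a region the determinant is bounded away from zero and the inverse function theorem applies; on the critical surface it vanishes and the branches meet.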
3 SAMPLING APPROACH

If the manipulator can be measured and a large sample of (θ, x) pairs taken, stored such that the x samples can be searched efficiently, a rough estimate of the inverse solutions at a particular target point x₀ may be obtained by finding all of the θ points whose image lies within some ε of x₀. The pre-image of this ε-ball will generically consist of several distinct (distorted) balls in the configuration space. If the sampling is adequate then there will be one such ball for each of the inverse solution branches. If each of the points in each ball is given a label for the solution branch, the labeled data may then be used for supervised learning of a classifier of solution branches in the configuration space. In this way we will have "bootstrapped" our way to the development of a solution branch classifier. Taking advantage of the continuous nature of the forward mapping, note that if x₀ is slightly perturbed by a "jump" to a neighboring target point then the pre-image balls will also be perturbed. We can assign labels to the new data consistent with labels already assigned to the previous data, by computing the distances between the new, unlabeled balls and the previously labeled balls. Continuing in this fashion, x₀ traces a path through the entire workspace and solution branch labels may be given to all points in C which map to within ε of one of the selected x points along the sweep. This procedure results in a significant and representative proportion of the data now being labeled as to solution branch. Thus we now have labeled data (θ, x, B(θ)), where B(θ) ∈ {1, ..., k} indicates which of the k solution branches, C_i, the point θ is in.

⁴This property fails when the manipulator is in a singular configuration, at which the Jacobian, ∂f/∂θ, loses rank.
⁵Since it is generically true that J is non-singular.
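The ε-ball bootstrap can be sketched on the same hypothetical planar 2-R arm. The fragment below recovers the pre-image of a small task-space ball from random (θ, x) samples; for this toy arm the two branches happen to be separated simply by the sign of θ2, which stands in for the unsupervised clustering used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 1.0, 0.8   # hypothetical planar 2-R link lengths

def fwd(theta):
    t1, t2 = theta[..., 0], theta[..., 1]
    return np.stack([L1*np.cos(t1) + L2*np.cos(t1 + t2),
                     L1*np.sin(t1) + L2*np.sin(t1 + t2)], axis=-1)

# Random (theta, x) samples over the whole configuration space.
thetas = rng.uniform(-np.pi, np.pi, size=(40000, 2))
xs = fwd(thetas)

# Pre-image of an eps-ball around a target x0: one cluster per solution branch.
x0, eps = np.array([1.2, 0.5]), 0.08
ball = thetas[np.linalg.norm(xs - x0, axis=1) < eps]

# Branch label for this toy arm: sign of theta2 (elbow up vs. elbow down).
n_up = int(np.sum(ball[:, 1] > 0.0))
n_down = len(ball) - n_up
```

With adequate sampling, both branch clusters are populated, which is the condition the paper needs before labels can be propagated and a branch classifier trained.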
We can now construct a classifier using supervised learning to compute the branch B(θ) for a given θ. Once an estimate of B(θ) is developed, we may use it to classify large amounts of (θ, x) data, and partition the data into k sets, one for each of the solution branches, C_i.

4 RESOLUTION OF SOLUTION BRANCHES

We applied the above to the wristless Puma 560, a 3-R manipulator for end-effector positioning in R³. We took 40,000 samples of (θ, x) points, and examined all points within 10 cm of selected target values x̄_i. The x̄_i formed a grid of 90 locations in the workspace. 3,062 of the samples fell within 10 cm of one of the x̄_i. The configuration space points for each target x̄_i were clustered into four groups, corresponding to the four possible solution branches of the wristless Puma 560. About 3% of the points were clustered into the wrong group, based on the labeling scheme used. These 3,062 points were then used as training patterns for a feedforward neural network classifier. A point was classified into the group associated with the output unit of the neural network with maximum activation. The output values were normalized to sum to 1.0. The network was tested on 50,000 new, previously unseen (θ, x) pairs, and correctly classified more than 98% of them. All of the erroneous classifications were for points near the critical surfaces. Therefore the activation levels of the output units can be used to estimate closeness to a critical surface. Examining the test data and assigning all θ points for which no output unit has activation greater than or equal to 0.8 to the "near-a-singularity" class, the remaining points were 100% correctly classified. Figure 1 shows the true critical manifold separating the regions of configuration space, and the estimated manifold consisting of points from the test set where the maximum activation of output units of the trained neural network is less than 0.8.
The configuration space is a subset of the 3-torus, which is shown here "sliced" along three generators and represented as a cube. Because the Puma 560 has physical limits on the range of motion of its joints, the regions shown are in fact six distinct regions, and there is no wraparound in any direction. This classifier network is our candidate for an estimate of B(θ). With it, the samples can be separated into groups corresponding to the domains of each of the fi, thus regularizing into k = 6 one-to-one invertible pieces.6

6Although there are only four inverse solutions for any x. If there were no joint limits, then the

Learning Global Direct Inverse Kinematics 593

Figure 1: The analytically derived critical surfaces, along with 1,000 points for which no unit of the neural network classifier has greater than 0.8 activation.

5 DIRECT INVERSE SOLUTIONS

The classifier neural network can now be used to partition the data into four groups, one for each of the branches, Ci. For each of these data sets, we train a feedforward network to learn the mapping in the inverse direction. The target vectors were represented as vectors of the sine of the half-angle (a measure motivated by the quaternion representation of orientation). MSEs under 0.001 were achieved for each of the four. This looks like a very small error; however, this error is somewhat misleading. The configuration space error is measured in units which are difficult to interpret. More important is the error in the workspace when the computed solution is used in the forward kinematics mapping to position the arm. Over a test set of 4,000 points, the average positioning error was 5.2 cm over the 92 cm radius workspace. We have as yet made no attempts to optimize the network or training for the direct inverse; the thrust of our work is in achieving the regularization.
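A minimal stand-in for the per-branch direct inverse can be sketched on the same toy 2-link arm, with a nearest-neighbor lookup in place of the paper's feedforward networks and the elbow sign standing in for the learned branch label B(θ); all of this is illustrative, not the authors' setup:

```python
import numpy as np

def fkin(theta):
    """Toy 2-link planar arm, unit link lengths."""
    t1, t12 = theta[..., 0], theta[..., 0] + theta[..., 1]
    return np.stack([np.cos(t1) + np.cos(t12),
                     np.sin(t1) + np.sin(t12)], axis=-1)

rng = np.random.default_rng(2)
thetas = rng.uniform(-np.pi, np.pi, size=(20000, 2))
xs = fkin(thetas)
branch = (thetas[:, 1] > 0).astype(int)   # elbow sign plays the role of B(theta)

def inverse(x, b):
    """Direct inverse restricted to one branch: nearest labeled sample."""
    idx = np.where(branch == b)[0]
    j = idx[np.argmin(np.linalg.norm(xs[idx] - x, axis=1))]
    return thetas[j]

# workspace positioning error: push the inverse back through forward kinematics
targets = fkin(rng.uniform(-np.pi, np.pi, size=(200, 2)))
errs = np.array([np.linalg.norm(fkin(inverse(x, 0)) - x) for x in targets])
```

As in the paper, the meaningful error measure is the workspace distance after mapping the recovered joint angles forward, not the error in configuration-space units.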
It is clear that substantially better performance can be developed, for example, by following (Ritter, et al., 1989), and we expect end-effector positioning errors of less than 1% to be easily achievable.

6 DISCUSSION

We have shown that by exploiting the topological property of continuity of the kinematic mapping for a non-redundant 3-dof robot we can determine all of the solution regions of the inverse kinematic mapping. We have mapped out the configuration space critical surfaces and thus discovered an important topological property of the mapping, corresponding to an important physical property of the manipulator, by unsupervised learning. We can bootstrap from the original input/output data, unlabeled as to solution branch, and construct an accurate classifier for the entire configuration space. The data can thereby be partitioned into sets which are individually one-to-one and invertible, and the inverse mapping can be directly approximated for each. Thus a large learning-time investment results in a fast run-time direct inverse kinematics solution.

cube shown would be a true 3-torus, with opposite faces identified. Thus the small pieces in the corners would be part of the larger regions by wraparound in the Joint 2 direction.

We need a thorough sampling of the configuration space in order to ensure that enough points will fall within each ε-ball; thus the data requirements are clearly exponential in the number of degrees of freedom of the manipulator. Even with efficient storage and retrieval in geometric data structures, such as a k-d tree, high-dimensional systems may not be tractable by our methods. Fortunately, practical and useful robotic systems of six and seven degrees of freedom should be amenable to this method, especially if separable into positioning and orienting subsystems.

Acknowledgements

This work was supported in part by NSF Presidential Young Investigator award IRI-9057631 and a NASA/Netrologic grant.
The first author would like to thank NIPS for providing student travel grants. We thank Gary Cottrell for his many helpful comments and enthusiastic discussions.

References

Joel Burdick (1991), "A Classification of 3R Regional Manipulator Singularities and Geometries", Proc. 1991 IEEE Intl. Conf. Robotics & Automation, Sacramento.
Joel Burdick (1988), "Kinematics and Design of Redundant Robot Manipulators", Stanford Ph.D. Thesis, Dept. of Mechanical Engineering.
John Craig (1986), Introduction to Robotics, Addison-Wesley.
David DeMers & Kenneth Kreutz-Delgado (1991), "Learning Global Topological Properties of Robot Kinematic Mappings for Neural Network-Based Configuration Control", in Bekey, ed., Proc. USC Workshop on Neural Networks in Robotics, (to appear).
Michael I. Jordan (1988), "Supervised Learning and Systems with Excess Degrees of Freedom", COINS Technical Report 88-27, University of Massachusetts at Amherst.
Michael I. Jordan & David E. Rumelhart (1990), "Forward Models: Supervised Learning with a Distal Teacher". Submitted to Cognitive Science.
L. Nguyen & R.V. Patel (1990), "A Neural Network Based Strategy for the Inverse Kinematics Problem in Robotics", in Jamshidi and Saif, eds., Robotics and Manufacturing: Recent Trends in Research, Education and Applications, vol. 3, pp. 995-1000 (ASME Press).
Helge J. Ritter, Thomas M. Martinetz, & Klaus J. Schulten (1989), "Topology-Conserving Maps for Learning Visuo-Motor-Coordination", Neural Networks, Vol. 2, pp. 159-168.
1991
69
539
Node Splitting: A Constructive Algorithm for Feed-Forward Neural Networks

Mike Wynne-Jones
Research Initiative in Pattern Recognition
St. Andrews Road, Great Malvern WR14 3PS, UK
mikewj@hermes.mod.uk

Abstract

A constructive algorithm is proposed for feed-forward neural networks, which uses node-splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network which is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system which generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions in the input space where more detail is required in the model. These nodes are identified and split in two using principal component analysis, allowing the new nodes to cover the two main modes of each oscillating vector. Nodes are selected for splitting using principal component analysis on the oscillating weight vectors, or by examining the Hessian matrix of second derivatives of the network error with respect to the weights. The second derivative method can also be applied to the input layer, where it provides a useful indication of the relative importances of parameters for the classification task. Node splitting in a standard Multi Layer Perceptron is equivalent to introducing a hinge in the decision boundary to allow more detail to be learned. Initial results were promising, but further evaluation indicates that the long range effects of decision boundaries cause the new nodes to slip back to the old node position, and nothing is gained. This problem does not occur in networks of localised receptive fields such as radial basis functions or gaussian mixtures, where the technique appears to work well.
1 Introduction

To achieve good generalisation in neural networks and other techniques for inferring a model from data, we aim to match the number of degrees of freedom of the model to that of the system generating the data. With too small a model we learn an incomplete solution, while too many free parameters capture individual training samples and noise. Since the optimum size of network is seldom known in advance, there are two alternative ways of finding it. The constructive algorithm aims to build an approximate model, and then add new nodes to learn more detail, thereby approaching the optimum network size from below. Pruning algorithms, on the other hand, start with a network which is known to be too big, and then cut out nodes or weights which do not contribute to the model. A review of recent techniques [WJ91a] has led the author to favour the constructive approach, since pruning still requires an estimate of the optimum size, and the initial large networks can take a long time to train. Constructive algorithms offer fast training of the initial small networks, with the network size and training slowness reflecting the amount of information already learned. The best approach of all would be a constructive algorithm which also allowed the pruning of unnecessary nodes or weights from the network.

The constructive algorithm trains a network until no further detail of the training data can be learned, and then adds new nodes to the network. New nodes can be added with random weights, or with pre-determined weights. Random weights are likely to disrupt the approximate solution already found, and are unlikely to be initially placed in parts of the weight space where they can learn something useful, although encouraging results have been reported in this area [Ash89]. This problem is likely to be accentuated in higher dimensional spaces.
Alternatively, weights can be pre-determined by measurements on the performance of the seed network, and this is the approach adopted here. One node is turned into two, each with half the output weight. A divergence is introduced in the weights into the nodes which is sufficient for them to behave independently in future training without disrupting the approximate solution already found.

2 Node-Splitting

A network is trained using standard techniques until no further improvement on training set performance is achieved. Since we begin with a small network, we have an approximate model of the data, which captures the dominant properties of the generating system but lacks detail. We now freeze the weights in the network, and calculate the updates which would be made to them, using simple gradient descent, by each separate training pattern. Figure 1 shows the frozen vector of weights into a single hidden node, and the scatter of proposed updates around the equilibrium position. The picture shows the case of a hidden node where there is one clear direction of oscillation. This might be caused by two clusters of data within a class, each trying to use the node in its own area of the input space, or by a decision boundary pulled clockwise by some patterns and anticlockwise by others. If the oscillation is strong, either in its exhibition of a clear direction or in comparison with other nodes in the same layer, then the node is split in two.

Figure 1: A hidden node weight vector and updates proposed by individual training patterns

The new nodes are placed one standard deviation either side of the old position. While this divergence gives the nodes a push in the right direction, allowing them to continue to diverge in later training, the overall effect on the network is small. In most cases there is very little degradation in performance as a result of the split.
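The split itself, two new nodes one standard deviation either side of the old weight vector and each taking half the output weight, can be sketched as follows (a hypothetical helper, not the author's code):

```python
import numpy as np

def split_node(w_in, w_out, direction, sd):
    """Replace one hidden node by two: input weight vectors one standard
    deviation either side of the old position along the dominant direction
    of oscillation, each new node taking half of the old output weight."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return (w_in + sd * d, w_in - sd * d), (0.5 * w_out, 0.5 * w_out)

(w1, w2), (o1, o2) = split_node(np.array([1.0, 2.0]), 0.8,
                                np.array([1.0, 0.0]), 0.25)
# the midpoint of the two new vectors is the old vector, so the immediate
# effect on the network function is small
```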
The direction and size of oscillation are calculated by principal component analysis of the weight updates. By a traditional method, we are required to make a covariance matrix of the weight updates for the weight vector into each node:

C = (1/p) Σ δw δwᵀ    (1)

where the sum runs over the p training patterns. The matrix is then decomposed to a set of eigenvalues and eigenvectors; the largest eigenvalue is the variance of oscillation and the corresponding eigenvector is its direction. Suitable techniques for performing this decomposition include Singular Value Decomposition and Householder Reduction [Vet86].

A much more suitable way of calculating the principal components of a stream of continuous measurements such as weight updates is iterative estimation. An estimate is stored for each required principal component vector, and the estimates are updated using each sample [Oja83, San89]. By Oja's method, the scalar product of the current sample vector with each current estimate of the eigenvectors is used as a matching coefficient, M. The matching coefficient is used to re-estimate the eigenvalues and eigenvectors, in conjunction with a gain term λ which decays as the number of patterns seen increases. The eigenvectors are updated by a proportion λM of the current sample, and the eigenvalues by λM². The trace (sum of eigenvalues) can also be estimated simply as the mean of the traces (sum of diagonal elements) of the individual sample covariance matrices. The principal component vectors are renormalised and orthogonalised after every few updates. This algorithm is of order n, the number of eigenvalues required, for the re-estimation, and O(n²) for the orthogonalisation; the matrix decomposition method can take exponential time, and is always much slower in practice.
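The iterative estimate can be sketched with Oja's rule on synthetic update vectors; the learning-rate schedule and the data are assumptions, and for clarity only the first principal component is estimated, with no orthogonalisation step:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic weight updates with one dominant direction of oscillation
direction = np.array([3.0, 1.0]) / np.sqrt(10.0)
updates = (rng.normal(0.0, 1.0, (5000, 1)) * direction
           + rng.normal(0.0, 0.1, (5000, 2)))

w = rng.normal(0.0, 0.1, 2)          # running eigenvector estimate
for t, u in enumerate(updates):
    gain = 1.0 / (100.0 + t)         # gain decaying with patterns seen
    m = w @ u                        # matching coefficient M
    w += gain * m * (u - m * w)      # Oja's rule: keeps ||w|| near 1

# compare against the batch eigenvector of the update covariance
batch = np.linalg.eigh(updates.T @ updates / len(updates))[1][:, -1]
```

The absolute dot product of `w` with `batch` comes out close to 1, i.e. the streaming estimate agrees with the decomposition of equation (1) without ever forming the covariance matrix.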
In a recent paper on Meiosis Networks, Hanson introduced stochastic weights in the multi layer perceptron, with the aim of avoiding local minima in training [Han90]. A sample was taken from a gaussian distribution each time a weight was used; the mean was updated by gradient descent, and the variance reflected the network convergence. The variance was allowed to decay with time, so that the network would approach a deterministic state, but was increased in proportion to the updates made to the mean. While the network was far from convergence these updates were large, and the variance remained large. Node splitting was implemented in this system, in nodes where the variances on the weights were large compared with the means. In such cases, two new nodes were created with the weights one standard deviation either side of the old mean: one SD is added to all weights to one node, and subtracted for all weights to the other.

Preliminary results were promising, but there appear to be two problems with this approach for node-splitting. First, the splitting criterion is not good: a useless node with all weights close to zero could have comparatively large variances on the weights owing to noise. This node would be split indefinitely. Secondly, and more interestingly, the split is made without regard to the correlations in sign between the weight updates, shown as dots in the scatter plots of figure 2. In figure 2a, Meiosis would correctly place new nodes in the positions marked with crosses, while in figure 2b, the new nodes would be placed in completely the wrong places. This problem does not occur in the node splitting scheme based on principal component analysis.

Figure 2: Meiosis networks split correctly if the weight
updates are correlated in sign (a), but fail when they are not (b).

3 Selecting nodes for splitting

Node splitting is carried out in the direction of maximum variance of the scatter plot of weight updates proposed by individual training samples. The hidden layer nodes most likely to benefit from splitting are those for which the non-spherical nature of the scatter plot is most pronounced. In later implementations this criterion was measured by comparing the largest eigenvalue with the sum of the eigenvalues, both these quantities being calculated by the iterative method. This is less simple in cases where there are a number of dominant directions of variance; the scatter plot might, for example, be a four dimensional disk in a ten dimensional space, and hence present the possibility of splitting one node into eight. It is hoped that these more complicated splits will be the subject of further research.

An alternative approach in determining the need of nodes to be split, in comparison with other nodes in the same layer, is to use the second derivatives of the network error with respect to a parameter of the nodes which is normalised across all nodes in a given layer of the network. Such a parameter was proposed by Mozer and Smolensky in [Smo89]: a multiplicative gating function is applied to the outputs of the nodes, with its gating parameter set to one. Small increments in this parameter can be used to characterise the error surface around the unity value, with the result that derivatives are normalised across all nodes in a given layer of the network. Mozer and Smolensky replaced the sum squared error criterion with a modulus error criterion to preserve non-zero gradients close to the local minimum reached in training; we prefer to characterise the true error surface by means of second derivatives, which can be calculated by repeated use of the chain rule (backpropagation).
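The first criterion of this section, comparing the largest eigenvalue of each node's update scatter with the trace, can be sketched as follows (the helper names are ours):

```python
import numpy as np

def split_scores(update_sets):
    """Ratio of the largest eigenvalue of each node's update covariance to
    its trace: near 1 means one dominant direction of oscillation (a good
    split candidate), near 1/dim means a spherical, uninformative scatter."""
    scores = []
    for du in update_sets:
        cov = du.T @ du / len(du)
        vals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
        scores.append(float(vals[-1] / vals.sum()))
    return scores

rng = np.random.default_rng(3)
oscillating = (np.outer(rng.normal(size=400), [1.0, 1.0])
               + 0.05 * rng.normal(size=(400, 2)))   # one dominant direction
spherical = rng.normal(size=(400, 2))                # no dominant direction
s_osc, s_sph = split_scores([oscillating, spherical])
```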
Backpropagation of second derivatives has previously been reported in [Sol90] and [Hea90]. Since a high curvature error minimum in the space of the gating parameter for a particular node indicates steep gradients surrounding the minimum, it is these nodes which exhibit the greatest instability in their weight-space position. In the weight space, if the curvature is high only in certain directions, we have the situation in figure 1, where the node is oscillating, and is in need of splitting. If the curvature is high in all directions in comparison with other nodes, the network is highly sensitive to changes in the node or its weights, and again it will benefit from splitting. At the other end of the scale of curvature sensitivity, a node or weight with very low curvature is one to which the network error is quite insensitive, and the parameter is a suitable candidate for pruning. This scheme has previously been used for weight pruning by Le Cun, Denker et al. [Sol90], and offers the potential for an integrated system of splitting and pruning - a truly adaptive network architecture.

3.1 Applying the sensitivity measure to input nodes

In addition to using the gating parameter sensitivity to select nodes for pruning, Mozer and Smolensky mention the possibility of using it on the input nodes to indicate those inputs to which the classification is most sensitive. This has been implemented in our system with the second derivative sensitivity measure, and applied to a large financial classification problem supplied by THORN EMI Research. The analysis was carried out on the 78-dimensional data, and the input sensitivities varied over several orders of magnitude. The inputs were grouped into four sets according to sensitivity, and MLPs of 10 hidden nodes were trained on each subset of the data.
While the low sensitivity groups failed to learn anything at all, the higher sensitivity groups quickly attained a reasonable classification rate. Identification of useless inputs leads to greatly increased training speed in future analysis, and can yield valuable economies in future data collection. This work is reported in more detail in [WJ91b].

4 Evaluation in Multi Layer Perceptron networks

Despite the promising results from initial evaluations, further testing showed that the splitter technique was often unable to improve on the performance of the network used as a seed for the first split. These tests were carried out on a number of different classification problems, where large numbers of hidden nodes were already known to be required, and with a number of different splitting criteria. Prolonged experimentation and consideration of this failure led to the hypothesis that a split might be made to correct some misclassified patterns in one region of the input space but, owing to the long range effects of MLP decision boundaries, the changed positions of the planes might cause a much greater number of misclassifications elsewhere. These would tend to cause the newly created nodes to slip back to the position of the node from which they were created, with no overall benefit. This possibility was tested by re-implementing the splitter technique in a gaussian mixture modelling system, which uses a network of localised receptive fields, and hence does not have the long range effects which occurred in the multi layer perceptron.

5 Implementation of the splitter in a Gaussian Mixture Model, and the results

The Gaussian Mixtures Model [Cox91] is a clustering algorithm, which attempts to model the distribution of the points in a data set.
It consists of a number of multivariate gaussian distributions in different positions in the input space, and with different variances in different directions. The responses of these receptive fields (bumps) are weighted and summed together; the weights are calculated to satisfy the PDF constraint that the responses should sum to one over the data set. For the experiments on node splitting, the variance was the same in all directions for a particular bump, leading to a model which is a sum of weighted spherical gaussian distributions of different sizes and in different positions. The model is trained by gradient ascent in the likelihood of the model fitting the data, which leads to a set of learning rules for re-estimating the weights, then the centre positions of the receptive fields, then their variances.

For the splitter, a small model is trained until nothing more can be learned, and the parameters are frozen. The training set is run through once more, and the updates are calculated which each pattern attempts to make to the centre position of each receptive field. The first principal component and trace of these updates are calculated by the iterative method, and any node for which the principal component variance is a large proportion of the trace is split in two. The algorithm is quick to converge, and is slowed down only a little by the overhead of computing the principal component and trace. Figure 3 shows the application of the gaussian mixture splitter to modelling a circle and an enclosing annulus; in the circle (a) there is no dominant principal component direction in the data covered by the receptive field of each node (shown at one standard deviation by a circle), while in (b) three nodes are clearly insufficient to model the annulus, and one has just undergone a split. (c) shows the same data set and model a little later in training after a number of splits have taken place.
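The quantity being analysed, the centre update each pattern proposes for each spherical receptive field (the gradient of the data log-likelihood with respect to the centres), can be sketched as follows; the helper name is ours, and a shared scalar variance is assumed so that the gaussian normalising constants cancel in the responsibilities:

```python
import numpy as np

def centre_updates(X, mu, var, w):
    """Per-pattern updates to the centres of a spherical-gaussian mixture:
    responsibility-weighted pulls r_nk * (x_n - mu_k) / var, the terms whose
    scatter is fed to the principal-component / trace split test."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)   # (n, k)
    resp = w * np.exp(-0.5 * d2 / var)     # unnormalised responsibilities
    resp = resp / resp.sum(axis=1, keepdims=True)               # r_nk
    return resp[:, :, None] * (X[:, None, :] - mu[None, :, :]) / var

# a single bump at the origin being pulled toward a cluster near (1, 0)
X = np.array([[1.0, 0.0], [1.2, 0.1], [0.9, -0.1]])
upd = centre_updates(X, mu=np.zeros((1, 2)), var=1.0, w=np.array([1.0]))
```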
The technique has been evaluated on a number of other simple problems, with no negative results to date.

Figure 3: Gaussian mixture model with node-splitting applied to a circle and surrounding annulus

6 Conclusions

The splitter technique based on taking the principal component of the influences on hidden nodes in a network has been shown to be useful in the multi layer perceptron in only a very limited number of cases. The split in this kind of network corresponds to a hinge in the decision boundary, which corrects the errors for which it was calculated, but usually causes more errors in other parts of the input space. This problem does not occur in networks of localised receptive fields such as radial basis functions or gaussian mixture distributions, where it appears to work very well. Further studies will include splitting nodes into more than two, in cases where there is more than one dominant principal component, and applying node-splitting to different modelling algorithms, and to gaussian mixtures in hidden markov models for speech recognition. The analysis of the sensitivity of the network error to individual nodes gives an ordered list which can be used for both splitting and pruning in the same network, although splitting does not generally work in the MLP. This measure has been demonstrated in the input layer, to identify which network inputs are more or less useful in the classification task.

Acknowledgements

The author is greatly indebted to John Bridle and Steve Luttrell of RSRE, Neil Thacker of Sheffield University, and colleagues in the Research Initiative in Pattern Recognition and its member companies for helpful comments and advice; also to David Bounds of Aston University and RIPR for advice and encouragement.

References

[Ash89] Timur Ash. Dynamic node creation in backpropagation networks.
Technical Report 8901, Institute for Cognitive Science, UCSD, La Jolla, California 92093, February 1989.
[Cox91] John S Bridle & Stephen J Cox. Recnorm: Simultaneous normalisation and classification applied to speech recognition. In Richard P Lippmann & John E Moody & David S Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 234-240, San Mateo, CA, September 1991. Morgan Kaufmann Publishers.
[Han90] Stephen Jose Hanson. Meiosis networks. In David S Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 533-541, San Mateo, CA, April 1990. Morgan Kaufmann Publishers.
[Hea90] Anthony JR Heading. An analysis of noise tolerance in multi-layer perceptrons. Research Note SP4 122, Royal Signals and Radar Establishment, St Andrews Road, Malvern, Worcestershire, WR14 3PS, UK, July 1990.
[Oja83] E Oja. Subspace Methods of Pattern Recognition. Research Studies Press Ltd, Letchworth, UK, 1983.
[San89] TD Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2:459-473, 1989.
[Smo89] MC Mozer & P Smolensky. Skeletonization: A technique for trimming the fat from a neural network. In DS Touretzky, editor, Advances in Neural Information Processing Systems 1, pages 107-115, San Mateo, CA, April 1989. Morgan Kaufmann Publishers.
[Sol90] Yann Le Cun & John S Denker & Sara A Solla. Optimal brain damage. In David S Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 598-605, San Mateo, CA, April 1990. Morgan Kaufmann Publishers.
[Vet86] WH Press & BP Flannery & SA Teukolsky & WT Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1986.
[WJ91a] Mike Wynne-Jones. Constructive algorithms and pruning: Improving the multi layer perceptron. In R Vichnevetsky & JJH Miller, editors, Proceedings of the 13th IMACS World Congress on Computation and Applied Mathematics, pages 747-750, Dublin, July 1991.
IMACS '91, IMACS.
[WJ91b] Mike Wynne-Jones. Self-configuring neural networks, a new constructive algorithm, and assessing the importance of individual inputs. Technical Report X2345!1, Thorn EMI Central Research Laboratories, Dawley Road, Hayes, Middlesex, UB3 1HH, UK, March 1991.
1991
7
540
420

VISIT: A Neural Model of Covert Visual Attention

Subutai Ahmad
Siemens Research and Development, ZFE ST SN6, Otto-Hahn Ring 6, 8000 Munich 83, Germany.
ahmad~bsUD4Gztivax.siemens.eom

Abstract

Visual attention is the ability to dynamically restrict processing to a subset of the visual field. Researchers have long argued that such a mechanism is necessary to efficiently perform many intermediate level visual tasks. This paper describes VISIT, a novel neural network model of visual attention. The current system models the search for target objects in scenes containing multiple distractors. This is a natural task for people, it is studied extensively by psychologists, and it requires attention. The network's behavior closely matches the known psychophysical data on visual search and visual attention. VISIT also matches much of the physiological data on attention and provides a novel view of the functionality of a number of visual areas. This paper concentrates on the biological plausibility of the model and its relationship to the primary visual cortex, pulvinar, superior colliculus and posterior parietal areas.

1 INTRODUCTION

Visual attention is perhaps best understood in the context of visual search, i.e. the detection of a target object in images containing multiple distractor objects. This task requires solving the binding problem and has been extensively studied in psychology (see [16] for a review). The basic experimental finding is that a target object containing a single distinguishing feature can be detected in constant time, independent of the number of distractors. Detection based on a conjunction of features, however, takes time linear in the number of objects, implying a sequential search process (there are exceptions to this general rule). It is generally accepted

*Thanks to Steve Omohundro, Anne Treisman, Joe Malpeli, and Bill Baird for enlightening discussions.
Much of this research was conducted at the International Computer Science Institute, Berkeley, CA.

Figure 1: Overview of VISIT

that some form of covert attention1 is necessary to accomplish this task. The following sections describe VISIT, a connectionist model of this process. The current paper concentrates on the relationships to the physiology of attention, although the psychological studies are briefly touched on. For further details on the psychological aspects see [1, 2].

2 OVERVIEW OF VISIT

We first outline the essential characteristics of VISIT. Figure 1 shows the basic architecture. A set of features are first computed from the image. These features are analogous to the topographic maps computed early in the visual system. There is one unit per location per feature, with each unit computing some local property of the image. Our current implementation uses four feature maps: red, blue, horizontal, and vertical. A parallel global sum of each feature map's activity is computed and is used to detect the presence of activity in individual maps. The feature information is fed through two different systems: a gating network and a priority network. The gating network implements the focus - its function is to restrict higher level processing to a single circular region. Each gate unit receives the coordinates of a circle as input. If it is outside the circle, it turns on and inhibits corresponding locations in the gated feature maps. Thus the network can filter image properties based on an external control signal. The required computation is a simple second order weighted sum and takes two time steps [1].

1Covert attention refers to the ability to concentrate processing on a single image region without any overt actions such as eye movements.
The priority network ranks image locations in parallel and encodes the information in a manner suited to the updating of the focus of attention. There are three units per location in the priority map. The activity of the first unit represents the location's relevance to the current task. It receives activation from the feature maps in a local neighborhood of the image. The value of the i'th such unit is calculated as:

Ai = G( Σ(x,y)∈RFi Σf∈F Pf Afxy )    (1)

where Afxy is the activation of the unit computing feature f at location (x, y), RFi denotes the receptive field of unit i, Pf is the priority given to feature map f, and G is a monotonically increasing function such as the sigmoid. Pf is represented as the real valued activation of individual units and can be dynamically adjusted according to the task. Thus by setting Pf for a particular feature to 1 and all others to 0, only objects containing that feature will influence the priority map. Section 2.1 describes a good strategy for setting Pf.

The other two units at each location encode an "error vector", i.e. the vector difference between the unit's location and the center of the focus. These vectors are continually updated as the focus of attention moves around. To shift the focus to the most relevant location, the network simply adds the error vector corresponding to the highest priority unit to the activations of the units representing the focus's center. Once a location has been visited, the corresponding relevance unit is inhibited, preventing the network from continually attending to the highest priority location.

The control networks are responsible for mediating the information flow between the gating and priority networks, as well as incorporating top-down knowledge. The following section describes the part which sets the priority values for the feature maps. The rest of the networks are described in detail in [1].
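Equation (1) can be sketched directly; the wrap-around receptive field (via `np.roll`) and the 3x3 neighborhood size are simplifying assumptions of ours:

```python
import numpy as np

def priority_map(feature_maps, priorities, rf=1):
    """Relevance units A_i = G(sum over RF_i, sum over f, of P_f * A_fxy):
    pool priority-weighted feature activity over a local receptive field
    and pass it through a sigmoid G."""
    weighted = np.tensordot(priorities, feature_maps, axes=1)  # sum over f
    pooled = np.zeros_like(weighted)
    for dy in range(-rf, rf + 1):
        for dx in range(-rf, rf + 1):   # toroidal neighborhood for simplicity
            pooled += np.roll(np.roll(weighted, dy, axis=0), dx, axis=1)
    return 1.0 / (1.0 + np.exp(-pooled))                       # G = sigmoid

fmaps = np.zeros((4, 8, 8))            # red, blue, horizontal, vertical
fmaps[0, 2, 2] = 1.0                   # a single red item at (2, 2)
pm = priority_map(fmaps, np.array([1.0, 0.0, 0.0, 0.0]))
# locations near the red item rise above the 0.5 background level
```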
Note that the control functions are fully implemented as networks of simple units and thus require no "homunculus" to oversee the process.

2.1 SWIFT: A FAST SEARCH STRATEGY

The main function of SWIFT² is to integrate top-down and bottom-up knowledge to efficiently guide the search process. Top-down information about the target features is stored in a set of units. Let T be this set of features. Since the desired object must contain all the features of T, any of the corresponding feature maps may be searched. Using the ability to weight feature maps differently, the network removes the influence of all but one of the features in T. By setting this map's priority to 1, and all others to 0, the system will effectively prune objects which do not contain this feature. To minimize search time, it should choose the feature corresponding to the smallest number of objects. Since it is difficult to count the number of objects in parallel, the network chooses the map with the minimal total activity as the one likely to contain the minimal number of objects. (If the target features are not known in advance, SWIFT chooses the minimal feature map over all features. The net effect is to always pick the most distinctive feature.)

² Hence the name SWIFT: Search WIth Features Thrown out.

2.2 RELATIONSHIP TO PSYCHOPHYSICAL DATA

The run-time behavior of the system closely matches the data on human visual search. Visual attention in people is known to be very quick, taking as little as 40-80 msecs to engage. Given that cortical neurons can fire about once every 10 msecs, this leaves time for at most 8 sequential steps. In VISIT, unlike other implementations of attention [10], the calculation of the next location is separated from the gating process. This allows the gating to be extremely fast, requiring only 2 time steps.
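SWIFT's pruning rule from Section 2.1 (among the target's features, search the map with the least total activity) can be sketched in a few lines; the toy maps below are made up for illustration:

```python
import numpy as np

def swift_select(feature_maps, target_features=None):
    """Pick the feature map to search: among the target's features (or all
    features if the target is unknown), choose the one with minimal total
    activity, i.e. the map likely to contain the fewest objects."""
    totals = {f: feature_maps[f].sum() for f in feature_maps}
    candidates = target_features if target_features else totals.keys()
    return min(candidates, key=lambda f: totals[f])

maps = {
    "red":        np.array([[1, 0], [0, 1]]),   # two red objects
    "horizontal": np.array([[1, 1], [1, 0]]),   # three horizontal objects
}
# Searching for a red horizontal bar: prune on the rarer feature, "red".
assert swift_select(maps, ["red", "horizontal"]) == "red"
```

Using the global activity sums stands in for the parallel per-map sums the model computes; search time then scales with the number of objects in the chosen map, the quantity M discussed below.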
Iterative models, which select the most active object through lateral inhibition, require time proportional to the distance in pixels between maximally separated objects. These models are not consistent with the 80 msec time requirement. During visual search, SWIFT always searches the minimal feature map. The critical variable that determines search time is M, the number of objects in the minimal feature map. Search time will be linear in M. It can be shown that VISIT plus SWIFT is consistent with all of Treisman's original experiments including single-feature search, conjunctive search, 2:1 slope ratios, search asymmetries, and illusory conjuncts [16], as well as the exceptions reported in [5, 14]. With an assumption about the features that are coded (consistent with current physiological knowledge), the results in [7, 11] can also be modeled. (This is described in more detail in [2].)

3 PHYSIOLOGY OF VISUAL ATTENTION

The above sections have described the general architecture of VISIT. There is a fairly strong correspondence between the modules in VISIT and the various visual areas involved in attention. The rest of the paper discusses these relationships.

3.1 TOPOGRAPHIC FEATURE MAPS

Each of the early visual areas, LGN, V1, and V2, forms several topographic maps of retinal activity. In V1 alone there are a thousand times as many neurons as there are fibers in the optic nerve, enough to form several hundred feature maps. There is a diverse list of features thought to be computed in these areas, including orientations, colors, spatial frequencies, motion, etc. [6]. These areas are analogous to the set of early feature maps computed in VISIT. In VISIT there are actually two separate sets of feature maps: early features computed directly from the image and gated feature maps. It might seem inefficient to have two copies of the same features. An alternate possibility is to directly inhibit the early feature maps themselves, and so eliminate the need for two sets.
However, in a focused state, such a network would be unable to make global decisions based on the features. With the configuration described above, at some hardware cost, the network can efficiently access both local and global information simultaneously. SWIFT relies on this ability to efficiently carry out visual search. There is evidence for a similar setup in the human visual system. Although researchers have searched actively, no local attentional effects have been found in the early feature maps. (Only global effects, such as an overall increase in firing rate, have been noticed.) The above reasoning provides a possible computational explanation of this phenomenon.

A natural question to ask is: what is the best set of features? For fast visual search, if SWIFT is used as a constraint, then we want the set of features that minimizes M over all possible images and target objects, i.e. the features that best discriminate objects. It is easy to see that the optimal set of features should be maximally uncorrelated with a near-uniform distribution of feature values. Extracting the principal components of the distribution of images gives us exactly those features. It is well known that a single Hebb neuron extracts the largest principal component; sets of such neurons can be connected to select successively smaller components. Moreover, as some researchers have demonstrated, simple Hebbian learning can lead to features that look very similar to the features in visual cortex (see [3] for a review). If the early features in visual cortex do in fact represent principal components, then SWIFT is a simple strategy that takes advantage of it.

3.2 THE PULVINAR

Contrary to the early visual system, local attentional effects have been discovered in the pulvinar. Recordings of cells in the lateral pulvinar of awake, behaving monkeys have demonstrated a spatially localized enhancement effect tied to selective attention [17].
Given this property it is tempting to pinpoint the pulvinar as the locus of the gated feature maps. The general connectivity patterns provide some support for this hypothesis. The pulvinar is located in the dorsal part of the thalamus and is strongly connected to just about every visual area including LGN, V1, V2, superior colliculus, the frontal eye fields, and posterior parietal cortex. The projections are topography-preserving and non-overlapping. As a result, the pulvinar contains several high-resolution maps of visual space, possibly one map for each one in primary visual cortex. In addition, there is a thin sheet of neurons around the pulvinar, the reticular complex, with exclusively inhibitory connections to the neurons within [4]. This is exactly the structure necessary to implement VISIT's gating system. There are other clues which also point to the thalamus as the gating system. Human patients with thalamic lesions have difficulty engaging attention and inhibiting crosstalk from other locations. Lesioned monkeys give slower responses when competing events are present in the visual field [12]. The hypothesis can be tested by further experiments. In particular, if a map in the pulvinar corresponding to a particular cortical area is damaged, then there should be a corresponding deficit in the ability to bind those specific features in the presence of distractors. In the absence of distractors, the performance should remain unchanged.

3.3 SUPERIOR COLLICULUS

The superior colliculus (SC) is involved both in the generation of eye saccades [15] and possibly in covert attention [12]. It is probably also involved in the integration of location information from various different modalities. Like the pulvinar, the SC is a structure with converging inputs from several different modalities including visual, auditory, and somatosensory [15]. The superior colliculus contains a representation similar to VISIT's error maps for eye saccades [15].
At each location, groups of neurons represent the vector in motor coordinates required to shift the eye to that spot. In [13] the authors studied patients with a particular form of Parkinson's disease where the SC is damaged. These patients are able to make horizontal, but not vertical, eye saccades. The experiments showed that although the patients were still able to move their covert attention in both the horizontal and vertical directions, the speed of orienting in the vertical direction was much slower. In addition, [12] mentions that patients with this damage shift attention to previously attended locations as readily as to new ones, suggesting a deficit in the mechanism that inhibits previously attended locations. These findings are consistent with the priority map in VISIT. A first guess would identify the superior colliculus as the priority map; however, this is probably inaccurate. More recent evidence suggests that the SC might be involved only in bottom-up shifts of attention (induced by exogenous stimuli as opposed to endogenous control signals) (Rafal, personal communication). There is also evidence that the frontal eye fields (FEF) are involved in saccade generation in a manner similar to the superior colliculus, particularly for saccades to complex stimuli [17]. The role of the FEF in covert attention is currently unknown.

3.4 POSTERIOR PARIETAL AREAS

The posterior parietal cortex (PP) may provide an answer. One hypothesis that is consistent with the data is that there are several different priority maps for bottom-up and top-down stimuli. The top-down maps exist within PP, whereas the bottom-up maps exist in SC and possibly FEF. PP receives a significant projection from superior colliculus and may be involved in the production of voluntary eye saccades [17]. Experiments suggest that it is also involved in covert shifts of attention.
There is evidence that neurons in PP increase their firing rate when in a state of attentive fixation [9]. Damage to PP leads to deficits in the ability to disengage covert attention away from a target [12]. In the context of eye saccades, there exist neurons in PP that fire about 55 msecs before an actual saccade. These results suggest that the control structure and the aspects of the network that integrate priority information from the various modules might also reside within PP.

4 DISCUSSION AND CONCLUSIONS

The above relationships between VISIT and the brain provide a coherent picture of the functionality of the visual areas. The literature is consistent with having the LGN, V1, and V2 as the early feature maps, the pulvinar as a gating system, the superior colliculus and frontal eye fields as a bottom-up priority map, and posterior parietal cortex as the locus of a higher-level priority map as well as the control networks. Figure 2 displays the various visual areas together with their proposed functional relationships.

[Figure 2: Proposed functionality of various visual areas. Lines denote major pathways. Those connections without arrows are known to be bi-directional.]

In [12] the authors suggest that neurons in the parietal lobe disengage attention from the present focus, those in the superior colliculus shift attention to the target, and neurons in the pulvinar engage attention on it. This hypothesis looks at the time course of an attentional shift (disengage, move, engage) and assigns three different areas to the three different intervals within that temporal sequence. In VISIT, these three correspond to a single operation (add a new update vector to the current location) and a single module (the control network). Instead, the emphasis is on assigning different computational responsibilities to the various modules. Each module operates continuously but is involved in a different computation.
While the gating network is being updated to a new location, the priority network and portions of the control network are continuously updating the priorities. The model doesn't yet explain the findings in [8], where neurons in V4 exhibited a localized attentional response, but only if the stimuli were within the receptive fields. However, these neurons have relatively large receptive fields and are known to code for fairly high-level features. It is possible that this corresponds to a different form of attention working at a much higher level. By no means is VISIT intended to be a detailed physiological model of attention. Precise modeling of even a single neuron can require significant computational resources. There are many physiological details that are not incorporated. However, at the macro level there are interesting relationships between the individual modules in VISIT and the known functionality of the different areas. The advantage of an implemented computational model such as VISIT is that it allows us to examine the underlying computations involved and hopefully better understand the underlying processes.

References

[1] S. Ahmad. VISIT: An Efficient Computational Model of Human Visual Attention. PhD thesis, University of Illinois at Urbana-Champaign, Champaign, IL, September 1991. Also TR-91-049, International Computer Science Institute, Berkeley, CA.
[2] S. Ahmad and S. Omohundro. Efficient visual search: A connectionist solution. In 13th Annual Conference of the Cognitive Science Society, Chicago, IL, August 1991.
[3] S. Becker. Unsupervised learning procedures for neural networks. International Journal of Neural Systems, 1991.
[4] F. Crick. Function of the thalamic reticular complex: the searchlight hypothesis. In Proceedings of the National Academy of Sciences, volume 81, pages 4586-4590, 1984.
[5] H.E. Egeth, R.A. Virzi, and H. Garbart. Searching for conjunctively defined targets.
Journal of Experimental Psychology: Human Perception and Performance, 10(1):32-39, 1984.
[6] D. Van Essen and C. H. Anderson. Information processing strategies and pathways in the primate retina and visual cortex. In S.F. Zornetzer, J.L. Davis, and C. Lau, editors, An Introduction to Neural and Electronic Networks. Academic Press, 1990.
[7] P. McLeod, J. Driver, and J. Crisp. Visual search for a conjunction of movement and form is parallel. Nature, 332:154-155, 1988.
[8] J. Moran and R. Desimone. Selective attention gates visual processing in the extrastriate cortex. Science, 229, March 1985.
[9] V.B. Mountcastle, R.A. Anderson, and B.C. Motter. The influence of attentive fixation upon the excitability of the light-sensitive neurons of the posterior parietal cortex. The Journal of Neuroscience, 1(11):1218-1235, 1981.
[10] M. Mozer. The Perception of Multiple Objects: A Connectionist Approach. MIT Press, Cambridge, MA, 1991.
[11] K. Nakayama and G. Silverman. Serial and parallel processing of visual feature conjunctions. Nature, 320:264-265, 1986.
[12] M.I. Posner and S.E. Petersen. The attention system of the human brain. Annual Review of Neuroscience, 13:25-42, 1990.
[13] M.I. Posner, J.A. Walker, and R.D. Rafal. Effects of parietal injury on covert orienting of attention. The Journal of Neuroscience, 4(7):1863-1874, 1982.
[14] P.T. Quinlan and G.W. Humphreys. Visual search for targets defined by combinations of color, shape, and size: An examination of the task constraints of feature and conjunction searches. Perception & Psychophysics, 41:455-472, 1987.
[15] D.L. Sparks. Translation of sensory signals into commands for control of saccadic eye movements: Role of primate superior colliculus. Physiological Reviews, 66(1), 1986.
[16] A. Treisman. Features and objects: The Fourteenth Bartlett Memorial Lecture. The Quarterly Journal of Experimental Psychology, 40A(2), 1988.
[17] R.H. Wurtz and M.E. Goldberg, editors. The Neurobiology of Saccadic Eye Movements.
Elsevier, New York, 1989.
Kernel Regression and Backpropagation Training with Noise

Petri Koistinen and Lasse Holmström
Rolf Nevanlinna Institute, University of Helsinki
Teollisuuskatu 23, SF-00510 Helsinki, Finland

Abstract

One method proposed for improving the generalization capability of a feedforward network trained with the backpropagation algorithm is to use artificial training vectors which are obtained by adding noise to the original training vectors. We discuss the connection of such backpropagation training with noise to kernel density and kernel regression estimation. We compare by simulated examples (1) backpropagation, (2) backpropagation with noise, and (3) kernel regression in mapping estimation and pattern classification contexts.

1 INTRODUCTION

Let X and Y be random vectors taking values in R^d and R^p, respectively. Suppose that we want to estimate Y in terms of X using a feedforward network whose input-output mapping we denote by y = g(x, w). Here the vector w includes all the weights and biases of the network. Backpropagation training using the quadratic loss (or error) function can be interpreted as an attempt to minimize the expected loss

    λ(w) = E ‖g(X, w) − Y‖².    (1)

Suppose that E‖Y‖² < ∞. Then the regression function

    m(x) = E[Y | X = x]    (2)

minimizes the loss E‖b(X) − Y‖² over all Borel measurable mappings b. Therefore, backpropagation training can also be viewed as an attempt to estimate m with the network g.

1034 Koistinen and Holmström

In practice, one cannot minimize λ directly because one does not know enough about the distribution of (X, Y). Instead, given training samples (x_i, y_i), one minimizes the sample estimate

    λ̂_n(w) = n^{-1} Σ_{i=1}^n ‖g(x_i, w) − y_i‖²    (3)

in the hope that weight vectors w that are near-optimal for λ̂_n are also near-optimal for λ. In fact, under rather mild conditions the minimizer of λ̂_n actually converges towards the minimizing set of weights for λ as n → ∞, with probability one (White, 1989).
However, if n is small compared to the dimension of w, minimization of λ̂_n can easily lead to overfitting and poor generalization, i.e., weights that render λ̂_n small may produce a large expected error λ. Many cures for overfitting have been suggested. One can divide the available samples into a training set and a validation set, perform iterative minimization using the training set, and stop minimization when network performance over the validation set begins to deteriorate (Holmström et al., 1990; Weigend et al., 1990). In another approach, the minimization objective function is modified to include a term which tries to discourage the network from becoming too complex (Weigend et al., 1990). Network pruning (see, e.g., Sietsma and Dow, 1991) has similar motivation. Here we consider the approach of generating artificial training vectors by adding noise to the original samples. We have recently analyzed such an approach and proved its asymptotic consistency under certain technical conditions (Holmström and Koistinen, 1990).

2 ADDITIVE NOISE AND KERNEL REGRESSION

Suppose that we have n original training vectors (x_i, y_i) and want to generate artificial training vectors using additive noise. If the distributions of both X and Y are continuous it is natural to add noise to both the X and Y components of the sample. However, if the distribution of X is continuous and that of Y is discrete (e.g., in pattern classification), it feels more natural to add noise to the X components only. In Figure 1 we present sampling procedures for both cases. In the x-only case the additive noise is generated from a random vector S_x with density K_x, whereas in the x-and-y case the noise is generated from a random vector S_xy with density K_xy. Notice that we control the magnitude of noise with a scalar smoothing parameter h > 0. In both cases the sampling procedures can be thought of as generating random samples from new random vectors X_h^(n) and Y_h^(n).
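Both sampling procedures are easy to state in code. In the sketch below, the Gaussian choices for K_x and K_xy are assumptions (they match the N(0, I) noise used later in the experiments), and the data are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x_only(X, Y, h, n_artificial):
    """Procedure 1: jitter only the x components (e.g. for classification)."""
    idx = rng.integers(len(X), size=n_artificial)          # uniform index I
    sx = rng.standard_normal((n_artificial, X.shape[1]))   # noise with density K_x
    return X[idx] + h * sx, Y[idx]

def sample_x_and_y(X, Y, h, n_artificial):
    """Procedure 2: jitter both the x and y components."""
    idx = rng.integers(len(X), size=n_artificial)
    sx = rng.standard_normal((n_artificial, X.shape[1]))
    sy = rng.standard_normal((n_artificial, Y.shape[1]))
    return X[idx] + h * sx, Y[idx] + h * sy

# Toy data resembling the mapping-estimation experiment below.
X = rng.uniform(-np.pi, np.pi, size=(40, 1))
Y = 0.4 * np.sin(X) + 0.5
Xa, Ya = sample_x_and_y(X, Y, h=0.15, n_artificial=400)   # 10n artificial vectors
```

Each artificial pair is an original pair with its index drawn uniformly and scaled noise added, which is exactly what makes the trained network approximate the smoothed regression function derived next.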
Using the same argument as in the Introduction we see that a network trained with the artificial samples tends to approximate the regression function E[Y_h^(n) | X_h^(n)]. Generate I uniformly on {1, ..., n} and denote by f and f(·|I = i) the density and conditional density of X_h^(n). Then in the x-only case we get

    m_h^(n)(X_h^(n)) := E[Y_h^(n) | X_h^(n)] = Σ_{i=1}^n y_i P(I = i | X_h^(n))

Kernel Regression and Backpropagation Training with Noise 1035

Procedure 1. (Add noise to x only)
1. Select i ∈ {1, ..., n} with equal probability for each index.
2. Draw a sample s_x from density K_x on R^d.
3. Set X_h^(n) = x_i + h s_x, Y_h^(n) = y_i.

Procedure 2. (Add noise to both x and y)
1. Select i ∈ {1, ..., n} with equal probability for each index.
2. Draw a sample (s_x, s_y) from density K_xy on R^(d+p).
3. Set X_h^(n) = x_i + h s_x, Y_h^(n) = y_i + h s_y.

Figure 1: Two Procedures for Generating Artificial Training Vectors.

    = Σ_{i=1}^n y_i f(X_h^(n) | I = i) P(I = i) / f(X_h^(n))
    = Σ_{i=1}^n y_i n^{-1} h^{-d} K_x((X_h^(n) − x_i)/h) / Σ_{j=1}^n n^{-1} h^{-d} K_x((X_h^(n) − x_j)/h).

Denoting K_x by k we obtain

    m_h^(n)(x) = Σ_{i=1}^n k((x − x_i)/h) y_i / Σ_{j=1}^n k((x − x_j)/h).    (4)

We arrive at the same expression in the x-and-y case as well, provided that ∫ y K_xy(x, y) dy = 0 and that we take k(x) = ∫ K_xy(x, y) dy (Watson, 1964). Expression (4) is known as the (Nadaraya-Watson) kernel regression estimator (Nadaraya, 1964; Watson, 1964; Devroye and Wagner, 1980). A common way to train a p-class neural network classifier is to train the network to associate a vector x from class j with the j'th unit vector (0, ..., 0, 1, 0, ..., 0). It is easy to see that then the kernel regression estimator components estimate the class a posteriori probabilities using (Parzen-Rosenblatt) kernel density estimators for the class-conditional densities. Specht (1990) argues that such a classifier can be considered a neural network. Analogously, a kernel regression estimator can be considered a neural network, though such a network would need units proportional to the number of training samples.
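Estimator (4) can be written directly as a weighted average; the Gaussian kernel and the toy noise-free data below are illustrative assumptions:

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """m_h(x) = sum_i k((x - x_i)/h) y_i / sum_j k((x - x_j)/h),
    here with a Gaussian kernel k (normalizing constants cancel)."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2)   # k((x - x_i)/h), up to a constant
    return (w * Y).sum() / w.sum()

# Toy 1-D data from the mapping used in the experiments, without noise.
X = np.linspace(-np.pi, np.pi, 41)
Y = 0.4 * np.sin(X) + 0.5
est = nadaraya_watson(0.0, X, Y, h=0.2)
assert abs(est - 0.5) < 0.05   # close to the true value 0.4 sin(0) + 0.5 = 0.5
```

Note that the estimate at any x is a convex combination of the y_i, with weights that decay with distance from x; the single parameter h controls how local that averaging is.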
Recently Specht (1991) has advocated using kernel regression and has also presented a clustering variant requiring only a fixed number of units. Notice also the resemblance of kernel regression to certain radial basis function schemes (Moody and Darken, 1989; Stokbro et al., 1990). An often-used method for choosing h is to minimize the cross-validated error (Härdle and Marron, 1985; Friedman and Silverman, 1989)

    M(h) = n^{-1} Σ_{i=1}^n ‖m_{h,i}^(n)(x_i) − y_i‖²,    (5)

where m_{h,i}^(n) denotes the estimator (4) computed with the i'th sample point left out. Another possibility is to use a method suggested by kernel density estimation theory (Duin, 1976; Habbema et al., 1974) whereby one chooses the h maximizing a cross-validated (pseudo) likelihood function

    L_xy(h) = Π_{i=1}^n f̂_{h,i}^xy(x_i, y_i),    L_x(h) = Π_{i=1}^n f̂_{h,i}^x(x_i),    (6)

where f̂_{h,i}^xy (f̂_{h,i}^x) is a kernel density estimate with kernel K_xy (K_x) and smoothing parameter h but with the i'th sample point left out.

3 EXPERIMENTS

In the first experiment we try to estimate a mapping g₀ from noisy data (x, y),

    Y = g₀(X) + N_y = a sin X + b + N_y,    a = 0.4, b = 0.5,
    X ~ UNI(−π, π),    N_y ~ N(0, σ²),    σ = 0.1.

Here UNI and N denote the uniform and the normal distribution. We experimented with backpropagation, backpropagation with noise, and kernel regression. The backpropagation loss function was minimized using Marquardt's method. The network architecture was FN-1-13-1 with 40 adaptable weights (a feedforward network with one input, 13 hidden nodes, one output, and logistic activation functions in the hidden and output layers). We started the local optimizations from 3 different random initial weights and kept the weights giving the least value for λ̂_n. Backpropagation training with noise was similar, except that instead of the original n vectors we used 10n artificial vectors generated with Procedure 2 using S_xy ~ N(0, I₂). The magnitude of noise was chosen with the criterion L_xy (which, for backpropagation, gave better results than M). In the kernel regression experiments S_xy was kept the same.
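The density-based criterion L_x of Eq. (6) amounts to a leave-one-out log-likelihood in h. A sketch for 1-D data with a Gaussian kernel follows; the data and the candidate grid are illustrative assumptions:

```python
import numpy as np

def loo_log_likelihood(X, h):
    """log L_x(h) = sum_i log f_hat_{h,i}(x_i), where f_hat_{h,i} is a
    Gaussian kernel density estimate leaving the i'th point out."""
    n = len(X)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / h**2) / (h * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(K, 0.0)          # leave one out
    f = K.sum(axis=1) / (n - 1)       # f_hat_{h,i}(x_i) for every i
    return np.log(f).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal(80)
grid = [0.05, 0.1, 0.2, 0.4, 0.8]
best_h = max(grid, key=lambda h: loo_log_likelihood(X, h))
```

A very small h is penalized because each left-out point then sits far from the remaining kernel mass, while an overly large h flattens the density everywhere; the criterion trades these off without requiring the targets y_i at all.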
Table 1 characterizes the distribution of J, the expected squared distance of the estimator g (g(·, w) or m_h^(n)) from g₀, J = E[g(X) − g₀(X)]². Table 2 characterizes the distribution of h chosen according to the criteria L_xy and M, and Figure 2 shows the estimators in one instance. Notice that, on the average, kernel regression is better than backpropagation with noise, which is better than plain backpropagation. The success of backpropagation with noise is partly due to the fact that σ and n have here been picked favorably. Notice too that in kernel regression the results with the two cross-validation methods are similar, although the h values they suggest are clearly different.

In the second experiment we trained classifiers for a four-dimensional two-class problem with equal a priori probabilities and class-conditional densities N(μ₁, C₁) and N(μ₂, C₂), μ₁ = 2.32 [1 0 0 0]^T, C₁ = I₄; μ₂ = 0, C₂ = 4I₄. An FN-4-6-2 with 44 adaptable weights was trained to associate vectors from class 1 with [0.9 0.1]^T and vectors from class 2 with [0.1 0.9]^T. We generated n/2 original vectors from each class and a total of 10n artificial vectors using Procedure 1 with S_x ~ N(0, I₄). We chose the smoothing parameters, h₁ and h₂, separately for the two classes using the criterion L_x: h_i was chosen by evaluating L_x on class i samples only. We formed separate kernel regression estimators for each class; the i'th estimator was trained to output 1 for class i vectors and 0 for the other sample vectors. The M criterion then produces equal values for h₁ and h₂. The classification rule was to classify x to class i if the output corresponding to the i'th class was the maximum output. The error rates are given in Table 3. (The error rate of the Bayesian classifier is 0.116 in this task.) Table 4 summarizes the distribution of h₁ and h₂ as selected by L_x and M.

Table 1: Results for Mapping Estimation.
Mean value (left) and standard deviation (right) of J based on 100 repetitions are given for each method.

    n     BP              BP+noise (L_xy)   Kernel regr. (L_xy)   Kernel regr. (M)
    40    .0218   .016    .0104   .0079     .00446  .0022         .00365  .0019
    80    .00764  .0048   .00526  .0018     .00250  .00078        .00191  .00077

Table 2: Values of h Suggested by the Two Cross-validation Methods in the Mapping Estimation Experiment. Mean value and standard deviation based on 100 repetitions are given.

    n     L_xy            M
    40    0.149  0.020    0.276  0.086
    80    0.114  0.011    0.241  0.062

Table 3: Error Rates for the Different Classifiers. Mean value and standard deviation based on 25 repetitions are given for each method.

    n     BP            BP+noise (L_x)   Kernel regr. (L_x)   Kernel regr. (M)
    44    .281  .054    .189  .018       .201  .022           .207  .027
    88    .264  .028    .163  .011       .182  .010           .184  .013
    176   .210  .023    .145  .010       .164  .0089          .164  .011

Table 4: Values of h₁ and h₂ Suggested by the Two Cross-validation Methods in the Classification Experiment. Mean value and standard deviation based on 25 repetitions are given.

    n     L_x: h₁        L_x: h₂        M: h₁ = h₂
    44    .818  .078     1.61  .14      1.14  .27
    88    .738  .056     1.48  .11      1.01  .19
    176   .668  .048     1.35  .090     .868  .10

4 CONCLUSIONS

Additive noise can improve the generalization capability of a feedforward network trained with the backpropagation approach. The magnitude of the noise cannot be selected blindly, though. Cross-validation-type procedures seem to suit the selection of noise magnitude well. Kernel regression, however, seems to perform well whenever backpropagation with noise performs well. If the kernel is fixed in kernel regression, we only have to choose the smoothing parameter h, and the method is not overly sensitive to its selection.

References

[Devroye and Wagner, 1980] Devroye, L. and Wagner, T. (1980). Distribution-free consistency results in nonparametric discrimination and regression function estimation. The Annals of Statistics, 8(2):231-239.
[Duin, 1976] Duin, R. P. W. (1976).
On the choice of smoothing parameters for Parzen estimators of probability density functions. IEEE Transactions on Computers, C-25:1175-1179.
[Friedman and Silverman, 1989] Friedman, J. and Silverman, B. (1989). Flexible parsimonious smoothing and additive modeling. Technometrics, 31(1):3-21.
[Habbema et al., 1974] Habbema, J. D. F., Hermans, J., and van den Broek, K. (1974). A stepwise discriminant analysis program using density estimation. In Bruckmann, G., editor, COMPSTAT 1974, pages 101-110, Wien. Physica Verlag.
[Härdle and Marron, 1985] Härdle, W. and Marron, J. (1985). Optimal bandwidth selection in nonparametric regression function estimation. The Annals of Statistics, 13(4):1465-1481.
[Holmström and Koistinen, 1990] Holmström, L. and Koistinen, P. (1990). Using additive noise in back-propagation training. Research Reports A3, Rolf Nevanlinna Institute. To appear in IEEE Trans. Neural Networks.
[Holmström et al., 1990] Holmström, L., Koistinen, P., and Ilmoniemi, R. J. (1990). Classification of unaveraged evoked cortical magnetic fields. In Proc. IJCNN-90-WASH DC, pages II:359-362. Lawrence Erlbaum Associates.
[Moody and Darken, 1989] Moody, J. and Darken, C. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281-294.
[Nadaraya, 1964] Nadaraya, E. (1964). On estimating regression. Theor. Probability Appl., 9:141-142.
[Sietsma and Dow, 1991] Sietsma, J. and Dow, R. J. F. (1991). Creating artificial neural networks that generalize. Neural Networks, 4:67-79.
[Specht, 1991] Specht, D. (1991). A general regression neural network. IEEE Transactions on Neural Networks, 2(6):568-576.
[Specht, 1990] Specht, D. F. (1990). Probabilistic neural networks. Neural Networks, 3(1):109-118.
[Stokbro et al., 1990] Stokbro, K., Umberger, D., and Hertz, J. (1990). Exploiting neurons with localized receptive fields to learn chaos. NORDITA preprint.
[Watson, 1964] Watson, G. (1964). Smooth regression analysis. Sankhyā Ser.
A, 26:359-372.
[Weigend et al., 1990] Weigend, A., Huberman, B., and Rumelhart, D. (1990). Predicting the future: A connectionist approach. International Journal of Neural Systems, 1(3):193-209.
[White, 1989] White, H. (1989). Learning in artificial neural networks: A statistical perspective. Neural Computation, 1:425-464.

[Figure 2: Results From a Mapping Estimation Experiment. Shown are the n = 40 original vectors (o's), the artificial vectors (dots), the true function a sin x + b, and the fitting results using kernel regression, backpropagation, and backpropagation with noise. Here h = 0.16 was chosen with L_xy. Values of J are 0.0075 (kernel regression), 0.014 (backpropagation with noise), and 0.038 (backpropagation).]
Recognition of Manipulated Objects by Motor Learning

Hiroaki Gomi    Mitsuo Kawato
ATR Auditory and Visual Perception Research Laboratories, Inui-dani, Sanpei-dani, Seika-cho, Soraku-gun, Kyoto 619-02, Japan

Abstract

We present two neural network controller learning schemes based on feedback-error-learning and modular architecture for recognition and control of multiple manipulated objects. In the first scheme, a Gating Network is trained to acquire object-specific representations for recognition of a number of objects (or sets of objects). In the second scheme, an Estimation Network is trained to acquire function-specific, rather than object-specific, representations which directly estimate physical parameters. Both recognition networks are trained to identify manipulated objects using somatic and/or visual information. After learning, appropriate motor commands for manipulation of each object are issued by the control networks.

1 INTRODUCTION

Conventional feedforward neural-network controllers (Barto et al., 1983; Psaltis et al., 1987; Kawato et al., 1987, 1990; Jordan, 1988; Katayama & Kawato, 1991) cannot cope with multiple or changeable manipulated objects or disturbances because they cannot immediately change the control law corresponding to the object. In interaction with manipulated objects or, in more general terms, in interaction with an environment which contains unpredictable factors, feedback information is essential for control and object recognition. From these considerations, Gomi & Kawato (1990) examined adaptive feedback controller learning schemes using feedback-error-learning, from which impedance control (Hogan, 1985) can be obtained automatically. However, in that scheme, some higher system needs to supervise the setting of the appropriate mechanical impedance for each manipulated object or environment.
In this paper, we introduce semi-feedforward control schemes using neural networks which receive feedback and/or feedforward information for recognition of multiple manipulated objects, based on feedback-error-learning and a modular network architecture. These schemes have two advantages over previous ones: (1) Learning is achieved without the exact target motor command vector, which is unavailable during supervised motor learning. (2) Although somatic information alone was found to be sufficient to recognize objects, object identification is predictive and more reliable when both somatic and visual information are used.

2 RECOGNITION OF MANIPULATED OBJECTS

The most important issues in object manipulation are (1) how to recognize the manipulated object and (2) how to achieve uniform performance for different objects. There are several ways to acquire helpful information for recognizing manipulated objects. Visual information and somatic information (performance by motion) are the most informative for object recognition for manipulation. The physical characteristics useful for object manipulation, such as mass, softness and slipperiness, cannot be predicted without the experience of manipulating similar objects. In this respect, object recognition for manipulation should be learned through object manipulation.

3 MODULAR ARCHITECTURE USING GATING NETWORK

Jacobs et al. (1990, 1991) and Nowlan & Hinton (1990, 1991) have proposed a competitive modular network architecture which is applied to task decomposition and classification problems. Jacobs (1991) applied this network architecture to a multi-payload robotics task in which each expert network controller is trained for one category of manipulated objects in terms of the object's mass. In his scheme, the payload's identity is fed to the gating network to select a suitable expert network which acts as a feedforward controller.
We examined a modular network architecture using feedback-error-learning for simultaneous learning of the object recognition and control tasks, as shown in Fig.1.

Fig.1 Configuration of the modular architecture using a Gating Network for object manipulation based on feedback-error-learning

In this learning scheme, a quasi-target vector for the combined output of the expert networks is employed instead of the exact target vector, because it is unlikely that the exact target motor command vector can be provided during learning. The quasi-target vector of the feedforward motor command, $u^*$, is produced by:

$$u^* = u + u_{fb} \quad (1)$$

Here, $u$ denotes the previous final motor command and $u_{fb}$ denotes the feedback motor command. Using this quasi-target vector, the gating and expert networks are trained to maximize the log-likelihood function, $\ln L$, by backpropagation:

$$\ln L = \ln \sum_{i=1}^{n} g_i \, e^{-\|u^* - u_i\|^2 / 2\sigma_i^2} \quad (2)$$

Here, $u_i$ is the $i$-th expert network output, $\sigma_i$ is a variance scaling parameter of the $i$-th expert network, and $g_i$, the $i$-th output of the gating network, is calculated by

$$g_i = \frac{e^{s_i}}{\sum_{j=1}^{n} e^{s_j}}, \quad (3)$$

where $s_i$ denotes the weighted input received by the $i$-th output unit. The total output of the modular network is

$$u_{ff} = \sum_{i=1}^{n} g_i u_i. \quad (4)$$

By maximizing Eq. 2 using the steepest-ascent method, the gating network learns to choose the expert network whose output is closest to the quasi-target command, and each expert network is tuned correctly when it is chosen by the gating network. The desired trajectory is fed to the expert networks so as to make them work as feedforward controllers.
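The learning rule of Eqs. 1-4 can be sketched numerically. In the toy example below, both the gating and expert networks are reduced to single linear layers, and the role of the quasi-target $u^*$ is played by a known linear function of the state; all dimensions, learning rates, and the data-generating target are our illustrative assumptions, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 experts, 2-D input, scalar motor command.
n_experts, d_in = 3, 2
sigma = 0.8                                        # variance scaling parameter
W_gate = rng.normal(0.0, 0.1, (n_experts, d_in))   # linear gating "network"
W_exp = rng.normal(0.0, 0.1, (n_experts, d_in))    # linear expert "networks"

def forward(x):
    s = W_gate @ x                            # weighted inputs s_i of Eq. 3
    g = np.exp(s - s.max()); g /= g.sum()     # softmax gating outputs g_i
    u = W_exp @ x                             # expert outputs u_i
    return g, u, float(g @ u)                 # u_ff = sum_i g_i u_i (Eq. 4)

def train_step(x, u_star, lr_gate=1e-2, lr_exp=1e-2):
    """One gradient-ascent step on ln L of Eq. 2 toward quasi-target u*."""
    global W_gate, W_exp
    g, u, _ = forward(x)
    lik = g * np.exp(-(u_star - u) ** 2 / (2.0 * sigma ** 2))
    h = lik / lik.sum()                       # posterior weight of each expert
    # d lnL/d s_i = h_i - g_i ;  d lnL/d u_i = h_i (u* - u_i) / sigma^2
    W_gate += lr_gate * np.outer(h - g, x)
    W_exp += lr_exp * np.outer(h * (u_star - u) / sigma ** 2, x)

true_w = np.array([1.0, -0.5])                # stand-in for u* = u + u_fb
for _ in range(3000):
    x = rng.normal(size=d_in)
    train_step(x, float(true_w @ x))
```

The gradient of $\ln L$ with respect to $s_i$ is $h_i - g_i$, so an expert whose output explains the quasi-target better than its current gating weight gets selected more often, which is the competitive mechanism described above.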
4 SIMULATION OF OBJECT MANIPULATION BY MODULAR ARCHITECTURE WITH GATING NETWORK

We show the advantage of the learning schemes presented above with the simulation results below. The configuration of the controlled object and the manipulated object is shown in Fig.2, in which M, B, K respectively denote the mass, viscosity and stiffness of the coupled object (controlled and manipulated object together). The manipulated object is changed every epoch (1 [sec]) while the coupled object is controlled to track the desired trajectory. Fig.3 shows the selected object, the feedforward and feedback motor commands, and the desired and actual trajectories before learning.

Fig.2 Configuration of the controlled object and the manipulated object

Fig.3 Temporal patterns of the selected object, the motor commands, and the desired and actual trajectories before learning

The desired trajectory, $x_d$, was produced by an Ornstein-Uhlenbeck random process. As shown in Fig.3, an error between the desired and actual trajectories remained, because a feedback controller with fixed gains was employed in this condition. (Physical characteristics of the objects used are listed in Fig.4a.)

4.1 SOMATIC INFORMATION FOR GATING NETWORK

We call the actual trajectory vector, $x$, and the final motor command, $u$, "somatic information". Somatic information should be most useful for on-line (feedback) recognition of the dynamical characteristics of manipulated objects. The latest four time steps of somatic information were used as the gating network inputs for identification of the coupled object in this simulation; $s$ of Eq. 3 is expressed as:

$$s(t) = \Psi_1\big(x(t), x(t-1), x(t-2), x(t-3), u(t), u(t-1), u(t-2), u(t-3)\big) \quad (5)$$

The dynamical characteristics of the coupled objects are shown in Fig.4a. The object was changed every epoch (1 [sec]).
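The Ornstein-Uhlenbeck desired trajectory mentioned above can be generated, for instance, as follows; this is a minimal Euler-Maruyama sketch, and the time step and process parameters are our assumptions, since the paper does not state them.

```python
import numpy as np

def ou_trajectory(n_steps, dt=0.01, theta=1.0, noise=1.0, x0=0.0, seed=1):
    """Euler-Maruyama integration of dx = -theta * x dt + noise * dW,
    a mean-reverting random process suitable as a desired trajectory x_d."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = x[t - 1] - theta * x[t - 1] * dt \
               + noise * np.sqrt(dt) * rng.normal()
    return x

xd = ou_trajectory(2000)   # 20 s of trajectory at dt = 0.01 s
```

The stationary variance of this process is noise²/(2·theta), so the trajectory wanders smoothly but keeps returning toward zero, which makes it a convenient persistent excitation for tracking experiments.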
The variance scaling parameter was $\sigma_i = 0.8$ and the learning rates were $\eta_{gate} = 1.0 \times 10^{-3}$ and $\eta_{expert_i} = 1.0 \times 10^{-5}$. A three-layered feedforward neural network (16 input, 30 hidden, 3 output units) was employed for the gating network, and two-layered linear networks (3 inputs, 1 output) were used for the expert networks. Comparing the experts' weights after learning with the coupled object characteristics in Fig.4a, we find that expert networks No.1, No.2 and No.3 obtained the inverse dynamics of coupled objects γ, β and α, respectively. The time variation of the object, the gating network outputs, the motor commands and the trajectories after learning are shown in Fig.4b. The gating network outputs responded correctly to the objects most of the time, and the feedback motor command, $u_{fb}$, was almost zero. As a consequence of adaptation, the actual trajectory corresponded almost perfectly with the desired trajectory.

The physical characteristics listed in Fig.4a (no retinal image is used in this somatic-only case) are:

object  M    B    K    retinal image
α       1.0  2.0  8.0  none
β       5.0  7.0  4.0  none
γ       8.0  3.0  1.0  none

Fig.4 Somatic information for gating network. a. Statistical analysis of the correspondence of the expert networks with each object after learning (averaged gating outputs). b. Temporal patterns of objects, gating outputs, motor commands and trajectories after learning

4.2 VISUAL INFORMATION FOR GATING NETWORK

We usually assess a manipulated object's characteristics by using visual information. Visual information might therefore be helpful for feedforward recognition. In this case, $s$ of Eq. 3 is expressed as:

$$s(t) = \Psi_2\big(V(t)\big) \quad (6)$$

We used three visual cues corresponding to each coupled object in this simulation, as shown in Fig.5a.
At each epoch in this simulation, one of the three visual cues, selected randomly, is placed at one of four possible locations on a 4 × 4 retinal matrix. The visual cues of the objects are different, but objects α and α* have the same dynamical characteristics, as shown in Fig.5a. The gating network should identify the object and select a suitable expert network for feedforward control by using this visual information. The learning coefficients were $\sigma_i = 0.7$, $\eta_{gate} = 1.0 \times 10^{-3}$, $\eta_{expert_i} = 1.0 \times 10^{-5}$. The same networks as in the above experiment were used in this simulation. After learning, expert network No.2 acquired the inverse dynamics of objects α and α*, and expert network No.3 did so for object γ. It is seen from Fig.5b that the gating network almost perfectly selected expert network No.2 for objects α and α*, and almost perfectly selected expert network No.3 for object γ. Expert network No.1, which did not acquire inverse dynamics corresponding to any of the three objects, was not selected in the test period after learning. The actual trajectory in the test period corresponded almost perfectly to the desired trajectory.

Fig.5 Visual information for gating network. a. Statistical analysis of the correspondence of the expert networks with each object after learning (averaged gating outputs). b. Temporal patterns of objects, gating outputs, motor commands and trajectories after learning

4.3 SOMATIC & VISUAL INFORMATION FOR GATING NETWORK

We show here the simulation results using both somatic and visual information as the gating network inputs. In this case, $s$ of Eq. 3 is represented as:

$$s(t) = \Psi_3\big(x(t), \ldots, x(t-3), u(t), \ldots, u(t-3), V(t)\big) \quad (7)$$

In this simulation, objects α and β* had different dynamical characteristics but shared the same visual cue, as listed in Fig.6a.
Thus, to identify the coupled object uniquely, it is necessary for the gating network to utilize not only visual information but also somatic information. The learning coefficients were $\sigma_i = 1.0$, $\eta_{gate} = 1.0 \times 10^{-3}$ and $\eta_{expert_i} = 1.0 \times 10^{-5}$. The gating network had 32 input units, 50 hidden units and 3 output units, and the expert networks were the same as in the above experiment. After learning, expert networks No.1, No.2 and No.3 acquired the inverse dynamics of objects γ, β* and α, respectively. As shown in Fig.6b, the gating network identified the object almost correctly.

Fig.6 Somatic & visual information for gating network. a. Statistical analysis of the correspondence of the expert networks with each object after learning (averaged gating outputs). b. Temporal patterns of objects, gating outputs, motor commands and trajectories after learning

4.4 UNKNOWN OBJECT RECOGNITION BY USING SOMATIC INFORMATION

Fig.7b shows the responses for unknown objects whose physical characteristics were slightly different from those of the known objects (see Fig.7a and Fig.4a), in the case using somatic information as the gating network inputs. Even when a tested object was not the same as any of the known (learned) objects, the closest expert network was selected (compare Fig.4a and Fig.7a). During some periods in the test phase, the feedback command increased because of an inappropriate feedforward command.

The physical characteristics in Fig.7a (retinal image "none" for all; only the first object label, α', is legible in this copy) are: α': M=2.0, B=3.0, K=7.0; second object: M=4.0, B=6.0, K=5.0; third object: M=9.0, B=2.0, K=2.0.

Fig.
7 Unknown object recognition using somatic information. a. Statistical analysis of the correspondence of the expert networks with each object after learning (averaged gating outputs). b. Temporal patterns of objects, gating outputs, motor commands and trajectories after learning

5 MODULAR ARCHITECTURE USING ESTIMATION NETWORK

The previous modular architecture is competitive in the sense that the expert networks compete with each other to occupy niches in the input space. We here propose a new cooperative modular architecture in which expert networks specialized for different functions cooperate to produce the required output. In this scheme, estimation networks are trained to recognize the physical characteristics of manipulated objects by using feedback information. With this method, an infinite number of manipulated objects within a limited domain can be treated using a small number of estimation networks. We applied this method to recognizing the mass of the manipulated objects (see Fig.8).

Fig.8 Configuration of the modular architecture using a mass estimation network for object manipulation by feedback-error-learning

Fig.9a shows the output of the estimation network compared to the actual masses. The realized trajectory almost coincided with the desired trajectory, as shown in Fig.9b. This learning scheme can be applied not only to estimating mass but also to other physical characteristics such as softness or slipperiness.

Fig.9 a. Comparison of actual & estimated mass. b. Desired & actual trajectory

6 DISCUSSION

In the first scheme, the internal models for object manipulation (in this case, inverse dynamics) were represented not in terms of visual information but rather in terms of somatic information (see 4.2).
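A hypothetical, much-simplified sketch of this cooperative scheme for a point mass: the estimation "network" collapses to a single adaptive parameter M_hat that multiplies the desired acceleration to form the feedforward command, while the feedback command serves as the error signal that drives the estimate (the feedback-error-learning idea). The plant, gains, and learning rate below are our illustration, not the paper's network.

```python
import numpy as np

def estimate_mass(M_true=3.0, dt=1e-3, t_end=10.0, kp=50.0, kv=10.0, lr=2.0):
    """Track x_d(t) = sin(t) with a point mass of unknown mass M_true.
    The feedforward command is u_ff = M_hat * a_d; the feedback command
    u_fb both stabilizes tracking and acts as the training signal for
    M_hat, so M_hat drifts toward the true mass."""
    M_hat, x, v = 1.0, 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
        u_fb = kp * (xd - x) + kv * (vd - v)            # fixed-gain feedback
        u_ff = M_hat * ad                               # feedforward command
        a = (u_ff + u_fb) / M_true                      # point-mass plant
        v += a * dt
        x += v * dt
        M_hat += lr * u_fb * ad * dt                    # feedback-error update
    return M_hat

M_hat = estimate_mass()
```

Because u_fb is only an approximation of the true command error, the estimate converges more slowly than one trained with exact targets, mirroring the learning-rate remark in the Discussion.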
Although the current simulation is primitive, it illustrates the important point that functional internal representations of objects (or environments), rather than declarative ones, were acquired by motor learning. The quasi-target motor command in the first scheme and the motor command error in the second scheme are not always exactly correct at each time step, because the proposed learning schemes are based on the feedback-error-learning method. Thus, the learning rates in the proposed schemes should be slower than in schemes where exact target commands are employed; in our preliminary simulation, learning was about five times slower. However, we emphasize that exact target motor commands are not available in supervised motor learning. The limited number of controlled objects which can be dealt with by the modular network with a gating network is a considerable problem (Jacobs, 1991; Nowlan, 1990, 1991). This depends on choosing an appropriate number of expert networks and an appropriate value of the variance scaling parameter, σ. Once this is done, the expert networks can interpolate the appropriate output for a number of unknown objects. Our second scheme provides a more satisfactory solution to this problem. On the other hand, one possible drawback of the second scheme is that it may be difficult to estimate many physical parameters for complicated objects, even though a learning scheme which directly estimates the physical parameters can handle any number of objects. We showed here basic examinations of two types of neural networks - a gating network and a direct estimation network. Both networks use feedback and/or feedforward information for recognition of multiple manipulated objects. In the future, we will attempt to integrate these two architectures in order to model tasks involving skilled motor coordination and high-level recognition.

Acknowledgment

We would like to thank Drs. E. Yodogawa and K.
Nakane of ATR Auditory and Visual Perception Research Laboratories for their continuing encouragement. Supported by an HFSP grant to M.K.

References

Barto, A.G., Sutton, R.S., Anderson, C.W. (1983) Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Trans. on Sys., Man and Cybern. SMC-13, pp.834-846

Gomi, H., Kawato, M. (1990) Learning control for a closed loop system using feedback-error-learning. Proc. of the 29th IEEE Conference on Decision and Control, Hawaii, Dec., pp.3289-3294

Hogan, N. (1985) Impedance control: An approach to manipulation: Part I - Theory, Part II - Implementation, Part III - Applications. ASME Journal of Dynamic Systems, Measurement, and Control, Vol. 107, pp.1-24

Jacobs, R.A., Jordan, M.I., Barto, A.G. (1990) Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. COINS Technical Report 90-27, pp.1-49

Jacobs, R.A., Jordan, M.I. (1991) A competitive modular connectionist architecture. In Lippmann, R.P. et al. (Eds.) NIPS 3, pp.767-773

Jordan, M.I. (1988) Supervised learning and systems with excess degrees of freedom. COINS Technical Report 88-27, pp.1-41

Kawato, M., Furukawa, K., Suzuki, R. (1987) A hierarchical neural-network model for control and learning of voluntary movement. Biol. Cybern. 57, pp.169-185

Kawato, M. (1990) Computational schemes and neural network models for formation and control of multijoint arm trajectory. In: Miller, T., Sutton, R.S., Werbos, P.J. (Eds.) Neural Networks for Control, The MIT Press, Cambridge, Massachusetts, pp.197-228

Katayama, M., Kawato, M. (1991) Learning trajectory and force control of an artificial muscle arm by parallel-hierarchical neural network model. In Lippmann, R.P. et al. (Eds.) NIPS 3, pp.436-442

Nowlan, S.J. (1990) Competing experts: An experimental investigation of associative mixture models. Univ. Toronto Tech. Rep. CRG-TR-90-5, pp.1-77

Nowlan, S.J., Hinton, G.E.
(1991) Evaluation of adaptive mixtures of competing experts. In Lippmann, R.P. et al. (Eds.) NIPS 3, pp.774-780

Psaltis, D., Sideris, A., Yamamura, A. (1987) Neural controllers. Proc. IEEE Int. Conf. Neural Networks, Vol.4, pp.551-557
Temporal Adaptation in a Silicon Auditory Nerve

John Lazzaro
CS Division, UC Berkeley, 571 Evans Hall, Berkeley, CA 94720

Abstract

Many auditory theorists consider the temporal adaptation of the auditory nerve a key aspect of speech coding in the auditory periphery. Experiments with models of auditory localization and pitch perception also suggest temporal adaptation is an important element of practical auditory processing. I have designed, fabricated, and successfully tested an analog integrated circuit that models many aspects of auditory nerve response, including temporal adaptation.

1. INTRODUCTION

We are modeling known and proposed auditory structures in the brain using analog VLSI circuits, with the goal of making contributions both to engineering practice and biological understanding. Computational neuroscience involves modeling biology at many levels of abstraction. The first silicon auditory models were constructed at a fairly high level of abstraction (Lyon and Mead, 1988; Lazzaro and Mead, 1989ab; Mead et al., 1991; Lyon, 1991). The functional limitations of these silicon systems have prompted a new generation of auditory neural circuits designed at a lower level of abstraction (Watts et al., 1991; Liu et al., 1991). The silicon model of auditory nerve response models sensory transduction and spike generation in the auditory periphery at a high level of abstraction (Lazzaro and Mead, 1989c); this circuit is a component in silicon models of auditory localization, pitch perception, and spectral shape enhancement (Lazzaro and Mead, 1989ab; Lazzaro, 1991a). Among other limitations, this circuit does not model the short-term temporal adaptation of the auditory nerve. Many auditory theorists consider the temporal adaptation of the auditory nerve a key aspect of speech coding in the auditory periphery (Delgutte and Kiang, 1984).
From the engineering perspective, the pitch perception and auditory localization chips perform well with sustained sounds as input; temporal adaptation in the silicon auditory nerve should improve performance for transient sounds. I have designed, fabricated, and tested an integrated circuit that models the temporal adaptation of spiral ganglion neurons in the auditory periphery. The circuit receives an analog voltage input, corresponding to the signal at an output tap of a silicon cochlea, and produces fixed-width, fixed-height pulses that are correlates of the action potentials of an auditory nerve fiber. I have also fabricated and tested an integrated circuit that combines an array of these neurons with a silicon cochlea (Lyon and Mead, 1988); this design is a silicon model of auditory nerve response. Both circuits were fabricated in the Orbit double polysilicon n-well 2 μm process.

2. TEMPORAL ADAPTATION

Figure 1 shows data from the temporal adaptation circuit; the data in this figure were taken by connecting signals directly to the inner hair cell circuit input, bypassing silicon cochlea processing. In (a), we apply a 1 kHz pure tone burst of 20 ms duration to the input of the hair cell circuit (top trace), and see an adapting sequence of spikes as the output (middle trace). If this tone burst is repeated at 80 ms intervals, each response is unique; by averaging the responses to 64 consecutive tone bursts (bottom trace), we see the envelope of the temporal adaptation superimposed on the cycle-by-cycle phase-locking of the spike train. These behaviors qualitatively match biological experiments (Kiang et al., 1965). In biological auditory nerve fibers, cycle-by-cycle phase-locking ceases for fibers tuned to sufficiently high frequencies, but the temporal adaptation property remains. In the silicon spiral ganglion neuron, a 10 kHz pure tone burst fails to elicit phase-locking (Figure 1(b), trace identities as in (a)).
Temporal adaptation remains, however, qualitatively matching biological experiments (Kiang et al., 1965). To compare this data with the previous generation of silicon auditory nerve circuits, we set the control parameters of the new spiral ganglion model to eliminate temporal adaptation. Figure 1(c) shows the 1 kHz tone burst response (trace identities as in (a)). Phase-locking occurs without temporal adaptation. The uneven response of the averaged spike outputs is due to beat frequencies between the input tone frequency and the output spike rate; in practice, the circuit noise of the silicon cochlea adds random variation to the auditory input and smooths this response (Lazzaro and Mead, 1989c).

Figure 1. Responses of test chip to pure tone bursts. Horizontal axis is time for all plots; all horizontal rules measure 5 ms. (a) Chip response to a 1 kHz, 20 ms tone burst. Top trace shows tone burst input, middle trace shows a sample response from the chip, bottom trace shows averaged output of 64 responses to tone bursts. Averaged response shows both temporal adaptation and phase locking. (b) Chip response to a 10 kHz, 20 ms tone burst. Trace identifications identical to (a). Response shows temporal adaptation without phase locking. (c) Chip response to a 1 kHz, 20 ms tone burst, with adaptation circuitry disabled. Trace identifications identical to (a). Response shows phase locking without temporal adaptation.

3. CIRCUIT DESIGN

Figure 2 shows a block diagram of the model. The circuits modeling inner hair cell transduction remain unchanged from the original model (Lazzaro and Mead, 1989c), and are shown as a single box. This box performs time differentiation, nonlinear compression and half-wave rectification on the input waveform Vi, producing a unidirectional current waveform as output.
The dependent current source represents this processed signal. The axon hillock circuit (Mead, 1989), drawn as a box marked with a pulse, converts this current signal into a series of fixed-width, fixed-height spikes; Vo is the output of the model. The current signal is connected to the pulse generator using a novel current mirror circuit that serves as the control element to regulate temporal adaptation. This current mirror circuit has an additional high-impedance input, Va, that exponentially scales the current entering the axon hillock circuit (the current mirror operates in the subthreshold region). The adaptation capacitor Ca is associated with the control voltage Va.

Figure 2. Circuit schematic of the enhanced silicon model of auditory nerve response. The circuit converts the analog voltage input Vi into the pulse train Vo; control voltages Vl and Vp control the temporal adaptation of state variable Va on capacitor Ca. See text for details.

Ca is constantly charged by the PFET transistor associated with control voltage Vl, and is discharged during every pulse output of the axon hillock circuit, by an amount set by the control voltage Vp. During periods with no input signal, Va is charged to Vdd, and the current mirror is set to deliver maximum current at the onset of an input signal. If an input signal occurs and neuron activity begins, the capacitor voltage Va is discharged with every spike, reducing the output of the current mirror. In this way, temporal adaptation occurs, with characteristics determined by Vp and Vl. The nonlinear differential equations for this adaptation circuit are similar to the equations governing the adaptive baroreceptor circuit (Lazzaro et al., 1991); the publication describing that circuit includes an analysis deriving a recurrence relation for the pulse output of the circuit given a step input.

Figure 3.
Instantaneous firing rate of the adaptive neuron, as a function of time; the tone burst begins at 0 ms. Each curve is marked with the amplitude of the presented tone burst, in dB. Tone burst frequency is 1 kHz.

4. DATA ANALYSIS

The experiment shown in Figure 1(a) was repeated for tone bursts of different amplitudes; this data set was used to produce several standard measures of adaptive response (Hewitt and Meddis, 1991). The integrated auditory nerve circuit was used for this set of experiments. Data were taken from an adaptive auditory nerve output that had a best frequency of 1 kHz; the frequency of all tone bursts was also 1 kHz. Figure 3 shows the instantaneous firing rate of the auditory nerve output as a function of time, for tone bursts of different amplitudes. Adaptation was more pronounced for more intense sounds. This difference is also seen in Figure 4, in which instantaneous firing rate is plotted as a function of amplitude, both at response onset and after full adaptation.

Figure 4. Instantaneous firing rate of the adaptive neuron, as a function of amplitude (in dB). Top curve is firing rate at onset of response, bottom curve is firing rate after adaptation. Tone burst frequency is 1 kHz.

Figure 4 shows that the instantaneous spike rate saturates at moderate intensity after full adaptation; at these moderate intensities, however, the onset instantaneous spike rate continues to encode intensity. Figure 4 also shows a non-monotonicity at high intensities in the onset response; this undesired non-monotonicity is a result of the undesired saturation of the silicon cochlea circuit (Lazzaro, 1991b).

5. CONCLUSION

This circuit improves the silicon model of auditory nerve response by adding temporal adaptation.
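The adaptation loop described in Section 3 can be caricatured in a few lines of simulation: an adaptation variable stands in for the charge removed from Ca, recovering slowly between spikes and jumping at each spike, while the input current is scaled exponentially as in the subthreshold current mirror. All constants here are illustrative, not the chip's actual parameters.

```python
import numpy as np

def adapted_spikes(i_in=300.0, dt=1e-4, t_end=0.05,
                   tau=0.05, jump=0.3, thresh=1.0):
    """Integrate-and-fire sketch of the adaptive spiral ganglion model.
    'a' plays the role of the charge removed from Ca: it recovers with
    time constant tau (the PFET charging path) and jumps by 'jump' at
    every spike (the per-spike discharge set by Vp); the input current
    is scaled by exp(-a), as in the exponential current mirror."""
    a, q, spikes = 0.0, 0.0, []
    for step in range(int(t_end / dt)):
        a -= a / tau * dt                 # slow recovery toward full current
        q += i_in * np.exp(-a) * dt       # charge the axon hillock input
        if q >= thresh:                   # emit a fixed-width, fixed-height pulse
            spikes.append(step * dt)
            q = 0.0
            a += jump                     # per-spike discharge of Ca
    return np.array(spikes)

spikes = adapted_spikes()
isis = np.diff(spikes)
onset_rate, adapted_rate = 1.0 / isis[0], 1.0 / isis[-1]
```

As in Figures 3 and 4, the onset rate exceeds the adapted rate, and in this model the gap widens with input amplitude, since stronger inputs accumulate more adaptation per unit time.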
We expect this improvement to enhance existing architectures for auditory localization and pitch perception, and to aid the creation of new circuits for speech processing.

Acknowledgments

Thanks to K. Johnson of CU Boulder and J. Wawrzynek of UC Berkeley for hosting this research in their laboratories. I also thank the Caltech auditory research community, specifically C. Mead, D. Lyon, M. Konishi, L. Watts, M. Godfrey, and X. Arreguit. This work was funded by the National Science Foundation.

References

Delgutte, B., and Kiang, Y. S. (1984). Speech coding in the auditory nerve I-V. J. Acoust. Soc. Am. 75:3, 866-918.

Hewitt, M. J. and Meddis, R. (1991). An evaluation of eight computer models of mammalian inner hair-cell function. J. Acoust. Soc. Am. 90:2, 904.

Kiang, N. Y.-s., Watanabe, T., Thomas, E.C., and Clark, L.F. (1965). Discharge Patterns of Single Fibers in the Cat's Auditory Nerve. Cambridge, MA: MIT Press.

Lazzaro, J. and Mead, C. (1989a). A silicon model of auditory localization. Neural Computation 1: 41-70.

Lazzaro, J. and Mead, C. (1989b). Silicon modeling of pitch perception. Proceedings National Academy of Sciences 86: 9597-9601.

Lazzaro, J. and Mead, C. (1989c). Circuit models of sensory transduction in the cochlea. In Mead, C. and Ismail, M. (eds), Analog VLSI Implementations of Neural Networks. Norwell, MA: Kluwer Academic Publishers, pp. 85-101.

Lazzaro, J. P. (1991a). A silicon model of an auditory neural representation of spectral shape. IEEE Journal of Solid State Circuits 26: 772-777.

Lazzaro, J. P. (1991b). Biologically-based auditory signal processing in analog VLSI. IEEE Asilomar Conference on Signals, Systems, and Computers.

Lazzaro, J. P., Schwaber, J., and Rogers, W. (1991). Silicon baroreceptors: modeling cardiovascular pressure transduction in analog VLSI. In Sequin, C. (ed), Advanced Research in VLSI, Proceedings of the 1991 Santa Cruz Conference, Cambridge, MA: MIT Press, pp. 163-177.
Liu, W., Andreou, A., and Goldstein, M. (1991). Analog VLSI implementation of an auditory periphery model. 25th Annual Conference on Information Sciences and Systems, Baltimore, MD, 1991.

Lyon, R. and Mead, C. (1988). An analog electronic cochlea. IEEE Trans. Acoust., Speech, Signal Processing 36: 1119-1134.

Lyon, R. (1991). CCD correlators for auditory models. IEEE Asilomar Conference on Signals, Systems, and Computers.

Mead, C. A., Arreguit, X., Lazzaro, J. P. (1991). Analog VLSI models of binaural hearing. IEEE Journal of Neural Networks, 2: 230-236.

Mead, C. A. (1989). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.

Watts, L., Lyon, R., and Mead, C. (1991). A bidirectional analog VLSI cochlear model. In Sequin, C. (ed), Advanced Research in VLSI, Proceedings of the 1991 Santa Cruz Conference, Cambridge, MA: MIT Press, pp. 153-163.
Oscillatory Model of Short Term Memory

David Horn
School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel-Aviv University, Tel Aviv 69978, Israel

Marius Usher*
Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel

Abstract

We investigate a model in which excitatory neurons have dynamical thresholds which display both fatigue and potentiation. The fatigue property leads to oscillatory behavior. It is responsible for the ability of the model to perform segmentation, i.e., decompose a mixed input into staggered oscillations of the activities of the cell-assemblies (memories) affected by it. Potentiation is responsible for sustaining these staggered oscillations after the input is turned off, i.e., the system serves as a model for short term memory. It has a limited STM capacity, reminiscent of the magical number 7 ± 2.

1 Introduction

The limited capacity (7 ± 2) of short term memory (STM) has been a subject of major interest in the psychological and physiological literature. It seems quite natural to assume that the limited capacity is due to the special dynamical nature of STM. Recently, Crick and Koch (1990) suggested that working memory is functionally related to the binding process, and is obtained via synchronized oscillations of neural populations. The capacity limitation of STM may then result from the competition between oscillations representing items in STM. In the model which we investigate this is indeed the case.

*Present address: Division of Biology, 216-76, Caltech, Pasadena CA 91125.

Models of oscillating neural networks can perform various tasks:

1. Phase-locking and synchronization in response to global coherence in the stimuli, such as similarity of orientation and continuity (Kammen et al. 1989; Sompolinsky et al. 1990; Konig & Schillen 1991).

2.
Segmentation of incoherent stimuli in low level vision via desynchronization, using oscillator networks with delayed connections (Schillen & Konig 1991). 3. Segmentation according to semantic content, i.e., separating an input of mixed information into its components which are known memories of the system (Wang et al. 1990; Horn and Usher 1991). In these models the memories are represented by competing cell assemblies. The input, which affects a subset of these assemblies, induces staggered oscillations of their activities. This works as long as the number of memories in the input is small, of the order of 5. 4. Binding, i.e., connecting correctly different attributes of the same object which appear in the mixed input (Horn et al. 1991). Binding can be interpreted as matching the phases of oscillations representing attributes of the same object in two different networks which are coupled in a way which does not assume any relation between the attributes. To these we add here the important task of 5. STM, i.e., keeping information about segmentation or binding after the input is turned off. In order to qualify as models for STM, the staggered oscillations have to prevail after the input stimuli disappear. Unfortunately, this does not hold for the models quoted above. Once the input disappears, either the network's activity dies out, or oscillations of assemblies not included in the original input are turned on. In other words, the oscillations have no inertia, and thus they do not persist after the disappearance of the sensory input. Our purpose is to present a model of competing neural assemblies which, upon receiving a mixed input, develops oscillations which prevail after the stimulus disappears. In order to achieve this, the biological mechanism of post-tetanic potentiation will be used.
2 Dynamics of Short Term Potentiation

It was shown that following a tetanus of electrophysiological stimulation, temporary modifications in the synaptic strengths, mostly non-Hebbian, are observed (Crick and Koch, 1990; Zucker, 1989). The time scale of these synaptic modifications ranges between 50 ms and several minutes. A detailed description of the processes responsible for this mechanism was given by Zucker (1989), exhibiting a rather complex behavior. In the following we will use a simplified version of these mechanisms involving two processes with different time scales. We assume that following a prolonged activation of a synapse, the synaptic strength exhibits depression on a short time scale, but recovers and becomes slightly enhanced on a longer time scale. As illustrated in Fig. 1 of Zucker (1989), this captures most of the dynamics of Short Term Potentiation. The fact that these mechanisms are non-Hebbian implies that all synapses associated with a presynaptic cell are affected, and thus the unit of change is the presynaptic cell (Crick & Koch 1990).

Our previous oscillatory neural networks were based on the assumption that, in addition to the customary properties of the formal neuron, its threshold increases when the neuron keeps firing, thus exhibiting adaptation or fatigue (Horn & Usher 1989). Motivated by the STP findings we add a new component of facilitation, which takes place on a longer time scale than fatigue. We denote the dynamical threshold by the continuous variable r, which is chosen as a sum of two components, f and p, representing fatigue and potentiation: r = a1 f − a2 p. Their dynamics are governed by the equations

γ df/dt = m + (1/c1 − 1) f    (1)
γ dp/dt = m + (1/c2 − 1) p    (2)

where m describes the average neuron activity (firing rate) on a time scale which is large compared to the refractory period. The time constants of the fatigue and potentiation components, T_i = γ c_i/(c_i − 1), are chosen so that T1 < T2.
As a result the neuron displays fatigue on a short time scale, but recovers and becomes slightly enhanced (potentiated) on a longer time scale. This is clearly seen in Fig. 1, which shows the behavior when the activity m of the corresponding neuron is clamped at 1 for some time (due to sensory input) and quenched to zero afterwards.

Figure 1: Behavior of the dynamic threshold r and its fatigue f and potentiation p components, when the neuron activity m is clamped as shown. Time scale is arbitrary. The parameters are c1 = 1.2, c2 = 1.05, a1 = 4, a2 = 1.

We observe here that the threshold increases during the cell's activation, being driven towards its asymptotic value a1 c1/(c1 − 1). After the release of the stimulus the dynamic threshold decreases (i.e. the neuron recovers) and turns negative (signifying potentiation). The parameters were chosen so that asymptotically the threshold reaches zero, i.e. no permanent effect is left. In our model we will assume a similar behavior for the excitatory cell-assemblies which carry the memories in our system.

3 The Model

Our basic model (Horn & Usher 1990) is composed of two kinds of neurons which are assumed to have excitatory and inhibitory synapses exclusively. Memory patterns are carried by excitatory neurons only. Furthermore, we make the simplifying assumption that the patterns do not overlap with one another, i.e. the model is composed of disjoint Hebbian cell-assemblies of excitatory neurons which affect one another through their interaction with a single assembly of inhibitory neurons. Let us denote by m^μ(t) the fraction of cell-assembly number μ which fires at time t, and by m_I(t) the fraction of active inhibitory neurons. We will refer to m^μ as the activity of the μth memory pattern.
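The clamp-and-release behavior of Fig. 1 can be reproduced with a short numerical sketch (our illustration, not the authors' code): Euler integration of Eqs. (1)-(2) with the Fig. 1 parameters, taking r = a1 f − a2 p; the time scale γ = 1 and the step size are arbitrary choices.

```python
# Sketch: single-cell dynamic threshold per Eqs. (1)-(2), Fig. 1 parameters.
# The activity m is clamped to 1 until t_on, then quenched to zero.

def simulate_threshold(t_on=20.0, t_total=100.0, dt=0.01, gamma=1.0,
                       c1=1.2, c2=1.05, a1=4.0, a2=1.0):
    f = p = 0.0
    trace = []
    for k in range(int(t_total / dt)):
        t = k * dt
        m = 1.0 if t < t_on else 0.0                 # clamp, then release
        f += dt / gamma * (m + (1.0 / c1 - 1.0) * f)  # fatigue: fast decay
        p += dt / gamma * (m + (1.0 / c2 - 1.0) * p)  # potentiation: slow decay
        trace.append((t, a1 * f - a2 * p))            # threshold r(t)
    return trace

trace = simulate_threshold()
r_end_of_clamp = trace[int(20.0 / 0.01) - 1][1]
r_min_after = min(r for t, r in trace if t >= 20.0)
r_final = trace[-1][1]
```

Because f decays faster than p after release, r dips below zero (transient potentiation) before relaxing back toward zero, as described in the text.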
There are P different memories in the model, and their activities obey the following differential equations

dm^μ/dt = −m^μ + F_T(A m^μ − B m_I − θ^μ + i^μ)    (3)
dm_I/dt = −m_I + F_T(C M − D m_I − θ_I)    where    M = Σ_μ m^μ    (4)

θ^μ and θ_I are the thresholds of all excitatory and inhibitory neurons correspondingly, and i^μ represents the input into cell-assembly μ. The four parameters A, B, C and D are all positive and represent the different couplings between the neurons. This system is an attractor neural network. In the absence of input and dynamical thresholds it is a dissipative system which flows into fixed points determined by the memories. This system is a generalization of the E-I model of Wilson and Cowan (1972) in which we have introduced competing memory patterns. The latter make it into an attractor neural network. Wilson and Cowan have shown that a pair of excitatory and inhibitory assemblies, when properly connected, will form an oscillator. We induce oscillations in a different way, keeping the option of having the network behave either as an attractor neural network or as an oscillating one: we turn the thresholds of the excitatory neurons into dynamic variables, which are defined by θ^μ = θ_0 + b r^μ. The dynamics of the new variables r^μ are chosen to follow equations (1) and (2), where all elements r, f, p and m refer to the same cell-assembly μ. To understand the effects of this change let us first limit ourselves to the fatigue component only, i.e. a1 = 1 and a2 = 0 in Eq. (1). Imagine a situation in which the system would flow into a fixed point m^μ = 1. r^μ will then increase until it reaches the value c1/(c1 − 1). This means that the argument of the F_T function in the equation for m^μ decreases by g = b c1/(c1 − 1). If this overcomes the effect of the other terms, the amplitude m^μ decreases and the system moves out of the attractor and falls into the basin of a different center of attraction.
This process can continue indefinitely, creating an oscillatory network which moves from one memory to another. Envisage now turning on a p^μ component, leading to an r^μ behavior of the type depicted in Fig. 1. Its effect will evidently be the same as the input i^μ in Eq. (3) during the time in which it is active. In other words, it will help to reactivate the cell-assembly μ, thus carrying the information that this memory was active before. Therefore, its role in our system is to serve as the inertia component necessary for creating the effect of STM.

4 Segmentation and Short Term Memory

In this section we present results of numerical investigations of our model. The parameters used in the following are A = C = D = 1, B = 1.1, θ_0 = 0.075, θ_I = −0.55, T = 0.05, b = 0.2, γ = 2.5, and the values of a_i and c_i of Fig. 1. We let n of the P memories have a constant input of the form

i^μ = i    for μ = 1, ..., n
i^μ = 0    for μ = n + 1, ..., P.    (5)

An example of the result of a system with P = 10 and n = 4 is shown in Fig. 2.

Figure 2: Results of our model for P = 10 memories and n = 4 inputs. The first frame displays the activities m of the four relevant cell-assemblies, and the second frame represents their r values. The arrow indicates the duration of the mixed input.
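As an illustration (not the authors' code), Eqs. (3)-(4) with the dynamic thresholds of Eqs. (1)-(2) can be integrated directly. We assume F_T is the standard sigmoid F_T(x) = 1/(1 + exp(−x/T)); the parameter values follow Section 4, while the Euler scheme, run length, and the tiny symmetry-breaking initial activities are our own choices.

```python
# Sketch: competing cell-assemblies (Eqs. 3-4) with dynamic thresholds.
import math

def F(x, T=0.05):
    """Assumed sigmoid gain function with temperature T."""
    return 1.0 / (1.0 + math.exp(-x / T))

def run(P=10, n=4, i_in=1.0, t_total=50.0, dt=0.02):
    A = C = D = 1.0
    B, theta0, thetaI, b, gamma = 1.1, 0.075, -0.55, 0.2, 2.5
    c1, c2, a1, a2 = 1.2, 1.05, 4.0, 1.0
    m = [0.001 * (mu + 1) for mu in range(P)]   # break the symmetry slightly
    f = [0.0] * P
    p = [0.0] * P
    mI = 0.0
    history = []
    for _ in range(int(t_total / dt)):
        M = sum(m)
        new_m = []
        for mu in range(P):
            i_mu = i_in if mu < n else 0.0       # only the first n get input
            r = a1 * f[mu] - a2 * p[mu]          # dynamic threshold component
            drive = A * m[mu] - B * mI - (theta0 + b * r) + i_mu
            new_m.append(m[mu] + dt * (-m[mu] + F(drive)))
            f[mu] += dt / gamma * (m[mu] + (1 / c1 - 1) * f[mu])
            p[mu] += dt / gamma * (m[mu] + (1 / c2 - 1) * p[mu])
        mI += dt * (-mI + F(C * M - D * mI - thetaI))
        m = new_m
        history.append(list(m))
    return history
```

Driven assemblies (μ < n) turn on and fatigue while the undriven ones are suppressed by the common inhibition; whether clean staggered oscillations emerge, as in Fig. 2, depends on the integration details, so this is a qualitative sketch rather than a reproduction of the figure.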
Here we display the activities of the cell-assemblies that receive the constant input and their corresponding average thresholds. While the signal of the mixed input is on (denoted by an arrow along the time scale) we see how the phenomenon of segmentation develops. The same staggered oscillation of the four cell-assemblies which received an input is sustained after the signal is turned off. This indicates that the system functions as a STM. Note that no synaptic connections were changed and, once the system receives a new input, its behavior will be revised. However, as long as it is left alone, it will continue to activate the cell-assemblies affected by the original input. We were able to obtain good results only for low n values, n ≤ 4. As n is increased we have difficulties with both segmentation and STM. By modifying slightly the paradigm we were able to feed 5 different inputs into STM, as shown in Fig. 3. This required presenting them at different times, as indicated by the 5 arrows on this figure. In other words, this system does not perform segmentation but it continues to work as a STM. Note, however, that the order of the different activities is no longer maintained after the stimuli are turned off.

Figure 3: Results for 5 inputs which are fed in consecutively at the times indicated by the short arrows. The model functions as STM without segmentation.

5 Discussion

Psychological experiments show that subjects can repeat a sequence of verbal items in perfect order as long as their number is small (7 ± 2). The items may be numbers or letters but can also be combinations of the latter such as words or recognizable dates or acronyms.
This proves that STM makes use of the encoded material in the long term memory (Miller 1956). This relation between the two different kinds of memory lies at the basis of our model. Long term memory is represented by excitatory cell assemblies. Incorporating threshold fatigue into the model, it acquires the capability of performing temporal segmentation of external input. Adding to the threshold post-tetanic potentiation, the model becomes capable of maintaining the segmented information in the form of staggered oscillations. This is the property which we view as responsible for STM. Both segmentation and STM have very limited capacities. This seems to follow from the oscillatory nature of the system which we use to model these functions. In contrast with long term memory, whose capacity can be increased endlessly by adding neurons and synaptic connections, we find here that only a few items can be stored in the dynamic fashion of staggered oscillations, irrespective of the size of the system. We regard this result as very significant, in view of the fact that the same holds for the limited psychological ability of attention and STM. It may indicate that the oscillatory model contains the key to the understanding of these psychological findings. In order to validate the hypothesis that STM is based on oscillatory correlations between firing rates of neurons, some more experimental neurobiological and psychophysical research is required. While no conclusive results were yet obtained from recordings of the cortical activity in the monkey, some positive support has been obtained in psychophysical experiments. Preliminary results show that an oscillatory component can be found in the percentage of correct responses in STM matching experiments (Usher & Sagi 1991). Our mathematical model is based on many specific assumptions.
We believe that our main results are characteristic of a class of such models which can be obtained by changing various elements in our system. The main point is that dynamical storage of information can be achieved through staggered oscillations of memory activities. Moreover, to sustain them in the absence of an external input, a potentiation capability has to be present. A model which contains both should be able to accommodate STM in the fashion which we have demonstrated.

Acknowledgements

M. Usher is the recipient of a Dov Biegun post-doctoral fellowship. We wish to thank S. Popescu for helpful discussions.

References

Crick, F. & Koch, C. 1990. Towards a neurobiological theory of consciousness. Seminars in the Neurosciences 2, 263-275.
Horn, D., Sagi, D. & Usher, M. 1991. Segmentation, binding and illusory conjunctions. Neural Comp. 3, 509-524.
Horn, D. & Usher, M. 1989. Neural networks with dynamical thresholds. Phys. Rev. A 40, 1036-1044.
Horn, D. & Usher, M. 1990. Excitatory-inhibitory networks with dynamical thresholds. Int. J. Neural Syst. 1, 249-257.
Horn, D. & Usher, M. 1991. Parallel activation of memories in an oscillatory neural network. Neural Comp. 3, 31-43.
Kammen, D.M., Holmes, P.J. & Koch, C. 1990. Origin of oscillations in visual cortex: Feedback versus local coupling. In Models of Brain Function, Cotterill, R.M.J., ed., pp. 273-284. Cambridge University Press.
Konig, P. & Schillen, T.B. 1991. Stimulus-dependent assembly formation of oscillatory responses: I. Synchronization. Neural Comp. 3, 155-166.
Miller, G. 1956. The magical number seven, plus or minus two. Psych. Rev. 63, 81-97.
Sompolinsky, H., Golomb, D. & Kleinfeld, D. 1990. Global processing of visual stimuli in a neural network of coupled oscillators. Proc. Natl. Acad. Sci. USA 87, 7200-7204.
Schillen, T.B. & Konig, P. 1991. Stimulus-dependent assembly formation of oscillatory responses: II. Desynchronization. Neural Comp. 3, 167-178.
Wang, D., Buhmann, J. & von der Malsburg, C. 1990.
Pattern segmentation in associative memory. Neural Comp. 2, 94-106.
Wilson, H.R. & Cowan, J.D. 1972. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1-24.
Usher, M. & Sagi, D. 1991, in preparation.
Zucker, R.S. 1989. Short-term synaptic plasticity. Ann. Rev. Neurosci. 12, 13-31.

PART III SPEECH
Estimating Average-Case Learning Curves Using Bayesian, Statistical Physics and VC Dimension Methods

David Haussler
University of California
Santa Cruz, California

Manfred Opper
Institut für Theoretische Physik
Universität Giessen, Germany

Michael Kearns*
AT&T Bell Laboratories
Murray Hill, New Jersey

Robert Schapire
AT&T Bell Laboratories
Murray Hill, New Jersey

Abstract

In this paper we investigate an average-case model of concept learning, and give results that place the popular statistical physics and VC dimension theories of learning curve behavior in a common framework.

1 INTRODUCTION

In this paper we study a simple concept learning model in which the learner attempts to infer an unknown target concept f, chosen from a known concept class F of {0,1}-valued functions over an input space X. At each trial i, the learner is given a point x_i ∈ X and asked to predict the value of f(x_i). If the learner predicts f(x_i) incorrectly, we say the learner makes a mistake. After making its prediction, the learner is told the correct value. This simple theoretical paradigm applies to many areas of machine learning, including much of the research in neural networks. The quantity of fundamental interest in this setting is the learning curve, which is the function of m defined as the probability the learning algorithm makes a mistake predicting f(x_{m+1}), having already seen the examples (x_1, f(x_1)), ..., (x_m, f(x_m)).

*Contact author. Address: AT&T Bell Laboratories, 600 Mountain Avenue, Room 2A-423, Murray Hill, New Jersey 07974. Electronic mail: mkearns@research.att.com.

In this paper we study learning curves in an average-case setting that admits a prior distribution over the concepts in F. We examine learning curve behavior for the optimal Bayes algorithm and for the related Gibbs algorithm that has been studied in statistical physics analyses of learning curve behavior.
For both algorithms we give new upper and lower bounds on the learning curve in terms of the Shannon information gain. The main contribution of this research is in showing that the average-case or Bayesian model provides a unifying framework for the popular statistical physics and VC dimension theories of learning curves. By beginning in an average-case setting and deriving bounds in information-theoretic terms, we can gradually recover a worst-case theory by removing the averaging in favor of combinatorial parameters that upper bound certain expectations. Due to space limitations, the paper is technically dense and almost all derivations and proofs have been omitted. We strongly encourage the reader to refer to our longer and more complete versions [4, 6] for additional motivation and technical detail.

2 NOTATIONAL CONVENTIONS

Let X be a set called the instance space. A concept class F over X is a (possibly infinite) collection of subsets of X. We will find it convenient to view a concept f ∈ F as a function f : X → {0, 1}, where we interpret f(x) = 1 to mean that x ∈ X is a positive example of f, and f(x) = 0 to mean x is a negative example. The symbols P and D are used to denote probability distributions. The distribution P is over F, and D is over X. When F and X are countable we assume that these distributions are defined as probability mass functions. For uncountable F and X they are assumed to be probability measures over some appropriate σ-algebra. All of our results hold for both countable and uncountable F and X. We use the notation E_{f∈P}[X(f)] for the expectation of the random variable X with respect to the distribution P, and Pr_{f∈P}[cond(f)] for the probability with respect to the distribution P of the set of all f satisfying the predicate cond(f). Everything that needs to be measurable is assumed to be measurable.

3 INFORMATION GAIN AND LEARNING

Let F be a concept class over the instance space X. Fix a target concept f ∈ F and an infinite sequence of instances x = x_1, ..., x_m, x_{m+1}, ... with x_m ∈ X for all m. For now we assume that the fixed instance sequence x is known in advance to the learner, but that the target concept f is not. Let P be a probability distribution over the concept class F. We think of P in the Bayesian sense as representing the prior beliefs of the learner about which target concept it will be learning. In our setting, the learner receives information about f incrementally via the label
Fix a target concept I E F and an infinite sequence of instances x = Xl, .. . , X m , Xm+l, ... with Xm E X for all m. For now we assume that the fixed instance sequence x is known in advance to the learner, but that the target concept I is not. Let P be a probability distribution over the concept class F. We think of P in the Bayesian sense as representing the prior beliefs of the learner about which target concept it will be learning. In our setting, the learner receives information about I incrementally via the label Estimating Average-Case Learning Curves 857 sequence I(xd, ... , I(xm), I(xm+d, .... At time m, the learner receives the label I(xm). For any m ~ 1 we define (with respect to x, I) the mth version space Fm(x, I) = {j E F: j(xd = I(XI), . .. , j(Xm) = I(xm)} and the mth volume V!(x, I) = P[Fm(x, I)]. We define Fo(x, I) = F for all x and I, so Vl(x, I) = 1. The version space at time m is simply the class of all concepts in F consistent with the first m labels of I (with respect to x), and the mth volume is the measure of this class under P. For the first part of the paper, the infinite instance sequence x and the prior P are fixed; thus we simply write Fm(f) and Vm(f). Later, when the sequence x is chosen randomly, we will reintroduce this dependence explicitly. We adopt this notational practice of omitting any dependence on a fixed x in many other places as well. For each m ~ 0 let us define the mth posterior distribution Pm(x, I) = Pm by restricting P to the mth version space Fm(f); that is, for all (measurable) S C F, Pm[S] = P[S n Fm(I))/P[Fm(l)] = P[S n Fm(I)]/Vm(f). Having already seen I(xd, ... , I(xm), how much information (assuming the prior P) does the learner expect to gain by seeing I(xm+d? 
If we let I_{m+1}(x, f) (abbreviated I_{m+1}(f) since x is fixed for now) be a random variable whose value is the (Shannon) information gained from f(x_{m+1}), then it can be shown that the expected information is

E_{f∈P}[I_{m+1}(f)] = E_{f∈P}[−log (V_{m+1}(f)/V_m(f))] = E_{f∈P}[−log X_{m+1}(f)]    (1)

where we define the (m+1)st volume ratio by X_{m+1}(x, f) = X_{m+1}(f) = V_{m+1}(f)/V_m(f). We now return to our learning problem, which we define to be that of predicting the label f(x_{m+1}) given only the previous labels f(x_1), ..., f(x_m). The first learning algorithm we consider is called the Bayes optimal classification algorithm, or the Bayes algorithm for short. For any m and b ∈ {0, 1}, define F_m^b(x, f) = F_m^b(f) = {f̂ ∈ F_m(x, f) : f̂(x_{m+1}) = b}. Then the Bayes algorithm is:

If P_m[F_m^1(f)] > P_m[F_m^0(f)], predict f(x_{m+1}) = 1.
If P_m[F_m^1(f)] < P_m[F_m^0(f)], predict f(x_{m+1}) = 0.
If P_m[F_m^1(f)] = P_m[F_m^0(f)], flip a fair coin to predict f(x_{m+1}).

It is well known that if the target concept f is drawn at random according to the prior distribution P, then the Bayes algorithm is optimal in the sense that it minimizes the probability that f(x_{m+1}) is predicted incorrectly. Furthermore, if we let Bayes_{m+1}(x, f) (abbreviated Bayes_{m+1}(f) since x is fixed for now) be a random variable whose value is 1 if the Bayes algorithm predicts f(x_{m+1}) incorrectly and 0 otherwise, then it can be shown that the probability of a mistake for a random f is

E_{f∈P}[Bayes_{m+1}(f)] = Pr_{f∈P}[X_{m+1}(f) < 1/2] + (1/2) Pr_{f∈P}[X_{m+1}(f) = 1/2].    (2)

Despite the optimality of the Bayes algorithm, it suffers the drawback that its hypothesis at any time m may not be a member of the target class F. (Here we define the hypothesis of an algorithm at time m to be the (possibly probabilistic) mapping f̂ : X → {0, 1} obtained by letting f̂(x) be the prediction of the algorithm when x_{m+1} = x.) This drawback is absent in our second learning algorithm, which we call the Gibbs algorithm [6]: Given f(x_1), ..., f(x_m), choose a hypothesis concept f̂ randomly from P_m. Given x_{m+1}, predict f(x_{m+1}) = f̂(x_{m+1}).
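The Bayes algorithm is easy to state concretely for a finite class. The sketch below (our illustrative example; the threshold class and uniform prior are not from the paper) maintains the version space explicitly and predicts by comparing the posterior mass on each label.

```python
# Sketch: Bayes optimal classification on a finite concept class.
# Concepts are thresholds f_t(x) = [x >= t] on X = {0,...,9}, uniform prior.
from fractions import Fraction

concepts = [(lambda x, t=t: int(x >= t)) for t in range(11)]
prior = [Fraction(1, 11)] * 11

def bayes_predict(version_space, x):
    # posterior mass (prior restricted to the version space) on each label
    mass1 = sum(prior[i] for i in version_space if concepts[i](x) == 1)
    mass0 = sum(prior[i] for i in version_space if concepts[i](x) == 0)
    if mass1 == mass0:
        return None          # tie: the algorithm flips a fair coin
    return 1 if mass1 > mass0 else 0

def run_trials(target, xs):
    vs = list(range(len(concepts)))       # version space F_m, by index
    preds = []
    for x in xs:
        preds.append(bayes_predict(vs, x))
        y = concepts[target](x)           # learner is told the correct label
        vs = [i for i in vs if concepts[i](x) == y]
    return preds

preds = run_trials(target=5, xs=[5, 8, 4, 2, 9, 0])
```

With target t = 5 and this instance sequence, the only mistake is on x = 4, after which the version space collapses to the target and every later prediction is correct.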
The Gibbs algorithm is the "zero-temperature" limit of the learning algorithm studied in several recent papers [2, 3, 8, 9]. If we let Gibbs_{m+1}(x, f) (abbreviated Gibbs_{m+1}(f) since x is fixed for now) be a random variable whose value is 1 if the Gibbs algorithm predicts f(x_{m+1}) incorrectly and 0 otherwise, then it can be shown that the probability of a mistake for a random f is

E_{f∈P}[Gibbs_{m+1}(f)] = E_{f∈P}[1 − X_{m+1}(f)].    (3)

Note that by the definition of the Gibbs algorithm, Equation (3) is exactly the average probability of mistake of a consistent hypothesis, using the distribution on F defined by the prior. Thus bounds on this expectation provide an interesting contrast to those obtained via VC dimension analysis, which always gives bounds on the probability of mistake of the worst consistent hypothesis.

4 THE MAIN INEQUALITY

In this section we state one of our main results: a chain of inequalities that upper and lower bounds the expected error for both the Bayes and Gibbs algorithms by simple functions of the expected information gain. More precisely, using the characterizations of the expectations in terms of the volume ratio X_{m+1}(f) given by Equations (1), (2) and (3), we can prove the following, which we refer to as the main inequality:

H^{-1}(E_{f∈P}[I_{m+1}(f)]) ≤ E_{f∈P}[Bayes_{m+1}(f)] ≤ E_{f∈P}[Gibbs_{m+1}(f)] ≤ (1/2) E_{f∈P}[I_{m+1}(f)].    (4)

Here we have defined an inverse to the binary entropy function H(p) = −p log p − (1 − p) log(1 − p) by letting H^{-1}(q), for q ∈ [0, 1], be the unique p ∈ [0, 1/2] such that H(p) = q. Note that the bounds given depend on properties of the particular prior P, and on properties of the particular fixed sequence x. These upper and lower bounds are equal (and therefore tight) at both extremes E_{f∈P}[I_{m+1}(f)] = 1 (maximal information gain) and E_{f∈P}[I_{m+1}(f)] = 0 (minimal information gain). To obtain a weaker but perhaps more convenient lower bound, it can also be shown that there is a constant c_0 > 0 such that for all p > 0, H^{-1}(p) ≥ c_0 p/log(2/p). Finally, if all that is wanted is a direct comparison of the performances of the Gibbs and Bayes algorithms, we can also show:

E_{f∈P}[Gibbs_{m+1}(f)] ≤ 2 E_{f∈P}[Bayes_{m+1}(f)].    (5)
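The main inequality can be checked numerically on a toy class (our example, not one from the paper): 11 threshold concepts on {0,...,9} under a uniform prior, with the first prediction evaluated exactly. Logs are base 2, and H^{-1} is computed by bisection.

```python
# Numerical check of the chain
#   H^{-1}(E[I]) <= E[Bayes mistake] <= E[Gibbs mistake] <= (1/2) E[I]
# for thresholds f_t(x) = [x >= t], t = 0..10, uniform prior, at x_1 = 5.
import math

def H(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H_inv(q):                       # inverse of H on [0, 1/2], by bisection
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(mid) < q else (lo, mid)
    return (lo + hi) / 2

N = 11
x1 = 5
exp_info = exp_bayes = exp_gibbs = 0.0
for t in range(N):                  # average over targets drawn from the prior
    label = int(x1 >= t)
    agree = sum(1 for s in range(N) if int(x1 >= s) == label)
    ratio = agree / N               # volume ratio X_1(f) = V_1 / V_0
    exp_info += -math.log2(ratio) / N
    exp_gibbs += (1 - ratio) / N    # Gibbs errs with probability 1 - X_1(f)
    exp_bayes += (1.0 if ratio < 0.5 else 0.5 if ratio == 0.5 else 0.0) / N

chain_holds = (H_inv(exp_info) <= exp_bayes + 1e-9
               <= exp_gibbs + 2e-9 <= 0.5 * exp_info + 3e-9)
```

Here E[Bayes] = 5/11 matches the lower bound H^{-1}(E[I]) exactly, while (1/2)E[I] ≈ 0.497 sits just above E[Gibbs] = 60/121 ≈ 0.496.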
Finally, if all that is wanted is a direct comparison of the performances of the Gibbs and Bayes algorithms, we can also show: Estimating Average-Case Learning Curves 859 5 THE MAIN INEQUALITY: CUMULATIVE VERSION In this section we state a cumulative version of the main inequality: namely, bounds on the expected cumulative number of mistakes made in the first m trials (rather than just the instantaneous expectations). First, for the cumulative information gain, it can be shown that EfE'P [L~l Li(f)] = EfE'P[-log Vm(f)]. This expression has a natural interpretation. The first m instances Xl, . .. , xm of x induce a partition II!:(x) of the concept class :F defined by II~(x) = II~ = {:Fm(x, f) : f E :F}. Note that III~I is always at most 2m , but may be considerably smaller, depending on the interaction between :F and Xl,· ·· ,Xm· It is clear that EfE'P[-logVm(f)] = - L:7rEIF P[1I']1ogP[71']. Thus the expected cumulative information gained from the labels of Xl, .. . , Xm is simply the entropy of the partition II~ under the distribution P. We shall denote this entropy by 1i'P(II~(x)) = 1i'f;.(x) = 1i'f;.. Now analogous to the main inequality for the instantaneous case (Inequality (4)), we can show: log(2m/1i~) < mW' (~ 1l::') :'0 E/E1' [t. BayeSi(f)] < E/E'P [t, GibbSi(f)] :'0 ~1l::' (6) Here we have applied the inequality 1i-l(p) ~ cop/log(2/p) in order to give the lower bound in more convenient form. As in the instantaneous case, the upper and lower bounds here depend on properties of the particular P and x. When the cumulative information gain is maximum (1i'f;. = m), the upper and lower bounds are tight. These bounds on learning performance in terms of a partition entropy are of special importance to us, since they will form the crucial link between the Bayesian setting and the Vapnik-Chervonenkis dimension theory. 
6 MOVING TO A WORST-CASE THEORY: BOUNDING THE INFORMATION GAIN BY THE VC DIMENSION

Although we have given upper bounds on the expected cumulative number of mistakes for the Bayes and Gibbs algorithms in terms of H_m^P(x), we are still left with the problem of evaluating this entropy, or at least obtaining reasonable upper bounds on it. We can intuitively see that the "worst case" for learning occurs when the partition entropy H_m^P(x) is as large as possible. In our context, the entropy is qualitatively maximized when two conditions hold: (1) the instance sequence x induces a partition of F that is the largest possible, and (2) the prior P gives equal weight to each element of this partition. In this section, we move away from our Bayesian average-case setting to obtain worst-case bounds by formalizing these two conditions in terms of combinatorial parameters depending only on the concept class F. In doing so, we form the link between the theory developed so far and the VC dimension theory.

The second of the two conditions above is easily quantified. Since the entropy of a partition is at most the logarithm of the number of classes in it, a trivial upper bound on the entropy which holds for all priors P is H_m^P(x) ≤ log |Π_m^F(x)|. VC dimension theory provides an upper bound on log |Π_m^F(x)| as follows. For any sequence x = x_1, x_2, ... of instances and for m ≥ 1, let dim_m(F, x) denote the largest d ≥ 0 such that there exists a subsequence x_{i_1}, ..., x_{i_d} of x_1, ..., x_m with |Π_d^F((x_{i_1}, ..., x_{i_d}))| = 2^d; that is, for every possible labeling of x_{i_1}, ..., x_{i_d} there is some target concept in F that gives this labeling. The Vapnik-Chervonenkis (VC) dimension of F is defined by dim(F) = max{dim_m(F, x) : m ≥ 1 and x_1, x_2, ... ∈ X}. It can be shown [7, 10] that for all x and m ≥ d ≥ 1,

log |Π_m^F(x)| ≤ (1 + o(1)) dim_m(F, x) log (m / dim_m(F, x))    (7)

where o(1) is a quantity that goes to zero as α = m/dim_m(F, x) goes to infinity.
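These combinatorial quantities can be computed by brute force for a small class (our illustration, not from the paper): the one-dimensional threshold class has VC dimension 1, and on m = 10 distinct points it induces exactly m + 1 labelings, matching Sauer's bound Σ_{i≤d} C(m, i) with equality.

```python
# Brute-force dim_m(F, x) and growth function for f_t(x) = [x >= t].
from itertools import combinations
from math import comb

xs = list(range(10))                      # m = 10 distinct instance points
thresholds = range(12)                    # t = 0..11 covers all behaviors
labelings = {tuple(int(x >= t) for x in xs) for t in thresholds}

def shattered(points):
    got = {tuple(int(x >= t) for x in points) for t in thresholds}
    return len(got) == 2 ** len(points)

# dim_m(F, x): largest d such that some d-point subsequence is shattered.
dim_m = max(d for d in range(len(xs) + 1)
            if any(shattered(c) for c in combinations(xs, d)))

sauer_bound = sum(comb(len(xs), i) for i in range(dim_m + 1))
```

Two points x1 < x2 can never be shattered, since the labeling (1, 0) would require x1 ≥ t and x2 < t simultaneously; hence dim_m = 1 here.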
In all of our discussions so far, we have assumed that the instance sequence x is fixed in advance, but that the target concept f is drawn randomly according to P. We now move to the completely probabilistic model, in which f is drawn according to P, and each instance x_m in the sequence x is drawn randomly and independently according to a distribution D over the instance space X (this infinite sequence of draws from D will be denoted x ∈ D*). Under these assumptions, it follows from Inequalities (6) and (7), and the observation above that H_m^P(x) ≤ log |Π_m^F(x)|, that for any P and any D,

E_{f∈P, x∈D*}[Σ_{i=1}^m Bayes_i(x, f)] ≤ E_{f∈P, x∈D*}[Σ_{i=1}^m Gibbs_i(x, f)]
  ≤ (1/2) E_{x∈D*}[log |Π_m^F(x)|]
  ≤ (1 + o(1)) E_{x∈D*}[(dim_m(F, x)/2) log (m/dim_m(F, x))]
  ≤ (1 + o(1)) (dim(F)/2) log (m/dim(F)).    (8)

The expectation E_{x∈D*}[log |Π_m^F(x)|] is the VC entropy defined by Vapnik and Chervonenkis in their seminal paper on uniform convergence [11]. In terms of instantaneous mistake bounds, using more sophisticated techniques [4], we can show that for any P and any D,

E_{f∈P, x∈D*}[Bayes_m(x, f)] ≤ E_{x∈D*}[dim_m(F, x)/m] ≤ dim(F)/m    (9)
E_{f∈P, x∈D*}[Gibbs_m(x, f)] ≤ E_{x∈D*}[2 dim_m(F, x)/m] ≤ 2 dim(F)/m.    (10)

Haussler, Littlestone and Warmuth [5] construct specific D, P and F for which the last bound given by Inequality (8) is tight to within a factor of 1/ln 2 ≈ 1.44; thus this bound cannot be improved by more than this factor in general.¹ Similarly, the bound given by Inequality (9) cannot be improved by more than a factor of 2 in general. For specific D, P and F, however, it is possible to improve the general bounds given in Inequalities (8), (9) and (10) by more than the factors indicated above.

¹It follows that the expected total number of mistakes of the Bayes and the Gibbs algorithms differ by a factor of at most about 1.44 in each of these cases; this was not previously known.
We calculate the instantaneous mistake bounds for the Bayes and Gibbs algorithms in the natural case that F is the set of homogeneous linear threshold functions on R^d and both the distribution D and the prior P on possible target concepts (represented also by vectors in R^d) are uniform on the unit sphere in R^d. This class has VC dimension d. In this case, under certain reasonable assumptions used in statistical mechanics, it can be shown that for m ≥ d ≥ 1,

E_{f∈P, x∈D*}[Bayes_m(x, f)] ≈ 0.44 d/m

(compared with the upper bound of d/m given by Inequality (9) for any class of VC dimension d) and

E_{f∈P, x∈D*}[Gibbs_m(x, f)] ≈ 0.62 d/m

(compared with the upper bound of 2d/m in Inequality (10)). The ratio of these asymptotic bounds is √2. We can also show that this performance advantage of Bayes over Gibbs is quite robust even when P and D vary, and there is noise in the examples [6].

7 OTHER RESULTS AND CONCLUSIONS

We have a number of other results, and briefly describe here one that may be of particular interest to neural network researchers. In the case that the class F has infinite VC dimension (for instance, if F is the class of all multi-layer perceptrons of finite size), we can still obtain bounds on the number of cumulative mistakes by decomposing F into F_1, F_2, ..., F_i, ..., where each F_i has finite VC dimension d_i, and by decomposing the prior P over F as a linear sum P = Σ_{i=1}^∞ a_i P_i, where each P_i is an arbitrary prior over F_i, and Σ_{i=1}^∞ a_i = 1. A typical decomposition might let F_i be all multi-layer perceptrons of a given architecture with at most i weights, in which case d_i = O(i log i) [1]. Here we can show an upper bound on the cumulative mistakes during the first m examples of roughly H({a_i}) + (Σ_{i=1}^∞ a_i d_i) log m for both the Bayes and Gibbs algorithms, where H({a_i}) = −Σ_{i=1}^∞ a_i log a_i. The quantity Σ_{i=1}^∞ a_i d_i plays the role of an "effective VC dimension" relative to the prior weights {a_i}.
In the case that x is also chosen randomly, we can bound the probability of mistake on the m-th trial by roughly (1/m)(H({a_i}) + [Σ_{i=1}^∞ a_i d_i] log m). In our current research we are working on extending the basic theory presented here to the problems of learning with noise (see Opper and Haussler [6]), learning multi-valued functions, and learning with other loss functions. Perhaps the most important general conclusion to be drawn from the work presented here is that the various theories of learning curves based on diverse ideas from information theory, statistical physics and the VC dimension are all in fact closely related, and can be naturally and beneficially placed in a common Bayesian framework.
862 Haussler, Kearns, Opper, and Schapire
Acknowledgements
We are greatly indebted to Ron Rivest for his valuable suggestions and guidance, and to Sara Solla and Naftali Tishby for insightful ideas in the early stages of this investigation. We also thank Andrew Barron, Andy Kahn, Nick Littlestone, Phil Long, Terry Sejnowski and Haim Sompolinsky for stimulating discussions on these topics. This research was supported by ONR grant N00014-91-J-1162, AFOSR grant AFOSR-89-0506, ARO grant DAAL03-86-K-0171, DARPA contract N00014-89-J-1988, and a grant from the Siemens Corporation. This research was conducted in part while M. Kearns was at the M.I.T. Laboratory for Computer Science and the International Computer Science Institute, and while R. Schapire was at the M.I.T. Laboratory for Computer Science and Harvard University.
References
[1] E. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.
[2] J. Denker, D. Schwartz, B. Wittner, S. Solla, R. Howard, L. Jackel, and J. Hopfield. Automatic learning, rule extraction and generalization. Complex Systems, 1:877-922, 1987.
[3] G. Gyorgi and N. Tishby. Statistical theory of learning a rule. In Neural Networks and Spin Glasses. World Scientific, 1990.
[4] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. In Computational Learning Theory: Proceedings of the Fourth Annual Workshop. Morgan Kaufmann, 1991.
[5] D. Haussler, N. Littlestone, and M. Warmuth. Predicting {0, 1}-functions on randomly drawn points. Technical Report UCSC-CRL-90-54, University of California Santa Cruz, Computer Research Laboratory, Dec. 1990.
[6] M. Opper and D. Haussler. Calculation of the learning curve of Bayes optimal classification algorithm for learning a perceptron with noise. In Computational Learning Theory: Proceedings of the Fourth Annual Workshop. Morgan Kaufmann, 1991.
[7] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory (Series A), 13:145-147, 1972.
[8] H. Sompolinsky, N. Tishby, and H. Seung. Learning from examples in large neural networks. Physics Review Letters, 65:1683-1686, 1990.
[9] N. Tishby, E. Levin, and S. Solla. Consistent inference of probabilities in layered networks: predictions and generalizations. In IJCNN International Joint Conference on Neural Networks, volume II, pages 403-409. IEEE, 1989.
[10] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
[11] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.
1991
Operators and curried functions: Training and analysis of simple recurrent networks
Janet Wiles, Depts of Psychology and Computer Science, University of Queensland, QLD 4072 Australia. janetw@CS.uq.oz.au
Anthony Bloesch, Dept of Computer Science, University of Queensland, QLD 4072 Australia. anthonyb@cs.uq.oz.au
Abstract
We present a framework for programming the hidden unit representations of simple recurrent networks based on the use of hint units (additional targets at the output layer). We present two ways of analysing a network trained within this framework: Input patterns act as operators on the information encoded by the context units; symmetrically, patterns of activation over the context units act as curried functions of the input sequences. Simulations demonstrate that a network can learn to represent three different functions simultaneously, and canonical discriminant analysis is used to investigate how operators and curried functions are represented in the space of hidden unit activations.
1 INTRODUCTION
Many recent papers have contributed to the understanding of recurrent networks and their potential for modelling sequential phenomena (see for example Giles, Sun, Chen, Lee, & Chen, 1990; Elman, 1989; 1990; Jordan, 1986; Cleeremans, Servan-Schreiber & McClelland, 1989; Williams & Zipser, 1988). Of particular interest in these papers is the development of recurrent architectures and learning algorithms able to solve complex problems. The perspective of the work we present here has many similarities with these studies; however, we focus on programming a recurrent network for a specific task, and hence provide appropriate sequences of inputs to learn the temporal component. The function computed by a neural network is conventionally represented by its weights. During training, the task of a network is to learn a set of weights that causes the appropriate action (or set of context-specific actions) for each input pattern.
However, in a network with recurrent connections, patterns of activation are also part of the function computed by a network. After training (when the weights have been fixed), each input pattern has a specific effect on the pattern of activation across the hidden and output units which is modulated by the current state of those units. That is, each input pattern is a context-sensitive operator on the state of the system. To illustrate this idea, we present a task in which many sequences of the form {F, arg1, ..., argn} are input to a network, which is required to output the value of each function, F(arg1, ..., argn). The task is interesting since it illustrates how more than one function can be computed by the same network and how the function selected can be specified by the inputs. Viewing all the inputs (both function patterns, F, and argument patterns, argi) as operators allows us to analyse the effect of each input on the state of the network (the pattern of activation in the hidden and context units). From this perspective, the weights in the network can be viewed as an interpreter which has been programmed to carry out the operations specified by each input pattern. We use the term programming intentionally, to convey the idea that the actions of each input pattern play a specific role in the processing of a sequence. In the simulations described in this paper, we use the simple recurrent network (SRN) proposed by Elman (1990). The art of programming enters the simulations in the use of extra target units, called hints, that are provided at the output layer. At each step in learning a sequence, hints specify all the information that the network must preserve in the hidden unit representation (the state of the system) in order to calculate outputs later in the sequence (for a discussion of the use of hints in training a recurrent network see Rumelhart, Hinton & Williams, 1986).
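The SRN mechanics assumed above (context units holding a copy of the previous hidden state) can be sketched as a single forward step; this is an illustrative toy with made-up weights, not the trained network from the paper:

```python
import math

def srn_step(x, context, W_in, W_ctx, W_out):
    """One Elman-style SRN step: hidden = sigmoid(W_in x + W_ctx context).
    The hidden vector is returned so the caller can copy it into the
    context units for the next time step."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [
        sigmoid(sum(w * xi for w, xi in zip(row_in, x)) +
                sum(w * ci for w, ci in zip(row_ctx, context)))
        for row_in, row_ctx in zip(W_in, W_ctx)
    ]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_out]
    return output, hidden

# 3 inputs, 5 hidden/context units, 6 outputs (1 output + 5 hint units),
# matching the architecture sizes reported in Section 2; uniform dummy weights.
W_in = [[0.1] * 3 for _ in range(5)]
W_ctx = [[0.1] * 5 for _ in range(5)]
W_out = [[0.1] * 5 for _ in range(6)]
out, h = srn_step([0, 1, 1], [0.0] * 5, W_in, W_ctx, W_out)
```

In training, the hint targets constrain what `hidden` must encode; at test time the hint outputs are simply ignored.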
2 SIMULATIONS
Three different boolean functions and their arguments were specified as sub-sequences of patterns over the inputs to an SRN. The network was required to apply the function specified by the first pattern in each sequence to each of the subsequent arguments in turn. The functions provided were boolean functions of the current input and previous output, AND, OR and XOR (i.e., exclusive-or), and the arguments were arbitrary-length strings of 0's and 1's. The context units were not reset between sub-sequences. An SRN with 3 input, 5 hidden, 5 context, 1 output and 5 hint units was trained using backpropagation with a momentum term. The 5 hint units at the output layer provided information about the boolean functions during training (via the backpropagation of errors), but not during testing. The network was trained on three data sets each containing 700 (ten times the number of weights in the network) randomly generated patterns, forming function and argument sequences of average length 0.5, 2 and 4 arguments respectively. The network was trained for one thousand iterations on each training set.
2.1 RESULTS AND GENERALISATION
After training, the network correctly computed every pattern in the three training sets (using a closest-match criterion for scoring the output) and also in a test set of sequences generated using the same statistics. Generalisation test data, consisting of all possible sequences composed of each function and eight arguments, and long sequences each of 50 arguments, also produced the correct output for every pattern in every sequence.
Figure 1a. The hidden unit patterns for the training data, projected onto the first two canonical components.
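A minimal sketch (not the authors' code) of how target outputs for this task can be generated: after each argument, the target is the chosen boolean function of the previous output and the current input. Initializing the running output to each function's identity element is an assumption made here so that the first argument passes through unchanged:

```python
# Three-function task targets. The identity-element initialization is an
# assumption for this sketch, not a detail stated in the paper.
FUNCS = {
    "AND": lambda prev, x: prev & x,
    "OR":  lambda prev, x: prev | x,
    "XOR": lambda prev, x: prev ^ x,
}
IDENTITY = {"AND": 1, "OR": 0, "XOR": 0}

def targets(func, args):
    """Target outputs for one sub-sequence {F, arg1, ..., argn}."""
    out, seq = IDENTITY[func], []
    for x in args:
        out = FUNCS[func](out, x)
        seq.append(out)
    return seq
```

For example, `targets("XOR", [1, 1, 0])` yields `[1, 0, 0]`: each output feeds back as the left operand of the next step.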
These components separate the patterns into 3 distinct regions corresponding to the initial pattern (AND, OR or XOR) in each sequence.
1b. The first and third canonical components further separate the hidden unit patterns into 6 regions, which have been marked in the diagrams above by the corresponding output classes A1, A0, R1, R0, X1 and X0. These regions are effectively the computational states of the network.
Figure 2. Finite state machine to compute the three-function task.
Another way of considering sub-sequences in the input stream is to describe all the inputs as functions, not over the other inputs, as above, but as functions of the state (for which we use the term operators). Using this terminology, a sub-sequence is a composition of operators which act on the current state, S(t) = argt ∘ ... ∘ arg2 ∘ arg1(S(0)), where (f ∘ g)(x) = f(g(x)), and S(0) is the initial state of the network. A consequence of describing the input patterns as operators is that even the 0 and 1 data bits can be seen as operators that transform the internal state (see Box 1).
Figure 3. State transitions caused by each input pattern, projected onto the first and third canonical components of the hidden unit patterns (generated by the training data as in Figure 1). 3a-c. Transitions caused by the AND, OR and XOR input patterns respectively. From every point in the hidden unit space, the input patterns for AND, OR and XOR transform the hidden units to values corresponding to a point in the regions marked A1, R0 and X0 respectively. 3d-e. Transitions caused by the 0 and 1 input patterns respectively. The 0 and 1 inputs are context-sensitive operators.
The 0 input causes changes in the hidden unit patterns corresponding to transitions from the state A1 to A0, but does not cause transitions from the other 5 regions. Conversely, a 1 input does not cause the hidden unit patterns to change from the regions A1, A0 or R1, but causes transitions from the regions R0, X1 and X0.
Box 1. Operators for the 5 input patterns. The operation performed by each input pattern is described in terms of the effect it has on information encoded by the hidden unit patterns. The first and second columns specify the input operators and their corresponding input patterns. The third column specifies the effect that each input in a sub-sequence has on information encoded in the state, represented as cf, for current function, and x(t) for the last output.
Input operator  Pattern on the input units  Effect on information encoded in the state
AND             011                         cf <- AND
OR              110                         cf <- OR
XOR             101                         cf <- XOR
1               111                         x(t) <- x(t-1) if cf = AND; 1 if cf = OR; NOT(x(t-1)) if cf = XOR
0               000                         x(t) <- 0 if cf = AND; x(t-1) if cf = OR; x(t-1) if cf = XOR
For each input pattern, we plotted all the transitions in hidden unit space resulting from that input, projected onto the canonical components used in Figure 1. Figures 3a to 3e show transitions for each of the five input operators. For the three "function" inputs, OR, AND, and XOR, the effect is to collapse the hidden unit patterns to a single region, a particular state. These are relatively context-insensitive operations. For the two "argument" inputs, 0 and 1, the effect is sensitive to the context in which the input occurs (i.e., the previous state of the hidden units). A similar analysis of the states themselves focuses on the hidden unit patterns and the information that they must encode in order to compute the three-function task.
At each timestep the weights in the network construct a pattern of activation over the hidden units that replaces the structured arguments of a complex function of several arguments by a simpler function of one less argument. This can be represented as follows: G(F, arg1, ..., argn) -> F(arg1, ..., argn) -> F_arg1(arg2, ..., argn) -> F_arg1arg2(arg3, ..., argn). This process of replacing structured arguments by a corresponding sequence of simple ones is known as currying the input sequence (for a review of curried functions, see Bird and Wadler, 1988). Using this terminology, the pattern of activation in the hidden units is a curried function of the entire input sequence up to that time step. The network combines the previous hidden unit patterns (preserved in the context units) with the current input patterns to compute the next curried function in the sequence. Since there are 6 states required by the network, there are 6 classes of equivalent curried functions. Figure 4 shows the transition diagrams for each of the 6 equivalence classes of curried functions from the same simulation shown in Figures 1 and 3.
Figure 4. State transitions for each hidden unit pattern, grouped into classes of curried functions, projected onto the first and third canonical components. 4a-f. Transitions from A1, R1, X1, A0, R0 and X0 respectively. Each pattern of activation corresponds to a curried function of the input sequence up to that item in the sequence.
To test how often the network finds a good solution, five simulations were completed with the above parameters, all started with different sets of random weights, and randomly generated training patterns.
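The currying idea can be illustrated outside the network setting. This small sketch (names invented for illustration, not from the paper) consumes one argument at a time, each step yielding a function of one less argument:

```python
def curry_step(f, first_arg):
    """Fix the first argument of f; return a function of the rest."""
    return lambda *rest: f(first_arg, *rest)

def xor3(a, b, c):
    return a ^ b ^ c

# Mirrors G(F, arg1, arg2, arg3) -> F_arg1(arg2, arg3) -> F_arg1arg2(arg3):
g1 = curry_step(xor3, 1)   # a two-argument function
g2 = curry_step(g1, 1)     # a one-argument function
result = g2(0)             # 1 ^ 1 ^ 0 == 0
```

In the network, the hidden unit pattern plays the role of `g1` or `g2`: it is the partially applied function awaiting the rest of the sequence.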
Three simulations learnt the training set perfectly (the other two simulations appeared to be converging, but slowly: worst case error less than 1%). On the test data, the results were also good (worst case 7% error).
2.2 ANALYSIS
The hidden unit patterns generated by the training data in the simulations described above were analysed using canonical discriminant analysis (CDA; Kotz & Johnson, 1982). Six output classes were specified, corresponding to one class for each output for each function. The output classes were used to compute the first three canonical components of the hidden unit patterns (which are 5-dimensional patterns corresponding to the 5 hidden units). The graph of the first two canonical components (see Figure 1a) shows the hidden unit patterns separated into three tight clusters, corresponding to the sequence type (OR, AND and XOR). The graph of the first and third canonical components (see Figure 1b) reveals more of the structure within each class. The six classes of hidden unit patterns are spread across six distinct regions (these correspond to the 6 states of the minimal finite state machine, as shown in Figure 2). The first canonical component separates the hidden unit patterns into sequence type (OR, AND, or XOR, separated across the page). Within each region, the third canonical component separates the outputs into 0's and 1's (separated down the page). Cluster analysis followed by CDA on the clusters gave similar results.
3 DISCUSSION
In a network that is dedicated to computing a boolean function such as XOR, it seems obvious that the information for computing the function is in the weights. The simulations described in this paper show that this intuition does not necessarily generalise to other networks. The three-function task requires that the network use the first input in a sequence to select a function which is then applied to subsequent arguments.
In general, for any given network, the function that is computed over a given sub-sequence will be specified by the interaction between the weights and the activation pattern. The function computed by the networks in these simulations can be described in terms of the output of the global function, O(t) = G(arg1, ..., argt), computed by the weights of the network, which is a function of the whole input sequence. An equivalent description can be given in terms of sub-sequences of the input stream, which specify a boolean function over subsequent arguments, G(F, arg1, ..., argt) = F(arg1, ..., argt). Both these levels of description follow the traditional approach of separating functions and data, where the patterns of activity can be described as either one or the other. It appears to us that descriptions based on operators and curried functions provide a promising approach for the integration of representation and process within recurrent networks. For example, in the simulations described by Elman (1990), words can be understood as denoting operators which act on the state of the recurrent network, rather than denoting objects as they do in traditional linguistic theory. The idea of currying can also be applied to feedback from the output layer, for example in the networks developed by Jordan (1986), or to the product units used by Giles et al. (1990).
Acknowledgements
We thank Jeff Elman, Ian Hayes, Julie Stewart and Bill Wilson for many discussions on these ideas, and Simon Dennis and Steven Phillips for developing the canonical discriminant program. This work was supported by grants from the Australian Research Council and A. Bloesch was supported by an Australian Postgraduate Research Award.
References
Bird, R., and Wadler, P. (1988). Introduction to Functional Programming. Prentice Hall, NY.
Cleeremans, A., Servan-Schreiber, D., and McClelland, J.L. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1, 372-381.
Elman, J. (1989). Representation and structure in connectionist models. UCSD CRL Technical Report 8903, August 1989.
Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., and Chen, D. (1990). Higher Order Recurrent Networks. In D.S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann, San Mateo, CA, 380-387.
Jordan, M. I. (1986). Serial order: A parallel distributed processing approach. Institute for Cognitive Science, Technical Report 8604. UCSD.
Kotz, S., and Johnson, N.L. (1982). Encyclopedia of Statistical Sciences. John Wiley and Sons, NY.
Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986). Learning internal representations by error propagation. In D.E. Rumelhart & J.L. McClelland (eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 318-362). Cambridge, MA: MIT Press.
Williams, R. J., and Zipser, D. (1988). A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Institute for Cognitive Science, Technical Report 8805. UCSD.
1991
The Efficient Learning of Multiple Task Sequences
Satinder P. Singh
Department of Computer Science, University of Massachusetts, Amherst, MA 01003
Abstract
I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs.
1 INTRODUCTION
Most applications of domain-independent learning algorithms have focussed on learning single tasks. Building more sophisticated learning agents that operate in complex environments will require handling multiple tasks/goals (Singh, 1992). Research effort on the scaling problem has concentrated on discovering faster learning algorithms, and while that will certainly help, techniques that allow transfer of learning across tasks will be indispensable for building autonomous learning agents that have to learn to solve multiple tasks. In this paper I consider a learning agent that interacts with an external, finite-state, discrete-time, stochastic dynamical environment and faces multiple sequences of Markovian decision tasks (MDTs). Each MDT requires the agent to execute a sequence of actions to control the environment, either to bring it to a desired state or to traverse a desired state trajectory over time.
Let S be the finite set of states and A be the finite set of actions available to the agent.¹ At each time step t, the agent observes the system's current state x_t ∈ S and executes action a_t ∈ A. As a result, the agent receives a payoff with expected value R(x_t, a_t) ∈ R, and the system makes a transition to state x_{t+1} ∈ S with probability P_{x_t x_{t+1}}(a_t). The agent's goal is to learn an optimal closed loop control policy, i.e., a function assigning actions to states, that maximizes the agent's objective. The objective used in this paper is J = Σ_{t=0}^∞ γ^t R(x_t, a_t), i.e., the sum of the payoffs over an infinite horizon. The discount factor, 0 ≤ γ ≤ 1, allows future payoff to be weighted less than more immediate payoff. Throughout this paper, I will assume that the learning agent does not have access to a model of the environment. Reinforcement learning algorithms such as Sutton's (1988) temporal difference algorithm and Watkins's (1989) Q-learning algorithm can be used to learn to solve single MDTs (also see Barto et al., 1991). I consider compositionally-structured MDTs because they allow the possibility of sharing knowledge across the many tasks that have common subtasks. In general, there may be n elemental MDTs labeled T1, T2, ..., Tn. Elemental MDTs cannot be decomposed into simpler subtasks. Composite MDTs, labeled C1, C2, ..., Cm, are produced by temporally concatenating a number of elemental MDTs. For example, Cj = [T(j,1) T(j,2) ... T(j,k)] is composite task j made up of k elemental tasks that have to be performed in the order listed. For 1 ≤ i ≤ k, T(j,i) ∈ {T1, T2, ..., Tn} is the i-th elemental task in the list for task Cj. The sequence of elemental tasks in a composite task will be referred to as the decomposition of the composite task; the decomposition is assumed to be unknown to the learning agent. Compositional learning involves solving a composite task by learning to compose the solutions of the elemental tasks in its decomposition.
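The discounted objective J can be evaluated for any finite payoff sequence; an infinite-horizon return is approximated by truncation. A small illustration (the payoff values are invented):

```python
def discounted_return(payoffs, gamma):
    """J = sum_t gamma^t * payoff_t over a (truncated) payoff sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(payoffs))

# With gamma = 0.5 each successive payoff counts half as much.
j = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25
```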
It is to be emphasized that given the short-term, evaluative nature of the payoff from the environment (often the agent gets informative payoff only at the completion of the composite task), the task of discovering the decomposition of a composite task is formidable. In this paper I propose a compositional learning scheme in which separate modules learn to solve the elemental tasks, and a task-sensitive gating module solves composite tasks by learning to compose the appropriate elemental modules over time.
2 ELEMENTAL AND COMPOSITE TASKS
All elemental tasks are MDTs that share the same state set S, action set A, and have the same environment dynamics. The payoff function for each elemental task Ti, 1 ≤ i ≤ n, is Ri(x, a) = Σ_{y∈S} P_{xy}(a) ri(y) − c(x, a), where ri(y) is a positive reward associated with the state y resulting from executing action a in state x for task Ti, and c(x, a) is the positive cost of executing action a in state x. I assume that ri(x) = 0 if x is not the desired final state for Ti. Thus, the elemental tasks share the same cost function but have their own reward functions. A composite task is not itself an MDT because the payoff is a function of both the state and the current elemental task, instead of the state alone. Formally, the new state set² for a composite task, S', is formed by augmenting the elements of set S by n bits, one for each elemental task. For each x' ∈ S', the projected state x ∈ S is defined as the state obtained by removing the augmenting bits from x'. The environment dynamics and cost function, c, for a composite task are defined by assigning to each x' ∈ S' and a ∈ A the transition probabilities and cost assigned to the projected state x ∈ S and a ∈ A. The reward function for composite task Cj, rj, is defined as follows.
¹The extension to the case where different sets of actions are available in different states is straightforward.
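The elemental payoff definition combines the expected next-state reward with the action cost; a small sketch with invented numbers:

```python
def elemental_payoff(P_xy, r, c):
    """R_i(x, a) = sum_y P_xy(a) r_i(y) - c(x, a), for fixed x and a.
    P_xy maps next states to transition probabilities; r maps states
    to rewards (nonzero only at the task's desired final state)."""
    return sum(p * r[y] for y, p in P_xy.items()) - c

# Two possible next states; reward 1.0 only at the goal state "g".
R = elemental_payoff({"g": 0.8, "s": 0.2}, r={"g": 1.0, "s": 0.0}, c=0.05)
```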
rj(x') ≥ 0 if the following are all true: i) the projected state x is the final state for some elemental task in the decomposition of Cj, say task Ti; ii) the augmenting bits of x' corresponding to elemental tasks appearing before and including subtask Ti in the decomposition of Cj are one; and iii) the rest of the augmenting bits are zero; rj(x') = 0 everywhere else.
3 COMPOSITIONAL Q-LEARNING
Following Watkins (1989), I define the Q-value, Q(x, a), for x ∈ S and a ∈ A, as the expected return on taking action a in state x under the condition that an optimal policy is followed thereafter. Given the Q-values, a greedy policy that in each state selects an action with the highest associated Q-value is optimal. Q-learning works as follows. On executing action a in state x at time t, the resulting payoff and next state are used to update the estimate of the Q-value at time t, Q_t(x, a):

Q_{t+1}(x, a) = (1 − α_t) Q_t(x, a) + α_t [R(x, a) + γ max_{a'∈A} Q_t(y, a')],     (1)

where y is the state at time t+1, and α_t is the value of a positive learning rate parameter at time t. Watkins and Dayan (1992) prove that under certain conditions on the sequence {α_t}, if every state-action pair is updated infinitely often using Equation 1, Q_t converges to the true Q-values asymptotically. Compositional Q-learning (CQ-learning) is a method for constructing the Q-values of a composite task from the Q-values of the elemental tasks in its decomposition. Let Q_{Ti}(x, a) be the Q-value of (x, a), x ∈ S and a ∈ A, for elemental task Ti, and let Q^{Cj}_{Ti}(x', a) be the Q-value of (x', a), for x' ∈ S' and a ∈ A, for task Ti when performed as part of the composite task Cj = [T(j,1) ... T(j,k)]. Assume Ti = T(j, l). Note that the superscript on Q refers to the task and the subscript refers to the elemental task currently being performed. The absence of a superscript implies that the task is elemental.
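Equation 1 is the standard tabular Q-learning update; a minimal sketch (the two-state environment and parameter values are invented for illustration):

```python
def q_update(Q, x, a, reward, y, actions, alpha, gamma):
    """One application of Equation 1 on a table Q[(state, action)]."""
    best_next = max(Q[(y, b)] for b in actions)
    Q[(x, a)] = (1 - alpha) * Q[(x, a)] + alpha * (reward + gamma * best_next)

# Tiny chain: action 1 in state 0 pays 1.0 and moves to absorbing state 1.
actions = [0, 1]
Q = {(s, b): 0.0 for s in (0, 1) for b in actions}
for _ in range(50):
    q_update(Q, x=0, a=1, reward=1.0, y=1, actions=actions, alpha=0.5, gamma=0.9)
# Q[(0, 1)] approaches 1.0, since state 1's values remain zero here.
```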
Consider a set of undiscounted (γ = 1) MDTs that have compositional structure and satisfy the following conditions:
(A1) Each elemental task has a single desired final state.
(A2) For all elemental and composite tasks, the expected value of undiscounted return for an optimal policy is bounded both from above and below for all states.
(A3) The cost associated with each state-action pair is independent of the task being accomplished.
(A4) For each elemental task Ti, the reward function ri is zero for all states except the desired final state for that task. For each composite task Cj, the reward function rj is zero for all states except possibly the final states of the elemental tasks in its decomposition (Section 2).
Then, for any elemental task Ti and for all composite tasks Cj containing elemental task Ti, the following holds:

Q^{Cj}_{Ti}(x', a) = Q_{Ti}(x, a) + K(Cj, T(j, l)),     (2)

for all x' ∈ S' and a ∈ A, where x ∈ S is the projected state, and K(Cj, T(j, l)) is a function of the composite task Cj and subtask T(j, l), where Ti = T(j, l). Note that K(Cj, T(j, l)) is independent of the state and the action. Thus, given solutions of the elemental tasks, learning the solution of a composite task with n elemental tasks requires learning only the values of the function K for the n different subtasks. A proof of Equation 2 is given in Singh (1992).
Figure 1: The CQ-Learning Architecture (CQ-L). This figure is adapted from Jacobs et al. (1991). See text for details.
Equation 2 is based on the assumption that the decomposition of the composite tasks is known. In the next Section, I present a modular architecture and learning algorithm that simultaneously discovers the decomposition of a composite task and implements Equation 2.
²The theory developed in this paper does not depend on the particular extension of S chosen, as long as the appropriate connection between the new states and the elements of S can be made.
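A toy numerical sketch of Equation 2 (the Q-values below are invented): composite-task Q-values are the elemental Q-values shifted by a state-independent offset K, so the greedy action is unchanged:

```python
def composite_q(q_elemental, k):
    """Q^{Cj}_{Ti}(x', a) = Q_{Ti}(x, a) + K(Cj, Ti), per Equation 2.
    The augmenting bits are ignored here because K does not depend on
    the state or the action."""
    return {xa: q + k for xa, q in q_elemental.items()}

q_t1 = {("s0", "left"): 0.2, ("s0", "right"): 0.7}
q_c1_t1 = composite_q(q_t1, k=-0.4)

best_elem = max(q_t1, key=q_t1.get)
best_comp = max(q_c1_t1, key=q_c1_t1.get)
# Both pick ("s0", "right"): adding a constant preserves the greedy policy.
```

This is why only the n values of K need to be learned on top of the elemental solutions.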
4 CQ-L: CQ-LEARNING ARCHITECTURE
Jacobs (1991) developed a modular connectionist architecture that performs task decomposition. Jacobs's gating architecture consists of several expert networks and a gating network that has an output for each expert network. The architecture has been used to learn multiple non-sequential tasks within the supervised learning
Table 1: Tasks. Tasks T1, T2, and T3 are elemental tasks; tasks C1, C2, and C3 are composite tasks. The last column describes the compositional structure of the tasks.
Label  Command  Description                  Decomposition
T1     000001   Visit A                      T1
T2     000010   Visit B                      T2
T3     000100   Visit C                      T3
C1     001000   Visit A and then C           T1 T3
C2     010000   Visit B and then C           T2 T3
C3     100000   Visit A, then B and then C   T1 T2 T3
paradigm. I extend the modular network architecture to a CQ-Learning architecture (Figure 1), called CQ-L, that can learn multiple compositionally-structured sequential tasks even when training information required for supervised learning is not available. CQ-L combines CQ-learning and the gating architecture to achieve transfer of learning by "sharing" the solutions of elemental tasks across multiple composite tasks. Only a very brief description of CQ-L is provided in this paper; details are given in Singh (1992). In CQ-L the expert networks are Q-learning networks that learn to approximate the Q-values for the elemental tasks. The Q-networks receive as input both the current state and the current action. The gating and bias networks (Figure 1) receive as input the augmenting bits and the task command used to encode the current task being performed by the architecture. The stochastic switch in Figure 1 selects one Q-network at each time step. CQ-L's output, Q, is the output of the selected Q-network added to the output of the bias network.
The learning rules used to train the network perform gradient ascent in the log likelihood, L(t), of generating the estimate of the desired Q-value at time t, denoted D(t), and are given below:

q_j(t) <- q_j(t) + α_Q ∂log L(t)/∂q_j(t),
s_i(t) <- s_i(t) + α_g ∂log L(t)/∂s_i(t), and
b(t) <- b(t) + α_b (D(t) − Q(t)),

where q_j is the output of the j-th Q-network, s_i is the i-th output of the gating network, b is the output of the bias network, and α_Q, α_b and α_g are learning rate parameters. The backpropagation algorithm (e.g., Rumelhart et al., 1986) was used to update the weights in the networks. See Singh (1992) for details.
5 NAVIGATION TASK
To illustrate the utility of CQ-L, I use a navigational test bed similar to the one used by Bachrach (1991) that simulates a planar robot that can translate simultaneously and independently in both x and y directions. It can move one radius in any direction on each time step. The robot has 8 distance sensors and 8 gray-scale sensors evenly placed around its perimeter. These 16 values constitute the state vector.
Figure 2: Navigation Testbed. See text for details.
Figure 2 shows a display created by the navigation simulator. The bottom portion of the figure shows the robot's environment as seen from above. The upper panel shows the robot's state vector. Three different goal locations, A, B, and C, are marked on the test bed. The set of tasks on which the robot is trained are shown in Table 1. The elemental tasks require the robot to go to the given goal location from a random starting location in minimum time. The composite tasks require the robot to go to a goal location via a designated sequence of subgoal locations. Task commands were represented by standard unit basis vectors (Table 1), and thus the architecture could not "parse" the task command to determine the decomposition of a composite task. Each Q-network was a feedforward connectionist network with a single hidden layer containing 128 radial basis units.
The bias and gating networks were also feedforward nets with a single hidden layer containing sigmoid units. For all x in S U S' and a in A, c(x, a) = -0.05. r_i(x) = 1.0 only if x is the desired final state of elemental task T_i, or if x in S' is the final state of composite task C_i; r_i(x) = 0.0 in all other states. Thus, for composite tasks no intermediate payoff for successful completion of subtasks was provided.

6 SIMULATION RESULTS

In the simulation described below, the performance of CQ-L is compared to the performance of a "one-for-one" architecture that implements the "learn-each-task-separately" strategy. The one-for-one architecture has a pre-assigned distinct network for each task, which prevents transfer of learning. Each network of the one-for-one architecture was provided with the augmented state.

Figure 3: Learning Curves for Multiple Tasks. The panels plot the number of actions per trial against the trial number for tasks A, [AB], and [ABC].

Both CQ-L and the one-for-one architecture were separately trained on the six tasks T1, T2, T3, C1, C2, and C3 until they could perform the six tasks optimally. CQ-L contained three Q-networks, and the one-for-one architecture contained six Q-networks. For each trial, the starting state of the robot and the task identity were chosen randomly. A trial ended when the robot reached the desired final state or when there was a time-out. The time-out period was 100 for the elemental tasks, 200 for C1 and C2, and 500 for task C3. The graphs in Figure 3 show the number of actions executed per trial. Separate statistics were accumulated for each task. The rightmost graph shows the performance of the two architectures on elemental task T1.
Not surprisingly, the one-for-one architecture performs better because it does not have the overhead of figuring out which Q-network to train for task T1. The middle graph shows the performance on task C1 and shows that the CQ-L architecture is able to perform better than the one-for-one architecture for a composite task containing just two elemental tasks. The leftmost graph shows the results for composite task C3 and illustrates the main point of this paper. The one-for-one architecture is unable to learn the task; in fact, it is unable to perform the task more than a couple of times, due to the low probability of randomly performing the correct task sequence. This simulation shows that CQ-L is able to learn the decomposition of a composite task and that compositional learning, due to transfer of training across tasks, can be faster than learning each composite task separately. More importantly, CQ-L is able to learn to solve composite tasks that cannot be solved using traditional schemes.

7 DISCUSSION

Learning to solve MDTs with large state sets is difficult due to the sparseness of the evaluative information and the low probability that a randomly selected sequence of actions will be optimal. Learning the long sequences of actions required to solve such tasks can be accelerated considerably if the agent has prior knowledge of useful subsequences. Such subsequences can be learned through experience in learning to solve other tasks. In this paper, I define a class of MDTs, called composite MDTs, that are structured as the temporal concatenation of simpler MDTs, called elemental MDTs. I present CQ-L, an architecture that combines the Q-learning algorithm of Watkins (1989) and the modular architecture of Jacobs et al. (1991) to achieve transfer of learning by sharing the solutions of elemental tasks across multiple composite tasks.
Given a set of composite and elemental MDTs, the sequence in which the learning agent receives training experiences on the different tasks determines the relative advantage of CQ-L over other architectures that learn the tasks separately. The simulation reported in Section 6 demonstrates that it is possible to train CQ-L on intermixed trials of elemental and composite tasks. Nevertheless, the ability of CQ-L to scale well to complex sets of tasks will depend on the choice of the training sequence.

Acknowledgements

This work was supported by the Air Force Office of Scientific Research, Bolling AFB, under Grant AFOSR-89-0526 and by the National Science Foundation under Grant ECS-8912623. I am very grateful to Andrew Barto for his extensive help in formulating these ideas and preparing this paper.

References

J. R. Bachrach. (1991) A connectionist learning control architecture for navigation. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 457-463, San Mateo, CA. Morgan Kaufmann.

A. G. Barto, S. J. Bradtke, and S. P. Singh. (1991) Real-time learning and control using asynchronous dynamic programming. Technical Report 91-57, University of Massachusetts, Amherst, MA. Submitted to AI Journal.

R. A. Jacobs. (1990) Task decomposition through competition in a modular connectionist architecture. PhD thesis, COINS Dept., Univ. of Massachusetts, Amherst, Mass., U.S.A.

R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. (1991) Adaptive mixtures of local experts. Neural Computation, 3(1).

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. (1986) Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations. Bradford Books/MIT Press, Cambridge, MA.

S. P. Singh. (1992) Transfer of learning by composing solutions for elemental sequential tasks.
Machine Learning.

R. S. Sutton. (1988) Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44.

C. J. C. H. Watkins. (1989) Learning from Delayed Rewards. PhD thesis, Cambridge Univ., Cambridge, England.

C. J. C. H. Watkins and P. Dayan. (1992) Q-learning. Machine Learning.
Benchmarking Feed-Forward Neural Networks: Models and Measures

Leonard G. C. Hamey
Computing Discipline
Macquarie University
NSW 2109 AUSTRALIA

Abstract

Existing metrics for the learning performance of feed-forward neural networks do not provide a satisfactory basis for comparison because the choice of the training epoch limit can determine the results of the comparison. I propose new metrics which have the desirable property of being independent of the training epoch limit. The efficiency measures the yield of correct networks in proportion to the training effort expended. The optimal epoch limit provides the greatest efficiency. The learning performance is modelled statistically, and asymptotic performance is estimated. Implementation details may be found in (Hamey, 1992).

1 Introduction

The empirical comparison of neural network training algorithms is of great value in the development of improved techniques and in algorithm selection for problem solving. In view of the great sensitivity of learning times to the random starting weights (Kolen and Pollack, 1990), individual trial times such as reported in (Rumelhart, et al., 1986) are almost useless as measures of learning performance. Benchmarking experiments normally involve many training trials (typically N = 25 or 100, although Tesauro and Janssens (1988) use N = 10000). For each trial i, the training time t_i to obtain a correct network is recorded. Trials which are not successful within a limit of T epochs are considered failures; they are recorded as t_i = T. The mean successful training time \bar{t}_T is defined as

\bar{t}_T = \frac{1}{S} \sum_{t_i < T} t_i,

where S is the number of successful trials. The median successful time \tilde{t}_T is the epoch at which S/2 trials are successes. It is common (e.g. Jacobs, 1987; Kruschke and Movellan, 1991; Veitch and Holmes, 1991) to report the mean and standard deviation along with the success rate A_T = S/N, but the results are strongly dependent on the choice of T, as shown by Fahlman (1988).
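The censored bookkeeping above (failures recorded as t_i = T and excluded from the mean and median) can be sketched directly; the function name and sample data are illustrative.

```python
def benchmark_stats(times, T):
    """Summarise N training trials run with epoch limit T.

    Trials recorded as t_i = T are failures.  Returns the success
    rate A_T = S/N, the mean successful training time, and the
    median successful time (the epoch by which S/2 trials have
    succeeded).
    """
    succ = sorted(t for t in times if t < T)
    S, N = len(succ), len(times)
    if S == 0:
        return 0.0, float('inf'), float('inf')
    mean = sum(succ) / S
    median = succ[(S - 1) // 2]  # epoch at which S/2 trials succeeded
    return S / N, mean, median
```

Note that the mean is taken over successful trials only, which is exactly why it depends so strongly on the choice of T: raising T converts slow failures into slow successes and drags the mean upward.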
The problem is to characterise training performance independent of T. Tesauro and Janssens (1988) use the harmonic mean t_H as the average learning rate:

t_H = \frac{N}{\sum_{i=1}^{N} 1/t_i}.

This minimizes the contribution of large learning times, so changes in T will have little effect on t_H. However, t_H is not an unbiased estimator of the mean, and is strongly influenced by the shortest learning times, so that training algorithms which produce greater variation in the learning times are preferred by this measure. Fahlman (1988) allows the learning program to restart an unsuccessful trial, incorporating the failed training time in the total time for that trial. This method is realistic, since a failed trial would be restarted in a problem-solving situation. However, Fahlman's averages are still highly dependent upon the epoch limit T, which is chosen beforehand as the restart point.

The present paper proposes new performance measures for feed-forward neural networks. In Section 4, the optimal epoch limit T_E is defined. T_E is the optimal restart point for Fahlman's averages, and the efficiency e is the scaled reciprocal of the optimised Fahlman average. In Sections 5 and 6, the asymptotic learning behaviour is modelled and the mean and median are corrected for the truncation effect of the epoch limit T. Some benchmark results are presented in Section 7 and compared with previously published results.

2 Performance Measurement

For benchmark results to be useful, the parameters and techniques of measurement and training must be fully specified. Training parameters include the network structure, the learning rate η, the momentum term α and the range of the initial random weights [-r, r]. For problems with binary output, the correctness of the network response is defined by a threshold T_c: responses less than T_c are considered equivalent to 0, while responses greater than 1 - T_c are considered equivalent to 1.
For problems with analog output, the network response is considered correct if it lies within T_c of the desired value. In the present paper, only binary problems are considered and the value T_c = 0.4 is used, as in (Fahlman, 1988).

3 The Training Graph

The training graph displays the proportion of correct networks as a function of the epoch. Typically, the tail of the graph resembles a decay curve.

Figure 1: Typical Training Graphs: Back-Propagation (η = 0.5, α = 0) and Descending Epsilon (η = 0.5, α = 0) on Exclusive-Or (2-2-1 structure, N = 1000, T = 10000).

It is evident in Figure 1 that the success rate for either algorithm may be significantly increased if the epoch limit were raised beyond 10000. The shape of the training graph varies depending upon the problem and the algorithm employed to solve it. Descending epsilon (Yu and Simmons, 1990) solves a higher proportion of the exclusive-or trials with T = 10000, but back-propagation would have a higher success rate if T = 3000. This exemplifies the dramatic effect that the choice of T can have on the comparison of training algorithms. Two questions naturally arise from this discussion: "What is the optimal value for T?" and "What happens as T approaches infinity?". These questions will be addressed in the following sections.

4 Efficiency and Optimal T

Adjusting the epoch limit T in a learning algorithm affects both the yield of correct networks and the effort expended on unsuccessful trials. To capture the total yield for effort ratio, we define the efficiency E(t) of epoch limit t as the yield of correct networks per 1000 epochs of total training effort expended, with failed trials charged their full t epochs. The efficiency graph plots the efficiency against the epoch limit. The efficiency graph for back-propagation (Figure 2) exhibits a strong peak, with the efficiency reducing relatively quickly if the epoch limit is too large.
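A sketch of this computation follows. The 1000-epoch scaling is inferred from the relation t_E = 1000/e given later in the paper, and restricting the candidate limits to the observed success times is an implementation shortcut, not part of the paper's definition.

```python
def efficiency(times, t):
    """E(t): correct networks yielded per 1000 epochs of training
    effort when the epoch limit is t.  Failed trials (t_i > t) are
    charged t epochs each."""
    successes = sum(1 for ti in times if ti <= t)
    effort = sum(min(ti, t) for ti in times)
    return 1000.0 * successes / effort

def optimal_epoch_limit(times):
    """T_E: the epoch limit with peak efficiency.  Efficiency only
    changes at observed success times, so those suffice as candidates."""
    return max(sorted(set(times)), key=lambda t: efficiency(times, t))
```

For trial times [10, 20, 1000], raising the limit from 20 to 1000 buys one extra success at the cost of 980 extra epochs of effort, so the efficiency peaks at T_E = 20; this is the trade-off the optimal epoch limit captures.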
In contrast, the efficiency graph for descending epsilon exhibits an extremely broad peak with only a slight drop as the epoch limit is increased. This occurs because the asymptotic success rate (A in Section 5) is close to 1.0; in such cases, the efficiency remains high over a wide range of epoch limits and near-optimal performance can be more easily achieved for novel problems.

Figure 2: Efficiency Graphs: Back-Propagation (η = 0.3, α = 0.9) and Descending Epsilon (η = 0.3, α = 0.9) on Exclusive-Or (2-2-1 structure, N = 1000, T = 10000).

The efficiency benchmark parameters are derived from the graph as shown in Figure 3. The epoch limit T_E at which the peak efficiency occurs is the optimal epoch limit. The peak efficiency e is a good performance measure, independent of T when T > T_E. Unlike t_H, it is not biased by the shortest learning times. The peak efficiency is the scaled reciprocal of Fahlman's (1988) average for optimal T, and incorporates the failed trials as a performance penalty. The optimisation of training parameters is suggested by Tesauro and Janssens (1988), but they do not optimise T. For comparison with other performance measures, the unscaled optimised Fahlman average t_E = 1000/e may be used instead of e. The prediction of the optimal epoch limit T_E for novel problems would help reduce wasted computation. The range parameters T_E1 and T_E2 show how precisely T must be set to obtain efficiency within 50% of optimal; if two algorithms are otherwise similar in performance, the one with a wider range (T_E1, T_E2) would be preferred for novel problems.

5 Asymptotic Performance: T approaching infinity

In the training graph, the proportion of trials that ultimately learn correctly can be estimated by the asymptote which the graph is approaching. I statistically model the tail of the graph by the distribution F(t) = 1 - [a(t - T_0) + 1]^{-k} and thus estimate the asymptotic success rate A. Figure 4 illustrates the model parameters.
Since the early portions of the graph are dominated by initialisation effects, T_0, the point where the model commences to fit, is determined by applying the Kolmogorov-Smirnov goodness-of-fit test (Stephens, 1974) for all possible values of T_0.

Figure 3: Efficiency Parameters in Relation to the Efficiency Graph.

The maximum likelihood estimates of a and k are found by using the simplex algorithm (Caceci and Cacheris, 1984) to directly maximise the following log-likelihood equation:

\log L = M [\ln a + \ln k - \ln(1 - (a(T - T_0) + 1)^{-k})] - (k + 1) \sum_{T_0 < t_i < T} \ln(a(t_i - T_0) + 1),

where M is the number of trials recording times in the range (T_0, T). The asymptotic success rate A is then obtained from the fitted model. In practice, the statistical model I have chosen is not suitable for all learning algorithms. For example, in preliminary investigations I have been unable to reliably model the descending epsilon algorithm (Yu and Simmons, 1990). Further study is needed to develop more widely applicable models.

6 Corrected Measures

The mean \bar{t}_T and the median \tilde{t}_T are based upon only those trials that succeeded in T epochs. The asymptotic learning model predicts additional successes for t > T epochs.

Figure 4: Parameters for the Model of Asymptotic Performance.

Incorporating the predicted successes, the corrected mean \bar{t}_c estimates the mean successful learning time as T approaches infinity. The corrected median \tilde{t}_c is the epoch for which A/2 of the trials are successes. It estimates the median successful learning time as T approaches infinity.

7 Benchmark Results for Back-Propagation

Table 1 presents optimised results for two popular benchmark problems: the 2-2-1 exclusive-or problem (Rumelhart, et al., 1986, page 334), and the 10-5-10 encoder/decoder problem (Fahlman, 1988).
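The tail model of Section 5 and its truncated log-likelihood can be written down directly. The sketch below swaps the paper's simplex search for a coarse grid search, so the grid constants are illustrative only.

```python
import math

def log_likelihood(a, k, times, T0, T):
    """Truncated-tail log likelihood of F(t) = 1 - [a(t - T0) + 1]^-k
    for the M trial times falling strictly inside (T0, T)."""
    obs = [t for t in times if T0 < t < T]
    M = len(obs)
    norm = 1.0 - (a * (T - T0) + 1.0) ** (-k)
    return (M * (math.log(a) + math.log(k) - math.log(norm))
            - (k + 1.0) * sum(math.log(a * (t - T0) + 1.0) for t in obs))

def fit_tail(times, T0, T, grid=(0.001, 0.01, 0.1, 1.0)):
    """Maximum-likelihood (a, k) over a small log-spaced grid; the
    paper instead maximises with the simplex algorithm of Caceci
    and Cacheris (1984)."""
    return max(((a, k) for a in grid for k in grid),
               key=lambda ak: log_likelihood(ak[0], ak[1], times, T0, T))
```

Once a and k are fitted, F(t) extrapolates the proportion of successes beyond the observed limit T, which is what the corrected measures of Section 6 rely on.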
Both problems employ three-layer networks with one hidden layer fully connected to the input and output units. The networks were trained with input and output values of 0 and 1. The weights were updated after each epoch of training, i.e. after each cycle through all the training patterns. The characteristics of the learning for these two problems differ significantly. To accurately benchmark the exclusive-or problem, N = 10000 learning runs were needed to measure e accurate to ±0.3. With T = 200, I searched the combinations of α, η and r. The optimal parameters were then used in a separate run with N = 10000 and T = 2000 to estimate the other benchmark parameters. In contrast, the encoder/decoder problem produced more stable efficiency values, so that N = 100 learning runs produced estimates of e precise to ±0.2. With T = 600, all the learning runs converged. The final benchmark values were determined with N = 1000.

Table 1: Optimised Benchmark Results.

PROBLEM                  r          α           η         e          T_E  T_E1  T_E2  t_E
exclusive-or 2-2-1       1.4±0.2    0.65±0.05   7.0±0.5   17.1±0.3   49   26    235   59
encoder/decoder 10-5-10  1.1±0.2    0.00±0.10   1.7±0.1   8.1±0.2    inf  110   inf   124

PROBLEM          a    k    T_0  γ     A     \bar{t}_c  A_T   \bar{t}_T  t_H
exclusive-or     0.1  0.5  54   0.66  0.93  409        0.76  50         40
encoder/decoder  -    -    -    -     1.00  124        1.00  124        114

Confidence intervals for e were obtained by applying the jackknife procedure (Mosteller and Tukey, 1977, chapter 8); confidence intervals on the training parameters reflect the range of near-optimal efficiency results. In the exclusive-or results, the four means vary from each other considerably. \bar{t}_c is large because the asymptotic performance model predicts many successful learning runs with T > 2000. However, since the model is fitting only a small portion of the data (approximately 1000 cases), its predictions may not be highly reliable. \bar{t}_T is low because the limit T = 2000 discards the longer training runs.
t_H is also low because it is strongly biased by the shortest times. t_E measures the training effort required per trained network, including failure times, provided that T = 49. However, T_E1 and T_E2 show that T can lie within the range (26, 235) and achieve performance no worse than 118 epochs of effort per trained network. The results for the encoder/decoder problem agree well with Fahlman (1988), who found α = 0, η = 1.7 and r = 1.0 as optimal parameter values and obtained t = 129 based upon N = 25. Equal performance is obtained with α = 0.1 and η = 1.6, but momentum values in excess of 0.2 reduce the efficiency. Since all the learning runs are successful, t_E = \bar{t}_c = \bar{t}_T and A = A_T = 1.0. Both T_E and T_E2 are infinite, indicating that there is no need to limit the training epochs to produce optimal learning performance. Because there were no failed runs, the asymptotic performance was not modelled.

8 Conclusion

The measurement of learning performance in artificial neural networks is of great importance. Existing performance measurements have employed measures that are either dependent on an arbitrarily chosen training epoch limit or are strongly biased by the shortest learning times. By optimising the training epoch limit, I have developed new performance measures, the efficiency e and the related mean t_E, which are both independent of the training epoch limit and provide an unbiased measure of performance. The optimal training epoch limit T_E and the range over which near-optimal performance is achieved (T_E1, T_E2) may be useful for solving novel problems. I have also shown how the random distribution of learning times can be statistically modelled, allowing prediction of the asymptotic success rate A and computation of corrected mean and median successful learning times, and I have demonstrated these new techniques on two popular benchmark problems.
Further work is needed to extend the modelling to encompass a wider range of algorithms and to broaden the available base of benchmark results. In the process, it is believed that greater understanding of the learning processes of feed-forward artificial neural networks will result.

References

M. S. Caceci and W. P. Cacheris. Fitting curves to data: The simplex algorithm is the answer. Byte, pages 340-362, May 1984.

Scott E. Fahlman. An empirical study of learning speed in back-propagation networks. Technical Report CMU-CS-88-162, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 1988.

Leonard G. C. Hamey. Benchmarking feed-forward neural networks: Models and measures. Macquarie Computing Report, Computing Discipline, Macquarie University, NSW 2109, Australia, 1992.

R. A. Jacobs. Increased rates of convergence through learning rate adaptation. COINS Technical Report 87-117, University of Massachusetts at Amherst, Dept. of Computer and Information Science, Amherst, MA, 1987.

John F. Kolen and Jordan B. Pollack. Back propagation is sensitive to initial conditions. Complex Systems, 4:269-280, 1990.

John K. Kruschke and Javier R. Movellan. Benefits of gain: Speeded learning and minimal hidden layers in back-propagation networks. IEEE Trans. Systems, Man and Cybernetics, 21(1):273-280, January 1991.

Frederick Mosteller and John W. Tukey. Data Analysis and Regression. Addison-Wesley, 1977.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing, chapter 8, pages 318-362. MIT Press, 1986.

M. A. Stephens. EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association, 69:730-737, September 1974.

G. Tesauro and B. Janssens. Scaling relationships in back-propagation learning. Complex Systems, 2:39-44, 1988.

A. C. Veitch and G. Holmes. Benchmarking and fast learning in neural networks: Results for back-propagation.
In Proceedings of the Second Australian Conference on Neural Networks, pages 167-171, 1991.

Yeong-Ho Yu and Robert F. Simmons. Descending epsilon in back-propagation: A technique for better generalization. In Proceedings of the International Joint Conference on Neural Networks 1990, 1990.
CCD Neural Network Processors for Pattern Recognition

Alice M. Chiang
Michael L. Chuang
Jeffrey R. LaFranchise
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02173

Abstract

A CCD-based processor that we call the NNC2 is presented. The NNC2 implements a fully connected 192-input, 32-output two-layer network and can be cascaded to form multilayer networks or used in parallel for additional input or output nodes. The device computes 1.92 x 10^9 connections/sec when clocked at 10 MHz. Network weights can be specified to six bits of accuracy and are stored on-chip in programmable digital memories. A neural network pattern recognition system using NNC2 and CCD image feature extractor (IFE) devices is described. Additionally, we report a CCD output circuit that exploits inherent nonlinearities in the charge injection process to realize an adjustable-threshold sigmoid in a chip area of 40 x 80 μm^2.

1 INTRODUCTION

A neural network chip based on charge-coupled device (CCD) technology, the NNC2, is presented. The NNC2 implements a fully connected two-layer net and can be cascaded to form multilayer networks. An image feature extractor (IFE) device (Chiang and Chuang, 1991) is briefly reviewed. The IFE is suited for neural networks with local connections and shared weights and can also be used for image preprocessing tasks. A neural network pattern recognition system based on feature extraction using IFEs and classification using NNC2s is proposed. The efficacy of neural networks with local connections and shared weights for feature extraction in character recognition and phoneme recognition tasks has been demonstrated by researchers such as (Le Cun et al., 1989) and (Waibel et al., 1989), respectively. More complex recognition tasks are likely to prove amenable to a system using locally connected networks as a front end with outputs generated by a highly-connected classifier.
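The abstract's throughput figure follows from the architecture described later in the paper: each MDAC computes one connection (multiply-accumulate) per clock cycle, so throughput is simply the number of parallel multipliers times the clock rate. A back-of-the-envelope check:

```python
def connections_per_second(n_multipliers, clock_hz):
    """One multiply-accumulate ("connection") per multiplier per clock."""
    return n_multipliers * clock_hz

# NNC2: a full 192-input inner product every cycle at 10 MHz; the 32
# outputs are computed serially, reusing the same 192 multipliers.
nnc2_rate = connections_per_second(192, 10_000_000)
```

The same arithmetic applies to the IFE, which has 49 MDACs (one per weight in its 7x7 window).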
Both the IFE and the NNC2 are hybrids composed of analog and digital components. Network weights are stored digitally, while neuron states and computation results are represented in analog form. Data enter and leave the devices in digital form for ease of integration into digital systems. The sigmoid is used in many network models as the nonlinear neuron output function. We have designed, fabricated and tested a compact CCD sigmoidal output circuit that is described below. The paper concludes with a discussion of strategies for implementing networks with particularly high or low fan-in to fan-out ratios.

2 THE NNC2 AND IFE DEVICES

The NNC2 is a neural network processor that implements a fully connected two-layer net with 192 input nodes and 32 output nodes. The device is an expanded version of a previous neural network classifier (NNC) chip (Chiang, 1990), hence the appellation "NNC2." The NNC2 consists of a 192-stage CCD tapped delay line for holding and shifting input values, 192 four-quadrant multipliers, and 192 32-word local memories for weight storage. When clocked at 10 MHz, the NNC2 performs 1.92 x 10^9 connections/sec. The device was fabricated using a 2-μm minimum feature size double-metal, double-polysilicon CCD/CMOS process. The NNC2 measures 8.8 x 9.2 mm^2 and is depicted in Figure 1.

Figure 1: Photomicrograph of the NNC2.

Tests indicate that the NNC2 has an output dynamic range exceeding 42 dB. Figure 2 shows the output of the NNC2 when the input consists of the cosine waveforms f_n = 0.2 cos(2π·2n/192) + 0.4 cos(2π·3n/192) and the weights are set to cos(2πnk/192), k = ±1, ±2, ..., ±16.
Due to the orthogonality of sinusoids of different frequencies, the output correlations g_k = Σ_{n=0}^{191} f_n cos(2πnk/192) should yield scaled impulses with amplitudes of ±0.2 and ±0.4 for k = ±2 and ±3 only; this is indeed the case, as the output (lower trace) in Figure 2 shows. This test demonstrates the linearity of the weighted sum (inner product) computed by the NNC2.

Figure 2: Response of the NNC2 to input cosine waveforms.

Locally connected, shared-weight networks can be implemented using the IFE, which raster scans up to 20 sets of 7x7 weights over an input image. At every window position the inner product of the windowed pixels and each of the 20 sets of weights is computed. For additional details, see (Chiang and Chuang, 1991). The IFE and the NNC2 share a number of common features that are described below.

2.1 MDACS

The multiplications of the inner product are performed in parallel by multiplying-D/A-converters (MDACs), of which there are 192 in the NNC2 and 49 in the IFE. Each MDAC produces a charge packet proportional to the product of an input and a digital weight. The partial products are summed on an output line common to all the MDACs, yielding a complete inner product every clock cycle. The design and operation of an MDAC are described in detail in (Chiang, 1990). Using a 2-μm design rule, a four-quadrant MDAC with 8-bit weights occupies an area of 200 x 200 μm^2.

2.2 WEIGHT STORAGE

The NNC2 and IFE feature on-chip digital storage of programmable network weights, specified to 6 and 8 bits, respectively. The NNC2 contains 192 local memories of 32 words each, while the IFE has forty-nine 20-word memories. Individual words can be addressed by means of a row pointer and a column pointer.
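The cosine linearity test above is easy to reproduce numerically. The sketch below normalises the correlation by N/2 so the impulses come out at exactly ±0.2 and ±0.4; the chip itself produces the unnormalised sums, so the scaling here is a presentational choice.

```python
import math

N = 192
# Test input from the paper: two cosines at frequencies 2 and 3.
f = [0.2 * math.cos(2 * math.pi * 2 * n / N)
     + 0.4 * math.cos(2 * math.pi * 3 * n / N) for n in range(N)]

def correlation(k):
    """g_k = sum_n f_n cos(2 pi n k / N), scaled by 2/N so that a
    cosine of amplitude A at frequency |k| correlates to exactly A."""
    return (2.0 / N) * sum(f[n] * math.cos(2 * math.pi * n * k / N)
                           for n in range(N))
```

Because sampled cosines at distinct integer frequencies are exactly orthogonal over a full period, every k other than ±2 and ±3 correlates to zero (up to floating-point error), which is the impulse pattern visible in the chip's output trace.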
Each bit of the CCD shift register memories is equipped with a feedback enable switch that obviates the need to refresh the volatile CCD storage medium explicitly; words are rewritten as they are read for use in computation, so that no cycles need be devoted to memory refresh.

2.3 INPUT BUFFER

Inputs to the NNC2 are held in a 192-stage CCD analog floating-gate tapped delay line. At each stage the floating gate is coupled to the input of the corresponding MDAC, permitting inputs to be sensed nondestructively for computation. The NNC2 delay line is composed of three 64-stage subsections (see Figure 1). This partitioning allows the NNC2 to compute either the weighted sum of 192 inputs or three 64-point inner products. The latter capability is well-matched to Time-Delay Neural Networks (TDNNs) that implement a moving temporal window for phoneme recognition (Waibel et al., 1989). The IFE contains a similar 775-stage delay line that holds six lines of a 128-pixel input image plus an additional seven pixels. Taps are placed on the first seven of every 128 stages in the IFE delay line so that the 1-dimensional line emulates a 2-dimensional window.

3 CCD SIGMOIDAL OUTPUT CIRCUIT

A sigmoidal charge-domain nonlinear detection circuit is shown in Figure 3. The circuit has a programmable input threshold controlled by the amplitude of the transfer gate voltage, V_TG. If the incoming signal charge is below the threshold set by V_TG, no charge is transferred to the output port and the incoming signal is ignored. If the input is above threshold, the amount of charge transferred to the output port is the difference between the charge input and the threshold level.
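The circuit's input-output behaviour can be idealised as a shifted, saturating rectifier. This piecewise-linear sketch is my simplification; the real charge transfer curves smoothly between these regimes, per Thornber (1971).

```python
def charge_output(q_in, threshold, q_max):
    """Idealised CCD output stage: charge below the V_TG-set threshold
    is blocked; the excess transfers to the output port, saturating at
    the receiving well's capacity q_max."""
    return min(max(q_in - threshold, 0.0), q_max)
```

Sweeping q_in traces out the flat-ramp-flat shape of the measured response: zero below threshold, linear in the excess charge above it, and clamped once the receiving well fills.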
The circuit design is based on the ability to calculate the charge transfer efficiency from an n+ diffusion region over a bias gate to a receiving well as a function of device parameters, and exploits the fact that under certain operating conditions a nonlinear dependence exists between the input and output charge (Thornber, 1971). The maximum output produced can be bounded by the size and gate voltage of the receiving well. The predicted and measured responses of the circuit for two different threshold levels are shown in the bottom of Figure 3. The circuit has an area of 40 x 80 μm^2 and can be integrated with the NNC2 or IFE chips to perform both the weighted-sum and output-nonlinearity computations on a single device.

4 DESIGN STRATEGIES

The NNC2 uses a time-multiplexed output (TMO) structure (Figure 4a), where the number of multipliers and the number of local memories is equal to the number of inputs, N. The depth of each local memory is equal to the number of output nodes, M, and the outputs are computed serially as each set of weights is read in sequence from the memories. A 256-input, 256-output device with 64k 8-bit weights has been designed and can be realized in a chip area of 14 x 14 mm^2. This chip is reconfigurable so that a single such device can be used to implement multilayer networks. If a network with a large (>1000) number of input nodes is required, then a time-multiplexed input (TMI) architecture with M multipliers may be more suitable (Figure 4b).

Figure 3: Schematic, micrograph, and test results of the sigmoid circuit.

In contrast to a TMO system that computes the M inner products
sequentially (the multiplications of each inner product are performed in parallel), a TMI structure performs N sets of M multiplications each (all M inner products are serially computed in parallel). As each input element arrives, it is broadcast to all M multipliers. Each multiplier multiplies the input by an appropriate weight from its N-word deep local memory and places the result in an accumulator. The M inner products appear in the accumulators one cycle after receipt of the final, Nth input.

Figure 4: (a) Time-multiplexed output (TMO) structure with serial outputs y1, ..., yM; (b) time-multiplexed input (TMI) structure with serial inputs x1, ..., xN.

5 SUMMARY

We have presented the NNC2, a CCD chip that implements a fully connected two-layer network at the rate of 1.92 x 10^9 connections/second. The NNC2 may be used in concert with IFE devices to form a CCD-based neural network pattern recognition system or as a co-processor to speed up neural network simulations on conventional computers. A VME-bus board for the NNC2 is presently being constructed. A compact CCD circuit that generates a sigmoidal output function was described, and finally, the relative merits of time-multiplexing input or output nodes in neural network devices were enumerated. Table 1 below is a comparison of recent neural network chips.

Table 1: Selected neural network chips.

Chip (organisation):       NNC2 (MIT Lincoln Lab) | NN (CIT) | ETANN (Intel) | NN (Mitsubishi) | NN (AT&T) | WSINN (Hitachi) | X1 (Adaptive Solutions)
No. of output nodes:       32 | 256 | two 64 | 168 | 16 (or 256) | 576 | 64
No. of input nodes:        192 | 256 | two 64 | 168 | 256 (or 16) | 64 | 4k
Synapse accuracy:          6 b x analog | 1 b x analog | analog x analog | analog x analog | 3 b x 6 b | 8 b x 9 b | 9 b x 16 b
Programmable synapses:     6 k | 64 k | 10 k | 28 k | 4 k | 37 k | 256 k
Throughput (10^9 conn/s):  1.92 | 0.5 | 2 | ? | 5.1 | 1.2 | 1.6
Chip area (mm^2):          8.8 x 9.2 | ? | 11.2 x 7.5 | 14.5 x 14.5 | 4.5 x 7 | 125 x 125 | 26.2 x 27.5
Clock rate:                10 MHz | 1.5 MHz | 400 kHz | ? | 20 MHz | 2.1 MHz (a) | 25 MHz
Weight storage:            digital (b) | analog | analog | analog | analog | digital | digital
On-chip learning:          no | no | no | yes (c) | no | no | yes
Design rule:               2 μm CCD/CMOS | 2 μm CCD | 1 μm CMOS | 1 μm CMOS | 0.9 μm CMOS | 0.8 μm CMOS | 0.8 μm CMOS
Reported at:               NIPS 91 | IJCNN 90 | IJCNN 89 | ISSCC 91 | ISSCC 91 | IJCNN 90 | ISSCC 91

NOTES: (a) The clock rate for the WSINN is extrapolated based on 1/step time. (b) No degradation observed on digitally stored and refreshed weights. (c) A simplified Boltzmann machine learning algorithm is used.

Acknowledgements

This work was supported by DARPA, the Office of Naval Research, and the Department of the Air Force. The IFE and NNC2 were fabricated by Orbit Semiconductor.

References

A. J. Agranat, C. F. Neugebauer and A. Yariv, "A CCD Based Neural Network Integrated Circuit with 64k Analog Programmable Synapses," IJCNN, 1990 Proceedings, pp. II-551-II-555.

Y. Arima et al., "A 336-Neuron 28-k Synapse Self-Learning Neural Network Chip with Branch-Neuron-Unit Architecture," in ISSCC Dig. of Tech. Papers, pp. 182-183, Feb. 1991.

B. E. Boser and E. Sackinger, "An Analog Neural Network Processor with Programmable Network Topology," in ISSCC Dig. of Tech. Papers, pp. 184-185, Feb. 1991.

A. M. Chiang, "A CCD Programmable Signal Processor," IEEE Jour. Solid-State Circ., vol. 25, no. 6, pp. 1510-1517, Dec. 1990.

A. M. Chiang and M. L. Chuang, "A CCD Programmable Image Processor and its Neural Network Applications," IEEE Jour. Solid-State Circ., vol. 26, no. 12, pp. 1894-1901, Dec. 1991.

D. Hammerstrom, "A VLSI Architecture for High-Performance, Low-Cost On-chip Learning," IJCNN, 1990 Proceedings, pp. II-537-II-543.

M. Holler et al., "An Electrically Trainable Artificial Neural Network (ETANN) with 10240 "Floating Gate" Synapses," IJCNN, 1989 Proceedings, pp. II-191-II-196.

Y. Le Cun et
al., "Handwritten Digit Recognition with a Back-Propagation Network," in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, pp. 396-404, San Mateo, CA: Morgan Kaufmann, 1989.

K. K. Thornber, "Incomplete Charge Transfer in IGFET Bucket-Brigade Shift Registers," IEEE Trans. Elect. Dev., vol. ED-18, no. 10, pp. 941-950, 1971.

A. Waibel et al., "Phoneme Recognition Using Time-Delay Neural Networks," IEEE Trans. on Acoust., Speech, Sig. Proc., vol. 37, no. 3, pp. 329-339, March 1989.

M. Yasunaga et al., "Design, Fabrication and Evaluation of a 5-Inch Wafer Scale Neural Network LSI Composed of 576 Digital Neurons," IJCNN, 1990 Proceedings, pp. II-527-II-535.
Structural Risk Minimization for Character Recognition

I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla
AT&T Bell Laboratories
Holmdel, NJ 07733, USA

Abstract

The method of Structural Risk Minimization refers to tuning the capacity of the classifier to the available amount of training data. This capacity is influenced by several factors, including: (1) properties of the input space, (2) nature and structure of the classifier, and (3) learning algorithm. Actions based on these three factors are combined here to control the capacity of linear classifiers and improve generalization on the problem of handwritten digit recognition.

1 RISK MINIMIZATION AND CAPACITY

1.1 EMPIRICAL RISK MINIMIZATION

A common way of training a given classifier is to adjust the parameters w in the classification function F(x, w) to minimize the training error Etrain, i.e. the frequency of errors on a set of p training examples. Etrain estimates the expected risk based on the empirical data provided by the p available examples. The method is thus called Empirical Risk Minimization. But the classification function F(x, w*) which minimizes the empirical risk does not necessarily minimize the generalization error, i.e. the expected value of the risk over the full distribution of possible inputs and their corresponding outputs. Such generalization error Egene cannot in general be computed, but it can be estimated on a separate test set (Etest). Other ways of estimating Egene include the leave-one-out or moving control method [Vap82] (for a review, see [Moo92]).

1.2 CAPACITY AND GUARANTEED RISK

Any family of classification functions {F(x, w)} can be characterized by its capacity. The Vapnik-Chervonenkis dimension (or VC-dimension) [Vap82] is such a capacity, defined as the maximum number h of training examples which can be learnt without error, for all possible binary labelings.
The VC-dimension is in some cases simply given by the number of free parameters of the classifier, but in most practical cases it is quite difficult to determine it analytically. The VC-theory provides bounds. Let {F(x, w)} be a set of classification functions of capacity h. With probability (1 - η), for a number of training examples p > h, simultaneously for all classification functions F(x, w), the generalization error Egene is lower than a guaranteed risk defined by:

Eguarant = Etrain + ε(p, h, Etrain, η),   (1)

where ε(p, h, Etrain, η) is proportional to ε0 = [h(ln(2p/h) + 1) - ln η]/p for small Etrain, and to √ε0 for Etrain close to one [Vap82,Vap92].

For a fixed number of training examples p, the training error decreases monotonically as the capacity h increases, while both guaranteed risk and generalization error go through a minimum. Before the minimum, the problem is overdetermined: the capacity is too small for the amount of training data. Beyond the minimum the problem is underdetermined. The key issue is therefore to match the capacity of the classifier to the amount of training data in order to get the best generalization performance. The method of Structural Risk Minimization (SRM) [Vap82,Vap92] provides a way of achieving this goal.

1.3 STRUCTURAL RISK MINIMIZATION

Let us choose a family of classifiers {F(x, w)}, and define a structure consisting of nested subsets of elements of the family: S1 ⊂ S2 ⊂ ... ⊂ Sr ⊂ .... By defining such a structure, we ensure that the capacity hr of the subset of classifiers Sr is less than the capacity hr+1 of subset Sr+1. The method of SRM amounts to finding the subset S_opt for which the classifier F(x, w*) which minimizes the empirical risk within that subset yields the best overall generalization performance.

Two problems arise in implementing SRM: (I) How to select S_opt? (II) How to find a good structure? Problem (I) arises because we have no direct access to Egene.
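The trade-off expressed by equation (1) can be sketched numerically. The following Python fragment is illustrative only (it is not from the paper): it evaluates the small-Etrain form of the confidence term, ε0 = [h(ln(2p/h) + 1) - ln η]/p, against a purely hypothetical training-error curve that decreases with capacity h, and shows the characteristic minimum of the guaranteed risk at an intermediate capacity.

```python
import math

def guaranteed_risk(e_train, p, h, eta):
    """Guaranteed risk of Eq. (1): E_train plus a capacity-dependent
    confidence term, here the small-E_train form
    eps0 = [h*(ln(2p/h) + 1) - ln(eta)] / p."""
    eps0 = (h * (math.log(2 * p / h) + 1) - math.log(eta)) / p
    return e_train + eps0

# Hypothetical training-error curve: E_train decreases as capacity h grows.
p, eta = 600, 0.05
for h in (5, 10, 20, 50, 100):
    e_train = 0.5 * math.exp(-h / 10)   # illustrative only
    print(h, round(guaranteed_risk(e_train, p, h, eta), 3))
```

With these (made-up) numbers the guaranteed risk first falls as capacity reduces the training error, then rises again as the confidence term dominates, so the minimum sits at an intermediate h, exactly the behavior the SRM argument relies on.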
In our experiments, we will use the minimum of either Etest or Eguarant to select S_opt, and show that these two minima are very close. A good structure reflects the a priori knowledge of the designer, and only few guidelines can be provided from the theory to solve problem (II). The designer must find the best compromise between two competing terms: Etrain and ε. Reducing h causes ε to decrease, but Etrain to increase. A good structure should be such that decreasing the VC-dimension happens at the expense of the smallest possible increase in training error. We now examine several ways in which such a structure can be built.

2 PRINCIPAL COMPONENT ANALYSIS, OPTIMAL BRAIN DAMAGE, AND WEIGHT DECAY

Consider three apparently different methods of improving generalization performance: Principal Component Analysis (a preprocessing transformation of input space) [The89], Optimal Brain Damage (an architectural modification through weight pruning) [LDS90], and a regularization method, Weight Decay (a modification of the learning algorithm) [Vap82]. For the case of a linear classifier, these three approaches are shown here to control the capacity of the learning system through the same underlying mechanism: a reduction of the effective dimension of weight space, based on the curvature properties of the Mean Squared Error (MSE) cost function used for training.

2.1 LINEAR CLASSIFIER AND MSE TRAINING

Consider a binary linear classifier F(x, w) = θ0(w^T x), where w^T is the transpose of w and the function θ0 takes two values 0 and 1 indicating to which class x belongs. The VC-dimension of such a classifier is equal to the dimension of input space(1) (or the number of weights): h = dim(w) = dim(x) = n. The empirical risk is given by:

Etrain = (1/p) Σ_{k=1..p} (y^k - θ0(w^T x^k))²,   (2)

where x^k is the kth example, and y^k is the corresponding desired output.
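For binary targets, equation (2) simply counts the fraction of misclassified examples, since (y - θ0)² is either 0 or 1. A minimal Python sketch (toy data, not from the paper) makes this explicit; the AND-gate data and weight vector below are purely illustrative.

```python
import numpy as np

def theta(z):
    # Unit-step output function: class 1 if z > 0, else class 0.
    return (z > 0).astype(float)

def e_train(w, X, y):
    """Empirical risk of Eq. (2): mean squared error of the thresholded
    linear classifier, i.e. the fraction of misclassified examples
    for binary targets."""
    return np.mean((y - theta(X @ w)) ** 2)

# Toy data: first input component fixed at 1 to provide the bias weight.
X = np.array([[1., 0., 0.], [1., 0., 1.], [1., 1., 0.], [1., 1., 1.]])
y = np.array([0., 0., 0., 1.])          # logical AND
w = np.array([-1.5, 1., 1.])            # a separating weight vector
print(e_train(w, X, y))                 # -> 0.0 (all examples correct)
```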
The problem of minimizing Etrain as a function of w can be approached in different ways [DH73], but it is often replaced by the problem of minimizing a Mean Square Error (MSE) cost function, which differs from (2) in that the nonlinear function θ0 has been removed.

2.2 CURVATURE PROPERTIES OF THE MSE COST FUNCTION

The three structures that we investigate rely on curvature properties of the MSE cost function. Consider the dependence of MSE on one of the parameters w_i. Training leads to the optimal value w_i* for this parameter. One way of reducing the capacity is to set w_i to zero. For the linear classifier, this reduces the VC-dimension by one: h' = dim(w) - 1 = n - 1. The MSE increase resulting from setting w_i = 0 is to lowest order proportional to the curvature of the MSE at w_i*. Since the decrease in capacity should be achieved at the smallest possible expense in MSE increase, directions in weight space corresponding to small MSE curvature are good candidates for elimination.

The curvature of the MSE is specified by the Hessian matrix H of second derivatives of the MSE with respect to the weights. For a linear classifier, the Hessian matrix is given by twice the correlation matrix of the training inputs, H = (2/p) Σ_{k=1..p} x^k x^kT. The Hessian matrix is symmetric, and can be diagonalized to get rid of cross terms, to facilitate decisions about the simultaneous elimination of several directions in weight space. The elements of the Hessian matrix after diagonalization are the eigenvalues λ_i; the corresponding eigenvectors give the principal directions w_i' of the MSE. In the rotated axes, the increase ΔMSE due to setting w_i' = 0 takes a simple form:

ΔMSE = (1/2) λ_i (w_i'*)².   (3)

The quadratic approximation becomes an exact equality for the linear classifier.

(1) We assume, for simplicity, that the first component of vector x is constant and set to 1, so that the corresponding weight introduces the bias value.
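This curvature argument is easy to check numerically. The sketch below is illustrative (not the authors' code) and uses a least-squares linear model trained with the MSE cost as a stand-in: it forms H = (2/p) Σ x^k x^kT, rotates the optimal weights into the principal-axis basis, and verifies that zeroing one rotated weight raises the MSE by exactly the quadratic form (1/2) λ_i (w_i'*)², since the MSE is exactly quadratic for a linear model.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 5
X = rng.normal(size=(p, n))
y = X @ rng.normal(size=n) + 0.1 * rng.normal(size=p)

mse = lambda w: np.mean((y - X @ w) ** 2)

w_star = np.linalg.lstsq(X, y, rcond=None)[0]   # MSE minimum
H = (2.0 / p) * X.T @ X                          # Hessian of the MSE
lam, V = np.linalg.eigh(H)                       # eigenvalues / principal directions

# Zero the rotated weight along principal direction i and compare the
# exact MSE increase with (1/2) * lambda_i * (w_i')^2.
w_rot = V.T @ w_star
for i in (0, n - 1):                             # smallest / largest curvature
    w_pruned = w_rot.copy()
    w_pruned[i] = 0.0
    delta = mse(V @ w_pruned) - mse(w_star)
    print(delta, 0.5 * lam[i] * w_rot[i] ** 2)   # the two agree
```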
Principal directions w_i' corresponding to small eigenvalues λ_i of H are good candidates for elimination.

2.3 PRINCIPAL COMPONENT ANALYSIS

One common way of reducing the capacity of a classifier is to reduce the dimension of the input space and thereby reduce the number of necessary free parameters (or weights). Principal Component Analysis (PCA) is a feature extraction method based on eigenvalue analysis. Input vectors x of dimension n are approximated by a linear combination of m ≤ n vectors forming an orthonormal basis. The coefficients of this linear combination form a vector x' of dimension m. The optimal basis in the least squares sense is given by the m eigenvectors corresponding to the m largest eigenvalues of the correlation matrix of the training inputs (this matrix is 1/2 of H). A structure is obtained by ranking the classifiers according to m. The VC-dimension of the classifier is reduced to: h' = dim(x') = m.

2.4 OPTIMAL BRAIN DAMAGE

For a linear classifier, pruning can be implemented in two different but equivalent ways: (i) change input coordinates to a principal axis representation, prune the components corresponding to small eigenvalues according to PCA, and then train with the MSE cost function; (ii) change coordinates to a principal axis representation, train with MSE first, and then prune the weights, to get a weight vector w' of dimension m < n. Procedure (i) can be understood as a preprocessing, whereas procedure (ii) involves an a posteriori modification of the structure of the classifier (network architecture). The two procedures become identical if the weight elimination in (ii) is based on a 'smallest eigenvalue' criterion.

Procedure (ii) is very reminiscent of Optimal Brain Damage (OBD), a weight pruning procedure applied after training. In OBD, the best candidates for pruning are those weights which minimize the increase ΔMSE defined in equation (3).
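The OBD selection rule can be illustrated with a small numerical sketch (illustrative values, not the authors' code): for a quadratic cost with Hessian eigenvalues λ_i and rotated optimal weights w_i'*, the saliency (1/2) λ_i (w_i'*)² of equation (3) orders the candidates for pruning, and the set of kept weights need not coincide with the set picked by a largest-eigenvalue (PCA-style) rule.

```python
import numpy as np

# Hypothetical Hessian eigenvalues and rotated optimal weights.
lam = np.array([4.0, 2.0, 1.0, 0.5])
w_rot = np.array([0.1, 1.2, 0.3, 2.0])

# OBD saliency: the MSE increase (Eq. (3)) caused by zeroing each weight.
saliency = 0.5 * lam * w_rot ** 2
print(saliency)

# Keep the m = 2 weights whose removal would hurt the MSE most.
m = 2
keep_obd = set(np.argsort(saliency)[-m:])
keep_pca = set(np.argsort(lam)[-m:])     # largest-eigenvalue rule
print(keep_obd, keep_pca)                # the two sets differ
```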
The m weights that are kept do not necessarily correspond to the largest m eigenvalues, due to the extra factor of (w_i'*)² in equation (3). In either implementation, the VC-dimension is reduced to h' = dim(w') = dim(x') = m.

2.5 WEIGHT DECAY

Capacity can also be controlled through an additional term in the cost function, to be minimized simultaneously with MSE. Linear classifiers can be ranked according to the norm ||w||² = Σ_j w_j² of the weight vector. A structure is constructed by allowing within the subset Sr only those classifiers which satisfy ||w||² < c_r. The positive bounds c_r form an increasing sequence: c_1 < c_2 < ... < c_r < .... This sequence can be matched with a monotonically decreasing sequence of positive Lagrange multipliers γ_1 ≥ γ_2 ≥ ... ≥ γ_r ≥ ..., such that our training problem, stated as the minimization of MSE within a specific set Sr, is implemented through the minimization of a new cost function: MSE + γ_r ||w||². This is equivalent to the Weight Decay procedure (WD). In a mechanical analogy, the term γ_r ||w||² is like the energy of a spring of tension γ_r which pulls the weights to zero. As it is easier to pull in the directions of small curvature of the MSE, WD pulls the weights to zero predominantly along the principal directions of the Hessian matrix H associated with small eigenvalues.

In the principal axis representation, the minimum w^γ of the cost function MSE + γ||w||² is a simple function of the minimum w^0 of the MSE in the γ → 0+ limit: w_i^γ = w_i^0 λ_i/(λ_i + γ). The weight w_i^0 is attenuated by a factor λ_i/(λ_i + γ). Weights become negligible for γ ≫ λ_i, and remain unchanged for γ ≪ λ_i.

The effect of this attenuation can be compared to that of weight pruning. Pruning all weights such that λ_i < γ reduces the capacity to:

h' = Σ_{i=1..n} θ_γ(λ_i),   (4)

where θ_γ(u) = 1 if u > γ and θ_γ(u) = 0 otherwise. By analogy, we introduce the Weight Decay capacity:

h' = Σ_{i=1..n} λ_i/(λ_i + γ).   (5)

This expression arises in various theoretical frameworks [Moo92,McK92], and is valid only for broad spectra of eigenvalues.

3 SMOOTHING, HIGHER-ORDER UNITS, AND REGULARIZATION

Combining several different structures achieves further performance improvements. The combination of exponential smoothing (a preprocessing transformation of input space) and regularization (a modification of the learning algorithm) is shown here to improve character recognition. The generalization ability is dramatically improved by the further introduction of second-order units (an architectural modification).

3.1 SMOOTHING

Smoothing is a preprocessing which aims at reducing the effective dimension of input space by degrading the resolution: after smoothing, decimation of the inputs could be performed without further image degradation. Smoothing is achieved here through convolution with an exponential kernel:

BLURRED_PIXEL(i, j) = [Σ_k Σ_l PIXEL(i+k, j+l) exp(-√(k²+l²)/β)] / [Σ_k Σ_l exp(-√(k²+l²)/β)],

where β is the smoothing parameter which determines the structure. Convolution with the chosen kernel is an invertible linear operation. Such preprocessing results in no capacity change for an MSE-trained linear classifier. Smoothing only modifies the spectrum of eigenvalues and must be combined with an eigenvalue-based regularization procedure such as OBD or WD, to obtain performance improvement through capacity decrease.

3.2 HIGHER-ORDER UNITS

Higher-order (or sigma-pi) units can be substituted for the linear units to get polynomial classifiers: F(x, w) = θ0(w^T ξ(x)), where ξ(x) is an m-dimensional vector (m > n) with components: x_1, x_2, ..., x_n, (x_1 x_1), (x_1 x_2), ..., (x_n x_n), ..., (x_1 x_2 ... x_n). The structure is geared towards increasing the capacity, and is controlled by the order of the polynomial: S1 contains all the linear terms, S2 linear plus quadratic, etc.
Computations are kept tractable with the method proposed in reference [Pog75].

4 EXPERIMENTAL RESULTS

Experiments were performed on the benchmark problem of handwritten digit recognition described in reference [GPP+89]. The database consists of 1200 (16 x 16) binary pixel images, divided into 600 training examples and 600 test examples.

In figure 1, we compare the results obtained by pruning inputs or weights with PCA and the results obtained with WD. The overall appearance of the curves is very similar. In both cases, the capacity (computed from (4) and (5)) decreases as a function of γ, whereas the training error increases. For the optimum value γ*, the capacity is only 1/3 of the nominal capacity, computed solely on the basis of the network architecture. At the price of some error on the training set, the error rate on the test set is only half the error rate obtained with γ = 0+.

The competition between capacity and training error always results in a unique minimum of the guaranteed risk (1). It is remarkable that our experiments show the minimum of Eguarant coinciding with the minimum of Etest. Either of these two quantities can therefore be used to determine γ*. In principle, another independent test set should be used to get a reliable estimate of Egene (cross-validation). It seems therefore advantageous to determine γ* using the minimum of Eguarant and use the test set to predict the generalization performance.

Using Eguarant to determine γ* raises the problem of determining the capacity of the system. The capacity can be measured when analytic computation is not possible. Measurements performed with the method proposed by Vapnik, Levin, and Le Cun yield results in good agreement with those obtained using (5). The method yields an effective VC-dimension which accounts for the global capacity of the system, including the effects of input data, architecture, and learning algorithm.(2)
(2) Schematically, measurements of the effective VC-dimension consist of splitting the training data into two subsets. The difference between Etrain in these subsets is maximized. The value of h is extracted from the fit to a theoretical prediction for such maximal discrepancy.

[Figure 1: Percent error and capacity h' as a function of log γ (linear classifier, no smoothing): (a) weight/input pruning via PCA (γ is a threshold), (b) WD (γ is the decay parameter). The guaranteed risk has been rescaled to fit in the figure.]

Table 1: Etest for smoothing, WD, and higher-order units combined.

  β     γ     1st order   2nd order
  0     γ*    6.3         1.5
  1     γ*    5.0         0.8
  2     γ*    4.5         1.2
  10    γ*    4.3         1.3
  any   0+    12.7        3.3

In table 1 we report results obtained when several structures are combined. Weight decay with γ = γ* reduces Etest by a factor of 2. Input space smoothing used in conjunction with WD results in an additional reduction by a factor of 1.5. The best performance is achieved for the highest level of smoothing, β = 10, for which the blurring is considerable. As expected, smoothing has no effect in the absence of WD. The use of second-order units provides an additional factor of 5 reduction in Etest. For second-order units, the number of weights scales like the square of the number of inputs: n² = 66049. But the capacity (5) is found to be only 196, for the optimum values of γ and β.

5 CONCLUSIONS AND EPILOGUE

Our results indicate that the VC-dimension must measure the global capacity of the system.
It is crucial to incorporate the effects of preprocessing of the input data and modifications of the learning algorithm. Capacities defined solely on the basis of the network architecture give overly pessimistic upper bounds. The method of SRM provides a powerful tool for tuning the capacity. We have shown that structures acting at different levels (preprocessing, architecture, learning mechanism) can produce similar effects. We have then combined three different structures to improve generalization. These structures have interesting complementary properties. The introduction of higher-order units increases the capacity. Smoothing and weight decay act in conjunction to decrease it.

Elaborate neural networks for character recognition [LBD+90,GAL+91] also incorporate similar complementary structures. In multilayer sigmoid-unit networks, the capacity is increased through additional hidden units. Feature-extracting neurons introduce smoothing, and regularization follows from prematurely stopping training before reaching the MSE minimum. When initial weights are chosen to be small, this stopping technique produces effects similar to those of weight decay.

Acknowledgments

We wish to thank L. Jackel's group at Bell Labs for useful discussions, and are particularly grateful to E. Levin and Y. Le Cun for communicating to us the unpublished method of computing the effective VC-dimension.

References

[DH73] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley and Son, 1973.

[GAL+91] I. Guyon, P. Albrecht, Y. Le Cun, J. Denker, and W. Hubbard. Design of a neural network character recognizer for a touch terminal. Pattern Recognition, 24(2), 1991.

[GPP+89] I. Guyon, I. Poujaud, L. Personnaz, G. Dreyfus, J. Denker, and Y. Le Cun. Comparing different neural network architectures for classifying handwritten digits.
In Proceedings of the International Joint Conference on Neural Networks, volume II, pages 127-132. IEEE, 1989.

[LBD+90] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4), 1990.

[LDS90] Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2 (NIPS 89), pages 598-605. Morgan Kaufmann, 1990.

[McK92] D. MacKay. A practical Bayesian framework for backprop networks. In this volume, 1992.

[Moo92] J. Moody. Generalization, weight decay and architecture selection for non-linear learning systems. In this volume, 1992.

[Pog75] T. Poggio. On optimal nonlinear associative recall. Biol. Cybern., 19:201, 1975.

[The89] C. W. Therrien. Decision, Estimation and Classification: An Introduction to Pattern Recognition and Related Topics. Wiley, 1989.

[Vap82] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.

[Vap92] V. Vapnik. Principles of risk minimization for learning theory. In this volume, 1992.
A Simple Weight Decay Can Improve Generalization

Anders Krogh*
CONNECT, The Niels Bohr Institute
Blegdamsvej 17
DK-2100 Copenhagen, Denmark
krogh@cse.ucsc.edu

John A. Hertz
Nordita
Blegdamsvej 17
DK-2100 Copenhagen, Denmark
hertz@nordita.dk

Abstract

It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.

1 INTRODUCTION

Many recent studies have shown that the generalization ability of a neural network (or any other 'learning machine') depends on a balance between the information in the training examples and the complexity of the network, see for instance [1,2,3]. Bad generalization occurs if the information does not match the complexity, e.g. if the network is very complex and there is little information in the training set. In this last instance the network will be over-fitting the data, and the opposite situation corresponds to under-fitting.

*Present address: Computer and Information Sciences, Univ. of California Santa Cruz, Santa Cruz, CA 95064.

Often the number of free parameters, i.e. the number of weights and thresholds, is used as a measure of the network complexity, and algorithms have been developed which minimize the number of weights while still keeping the error on the training examples small [4,5,6].
This minimization of the number of free parameters is not always what is needed. A different way to constrain a network, and thus decrease its complexity, is to limit the growth of the weights through some kind of weight decay. It should prevent the weights from growing too large unless it is really necessary. It can be realized by adding a term to the cost function that penalizes large weights,

E(w) = E0(w) + (λ/2) Σ_i w_i²,   (1)

where E0 is one's favorite error measure (usually the sum of squared errors), and λ is a parameter governing how strongly large weights are penalized. w is a vector containing all free parameters of the network; it will be called the weight vector. If gradient descent is used for learning, the last term in the cost function leads to a new term -λw_i in the weight update:

dw_i/dt ∝ -∂E0/∂w_i - λw_i.   (2)

Here it is formulated in continuous time. If the gradient of E0 (the 'force term') were not present this equation would lead to an exponential decay of the weights.

Obviously there are infinitely many possibilities for choosing other forms of the additional term in (1), but here we will concentrate on this simple form. It has been known for a long time that a weight decay of this form can improve generalization [7], but until now this has not been very widely recognized. The aim of this paper is to analyze this effect both theoretically and experimentally. Weight decay as a special kind of regularization is also discussed in [8,9].

2 FEED-FORWARD NETWORKS

A feed-forward neural network implements a function of the inputs that depends on the weight vector w; it is called f_w. For simplicity it is assumed that there is only one output unit. When the input is ξ the output is f_w(ξ). Note that the input vector is a vector in the N-dimensional input space, whereas the weight vector is a vector in the weight space, which has a different dimension W.
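A discrete-time version of the update in equation (2) is a one-line modification of plain gradient descent. The sketch below is illustrative only (the learning rate, decay strength, and starting weights are made up): with the error gradient switched off, repeated updates show the pure exponential shrinkage of the weights that the continuous-time equation predicts.

```python
import numpy as np

def weight_decay_step(w, grad_E0, lr, lam):
    """One discrete-time version of Eq. (2):
    w <- w - lr * (dE0/dw + lambda * w)."""
    return w - lr * (grad_E0(w) + lam * w)

# With no error gradient, the update is pure exponential decay of w.
w = np.array([1.0, -2.0, 0.5])
zero_grad = lambda w: np.zeros_like(w)
for _ in range(100):
    w = weight_decay_step(w, zero_grad, lr=0.1, lam=0.5)
print(np.abs(w).max())   # weights have shrunk toward zero
```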
The aim of the learning is not only to learn the examples, but to learn the underlying function that produces the targets for the learning process. First, we assume that this target function can actually be implemented by the network. This means there exists a weight vector u such that the target function is equal to f_u. The network with parameters u is often called the teacher, because from input vectors it can produce the right targets. The sum of squared errors is

E0(w) = (1/2) Σ_{μ=1..p} [f_u(ξ^μ) - f_w(ξ^μ)]²,   (3)

where p is the number of training patterns. The learning equation (2) can then be written

dw_i/dt ∝ Σ_μ [f_u(ξ^μ) - f_w(ξ^μ)] ∂f_w(ξ^μ)/∂w_i - λw_i.   (4)

Now the idea is to expand this around the solution u, but first the linear case will be analyzed in some detail.

3 THE LINEAR PERCEPTRON

The simplest kind of 'network' is the linear perceptron, characterized by

f_w(ξ) = N^(-1/2) Σ_i w_i ξ_i,   (5)

where the N^(-1/2) is just a convenient normalization factor. Here the dimension of the weight space (W) is the same as the dimension of the input space (N). The learning equation then takes the simple form

dw_i/dt ∝ Σ_μ N^(-1) Σ_j [u_j - w_j] ξ_j^μ ξ_i^μ - λw_i.   (6)

Defining

A_ij = N^(-1) Σ_μ ξ_i^μ ξ_j^μ   (7)

and

v = u - w,   (8)

it becomes

dv_i/dt ∝ -Σ_j A_ij v_j + λ(u_i - v_i).   (9)

Transforming this equation to the basis where A is diagonal yields

dv_r/dt ∝ -(λ_r + λ)v_r + λu_r,   (10)

where λ_r are the eigenvalues of A, and a subscript r indicates transformation to this basis. The generalization error is defined as the error averaged over the distribution of input vectors,

F = N^(-1) Σ_{ij} v_i v_j ⟨ξ_i ξ_j⟩_ξ.   (11)

Here it is assumed that ⟨ξ_i ξ_j⟩_ξ = δ_ij. The generalization error F is thus proportional to |v|², which is also quite natural. The eigenvalues of the covariance matrix A are non-negative, and its rank can easily be shown to be less than or equal to p.
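The decoupled mode dynamics of equation (10) can be checked numerically. The sketch below is illustrative (the eigenvalues and teacher vector are made up, and one mode is given a zero eigenvalue to mimic a direction outside the pattern subspace): simple forward-Euler integration converges to the fixed point v_r = λu_r/(λ_r + λ) in every mode.

```python
import numpy as np

# Integrate the decoupled mode dynamics of Eq. (10),
#   dv_r/dt = -(lambda_r + lam) * v_r + lam * u_r,
# and compare with the fixed point v_r = lam * u_r / (lambda_r + lam).
lam_r = np.array([2.0, 1.0, 0.5, 0.0])   # eigenvalues of A (one zero mode)
u_r = np.array([1.0, -1.0, 2.0, 1.0])    # teacher in the eigenbasis
lam = 0.1                                 # weight-decay strength

v = u_r.copy()                            # start from w = 0, i.e. v = u
dt = 0.01
for _ in range(20000):
    v += dt * (-(lam_r + lam) * v + lam * u_r)

print(v)
print(lam * u_r / (lam_r + lam))          # analytic asymptote
```

Note that in the zero-eigenvalue mode the fixed point is v_r = u_r, i.e. the weight component itself decays to zero, which is the smallest-norm behavior discussed below.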
It is also easily seen that all eigenvectors belonging to eigenvalues larger than 0 lie in the subspace of weight space spanned by the input patterns ξ^1, ..., ξ^p. This subspace, called the pattern subspace, will be denoted V_p, and the orthogonal subspace is denoted by V⊥. When there are sufficiently many examples they span the whole space, and there will be no zero eigenvalues. This can only happen for p ≥ N.

When λ = 0 the solution to (10) inside V_p is just a simple exponential decay to v_r = 0. Outside the pattern subspace λ_r = 0, and the corresponding part of v_r will be constant. Any weight vector which has the same projection onto the pattern subspace as u gives a learning error of 0. One can think of this as a 'valley' in the error surface given by u + V⊥. The training set contains no information that can help us choose between all these solutions to the learning problem.

When learning with a weight decay λ > 0, the constant part in V⊥ will decay to zero asymptotically (as e^(-λt), where t is the time). An infinitesimal weight decay will therefore choose the solution with the smallest norm out of all the solutions in the valley described above. This solution can be shown to be the optimal one on average.

4 LEARNING WITH AN UNRELIABLE TEACHER

Random errors made by the teacher can be modeled by adding a random term η^μ to the targets:

target^μ = f_u(ξ^μ) + η^μ.   (12)

The variance of η is called σ², and it is assumed to have zero mean. Note that these targets are not exactly realizable by the network (for σ² > 0), and therefore this is a simple model for studying learning of an unrealizable function. With this noise the learning equation (2) becomes

dw_i/dt ∝ Σ_μ (N^(-1) Σ_j v_j ξ_j^μ + N^(-1/2) η^μ) ξ_i^μ - λw_i.   (13)

Transforming it to the basis where A is diagonal as before,

dv_r/dt ∝ -(λ_r + λ)v_r + λu_r - N^(-1/2) Σ_μ η^μ ξ_r^μ.   (14)

The asymptotic solution to this equation is

v_r = [λu_r - N^(-1/2) Σ_μ η^μ ξ_r^μ] / (λ + λ_r).   (15)
The contribution to the generalization error is the square of this, summed over all r. Averaged over the noise (denoted by a bar), it becomes for each r

v̄_r² = [λ²u_r² + σ²λ_r] / (λ + λ_r)².   (16)

The last expression has a minimum in λ, which can be found by putting the derivative with respect to λ equal to zero: λ_r^optimal = σ²/u_r². Remarkably, it depends only on u_r and the variance of the noise, and not on A. If it is assumed that u is random, (16) can be averaged over u. This yields an optimal λ independent of r,

λ^optimal = σ²/ū²,   (17)

where ū² is the average of N^(-1)|u|². In this case the weight decay to some extent prevents the network from fitting the noise.

[Figure 1: Generalization error as a function of α = p/N. The full line is for λ = σ² = 0.2, and the dashed line for λ = 0. The dotted line is the generalization error with no noise and λ = 0.]

From equation (14) one can see that the noise is projected onto the pattern subspace. Therefore the contribution to the generalization error from V⊥ is the same as before, and this contribution is on average minimized by a weight decay of any size. Equation (17) was derived in [10] in the context of a particular eigenvalue spectrum. Figure 1 shows the dramatic improvement in generalization error when the optimal weight decay is used in this case. The present treatment shows that (17) is independent of the spectrum of A.

We conclude that a weight decay has two positive effects on generalization in a linear network: 1) It suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. 2) If the size is chosen right, it can suppress some of the effect of static noise on the targets.

5 NON-LINEAR NETWORKS

It is not possible to analyze a general non-linear network exactly, as done above for the linear case.
By a local linearization it is, however, possible to draw some interesting conclusions from the results in the previous section. Assume the function is realizable, f = f_u. Then learning corresponds to solving the p equations

    f_w(ξ^μ) = ζ^μ,  μ = 1, ..., p    (18)

in W variables, where W is the number of weights. For p < W these equations define a manifold in weight space of dimension at least W - p. Any point w̄ on this manifold gives a learning error of zero, and therefore (4) can be expanded around w̄. Putting v = w̄ - w, expanding f_w in v, and using it in (4) yields

    v̇_i ∝ -Σ_{μ,j} (∂f_w(ξ^μ)/∂w_i)(∂f_w(ξ^μ)/∂w_j) v_j + λ(w̄_i - v_i)
         = -Σ_j A_ij(w̄) v_j - λv_i + λw̄_i.    (19)

(The derivatives in this equation should be taken at w̄.) The analogue of A is defined as

    A_ij(w̄) = Σ_μ (∂f_w(ξ^μ)/∂w_i)(∂f_w(ξ^μ)/∂w_j).    (20)

Since it is of outer product form (like A), its rank R(w̄) ≤ min{p, W}. Thus when p < W, A is never of full rank. The rank of A is of course equal to W minus the dimension of the manifold mentioned above. From these simple observations one can argue that good generalization should not be expected for p < W. This is in accordance with other results (cf. [3]), and with current 'folk-lore'. The difference from the linear case is that the 'rain gutter' need not be (and most probably is not) linear, but curved in this case. There may in fact be other valleys or rain gutters disconnected from the one containing u. One can also see that if A has full rank, all points in the immediate neighborhood of w = u give a learning error larger than 0, i.e. there is a simple minimum at u. Assume that the learning finds one of these valleys. A small weight decay will pick out the point in the valley with the smallest norm among all the points in the valley. In general it can not be proven that picking that solution is the best strategy.
But, at least from a philosophical point of view, it seems sensible, because it is (in a loose sense) the solution with the smallest complexity, the one that Ockham would probably have chosen. The value of a weight decay is more evident if there are small errors in the targets. In that case one can go through exactly the same line of argument as for the linear case to show that a weight decay can improve generalization, and even with the same optimal choice (17) of λ. This is strictly true only for small errors (where the linear approximation is valid).

6 NUMERICAL EXPERIMENTS

A weight decay has been tested on the NetTalk problem [11]. In the simulations, back-propagation derived from the 'entropic error measure' [12] was used, with a momentum term fixed at 0.8. The network had 7 x 26 input units, 40 hidden units and 26 output units, in all about 8,400 weights. It was trained on 400 to 5,000 random words from the database of around 20,000 words, and tested on a different set of 1,000 random words. The training set and test set were independent from run to run.

Figure 2: The top full line corresponds to the generalization error after 300 epochs (300 cycles through the training set) without a weight decay. The lower full line is with a weight decay. The top dotted line is the lowest error seen during learning without a weight decay, and the lower dotted with a weight decay. The size of the weight decay was λ = 0.00008. Insert: Same figure except that the error rate is shown instead of the squared error. The error rate is the fraction of wrong phonemes when the phoneme vector with the smallest angle to the actual output is chosen, see [11].

Results are shown in fig. 2. There is a clear improvement in generalization error when weight decay is used. There is also an improvement in error rate (insert of fig.
2), but it is less pronounced in terms of relative improvement. Results shown here are for a weight decay of λ = 0.00008. The values 0.00005 and 0.0001 were also tried and gave basically the same curves.

7 CONCLUSION

It was shown how a weight decay can improve generalization in two ways: 1) It suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. 2) If the size is chosen right, a weight decay can suppress some of the effect of static noise on the targets. Static noise on the targets can be viewed as a model of learning an unrealizable function. The analysis assumed that the network could be expanded around an optimal weight vector, and therefore it is strictly valid only in a small neighborhood around that vector. The improvement from a weight decay was also tested by simulations. For the NetTalk data it was shown that a weight decay can decrease the generalization error (squared error) and also, although less significantly, the actual mistake rate of the network when the phoneme closest to the output is chosen.

Acknowledgements

AK acknowledges support from the Danish Natural Science Council and the Danish Technical Research Council through the Computational Neural Network Center (CONNECT).

References

[1] D.B. Schwartz, V.K. Samalam, S.A. Solla, and J.S. Denker. Exhaustive learning. Neural Computation, 2:371-382, 1990.
[2] N. Tishby, E. Levin, and S.A. Solla. Consistent inference of probabilities in layered networks: predictions and generalization. In International Joint Conference on Neural Networks, pages 403-410, (Washington 1989), IEEE, New York, 1989.
[3] E.B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1:151-160, 1989.
[4] Y. Le Cun, J.S. Denker, and S.A. Solla. Optimal brain damage. In D.S.
Touretzky, editor, Advances in Neural Information Processing Systems, pages 598-605, (Denver 1989), Morgan Kaufmann, San Mateo, 1990.
[5] H.H. Thodberg. Improving generalization of neural networks through pruning. International Journal of Neural Systems, 1:317-326, 1990.
[6] A.S. Weigend, D.E. Rumelhart, and B.A. Huberman. Generalization by weight-elimination with application to forecasting. In R.P. Lippmann et al., editors, Advances in Neural Information Processing Systems, pages 875-882, (Denver 1990), Morgan Kaufmann, San Mateo, 1991.
[7] G.E. Hinton. Learning translation invariant recognition in a massively parallel network. In G. Goos and J. Hartmanis, editors, PARLE: Parallel Architectures and Languages Europe. Lecture Notes in Computer Science, pages 1-13, Springer-Verlag, Berlin, 1987.
[8] J. Moody. Generalization, weight decay, and architecture selection for nonlinear learning systems. These proceedings.
[9] D. MacKay. A practical Bayesian framework for backprop networks. These proceedings.
[10] A. Krogh and J.A. Hertz. Generalization in a linear perceptron in the presence of noise. To appear in Journal of Physics A, 1992.
[11] T.J. Sejnowski and C.R. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.
[12] J.A. Hertz, A. Krogh, and R.G. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, 1991.
Neural Network Diagnosis of Avascular Necrosis from Magnetic Resonance Images

Armando Manduca, Dept. of Physiology and Biophysics, Mayo Clinic, Rochester, MN 55905
Paul Christy, Dept. of Diagnostic Radiology, Mayo Clinic, Rochester, MN 55905
Richard Ehman, Dept. of Diagnostic Radiology, Mayo Clinic, Rochester, MN 55905

Abstract

Avascular necrosis (AVN) of the femoral head is a common yet potentially serious disorder which can be detected in its very early stages with magnetic resonance imaging. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single magnetic resonance images of the femoral head with 100% accuracy on training data and 97% accuracy on test data.

1 INTRODUCTION

Diagnostic radiology may be a very natural field of application for neural networks, since a simple answer is desired from a complex image, and the learning process that human experts undergo is to a large extent a supervised learning experience based on looking at large numbers of images with known interpretations. Although many workers have applied neural nets to various types of one-dimensional medical data (e.g. ECG and EEG waveforms), little work has been done on applying neural nets to diagnosis directly from medical images.

We wanted to explore the use of neural networks in diagnostic radiology by (1) starting with a simple but real diagnostic problem, and (2) using only actual data. We chose the diagnosis of avascular necrosis from magnetic resonance images as an ideal initial problem, because: the area in question is small and well-defined, its size and shape do not vary greatly between individuals, the condition (if present) is usually visible even at low spatial and gray level resolution on a single image, and real data is readily available. Avascular necrosis (AVN) is the deterioration of tissue due to a disruption in the blood supply.
AVN of the femoral head (the ball at the upper end of the femur which fits into the socket formed by the hip bone) is an increasingly common clinical problem, with potentially crippling effects. Since the sole blood supply to the femoral head in adults traverses the femoral neck, AVN often occurs following hip fracture (e.g., Bo Jackson). It is now apparent that AVN can also occur as a side effect of treatment with corticosteroid drugs, which are commonly used for immunosuppression in transplant patients as well as for patients with asthma, rheumatoid arthritis and other autoimmune diseases. Although the pathogenesis of AVN secondary to corticosteroid use is not well understood, 6-10% of such patients appear to develop the disorder (Ternoven et al., 1990). AVN may be detected with magnetic resonance imaging (MRI) even in its very early stages, as a low signal region within the femoral head due to loss of water-containing bone marrow. MRI is expected to play an important future role in screening patients undergoing corticosteroid therapy for AVN.

2 METHODOLOGY

The data set selected for analysis consisted of 125 sagittal images of femoral heads from T1-weighted MRI scans of 40 adult patients, with 51% showing evidence of AVN, from early stages to quite severe (see Fig. 1). Often both femoral heads from the same patient were selected (typically only one has AVN if the cause is fracture-related while both sometimes have AVN if the cause is secondary to drug use), and often two or three different cross-sectional slices of the same femoral head were included (the appearance of AVN can change dramatically as one steps through different cross-sectional slices). The images were digitized and 128x128 regions centered on and just containing the femoral heads were manually selected. These 128x128 subimages with 256 gray levels were averaged down to 32x32 resolution and to 16 gray levels for most of the trials (see Fig. 2).
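The reduction from 128x128, 256-level subimages to 32x32, 16-level inputs amounts to block averaging followed by requantization. A minimal numpy sketch (the random array stands in for a real subimage; this is an illustration, not the authors' code):

```python
import numpy as np

# Stand-in for a digitized 128x128 subimage with 256 gray levels.
img = np.random.default_rng(1).integers(0, 256, size=(128, 128))

# Average each 4x4 block to go from 128x128 down to 32x32 resolution.
small = img.reshape(32, 4, 32, 4).mean(axis=(1, 3))

# Requantize 256 gray levels down to 16 (integer levels 0..15).
quant = (small // 16).astype(int)

assert small.shape == (32, 32) and 0 <= quant.min() and quant.max() <= 15
```

The reshape trick groups pixels into 4x4 tiles so that a single `mean` over the tile axes performs the block average.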
The neural networks used to analyze the data were standard feed-forward, fully-connected multilayer perceptrons with a single hidden layer of 4 to 30 nodes and 2 output nodes. The majority of the runs were with networks of 1024 input nodes, into which the 32x32 images were placed, with gray levels scaled so the input values ranged within ±0.5. In other experiments with different input features the number of input nodes varied accordingly. Conjugate gradient optimization was used for training (Kramer and Sangiovanni-Vincentelli, 1989; Barnard and Cole, 1989). Training was stopped at a maximum of 50 passes through the training set, though usually convergence was achieved before this point. Each training run took less than 1 minute on a SPARCstation 2.

Figure 1: Representative sagittal hip T1-weighted MR images. The small circular area in the center of each picture is the femoral head (the ball joint at the upper end of the femur). The top image shows a normal femoral head; the bottom is a femoral head with severe avascular necrosis.

Figure 2: Sample images from our 32x32 pixel, 16 gray level data set. The five femoral heads in the right column are free of AVN, the five in the middle column have varying degrees of AVN, while the left column shows five images that were particularly difficult for both the networks and untrained humans to distinguish (only the last two have AVN).

Table 1: Diagnostic Accuracies on Test Data (averages over 24 and 100 runs respectively)

    hidden nodes   50% training   80% training
    none           91.6%          92.6%
    4              92.6%          95.5%
    5              93.2%          96.4%
    6              93.8%          96.4%
    7              93.2%          97.0%
    8              92.4%          96.8%
    10             92.4%          96.1%
    30             91.2%          94.1%

3 RESULTS

Two sets of runs with the image data were made, with the data randomly split 50%-50% and 80%-20% into training and test data sets respectively.
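The classifier of Section 2 is a small two-layer perceptron. The sketch below only illustrates the architecture at the sizes given in the text: it substitutes plain gradient descent for the conjugate-gradient optimizer actually used, synthetic random vectors for images, and sigmoid output units with the cross-entropy loss (all of these substitutions are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1024, 7, 2                  # 32x32 inputs, 7 hidden, 2 outputs
X = rng.uniform(-0.5, 0.5, size=(20, n_in))      # stand-ins for images scaled to +/-0.5
T = np.eye(n_out)[rng.integers(0, n_out, size=20)]  # one-hot normal / AVN targets

W1 = rng.normal(scale=0.05, size=(n_in, n_hid))
W2 = rng.normal(scale=0.05, size=(n_hid, n_out))

def forward(X):
    H = np.tanh(X @ W1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2)))    # sigmoid output units

def loss(Y):
    # Cross-entropy ("entropic") error over both output units.
    return -np.mean(T * np.log(Y) + (1 - T) * np.log(1 - Y))

loss_before = loss(forward(X)[1])
for step in range(500):                          # gradient-descent stand-in for CG
    H, Y = forward(X)
    dZ = Y - T                                   # gradient at output pre-activations
    W2 -= 0.1 * H.T @ dZ / len(X)
    W1 -= 0.1 * X.T @ ((dZ @ W2.T) * (1 - H ** 2)) / len(X)
loss_after = loss(forward(X)[1])
assert loss_after < loss_before                  # training error decreases
```

The chosen diagnosis is simply the output unit with the larger activation, i.e. `Y.argmax(axis=1)`.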
In the first set, 4 different random splits of the data, with either half in turn serving as training or test data, and 3 different random weight initializations each were used for a total of 24 distinct runs for each network configuration. For the other set, since there was less test data, 10 different splits of the data with 10 different weight initializations each were used for a total of 100 distinct runs for each network configuration. The results are shown in Table 1. In all cases, the sensitivity and specificity were approximately equal. Standard deviations of the averages shown were typically 4.0% for the 24 run values and 3.0% for the 100 run values. The overall data set is linearly separable, and networks with no hidden nodes readily achieved 100% on training data and better than 91% on test data. Networks with 2 or 3 hidden nodes were unable to converge on the training data much of the time, but with 4 hidden nodes convergence was restored and accuracy on test data was improved over the linear case. This accuracy increased up to 6 or 7 hidden nodes, and then began a gradual decrease as still more hidden nodes were added. This may be related to overfitting of the training data with the extra degrees of freedom, leading to poorer generalization. Adding a second hidden layer also decreased generalization accuracy. Many other experiments were performed, using as inputs respectively: the 2-D FFT of the images, the power spectrum, features extracted with a ring-wedge detector in frequency space, the image data combined with each of the above, and multiple slight translations of the training and/or test data. None of these yielded an improvement in accuracy over the above, and no approach to date with significantly fewer than 1024 inputs maintained the high accuracies above. We are continuing experiments on other forms of reducing the dimensionality of the input data. 
A few experiments have been run with much larger networks, maintaining the full 128x128 resolution and 256 gray levels, but this also yields no improvement in the results.

4 DISCUSSION

The networks' performance at the 50% training level was comparable to that of humans with no training in radiology, who, supplied with the correct diagnosis for half of the images, averaged 92.5% accuracy on the remaining half. When the networks were trained on a larger set of data, their accuracy improved, to as high as 97.0% when 80% of the data was used for training. We expect this performance to continue to improve as larger data sets are collected. It is difficult to compare the networks' performance to trained radiologists, who can diagnose AVN with essentially 100% accuracy, but who look at multiple cross-sectional images of far higher quality than our low-resolution, 16 gray-level data set. When presented with single images from our data set, they typically make no mistakes but set aside a few images as uncertain and strongly resist being forced to commit to an answer on those. We are currently experimenting with networks which can take inputs from multiple slices and which have an additional output representing uncertainty. We consider the 97% accuracy achieved here to be very encouraging for further work on this problem and for the use of neural networks in more complex problems in diagnostic radiology. This is perhaps a very natural field of application for neural networks, since radiology resident training is essentially a four year experience with a very large training set, and the American College of Radiology teaching file is a classic example of a large collection of input/output training pairs (Boone et al., 1990). More complex diagnostic radiology problems may of course require fusing information from multiple images or imaging modalities, clinical data, and medical knowledge (perhaps as expert system rules).
An especially intriguing possibility is that sophisticated network based systems could someday be presented with images which cannot currently be interpreted, supplied with the correct diagnosis as determined by other means, and learn to detect subtle distinctions in the images that are not apparent to human radiologists.

References

Barnard, E. and Cole, R. (1989) "A neural-net training program based on conjugate gradient optimization", Oregon Graduate Institute, Technical Report CSE 89-014.

Boone, J. M., Sigillito, V. G. and Shaber, G. S. (1990), "Neural networks in radiology: An introduction and evaluation in a signal detection task", Medical Physics, 17, 234-241.

Kramer, A. and Sangiovanni-Vincentelli, A. (1989), "Efficient Parallel Learning Algorithms for Neural Networks", in D. S. Touretzky (ed.) Advances in Neural Information Processing Systems 1, 40-48. Morgan Kaufmann, San Mateo, CA.

Ternoven, O. et al. (1990), "Prevalence of Asymptomatic, Clinically Occult Avascular Necrosis of the Hip in a Population at Risk", Radiology, 177(P), 104.
Obstacle Avoidance through Reinforcement Learning

Tony J. Prescott and John E. W. Mayhew
Artificial Intelligence and Vision Research Unit, University of Sheffield, S10 2TN, England.

Abstract

A method is described for generating plan-like, reflexive, obstacle avoidance behaviour in a mobile robot. The experiments reported here use a simulated vehicle with a primitive range sensor. Avoidance behaviour is encoded as a set of continuous functions of the perceptual input space. These functions are stored using CMACs and trained by a variant of Barto and Sutton's adaptive critic algorithm. As the vehicle explores its surroundings it adapts its responses to sensory stimuli so as to minimise the negative reinforcement arising from collisions. Strategies for local navigation are therefore acquired in an explicitly goal-driven fashion. The resulting trajectories form elegant collision-free paths through the environment.

1 INTRODUCTION

Following Simon's (1969) observation that complex behaviour may simply be the reflection of a complex environment, a number of researchers (eg. Braitenberg 1986, Anderson and Donath 1988, Chapman and Agre 1987) have taken the view that interesting, plan-like behaviour can emerge from the interplay of a set of pre-wired reflexes with regularities in the world. However, the temporal structure in an agent's interaction with its environment can act as more than just a trigger for fixed reactions. Given a suitable learning mechanism it can also be exploited to generate sequences of new responses more suited to the problem in hand. Hence, this paper attempts to show that obstacle avoidance, a basic level of navigation competence, can be developed through learning a set of conditioned responses to perceptual stimuli. In the absence of a teacher a mobile robot can evaluate its performance only in terms of final outcomes.
A negative reinforcement signal can be generated each time a collision occurs, but this information tells the robot neither when nor how, in the train of actions preceding the crash, a mistake was made. In reinforcement learning this credit assignment problem is overcome by forming associations between sensory input patterns and predictions of future outcomes. This allows the generation of internal "secondary reinforcement" signals that can be used to select improved responses. Several authors have discussed the use of reinforcement learning for navigation; this research is inspired primarily by that of Barto, Sutton and co-workers (1981, 1982, 1983, 1989) and Werbos (1990). The principles underlying reinforcement learning have recently been given a firm mathematical basis by Watkins (1989), who has shown that these algorithms are implementing an on-line, incremental approximation to the dynamic programming method for determining optimal control. Sutton (1990) has also made use of these ideas in formulating a novel theory of classical conditioning in animal learning. We aim to develop a reinforcement learning system that will allow a simple mobile robot with minimal sensory apparatus to move at speed around an indoor environment avoiding collisions with stationary or slow moving obstacles. This paper reports preliminary results obtained using a simulation of such a robot.

2 THE ROBOT SIMULATION

Our simulation models a three-wheeled mobile vehicle, called the 'sprite', operating in a simple two-dimensional world (500x500 cm) consisting of walls and obstacles, in which the sprite is represented by a square box (30x30 cm). Restrictions on the acceleration and the braking response of the vehicle model enforce a degree of realism in its ability to initiate fast avoidance behaviour. The perceptual system simulates a laser range-finder giving the logarithmically scaled distance to the nearest obstacle at set angles from its current orientation.
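A simulated range-finder of this kind can be mimicked by marching along each ray through an occupancy grid until an occupied cell is met. The sketch below is our own illustration (grid contents, vehicle pose and step size are all invented for the example), not the authors' simulator:

```python
import numpy as np

WORLD = np.zeros((500, 500), dtype=bool)       # 500x500 cm arena, True = obstacle
WORLD[:, :5] = WORLD[:, -5:] = True            # left and right walls
WORLD[:5, :] = WORLD[-5:, :] = True            # bottom and top walls
WORLD[200:260, 300:360] = True                 # one box obstacle

def log_range(x, y, heading, offset, step=1.0, r_max=400.0):
    """March along a ray at angle heading+offset until an occupied cell is hit;
    return the logarithmically scaled distance, as the sprite's sensor does."""
    a = heading + offset
    r = 0.0
    while r < r_max:
        r += step
        cx, cy = int(x + r * np.cos(a)), int(y + r * np.sin(a))
        if WORLD[cy, cx]:
            break
    return np.log(r)

# Three rays at -60, 0 and +60 degrees from the current heading (facing +y here).
readings = [log_range(250, 100, np.pi / 2, o) for o in (-np.pi / 3, 0.0, np.pi / 3)]
```

With the pose chosen above the central ray travels furthest (it only meets the far wall), so `readings[1]` is the largest of the three log distances.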
An important feature of the research has been to explore the extent to which spatially sparse but frequent data can support complex behaviour. We show below results from simulations using only three rays emitted at angles -60°, 0°, and +60°. The controller operates directly on this unprocessed sensory input. The continuous trajectory of the vehicle is approximated by a sequence of discrete time steps. In each interval the sprite acquires new perceptual data then performs the associated response, generating either a change in position or a feedback signal indicating that a collision has occurred, preventing the move. After a collision the sprite reverses slightly then attempts to rotate and move off at a random angle (90-180° from its original heading); if this is not possible it is relocated to a random starting position.

3 LEARNING ALGORITHM

The sprite learns a multi-parameter policy (Π) and an evaluation (V). These functions are stored using the CMAC coarse-coding architecture (Albus 1971), and updated by a reinforcement learning algorithm similar to that described by Watkins (1989). The action functions comprising the policy are acquired as gaussian probability distributions using the method proposed by Williams (1988). The following gives a brief summary of the algorithm used. Let x_t be the perceptual input pattern at time t and r_t the external reward; then the reinforcement learning error (see Barto et al., 1989) is given by

    ε_{t+1} = r_{t+1} + γV_t(x_{t+1}) - V_t(x_t)    (1)

where γ is a constant (0 < γ < 1). This error is used to adjust V and Π by gradient descent, i.e.

    V_{t+1}(x) = V_t(x) + α ε_{t+1} m_t(x)    (2)

    Π_{t+1}(x) = Π_t(x) + β ε_{t+1} n_t(x)    (3)

where α and β are learning rates and m_t(x) and n_t(x) are the evaluation and policy eligibility traces for pattern x. The eligibility traces can be thought of as activity in short-term memory that enables learning in the LTM store.
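In a tabular setting, the critic/actor updates (1)-(3) with decaying eligibility traces reduce to a few lines. This is an illustrative sketch with made-up constants and state indices, not the CMAC-backed implementation of the paper:

```python
import numpy as np

gamma, alpha, beta, lam = 0.9, 0.1, 0.05, 0.5  # discount, learning rates, trace decay
n_states = 5
V = np.zeros(n_states)       # evaluation function, one entry per input pattern
Pi = np.zeros(n_states)      # one policy parameter per pattern (e.g. an action mean)
m = np.zeros(n_states)       # evaluation eligibility trace
n = np.zeros(n_states)       # policy eligibility trace

def update(x_prev, x_next, r, da):
    """One step: reinforcement-learning error (1), then updates (2) and (3)."""
    eps = r + gamma * V[x_next] - V[x_prev]    # eq. (1)
    m[:] = lam * m                             # older patterns decay
    n[:] = lam * n
    m[x_prev] = 1.0                            # current pattern fully eligible
    n[x_prev] = da                             # exploration gradient of action taken
    V[:] = V + alpha * eps * m                 # eq. (2)
    Pi[:] = Pi + beta * eps * n                # eq. (3)
    return eps

eps = update(0, 1, r=-1.0, da=0.3)             # e.g. a move that ended in a collision
```

After this single step, eps = -1, so both the evaluation of pattern 0 and its policy parameter are pushed downwards in proportion to their trace entries.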
The minimum STM requirement is to remember the last input pattern and the exploration gradient Δa_t of the last action taken (explained below), hence

    m_{t+1}(x) = 1 and n_{t+1}(x) = Δa_t if x is the current pattern,
    m_{t+1}(x) = n_{t+1}(x) = 0 otherwise.    (4)

Learning occurs faster, however, if the memory trace of each pattern is allowed to decay slowly over time, with strength of activity being related to recency. Hence, if the rate of decay is given by λ (0 ≤ λ ≤ 1), then for patterns other than the current one m_{t+1}(x) = λ m_t(x) and n_{t+1}(x) = λ n_t(x). Using a decay rate of less than 1.0 the eligibility trace for any input becomes negligible within a short time, so in practice it is only necessary to store a list of the most recent patterns and actions (in our simulations only the last four values are stored).

The policy acquired by the learning system has two elements (f and ϑ) corresponding to the desired forward and angular velocities of the vehicle. Each element is specified by a gaussian pdf and is encoded by two adjustable parameters denoting its mean and standard deviation (hence the policy as a whole consists of four continuous functions of the input). In each time-step an action is chosen by selecting randomly from the two distributions associated with the current input pattern. In order to update the policy the exploratory component of the action must be computed; this consists of a four-vector with two values for each gaussian element. Following Williams we define a standard gaussian density function g with parameters μ and σ and output y such that

    g(y, μ, σ) = (1/(√(2π)σ)) e^(-(y - μ)²/(2σ²));

the derivatives of the mean and standard deviation¹ are then given by

    Δμ = (y - μ)/σ²  and  Δσ = ((y - μ)² - σ²)/σ³.    (5)

The exploration gradient of the action as a whole is therefore the vector

    Δa_t = [Δμ_f, Δσ_f, Δμ_ϑ, Δσ_ϑ].    (6)

The four policy functions and the evaluation function are each stored using a CMAC table. This technique is a form of coarse-coding whereby the euclidean space in which a function lies is divided into a set of overlapping but offset tilings. Each tiling consists of regular regions of pre-defined size such that all points within each region are mapped to a single stored parameter. The value of the function at any point is given by the average of the parameters stored for the corresponding regions in all of the tilings. In our simulation each sensory dimension is quantised into five discrete bins resulting in a 5x5x5 tiling; five tilings are overlaid to form each CMAC. If the input space is enlarged (perhaps by adding further sensors) the storage requirements can be reduced by using a hashing function to map all the tiles onto a smaller number of parameters. This is a useful economy when there are large areas of the state space that are visited rarely or not at all.

¹In practice we use ln σ as the second adjustable parameter to ensure that the standard deviation of the gaussian never has a negative value (see Williams 1988 for details).

4 EXPLORATION

In order for the sprite to learn useful obstacle avoidance behaviour it has to move around and explore its environment. If the sprite is rewarded simply for avoiding collisions, an optimal strategy would be to remain still or to stay within a small, safe, circular orbit. Therefore to force the sprite to explore its world a second source of reinforcement is used which is a function of its current forward velocity and encourages it to maintain an optimal speed. To further promote adventurous behaviour the initial policy over the whole state-space is for the sprite to have a positive speed. A system which has a high initial expectation of future rewards will settle less rapidly for a locally optimal solution than one with a low expectation. Therefore the value function is set initially to the maximum reward attainable by the sprite.
Improved policies are found by deviating from the currently preferred set of actions. However, there is a trade-off to be made between exploiting the existing policy to maximise the short term reward and experimenting with untried actions that have potentially negative consequences but may eventually lead to a better policy. This suggests that an annealing process should be applied to the degree of noise in the policy. In fact, the algorithm described above results in an automatic annealing process (Williams 1988), since the variance of each gaussian element decreases as the mean behaviour converges to a local maximum. However, the width of each gaussian can also increase, if the mean is locally sub-optimal, allowing for more exploratory behaviour. The final width of the gaussian depends on whether the local peak in the action function is narrow or flat on top. The behaviour acquired by the system is therefore more than a set of simple reflexes. Rather, for each circumstance, there is a range of acceptable actions which is narrow if the robot is in a tight corner, where its behaviour is severely constrained, but wider in more open spaces.

5 RESULTS

To test the effectiveness of the learning algorithm the performance of the sprite was compared before and after fifty thousand training steps on a number of simple environments. Over 10 independent runs² in the first environment shown in figure one, the average distance travelled between collisions rose from approximately 0.9m (1b) before learning to 47.4m (1c) after training. At the same time the average velocity more than doubled to just below the optimal speed. The requirement of maintaining an optimum speed encourages the sprite to follow trajectories that avoid slowing down, stopping or reversing. However, if the sprite is placed too close to an obstacle to turn away safely, it can perform an n-point-turn manoeuvre requiring it to stop, back-off, turn and then move forward.
It is thus capable of generating quite complex sequences of actions.

²Each measure was calculated over a sequence of five thousand simulation-steps with learning disabled.

a) Robot casting three rays; b) trajectories before training; c) after training; d) in a novel environment.

Figure One: Sample Paths from the Obstacle Avoidance Simulation. The trajectories show the robot's movement over two thousand simulation steps before and after training. After a collision the robot reverses slightly then rotates to move off at a random angle 90-180° from its original heading; if this is not possible it is relocated to a random position. Crosses indicate locations where collisions occurred, circles show new starting positions.

Some differences have been found in the sprite's ability to negotiate different environments, with the effectiveness of the avoidance learning system varying for different configurations of obstacles. However, only limited performance loss has been observed in transferring from a learned environment to an unseen one (eg.
figure 1d), which is quickly made up if the sprite is allowed to adapt its strategies to suit the new circumstances. Hence we are encouraged to think that the learning system is capturing some fairly general strategies for obstacle avoidance. The different kinds of tactical behaviour acquired by the sprite can be illustrated using three-dimensional slices through the two policy functions (desired forward and angular velocities). Figure two shows samples of these functions recorded after fifty thousand training steps in an environment containing two slow moving rectangular obstacles. Each graph is a function of the three rays cast out by the sprite: the x and y axes show the depths of the left and right rays and the vertical slices correspond to different depths of the central ray (9, 35 and 74 cm). The graphs show clearly several features that we might expect of effective avoidance behaviour. Most notably, there is a transition occurring over the three slices during which the policy changes from one of braking then reversing (graph a) to one of turning sharply (d) whilst maintaining speed or accelerating (e). This transition clearly corresponds to the threshold below which a collision cannot be avoided by swerving but requires backing-off instead. There is a considerable degree of left-right symmetry (reflection along the line left-ray = right-ray) in most of the graphs. This agrees with the observation that obstacle avoidance is by and large a symmetric problem. However some asymmetric behaviour is acquired in order to break the deadlock that arises when the sprite is faced with obstacles that are equidistant on both sides.

6 CONCLUSION

We have demonstrated that complex obstacle avoidance behaviour can arise from sequences of learned reactions to immediate perceptual stimuli. The trajectories generated often have the appearance of planned activity since individual actions are only appropriate as part of extended patterns of movement.
However, planning only occurs as an implicit part of a learning process that allows experience of rewarding outcomes to be propagated backwards to influence future actions taken in similar contexts. This learning process is effective because it is able to exploit the underlying regularities in the robot's interaction with its world to find behaviours that consistently achieve its goals.

Acknowledgements

This work was supported by the Science and Engineering Research Council.

References

Albus, J.S. (1971) A theory of cerebellar function. Math Biosci 10:25-61.

Anderson, T.L., and Donath, M. (1988a) Synthesis of reflexive behaviour for a mobile robot based upon a stimulus-response paradigm. SPIE Mobile Robots III, 1007:198-210.

Anderson, T.L., and Donath, M. (1988b) A computational structure for enforcing reactive behaviour in a mobile robot. SPIE Mobile Robots III 1007:370-382.

Barto, A.G., Sutton, R.S., and Brouwer, P.S. (1981) Associative search network: A reinforcement learning associative memory. Biological Cybernetics 40:201-211.

[Figure Two: Surfaces showing action policies (left column: forward velocity; right column: angular velocity) for depth measures for the central ray of 9, 35 and 74 cm.]

Barto, A.G., Anderson, C.W., and Sutton, R.S. (1982) Synthesis of nonlinear control surfaces by a layered associative search network. Biological Cybernetics 43:175-185.

Barto, A.G., Sutton, R.S., Anderson, C.W. (1983) Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics SMC-13:834-846.

Barto, A.G., Sutton, R.S., and Watkins, C.J.C.H. (1989) Learning and sequential decision making. COINS technical report.

Braitenberg, V. (1986) Vehicles: experiments in synthetic psychology, MIT Press, Cambridge, MA.

Chapman, D. and Agre, P.E.
(1987) Pengi: An implementation of a theory of activity. AAAI-87.

Simon, H.A. (1969) The sciences of the artificial. MIT Press, Cambridge, MA.

Sutton, R.S. and Barto, A.G. (1990) Time-derivative models of Pavlovian reinforcement. In Moore, J.W., and Gabriel, M. (eds.) Learning and Computational Neuroscience. MIT Press, Cambridge, MA.

Watkins, C.J.C.H. (1989) Learning from delayed rewards. PhD thesis, King's College, Cambridge University, UK.

Werbos, P.J. (1990) A menu of designs for reinforcement learning over time. In Miller, III, W.T., Sutton, R.S. and Werbos, P.J. (eds.) Neural networks for control, MIT Press, Cambridge, MA.

Williams, R.J. (1988) Towards a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, College of Computer Science, Northeastern University, Boston, MA.
1991
82
554
The VC-Dimension versus the Statistical Capacity of Multilayer Networks

Chuanyi Ji* and Demetri Psaltis
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125

*Present address: Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180.

Abstract

A general relationship is developed between the VC-dimension and the statistical lower epsilon-capacity which shows that the VC-dimension can be lower bounded (in order) by the statistical lower epsilon-capacity of a network trained with random samples. This relationship explains quantitatively how generalization takes place after memorization, and relates the concept of generalization (consistency) with the capacity of the optimal classifier over a class of classifiers with the same structure and the capacity of the Bayesian classifier. Furthermore, it provides a general methodology to evaluate a lower bound for the VC-dimension of feedforward multilayer neural networks. This general methodology is applied to two types of networks which are important for hardware implementations: two-layer (N - 2L - 1) networks with binary weights, integer thresholds for the hidden units and zero threshold for the output unit, and a single neuron ((N - 1) networks) with binary weights and a zero threshold. Specifically, we obtain $O(\frac{W}{\ln L}) \le d_2 \le O(W)$, and $d_1 \sim O(N)$. Here $W$ is the total number of weights of the (N - 2L - 1) networks; $d_1$ and $d_2$ represent the VC-dimensions of the (N - 1) and (N - 2L - 1) networks respectively.

1 Introduction

The information capacity and the VC-dimension are two important quantities that characterize multilayer feedforward neural networks. The former characterizes their memorization capability, while the latter represents the sample complexity needed for generalization.
Discovering their relationship is of importance for obtaining a better understanding of the fundamental properties of multilayer networks in learning and generalization. In this work we show that the VC-dimension of feedforward multilayer neural networks, which is a distribution- and network-parameter-independent quantity, can be lower bounded (in order) by the statistical lower epsilon-capacity $C^l_\epsilon$ (McEliece et al., 1987), which is a distribution- and network-dependent quantity, when the samples are drawn from two classes: $\Omega_1(+1)$ and $\Omega_2(-1)$. The only requirement on the distribution from which samples are drawn is that the optimal classification error achievable, the Bayes error $P_{be}$, is greater than zero. Then we will show that the VC-dimension $d$ and the statistical lower epsilon-capacity $C^l_\epsilon$ are related by

$$C^l_\epsilon \le A d, \qquad (1)$$

where $\epsilon = P_{eo} - \epsilon'$ for $0 < \epsilon' \le P_{eo}$, or $\epsilon = P_{be} - \epsilon'$ for $0 < \epsilon' \le P_{be}$. Here $\epsilon'$ is the error tolerance, and $P_{eo}$ represents the optimal error rate achievable on the class of classifiers considered. It is obvious that $P_{eo} \ge P_{be}$. The relation given in Equation (1) is non-trivial if $P_{be} > 0$, and $P_{eo} \ge \epsilon'$ or $P_{be} \ge \epsilon'$, so that $\epsilon$ is a nonnegative quantity. $Ad$ is called the universal sample bound for generalization, where $A$ is a positive constant determined by $\epsilon'$ (see Theorem 1). When the sample complexity exceeds $Ad$, all the networks of the same architecture, for all distributions of the samples, can generalize with probability almost 1 for $d$ large. A special case of interest, in which $P_{be} = \frac{1}{2}$, corresponds to random assignments of samples. Then $C^l_\epsilon$ represents the random storage capacity which characterizes the memorizing capability of networks. Although the VC-dimension is a key parameter in generalization, there exists no systematic way of finding it. The relationship we have obtained, however, brings concomitantly a constructive method of finding a lower bound for the VC-dimension of multilayer networks.
That is, if the weights of a network are properly constructed using random samples drawn from a chosen distribution, the statistical lower epsilon-capacity can be evaluated and then utilized as a bound for the VC-dimension. In this paper we will show how this constructive approach contributes to finding lower bounds for the VC-dimension of multilayer networks with binary weights.

2 A Relationship Between the VC-Dimension and the Statistical Capacity

2.1 Definition of the Statistical Capacity

Consider a network $s$ whose weights are constructed from $M$ random samples belonging to two classes. Let $r(s) = \frac{Z}{M}$, where $Z$ is the total number of samples classified incorrectly by the network $s$. Then the random variable $r(s)$ is the training error rate. Let

$$P_f(M) = \Pr\{\, r(s) \le \epsilon \,\}, \qquad (2)$$

where $0 < \epsilon \le 1$. Then the statistical lower epsilon-capacity (statistical capacity in short) $C^l_\epsilon$ is the maximum $M$ such that $P_f(M) \ge 1 - \eta$, where $\eta$ can be arbitrarily small for sufficiently large $N$. Roughly speaking, the statistical lower epsilon-capacity defined here can be regarded as a sharp transition point on the curve $P_f(M)$ shown in Fig. 1. When the number of samples used is below this sharp transition, the network can memorize them perfectly.

2.2 The Universal Sample Bound for Generalization

Let $P_e(x|s)$ be the true probability of error for the network $s$. Then the generalization error $\Delta E(s)$ satisfies $\Delta E(s) = |\, r(s) - P_e(x|s) \,|$. We can show that the probability for the generalization error to exceed a given small quantity $\epsilon'$ satisfies the following relation.

Theorem 1

$$\Pr(\max_{s \in S} \Delta E(s) > \epsilon') \le h(2M; d, \epsilon'), \qquad (3)$$

where

$$h(2M; d, \epsilon') = \begin{cases} 1, & \text{either } 2M \le d, \text{ or } 6\frac{(2M)^d}{d!} e^{-\frac{\epsilon'^2 M}{8}} \ge 1 \text{ and } 2M > d; \\ 6\frac{(2M)^d}{d!} e^{-\frac{\epsilon'^2 M}{8}}, & \text{otherwise.} \end{cases}$$

Here $S$ is a class of networks with the same architecture. The function $h(2M; d, \epsilon')$ has one sharp transition occurring at $Ad$, shown in Fig. 1, where $A$ is a constant satisfying the equation $\ln(2A) + 1 - \frac{\epsilon'^2 A}{8} = 0$.
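The sharp transition in $P_f(M)$ is easy to observe numerically. The sketch below is our own illustration, not code from the paper: it uses a pseudo-inverse linear classifier (an assumption; the paper's constructions use threshold units with binary weights) to memorize random $\pm 1$ samples with random labels, and estimates $P_f(M) = \Pr\{r(s) \le \epsilon\}$ well below and well above the memorization capacity.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_error(N, M):
    """Fit a linear threshold unit to M random +/-1 samples with random +/-1
    labels via the pseudo-inverse, and return its training error rate r(s)."""
    X = rng.choice([-1.0, 1.0], size=(M, N))
    y = rng.choice([-1.0, 1.0], size=M)
    w = np.linalg.pinv(X) @ y              # least-squares weight vector
    return float(np.mean(np.sign(X @ w) != y))

def P_f(N, M, eps, trials=30):
    """Monte-Carlo estimate of P_f(M) = Pr(r(s) <= eps)."""
    return float(np.mean([training_error(N, M) <= eps for _ in range(trials)]))

N = 50
below = P_f(N, M=25, eps=0.05)    # well below capacity: memorization succeeds
above = P_f(N, M=400, eps=0.05)   # well above capacity: memorization fails
```

With $M \le N$ the random sample matrix has full row rank with overwhelming probability, so the pseudo-inverse interpolates the labels exactly and `below` is essentially 1; with $M \gg N$ the training error rate approaches 1/2 and `above` is essentially 0, reproducing the two plateaus on either side of the transition.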
This theorem says that when the number $M$ of samples used exceeds $Ad$, generalization happens with probability 1. Since $Ad$ is a distribution- and network-parameter-independent quantity, we call it the universal sample bound for generalization.

2.3 A Relationship Between the VC-Dimension and $C^l_\epsilon$

Roughly speaking, since both the statistical capacity and the VC-dimension represent sharp transition points, it is natural to ask whether they are related. The relationship can actually be given through the theorem below.

Theorem 2 Let samples belonging to two classes $\Omega_1(+1)$ and $\Omega_2(-1)$ be drawn independently from some distribution. The only requirement on the distributions considered is that the Bayes error $P_{be}$ satisfies $0 < P_{be} \le \frac{1}{2}$. Let $S$ be a class of feedforward multilayer networks with a fixed structure consisting of threshold elements, and $s_1$ be one network in $S$, where the weights of $s_1$ are constructed from $M$ (training) samples drawn from one distribution as specified above. For a given distribution, let $P_{eo}$ be the optimal error rate achievable on $S$ and $P_{be}$ be the Bayes error rate. Then

$$\Pr(r(s_1) < P_{eo} - \epsilon') \le h(2M; d, \epsilon'), \qquad (4)$$

and

$$\Pr(r(s_1) < P_{be} - \epsilon') \le h(2M; d, \epsilon'), \qquad (5)$$

where $r(s_1)$ is equal to the training error rate of $s_1$. (It is also called the resubstitution error estimator in the pattern recognition literature.) These relations are nontrivial if $P_{eo} > \epsilon'$, $P_{be} > \epsilon'$ and $\epsilon' > 0$ is small.

[Figure 1: Two sharp transition points, for the capacity and for the universal sample bound for generalization.]

The key idea of this result is illustrated in Fig. 1. That is, the sharp transition which stands for the lower epsilon-capacity is below the sharp transition for the universal sample bound for generalization. To interpret this relation, let us compare Equation (2) and Equation (5) and examine the range of $\epsilon$ and $\epsilon'$ respectively.
Since $\epsilon'$, which is initially given in Inequality (3), represents a bound on the generalization error, it is usually quite small. For most practical problems, $P_{be}$ is small also. If the structure of the class of networks is properly chosen so that $P_{eo} \approx P_{be}$, then $\epsilon = P_{eo} - \epsilon'$ will be a small quantity. Although the epsilon-capacity is a valid quantity depending on $M$ for any network in the class, for $M$ sufficiently large, the meaningful networks to be considered through this relation are only a small subset of the class whose true probability of error is close to $P_{eo}$. That is, this small subset contains only those networks which can approximate the best classifier contained in this class. For a special case in which samples are assigned randomly to two classes with equal probability, we have a result stated in Corollary 1.

Corollary 1 Let samples be drawn independently from some distribution and then assigned randomly to two classes $\Omega_1(+1)$ and $\Omega_2(-1)$ with equal probability. This is equivalent to the case in which the two class-conditional distributions have complete overlap with one another, that is, $\Pr(x \mid \Omega_1) = \Pr(x \mid \Omega_2)$. Then the Bayes error is $\frac{1}{2}$. Using the same notation as in the above theorem, we have

$$C^l_{\frac{1}{2}-\epsilon'} \le Ad. \qquad (6)$$

Although the distributions specified here give an uninteresting case for classification purposes, we will see later that the random statistical epsilon-capacity in Inequality (6) can be used to characterize the memorizing capability of networks, and to formulate a constructive approach to find a lower bound for the VC-dimension.

3 Bounds for the VC-Dimension of Two Networks with Binary Weights

3.1 A Constructive Methodology

One of the applications of this relation is that it provides a general constructive approach to find a lower bound for the VC-dimension for a class of networks. Specifically, using the relationship given in Inequality (6), the procedure can be described as follows. 1) Select a distribution.
2) Draw samples independently from the chosen distribution, and then assign them randomly to two classes. 3) Evaluate the lower epsilon-capacity and then use it as a lower bound for the VC-dimension. Two examples are given below to demonstrate how this general approach can be applied to find lower bounds for the VC-dimension.

3.2 Bounds for Two-Layer Networks with Binary Weights

Two-layer (N - 2L - 1) networks with binary weights and integer thresholds are considered in this section.

3.2.1 A Lower Bound

The construction of the network we consider is motivated by the one used by Baum (Baum, 1988) in finding the capacity of two-layer networks with real weights. Although this particular network will fail if the accuracy of the weights and the thresholds is reduced, the idea of using the grandmother-cell type of network will be adopted to construct our network. We consider a two-layer binary network with $2L$ hidden threshold units and one output threshold unit, shown in Fig. 2 a). The weights at the second layer are fixed and equal to +1 and -1 alternately. The hidden units are allowed to have integer thresholds in $[-N, N]$, and the threshold for the output unit is zero. Let $x_l^{(m)} = (x_{l1}^{(m)}, \ldots, x_{lN}^{(m)})$ be an $N$-dimensional random vector, where the $x_{li}^{(m)}$'s are independent random variables taking +1 and -1 with equal probability $\frac{1}{2}$, $0 \le l \le L$, and $0 \le m \le M$. Consider the $l$th pair of hidden units. The weights at the first layer for this pair of hidden units are equal. Let $w_{li}$ denote the weight from the $i$th input to these two hidden units; then we have

$$w_{li} = \mathrm{sgn}\left(\alpha_l \sum_{m=1}^{M} x_{li}^{(m)}\right), \qquad (7)$$

where $\mathrm{sgn}(x) = 1$ if $x > 0$, and $-1$ otherwise.

[Figure 2: a) The two-layer network with binary weights. b) Illustration of how a pair of hidden units separates samples.]
The $\alpha_l$'s, $1 \le l \le L$, which are independent random variables taking the two values +1 and -1 with equal probability, represent the random assignments of the $LM$ samples into the two classes $\Omega_1(+1)$ and $\Omega_2(-1)$. The thresholds for these two units are different and are given in Equation (8), where $0 < k < 1$, and $t_{l\pm}$ correspond to the thresholds for the units with weight +1 and -1 at the second layer respectively. Fig. 2 b) illustrates how this network works. Each pair of hidden units forms two parallel hyperplanes separated by the two thresholds; the pair generates a presynaptic input of either +2 or -2 to the output unit, only for the samples stored in this pair which fall in between the planes, according to whether $\alpha_l$ equals +1 or -1, and a presynaptic input of 0 for the samples falling outside. When the samples as well as the parallel hyperplanes are random, with a certain probability they will fall either between a pair of parallel hyperplanes or outside. Therefore, statistical analysis is needed to obtain the lower epsilon-capacity.

Theorem 3 A lower bound $\hat{C}^l_{\frac{1}{2}-\epsilon'}$ for the lower epsilon-capacity $C^l_{\frac{1}{2}-\epsilon'}$ of this network is

$$\hat{C}^l_{\frac{1}{2}-\epsilon'} \sim (1-k)^2 N L. \qquad (9)$$

3.2.2 An Upper Bound

Since the total number of possible mappings of two-layer (N - 2L - 1) networks with binary weights and integer thresholds ranging in $[-N, N]$ is bounded by $2^{W + L \log 2N}$, the VC-dimension $d_2$ is upper bounded by $W + L \log 2N$, which is of the order of $W$. Then $d_2 \le O(W)$. Combining the upper and lower bounds, we have

$$O\left(\frac{W}{\ln L}\right) \le d_2 \le O(W). \qquad (10)$$

3.3 Bounds for One-Layer Networks with Binary Weights

The one-layer network we consider here is equivalent to one hidden unit of the above (N - 2L - 1) network. Specifically, the weight from the $i$th input unit to the neuron is

$$w_i = \mathrm{sgn}\left(\sum_{m=1}^{M} \sigma_m x_i^{(m)}\right), \qquad (11)$$

where $1 \le i \le N$, and the $x_i^{(m)}$'s and $\sigma_m$'s are independent and equally probable binary ($\pm 1$) random variables, which represent elements of the $N$-dimensional sample vectors and their random assignments to two classes respectively.
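The one-layer construction of Equation (11) can be simulated directly. The following sketch is our own numerical illustration, not code from the paper (the sample sizes are arbitrary assumptions): it forms the binary weights from random samples and random class assignments $\sigma_m$, then measures the training error rate, which stays small well below capacity and climbs toward 1/2 once $M$ is far beyond it.

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_training_error(N, M):
    """One-layer binary-weight neuron of Eq. (11): w_i = sgn(sum_m sigma_m x_i^(m)).
    Returns the fraction of the M stored samples that it misclassifies."""
    X = rng.choice([-1.0, 1.0], size=(M, N))   # sample vectors x^(m)
    sigma = rng.choice([-1.0, 1.0], size=M)    # random class assignments
    w = np.sign(sigma @ X)                     # the binary weights
    return float(np.mean(np.sign(X @ w) != sigma))

N = 200
err_small = np.mean([hebbian_training_error(N, 50) for _ in range(20)])
err_large = np.mean([hebbian_training_error(N, 2000) for _ in range(20)])
```

Averaged over repeated constructions, the training error rate for $M = 50 \ll$ capacity is close to zero, while for $M = 2000$ it approaches the chance level of 1/2, which is the behaviour behind the $C^l_{\frac{1}{2}-\epsilon'}$ scaling stated in Theorem 4.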
Theorem 4 The lower epsilon-capacity $C^l_{\frac{1}{2}-\epsilon'}$ of this network satisfies

$$C^l_{\frac{1}{2}-\epsilon'} \sim \frac{N}{2\pi\epsilon'^2}. \qquad (12)$$

Then by Corollary 1 we have $O(N) \le O(d_1)$, where $d_1$ is the VC-dimension of one-layer (N - 1) networks. Using a similar counting argument, an upper bound can be obtained as $d_1 \le N$. Then, combining the lower and upper bounds, we have $d_1 \sim O(N)$.

4 Discussions

The general relationship we have drawn between the VC-dimension and the statistical lower epsilon-capacity provides a new view of the sample complexity for generalization. Specifically, it has two implications for learning and generalization. 1) For random assignments of the samples ($P_{be} = \frac{1}{2}$), the relationship confirms that generalization occurs after memorization, since the statistical lower epsilon-capacity for this case is the random storage capacity which characterizes the memorizing capability of networks, and it is upper bounded by the universal sample bound for generalization. 2) For cases where the Bayes error is smaller than $\frac{1}{2}$, the relationship indicates that an appropriate choice of a network structure is very important. If a network structure is properly chosen so that the optimal achievable error rate $P_{eo}$ is close to the Bayes error $P_{be}$, then the optimal network in this class is the one which has the largest lower epsilon-capacity. Since a suitable structure can hardly be chosen a priori, due to the lack of knowledge about the underlying distribution, searching for network structures as well as weight values becomes necessary. A similar idea has been addressed by Devroye (Devroye, 1988) and by Vapnik (Vapnik, 1982) for structural minimization. We have applied this relation as a general constructive approach to obtain lower bounds for the VC-dimension of two-layer and one-layer networks with binary interconnections. For the one-layer networks, the lower bound is tight and matches the upper bound.
For the two-layer networks, the lower bound is smaller than the upper bound (in order) by a ln factor. In independent work by Littlestone (Littlestone, 1988), the VC-dimension of so-called DNF expressions was obtained. Since any DNF expression can be implemented by a two-layer network of threshold units with binary weights and integer thresholds, this result is equivalent to showing that the VC-dimension of such networks is $O(W)$. We believe that the ln factor in our lower bound is due to the limitations of the grandmother-cell type of networks used in our construction.

Acknowledgement

The authors would like to thank Yaser Abu-Mostafa and David Haussler for helpful discussions. The support of AFOSR and DARPA is gratefully acknowledged.

References

E. Baum. (1988) On the Capacity of Multilayer Perceptron. J. of Complexity, 4:193-215.

L. Devroye. (1988) Automatic Pattern Recognition: A Study of the Probability of Error. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 10, No. 4: 530-543.

N. Littlestone. (1988) Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm. Machine Learning 2: 285-318.

R.J. McEliece, E.C. Posner, E.R. Rodemich, S.S. Venkatesh. (1987) The Capacity of the Hopfield Associative Memory. IEEE Trans. Inform. Theory, Vol. IT-33, No. 4, 461-482.

V.N. Vapnik. (1982) Estimation of Dependences Based on Empirical Data, New York: Springer-Verlag.
1991
83
555
Improving the Performance of Radial Basis Function Networks by Learning Center Locations

Dietrich Wettschereck
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202

Thomas Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331-3202

Abstract

Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correctly, while GRBF scores 73.4% letters correct (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.

1 Introduction

Radial basis function (RBF) networks are 3-layer feed-forward networks in which each hidden unit $a$ computes the function

$$f_a(\mathbf{x}) = e^{-\frac{\|\mathbf{x}-\mathbf{x}_a\|^2}{\sigma^2}},$$

and the output units compute a weighted sum of these hidden-unit activations:

$$f^*(\mathbf{x}) = \sum_{a=1}^{N} c_a f_a(\mathbf{x}).$$

In other words, the value of $f^*(\mathbf{x})$ is determined by computing the Euclidean distance between $\mathbf{x}$ and a set of $N$ centers, $\mathbf{x}_a$. These distances are then passed through Gaussians (with variance $\sigma^2$ and zero mean), weighted by $c_a$, and summed.
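As a concrete, if toy, illustration of these two formulas, the following sketch (ours, not the authors'; the 1-D regression data, the grid placement of the centers, and the value of $\sigma^2$ are arbitrary assumptions) computes the hidden-unit activations for a set of fixed centers and then solves for the output weights $c_a$ with the pseudo-inverse:

```python
import numpy as np

def rbf_activations(X, centers, sigma2):
    """f_a(x) = exp(-||x - x_a||^2 / sigma^2) for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

# Toy 1-D regression: centers fixed on a grid (in the paper they come from
# unsupervised methods such as k-means); output weights by the pseudo-inverse.
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
centers = np.linspace(0.0, 1.0, 10)[:, None]
F = rbf_activations(X, centers, sigma2=0.02)   # hidden-unit activations
c = np.linalg.pinv(F) @ y                      # output weights c_a
y_hat = F @ c                                  # f*(x) on the training points
```

Because only the output weights are fit, training reduces to one linear least-squares solve, which is the source of the speed advantage discussed below.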
Radial basis function (RBF) networks provide an attractive alternative to sigmoid networks for learning real-valued mappings: (a) they provide excellent approximations to smooth functions (Poggio & Girosi, 1989), (b) their "centers" are interpretable as "prototypes", and (c) they can be learned very quickly, because the center locations ($\mathbf{x}_a$) can be determined by unsupervised learning algorithms and the weights ($c_a$) can be computed by pseudo-inverse methods (Moody and Darken, 1989). Although the application of unsupervised methods to learn the center locations does yield very efficient training, there is some evidence that the generalization performance of RBF networks is inferior to that of sigmoid networks. Moody and Darken (1989), for example, report that their RBF network must receive 10 times more training data than a standard sigmoidal network in order to attain comparable generalization performance on the Mackey-Glass time-series task. There are several plausible explanations for this performance gap. First, in sigmoid networks, all parameters are determined by supervised learning, whereas in RBF networks, typically only the learning of the output weights has been supervised. Second, the use of Euclidean distance to compute $\|\mathbf{x} - \mathbf{x}_a\|$ assumes that all input features are equally important. In many applications, this assumption is known to be false, so this could yield poor results. The purpose of this paper is twofold. First, we carefully tested the performance of RBF networks on the well-known NETtalk task (Sejnowski & Rosenberg, 1987) and compared it to the performance of a wide variety of algorithms that we have previously tested on this task (Dietterich, Hild, & Bakiri, 1990). The results confirm that there is a substantial gap between RBF generalization and other methods.
Second, we evaluated the benefits of employing supervised learning to learn (a) the center locations $\mathbf{x}_a$, (b) weights $w_i$ for a weighted distance metric, and (c) variances $\sigma_a^2$ for each center. The results show that supervised learning of the center locations and weights improves performance, while supervised learning of the variances or of combinations of center locations, variances, and weights did not. The best performance was obtained by supervised learning of only the center locations (and the output weights, of course). In the remainder of the paper we first describe our testing methodology and review the NETtalk domain. Then, we present the results of our comparison of RBF with other methods. Finally, we describe the performance obtained from supervised learning of weights, variances, and center locations.

2 Methodology

All of the learning algorithms described in this paper have several parameters (such as the number of centers and the criterion for stopping training) that must be specified by the user. To set these parameters in a principled fashion, we employed the cross-validation methodology described by Lang, Hinton & Waibel (1990). First, as usual, we randomly partitioned our dataset into a training set and a test set. Then, we further divided the training set into a subtraining set and a cross-validation set. Alternative values for the user-specified parameters were then tried while training on the subtraining set and testing on the cross-validation set. The best-performing parameter values were then employed to train a network on the full training set. The generalization performance of the resulting network is then measured on the test set. Using this methodology, no information from the test set is used to determine any parameters during training.
We explored the following parameters: (a) the number of hidden units (centers) $N$, (b) the method for choosing the initial locations of the centers, (c) the variance $\sigma^2$ (when it was not subject to supervised learning), and (d) (whenever supervised training was involved) the stopping squared error per example. We tried $N$ = 50, 100, 150, 200, and 250; $\sigma^2$ = 1, 2, 4, 5, 10, 20, and 50; and three different initialization procedures: (a) use a subset of the training examples, (b) use an unsupervised version of the IB2 algorithm of Aha, Kibler & Albert (1991), and (c) apply k-means clustering, starting with the centers from (a). For all methods, we applied the pseudo-inverse technique of Penrose (1955), followed by Gaussian elimination, to set the output weights. To perform supervised learning of center locations, feature weights, and variances, we applied conjugate-gradient optimization. We modified the conjugate-gradient implementation of backpropagation supplied by Barnard & Cole (1989).

3 The NETtalk Domain

We tested all networks on the NETtalk task (Sejnowski & Rosenberg, 1987), in which the goal is to learn to pronounce English words by studying a dictionary of correct pronunciations. We replicated the formulation of Sejnowski & Rosenberg in which the task is to learn to map each individual letter in a word to a phoneme and a stress. Two disjoint sets of 1000 words were drawn at random from the NETtalk dictionary of 20,002 words (made available by Sejnowski and Rosenberg): one for training and one for testing. The training set was further subdivided into an 800-word subtraining set and a 200-word cross-validation set. To encode the words in the dictionary, we replicated the encoding of Sejnowski & Rosenberg (1987): each input vector encodes a 7-letter window centered on the letter to be pronounced. Letters beyond the ends of the word are encoded as blanks.
Each letter is locally encoded as a 29-bit string (26 bits for the letters, plus one bit each for comma, space, and period) with exactly one bit on. This gives 203 input bits, seven of which are 1 while all others are 0. Each phoneme and stress pair was encoded using the 26-bit distributed code developed by Sejnowski & Rosenberg, in which the bit positions correspond to distinctive features of the phonemes and stresses (e.g., voiced/unvoiced, stop, etc.).

4 RBF Performance on the NETtalk Task

We began by testing RBF on the NETtalk task. Cross-validation training determined that peak RBF generalization was obtained with $N$ = 250 (the number of centers), $\sigma^2$ = 5 (constant for all centers), and the locations of the centers computed by k-means clustering. Table 1 shows the performance of RBF on the 1000-word test set in comparison with several other algorithms: nearest neighbor, the decision tree algorithm ID3 (Quinlan, 1986), sigmoid networks trained via backpropagation (160 hidden units, cross-validation training, learning rate 0.25, momentum 0.9), Wolpert's (1990) HERBIE algorithm (with weights set via mutual information), and ID3 with error-correcting output codes (ECC, Dietterich & Bakiri, 1991).

Table 1: Generalization performance on the NETtalk task (% correct, 1000-word test set).

  Algorithm           Word       Letter     Phoneme    Stress
  Nearest neighbor     3.3       53.1       61.1       74.0
  RBF                  3.7       57.0*****  65.6*****  80.3*****
  ID3                  9.6*****  65.6*****  78.7*****  77.2*****
  Backpropagation     13.6**     70.6*****  80.8****   81.3*****
  Wolpert             15.0       72.2*      82.6*****  80.2
  ID3 + 127-bit ECC   20.0***    73.7*      85.6*****  81.1

  Prior row different, p < .05* .01** .005*** .002**** .001*****

Performance is shown at several levels of aggregation. The "stress" column indicates the percentage of stress assignments correctly classified. The "phoneme" column shows the percentage of phonemes correctly assigned.
A "letter" is correct if the phoneme and stress are correctly assigned, and a "word" is correct if all letters in the word are correctly classified. Also shown are the results of a two-tailed test for the difference of two proportions, which was conducted for each row and the row preceding it in the table. From this table, it is clear that RBF performs substantially below virtually all of the algorithms except nearest neighbor. There is certainly room for supervised learning of RBF parameters to improve on this.

5 Supervised Learning of Additional RBF Parameters

In this section, we present our supervised learning experiments. In each case, we report only the cross-validation performance. Finally, we take the best supervised learning configuration, as determined by these cross-validation scores, train it on the entire training set, and evaluate it on the test set.

5.1 Weighted Feature Norm and Centers With Adjustable Widths

The first form of supervised learning that we tested was the learning of a weighted norm. In the NETtalk domain, it is obvious that the various input features are not equally important. In particular, the features describing the letter at the center of the 7-letter window (the letter to be pronounced) are much more important than the features describing the other letters, which are only present to provide context. One way to capture the importance of different features is through a weighted norm:

$$\|\mathbf{x} - \mathbf{x}_a\|_w^2 = \sum_i w_i (x_i - x_{ai})^2.$$

We employed supervised training to obtain the weights $w_i$. We call this configuration RBFFW. On the cross-validation set, RBFFW correctly classified 62.4% of the letters ($N$ = 200, $\sigma^2$ = 5, center locations determined by k-means clustering).
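The effect of the weighted norm is easy to see in isolation. In this small sketch (our own illustration; the toy centers, weights, and query point are invented, and in the paper the $w_i$ are learned by supervised training rather than fixed by hand), a feature whose weight is zero simply drops out of the distance computation:

```python
import numpy as np

def weighted_sq_dist(x, centers, w):
    """||x - x_a||_w^2 = sum_i w_i * (x_i - x_ai)^2 for each center x_a."""
    return (w * (x - centers) ** 2).sum(axis=-1)

centers = np.array([[0.0, 0.0],
                    [1.0, 5.0]])
w = np.array([1.0, 0.0])       # second feature judged irrelevant (w_i = 0)
x = np.array([0.9, -3.0])

d = weighted_sq_dist(x, centers, w)
# Under this metric only the first coordinate matters, so the second
# center (first coordinate 1.0) is the nearest one despite its distant
# second coordinate.
```

Under the plain Euclidean norm the second center would be far from `x`; with $w = (1, 0)$ the irrelevant coordinate is ignored and the second center wins, which is exactly the behaviour one wants for the context letters in the 7-letter window.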
This is a 4.7 percentage-point improvement over standard RBF, which on the cross-validation set classifies only 57.7% of the letters correctly ($N$ = 250, $\sigma^2$ = 5, center locations determined by k-means clustering). Moody & Darken (1989) suggested heuristics to set the variance of each center. They employed the inverse of the mean Euclidean distance from each center to its P nearest neighbors to determine the variance. However, they found that in most cases a global value for all variances worked best. We replicated this experiment for P = 1 and P = 4, and we compared this to simply setting the variances to a global value ($\sigma^2$ = 5) optimized by cross-validation. The performance on the cross-validation set was 53.6% (for P = 1), 53.8% (for P = 4), and 57.7% (for the global value). In addition to these heuristic methods, we also tried supervised learning of the variances alone (which we call RBFσ). On the cross-validation set, it classifies 57.4% of the letters correctly, as compared with 57.7% for standard RBF. Hence, in all of our experiments, a single global value for $\sigma^2$ gives better results than any of the techniques for setting separate values for each center. Other researchers have obtained experimental results in other domains showing the usefulness of nonuniform variances. Hence, we must conclude that, while RBFσ did not perform well in the NETtalk domain, it may be valuable in other domains.

5.2 Learning Center Locations (Generalized Radial Basis Functions)

Poggio and Girosi (1989) suggest using gradient descent methods to implement supervised learning of the center locations, a method that they call generalized radial basis functions (GRBF). We implemented and tested this approach. On the cross-validation set, GRBF correctly classifies 72.2% of the letters ($N$ = 200, $\sigma^2$ = 4, centers initialized to a subset of the training data), as compared to 57.7% for standard RBF. This is a remarkable 14.5 percentage-point improvement. We also tested GRBF with previously learned feature weights (GRBFFW) and in combination with learning variances (GRBFσ). The performance of both of these methods was inferior to GRBF. For GRBFFW, gradient search on the center locations failed to significantly improve the performance of RBFFW networks (RBFFW 62.4% vs. GRBFFW 62.8%; RBFFW 54.5% vs. GRBFFW 57.9%). This shows that with the non-Euclidean, fixed metric found by RBFFW, the gradient search of GRBFFW is getting caught in a local minimum. One explanation for this is that feature weights and adjustable centers are two alternative ways of achieving the same effect, namely, of making some features more important than others. Redundancy can easily create local minima. To understand this explanation, consider the plots in Figure 1. Figure 1(A) shows the weights of the input features as they
We also tested GRBF with previously learned feature weights (GRBF_FW) and in combination with learning variances (GRBF_σ). The performance of both of these methods was inferior to GRBF. For GRBF_FW, gradient search on the center locations failed to significantly improve the performance of RBF_FW networks (RBF_FW 62.4% vs. GRBF_FW 62.8%; RBF_FW 54.5% vs. GRBF_FW 57.9%). This shows that, with the non-Euclidean fixed metric found by RBF_FW, the gradient search of GRBF_FW is getting caught in a local minimum. One explanation for this is that feature weights and adjustable centers are two alternative ways of achieving the same effect, namely, of making some features more important than others. Redundancy can easily create local minima. To understand this explanation, consider the plots in Figure 1. Figure 1(A) shows the weights of the input features as they were learned by RBF_FW.

1138 Wettschereck and Dietterich

Figure 1: (A) displays the weights of input features as learned by RBF_FW. In (B) the mean square-distance between centers (computed separately for each dimension) from a GRBF network (N = 100, σ² = 4) is shown.

Features with weights near zero have no influence in the distance calculation when a new test example is classified. Figure 1(B) shows the mean squared distance between every center and every other center (computed separately for each input feature). Low values of the mean squared distance on feature i indicate that most centers have very similar values on feature i. Hence, this feature can play no role in determining which centers are activated by a new test example. In both plots, the features at the center of the window are clearly the most important. Therefore, it appears that GRBF is able to capture the information about the relative importance of features without the need for feature weights.
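The center-location gradients at the heart of GRBF can be sketched for a single-output Gaussian RBF with squared error; the notation is ours, and the paper's full training procedure also updates the output weights:

```python
import numpy as np

def grbf_center_gradients(x, t, centers, weights, sigma2):
    """Gradient of E = 0.5*(y - t)^2 w.r.t. each Gaussian center c_j,
    for y(x) = sum_j w_j * exp(-||x - c_j||^2 / (2*sigma2))."""
    diffs = x - centers                      # shape (n_centers, dim)
    phi = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma2))
    y = float(weights @ phi)
    # dE/dc_j = (y - t) * w_j * phi_j * (x - c_j) / sigma2
    grads = ((y - t) * weights * phi / sigma2)[:, None] * diffs
    return y, grads
```

A gradient-descent step then moves each center against its gradient, `c_j -= eta * grads[j]`, alongside the usual output-weight update.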
To explore the effect of learning the variances and center locations simultaneously, we introduced a scale factor to allow us to adjust the relative magnitudes of the gradients. We then varied this scale factor under cross-validation. Generally, the larger we set the scale factor (to increase the gradient of the variance terms), the worse the performance became. As with GRBF_FW, we see that difficulties in gradient descent training are preventing us from finding a global minimum (or even re-discovering known local minima).

5.3 Summary

Based on the results of this section as summarized in Table 2, we chose GRBF as the best supervised learning configuration and applied it to the entire 1000-word training set (with testing on the 1000-word test set). We also combined it with a 63-bit error-correcting output code to see if this would improve its performance, since error-correcting output codes have been shown to boost the performance of backpropagation and ID3. The final comparison results are shown in Table 3. The results show that GRBF is superior to RBF at all levels of aggregation. Furthermore, GRBF is statistically indistinguishable from the best method that we have tested to date (ID3 with a 127-bit error-correcting output code), except on phonemes, where it is detectably inferior, and on stresses, where it is detectably superior. GRBF with error-correcting output codes is statistically indistinguishable from ID3 with error-correcting output codes.

Table 2: Percent of letters correctly classified on the 200-word cross-validation data set.

  Method     % Letters Correct
  RBF        57.7
  RBF_FW     62.4
  RBF_σ      57.4
  GRBF       72.2
  GRBF_FW    62.8
  GRBF_σ     67.5

Table 3: Generalization performance on the NETtalk task.
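Decoding an error-correcting output code of the kind combined with GRBF above reduces to nearest-codeword matching; a minimal sketch with hypothetical codewords:

```python
import numpy as np

def ecoc_decode(outputs, codewords):
    """Threshold the network outputs to bits, then pick the class whose
    codeword is nearest in Hamming distance."""
    bits = (np.asarray(outputs) > 0.5).astype(int)
    dists = [int(np.sum(bits != np.asarray(cw))) for cw in codewords]
    return int(np.argmin(dists))
```

Because classification only requires the nearest codeword, up to half the minimum inter-codeword Hamming distance in output-bit errors can be corrected; this is the sense in which the code can recover from improper bias in the individual output units.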
% correct (1000-word test set)

  Algorithm            Word      Letter     Phoneme    Stress
  RBF                  3.7       57.0       65.6       80.3
  GRBF                 19.8**    73.8***    84.1***    82.4**
  ID3 + 127-bit ECC    20.0      73.7       85.6*      81.1*
  GRBF + 63-bit ECC    19.2      74.6       85.3       82.2

  Prior row different: p < .05*, p < .002**, p < .001***

The near-identical performance of GRBF and the error-correcting code method, and the fact that the use of error-correcting output codes does not improve GRBF's performance significantly, suggest that the "bias" of GRBF (i.e., its implicit assumptions about the unknown function being learned) is particularly appropriate for the NETtalk task. This conjecture follows from the observation that error-correcting output codes provide a way of recovering from improper bias (such as the bias of ID3 in this task). This is somewhat surprising, since the mathematical justification for GRBF is based on the smoothness of the unknown function, which is certainly violated in classification tasks.

6 Conclusions

Radial basis function networks have many properties that make them attractive in comparison to networks of sigmoid units. However, our tests of RBF learning (unsupervised learning of center locations, supervised learning of output-layer weights) in the NETtalk domain found that RBF networks did not generalize nearly as well as sigmoid networks. This is consistent with results reported in other domains. However, by employing supervised learning of the center locations as well as the output weights, the GRBF method is able to substantially exceed the generalization performance of sigmoid networks. Indeed, GRBF matches the performance of the best known method for the NETtalk task: ID3 with error-correcting output codes, which, however, is approximately 50 times faster to train. We found that supervised learning of feature weights (alone) could also improve the performance of RBF networks, although not nearly as much as learning the center locations.
Surprisingly, we found that supervised learning of the variances of the Gaussians located at each center hurt generalization performance. Also, combined supervised learning of center locations and feature weights did not perform as well as supervised learning of center locations alone. The training process is becoming stuck in local minima. For GRBF_FW, we presented data suggesting that feature weights are redundant and that they could be introducing local minima as a result. Our implementation of GRBF, while efficient, still gives training times comparable to those required for backpropagation training of sigmoid networks. Hence, an important open problem is to develop more efficient methods for supervised learning of center locations. While the results in this paper apply only to the NETtalk domain, the markedly superior performance of GRBF over RBF suggests that in new applications of RBF networks, it is important to consider supervised learning of center locations in order to obtain the best generalization performance.

Acknowledgments

This research was supported by a grant from the National Science Foundation (Grant Number IRI-86-57316).

References

D. W. Aha, D. Kibler & M. K. Albert. (1991) Instance-based learning algorithms. Machine Learning 6(1):37-66.

E. Barnard & R. A. Cole. (1989) A neural-net training program based on conjugate-gradient optimization. Rep. No. CSE 89-014. Oregon Graduate Institute, Beaverton, OR.

T. G. Dietterich & G. Bakiri. (1991) Error-correcting output codes: A general method for improving multiclass inductive learning programs. Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91), Anaheim, CA: AAAI Press.

T. G. Dietterich, H. Hild, & G. Bakiri. (1990) A comparative study of ID3 and backpropagation for English text-to-speech mapping. Proceedings of the 1990 Machine Learning Conference, Austin, TX. 24-31.

K. J. Lang, A. H. Waibel & G. E. Hinton.
(1990) A time-delay neural network architecture for isolated word recognition. Neural Networks 3:33-43.

J. MacQueen. (1967) Some methods of classification and analysis of multivariate observations. In LeCam, L. M. & Neyman, J. (Eds.), Proceedings of the 5th Berkeley Symposium on Mathematics, Statistics, and Probability (p. 281). Berkeley, CA: University of California Press.

J. Moody & C. J. Darken. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation 1(2):281-294.

R. Penrose. (1955) A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society 51:406-413.

T. Poggio & F. Girosi. (1989) A theory of networks for approximation and learning. Report Number AI-1140. MIT Artificial Intelligence Laboratory, Cambridge, MA.

J. R. Quinlan. (1986) Induction of decision trees. Machine Learning 1(1):81-106.

T. J. Sejnowski & C. R. Rosenberg. (1987) Parallel networks that learn to pronounce English text. Complex Systems 1:145-168.

D. Wolpert. (1990) Constructing a generalizer superior to NETtalk via a mathematical theory of generalization. Neural Networks 3:445-452.
1991
MODELS WANTED: MUST FIT DIMENSIONS OF SLEEP AND DREAMING*

J. Allan Hobson, Adam N. Mamelak† and Jeffrey P. Sutton‡
Laboratory of Neurophysiology and Department of Psychiatry
Harvard Medical School
74 Fenwood Road, Boston, MA 02115

Abstract

During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.

1 INTRODUCTION

New models are needed to test the closely linked neurophysiological and cognitive theories that are emerging from recent scientific studies of sleep and dreaming. This section describes four separate but related levels of analysis at which modeling may be applied and outlines some of the desirable features of such models in terms of the burgeoning data of sleep and dream science. In the subsequent sections, we review our own preliminary efforts to develop models at some of the levels discussed.

*Based, in part, upon an invited address by J.A.H. at NIPS, Denver, Dec. 2, 1991 and, in part, upon a review paper by J.P.S., A.N.M. and J.A.H. published in the Psychiatric Annals.
†Currently in the Department of Neurosurgery, University of California, San Francisco, CA 94143
‡Also in the Center for Biological Information Processing, Whitaker College, E25-201, Massachusetts Institute of Technology, Cambridge, MA 02139
1.1 THE INDIVIDUAL NEURON

Existing models, or "neuromimes", faithfully represent membrane properties but ignore the dynamic biochemical changes that alter neural excitability over the long term. This is particularly important in the modeling of state control, where the crucial neurons appear to act more like hormone pumps than like simple electrical transducers. Put succinctly, we need models that consider the biochemical or "wet" aspects of nerve cells, as well as the "dry" or electrical aspects (cf. McKenna et al., in press).

1.2 NEURAL POPULATION INTERACTIONS

To mimic the changes in excitability of the modulatory neurons which control sleep and dreaming, new models are needed which incorporate both the engineering principles of oscillators and the biological principles of time-keeping. The latter principle is especially relevant in determining the dramatically variable long-period time-constants that are observed within and across species. For example, we need to equip population models borrowed from field biology (McCarley and Hobson, 1975) with the specialized properties of "wet" neurons mentioned in section 1.1.

1.3 COGNITIVE CONSEQUENCES OF MODULATION OF NEURAL NETWORKS

To understand the state-dependent changes in cognition, such as those that distinguish waking and dreaming, a potentially fruitful approach is to mimic the known effects of neuromodulation and examine the information processing properties of neural networks. For example, if the input-output fidelity of networks can be altered by changing their mode (see Sutton et al., this volume), we might be better able to understand the changes in both instantaneous associative properties and long-term plasticity alterations that occur in sleep and dreaming. We might thus trap the brain-mind into revealing its rules for making moment-to-moment cross-correlations of its data and for changing the content and status of its storage in memory.
1.4 STATE-DEPENDENT CHANGES IN COGNITION

At the highest level of analysis, psychological data, even that obtained from the introspection of waking and dreaming subjects, need to be more creatively reduced with a view to modeling the dramatic alterations that occur with changes in brain state. As an example, consider the instability of orientation in dreaming, where times, places, persons and actions change without notice. Short of mastering the thorny problem of generating narrative text from a database, and thus synthesizing an artificial dream, we need to formulate rules and measures for categorizing constancy and transformations (Sutton and Hobson, 1991). Such an approach is a means of further refining the algorithms of cognition itself, an effort which is now limited to simple activation models that cannot change mode.

An important characteristic of the set of new models that are proposed is that each level informs, and is informed by, the other levels. This nested, interlocking feature is represented in Figure 1. It should be noted that any erroneous assumptions made at level 1 will have effects at levels 2 and 3, and these will, in turn, impede our capacity to integrate levels 3 and 4. Level 4 models can and should thus proceed with a degree of independence from levels 1, 2 and 3. Proceeding from level 1 upward is the "bottom-up" approach, while proceeding from level 4 downward is the "top-down" approach. We like to think it might be possible to take both approaches in our work while according equal respect to each.

Figure 1: Four levels at which modeling innovations are needed to provide more realistic simulations of brain-mind states such as waking and dreaming. See text for discussion. [The schematic pairs each level with its characteristic feature: LEVEL IV, COGNITIVE STATES (e.g. dream plot sequences) - variable associative and learning states; LEVEL III, MODULATION OF NETWORKS (e.g. hippocampus, cortex) - modulation of I-O processing; LEVEL II, NEURAL POPULATIONS (e.g. pontine brainstem) - variable time-constant oscillator; LEVEL I, SINGLE NEURONS (e.g. NE, 5HT, ACh neurons) - wet hormonal aspects.]

2 STATES OF WAKING AND SLEEPING

The states of waking and sleeping, including REM and non-REM (NREM) sleep, have characteristic behavioral, neuronal, polygraphic and psychological features that span all four levels. These properties are summarized in figures 2 and 3. Changes occurring within and between different levels are affected by the sleep-wake or circadian cycle and by the relative shifts in brain chemistry.

Figure 2: (a) States of waking and NREM and REM sleeping in humans. Characteristic behavioral, polygraphic and psychological features are shown for each state. (b) Ultradian sleep cycle of NREM and REM sleep shown in detailed sleep-stage graphs of 3 subjects. (c) REM sleep periodograms of 15 subjects. From Hobson and Steriade (1986), with permission. [Recoverable entries from panel (a): Sensation and Perception - Wake: vivid, externally generated; NREM: dull or absent; REM: vivid, internally generated. Thought - Wake: logical, progressive; NREM: logical, perseverative; REM: illogical, bizarre. Movement - Wake: continuous, voluntary; NREM: episodic, involuntary; REM: commanded but inhibited.]

2.1 CIRCADIAN RHYTHMS

The circadian cycle has been studied mathematically using oscillator and other non-linear dynamical models to capture features of sleep-wake rhythms (Moore-Ede and Czeisler, 1984; figure 2).
Shorter (infradian) and longer (ultradian) rhythms, relative to the circadian rhythm, have also been examined. In general, oscillators are used to couple neural, endocrine and other pathways important in controlling a variety of functions, such as periods of rest and activity, energy conservation and thermoregulation. The oscillators can be sensitive to external cues or zeitgebers, such as light and daily routines, and there is a strong linkage between the circadian clock and the NREM-REM sleep oscillator.

2.2 RECIPROCAL INTERACTION MODEL

In the 1970s, a brainstem oscillator became identified that was central to regulating sleeping and waking. Discrete cell populations in the pons that were most active during waking, less active in NREM sleep and silent during REM sleep were found to contain the monoamines norepinephrine (NE) and serotonin (5HT). Among the many cell populations that became active during REM sleep, but were generally quiescent otherwise, were cells associated with acetylcholine (ACh) release.

Figure 3: (a) Reciprocal interaction model of REM sleep generation showing the structural interaction between cholinergic and monoaminergic cell populations. Plus sign implies excitatory influences; minus sign implies inhibitory influences. (b) Model output of the cholinergic unit derived from Lotka-Volterra equations. (c) Histogram of the discharge rate from a cholinergic-related pontine cell recorded over 12 normalized sleep-wake cycles. Model cholinergic (solid line) and monoaminergic (dotted line) outputs. (d) Noradrenergic discharge rates before (S), during (D) and following (W) a REM sleep episode. From Hobson and Steriade (1986), with permission.

By making a variety of simplifying assumptions, McCarley and Hobson (1975) were able to structurally and mathematically model the oscillations between these monoaminergic and cholinergic cell populations (figure 3).
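The oscillations in question are Volterra-Lotka in form; a minimal numerical sketch of such a coupled pair, with x standing in for the cholinergic-excitatory population and y for the monoaminergic-inhibitory one, is below. The parameters and initial conditions are illustrative, not fitted to physiological data:

```python
def reciprocal_interaction(a=1.0, b=1.0, c=1.0, d=1.0,
                           x0=1.5, y0=0.5, dt=0.001, steps=20000):
    """Euler integration of a Lotka-Volterra pair:
       dx/dt = a*x - b*x*y   (excitatory population, inhibited by y)
       dy/dt = -c*y + d*x*y  (inhibitory population, driven by x)
    Returns the trajectory of x, which rises and falls cyclically."""
    x, y = x0, y0
    xs = [x]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
    return xs
```

The trajectory cycles around the equilibrium (c/d, a/b); as the text notes, with passive-membrane time constants such a model oscillates far too fast to match the minutes-to-hours sleep-dream cycle unless the units themselves are given slower "wet" dynamics.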
This level 2 model consists of two compartments, one being monoaminergic-inhibitory and the other cholinergic-excitatory. It is based upon the assumptions of field biology (Volterra-Lotka) and of dry neuromimes (level 3). The excitation (inhibition) originating from each compartment influences the other and also feeds back on itself. Numerous predictions generated by the model have been verified experimentally (Hobson and Steriade, 1986). Because the neural population model shown in figure 3 uses the limited passive-membrane type of neuromime discussed in the introduction, the resulting oscillator has a time-constant in the millisecond range, not even close to the real range of minutes to hours that characterizes the sleep-dream cycle (figure 2). As such, the model is clearly incapable of realistically representing the long-term dynamic properties that characterize interacting neuromodulatory populations. To surmount this limitation, two modifications are possible: one is to remodel the individual neuromimes, equipping them with mathematics describing up- and down-regulation of receptors and intracellular biochemistry that results in long-term changes in synaptic efficacy (cf. McKenna et al., in press); another is to model the longer time constants of the sleep cycle in terms of protein transport times between the two populations in brainstems of realistically varying width (cf. Hobson and Steriade, 1986).

3 NEUROCOGNITIVE ASPECTS OF WAKING, SLEEPING AND DREAMING

Since the discovery that REM sleep is correlated with dreaming, significant advances have been made in understanding both the neural and cognitive processes occurring in different states of the sleep-wake cycle. During waking, wherein the brain is in a state of relative aminergic dominance, thought content and cognition display consistency and continuity. NREM sleep mentation is typically characterized by ruminative thoughts void of perceptual vividness or emotional tone.
Within this state, the aminergic and cholinergic systems are more evenly balanced than in either the wake or REM sleep states. As previously noted, REM sleep is a state associated with relative cholinergic activation. Its mental status manifestations include graphic, emotionally charged and formally bizarre images encompassing visual hallucinations and delusions.

3.1 ACTIVATION-SYNTHESIS MODEL

The activation-synthesis hypothesis (Hobson and McCarley, 1977) was the first account of dream mentation based on the neurophysiological state of REM sleep. It considered factors present at levels 3 and 4, according to the scheme in section 1, and attempted to bridge these two levels. In the model, cholinergic activation and reciprocal monoaminergic disinhibition of neural networks in REM sleep generated the source of dream formation. However, the details of how neural networks might actually synthesize information in the REM sleep state were not specified.

3.2 NEURAL NETWORK MODELS

Several neural network models have subsequently been proposed that also attempt to bridge levels 3 and 4 (for example, Crick and Mitchison, 1983). Recently, Mamelak and Hobson (1989) have suggested a neurocognitive model of dream bizarreness that extends the activation-synthesis hypothesis. In the model, the monoaminergic withdrawal in sleep relative to waking leads to a decrease in the signal-to-noise ratio in neural networks (figure 4). When this is coupled with phasic cholinergic excitation of the cortex, via brainstem ponto-geniculo-occipital (PGO) cell firing (figure 5), cognitive information becomes altered and discontinuous. A central premise of the model is that the monoamines and acetylcholine function as neuromodulators, which modify ongoing activity in networks without actually supplying afferent input information. Implementation of the Mamelak and Hobson model as a temporal sequencing network is described by Sutton et al.
in this volume. Computer simulations demonstrate how changes in modulation similar to some monoaminergic and cholinergic effects can completely alter the way information is collectively sequenced within the same network. This occurs even in the absence of plastic changes in the weights connecting the artificial neurons. Incorporating plasticity, which generally involves neuromodulators such as the monoamines, is a logical next step. This would build important level 1 features into a level 3-4 model and potentially provide useful insight into some state-dependent learning operations.

Figure 4: (a) Monoaminergic innervation of the brain is widespread. (b) Plot of the neuron firing probability as a function of the relative membrane potential for various values of monoaminergic modulation (parameterized by Q). Higher (lower) modulation is correlated with smaller (larger) Q values. (c) Neuron firing when subjected to supra- and sub-threshold inputs of +10 mV and -10 mV, respectively, for Q = 2 and Q = 10. (d) For a given input, the repertoire of network outputs generally increases as Q increases. From Mamelak and Hobson (1989), with permission.

Figure 5: (a) Cholinergic input from the brainstem to the thalamus and cortex is widespread. (b) Unit recordings from PGO burst cells in the pons are correlated with PGO waves recorded in the lateral geniculate bodies (LGB) of the thalamus.

4 CONCLUSION

After discussing four levels at which new models are needed, we have outlined some preliminary efforts at modeling states of waking and sleeping. We suggest that this area of research is ripe for the development of integrative models of brain and mind.
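The gain picture in Figure 4(b) can be sketched with a logistic firing rule in which Q flattens the probability curve; the exact functional form and parameter values here are our assumption, not the published model:

```python
import math

def firing_probability(v, q):
    """Assumed logistic form: p = 1 / (1 + exp(-v / q)), where v is the
    relative membrane potential and q is the modulation parameter.
    Small q (strong aminergic modulation, waking) gives a steep,
    near-deterministic response; large q (REM-like withdrawal) flattens
    the curve, lowering the effective signal-to-noise ratio."""
    return 1.0 / (1.0 + math.exp(-v / q))
```

With this form, a +10 mV supra-threshold input fires almost certainly at Q = 2 but only with moderate probability at Q = 10, which is the qualitative point of panels (c) and (d).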
Acknowledgements

Supported by NIH grant MH 13,923, the HMS/MMHC Research & Education Fund, the Livingston, Dupont-Warren and McDonnell-Pew Foundations, DARPA under ONR contract N00014-85-K-0124, the Sloan Foundation and Whitaker College.

References

Crick F, Mitchison G (1983) The function of dream sleep. Nature 304 111-114.

Hobson JA, McCarley RW (1977) The brain as a dream-state generator: An activation-synthesis hypothesis of the dream process. Am J Psych 134 1335-1368.

Hobson JA, Steriade M (1986) Neuronal basis of behavioral state control. In: Mountcastle VB (ed) Handbook of Physiology - The Nervous System, Vol IV. Bethesda: Am Physiol Soc, 701-823.

Mamelak AN, Hobson JA (1989) Dream bizarreness as the cognitive correlate of altered neuronal behavior in REM sleep. J Cog Neurosci 1(3) 201-222.

McCarley RW, Hobson JA (1975) Neuronal excitability over the sleep cycle: A structural and mathematical model. Science 189 58-60.

McKenna T, Davis J, Zornetzer (eds) (in press) Single Neuron Computation. San Diego: Academic.

Moore-Ede MC, Czeisler CA (eds) (1984) Mathematical Models of the Circadian Sleep-Wake Cycle. New York: Raven.

Sutton JP, Hobson JA (1991) Graph theoretical representation of dream content and discontinuity. Sleep Research 20 164.
1991
Merging Constrained Optimisation with Deterministic Annealing to "Solve" Combinatorially Hard Problems

Paul Stolorz*
Santa Fe Institute
1660 Old Pecos Trail, Suite A
Santa Fe, NM 87501

ABSTRACT

Several parallel analogue algorithms, based upon mean field theory (MFT) approximations to an underlying statistical mechanics formulation, and requiring an externally prescribed annealing schedule, now exist for finding approximate solutions to difficult combinatorial optimisation problems. They have been applied to the Travelling Salesman Problem (TSP), as well as to various issues in computational vision and cluster analysis. I show here that any given MFT algorithm can be combined in a natural way with notions from the areas of constrained optimisation and adaptive simulated annealing to yield a single homogeneous and efficient parallel relaxation technique, for which an externally prescribed annealing schedule is no longer required. The results of numerical simulations on 50-city and 100-city TSP problems are presented, which show that the ensuing algorithms are typically an order of magnitude faster than the MFT algorithms alone, and which also show, on occasion, superior solutions as well.

1 INTRODUCTION

Several promising parallel analogue algorithms, which can be loosely described by the term "deterministic annealing", or "mean field theory (MFT) annealing", have recently been proposed as heuristics for tackling difficult combinatorial optimisation problems [1, 2, 3, 4, 5, 6, 7]. However, the annealing schedules must be imposed externally in a somewhat ad hoc manner in these procedures (although they can be made adaptive to a limited degree [8]).

*Also at Theoretical Division and Center for Nonlinear Studies, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545.
As a result, a number of authors [9, 10, 11] have considered the alternative analogue approach of Lagrangian relaxation, a form of constrained optimisation due originally to Arrow [12], as a different means of tackling these problems. The various alternatives require the introduction of a new set of variables, the Lagrange multipliers. Unfortunately, these usually lead in turn either to the inclusion of expensive penalty terms, or to the consideration of restricted classes of problem constraints. The penalty terms also tend to introduce unwanted local minima in the objective function, and they must be included even when the algorithms are exact [13, 10]. These drawbacks prevent their easy application to large-scale combinatorial problems, containing 100 or more variables. In this paper I show that the technical features of analogue mean field approximations can be merged both with Lagrangian relaxation methods and with the broad philosophy of adaptive annealing without, importantly, requiring the large computational resources that typically accompany the Lagrangian methods. The result is a systematic procedure for crafting from any given MFT algorithm a single parallel homogeneous relaxation technique which needs no externally prescribed annealing schedule. In this way the computational power of the analogue heuristics is greatly enhanced. In particular, the Lagrangian framework can be used to construct an efficient adaptation of the elastic net algorithm [2], which is perhaps the most promising of the analogue heuristics. The results of numerical experiments are presented which display both increased computational efficiency and, on occasion, better solutions (avoidance of some local minima) relative to deterministic annealing. Also, the qualitative mechanism at the root of this behaviour is described.
Finally, I note that the apparatus can be generalised to a procedure that uses several multipliers, in a manner that roughly parallels the notion of different temperatures at different physical locations in the simulated annealing heuristic.

2 DETERMINISTIC ANNEALING

The deterministic annealing procedures consist of tracking the local minimum of an objective function of the form

$$F(x, T) = U(x) - T S(x) \qquad (1)$$

where x represents the analogue variables used to describe the particular problem at hand, and T ≥ 0 (initially chosen large) is an adjustable annealing, or temperature, parameter. As T is lowered, the objective function undergoes a qualitative change from a convex to a distinctly non-convex function. Provided the annealing schedule is slow enough, however, it is hoped that the local minimum near T = 0 is a close approximation to the global solution of the problem. The function S(x) represents an analogue approximation [5, 4, 7] to the entropy of an underlying discrete statistical physics system, while F(x, T) approximates its free energy. The underlying discrete system forms the basis of the simulated annealing heuristic [14]. Although a general and powerful technique, this heuristic is an inherently stochastic procedure which must consider many individual discrete tours at each and every temperature T. The deterministic annealing approximations have the advantage of being deterministic, so that an approximate solution at a given temperature can be found with much less computational effort. In both cases, however, the complexity of the problem under consideration shows up in the need to determine with great care an annealing schedule for lowering the temperature parameter.
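The tracking procedure just described can be sketched on a toy one-variable problem; the choices of U, S, and all parameter values below are illustrative, not from the paper:

```python
import math

def anneal(u=1.0, x=0.5, t_init=5.0, t_min=1e-3, rate=0.9,
           eta=0.05, inner=200):
    """Deterministic annealing on a toy scalar problem with x in (0,1):
    U(x) = u*x, S(x) = binary entropy, F = U - T*S.
    Since dS/dx = -log(x/(1-x)), we have dF/dx = u + T*log(x/(1-x)).
    T is lowered geometrically while gradient descent tracks the
    minimum; for u > 0 the T -> 0 minimizer is x -> 0."""
    t = t_init
    while t > t_min:
        for _ in range(inner):
            grad = u + t * math.log(x / (1.0 - x))  # dF/dx
            x = min(max(x - eta * grad, 1e-9), 1.0 - 1e-9)
        t *= rate
    return x
```

At high T the entropy term keeps the minimum near x = 0.5 (the "convex" regime); as T falls, the tracked minimum slides smoothly toward the corner that minimises U, which is the intended behaviour of the annealing schedule.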
The primary contribution of this paper consists in pursuing the relationship between deterministic annealing and statistical physics one step further, by making explicit use of the fact that, due to the statistical physics embedding of the deterministic annealing procedures,

$$S(x_{min}) \rightarrow 0 \quad \text{as} \quad T \rightarrow 0 \qquad (2)$$

where x_min is the local minimum obtained for the parameter value T. This deceptively simple observation allows the consideration of the somewhat different approach of Lagrange multiplier methods to automatically determine a dynamics for T in the analogue heuristics, using as a constraint the vanishing of the entropy function at zero temperature. This particular fact has not been explicitly used in any previous optimisation procedures based upon Lagrange multipliers, although it is implicit in the work of [9]. Most authors have focussed instead on the syntactic constraints contained in the function U(x) when incorporating Lagrange multipliers. As a result the issue of eliminating an external annealing schedule has not been directly confronted.

3 LAGRANGE MULTIPLIERS

Multiplier methods seek the critical points of a "Lagrangian" function

$$F(x, \lambda) = U(x) - \lambda S(x) \qquad (3)$$

where the notation of (1) has been retained, in accordance with the philosophy discussed above. The only difference is that the parameter T has been replaced by a variable λ (the Lagrange multiplier), which is to be treated on the same basis as the variables x. By definition, the critical points of F(x, λ) obey the so-called Kuhn-Tucker conditions

$$\nabla_x F(x, \lambda) = 0 = \nabla_x U(x) - \lambda \nabla_x S(x)$$
$$\nabla_\lambda F(x, \lambda) = 0 = -S(x) \qquad (4)$$

Thus, at any critical point of this function, the constraint S(x) = 0 is satisfied. This corresponds to a vanishing entropy estimate in (1). Hopefully, in addition, U(x) is minimised, subject to the constraint.
The difficulty with this approach when used in isolation is that finding the critical points of F(x, λ) entails, in general, the minimisation of a transformed "unconstrained" function, whose set of local minima contains the critical points of F as a subset. This transformed function is required in order to ensure an algorithm which is convergent, because the critical points of F(x, λ) are saddle points, not local minima. One well-known way to do this is to add a term S²(x) to (3), giving an augmented Lagrangian with the same fixed points as (3), but hopefully with better convergence properties. Unfortunately, the transformed function is invariably more complicated than F(x, λ), typically containing extra quadratic penalty terms (as in the above case), which tend to convert harmless saddle points into unwanted local minima. It also leads to greater computational overhead, usually in the form of either second derivatives of the functions U(x) and S(x), or of matrix inversions [13, 10] (although see [11] for an approach which minimises this overhead). For large-scale combinatorial problems such as the TSP these disadvantages become prohibitive. In addition, the entropic constraint functions occurring in deterministic annealing tend to be quite complicated nonlinear functions of the variables involved, often with peculiar behaviour near the constraint condition. In these cases (the Hopfield/Tank method is an example) a term quadratic in the entropy cannot simply be added to (3) in a straightforward way to produce a suitable augmented Lagrangian (of course, such a procedure is possible with several of the terms in the internal energy U(x)).

4 COMBINING BOTH METHODS

The best features of each of the two approaches outlined above may be retained by using the following modification of the original first-order Arrow technique:

    ẋ_i = -∇_{x_i} F(x, λ) = -∇_{x_i} U(x) + λ ∇_{x_i} S(x)
    λ̇ = +∇_λ F(x, λ) = -S(x) + c/λ    (5)

where F(x, λ)
is a slightly modified "free energy" function given by

    F(x, λ) = U(x) - λ S(x) + c ln λ    (6)

In these expressions, c > 0 is a constant, chosen small on the scale of the other parameters, and characterises the sole, inexpensive, penalty requirement. It is needed purely in order to ensure that λ remains positive. In fact, in the numerical experiment that I will present, this penalty term for λ was not even used; the algorithm was simply terminated at a suitably small value of λ. The reason for insisting upon λ > 0, in contrast to most first-order relaxation methods, is that it ensures that the free energy objective function is bounded below with respect to the x variables. This in turn allows (5) to be proven locally convergent [15] using techniques discussed in [13]. Furthermore, the methods described by (5) are found empirically to be globally convergent as well. This feature is in fact the key to their computational efficiency, as it means that they need not be grafted onto more sophisticated and inefficient methods in order to ensure convergence. This behaviour can be traced to the fact that the "free energy" functions, while non-convex overall with respect to x, are nevertheless convex over large volumes of the solution space. The point can be illustrated by the construction of an energy function similar to that used by Platt and Barr [9], which also displays the mechanism by which some of the unwanted local minima in deterministic annealing may be avoided. These issues are discussed further in Section 6. The algorithms described above have several features which distinguish them from previous work. Firstly, the entropy estimate S(x) has been chosen explicitly as the appropriate constraint function, a fact which has previously been unexploited in the optimisation context (although a related piecewise linear function has been used by [9]).
Further, since this estimate is usually positive for the mean field theory heuristics, λ (the only new variable) decreases monotonically in a manner roughly similar to the temperature decrease schedule used in simulated and deterministic annealing, but with the ad hoc drawback now removed. Moreover, there is no requirement that the system be at or near a fixed point each time λ is altered: there is simply one homogeneous dynamical system which must approach a fixed point only once at the very end of the simulation, and furthermore λ appears linearly except near the end of the procedure (a major reason for its efficiency). Finally, the algorithms do not require computationally cumbersome extra structure in the form of quadratic penalty terms, second derivatives or inverses, in contrast to the usual Lagrangian relaxation techniques. All of these features can be seen to be due to the statistical physics setting of the annealing "Lagrangian", and the use of an entropic constraint instead of the more usual syntactic constraints. The apparatus outlined above can immediately be used to adapt the Hopfield/Tank heuristic for the Travelling Salesman Problem (TSP) [1], which can easily be written in the form (1). However, the elastic net method [2] is known to be a somewhat superior method, and is therefore a better candidate for modification. There is an impediment to the procedure here: the objective function for the elastic net is actually of the form

    F(x, λ) = U(x) - λ S(x, λ)    (7)

which precludes the use of a true Lagrange multiplier, since λ now appears nontrivially in the constraint function itself! However, I find surprisingly that the algorithm obtained by applying the Lagrangian relaxation apparatus in a straightforward way as before still leads to a coherent algorithm. The equations are

    ẋ_i = -∇_{x_i} F(x, λ) = -∇_{x_i} U(x)
+ λ ∇_{x_i} S(x, λ)
    λ̇ = +ε ∇_λ F(x, λ) = -ε [S(x, λ) + λ ∇_λ S(x, λ)]    (8)

The parameter ε > 0 is chosen so that an explicit barrier term for λ can be avoided. It is the only remaining externally prescribed part of the former annealing schedule, and is fixed just once at the beginning of the algorithm. It can be shown that the global convergence of (8) is highly plausible in general (and seems to always occur in practice), as in the simpler case described by (5). Secondly, and most importantly, it can be shown that the constraints that are obeyed at the new fixed points satisfy the syntax of the original discrete problem [15]. The procedure is not limited to the elastic net method for the TSP. The mean field approximations discussed in [3, 4, 5] all behave in a similar way, and can therefore be adapted successfully to Lagrangian relaxation methods. The form of the elastic net entropy function suggests a further natural generalisation of the procedure. A different "multiplier" λ_a can be assigned to each city a, each variable being responsible for satisfying a different additive component of the entropy constraint. The idea has an obvious parallel to the notion in simulated annealing of lowering the temperature in different geographical regions at different rates in response to the behaviour of the system. The number of extra variables required is a modest computational investment, since there are typically many more tour points than city points for a given implementation.

5 RESULTS FOR THE TSP

Numerical simulations were performed on various TSP instances using the elastic net method, the Lagrangian adaptation with a single global Lagrange multiplier, and the modification discussed above involving one Lagrange multiplier for each city. The results are shown in Table 1. The tours for the Lagrangian relaxation methods are about 0.5% shorter than those for the elastic net, although these differences are not yet at a statistically significant level.
The differences in the computational requirements are, however, much more dramatic. No attempt has been made to optimise any of the techniques by using sophisticated descent procedures, although the size of the update step has been chosen to separately optimise each method.

Table 1: Performance of heuristics described in the text on a set of 40 randomly distributed 50-city instances of the TSP in the unit square. CPU times quoted are for a SUN SPARC Station 1+. α and β are the standard tuning parameters [4].

    METHOD              α    β    TOUR LENGTH   CPU (SEC)
    Elastic net         0.2  2.5  5.95 ± 0.10   260 ± 33
    Global multiplier   0.4  2.5  5.92 ± 0.09    49 ± 5
    Local multipliers   0.4  2.5  5.92 ± 0.08    82 ± 12

I have also been able to obtain a superior solution to the 100-city problem analysed by Durbin and Willshaw [2], namely a solution of length 7.746 [15] (c.f. length 7.783 for the elastic net) in a fraction of the time taken by elastic net annealing. This represents an improvement of roughly 0.5%. Although still about 0.5% longer than the best tour found by simulated annealing, this result is quite encouraging, because it was obtained with far less CPU time than simulated annealing, and in substantially less time than the elastic net: improvements upon solutions within about 1% of optimality typically require a substantial increase in CPU investment.

6 HOW IT WORKS - VALLEY ASCENT

Inspection of the solutions obtained by the various methods indicates that the multiplier schemes can sometimes exchange enough "inertial" energy to overcome the energy barriers which trap the annealing methods, thus offering better solutions as well as much-improved computational efficiency. This point is illustrated in Figure 1(a), which displays the evolution of the following function during the algorithm for a typical set of parameters:

    E = (1/2) Σ_i ẋ_i² + (1/2) λ̇²    (9)

The two terms can be thought of as different components of an overall kinetic energy E.
During the procedure, energy can be exchanged between these two components, so the function E(t) does not decrease monotonically with time. This allows the system to occasionally escape from local minima. Nevertheless, after a long enough time the function does decrease smoothly, ensuring convergence to a valid solution to the problem.

Figure 1: (a) Evolution of variables for a typical 50-city TSP. The solid curve shows the total kinetic energy E given by (9). The dotted curve shows the λ component of this energy, and the dash-dotted curve shows the x component. (b) Trajectories taken by various algorithms on a schematic free energy surface. The two dash-dotted curves show possible paths for elastic net annealing, each ascending a valley floor. The dotted curve shows a Lagrangian relaxation, which displays oscillations about the valley floor leading to the superior solution.

The basic mechanism can also be understood by plotting schematically the free energy "surface" F(x, λ), as shown in Figure 1(b). This surface has a single valley in the foreground, where λ is large. Bifurcations occur as λ becomes smaller, with a series of saddles, each a valid problem solution, being reached in the background at λ = 0. Deterministic annealing can be viewed as the ascent of just one of these valleys along the valley floor. It is hoped that the broadest and deepest minimum is chosen at each valley bifurcation, leading eventually to the lowest background saddle point as the optimal solution. A typical trajectory for one of the Lagrangian modifications also consists roughly of the ascent of one of these valleys.
However, oscillations about the valley floor now occur on the way to the final saddle point, due to the interplay between the different kinetic components displayed in Figure 1(a). It is hoped that the extra degrees of freedom allow valleys to be explored more fully near bifurcation points, thus biasing the larger valleys more than deterministic annealing. Notice that in order to generate the λ dynamics, computational significance is now assigned to the actual value of the free energy in the new schemes, in contrast to the situation in regular annealing.

7 CONCLUSION

In summary, a simple yet effective framework has been developed for systematically generalising any algorithm described by a mean field theory approximation procedure to a Lagrangian method which replaces annealing by the relaxation of a single dynamical system. Even in the case of the elastic net, which has a slightly awkward form, the resulting method can be shown to be sensible, and I find in fact that it substantially improves the speed (and accuracy) of that method. The adaptations depend crucially upon the vanishing of the analogue entropy at zero temperature. This allows the entropy to be used as a powerful constraint function, even though it is a highly nonlinear function and might be expected at first sight to be unsuitable for the task. In fact, this observation can also be applied in a wider context to design objective functions and architectures for neural networks which seek to improve generalisation ability by limiting the number of network parameters [16].

References

[1] J. J. Hopfield and D. W. Tank. Neural computation of decisions in optimization problems. Biol. Cybern., 52:141-152, 1985.
[2] R. Durbin and D. Willshaw. An analogue approach to the travelling salesman problem using an elastic net method. Nature, 326:689-691, 1987.
[3] D. Geiger and F. Girosi. Coupled Markov random fields and mean field theory. In D.
Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 660-667. Morgan Kaufmann, 1990.
[4] A. L. Yuille. Generalised deformable models, statistical physics, and matching problems. Neural Comp., 2:1-24, 1990.
[5] P. D. Simic. Statistical mechanics as the underlying theory of "elastic" and "neural" optimisations. NETWORK: Comp. Neural Syst., 1:89-103, 1990.
[6] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, 1987.
[7] C. Peterson and B. Soderberg. A new method for mapping optimization problems onto neural networks. Int. J. Neural Syst., 1:3-22, 1989.
[8] D. J. Burr. An improved elastic net method for the travelling salesman problem. In IEEE 2nd International Conf. on Neural Networks, pages I-69-76, 1988.
[9] J. C. Platt and A. H. Barr. Constrained differential optimization. In D. Z. Anderson, editor, Neural Information Proc. Systems, pages 612-621. AIP, 1988.
[10] A. G. Tsirukis, G. V. Reklaitis, and M. F. Tenorio. Nonlinear optimization using generalised Hopfield networks. Neural Comp., 1:511-521, 1989.
[11] E. Mjolsness and C. Garrett. Algebraic transformations of objective functions. Neural Networks, 3:651-669, 1990.
[12] K. J. Arrow, L. Hurwicz, and H. Uzawa. Studies in Linear and Nonlinear Programming. Stanford University Press, 1958.
[13] D. P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 1982. See especially Chapter 4.
[14] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.
[15] P. Stolorz. Merging constrained optimisation with deterministic annealing to "solve" combinatorially hard problems. Technical report LA-UR-91-3593, Los Alamos National Laboratory, 1991.
[16] P. Stolorz. Analogue entropy as a constraint in adaptive learning and optimisation. Technical report, in preparation, Santa Fe Institute, 1992.
Neural Control for Rolling Mills: Incorporating Domain Theories to Overcome Data Deficiency

Martin Roscheisen, Computer Science Dept., Munich Technical University, 8 Munich 40, FRG
Reimar Hofmann, Computer Science Dept., Edinburgh University, Edinburgh, EH89A, UK
Volker Tresp, Corporate R&D, Siemens AG, 8 Munich 83, FRG

Abstract

In a Bayesian framework, we give a principled account of how domain-specific prior knowledge such as imperfect analytic domain theories can be optimally incorporated into networks of locally-tuned units: by choosing a specific architecture and by applying a specific training regimen. Our method proved successful in overcoming the data deficiency problem in a large-scale application to devise a neural control for a hot line rolling mill. It achieves in this application significantly higher accuracy than optimally-tuned standard algorithms such as sigmoidal backpropagation, and outperforms the state-of-the-art solution.

1 INTRODUCTION

Learning in connectionist networks typically requires many training examples and relies more or less explicitly on some kind of syntactic preference bias such as "minimal architecture" (Rumelhart, 1988; Le Cun et al., 1990; Weigend, 1991; inter alia) or a smoothness constraint operator (Poggio et al., 1990), but does not make use of explicit representations of domain-specific prior knowledge. If training data is deficient, learning a functional mapping inductively may no longer be feasible, whereas this may still be the case when guided by domain knowledge. Controlling a rolling mill is an example of a large-scale real-world application where training data is very scarce and noisy, yet there exist much refined, though still very approximate, analytic models that have been applied for the past decades and embody many years of experience in this particular domain.
Much in the spirit of Explanation-Based Learning (see, for example, Mitchell et al., 1986; Minton et al., 1986), where domain knowledge is applied to get valid generalizations from only a few training examples, we consider an analytic model as an imperfect domain theory from which the training data is "explained" (see also Scott et al., 1991; Bergadano et al., 1990; Tecuci et al., 1990). Using a Bayesian framework, we consider in Section 2 the optimal response of networks in the presence of noise on their input, and derive, in Section 2.1, a familiar localized network architecture (Moody et al., 1989, 1990). In Section 2.2, we show how domain knowledge can be readily incorporated into this localized network by applying a specific training regimen. These results were applied as part of a project to devise a neural control for a hot line rolling mill, and, in Section 3, we describe experimental results which indicate that incorporating domain theories can be indispensable for connectionist networks to be successful in difficult engineering domains. (See also references for one of our more detailed papers.)

2 THEORETICAL FOUNDATION

2.1 NETWORK ARCHITECTURE

We apply a Bayesian framework to systems where the training data is assumed to be generated from the true model f, which itself is considered to be derived from a domain theory b that is represented as a function. Since the measurements in our application are very noisy and clustered, we took this as the paradigm case, and assume the actual input X ∈ ℝ^d to be a noisy version of one of a small number (N) of prototypical input vectors t_1, ..., t_N ∈ ℝ^d, where the noise is additive with covariance matrix Σ. The corresponding true output values f(t_1), ..., f(t_N) ∈ ℝ are assumed to be distributed around the values suggested by the domain theory, b(t_1), ..., b(t_N) (variance σ²_prior). Thus, each point in the training data D := {(x_i, y_i); i = 1, ...
, M} is considered to be generated as follows: x_i is obtained by selecting one of the t_k and adding zero-mean noise with covariance Σ, and y_i is generated by adding Gaussian zero-mean noise with variance σ²_data to f(t_k).¹ We determine the system's response O(x) to an input x to be optimal with respect to the expectation of the squared error (MMSE estimate):

    O(x) := argmin_{o(x)} E((f(T_true) - o(x))²)

The expectation is given by Σ_{k=1}^N P(T_true = t_k | X = x) · (f(t_k) - o(x))². Bayes' Theorem states that P(T_true = t_k | X = x) = p(X = x | T_true = t_k) · P(T_true = t_k) / p(X = x). Under the assumption that all t_k are equally likely, simplifying the derivative of the expectation yields

    O(x) = Σ_{i=1}^N P(T_true = t_i | X = x) · c_i

¹This approach is related to Nowlan (1990) and MacKay (1991), but we emphasize the influence of different priors over the hypothesis space by giving preference to hypotheses that are closer to the domain theory.

where c_i equals E(f(t_i)|D), i.e. the expected value of f(t_i) given that the training data is exactly D. Assuming the input noise to be Gaussian and Σ, unless otherwise noted, to be diagonal, Σ = (δ_ij σ_i²)_{1≤i,j≤d}, the probability density of X under the assumption that T_true equals t_k is given by

    p(X = x | T_true = t_k) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp[-(1/2)(x - t_k)ᵗ Σ⁻¹ (x - t_k)]

where |·| is the determinant. The optimal response to an input x can now be written as

    O(x) = Σ_{i=1}^N exp[-(1/2)(x - t_i)ᵗ Σ⁻¹ (x - t_i)] · c_i / Σ_{i=1}^N exp[-(1/2)(x - t_i)ᵗ Σ⁻¹ (x - t_i)]    (1)

Equation 1 corresponds to a network architecture with N Gaussian Basis Functions (GBFs) centered at t_k, k = 1, ..., N, each of which has a width σ_i, i = 1, ..., d, along the i-th dimension, and an output weight c_k. This architecture is known to give smooth function approximations (Poggio et al., 1990; see also Platt, 1990), and the normalized response function (partitioning-to-one) was noted earlier in studies by Moody et al. (1988, 1989, 1990) to be beneficial to network performance.
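A minimal sketch of the normalized ("partitioning-to-one") response of equation (1) is given below; the centers, widths, and output weights are made-up illustration values, not numbers from the paper.

```python
import numpy as np

def gbf_output(x, centers, widths, c):
    """Normalized Gaussian basis function network, diagonal covariance."""
    # squared Mahalanobis distance of x to each center
    d2 = np.sum(((x - centers) / widths) ** 2, axis=1)
    act = np.exp(-0.5 * d2)                  # unnormalized activations
    return np.sum(act * c) / np.sum(act)     # partitioning-to-one output

# Hypothetical 2-D example: three centers with unit widths.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
widths  = np.ones_like(centers)
c       = np.array([1.0, 2.0, 3.0])

print(gbf_output(np.array([0.0, 0.0]), centers, widths, c))  # ~= c_0
print(gbf_output(np.array([2.0, 2.0]), centers, widths, c))  # a convex blend of the c_k
```

Because the activations are normalized to sum to one, the output is always a convex combination of the weights c_k, which is what gives the architecture its space-filling, smoothly interpolating behavior between centers.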
Carving up an input space into hyperquadrics (typically hyperellipsoids or just hyperspheres) in this way suffers in practice from the severe drawback that as soon as the dimensionality of the input is higher, it becomes less feasible to cover the whole space with units of only local relevance ("curse of dimensionality"). The normalized response function has an essentially space-filling effect, and fewer units have to be allocated while, at the same time, most of the locality properties can be preserved such that efficient ball tree data structures (Omohundro, 1991) can still be used. If the distances between the centers are large with respect to their widths, the nearest-neighbor rule is recovered. With decreasing distances, the output of the network changes more smoothly between the centers.

2.2 TRAINING REGIMEN

The output weights c_i are given by

    c_i = E(f(t_i)|D) = ∫ z · p(f(t_i) = z | D) dz

Bayes' Theorem states that p(f(t_i) = z | D) = p(D | f(t_i) = z) · p(f(t_i) = z) / p(D). Let M(i) denote the set of indices j of the training data points (x_j, y_j) that were generated by adding noise to (t_i, f(t_i)), i.e. the points that "originated" from t_i. Note that it is not known a priori which indices a set M(i) contains; only posterior probabilities can be given. By applying Bayes' Theorem and by assuming independence between different locations t_i, the coefficients c_i can be written as²

    c_i = ∫_{-∞}^{∞} z · Π_{m∈M(i)} exp[-(1/2)(z - y_m)²/σ²_data] · exp[-(1/2)(z - b(t_i))²/σ²_prior] dz
          / ∫_{-∞}^{∞} Π_{m∈M(i)} exp[-(1/2)(v - y_m)²/σ²_data] · exp[-(1/2)(v - b(t_i))²/σ²_prior] dv

²The normalization constants of the Gaussians in numerator and denominator cancel, as well as the product, for all m ∉ M(i), of the probabilities that (x_m, y_m) is in the data set.

It can be easily shown that this simplifies to

    c_i = (Σ_{m∈M(i)} y_m + k · b(t_i)) / (|M(i)| + k)    (2)

where k = σ²_data / σ²_prior and |·| denotes the cardinality operator.
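Equation (2), together with the mixing probability (3) used in the training regimen that follows, can be sketched in a few lines; the numbers in the assertions are illustrative assumptions.

```python
def coefficient(ys, b_i, k):
    """Equation (2): weighted mean of the data targets attributed to center i
    and the domain-theory value b(t_i), with k = sigma_data^2 / sigma_prior^2."""
    return (sum(ys) + k * b_i) / (len(ys) + k)

# k -> 0: the data is far more reliable, so c_i -> mean of the data
assert abs(coefficient([1.0, 2.0, 3.0], b_i=10.0, k=0.0) - 2.0) < 1e-12
# no data attributed to this center: c_i falls back on the domain theory
assert coefficient([], b_i=10.0, k=2.5) == 10.0

def mixing_probability(k, n_centers, n_data):
    """Equation (3): probability of presenting a domain-theory point
    instead of a measured data point during stochastic training."""
    return k * n_centers / (k * n_centers + n_data)
```

The two limits of `coefficient` make the reliability trade-off explicit: the weighting factor k/(|M(i)| + k) interpolates between trusting the measurements and trusting the prior knowledge.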
In accordance with intuition, the coefficients c_i turn out to be a weighted mean between the value suggested by the domain theory b and the training data values which originated from t_i. The weighting factor k/(|M(i)| + k) reflects the relative reliability of the two sources of information, the empirical data and the prior knowledge. Define S_i as

    S_i = (c_i - b(t_i)) · k + Σ_{m∈M(i)} (c_i - y_m)

Clearly, if |S_i| is minimized to 0, then c_i reaches exactly the optimal value as it is given by equation 2. An adaptive solution to this is to update c_i according to ċ_i = -γ · S_i. Since the membership distribution for M(i) is not known a priori, we approximate it using a posterior estimate of the probability p(m ∈ M(i) | x_m) that m is in M(i) given that x_m was generated by some center t_k, which is

    p(m ∈ M(i) | x_m) = p(X = x_m | T_true = t_i) / Σ_{k=1}^N p(X = x_m | T_true = t_k)

p(X = x_m | T_true = t_i) is the activation act_i of the i-th center when the network is presented with input x_m. Substituting this into the sum S_i leads to the following training regimen: using stochastic sample-by-sample learning, we present in each training step with probability 1 - λ a data point y_i, and with probability λ a point b(t_k) that is generated from the domain theory, where λ is given by

    λ := k·N / (k·N + M)    (3)

(Recall that M is the total number of data points, and N is the number of centers.) λ varies from 0 (the data is far more reliable than the prior knowledge) to 1 (the data is unreliable in comparison with the prior knowledge). Thus, the change of c_i after each presentation is proportional to the error times the normalized activation of the i-th center, act_i / Σ_{k=1}^N act_k. The optimal positions for the centers t_i are not known in advance, and we therefore perform standard LMS gradient descent on t_i, and on the widths σ_i. The weight updates in a learning step are given by a discretization of the following dynamic equations (i = 1, ..., N; j = 1, ..., d):
(x· - t .. ) ZJ I Z "",N 2 J ZJ L...,.k=l actk Uii ( 1) Ci-O(X) 2 -2= -'"'( . ~. acti . N . (xi - tii) uij Lk=l actk where ~ is the interpolation error, acti is the (forward-computed) activity of the the i-th center, and tii and Xi are the j-th component of t: and x respectively. Neural Control for Rolling Mills 663 3 APPLICATION TO ROLLING MILL CONTROL 3.1 THE PROBLEM In integrated steelworks, the finishing train of the hot line rolling mill transforms preprocessed steel from a casting successively into a homogeneously rolled steelplate. Controlling this process is a notoriously hard problem: The underlying physical principles are only roughly known. The values ofthe control parameters depend on a large number of entities, and have to be determined from measurements that are very noisy, strongly clustered, "expensive," and scarce.3 On the other hand, reliability and precision are at a premium. Unreasonable predictions have to be avoided under any circumstances, even in regions where no training data is available, and, by contract, an extremely high precision is required: the rolling tolerance has to be guaranteed to be less than typically 20j.tm, which is substantial, particularly in the light of the fact that the steel construction that holds the rolls itself expands for several millimeters under a rolling pressure of typically several thousands of tons. The considerable economic interest in improving adaptation methods in rolling mills derives from the fact that lower rolling tolerances are indispensable for the supplied industry, yet it has proven difficult to remain operational within the guaranteed bounds under these constraints. The control problem consists of determining a reduction schedule that specifies for each pair of rolls their initial distance such that after the final roll pair the desired thickness of the steel-plate (the actual feedback) is achieved. 
This reinforcement problem can be reduced to a less complex approximation problem of predicting the rolling force that is created at each pair of rolls, since this force can directly and precisely be correlated to the reduction in thickness at a roll pair by conventional means. Our task was therefore to predict the rolling force on the basis of nine input variables like temperature and rolling speed, such that a subsequent conventional high-precision control can quickly reach the guaranteed rolling tolerance before much of a plate is lost. The state-of-the-art solution to this problem is a parameterized analytic model that considers nine physical entities as input and makes use of a huge number of tabulated coefficients that are adapted separately for each material and each thickness class. The solution is known to give only approximate predictions about the actual force, and although the on-line corrections by the high-precision control are generally sufficient to reach the rolling tolerance, this process necessarily takes more time the worse the prediction is, resulting in a waste of more of the beginning of a steel-plate. Furthermore, any improvement in the adaptation techniques will also shorten the initialization process for a rolling mill, which currently takes several months because of the poor generalization abilities of the applied method to other thickness classes or steel qualities. The data for our simulations was drawn from a rolling mill that was being installed at the time of our experiments. It included measurements for around 200 different steel qualities; only a few qualities were represented more than 100 times.

³The costs for a single sheet of metal, giving three useful data points that have to be measured under difficult conditions, amount to a six-digit dollar sum. Only a limited number of plates of the same steel quality is processed every week, causing the data scarcity.
3.2 EXPERIMENTAL RESULTS

According to the results in Section 2, a network of the specified localized architecture was trained with data (artificially) generated from the domain theory and data derived from on-line measurements. The remaining design considerations for architecture selection were based on the extent to which a network had the capacity to represent an instantiation of the analytic model (our domain theory): Table 1 shows the approximation error of partitioning-to-one architectures with different degrees of freedom on their centers' widths. The variances of the GBFs were either all equal and not adapted (GBFs with constant widths), or adapted individually for all centers (GBFs with spherical adaptation), or adapted individually for all centers and every input dimension, leading to axially oriented hyperellipsoids (GBFs with ellipsoidal adaptation). Networks with "full hyperquadric" GBFs, for which the covariance matrix is no longer diagonal, were also tested, but performed clearly worse, apparently due to too many degrees of freedom.

Table 1: Approximation of an instantiation of the domain theory: localized architectures (GBFs) and a network with sigmoidal hidden units (MLP).

    METHOD                                      NORMALIZED ERROR      MAXIMUM ERROR
                                                SQUARES [10⁻²]        [10⁻²]
    GBFs with partitioning, constant widths     0.40                  2.1
    GBFs with partitioning, spherical adapt.    0.18                  1.7
    GBFs with partitioning, ellipsoidal adapt.  0.096                 0.41
    GBFs, no partitioning                       0.85                  5.3
    MLP                                         0.38                  3.4

The table shows that the networks with "ellipsoidal" GBFs performed best. Convergence time of this type of network was also found to be superior. The table also gives the comparative numbers for two other architectures: GBFs without normalized response function achieved significantly lower accuracy (even if they had far more centers; performance is given for a net with 81 centers) than those with partitioning and only 16 centers.
Using up to 200 million sample presentations, sigmoidal networks trained with standard backpropagation (Rumelhart et al., 1986) achieved a yet lower level, despite the use of weight-elimination (Le Cun, 1990) and an analysis of the data's eigenvalue spectrum to optimize the learning rate (see also Le Cun, 1991). The indicated numbers are for networks with optimized numbers of hidden units. The value for λ was determined according to equation 3 in Section 2.2 as λ = 0.8; the noise in our application could be easily estimated, since there are multiple measurements for each input point available and the reliability of the domain theory is known. Applying the described training regimen to the GBF architecture with ellipsoidal adaptation led to promising results: Figure 1 shows the points in a "slice" through a specific point in the input space: the measurements, the force as it is predicted by the analytic model, and the network. It can be seen that the net exhibits fail-safe behavior: it sticks closely to the analytic model in regions where no data is available. If data points are available and suggest a different force, then the network modifies its output in direction of the data. Table 2 shows to what extent the neural network method performed superior to the currently applied state-of-the-art model (cross-validated mean).

Figure 1: Prediction of the rolling force by the state-of-the-art model, by the neural network, and the measured data points as a function of the inputs 'sheet thickness' and 'temperature.'

Table 2: Relative improvement of the neural network solutions with respect to the state-of-the-art model: on the training data and on the cross-validation set.

    METHOD                   PERCENT OF IMPROVEMENT   PERCENT OF IMPROVEMENT
                             ON TRAINED SAMPLES       AT GENERALIZATION
    Gaussian Units λ = 0.8   18                       16
    Gaussian Units λ = 0.4   41                       14
    MLP                      3.9                      3.1
The numbers indicate the relative improvement of the mean squared error of the network solution with respect to an optimally-tuned analytic model. Although the data set was very sparse and noisy, it was nevertheless still possible to give a better prediction. The table also shows the effect of choosing a different value for λ: the higher value of λ, that is, more prior knowledge, keeps the net from memorizing the data and improves generalization slightly. In the case of the sigmoidal network, λ was simply optimized to give the smallest cross-validation error. When trained without prior knowledge, none of the architectures led to an improvement.

4 CONCLUSION

In a large-scale application, devising a neural control for a hot line rolling mill, training data turned out to be insufficient for learning based only on syntactic preference biases to be feasible. By using a Bayesian framework, an imperfect domain theory was incorporated as an inductive bias in a principled way. The method outperformed the state-of-the-art solution to an extent which steelworks automation experts consider highly convincing.

Acknowledgements

This paper describes the first two authors' joint university project, which was supported by grants from Siemens AG, Corporate R&D, and Studienstiftung des deutschen Volkes. H. Rein and F. Schmid of the Erlangen steelworks automation group helped identify the problem and sampled the data. W. Buttner and W. Finnoff made valuable suggestions.

References

Bergadano, F. and A. Giordana (1990). Guiding Induction with Domain Theories. In: Y. Kodratoff et al. (eds.), Machine Learning, Vol. 3, Morgan Kaufmann.
Cun, Y. Le, J. S. Denker, and S. A. Solla (1990). Optimal Brain Damage. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
Cun, Y. Le, I. Kanter, and S. A. Solla (1991). Second Order Properties of Error Surfaces: Learning Time and Generalization. In: R. P. Lippmann et al.
(eds.), Advances in Neural Information Processing 3, Morgan Kaufmann.
Darken, Ch. and J. Moody (1990). Fast adaptive k-means clustering: some empirical results. In: Proceedings of the IJCNN, San Diego.
Duda, R. O. and P. E. Hart (1973). Pattern Classification and Scene Analysis. NY: Wiley.
MacKay, D. (1991). Bayesian Modeling. Ph.D. thesis, Caltech.
Minton, S. N., J. G. Carbonell et al. (1989). Explanation-based Learning: A problem-solving perspective. Artificial Intelligence, Vol. 40, pp. 63-118.
Mitchell, T. M., R. M. Keller and S. T. Kedar-Cabelli (1986). Explanation-based Learning: A unifying view. Machine Learning, Vol. 1, pp. 47-80.
Moody, J. (1990). Fast Learning in Multi-Resolution Hierarchies. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann, pp. 29-39.
Moody, J. and Ch. Darken (1989). Fast Learning in Networks of Locally-tuned Processing Units. Neural Computation, Vol. 1, pp. 281-294, MIT.
Moody, J. and Ch. Darken (1988). Learning with Localized Receptive Fields. In: D. Touretzky et al. (eds.), Proc. of Connectionist Models Summer School, Morgan Kaufmann.
Nowlan, St. J. (1990). Maximum Likelihood Competitive Learning. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
Omohundro, S. M. (1991). Bump Trees for Efficient Function, Constraint, and Classification Learning. In: R. P. Lippmann et al. (eds.), Advances in Neural Information Processing 3, Morgan Kaufmann.
Platt, J. (1990). A Resource-Allocating Network for Function Interpolation. In: D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
Poggio, T. and F. Girosi (1990). A Theory of Networks for Approximation and Learning. A.I. Memo No. 1140 (extended in No. 1167 and No. 1253), MIT.
Roscheisen, M., R. Hofmann, and V. Tresp (1992). Incorporating Domain-Specific Prior Knowledge into Networks of Locally-Tuned Units. In: S.
Hanson et al. (eds.), Computational Learning Theory and Natural Learning Systems, MIT Press.
Rumelhart, D. E., G. E. Hinton, and R. J. Williams (1986). Learning representations by back-propagating errors. Nature, 323(9):533-536, October.
Rumelhart, D. E. (1988). Plenary Address, IJCNN, San Diego.
Scott, G. M., J. W. Shavlik, and W. H. Ray (1991). Refining PID Controllers using Neural Networks. Technical Report, submitted to Neural Computation.
Tecuci, G. and Y. Kodratoff (1990). Apprenticeship Learning in Imperfect Domain Theories. In: Y. Kodratoff et al. (eds.), Machine Learning, Vol. 3, Morgan Kaufmann.
Weigend, A. (1991). Connectionist Architectures for Time-Series Prediction of Dynamical Systems. Ph.D. thesis, Stanford.
Polynomial Uniform Convergence of Relative Frequencies to Probabilities

Alberto Bertoni, Paola Campadelli*, Anna Morpurgo, Sandra Panizza
Dipartimento di Scienze dell'Informazione
Università degli Studi di Milano
via Comelico, 39 - 20135 Milano - Italy

Abstract

We define the concept of polynomial uniform convergence of relative frequencies to probabilities in the distribution-dependent context. Let X_n = {0,1}^n, let P_n be a probability distribution on X_n, and let F_n ⊆ 2^(X_n) be a family of events. The family {(X_n, P_n, F_n)}_(n≥1) has the property of polynomial uniform convergence if the probability that the maximum difference (over F_n) between the relative frequency and the probability of an event exceeds a given positive ε is at most δ (0 < δ < 1), when the sample on which the frequency is evaluated has size polynomial in n, 1/ε, 1/δ. Given a t-sample (x_1, ..., x_t), let C_n^(t)(x_1, ..., x_t) be the Vapnik-Chervonenkis dimension of the family {{x_1, ..., x_t} ∩ f | f ∈ F_n} and M(n,t) the expectation E(C_n^(t)/t). We show that {(X_n, P_n, F_n)}_(n≥1) has the property of polynomial uniform convergence iff there exists β > 0 such that M(n,t) = O(n/t^β). Applications to distribution-dependent PAC learning are discussed.

1 INTRODUCTION

The probably approximately correct (PAC) learning model proposed by Valiant [Valiant, 1984] provides a complexity-theoretical basis for learning from examples produced by an arbitrary distribution. As shown in [Blumer et al., 1989], a central notion for distribution-free learnability is the Vapnik-Chervonenkis dimension, which allows obtaining estimations of the sample size adequate to learn at a given level of approximation and confidence.

* Also at CNR, Istituto di Fisiologia dei Centri Nervosi, via Mario Bianco 9, 20131 Milano, Italy.
This combinatorial notion was defined in [Vapnik & Chervonenkis, 1971] to study the problem of uniform convergence of relative frequencies of events to their corresponding probabilities in a distribution-free framework. In this work we define the concept of polynomial uniform convergence of relative frequencies of events to probabilities in the distribution-dependent setting. More precisely, consider, for any n, a probability distribution on {0,1}^n and a family of events F_n ⊆ 2^({0,1}^n); our request is that the probability that the maximum difference (over F_n) between the relative frequency and the probability of an event exceeds a given arbitrarily small positive constant ε be at most δ (0 < δ < 1) when the sample on which we evaluate the relative frequencies has size polynomial in n, 1/ε, 1/δ. The main result we present here is a necessary and sufficient condition for polynomial uniform convergence in terms of "average information per example".

In section 2 we give preliminary notations and results; in section 3 we introduce the concept of polynomial uniform convergence in the distribution-dependent context and we state our main result, which we prove in section 4. Some applications to distribution-dependent PAC learning are discussed in section 5.

2 PRELIMINARY DEFINITIONS AND RESULTS

Let X be a set of elementary events on which a probability measure P is defined, and let F be a collection of boolean functions on X, i.e. functions f : X → {0,1}. For f ∈ F the set f^(-1)(1) is called an event, and P_f denotes its probability. A t-sample (or sample of size t) on X is a sequence x̄ = (x_1, ..., x_t), where x_k ∈ X (1 ≤ k ≤ t). Let X^(t) denote the space of t-samples and P^(t) the probability distribution induced by P on X^(t), such that P^(t)(x_1, ..., x_t) = P(x_1)P(x_2)···P(x_t). Given a t-sample x̄ and a set f ∈ F, let ν_f^(t)(x̄) be the relative frequency of f in the t-sample x̄, i.e.
ν_f^(t)(x̄) = (Σ_(i=1)^t f(x_i)) / t.

Consider now the random variable Π_F^(t) : X^(t) → [0,1], defined over (X^(t), P^(t)), where

Π_F^(t)(x_1, ..., x_t) = sup_(f∈F) | ν_f^(t)(x_1, ..., x_t) - P_f |.

The relative frequencies of the events are said to converge to the probabilities uniformly over F if, for every ε > 0, lim_(t→∞) P^(t){x̄ | Π_F^(t)(x̄) > ε} = 0. In order to study the problem of uniform convergence of the relative frequencies to the probabilities, the notion of index Δ_F(x̄) of a family F with respect to a t-sample x̄ was introduced [Vapnik & Chervonenkis, 1971]. Fixed a t-sample x̄ = (x_1, ..., x_t), the index Δ_F(x_1, ..., x_t) is the number of distinct sets {x_1, ..., x_t} ∩ f^(-1)(1) obtained as f ranges over F.

Obviously Δ_F(x_1, ..., x_t) ≤ 2^t; a set {x_1, ..., x_t} is said to be shattered by F iff Δ_F(x_1, ..., x_t) = 2^t; the maximum t such that there is a set {x_1, ..., x_t} shattered by F is called the Vapnik-Chervonenkis dimension d_F of F. The following result holds [Vapnik & Chervonenkis, 1971].

Theorem 2.1 For all probability distributions on X, the relative frequencies of the events converge (in probability) to their corresponding probabilities uniformly over F iff d_F < ∞.

We recall that the Vapnik-Chervonenkis dimension is a very useful notion in the distribution-independent PAC learning model [Blumer et al., 1989]. In the distribution-dependent framework, where the probability measure P is fixed and known, let us consider the expectation E[log_2 Δ_F(x̄)], called the entropy H_F(t) of the family F in samples of size t; obviously H_F(t) depends on the probability distribution P. The relevance of this notion is shown by the following result [Vapnik & Chervonenkis, 1971].

Theorem 2.2 A necessary and sufficient condition for the relative frequencies of the events in F to converge uniformly over F (in probability) to their corresponding probabilities is that lim_(t→∞) H_F(t)/t = 0.

3 POLYNOMIAL UNIFORM CONVERGENCE

Consider the family {(X_n, P_n, F_n)}_(n≥1), where X_n = {0,1}^n, P_n is a probability distribution on X_n, and F_n is a family of boolean functions on X_n.
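These definitions are concrete enough to compute directly on a toy example; the sketch below implements the relative frequency ν_f^(t), the deviation Π_F^(t), the index Δ_F, shattering, and C_n^(t). The family F and the uniform distribution here are invented for illustration, not taken from the paper:

```python
import itertools
import random

random.seed(0)
n = 3
X = list(itertools.product([0, 1], repeat=n))          # X_n = {0,1}^n
P = {x: 1 / len(X) for x in X}                         # uniform P_n
F = [lambda x, i=i: x[i] for i in range(n)]            # events f_i(x) = x_i

def P_f(f):
    return sum(P[x] for x in X if f(x) == 1)           # event probability

def nu(f, sample):
    return sum(f(x) for x in sample) / len(sample)     # nu_f^(t)

def Pi(sample):
    return max(abs(nu(f, sample) - P_f(f)) for f in F) # sup_f |nu - P_f|

def index_delta(sample):
    """Delta_F: number of distinct traces of F on the sample."""
    return len({tuple(f(x) for x in sample) for f in F})

def C(sample):
    """C_n^(t): size of the largest subset of the sample shattered by F."""
    pts = list(dict.fromkeys(sample))                  # distinct points
    for k in range(len(pts), -1, -1):
        for s in itertools.combinations(pts, k):
            if index_delta(s) == 2 ** k:               # s is shattered
                return k

sample = random.choices(X, k=1000)                     # a t-sample, t = 1000
print(Pi(sample))                  # deviation shrinks as t grows
print(C(sample) / len(sample))     # the ratio a_n^(t) = C_n^(t)/t
```

Averaging the last ratio over many samples would give a Monte Carlo estimate of the paper's key quantity M(n,t) = E(C_n^(t)/t).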
Since X_n is finite, the frequencies trivially converge uniformly to the probabilities; therefore we are interested in studying the problem of convergence with constraints on the sample size. To be more precise, we introduce the following definition.

Definition 3.1 Given the family {(X_n, P_n, F_n)}_(n≥1), the relative frequencies of the events in F_n converge polynomially to their corresponding probabilities uniformly over F_n iff there exists a polynomial p(n, 1/ε, 1/δ) such that

∀ε,δ > 0 ∀n (t ≥ p(n, 1/ε, 1/δ) ⇒ P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} < δ).

In this context ε and δ are the approximation and confidence parameters, respectively. The problem we consider now is to characterize the families {(X_n, P_n, F_n)}_(n≥1) such that the relative frequencies of events in F_n converge polynomially to the probabilities. Let us introduce the random variable C_n^(t) : X_n^(t) → N, defined as

C_n^(t)(x_1, ..., x_t) = max{ #A | A ⊆ {x_1, ..., x_t} and A is shattered by F_n }.

In this notation it is understood that C_n^(t) refers to F_n. The random variable C_n^(t) and the index function Δ_(F_n) are related to one another; in fact, the following result can be easily proved.

Lemma 3.1 C_n^(t)(x̄) ≤ log_2 Δ_(F_n)(x̄) ≤ C_n^(t)(x̄) log t.

Let M(n,t) = E(C_n^(t)/t) be the expectation of the random variable C_n^(t)/t. From Lemma 3.1 it readily follows that

M(n,t) ≤ H_(F_n)(t)/t ≤ M(n,t) log t;

therefore M(n,t) is very close to H_(F_n)(t)/t, which can be interpreted as the "average information per example" for samples of size t. Our main result shows that M(n,t) is a useful measure to verify whether {(X_n, P_n, F_n)}_(n≥1) satisfies the property of polynomial convergence, as shown by the following theorem.

Theorem 3.1 Given {(X_n, P_n, F_n)}_(n≥1), the following conditions are equivalent:

C1. The relative frequencies of events in F_n converge polynomially to their corresponding probabilities.
C2. There exists β > 0 such that M(n,t) = O(n/t^β).
C3.
There exists a polynomial ψ(n, 1/ε) such that ∀ε ∀n (t ≥ ψ(n, 1/ε) ⇒ M(n,t) < ε).

Proof.

• C2 ⇒ C3 is readily verified. In fact, condition C2 says there exist α, β > 0 such that M(n,t) ≤ αn/t^β; now, observing that t ≥ (αn/ε)^(1/β) implies αn/t^β < ε, condition C3 immediately follows.

• C3 ⇒ C2. As stated by condition C3, there exist a, b, c > 0 such that if t ≥ a n^b / ε^c then M(n,t) < ε. Solving the first inequality with respect to ε gives, in the worst case, ε = (a n^b / t)^(1/c), and substituting for ε in the second inequality yields M(n,t) ≤ (a n^b / t)^(1/c) = a^(1/c) n^(b/c) / t^(1/c). If b/c ≤ 1 we immediately obtain M(n,t) ≤ a^(1/c) n / t^(1/c). Otherwise, if b/c > 1, since M(n,t) ≤ 1, we have M(n,t) ≤ min{1, a^(1/c) n^(b/c) / t^(1/c)} ≤ min{1, (a^(1/c) n^(b/c) / t^(1/c))^(c/b)} ≤ a^(1/b) n / t^(1/b). □

The proof of the equivalence between propositions C1 and C3 will be given in the next section.

4 PROOF OF THE MAIN THEOREM

First of all, we prove that condition C3 implies condition C1. The proof is based on the following lemma, which is obtained by minor modifications of [Vapnik & Chervonenkis, 1971 (Lemma 2, Theorem 4, and Lemma 4)].

Lemma 4.1 Given the family {(X_n, P_n, F_n)}_(n≥1), if lim_(t→∞) H_(F_n)(t)/t = 0 then

∀ε ∀δ ∀n (t ≥ 13² t_0 / (ε² δ) ⇒ P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} < δ),

where t_0 is such that H_(F_n)(t_0)/t_0 ≤ ε²/64.

As a consequence, we can prove the following.

Theorem 4.1 Given {(X_n, P_n, F_n)}_(n≥1), if there exists a polynomial ψ(n, 1/ε) such that ∀ε ∀n (t ≥ ψ(n, 1/ε) ⇒ H_(F_n)(t)/t < ε), then the relative frequencies of events in F_n converge polynomially to their probabilities.

Proof (outline). It is sufficient to observe that if we choose t_0 = ψ(n, 64/ε²), by hypothesis it holds that H_(F_n)(t_0)/t_0 < ε²/64; therefore, from Lemma 4.1, if

t ≥ 13² t_0 / (ε² δ) = (13² / (ε² δ)) ψ(n, 64/ε²),

then P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} < δ. □

An immediate consequence of Theorem 4.1 and of the relation M(n,t) ≤ H_(F_n)(t)/t ≤ M(n,t) log t is that condition C3 implies condition C1. We now prove that condition C1 implies condition C3.
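The polynomial decay of M(n,t) demanded by condition C2 can be observed numerically even in a toy case. The sketch below is a Monte Carlo estimate of M(n,t) for a small, invented family of "threshold" events under a uniform distribution (none of these choices come from the paper):

```python
import itertools
import random

random.seed(0)
n = 4
X = list(itertools.product([0, 1], repeat=n))

def val(x):                        # read the bit-vector as an integer
    return sum(b << i for i, b in enumerate(x))

# threshold events f_k(x) = [val(x) >= k]; this family has VC dimension 1
F = [lambda x, k=k: int(val(x) >= k) for k in range(2 ** n + 1)]

def largest_shattered(sample):
    """C_n^(t): size of the largest subset of the sample shattered by F."""
    pts = list(dict.fromkeys(sample))
    best = 0
    for k in (1, 2):               # thresholds can never shatter 2 points
        for s in itertools.combinations(pts, k):
            if len({tuple(f(x) for x in s) for f in F}) == 2 ** k:
                best = k
    return best

def M(t, trials=200):
    """Monte Carlo estimate of M(n,t) = E[C_n^(t)/t] under uniform P_n."""
    return sum(largest_shattered(random.choices(X, k=t)) / t
               for _ in range(trials)) / trials

print(M(4), M(16))   # 0.25 and 0.0625: exactly 1/t here, since C_n^(t) = 1
```

The 1/t decay of this toy family is an instance of the O(n/t^β) behavior (with β = 1) that Theorem 3.1 shows to be equivalent to polynomial uniform convergence.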
For the sake of simplicity it is convenient to introduce the following notations:

a_n^(t) = C_n^(t) / t,    P_a(n, ε, t) = P_n^(t){x̄ | a_n^(t)(x̄) < ε}.

The following lemma, which relates the problem of polynomial uniform convergence of a family of events to the parameter P_a(n, ε, t), will only be stated, since it can be proved by minor modifications of Theorem 4 in [Vapnik & Chervonenkis, 1971].

Lemma 4.2 If t ≥ 16/ε² then P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} ≥ (1/4)(1 - P_a(n, 8ε, 2t)).

A relevant property of P_a(n, ε, t) is given by the following lemma.

Lemma 4.3 ∀α ≥ 1: P_a(n, ε/α, αt) ≤ P_a(n, ε, t)^α.

Proof. Let (x̄_1, ..., x̄_α) be an αt-sample obtained by the concatenation of α elements x̄_1, ..., x̄_α ∈ X^(t). It is easy to verify that C_n^(αt)(x̄_1, ..., x̄_α) ≥ max_(i=1,...,α) C_n^(t)(x̄_i). Therefore

P_n^(αt){C_n^(αt)(x̄_1, ..., x̄_α) ≤ k} ≤ P_n^(αt){C_n^(t)(x̄_1) ≤ k ∧ ... ∧ C_n^(t)(x̄_α) ≤ k}.

By the independence of the events C_n^(t)(x̄_i) ≤ k we obtain

P_n^(αt){C_n^(αt)(x̄_1, ..., x̄_α) ≤ k} ≤ Π_(i=1)^α P_n^(t){C_n^(t)(x̄_i) ≤ k}.

Recalling that a_n^(t) = C_n^(t)/t and substituting k = εt, the thesis follows. □

A relation between P_a(n, ε, t) and the parameter M(n, t), which we have introduced to characterize the polynomial uniform convergence of {(X_n, P_n, F_n)}_(n≥1), is shown in the following lemma.

Lemma 4.4 For every ε (0 < ε < 1/4), if M(n, t) > 2√ε then P_a(n, ε, t) < 1/2.

Proof. For the sake of simplicity, let m = M(n, t). If m > δ > 0, we have

δ < m = ∫_0^1 a dP_a = ∫_0^(δ/2) a dP_a + ∫_(δ/2)^1 a dP_a ≤ (δ/2) P_a(n, δ/2, t) + 1 - P_a(n, δ/2, t).

Since 0 < δ < 1, we obtain

P_a(n, δ/2, t) ≤ (1 - δ) / (1 - δ/2) ≤ 1 - δ/2.

By applying Lemma 4.3 it is proved that, for every α ≥ 1,

P_a(n, δ/(2α), αt) ≤ (1 - δ/2)^α.

For α = 2/δ we obtain

P_a(n, δ²/4, 2t/δ) ≤ e^(-1) < 1/2.

For ε = δ²/4 and t̄ = 2t/δ, the previous result implies that, if M(n, t̄√ε) > 2√ε, then P_a(n, ε, t̄) < 1/2. It is easy to verify that C_n^(αt)(x̄_1, ..., x̄_α) ≤ Σ_(i=1)^α C_n^(t)(x̄_i) for every α ≥ 1. This implies M(n, αt) ≤ M(n, t) for α ≥ 1, hence M(n, t√ε) ≥ M(n, t), from which the thesis follows.
□

Theorem 4.2 If for the family {(X_n, P_n, F_n)}_(n≥1) the relative frequencies of events in F_n converge polynomially to their probabilities, then there exists a polynomial ψ(n, 1/ε) such that ∀ε ∀n (t ≥ ψ(n, 1/ε) ⇒ M(n,t) ≤ ε).

Proof. By contradiction. Let us suppose that {(X_n, P_n, F_n)}_(n≥1) polynomially converges and that for all polynomial functions ψ(n, 1/ε) there exist ε, n, t such that t ≥ ψ(n, 1/ε) and M(n,t) > ε. Since M(n,t) is a monotone, non-increasing function with respect to t, it follows that for every ψ there exist ε, n such that M(n, ψ(n, 1/ε)) > ε. Considering the one-to-one correspondence τ between polynomial functions defined by τψ(n, 1/ε) = φ(n, 4/ε²), we can conclude that for any φ there exist ε, n such that M(n, φ(n, 1/ε)) > 2√ε. From Lemma 4.4 it follows that

∀φ ∃n ∃ε ( P_a(n, ε, φ(n, 1/ε)) < 1/2 ).    (1)

Since, by hypothesis, {(X_n, P_n, F_n)}_(n≥1) polynomially converges, fixed δ = 1/20, there exists a polynomial φ such that

∀ε ∀n (t ≥ φ(n, 1/ε) ⇒ P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} < 1/20).

From Lemma 4.2 we know that if t ≥ 16/ε² then

P_n^(t){x̄ | Π_(F_n)^(t)(x̄) > ε} ≥ (1/4)(1 - P_a(n, 8ε, 2t)).

If t ≥ max{16/ε², φ(n, 1/ε)}, then (1/4)(1 - P_a(n, 8ε, 2t)) < 1/20, hence P_a(n, 8ε, 2t) > 4/5. Fixed a polynomial p(n, 1/ε) such that 2p(n, 8/ε) ≥ max{16/ε², φ(n, 1/ε)}, we can conclude that

∀ε ∀n ( P_a(n, ε, p(n, 1/ε)) > 4/5 ).    (2)

From assertions (1) and (2) the contradiction 4/5 < 1/2 can easily be derived. □

An immediate consequence of Theorem 4.2 is that, in Theorem 3.1, condition C1 implies condition C3. Theorem 3.1 is thus proved.

5 DISTRIBUTION-DEPENDENT PAC LEARNING

In this section we briefly recall the notion of learnability in the distribution-dependent PAC model and we discuss some applications of the previous results. Given {(X_n, P_n, F_n)}_(n≥1), a labelled t-sample S_f for f ∈ F_n is a sequence ((x_1, f(x_1)), ..., (x_t, f(x_t))), where (x_1, ..., x_t) is a t-sample on X_n.
We say that f_1, f_2 ∈ F_n are ε-close with respect to P_n iff P_n{x | f_1(x) ≠ f_2(x)} < ε. A learning algorithm A for {(X_n, P_n, F_n)}_(n≥1) is an algorithm that, given in input ε, δ > 0 and a labelled t-sample S_f with f ∈ F_n, outputs the representation of a function g which, with probability 1 - δ, is ε-close to f. The family {(X_n, P_n, F_n)}_(n≥1) is said to be polynomially learnable iff there exists a learning algorithm A working in time bounded by a polynomial p(n, 1/ε, 1/δ). Bounds on the sample size necessary to learn at approximation ε and confidence 1 - δ have been given in terms of ε-covers [Benedek & Itai, 1988]; classes which are not learnable in the distribution-free model, but are learnable for some specific distribution, have been shown (e.g. 1-term DNF [Kucera et al., 1988]). The following notion is expressed in terms of relative frequencies.

Definition 5.1 A quasi-consistent algorithm for the family {(X_n, P_n, F_n)}_(n≥1) is an algorithm that, given in input δ, ε > 0 and a labelled t-sample S_f with f ∈ F_n, outputs in time bounded by a polynomial p(n, 1/ε, 1/δ) the representation of a function g ∈ F_n such that

P_n^(t){x̄ | ν_(f≠g)^(t)(x̄) > ε} < δ,

where ν_(f≠g)^(t)(x̄) denotes the relative frequency in the sample of the event {x | f(x) ≠ g(x)}.

By Theorem 3.1 the following result can easily be derived.

Theorem 5.1 Given {(X_n, P_n, F_n)}_(n≥1), if there exists β > 0 such that M(n,t) = O(n/t^β) and there exists a quasi-consistent algorithm for {(X_n, P_n, F_n)}_(n≥1), then {(X_n, P_n, F_n)}_(n≥1) is polynomially learnable.

6 CONCLUSIONS AND OPEN PROBLEMS

We have characterized the property of polynomial uniform convergence of {(X_n, P_n, F_n)}_(n≥1) by means of the parameter M(n,t). In particular we proved that {(X_n, P_n, F_n)}_(n≥1) has the property of polynomial convergence iff there exists β > 0 such that M(n,t) = O(n/t^β), but no attempt has been made to obtain better upper and lower bounds on the sample size in terms of M(n,t).
With respect to the relation between polynomial uniform convergence and PAC learning in the distribution-dependent context, we have shown that if a family {(X_n, P_n, F_n)}_(n≥1) satisfies the property of polynomial uniform convergence then it can be PAC learned with a sample of size bounded by a polynomial function in n, 1/ε, 1/δ. It is an open problem whether the converse implication also holds.

Acknowledgements

This research was supported by CNR, project Sistemi Informatici e Calcolo Parallelo.

References

G. Benedek, A. Itai. (1988) "Learnability by Fixed Distributions". Proc. COLT'88, 80-90.
A. Blumer, A. Ehrenfeucht, D. Haussler, M. K. Warmuth. (1989) "Learnability and the Vapnik-Chervonenkis Dimension". J. ACM 36, 929-965.
L. Kucera, A. Marchetti-Spaccamela, M. Protasi. (1988) "On the Learnability of DNF Formulae". Proc. XV Coll. on Automata, Languages, and Programming, LNCS 317, Springer Verlag.
L. G. Valiant. (1984) "A Theory of the Learnable". Communications of the ACM 27 (11), 1134-1142.
V. N. Vapnik, A. Ya. Chervonenkis. (1971) "On the uniform convergence of relative frequencies of events to their probabilities". Theory of Prob. and its Appl. 16 (2), 265-280.
Forward Dynamics Modeling of Speech Motor Control Using Physiological Data

Makoto Hirayama, Eric Vatikiotis-Bateson, Mitsuo Kawato
ATR Auditory and Visual Perception Research Laboratories
2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN

Michael I. Jordan
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

We propose a paradigm for modeling speech production based on neural networks. We focus on characteristics of the musculoskeletal system. Using real physiological data (articulator movements and EMG from muscle activity), a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior. After learning, simulated perturbations were used to assess properties of the acquired model, such as natural frequency, damping, and interarticulator couplings. Finally, a cascade neural network is used to generate continuous motor commands from a sequence of discrete articulatory targets.

1 INTRODUCTION

A key problem in the formal study of human language is to understand the process by which linguistic intentions become speech. Speech production entails extraordinary coordination among diverse neurophysiological and anatomical structures, from which unfolds through time a complex acoustic signal that conveys to listeners something of the speaker's intention. Analysis of the speech acoustics has not revealed the encoding of these intentions, generally conceived to be ordered strings of some basic unit, e.g., the phoneme. Nor has analysis of the articulatory system provided an answer, although recent pioneering work by Jordan (1986), Saltzman (1986), Laboissiere (1990) and others has brought us closer to an understanding of the articulatory-to-acoustic transform and has demonstrated the importance of modeling the articulatory system's temporal properties.
However, these efforts have been limited to kinematic modeling because they have not had access to the neuromuscular activity of the articulatory structures. In this study, we are using neural networks to model speech production. The principal steps of this endeavor are shown in Figure 1. In this paper, we focus on characteristics of the musculoskeletal system. Using real physiological data (articulator movements and EMG from muscle activity), a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior. After learning, a cascade neural network model (Kawato, Maeda, Uno, & Suzuki, 1990) is used to generate continuous motor commands.

[Figure 1 diagrams the stages of the forward model: Intention to Speak → Intended Phoneme Sequence (with Global Performance Parameters) → Transformation from Phoneme to Gesture → Articulatory Targets → Motor Command Generation → Motor Command → Musculo-Skeletal System → Articulator Trajectories → Transformation from Articulatory Movement to Acoustic Signal → Acoustic Wave Radiation.]
Figure 1: Forward Model of Speech Production

2 EXPERIMENT

Movement, EMG, and acoustic data were recorded for one speaker who produced reiterant versions of two sentences. Speaking rate was fast and the reiterant syllables were ba, bo. Figure 2 shows approximate marker positions for tracking positions of the jaw (horizontal and vertical) and lips (vertical only), and muscle insertion points for hooked-wire, bipolar EMG recording from four muscles: ABD (anterior belly of the digastric) for jaw lowering, OOI (orbicularis oris inferior) and MTL (mentalis) for lower lip raising and protrusion, and GGA (genioglossus anterior) for tongue tip lowering. All movement and EMG (rectified and integrated) signals were digitized (12 bit) at 200 Hz and then numerically smoothed at 40 Hz. Position signals were differentiated to obtain velocity and then, after smoothing at 22 Hz, differentiated again to get acceleration. Figure 3 shows data for one reiterant utterance using ba.
[Figure 2 is a schematic of marker and electrode placements. Articulators: UL, upper lip (vertical); LL, lower lip (vertical); JX, jaw (horizontal); JY, jaw (vertical). Muscles: ABD, anterior belly of the digastric; OOI, orbicularis oris inferior; MTL, mentalis; GGA, genioglossus anterior.]
Figure 2: Approximate Positions of Markers and Muscle Insertion for Recording Movement and EMG

[Figure 3 plots, over 5 s, the audio signal; the positions, velocities, and accelerations of UL, LL, JX, and JY; and the EMG of ABD, OOI, MTL, and GGA.]
Figure 3: Time Series Representations for All Channels of One Reiterant Rendition Using ba

3 FORWARD DYNAMICS MODELING OF THE MUSCULOSKELETAL SYSTEM AND TRAJECTORY PREDICTION FROM MUSCLE EMG

The forward dynamics model (FDM) for ba, bo production was obtained using a three-layer perceptron with back propagation (Rumelhart, Hinton, & Williams, 1986). The network learns the correlations between position, velocity, and EMG at time t and the changes of position and velocity for all articulators at the next time sample t+1. After learning, the forward dynamics model is connected recurrently as shown in Figure 4. The network uses only the initial articulator position and velocity values and the continuous EMG "motor command" input to generate predicted trajectories. The FDM estimates the changes of position and velocity and sums them with the position and velocity values of the previous sample t to obtain estimated values at the next sample t+1. Figure 5 compares experimentally observed trajectories with trajectories predicted by this network. Spatiotemporal characteristics are very similar, e.g., amplitude, frequency, and phase, and demonstrate the generally good performance of the model. There is, however, a tendency towards negative offset in the predicted positions. There are two important limitations that reduce the current model's ability to compensate for position shifts in the test utterance.
First, there is no specified equilibrium or rest position in articulator space, towards which articulators might tend in the absence of EMG activity. Second, the acquired FDM is based on limited EMG; at most there is correlated EMG for only one direction of motion per articulator. Addition of antagonist EMG and/or an estimate of equilibrium position in articulator or, eventually, task coordinates should increase the model's generalization capability.

[Figure 4 diagrams the recurrent connection of the FDM: EMG, position, and velocity feed the forward dynamics model; its Δposition and Δvelocity outputs are summed with the previous state and fed back, yielding the predicted position trajectory.]
Figure 4: Recurrent Network for Trajectory Prediction from Muscle EMG

[Figure 5 plots, over 5 s, the observed position and velocity trajectories of UL, LL, JX, and JY against the network output.]
Figure 5: Experimentally Observed vs. Predicted Trajectories

4 ESTIMATION OF DYNAMIC PARAMETERS

To investigate quantitative characteristics of the obtained forward dynamics model, the model system's response to two types of simulated perturbation was examined. The first simulated perturbation confirmed that the model system indeed learned an appropriate nonlinear dynamics and affords a rough estimation of its visco-elastic properties, such as natural frequency (1.0 Hz) and damping ratio (0.24). Simulated release of the lower lip at various distances from rest revealed underdamped though stable behavior, as shown in Figure 6a. The second perturbation entailed observing articulator response to a step increase (50% of full scale) in EMG activity for each muscle. Figure 6b demonstrates that the learned relation between EMG input and articulator movement output is dynamical rather than kinematic, because articulator responses are not instantaneous. Learned responses to each muscle's activation also show some interesting and reasonable (though not always correct) couplings between different articulators.
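The recurrent use of the FDM in Figure 4 amounts to Euler-style integration of the learned dynamics: only the initial state and the EMG sequence are supplied, and the predicted state is fed back at every step. A minimal sketch of this rollout follows; the network here is an untrained stand-in for the learned perceptron, and all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_art, n_emg, n_hid = 4, 4, 16          # articulators, muscles, hidden units

# untrained three-layer perceptron standing in for the learned FDM
W1 = rng.normal(0, 0.1, (n_hid, 2 * n_art + n_emg))
W2 = rng.normal(0, 0.1, (2 * n_art, n_hid))

def fdm(pos, vel, emg):
    """Maps (position, velocity, EMG) at sample t to (dpos, dvel)."""
    h = np.tanh(W1 @ np.concatenate([pos, vel, emg]))
    out = W2 @ h
    return out[:n_art], out[n_art:]

def rollout(pos0, vel0, emg_seq):
    """Recurrent trajectory prediction from the initial state + EMG only."""
    pos, vel = pos0.copy(), vel0.copy()
    traj = [pos.copy()]
    for emg in emg_seq:
        dpos, dvel = fdm(pos, vel, emg)
        pos, vel = pos + dpos, vel + dvel   # add changes to previous sample
        traj.append(pos.copy())
    return np.array(traj)

emg_seq = rng.uniform(0, 1, (200, n_emg))   # 1 s of EMG at 200 Hz
traj = rollout(np.zeros(n_art), np.zeros(n_art), emg_seq)
print(traj.shape)                            # (201, 4)
```

The simulated perturbations of Section 4 fit the same loop: resetting `pos` away from rest (release) or adding a step to `emg_seq` and watching the rollout respond.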
196 Hirayama, Vatikiotis-Bateson, Kawato, and Jordan

[Figure 6a plots the positions of UL, LL, JX, and JY over 5 s after release of the lower lip from its rest position + 0.2; Figure 6b plots the response of UL, LL, JX, and JY and of the EMG channels ABD, OOI, MTL, and GGA to a step increase (+0.5) in EMG.]
Figure 6: Visco-Elastic Property of the FDM Observed by Simulated Perturbations

5 MOTOR COMMAND GENERATION USING CASCADE NEURAL NETWORK MODEL

Observed articulator movements are smooth. Their smoothness is due partly to physical dynamic properties (inertia, viscosity). Furthermore, smoothness may be an attribute of the motor command itself, thereby resolving the ill-posed computational problem of generating continuous motor commands from a small number of discrete articulatory targets. To test this, we incorporated a smoothness constraint on the motor command (rectified EMG, in this case), which is conceptually similar to previously proposed constraints on change of torque (Uno, Kawato, & Suzuki, 1989) and muscle tension (Uno, Suzuki, & Kawato, 1989). Two articulatory target (via-point) constraints were specified spatially, one for consonant closure and the other for vowel opening, and assigned to each of the 21 consonant + vowel syllables. The alternating sequence of via-points was isochronous (temporally equidistant) except for initial, medial, and final pauses. The cascade neural network (Figure 7) then generated smooth EMG and articulator trajectories whose spatiotemporal asymmetry approximated the prosodic patterning of the natural test utterances (Figure 8). Although this is only a preliminary implementation of via-point and smoothness constraints, the model's ability to generate trajectories of appropriate spatiotemporal complexity from a series of alternating via-point inputs is encouraging.
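The minimum-command-change idea can be illustrated in a stripped-down form: choose a command sequence that hits a few via-points while minimizing the summed squared change between consecutive samples. This toy 1-D version solves the resulting quadratic problem directly; it is only an illustration of the smoothness constraint, not the paper's cascade network (times and targets are invented):

```python
import numpy as np

T = 20
via = {0: 0.0, 10: 1.0, 20: 0.2}       # sample index -> target value

# minimize sum_t (u_{t+1} - u_t)^2, i.e. u^T L u with L the path-graph
# Laplacian, subject to u hitting the via-points exactly
L = np.zeros((T + 1, T + 1))
for t in range(T):
    L[t, t] += 1; L[t + 1, t + 1] += 1
    L[t, t + 1] -= 1; L[t + 1, t] -= 1

free = [t for t in range(T + 1) if t not in via]
u = np.zeros(T + 1)
for t, v in via.items():
    u[t] = v

# first-order conditions for the free samples: L_ff u_f = -L_fc u_c
A = L[np.ix_(free, free)]
b = -L[np.ix_(free, list(via))] @ np.array([via[t] for t in via])
u[free] = np.linalg.solve(A, b)
print(u[5])    # 0.5: the minimum-change path interpolates the via-points
```

With only a first-difference penalty the optimum is piecewise linear; penalizing higher-order differences (closer in spirit to minimum torque-change) would round the corners at the via-points, which is the kind of smooth, target-satisfying command the cascade network of Figure 7 is trained to produce.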
[Figure 7 diagrams the cascade model: a sequence of articulatory targets, together with the initial gesture position and velocity, feeds a chain of EMG-generating units and FDM copies unfolded over time; a smoothness constraint is imposed on the generated motor command, which drives the musculo-skeletal system to produce the realized articulator trajectory.]
Figure 7: Cascade Neural Network Model for Motor Command Generation

[Figure 8 plots, over 5 s, the generated trajectories of UL, LL, JX, and JY and the generated EMG commands for ABD, OOI, MTL, and GGA.]
Figure 8: Generated Motor Command (EMG) with Trajectory to Satisfy Articulatory Targets

6 CONCLUSION AND FUTURE WORK

Our intent here has been to provide a preliminary model of speech production based on the articulatory system's dynamical properties. We used real physiological data (EMG) to obtain the forward dynamics model of the articulators from a multilayer perceptron. After training, a recurrent network predicted articulator trajectories using the EMG signals as the motor command input. Simulated perturbations were used to examine the model system's response to isolated inputs and to assess its visco-elastic properties and interarticulator couplings. Then, we incorporated a reasonable smoothness criterion, minimum motor-command change, into a cascade neural network that generated realistic trajectories from a bead-like string of via-points. We are now attempting to model various styles of real speech using data from more muscles and articulators such as the tongue. Also, the scope of the model is being expanded to incorporate global performance parameters for motor command generation, and the transformations from phoneme to articulatory gesture and from articulatory movement to acoustic signal. Finally, a main goal of our work is to develop engineering applications for speech synthesis and recognition.
Although our model is still preliminary, we believe that resolving the difficulties posed by coarticulation, segmentation, prosody, and speaking style ultimately depends on understanding the physiological and computational aspects of speech motor control.

Acknowledgement

We thank Vincent Gracco and Kiyoshi Oshima for muscle insertions; Haskins Laboratories for use of their facilities (NIH grant DC-00121); Kiyoshi Honda, Philip Rubin, Elliot Saltzman and Yoh'ichi Toh'kura for insightful discussion; and Kazunari Nakane and Eiji Yodogawa for continuous encouragement. Further support was provided by HFSP grants to M. Kawato and M. I. Jordan.

References

Jordan, M. I. (1986) Serial order: a parallel distributed processing approach. ICS (Institute for Cognitive Science, University of California) Report 8604.

Kawato, M., Maeda, M., Uno, Y. & Suzuki, R. (1990) Trajectory Formation of Arm Movement by Cascade Neural Network Model Based on Minimum Torque-change Criterion. Biol. Cybern. 62, 275-288.

Laboissiere, R., Schwarz, J. L. & Bailly, G. (1990) Motor Control for Speech Skills: a Connectionist Approach. Proceedings of the 1990 Summer School, Morgan Kaufmann Publishers, 319-327.

Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986) Learning Internal Representations by Error Propagation. Parallel Distributed Processing, Chap. 8. MIT Press.

Saltzman, E. L. (1986) Task dynamic coordination of the speech articulators: A preliminary model. Experimental Brain Research, Series 15, 129-144.

Uno, Y., Kawato, M., & Suzuki, R. (1989) Formation and Control of Optimal Trajectory in Human Multijoint Arm Movement. Biol. Cybern. 61, 89-101.

Uno, Y., Suzuki, R. & Kawato, M. (1989) Minimum muscle-tension-change model which reproduces human arm movement. Proceedings of the 4th Symposium on Biological and Physiological Engineering, 299-302, in Japanese.
1991
Computer Recognition of Wave Location in Graphical Data by a Neural Network

Donald T. Freeman
School of Medicine
University of Pittsburgh
Pittsburgh, PA 15261

Abstract

Five experiments were performed using several neural network architectures to identify the location of a wave in the time-ordered graphical results from a medical test. Baseline results from the first experiment found correct identification of the target wave in 85% of cases (n=20). Other experiments investigated the effect of different architectures and of preprocessing the raw data on the results. The methods used seem most appropriate for time-oriented graphical data which has a clear starting point, such as electrophoresis or spectrometry, rather than continuous tests such as ECGs and EEGs.

1 INTRODUCTION

Complex wave form recognition is generally considered to be a difficult task for machines. Analytical approaches to this problem have been described and they work with reasonable accuracy (Gabriel et al. 1980; Valdes-Sosa et al. 1987). The use of these techniques, however, requires substantial mathematical training, and the process is often time consuming and labor intensive (Boston 1987). Mathematical modeling also requires substantial knowledge of the particular details of the wave forms in order to determine how to apply the models and to determine detection criteria. Rule-based expert systems have also been used for the recognition of wave forms (Boston 1989). They require that a knowledge engineer work closely with a domain expert to extract the rules that the expert uses to perform the recognition. If the rules are ad hoc, or if it is difficult for experts to articulate the rules they use, then rule-based expert systems are cumbersome to implement. This paper describes the use of neural networks to recognize the location of peak V from the wave-form recording of brain stem auditory evoked potential tests.
General discussions of connectionist networks can be found in (Rumelhart and McClelland 1986). The main features of neural networks that are relevant for our purposes revolve around their ease of use as compared to other modeling techniques. Neural networks provide several advantages over modeling with differential equations or rule-based systems. First, there is no knowledge engineering phase. The network is trained automatically using a series of examples along with the "right answer" to each example. Second, the resulting network typically has significant predictive power when novel examples are presented. So, neural network technology allows expert performance to be mimicked without requiring that expert knowledge be codified in a traditional fashion. In addition, neural networks, when used to perform signal analysis, require vastly less restrictive assumptions about the structure of the input signal than analytical techniques (Gorman and Sejnowski 1988). Still, neural nets have not yet been widely applied to problems of this sort (DeRoach 1989). Nevertheless, it seems that interest is growing in using computers, especially neural networks, to solve advanced problems in medical decision making (Stubbs 1988).

1.1 BRAIN STEM AUDITORY EVOKED POTENTIAL (BAEP)

Sensory evoked potentials are electric signals from the brain that occur in response to transient auditory, somatosensory, or visual stimuli such as a click, pinprick, or flash of light. The signals, recorded from electrodes placed on a subject's scalp, are a measure of the electrical activity in the subject's brain, both from the response to the stimulus and from the spontaneous electroencephalographic (EEG) activity of the brain. One way of discerning the response to the stimulus from the background EEG noise is to average the individual responses from many identical stimuli.
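The averaging step just described can be illustrated with a short sketch (not from the paper; the waveform, noise level, and names here are purely illustrative): time-locked averaging preserves the evoked signal while zero-mean background noise shrinks roughly as 1/sqrt(N).

```python
# Illustrative sketch of stimulus-locked averaging (not the authors' code).
# A fixed "evoked" waveform is buried in zero-mean Gaussian noise on every
# trial; averaging 2000 trials recovers it far better than any single trial.
import math
import random

def average_trials(trials):
    """Point-wise average of equally long, time-locked recordings."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(0)
evoked = [math.sin(2 * math.pi * i / 20) for i in range(100)]
trials = [[s + random.gauss(0.0, 1.0) for s in evoked] for _ in range(2000)]

single_err = rms_error(trials[0], evoked)        # roughly the noise std
averaged_err = rms_error(average_trials(trials), evoked)
assert averaged_err < single_err / 10            # ~ 1/sqrt(2000) remains
```

With 2000 stimuli, the residual noise is on the order of 1/sqrt(2000), about 2% of its single-trial level.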
When "cortical noise" has been removed in this way, evoked potentials can be an important noninvasive measure of central nervous system function. They are used in studies of physiology and psychology, and for the diagnosis of neurologic disorders (Greenberg et al. 1981). Recently attention has focused on continuous automated monitoring of the BAEP intraoperatively as well as post-operatively for evaluation of central nervous system function (Moulton et al. 1991).

Brain stem auditory evoked potentials (BAEP) are generated in the auditory pathways of the brain stem. They can be used to assess hearing and brain stem function even in unresponsive or uncooperative patients. The BAEP test involves placing headphones on the patient, flooding one ear with white noise, and delivering clicks into the other ear. Electrodes on the scalp both on the same side (ipsilateral) and opposite side (contralateral) of the clicks record the electric potentials of brain activity for 10 msec following each click. In the protocol used at the University of Pittsburgh Presbyterian University Hospital (PUH), a series of 2000 clicks is delivered and the results from each click - a graph of electrode activity over the 10 msec - are averaged into a single graph. The results from the stimulation of one ear with the clicks are referred to as "one ear of data". A graph of the wave form which results from the averaging of many stimuli appears as a series of peaks following the stimulus (Figure 1). The resulting graph typically has 7 important peaks but often includes other peaks resulting from the noise which remains after averaging. Each important peak represents the firing of a group of neurons in the auditory neural pathway [1]. The time of arrival of the peaks (the peak latencies) and the amplitudes of the peaks are used to characterize the response. The latencies of peaks I,
III, and V are typically used to determine if there is evidence of slowed central nervous system conduction, which is of value in the diagnosis of multiple sclerosis and other disease states [2]. Conduction delay may be seen in the left, right, or both BAEP pathways. It is of interest that the time of arrival of a wave on the ipsilateral and contralateral sides may be slightly different. This effect becomes more exaggerated the more distant the correlated peaks are from the origin (Durrant, Boston, and Martin 1990). Typically there are several issues in the interpretation of the graphs. First, it must be clear that some neural response to the auditory stimulus is represented in the wave form. If a response is present, the peaks which correspond to normal and abnormal responses must be distinguished from noise which remains in the signal even after averaging. Wave IV and wave V occasionally fuse, forming a wave IV/V complex, confounding this process. In these cases we say that wave V is absent. Finally, the latencies and possibly the amplitudes of the identified peaks are measured, and a diagnostic explanation for them is developed.

[1] Putative generators are: I - Acoustic nerve; II - Cochlear nucleus; III - Superior olivary nucleus; IV - Lateral lemniscus; V - Inferior colliculus; VI - Medial geniculate nucleus; VII - Auditory radiations.
[2] Other disorders include brain edema, acoustic neuroma, gliomas, and central pontine myelinolysis.

[Figure 1. BAEP chart with the time of arrival for waves I to V identified.]

2 METHODS AND PROCEDURES

2.1 DATA

Plots of BAEP tests were obtained from the evoked potential files from the last 4 years at PUH.
A preliminary group of training cases consisting of 13 patients, or 26 ears, was selected by traversing the files alphabetically from the beginning of the alphabet. This group was subsequently extended to 25 patients, or 50 ears: 39 normals and 11 abnormals. Most BAEP tests show no abnormalities: only 1 of the first 40 ears was abnormal. In order to create a training set with an adequate number of abnormal cases, we included only patients with abnormal ears after these first 40 had been selected. Ten abnormal ears were obtained from a search of 60 patient files. Test cases were selected from files starting at the end of the alphabet, moving toward the beginning, the opposite of the process used for the training cases. Unlike the training set - where some cases were selected over others - all cases were included in the test set without bias. No cases were common to both sets. A total of 10 patients, or 20 ears, were selected. Table I summarizes the input data. For one of the experiments, another data set was made using the ipsilateral data for 80 inputs and the derivative of the curve for the other 80 inputs. The derivative was computed by subtracting the amplitude of the point's successor from the amplitude of the point and dividing by 0.1. The ipsilateral and contralateral wave recordings were transformed to machine-readable format by manual tracing with a BitPad Plus digitizer. A formal protocol was followed to ensure that a high-fidelity transcription had been effected. The approximately 400 points which resulted from the digitization of each ear were graphed and compared to the original tracings. If the tracings did not match, then the transcription was performed again. In addition, the originally recorded latency values for peak V were corrected for any distortion in the digitizing process. The distortion was judged by a neurologist to be minimal.
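The derivative preprocessing above can be sketched as follows. This is a hypothetical illustration; the paper does not say how the final point, which has no successor, was handled, so the padding below is my assumption.

```python
# Finite-difference estimate as described in the text: a point's amplitude
# minus its successor's amplitude, divided by the 0.1 msec sampling step.
def derivative(samples, dt=0.1):
    d = [(samples[i] - samples[i + 1]) / dt for i in range(len(samples) - 1)]
    d.append(d[-1])  # assumption: repeat the last value to keep 80 points
    return d

ipsi = [0.0, 0.1, 0.3, 0.2]    # toy amplitudes, not real BAEP data
deriv = derivative(ipsi)
assert len(deriv) == len(ipsi)
```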
Table I: Composition of Input Data

Cases      Normal Ears   Abnormal Ears                        Total Ears
                         Prolonged V   Absent V   Total
Training        39            8            3        11            50
Testing         18            0            2         2            20

A program was written to process the digital wave forms, creating an output file readable by the neural network simulator. The program discarded the first and last 1 msec of the recordings. The remaining points were sampled at 0.1 msec intervals, using linear interpolation to estimate an amplitude if a point had not been recorded within 0.01 msec of the desired time. These points were then normalized to the range <-1, 1>. The resulting 80 points for the ipsilateral wave and 80 points for the contralateral wave (a total of 160 points) were used as the initial activations for the input layer of processing elements.

2.2 ARCHITECTURES

Each of the four network architectures had 160 input nodes. Each node represented the amplitude of the wave at each sample time (1.0 to 8.9 ms, every 0.1 ms). Each architecture also had 80 output nodes with a similar temporal interpretation (Figure 2). Architecture 1 (A1) had 30 hidden units connected only to the ipsilateral input units, 5 hidden units connected only to the contralateral input units, and 5 hidden units connected to all the input units. The hidden units for all architectures were fully connected to the output units. Architecture 2 (A2) reversed these proportions. Architecture 3 (A3) was fully connected to the inputs. Architecture 4 (A4) preserved the proportions of A1 but had 16 ipsilateral hidden units, 3 contralateral, and 3 connected to both. All architectures used the sigmoid transfer function at both the hidden and output layers, and all units were attached to a bias unit. The distribution of the hidden units was chosen with the knowledge that human experts usually use information from the ipsilateral side but refer to the contralateral side only when features in the ipsilateral side are too obscure to resolve.
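The input preparation described above (discard 1 msec at each end, resample at 0.1 msec intervals with linear interpolation, normalize into <-1, 1>) can be sketched as below. Function names are mine, and the normalization shown (dividing by the peak absolute amplitude) is one plausible choice, not necessarily the authors'.

```python
# Hypothetical sketch of the preprocessing pipeline. Each digitized ear is a
# list of (time_msec, amplitude) pairs.
def resample(points, t_start=1.0, t_end=8.9, step=0.1):
    """Sample at 0.1 msec intervals, interpolating linearly between the
    recorded points that bracket each desired time."""
    out = []
    t = t_start
    while t <= t_end + 1e-9:
        for (t0, a0), (t1, a1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append(a0 + frac * (a1 - a0))
                break
        t += step
    return out

def normalize(samples):
    """Scale amplitudes into <-1, 1> by the peak absolute amplitude
    (assumption: the paper does not specify the normalization method)."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

# 80 ipsilateral + 80 contralateral samples give the 160 input activations.
```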
The selection of the number of hidden units in neural network models remains an art. In order to determine whether the size of the hidden unit layer could be changed, we repeated the experiments using Architecture 2 where the number of hidden units was reduced to 16, with 10 connected to the ipsilateral inputs, 3 to the contralateral inputs, and 3 connected to all the inputs.

2.3 TRAINING

For training, target values for the output layer were all 0.0 except for the output nodes representing the time of arrival for wave V (reported on the BAEP chart) and one node on each side of it. The peak node target was 0.95 and the two adjacent nodes had targets of 0.90. For cases in which wave V was absent, the target for all the output nodes was 0.0. A neural network simulator (NeuralWorks Professional II, version 3.5) was used to construct the networks and run the simulations. The back-propagation learning algorithm was used to train the networks. The random number generator was initialized with random number seeds taken from a random number table. Then network weights were initialized to random values between -0.2 and 0.2 and the training begun. Since our random number generator is deterministic - given the random number seed - these trials are replicable.

[Figure 2. Diagram of Architecture 1 with representation of input and output data shown: ipsilateral and contralateral input units feed the hidden layer, which feeds the output units.]

Each of the 50 ears of data in the training set was presented using a randomize, shuffle, and deal technique. Network weights were saved at various stages of learning, usually after every 1000 presentations (20 epochs), until the cumulative RMS error for an epoch fell below 0.01. The contribution of each training example to the total error was examined to determine whether a few examples were the source of most of the error. If so, training was continued until these examples had been learned to an error level comparable to the rest of the cases.
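The target construction just described can be sketched directly. The function name and the example latency are mine; the output grid of 1.0 to 8.9 msec in 0.1 msec steps comes from the architecture description above.

```python
# Build an 80-node target vector: 0.95 at the node nearest the reported
# wave V latency, 0.90 at its two neighbors, 0.0 everywhere else, and all
# zeros when wave V is absent.
def make_target(wave_v_latency=None, n_out=80, t0=1.0, step=0.1):
    target = [0.0] * n_out
    if wave_v_latency is not None:
        peak = round((wave_v_latency - t0) / step)
        target[peak] = 0.95
        for nb in (peak - 1, peak + 1):
            if 0 <= nb < n_out:
                target[nb] = 0.90
    return target

present = make_target(5.6)   # hypothetical wave V latency of 5.6 msec
absent = make_target(None)   # wave V absent: all-zero target
```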
After training, the 20 ears in the test set were presented to each of the saved networks, and the output nodes of the net were examined for each test case.

2.4 ANALYSIS OF RESULTS

A threshold method was used to analyze the data. For each of the test cases, the actual location of the maximum-valued output unit was compared to the expected location of the maximum-valued output unit. For a network result to be classified as a correct identification in the wave V present case (true positive), we require that the maximum-valued output unit have an activation which is over an activity threshold (0.50) and that the unit be within a distance threshold (0.2 msec) of the expected location of wave V. For a true negative identification of wave V - a correct identification of wave V being absent - we require that all the output activities be below the activity threshold and that the case have no wave V to find. The network makes a false positive prediction of the location of wave V if some activity is above the activity threshold for a case which has no wave V. Finally, there are two ways for the network to make a false negative identification of wave V. In both instances, wave V must be present in the case. In one instance, some output node has activity above the activity threshold, but it is outside of the distance threshold. This corresponds to the identification of a wave V but in the wrong place. In the other instance, no node attains activity over the activity threshold, corresponding to a failure to find a wave V when there exists a wave V in the case to find.

2.5 EXPERIMENTS

Five experiments were performed. The first four used different architectures on the same data set, and the last used architecture A1 on the derivatives data set. Each of the network architectures was trained from different random starting positions. For each trial, a network was randomized and trained as described above.
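The four-way scoring rule above maps onto a short sketch. The thresholds are the stated 0.50 activity and 0.2 msec distance; the function and argument names are hypothetical.

```python
# Classify one test case as TP / TN / FP / FN from the 80 output activations
# and the expected wave V latency (None when wave V is absent).
def classify(outputs, true_latency, t0=1.0, step=0.1,
             act_thresh=0.50, dist_thresh=0.2):
    peak = max(range(len(outputs)), key=lambda i: outputs[i])
    fired = outputs[peak] > act_thresh
    if true_latency is None:
        return "FP" if fired else "TN"
    if not fired:
        return "FN"          # failed to find a wave V that exists
    found_latency = t0 + peak * step
    if abs(found_latency - true_latency) <= dist_thresh:
        return "TP"
    return "FN"              # found a peak, but in the wrong place
```

For example, a network whose strongest output sits at the node for 5.6 msec with activation above 0.50 scores a true positive when the reported wave V latency is 5.6 msec, but a false negative when it is 7.6 msec.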
The networks were sampled as learning progressed. Experiment 1 determined how well architecture A1 could identify wave V and provided baseline results for the remaining experiments. Experiments 2 and 3 tested whether our use of more hidden units attached to ipsilateral data made sense, by reversing the proportion of hidden units allotted to ipsilateral data processing (experiment 2) and by trying a fully connected network (experiment 3). Experiment 4 determined whether fewer hidden units could be used. Experiment 5 investigated whether preprocessing of the input data to make derivative information available would facilitate network identification of peak location.

3 RESULTS

Results from the best network found for each of the five experiments are shown in Table 2.

Table 2: Results from presentation of 20 test cases to various network architectures.

Experiment   Network   TP   TN   Total   FP   FN   Total
    1          A1      16    1    17      1    2     3
    2          A2      16    0    16      2    2     4
    3          A3      16    0    16      2    2     4
    4          A4      15    0    15      3    2     5
    5          A1      15    1    16      1    3     4

4 DISCUSSION

In Experiment 1, the three cases which were incorrectly identified were examined closely. It is not evident from inspection why the net failed to identify the peaks or identified peaks where there were none to identify. Where peaks are present, they are not unusually located or surrounded by noise. The appearance of their shape seems similar to the cases which were identified correctly. We believe that more training examples which are "similar" to these 3 test cases, as well as examples with greater variety, will improve recognition of these cases. This improvement comes not from better generalization but rather from a reduced requirement for generalization. If the net is trained with cases which are increasingly similar to the cases which will be used to test it, then recognition of the test cases becomes easier at any given level of generalization.
The distribution of hidden units in A1 was chosen with the knowledge that human experts use information primarily from the ipsilateral side, referring to the contralateral side only when ipsilateral features are too obscure to resolve. Experiments 2 and 3 investigate whether this reliance on ipsilateral data suggests that there should be more hidden units for the ipsilateral side or for the contralateral side. The identical results from these experiments are similar to those of Experiment 1. One interpretation is that it is possible to make diagnoses of BAEPs using very few features from the ipsilateral side. Another interpretation is that it is possible to use the contralateral data as the chief information source, contrary to our expert's belief. Experiment 4 investigates whether fewer features are needed by restricting the hidden layer to 20 hidden units. The slight degradation of performance indicates that it is possible to make BAEP diagnoses with fewer ipsilateral features. Experiment 5 utilized the ipsilateral waveform and its derivative to determine whether this pre-processing would improve the results. Surprisingly, the results did not improve, but it is possible that a better estimator of the derivative will prove this method useful. Finally, when the weights from all the networks above were examined, we found that amplitudes from only the area where wave V falls were used. This suggests that it is not necessary to know the location of wave III before determining the location of wave V, in sharp contrast to experts' intuition. We believe the networks form a "local expert" for the identification of wave V which does not need to interact with data from other parts of the graph, and that other such local experts will be formed as we expand the project's scope.

5 CONCLUSIONS

Automated wave form recognition is considered to be a difficult task for machines and an especially difficult task for neural networks.
Our results offer some encouragement that in some domains neural networks may be applied to perform wave form recognition, and that the technique will be extensible as problem complexity increases. Still, the accuracy of the networks we have discussed is not high enough for clinical use. Several extensions have been attempted and others considered, including 1) increasing the sampling rate to decrease the granularity of the input data, 2) increasing the training set size, 3) using a different representation of the output for wave V absent cases, 4) using a different representation of the input, such as the derivative of the amplitudes, and 5) architectures which allow hybrids of these ideas. Finally, since many other tests in medicine as well as other fields require the interpretation of graphical data, it is tempting to consider extending this method to other domains. One distinguishing feature of the BAEP is that there is no difficulty with the time registration of the data; we always know where to start looking for the wave. This is in contrast to an EKG, for example, which may require substantial effort just to identify the beginning of a QRS complex. Our results indicate that the interpretation of graphs where the time registration of data is not an issue is possible using neural networks. Medical tests for which this technique would be appropriate include: other evoked potentials, spectrometry, and gel electrophoresis.

Acknowledgements

The author wishes to thank Dr. Scott Shoemaker of the Department of Neurology for his expertise, encouragement, constructive criticism, patience, and collaboration throughout the progress of this work. This research has been supported by NLM Training grant T15 LM-07059.

References

Boston, J.R. 1987. Detection criteria for sensory evoked potentials.
Proceedings of 9th Ann. IEEE/EMBS Conf., Boston, MA.

Boston, J.R. 1989. Automated interpretation of brainstem auditory evoked potentials: a prototype system. IEEE Trans. Biomed. Eng. 36 (5): 528-532.

DeRoach, J.N. 1989. Neural networks - an artificial intelligence approach to the analysis of clinical data. Austral. Phys. & Eng. Sci. in Med. 12 (2): 100-106.

Durrant, J.D., J.R. Boston, and W.H. Martin. 1990. Correlation study of two-channel recordings of the brain stem auditory evoked potential. Ear and Hearing 11 (3): 215-221.

Gabriel, S., J.D. Durrant, A.E. Dickter, and J.E. Kephart. 1980. Computer identification of waves in the auditory brain stem evoked potentials. EEG and Clin. Neurophys. 49: 421-423.

Gorman, R. Paul, and Terrence J. Sejnowski. 1988. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks 1: 75-89.

Greenberg, R.P., P.G. Newlon, M.S. Hyatt, R.K. Narayan, and D.P. Becker. 1981. Prognostic implications of early multimodality evoked potentials in severely head-injured patients. J. Neurosurg. 5: 227-236.

Moulton, Richard, Peter Kresta, Mario Ramirez, and William Tucker. 1991. Continuous automated monitoring of somatosensory evoked potentials in posttraumatic coma. Journal of Trauma 31 (5): 676-685.

Rumelhart, David E., and James L. McClelland. 1986. Parallel Distributed Processing. Cambridge, Mass: MIT Press.

Stubbs, D.F. 1988. Neurocomputers. MD Comput. 5 (3): 14-24.

Valdes-Sosa, M.J., M.A. Bobes, M.C. Perez-Abalo, M. Perra, J.A. Carballo, and P. Valdes-Sosa. 1987. Comparison of auditory evoked potential detection methods using signal detection theory. Audiol. 26: 166-178.
1991
Learning to Segment Images Using Dynamic Feature Binding

Michael C. Mozer
Dept. of Comp. Science & Inst. of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

Richard S. Zemel
Dept. of Comp. Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4

Marlene Behrmann
Dept. of Psychology & Faculty of Medicine
University of Toronto
Toronto, Ontario, Canada M5S 1A1

Abstract

Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase-locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units.

When a visual image contains multiple, overlapping objects, recognition is difficult because features in the image are not grouped according to which object they belong. Without the capability to form such groupings, it would be necessary to undergo a massive search through all subsets of image features. For this reason, most machine vision recognition systems include a component that performs feature grouping or image segmentation (e.g., Guzman, 1968; Lowe, 1985; Marr, 1982).
A multitude of heuristics have been proposed for segmenting images. Gestalt psychologists have explored how people group elements of a display and have suggested a range of grouping principles that govern human perception (Rock & Palmer, 1990). Computer vision researchers have studied the problem from a more computational perspective. They have investigated methods of grouping elements of an image based on nonaccidental regularities - feature combinations that are unlikely to occur by chance when several objects are juxtaposed, and are thus indicative of a single object (Kanade, 1981; Lowe & Binford, 1982). In these earlier approaches, the researchers have hypothesized a set of grouping heuristics and then tested their psychological validity or computational utility. In our work, we have taken an adaptive approach to the problem of image segmentation in which a system learns how to group features based on a set of examples. We call the system MAGIC, an acronym for multiple-object adaptive grouping of image components. In many cases MAGIC discovers grouping heuristics similar to those proposed in earlier work, but it also has the capability of finding nonintuitive structural regularities in images. MAGIC is trained on a set of presegmented images containing multiple objects. By "presegmented," we mean that each image feature is labeled as to which object it belongs. MAGIC learns to detect configurations of the image features that have a consistent labeling in relation to one another across the training examples. Identifying these configurations allows MAGIC to then label features in novel, unsegmented images in a manner consistent with the training examples.

1 REPRESENTING FEATURE LABELINGS

Before describing MAGIC, we must first discuss a representation that allows for the labeling of features. Von der Malsburg (1981), von der Malsburg & Schneider (1986), Gray et al. (1989), and Eckhorn et al.
(1988), among others, have suggested a biologically plausible mechanism of labeling through temporal correlations among neural signals, either the relative timing of neuronal spikes or the synchronization of oscillatory activities in the nervous system. The key idea here is that each processing unit conveys not just an activation value - average firing frequency in neural terms - but also a second, independent value which represents the relative phase of firing. The dynamic grouping or binding of a set of features is accomplished by aligning the phases of the features. Recent work (Goebel, 1991; Hummel & Biederman, in press) has used this notion of dynamic binding for grouping image features, but has been based on relatively simple, predetermined grouping heuristics.

2 THE DOMAIN

Our initial work has been conducted in the domain of two-dimensional geometric contours, including rectangles, diamonds, crosses, triangles, hexagons, and octagons. The contours are constructed from four primitive feature types - oriented line segments at 0°, 45°, 90°, and 135° - and are laid out on a 15 x 20 grid. At each location on the grid are units, called feature units, that detect each of the four primitive feature types. In our present experiments, images contain two contours. Contours are not permitted to overlap in their activation of the same feature unit.

438 Mozer, Zemel, and Behrmann

[Figure 1: The architecture of MAGIC. The lower layer contains the feature units; the upper layer contains the hidden units. Each layer is arranged in a spatiotopic array with a number of different feature types at each position in the array. Each plane in the feature layer corresponds to a different feature type. The grayed hidden units are reciprocally connected to all features in the corresponding grayed region of the feature layer. The lines between layers represent projections in both directions.]
3 THE ARCHITECTURE

The input to MAGIC is a pattern of activity over the feature units indicating which features are present in an image. The initial phases of the units are random. MAGIC's task is to assign appropriate phase values to the units. Thus, the network performs a type of pattern completion. The network architecture consists of two layers of units, as shown in Figure 1. The lower (input) layer contains the feature units, arranged in spatiotopic arrays with one array per feature type. The upper layer contains hidden units that help to align the phases of the feature units; their response properties are determined by training. Each hidden unit is reciprocally connected to the units in a local spatial region of all feature arrays. We refer to this region as a patch; in our current simulations, the patch has dimensions 4 x 4. For each patch there is a corresponding fixed-size pool of hidden units. To achieve uniformity of response across the image, the pools are arranged in a spatiotopic array in which neighboring pools respond to neighboring patches and the weights of all pools are constrained to be the same. The feature units activate the hidden units, which in turn feed back to the feature units. Through a relaxation process, the system settles on an assignment of phases to the features.

4 NETWORK DYNAMICS

Formally, the response of each feature unit i, x_i, is a complex value in polar form, (a_i, p_i), where a_i is the amplitude or activation and p_i is the phase. Similarly, the response of each hidden unit j, y_j, has components (b_j, q_j). The weight connecting unit i to unit j, w_{ij}, is also complex valued, having components (\rho_{ij}, \theta_{ij}). The activation rule we propose is a generalization of the dot product to the complex domain:

net_j = x \cdot w_j = \sum_i x_i w_{ij}
      = \left( \left[ \Big( \sum_i a_i \rho_{ij} \cos(p_i - \theta_{ij}) \Big)^2 + \Big( \sum_i a_i \rho_{ij} \sin(p_i - \theta_{ij}) \Big)^2 \right]^{\frac{1}{2}},\;
        \tan^{-1} \frac{\sum_i a_i \rho_{ij} \sin(p_i - \theta_{ij})}{\sum_i a_i \rho_{ij} \cos(p_i - \theta_{ij})} \right)

where net_j is the net input to hidden unit j.
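A quick numerical check of this activation rule (a sketch using Python's built-in complex type, not the authors' code): writing x_i = a_i e^{jp_i} and w_ij = \rho_{ij} e^{j\theta_{ij}}, the amplitude and phase above are exactly those of the complex sum \sum_i x_i \overline{w_{ij}}, since multiplying by the conjugate subtracts the weight phase.

```python
import cmath
import math

def net_input(x, w):
    """sum_i x_i * conj(w_ij): its amplitude and phase match the rule above."""
    return sum(xi * wij.conjugate() for xi, wij in zip(x, w))

x = [cmath.rect(1.0, 0.3), cmath.rect(0.5, 1.3)]   # (amplitude, phase) pairs
w = [cmath.rect(0.8, 0.1), cmath.rect(0.8, 1.1)]

net = net_input(x, w)

# Shifting every feature phase by the same constant rotates the net input
# but leaves its amplitude untouched: the response depends only on the
# phases of the features relative to the weights.
x_shifted = [xi * cmath.rect(1.0, 2.0) for xi in x]
net_shifted = net_input(x_shifted, w)
assert math.isclose(abs(net), abs(net_shifted))
assert math.isclose(cmath.phase(net_shifted), cmath.phase(net) + 2.0)
```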
The net input is passed through a squashing nonlinearity that maps the amplitude of the response from the range 0 → ∞ to 0 → 1 but leaves the phase unaffected:

$$y_j = \frac{\mathrm{net}_j}{|\mathrm{net}_j|} \left( 1 - e^{-|\mathrm{net}_j|^2} \right)$$

The flow of activation from the hidden layer to the feature layer follows the same dynamics, although in the current implementation the amplitudes of the features are clamped, hence the top-down flow affects only the phases. One could imagine a more general architecture in which the relaxation process determined not only the phase values, but cleaned up noise in the feature amplitudes as well. The intuition underlying the activation rule is as follows. The activity of a hidden unit, b_j, should be monotonically related to how well the feature response pattern matches the hidden unit weight vector, just as in the standard real-valued activation rule. Indeed, one can readily see that if the feature and weight phases are equal (p_i = theta_ij), the rule for b_j reduces to the real-valued case. Even if the feature and weight phases differ by a constant (p_i = theta_ij + c), b_j is unaffected. This is a critical property of the activation rule: because absolute phase values have no intrinsic meaning, the response of a unit should depend only on the relative phases. The activation rule achieves this by essentially ignoring the average difference in phase between the feature units and the weights. The hidden phase, q_j, reflects this average difference.

5 LEARNING ALGORITHM

During training, we would like the hidden units to learn to detect configurations of features that reliably indicate phase relationships among the features. We have experimented with a variety of training algorithms. The one with which we have had greatest success involves running the network for a fixed number of iterations and, after each iteration, attempting to adjust the weights so that the feature phase pattern will match a target phase pattern. Each training trial proceeds as follows:

1.
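The activation rule and squashing nonlinearity above can be written compactly with complex arithmetic. The sketch below is our own illustration, not the authors' code; the variable names (`a`, `p`, `rho`, `theta`) follow the symbols in the text, and it demonstrates the key property that a constant shift of all feature phases leaves the hidden amplitude unchanged.

```python
import numpy as np

def hidden_response(a, p, rho, theta):
    """Response of one hidden unit given feature amplitudes/phases (a, p)
    and incoming weight amplitudes/phases (rho, theta).

    Returns (b, q): squashed amplitude in [0, 1) and the hidden phase.
    """
    x = a * np.exp(1j * p)            # feature responses in complex form
    w = rho * np.exp(1j * theta)      # weights in complex form
    # Complex dot product: sum_i a_i rho_ij e^{i (p_i - theta_ij)}.
    net = np.sum(x * np.conj(w))
    mag = np.abs(net)                 # matches the bracketed square-root term
    b = 1.0 - np.exp(-mag ** 2)       # squash amplitude from [0, inf) to [0, 1)
    q = np.angle(net)                 # matches the arctan term
    return b, q
```

Shifting every feature phase by the same constant leaves b unchanged and shifts q by that constant, which is exactly the invariance to absolute phase described in the text.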
A training example is generated at random. This involves selecting two contours and instantiating them in an image. The features of one contour have target phase 0° and the features of the other contour have target phase 180°.

2. The training example is presented to MAGIC by clamping the amplitude of a feature unit to 1.0 if its corresponding image feature is present, or 0.0 otherwise. The phases of the feature units are set to random values in the range 0° to 360°.

3. Activity is allowed to flow from the feature units to the hidden units and back to the feature units. Because the feature amplitudes are clamped, they are unaffected.

4. The new phase pattern over the feature units is compared to the target phase pattern (see step 1), and an error measure is computed:

$$E = -\Bigl( \sum_i a_i \cos(p_i - \hat{p}_i) \Bigr)^2 - \Bigl( \sum_i a_i \sin(p_i - \hat{p}_i) \Bigr)^2$$

where p-hat is the target phase pattern. This error ignores the absolute difference between the target and actual phases. That is, E is minimized when p_i - p-hat_i is a constant for all i, regardless of the value of that constant.

5. Using a generalization of back propagation to complex valued units, error gradients are computed for the feature-to-hidden and hidden-to-feature weights.

6. Steps 3-5 are repeated for a maximum of 30 iterations. The trial is terminated if the error increases on five consecutive iterations.

7. Weights are updated by an amount proportional to the average error gradient over iterations.

Learning is more robust when the feature-to-hidden weights are constrained to be symmetric with the hidden-to-feature weights. For complex weights, symmetry means that the weight from feature unit i to hidden unit j is the complex conjugate of the weight from hidden unit j to feature unit i. Weight symmetry ensures that MAGIC will converge to a fixed point. (The proof is based on discrete-time update and a two-layer architecture with sequential layer updates and no intralayer connections.
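The error measure of step 4 is easy to compute directly. The sketch below is ours (with `p_hat` standing for the target pattern); it checks the property stated in the text: any constant phase offset achieves the same minimal error, which works out to the negative of the squared total amplitude.

```python
import numpy as np

def phase_error(a, p, p_hat):
    """Error measure of step 4: depends only on the relative phase
    between the actual pattern p and the target pattern p_hat."""
    c = np.sum(a * np.cos(p - p_hat))
    s = np.sum(a * np.sin(p - p_hat))
    return -c ** 2 - s ** 2
```

For p = p_hat + const the error reaches its minimum, -(sum_i a_i)^2, so the network is never penalized for a global rotation of all phases.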
) Simulations reported below use a learning rate of .005 for the amplitudes and 0.02 for the phases. About 10,000 learning trials are required for stable performance, although MAGIC rapidly picks up on the most salient aspects of the domain.

6 SIMULATION RESULTS

We trained a network with 20 hidden units per pool on images containing either two rectangles, two diamonds, or a rectangle and a diamond. The shapes were of varying size and appeared in various locations. A subset of the resulting weights are shown in Figure 2. Each hidden unit attempts to detect and reinstantiate activity patterns that match its weights. One clear and prevalent pattern in the weights is the collinear arrangement of segments of a given orientation, all having the same phase value. When a hidden unit having weights of this form responds to a patch of the feature array, it tries to align the phases of the patch with the phases of its weight vector. By synchronizing the phases of features, it acts to group the features. Thus, one can interpret the weight vectors as the rules by which features are grouped.

Figure 2: Sample of feature-to-hidden weights learned by MAGIC. The area of a circle represents the amplitude of a weight, the orientation of the internal tick mark represents the phase angle. The weights are arranged such that the connections into each hidden unit are presented on a light gray background. Each hidden unit has a total of 64 incoming weights: 4 × 4 locations in its receptive field and four feature types at each location. The weights are further grouped by feature type, and for each feature type they are arranged in a 4 × 4 pattern homologous to the image patch itself.
Whereas traditional grouping principles indicate the conditions under which features should be bound together as part of the same object, the grouping principles learned by MAGIC also indicate when features should be segregated into different objects. For example, the weights of the vertical and horizontal segments are generally 180° out of phase with the diagonal segments. This allows MAGIC to segregate the vertical and horizontal features of a rectangle from the diagonal features of a diamond. We had anticipated that the weights to each hidden unit would contain two phase values at most because each image patch contains at most two objects. However, some units make use of three or more phases, suggesting that the hidden unit is performing several distinct functions. As is the usual case with hidden unit weights, these patterns are difficult to interpret.

Figure 3 presents an example of the network segmenting an image. The image contains two diamonds. The top left panel shows the features of the diamonds and their initial random phases. The succeeding panels show the network's response during the relaxation process. The lower right panel shows the network response at equilibrium. Features of each object have been assigned a uniform phase, and the two objects are 180° out of phase. The task here may appear simple, but it is quite challenging due to the illusory diamond generated by the overlapping diamonds.

[Figure 3 panels: Iteration 0, Iteration 2, Iteration 4, Iteration 6, Iteration 10, Iteration 25]

Figure 3: An example of MAGIC segmenting an image. The "iteration" refers to the number of times activity has flowed from the feature units to the hidden units and back. The phase value of a feature is represented by a gray level.
The periodic phase continuum can only be approximated by the linear gray level continuum, but the basic information is conveyed nonetheless.

7 CURRENT DIRECTIONS

We are currently extending MAGIC in several directions, which we outline here.

• A natural principle for the hierarchical decomposition of objects emerges from the relative frequency of feature configurations during training. More frequent configurations result in a robust hidden representation, and hence the features forming these configurations will be tightly coupled. A coarse quantization of phases will lead to parses of the image in which only the highest frequency configurations are considered as "objects." Finer quantizations will lead to a further decomposition of the image. Thus, the continuous phase representation allows for the construction of hierarchical descriptions of objects.

• Spatially local grouping principles are unlikely to be sufficient for the image segmentation task. Indeed, we have encountered incorrect solutions produced by MAGIC that are locally consistent but globally inconsistent. To solve this problem, we are investigating an architecture in which the image is processed at several spatial scales simultaneously.

• Simulations are also underway to examine MAGIC's performance on real-world images (overlapping handwritten letters and digits), where it is somewhat less clear to which types of patterns the hidden units should respond.

• Zemel, Williams, and Mozer (to appear) have proposed a mathematical framework that, with slight modifications to the model, allows it to be interpreted as a mean-field approximation to a stochastic phase model.

• Behrmann, Zemel, and Mozer (to appear) are conducting psychological experiments to examine whether limitations of the model match human limitations.

Acknowledgements

This research was supported by NSF Presidential Young Investigator award IRI-9058450, grant 90-21 from the James S.
McDonnell Foundation, and DEC external research grant 1250 to MM, and by a National Sciences and Engineering Research Council Postgraduate Scholarship to RZ. Our thanks to Paul Smolensky, Chris Williams, Geoffrey Hinton, and Jürgen Schmidhuber for helpful comments regarding this work.

References

Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., & Reitboeck, H. J. (1988). Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics, 60, 121-130.
Goebel, R. (1991). An oscillatory neural network model of visual attention, pattern recognition, and response generation. Manuscript in preparation.
Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit intercolumnar synchronization which reflects global stimulus properties. Nature (London), 338, 334-337.
Guzman, A. (1968). Decomposition of a visual scene into three-dimensional bodies. AFIPS Fall Joint Computer Conference, 33, 291-304.
Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review. In press.
Kanade, T. (1981). Recovery of the three-dimensional shape of an object from a single view. Artificial Intelligence, 17, 409-460.
Lowe, D. G. (1985). Perceptual Organization and Visual Recognition. Boston: Kluwer Academic Publishers.
Lowe, D. G., & Binford, T. O. (1982). Segmentation and aggregation: An approach to figure-ground phenomena. In Proceedings of the DARPA IU Workshop (pp. 168-178). Palo Alto, CA.
Marr, D. (1982). Vision. San Francisco: Freeman.
Rock, I., & Palmer, S. E. (1990). The legacy of Gestalt psychology. Scientific American, 263, 84-90.
von der Malsburg, C. (1981). The correlation theory of brain function (Internal Report 81-2). Göttingen: Department of Neurobiology, Max Planck Institute for Biophysical Chemistry.
von der Malsburg, C., & Schneider, W. (1986). A neural cocktail-party processor.
Biological Cybernetics, 54, 29-40.
Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model

Menashe Dornay*, Yoji Uno**, Mitsuo Kawato*, Ryoji Suzuki**

*Cognitive Processes Department, ATR Auditory and Visual Perception Research Laboratories, Sanpeidani, Inuidani, Seika-Cho, Soraku-Gun, Kyoto 619-02 Japan.
**Department of Mathematical Engineering and Information Physics, Faculty of Engineering, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113 Japan.

Abstract

This work discusses various optimization techniques which were proposed in models for controlling arm movements. In particular, the minimum-muscle-tension-change model is investigated. A dynamic simulator of the monkey's arm, including seventeen single and double joint muscles, is utilized to generate horizontal hand movements. The hand trajectories produced by this algorithm are discussed.

1 INTRODUCTION

To perform a voluntary hand movement, the primate nervous system must solve the following problems: (A) Which trajectory (hand path and velocity) should be used while moving the hand from the initial to the desired position. (B) What muscle forces should be generated. Those two problems are termed "ill-posed" because they can be solved in an infinite number of ways. The interesting question to us is: what strategy does the nervous system use while choosing a specific solution for these problems? The chosen solutions must comply with the known experimental data: human and monkey free horizontal multi-joint hand movements have straight or gently curved paths. The hand velocity profiles are always roughly bell shaped (Bizzi & Abend 1986).

627
628 Dornay, Uno, Kawato, and Suzuki

1.1 THE MINIMUM-JERK MODEL

Flash and Hogan (1985) proposed that a global kinematic optimization approach, the minimum-jerk model, defines a solution for the trajectory determination problem (problem A).
Using this strategy, the nervous system chooses the (unique) smoothest trajectory of the hand for any horizontal movement, without having to deal with the structure or dynamics of the arm. The minimum-jerk model produces reasonable approximations for hand trajectories in unconstrained point-to-point movements in the horizontal plane in front of the body (Flash & Hogan 1985; Morasso 1981; Uno et al. 1989a). It fails to describe, however, some important experimental findings for human arm movements (Uno et al. 1989a).

1.2 THE EQUILIBRIUM-TRAJECTORY HYPOTHESIS

According to the equilibrium-trajectory hypothesis (Feldman 1966), the nervous system generates movements by a gradual change in the equilibrium posture of the hand: at all times during the execution of a movement the muscle forces define a stable posture which acts as a point of attraction in the configurational space of the limb. The actual hand movement is the realized trajectory. The realized hand trajectory is usually different from the attracting pre-planned virtual trajectory (Hogan 1984). Simulations by Flash (1987) have suggested that realistic multi-joint arm movements at moderate speed can be generated by moving the hand equilibrium position along a pre-planned minimum-jerk virtual trajectory. The interactions of the dynamic properties of the arm and the attracting virtual trajectory together create the actual realized trajectory. Flash did not suggest a solution to problem (B). A static local optimization algorithm related to the equilibrium-trajectory hypothesis and called backdriving was proposed by Mussa-Ivaldi et al. (1991). This algorithm can be used to solve problem (B) only after the virtual trajectory is known. The virtual trajectory is not necessarily a minimum-jerk trajectory.
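For a straight point-to-point reach, the minimum-jerk model admits a well-known closed-form solution (Flash & Hogan 1985): each coordinate follows x(t) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5) with tau = t/tf, which produces the symmetric, bell-shaped velocity profile mentioned above. A minimal sketch (our illustration, not code from the paper):

```python
def minimum_jerk(x0, xf, tf, t):
    """Closed-form minimum-jerk position at time t for one coordinate.
    Velocity and acceleration are zero at both endpoints, and the speed
    profile is bell shaped and symmetric about the movement midpoint."""
    tau = t / tf
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

Because the solution depends only on the endpoints and the duration, it embodies exactly the property criticized later in the paper: it ignores the structure and dynamics of the arm entirely.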
Driving the arm from a current equilibrium position to the next one on the virtual trajectory is performed in two steps: 1) simulate a passive displacement of the arm to the new position, and 2) update the muscle forces so as to eliminate the induced hand force. A unique active change (step 2) is chosen by finding those muscle forces which minimize the change in the potential energy stored in the muscles. Using a static model of the monkey's arm, the first author has analyzed this sequential computational approach, including a solution for both the trajectory determination (A) and the muscle forces (B) problems (Dornay 1990, 1991a, 1991b). The equilibrium-trajectory hypothesis which uses the minimum-jerk model was criticized by Katayama and Kawato (in preparation). According to their recent findings, the values of the dynamic stiffness used by Flash (1987) are too high to be realistic. They have found that a very complex virtual trajectory, completely different from the one predicted by the minimum-jerk model, is needed for coding realistic hand movements.

2 GLOBAL DYNAMIC OPTIMIZATIONS

A set of global dynamic optimizations have been proposed by Uno et al. (1989a, 1989b). Uno et al. suggested that the dynamic properties of the arm must be considered by any algorithm for controlling hand movements. They also proposed that the hand trajectory and the motor commands (joint torques, muscle tensions, etc.) are computed in parallel.

2.1 THE MINIMUM-TORQUE-CHANGE MODEL

Uno et al. (1989a) have proposed the minimum-torque-change model. The model proposes that the hand trajectory and the joint torques are determined simultaneously, while the algorithm globally minimizes the rate of change of the joint torques. The minimum-torque-change model was criticized by Flash (1990), saying that the rotary inertia used was not realistic.
If Flash's inertia values are used, then the hand path predicted by the minimum-torque-change model is curved (Flash 1990).

2.2 THE MINIMUM-MUSCLE-TENSION-CHANGE MODEL

The minimum-muscle-tension-change model (Uno et al. 1989b, Dornay et al. 1991) is a parallel dynamic optimization approach in which the trajectory determination problem (A) and the muscle force generation problem (B) are solved simultaneously. No explicit trajectory is imposed on the hand, except that it must reach the final desired state (position, velocity, etc.) in a pre-specified time. The numerical solution used is a "penalty" method, in which the controller globally minimizes, by iterations, an energy function E:

$$E = E_D + \lambda E_S \qquad (1)$$

E is the energy that must be minimized in iterations. E_D is a collection of hard constraints, for example that the hand must reach the desired position at the specified time. E_S is a smoothness constraint, here the minimum-muscle-tension-change criterion. lambda is a regularization function that needs to become smaller and smaller as the number of iterations increases. This is a key point because the hard constraints must be strictly satisfied at the end of the iterative process. epsilon is a small rate term. The smoothness constraint E_S is the minimum-muscle-tension-change criterion, defined as:

$$E_S = \int_{t_0}^{t_f} \sum_{i=1}^{n} \left( \frac{df_i}{dt} \right)^2 dt \qquad (2)$$

f_i is the tension of muscle i, n is the total number of muscles, t_0 is the initial time and t_f is the final time of the movement. Preliminary studies have shown (Uno et al. 1989b) that the minimum-muscle-tension-change model can simulate reasonable hand movements.

3 THE MONKEY'S ARM MODEL

The model used was recently described (Dornay 1991a; Dornay et al. 1991). It is based on an anatomical study using the Rhesus monkey. Attachments of 17 shoulder, elbow and double-joint muscles were marked on the skeleton. The skeleton was cleaned and reassembled to a natural configuration of a monkey during horizontal arm movements (Fig. 1).
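Discretizing equation (2) over a tension trajectory sampled at interval dt, and combining it with a hard-constraint term as in equation (1), might look like the sketch below. The function names are our own, and the way lambda is scheduled toward zero is an assumption; the paper's actual optimizer is not reproduced here.

```python
import numpy as np

def tension_change_cost(F, dt):
    """Discrete version of E_S in equation (2): integrate the squared
    rate of change of each muscle tension over the movement.
    F has shape (timesteps, n_muscles)."""
    dF = np.diff(F, axis=0) / dt          # df_i/dt at each time step
    return float(np.sum(dF ** 2) * dt)

def penalty_energy(e_hard, F, dt, lam):
    """E = E_D + lam * E_S, as in equation (1); lam is reduced toward
    zero over iterations so the hard constraints E_D dominate and are
    strictly satisfied at convergence."""
    return e_hard + lam * tension_change_cost(F, dt)
```

Constant tensions incur zero smoothness cost, and any fluctuation in tension is penalized in proportion to its squared rate of change.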
X-ray analysis was used to create a simplified horizontal model of the arm (Fig. 1). Effective origins and insertions of the muscles were estimated by computer simulations to ensure the postural stability of the hand at equilibrium (Dornay 1991a). The simplified dynamic model used in this study is described in Dornay et al. (1991).

Figure 1: The Monkey's Arm Model. Top left is a ventral view of the skeleton. Middle right is a dorsal view. The bottom shows a top-down X-ray projection of the skeleton, with the axes marked on it. The photos were taken by Mr. H. S. Hall, MIT.

4 THE BEHAVIORAL TASK

We tried to simulate the horizontal arm movements reported by Uno et al. (1989a) for human subjects, using the monkey's model. Fig. 2 (left) shows a top view of the hand workspace of the monkey (light small dots). We used 7 hand positions defined by the following shoulder and elbow relative angles (in degrees): T1 {14, 122}; T2 {67, 100}; T3 {75, 64}; T4 {63, 45}; T5 {35, 54}; T6 {-5, 101} and T7 {-25, 45}. The joint angles used by Uno et al. (1989a) for T4 and T7, {77, 22} and {0, 0}, are out of the workspace of the monkey's hand (open circles in Fig. 2, left). We approximated them by our T4 and T7 (filled circles). The behavioral task that we simulated using the minimum-muscle-tension-change model consisted of the 4 trajectories shown in Fig. 2 (right).

5 SIMULATION RESULTS

Figure 2 (right) shows the paths (T2->T6), (T3->T6), (T4->T1), and (T7->T5). The paths T2->T6, T3->T6 and T7->T5 are slightly convex. Slightly convex paths for T2->T6 were reported in human movements by Flash (1987), Uno et al. (1989a) and Morasso (1981). Human T3->T6 paths have a small tendency to be slightly convex (Uno et al. 1989a; Flash 1987). In our simulations, T2->T6 and T3->T6 have slightly larger curvatures than those reported in humans.
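The hand positions T1-T7 are specified by shoulder and elbow angles; converting them to workspace coordinates is ordinary two-link planar forward kinematics. In the sketch below the link lengths are invented illustration values, not the monkey's measured segment lengths, and the convention that the elbow angle is measured relative to the upper arm is our assumption.

```python
import numpy as np

def hand_position(shoulder_deg, elbow_deg, l1=0.12, l2=0.18):
    """Two-link planar forward kinematics. The shoulder sits at the
    origin; l1 and l2 are hypothetical upper-arm and forearm lengths."""
    ts = np.radians(shoulder_deg)
    te = np.radians(elbow_deg)
    ex = l1 * np.cos(ts)               # elbow location
    ey = l1 * np.sin(ts)
    hx = ex + l2 * np.cos(ts + te)     # hand location
    hy = ey + l2 * np.sin(ts + te)
    return np.array([hx, hy])
```

With a fixed pair of link lengths, each angle pair maps to exactly one hand position, which is why the workspace in Fig. 2 (left) is a bounded region: angle pairs such as Uno et al.'s original {77, 22} can fall outside it.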
Human large movements from the side of the body to the front of the body similar to our T7->T5 were reported by Uno et al. (1989a). The path of these movements is convex and similar to our simulation results. The simulated path of T4->T1 is slightly curved to the left and then to the right, but roughly straight. The human T4->T1 paths look slightly straighter than in our simulations (Uno et al. 1989a; Flash 1987).

Figure 2: The Behavioral Task. The left side shows the hand workspace (small dots). The shoulder position and origin of coordinates (0, 0) is marked by +. The elbow location when the hand is on position T1 is marked by E. The right side shows 4 hand paths simulated by the minimum-muscle-tension-change model. Arrows indicate the directions of the movements.

Fig. 3 shows the corresponding simulated hand velocities. The velocity profiles have a single peak and are roughly bell shaped, like those reported for human subjects. The left side of the velocity profile of T4->T1 looks slightly irregular. The hand trajectories simulated here are in general closer to human data than those reported by us in the past (Dornay et al. 1991). In the current study we used a much slower protocol for reducing lambda than in the previous study, and we think that we are closer now to the optimal solution of the numerical calculation than in the previous study. Indeed, the hand velocity profiles and muscle tension profiles look smoother here than in the previous study. It is in general very difficult to guarantee that the optimal solution is achieved, unless an impractically large number of iterations is used. Fig. 4 (top, left) shows the way E_D and E_S of equation 1 change as a function of lambda for the trajectory T7->T5. Ideally,
both should reach a plateau when the optimal solution is reached. The muscle tensions simulated for T7->T5 are shown in Fig. 4. They look quite smooth.

Figure 3: The Hand Tangential Velocity.

6 DISCUSSION

Various control strategies have been proposed to explain the roughly straight hand trajectory shown by primates in planar reaching movements. The minimum-jerk model (Flash & Hogan 1985) takes into account only the desired hand movement, and completely ignores the dynamic properties of the arm. This simplified approach is a good approximation for many movements, but cannot explain some experimental evidence (Uno et al. 1989a). A more demanding approach, the minimum-torque-change model (Uno et al. 1989a), takes into account the dynamics of the arm, but emphasizes only the torques at the joints, and completely ignores the properties of the muscles. This model was criticized for producing unrealistic hand trajectories when proper inertia values are used (Flash 1990). A third and more complicated model is the minimum-muscle-tension-change model (Uno et al. 1989b, Dornay et al. 1991). The minimum-muscle-tension-change model was shown here to produce gently curved hand movements, which although not identical, are quite close to the primate behavior. In the current study the initial and final tensions of the muscles were assumed to be zero. This is not a realistic assumption since even a static hand at an equilibrium is expected to have some stiffness. Using the minimum-muscle-tension-change model with non-zero initial and final muscle tensions is a logical
study which we intend to test in the near future. Still, the minimum-muscle-tension-change model considers only the muscle moment-arms and momvels, and ignores the muscle length-tension curves. A more complicated model which we are studying now is the minimum-motor-command-change model, which includes the length-tension curves.

Figure 4: Numerical Analysis and Muscle Tensions for T7->T5. S = shoulder, E = elbow, D = double-joint muscle, e = extensor, f = flexor.

Acknowledgements

M. Dornay and M. Kawato would like to thank Drs. K. Nakane and E. Yodogawa, ATR, for their valuable help and support. Preparation of the paper was supported by a Human Frontier Science Program grant to M. Kawato.

References

1 E Bizzi & WK Abend (1986) Control of multijoint movements. In M. J. Cohen and F. Strumwasser (Eds.) Comparative Neurobiology: Modes of Communication in the Nervous System, John Wiley & Sons, pp. 255-277
2 M Dornay (1990) Control of movement and the postural stability of the monkey's arm. Proc. 3rd International Symposium on Bioelectronic and Molecular Electronic Devices, Kobe, Japan, December 18-20, pp. 101-102
3 M Dornay (1991a) Static analysis of posture and movement, using a 17-muscle model of the monkey's arm. ATR Technical Report TR-A-0109
4 M Dornay (1991b) Control of movement, postural stability, and muscle angular stiffness. Proc. IEEE Systems, Man and Cybernetics, Virginia, USA, pp. 1373-1379
5 M Dornay, Y Uno, M Kawato & R Suzuki (1991) Simulation of optimal movements using a 17-muscle model of the monkey's arm. Proc.
SICE 30th Annual Conference, ES-1-4, July 17-19, Yonezawa, Japan, pp. 919-922
6 AG Feldman (1966) Functional tuning of the nervous system with control of movement or maintenance of a steady posture. Biophysics, 11, pp. 766-775
7 T Flash & N Hogan (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci., 5, pp. 1688-1703
8 T Flash (1987) The control of hand equilibrium trajectories in multi-joint arm movements. Biol. Cybern., 57, pp. 257-274
9 T Flash (1990) The organization of human arm trajectory control. In J. Winters and S. Woo (Eds.) Multiple muscle systems: Biomechanics and movement organization, Springer-Verlag, pp. 282-301
10 N Hogan (1984) An organizing principle for a class of voluntary movements. J. Neurosci., 4, pp. 2745-2754
11 P Morasso (1981) Spatial control of arm movements. Experimental Brain Research, 42, pp. 223-227
12 FA Mussa-Ivaldi, P Morasso, N Hogan & E Bizzi (1991) Network models of motor systems with many degrees of freedom. In M.D. Fraser (Ed.) Advances in control networks and large scale parallel distributed processing models, Ablex Publ. Corp.
13 Y Uno, M Kawato & R Suzuki (1989a) Formation and control of optimal trajectory in human multijoint arm movement: minimum-torque-change model. Biol. Cybern., 61, pp. 89-101
14 Y Uno, R Suzuki & M Kawato (1989b) Minimum muscle-tension change model which reproduces human arm movement. Proceedings of the 4th Symposium on Biological and Physiological Engineering, pp. 299-302 (in Japanese)

PART X APPLICATIONS
Neural Network Analysis of Event Related Potentials and Electroencephalogram Predicts Vigilance

Rita Venturini, William W. Lytton, Terrence J. Sejnowski
Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA 92037

Abstract

Automated monitoring of vigilance in attention intensive tasks such as air traffic control or sonar operation is highly desirable. As the operator monitors the instrument, the instrument would monitor the operator, insuring against lapses. We have taken a first step toward this goal by using feedforward neural networks trained with backpropagation to interpret event related potentials (ERPs) and electroencephalogram (EEG) associated with periods of high and low vigilance. The accuracy of our system on an ERP data set averaged over 28 minutes was 96%, better than the 83% accuracy obtained using linear discriminant analysis. Practical vigilance monitoring will require prediction over shorter time periods. We were able to average the ERP over as little as 2 minutes and still get 90% correct prediction of a vigilance measure. Additionally, we achieved similarly good performance using segments of EEG power spectrum as short as 56 sec.

1 INTRODUCTION

Many tasks in society demand sustained attention to minimally varying stimuli over a long period of time. Detection of failure in vigilance during such tasks would be of enormous value. Different physiological variables like electroencephalogram

651
652 Venturini, Lytton, and Sejnowski

(EEG), electro-oculogram (EOG), heart rate, and pulse correlate to some extent with the level of attention (1, 2, 3). Profound changes in the appearance and spectrum of the EEG with sleep and drowsiness are well known. However, there is no agreement as to which EEG bands can best predict changes in vigilance. Recent studies (4) seem to indicate that there is a strong correlation between changes in several EEG power spectrum frequencies and attentional level in subjects performing a sustained task.
Another measure that has been widely assessed in this context involves the use of event-related potentials (ERPs) (5). These are voltage changes in the ongoing EEG that are time locked to sensory, motor, or cognitive events. They are usually too small to be recognized in the background electrical activity. The ERP signal is typically extracted from the background noise of the EEG by averaging over many trials. The ERP waveform remains constant for each repetition of the event, whereas the background EEG activity has random amplitude. Late cognitive event-related potentials, like the P300, are well known to be related to attentional allocation (6, 7, 8). Unfortunately, these ERPs are evoked only when the subject is attending to a stimulus. This condition is not present in a monitoring situation, where monitoring is done precisely because the time of stimulus occurrence is unknown. Instead, shorter latency responses, evoked from unobtrusive task-irrelevant signals, need to be evaluated. Data from a sonar simulation task was obtained from S. Makeig et al. (9). They presented auditory targets only slightly louder than background noise to 13 male United States Navy personnel. Other tones, which the subjects were instructed to ignore, appeared randomly every 2-4 seconds (task-irrelevant probes). Background EEG and ERPs were both collected and analyzed. The ERPs evoked by the task-irrelevant probes were classified into two groups depending on whether they appeared before a correctly identified target (pre-hit ERPs) or a missed target (pre-lapse ERPs). Pre-lapse ERPs showed a relative increase of the P2 and N2 components and a decrease of the N1 deflection. N1, N2 and P2 designate the sign and time of peak of components in the ERP. Prior linear discriminant analysis (LDA) performed on the averages of each session showed 83% correct classification using ERPs obtained from a single scalp site.
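The signal-to-noise argument behind ERP averaging is easy to demonstrate: the time-locked component survives the average while zero-mean background EEG shrinks roughly as 1/sqrt(N). The toy illustration below uses entirely synthetic signals; the waveform shape, noise level, and trial count are invented for the demonstration, not taken from Makeig's data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.6, 40)                 # 600 ms window, 40 samples
erp = 2.0 * np.exp(-((t - 0.3) / 0.05) ** 2)  # synthetic event-locked bump
noise_sd = 3.0                                # noise larger than the signal
trials = erp + rng.normal(0.0, noise_sd, size=(400, t.size))

avg = trials.mean(axis=0)                     # average over 400 trials
single_err = float(np.abs(trials[0] - erp).mean())
avg_err = float(np.abs(avg - erp).mean())
# Noise in the average has standard deviation ~3/sqrt(400) = 0.15, so the
# averaged waveform tracks the underlying ERP far more closely than any
# single trial does.
```

This is why the paper can classify session-long averages reliably but must work harder (shorter averaging windows, hence noisier inputs) to make vigilance prediction practical.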
Thus, the pre-hit and pre-lapse ERPs differed enough to permit classification by averaging over a large enough sample. In addition, EEG power spectra over 81 frequency bands were computed. EEG classification was made on the basis of a continuous measure of performance, the error rate, calculated as the mean of hits and lapses in a 32-sec moving window. Analysis of the EEG power spectrum (9) revealed that significant coherence is observed between various EEG frequencies and performance.

2 METHOD

2.1 THE DATA SET

Two different groups of input data were used (ERPs and EEG). For the former, a 600-msec sample of task-irrelevant-probe ERP was reduced to 40 points after low-pass filtering. We normalized the data on the basis of the maximum and minimum values of the entire set, maintaining amplitude variability. A single ERP was classified as being pre-hit or pre-lapse based on the subject's performance on the next target tone. The EEG power spectrum, obtained every 1.6 seconds, was used as input to predict a continuous estimate of vigilance (error rate), obtained by averaging the subject's performance during a 32-second window (normalized between -1 and 1). The five frequencies used (3, 10, 13, 19 and 39 Hz) had previously been shown to be most strongly related to error-rate changes (9). Each frequency was individually normalized to range between -1 and 1.

2.2 THE NETWORK

Feedforward networks were trained with backpropagation. We compared two-layer networks to three-layer networks, varying the number of hidden units between 2 and 8 in different simulations. Each architecture was trained ten times on the same task, resetting the weights each time with a different random seed. Initial simulations were performed to select network parameter values. We used a learning rate of 0.3 divided by the fan-in and weights initialized in the range ±0.3. For the ERP data we used a jackknife procedure.
For each simulation, a single pattern was excluded from the training set and considered to be the test pattern. Each pattern in turn was removed and used as the test pattern while the others were used for training. The EEG data set was not as limited as the ERP one, and those simulations were performed using half of the data as the training set and the remaining half as the test set. Therefore, for subjects that had two runs each, the training and testing data came from separate sessions.

3 RESULTS

3.1 ERPs

The first simulation was done using a two-layer network to assess the adequacy of the neural network approach relative to the previous LDA results. The data set consisted of the grand averages of pre-hits and pre-lapses from a single scalp site (Cz) of 9 subjects, three of them with a double session, giving a total of 24 patterns. The jackknife procedure was done in two different ways. First, each ERP was considered individually, as had been done in the LDA study (pattern-jackknife). Second, all the ERPs of a single subject were grouped together and removed together to form the test set (subject-jackknife). The network was trained for 10,000 epochs before testing. Figure 1 shows the weights for the 24 networks, each trained with a set of ERPs obtained by removing a single ERP. The "waveform" of the weight values corresponds to features common to the pre-hit ERPs and to the negative of features common to the pre-lapse ERPs. Classification of patterns by the network was considerably more accurate than the 83% correct that had been obtained with the previous LDA analysis: 96% correct evaluation was seen in seven of the ten networks started with different random weight selections. The remaining three networks produced 92% correct responses (Fig. 2). The same two patterns were missed in all cases. Using hidden units did not improve generalization. The subject-jackknife results were very similar: 96% correct in two of ten networks and 92% in the remaining eight (Fig. 2).
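The pattern-jackknife used here amounts to leave-one-out cross-validation: each of the N averaged ERPs is held out once as the test pattern while the model is trained on the rest. A sketch, where `train` and `classify` are hypothetical stand-ins for the backpropagation network (the toy nearest-mean model below is only there to make the sketch runnable):

```python
# Leave-one-out ("pattern-jackknife") evaluation: hold out each pattern once.
def jackknife_splits(patterns):
    """Yield (training_set, test_pattern) pairs, one per pattern."""
    for i in range(len(patterns)):
        yield patterns[:i] + patterns[i + 1:], patterns[i]

def leave_one_out_accuracy(patterns, labels, train, classify):
    correct = 0
    for i, (train_set, test_pat) in enumerate(jackknife_splits(patterns)):
        train_labels = labels[:i] + labels[i + 1:]
        model = train(train_set, train_labels)
        correct += classify(model, test_pat) == labels[i]
    return correct / len(patterns)

# Toy stand-in model: classify by distance to each class mean.
def train(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos), sum(neg) / len(neg))

def classify(model, x):
    pos_mean, neg_mean = model
    return 1 if abs(x - pos_mean) < abs(x - neg_mean) else 0

acc = leave_one_out_accuracy([0.1, 0.2, 0.9, 1.0], [0, 0, 1, 1], train, classify)
print(acc)  # 1.0 on this trivially separable toy set
```

The subject-jackknife variant differs only in that all patterns belonging to one subject are removed together instead of a single pattern at a time.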
Thus, there was a somewhat increased difficulty in generalizing across individuals. The ability of the network to generalize over a shorter period of time was tested by progressively decreasing the number of trials used for testing, using a network trained on the grand average ERPs.

Figure 1: Weights from 24 two-layer networks trained from different initial weights; each value corresponds to a sample point in time in the input data.

Figure 2: Generalization performance in pattern (left) and subject (right) jackknifes, using two-layer and three-layer networks with different numbers of hidden units. Each bar represents a different random start of the network.

Figure 3: Generalization for testing subaverages made using varying numbers of individual ERPs.

Subaverages were formed using from 1 to 160 individual ERPs (Figure 3). Performance with a single ERP is at chance. With 16 ERPs, corresponding to about 2 minutes, 90% accuracy was obtained.

3.2 EEG

We first report results using a two-layer network to compare with the previous LDA analysis. Five power-spectrum frequency bands from a single scalp site (Cz) were used as input data. The error rate was averaged over 32 seconds at 1.6-second intervals. In the first set of runs both error rate and power spectra were filtered using a two-minute time window. Good results could be obtained in cases where a subject made errors more than 40% of the time (Fig. 4). When the subject made few errors, training was more difficult and generalization was poor. These results were virtually identical to the LDA ones.
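The continuous vigilance measure used for the EEG experiments is a windowed error rate: performance is sampled every 1.6 s and averaged over a 32 s moving window, i.e. 20 consecutive samples. A sketch, with a made-up performance sequence (1 = lapse, 0 = hit):

```python
# Moving-window error rate as described in the text: 32 s window at a
# 1.6 s sampling interval gives 20 samples per window. The `misses`
# sequence below is invented for illustration.
WINDOW = 20  # 32 s / 1.6 s per sample

def moving_error_rate(misses, window=WINDOW):
    """Error rate for every position where a full window is available."""
    return [
        sum(misses[i:i + window]) / window
        for i in range(len(misses) - window + 1)
    ]

misses = [0] * 15 + [1] * 10 + [0] * 15   # a transient lapse period
rates = moving_error_rate(misses)
print(max(rates))  # peaks at 0.5 when the window fully covers the lapses
```

Rescaling these rates to the [-1, 1] range used as the network target is then just `2 * r - 1` per value.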
The lack of improvement is probably due to the fact that the LDA performance was already close to 90% on this data set. Use of three-layer networks did not improve the generalization performance. The use of a running average includes information in the EEG after the time at which the network is making a prediction. Causal prediction was therefore attempted using multiple power spectra taken at 1.6-sec intervals over the past 56 sec to predict the upcoming error rate. The results for one subject are shown in Figure 5. The predicted error rate differs from the target with a root-mean-square error of 0.3.

Figure 4: Generalization results predicting error rate from EEG. The dotted line is the network output, the solid line the desired value.

Figure 5: Causal prediction of error rate from EEG. The dotted line is the network output, the solid line the desired value.

Figure 6: Weights from a two-layer causal prediction network. Each bar, within each frequency band (3.05, 9.15, 13.4, 19.5, and 39.0 Hz), represents the influence on the output unit of power in that band at previous times ranging from 1 sec (right bar) to 56 sec (left bar).

Figure 6 shows the weights from a two-layer network trained to predict instantaneous error rate. The network mostly uses information from the 3.05 Hz and 13.4 Hz frequency bands in predicting error-rate changes. The values of the 3.05 Hz weights have a strong peak at the most recent time steps, indicating that power in this frequency band predicts the state of vigilance on a short time scale.
The alternating positive and negative weights present in the 13.4 Hz set suggest that rapid changes in power in this band might be predictive of vigilance (i.e., the derivative of the power signal).

4 DISCUSSION

These results indicate that neural networks could be useful in analyzing electrophysiological measures. The EEG results suggest that the analysis can be applied to detect fluctuations in the attentional level of subjects in real time. EEG analysis could also be a useful tool for understanding the changes that occur in the electrical activity of the brain during different states of attention. In the ERP analysis, the lack of improvement with the introduction of hidden units might be due to the small size of the data set. If the data set is too small, adding hidden units and connections may reduce the ability to find a general solution to the problem. The ERP subject-jackknife results point out that inter-subject generalization is possible. This suggests the possibility of preparing a pre-programmed network that could be used with multiple subjects rather than training the network for each individual. The subaverages results suggest that detection is possible in a relatively brief time interval. ERPs could be a useful complement to the EEG analysis in order to obtain an on-line detector of attentional changes. Future research will explore the combination of these two measures along with EOG and heart rate. The idea is to let the model choose different network architectures and parameters, depending on the specific subtask.

ACKNOWLEDGEMENTS

We would like to thank Scott Makeig and Mark Inlow, Cognitive Performance and Psychophysiology Department, Naval Health Research Center, San Diego, for providing the data and for invaluable discussions, and Y. Le Cun and L.Y. Bottou from Neuristique, who provided the SN2 simulator.
RV was supported by the Ministry of Public Instruction, Italy; WWL by a Physician Scientist Award, National Institute of Aging; TJS is an Investigator with the Howard Hughes Medical Institute. Research was supported by ONR Grant N00014-91-J-1674.

REFERENCES

1. Belyavin, A. and Wright, N.A. (1987). Changes in electrical activity of the brain with vigilance. Electroencephalography and Clinical Neuroscience, 66:137-144.
2. Torsvall, L. and Akerstedt, T. (1988). Extreme sleepiness: quantification of EOG and spectral EEG parameters. Int. J. Neuroscience, 38:435-441.
3. Fruhstorfer, H., Langanke, P., Meinzer, K., Peter, J.H., and Pfaff, U. (1977). Neurophysiological vigilance indicators and operational analysis of a train vigilance monitoring device: a laboratory and field study. In R.R. Mackie (Ed.), Vigilance: Theory, Operational Performance, and Physiological Correlates, 147-162. New York: Plenum Press.
4. Makeig, S. and Inlow, M. (1991). Lapses in alertness: coherence of fluctuations in performance and EEG spectrum. Cognitive Performance and Psychophysiology Department, NHRC, San Diego. Technical Report.
5. Fruhstorfer, H. and Bergstrom, R.M. (1969). Human vigilance and auditory evoked responses. Electroencephalography and Clinical Neurophysiology, 27:346-355.
6. Polich, J. (1989). Habituation of P300 from auditory stimuli. Psychobiology, 17:19-28.
7. Polich, J. (1987). Task difficulty, probability, and inter-stimulus interval as determinants of P300 from auditory stimuli. Electroencephalography and Clinical Neurophysiology, 68:311-320.
8. Polich, J. (1990). P300, probability, and interstimulus interval. Psychophysiology, 27:396-403.
9. Makeig, S., Elliot, F.S., Inlow, M. and Kobus, D.A. (1991). Predicting lapses in vigilance using brain evoked responses to irrelevant auditory probes. Cognitive Performance and Psychophysiology Department, NHRC, San Diego. Technical Report.
HARMONET: A Neural Net for Harmonizing Chorales in the Style of J.S. Bach

Hermann Hild (hhild@ira.uka.de), Johannes Feulner (johannes@ira.uka.de), Wolfram Menzel (menzel@ira.uka.de)
Institut für Logik, Komplexität und Deduktionssysteme, Am Fasanengarten 5, Universität Karlsruhe, W-7500 Karlsruhe 1, Germany

Abstract

HARMONET, a system employing connectionist networks for music processing, is presented. After being trained on some dozen Bach chorales using error backpropagation, the system is capable of producing four-part chorales in the style of J.S. Bach, given a one-part melody. Our system solves a musical real-world problem at a performance level appropriate for musical practice. HARMONET's power is based on (a) a new coding scheme capturing musically relevant information and (b) the integration of backpropagation and symbolic algorithms in a hierarchical system, combining the advantages of both.

1 INTRODUCTION

Neural approaches to music processing have been previously proposed (Lischka, 1989) and implemented (Mozer, 1991) (Todd, 1989). The promise neural networks offer is that they may shed some light on an aspect of human creativity that doesn't seem to be describable in terms of symbols and rules. Ultimately, what music is (or isn't) lies in the eye (or ear) of the beholder. The great composers, such as Bach or Mozart, learned and obeyed quite a number of rules, e.g. the famous prohibition of parallel fifths. But these rules alone do not suffice to characterize a personal or even historic style. An easy test is to generate music at random, using only schoolbook rules as constraints. The result is "error free" but aesthetically offensive.

Figure 1: The beginning of the chorale melody "Jesu, meine Zuversicht" and its harmonization by J.S. Bach.
To overcome this gap between obeying rules and producing music adhering to an accepted aesthetic standard, we propose HARMONET, which integrates symbolic algorithms and neural networks to compose four-part chorales in the style of J.S. Bach (1685-1750), given the one-part melody. The neural nets concentrate on the creative part of the task, being responsible for aesthetic conformance to the standard set by Bach in nearly 400 examples. Original Bach chorales are used as training data. Conventional algorithms do bookkeeping tasks like observing pitch ranges or preventing parallel fifths. HARMONET's level of performance approaches that of improvising church organists, making it applicable to musical practice.

2 TASK DEFINITION

The process of composing an accompaniment for a given chorale melody is called chorale harmonization. Typically, a chorale melody is a plain melody, often harmonized to be sung by a choir. Correspondingly, the four voices of a chorale harmonization are called soprano (the melody part), alto, tenor and bass. Figure 1 depicts an example of a chorale melody and its harmonization by J.S. Bach. For centuries, music students have been routinely taught to solve the task of chorale harmonization. Many theories and rules about "dos" and "don'ts" have been developed. However, the task of HARMONET is to learn to harmonize chorales from examples. Neural nets are used to find stylistically characteristic harmonic sequences and ornamentations.

3 SYSTEM OVERVIEW

Given a set of Bach chorales, our goal is to find an approximation f̂ of the quite complex function¹ f which maps chorale melodies into their harmonizations, as demonstrated by J.S. Bach on almost 400 examples. In the following sections we propose a decomposition of f into manageable subfunctions.

3.1 TASK DECOMPOSITION

The learning task is decomposed along two dimensions:

Different levels of abstraction.
The chord skeleton is obtained if eighth and sixteenth notes are viewed as omittable ornamentations. Furthermore, if the chords are conceived as harmonies with certain attributes such as "inversion" or "characteristic dissonances," the chorale is reducible to its harmonic skeleton, a thoroughbass-like representation (Figure 2).

Locality in time. The accompaniment is divided into smaller parts, each of which is learned independently by looking at some local context, a window. Treating small parts independently certainly hurts global consistency. Some of the dependencies lost can be regained if the current decision window additionally considers the outcome of its predecessors (external feedback). Figure 3 shows two consecutive windows cut out from the harmonic skeleton. To harmonize a chorale, HARMONET starts by learning the harmonic skeleton, which is then refined to the chord skeleton and finally augmented with ornamenting quavers (Figure 4, left side).

3.2 THE HARMONIC SKELETON

Chorales have a rich harmonic structure, which is mainly responsible for their "musical appearance." Thus, generating a good harmonic skeleton is the most important of HARMONET's subtasks. HARMONET creates a harmonic sequence by sweeping through the chorale melody and determining a harmony for each quarter note, considering its local context and the previously found harmonies as input. At each quarter-beat position t, the following information is extracted to form one training example:

[Window diagram: feedback harmonies H_{t-3}, H_{t-2}, H_{t-1}; pitches s_{t-1}, s_t, s_{t+1}; attributes phr_t, str_t; the target H_t shown boxed.]

The target to be learned (the harmony H_t at position t) is marked by the box. The input consists of the harmonic context to the left (the external feedback H_{t-3}, H_{t-2} and H_{t-1}) and the melodic context (pitches s_{t-1}, s_t and s_{t+1}).
phr_t contains information about the relative position of t to the beginning or end of a musical phrase. str_t is a boolean value indicating whether s_t is a stressed quarter.

¹To be sure, f is not a function but a relation, since there are many "legal" accompaniments for one melody. For simplicity, we view f as a function.

Figure 2: The chord and the harmonic skeleton of the chorale from Figure 1.

A harmony H_t has three components. Most importantly, the harmonic function relates the key of the harmony to the key of the piece. The inversion indicates the bass note of the harmony. The characteristic dissonances are notes which do not directly belong to the harmony, thus giving it additional tension. The coding of pitch is decisive for recognizing musically relevant regularities in the training examples. This problem is discussed in many places (Shepard, 1982) (Mozer, 1991). We developed a new coding scheme guided by the harmonic necessities of homophonic music pieces: a note s is represented as the set of harmonic functions that contain s, as shown below:

Fct.  T  D  S  Tp  Sp  Dp  DD  DP  TP  d  Vtp  SS
C     1  0  1  1   0   0   0   0   0   0  0    0
D     0  1  0  0   1   0   1   0   0   1  1    1
E     ...

T, D, S, Tp, etc. are standard musical abbreviations to denote harmonic functions. The resulting representation is distributed with respect to pitch. However, it is local with respect to harmonic functions. This allows the network to anticipate future harmonic developments even though there cannot be a lookahead for harmonies yet uncomposed. Besides the 12 input units for each of the pitches s_{t-1}, s_t and s_{t+1}, we need 12 + 5 + 3 =

Figure 3: The harmonic skeleton broken into local windows.
The harmony H_t, determined at quarter-beat position t, becomes part of the input of the window at position t + 1.

20 input units for each of the 3 components of the harmonies H_{t-3}, H_{t-2} and H_{t-1}, 9 units to code the phrase information phr_t, and 1 unit for the stress str_t. Thus our net has a total of 3 · 12 + 3 · 20 + 9 + 1 = 106 input units and 20 output units. We used one hidden layer with 70 units. In a more advanced version (Figure 4, right side), we use three nets (N1, N2, N3) in parallel, each of which was trained on windows of a different size. The harmonic function for which the majority of these three nets votes is passed to two subsequent nets (N4, N5) determining the chord inversion and characteristic dissonances of the harmony. Using windows of different sizes in parallel employs statistical information to solve the problem of choosing an appropriate window size.

3.3 THE CHORD SKELETON

The task at this level is to find the two middle parts (alto and tenor) given the soprano S of the chorale melody and the harmony H determined by the neural nets. Since H includes information about the chord inversion, the pitch of the bass (modulo its octave) is already given. The problem is tackled with a "generate and test" approach: symbolic algorithms select a "best" chord out of the set of all chords consistent with the given harmony H and common chorale constraints.

3.4 QUAVER ORNAMENTATIONS

In the last subtask, another net is taught how to add ornamenting eighths to the chord skeleton. The output of this network is the set of eighth notes (if any) by which a particular chord C_t can be augmented. The network's input describes the local context of C_t in terms of attributes such as the intervals between C_t and C_{t+1}, voice-leading characteristics, or the presence of eighths in previous chords.
[Figure 4 diagram: Chorale Melody → Determine Harmonies → Expand Harmonies to Chords → Insert Eighth Notes → Harmonized Chorale; right side: harmony components (harmonic function, inversion, characteristic dissonances).]

Figure 4: Left side: overall structure of HARMONET. Right side: a more specialized architecture with parallel and sequential nets (see text).

4 PERFORMANCE

HARMONET was trained separately on two sets of Bach chorales, each containing 20 chorales in major and minor keys, respectively. By passing the chorales through a window as explained above, each set amounted to approx. 1000 training examples. All nets were trained with the error backpropagation algorithm, needing 50 to 100 epochs to achieve reasonable convergence. Figures 5 and 6 show two harmonizations produced by HARMONET, given melodies which were not in the training set. An audience of music professionals judged the quality of these and other chorales produced by HARMONET to be at the level of an improvising organist. HARMONET also compares well to non-neural approaches. In Figure 6, HARMONET's accompaniment is shown for a chorale melody also used in the Ph.D. thesis of (Ebcioglu, 1986) to demonstrate the expert system "CHORAL".

Figure 5: A chorale in a major key ("Christus, der ist mein Leben") harmonized by HARMONET.
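The harmonization sweep with external feedback described in Section 3.2 can be sketched as follows. The `predict_harmony` argument is a hypothetical stand-in for the trained nets, and the padding harmony used before the start of the piece is an assumption of this sketch:

```python
# Sweep through the melody; at each quarter-beat t the input combines the
# melodic context (s_{t-1}, s_t, s_{t+1}) with the system's own previous
# decisions (H_{t-3}..H_{t-1}) as external feedback.
START = "T"  # assumed padding harmony for positions before the piece begins

def harmonize(melody, predict_harmony):
    harmonies = []
    for t in range(len(melody)):
        melodic = (
            melody[t - 1] if t > 0 else None,
            melody[t],
            melody[t + 1] if t + 1 < len(melody) else None,
        )
        # External feedback: the three previously produced harmonies.
        feedback = tuple(
            harmonies[i] if i >= 0 else START for i in range(t - 3, t)
        )
        harmonies.append(predict_harmony(melodic, feedback))
    return harmonies

# Toy stand-in "net": tonic for notes of the tonic triad, dominant otherwise.
out = harmonize(list("CDEFG"), lambda mel, fb: "T" if mel[1] in "CEG" else "D")
print(out)  # ['T', 'D', 'T', 'D', 'T']
```

Because each decision is fed back into the next window, errors can propagate; the majority vote over nets N1-N3 with different window sizes is one way the system mitigates that.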
Figure 6: "Happy Birthday" harmonized by HARMONET.

5 CONCLUSIONS

The music processing system HARMONET presented in this paper clearly shows that musical real-world applications are well within the reach of connectionist approaches. We believe that HARMONET owes much of its success to a clean task decomposition and a meaningful selection and representation of musically relevant features. By using a hybrid approach we allow the networks to concentrate on musical essentials instead of on structural constraints which may be hard for a network to learn but easy to code symbolically. The abstraction of chords to harmonies reduces the problem space and resembles a musician's approach to the problem. The "harmonic representation" of pitch shows the harmonic character of the given melody more explicitly. We have also experimented with replacing the neural nets in HARMONET by other learning techniques such as decision trees (ID3) or nearest-neighbor classification. However, as also reported for other tasks (Dietterich et al., 1990), they were outperformed by the neural nets. HARMONET is not a general music processing system; its architecture is designed to solve a quite difficult but also quite specific task. However, due to HARMONET's neural learning component, only a comparatively small amount of musical expert knowledge was necessary to design the system, making it easier to build and more flexible than a pure rule-based system.

Acknowledgements

We thank Heinz Braun, Heiko Harms and Gudrun Socher for many fruitful discussions and contributions to this research and our music lab.

References

J.S. Bach (Ed.: Bernhard Friedrich Fischer). 389 Choralgesänge für vierstimmigen Chor. Edition Breitkopf, Nr. 3765.

Dietterich, T.G., Hild, H., & Bakiri, G.
A comparative study of ID3 and Backpropagation for English Text-to-Speech Mapping. Proc. of the Seventh International Conference on Machine Learning (pp. 24-31). Kaufmann, 1990.

Ebcioglu, K. An Expert System for Harmonization of Chorales in the Style of J.S. Bach. Ph.D. Dissertation, Department of C.S., State University of New York at Buffalo, New York, 1986.

Lischka, C. Understanding Music Cognition. GMD St. Augustin, FRG, 1989.

Mozer, M.C., Soukup, T. Connectionist Music Composition Based on Melodic and Stylistic Constraints. Advances in Neural Information Processing 3 (NIPS 3), R.P. Lippmann, J.E. Moody, D.S. Touretzky (eds.), Kaufmann, 1991.

Shepard, Roger N. Geometrical Approximations to the Structure of Musical Pitch. Psychological Review, Vol. 89, Nr. 4, July 1982.

Todd, Peter M. A Connectionist Approach To Algorithmic Composition. Computer Music Journal, Vol. 13, No. 4, Winter 1989.
A Self-Organizing Integrated Segmentation And Recognition Neural Net

Jim Keeler*
MCC, 3500 West Balcones Center Drive, Austin, TX 78729

David E. Rumelhart
Psychology Department, Stanford University, Stanford, CA 94305

Abstract

We present a neural network algorithm that simultaneously performs segmentation and recognition of input patterns and self-organizes to detect input pattern locations and pattern boundaries. We demonstrate this neural network architecture on character recognition using the NIST database and report on the results herein. The resulting system simultaneously segments and recognizes touching or overlapping characters, broken characters, and noisy images with high accuracy.

1 INTRODUCTION

Standard pattern recognition systems usually involve a segmentation step prior to the recognition step. For example, it is very common in character recognition to segment characters in a pre-processing step, then normalize the individual characters and pass them to a recognition engine such as a neural network, as in the work of LeCun et al. (1988) and Martin and Pittman (1988). This separation between segmentation and recognition becomes unreliable if the characters are touching each other, touching bounding boxes, broken, or noisy. Other applications such as scene analysis or continuous speech recognition pose similar and more severe segmentation problems. The difficulties encountered in these applications present an apparent dilemma: one cannot recognize the patterns

*keeler@mcc.com. Reprint requests: coila@mcc.com or at the above address.

[Figure 1 diagram: a grey-scale input image I(x, y) feeds the network; summing units s_z = Σ_xy X_xyz; outputs p_z = s_z / (1 + s_z).]

Figure 1: The ISR network architecture. The input image may contain several characters and is presented to the network in a two-dimensional grey-scale image.
The units in the first block, h_ijk, have linked-local receptive-field connections to the input image. Block 2, H_x'y'z', has a three-dimensional linked-local receptive field to block 1, and the exponential-unit block, block 3, has three-dimensional linked-local receptive-field connections to block 2. These linked fields insure translational invariance (except for edge effects at the boundary). The exponential-unit block has one layer for each output category. These units are the output units in the test mode, but hidden units during training: the exponential-unit activity is summed over (s_z) to project out the positional information, then converted to a probability p_z. Once trained, the exponential-unit layers serve as "smart histograms" giving sharp peaks of activity directly above the corresponding characters in the input image, as shown to the left.

until they are segmented, yet in many cases one cannot segment the patterns until they are recognized. A solution to this apparent dilemma is to simultaneously segment and recognize the patterns. Integration of the segmentation and recognition steps is essential for further progress in these difficult pattern recognition tasks, and much effort has been devoted to this topic in speech recognition. For example, hidden Markov models integrate the tasks of segmentation and recognition as part of the word-recognition module. Nevertheless, little neural network research in pattern recognition has focused on the integrated segmentation and recognition (ISR) problem. There are several ways to achieve ISR in a neural network. The first use of backpropagation ISR neural networks for character recognition was reported by Keeler, Rumelhart and Leow (1991a). The ISR neural network architecture is similar to the time-delayed neural network architecture for speech recognition used by Lang, Hinton, and Waibel (1990). The following section outlines the neural network algorithm and architecture.
Details and rationale for the exact structure and assumptions of the network can be found in Keeler et al. (1991a,b).

2 NETWORK ARCHITECTURE AND ALGORITHM

The basic organization of the network is illustrated in Figure 1. The input consists of a two-dimensional grey-scale image representing the pattern to be processed. We designate this input pattern by the two-dimensional field I(x, y). In general, we assume that any pattern can be presented at any location and that the characters may touch, overlap, or be broken or noisy. The input then projects to a linked-local-receptive-field block of sigmoidal hidden units (to enforce translational invariance). We designate the activation of the sigmoidal units in this block by h_ijk. The second block of hidden units, H_x'y'z', is a linked-local-receptive-field block of sigmoidal units that receives input from a three-dimensional receptive field in the h_ijk block. In a standard neural network architecture we would normally connect block H to the output units. However, we connect block H to a block of exponential units X_xyz. The X block serves as the outputs after the network has been trained; there is a sheet of exponential units for each output category. These units are connected to block H via a linked-local receptive-field structure. X_xyz = e^(η_xyz), where the net input to the unit is

    η_xyz = Σ_{x'y'} W^{xyz}_{x'y'z'} H_{x'y'z'} + β_z,    (1)

and W^{xyz}_{x'y'z'} is the weight from hidden unit H_{x'y'z'} to the exponential unit X_xyz. Since we use linked weights in each block, the entire structure is translationally invariant. We make use of this property in our training algorithm and project out the positional information by summing over the entire layer, s_z = Σ_{xy} X_xyz. This allows us to give non-specific target information in the form of "the input contains a 5 and a 3, but I will not say where." We do this by converting the summed information into an output probability, p_z = s_z / (1 + s_z).
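The output stage just described — exponential units summed over position, then squashed to a probability — can be sketched with made-up net inputs (the numbers below are illustrative, not from the paper):

```python
import math

# Output stage of the ISR net: for category z, sum the exponential units
# X_xyz = exp(eta_xyz) over all positions, then p_z = s_z / (1 + s_z).
def category_probability(etas):
    """etas: net inputs eta_xyz over all (x, y) positions for one category z."""
    s_z = sum(math.exp(eta) for eta in etas)
    return s_z / (1.0 + s_z)

# One sharp local peak dominates the positional sum, so a single strong
# detection anywhere in the image drives p_z toward 1:
peaked = [-5.0] * 99 + [4.0]   # strong evidence at one location
flat = [-5.0] * 100            # weak evidence everywhere
print(category_probability(peaked) > 0.9)  # True
print(category_probability(flat) < 0.5)    # True
```

This is why the summation can "project out" position: the category probability is high whenever the evidence is high somewhere, without the targets ever saying where.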
2.1 The Learning Rule

There are two objective functions that we have used to train ISR networks: cross-entropy and total sum-squared error. The cross-entropy is l = Σ_z [t_z ln p_z + (1 - t_z) ln(1 - p_z)], where t_z equals 1 if pattern z is presented and 0 otherwise. Computing the gradient with respect to the net input to a particular exponential unit yields the following term in our learning rule:

    ∂l/∂η_xyz = (t_z - p_z) X_xyz / Σ_{xy} X_xyz    (2)

It should be noted that this is a kind of competitive rule in which the learning is proportional to the relative strength of the activation of the unit at a particular location in the X layer compared to the strength of activation in the entire layer. For example, suppose that X_{2,3,5} = 1000 and X_{5,3,5} = 100. Given the above rule, X_{2,3,5} would receive about 10 times more of the output error than the unit X_{5,3,5}. Thus the units compete with each other for the credit or blame of the output, and the "rich get richer" until the proper target is achieved. This favors self-organization of highly localized spikes of activity in the exponential layers directly above the particular character that the exponential layer detects ("smart histograms," as shown in Figure 1). Note that we never give positional information to the network; the network self-organizes the exponential-unit activity to discern the positional information. The second objective function is the total sum-squared error, E = Σ_z (t_z - p_z)². For this error measure, the gradient term becomes

    -∂E/∂η_xyz = (t_z - p_z) X_xyz / (1 + Σ_{xy} X_xyz)²    (3)

Again this has a competitive term, but the competition is only important for large X_xyz; otherwise the denominator is dominated by 1 for small Σ_{xy} X_xyz. We used the quadratic error function for the networks reported in the next section.
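The competitive character of the cross-entropy rule in Eq. (2) is easy to verify numerically: the error (t_z - p_z) is shared among positions in proportion to each unit's fraction of the total activity, so the most active location absorbs most of the credit or blame. The activities below are made up to reproduce the 1000-vs-100 example from the text:

```python
import math

# Learning-rule term of Eq. (2): (t_z - p_z) * X_xyz / sum_xy X_xyz,
# computed here for one category layer with a handful of positions.
def gradient_terms(etas, target):
    xs = [math.exp(e) for e in etas]
    s_z = sum(xs)
    p_z = s_z / (1.0 + s_z)
    return [(target - p_z) * x / s_z for x in xs]

# Two weak units plus units with activities 1000 and 100, target present:
grads = gradient_terms([0.0, 0.0, math.log(1000.0), math.log(100.0)], target=1)
# The shared factor (t_z - p_z) cancels in the ratio, leaving the activity
# ratio 1000/100 — the "rich get richer" behavior described in the text.
print(round(grads[2] / grads[3], 3))  # 10.0
```

The same competition is what lets the net self-organize sharp positional spikes even though the targets never mention position.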
3 NIST DATABASE RECOGNITION 3.1 Data We tested this neural network algorithm on the problem of segmenting and recognizing handwritten numerals from the NIST database. This database contains approximately 273,000 samples of handwritten numerals collected from Bureau of Census field staff. There were 50 different forms used in the study, each with 33 fields, 28 of which contain handwritten numerals ranging in length from 2 to 10 digits per field. We only used fields of length 2 to 6 (field numbers 6 to 30). We used two test sets: a small test set, Test Set A, of approximately 4,000 digits (1,000 fields) from forms labeled f1800 to f1840, and a larger test set, Test Set B, containing 20,000 numerals (5,000 fields, 200 forms) from f1800 to f1899 and f2000 to f2199. We used two different training sets: a hand-segmented training set containing approximately 33,000 digits from forms f0000 to f0636 (the Segmented Training Set) and another training set that was never hand-segmented, from forms f0000 to f1800 (the Unsegmented Training Set). We pre-processed the fields with a simple box-removal and size-normalization program before they were input to the ISR net. 500 Keeler and Rumelhart The hand segmentation was conventional in the sense that boxes were drawn around each of the characters, but the boxes included any other portions of characters that happened to be nearby or touching in the natural context. Note that precise labeling of the characters is not essential at all. We have trained systems where only the center information of the characters was used and found no degradation in performance. This is due to the fact that the system self-organizes the positional information, so it is only required that we know whether a character is in a field, not precisely where. 3.2 TRAINING We trained several nets on the NIST database.
The best training procedure was as follows. Step 1): train the network to an intermediate level of accuracy (96% or so on single characters, about 12 epochs of training set 1). Note that when we train on single characters, we do not need isolated characters - there are often portions of other nearby characters within the input field. Indeed, it helps the ISR performance to use this natural context. There are two reasons for this step: the first is speed - training goes much faster with single characters because we can use a small network. We also found a slight generalization accuracy benefit by including this training step. Step 2): copy the weights of this small network into a larger network and start training on 2- and 3-digit fields from the database without hand segmentation. These are fields numbered 6, 7, 11, 15, 19, 20, 23, 24, 27, and 28. The reason that we use these fields is that we do not have to hand-segment them - we present the fields to the net with the answer that the person was supposed to write in the field. (There were several cases where the person wrote the wrong numbers or didn't write anything. These cases were NOT screened from the training set.) Taking these fields from forms f0000 to f1800 gives us another 45,000 characters to train on without ever segmenting them. There were several reasons that we used fields of length 2 and 3 and not fields of length 4, 5, or 6 for training (even though we used these in testing). First, 3 characters covers the most general case: a character has either no characters on either side, one to the left, one to the right, or one on both sides (3 characters total). If we train on 3 characters and duplicate the weights, we have covered the most general case for any number of characters, and it is clearly faster to train on shorter fields. Second, training with more characters confuses the net.
As pointed out in our previous work (Keeler et al., 1991a), the learning algorithm that we use is only valid for one or no characters of a given type presented in the input field. Thus, the field '39541' is OK to train on, but the field '288' violates one of the assumptions of the training rule. In this case the two 8's would be competing with each other for the answer, and the rule favors only one winner. Even though this problem occurs 1/10th of the time for two-digit fields, it is not serious enough to prevent the net from learning. (Clearly it would not learn fields of length 10, where all of the target units are turned on and there would be no chance for discrimination.) This problem could be avoided by incorporating order information into training, and we have proposed several mechanisms for doing so, but we do not use them in the present system. Note that this biases the training toward the a priori distribution of characters in the 2- and 3-digit fields, which is a different distribution from that of the testing set. The two networks that we used had the following architectures. Net1: input 28x24; receptive fields 6x6, shift 2x2; hidden 1: 12x11x12, receptive fields 4x4x12, shift 2x2x12; hidden 2: 5x4x18, receptive fields 3x3x18, shift 1x1x18; exponentials (block 3): 3x2x10, 10 summing units, 10 outputs. Net2: input 28x26; receptive fields 6x6, shift 2x4; hidden 1: 12x6x12, receptive fields 5x4x12, shift 1x2x12; hidden 2: 8x2x18, receptive fields 5x2x18, shift 1x1x18; exponentials (block 3): 4x1x10, 10 summing units, 10 outputs. Figure 2: Average combined network performance on the NIST database.
Figure 2A shows the generalization performance of two neural networks on the NIST Test Set A. The individual nets Net1 and Net2 (n1, n2 respectively) and the combined performance of nets 1 and 2 are shown, where fields are rejected when the nets differ. The curves show results for fields ranging in length from 2 to 6, averaged over all fields (1,000 total fields, 4,000 characters). Note that Net2 is not nearly as accurate as Net1 on fields, but that the combination of the two is significantly better than either. For this test set the rejection rate is 17% (83% acceptance) with an accuracy rate of 99.3% (error rate 0.7%) overall on fields of average length 4. Figure 2B shows the per-field performance for Test Set B (5,000 fields, 20,000 digits). Again, both nets are used for the rejection criterion. For comparison, 99% accuracy on fields of length 4 is achieved at 23% rejection. Figure 2 shows the generalization performance on the NIST database for Net1, Net2 and their combination. For the combination, we accepted the answer only when the networks agreed, and rejected further based on a simple confidence measure (the difference of the two highest activations) of each individual net. Figure 3: Examples of correctly recognized fields in the NIST database. This figure shows examples of fields that were correctly recognized by the ISR network. Note the cases of touching characters, multiple touching characters, characters touching in multiple places, fields with extrinsic noise, broken characters, and touching, broken characters with noise. Because of the integrated nature of the segmentation and recognition, the same system is able to handle all of these cases. 4 DISCUSSION AND CONCLUSIONS This investigation has demonstrated that the ISR algorithm can be used for integrated segmentation and recognition and achieve high-accuracy results on a large database of hand-printed numerals.
The overall accuracy rate of 83% acceptance with 99.3% accuracy on fields of average length 4 is competitive with accuracy reported in commercial products. One should be careful in making such comparisons. We found a variance of 7% or more in rejection performance on different test sets with more than 1,000 fields (a good statistical sample). Perhaps more important than the high accuracy, we have demonstrated that the ISR system is able to deal with touching, broken and noisy characters. In other investigations we have demonstrated the ISR system on alphabetic characters with good results, and on speech recognition (Keeler, Rumelhart, and Zand-Biglari, 1991), where the results are slightly better than Hidden Markov Model results. There are several attractive aspects of the ISR algorithm: 1) Labeling can be "sloppy" in the sense that the borders of the characters do not have to be defined. This reduces the labor burden of getting a system running. 2) The final weights can be duplicated so that the entire system can run in parallel. Even with both networks running, the number of weights and activations that needs to be stored in memory is quite small - about 30,000 floating point numbers, and the system is quite fast in the feed-forward mode: peak performance is about 2.5 characters/sec on a DEC 5000 (including everything: both networks running, input pre-processing, parsing the answers, printing results, etc.). This structure is ideal for VLSI implementation since it contains a very small number of weights (about 5,000). This is one possible way around the computational bottleneck encountered in processing complex scenes - the ISR net can do very fast first-cut scene analysis with good discrimination of similar objects - an extremely difficult task.
3) The ISR algorithm and architecture present a new and powerful approach to using forward models to convert position-independent training information into position-specific error signals. 4) There is no restriction to one dimension; the same ISR structure has been used for two-dimensional parsing. Nevertheless, there are several aspects of the ISR net that require improvement for future progress. First, the algorithmic assumption of having at most one pattern of a given type in the input field is too restrictive and can cause confusion in some training examples. Second, we throw some information away when we project out all of the positional information; order information could be incorporated into the training information. This extra information should improve training performance due to the more specific error signals. Finally, normalization is still a problem. We do a crude normalization, and the networks are able to segment and recognize characters as long as the difference in size is not too large. A factor of two in size difference is easily handled by the ISR system, but a factor of four decreases recognition accuracy by about 3-5% on the character recognition rates. This requires a tighter coupling between the segmentation/recognition and the normalization. Just as one must segment and recognize simultaneously, in many cases one cannot properly normalize until segmentation/recognition has occurred. Fortunately, in most document processing applications, crude normalization to within a factor of two is simple to achieve, allowing high-accuracy networks. Acknowledgements We thank Wee-Kheng Leow, Steve O'Hara, and John Canfield for useful discussions and coding. References [1] J.D. Keeler, D.E. Rumelhart, and W.K. Leow (1991a) "Integrated Segmentation and Recognition of Hand-printed Numerals". In: Lippmann, Moody and Touretzky (eds.), Neural Information Processing Systems 3, 557-563. [2] J.D. Keeler, D.E. Rumelhart, and S.
Zand-Biglari (1991b) "A Neural Network for Integrated Segmentation and Recognition of Continuous Speech". MCC Technical Report ACT-NN-359-91. [3] K. Lang, A. Waibel, and G. Hinton (1990) "A Time Delay Neural Network Architecture for Isolated Word Recognition". Neural Networks 3, 23-44. [4] Y. Le Cun, B. Boser, J.S. Denker, S. Solla, R. Howard, and L. Jackel (1990) "Back-Propagation Applied to Handwritten Zipcode Recognition". Neural Computation 1(4):541-551. [5] G. Martin and J. Pittman (1990) "Recognizing hand-printed letters and digits". In D. Touretzky (ed.), Neural Information Processing Systems 2, 405-414, Morgan Kaufmann Publishers, San Mateo, CA. [6] The NIST database can be obtained by writing to: Standard Reference Data, National Institute of Standards and Technology, 221/A323, Gaithersburg, MD 20899, USA, and asking for NIST Special Database 1 (HWDB).
1991
A Comparison of Projection Pursuit and Neural Network Regression Modeling Jenq-Neng Hwang, Hang Li, Information Processing Laboratory, Dept. of Elect. Engr., FT-10, University of Washington, Seattle, WA 98195 Martin Maechler, R. Douglas Martin, Jim Schimert, Department of Statistics, Mail Stop GN-22, University of Washington, Seattle, WA 98195 Abstract Two projection-based feedforward network learning methods for model-free regression problems are studied and compared in this paper: one is the popular back-propagation learning (BPL); the other is the projection pursuit learning (PPL). Unlike the totally parametric BPL method, the PPL non-parametrically estimates unknown nonlinear functions sequentially (neuron-by-neuron and layer-by-layer) at each iteration while jointly estimating the interconnection weights. In terms of learning efficiency, both methods have comparable training speed when based on a Gauss-Newton optimization algorithm, while the PPL is more parsimonious. In terms of learning robustness toward noise outliers, the BPL is more sensitive to the outliers. 1 INTRODUCTION The back-propagation learning (BPL) networks have been used extensively for essentially two distinct problem types, namely model-free regression and classification, 1159 1160 Hwang, Li, Maechler, Martin, and Schimert which have no a priori assumption about the unknown functions to be identified other than imposing a certain degree of smoothness. The projection pursuit learning (PPL) networks have also been proposed for both types of problems (Friedman85 [3]), but to date there appears to have been much less actual use of PPLs for both regression and classification than of BPLs. In this paper, we shall concentrate on regression modeling applications of BPLs and PPLs, since the regression setting is one in which some fairly deep theory is available for PPLs in the case of low-dimensional regression (Donoho89 [2], Jones87 [6]).
A multivariate model-free regression problem can be stated as follows: given n pairs of vector observations, (y_l, x_l) = (y_l1, ..., y_lq; x_l1, ..., x_lp), which have been generated from unknown models

y_li = g_i(x_l) + ε_li,   l = 1, 2, ..., n;  i = 1, 2, ..., q,   (1)

where {y_l} are called the multivariable "response" vectors and {x_l} are called the "independent variables" or the "carriers". The {g_i} are unknown smooth nonparametric (model-free) functions from p-dimensional Euclidean space to the real line, i.e., g_i: R^p → R, ∀i. The {ε_li} are random variables with zero mean, E[ε_li] = 0, and independent of {x_l}. Often the {ε_li} are assumed to be independent and identically distributed (iid) as well. The goal of regression is to generate the estimators, ĝ_1, ĝ_2, ..., ĝ_q, to best approximate the unknown functions, g_1, g_2, ..., g_q, so that they can be used for prediction of a new y given a new x: ŷ_i = ĝ_i(x), ∀i. 2 A TWO-LAYER PERCEPTRON AND BACK-PROPAGATION LEARNING Several recent results have shown that a two-layer (one-hidden-layer) perceptron with sigmoidal nodes can in principle represent any Borel-measurable function to any desired accuracy, assuming "enough" hidden neurons are used. This, along with the fact that theoretical results are known for the PPL in the analogous two-layer case, justifies focusing on the two-layer perceptron for our studies here.
2.1 MATHEMATICAL FORMULATION A two-layer perceptron can be mathematically formulated as follows:

u_k = Σ_{j=1..p} w_kj x_j − θ_k = w_k^T x − θ_k,   k = 1, 2, ..., m;
ŷ_i = Σ_{k=1..m} β_ik f_k(u_k),   i = 1, 2, ..., q,   (2)

where u_k denotes the weighted-sum input of the kth neuron in the hidden layer; θ_k denotes the bias of the kth neuron in the hidden layer; w_kj denotes the input-layer weight linking the kth hidden neuron and the jth neuron of the input layer (or jth element of the input vector x); β_ik denotes the output-layer weight linking the ith output neuron and the kth hidden neuron; f_k is the nonlinear activation function, which is usually assumed to be a fixed monotonically increasing (logistic) sigmoidal function, σ(u) = 1/(1 + e^(−u)). The above formulation defines quite explicitly the parametric representation of functions which are being used to approximate {g_i(x), i = 1, 2, ..., q}. A simple reparametrization allows us to write ĝ_i(x) in the form:

ĝ_i(x) = Σ_{k=1..m} β_ik σ((a_k^T x − μ_k)/s_k),   (3)

where a_k is a unit-length version of the weight vector w_k. This formulation reveals how the {ĝ_i} are built up as a linear combination of sigmoids evaluated at translates (by μ_k) and scalings (by s_k) of the projection of x onto the unit-length vector a_k. 2.2 BACK-PROPAGATION LEARNING AND ITS VARIATIONS Historically, the training of a multilayer perceptron uses back-propagation learning (BPL). There are two common types of BPL: the batch one and the sequential one. The batch BPL updates the weights after the presentation of the complete set of training data. Hence, a training iteration incorporates one sweep through all the training patterns. On the other hand, the sequential BPL adjusts the network parameters as training patterns are presented, rather than after a complete pass through the training set. The sequential approach is a form of Robbins-Monro stochastic approximation.
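The reparametrized form of Eq. (3) is compact enough to state as code. The following NumPy sketch uses illustrative dimensions, seeds, and names that are not from the paper:

```python
import numpy as np

def g_hat(x, alpha, mu, s, beta):
    """Two-layer perceptron in the reparametrized form of Eq. (3):
    g_i(x) = sum_k beta_ik * sigma((alpha_k . x - mu_k) / s_k)."""
    u = (alpha @ x - mu) / s              # translated, scaled projections
    h = 1.0 / (1.0 + np.exp(-u))          # logistic sigmoid hidden units
    return beta @ h

rng = np.random.default_rng(2)
p_dim, m, q = 2, 5, 1                     # inputs, hidden units, outputs (assumed)
alpha = rng.normal(size=(m, p_dim))
alpha /= np.linalg.norm(alpha, axis=1, keepdims=True)  # unit-length directions a_k
mu = rng.normal(size=m)                   # translations mu_k
s = np.ones(m)                            # scales s_k
beta = rng.normal(size=(q, m))            # output-layer weights beta_ik

y = g_hat(rng.normal(size=p_dim), alpha, mu, s, beta)
```

Normalizing each row of alpha makes the projection directions unit length, matching the role of a_k above; the translation and scale parameters then carry what the bias and weight magnitudes carried in Eq. (2).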
While the two-layer perceptron provides a very powerful nonparametric modeling capability, BPL training can be slow and inefficient, since only the first-derivative (or gradient) information about the training error is utilized. To speed up the training process, several second-order optimization algorithms, which take advantage of second-derivative (or Hessian matrix) information, have been proposed for training perceptrons (Hwang90 [4]). For example, the Gauss-Newton method is also used in the PPL (Friedman85 [3]). The fixed nonlinear nodal (sigmoidal) function is a monotone nondecreasing differentiable function with a very simple first-derivative form, and possesses nice properties for numerical computation. However, it does not interpolate/extrapolate efficiently in a wide variety of regression applications. Several attempts have been made to improve the choice of nonlinear nodal functions; e.g., the Gaussian or bell-shaped function, the locally tuned radial basis functions, and the semi-parametric (non-fixed nodal function) nonlinear functions used in PPLs and hidden Markov models. 2.3 RELATIONSHIP TO KERNEL APPROXIMATION AND DATA SMOOTHING It is instructive to compare the two-layer perceptron approximation in Eq. (3) with the well-known kernel method for regression. A kernel K(·) is a non-negative symmetric function which integrates to unity. Most kernels are also unimodal, with mode at the origin, K(t_1) ≥ K(t_2) for 0 < t_1 < t_2. A kernel estimate of g_i(x) has the form

ĝ_K,i(x) = Σ_{l=1..n} y_li (1/h^q) K(‖x − x_l‖/h^q),   (4)

where h is a bandwidth parameter and q is the dimension of the y_l vector. Typically a good value of h will be chosen by a data-based cross-validation method. Consider for a moment the special case of the kernel approximator and the two-layer perceptron in Eq.
(3), respectively, with scalar y_l and x_l, i.e., with p = q = 1 (hence unit-length interconnection weight α = 1 by definition):

ĝ_K(x) = Σ_{l=1..n} y_l (1/h) K(‖x − x_l‖/h) = Σ_{l=1..n} y_l (1/h) K((x − x_l)/h),   (5)

ĝ_σ(x) = Σ_{k=1..m} β_k σ((x − μ_k)/s_k).   (6)

This reveals some important connections between the two approaches. Suppose that for ĝ_σ(x) we set σ = K, i.e., σ is a kernel and in fact identical to the kernel K, and that β_k, μ_k, s_k = s have been chosen (trained), say by BPL. That is, all {s_k} are constrained to a single unknown parameter value s. In general, m < n, or even m is a modest fraction of n when the unknown function g(x) is reasonably smooth. Furthermore, suppose that h has been chosen by cross-validation. Then one can expect ĝ_K(x) ≈ ĝ_σ(x), particularly in the event that the {μ_k} are close to the observed values {x_l} and x is close to a specific μ_k value (relative to h). However, in this case where we force s_k = s, one might expect ĝ_K(x) to be a somewhat better estimate overall than ĝ_σ(x), since the former is more local in character. On the other hand, when one removes the restriction s_k = s, then BPL leads to a local bandwidth selection, and in this case one may expect ĝ_σ(x) to provide a better approximation than ĝ_K(x) when the function g(x) has considerably varying curvature, g''(x), and/or considerably varying error variance for the noise ε_li in Eq. (1). The reason is that a fixed-bandwidth kernel estimate cannot cope as well with changing curvature and/or noise variance as can a good smoothing method which uses a good local bandwidth selection method. A small caveat is in order: if m is fairly large, the estimation of a separate bandwidth for each kernel location, μ_k, may cause some increased variability in ĝ_σ(x) by virtue of using many more parameters than are needed to adequately represent a nearly optimal local bandwidth selection method. Typically a nearly optimal local bandwidth function will have some degree of smoothness, which reflects smoothly varying curvature and/or noise variance, and a good local bandwidth selection method should reflect these smoothness constraints. This is the case in the high-quality "supersmoother", designed for applications like the PPL (to be discussed), which uses cross-validation to select bandwidth locally (Friedman85 [3]), and combines this feature with considerable speed.
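The scalar fixed-bandwidth kernel estimate of Eq. (5) might be sketched as follows; the Gaussian choice of K and the synthetic data here are illustrative assumptions, not from the paper:

```python
import numpy as np

def kernel_estimate(x, xs, ys, h):
    """Fixed-bandwidth scalar kernel estimate as written in Eq. (5):
    g_K(x) = sum_l y_l * (1/h) * K((x - x_l) / h)."""
    u = (x - xs) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # assumed Gaussian kernel
    return np.sum(ys * K / h)

rng = np.random.default_rng(3)
xs = rng.uniform(size=50)                                   # carriers x_l
ys = np.sin(2.0 * np.pi * xs) + 0.1 * rng.normal(size=50)   # responses y_l

g_at_half = kernel_estimate(0.5, xs, ys, h=0.1)
```

Note that Eq. (5) is written without a normalizing denominator; as the surrounding text says, the bandwidth h would typically be chosen by cross-validation rather than fixed by hand as it is here.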
Typically a nearly optimal local bandwidth function will have some degree of smoothness, which reflects smoothly varying curvature and/or noise variance, and a good local bandwidth selection method should reflect the smoothness constraints. This is the case in the high-quality "supersmoother", designed for applications like the PPL (to be discussed), which uses cross-validation to select bandwidth locally (Friedman85 [3]), and combines this feature with considerable speed. The above arguments are probably equally valid without the restriction u = J(, because two sigmoids of opposite signs (via choice of two {,Bk}) that are appropriately A Comparison of Projection Pursuit and Neural Network Regression Modeling 1163 shifted, will approximate a kernel up to a scaling to enforce unity area. However, there is a novel aspect: one can have a separate local bandwidth for each half of the kernel, thereby using an asymmetric kernel, which might improve the approximation capabilities relative to symmetric kernels with a single local bandwidth in some situations. In the multivariate case, the curse of dimensionality will often render useless the kernel approximator 9K,i(X) given by Eq. (4). Instead one might consider using a projection pursuit kernel (PPK) approximator: n mIT T 9PPK,i(X) = LL Yli hk J«(1:kX~kD:kXI) 1=1 k=l (7) where a different bandwidth hk is used for each direction D:k . In this case, the similarities and differences between the PPK estimate and the BPL estimate 9q,i(X) become evident. The main difference between the two methods is that PPK performs explicit smoothing in each direction D:k using a kernel smoother, whereas BPL does implicit smoothing with both fJk (replacing Yli/ hk) and /-lk (replacing aT XI) being determined by nonlinear least squares optimization. In both PPK and BPL, the D:k and hk are determined by nonlinear optimization (cross-validation choices of bandwidth parameters are inherently nonlinear optimization problems) (Friedman85 [3]). 
3 PROJECTION PURSUIT LEARNING NETWORKS The projection pursuit learning (PPL) is a statistical procedure proposed for multivariate data analysis using the two-layer network given in Eq. (2). This procedure derives its name from the fact that it interprets high-dimensional data through well-chosen lower-dimensional projections. The "pursuit" part of the name refers to optimization with respect to the projection directions. 3.1 COMPARATIVE STRUCTURES OF PPL AND BPL Similar to a BPL perceptron, a PPL network forms projections of the data in directions determined from the interconnection weights. However, unlike a BPL perceptron, which employs a fixed set of nonlinear (sigmoidal) functions, a PPL non-parametrically estimates the nonlinear nodal functions based on a nonlinear optimization approach which involves the use of a one-dimensional data smoother (e.g., a least squares estimator followed by a variable-window-span data averaging mechanism) (Friedman85 [3]). Therefore, it is important to note that a PPL network is a semi-parametric learning network, which consists of both parametrically and non-parametrically estimated elements. This is in contrast to a BPL perceptron, which is a completely parametric model. 3.2 LEARNING STRATEGIES OF PPL In comparison with a batch BPL, which employs either 1st-order gradient descent or 2nd-order Newton-like methods to estimate the weights of all layers simultaneously after all the training patterns are presented, a PPL learns neuron-by-neuron and layer-by-layer cyclically after all the training patterns are presented. Specifically, it applies linear least squares to estimate the output-layer weights, a one-dimensional data smoother to estimate the nonlinear nodal function of each hidden neuron, and the Gauss-Newton nonlinear least squares method to estimate the input-layer weights.
The PPL procedure uses the batch learning technique to iteratively minimize the mean squared error, E, over all the training data. All the parameters to be estimated are hierarchically divided into m groups (each associated with one hidden neuron), and each group, say the kth group, is further divided into three subgroups: the output-layer weights, {β_ik, i = 1, ..., q}, connected to the kth hidden neuron; the nonlinear function, f_k(u), of the kth hidden neuron; and the input-layer weights, {w_kj, j = 1, ..., p}, connected to the kth hidden neuron. The PPL starts by updating the parameters associated with the first hidden neuron (group), updating each subgroup, {β_i1}, f_1(u), and {w_1j}, consecutively (layer-by-layer) to minimize the mean squared error E. It then updates the parameters associated with the second hidden neuron by consecutively updating {β_i2}, f_2(u), and {w_2j}. A complete updating pass ends with the updating of the parameters associated with the mth (last) hidden neuron by consecutively updating {β_im}, f_m(u), and {w_mj}. Repeated updating passes are made over all the groups until convergence (i.e., in our studies of Section 4, we use the stopping criterion that |E(new) − E(old)|/E(old) be smaller than a prespecified small constant, 0.005). 4 LEARNING EFFICIENCY IN BPL AND PPL Having discussed the "parametric" BPL and the "semi-parametric" PPL from structural, computational, and theoretical viewpoints, we have also made a more practical comparison of learning efficiency via a simulation study. For simplicity of comparison, we confine the simulations to the two-dimensional univariate case, i.e., p = 2, q = 1. This is an important situation in practice, because the models can be visualized graphically as functions y = g(x_1, x_2).
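The stopping criterion, together with the linear least-squares subproblem for the output-layer weights (with the hidden-unit outputs held fixed), can be sketched as follows. This is a toy illustration with made-up data, not the Friedman/S-Plus implementation:

```python
import numpy as np

def converged(e_new, e_old, tol=0.005):
    """Stopping rule described above: relative change in the mean squared
    error, |E(new) - E(old)| / E(old), below a small constant."""
    return abs(e_new - e_old) / e_old < tol

# One subgroup update, sketched: with the hidden-unit outputs f_k(u_k)
# held fixed, the output-layer weights solve a linear least-squares problem.
rng = np.random.default_rng(5)
n, m = 100, 3
H = rng.normal(size=(n, m))               # hypothetical hidden-unit outputs
y = rng.normal(size=n)                    # responses
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

E = np.mean((y - H @ beta) ** 2)          # mean squared error after the update
```

The other two subgroup updates in each cycle, the data smoother for f_k and the Gauss-Newton step for the input-layer weights, would slot in around this linear solve but are omitted here.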
4.1 PROTOCOLS OF THE SIMULATIONS Nonlinear Functions: There are five nonlinear functions g^(j): [0,1]² → R investigated (Maechler90 [7]), which are scaled such that the standard deviation is 1 (for a large regular grid of 2500 points on [0,1]²), and translated to make the range nonnegative. Training and Test Data: Two independent variables (carriers) (x_l1, x_l2) were generated from the uniform distribution U([0,1]²), i.e., the abscissa values {(x_l1, x_l2)} were generated as uniform random variates on [0,1], independent of each other. We generated 225 pairs {(x_l1, x_l2)} of abscissa values, and used this same set for the experiments on all five different functions, thus eliminating an unnecessary extra random component of the simulation. In addition to one set of noiseless training data, another set of noisy training data was also generated by adding iid Gaussian noise. Algorithm Used: The PPL simulations were conducted using the S-Plus package (S-Plus90 [1]) implementation of PPL, where 3 and 5 hidden neurons were tried (with 5 and 7 maximum working hidden neurons used separately to avoid overfitting). The S-Plus implementation is based on the Friedman code (Friedman85 [3]), which uses a Gauss-Newton method for updating the lower-layer weights. To obtain a fair comparison, the BPL was implemented using a batch Gauss-Newton method (rather than the usual gradient descent, which is slower) on two-layer perceptrons with linear output neurons and nonlinear sigmoidal hidden neurons (Hwang90 [4], Hwang91 [5]), where 5 and 10 hidden neurons were tried. Independent Test Data Set: The assessment of performance was done by comparing the fitted models with their "true" function counterparts on a large independent test set.
Throughout all the simulations, we used the same set of test data for performance assessment, i.e., {g^(j)(x_l1, x_l2)}, of size N = 10000, namely a regularly spaced grid on [0,1]², defined by its marginals. 4.2 SIMULATION RESULTS IN LEARNING EFFICIENCY To summarize the simulation results in learning efficiency, we focused on three chosen aspects: accuracy, parsimony, and speed. Learning Accuracy: The accuracy, determined by the absolute L2 error measure on the independent test data, is quite comparable for both learning methods, whether trained on noiseless or noisy data (Hwang91 [5]). Note that our comparisons are based on 5 & 10 hidden neurons for BPLs and 3 & 5 hidden neurons for PPLs. The reason for choosing different numbers of hidden neurons is explained in the learning parsimony section. Learning Parsimony: In comparison with BPL, the PPL is more parsimonious in training all types of nonlinear functions, i.e., in order to achieve accuracy comparable to the BPLs for two-layer perceptrons, the PPLs require fewer hidden neurons (are more parsimonious) to approximate the desired true function (Hwang91 [5]). Several factors may contribute to this favorable performance. First and foremost, the data-smoothing technique creates more pertinent nonlinear nodal functions, so the network adapts more efficiently to the observation data without using too many terms (hidden neurons) of interpolative projections. Secondly, the batch Gauss-Newton BPL updates all the weights in the network simultaneously, while the PPL updates cyclically (neuron-by-neuron and layer-by-layer), which allows the most recent updating information to be used in the subsequent updating. That is, the more important projection directions can be determined first, so that the less important projections can have an easier search (the same argument used in favoring the Gauss-Seidel method over the Jacobi method in an iterative linear equation solver).
Learning Speed: As we reported earlier (Maechler90 [7]), the PPL took much less time (1-2 orders of magnitude speedup) in achieving accuracy comparable with that of the sequential gradient-descent BPL. Interestingly, when compared with the batch Gauss-Newton BPL, the PPL took quite a similar amount of time over all the simulations (under the same number of hidden neurons and the same convergence threshold of 0.005). In all simulations, both the BPLs and PPLs converged in under 100 iterations most of the time. 5 SENSITIVITY TO OUTLIERS Both BPLs and PPLs are types of nonlinear least squares estimators. Hence, like all least squares procedures, they are sensitive to outliers. The outliers may come from large errors in measurements, generated by heavy-tailed deviations from a Gaussian distribution for the noise ε_li in Eq. (1). In the presence of additive Gaussian noise without outliers, most functions can be well approximated with 5-10 hidden neurons using BPL or with 3-5 hidden neurons using PPL. When the Gaussian noise is altered by adding one outlier, the BPL with 5-10 hidden neurons can still approximate the desired function reasonably well in general, at the sacrifice of magnified error in the vicinity of the outlier. If the number of outliers increases to 3 in the same corner, the BPL can only get a "distorted" approximation of the desired function. On the other hand, the PPL with 5 hidden neurons can successfully approximate the desired function and remove the single outlier. In the case of three outliers, the PPL using simple data smoothing techniques can no longer keep its robustness in accuracy of approximation. Acknowledgements This research was partially supported through grants from the National Science Foundation under Grant No. ECS-9014243. References [1] S-Plus Users Manual (Version 3.0). Statistical Science Inc., Seattle, WA, 1990. [2] D.L. Donoho and I.M. Johnstone.
Projection-based approximation and a duality with kernel methods. The Annals of Statistics, Vol. 17, No. 1, pp. 58-106, 1989.

[3] J. H. Friedman. Classification and multiple regression through projection pursuit. Technical Report No. 12, Department of Statistics, Stanford University, January 1985.

[4] J. N. Hwang and P. S. Lewis. From nonlinear optimization to neural network learning. In Proc. 24th Asilomar Conf. on Signals, Systems, & Computers, pp. 985-989, Pacific Grove, CA, November 1990.

[5] J. N. Hwang, H. Li, D. Martin, and J. Schimert. The learning parsimony of projection pursuit and back-propagation networks. In Proc. 25th Asilomar Conf. on Signals, Systems, & Computers, Pacific Grove, CA, November 1991.

[6] L. K. Jones. On a conjecture of Huber concerning the convergence of projection pursuit regression. The Annals of Statistics, Vol. 15, No. 2, pp. 880-882, 1987.

[7] M. Maechler, D. Martin, J. Schimert, M. Csoppenszky, and J. N. Hwang. Projection pursuit learning networks for regression. In Proc. 2nd Int'l Conf. on Tools for AI, pp. 350-358, Washington, D.C., November 1990.
Unsupervised Classifiers, Mutual Information and 'Phantom Targets'

John S. Bridle, Anthony J. R. Heading
Defence Research Agency
St. Andrew's Road, Malvern, Worcs. WR14 3PS, U.K.

David J. C. MacKay
California Institute of Technology 139-74
Pasadena, CA 91125, U.S.A.

Abstract

We derive criteria for training adaptive classifier networks to perform unsupervised data analysis. The first criterion turns a simple Gaussian classifier into a simple Gaussian mixture analyser. The second criterion, which is much more generally applicable, is based on mutual information. It simplifies to an intuitively reasonable difference between two entropy functions, one encouraging 'decisiveness', the other 'fairness' to the alternative interpretations of the input. This 'firm but fair' criterion can be applied to any network that produces probability-type outputs, but it does not necessarily lead to useful behaviour.

1 Unsupervised Classification

One of the main distinctions made in discussing neural network architectures, and pattern analysis algorithms generally, is between supervised and unsupervised data analysis. We should therefore be interested in any method of building bridges between techniques in these two categories. For instance, it is possible to use an unsupervised system such as a Boltzmann machine to learn the joint distribution of inputs and a teacher's classification labels. The particular type of bridge we seek is a method of taking a supervised pattern classifier and turning it into an unsupervised data analyser; that is, we are interested in methods of "bootstrapping" classifiers.

Consider a classifier system. Its input is a vector x, and the output is a probability vector y(x). (That is, the elements of y are positive and sum to 1.) The elements of y, (y_i(x), i = 1 ... N_c), are to be taken as the probabilities that x should be assigned to each of N_c classes. (Note that our definition of classifier does not include a decision process.)
To enforce the conditions we require for the output values, we recommend using a generalised logistic (normalised exponential, or SoftMax) output stage. We call the unnormalised log probabilities of the classes a_i, and the softmax performs:

    y_i = e^(a_i) / Z   with   Z = Σ_j e^(a_j)   (1)

Normally the parameters of such a system would be adjusted using a training set comprising examples of inputs and corresponding classes, {(x_i, c_i)}. We assume that the system includes means to convert derivatives of a training criterion with respect to the outputs into a form suitable for adjusting the values of the parameters, for instance by "backpropagation".

Imagine however that we have unlabelled data, x_m, m = 1 ... N_ts, and wish to use it to 'improve' the classifier. We could think of this as self-supervised learning: to hone an already good system on lots of easily obtained unlabelled real-world data, to adapt to a slowly changing environment, or as a way of turning a classifier into some sort of cluster analyser (just what kind depends on details of the classifier itself). The ideal method would be theoretically well-founded, general-purpose (independent of the details of the classifier), and computationally tractable.

One well known approach to unsupervised data analysis is to minimise a reconstruction error: for linear projections and squared Euclidean distance this leads to principal components analysis, while reference-point based classifiers lead to vector quantizer design methods, such as the LBG algorithm. Variants on VQ, such as Kohonen's feature maps, can be motivated by requiring robustness to distortions in the code space. Reconstruction error is only available as a training criterion if reconstruction is defined: in general we are only given class label probabilities.
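The SoftMax stage of Eq. (1) takes only a few lines. This is a minimal NumPy illustration, not from the paper; subtracting the maximum is a standard numerical-stability trick that leaves the result unchanged.

```python
import numpy as np

def softmax(a):
    # y_i = exp(a_i) / Z with Z = sum_j exp(a_j); subtracting the max
    # changes nothing mathematically but avoids overflow for large a.
    e = np.exp(a - np.max(a))
    return e / e.sum()

y = softmax(np.array([2.0, 1.0, 0.1]))
# y is positive and sums to 1, as a probability vector must.
```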
2 A Data Likelihood Criterion

For the special case of a Gaussian clustering of an unlabelled data set, it was demonstrated in [1] that gradient ascent on the likelihood of the data has an appealing interpretation in terms of backpropagation in an equivalent unit-Gaussian classifier network: for each input x presented to the network, the output y is doubled to give 'phantom targets' t = 2y; when the derivatives of the log likelihood criterion J = -Σ_i t_i log y_i relative to these targets are propagated back through the network, it turns out that the resulting gradient is identical to the gradient of the likelihood of the data given a Gaussian mixture model. For the unit-Gaussian classifier, the activations a_i in (1) are

    a_i = -|x - w_i|^2,   (2)

so the outputs of the network are

    y_i = P(class = i | x, w)   (3)

where we assume the inputs are drawn from equi-probable unit-Gaussian distributions with the mean of the distribution of the ith class equal to w_i.

This result was only derived in a limited context, and it was speculated that it might be generalisable to arbitrary classification models. The above phantom target rule has been re-derived for a larger class of networks [4], but the conditions for strict applicability are quite severe. Briefly, there should be exponential density functions for each class, and the normalizing factors for these densities should be independent of the parameters. Thus Gaussians with fixed covariance matrices are acceptable, but variable covariances are not, and neither are linear transformations preceding the Gaussians. The next section introduces a new objective function which is independent of details of the classifier.

3 Mutual Information Criterion

Intuitively, an unsupervised adaptive classifier is doing a plausible job if its outputs usually give a fairly clear indication of the class of an input vector, and if there is also an even distribution of input patterns between the classes.
We could label these desiderata 'decisive' and 'fair' respectively. Note that it is trivial to achieve either of them alone. For a poorly regularised model it may also be trivial to achieve both.

There are several ways to proceed. We could devise ad-hoc measures corresponding to our notions of decisiveness and fairness, or we could consider particular types of classifier and their unsupervised equivalents, seeking a general way of turning one into the other. Our approach is to return to the general idea that the class predictions should retain as much information about the input values as possible. We use a measure of the information about x which is conveyed by the output distribution, i.e. the mutual information between the inputs and the outputs. We interpret the outputs y as a probability distribution over a discrete random variable c (the class label), thus y = p(c|x). The mutual information between x and c is

    I(c; x) = ∫∫ dc dx p(c, x) log [ p(c, x) / (p(c) p(x)) ]   (4)
            = ∫ dx p(x) ∫ dc p(c|x) log [ p(c|x) / p(c) ]   (5)
            = ∫ dx p(x) ∫ dc p(c|x) log [ p(c|x) / ∫ dx p(x) p(c|x) ]   (6)

The elements of this expression are separately recognizable: ∫ dx p(x)(·) is equivalent to an average over a training set, (1/N_ts) Σ_ts (·); p(c|x) is simply the network output y_c; ∫ dc (·) is a sum over the class labels and corresponding network outputs. Hence:

    I(c; x) = (1/N_ts) Σ_ts Σ_{i=1..N_c} y_i log (y_i / ȳ_i)   (7)
            = -Σ_{i=1..N_c} ȳ_i log ȳ_i + (1/N_ts) Σ_ts Σ_{i=1..N_c} y_i log y_i   (8)
            = H(ȳ) - <H(y)>   (9)

The objective function I is the difference between the entropy of the average of the outputs, and the average of the entropy of the outputs, where both averages are over the training set. H(ȳ) has its maximum value when the average activities of the separate outputs are equal: this is 'fairness'. <H(y)> has its minimum value when one output is full on and the rest are off for every training case: this is 'firmness'. We now evaluate I for the training set
and take the gradient of I.

4 Gradient descent

To use this criterion with back-propagation network training, we need its derivatives with respect to the network outputs:

    ∂I(c; x)/∂y_i = (1/N_ts) log (y_i / ȳ_i)   (10)

The resulting expression is quite simple, but note that the presence of a ȳ_i term means that two passes through the training set are required: the first to calculate the average output node activations, and the second to back-propagate the derivatives.

5 Illustrations

Figure 1 shows I (divided by its maximum possible value, log N_c) for a run of a particular unit-Gaussian classifier network. The 30 data points are drawn from a 2-d isotropic Gaussian. Figure 2 shows the fairness and firmness criteria separately (the upper curve is 'fairness', H(ȳ)/log N_c, and the lower curve is 'firmness', 1 - <H(y)>/log N_c). The ten reference points had starting values drawn from the same distribution as the data. Figure 3 shows their movement during training: from initial positions within the data cluster, they move outwards into a circle around the data. The resulting classification regions are shown in Figure 4. (The grey level is proportional to the value of the maximum response at each point, and since the outputs are positive and normalised this value drops to 0.5 or less at the decision boundaries.)

[Figures 1-4: (1) the M.I. criterion versus iteration; (2) the firm and fair criteria separately versus iteration; (3) tracks of the reference points; (4) the resulting decision regions.]

We observe that the space is being partitioned into regions with roughly equal numbers of points. It might be surprising at first that the reference points do not end up near the data. However, it is only the transformation from data x to outputs y that is being trained, and the reference points are just parameters of that transformation.
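Both the 'firm but fair' criterion and its output derivatives can be checked numerically. The following NumPy sketch uses our own variable names and treats the outputs as free variables, ignoring the softmax constraint that back-propagation through the output stage would handle.

```python
import numpy as np

def I_criterion(Y):
    # Y: (N, Nc) array of per-pattern class probabilities y = p(c|x).
    ybar = Y.mean(axis=0)
    H_of_mean = -np.sum(ybar * np.log(ybar))              # 'fairness' term
    mean_of_H = -np.mean(np.sum(Y * np.log(Y), axis=1))   # 'firmness' term
    return H_of_mean - mean_of_H

# A decisive and fair set of outputs scores high; an indecisive one scores 0.
Y_good = np.array([[0.99, 0.01], [0.01, 0.99]])
Y_bad = np.array([[0.5, 0.5], [0.5, 0.5]])

# Finite-difference check of dI/dy_i = (1/N) log(y_i / ybar_i).
Y = np.array([[0.7, 0.3], [0.2, 0.8], [0.6, 0.4]])
N = Y.shape[0]
analytic = np.log(Y / Y.mean(axis=0)) / N
numeric = np.zeros_like(Y)
eps = 1e-6
for n in range(Y.shape[0]):
    for i in range(Y.shape[1]):
        Yp, Ym = Y.copy(), Y.copy()
        Yp[n, i] += eps
        Ym[n, i] -= eps
        numeric[n, i] = (I_criterion(Yp) - I_criterion(Ym)) / (2 * eps)
```

The analytic and central-difference gradients agree to within the finite-difference error, confirming the two-entropy form of the criterion.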
As the reference points move further away from one another, the decision boundaries grow firmer. In this example the fairness criterion happens to decrease in favour of the firmness, and this usually happens. We could consider different weightings of the two components of the criterion.

6 Comments

How useful this objective function proves to be will depend very much on the form of classifier that it is applied to. For a poorly regularised classifier, maximisation of the criterion alone will not necessarily lead to good solutions to unsupervised classification; it could be maximised by any implausible classification of the input that is completely hard (i.e. the output vector always has one 1 and all the other outputs 0) and that chops the training set into regions containing similar numbers of training points; such a solution would be one of many global maxima, regardless of whether it chopped the data into natural classes. The meaning of a 'natural' partition in this context is, of course, rather ill-defined. Simple models often do not have the capacity to break a pattern space into highly contorted regions: the decision boundaries shown in Figure 4 are an example of a model producing a reasonable result as a consequence of its inherent simplicity. When we use more complex models, however, we must ensure that we find simpler solutions in preference to more complex ones. Thus this criterion encourages us to pursue objective techniques for regularising classification networks [2, 3]; such techniques are probably long overdue.

Copyright © Controller HMSO London 1992

References

[1] J. S. Bridle (1988). The phantom target cluster network: a peculiar relative of (unsupervised) maximum likelihood stochastic modelling and (supervised) error backpropagation. RSRE Research Note SP4: 66, DRA Malvern, UK.

[2] D. J. C. MacKay (1991). Bayesian interpolation. Submitted to Neural Computation.

[3] D. J. C. MacKay (1991).
A practical Bayesian framework for backprop networks. Submitted to Neural Computation.

[4] J. S. Bridle and S. J. Cox. Recnorm: simultaneous normalisation and classification applied to speech recognition. In Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991.

[5] J. S. Bridle. Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.
Fast, Robust Adaptive Control by Learning only Forward Models

Andrew W. Moore
MIT Artificial Intelligence Laboratory
545 Technology Square, Cambridge, MA 02139
awm@ai.mit.edu

Abstract

A large class of motor control tasks requires that on each cycle the controller is told its current state and must choose an action to achieve a specified, state-dependent, goal behaviour. This paper argues that the optimization of learning rate (the number of experimental control decisions before adequate performance is obtained) and robustness is of prime importance, if necessary at the expense of computation per control cycle and memory requirement. This is motivated by the observation that a robot which requires two thousand learning steps to achieve adequate performance, or a robot which occasionally gets stuck while learning, will always be undesirable, whereas moderate computational expense can be accommodated by increasingly powerful computer hardware. It is not unreasonable to assume the existence of inexpensive 100 Mflop controllers within a few years, and so even processes with control cycles in the low tens of milliseconds will have millions of machine instructions in which to make their decisions. This paper outlines a learning control scheme which aims to make effective use of such computational power.

1 MEMORY BASED LEARNING

Memory-based learning is an approach applicable to both classification and function learning in which all experiences presented to the learning box are explicitly remembered. The memory, Mem, is a set of input-output pairs, Mem = {(x1, y1), (x2, y2), ..., (xk, yk)}. When a prediction is required of the output for a novel input x_query, the memory is searched to obtain experiences with inputs close to x_query. These local neighbours are used to determine a locally consistent output for the query. Three memory-based techniques, Nearest Neighbour, Kernel Regression, and Local Weighted Regression, are shown in the accompanying figure.
[Figure: the three memory-based techniques illustrated on the same data.]

Nearest Neighbour: y_predict(x_query) = y_i, where i minimizes {(x_i - x_query)^2 : (x_i, y_i) ∈ Mem}. There is a general introduction in [5], some recent applications in [11], and recent robot learning work in [9, 3].

Kernel Regression: also known as Shepard's interpolation or locally weighted averages. y_predict(x_query) = (Σ_i w_i y_i) / (Σ_i w_i), where w_i = exp(-(x_i - x_query)^2 / K_width^2). [6] describes some variants.

Local Weighted Regression: finds the linear mapping y = Ax to minimize the sum of weighted squares of residuals Σ_i w_i (y_i - A x_i)^2; y_predict is then A x_query. LWR was introduced for robot learning control by [1].

2 A MEMORY-BASED INVERSE MODEL

An inverse model maps State × Behaviour → Action (s × b → a). Behaviour is the output of the system, typically the next state or the time derivative of state. The learned inverse model provides a conceptually simple controller:

1. Observe s and b_goal.
2. a := inverse-model(s, b_goal)
3. Perform action a and observe the actual behaviour b_actual.
4. Update MEM with (s, b_actual, a): if we are ever again in state s and require behaviour b_actual, we should apply action a.

Memory-based versions of this simple algorithm have used nearest neighbour [9] and LWR [3]. b_goal is the goal behaviour: depending on the task it may be fixed or it may vary between control cycles, perhaps as a function of state or time. The algorithm provides aggressive learning: during repeated attempts to achieve the same goal behaviour, the action which is applied is not an incrementally adjusted version of the previous action, but is instead the action which the memory and the memory-based learner predict will directly achieve the required behaviour.
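The three predictors above can be sketched in one dimension. This is a minimal illustration with assumed bandwidths, not the paper's implementation; for LWR the fit is affine (slope plus bias) rather than strictly y = Ax.

```python
import numpy as np

def nearest_neighbour(xq, X, Y):
    # Output of the stored experience whose input is closest to the query.
    return Y[np.argmin((X - xq) ** 2)]

def kernel_regression(xq, X, Y, K_width=0.3):
    # Shepard's interpolation: Gaussian-weighted average of stored outputs.
    w = np.exp(-((X - xq) ** 2) / K_width ** 2)
    return np.sum(w * Y) / np.sum(w)

def local_weighted_regression(xq, X, Y, K_width=0.3):
    # Fit y = a*x + b near the query, weighting residuals by distance.
    w = np.exp(-((X - xq) ** 2) / K_width ** 2)
    A = np.stack([X, np.ones_like(X)], axis=1)
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], Y * sw, rcond=None)
    return coeffs[0] * xq + coeffs[1]

X = np.linspace(0.0, 1.0, 20)
Y = 2.0 * X + 1.0          # a linear "world": all three should predict well
nn = nearest_neighbour(0.5, X, Y)
kr = kernel_regression(0.5, X, Y)
lwr = local_weighted_regression(0.5, X, Y)
```

On linear data LWR recovers the mapping exactly, kernel regression is exact here by symmetry of the grid, and nearest neighbour is only as accurate as the spacing of the stored experiences.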
If the function is locally linear, then the sequence of actions chosen is closely related to the Secant method [4] for numerically finding the zero of a function by bisecting the line between the closest approximations that bracket the y = 0 axis. If learning begins with an initial error E_0 in the action choice, and we wish to reduce this error to E_0/K, the number of learning steps is O(log log K): subject to benign conditions, the learner jumps to actions close to the ideal action very quickly.

A common objection to learning the inverse model is that it may be ill-defined. For a memory-based method the problems are particularly serious because of its update rule: it updates the inverse model near b_actual, and therefore in those cases in which b_goal and b_actual differ greatly, the mapping near b_goal may not change. As a result, subsequent cycles will make identical mistakes. [10] discusses this further.

3 A MEMORY-BASED FORWARD MODEL

One fix for the problem of inverses becoming stuck is the addition of random noise to actions prior to their application. However, this can result in a large proportion of control cycles being wasted on experiments which the robot should have been able to predict as valueless, defeating the initial aim of learning as quickly as possible. An alternative technique using multilayer neural nets has been to learn a forward model, which is necessarily well defined, to train a partial inverse. Updates to the forward model are obtained by standard supervised training, but updates to the inverse model are more sophisticated: the local Jacobian of the forward model is obtained, and this value is used to drive an incremental change to the inverse model [8].
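The secant-style jump to the right action can be illustrated on a scalar world. This is an illustrative sketch (the world b(a) and goal are made up): each new action lies on the line through the two most recent (action, behaviour) experiences, and on a nearly linear world the error collapses in a handful of trials.

```python
def b(a):
    # An unknown world: linear with a mild nonlinearity.
    return 2.0 * a + 0.1 * a ** 3

b_goal = 1.0
a0, a1 = 0.0, 1.0                 # two initial experimental actions
for _ in range(5):
    f0, f1 = b(a0) - b_goal, b(a1) - b_goal
    # Secant step: zero of the line through (a0, f0) and (a1, f1).
    a0, a1 = a1, a1 - f1 * (a1 - a0) / (f1 - f0)
# After five trials a1 achieves b(a1) ~= b_goal to high precision.
```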
In conjunction with memory-based methods, such an approach has the disadvantage that incremental changes to the inverse model lose the one-shot learning behaviour and introduce the danger of becoming trapped in a local minimum. Instead, this investigation relies only on learning the forward model. The inverse model is then implicitly obtained from it by online numerical inversion instead of direct lookup. This is illustrated by the following algorithm:

1. Observe s and b_goal.
2. Perform numerical inversion: search among a series of candidate actions a_1, a_2, ..., a_k:
      b_1^predict := forward-model(s, a_1, MEM)
      b_2^predict := forward-model(s, a_2, MEM)
      ...
      b_k^predict := forward-model(s, a_k, MEM)
   until TIME-OUT or b_k^predict = b_goal.
3. If TIME-OUT, then perform an experimental action; else perform a_k.
4. Update MEM with (s, a_k, b_actual).

A nice feature of this method is the absence of a preliminary training phase such as random flailing or feedback control. A variety of search techniques for numerical inversion can be applied. Global random search avoids local minima but is very slow for obtaining accurate actions; hill climbing is a robust local procedure; and more aggressive procedures such as Newton's method can use partial derivative estimates from the forward model to make large second-order steps. The implementation used for subsequent results had a combination of global search and local hill climbing. In very high speed applications in which there is only time to make a small number of forward model predictions, it is not difficult to regain much of the speed advantage of directly using an inverse model by commencing the action search with a_0 as the action predicted by a learned inverse model.
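The numerical-inversion loop can be sketched with a kernel-regression forward model on a toy world b = s + a. All names, the toy world, and the candidate grid search are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def forward_model(s, a, Mem, K_width=0.2):
    # Kernel-regression prediction of behaviour from stored (s, a, b) triples.
    S, A, B = Mem
    w = np.exp(-((S - s) ** 2 + (A - a) ** 2) / K_width ** 2)
    return np.sum(w * B) / np.sum(w)

def invert(s, b_goal, Mem, candidates):
    # Online numerical inversion: choose the candidate action whose
    # predicted behaviour lies closest to the goal (a pure global search;
    # hill climbing or Newton steps could refine it).
    preds = np.array([forward_model(s, a, Mem) for a in candidates])
    return candidates[int(np.argmin((preds - b_goal) ** 2))]

# Toy world b = s + a, with experiences gathered at random.
rng = np.random.default_rng(0)
S = rng.uniform(0, 1, 500)
A = rng.uniform(0, 1, 500)
Mem = (S, A, S + A)

a_chosen = invert(0.3, 1.0, Mem, np.linspace(0, 1, 101))
# The true inverse here is a = b_goal - s = 0.7; the learned model
# should select an action close to it.
```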
4 OTHER CONSIDERATIONS

Actions selected by a forward memory-based learner can be expected to converge very quickly to the correct action in benign cases, and will not become stuck in difficult cases, provided that the memory-based representation can fit the true forward model. This proviso is weak compared with incremental learning control techniques, which typically require stronger prior assumptions about the environment, such as near-linearity, or that an iterative function approximation procedure will avoid local minima. One-shot methods have an advantage in terms of the number of control cycles before adequate performance, whereas incremental methods have the advantage of only requiring trivial amounts of computation per cycle. However, the simple memory-based formalism described so far suffers from two major problems which some forms of adaptive and neural controllers may avoid:

- Brittle behaviour in the presence of outliers.
- Poor resistance to non-stationary environments.

Many incremental methods implicitly forget all experiences beyond a certain horizon. For example, in the delta rule Δw_ij = η (y_i^actual - y_i^predict) x_j, the age beyond which experiences have a negligible effect is determined by the learning rate η. As a result, the detrimental effect of misleading experiences is present for only a fixed amount of time and then fades away¹. In contrast, memory-based methods remember everything for ever. Fortunately, two statistical techniques, robust regression and cross-validation, allow extensions to the numerical inversion method in which we can have our cake and eat it too.

5 USING ROBUST REGRESSION

We can judge the quality of each experience (x_i, y_i) ∈ Mem by how well it is predicted by the rest of the experiences. A simple measure of the ith error is the cross validation error, in which the experience is first removed from the memory before prediction:

    e_i^cve = | y_i - Predict(x_i, Mem - {(x_i, y_i)}) |.
With the memory-based formalism, in which all work takes place at prediction time, it is no more expensive to predict a value with one datapoint removed than with it included. Once we have the measure e_i^cve of the quality of each experience, we can decide if it is worth keeping. Robust statistics [7] offers a wide range of methods: this implementation uses the Median Absolute Deviation (MAD) procedure.

6 FULL CROSS VALIDATION

The value e_total^cve = Σ_i e_i^cve, summed over all "good" experiences, provides a measure of how well the current representation fits the data. By optimizing this value with respect to internal learner parameters, such as the width K_width of the local weighting function used by kernel regression and LWR, the internal parameters can be found automatically. Another important set of parameters that can be optimized is the relative scaling of each input variable: an example of this procedure applied to a two-joint arm task may be found in Reference [2]. A useful feature of this procedure is its quick discovery (and subsequent ignoring) of irrelevant input variables.

Cross-validation can also be used to selectively forget old inaccurate experiences caused by a slowly drifting or suddenly changing environment. We have already seen that adaptive control algorithms such as the LMS rule can avoid such problems because the effects of experiences decay with time. Memory-based methods can also forget things according to a forgetfulness parameter: all observations are weighted not only by the distance to x_query but also by their age:

    w_i = exp( -(x_i - x_query)^2 / K_width^2 - (n - i) / K_recall )   (1)

where we assume the ordering of the experiences' indices i is temporal, with experience n the most recent.

¹This also has disadvantages: persistence of excitation is required, and multiple tasks can often require relearning if they have not been practised recently.
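The leave-one-out error e_i^cve is cheap to compute in this formalism, and an outlying experience stands out as the one worst predicted by the rest of the memory. A sketch using kernel regression with an assumed bandwidth and a planted outlier (illustrative, not the paper's implementation):

```python
import numpy as np

def kr_predict(xq, X, Y, K_width=0.2):
    # Kernel-regression prediction from the memory (X, Y).
    w = np.exp(-((X - xq) ** 2) / K_width ** 2)
    return np.sum(w * Y) / np.sum(w)

def cross_validation_errors(X, Y):
    # e_i^cve: error predicting experience i from all *other* experiences.
    n = len(X)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        errs[i] = abs(kr_predict(X[i], X[mask], Y[mask]) - Y[i])
    return errs

X = np.linspace(0, 1, 30)
Y = np.sin(2 * np.pi * X)
Y[10] += 5.0                # plant one outlying experience
errs = cross_validation_errors(X, Y)
# The planted outlier has by far the largest e_i^cve, so a MAD-style
# threshold on errs can flag it for removal.
worst = int(np.argmax(errs))
```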
We find the K_recall that minimizes the recent-weighted average cross validation error Σ_{i=0..n} e_i^cve exp(-(n - i)/γ), where γ is a human-assigned 'meta-forgetfulness' constant, reflecting how many experiences the learner would need in order to benefit from observation of an environmental change. It should be noted that γ is a substantially less task-dependent prescription of how far back to forget than a human-specified K_recall would be. Some initial tests of this technique are included among the experiments of Section 8.

Architecture selection is another use of cross validation: given a family of learners, the member with the least cross validation error is used for subsequent predictions.

7 COMPUTATIONAL CONSIDERATIONS

Unless the real time between control cycles is longer than a few seconds, cross validation is too expensive to perform after every cycle. Instead it can be performed as a separate parallel process, updating the best parameter values and removing outliers every few real control cycles. The usefulness of breaking a learning control task into an online real-time process and offline mental simulation was noted by [12]. Initially, the small number of experiences means that cross validation optimizes the parameters very frequently, but the time between updates increases with the memory size. The decreasing frequency of cross validation updates is little cause for concern, because as time progresses the estimated optimal parameter values are expected to become decreasingly variable.

If there is no time to make more than one memory-based query per cycle, then memory-based learning can nevertheless proceed by pushing even more of the computation into the offline component. If the offline process can identify meaningful states relevant to the task, then it can compute, for each of them, what the optimal action would be. The resulting state-action pairs are then used as a policy.
The online process then need only look up the recommended action in the policy, apply it, and then insert (s, a, b) into the memory.

8 COMPARATIVE TESTS

The ultimate goal of the investigation is to produce a learning control algorithm which can learn to control a fairly wide family of different tasks. Some basic, very different, tasks have been used for the initial tests. The HARD task, graphed in Figure 1, is a one-dimensional direct relationship between action and behaviour which is both non-monotonic and discontinuous. The VARIER task (Figure 2) is a sinusoidal relation for which the phase continuously drifts and occasionally alters catastrophically. LINEAR is a noisy linear relation between 4-d states, 4-d actions and 4-d behaviours. For these first three tasks, the goal behaviour is selected randomly on each control cycle. ARM (Figure 3) is a simulated noisy dynamic two-joint arm acting under gravity, in which state is perceived in cartesian coordinates and actions are produced in joint-torque coordinates. Its task is to follow the circular trajectory. BILLIARDS is a simulation of the real billiards robot described shortly, in which 5% of experiences are entirely random outliers.

[Figure 1: the HARD relation (behaviour versus action). Figure 2: the VARIER relation. Figure 3: the ARM task, with its circular goal trajectory.]

The following learning methods were tested: nearest neighbour, kernel regression and LWR, all searching the forward model and using a form of uncertainty-based intelligent experimentation [10] when the forward search proved inadequate. Another method under test was sole use of the inverse, learned by LWR. Finally, a "best-possible" value was obtained by numerically inverting the real simulated forward model instead of a learned model. All tasks were run for only 200 control cycles.
In each case the quality of the learner was measured by the number of successful actions in the final hundred cycles, where "successful" was defined as producing behaviour within a small tolerance of b_goal. Results are displayed in Table 1. There is little space to discuss them in detail, but they generally support the arguments of the previous sections. The inverse model on its own was generally inferior to the forward method, even in those cases in which the inverse is well-defined. Outlier removal improved performance on the BILLIARDS task over non-robustified versions; interestingly, outlier removal also greatly benefited the inverse-only method. The selectively forgetful methods performed better than their non-forgetful counterparts on the VARIER task, but in the stationary environments they did not pay a great penalty. Cross validation for K_width was useful: for the HARD task, LWR found a very small K_width, but in the LINEAR task it unsurprisingly preferred an enormous K_width.

Some experiments were also performed with a real billiards robot, shown in Figure 4. Sensing is visual: one camera looks along the cue stick and the other looks down at the table. The cue stick swivels around the cue ball, which starts each shot at the same position. At the start of each attempt the object ball is placed at a random position in the half of the table opposite the cue stick. The camera above the table obtains the (x, y) image coordinates of the object ball, which constitute the state. The action is the x-coordinate of the image of the object ball on the cue stick camera. A motor swivels the cue stick until the centroid of the actual image of the object ball coincides with the chosen x-coordinate value. The shot is then performed and observed by the overhead camera. The behaviour is defined as the cushion, and the position on the cushion, with which the object ball first collides.
Table 1: Relative performance of a family of learners on a family of tasks. Each combination of learner and task was run ten times to provide the mean number of successes and standard deviation shown in the table. (K = use MAD outlier removal, X = use cross-validation for K_width, R = use cross-validation for K_recall, IF = obtain initial candidate action from the inverse model, then search the forward model.)

Controller type                           VARIER    HARD      LINEAR    ARM       BIL'DS
Best possible (numerically inverting
  the simulated world)                    100 ± 0   100 ± 0   75 ± 3    94 ± 1    82 ± 4
Inverse only, learned with LWR            15 ± 9    24 ± 11    7 ± 6    76 ± 28   71 ± 5
Inverse only, learned with LWR, KRX       48 ± 16   72 ± 8    70 ± 4    89 ± 4    70 ± 10
LWR: IF                                   14 ± 10   11 ± 5    58 ± 4    83 ± 4    55 ± 12
LWR: IF X                                 19 ± 9    72 ± 4    70 ± 4    89 ± 3    61 ± 9
LWR: IF KX                                22 ± 15   51 ± 27   73 ± 3    90 ± 3    75 ± 7
LWR: IF KRX                               54 ± 8    65 ± 28   70 ± 5    89 ± 2    69 ± 7
LWR: forward only, KRX                    56 ± 9    53 ± 17   73 ± 1    89 ± 1    69 ± 7
Kernel regression: IF                      8 ± 2     6 ± 2    13 ± 3     3 ± 2     1 ± 1
Kernel regression: IF KRX                 15 ± 8    42 ± 21   14 ± 2    23 ± 10   30 ± 5
Nearest neighbour: IF                     22 ± 4    92 ± 2     0 ± 0    44 ± 6    10 ± 2
Nearest neighbour: IF K                   26 ± 10   69 ± 4     0 ± 0    40 ± 6     9 ± 3
Nearest neighbour: IF KR                  44 ± 8    68 ± 3     0 ± 0    40 ± 7    11 ± 3
Nearest neighbour: forward only, KR       43 ± 8    66 ± 5     0 ± 0    37 ± 3     8 ± 1
Global linear regression: IF               8 ± 3     7 ± 3    74 ± 5    60 ± 17   23 ± 6
Global linear regression: IF KR           20 ± 13    9 ± 2    73 ± 4    72 ± 3    21 ± 4
Global quadratic regression: IF           14 ± 7     5 ± 3    64 ± 2    70 ± 22   40 ± 11

The controller uses the memory-based learner to choose the action to maximize the probability that the ball will enter the nearer of the two pockets at the end of the table. A histogram of the number of successes against trial number is shown in Figure 5. In this experiment, the learner was LWR using outlier removal and cross validation for K_width. After 100 experiences, control choice running on a Sun-4 was taking 0.8 seconds².
Sinking the ball requires better than 1% accuracy in the choice of action, the world contains discontinuities, and there are random outliers in the data, so it is encouraging that within fewer than 100 experiences the robot had reached a 70% success rate — substantially better than the author can achieve.

ACKNOWLEDGEMENTS

Some of the work discussed in this paper is being performed in collaboration with Chris Atkeson. The robot cue stick was designed and built by Wes Huang with help from Gerrit van Zyl. Dan Hill also helped considerably with the billiards robot. The author is supported by a Postdoctoral Fellowship from SERC/NATO. Support was provided under Air Force Office of Scientific Research grant AFOSR-89-0500 and a National Science Foundation Presidential Young Investigator Award to Christopher G. Atkeson.

Footnote 2: This could have been greatly improved with more appropriate hardware or better software techniques such as kd-trees for structuring data [11, 9].

Figure 4: The billiards robot. In the foreground is the cue stick, which attempts to sink balls in the far pockets.

Figure 5: Frequency of successes versus control cycle for the billiards task. (Horizontal axis: trial number, in batches of 10; vertical axis: number of successes.)

References

[1] C. G. Atkeson. Using Local Models to Control Movement. In Proceedings of Neural Information Processing Systems Conference, November 1989.
[2] C. G. Atkeson. Memory-Based Approaches to Approximating Continuous Functions. Technical report, M.I.T. Artificial Intelligence Laboratory, 1990.
[3] C. G. Atkeson and D. J. Reinkensmeyer. Using Associative Content-Addressable Memories to Control Robots. In Miller, Sutton, and Werbos, editors, Neural Networks for Control. MIT Press, 1989.
[4] S. D. Conte and C. de Boor. Elementary Numerical Analysis. McGraw-Hill, 1980.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.
[6] R. Franke.
Scattered Data Interpolation: Tests of Some Methods. Mathematics of Computation, 38(157), January 1982.
[7] F. Hampel, P. Rousseeuw, E. Ronchetti, and W. Stahel. Robust Statistics. Wiley International, 1985.
[8] M. I. Jordan and D. E. Rumelhart. Forward Models: Supervised Learning with a Distal Teacher. Technical report, M.I.T., July 1990.
[9] A. W. Moore. Efficient Memory-based Learning for Robot Control. PhD thesis; Technical Report No. 209, Computer Laboratory, University of Cambridge, October 1990.
[10] A. W. Moore. Knowledge of Knowledge and Intelligent Experimentation for Learning Control. In Proceedings of the 1991 Seattle International Joint Conference on Neural Networks, July 1991.
[11] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal of Complex Systems, 1(2):273-347, 1987.
[12] R. S. Sutton. Integrated Architecture for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In Proceedings of the 7th International Conference on Machine Learning. Morgan Kaufmann, June 1990.
1991
Induction of Finite-State Automata Using Second-Order Recurrent Networks

Raymond L. Watrous, Siemens Corporate Research, 755 College Road East, Princeton, NJ 08540
Gary M. Kuhn, Center for Communications Research, IDA, Thanet Road, Princeton, NJ 08540

Abstract

Second-order recurrent networks that recognize simple finite-state languages over {0,1}* are induced from positive and negative examples. Using the complete gradient of the recurrent network and sufficient training examples to constrain the definition of the language to be induced, solutions are obtained that correctly recognize strings of arbitrary length. A method for extracting a finite-state automaton corresponding to an optimized network is demonstrated.

1 Introduction

We address the problem of inducing languages from examples by considering a set of finite-state languages over {0,1}* that were selected for study by Tomita (Tomita, 1982):

L1. 1*
L2. (10)*
L3. no odd-length 0-string anywhere after an odd-length 1-string
L4. not more than two 0's in a row
L5. bit pairs: #01's + #10's = 0 mod 2
L6. abs(#1's - #0's) = 0 mod 3
L7. 0*1*0*1*

Tomita also selected for each language a set of positive and negative examples (summarized in Table 1) to be used as a training set. By a method of heuristic search over the space of finite-state automata with up to eight states, he was able to induce a recognizer for each of these languages (Tomita, 1982). Recognizers of finite-state languages have also been induced using first-order recurrent connectionist networks (Elman, 1990; Williams and Zipser, 1988; Cleeremans, Servan-Schreiber and McClelland, 1989). Generally speaking, these results were obtained by training the network to predict the next symbol (Cleeremans, Servan-Schreiber and McClelland, 1989; Williams and Zipser, 1988), rather than by training the network to accept or reject strings of different lengths.
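For concreteness, the seven languages can be written as executable predicates over binary strings. These follow my reading of the definitions above (L3 and L5 in particular are easy to misstate), so treat them as a sketch to be checked against Tomita's original formulations:

```python
import re

def runs(s):
    """Maximal runs of identical symbols, as (symbol, length) pairs."""
    return [(m.group(0)[0], len(m.group(0))) for m in re.finditer(r'0+|1+', s)]

def l3(s):
    # Reject if any odd-length 0-run occurs anywhere after an odd-length 1-run.
    seen_odd_ones = False
    for sym, n in runs(s):
        if sym == '1' and n % 2 == 1:
            seen_odd_ones = True
        if sym == '0' and n % 2 == 1 and seen_odd_ones:
            return False
    return True

tomita = {
    1: lambda s: re.fullmatch(r'1*', s) is not None,
    2: lambda s: re.fullmatch(r'(10)*', s) is not None,
    3: l3,
    4: lambda s: '000' not in s,                              # <= two 0's in a row
    5: lambda s: (s.count('01') + s.count('10')) % 2 == 0,
    6: lambda s: abs(s.count('1') - s.count('0')) % 3 == 0,
    7: lambda s: re.fullmatch(r'0*1*0*1*', s) is not None,
}
```

Predicates like these also make it easy to regenerate the string counts of Table 1 by enumerating all 2047 strings of length 10 or less.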
Several training algorithms used an approximation to the gradient (Elman, 1990; Cleeremans, Servan-Schreiber and McClelland, 1989) obtained by truncating the computation of the backward recurrence. The problem of inducing languages from examples has also been approached using second-order recurrent networks (Pollack, 1990; Giles et al., 1990). Using a truncated approximation to the gradient, and Tomita's training sets, Pollack reported that "none of the ideal languages were induced" (Pollack, 1990). On the other hand, a Tomita language has been induced using the complete gradient (Giles et al., 1991). This paper reports the induction of several Tomita languages and the extraction of the corresponding automata, with certain differences in method from (Giles et al., 1991).

2 Method

2.1 Architecture

The network model consists of one input unit, one threshold unit, N state units and one output unit. The output unit and each state unit receive a first-order connection from the input unit and the threshold unit. In addition, each of the output and state units receives a second-order connection for each pairing of the input and threshold unit with each of the state units. For N = 3, the model is mathematically identical to that used by Pollack (Pollack, 1990); it has 32 free parameters.

2.2 Data Representation

The symbols of the language are represented by byte values that are mapped into real values between 0 and 1 by dividing by 255. Thus, the ZERO symbol is represented by octal 040 (0.1255). This value was chosen to be different from 0.0, which is used as the initial condition for all units except the threshold unit, which is set to 1.0. The ONE symbol was chosen as octal 370 (0.97255). All strings are terminated by two occurrences of a termination symbol that has the value 0.0.
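The architecture just described can be sketched as follows. The random weights and the sigmoid nonlinearity are illustrative assumptions; the point is the wiring: each of the N state units and the output unit combines first-order input from the (input, threshold) pair with second-order products of that pair and the previous state. For N = 3 this gives exactly the 32 free parameters mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                     # state units
# Receiving units: N state units plus 1 output unit.
W1 = rng.normal(size=(N + 1, 2))          # first-order: from (input, threshold)
W2 = rng.normal(size=(N + 1, 2, N))       # second-order: (input, threshold) x state
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def run(string_values):
    """Feed a sequence of real-valued symbols; return the final output activation."""
    s = np.zeros(N)                       # state units start at 0.0
    out = 0.0
    for x_in in string_values:
        x = np.array([x_in, 1.0])         # threshold unit clamped to 1.0
        net = W1 @ x + np.einsum('ijk,j,k->i', W2, x, s)
        act = sigmoid(net)
        s, out = act[:N], act[N]
    return out
```

Running it on the symbol encoding above, e.g. `run([0.1255, 0.97255, 0.0, 0.0])`, produces the final output value whose comparison against the accept/reject targets is described in the next section.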
                 Grammatical                  Ungrammatical
Language   <=10    train   longer       <=10    train   longer
1          11      9       -            2036    8       2
2          6       5       1            2041    10      -
3          652     11      2            1395    11      1
4          1103    10      1            944     7       2
5          683     9       -            1364    11      1
6          683     10      -            1364    11      1
7          561     11      2            1486    6       2

Table 1: Number of grammatical and ungrammatical strings of length 10 or less for the Tomita languages, the number of those included in the Tomita training sets ("train"), and the number of longer training strings ("longer").

2.3 Training

The Tomita languages are characterized in Table 1 by the number of grammatical strings of length 10 or less (out of a total of 2047 strings). The Tomita training sets are also characterized by the number of grammatical strings of length 10 or less included in the training data. For completeness, the table also shows the number of training strings of length greater than 10. A comparison of the number of grammatical strings with the number included in the training set shows that while Languages 1 and 2 are very sparse, they are almost completely covered by the training data, whereas Languages 3-7 are more dense and are sparsely covered by the training sets. Possible consequences of these differences are considered in discussing the experimental results. A mean-squared error measure was defined with target values of 0.9 and 0.1 for accept and reject, respectively. The target function was weighted so that error was injected only at the end of the string. The complete gradient of this error measure for the recurrent network was computed by a method of accumulating the weight dependencies backward in time (Watrous, Ladendorf and Kuhn, 1990). This is in contrast to the truncated gradient used by Pollack (Pollack, 1990) and to the forward-propagation algorithm used by Giles (Giles et al., 1991). The networks were optimized by gradient descent using the BFGS algorithm.
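The end-of-string error measure can be made concrete in a few lines (a minimal sketch; in the paper the gradient of this quantity is then propagated backward through all time steps of the recurrence):

```python
def string_error(outputs, accept):
    """Squared error injected only at the final time step of the string.
    Targets are 0.9 for accept and 0.1 for reject; earlier outputs carry
    zero weight in the error measure."""
    target = 0.9 if accept else 0.1
    return (outputs[-1] - target) ** 2

# Only the final output (0.85) contributes to the error for this accept string.
err = string_error([0.4, 0.6, 0.85], accept=True)
```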
A termination criterion of 10^-10 was set; it was believed that such a strict tolerance might lead to smaller loss of accuracy on very long strings. No constraints were set on the number of iterations. Five networks with different sets of random initial weights were trained separately on each of the seven languages described by Tomita, using exactly his training sets (Tomita, 1982), including the null string. The training set used by Pollack (Pollack, 1990) differs only in not including the null string.

2.4 Testing

The networks were tested on the complete set of strings up to length 10. Acceptance of a string was defined as the network having a final output value greater than 0.9 - T, and rejection as a final value less than 0.1 + T, where 0 <= T < 0.4 is the tolerance. The decision was considered ambiguous otherwise.

3 Results

The results of the first experiment are summarized in Table 2. For each language, each network is listed by the seed value used to initialize the random weights. For each network, the number of iterations to termination is listed, followed by the minimum MSE value reached. Also listed are the percentage of strings of length 10 or less that were correctly recognized by the network, and the percentage of strings for which the decision was uncertain at a tolerance of 0.0. The number of iterations until termination varied widely, from 28 to 37909. There is no obvious correlation between the number of iterations and the minimum MSE.

3.1 Language 1

Language 1 is recognized correctly by two of the networks (seeds 72 and 987235) and nearly correctly by a third (seed 239). This latter network failed on the strings 1^9 and 1^10, both of which were not in the training set. The network of seed 72 was further tested on all strings of length 15 or less and made no errors. This network was also tested on a string of 100 ones and showed no diminution of output value over the length of the string.
When tested on strings of 99 ones plus either an initial zero or a final zero, the network also made no errors. Another network, seed 987235, made no errors on strings of length 15 or less but failed on the string of 100 ones: the hidden units broke into oscillation after about the 30th input symbol, and the output fell into a low-amplitude oscillation near zero.

3.2 Language 2

Similarly, Language 2 was recognized correctly by two networks (seeds 89340 and 987235) and nearly correctly by a third network (seed 104). The latter network failed only on strings of the form (10)*010, none of which were included in the training data. The networks that performed perfectly on strings up to length 10 were tested further on all strings up to length 15 and made no errors. These networks were also tested on a string of 100 alternations of 1 and 0, and responded correctly. Changing the first or final zero to a one caused both networks to correctly reject the string.

3.3 The Other Languages

For most of the other languages, at least one network converged to a very low MSE value. However, networks that performed perfectly on the training set did not generalize well to a definition of the language. For example, for Language 3, the network with seed 104 reached an MSE of 8 x 10^-10 at termination, yet its performance on the test set was only 78.31%. One interpretation of this outcome is that the intended language was not sufficiently constrained by the training set.
Language   Seed     Iterations   MSE            Accuracy   Uncertainty
1          72       28           0.0012500000   100.00     0.00
1          104      95           0.0215882357   78.07      20.76
1          239      8707         0.0005882353   99.90      0.00
1          89340    5345         0.0266176471   66.93      0.00
1          987235   994          0.0000000001   100.00     0.00
2          72       5935         0.0005468750   93.36      4.93
2          104      4081         0.0003906250   99.80      0.20
2          239      807          0.0476171875   62.73      37.27
2          89340    1084         0.0005468750   100.00     0.00
2          987235   1("06        0.0001562500   100.00     0.00
3          72       442          0.0149000000   47.09      33.27
3          104      37909        0.0000000008   78.31      0.15
3          239      9264         0.0087000000   74.60      11.87
3          89340    8250         0.0005000000   73.57      0.00
3          987235   5769         0.0136136712   50.76      23.94
4          72       8630         0.0004375001   52.71      6.45
4          104      60           0.0624326924   20.86      50.02
4          239      2272         0.0005000004   55.40      9.38
4          89340    10680        0.0003750001   60.92      15.53
4          987235   324          0.0459375000   22.62      77.38
5          72       890          0.0526912920   34.39      63.80
5          104      368          0.0464772727   45.92      41.62
5          239      1422         0.0487500000   31.46      36.93
5          89340    2775         0.0271525856   46.12      22.52
5          987235   2481         0.0209090867   66.83      2.49
6          72       524          0.0788760972   0.05       99.95
6          104      332          0.0789530751   0.05       99.95
6          239      1355         0.0229551248   31.95      47.04
6          89340    8171         0.0001733280   46.21      5.32
6          987235   306          0.0577867426   37.71      24.87
7          72       373          0.0588385157   9.38       86.08
7          104      8578         0.0104224185   55.74      17.00
7          239      969          0.0211073814   52.76      26.58
7          89340    4259         0.0007684520   54.42      0.49
7          987235   666          0.0688690476   12.55      74.94

Table 2: Results of training the three-state-unit network from 5 random starts on the Tomita languages, using the Tomita training data.

In the case of Language 5, in no case was the MSE reduced below 0.02. We believe that the model is sufficiently powerful to compute the language. It is possible, however, that the power of the model is marginally sufficient, so that finding a solution depends critically upon the initial conditions.
Seed     Iterations   MSE            Accuracy   Uncertainty
72       215          0.0000001022   100.00     0.00
104      665          0.0000000001   99.85      0.05
239      205          0.0000000001   99.90      0.10
89340    5244         0.0005731708   99.32      0.10
987235   2589         0.0004624581   92.13      6.55

Table 3: Results of training the three-state-unit network from 5 random starts on Tomita Language 4, using probabilistic training data (p = 0.1).

4 Further Experiments

The effect of additional training data was investigated by creating training sets in which each string of length 10 or less is randomly included with a fixed probability p. Thus, for p = 0.1, approximately 10% of the 2047 strings are included in the training set. A flat random sampling of the lexicographic domain may not be the best approach, however, since grammaticality can vary non-uniformly. The same networks as before were trained on the larger training set for Language 4, with the results listed in Table 3. Under these conditions, a network solution was obtained that generalizes perfectly to the test set (seed 72). This network also made no errors on strings up to length 15. However, very low MSE values were again obtained for networks that do not perform perfectly on the test data (seeds 104 and 239). Network 239 made two ambiguous decisions that would have been correct at a tolerance value of 0.23. Network 104 incorrectly accepted the strings 000 and 1000, and would have correctly accepted the string 0100 at a tolerance of 0.25. Both networks made no additional errors on strings up to length 15. The training data may still be slightly indeterminate. Moreover, the few errors made were on short strings that are not included in the training data. Since this network model is continuous, and thus potentially infinite-state, it is perhaps not surprising that the successful induction of a finite-state language seems to require more training data than was needed for Tomita's finite-state model (Tomita, 1982).
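The probabilistic training-set construction can be sketched directly. The function name and the fixed seed are my own choices for reproducibility; the paper does not specify its sampling code:

```python
import random
from itertools import product

def sample_training_set(max_len=10, p=0.1, seed=0):
    """Include each binary string of length <= max_len (incl. the null
    string) independently with probability p."""
    rng = random.Random(seed)
    pool = [''.join(bits) for n in range(max_len + 1)
            for bits in product('01', repeat=n)]      # 2047 strings in all
    return [s for s in pool if rng.random() < p]

train = sample_training_set()
```

With p = 0.1 this yields roughly 205 of the 2047 strings, each then labeled grammatical or ungrammatical for the language under study.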
The effect of more complex models was investigated for Language 5 using a network with 11 state units; this increases the number of weights from 32 to 288. Networks of this type were optimized from 5 random initial conditions on the original training data. The results of this experiment are summarized in Table 4. By increasing the complexity of the model, convergence to low MSE values was obtained in every case, although none of these networks generalized to the desired language. Once again, it is possible that more data is required to constrain the language sufficiently.

5 FSA Extraction

The following method for extracting a deterministic finite-state automaton corresponding to an optimized network was developed:

Seed     Iterations   MSE            Accuracy   Uncertainty
72       1327         0.0002840909   53.00      11.87
104      680          0.0001136364   39.47      16.32
239      357          0.0006818145   61.31      3.32
89340    122          0.0068189264   63.36      6.64
987235   4502         0.0001704545   48.41      16.95

Table 4: Results of training the network with 11 state units from 5 random starts on Tomita Language 5, using the Tomita training data.

1. Record the response of the network to a set of strings.
2. Compute a zero-bin-width histogram for each hidden unit, and partition each histogram so that the intervals between adjacent peaks are bisected.
3. Initialize a state-transition table indexed by the current state and input symbol; then, for each string:
   (a) Starting from the NULL state, for each hidden unit activation vector:
       i. Obtain the next state label from the concatenation of the histogram interval numbers of the hidden unit values.
       ii. Record the next state in the state-transition table. If a transition is recorded from the same state on the same input symbol to two different states, move or remove hidden-unit histogram partitions so that the two states are collapsed and go to 3; otherwise, update the current state.
   (b) At the end of the string, mark the current state as accept, reject or uncertain according as the output unit is >= 0.9, <= 0.1, or otherwise. If the current state has already received a different marking, move or insert histogram partitions so that the offending state is subdivided, and go to 3.

If the recorded strings are processed successfully, then the resulting state-transition table may be taken as an FSA interpretation of the optimized network. The FSA may then be minimized by standard methods (Giles et al., 1991). If no histogram partition can be found such that the process succeeds, the network may not have a finite-state interpretation. As an approximation to Step 3, the hidden unit vector was labeled by the index of that vector in an initially empty set of reference vectors, each of whose component values was within some global threshold (theta) of the corresponding hidden unit value. If no such reference vector was found, the observed vector was added to the reference set. The threshold theta could be raised or lowered as states needed to be collapsed or subdivided. Using the approximate method, for Language 1 the correct and minimal FSA was extracted from one network (seed 72, theta = 0.1). The correct FSA was also extracted from another network (seed 987235, theta = 0.06), although for no partition of the hidden unit activation values could the minimal FSA be extracted. Interestingly, the FSA extracted from the network with seed 239 corresponded to 1^n for n < 8. Also, the FSA for another network (seed 89340, theta = 0.0003) was nearly correct, although the string accuracy was only 67%; one state was wrongly labeled "accept". For Language 2, the correct and minimal FSA was extracted from one network (seed 987235, theta = 0.00001). A correct FSA was also extracted from another network (seed
89340, theta = 0.0022), although this FSA was not minimal. For Language 4, a histogram partition was found for one network (seed 72) that led to the correct and minimal FSA; for the zero-width histogram, the FSA was correct but not minimal. Thus, a correct FSA was extracted from every optimized network that correctly recognized strings of length 10 or less from the language for which it was trained. However, in some cases no histogram partition was found for which the extracted FSA was minimal. It also appears that an almost-correct FSA can be extracted, which might perhaps be corrected externally. And, finally, the extracted FSA may be correct even though the network might fail on very long strings.

6 Conclusions

We have succeeded in recognizing several simple finite-state languages using second-order recurrent networks and in extracting the corresponding finite-state automata. We consider the computation of the complete gradient a key element in this result.

Acknowledgements

We thank Lee Giles for sharing with us their results (Giles et al., 1991).

References

Cleeremans, A., Servan-Schreiber, D., and McClelland, J. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1(3):372-381.

Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14:179-212.

Giles, C. L., Chen, D., Miller, C. B., Chen, H. H., Sun, G. Z., and Lee, Y. C. (1991). Second-order recurrent neural networks for grammatical inference. In Proceedings of the International Joint Conference on Neural Networks, volume II, pages 273-281.

Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., and Chen, D. (1990). Higher order recurrent networks and grammatical inference. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2, pages 380-387. Morgan Kaufmann.

Pollack, J. B. (1990). The induction of dynamical recognizers. Technical Report 90-JP-AUTOMATA, Ohio State University.

Tomita, M. (1982). Dynamic construction of finite automata from examples using hill-climbing.
In Proceedings of the Fourth International Cognitive Science Conference, pages 105-108.

Watrous, R. L., Ladendorf, B., and Kuhn, G. M. (1990). Complete gradient optimization of a recurrent network applied to /b/, /d/, /g/ discrimination. Journal of the Acoustical Society of America, 87(3):1301-1309.

Williams, R. J. and Zipser, D. (1988). A learning algorithm for continually running fully recurrent neural networks. Technical Report ICS Report 8805, UCSD Institute for Cognitive Science.
1991
Network generalization for production: Learning and producing styled letterforms

Igor Grebert, 541 Cutwater Ln., Foster City, CA 94404
David G. Stork, Ricoh Calif. Research Cen., 2882 Sand Hill Rd. #115, Menlo Park, CA 94025
Ron Keesing, Dept. Physiology, U.C.S.F., San Francisco, CA 94143
Steve Mims, Electrical Engin., Stanford U., Stanford, CA 94305

Abstract

We designed and trained a connectionist network to generate letterforms in a new font given just a few exemplars from that font. During learning, our network constructed a distributed internal representation of fonts as well as letters, despite the fact that each training instance exemplified both a font and a letter. It was necessary to have separate but interconnected hidden units for "letter" and "font" representations; several alternative architectures were not successful.

1. INTRODUCTION

Generalization from examples is central to the notion of cognition and intelligent behavior (Margolis, 1987). Much research centers on generalization in recognition, as in optical character recognition, speech recognition, and so forth. In all such cases, during the recognition event the information content of the representation is reduced; sometimes categorization is binary, representing just one bit of information. Thus the information reduction in answering "Is this symphony by Mozart?" is very large. A different class of problems requires generalization for production, e.g., paint a portrait of Madonna in the style of Matisse. Here during the production event a very low informational input ("Madonna" and "Matisse") is used to create a very high informational output, including color, form, etc. on the canvas. Such problems are a type of analogy, and typically require the generalization system to abstract out invariants in both the instance being presented (e.g., Madonna) and the style (e.g., Matisse), and to integrate these representations in a meaningful way.
This must be done despite the fact that the system is never taught explicitly the features that correspond to Matisse's style alone, nor to Madonna's face alone, and is never presented an example of both simultaneously. To explore this class of analogy and production issues, we addressed the following problem, derived from Hofstadter (1985): Given just a few letters in a new font, draw the remaining letters. Connectionist networks have recently been applied to production problems such as music composition (Todd, 1989), but our task is somewhat different. Whereas in music composition, memory and context (in the form of recurrent connections in a network) are used for pattern generation (melody or harmony), we have no such temporal or other explicit context information during the production of letterforms.

2. DATA, NETWORK AND TRAINING

Figure 1 illustrates schematically our class of problems and shows a subset of the data used to train our network. The general problem is to draw all the remaining letterforms in a given font, such that those forms are recognizable as letters in the style of that font.

Figure 1: Several letters from three fonts (Standard, House and Benzene right) in Hofstadter's GridFont system.
There are 56 fundamental horizontal, vertical and diagonal strokes, or "pixels," in the grid. Each letterform in Figure 1 has a recognizable letter identity and "style" (or font). Each letter (columns) shares some invariant features, as does each font (rows), though it would be quite difficult to describe what is the "same" in each of the a's, for instance, or in all letters of the Benzene right font. We trained our network with 26 letters in each of five fonts (Standard, House, Slant, Benzene right and Benzene left), and just 14 letters in the "test" font (Hunt four font). The task of the network was to reconstruct the missing 12 letters in Hunt four font. We used a structured three-level network (Figure 2) in which letter identity was represented in a 1-of-26 code (e.g., 010000... → b), and font identity was represented in a similar 1-of-6 code. The letterforms were represented as 56-element binary vectors, with 1's for each stroke comprising the character, and were provided to the output units by a teacher. (Note that this network is "upside-down" from the typical use of connectionist networks for categorization.) The two sections of the input layer were each fully connected to the hidden layer, but the hidden layer-to-output layer connections were restricted (Figures 3 and 4). Such restricted hidden-to-output projections helped to prevent the learning of spurious and meaningless correlations between strokes in widely separated grid regions. There are unidirectional one-to-many intra-hidden-layer connections from the letter section to the font section within the hidden layer (Figure 3).

(Diagram: 26 letter inputs and 6 font inputs, fully interconnected to 44 letter hidden units and 44 font hidden units, with restricted connections to the 56 stroke outputs.)

Figure 2: Network used for generalization in production.
Note that the high-dimensional representation of strokes is at the output of the network, while the low-dimensional representation (a one-of-26 coding for letters and a one-of-six for fonts) is the input. The net has one-to-many connections from letter hidden units to font hidden units (cf. Figure 3).

Figure 3: Expanded view of the hidden and output layers of the network of Figure 2. Four letter hidden units and four font hidden units project fully to the eighteen stroke (output) units representing the ascender region of the GridFont grid; these hidden units project to no other output units. Each of the four letter hidden units also projects to all four of the corresponding font hidden units. This basic structure is repeated across the network (see text).

All connection weights, including intra-hidden-layer weights, were adjusted using backpropagation (Rumelhart, Hinton and Williams, 1986), with a learning rate of η = 0.005 and momentum α = 0.9. The training error stopped decreasing after roughly 10,000 training epochs, where each epoch consisted of one presentation of each of the 144 patterns (26 letters x 5 fonts + 14 letters) in random order.

Figure 4: The number of hidden units projecting to each region of the output. Four font hidden units and four letter hidden units project to the 18 top strokes (ascender region) of the output layer, as indicated. Ten font hidden units and ten letter hidden units project to the next lower square region (20 strokes), etc. This restriction prevents the learning of meaningless correlations between particular strokes in the ascender and descender regions (for instance).
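The overall dataflow of Figure 2 can be sketched with dense weight matrices. This is a simplification under stated assumptions: the real network restricts hidden-to-output connections by grid region (Figures 3 and 4) rather than using dense projections, and these untrained random weights produce meaningless strokes — the sketch only shows the wiring, including the one-way letter-hidden-to-font-hidden link:

```python
import numpy as np

rng = np.random.default_rng(0)
N_LETTERS, N_FONTS, N_HID, N_STROKES = 26, 6, 44, 56
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W_letter_hid = rng.normal(scale=0.1, size=(N_HID, N_LETTERS))
W_font_hid   = rng.normal(scale=0.1, size=(N_HID, N_FONTS))
W_intra      = rng.normal(scale=0.1, size=(N_HID, N_HID))    # letter-hid -> font-hid
W_out_letter = rng.normal(scale=0.1, size=(N_STROKES, N_HID))
W_out_font   = rng.normal(scale=0.1, size=(N_STROKES, N_HID))

def produce(letter_idx, font_idx):
    """Map a (letter, font) index pair to an on/off decision per grid stroke."""
    letter = np.eye(N_LETTERS)[letter_idx]     # 1-of-26 code
    font = np.eye(N_FONTS)[font_idx]           # 1-of-6 code
    h_letter = sigmoid(W_letter_hid @ letter)
    h_font = sigmoid(W_font_hid @ font + W_intra @ h_letter)  # one-way intra link
    strokes = sigmoid(W_out_letter @ h_letter + W_out_font @ h_font)
    return strokes > 0.5

pattern = produce(1, 0)   # e.g. "b" in the first font, with untrained weights
```

After training, thresholding the 56 stroke activations in this way is what yields a drawable GridFont letterform.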
Such spurious correlations disrupt learning and generalization only with a small training set such as ours.

3. RESULTS AND CONCLUSIONS

In order to produce any letterform, we presented as input to the trained network a (very sparse) 1-of-26 and 1-of-6 signal representing the target letter and font; the letterforms emerged at the output layer. Our network reproduced nearly perfectly all the patterns in the training set. Figure 5 shows untrained letterforms generated by the network. Note that despite irregularities, all the letters except z can be easily recognized by humans. Moreover, the letterforms typically share the common style of Hunt four font: b, c, g, and p have the diamond-shaped "loop" of o, q, and other letters in the font; the generated g and y have the same right descender, similar to that in several letters of the original font; and so on; the l exactly matches the form designed by Hofstadter. Incidentally, we found that some of the letterforms produced by the network could be considered superior to those designed by Hofstadter. For instance, the generated w had the characteristic Hunt four diamond shape while the w designed by Hofstadter did not. We must stress, though, that there is no "right" answer here; the letterforms provided by Hofstadter are merely one possible solution. Just as there is no single "correct" portrait of Madonna in the style of Matisse, so our system must be judged successful if the letterforms produced are both legible and have the style implied by the other letterforms in the test font.
Figure 5: Hofstadter's letterforms from Hunt four font (above), and the output of our network (below) for the twelve letterforms that had never been presented during training. Hofstadter's letterforms serve merely as a guide; it is not necessary that the network reproduce these exactly to be judged successful.

Analysis of learned connection strengths (Grebert et al., 1992) reveals that different internal representations were formed for letter and for font characteristics, and that these are appropriate to the task at hand. The particular letter hidden unit shown in Figure 6 effectively "shuts down" any activity in the ascender region. Such a hidden unit would be useful when generating a, c, e, etc. Indeed, this hidden unit receives strong input from all letters that have no ascenders. The particular font hidden unit shown in Figure 6 leads to excitation of the "loop" in Slant font, and is used in the generation of o, b, d, g, etc. in that font. We note further that our network integrated style information (e.g., the diamond shape of the "loop" for the b and g, the "dot" for the i, etc.) with the form information appropriate to the particular letter being generated.
Figure 6: Hidden unit representation for a single letter hidden unit (left) and font hidden unit (right).

In general, the network does quite well. The only letterform quite poorly represented is z. Evidently, the z letterform cannot be inferred from other information, presumably because z does not consist of any of the simplest fundamental features that make up a wide variety of other letters (left or right ascenders, loops, crosses for t and f, dots, right or left descenders).
The average adult has seen perhaps as many as 10^6 distinct examples of each letter in perhaps 10^10 presentations; in contrast, our network experienced just five or six distinct examples of each letter in 10^4 presentations. Out of this tremendous number of letterforms, the human virtually never experiences a g that has a disconnected descender (to take one example), and would not have made the errors our network does. We suspect that the errors our network makes are similar to those a typical westerner would exhibit in generating novel characters in a completely foreign alphabet, such as Thai. Although our network similarly has experienced only g's with connected descenders, it has a very small database over which to generalize; it is to be expected, then, that the network has not yet "deduced" the connectivity constraint for g. Indeed, it is somewhat surprising that our network performs as well as it does, and this gives us confidence that the architecture of Figure 2 is appropriate for the production task. This conclusion is supported by the fact that alternative architectures gave very poor results. For instance, a standard three-level backpropagation network produced illegible letterforms. Likewise, if the direct connections between letter hidden units and the output units in Figure 2 were removed, generalization performance was severely compromised. Our network parameters could have been "fine tuned" for improved performance, but such fine tuning would be appropriate for our problem alone, and not the general class of production problems. Even without such fine tuning, though, it is clear that the architecture of Figure 2 can successfully learn invariant features of both letter and font information, and integrate them for meaningful production of unseen letterforms. We believe this architecture can be applied to related problems, such as speech production, graphic image generation, etc.
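The forward pass of the architecture just described — sparse 1-of-26 letter and 1-of-6 font inputs feeding separate letter and font hidden groups, with direct letter-hidden-to-output connections — can be sketched as follows. The layer sizes, the 3x7 output grid, and the random weights are illustrative assumptions, not the trained network of Figure 2:

```python
import numpy as np

rng = np.random.default_rng(0)

N_LETTERS, N_FONTS = 26, 6
N_LETTER_HID, N_FONT_HID = 40, 20   # hidden group sizes (assumed)
GRID = 3 * 7                        # output letterform grid (assumed size)

# Random weights stand in for the trained network's learned connections.
W_letter = rng.normal(size=(N_LETTERS, N_LETTER_HID))
W_font = rng.normal(size=(N_FONTS, N_FONT_HID))
W_lh_out = rng.normal(size=(N_LETTER_HID, GRID))  # direct letter-hidden -> output
W_fh_out = rng.normal(size=(N_FONT_HID, GRID))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def letterform(letter_idx, font_idx):
    """Forward pass: sparse 1-of-26 and 1-of-6 codes in, grid activity out."""
    letter = np.zeros(N_LETTERS)
    letter[letter_idx] = 1.0
    font = np.zeros(N_FONTS)
    font[font_idx] = 1.0
    h_letter = sigmoid(letter @ W_letter)
    h_font = sigmoid(font @ W_font)
    # Output integrates letter (including direct connections) and font style.
    return sigmoid(h_letter @ W_lh_out + h_font @ W_fh_out)

out = letterform(6, 3)  # e.g. letter "g" in some font
print(out.shape)  # (21,)
```

With trained rather than random weights, the same pass generates the unseen (letter, font) combinations of Figure 5.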
ACKNOWLEDGEMENTS Thanks to David Rumelhart and Douglas Hofstadter for useful discussions. Reprint requests should be addressed to Dr. Stork at the above address, or stork@crc.ricoh.com.

REFERENCES
Grebert, Igor, David G. Stork, Ron Keesing and Steve Mims, "Connectionist generalization for production: An example from GridFont," Neural Networks (1992, in press).
Hofstadter, Douglas, "Analogies and Roles in Human and Machine Thinking," Chapter 24, pp. 547-603 in Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books (1985).
Margolis, Howard, Patterns, Thinking, and Cognition: A Theory of Judgment, U. Chicago Press (1987).
Rumelhart, David E., Geoffrey E. Hinton and Ronald J. Williams, "Learning Internal Representations by Error Propagation," Chapter 8, pp. 318-362 in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations, D. E. Rumelhart and J. L. McClelland (eds.), MIT Press (1986).
Todd, Peter M., "A Connectionist approach to algorithmic composition," Computer Music Journal, 13(4), 27-43, Winter 1989.
Parameterising Feature Sensitive Cell Formation in Linsker Networks in the Auditory System Lance C. Walton University of Kent at Canterbury Canterbury Kent England David L. Bisset University of Kent at Canterbury Canterbury Kent England

Abstract This paper examines and extends the work of Linsker (1986) on self-organising feature detectors. Linsker concentrates on the visual processing system, but infers that the weak assumptions made will allow the model to be used in the processing of other sensory information. This claim is examined here, with special attention paid to the auditory system, where there is much lower connectivity and therefore more statistical variability. On-line training is utilised, to obtain an idea of training times. These are then compared to the time available to pre-natal mammals for the formation of feature sensitive cells.

1 INTRODUCTION Within the last thirty years, a great deal of research has been carried out in an attempt to understand the development of cells in the pathways between the sensory apparatus and the cortex in mammals. For example, theories for the development of feature detectors were forwarded by Nass and Cooper (1975), by Grossberg (1976) and more recently by Obermayer et al. (1990). Hubel and Wiesel (1961) established the existence of several different types of feature sensitive cell in the visual cortex of cats. Various subsequent experiments have shown that a considerable amount of development takes place before birth (i.e. without environmental input). This must either be dependent on a genetic predisposition for individual cells to develop in an appropriate way without external influence, or on some low-level rules sufficient to create the required cell morphologies in the presence of random action potentials.
Although there is a great deal of a priori information concerning axon growth and synapse arborisation (governed by chemical means in the brain), it is difficult to conceive of a biological system that could use genetic information to directly manipulate the spatial information about the pre-synaptic target with respect to the axon with which the synapse is made. However, there is considerable random activity in the sensory apparatus that could be used to effect synaptic development. Various authors have constructed models that deal with different aspects of self-organisation of this kind, and some have pointed out the value of these types of cells in pattern classification problems (Grossberg 1976), but either the biological plausibility of these models is questionable, or the subject of pre-natal development (i.e. development without environmental input) is not addressed. In this paper, the networks of Linsker (1986) will be examined. Although these networks have been analysed quite extensively by Linsker, and also by Mackay and Miller (1990), the biological aspects of parameter ranges and choices have only been touched upon. It is our aim in this paper to add further detail in this area by examining the one-dimensional case, which represents the auditory pathways.

2 LINSKER NETWORKS The network is based on a Multi-Layer Perceptron, with feed-forward connections in all layers, and lateral connections (inhibition and excitation) in higher layers. The neural outputs are sums of the weighted inputs, and the weights develop according to a constrained Hebbian rule. Each layer is lettered for reference starting from A, and subsequent layers are lettered B, C, D, etc. The superscript M will be used to refer to an arbitrary layer, and L is used to refer to the previous layer. Each layer has a set of parameters which are the same for all neurons in that layer. Connectivity is random but is based on a Gaussian density distribution, exp(-r²/r_M²), where r_M is the arbor radius for layer M.
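The Gaussian connectivity density exp(-r²/r_M²) can be sampled directly: in one dimension it is a normal distribution with standard deviation r_M/√2. A minimal sketch (the rounding to a discrete neuron index and the parameter values are our assumptions):

```python
import numpy as np

def sample_connections(n_syn, r_m, rng):
    """Draw 1-D presynaptic offsets with density proportional to exp(-r^2 / r_m^2)."""
    # exp(-r^2 / r_m^2) is a Gaussian with standard deviation r_m / sqrt(2)
    offsets = rng.normal(0.0, r_m / np.sqrt(2.0), size=n_syn)
    return np.rint(offsets).astype(int)  # snap to the discrete neuron grid

rng = np.random.default_rng(42)
# e.g. a B-layer cell: N_B ~ 50 synapses over an arbor radius of order 1000
pre = sample_connections(n_syn=50, r_m=1000.0, rng=rng)
print(pre.min(), pre.max())
```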
Each layer is a rectangular array of neurons (or a vector of neurons in the one-dimensional case). The layers are assumed to be large enough so that edge effects are not important or do not occur. Layers develop one at a time starting from the B layer. The A layer is an input layer, which is divided into boxes, within each of which activity is uniform. This is biologically realistic, since sensory neurons fan out to a number of cells (an average of 10 in the cochlea), each of which only takes input from one sensory cell. Hence the input layer for the network acts like a layer of tonotopically organised neurons.

3 NETWORK DEVELOPMENT The output of a neuron in layer M is given by

(1)  F_n^{Mπ} = R_a + R_b Σ_j c_{nj} F_{pre(nj)}^{Lπ}

where π indexes a pattern presentation, the subscript n is used to index the M layer neurons, R_a and R_b are layer parameters, and F_{pre(nj)}^{Lπ} is the output of the L layer neuron which is pre-synaptic to the j'th input of the n'th M layer neuron. The synaptic weights develop according to a constrained Hebbian learning rule,

(2)  (Δc_{ni})^π = k_a + k_b (F_n^{Mπ} - F_1^M)(F_{pre(ni)}^{Lπ} - F_2^L)

where (Δc_{ni})^π is the change in the i'th weight of neuron n, and k_a, k_b, F_1^M, F_2^L are layer parameters. Synaptic weights are constrained to lie within the range (n_em - 1, n_em). (In this work, n_em = 0.5.) Linsker (1986a) derives an ensemble-averaged development equation which shows how development depends on the parameters, and how correlations develop between spatially proximate neurons in layers beyond the first. In so doing, the number of parameters is reduced from five per layer to two per layer, and therefore the equation is a very useful aid in understanding the self-organising nature of this model. The development equation is

(3)  ċ_{ni} = K_1 + (1/N_M) Σ_j (Q^L_{pre(ni),pre(nj)} + K_2) c_{nj}

(4)  Q^L_{ij} ≡ ⟨(F_i^{Lπ} - F̄^L)(F_j^{Lπ} - F̄^L)⟩ / f_0²

(5)  K_1 = [k_a + k_b (R_a - F_1^M)(F̄^L - F_2^L)] / (N_M k_b R_b f_0²)

(6)  K_2 = F̄^L (F̄^L - F_2^L) / f_0²

where N_M is the number of synaptic connections to an M layer neuron, F̄^L is the average output activity in the L layer, and f_0² is a unit of activity used to normalise the two-point correlation function Q^L_{ij}. In this work f_0 is chosen to set Q^L_{ii} = 1. Angle brackets denote an average taken over the ensemble of input patterns.

4 MORPHOLOGICAL REGIMES From equation 3, an expression can be found for the average weight value c̄ in a layer, and therefore certain properties of the system can be described. Although Mackay and Miller (1990) have described the regimes with the aid of eigenvalues and eigenfunctions, there is a much simpler method which will provide the same information. For an all-excitatory (AE) layer, the average weight value is equal to n_em. Since all weights are equal to n_em, the summation in equation 3 can be re-written as Σ_j Q^L_{pre(ni),pre(nj)} c_{nj} = n_em N_M q̄, where q̄ is a constant depending only on the arbor radii r_B and r_C. A similar expression can be found for all-inhibitory (AI) layers, and therefore the K_1-K_2 plane can be sub-divided into three regions which will yield AE cells, AI cells, and mixed-mode cells (see figure 1). The plane can be divided further for the mixed-mode cell type in the C layer. On-center and off-center cells develop close to the AE and AI boundaries respectively. Mackay and Miller have shown why these cells develop and have placed a theoretical lower bound on c̄ which agrees with experimental data. However, in so doing, the effect of the intercept on the K_2 axis was deemed small, due to a large number of synaptic connections. This approximation depends upon the large number of connections between the B and C layers. In the auditory case, the number of connections is smaller, and it is possible that this assumption no longer holds.
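The constrained Hebbian rule (2), with each weight clipped to the interval (n_em - 1, n_em), amounts per presentation to the following (parameter values here are illustrative, not those of the simulations):

```python
import numpy as np

def hebb_update(c, F_M, F_L_pre, ka, kb, F1_M, F2_L, n_em=0.5):
    """One presentation of rule (2): dc_i = ka + kb*(F_M - F1_M)*(F_L_pre_i - F2_L),
    with each weight clipped to the interval (n_em - 1, n_em)."""
    dc = ka + kb * (F_M - F1_M) * (F_L_pre - F2_L)
    return np.clip(c + dc, n_em - 1.0, n_em)

# Toy presentation: illustrative post- and pre-synaptic activities.
c = np.zeros(4)
c = hebb_update(c, F_M=1.0, F_L_pre=np.array([1.0, 0.0, 1.0, 0.0]),
                ka=0.0, kb=1e-3, F1_M=0.0, F2_L=0.5)
print(c)  # small positive/negative changes, still inside (-0.5, 0.5)
```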
From equation 3, it can be seen that movement into the on-centre region from the AE region causes the value of Σ_j Q^L_{pre(ni),pre(nj)} c_{nj} to decrease. This has the effect of moving the intercept of the constant-c̄ line from K_2 = q̄ towards K_2 = 0. The intercept finally reaches 0 when c̄ = 0, and then begins to move back towards q̄ as the AI regime is approached. This has two potentially important effects. Firstly, it means that the tolerance of K_2 varies with K_1; for a particular value of K_1, there are upper and lower limits on the value of K_2 which will allow maturation of on-center cells. This range of values (i.e. the difference between the limits) varies in a linear way with K_1, but the ratio of the range to a value of K_2 which is within the range (i.e. the center value) is not linear with K_1. Here, tolerance is defined as that ratio. Secondly, there is a region of negative K_2 where the nature of the cell morphology which will be produced is unknown. It is therefore important that |K_2| should be larger than this value in order to produce on-center or off-center cells reliably. Mackay and Miller use |K_2| → ∞ in their analysis. Unfortunately, this would require the fundamental network parameter F_2^L → ∞ from equation 6, and therefore it is an unsuitable choice. It is reasonable to assume that F_2^L is of the same order as F̄^L, and hence an order for K_2 can be established. For a concrete example, assume inputs are binary (giving a variance of 0.25, and hence f_0² = 0.25) and F_2^L = F̄^L × 1.2; this will ensure K_2 < 0 (equation 6) while adhering to the assumption made above. Equation 6 now gives the order of magnitude |K_2| = 0.2. To find the value of q̄, which will place a lower bound on |K_2|, a particular system should be chosen. The auditory system is chosen here.
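Reading equation (6) as K₂ = F̄^L (F̄^L - F₂^L) / f₀², the binary-input example can be checked numerically; the equiprobable {0, 1} input distribution is our assumption:

```python
F_L_mean = 0.5            # mean activity of equiprobable binary {0,1} inputs (assumed)
var_L = 0.25              # their variance
f0_sq = var_L             # chosen so that Q_ii = var_L / f0^2 = 1
F2_L = 1.2 * F_L_mean     # the paper's choice F2^L = mean(F^L) * 1.2

# Equation (6): K2 = mean(F^L) * (mean(F^L) - F2^L) / f0^2
K2 = F_L_mean * (F_L_mean - F2_L) / f0_sq
print(K2)  # approximately -0.2: negative, magnitude of order 0.2
```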
Figure 1: Graph of morphological regions for the C layer (the K_1-K_2 plane).

There are approximately 3000 inner hair cells in the cochlea, each of which fans out to an average of 10 neurons (which sets our box size p = 10). These neurons take input from only one hair cell. The anteroventral cochlear nucleus takes input from this layer of cells, with a fan-in N_B ≈ 50 (c.f. the value of N_B = 1000 in Linsker (1986a)). The assumption is made that the three sections of the cochlear nucleus each contain approximately the same number of cells. With this smaller number of connections, the correlation function for this layer is somewhat coarser, and does not follow the theoretical curve for the continuum limit so well. In addition, the on-center cells found in the posteroventral cochlear nucleus and the dorsal nucleus have centres with a tuning curve response Q of about 2.5, which corresponds to about 2000 B layer cells. If it is assumed that the surround of the cell is half the width of the core, then there is a total of N_C ≈ 3000 neurons. Simulations here use N_C = 100, which is a realistic number of connections in the context of a one-dimensional network. In general, the arbor radius increases as layers become closer to the cortex. From Linsker, r_C/r_B = 3; r_B is therefore equal to 1000. This yields the average number of connections to a given B cell from a particular A box being approximately unity, which agrees well with the condition expressed by Linsker. Using the expression above, q̄ can be calculated as approximately 1.5 × 10⁻³. This value is certainly insignificant with respect to the value of K_2 = 0.2 quoted earlier, and therefore any effects due to the summation term in equation 3 can be ignored in the calculation of c̄ for this system.
5 SIMULATION RESULTS A network was trained using the connectivity stated above to give various values of c̄ with K_2 = 0.2. To obtain an idea of the total number of presentations that were required to train the network, without any artifacts that might be produced as a result of batch training, the original network equations were used. In all of these simulations, R_a and F_1^M were set to 0 so that the value of K_1 could be easily controlled. The findings were that the maximum value of k_b was about 10⁻³, which required 2.5 million pattern presentations to mature the network. With this value, on-center cells with an average weight value less than about 0.3 would not mature. However, as the value of k_b was decreased (keeping K_1 constant), the value of c̄ could be made lower, at the expense of more pattern presentations. The figures obtained for the maturation of feature sensitive cells are extremely biologically realistic in the light of the number of pattern presentations available to an average mammal. For example, the foetal cat has sufficient time for about 25 million presentations (assuming 10 presentations per second).

6 CONCLUSION We have shown that the class of network developed by Linsker is extendable to the auditory system, where the number and density of synapses is considerably smaller than in the visual case. It has also been shown that the time for layer maturation by this method is sufficiently short even for mammals with a relatively short gestation period, and therefore should also be sufficient in mammals with longer foetal development times. We conclude that the model is therefore a good representation of feature detector development in the pre-natal mammal.

References
Grossberg S. (1976) - On the Development of Feature Detectors in the Visual Cortex with Applications to Learning and Reaction Diffusion Systems, Biological Cybernetics, 21, 145 - 159
Grossberg S.
(1976) - Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors, Biological Cybernetics, 23, 121 - 134
Hubel D. H. and Wiesel T. N. (1961) - Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex, Journal of Physiology, 160, 106 - 154
Kalil R. E. (1989) - Synapse Formation in the Developing Brain, Scientific American, December 1989, 38 - 45
Klinke R. (1986) - Physiology of Hearing. In Schmidt R. F. (ed.), Fundamentals of Sensory Physiology, 199 - 223
MacKay D. J. C. and Miller K. D. (1990) - Analysis of Linsker's Simulations of Hebbian Rules, Neural Computation, 2, 173 - 187
von der Malsburg C. (1979) - Development of Ocularity Domains and Growth Behaviour of Axon Terminals, Biological Cybernetics, 32, 49 - 62
Linsker R. (1986a) - From Basic Network Principles To Neural Architecture: Emergence of Spatial-Opponent Cells, Proceedings of the National Academy of Sciences (USA), 83, 7508 - 7512
Linsker R. (1986b) - From Basic Network Principles To Neural Architecture: Emergence of Orientation-Selective Cells, Proceedings of the National Academy of Sciences (USA), 83, 8390 - 8394
Linsker R. (1986c) - From Basic Network Principles To Neural Architecture: Emergence of Orientation-Columns, Proceedings of the National Academy of Sciences (USA), 83, 8779 - 8783
Nass M. M. and Cooper L. N. (1975) - A Theory for the Development of Feature Detecting Cells in the Visual Cortex, Biological Cybernetics, 19, 1 - 18
Obermayer K., Ritter H. and Schulten K. (1990) - Development and Spatial Structure of Cortical Feature Maps: A Model Study, NIPS, 3, 11 - 17
Sloman A. (1989) - On Designing a Visual System (Towards a Gibsonian Computational Model of Vision), Journal of Experimental and Theoretical Artificial Intelligence, 1, 289 - 337
Tanaka S.
(1990) - Interaction among Ocularity, Retinotopy and On-Center/Off-Center Pathways During Development, NIPS, 3, 18 - 25
Rational Parametrizations of Neural Networks Uwe Helmke Department of Mathematics University of Regensburg Regensburg 8400 Germany Robert C. Williamson Department of Systems Engineering Australian National University Canberra 2601 Australia

Abstract A connection is drawn between rational functions, the realization theory of dynamical systems, and feedforward neural networks. This allows us to parametrize single hidden layer scalar neural networks with (almost) arbitrary analytic activation functions in terms of strictly proper rational functions. Hence, we can solve the uniqueness of parametrization problem for such networks.

1 INTRODUCTION Nonlinearly parametrized representations of functions φ: ℝ → ℝ of the form

(1.1)  φ(x) = Σ_{i=1}^n c_i σ(x - a_i),  x ∈ ℝ,

have attracted considerable attention recently in the neural network literature. Here σ: ℝ → ℝ is typically a sigmoidal function such as

(1.2)  σ(x) = (1 + e^{-x})^{-1},

but other choices than (1.2) are possible and of interest. Sometimes more complex representations such as

(1.3)  φ(x) = Σ_{i=1}^n c_i σ(b_i x - a_i)

or even compositions of these are considered. The purpose of this paper is to explore some parametrization issues regarding (1.1) and in particular to show the close connection these representations have with the standard system-theoretic realization theory for rational functions. We show how to define a generalization of (1.1) parametrized by (A, b, c), where A is a matrix over a field, and b and c are vectors. (This is made more precise below.) The parametrization involves the (A, b, c) being used to define a rational function. The generalized σ-representation is then defined in terms of the rational function. This connection allows us to use results available for rational functions in the study of neural-network representations such as (1.1). It will also lead to an understanding of the geometry of the space of functions.
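Representation (1.1) is cheap to evaluate; a minimal sketch with the standard sigmoid (1.2) and arbitrarily chosen coefficients:

```python
import numpy as np

def sigma(x):
    """Standard sigmoid (1.2): sigma(x) = (1 + e^{-x})^{-1}."""
    return 1.0 / (1.0 + np.exp(-x))

def phi(x, c, a):
    """phi(x) = sum_i c_i * sigma(x - a_i), the representation (1.1)."""
    x = np.asarray(x, dtype=float)
    return sum(ci * sigma(x - ai) for ci, ai in zip(c, a))

# Illustrative coefficients (not from the paper).
c = [1.0, -2.0, 0.5]
a = [0.0, 1.0, -1.0]
print(phi(0.0, c, a))  # evaluate at a single point
```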
One of the main contributions of the paper is to show how in general neural network representations are related to rational functions. In this summary all proofs have been omitted. A complete version of the paper is available from the second author.

2 REALIZATIONS RELATIVE TO A FUNCTION In this section we explore the relationship between sigmoidal representations of real analytic functions φ: I → ℝ defined on an interval I ⊂ ℝ, real rational functions defined on the complex plane ℂ, and the well established realization theory for linear dynamical systems

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) + du(t).

For standard textbooks on systems theory and realization theory we refer to [5, 7]. Let K denote either the field ℝ of real numbers or the field ℂ of complex numbers. Let Δ ⊂ ℂ be an open and simply connected subset of the complex plane and let σ: Δ → ℂ be an analytic function defined on Δ. For example, σ may be obtained by an analytic continuation of some sigmoidal function σ: ℝ → ℝ into its domain of holomorphy in the complex plane. Let T: V → V be a linear operator on a finite-dimensional K-vector space V such that T has all its eigenvalues in Δ. Let Γ ⊂ Δ be a simple closed curve, oriented in the counter-clockwise direction, enclosing all the eigenvalues of T in its interior. More generally, Γ may consist of a finite number of simple closed curves Γ_k with interiors Δ'_k such that the union of the domains Δ'_k contains all the eigenvalues of T. Then the matrix-valued function σ(T) is defined as the contour integral [8, p. 44]

(2.1)  σ(T) := (1/2πi) ∮_Γ σ(z) (zI - T)⁻¹ dz.

Note that for each linear operator T: V → V, σ(T): V → V is again a linear operator on V. If we now make the substitution T := xI + A for x ∈ ℂ and A: V → V K-linear, then

σ(xI + A) = (1/2πi) ∮_Γ σ(z) ((z - x)I - A)⁻¹ dz

becomes a function of the complex variable x, at least as long as Γ contains all the eigenvalues of xI + A.
Using the change of variables ξ := z - x we obtain

(2.2)  σ(xI + A) = (1/2πi) ∮_{Γ'} σ(x + ξ) (ξI - A)⁻¹ dξ,

where Γ' = Γ - x ⊂ Δ encircles all the eigenvalues of A. Given an arbitrary vector b ∈ V and a linear functional c: V → K we achieve the representation

(2.3)  c σ(xI + A) b = (1/2πi) ∮_Γ σ(x + ξ) c (ξI - A)⁻¹ b dξ.

Note that in (2.3) the simple closed curve Γ ⊂ ℂ is arbitrary, as long as it satisfies the two conditions

(2.4)  Γ encircles all the eigenvalues of A,
(2.5)  x + Γ = {x + ξ | ξ ∈ Γ} ⊂ Δ.

Let φ: I → ℝ be a real analytic function in a single variable x ∈ I, defined on an interval I ⊂ ℝ.

Definition 2.1 A quadruple (A, b, c, d) is called a finite-dimensional σ-realization of φ: I → ℝ over a field of constants K if for all x ∈ I

(2.6)  φ(x) = c σ(xI + A) b + d

holds, where the right hand side is given by (2.3) and Γ is assumed to satisfy the conditions (2.4)-(2.5). Here d ∈ K, b ∈ V, and A: V → V, c: V → K are K-linear maps and V is a finite-dimensional K-vector space.

Definition 2.2 The dimension (or degree) of a σ-realization is dim_K V. The σ-degree of φ, denoted δ_σ(φ), is the minimal dimension of all σ-realizations of φ. A minimal σ-realization is a σ-realization of minimal dimension δ_σ(φ).

σ-realizations are a straightforward extension of the system-theoretic notion of a realization of a transfer function. In this paper we will address the following specific questions concerning σ-realizations.

Q1 What are the existence and uniqueness properties of σ-realizations?
Q2 How can one characterize minimal σ-realizations?
Q3 How can one compute δ_σ(φ)?

3 EXISTENCE OF σ-REALIZATIONS We now consider the question of existence of σ-realizations. To set the stage, we consider the systems theory case σ(x) = x⁻¹ first. Assume we are given a formal power series

(3.1)  φ(x) = Σ_{i=0}^N (φ_i/i!) x^i,  N ≤ ∞,

and that (A, b, c) is a σ-realization in the sense of definition 2.1.
The Taylor expansion of c(xI + A)⁻¹b at 0 is (for A nonsingular)

(3.2)  c(xI + A)⁻¹b = Σ_{i=0}^∞ (-1)^i c A^{-(i+1)} b x^i.

Thus

(3.3)  φ_i/i! = (-1)^i c A^{-(i+1)} b,  i = 0, …, N,

if and only if the expansions of (3.1) and (3.2) coincide up to order N. Observe [7] that φ(x) = c(xI + A)⁻¹b with dim V < ∞ if and only if φ(x) is rational with φ(∞) = 0. The possibility of solving (3.3) is now easily seen as follows. Let V = ℝ^{N+1} = Map({0, …, N}, ℝ) be the finite or infinite (N + 1)-fold product space of ℝ. (Here Map(X, Y) denotes the set of all maps from X to Y.) If N is finite let

(3.4)  A⁻¹ ∈ ℝ^{(N+1)×(N+1)} be minus the down-shift matrix (-1 on the first subdiagonal, zeros elsewhere),  b = (1, 0, …, 0)ᵀ ∈ V,  c = (0, φ_0, φ_1, φ_2/2!, …, φ_{N-1}/(N-1)!).

For N = ∞ we take A⁻¹: ℝ^ℕ → ℝ^ℕ as a shift operator,

(3.5)  A⁻¹: (x_0, x_1, …) ↦ -(0, x_0, x_1, …),

with b = (1, 0, …) and c = (0, φ_0, φ_1, φ_2/2!, …). We then have:

Lemma 3.1 Let σ(x) = Σ_i (σ_i/i!) x^i be analytic at x = 0 and let (A, b, c) be a σ-realization of the formal power series φ(x) = Σ_{i=0}^N (φ_i/i!) x^i, N ≤ ∞ (i.e. matching of the first N + 1 derivatives of φ(x) and c σ(xI + A) b at x = 0). Then

(3.6)  φ_i = c σ^{(i)}(A) b  for i = 0, …, N.

Observe that for σ(x) = x⁻¹ we have σ^{(i)}(A) = (-1)^i i! A^{-(i+1)} as before. The existence part of the realization question Q1 can now be restated as

Q4 Given σ(x) := Σ_{i=0}^∞ (σ_i/i!) x^i and a sequence of real numbers (φ_0, …, φ_N), does there exist an (A, b, c) with

(3.7)  φ_i = c σ^{(i)}(A) b,  i = 0, …, N?

Thus question Q1 is essentially a Loewner interpolation question [1, 3]. Let γ_ℓ = c A^ℓ b, ℓ ∈ ℕ_0, and let

(3.8)  F = (σ_{i+j})_{i,j=0}^∞,

the infinite Hankel matrix with σ_0, σ_1, σ_2, … along its anti-diagonals. Write

(3.9)  [φ] = (φ_0, φ_1, φ_2, …)ᵀ  and  [γ] = (γ_0, γ_1/1!, γ_2/2!, γ_3/3!, …)ᵀ.

Then (3.6) (for N = ∞) can formally be written as

(3.10)  [φ] = F · [γ].

Of course, any meaningful interpretation of (3.10) requires that the infinite sums Σ_{j=0}^∞ σ_{i+j} γ_j/j!, i ∈ ℕ_0, exist. This happens, for example, if Σ_{j=0}^∞ σ_{i+j}² < ∞, i ∈ ℕ_0, and Σ_{j=0}^∞ (γ_j/j!)² < ∞ exist.
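The factorisation (3.10) can be checked on a small example. Take σ(x) = eˣ (all Taylor coefficients σ_i = 1) and a nilpotent 2×2 A, so that γ_j = cAʲb vanishes for j ≥ 2 and φ_i = c σ^{(i)}(A) b = c e^A b = γ_0 + γ_1 exactly; the truncated product F·[γ] then reproduces every φ_i. The specific matrices are our choices for illustration:

```python
import math
import numpy as np

# sigma(x) = e^x: all Taylor coefficients sigma_i = 1, and sigma^{(i)}(A) = e^A.
A = np.array([[0.0, 2.0], [0.0, 0.0]])   # nilpotent: A @ A = 0, so e^A = I + A
b = np.array([1.0, 3.0])
c = np.array([1.0, -1.0])

gamma = [c @ np.linalg.matrix_power(A, j) @ b for j in range(2)]  # gamma_j = 0 for j >= 2

# phi_i = c sigma^{(i)}(A) b = c (I + A) b, independent of i for sigma = exp
phi = c @ (np.eye(2) + A) @ b

# Equation (3.10): phi_i = sum_j sigma_{i+j} * gamma_j / j!, with every sigma_{i+j} = 1
lhs = sum(g / math.factorial(j) for j, g in enumerate(gamma))
print(phi, lhs)  # 4.0 4.0
```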
We have already seen that every finite or infinite sequence [γ] has a realization (A, b, c). Thus we obtain

Corollary 3.2 A function φ(x) admits a σ-realization if and only if [φ] ∈ image(F).

Corollary 3.3 Let H = (γ_{i+j})_{i,j=0}^∞. There exists a finite-dimensional σ-realization of φ(x) if and only if [φ] = F[γ] with rank H < ∞. In this case δ_σ(φ) = rank H.

4 UNIQUENESS OF σ-REALIZATIONS In this section we consider the uniqueness of the representation (2.3).

Definition 4.1 (c.f. [2]) A system {g_1, …, g_n} of continuous functions g_i: I → ℝ, defined on an interval I ⊂ ℝ, is said to satisfy a Haar* condition of order n on I if g_1, …, g_n are linearly independent, i.e. for every c_1, …, c_n ∈ ℝ with Σ_{i=1}^n c_i g_i(x) = 0 for all x ∈ I, it follows that c_1 = … = c_n = 0.

Remark The Haar* condition is implied by the stronger classical Haar condition that

det (g_i(x_j))_{i,j=1}^n ≠ 0

for all distinct (x_j)_{j=1}^n in I. Equivalently, if Σ_{i=1}^n c_i g_i(x) has n distinct roots in I, then c_1 = … = c_n = 0.

Definition 4.2 A subset A of ℂ is called self-conjugate if a ∈ A implies ā ∈ A.

Let σ: ℝ → ℝ be a continuous function and define σ_{z_i}^{(j)}(x) := σ^{(j)}(x + z_i). Let

κ := (κ_1, …, κ_m),  where Σ_{j=1}^m κ_j = n,  κ_j ∈ ℕ,  κ_j ≥ 1,  j = 1, …, m,

denote a combination of n of size m. For a given combination κ = (κ_1, …, κ_m) of n, let I := {1, …, m} and let J_i := {1, …, κ_i}. Let Z_m := {z_1, …, z_m} and let

(4.1)  σ(κ, Z_m) := {σ_{z_i}^{(j-1)} : i ∈ I, j ∈ J_i}.

Definition 4.3 If for all m ≤ n, for all combinations κ = (κ_1, …, κ_m) of n of size m, and for any self-conjugate set Z_m of distinct points, σ(κ, Z_m) satisfies a Haar* condition of order n, then σ is said to be Haar generating of order n.

Theorem 4.4 (Uniqueness) Let σ: ℝ → ℝ be Haar generating of order at least 2n on I and let (A, b, c) and (Ā, b̄, c̄) be minimal σ-realizations of order n of functions φ and φ̄ respectively.
Then the following equivalence holds:

(4.2)  c σ(xI + A) b = c̄ σ(xI + Ā) b̄  ∀x ∈ I  ⟺  c(ξI - A)⁻¹b = c̄(ξI - Ā)⁻¹b̄  ∀ξ ∈ ℂ.

Conversely, if (4.2) holds for almost all order n triples (A, b, c), (Ā, b̄, c̄), then σ: ℝ → ℝ is Haar generating on I of order ≥ n.

The following result gives examples of activation functions σ: ℝ → ℝ which are Haar generating.

Lemma 4.5 Let d ∈ ℕ_0. Then
1) The function σ(x) = x^{-d} is Haar generating of arbitrary order.
2) The monomial σ(x) = x^d is Haar generating of order d + 1.
3) The function e^{-x²} is Haar generating of arbitrary order.

Remark A simple example of a σ which is not Haar generating of order ≥ 2 is σ(x) = e^x. In fact, in this case σ(x + z_j) = c_j σ(x + z_1) for c_j = e^{z_j - z_1}, j = 2, …, n.

Remark The function σ(x) = (1 + e^{-x})⁻¹ is not Haar generating of any order ≥ 2. By the periodicity of the complex exponential function, σ(x + 2πi) = σ(x - 2πi), i = √-1, for all x. Thus the Haar* condition fails for Z_2 = {2πi, -2πi}. In particular, the above uniqueness result fails for the standard sigmoid case. In order to cover this case we need a further definition.

Definition 4.6 Let Ω ⊂ ℂ be a nonempty self-conjugate subset of ℂ. A function σ: ℝ → ℝ is said to be Haar generating of order n on Ω if for all m ≤ n, for all combinations κ = (κ_1, …, κ_m) of n of size m, and for any self-conjugate subset Z_m ⊂ Ω of distinct points of Ω, σ(κ, Z_m) satisfies a Haar* condition of order n. Of course for Ω = ℂ, this definition coincides with definition 4.3.
Then for any two minimal u-realizations (A, b, c) and (A, b, c) of orders at most n with spect A, spect A E n the following equivalence holds: cu(xI + A)~ = cu(xI + A)b 'Vx E 1I c(~I - A)-lb = c(~I - A)-Ii; 'Ve E~. (4.3) Lemma 4.8 Let 0 := {z E C: I~zl < 7r}. Then the standard sigmoid function u(x) = (1 + e-X)-l is Haar generating on 0 of arbitrary order. 5 MAIN RESULT As a consequence of the uniqueness theorems 4.4 and 4.7 we can now state our main result on the existence of minimal u-realizations of a function ¢(x). It extends a parallel result for standard transfer function realizations, where u( x) = x-I. Theorem 5.1 (Realization) Let n c C be a self-conjugate subset, contained in the domain of holomorphy of a real meromorphic function u: ~ -+ ~. Suppose u is Haar generating on n of order at least 2n and assume ¢(x) has a finite dimensional realization (A, b, c) of dimension at most n such that A has all its eigenvalues in O. 1. There exists a minimal u-realization (AI, bl , cd of ¢(x) of degree 6q (¢) ::; dim(A, b, c). Furthermore, there exists an invertible matrix S such that (5.1) SAS- I = [~l ~~ 1 ' Sb = [ be: 1 ' cS-1 = [CI, C2]. 2. If (AI, bt, cd and (A~, b~, cD are minimal u-realizations of ¢( x) such that the eigenvalues of Al and A~ are contained in 0, then there exists a unique invertible matrix S such that (5.2) 3. A u-realization (A, b, c) is minimal if and only if(A, b, c) is controllable and observable; i.e. if and only if (A, b, c) satisfies the generic rank conditions rank(b, Ab, ... ,An-Ib) = n, rank [ c~ 1 = n cAn-1 for A E ocn xn , bE ocn, cT E ocn . Remark The use of the terms "observable" and "controllable" is solely for formal correspondence with standard systems theory. There are no dynamical systems actually under consideration here. 
Remark Note that for any σ-realization (A, b, c) of the form

A = [A₁₁ A₁₂; 0 A₂₂], b = [b₁; 0], c = [c₁, c₂],

we have σ(A) = [σ(A₁₁) *; 0 σ(A₂₂)] and thus cσ(xI + A)b = c₁σ(xI + A₁₁)b₁. Thus transformations of the above kind always reduce the dimension of a σ-realization.

Corollary 5.2 ([9]) Let σ(x) = (1 + e⁻ˣ)⁻¹ and let φ(x) = Σᵢ₌₁ⁿ cᵢσ(x + aᵢ) = Σᵢ₌₁ⁿ cᵢ′σ(x + aᵢ′) be two minimal-length σ-representations with |ℑaᵢ| < π, |ℑaᵢ′| < π, i = 1, …, n. Then (aᵢ′, cᵢ′) = (a_ρ(i), c_ρ(i)) for a unique permutation ρ: {1, …, n} → {1, …, n}. In particular, minimal-length representations (1.1) with real coefficients aᵢ and cᵢ are unique up to a permutation of the summands.

6 CONCLUSIONS

We have drawn a connection between the realization theory for linear dynamical systems and neural network representations. There are further connections (not discussed in this summary) between representations of the form (1.3) and rational functions of two variables. There are other questions concerning diagonalizable realizations and Jordan forms. Details are given in the full-length version of this paper. Open questions include the problem of partial realizations [4,6].¹

REFERENCES

[1] A. C. Antoulas and B. D. O. Anderson, On the Scalar Rational Interpolation Problem, IMA Journal of Mathematical Control and Information, 3 (1986), pp. 61-88.
[2] E. W. Cheney, Introduction to Approximation Theory, Chelsea Publishing Company, New York, 1982.
[3] W. F. Donoghue, Jr., Monotone Matrix Functions and Analytic Continuation, Springer-Verlag, Berlin, 1974.
[4] W. B. Gragg and A. Lindquist, On the Partial Realization Problem, Linear Algebra and its Applications, 50 (1983), pp. 277-319.
[5] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, 1980.
[6] R. E. Kalman, On Partial Realizations, Transfer Functions, and Canonical Forms, Acta Polytechnica Scandinavica, 31 (1979), pp. 9-32.
[7] R. E. Kalman, P. L. Falb and M. A.
Arbib, Topics in Mathematical System Theory, McGraw-Hill, New York, 1969.
[8] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1966.
[9] R. C. Williamson and U. Helmke, Existence and Uniqueness Results for Neural Network Approximations, to appear, IEEE Transactions on Neural Networks, 1993.

¹This work was supported by the Australian Research Council, the Australian Telecommunications and Electronics Research Board, and the Boeing Commercial Aircraft Company (thanks to John Moore). Thanks to Eduardo Sontag for helpful comments also.
Mapping Between Neural and Physical Activities of the Lobster Gastric Mill

Kenji Doya, Mary E. T. Boyle, Allen I. Selverston
Department of Biology, University of California, San Diego, La Jolla, CA 92093-0322

Abstract

A computer model of the musculoskeletal system of the lobster gastric mill was constructed in order to provide a behavioral interpretation of the rhythmic patterns obtained from the isolated stomatogastric ganglion. The model was based on Hill's muscle model and a quasi-static approximation of the skeletal dynamics, and could simulate the change of chewing patterns by the effect of neuromodulators.

1 THE STOMATOGASTRIC NERVOUS SYSTEM

The crustacean stomatogastric ganglion (STG) is a circuit of 30 neurons that controls rhythmic movement of the foregut. It is one of the best elucidated neural circuits. All the neurons and the synaptic connections between them are identified, and the effects of neuromodulators on the oscillation patterns and neuronal characteristics have been extensively studied (Selverston and Moulins 1987, Harris-Warrick et al. 1992). However, the STG's function as a controller of ingestive behavior is not fully understood, in part because of our poor understanding of the controlled object: the musculoskeletal dynamics of the foregut. We constructed a mathematical model of the gastric mill, three teeth in the stomach, in order to predict motor patterns from the neural oscillation patterns which are recorded from the isolated ganglion. The animal we used was the Californian spiny lobster (Panulirus interruptus), which is available locally.

Figure 1: The lobster stomatogastric system. (a) Cross section of the foregut (objects are not to scale). (b) The gastric circuit.
The stomatogastric nervous system controls four parts of the foregut: the esophagus, cardiac sac (stomach), gastric mill, and pylorus (entrance to the intestine) (Figure 1.a). The gastric mill is composed of one medial tooth and two lateral teeth. These grind large chunks of food (mollusks, algae, crabs, sea urchins, etc.) into smaller pieces and mix them with digestive fluids. The chewing period ranges from 5 to 10 seconds. Several different chewing patterns have been analyzed using an endoscope (Heinzel 1988a, Boyle et al. 1990). Figure 2 shows two of the typical chewing patterns: "cut and grind" and "cut and squeeze".

The STG is located in the ophthalmic artery, which runs from the heart to the brain over the dorsal surface of the stomach. When it is taken out with two other ganglia (the esophageal ganglion and the commissural ganglion), it can still generate rhythmic motor outputs. This isolated preparation is ideal for studying the mechanism of rhythmic pattern generation by a neural circuit. From pairwise stimulus and response of the neurons, the map of synaptic connections has been established. Figure 1(b) shows a subset of the STG circuit which controls the motion of the gastric mill. It consists of 11 neurons of 7 types. GM and DG neurons control the medial tooth, and LPG, MG, and LG neurons control the lateral teeth. A question of interest is how this simple neural network is utilized to control the various movement patterns of the gastric mill, which is a fairly complex musculoskeletal system. The oscillation pattern of the isolated ganglion can be modulated by perfusing it with several neuromodulators, e.g. proctolin, octopamine (Heinzel and Selverston 1988), CCK (Turrigiano 1990), and pilocarpine (Elson and Selverston 1992). However, the behavioral interpretation of these different activity patterns is not well understood. The gastric mill is composed of 7 ossicles (small bones) which are loosely suspended by more than 20 muscles and connective tissues.
This makes it very difficult to intuitively estimate the effect of a change of neural firing patterns in terms of the teeth movement. Therefore, we decided to construct a quantitative model of the musculoskeletal system of the gastric mill.

Figure 2: Typical chewing patterns of the gastric mill. (a) Cut and grind. (b) Cut and squeeze.

2 PHYSIOLOGICAL EXPERIMENTS

In order to design a model and determine its parameters, we performed the anatomical and physiological experiments described below.

Anatomical experiments: The carapace and the skin above the gastric mill were removed to expose a dorsal view of the ossicles and the muscles which control the gastric mill. Usually, the gastric mill was quiescent without any stimuli. The positions of the ossicles and the lengths of the muscles at the resting state were measured. After the behavioral experiments mentioned below, the gastric mill was taken out and the size of the ossicles and the positions of the attachment points of the muscles were measured.

Behavioral experiments: With the carapace removed and the gastric mill exposed, one video camera was used to record the movement of the ossicles and the muscles. Another video camera attached to a flexible endoscope was used to record the motion of the teeth from inside the stomach. In the resting state, muscles were stimulated by a wire electrode to determine the behavioral effects. In order to induce chewing, neuromodulators such as proctolin and pilocarpine were injected into the artery in which the STG is located.

Single muscle experiments: The gm1, the largest of the gastric mill muscles, was used to estimate the parameters of the muscle model mentioned below. It was removed without disrupting the carapace or ossicle attachment points and fixed to a tension measurement apparatus. The nerve fiber aln that innervates gm1 was stimulated using a suction electrode.
The time course of isometric tension was recorded at different muscle lengths and stimulus frequencies. The parameters obtained from the gm1 muscle experiment were applied to other muscles by considering their relative length and thickness.

Figure 3: The Hill-based muscle model. (a) Contractile element (CE), serial elasticity (SE), and parallel elasticity (PE). (b) Force-velocity curve. (c) Isometric force. (d) Serial elasticity.

3 MODELING THE MUSCULOSKELETAL SYSTEM

3.1 MUSCULAR DYNAMICS

There are many ways to model muscles. In the simplest models, the tension or the length of a muscle is regarded as an instantaneous function of the spike frequency of the motor nerve. In some engineering approaches, a muscle is considered as a spring whose resting length and stiffness are modulated by the nervous input (Hogan 1984). Since these models are a linear static approximation of the nonlinear dynamical characteristics of muscles, their parameters must be changed to simulate different motor tasks (Winters 1990). Molecular models (Zahalak 1990), which are based on the binding mechanisms of actin and myosin fibers, can explain the widest range of muscular characteristics found in physiological experiments. However, these complex models have many parameters which are difficult to estimate.

The model we employed was a nonlinear macroscopic model based on A. V. Hill's formulation (Hill 1938, Winters 1990). The model is composed of a contractile element (CE), a serial elasticity (SE), and a parallel elasticity (PE) (Figure 3.a). This model is based on empirical data about the nonlinear characteristics of muscles, and its parameters can be determined by physiological experiments. The output force f_c of the CE is a function of its length l_c and its contraction speed v_c = −dl_c/dt (Figure 3.b), with separate branches for v_c ≥ 0 (contraction) and v_c < 0 (extension), (1) where f_0 is the isometric output force (at v_c = 0) and v_0 is the maximal contraction velocity. The parameters of the f-v curve were a = 0.25 and β = 0.3. The isometric force f_0 was given as a function of the CE length l_c and the activation level a(t) of
The isometric force 10 was given as the function of CE length Ie and the activation level a(t) of Mapping Between Neural and Physical Activities of the Lobster Gastric Mill 917 the muscle (Figure 3.c) fo(l"a(t)) = { ~m.Z!~, (f.;)' (f.; - r) a(t) where leo is the resting length of the CE and 'Y = 1.5. The SE was modeled as an exponential spring (Figure 3.d) o < Ie < 'Y, otherwise, I${I$) = { okl(exp[k2l'~~'Q] -1) 1$ ~ 1$0, 1$ < 1$0, (2) (3) where 1$ is the output force, 1$0 is the resting length, and kl and k2 are stiffness parameters. The PE was supposed to have the same exponential elasticity (3). In the simulations, the CE length Ie was taken as the state variable. The total muscle length 1m = Ie + 1$ is given by the skeletal model and the muscle activation a(t) is given by the the activation dynamics described below. The SE length is given from 1$ = 1m - Ie and then the output force I${I$) = Ie + Ip = 1m is given by (3). The contraction velocity Ve = -~ is derived from the inverse of (1) at Ie = I${I$) - Ip(le) and then integrated to update the CE length Ie. The activation level a(t) of a muscle is determined by the free calcium concentration in muscle fibers. Since we don't have enough data about the calcium dynamics in muscle cells, the activation dynamics was crudely approximated by the following equations. da(t) Ta-;{t = -a(t) + e(t), and de(t) Te~ = -e(t) + n(t)2, (4) where n(t) is the normalized firing frequency of the nerve input and e(t) is the electric activity of the muscle fibers. The nonlinearity in the nervous input represents strong facilitation of the postsynaptic potential (Govind and Lingle 1987). We incorporated seven of the gastric mill muscles: gml, gm2, gm3a, gm3c, gm4, gm6b, and gm9a (Maynard and Dando 1974). The muscles gml, gm2, gm3a, and gm3c are extrinsic muscles that have one end attached to the carapace and gm4, gm6b, and gm9a are intrinsic muscles both ends of which are attached of the ossicles. 
Three connective tissues were also incorporated and regarded as muscles without contractile elements. See Figure 4 for the attachment of these muscles and tissues to the ossicles.

3.2 SKELETAL DYNAMICS

The medial tooth was modeled as three rigid pieces P₁, P₂, and P₃. P₁ is the base of the medial tooth. P₂ is the main body of the medial tooth. P₃ forms the cusp and the V-shaped lever on the dorsal side. The lateral tooth was modeled as two rigid pieces P₄ and P₅. P₄ is an L-shaped plate with a cusp at the angle and is connected to P₃ at the dorsal end. P₅ is a rod that is connected to P₄ near the root of the cusp (Figure 4). We assumed that the motion is symmetric with respect to the midline. Therefore the motion of the medial tooth was two-dimensional, and only the left one of the two lateral teeth was considered.

Figure 4: The design of the gastric mill model. Ossicle P₁ stands for the ossicles I and II, P₂ for VII, P₃ for VI, P₄ for III, IV, and V, P₅ for XIV in the standard description by Maynard and Dando (1974).

The coordinate system was taken so that the x-axis points to the left, the y-axis backward, and the z-axis upward. The rotation angles of the ossicles around the x, y, and z axes were represented as θ, φ, and ψ respectively. The configuration of the ossicles was determined by a 10-dimensional vector

Θ = (y₀, z₀, θ₁, θ₂, θ₃, θ₄, φ₄, ψ₄, θ₅, φ₅), (5)

where (y₀, z₀) represents the position of the joint between P₁ and P₂ and (θ₁, θ₂, θ₃) represents the rotation angles of P₁, P₂, and P₃ in the y-z plane. The rotation angles of P₄ and P₅ were represented as (θ₄, φ₄, ψ₄) and (θ₅, φ₅) respectively. P₅ has only two degrees of rotational freedom since it is regarded as a rod.

We employed a quasi-static approximation: the configuration of the ossicles Θ was determined by the static balance of forces. Now let L_m and F_m be the vectors of the muscle lengths and forces.
Then the balance of the generalized forces in the Θ space (force for translation and torque for rotation) is given by

T_m(Θ, F_m) + T_e = 0, (6)

where T_m and T_e represent the generalized forces from the muscles and from external loads. The muscle force in the Θ space is given by

T_m(Θ, F_m) = J(Θ)ᵀ F_m, (7)

where J(Θ) = ∂L_m/∂Θ is the Jacobian matrix of the mapping Θ ↦ L_m determined by the ossicle kinematics and the muscle attachment. Since it is very difficult to obtain a closed-form solution of (6), we used the gradient descent equation

dΘ/dt = −ε(T_m(Θ, F_m) + T_e) = −ε(J(Θ)ᵀ F_m + T_e) (8)

to find the approximate solution Θ(t). This is equivalent to assuming a viscosity term ε⁻¹ dΘ/dt in the motion equation.

4 SIMULATION RESULTS

The musculoskeletal model is a 17th-order differential equation system and was integrated by the Runge-Kutta method with a time step of 1 ms. Figure 5 shows examples of motion patterns predicted by the model. The motoneuron output of spontaneous oscillation of the isolated ganglion was used in (a), and the output under the effect of proctolin was used in (b). It has been reported in previous behavioral studies (Heinzel 1988b) that a dose of proctolin typically evokes the "cut and grind" chewing pattern. The trajectory (b) predicted from the proctolin-induced rhythm has a larger forward movement of the medial tooth while the lateral teeth are closed, which qualitatively agrees with the behavioral data.

Figure 5: Chewing patterns predicted from oscillation patterns of the isolated STG. (a) Spontaneous pattern (t = 0, 2, 4, 6). (b) Proctolin-induced pattern (t = 0, 1.5, 3, 4.5).

5 DISCUSSION

The motor pattern generated by the model is considerably different from the chewing patterns observed in the intact animal using an endoscope. This is partly because of crude assumptions in model construction and errors in parameter estimation.
However, this difference may also be due to the lack of sensory feedback in the isolated preparation. The future subject of this project is to refine the model so that we can reliably predict the motion from the neural outputs, and to combine it with models of the gastric network (Rowat and Selverston, submitted) and sensory receptors. This will enable us to study how a biological control system integrates central pattern generation and sensory feedback.

Acknowledgements

We thank Mike Beauchamp for the gm1 muscle data. This work was supported by the grant from the Office of Naval Research N00014-91-J-1720.

References

Boyle, M. E. T., Turrigiano, G. G., and Selverston, A. I. 1990. An endoscopic analysis of gastric mill movements produced by the peptide cholecystokinin. Society for Neuroscience Abstracts 16, 724.
Elson, R. C. and Selverston, A. I. 1992. Mechanisms of gastric rhythm generation in the isolated stomatogastric ganglion of spiny lobsters: Bursting pacemaker potentials, synaptic interactions and muscarinic modulation. Journal of Neurophysiology 68, 890-907.
Govind, C. K. and Lingle, C. J. 1987. Neuromuscular organization and pharmacology. In Selverston, A. I. and Moulins, M., editors, The Crustacean Stomatogastric System, pages 31-48. Springer-Verlag, Berlin.
Harris-Warrick, R. M., Marder, E., Selverston, A. I., and Moulins, M. 1992. Dynamic Biological Networks: The Stomatogastric Nervous System. MIT Press, Cambridge, MA.
Heinzel, H. G. 1988a. Gastric mill activity in the lobster. I: Spontaneous modes of chewing. Journal of Neurophysiology 59, 528-550.
Heinzel, H. G. 1988b. Gastric mill activity in the lobster. II: Proctolin and octopamine initiate and modulate chewing. Journal of Neurophysiology 59, 551-565.
Heinzel, H. G. and Selverston, A. I. 1988. Gastric mill activity in the lobster. III: Effects of proctolin on the isolated central pattern generator. Journal of Neurophysiology 59, 566-585.
Hill, A. V. 1938.
The heat of shortening and the dynamic constants of muscle. Proceedings of the Royal Society of London, Series B 126, 136-195.
Hogan, N. 1984. Adaptive control of mechanical impedance by coactivation of antagonist muscles. IEEE Transactions on Automatic Control 29, 681-690.
Maynard, D. M. and Dando, M. R. 1974. The structure of the stomatogastric neuromuscular system in Callinectes sapidus, Homarus americanus and Panulirus argus (Decapoda Crustacea). Philosophical Transactions of the Royal Society of London, Biology 268, 161-220.
Rowat, P. F. and Selverston, A. I. Modeling the gastric mill central pattern generator of the lobster with a relaxation-oscillator network. Submitted.
Selverston, A. I. and Moulins, M. 1987. The Crustacean Stomatogastric System. Springer-Verlag, New York, NY.
Turrigiano, G. G. and Selverston, A. I. 1990. A cholecystokinin-like hormone activates a feeding-related neural circuit in lobster. Nature 344, 866-868.
Winters, J. M. 1990. Hill-based muscle models: A systems engineering perspective. In Winters, J. M. and Woo, S. L.-Y., editors, Multiple Muscle Systems: Biomechanics and Movement Organization, chapter 5, pages 69-93. Springer-Verlag, New York, NY.
Zahalak, G. I. 1990. Modeling muscle mechanics (and energetics). In Winters, J. M. and Woo, S. L.-Y., editors, Multiple Muscle Systems: Biomechanics and Movement Organization, chapter 1, pages 1-23. Springer-Verlag, New York, NY.
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Gert Cauwenberghs
California Institute of Technology
Mail-Code 128-95, Pasadena, CA 91125
E-mail: gert@cco.caltech.edu

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error. A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.

1 Background and Motivation

We address general optimization tasks that require finding a set of constant parameter values p_i that minimize a given error functional ε(p). For supervised learning, the error functional consists of some quantitative measure of the deviation between a desired state x^T and the actual state of a network x, resulting from an input y and the parameters p. In such a context the components of p consist of the connection strengths, thresholds and other adjustable parameters in the network. A typical specification for the error in learning a discrete set of pattern associations (y^(α), x^T(α)) for a steady-state network is the Mean Square Error (MSE)

ε(p) = Σ_α |x^T(α) − x^(α)|², (1)

and similarly, for learning a desired response (y(t), x^T(t)) in a dynamic network,

ε(p) = ∫ |x^T(t) − x(t)|² dt. (2)

For ε(p) to be uniquely defined in the latter dynamic case, initial conditions x(t_init) need to be specified.
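For concreteness, the two error functionals can be written out in code; the state dimensions and normalization below are illustrative assumptions, since only the qualitative form of (1) and (2) is fixed by the text:

```python
import numpy as np

def mse_static(x_targets, x_actuals):
    """Steady-state error (1): summed squared deviation between the
    desired and actual network states over a discrete set of patterns."""
    return sum(float(np.sum((np.asarray(xt) - np.asarray(x)) ** 2))
               for xt, x in zip(x_targets, x_actuals))

def mse_dynamic(x_target_t, x_actual_t, dt):
    """Dynamic error (2): squared deviation between desired and actual
    trajectories, integrated over time (rectangle rule)."""
    return dt * float(np.sum((x_target_t - x_actual_t) ** 2))

# Two patterns, each a 2-dimensional network state:
print(mse_static([[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.0], [0.0, 1.0]]))  # -> 0.25
```

Both return a single scalar in the parameters p, which is all the black-box methods discussed next require.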
A popular method for minimizing the error functional is steepest error descent (gradient descent) [1]-[6]:

Δp = −η ∂ε/∂p. (3)

Iteration of (3) leads asymptotically to a local minimum of ε(p), provided η is strictly positive and small. The computation of the gradient is often cumbersome, especially for time-dependent problems [2]-[5], and is even ill-posed for analog hardware learning systems that unavoidably contain unknown process impurities. This calls for error-descent methods that avoid calculation of the gradients and instead probe the dependence of the error on the parameters directly. Methods that use some degree of explicit internal information other than the adjustable parameters, such as Madaline III [6], which assumes a specific feedforward multi-perceptron network structure and requires access to internal nodes, are therefore excluded. Two typical methods which satisfy the above condition are illustrated below:

• Weight Perturbation [7], a simple sequential parameter perturbation technique. The method updates the individual parameters in sequence, by measuring the change in error resulting from a perturbation of a single parameter and adjusting that parameter accordingly. This technique effectively measures the components of the gradient sequentially, which for a complete knowledge of the gradient requires as many computation cycles as there are parameters in the system.

• Model-Free Distributed Learning [8], which is based on the "M.I.T." rule in adaptive control [9]. Inspired by analog hardware, the distributed algorithm makes use of time-varying perturbation signals π_i(t) supplied in parallel to the parameters p_i, and correlates these π_i(t) with the instantaneous network response ε(p + π) to form an incremental update Δp_i. Unfortunately, the distributed model-free algorithm does not support learning of dynamic features (2) in networks with delays, and the learning speed degrades sensibly with increasing number of parameters [8].
2 Stochastic Error-Descent: Formulation and Properties

The algorithm we investigate here combines both of the above methods, yielding a significant improvement in performance over both. Effectively, at every epoch the constructed algorithm decreases the error along a single randomly selected direction in the parameter space. Each such decrement is performed using a single synchronous parallel parameter perturbation per epoch. Let p̂ = p + π with parallel perturbations π_i selected from a random distribution. The perturbations π_i are assumed reasonably small, but not necessarily mutually orthogonal. For a given single random instance of the perturbation π, we update the parameters with the rule

Δp = −μ ε̂ π, (4)

where the scalar

ε̂ = ε(p̂) − ε(p) (5)

is the error contribution due to the perturbation π, and μ is a small strictly positive constant. Obviously, for a sequential activation of the π_i, the algorithm reduces to the weight perturbation method [7]. On the other hand, by omitting ε(p) in (5) the original distributed model-free method [8] is obtained. The subtraction of the unperturbed reference term ε(p) in (5) contributes a significant increase in speed over the original method. Intuitively, the incremental error ε̂ specified in (5) isolates the specific contribution due to the perturbation, which is obviously more relevant than the total error, which includes a bias ε(p) unrelated to the perturbation π. This bias necessitates stringent zero-mean and orthogonality conditions on the π_i and requires many perturbation cycles in order to effect a consistent decrease in the error [8].¹ An additional difference concerns the assumption on the dynamics of the perturbations π_i. By fixing the perturbation π during every epoch in the present method, the dynamics of the π_i no longer interfere with the time delays of the network, and dynamic optimization tasks such as (2) come within reach.
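The complete update rule (4)-(5) can be sketched in a few lines. The quadratic error surface, dimensions, and constants below are hypothetical test choices, not the paper's network:

```python
import numpy as np

def stochastic_error_descent(error, p0, mu=4.0, sigma=0.05, epochs=3000, seed=0):
    """One parallel perturbation per epoch, eqs. (4)-(5):
    pi is a synchronous binary perturbation (+/- sigma on every parameter),
    eps_hat = E(p + pi) - E(p) is the incremental error (5), and all
    parameters are updated in parallel as dp = -mu * eps_hat * pi (4).
    Only black-box evaluations of the error functional are used."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(epochs):
        pi = sigma * rng.choice([-1.0, 1.0], size=p.shape)  # perturbation
        eps_hat = error(p + pi) - error(p)                  # eq. (5)
        p -= mu * eps_hat * pi                              # eq. (4)
    return p

# Hypothetical quadratic error with minimum at p* = (1, -2, 3):
target = np.array([1.0, -2.0, 3.0])
error = lambda p: float(np.sum((p - target) ** 2))
p_final = stochastic_error_descent(error, np.zeros(3))
print(error(p_final))  # residual error close to zero
```

Note that each epoch costs exactly two error evaluations, ε(p + π) and ε(p), regardless of the number of parameters.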
The rather simple and intuitive structure (4) and (5) of the algorithm is somewhat reminiscent of related models for reinforcement learning, and likely finds parallels in other fields as well. Random-direction and line-search error-descent algorithms for trajectory learning have been suggested and analyzed by P. Baldi [12]. As a matter of coincidence, independent derivations of basically the same algorithm, but from different approaches, are presented in this volume as well [13],[14]. Rather than focusing on issues of originality, we proceed by analyzing the virtues and scaling properties of this method. We directly present the results below, and defer the formal derivations to the appendix.

2.1 The algorithm performs gradient descent on average, provided that the perturbations π_i are mutually uncorrelated with uniform auto-variance, that is, E(π_i π_j) = σ²δ_ij with σ the perturbation strength.

The effective gradient descent learning rate corresponding to (3) equals η_eff = μσ². Hence on average the learning trajectory follows the steepest path of error descent. The stochasticity of the parameter perturbations gives rise to fluctuations around the mean path of descent, injecting diffusion into the learning process. However, the individual fluctuations satisfy the following desirable regularity:

¹An interesting noise-injection variant on the model-free distributed learning paradigm of [8], presented in [10], avoids the bias due to the offset level ε(p) as well, by differentiating the perturbation and error signals prior to correlating them to construct the parameter increments. A complete demonstration of an analog VLSI system based on this approach is presented in this volume [11]. As a matter of fact, the modified noise-injection algorithm corresponds to a continuous-time version of the algorithm presented here, for networks and error functionals free of time-varying features.
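Property 2.1 can be verified numerically: with uncorrelated binary perturbations, averaging the raw increment ε̂π recovers σ² times the error gradient. The linear test functional below is a hypothetical choice, used because its gradient is known exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.01
g = np.array([2.0, -1.0, 0.5])      # known gradient of the linear error
error = lambda p: float(g @ p)      # linear error => eps_hat = g . pi exactly

# Average eps_hat * pi over many independent binary perturbations at p = 0:
trials = 20000
acc = np.zeros(3)
for _ in range(trials):
    pi = sigma * rng.choice([-1.0, 1.0], size=3)
    acc += (error(pi) - error(np.zeros(3))) * pi   # eq. (5) at p = 0
est = acc / (trials * sigma ** 2)
print(est)  # approaches g, since E(pi_i pi_j) = sigma^2 * delta_ij
```

The estimate converges to the gradient g at the usual Monte Carlo rate, confirming that the mean update (4) is −μσ² ∂ε/∂p.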
2.2 The error ε(p) always decreases under an update (4) for any π, provided that |π|² is "small" and μ is strictly positive and "small".

Therefore, the algorithm is guaranteed to converge towards local error minima just like gradient descent, as long as the perturbation vector π statistically explores all directions of the parameter space, provided the perturbation strength and learning rate are sufficiently small. This property holds only for methods which bypass the bias due to the offset error term ε(p) in the calculation of the updates, as is performed here by subtraction of the offset in (5). The guaranteed decrease in error of the update (4) under any small, single instance of the perturbation π removes the need for averaging multiple trials obtained by different instances of π in order to reduce turbulence in the learning dynamics. We intentionally omit any smoothing operation on the constructed increments (4) prior to effecting the updates Δp_i, unlike the estimation of the true gradient in [8],[10],[13] by essentially accumulating and averaging contributions (4) over a large set of random perturbations. Such averaging is unnecessary here (and in [13]), since each individual increment (4) contributes a decrease in error, and since the smoothing of the ragged downward trajectory on the error surface is effectively performed by the integration of the incremental updates (4) anyway. Furthermore, from a simple analysis it follows that such averaging is actually detrimental to the effective speed of convergence.² For a correct measure of the convergence speed of the algorithm relative to that of other methods, we studied the boundaries of learning stability regions specifying maximum learning rates for the different methods.
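The guaranteed single-update decrease of property 2.2 can also be probed directly: for small binary perturbations of a smooth error functional, a single non-accumulated increment (4) lowers the error at (essentially) every trial. The error functional below is a hypothetical smooth test case:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 1e-3
# Hypothetical smooth error functional (not the paper's network):
error = lambda p: float(np.sum(p ** 2) + np.sum(np.sin(p)))

p = rng.standard_normal(10)   # a fixed operating point in parameter space
trials = 1000
decreased = 0
for _ in range(trials):
    pi = sigma * rng.choice([-1.0, 1.0], size=p.shape)
    eps_hat = error(p + pi) - error(p)   # eq. (5)
    dp = -mu * eps_hat * pi              # eq. (4), evaluated but not accumulated
    decreased += error(p + dp) < error(p)
print(decreased, "of", trials, "single updates decreased the error")
```

To first order the change in error is −μ ε̂², which is negative for every π, so no averaging over perturbation instances is needed before applying an update.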
The analysis reveals the following scaling properties with respect to the size of the trained network, characterized by the number of adjustable parameters P:

2.3 The maximum attainable average speed of the algorithm is a factor P^(1/2) slower than that of pure gradient descent, as opposed to the maximum average speed of sequential weight perturbation, which is a factor P slower than gradient descent.

The reduction in speed of the algorithm vs. gradient descent by the square root of the number of parameters can be understood as well from an information-theoretical point of view using physical arguments. At each epoch, the stochastic algorithm applies perturbations in all P dimensions, injecting information into P different "channels". However, only scalar information about the global response of the network to the perturbations is available at the outside, through a single "channel". On average, such an algorithm can extract knowledge about the response of the network in at most P^(1/2) effective dimensions, where the upper limit is reached only if the perturbations are truly statistically independent, exploiting the full channel capacity. In the worst case the algorithm only retains scalar information through a single, low-bandwidth channel, which is e.g. the case for the sequential weight perturbation algorithm. Hence, the stochastic algorithm achieves a speed-up of a factor P^(1/2) over the technique of sequential weight perturbation, by using parallel statistically independent perturbations as opposed to serial single perturbations. The original model-free algorithm by Dembo and Kailath [8] does not achieve this P^(1/2)

²Sure enough, averaging say M instances of (4) for different random perturbations will improve the estimate of the gradient by decreasing its variance. However, the variance of the update Δp decreases by a factor of M, allowing an increase in learning rate by only a factor of M^(1/2), while to that purpose M network evaluations are required.
In terms of total computation efforts, the averaged method is hence a factor M^(1/2) slower.

speed-up over the sequential perturbation method (and may even do worse), partly because the information about the specific error contribution by the perturbations is contaminated by the constant error bias signal ε(p). Note that up to here the term "speed" was defined in terms of the number of epochs, which does not necessarily relate directly to the physical speed in terms of the total number of operations. An equally important factor in speed is the amount of computation involved per epoch to obtain values for the updates (3) and (4). For the stochastic algorithm, the most intensive part of the computation at every epoch is the evaluation of ε(p) for two instances of p in (5), which typically scales as O(P) for neural networks. The remaining operations relate to the generation of the random perturbations π_i and the calculation of the correlations in (4), scaling as O(P) as well. Hence, for an accurate comparison of the learning speed, the scaling of the computations involved in a single gradient descent step needs to be balanced against the computation effort by the stochastic method corresponding to an equivalent error descent rate, which combining both factors scales as O(P^(3/2)). An example where the scaling for this computation balances in favor of the stochastic error-descent method, due to the expensive calculation of the full gradient, will be demonstrated below for dynamic trajectory learning. More importantly, the intrinsic parallelism, fault tolerance and computational simplicity of the stochastic algorithm are especially attractive with hardware implementations in mind. The complexity of the computations can furthermore be reduced by picking a binary random distribution for the parallel perturbations, π_i = ±σ with equal probability for both polarities, simplifying the multiply operations in the parameter updates.
In addition, powerful techniques exist to generate large-scale streams of pseudo-random bits in VLSI [15].

3 Numerical Simulations

For a test of the learning algorithm on time-dependent problems, we selected dynamic trajectory learning (a "Figure 8") as a representative example [2]. Several exact gradient methods based on an error functional of the form (2) exist [2]-[5], with a computational complexity scaling as either O(P) per epoch for an off-line method [2] (requiring history storage over the complete time interval of the error functional), or as O(P²) [3] and recently as O(P^{3/2}) [4]-[5] per epoch for an on-line method (with only most current history storage).³ The stochastic error-descent algorithm provides an on-line alternative with an O(P) per epoch complexity. As a consequence, including the extra P^{1/2} factor for the convergence speed relative to gradient descent, the overall computation complexity of the stochastic error-descent still scales like the best on-line exact gradient method currently available. For the simulations, we compared several runs of the stochastic method with a single run of an exact gradient-descent method, all runs starting from the same initial conditions. For a meaningful comparison, the equivalent learning rate for stochastic descent η_eff = μσ² was set to η, resulting in equal average speeds.

³ The distinction between on-line and off-line methods here refers to issues of time reversal in the computation. On-line methods process incoming data strictly in the order it is received, while off-line methods require extensive access to previously processed data. On-line methods are therefore more desirable for real-time learning applications.
We implemented binary random perturbations π_i = ±σ with σ = 1 × 10⁻³. We used the network topology, the teacher forcing mechanism, the values for the learning parameters and the values for the initial conditions from [4], case 4, except for η (and η_eff), which we reduced from 0.1 to 0.05 to avoid strong instabilities in the stochastic sessions. Each epoch represents one complete period of the figure eight. We found no local minima for the learning problem, and all sessions converged successfully within 4000 epochs, as shown in Fig. 1 (a). The occasional upward transitions in the stochastic error are caused by temporary instabilities due to the elevated value of the learning rate. At lower values of the learning rate, we observed significantly less frequent and articulate upward transitions. The measured distribution for the decrements in error at η_eff = 0.01 is given in Fig. 1 (b). The values of the stochastic error decrements in the histogram are normalized to the mean of the distribution, i.e. the error decrements by gradient descent (8). As expected, the error decreases at practically all times with an average rate equal to that of gradient descent, but the largest fraction of the updates cause little change in error.

Figure 1: Exact Gradient and Stochastic Error-Descent Methods for the Figure "8" Trajectory. (a) Convergence Dynamics (η = 0.05). (b) Distribution of the Error Decrements (η = 0.01).

4 Conclusion

The above analysis and examples serve to demonstrate the solid performance of the error-descent algorithm, in spite of its simplicity and the minimal requirements on explicit knowledge of internal structure.
While the functional simplicity and fault tolerance of the algorithm are particularly suited for hardware implementations, on conventional digital computers its efficiency compares favorably with pure gradient descent methods for certain classes of networks and optimization problems, owing to the involved effort to obtain full gradient information. The latter is particularly true for complex optimization problems, such as trajectory learning and adaptive control, with expensive scaling properties for the calculation of the gradient. In particular, the discrete formulation of the learning dynamics, decoupled from the dynamics of the network, enables the stochastic error-descent algorithm to handle dynamic networks and time-dependent optimization functionals gracefully.

Appendix: Formal Analysis

We analyze the algorithm for small perturbations π_i, by expanding (5) into a Taylor series around p:

    Ê = Σ_j (∂E/∂p_j) π_j + O(|π|²) ,    (6)

where the ∂E/∂p_j represent the components of the true error gradient, reflecting the physical structure of the network. Substituting (6) in (4) yields:

    Δp_i = -μ Σ_j (∂E/∂p_j) π_i π_j + O(|π|²) π_i .    (7)

For mutually uncorrelated perturbations π_i with uniform variance σ², E(π_i π_j) = σ² δ_ij, the parameter vector on average changes as

    E(Δp) = -μ σ² ∂E/∂p + O(σ³) .    (8)

Hence, on average the algorithm performs pure gradient descent as in (3), with an effective learning rate η = μσ². The fluctuations of the parameter updates (7) with respect to their average (8) give rise to diffusion in the error-descent process. Nevertheless, regardless of these fluctuations the error will always decrease under the updates (4), provided that the increments Δp_i are sufficiently small (μ small):

    ΔE = Σ_i (∂E/∂p_i) Δp_i + O(|Δp|²) ≈ -μ Σ_i Σ_j (∂E/∂p_i) π_i (∂E/∂p_j) π_j = -μ Ê² ≤ 0 .    (9)
Note that this is a direct consequence of the offset bias subtraction in (5), and (9) is no longer valid when the compensating reference term E(p) in (5) is omitted. The algorithm will converge towards local error minima just like gradient descent, as long as the perturbation vector π statistically explores all directions of the parameter space. In principle, statistical independence of the π_i is not required to ensure convergence, though in the case of cross-correlated perturbations the learning trajectory (7) does not on average follow the steepest path (8) towards the optima, resulting in slower learning.

The constant μ cannot be increased arbitrarily to boost the speed of learning. The value of μ is constrained by the allowable range for |Δp| in (9). The maximum level for |Δp| depends on the steepness and nonlinearity of the error functional E, but is largely independent of which algorithm is being used. A value of |Δp| exceeding the limit will likely cause instability in the learning process, just as it would for an exact gradient descent method. The constraint on |Δp| allows us to formulate the maximum attainable speed of the stochastic algorithm, relative to that of other methods. From (4),

    |Δp|² = μ² |π|² Ê² ≈ P μ² σ² Ê² ,    (10)

where P is the number of parameters. The approximate equality at the end of (10) holds for large P, and results from the central limit theorem for |π|² with E(π_i π_j) = σ² δ_ij. From (6), the expected value of (10) is

    E(|Δp|²) = P (μσ²)² |∂E/∂p|² .    (11)

The maximum attainable value for μ can be expressed in terms of the maximum value of η for gradient descent learning. Indeed, from a worst-case analysis of (3),

    |Δp|²_max = η²_max |∂E/∂p|²_max ,    (12)

and from a similar worst-case analysis of (11), we obtain P^{1/2} μ_max σ² ≈ η_max to a first order approximation.
With the derived value for μ_max, the maximum effective learning rate η_eff associated with the mean field equation (8) becomes η_eff = P^{-1/2} η_max for the stochastic method, as opposed to η_max for the exact gradient method. This implies that on average and under optimal conditions the learning process for the stochastic error-descent method is a factor P^{1/2} slower than optimal gradient descent. From similar arguments, it can be shown that for sequential perturbations π_j the effective learning rate for the mean field gradient descent satisfies η_eff = P^{-1} η_max. Hence under optimal conditions the sequential weight perturbation technique is a factor P slower than optimal gradient descent.

Acknowledgements

We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.

References

[1] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, D.E. Rumelhart and J.L. McClelland, eds., Cambridge, MA: MIT Press, 1986.
[2] B.A. Pearlmutter, "Learning State Space Trajectories in Recurrent Neural Networks," Neural Computation, vol. 1 (2), pp 263-269, 1989.
[3] R.J. Williams and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, vol. 1 (2), pp 270-280, 1989.
[4] N.B. Toomarian and J. Barhen, "Learning a Trajectory using Adjoint Functions and Teacher Forcing," Neural Networks, vol. 5 (3), pp 473-484, 1992.
[5] J. Schmidhuber, "A Fixed Size Storage O(n³) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks," Neural Computation, vol. 4 (2), pp 243-248, 1992.
[6] B. Widrow and M.A. Lehr, "30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation," Proc. IEEE, vol. 78 (9), pp 1415-1442, 1990.
[7] M. Jabri and B.
Flower, "Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayered Networks," IEEE Trans. Neural Networks, vol. 3 (1), pp 154-157, 1992.
[8] A. Dembo and T. Kailath, "Model-Free Distributed Learning," IEEE Trans. Neural Networks, vol. 1 (1), pp 58-70, 1990.
[9] H.P. Whitaker, "An Adaptive System for the Control of Aircraft and Spacecraft," Institute for Aeronautical Sciences, pap. 59-100, 1959.
[10] B.P. Anderson and D.A. Kerns, "Using Noise Injection and Correlation in Analog Hardware to Estimate Gradients," submitted, 1992.
[11] D. Kirk, D. Kerns, K. Fleischer, and A. Barr, "Analog VLSI Implementation of Gradient Descent," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
[12] P. Baldi, "Learning in Dynamical Systems: Gradient Descent, Random Descent and Modular Approaches," JPL Technical Report, California Institute of Technology, 1992.
[13] J. Alspector, R. Meir, B. Yuhas, and A. Jayakumar, "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
[14] B. Flower and M. Jabri, "Summed Weight Neuron Perturbation: An O(n) Improvement over Weight Perturbation," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufmann Publishers, vol. 5, 1993.
[15] J. Alspector, J.W. Gannett, S. Haber, M.B. Parker, and R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks," IEEE Trans. Circuits and Systems, vol. 38 (1), pp 109-123, 1991.

PART III
CONTROL, NAVIGATION, AND PLANNING
1992
101
576
Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes

Stephen Judd
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
judd@learning.siemens.com

Paul W. Munro
Department of Information Science
University of Pittsburgh
Pittsburgh, PA 15260
munro@lis.pitt.edu

ABSTRACT

In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.

1 INTRODUCTION

The encoder task described by Ackley, Hinton, and Sejnowski (1985) for the Boltzmann machine, and by Rumelhart, Hinton, and Williams (1986) for feed-forward networks, has been used as one of several standard benchmarks in the neural network literature. Cottrell, Munro, and Zipser (1987) demonstrated the potential of such autoencoding architectures for lossy compression of image data. In the encoder architecture, the weights connecting the input layer to the hidden layer play the role of an encoding mechanism, and the hidden-output weights are analogous to a decoding device. In the terminology of Shannon and Weaver (1949), the hidden layer corresponds to the communication channel. By analogy, channel noise corresponds to a fault (misfiring) in the hidden layer.
Previous encoder studies have shown that the representations in the hidden layer correspond to optimally efficient (i.e., fully compressed) codes, which suggests that introducing noise in the form of random interference with hidden unit function may lead to the development of codes more robust to noise of the kind that prevailed during learning. Many of these ideas also appear in Chiueh and Goodman (1987) and Sequin and Clay (1990). We have tested this conjecture empirically, and analyzed the resulting solutions, using a standard gradient-descent procedure (backpropagation). Although there are alternative techniques to encourage fault tolerance through construction of specialized error functions (e.g., Chauvin, 1989) or direct attacks (e.g., Neti, Schneider, and Young, 1990), we have used a minimalist approach that simply introduces intermittent node misfirings during training that mimic the errors anticipated during normal performance. In traditional approaches to developing error-correcting codes (e.g., Hamming, 1980), each symbol from a source alphabet is mapped to a codeword (a sequence of symbols from a code alphabet); the distance between codewords is directly related to the code's robustness.

2 METHODOLOGY

Computer simulations were performed using strictly layered feed-forward networks. The nodes of one of the hidden layers randomly misfire during training; in most experiments, this "channel" layer was the sole hidden layer. Each input node corresponds to a transmitted symbol, output nodes to received symbols, and channel representations to codewords; other layers are introduced as needed to enable nonlinear encoding and/or decoding. After training, the networks were analyzed under various conditions, in terms of performance and coding-theoretic measures, such as Hamming distance between codewords.
The response, r, of each unit in the channel layer is computed by passing the weighted sum, x, through the hyperbolic tangent (a sigmoid that ranges from -1 to +1). The responses of those units randomly designated to misfire are then multiplied by -1, as this is most comparable with concepts from coding theory for binary channels.¹ The misfire operation influences the course of learning in two ways, since the erroneous information is both passed on to units further "downstream" in the net, and used as the presynaptic factor in the synaptic modification rule. Note that the derivative factor in the backpropagation procedure is unaffected for units using the hyperbolic tangent, since dr/dx = (1+r)(1-r)/2. These misfirings were randomly assigned according to various kinds of probability distributions: independent identically distributed (i.i.d.), k-of-n, correlated across hidden units, and correlated over the input distribution. The hidden unit representations required to handle uncorrelated noise roughly correspond to Hamming spheres,² and can be decoded by a single layer of weights; thus the entire network consists of just three sets of units: source-channel-sink. However, correlated noise generally necessitates additional layers.

¹ Other possible misfire modes include setting the node's activity to zero (or some other constant) or randomizing it. The most appropriate mode depends on various factors, including the situation to be simulated and the type of analysis to be performed. For example, simulating neuronal death in a biological situation may warrant a different failure mode than simulating failure of an electronic component.

² Consider an n-bit block code, where each codeword lies on the vertex of an n-cube. The Hamming sphere of radius k is the neighborhood of vertices that differ from the codeword by a number of bits less than or equal to k.
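The misfire mechanism described above — a sigmoid ranging from -1 to +1 whose backpropagation derivative is (1+r)(1-r)/2 (which corresponds to r = tanh(x/2)), with randomly selected units having their responses multiplied by -1 — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the independent (i.i.d.) fault model are assumptions.

```python
import numpy as np

def channel_forward(x, p_misfire, rng):
    """Channel-layer response with sign-inversion misfires (a sketch).

    x : vector of weighted sums into the channel units.
    """
    # Sigmoid ranging from -1 to +1 whose derivative is (1+r)(1-r)/2,
    # i.e. r = tanh(x/2).
    r = np.tanh(x / 2.0)
    # Units independently designated to misfire have their response negated.
    flip = rng.random(r.shape) < p_misfire
    return np.where(flip, -r, r)

def sigmoid_deriv(r):
    """dr/dx expressed in terms of r, as used in the backward pass."""
    return (1.0 + r) * (1.0 - r) / 2.0
```

Because the derivative is a function of r alone and is even in the sign flip's effect on (1+r)(1-r), the backward-pass factor is computed the same way whether or not the unit misfired, matching the note above.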
All the experiments described below use the encoder task described by Ackley, Hinton, and Sejnowski (1985); that is, the input pattern consists of just one unit active and the others inactive. The task is to activate only the corresponding unit in the output layer. By comparison with coding theory, the input units are thus analogous to symbols to be encoded, and the hidden unit representations are analogous to the code words.

3 RESULTS

3.1. PERFORMANCE

The first experiment supports the claim of Sequin and Clay (1990) that training with faults improves network robustness. Four 8-30-8 encoders were trained with fault probability p = 0, 0.05, 0.1, and 0.3 respectively. After training, each network was tested with fault probabilities varying from 0.05 to 1.0. The results show enhanced performance for networks trained with a higher rate of hidden unit misfiring. Figure 1 shows four performance curves (one for each training fault probability), each as a function of test fault probability. Interesting convergence properties were also observed; as the training fault probability, p, was varied from 0 to 0.4, networks converge reliably faster for low nonzero values (0.05 < p < 0.15) than they do at p = 0.

Figure 1. Performance for various training conditions. Four 8-30-8 encoders were trained with different probabilities for hidden unit misfiring (p = 0.00, 0.05, 0.10, 0.30). Each data point is an average over 1000 random stimuli with random hidden unit faults. Outputs are scored correct if the most active output node corresponds to the active input node.

3.2. DISTANCE

3.2.1 Distances increase with fault probability

Distances were measured between all pairs of hidden unit representations.
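The distance statistics reported below (average and minimum pairwise L1 distance between channel-layer codewords) might be computed as in the following sketch; the function name and the convention of one representation per array row are assumptions, not the authors' code.

```python
import numpy as np
from itertools import combinations

def pairwise_l1_stats(reps):
    """Average and minimum L1 distance over all pairs of hidden-unit
    representations (one codeword per row of `reps`)."""
    dists = [float(np.abs(a - b).sum()) for a, b in combinations(reps, 2)]
    return sum(dists) / len(dists), min(dists)
```

With thresholded ±1 representations, the L1 distance between two codewords is simply twice their Hamming distance, so the same routine serves both measures.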
Several networks trained with different fault probabilities and various numbers of hidden units were examined. As expected, both the minimum distances and average distances increase with the training fault probability until it approaches 0.5 per node (see Figure 2). For probabilities above 0.25, the minimum distances fall within the theoretical bounds for a 30-bit code of a 16-symbol alphabet given by Gilbert and Elias (see Blahut, 1987).

Figure 2. Distance increases with fault probability. Average and minimum L1 distances are plotted for 16-30-16 networks trained with fault probabilities ranging from 0.0 to 0.4, together with the Elias bound. Each data point represents an average over 100 networks trained using different weight initializations.

3.2.2. Input probabilities affect distance

The probability distribution over the inputs influences the relative distances of the representations at the hidden unit level. To illustrate this, a 4-10-4 encoder was trained using various probabilities for one of the four inputs (denoted P*), distributing the remaining probability uniformly among the other three. The average distance between the representation of P* and the others increases with its probability, while the average distance among the other three decreases, as shown in the upper part of Figure 3. The more frequent patterns are generally expected to "claim" a larger region of representation space.

Figure 3. Non-uniform input distribution. 4-10-4 encoders were trained using failure probabilities of 0 (squares), 0.1 (circles), and 0.2 (triangles).
The input distribution was skewed by varying the probability of one of the four items (denoted P*) in the training set from 0.05 to 0.5, keeping the other probabilities uniform. Average L1 distances are shown from the manipulated pattern to the other three (open symbols) and among the equiprobables (filled symbols) as well. In the upper figure, failure is independent of the input, while in the lower figure, failure is induced only when P* is presented.

The dashed line in Figure 3 indicates a uniform input distribution; hence in the top figure, the average distance to P* is equal to the average distances among the other patterns. However, this does not hold in the lower figure, indicating that the representations of stimuli that induce more frequent channel errors also claim more representation space.

3.3. CORRELATED MISFIRING

If the error probability for each bit in a message (or each hidden unit in a network layer) is uncorrelated with the other message bits (hidden units), then the principle of distance between codewords (representations) applies. On the other hand, if there is some structure to the noise (i.e., the misfirings are correlated across the hidden units), there may be different strategies for encoding and decoding that require computations other than simple distance. While a Hamming distance criterion on a hypercube is a linearly separable classification function, and hence computable by a single layer of weights, the more general case is not linearly separable, as is demonstrated below.

Example: Misfiring in 2 of 6 channel units. In this example, up to two of six channel units are randomly selected to misfire with each learning trial. In order to guarantee full recovery from two simultaneous faults, only two symbols can be represented if the faults are independent; however, if one fault is always in one three-unit subset and the other is always in the complementary subset, it is possible to store four patterns.
The following code can be considered with no loss of generality: Let the six hidden units (code bits) be partitioned into two sets of three, where there is at most one fault in each subset. The four code words 000000, 000111, 111000, 111111 form an error-correcting code under this condition; i.e., each subset is a triplicate code. Under the allowed fault combinations specified above, any given transmitted code string will be converted by noise to one of 9 strings of the 15 that lie at a Hamming distance of 2 (the 15 unconstrained two-bit errors of the string 000000 are shown in the table below; the 9 that satisfy the constraint are listed first). Because of the symmetric distribution of these 9 allowed states, any category that includes all of them and is defined by a linear (hyperplane) boundary must include all 15. Thus, this code cannot be decoded by a single layer of threshold (or sigmoidal) units; hence even if a 4-6-4 network discovers this code, it will not decode it accurately. However, our experiments show that inserting a reliable (fault-free) hidden layer of just two units between the channel layer and the output layer (i.e., a 4-6-2-4 encoder) enables the discovery of a code that is robust to errors of this kind. The representations of the four patterns in the channel layer show a triply redundant code in each half of the channel layer (Figure 4). The 2-unit layer provides a transformation that allows successful decoding of channel representations with faults.

Table. Possible two-bit error masks.
One error in each half (satisfying the constraint; boxed in the original):
    001001  001010  001100  010001  010010  010100  100001  100010  100100
Both errors in the same half:
    000011  000101  000110  011000  101000  110000

Figure 4. Sample solution to the 3-3 channel task. Thresholded activation patterns are shown for a 4-6-2-4 network (input, channel, decoder, and output layers). Errors are introduced into the first hidden (channel) layer only.
With each iteration, the outputs of one hidden unit from the left half of the hidden layer and one unit from the right half can be inverted. Note that the channel develops a triplicate code for each half-layer.

4 DISCUSSION

Results indicate that vanilla backpropagation on its own does not spread out the hidden unit representations (codewords) optimally, and that deliberate random misfiring during training induces wider separations, increasing resistance to node misfiring. Furthermore, non-uniform input distributions and non-uniform channel properties lead to asymmetries among the similarity relationships between hidden unit representations that are consistent with optimizing mutual information. A mechanism of this kind may be useful for increasing fault tolerance in electronic systems, and may be used in neurobiological systems.

The potential usefulness of inducing faults during training extends beyond fault tolerance. Clay and Sequin (1992) point out that training of this kind can enhance the capacity of a network to generalize. In effect, the probability of random faults can be used to vary the number of "effective parameters" (a term coined by Moody, 1992) available for adaptation, without dynamically altering network architecture. Thus, a naive system might begin with a relatively high probability of misfiring, and gradually reduce it as storage capacity needs increase with experience. This technique may be particularly valuable for designing efficient, robust codes for channels with high-order statistical properties, which defy traditional coding techniques. In such cases, a single layer of weights for encoding is not generally sufficient, as was shown above in the 4-6-2-4 example. Additional layers may enhance code efficiency for complex noiseless applications, such as image compression (Cottrell, Munro, and Zipser, 1987).
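The constrained-fault argument behind the 4-6-2-4 example can be checked mechanically: with at most one inverted bit in each three-unit half, majority vote over each triplet recovers every one of the four codewords. The following sketch uses a hand-written majority decoder, not the learned two-unit transformation, so it only verifies the coding-theoretic claim.

```python
CODEWORDS = ["000000", "000111", "111000", "111111"]

def flips(half):
    """The 3-bit half itself plus its three one-bit inversions."""
    yield half
    for i in range(3):
        yield half[:i] + ("0" if half[i] == "1" else "1") + half[i + 1:]

def decode(word):
    """Majority-vote each three-bit half of the triplicate code."""
    out = []
    for half in (word[:3], word[3:]):
        out.append(("1" if half.count("1") >= 2 else "0") * 3)
    return "".join(out)
```

Enumerating all 16 allowed fault patterns per codeword (including the fault-free case) confirms exact recovery, whereas two faults landing in the same half would defeat the majority vote, matching the text's two-symbol limit for independent faults.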
Acknowledgements

The second author participated in this research as a visiting research scientist during the summers of 1991 and 1992 at Siemens Corporate Research, which kindly provided financial support and a stimulating research environment.

References

Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985) A learning algorithm for Boltzmann machines. Cognitive Science, 9: 147-169.
Blahut, R. E. (1987) Principles and Practice of Information Theory. Reading, MA: Addison-Wesley.
Chauvin, Y. (1989) A back-propagation algorithm with optimal use of hidden units. In: Touretzky, D. S. (ed.) Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann Publishers.
Chiueh, Tz-Dar and Rodney Goodman. (1987) A neural network classifier based on coding theory. In: Dana Z. Anderson, editor, Neural Information Processing Systems, pp 174-183, New York: A.I.P.
Clay, Reed D. and Sequin, Carlo H. (1992) Fault tolerance training improves generalization and robustness. Proceedings of IJCNN-92, I-769, Baltimore.
Cottrell, G. W., P. Munro, and D. Zipser (1987) Image compression by back propagation: An example of extensional programming. Ninth Annual Meeting of the Cognitive Science Society, pp. 461-473.
Hamming, R. W. (1980) Coding and Information Theory. Prentice Hall: Englewood Cliffs, NJ.
Moody, J. (1992) The effective number of parameters. In: Moody, J. E., Hanson, S. J., Lippmann, R. (eds.) Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers.
Neti, C., M. H. Schneider, and E. D. Young. (1990) Maximally fault-tolerant neural networks and nonlinear programming. Proceedings of IJCNN, II-483, San Diego.
Rumelhart D., Hinton G., and Williams R. (1986) Learning representations by back-propagating errors. Nature 323: 533-536.
Sequin, Carlo H. and Reed D. Clay (1990) Fault tolerance in artificial neural networks. Proceedings of IJCNN, I-703, San Diego.
Shannon, C. and Weaver, W.
(1949) The Mathematical Theory of Communication. University of Illinois Press.

PART II
ARCHITECTURES AND ALGORITHMS
1992
102
577
Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections

Kevin E. Martin
Jonathan A. Marshall
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.

Abstract

Human vision systems integrate information nonlocally, across long spatial ranges. For example, a moving stimulus appears smeared when viewed briefly (30 ms), yet sharp when viewed for a longer exposure (100 ms) (Burr, 1980). This suggests that visual systems combine information along a trajectory that matches the motion of the stimulus. Our self-organizing neural network model shows how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways that unsmear representations of moving stimuli. These results account for Burr's data and can potentially also model other phenomena, such as visual inertia.

1 INTRODUCTION

Nonlocal interactions strongly influence the processing of visual motion information and the response characteristics of visual neurons. Examples include: attentional modulation of receptive field shape; modulation of neural response by stimuli beyond the classical receptive field; and neural response to large-field background motion. In this paper we present a model of the development of nonlocal neural mechanisms for visual motion processing. Our model (Marshall, 1990a, 1991) is based on the long-range excitatory horizontal intrinsic connections (LEHICs) that have been identified in the visual cortex of a variety of animal species (Blasdel, Lund, & Fitzpatrick, 1985; Callaway & Katz, 1990; Gabbott, Martin, & Whitteridge, 1987; Gilbert & Wiesel, 1989; Luhmann, Martinez Millan, & Singer, 1986; Lund, 1987; Michalski, Gerstein, Czarkowska, & Tarnecki, 1983; Mitchison & Crick, 1982; Nelson & Frost, 1985; Rockland & Lund, 1982, 1983; Rockland, Lund, & Humphrey, 1982; Ts'o, Gilbert, & Wiesel, 1986).
2 VISUAL UNSMEARING

Human visual systems summate signals over a period of approximately 120 ms in daylight (Burr, 1980; Ross & Hogben, 1974). This slow summation reinforces stationary stimuli but would tend to smear any moving object. Nevertheless, human observers report perceiving both stationary and moving stimuli as sharp (Anderson, Van Essen, & Gallant, 1990; Burr, 1980; Burr, Ross, & Morrone, 1986; Morgan & Benton, 1989; Welch & McKee, 1985). Why do moving objects not appear smeared?

Burr (1980) measured perceived smear of moving spots as a function of exposure time. He found that a moving visual spot appears smeared (with a comet-like tail) when it is viewed for a brief exposure (30 ms) yet perfectly sharp when viewed for a longer exposure (100 ms) (Figure 1). The ability to counteract smear at longer exposures suggests that human visual systems combine (or integrate) and sharpen motion information from multiple locations along a specific spatiotemporal trajectory that matches the motion of the stimulus (Barlow, 1979, 1981; Burr, 1980; Burr & Ross, 1986) in the domains of direction, velocity, position, and time. This unsmearing phenomenon also suggests the existence of a memory-like effect, or persistence, which would cause the behavior of processing mechanisms to differ in the early, smeared stages of a spot's motion and in the later, unsmeared stages.

3 NETWORK ARCHITECTURE

We built a biologically-modeled self-organizing neural network (SONN) containing long-range excitatory horizontal intrinsic connections (LEHICs) that learns to integrate visual motion information nonlocally. The network laterally propagates predictive moving-stimulus information in a trajectory-specific manner to successive image locations where a stimulus is likely to appear. The network uses this propagated information to sharpen its representation of visual motion.
3.1 LONG-RANGE EXCITATORY HORIZONTAL INTRINSIC CONNECTIONS

The network's LEHICs modeled several characteristics consistent with neurophysiological data:
• They are highly specific and anisotropic (Callaway & Katz, 1990).
• They typically run between neurons with similar stimulus preferences (Callaway & Katz, 1990).
• They can run for very long distances across the network space (e.g., 10 mm horizontally across cortex) (Luhmann, Martinez Millan, & Singer, 1986).
• They can be shaped adaptively through visual experience (Callaway & Katz, 1990; Luhmann, Martinez Millan, & Singer, 1986).
• They may serve to predictively prime motion-sensitive neurons (Gabbott, Martin, & Whitteridge, 1987).

Figure 1: Motion unsmearing. A spot presented for 30 ms appears to have a comet-like tail, but a spot presented for 100 ms appears sharp and unsmeared (Burr, 1980).

Some characteristics of our modeled LEHICs are also consistent with those of the horizontal connections described by Hirsch & Gilbert (1991). For instance, we predicted (Marshall, 1990a) that horizontal excitatory input alone should not cause suprathreshold activation, but horizontal excitatory input should amplify activation when local bottom-up excitation is present. Hirsch & Gilbert (1991) directly observed these characteristics in area 17 pyramidal neurons in the cat. Since LEHICs are found in early vision processing areas like V1, we hypothesize that similar connections are likely to be found within "higher" cortical areas as well, like areas MT and STS. Our simulated networks may correspond to structures in such higher areas.
Although our long-range lateral signals are modeled as being excitatory (Orban, Gulyas, & Vogels, 1987), they are also functionally homologous to long-range trajectory-specific lateral inhibition of neurons tuned to null-direction motion (Ganz & Felder, 1984; Marlin, Douglas, & Cynader, 1991; Motter, Steinmetz, Duffy, & Mountcastle, 1987). LEHICs constitute one possible means by which nonlocal communication can take place in visual cortex. Other means, such as large bottom-up receptive fields, can also cause information to be transmitted nonlocally. However, the main difference between LEHICs and bottom-up receptive fields is that LEHICs provide lateral feedback information about the outcome of other processing within a given stage. This generates a form of memory, or persistence. Purely bottom-up networks (without LEHICs or other feedback) would perform processing afresh at each step, so that the outcome of processing would be influenced only by the direct, feedforward inputs at each step.

3.2 RESULTS OF NETWORK DEVELOPMENT

In our model, developmental exposure to moving stimuli guides the formation of motion-integration pathways that unsmear representations of moving stimuli. Our model network is repeatedly exposed to training input sequences of smeared motion patterns through bottom-up excitatory connections. Smear is modeled as an exponential decay and represents the responses of temporally integrating neurons to moving visual stimuli. The network contains a set of initially nonspecific LEHICs with fixed signal transmission latencies. The moving stimuli cause the pattern of weights across the LEHICs to become refined, eventually forming "chains" that correspond to trajectories in the visual environment. To model unsmearing fully, we would need a 2-D retinotopically organized layer of neurons tuned to different directions of motion and different velocities.
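The smeared training inputs described above can be made concrete with a small generator; this is an illustrative sketch, not the paper's code, and the decay constant, retina size, and one-cell-per-step speed are assumed values:

```python
def smeared_sequence(n_cells=12, n_steps=12, decay=0.6):
    """Bottom-up input for a spot moving one cell per time step,
    with past activation decaying exponentially -- the 'comet-like
    tail' produced by temporally integrating neurons."""
    frames = []
    trace = [0.0] * n_cells
    for t in range(n_steps):
        trace = [decay * v for v in trace]  # fade earlier positions
        trace[t % n_cells] = 1.0            # current spot position
        frames.append(list(trace))
    return frames

frames = smeared_sequence()
# frames[3] == [0.216, 0.36, 0.6, 1.0, 0.0, ...]: a decaying tail
# trails the spot, like the smeared patterns fed to the network
```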
Each trajectory in visual space would be represented by a set of like velocity- and direction-sensitive neurons whose receptive fields are located along the trajectory. These neurons would be connected through a trajectory-specific chain of time-delayed LEHICs. Lateral inhibition between chains would be organized selectively to allow representations of multiple stimuli to be simultaneously active (Marshall, 1990a), thereby letting most trajectory representations operate independently. Our simulation consists of a 1-D subnetwork of the full 2-D network, with 32 neurons sensitive to a single velocity and direction of motion (Figure 2a). The lateral inhibitory connections are fixed in a Gaussian distribution, but the LEHIC weights can change according to a modified Hebbian rule (Grossberg, 1982):

d/dt z_ji = ε f(x_i) (−z_ji + h(x_j)),

where z_ji represents the weight of the LEHIC from the jth neuron to the ith neuron, x_i represents the value of the activation level of the ith neuron, ε is a slow learning rate, h(x_j) = max(0, x_j)^2 is a faster-than-linear signal function, and f(x_i) = max(0, x_i)^2 is a faster-than-linear sampling function. To model multiple-step trajectories, we used LEHICs with three different signal transmission delays. Initially the LEHICs were all represented, but their weights were zero. As stimuli move across the receptive fields of the neurons in the network, many neurons are coactive because the network is unable to resolve the smear. By the learning rule, the weights of the LEHICs between these coactive neurons increase. This leads to a profusion of connection weights (Figure 2b), analogous to the "crude clusters" proposed by Callaway and Katz (1990) to describe the early (postnatal days 14-35) structure of horizontal connections in cat area V1. After sufficient exposure to moving stimuli, the "crude clusters" in our simulation become sharper (Figure 2c) because of the faster-than-linear signal functions.
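A minimal sketch of this learning rule as a discrete Euler step; the learning rate, step count, and the constant activations are illustrative assumptions, not the simulation's parameters:

```python
def f(x):
    """Faster-than-linear sampling function f(x) = max(0, x)^2."""
    return max(0.0, x) ** 2

def h(x):
    """Faster-than-linear signal function h(x) = max(0, x)^2."""
    return max(0.0, x) ** 2

def lehic_step(z_ji, x_i, x_j, eps=0.01):
    """One Euler step of dz_ji/dt = eps * f(x_i) * (-z_ji + h(x_j)).
    The weight changes only while the postsynaptic neuron i samples
    (f(x_i) > 0), and it relaxes toward the presynaptic signal h(x_j)."""
    return z_ji + eps * f(x_i) * (-z_ji + h(x_j))

# Coactive pair: the weight climbs toward h(x_j) = 1.0.
z = 0.0
for _ in range(1000):
    z = lehic_step(z, x_i=1.0, x_j=1.0)

# Inactive postsynaptic neuron: the weight is left unchanged.
assert lehic_step(0.5, x_i=0.0, x_j=1.0) == 0.5
```

This saturating form is what lets weights between coactive neurons grow during the profusion phase and then settle into stable chains rather than diverging.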
This refinement of the pattern of connection weights into chains might correspond to the later (postnatal day 42+) development of "refined clusters" described by Callaway and Katz (1990).

3.3 RESULTS OF NETWORK OPERATION

Before learning begins, the network is incapable of unsmearing a stimulus moving across the receptive fields of the neurons (Figure 3a). As the stimulus moves from one position to the next, the pattern of neuron activations is no less smeared than the moving input pattern. No information is built up along the trajectory since the LEHIC weights are still zero. After training, the network is able to resolve the smear (Figure 3b) in a manner reminiscent of Burr's results (Figure 1).

Figure 2: Three phases of modeled development. (a) Initial. Lateral excitatory connections were modifiable and had zero weight. Lateral inhibition was fixed in a Gaussian distribution (thickness of dotted arrows). The neurons received sequences of smeared rightward-moving excitatory input patterns. (b) Profusion. During early development lateral excitatory connections went through a phase of weight profusion. The output LEHIC weights (thickness of arrows) from one neuron (filled circle) are shown; weights were biased toward rightward motion. (c) Refinement. During later development, the pattern of weights settled into sets of regular anisotropic chains; most of the early profuse connections were eliminated. No external parameters were manipulated during the simulation to induce the initial-profusion-refinement phase transitions. The simulation contained three different signal transmission latencies, but only one is shown here.
As a stimulus moves, it excites a sequence of neurons whose receptive fields lie along its trajectory. As each neuron receives excitatory input in turn from the moving stimulus, it becomes activated and emits excitatory signals along its trajectory-specific LEHICs. Subsequent neurons along the trajectory then receive both direct stimulus-generated excitation and lateral time-delayed excitation. The combination causes these neurons to become even more active; thus activation accumulates along the chain toward an asymptote. The accumulating activation lets neurons farther along the trajectory more effectively suppress (via lateral inhibition) the activation of the neurons carrying the trailing smear. The comet-like tail contracts progressively, and the representation of the moving stimulus becomes increasingly sharp.

Figure 3: Results of unsmearing simulation. A simulated spot moves rightward for 12 time steps along a 1-D model retina. Smeared input patterns are plotted as vertical lines, and relative output neuron activation patterns are plotted as shading intensity of circles (neurons). (a) Before learning (left), the network is unable to resolve the smear in the input, but (b) after learning (right), the smear is resolved by time step 11. The same test input patterns are used both before and after learning.

Each neuron's activation value x_i changes according to a shunting differential equation (Grossberg, 1982):

d/dt x_i = −A x_i + (B − x_i) E_i − (C + x_i) I_i,

where the neuron's total excitatory input E_i = K_i (1 + L_i) combines bottom-up input K_i (the smeared motion) with summed lateral excitation input L_i = L Σ_j h(x_j) z_ji, the neuron's inhibitory input is I_i = γ Σ_j g(x_j) z̄_ji, h(x_j) = max(0, x_j)^2 and g(x_j) = max(0, x_j)^3 are faster-than-linear signal functions, and A, B, C, L, β, and γ are constants.

4 CONCLUSIONS AND FUTURE RESEARCH

One might wonder why visual systems allow smear to be represented during the first 30 ms of a stimulus' motion, since simple winner-take-all lateral inhibition could easily eliminate the smear in the representation. Our research leads us to suggest that representing smear lets human visual systems tolerate and even exploit initial uncertainty in local motion measurements. A system with winner-take-all sharpening could not generate a reliable trajectory prediction from an initial inaccurate local motion measurement because the motion can be determined accurately only after multiple measurements along the trajectory are combined (Figure 4a). The inaccurate trajectory predictions of such a network would impair its ability to develop or maintain circuits for combining motion measurements (Marshall, 1990ab). We conclude that motion perception systems need to represent explicitly both initial smear and subsequent unsmearing.
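The shunting activation dynamics of Section 3.3 can be sketched with Euler integration; the constants and inputs below are illustrative assumptions chosen only to show the bounding behavior, not the simulation's actual parameters:

```python
def shunting_step(x, E, I, A=1.0, B=1.0, C=0.25, dt=0.01):
    """One Euler step of dx/dt = -A*x + (B - x)*E - (C + x)*I
    (Grossberg, 1982). Activation stays bounded in [-C, B] no
    matter how large the excitatory or inhibitory inputs grow."""
    return x + dt * (-A * x + (B - x) * E - (C + x) * I)

x = 0.0
for _ in range(5000):
    x = shunting_step(x, E=50.0, I=0.0)  # very strong excitation
# x settles near B*E/(A+E) = 50/51, still below the ceiling B = 1
```

The multiplicative (B − x) and (C + x) terms are what keep accumulation along a chain approaching an asymptote instead of growing without bound.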
Figure 4a illustrates that when a moving object first appears, the direction in which it will move is uncertain. As the object continues to move, its successor positions become increasingly predictable, in general. The initial smear in the representation is necessary for communicating prior trajectory information to the representations of many possible future trajectory positions. Faster-than-linear signal functions (Figure 4b) were used so that a neuron would generate little lateral excitation and inhibition when it is uncertain about the presence of the moving stimulus in its receptive field (when a new stimulus appears) and so that a highly active neuron (more certain about the presence of the stimulus in its receptive field) would generate strong lateral excitation and inhibition.

Figure 4: Visual motion system uncertainty. (a) When a moving object first appears, the direction in which it will move is uncertain (top row, circular shaded region). As motion proceeds (second, third, and fourth rows), the set of possible stimulus locations becomes increasingly predictable (smaller shaded regions). (b) Faster-than-linear signal functions maintain smear of uncertain data but sharpen more certain data.

Our results illustrate how visual systems may become able both to propagate motion information in a trajectory-specific manner and to use the propagated information to unsmear representations of moving objects: (1) Regular anisotropic "chain" patterns of time-delayed horizontal excitatory connections become established through a learning procedure, in response to exposure to ordinary moving visual scenes. (2) Accumulation of propagated motion information along these chains causes a sharpening that unsmears representations of moving visual stimuli.
These results let us model the integration-along-trajectory revealed by Burr's (1980) experiment, within a developmental framework that corresponds to known neurophysiological data; they can potentially also let other nonlocal motion phenomena, such as visual inertia (Anstis & Ramachandran, 1987), be modeled.

ACKNOWLEDGEMENTS

This work was supported in part by the National Eye Institute (EY09669), by the Office of Naval Research (Cognitive and Neural Sciences, N00014-93-1-0130), and by an Oak Ridge Associated Universities Junior Faculty Enhancement Award.

REFERENCES

Anderson, C.H., Van Essen, D.C., & Gallant, J.L. (1990). "Blur into Focus." Nature, 343, 419-420.
Anstis, S.M. & Ramachandran, V.S. (1987). "Visual Inertia in Apparent Motion." Vision Research, 27(5), 755-764.
Barlow, H.B. (1979). "Reconstructing the Visual Image in Space and Time." Nature, 279, 189-190.
Barlow, H.B. (1981). "Critical Limiting Factors in the Design of the Eye and Visual Cortex." Proceedings of the Royal Society of London, Ser. B, 212, 1-34.
Blasdel, G.G., Lund, J.S., & Fitzpatrick, D. (1985). "Intrinsic Connections of Macaque Striate Cortex: Axonal Projections of Cells Outside Lamina 4C." Journal of Neuroscience, 5(12), 3350-3369.
Burr, D. (1980). "Motion Smear." Nature, 284, 164-165.
Burr, D. & Ross, J. (1986). "Visual Processing of Motion." Trends in Neuroscience, 9(7), 304-307.
Burr, D.C., Ross, J., & Morrone, M.C. (1986). "Seeing Objects in Motion." Proceedings of the Royal Society of London, Ser. B, 227, 249-265.
Callaway, E.M. & Katz, L.C. (1990). "Emergence and Refinement of Clustered Horizontal Connections in Cat Striate Cortex." Journal of Neuroscience, 10, 1134-1153.
Gabbott, P.L.A., Martin, K.A.C., & Whitteridge, D. (1987). "Connections Between Pyramidal Neurons in Layer 5 of Cat Visual Cortex (Area 17)." Journal of Comparative Neurology, 259, 364-381.
Ganz, L. & Felder, R. (1984).
"Mechanism of Directional Selectivity is Simple Neurons of the Cat's Visual Cortex Analyzed with Stationary Flash Sequences." Journal of Neurophysiology, 51,294-324. Gilbert, C.D. & Wiesel, T.N. (1989). "Columnar Specificity of Intrinsic Horizontal and Corticocortical Connections in Cat Visual Cortex." Journal of Neuroscience, 9, 2432-2442. Hirsch, J. & Gilbert, C.D. (1991). "Synaptic Physiology of Horizontal Connections in the Cat's Visual Cortex." Journal of Neuroscience, 11, 1800-1809. Luhmann, H.J., Martinez Millan, L., & Singer, W. (1986). "Development of 424 Martin and Marshall Horizonta.l Intrinsic Connections in Cat Striate Cortex." Experimental Brain Research, 63, 443-448. Lund, J .S. (1987). "Local Circuit Neurons of Macaque Monkey Striate Cortex: I. Neurons of Laminae 4C and 5A." Journal of Comparative Neurology, 257, 60-92. Marlin, S.G., Douglas, R.M., & Cynader, M.S. (1991). "Position-Specific Adaptation in Simple Cell Receptive Fields of the Cat Striate Cortex." Journal of Neurophysiology, 66(5),1769-1784. Marshall, J.A. (1990a). "Self-Organizing Neural Networks for Perception of Visual Motion." Neural Networks, 3, 45-74. Marshall, J .A. ,1990b). "Representation of Uncertainty in Self-Organizing Neural Networks.' Proceedings of the International Neural Network Conference, Paris, France, July 1990,809-812. Marshall, J .A. (1991). "Challenges of Vision Theory: Self-Organization of Neural Mechanisms for Stable Steering of Object-Grouping Data in Visual Motion Perception." Invited Paper, in Stochastic and Ne'uralll1ethods in Signal Processing, Image Processing, and Computer Vision, Su-Shing Chen, Ed., Proceedings of the SPIE 1569, San Diego, CA, July 1991, pp. 200-21.5. Michalski, A., Gerstein, G.L., Czarkowska, J., & Tarnecki, R. (1983). "Interactions Between Cat Striate Cortex Neurons." Experimental Brain Research, 51, 97-107. Mitchison, G. & Crick, F. (1982). 
"Long Axons \Vithin t.he Striate Cortex: Their Distribution, Orientat.ion, and Patt.erns of Connection." Proceedings of the National Academy of Sciences of the U.S.A., 79, 3661-3665. Morgan, M.J. & Benton, S. (1989). "Motion-Deblurring in Human Vision." Nature, 340, 385-386. Motter, B.C., Steinmetz, M.A., Duffy, C.J., & Mountcastle, V.B. (1987). "Functional Properties of Parietal Visual Neurons: Mechanisms of Directionality Along a Single Axis." Journal of Neuroscience, 7(1), 154-176. Nelson, J.1. & Frost, B.J. (1985). "Intracortical Facilitation Among Co-Oriented, Co-Axially Aligned Simple Cells in Cat Striate Cortex." Experimental Brain Research, 61, 54-6l. Orban, G.A., Gulyas, B., & Vogels, R. (1987). "Influence of a Moving Textured Background on Direction Selectivity of Cat Striate Neurons." Journal of Neurophysiology, 57(6), 1792-1812. Rockland, K.S. & Lund, J .S. (1982). "Widespread Periodic Intrinsic Connections in the Tree Shrew Visual Cortex." Science, 215, 1532-1534. Rockland, K.S. & Lund, J .S. (1983). "Intrinsic Laminar Lattice Connections in Primate Visual Cortex." Journal of Comparative Neurology, 216, 303-318. Rockland, K.S., Lund, J .S., & Humphrey, A.L. (1982). "Anatomical Banding of Intrinsic Connections in Striate Cortex of Tree Shrews (Tupaia glis )." Journal of Comparative Neurology, 209, 41-58. Ross, J. & Hogben, J.H. (1974). Vision Research, 14,1195-1201. Ts'o, D.Y., Gilbert, C.D., & Wiesel, T.N. (1986). "Relationships Between Horizontal Interactions and Functional Architecture in Cat Striate Cortex as Revealed by Cross-Correlation Analysis." Journal of Neuroscience, 6(4), 1160-1170. Welch, L. & McKee, S.P. (1985). "Colliding Targets: Evidence for Spatial Localization Within the Motion System." Vision Research, 25(12), 1901-1910.
1992
Remote Sensing Image Analysis via a Texture Classification Neural Network

Hayit K. Greenspan and Rodney Goodman
Department of Electrical Engineering, California Institute of Technology, 116-81, Pasadena, CA 91125
hayit@electra.micro.caltech.edu

Abstract

In this work we apply a texture classification network to remote sensing image analysis. The goal is to extract the characteristics of the area depicted in the input image, thus achieving a segmented map of the region. We have recently proposed a combined neural network and rule-based framework for texture recognition. The framework uses unsupervised and supervised learning, and provides probability estimates for the output classes. We describe the texture classification network and extend it to demonstrate its application to the Landsat and Aerial image analysis domain.

1 INTRODUCTION

In this work we apply a texture classification network to remote sensing image analysis. The goal is to segment the input image into homogeneous textured regions and identify each region as one of a prelearned library of textures, e.g. tree area and urban area distinction. Classification of remote sensing imagery is of importance in many applications, such as navigation, surveillance and exploration. It has become a very complex task spanning a growing number of sensors and application domains. The applications include: landcover identification (with systems such as the AVIRIS and SPOT), atmospheric analysis via cloud-coverage mapping (using the AVHRR sensor), oceanographic exploration for sea/ice type classification (SAR input) and more. Much attention has been given to the use of the spectral signature for the identification of region types (Wharton, 1987; Lee and Philpot, 1991). Only recently has the idea of adding on spatial information been presented (Ton et al., 1991). In this work we investigate the possibility of gaining information from textural analysis.
We have recently developed a texture recognition system (Greenspan et al., 1992) which achieves state-of-the-art results on natural textures. In this paper we apply the system to remote sensing imagery and check the system's robustness in this noisy environment. Texture can play a major role in segmenting the images into homogeneous areas and enhancing other sensors' capabilities, such as multispectral analysis, by indicating areas of interest in which further analysis can be pursued. Fusion of the spatial information with the spectral signature will enhance the classification and the overall automated analysis capabilities. Most of the work in the literature focuses on human expert-based rules with specific sensor data calibration. Some of the existing problems with this classic approach are the following (Ton et al., 1991):
- Experienced photointerpreters are required to spend a considerable amount of time generating rules.
- The rules need to be updated for different geographical regions.
- No spatial rules exist for the complex Landsat imagery.

An interesting question is if one can automate the rule generation. In this paper we present a learning framework in which spatial rules are learned by the system from a given database of examples. The learning framework and its contribution in a texture-recognition system is the topic of section 2. Experimental results of the system's application to remote sensing imagery are presented in section 3.

2 The texture-classification network

We have previously presented a texture classification network which combines a neural network and rule-based framework (Greenspan et al., 1992) and enables both unsupervised and supervised learning. The system consists of three major stages, as shown in Fig. 1. The first stage performs feature extraction and transforms the image space into an array of 15-dimensional feature vectors, each vector corresponding to a local window in the original image.
There is much evidence in animal visual systems supporting the use of multi-channel orientation-selective band-pass filters in the feature-extraction phase. An open issue is the decision regarding the appropriate number of frequencies and orientations required for the representation of the input domain. We define an initial set of 15 filters and achieve a computationally efficient filtering scheme via the multi-resolution pyramidal approach. The learning mechanism shown next derives a minimal subset of the above filters which conveys sufficient information about the visual input for its differentiation and labeling. In an unsupervised stage a machine-learning clustering algorithm is used to quantize the continuous input features. A supervised learning stage follows in which labeling of the input domain is achieved using a rule-based network. Here an information theoretic measure is utilized to find the most informative correlations between the attributes and the pattern class specification, while providing probability estimates for the output classes. Ultimately, a minimal representation for a library of patterns is learned in a training mode, following which the classification of new patterns is achieved.

Figure 1: System block diagram (a feature-extraction phase producing N-dimensional continuous feature vectors, followed by unsupervised clustering and supervised learning phases that output texture classes).

2.1 The system in more detail

The initial stage for a classification system is the feature extraction phase. In the texture-analysis task there is both biological and computational evidence supporting the use of Gabor-like filters for the feature-extraction. In this work, we use the Log Gabor pyramid, or the Gabor wavelet decomposition, to define an initial finite set of filters. A computationally efficient
scheme involves using a pyramidal representation of the image which is convolved with fixed-spatial-support oriented Gabor filters (Greenspan et al., 1993). Three scales are used with 4 orientations per scale (0, 90, 45, 135 degrees), together with a non-oriented component, to produce a 15-dimensional feature vector as the output of the feature extraction stage. Using the pyramid representation is computationally efficient as the image is subsampled in the filtering process. Two such size reduction stages take place in the three-scale pyramid. The feature values thus generated correspond to the average power of the response, to specific orientation and frequency ranges, in an 8*8 window of the input image. Each such window gets mapped to a 15-dimensional attribute vector as the output of the feature extraction stage. The goal of the learning system is to use the feature representation described above to discriminate between the input patterns, or textures. Both unsupervised and supervised learning stages are utilized. A minimal set of features is extracted from the 15-dimensional attribute vector, which conveys sufficient information about the visual input for its differentiation and labeling. The unsupervised learning stage can be viewed as a preprocessing stage for achieving a more compact representation of the filtered input. The goal is to quantize the continuous-valued features which are the result of the initial filtering, thus shifting to a more symbolic representation of the input domain. This clustering stage was found experimentally to be of importance as an initial learning phase in a classification system. The need for discretization becomes evident when trying to learn associations between attributes in a symbolic representation, such as rules. The output of the filtering stage consists of N (=15) continuous-valued feature maps, each representing a filtered version of the original input.
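The pyramid pipeline can be made concrete with the sketch below. It is an illustrative reading only: crude difference operators stand in for the oriented Gabor filters, 2x2 mean pooling stands in for the subsampling, and reading the 15 dimensions as five features (four orientations plus the non-oriented component) at each of three scales is an assumption about the layout:

```python
def downsample(img):
    """One pyramid level down: 2x2 mean pooling."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def level_features(img):
    """Average response power of four oriented difference operators
    (0, 90, 45, 135 degrees) plus a non-oriented component."""
    h, w = len(img), len(img[0])
    sums, n = [0.0] * 5, (h - 1) * (w - 1)
    for r in range(h - 1):
        for c in range(w - 1):
            responses = (
                img[r][c+1] - img[r][c],    # 0 degrees
                img[r+1][c] - img[r][c],    # 90 degrees
                img[r+1][c+1] - img[r][c],  # 45 degrees
                img[r+1][c] - img[r][c+1],  # 135 degrees
                img[r][c],                  # non-oriented
            )
            for i, d in enumerate(responses):
                sums[i] += d * d
    return [s / n for s in sums]

def texture_vector(img):
    """15-dimensional vector: 5 features at each of 3 pyramid scales."""
    feats = []
    for _ in range(3):
        feats.extend(level_features(img))
        img = downsample(img)
    return feats

stripes = [[c % 2 for c in range(16)] for _ in range(16)]
v = texture_vector(stripes)  # strong 0-degree energy, none at 90
```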
Thus, each local area of the input image is represented via an N-dimensional feature vector. An array of such N-dimensional vectors, viewed across the input image, is the input to the learning stage. We wish to detect characteristic behavior across the N-dimensional feature space for the family of textures to be learned. In this work, each dimension of the 15-dimensional attribute vector is individually clustered. All training samples are thus projected onto each axis of the space and one-dimensional clusters are found using the K-means clustering algorithm (Duda and Hart, 1973). This statistical clustering technique consists of an iterative procedure of finding K means in the training sample space, following which each new input sample is associated with the closest mean in Euclidean distance. The means, labeled 0 through K−1 arbitrarily, correspond to discrete codewords. Each continuous-valued input sample gets mapped to the discrete codeword representing its associated mean. The output of this preprocessing stage is a 15-dimensional quantized vector of attributes which is the result of concatenating the discrete-valued codewords of the individual dimensions. In the final, supervised stage, we utilize the existing information in the feature maps for higher level analysis, such as input labeling and classification. A rule-based information theoretic approach is used which is an extension of a first-order Bayesian classifier, because of its ability to output probability estimates for the output classes (Goodman et al., 1992). The classifier defines correlations between input features and output classes as probabilistic rules. A data-driven supervised learning approach utilizes an information theoretic measure to learn the most informative links or rules between features and class labels. The classifier then uses these links to provide an estimate of the probability of a given output class being true.
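A toy sketch of this rule-based classification step: quantized attribute values fire rules, each fired rule adds its weight of evidence to a class's score, and the largest score wins. The attribute names, codewords, and weights here are hypothetical, not values learned by the actual system:

```python
def classify(evidence, rules, classes):
    """Sum the weight of evidence of every fired rule into its class's
    score; return the winning class and all scores."""
    scores = {c: 0.0 for c in classes}
    for attr, codeword, cls, weight in rules:
        if evidence.get(attr) == codeword:  # the rule fires
            scores[cls] += weight
    return max(scores, key=scores.get), scores

# Hypothetical learned rules: (attribute, codeword, class, weight).
rules = [
    ("f0", 2, "urban", 1.2),
    ("f3", 0, "urban", 0.4),
    ("f0", 0, "hills", 0.9),
    ("f7", 1, "hills", 0.7),
]
best, scores = classify({"f0": 2, "f3": 0, "f7": 1},
                        rules, ["urban", "hills"])
# best == "urban": two urban rules fire (1.6) vs one hills rule (0.7)
```

Because the scores are additive accumulations of evidence rather than hard votes, they can be read as relative (log-domain) probability estimates, which is what enables the "unknown region" feedback used later in the results.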
When presented with a new input evidence vector, a set of rules R can be considered to "fire". The classifier estimates the posterior probability of each class given the rules that fire, in the form log p(x|R), and the largest estimate is chosen as the initial class label decision. The probability estimates for the output classes can now be used for feedback purposes and further higher level processing. The rule-based classification system can be mapped into a 3-layer feed-forward architecture as shown in Fig. 2 (Greenspan et al., 1993). The input layer contains a node for each attribute. The hidden layer contains a node for each rule and the output layer contains a node for each class. Each rule (second-layer node j) is connected to a class via a multiplicative weight of evidence Wj.

Figure 2: Rule-based network (inputs, rules, and class probability estimates).

3 Results

The above-described system has achieved state-of-the-art results on both structured and unstructured natural texture classification [5]. In this work we present initial results of applying the network to the noisy environment of satellite and airborne imagery. Fig. 3 presents two such examples. The first example (top) is an image of Pasadena, California, taken via the AVIRIS system (Airborne Visible/Infrared Imaging Spectrometer). The AVIRIS system covers 224 contiguous spectral bands simultaneously, at 20 meters per pixel resolution. The presented example is taken as an average of several bands in the visual range. In this input image we can see that a major distinguishing characteristic is urban area vs. hilly surround. These are the two categories we set forth to learn. The training consists of a 128*128 image sample for each category. The test input is a 512*512 image which is very noisy and, because of its low resolution, very difficult to segment into the two categories, even to our own visual perception.
In the presented output (top right), the urban area is labeled in white, the hillside in gray, and unknown, undetermined areas are in darker gray. We see that a rough segmentation into the desired regions has been achieved. The probabilistic network's output allows for the identification of unknown or unspecified regions, in which more elaborate analysis can be pursued (Greenspan et al., 1992). The dark gray areas correspond to such regions; one example is the hill and urban contact (bottom right), in which some urban suburbs on the hill slopes form a mixture of the classes. Note that in the initial results presented, the blockiness perceived is the result of the analysis resolution chosen. Fusing additional spectral bands into the system as input would enable pixel resolution as well as enable detecting additional classes (not visually detectable), such as concrete material, a variety of vegetation, etc. A higher resolution Airborne image is presented at the bottom of Fig. 3. The classes learned are bush (output label dark gray), ground (output label gray) and a structured area, such as the field present or the man-made structures (white). Here, the training was done on 128*128 image examples (1 example per class). The input image is 800*800. In the result presented (right) we see that the three classes have been found and a rough segmentation into the three regions is achieved. Note in particular the detection of the bush areas and the three main structured areas in the image, including the man-made field, indicated in white. Our final example relates to an autonomous navigation scenario. Autonomous vehicles require an automated scene analysis system to avoid obstacles and navigate through rough terrain. Fusion of several visual modalities, such as intensity-based segmentation, texture, stereo, and color, together with other domain inputs, such as soil spectral decomposition analysis, will be required for this challenging task. In Fig. 4
we present preliminary results on outdoor photographed scenes taken by an autonomous vehicle at JPL (Jet Propulsion Laboratory, Pasadena). The presented scenes (left) are segmented into bush and gravel regions (right). The training set consists of four 64*64 image samples from each category. In the top example (a 256*256-pixel image), light gray indicates gravel while black represents bushy regions. We can see that intensity alone cannot suffice for this task (for example, the top right corner). The system has learned some textural characteristics which guided the segmentation in otherwise similar-intensity regions. Note that this is also probably the cause for identifying the track-like region (e.g., center bottom) as a bush region. We could learn track-like regions as a third category, or specifically include such examples as gravel in our training set. In the second example (a 400*400 input image, bottom), light gray indicates gravel, dark gray represents a bush-like region, and black represents the unknown category. Here, the top-right sky region is correctly labeled as unknown, a new category. Note that intensity alone would have confused that region with gravel. Overall, the texture-classification neural network succeeds in achieving a correct, yet rough, segmentation of the scene based on textural characteristics alone.

430 Greenspan and Goodman

Figure 3: Remote sensing image analysis results. The input test image is shown (left) followed by the system output classification map (right). In the AVIRIS (top) input, white indicates urban regions, gray is a hilly area, and dark gray reflects undetermined or different region types. In the Airborne output (bottom), dark gray indicates a bush area, light gray is a ground-cover region, and white indicates man-made structures. Both robustness to noise and generalization are demonstrated in these two challenging real-world problems.
These are encouraging results, indicating that the learning system has learned informative characteristics of the domain.

Figure 4: Image Analysis for Autonomous Navigation

4 Summary and Discussion

The presented results demonstrate the network's capability for generalization and robustness to noise in very challenging real-world problems. In the presented framework, a learning mechanism automates the rule generation. This framework can address some of the current difficulties in using human expert knowledge. Furthermore, the automation of the rule generation can enhance the expert's knowledge of the task at hand. We have demonstrated that the use of textural spatial information can segment complex scenery into homogeneous regions. Some of the system's strengths include generalization to new scenes, invariance to intensity, and the ability to enlarge the feature-vector representation to include additional inputs (such as additional spectral bands) and learn rules characterizing the integrated modalities. Future work includes fusing several modalities within the learning framework for enhanced performance, and testing performance on a large database.

Acknowledgements

This work is supported in part by Pacific Bell, and in part by DARPA and ONR under grant no. N00014-92-J-1860. H. Greenspan is supported in part by an Intel fellowship. The research described in this paper was carried out in part by the Jet Propulsion Laboratory, California Institute of Technology. We would like to thank Dr. C. Anderson for his pyramid software support and Dr. L. Matthies for the autonomous vehicle images.

References

S. Wharton. (1987) A Spectral-Knowledge-Based Approach for Urban Land-Cover Discrimination. IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-25(3):272-282.

J. Lee and W. Philpot. (1991) Spectral Texture Pattern Matching: A Classifier for Digital Imagery. IEEE Transactions on Geoscience and Remote Sensing, Vol.
29(4):545-554.

J. Ton, J. Sticklen and A. Jain. (1991) Knowledge-Based Segmentation of Landsat Images. IEEE Transactions on Geoscience and Remote Sensing, Vol. 29(2):222-232.

H. Greenspan, R. Goodman and R. Chellappa. (1992) Combined Neural Network and Rule-Based Framework for Probabilistic Pattern Recognition and Discovery. In J. E. Moody, S. J. Hanson, and R. P. Lippmann (eds.), Advances in Neural Information Processing Systems 4, 444-452, San Mateo, CA: Morgan Kaufmann Publishers.

H. Greenspan, R. Goodman, R. Chellappa and C. Anderson. (1993) Learning Texture Discrimination Rules in a Multiresolution System. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence.

R. O. Duda and P. E. Hart. (1973) Pattern Classification and Scene Analysis. John Wiley and Sons, Inc.

R. Goodman, C. Higgins, J. Miller and P. Smyth. (1992) Rule-Based Networks for Classification and Probability Estimation. Neural Computation, 4:781-804.
Filter Selection Model for Generating Visual Motion Signals

Steven J. Nowlan* CNL, The Salk Institute P.O. Box 85800, San Diego, CA 92186-5800

Terrence J. Sejnowski CNL, The Salk Institute P.O. Box 85800, San Diego, CA 92186-5800

Abstract

Neurons in area MT of primate visual cortex encode the velocity of moving objects. We present a model of how MT cells aggregate responses from V1 to form such a velocity representation. Two different sets of units, with local receptive fields, receive inputs from motion energy filters. One set of units forms estimates of local motion, while the second set computes the utility of these estimates. Outputs from this second set of units "gate" the outputs from the first set through a gain control mechanism. This active process of selecting only a subset of local motion responses to integrate into more global responses distinguishes our model from previous models of velocity estimation. The model yields accurate velocity estimates in synthetic images containing multiple moving targets of varying size, luminance, and spatial frequency profile, and deals well with a number of transparency phenomena.

1 INTRODUCTION

Humans, and primates in general, are very good at complex motion processing tasks such as tracking a moving target against a moving background under varying luminance. In order to accomplish such tasks, the visual system must integrate many local motion estimates from cells with limited spatial receptive fields and marked orientation selectivity. These local motion estimates are sensitive not just

*Current address: Synaptics Inc., 2698 Orchard Parkway, San Jose, CA 95134.

369 370 Nowlan and Sejnowski

to the velocity of a visual target, but also to many other features of the target, such as its spatial frequency profile or local edge orientation. As a result, the integration of these motion signals cannot be performed in a fixed manner, but must be a dynamic process dependent on the visual stimulus.
Although cells with motion-sensitive responses are found in primary visual cortex (V1 in primates), mounting physiological evidence suggests that the integration of these responses to produce responses tuned primarily to the velocity of a visual target first occurs in primate visual area MT (Albright 1992, Maunsell and Newsome 1987). We propose a computational model for integrating local motion responses to estimate the velocity of objects in the visual scene. These velocity estimates may be used for eye tracking or other visuo-motor skills. Previous computational approaches to this problem (Grzywacz and Yuille 1990, Heeger 1987, Heeger 1992, Horn and Schunk 1981, Nagel 1987) have primarily focused on how to combine local motion responses into local velocity estimates at all points in an image (the velocity flow field). We propose that the integration of local motion measurements may be much simpler if one does not try to integrate across all of the local motion measurements, but only a subset. Our model learns to estimate the velocity of visual targets by solving the problems of what to integrate and how to integrate in parallel. The trained model yields accurate velocity estimates from synthetic images containing multiple moving targets of varying size, luminance, and spatial frequency profile.

2 THE MODEL

The model is implemented as a cascade of networks of locally connected units with two parallel processing pathways (figure 1). All stages of the model are represented as "layers" of units with a roughly retinotopic organization. The figure schematically represents the activity in the model at one instant of time. Conceptually, it is easier to think of the model as computing evidence for particular velocities in an image rather than computing velocity directly. Processing in the model may be divided into three stages, described in more detail below.
In the first stage, the input intensity image is converted into 36 local motion "images" (9 of which are shown in the figure), which represent the outputs of 36 motion energy filters from each region of the input image. In the second stage, the operations of integration and selection are performed in parallel. The integration pathway combines information from motion energy filters tuned to different directions and spatial and temporal frequencies to compute the local evidence in favor of a particular velocity. The selection pathway weights each region of the image according to the amount of evidence for a particular velocity that region contains. In the third stage, the global evidence for a visual target moving at a particular velocity, V_k(t), is computed as a sum over the product of the outputs of the integration and selection pathways:

    V_k(t) = Σ_{x,y} I_k(x, y, t) S_k(x, y, t)    (1)

where I_k(x, y, t) is the local evidence for velocity k computed by the integration pathway from region (x, y) at time t, and S_k(x, y, t) is the weight assigned by the selection pathway to that region.

Figure 1: Diagram of motion processing model (64x64 input; motion energy filters of 9 types and 4 directions; integration and selection over an 8x8 grid; velocity output). Processing proceeds from left to right in the model, but the integration and selection stages operate in parallel. Shading within the boxes indicates different levels of activity at each stage. The responses shown in the diagram are intended to be indicative of the responses at different stages of the model, but do not represent actual responses from the model.

2.1 LOCAL MOTION ESTIMATES

The first stage of processing is based on the motion energy model (Adelson and Bergen 1985, Watson 1985). This model relies on the observation that an intensity edge moving at a constant velocity produces a line at a particular orientation in space-time.
This means that an oriented space-time filter will respond most strongly to objects moving at a particular velocity.¹ A motion energy filter uses the squared outputs of a quadrature pair (90° out of phase) of oriented filters to produce a phase-independent local velocity estimate. The motion energy model was selected as a biologically plausible model of motion processing in mammalian V1, based primarily on the similarity of responses of simple and complex cells in cat area V1 to the output of different stages of the motion energy model (Heeger 1992, Grzywacz and Yuille 1990, Emerson 1987).

The particular filters used in our model had spatial responses similar to a two-dimensional Gabor filter, with the physiologically more plausible temporal responses suggested by Adelson and Bergen (1985). The motion energy layer was divided into a grid of 49 by 49 receptive field locations, and at each grid location there were filters tuned to four different directions of motion (up, down, left, and right). For each direction of motion there were nine different filters, representing combinations of three spatial and three temporal frequencies. The filter center-frequency spacings were 1 octave spatially and 1.5 octaves temporally. The filter parameters and spacings were chosen to be physiologically realistic, and were fixed during training of the model. In addition, there was a correspondence between the size of the filter

¹These filters actually respond most strongly to a narrow band of spatial frequencies (SF) and temporal frequencies (TF), which represent a range of velocities, v = TF/SF.

372 Nowlan and Sejnowski

Figure 2: Diagram of integration and selection processing stages (local motion energy on a 49x49 grid; an 8x8 grid of integration pools; 33 selection layers with competition; output pool of 33 units). Different shadings for units in the integration and output pools correspond to different directions of motion.
Only two of the selection layers are shown, and the backgrounds of these layers are shaded to match their corresponding integration and output units. See the text for a description of the architecture.

receptive fields and the spatial frequency tuning of the filters, with lower-frequency filters having larger spatial extent to their receptive fields. This is also similar to what has been found in visual cortex (Maunsell and Newsome, 1987).

The input intensity image is first filtered with a difference-of-Gaussians filter, which is a simplification of retinal processing and provides smoothing and contrast enhancement. Each motion energy filter is then convolved with the smoothed input image, producing 36 motion energy responses at each location in the receptive field grid, which serve as the input to the next stage of processing.

2.2 INTEGRATION AND SELECTION

The integration and selection pathways are both implemented as locally connected networks with a single layer of weights. The integration pathway can be thought of as a layer of units organized into a grid of 8 by 8 receptive field locations (figure 2). Units at each receptive field location look at all 36 motion energy measurements from each location within a 9 by 9 region of the motion energy receptive field grid. Adjacent receptive field locations receive input from overlapping regions of the motion energy layer. At each receptive field location in the integration layer there is a pool of 33 integration units (9 units in one of these pools are shown in figure 2). These units represent motion in 8 different directions, with units representing four different speeds for each direction plus a central unit indicating no motion. These units form a log-polar representation of the local velocity at that receptive field location, since as one moves out along any "arm" of the pool of units, each unit represents a speed twice as large as the preceding unit in that arm.
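The 33-unit log-polar pool just described can be made concrete: 8 directions times 4 speeds, plus one no-motion unit, with speed doubling along each arm. The base speed here is an assumption, chosen so the outermost unit reaches 2.5 pixels per unit time (the training speed range reported in the Results section); the paper does not state the actual value.

```python
import numpy as np

# Sketch of the log-polar velocity representation: 8 directions x 4 speeds
# plus a central no-motion unit gives 33 preferred velocities. Speeds double
# along each arm; the base speed (2.5 / 8 = 0.3125) is an assumption.
base = 2.5 / 8
speeds = base * 2.0 ** np.arange(4)        # 0.3125, 0.625, 1.25, 2.5
angles = np.deg2rad(45 * np.arange(8))     # 8 directions, 45 degrees apart

preferred = [(0.0, 0.0)]                   # central unit: no motion
for a in angles:
    for s in speeds:
        preferred.append((s * np.cos(a), s * np.sin(a)))
preferred = np.array(preferred)            # (33, 2) preferred velocity vectors
```

Because activity is distributed across the pool, velocities between these 33 preferred values are represented by split activation over adjacent units, as noted below for the output pool.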
All of the integration pools share a common set of weights, so in the final trained model all compute the same function. The activity of an integration unit (which lies between 0 and 1) represents the amount of local support for the corresponding velocity. Local competition between the units in each integration pool enforces the important constraint that each integration pool can provide strong support for only one velocity. The competition is enforced using a softmax non-linearity: if I′_k(x, y, t) represents the net input to unit k in one of the integration pools, the state of that unit is computed as

    I_k(x, y, t) = exp(I′_k(x, y, t)) / Σ_j exp(I′_j(x, y, t)).

Note that the summation is performed over all units within a single pool, all of which share the same (x, y) receptive field location.

The output of the model is also represented by a pool of 33 units, organized in the same way as each pool of integration units. The state of each unit in the output pool represents the global evidence within the entire image supporting a particular velocity. The state of each of these output units, V_k(t), is computed as the weighted sum of the states of the corresponding integration unit at all 64 integration receptive field locations (equation (1)). The weights assigned to each receptive field location are computed by the states of the corresponding selection units (figure 2). Although the activity of an output unit can be treated as evidence for a particular velocity, the activity across the entire pool of units forms a distributed representation of a continuous range of velocities (i.e., activity split between two adjacent units represents a velocity between the optimal velocities of those two units).

The selection units are also organized into a grid of 8 by 8 receptive field locations, which are in one-to-one correspondence with the integration receptive field locations (figure 2).
However, it is convenient to think of the selection units as being organized not as a single layer of units, but rather as 33 layers of units, one for each output unit. In each layer of selection units there is one unit for each receptive field location. Two of the selection layers are shown in figure 2. The layer with the vertically shaded background corresponds to the output unit for upward motion (also shaded with vertical stripes), and the states of units in this selection layer weight the states of the upward-motion units in each integration pool (again shaded vertically).

There is global competition among all of the units in each selection layer. Again this is implemented using a softmax non-linearity: if S′_k(x, y, t) is the net input to a selection unit in layer k, the state of that unit is computed as

    S_k(x, y, t) = exp(S′_k(x, y, t)) / Σ_{x′,y′} exp(S′_k(x′, y′, t)).

Note that, unlike the integration case, the summation in this case is performed over all receptive field locations. This global competition enforces the second important constraint in the model: the total amount of support for each velocity across the entire image cannot exceed one. This constraint, combined with the fact that the integration unit outputs can never exceed 1, ensures that the states of the output units are constrained to be between 0 and 1 and can be interpreted as the global support within the image for each velocity, as stated earlier.

The combination of global competition in the selection layers and local competition within the integration pools means that the only way to produce strong support for a particular output velocity is for the corresponding selection network to focus all its support on regions that strongly support that velocity. This allows the selection network to learn to estimate how useful the information in different regions of an image is for predicting velocities within the visual scene.
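The two competitions, and the gated sum of equation (1) built from them, can be sketched with random net inputs standing in for the real network; the shapes (33 velocity units over an 8x8 grid of locations) follow the text, everything else is illustrative.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)  # stabilized softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
net_I = rng.normal(size=(33, 8, 8))   # net inputs to integration units
net_S = rng.normal(size=(33, 8, 8))   # net inputs to selection units

# Local competition: the 33 units in each pool (fixed x, y) compete,
# so each pool can strongly support only one velocity.
I = softmax(net_I, axis=0)

# Global competition: within each selection layer k, all 64 locations
# compete, so total support for each velocity cannot exceed one.
S = softmax(net_S, axis=(1, 2))

# Equation (1): global evidence per velocity, guaranteed to lie in [0, 1].
V = (I * S).sum(axis=(1, 2))
```

The two normalizations differ only in which axis the summation runs over, which is exactly the distinction drawn in the text.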
The weights of both the selection and integration networks are adapted in parallel, as discussed next.

2.3 OBJECTIVE FUNCTION AND TRAINING

The outputs of the integration and selection networks in the final trained model are combined as in equation (1), so that the final outputs represent the global support for each velocity within the image. During training of the system, however, the outputs of each pool of integration units are treated as if each were an independent estimate of support for a particular velocity. If a training image sequence contains an object moving at velocity V_k, then the target for the corresponding output unit is set to 1; otherwise it is set to 0. The system is then trained to maximize the likelihood of generating the targets:

    log L = Σ_t Σ_k log ( Σ_{x,y} S_k(x, y, t) exp[-(V_k - I_k(x, y, t))²] )    (2)

To optimize this objective, each integration output I_k(x, y, t) is compared to the target V_k directly, and the outputs closest to the target value are assigned the most responsibility for that target, and hence receive the largest error signal. At the same time, the selection network states are trained to estimate, from the input alone (i.e., the local motion measurements), which integration outputs are most accurate. This interpretation of the system during training is identical to the interpretation given to the mixture of experts (Nowlan, 1990), and the same training procedure was used. Each pool of integration units functions like an expert network, and each layer of selection units functions like a gating network. There are, however, two important differences between the current system and the mixture of experts. First, this system uses multiple gating networks rather than a single one, allowing the system to represent more than a single velocity within an image.
Second, in the mixture of experts, each expert network has an independent set of weights and essentially learns to compute a different function (usually different functions of the same input). In the current model, each pool of integration units shares the same set of weights and is constrained to compute the same function. The effect of the training procedure in this system is to bias the computations of the integration pools to favor certain types of local image features (for example, the integration stage may only make reliable velocity estimates in regions of shear or discontinuities in velocity). The selection networks learn to identify which features the integration stage is looking for, and to weight most heavily the image regions which contain these kinds of features.

3 RESULTS AND DISCUSSION

The system was trained using 500 image sequences containing 64 frames each. These training image sequences were generated by randomly selecting one or two visual targets for each sequence and moving these targets through randomly selected trajectories. The targets were rectangular patches that varied in size, texture, and intensity. The motion trajectories all began with the objects stationary; then one or both objects rapidly accelerated to constant velocities maintained for the remainder of the trajectory. Targets moved in one of 8 possible directions, at speeds ranging between 0 and 2.5 pixels per unit of time. In training sequences containing multiple targets, the targets were permitted to overlap (targets were assigned to different depth planes at random), and the upper target was treated as opaque in some cases and partially transparent in others. The system was trained using a conjugate gradient descent procedure until the response of the system on the training sequences deviated by less than 1% on average from the desired response.
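One time step of the likelihood in equation (2) can be sketched directly: the selection states gate per-location Gaussian-shaped likelihoods of each pool's output hitting its target, as in the mixture of experts. The shapes and random activities below are illustrative stand-ins, not trained network states.

```python
import numpy as np

# Sketch of equation (2) for a single time step t: each integration pool is
# an independent expert, and the selection (gating) weights S_k mix the
# per-location likelihood terms exp[-(V_k - I_k)^2].
rng = np.random.default_rng(2)
K, H, W = 33, 8, 8
I = rng.random((K, H, W))                   # expert outputs I_k(x, y, t)
S = rng.random((K, H, W))
S /= S.sum(axis=(1, 2), keepdims=True)      # gates sum to 1 within each layer

target = np.zeros(K)
target[4] = 1.0                             # V_k = 1 for the true velocity only

lik = S * np.exp(-(target[:, None, None] - I) ** 2)
log_L = np.log(lik.sum(axis=(1, 2))).sum()  # contribution of this time step
```

Since each gated mixture is at most 1, every log term is negative; training raises log_L by concentrating the gates on locations whose pools match the targets, which is the credit-assignment behavior described above.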
The performance of the trained system was tested using a separate set of 50 test image sequences. These sequences contained 10 novel visual targets with random trajectories generated in the same manner as the training sequences. The responses on this test set remained within 2.5% of the desired response, with the largest errors occurring at the highest velocities. Several of these test sequences were designed so that targets contained edges oriented obliquely to the direction of motion, demonstrating the ability of the model to deal with aspects of the aperture problem. In addition, only small, transient increases in error were observed when two moving objects intersected, whether these objects were opaque or partially transparent.

A more challenging test of the system was provided by presenting the system with "plaid patterns" consisting of two square-wave gratings drifting in different directions (Adelson and Movshon, 1982). Human observers will sometimes see a single coherent motion corresponding to the intersection-of-constraints (IOC) direction of the two grating motions, and sometimes see the two grating motions separately, as one grating sliding through the other. The percept reported can be altered by changing the contrast of the regions where the two gratings intersect relative to the contrast of the gratings themselves (Stoner et al., 1990). We found that for most grating patterns the model reliably reported a single motion in the IOC direction, but by manipulating the intensity of the intersection regions it was possible to find regimes where the model would report the motion of the two gratings separately. Coherent grating motion was reported when the model tended to select most strongly the image regions corresponding to the intersections of the gratings, while two motions were reported when the regions between the grating intersections were strongly selected.
We also explored the response properties of selection and integration units in the trained model using drifting sinusoidal gratings. These stimuli were chosen because they have been used extensively in exploring the physiological response properties of visual motion neurons in cortical visual areas (Albright 1992, Maunsell and Newsome 1987). Integration units tended to be tuned to a fairly narrow band of velocities over a broad range of spatial frequencies, like many MT cells (Maunsell and Newsome, 1987). The selection units had quite different response properties. They responded primarily to velocity shear (neighboring regions of differing velocity) and to flicker (temporal frequency) rather than to true velocity. Cells with many of these properties are also common in MT (Maunsell and Newsome, 1987).

A final important difference between the integration and selection units is their response to whole-field motion. Integration units tend to have responses which are somewhat enhanced by whole-field motion in their preferred direction, while selection unit responses are generally suppressed by whole-field motion. This difference is similar to the recent observation that area MT contains two classes of cells, one whose responses are suppressed by whole-field motion and a second whose responses are not (Born and Tootell, 1992).

Finally, the model that we have proposed is built on the premise of an active mechanism for selecting subsets of unit responses to integrate over. While this is a common aspect of many accounts of attentional phenomena, we suggest that active selection may represent a fundamental aspect of cortical processing that occurs with many pre-attentive phenomena, such as motion processing.

References

Adelson, E. H. and Bergen, J. R. (1985) Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2, 284-299.

Adelson, E. H. and Movshon, J. A. (1982) Phenomenal coherence of moving visual patterns.
Nature, 300, 523-525.

Albright, T. D. (1992) Form-cue invariant motion processing in primate visual cortex. Science, 255, 1141-1143.

Born, R. T. and Tootell, R. B. H. (1992) Segregation of global and local motion processing in primate middle temporal visual area. Nature, 357, 497-500.

Emerson, R. C., Citron, M. C., Vaughn, W. J. and Klein, S. A. (1987) Nonlinear directionally selective subunits in complex cells of cat striate cortex. J. Neurophys., 58, 33-65.

Grzywacz, N. M. and Yuille, A. L. (1990) A model for the estimate of local image velocity by cells in the visual cortex. Proc. R. Soc. Lond. B, 239, 129-161.

Heeger, D. J. (1987) Model for the extraction of image flow. J. Opt. Soc. Am. A, 4, 1455-1471.

Heeger, D. J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience, in press.

Horn, B. K. P. and Schunk, B. G. (1981) Determining optical flow. Artificial Intelligence, 17, 185-203.

Maunsell, J. H. R. and Newsome, W. T. (1987) Visual processing in monkey extrastriate cortex. Ann. Rev. Neurosci., 10, 363-401.

Nowlan, S. J. (1990) Competing experts: An experimental investigation of associative mixture models. Technical Report CRG-TR-90-5, Department of Computer Science, University of Toronto.

Nagel, H. H. (1987) On the estimation of optical flow: relations between different approaches and some new results. Artificial Intelligence, 33, 299-324.

Stoner, G. R., Albright, T. D. and Ramachandran, V. S. (1990) Transparency and coherence in human motion perception. Nature, 344, 153-155.

Watson, A. B. and Ahumada, A. J. (1985) Model of human visual-motion sensing. J. Opt. Soc. Am. A, 2, 322-342.
Non-Linear Dimensionality Reduction

David DeMers* & Garrison Cottrell†
Dept. of Computer Science & Engr., 0114
Institute for Neural Computation
University of California, San Diego
9500 Gilman Dr.
La Jolla, CA 92093-0114

Abstract

A method for creating a non-linear encoder-decoder for multidimensional data with compact representations is presented. The commonly used technique of autoassociation is extended to allow non-linear representations, and an objective function which penalizes activations of individual hidden units is shown to result in minimum-dimensional encodings with respect to allowable error in reconstruction.

1 INTRODUCTION

Reducing the dimensionality of data with minimal information loss is important for feature extraction, compact coding, and computational efficiency. The data can be transformed into "good" representations for further processing, constraints among feature variables may be identified, and redundancy eliminated. Many algorithms are exponential in the dimensionality of the input, so even reduction by a single dimension may provide valuable computational savings. Autoassociating feedforward networks with one hidden layer have been shown to extract the principal components of the data (Baldi & Hornik, 1988). Such networks have been used to extract features and develop compact encodings of the data (Cottrell, Munro & Zipser, 1989). Principal Components Analysis projects the data into a linear subspace

*email: demers@cs.ucsd.edu
†email: gary@cs.ucsd.edu

580 Non-Linear Dimensionality Reduction

Figure 1: A network capable of non-linear lower-dimensional representations of data (input, encoding layer, "bottleneck" representation layer, decoding layer, output).

with minimum information loss, by multiplying the data by the eigenvectors of the sample covariance matrix.
By examining the magnitude of the corresponding eigenvalues, one can estimate the minimum dimensionality of the space into which the data may be projected, and estimate the loss. However, if the data lie on a non-linear submanifold of the feature space, then Principal Components will overestimate the dimensionality. For example, the covariance matrix of data sampled from a helix in R³ will have full rank, and thus three principal components. However, the helix is a one-dimensional manifold and can be (smoothly) parameterized by a single number.

The addition of hidden layers between the inputs and the representation layer, and between the representation layer and the outputs, provides a network which is capable of learning non-linear representations (Kramer, 1991; Oja, 1991; Usui, Nakauchi & Nakano, 1991). Such networks can perform the non-linear analogue of Principal Components Analysis, and extract "principal manifolds". Figure 1 shows the basic structure of such a network. However, the dimensionality of the representation layer is problematic. Ideally, the dimensionality of the encoding (and hence the number of representation units needed) would be determined from the data. We propose a pruning method for determining the dimensionality of the representation. A greedy algorithm which successively eliminates representation units by penalizing variances results in encodings of minimal dimensionality with respect to the allowable reconstruction error. The algorithm therefore performs non-linear dimensionality reduction (NLDR).

581 582 DeMers and Cottrell

2 DIMENSIONALITY ESTIMATION BY REGULARIZATION

The a priori assignment of the number of units for the representation layer is problematic. In order to achieve maximum data compression, this number should be as small as possible; however, one also wants to preserve the information in the data and thus encode the data with minimum error.
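The helix example from the introduction can be checked numerically: the sample covariance of points on a one-dimensional helix has three eigenvalues that are all well above zero, so PCA reports full rank. The unit-radius, unit-height helix below is an assumed parameterization for illustration.

```python
import numpy as np

# Data on a 1-D curve in R^3 whose covariance is nevertheless full rank,
# so PCA overestimates the intrinsic dimensionality.
t = np.linspace(0, 4 * np.pi, 500)
helix = np.stack([np.cos(t), np.sin(t), t / (4 * np.pi)], axis=1)

cov = np.cov(helix, rowvar=False)        # 3x3 sample covariance
eigvals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
# all three eigenvalues are non-negligible, despite intrinsic dimension 1
```

A linear method must therefore keep all three coordinates, whereas the non-linear autoassociator described below can recover the single underlying parameter.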
If the intrinsic dimensionality is not known ahead of time (as is typical), some method to estimate the dimensionality is desired. Minimization of the variance of a representation unit will essentially squeeze the variance of the data into the other hidden units. Repeated minimization results in increasingly lower-dimensional representations. More formally, let the dimensionality of the raw data be n. We wish to find F and its approximate inverse F⁻¹ such that Rⁿ →(F) Rᵖ →(F⁻¹) Rⁿ, where p < n. Let y denote the p-dimensional vector whose elements are the p single-valued functions f_i which make up F. If one of the component functions f_i is always constant, it is not contributing to the autoassociation and can be eliminated, yielding a function F with p - 1 components. A constant value for f_i means that the variance of f_i over the data is zero.

We add a regularization term to the objective function, penalizing the variance of one of the representation units. If the variance can be driven to near zero while simultaneously achieving a target error in the primary task of autoassociation, then the unit being penalized can be pruned. Let

    H_p = λ_p Σ_j (h_p(net_j) - E[h_p(net_j)])²

where net_j is the net input to the unit given the jth training pattern, h_p(net_j) is the activation of the pth hidden unit in the representation layer (the one being penalized), and E is the expectation operator. For notational clarity, the pattern superscripts will be suppressed hereafter. E[h_p(net_j)] can be estimated as h̄_p, the mean activation of h_p over all patterns in the training data. Then

    ∂H_p/∂w_pl = (∂H_p/∂net_p)(∂net_p/∂w_pl) = 2 λ_p (h_p - h̄_p) h′_p o_l

where h′_p is the derivative of the activation function of unit h_p with respect to its input, and o_l is the output of the lth unit in the preceding layer. Let δ_p = 2 λ_p h′_p (h_p - h̄_p). We simply add δ_p to the delta of h_p due to backpropagation from the output layer.

We first train a multi-layer¹ network to learn the identity map.
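The penalty H_p and the extra delta δ_p above can be sketched for a sigmoidal representation unit; the net inputs, λ_p, and the treatment of h̄_p as a constant when differentiating are illustrative assumptions.

```python
import numpy as np

# Sketch of the variance penalty: H_p = lambda_p * sum_j (h_p - mean(h_p))^2,
# whose gradient adds delta_p = 2 * lambda_p * h_p' * (h_p - mean) to the
# backpropagated delta of the penalized unit. For a sigmoid, h' = h * (1 - h).
def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

lam = 0.1                                # lambda_p, an assumed penalty weight
net = np.array([-1.0, 0.5, 2.0])         # net inputs over three patterns
h = sigmoid(net)
h_mean = h.mean()                        # estimate of E[h_p]

H_p = lam * np.sum((h - h_mean) ** 2)            # the penalty term itself
delta_p = 2 * lam * h * (1 - h) * (h - h_mean)   # extra delta, per pattern
```

Following the gradient pulls each activation toward the mean, which is exactly the "squeeze the variance to zero" behavior the pruning criterion relies on.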
When error is below a user-specified threshold, lambda_i is increased for the unit with lowest variance. If network weights can be found^2 such that the variance can be reduced below a small threshold while the remaining units are able to encode the data, the hidden unit in question is no longer contributing to the autoencoding, and its connections are excised from the network. The process is repeated until the variance of the unit in question cannot be reduced while maintaining low error.

^1 There is no reason to suppose that the encoding and decoding layers must be of the same size. In fact, it may be that two encoding or decoding layers will provide superior performance. For the helix example, the decoder had two hidden layers and linear connections from the representation to the output, while the encoder had a single layer. Kramer (1991) uses information-theoretic measures for choosing the size of the encoding and decoding layers; however, only a fixed representation layer and equal encoding and decoding layers are used.

^2 Unbounded weights will allow the same amount of information to pass through the layer with arbitrarily small variance and using arbitrarily large weights. Therefore the weights in the network must be bounded. Weight vectors with magnitudes larger than 10 are renormalized after each epoch.

Non-Linear Dimensionality Reduction 583

Figure 2: The original 3-D helix data plus reconstruction from a single parameter encoding.

3 RESULTS

We applied this method to several problems:
1. a closed 1-D manifold in R^3;
2. a 1-D helix in R^3;
3. time series data generated from the Mackey-Glass delay-differential equation;
4. 160 64-by-64-pixel, 8-bit grayscale face images.

A number of parameter values must be chosen: error threshold, maximum magnitude of weights, value of lambda_i when increased, and when to "give up" training. For these experiments, they were chosen by hand; however, reasonable values can be selected such that the method can be automated.
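The weight bound in footnote 2 can be sketched as a post-epoch renormalization step. This is a sketch under the assumption that weights are grouped into per-unit vectors; the names are ours:

```python
import numpy as np

def renormalize_weights(W, max_norm=10.0):
    """Rescale any weight vector (row of W) whose magnitude exceeds
    max_norm back onto the bound, leaving the other rows untouched."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.where(norms > max_norm, max_norm / norms, 1.0)
    return W * scale

W = np.array([[3.0, 4.0],      # norm 5  -> unchanged
              [30.0, 40.0]])   # norm 50 -> rescaled to norm 10
W_bounded = renormalize_weights(W)
```

Bounding the weights this way closes the loophole whereby the network could shrink a unit's variance to near zero while compensating with arbitrarily large downstream weights.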
3.1 Static Mappings: Circle and Helix

The first problem is interesting because it is known that there is no diffeomorphism from the circle to the unit interval. Thus (smooth) single-parameter encodings cannot cover the entire circle, though the region of the circle left unparameterized can be made arbitrarily small. Depending on initial conditions, our technique found one of three different solutions. Some simulations resulted in a two-dimensional representation with the encodings lying on a circle in R^2. This is a failure to reduce the dimensionality. The other solutions were both 1-D representations; one "wrapping" the unit interval around the circle, the other "splitting" the interval into two pieces. The initial architecture consisted of a single 8-unit encoding layer and two 8-unit decoding layers. eta was set to 0.01, delta-lambda to 0.1, and the error threshold, epsilon, to 0.001.

The helix problem is interesting because the data appears to be three-dimensional to PCA. NLDR consistently finds an invertible one-dimensional representation of the data.

Figure 3: Data from the Mackey-Glass delay-differential equation with tau = 17, correlation dimension 2.1, and the reconstructed signal encoded in two and three dimensions.

Figure 2 shows the original data, along with the network's output when the representation layer was stimulated with activation ranging from 0.1 to 0.9. The training data were mapped into the interval 0.213-0.778 using a single (sigmoidal) representation unit. The initial architecture consisted of a single 10-unit encoding layer and two 10-unit decoding layers. eta was set to 0.01, delta-lambda to 0.1, and the error threshold, epsilon, to 0.001.
3.2 NLDR Applied to Time Series

The Mackey-Glass problem consists of estimation of the intrinsic dimensionality of a scalar signal. Classically, such time series data is embedded in a space of "high enough" dimension such that one expects the geometric invariants to be preserved. However, this may significantly overestimate the number of variables needed to describe the data. Two different series were examined; parameter settings for the Mackey-Glass equation were chosen such that the intrinsic dimensionality is 2.1 and 3.5. The data was embedded in a high-dimensional space by the standard technique of recoding as vectors of lagged data. A 3-dimensional representation was found for the 2.1-dimensional data and a 4-dimensional representation was found for the 3.5-dimensional data. Figure 3 shows the original data and its reconstruction for the 2.1-dimensional data. Allowing higher reconstruction error resulted in a 3-dimensional representation for the 3.5-dimensional data, effectively smoothing the original signal (DeMers, 1992). Figure 4 shows the original data and its reconstruction for the 3.5-dimensional data. The initial architecture consisted of two 10-unit encoding layers, two 10-unit decoding layers, and a 7-unit representation layer. The representation layer was connected directly to the output layer. eta was set to 0.01, delta-lambda to 0.1, and the error threshold, epsilon, to 0.001.

3.3 Faces

The face image data is much more challenging. The face data are 64 x 64 pixel, 8-bit grayscale images taken from (Cottrell & Metcalfe, 1991), each of which can be considered to be a point in a 4,096-dimensional "pixel space". The question addressed is whether NLDR can find low-dimensional representations of the data which are more useful than principal components. The data was preprocessed by reduction to the first 50 principal
Figure 4: Data from the Mackey-Glass delay-differential equation with tau = 35, correlation dimension 3.5, and the reconstructed signal encoded in four dimensions with two different error thresholds (4-D reconstruction error bounds 0.002 and 0.0004).

components^3 of the images. These reduced representations were then processed further by NLDR. The architecture consisted of a 30-unit encoding layer and a 30-unit decoding layer, and an initial representation layer of 20 units. There were direct connections from the representation layer to the output layer. eta was 0.05, delta-lambda was 0.1 and epsilon was 0.001. NLDR found a five-dimensional representation. Figure 5 shows four of the 160 images after reduction to the first 50 principal components (used as training) and the same images after reconstruction from a five-dimensional encoding. We are unable to determine whether the dimensions are meaningful; however, experiments with the decoder show that points inside the convex hull of the representations project to images which look like faces. Figure 6 shows the reconstructed images from a linear interpolation in "face space" between the two encodings which are furthest apart.

How useful are the representations obtained from a training set for identification and classification of other images of the same subjects? The 5-D representations were used to train a feedforward network to recognize the identity and gender of the subjects, as in (Cottrell & Metcalfe, 1991). 120 images were used in training and the remaining 40 used as a test set. The network correctly identified 98% of the training data subjects, and 95% on the test set. The network achieved 95% correct gender recognition on both the training and test sets. The misclassified subject is shown in Figure 7. An informal poll of visitors to the poster in Denver showed that about 2/3 of humans classify the subject as male and 1/3 as female.
Although NLDR resulted in five-dimensional encodings of the face data, and thus superficially compresses the data to approximately 55 bits per image or 0.013 bits per pixel, there is no data compression. Both the decoder portion of the network and the eigenvectors used in the initial processing must also be stored. These amortize to about 6 bits per pixel, whereas the original images require only 1.1 bits per pixel under run-length encoding. In order to achieve data compression, a much larger data set must be obtained in order to find the underlying human face manifold.

^3 50 was chosen by eyeballing a graph of the eigenvalues for the point at which they began to "flatten"; any value between about 40 and 80 would be reasonable.

Figure 5: Four of the original face images and their reconstruction after encoding as five-dimensional data.

Figure 6: The two images with 5-D encodings which are the furthest apart, and the reconstructions of four 5-D points equally spaced along the line joining them.

Figure 7: "Pat", the subject whose gender a feedforward network classified incorrectly.

4 CONCLUSIONS

A method for automatically generating a non-linear encoder/decoder for high-dimensional data has been presented. The number of representation units in the final network is an estimate of the intrinsic dimensionality of the data. The results are sensitive to the choice of error bound, though the precise relationship is as yet unknown. The size of the encoding and decoding hidden layers must be controlled to avoid over-fitting; any data set can be encoded into scalar values given enough resolution. Since we are using gradient search to solve a global non-linear optimization problem, there is no guarantee that this method will find the global optimum and avoid convergence to local minima. However, NLDR consistently constructed low-dimensional encodings which were decodable with low loss.
Acknowledgements

We would like to thank Matthew Turk & Alex Pentland for making their facerec software available, which was used to extract the eigenvectors of the original face data. The first author was partially supported by Fellowships from the California Space Institute and the McDonnell-Pew Foundation.

References

Pierre Baldi and Kurt Hornik (1988) "Neural Networks and Principal Component Analysis: Learning from Examples without Local Minima", Neural Networks 2, 53-58.
Garrison Cottrell and Paul Munro (1988) "Principal Components Analysis of Images via Backpropagation", in Proc. SPIE (Cambridge, MA).
Garrison Cottrell, Paul Munro, and David Zipser (1989) "Image Compression by Backpropagation: A Demonstration of Extensional Programming", in Sharkey, Noel (Ed.), Models of Cognition: A Review of Cognitive Science, vol. 1.
Garrison Cottrell and Janet Metcalfe (1991) "EMPATH: Face, Emotion and Gender Recognition using Holons", in Lippmann, R., Moody, J. & Touretzky, D. (eds), Advances in Neural Information Processing Systems 3.
David DeMers (1992) "Dimensionality Reduction for Non-Linear Time Series", Neural and Stochastic Methods in Image and Signal Processing (SPIE 1766).
Mark Kramer (1991) "Nonlinear Principal Component Analysis Using Autoassociative Neural Networks", AIChE Journal 37:233-243.
Erkki Oja (1991) "Data Compression, Feature Extraction, and Autoassociation in Feedforward Neural Networks", in Kohonen, T., Simula, O. and Kangas, J. (eds), Artificial Neural Networks, 737-745.
Shiro Usui, Shigeki Nakauchi, and Masae Nakano (1991) "Internal Color Representation Acquired by a Five-Layer Neural Network", in Kohonen, T., Simula, O. and Kangas, J. (eds), Artificial Neural Networks, 867-872.

PART VII THEORY AND ANALYSIS
1992
106
581
Feudal Reinforcement Learning

Peter Dayan, CNL, The Salk Institute, PO Box 85800, San Diego CA 92186-5800, USA, dayan@helmholtz.sdsc.edu
Geoffrey E Hinton, Department of Computer Science, University of Toronto, 6 King's College Road, Toronto, Canada M5S 1A4, hinton@ai.toronto.edu

Abstract

One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their sub-managers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.

1 INTRODUCTION

Straightforward reinforcement learning has been quite successful at some relatively complex tasks like playing backgammon (Tesauro, 1992). However, the learning time does not scale well with the number of parameters. For agents solving rewarded Markovian decision tasks by learning dynamic programming value functions, some of the main bottlenecks (Singh, 1992b) are temporal resolution - expanding the unit of learning from the smallest possible step in the task, division-and-conquest - finding smaller subtasks that are easier to solve, exploration, and structural generalisation - generalisation of the value function between different locations. These are obviously related - for instance, altering the temporal resolution can have a dramatic effect on exploration. Consider a control hierarchy in which managers have sub-managers, who work for them, and super-managers, for whom they work.
If the hierarchy is strict in the sense that managers control exactly the sub-managers at the level below them and only the very lowest level managers can actually act in the world, then intermediate level managers have essentially two instruments of control over their sub-managers at any time - they can choose amongst them and they can set them sub-tasks. These sub-tasks can be incorporated into the state of the sub-managers so that they in turn can choose their own sub-sub-tasks and sub-sub-managers to execute them based on the task selection at the higher level. An appropriate hierarchy can address the first three bottlenecks. Higher level managers should sustain a larger grain of temporal resolution, since they leave the sub-sub-managers to do the actual work. Exploration for actions leading to rewards can be more efficient since it can be done non-uniformly - high level managers can decide that reward is best found in some other region of the state space and send the agent there directly, without forcing it to explore in detail on the way. Singh (1992a) has studied the case in which a manager picks one of its sub-managers rather than setting tasks. He used the degree of accuracy of the Q-values of sub-managerial Q-learners (Watkins, 1989) to train a gating system (Jacobs, Jordan, Nowlan & Hinton, 1991) to choose the one that matches best in each state. Here we study the converse case, in which there is only one possible sub-manager active at any level, and so the only choice a manager has is over the tasks it sets. Such systems have been previously considered (Hinton, 1987; Watkins, 1989). The next section considers how such a strict hierarchical scheme can learn to choose appropriate tasks at each level, section 3 describes a maze learning example for which the hierarchy emerges naturally as a multi-grid division of the space in which the agent moves, and section 4 draws some conclusions.
2 FEUDAL CONTROL

We sought to build a system that mirrored the hierarchical aspects of a feudal fiefdom, since this is one extreme for models of control. Managers are given absolute power over their sub-managers - they can set them tasks and reward and punish them entirely as they see fit. However managers ultimately have to satisfy their own super-managers, or face punishment themselves - and so there is recursive reinforcement and selection until the whole system satisfies the goal of the highest level manager. This can all be made to happen without the sub-managers initially "understanding" the sub-tasks they are set. Every component just acts to maximise its expected reinforcement, so after learning, the meaning it attaches to a specification of a sub-task consists of the way in which that specification influences its choice of sub-sub-managers and sub-sub-tasks. Two principles are key:

Reward Hiding Managers must reward sub-managers for doing their bidding whether or not this satisfies the commands of the super-managers. Sub-managers should just learn to obey their managers and leave it up to them to determine what it is best to do at the next level up. So if a sub-manager fails to achieve the sub-goal set by its manager it is not rewarded, even if its actions result in the satisfaction of the manager's own goal. Conversely, if a sub-manager achieves the sub-goal it is given it is rewarded, even if this does not lead to satisfaction of the manager's own goal. This allows the sub-manager to learn to achieve sub-goals even when the manager was mistaken in setting these sub-goals. So in the early stages of learning, low-level managers can become quite competent at achieving low-level goals even if the highest level goal has never been satisfied.

Information Hiding Managers only need to know the state of the system at the granularity of their own choices of tasks.
Indeed, allowing some decision making to take place at a coarser grain is one of the main goals of the hierarchical decomposition. Information is hidden both downwards - sub-managers do not know the task the super-manager has set the manager - and upwards - a super-manager does not know what choices its manager has made to satisfy its command. However managers do need to know the satisfaction conditions for the tasks they set and some measure of the actual cost to the system for achieving them using the sub-managers and tasks it picked on any particular occasion. For the special case to be considered here, in which managers are given no choice of which sub-manager to use in a given state, their choice of a task is very similar to that of an action for a standard Q-learning system. If the task is completed successfully, the cost is determined by the super-manager according to how well (eg how quickly, or indeed whether) the manager satisfied its super-tasks. Depending on how its own task is accomplished, the manager rewards or punishes the sub-manager responsible. When a manager chooses an action, control is passed to the sub-manager and is only returned when the state changes at the managerial level.

3 THE MAZE TASK

To illustrate this feudal system, consider a standard maze task (Barto, Sutton & Watkins, 1989) in which the agent has to learn to find an initially unknown goal. The grid is split up at successively finer grains (see figure 1) and managers are assigned to separable parts of the maze at each level. So, for instance, the level 1 manager of area 1-(1,1) sets the tasks for and reinforcement given to the level 2 managers for areas 2-(1,1), 2-(1,2), 2-(2,1) and 2-(2,2). The successive separation into quarters is fairly arbitrary - however if the regions at high levels did not cover contiguous areas at lower levels, then the system would not perform very well. At all times, the agent is effectively performing an action at every level.
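One way to picture a single manager's learning problem is as a tabular Q-learner whose state is augmented with the command from above, and whose "actions" are the tasks it sets for the level below. The sketch below is our own schematic, with illustrative parameter values, state labels, and method names that are not taken from the paper:

```python
import math
import random
from collections import defaultdict

class Manager:
    """One manager in the hierarchy: its state pairs the agent's location
    at this level's granularity with the command received from above."""

    def __init__(self, tasks, alpha=0.1, gamma=0.9, temperature=1.0):
        self.q = defaultdict(float)   # Q[(state, command, task)]
        self.tasks = tasks
        self.alpha, self.gamma, self.temp = alpha, gamma, temperature

    def choose(self, state, command):
        # Softmax (Boltzmann) task selection, as in standard Q-learning.
        prefs = [math.exp(self.q[(state, command, t)] / self.temp)
                 for t in self.tasks]
        r = random.random() * sum(prefs)
        for task, p in zip(self.tasks, prefs):
            r -= p
            if r <= 0.0:
                return task
        return self.tasks[-1]

    def update(self, state, command, task, reward, next_state):
        # The reward is set by the super-manager, according to whether
        # (and at what cost) this manager satisfied its command.
        best = max(self.q[(next_state, command, t)] for t in self.tasks)
        key = (state, command, task)
        self.q[key] += self.alpha * (reward + self.gamma * best - self.q[key])

m = Manager(tasks=['N', 'S', 'E', 'W', '*'])
m.update('2-(3,3)', 'South', 'S', reward=1.0, next_state='2-(4,3)')
```

Because the command is part of the key, the same manager can learn different task choices for different instructions from above, which is exactly what lets sub-tasks acquire "meaning" through reinforcement alone.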
There are five actions, NSEW and *, available to the managers at all levels other than the first and last. NSEW represent the standard geographical moves and * is a special action that non-hierarchical systems do not require. It specifies that lower level managers should search for the goal within the confines of the current larger state instead of trying to move to another region of the space at the same level. At the top level, the only possible action is *; at the lowest level, only the geographical moves are allowed, since the agent cannot search at a finer granularity than it can move. Each manager maintains Q values (Watkins, 1989; Barto, Bradtke & Singh, 1992) over the actions it instructs its sub-managers to perform, based on the location of the agent at the subordinate level of detail and the command it has received from above.

Figure 1: The Grid Task. This shows how the maze is divided up at different levels in the hierarchy. The 'U' shape is the barrier, and the shaded square is the goal. Each high level state is divided into four low level ones at every step.

So, for instance, if the agent currently occupies 3-(6,6), and the instruction from the top-level manager is to move South, then the 1-(2,2) manager decides upon an action based on the Q values for NSEW giving the total length of the path to either 2-(3,2) or 2-(4,2). The action the 1-(2,2) manager chooses is communicated one level down the hierarchy and becomes part of the state determining the level 2 Q values. When the agent starts, actions at successively lower levels are selected using the standard Q-learning softmax method and the agent moves according to the finest grain action (at level 3 here). The Q values at every level at which this causes
Figure 2: Learning Performance (steps to goal against learning iterations, for F-Q and S-Q on tasks 1 and 2). F-Q shows the performance of the feudal architecture and S-Q of the standard Q-learning architecture.

a state transition are updated according to the length of path at that level, if the state transition is what was ordered at all lower levels. This restriction comes from the constraint that super-managers should only learn from the fruits of the honest labour of sub-managers, ie only if they obey their managers. Figure 2 shows how the system performs compared with standard, one-step, Q-learning, first in finding a goal in a maze similar to that in figure 1, only having 32x32 squares, and second in finding the goal after it is subsequently moved. Points on the graph are averages of the number of steps it takes the agent to reach the goal across all possible testing locations, after the given number of learning iterations. Little effort was made to optimise the learning parameters, so care is necessary in interpreting the results. For the first task the feudal system is initially slower, but after a while, it learns much more quickly how to navigate to the goal. The early sloth is due to the fact that many low level actions are wasted, since they do not implement desired higher level behaviour and the system has to learn not to try impossible actions or * in inappropriate places. The late speed comes from the feudal system's superior exploratory behaviour. If it decides at a high level that the goal is in one part of the maze, then it has the capacity to specify large scale actions at that level to take it there. This is the same advantage that Singh's (1992b) variable temporal resolution system garners, although this is over a single task rather than explicitly composite sub-tasks.
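The multi-grid division of the maze amounts to a coordinate coarsening in which each level halves the resolution of the one below. With 1-based k-(row,col) labels as in the text, the containing cell one level up can be computed as:

```python
def parent(row, col):
    """Map a 1-based cell at one level to its containing cell one level
    up (each high-level state covers a 2x2 block of low-level states)."""
    return ((row + 1) // 2, (col + 1) // 2)

# The example from the text: 3-(6,6) lies inside 2-(3,3),
# which in turn lies inside 1-(2,2).
print(parent(6, 6))              # (3, 3)
print(parent(*parent(6, 6)))     # (2, 2)
```

This is why a state change at a manager's level is rare relative to low-level moves: many fine-grained transitions leave the coarse coordinates unchanged, which is what gives higher levels their larger grain of temporal resolution.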
Tests on mazes of different sizes suggested that the number of iterations after which the advantage of exploration outweighs the disadvantage of wasted actions gets less as the complexity of the task increases. A similar pattern emerges for the second task. Low level Q values embody an implicit knowledge of how to get around the maze, and so the feudal system can explore efficiently once it (slowly) learns not to search in the original place.

Figure 3: The Learned Actions. The area of the boxes and the radius of the central circle give the probabilities of taking action NSEW and * respectively.

Figure 3 shows the probabilities of each move at each location once the agent has learnt to find the goal at 3-(3,3). The length of the NSEW bars and the radius of the central circle are proportional to the probability of selecting actions NSEW or * respectively, and action choice flows from top to bottom. For instance, the probability of choosing action S at state 2-(1,3) is the sum of the products of the probabilities of choosing actions NSEW and * at state 1-(1,2) and the probabilities, conditional on this higher level selection, of choosing action S at state 2-(1,3). Apart from the right hand side of the barrier, the actions are generally correct - however there are examples of sub-optimal behaviour caused by the decomposition of the space, eg the system decides to move North at 3-(8,5) despite it being more felicitous to move South. Closer investigation of the course of learning reveals that, as might be expected from the restrictions in updating the Q values, the system initially learns in a completely bottom-up manner. However after a while, it learns appropriate actions at the highest levels, and so top-down learning happens too. This generally beneficial effect arises because there are far fewer states at coarse resolutions, and so it is easier for the agent to calculate what to do.
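The composition rule used to read figure 3 is a simple marginalization over the higher-level choice. A toy sketch, with probability numbers invented purely for illustration:

```python
def marginal_prob(high_probs, cond_probs, action):
    """P(action at the low level) = sum over high-level actions a of
    P(a at the high level) * P(action at the low level | command a)."""
    return sum(p * cond_probs[a][action] for a, p in high_probs.items())

# Hypothetical level-1 distribution over NSEW and *, and the conditional
# chance of choosing S at a level-2 state under each high-level command.
high = {'N': 0.1, 'S': 0.5, 'E': 0.1, 'W': 0.1, '*': 0.2}
cond = {a: {'S': 0.9 if a in ('S', '*') else 0.2} for a in high}
p_south = marginal_prob(high, cond, 'S')
```

Here the low-level S is likely both when the command is S and when the command is *, so the marginal probability is dominated by those two high-level choices.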
4 DISCUSSION

The feudal architecture partially addresses one of the major concerns in reinforcement learning about how to divide a single task up into sub-tasks at multiple levels. A demonstration was given of how this can be done separately from choosing between different possible sub-managers at a given level. It depends on there being a plausible managerial system, preferably based on a natural hierarchical division of the available state space. For some tasks it can be very inefficient, since it forces each sub-manager to learn how to satisfy all the sub-tasks set by its manager, whether or not those sub-tasks are appropriate. It is therefore more likely to be useful in environments in which the set tasks can change. Managers need not necessarily know in advance the consequences of their actions. They could learn, in a self-supervised manner, information about the state transitions that they have experienced. These observed next states can be used as goals for their sub-managers - consistency in providing rewards for appropriate transitions is the only requirement. Although the system gains power through hiding information, which reduces the size of the state spaces that must be searched, such a step also introduces inefficiencies. In some cases, if a sub-manager only knew the super-task of its super-manager then it could bypass its manager with advantage. However the reductio of this would lead to each sub-manager having as large a state space as the whole problem, negating the intent of the feudal architecture. A more serious concern is that the non-Markovian nature of the task at the higher levels (the future course of the agent is determined by more detailed information than just the high level states) can render the problem insoluble. Moore and Atkeson's (1993) system for detecting such cases and choosing finer resolutions accordingly should integrate well with the feudal system.
For the maze task, the feudal system learns much more about how to navigate than the standard Q-learning system. Whereas the latter is completely concentrated on a particular target, the former knows how to execute arbitrary high level moves efficiently, even ones that are not used to find the current goal such as going East from one quarter of the space 1-(2,2) to another 1-(1,2). This is why exploration can be more efficient. It doesn't require a map of the space, or even a model of state x action -> next state to be learned explicitly. Jameson (1992) independently studied a system with some similarities to the feudal architecture. In one case, a high level agent learned on the basis of external reinforcement to provide on a slow timescale direct commands (like reference trajectories) to a low level agent - which learned to obey it based on reinforcement proportional to the square trajectory error. In another, low and high level agents received the same reinforcement from the world, but the former was additionally tasked on making its prediction of future reinforcement significantly dependent on the output of the latter. Both systems learned very effectively to balance an upended pole for long periods. They share the notion of hierarchical structure with the feudal architecture, but the notion of control is somewhat different. Multi-resolution methods have long been studied as ways of speeding up dynamic programming (see Morin, 1978, for numerous examples and references). Standard methods focus effectively on having a single task at every level and just having coarser and finer representations of the value function. However, here we have studied a slightly different problem in which managers have the flexibility to specify different tasks which the sub-managers have to learn how to satisfy. This is more complicated, but also more powerful.
From a psychological perspective, we have replaced a system in which there is a single external reinforcement schedule with a system in which the rat's mind is composed of a hierarchy of little Skinners.

Acknowledgements

We are most grateful to Andrew Moore, Mark Ring, Jürgen Schmidhuber, Satinder Singh, Sebastian Thrun and Ron Williams for helpful discussions. This work was supported by SERC, the Howard Hughes Medical Institute and the Canadian Institute for Advanced Research (CIAR). GEH is the Noranda fellow of the CIAR.

References

[1] Barto, AG, Bradtke, SJ & Singh, SP (1991). Real-Time Learning and Control using Asynchronous Dynamic Programming. COINS technical report 91-57. Amherst: University of Massachusetts.
[2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and sequential decision making. In M Gabriel & J Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks. Cambridge, MA: MIT Press, Bradford Books.
[3] Hinton, GE (1987). Connectionist Learning Procedures. Technical Report CMU-CS-87-115, Department of Computer Science, Carnegie-Mellon University.
[4] Jacobs, RA, Jordan, MI, Nowlan, SJ & Hinton, GE (1991). Adaptive mixtures of local experts. Neural Computation, 3, pp 79-87.
[5] Jameson, JW (1992). Reinforcement control with hierarchical backpropagated adaptive critics. Submitted to Neural Networks.
[6] Moore, AW & Atkeson, CG (1993). Memory-based reinforcement learning: efficient computation with prioritized sweeping. In SJ Hanson, CL Giles & JD Cowan, editors, Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann.
[7] Morin, TL (1978). Computational advances in dynamic programming. In ML Puterman, editor, Dynamic Programming and its Applications. New York: Academic Press.
[8] Moore, AW (1991). Variable resolution dynamic programming: Efficiently learning action maps in multivariate real-valued state spaces. Proceedings of the Eighth Machine Learning Workshop.
San Mateo, CA: Morgan Kaufmann.
[9] Singh, SP (1992a). Transfer of learning by composing solutions for elemental sequential tasks. Machine Learning, 8, pp 323-340.
[10] Singh, SP (1992b). Scaling reinforcement learning algorithms by learning variable temporal resolution models. Submitted to Machine Learning.
[11] Tesauro, G (1992). Practical issues in temporal difference learning. Machine Learning, 8, pp 257-278.
[12] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England.
1992
107
582
A Neural Network that Learns to Interpret Myocardial Planar Thallium Scintigrams

Charles Rosenberg, Ph.D.,* Department of Computer Science, Hebrew University, Jerusalem, Israel
Jacob Erel, M.D., Department of Cardiology, Sapir Medical Center, Meir General Hospital, Kfar Saba, Israel
Henri Atlan, M.D., Ph.D., Department of Biophysics and Nuclear Medicine, Hadassah Medical Center, Jerusalem, Israel

Abstract

The planar thallium-201 myocardial perfusion scintigram is a widely used diagnostic technique for detecting and estimating the risk of coronary artery disease. Neural networks learned to interpret 100 thallium scintigrams as determined by individual expert ratings. Standard error backpropagation was compared to standard LMS, and LMS combined with one layer of RBF units. Using the "leave-one-out" method, generalization was tested on all 100 cases. Training time was determined automatically from cross-validation performance. Best performance was attained by the RBF/LMS network with three hidden units per view and compares favorably with human experts.

1 Introduction

Coronary artery disease (CAD) is one of the leading causes of death in the Western World. The planar thallium-201 is considered to be a reliable diagnostic tool in the detection of CAD. Thallium is a radioactive isotope that distributes in mammalian tissues after intravenous administration and is imaged by a gamma camera. The resulting scintigram is visually interpreted by the physician for the presence or absence of defects - areas with relatively lower perfusion levels. In myocardial applications, thallium is used to measure myocardial ischemia and to differentiate between viable and non-viable (infarcted) heart muscle (Pohost and Henzlova, 1990).

*Current address: Geriatrics, Research, Educational and Clinical Center, VA Medical Center, Salt Lake City, Utah.
Diagnosis of CAD is based on the comparison of two sets of images, one set acquired immediately after a standard effort test (Bruce protocol), and the second following a delay period of four hours. During this delay, the thallium redistributes in the heart muscle and spontaneously decays. Defects caused by scar tissue are relatively unchanged over the delay period (fixed defect), while those caused by ischemia are partially or completely filled in (reversible defect) (Beller, 1991; Datz et al., 1992).

Image interpretation is difficult for a number of reasons: the inherent variability in biological systems which makes each case essentially unique, the vast amount of irrelevant and noisy information in an image, and the "context-dependency" of the interpretation on data from many other tests and clinical history. Interpretation can also be significantly affected by attentional shifts, perceptual abilities, and mental state (Franken Jr. and Berbaum, 1991; Cuarón et al., 1980).

While networks have found considerable application in ECG processing (e.g. (Artis et al., 1991)) and clinical decision-making (Baxt, 1991b; Baxt, 1991a), they have thus far found limited application in the field of nuclear medicine. Non-cardiac imaging applications include the grading of breast carcinomas (Dawson et al., 1991) and the discrimination of normal vs. Alzheimer's PET scans (Kippenhan et al., 1990). Of the studies dealing specifically with cardiac imaging, neural networks have been applied to several problems in cardiology including the identification of stenosis (Porenta et al., 1990; Cios et al., 1989; Cios et al., 1991; Cianflone et al., 1990; Fujita et al., 1992). These studies encouraged us to explore the use of neural networks in the interpretation of cardiac scintigraphy.

2 Methods

We trained one network consisting of a layer of gaussian RBF units in an unsupervised fashion to discover features in circumferential profiles in planar thallium scintigraphy.
Then a second network was trained in a supervised way to map these features to the physician's visual interpretations of those images using the delta rule (Widrow and Hoff, 1960). This architecture was previously found to compare favorably to other network learning algorithms (2-layer backpropagation and single-layer networks) on this task (Rosenberg et al., 1993; Erel et al., 1993).

In our experiments, all of the input vectors representing single views f were first normalized to unit length, v = f/||f||. The activation value of a gaussian unit, o_j, is then given by:

    o_j = exp(-net_j / w)                       (1)

    net_j = sum_i (v_i - w_ij)^2                (2)

where j is an index to a gaussian unit and i is an input unit index. The width of the gaussian, given by w, was fixed at 0.25 for all units.1

Figure 1: The network architecture. The first layer (Input) encoded the three circumferential profiles representing the three views, anterior (ANT), left anterior oblique (LAO 45), and left lateral (LAT). The second layer consisted of radial basis function (RBF) units; the third layer, semi-linear units trained in a supervised fashion. The outputs of the network corresponded to the visual scores (severe, moderate, mild, normal) as given by the expert observer. An additional unit per view encoded the scaling factor of the input patterns lost as a result of input normalization.

The gaussian units were trained using a competitive learning rule which moves the center of the unit closest to the current input pattern (o_max, i.e. the "winner") closer to the input pattern:2

    delta w_i,winner = eta (v_i - w_i,winner)   (3)

2.1 Data Acquisition and Selection

Scintigraphic images were acquired for each of three views: anterior (ANT), left anterior oblique (LAO 45), and left lateral (LAT) for each patient case.
Acquisition was performed twice, once immediately following a standard effort test and once following a delay period of four hours. Each image was pre-processed to produce a circumferential profile (Garcia et al., 1981; Francisco et al., 1982), in which maximum pixel counts within each of 60 contiguous 6° segmental regions are plotted as a function of angle (Garcia, 1991).3 Preprocessing involved positioning of the region of interest (ROI), interpolative background subtraction, smoothing and rotational alignment to the heart's apex (Garcia, 1991).

1We have considered applying the learning rule to the unit widths (w) as well as the RBF weights; however, we have not as yet pursued this possibility.
2Following Rumelhart and Zipser (Rumelhart and Zipser, 1986), the other units were also pulled towards the input vector, although to a much smaller extent than the winner. We used a ratio of 1 to 100.
3The profiles were generated using the Elscint CTL software package for planar quantitative thallium-201 based on the Cedars-Sinai technique (Garcia et al., 1981; Maddahi et al., 1981; Areeda et al., 1982).

            mild   moderate   severe   Total
single        12          5        0      17
multiple      16         16       11      43
Total         28         21       11      60

Table 1: Distribution of Abnormal Cases as Scored by the Expert Observer. Defects occurring in any combination of two or more regions (even the proximal and distal subregions of a single area) were treated as one multiple defect. The severity level of multiple lesions was based on the most severe lesion present.

Cases were pre-selected based on the following criteria (Beller, 1991):
• Insufficient exercise. Cases in which the heart rate was less than 130 b.p.m. were eliminated, as this level of stress is generally deemed insufficient to accurately distinguish normal from abnormal conditions.
• Positional abnormalities. In a few cases, the "region of interest" was not positioned or aligned correctly by the technician.
• Increased lung uptake.
Typically in cases of multi-vessel disease, a significant proportion of the perfusion occurs in the lungs as well as in the heart, making it more difficult to determine the condition of the heart due to the partially overlapping positions of the heart and lungs.
• Breast artifacts.

Cases were selected at random between August, 1989 and March, 1992. Approximately a third of the cases were eliminated due to insufficient heart rate, 4-5% due to breast artifacts, 4% due to lung uptake, and 1-2% due to positional abnormalities. A set of one hundred usable cases remained.

2.2 Visual Interpretation

Each case was visually scored by a single expert observer for each of nine anatomical regions generally accepted as those that best relate to the coronary circulation: Septal: proximal and distal, Anterior: proximal and distal, Apex, Inferior: proximal and distal, and Posterior-Lateral: proximal and distal. Scoring for each region was from normal (1) to severe (4), indicating the level of the observed perfusion deficit.

Intra-observer variability was examined by having the observer re-interpret 17 of the cases a second time. The observer was unable to remember the cases from the first reading and could not refer to the previous scores. Exact matches were obtained on 91.5% of the regions; only 8 of the 153 total regions (5%) were labeled as a defect (mild, moderate or severe) on one occasion and not on the other. All differences, when they occurred, were of a single rating level.4

4In contrast, measured inter-observer variability was much higher. A set of 13 cases was individ-

2.3 The Network Model

The input units of the network were divided into 3 groups of 60 units each, each group representing the circumferential profile for a single view. A set of 3 RBF units were assigned to each input group.
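Equations (1)-(3) and the layer structure just described can be summarized in a small numerical sketch. This is our own illustration, not the authors' code: all data and weights are random placeholders, and the variable names are ours.

```python
import numpy as np

def rbf_features(v, C, width=0.25):
    """Eqs. (1)-(2): o_j = exp(-net_j / w) with net_j = sum_i (v_i - w_ij)^2."""
    return np.exp(-np.sum((C - v) ** 2, axis=1) / width)

def competitive_update(v, C, eta=0.1, loser_ratio=0.01):
    """Eq. (3): pull the winning centre toward the input; the other
    centres follow with a 100x smaller step (footnote 2)."""
    winner = np.argmax(rbf_features(v, C))
    for j in range(len(C)):
        step = eta if j == winner else eta * loser_ratio
        C[j] += step * (v - C[j])

def forward(views, centres, W_out, b_out):
    """Full pass: 3 views x (3 RBF units + 1 scaling unit) = 12 features,
    mapped to 9 regional scores by semi-linear (sigmoid) output units."""
    feats = []
    for view, C in zip(views, centres):
        norm = np.linalg.norm(view)
        v = view / norm                      # unit-length input vector
        feats.extend(rbf_features(v, C))
        feats.append(norm)                   # scaling factor lost in normalization
    x = np.asarray(feats)
    return 1.0 / (1.0 + np.exp(-(W_out @ x + b_out)))

rng = np.random.default_rng(0)
views = [rng.random(60) for _ in range(3)]   # three circumferential profiles
centres = [rng.normal(size=(3, 60)) for _ in range(3)]
for _ in range(20):                          # unsupervised stage (toy epochs)
    for view, C in zip(views, centres):
        competitive_update(view / np.linalg.norm(view), C)
scores = forward(views, centres, 0.1 * rng.normal(size=(9, 12)), np.zeros(9))
```

The `loser_ratio` of 0.01 stands in for the 1:100 pull on the non-winning centres mentioned in footnote 2; in the real system the output weights were of course trained with the delta rule rather than left random.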
Then a second layer of weights was trained using the delta rule to reproduce the target visual scores assigned by the expert observer. The categorical visual scores were translated to numerical values to make the data suitable for network learning: normal = 0.0, mild defect = 0.3, moderate defect = 0.7, and severe defect = 1.0.

In order to make efficient use of the available data, we actually trained 100 identical networks; each network was trained on a subset of 99 of the 100 cases and tested on the remaining one. This procedure, sometimes referred to as the "leave-one-out" or "jack-knife" method, enabled us to determine the generalization performance for each case. This procedure was followed for both the RBF and the delta rule training.5 Training of a single network took only a few minutes of Sun 4 computer time.

3 Results

Because of the larger numbers of confusions between normal and mild regions in both the inter- and intra-observer scores, disease was defined as moderate or severe defects. The threshold value dividing the output values of the network into these two sets was varied from 0 to 1 in 0.01 step increments. The number of agreements between the expert observer and the network was computed for each threshold value. The resulting scores, accumulated over all threshold values, were plotted as a Receiver Operating Characteristic (ROC) curve.

Best performance (percent correct) was achieved with a threshold value of 0.28, which yielded an overall accuracy of 88.7% (798/900 regions) on the stress data. However, this value of the threshold heavily favored specificity over sensitivity due to the preponderance of normal regions in the data. Using the decision threshold which maximized the sum of sensitivity and specificity, 0.10, accuracy dropped to 84.9% (764/900) but sensitivity improved to 0.771 (121/157), and specificity was 0.865 (643/743).

3.1 Distinguishing Fixed vs.
Reversible Defects

In order to take into account the delayed distribution as well as the stress set of images, the network was essentially duplicated: one network processed the stress data, and the other, the redistribution data. (For details, see (Erel et al., 1993).)

ually interpreted by 3 expert observers in a previous experiment (Rosenberg et al., 1993). Percent agreement (exact matches) between the observers was 82% (288/351). Of the 63 mis-matches, 5 (about 8% of the regions) were of 2 levels of severity. There were no differences of 3 levels of severity. Approximately two-thirds of the disagreements were between normal and mild regions. These results indicate that the single observer data employed in the present study are more reliable than the mixed consensus and individual scores used previously.

5Details of network learning were as follows: Each of the 100 networks was initialized and trained in the same way. RBF-to-output unit weights were initialized to small random values between 0.5 and -0.5. Input-to-RBF unit weights were first randomized and then normalized so that the weight vectors to each RBF unit were of unit length. Unsupervised, competitive training of the RBF units continued for 100 "epochs" or complete sweeps through the set of 99 cases: 20 epochs with a learning rate (eta) of 0.1 followed by 80 epochs at 0.01, without momentum. Supervised training, using a learning rate of 0.05 and momentum 0.9, was terminated based on cross-validation testing after 200 epochs. Further training led to over-training and poorer generalization.

The combined network exhibited only a limited ability to distinguish between scar and ischemia. Performance on scar detection was good (sens. 0.728 (75/103), spec. 0.878 (700/797)), but the sensitivity of the network on ischemia detection was only 0.185 (10/54).
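The sensitivity and specificity figures above, and the 0.01-step threshold sweep of Section 3, reduce to a few lines of counting. A sketch with made-up outputs and labels (ours, not the study's data):

```python
def sens_spec(scores, labels, threshold):
    """Per-region sensitivity and specificity of the thresholded outputs.
    labels: 1 = diseased (moderate/severe defect), 0 = normal/mild."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def best_threshold(scores, labels, step=0.01):
    """Sweep decision thresholds from 0 to 1 in `step` increments, as in
    Section 3, and return the one maximizing sensitivity + specificity."""
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    return max(grid, key=lambda t: sum(sens_spec(scores, labels, t)))

# illustrative network outputs and expert labels
outputs = [0.05, 0.20, 0.80, 0.90, 0.15, 0.70]
disease = [0, 0, 1, 1, 0, 1]
t_star = best_threshold(outputs, disease)
```

Accumulating the (sensitivity, specificity) pairs over all grid values, rather than keeping only the best, yields the ROC curve described in the Results.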
This result may be explained, at least in part, by the much smaller number of ischemic regions included in the data set as compared with scars (54 versus 103).

4 Conclusions and Future Directions

We suspect that our major limitation is in defect sampling. In order that a statistical system (networks or otherwise) generalize well to new cases, the data used in training must be representative of the full population of data likely to be sampled. This is unlikely to happen when the number of positive cases is on the order of 50, as was the case with ischemia, since each possible defect location, plus all the possible combinations of locations, must be included.

A variant of backpropagation, called competitive backpropagation, has recently been developed which is claimed to generalize appropriately in the presence of multiple defects (Cho and Reggia, 1993). Weights in this network are constrained to take on positive values, so that diagnoses made by the system add constructively. In a standard backpropagation network, multiple diseases can cancel each other out, due to complex interactions of both positive and negative connection strengths. We are currently planning to investigate the application of this learning algorithm to the problem of ischemia detection.

Other improvements and extensions include:

• Elicit confidence ratings. Expert visual interpretations could be augmented by degree-of-confidence ratings. Highly ambiguous cases could be reduced in importance or eliminated. The ratings could also be used as additional targets for the network:6 cases indicated by the network with low levels of confidence would require closer inspection by a physician. Initial results are promising in this regard.

• Provide additional information. We have not yet incorporated clinical history, gender, and examination EKG. Clinical history has been found to have a profound impact on interpretation of radiographs (Doubilet and Herman, 1981).
The inclusion of these variables should allow the network to approximate more closely a complete diagnosis, and boost the utility of the network in the clinical setting.

• Add constraints. Currently we do not utilize the angles that relate the three views. It may be possible to build these angles in as constraints and thereby cut down on the number of free network parameters.

• Expand application. Besides planar thallium, our approach may also be applied to non-planar 3-D imaging technologies such as SPECT and other nuclear agents or stress-inducing modalities such as dipyridamole. Preliminary results are promising in this regard.

6See (Tesauro and Sejnowski, 1988) for a related idea.

Acknowledgements

The authors wish to thank Mr. Haim Karger for technical assistance, and the Departments of Computer Science and Psychology at the Hebrew University for computational support. We would also like to thank Drs. David Shechter, Moshe Bocher, Roland Chisin and the staff of the Department of Medical Biophysics and Nuclear Medicine for their help, both large and small, and two anonymous reviewers. Terry Sejnowski suggested our use of RBF units.

References

Areeda, J., Van Train, K., Garcia, E. V., Maddahi, J., Rozanski, A., Waxman, A., and Berman, D. (1982). Improved analysis of segmental thallium-201 myocardial scintigrams: Quantitation of distribution, washout, and redistribution. In Esser, P. D., editor, Digital Imaging. Society of Nuclear Medicine, New York.

Artis, S., Mark, R., and Moody, G. (1991). Detection of atrial fibrillation using artificial neural networks. In Computers in Cardiology, pages 173-176, Venice, Italy. IEEE, IEEE Computer Society Press.

Baxt, W. (1991a). Use of an artificial neural network for data analysis in clinical decision-making: The diagnosis of acute coronary occlusion. Neural Computation, 2:480-489.

Baxt, W. (1991b).
Use of an artificial neural network for the diagnosis of myocardial infarction. Annals of Internal Medicine, 115:843-848.

Beller, G. A. (1991). Myocardial perfusion imaging with thallium-201. In Marcus, M. L., Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac Imaging. W. B. Saunders.

Cho, S. and Reggia, J. (1993). Multiple disorder diagnosis with adaptive competitive neural networks. Artificial Intelligence in Medicine. To appear.

Cianflone, D., Carandente, O., Fragasso, G., Margonato, A., Meloni, C., Rossetti, E., Gerundini, P., and Chierchia, S. L. (1990). A neural network based model of predicting the probability of coronary lesion from myocardial perfusion SPECT data. In Proceedings of the 37th Annual Meeting of the Society of Nuclear Medicine, page 797.

Cios, K. J., Goodenday, L. S., Merhi, M., and Langenderfer, R. (1989). Neural networks in detection of coronary artery disease. In Computers in Cardiology Conference, pages 33-37, Jerusalem, Israel. IEEE, IEEE Computer Society Press.

Cios, K. J., Shin, I., and Goodenday, L. S. (1991). Using fuzzy sets to diagnose coronary artery stenosis. Computer, pages 57-63.

Cuarón, A., Acero, A., Cardena, M., Huerta, D., Rodriguez, A., and de Garay, R. (1980). Interobserver variability in the interpretation of myocardial images with Tc-99m-labeled diphosphonate and pyrophosphate. Journal of Nuclear Medicine, 21(1):1-9.

Datz, F., Gabor, F., Christian, P., Gullberg, G., Menzel, C., and Morton, K. (1992). The use of computer-assisted diagnosis in cardiac-perfusion nuclear medicine studies: A review. Journal of Digital Imaging, 5(4):1-14.

Dawson, A., Austin, R., and Weinberg, D. (1991). Nuclear grading of breast carcinoma by image analysis. American Journal of Clinical Pathology, 95(4):S29-S37.

Doubilet, P. and Herman, P. (1981). Interpretation of radiographs: Effect of clinical history. American Journal of Roentgenology, 137:1055-1058.

Erel, J., Rosenberg, C., and Atlan, H. (1993).
Neural network for automatic interpretation of thallium scintigrams. In preparation.

Francisco, D. A., Collins, S. M., et al. (1982). Tomographic thallium-201 myocardial perfusion scintigrams after maximal coronary artery vasodilation with intravenous dipyridamole: Comparison of qualitative and quantitative approaches. Circulation, 66(2).

Franken Jr., E. A. and Berbaum, K. S. (1991). Perceptual aspects of cardiac imaging. In Marcus, M. L., Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac Imaging. W. B. Saunders.

Fujita, H., Katafuchi, T., Uehara, T., and Nishimura, T. (1992). Application of artificial neural network to computer-aided diagnosis of coronary artery disease in myocardial SPECT bull's-eye images. The Journal of Nuclear Medicine, 33(2):272-276.

Garcia, E. V. (1991). Physics and instrumentation of radionuclide imaging. In Marcus, M. L., Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac Imaging. W. B. Saunders.

Garcia, E. V., Maddahi, J., Berman, D. S., and Waxman, A. (1981). Space-time quantitation of thallium-201 myocardial scintigraphy. Journal of Nuclear Medicine, 22:309-317.

Kippenhan, J., Barker, W., Pascal, S., and Duara, R. (1990). A neural-network classifier applied to PET scans of normal and Alzheimer's disease (AD) patients. In The Proceedings of the 37th Annual Meeting of the Society of Nuclear Medicine, volume 31, Washington, D.C.

Maddahi, J., Garcia, E. V., Berman, D. S., Waxman, A., Swan, H. J. C., and Forrester, J. (1981). Improved noninvasive assessment of coronary artery disease by quantitative analysis of regional stress myocardial distribution and washout of thallium-201. Circulation, 64:924-935.

Pohost, G. M. and Henzlova, M. J. (1990). The value of thallium-201 imaging. New England Journal of Medicine, 323(3):190-192.

Porenta, G., Kundrat, S., Dorffner, G., Petta, P., Duit, J., and Sochor, H. (1990).
Computer-based image interpretation of thallium-201 scintigrams: Assessment of coronary artery disease using the parallel distributed processing approach. In Proceedings of the 37th Annual Meeting of the Society of Nuclear Medicine, page 825.

Rosenberg, C., Erel, J., and Atlan, H. (1993). A neural network that learns to interpret myocardial planar thallium scintigrams. Neural Computation. To appear.

Rumelhart, D. and Zipser, D. (1986). Feature discovery by competitive learning. In Rumelhart, D. and McClelland, J., editors, Parallel Distributed Processing, volume 1, chapter 5, pages 151-193. MIT Press, Cambridge, Mass.

Tesauro, G. and Sejnowski, T. J. (1988). A parallel network that learns to play backgammon. Technical Report CCSR-88-2, University of Illinois at Urbana-Champaign Center for Complex Systems Research.

Widrow, B. and Hoff, M. (1960). Adaptive switching circuits. In 1960 IRE WESCON Convention Record, volume 4, pages 96-104. IRE, New York.

PART X IMPLEMENTATIONS
Learning Cellular Automaton Dynamics with Neural Networks

N H Wulff* and J A Hertz†
CONNECT, the Niels Bohr Institute and Nordita
Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark

Abstract

We have trained networks of Σ-Π units with short-range connections to simulate simple cellular automata that exhibit complex or chaotic behaviour. Three levels of learning are possible (in decreasing order of difficulty): learning the underlying automaton rule, learning asymptotic dynamical behaviour, and learning to extrapolate the training history. The levels of learning achieved with and without weight sharing for different automata provide new insight into their dynamics.

1 INTRODUCTION

Neural networks have been shown to be capable of learning the dynamical behaviour exhibited by chaotic time series composed of measurements of a single variable among many in a complex system [1, 2, 3]. In this work we consider instead cellular automaton arrays (CA) [4], a class of many-degree-of-freedom systems which exhibit very complex dynamics, including universal computation. We would like to know whether neural nets can be taught to imitate these dynamics, both locally and globally.

One could say we are turning the usual paradigm for studying such systems on its head. Conventionally, one is given the rule by which each automaton updates its state, and the (nontrivial) problem is to find what kind of global dynamical behaviour results. Here we suppose that we are given the history of some CA, and we would like, if possible, to find the rule that generated it. We will see that a network can have different degrees of success in this task, depending on the constraints we place on the learning.

*Present address: NeuroTech A/S, Copenhagen, Denmark
†Address until October 1993: Laboratory of Neuropsychology, NIMH, Bethesda MD 20892. email: hertz@nordita.dk
Furthermore, we will be able to learn something about the dynamics of the automata themselves from knowing what level of learning is possible under what constraints. This note reports some preliminary investigations of these questions. We study only the simplest automata that produce chaotic or complex dynamic behaviour. Nevertheless, we obtain some nontrivial results which lead to interesting conjectures for future investigation.

A CA is a lattice of formal computing units, each of which is characterized by a state variable S_i(t), where i labels the site in the lattice and t is the (digital) time. Every such unit updates itself according to a particular rule or function f( ) of its own state and that of the other units in its local neighbourhood. The rule is the same for all units, and the updatings of all units are simultaneous. Different models are characterized by the nature of the state variable (e.g. binary, continuous, vector, etc.), the dimensionality of the lattice, and the choice of neighbourhood.

In the two cases we study here, the neighbourhoods are of size N = 3, consisting of the unit itself and its two immediate neighbours on a chain, and N = 9, consisting of the unit itself and its 8 nearest neighbours on a square lattice (the 'Moore neighbourhood'). We will consider only binary units, for which we take S_i(t) = ±1. Thus, if the neighbourhood (including the unit itself) includes N sites, f( ) is a Boolean function on the N-hypercube. There are 2^(2^N) such functions.

Wolfram [4] has divided the rules for such automata into four classes:

1. Class 1: rules that lead to a uniform state.
2. Class 2: rules that lead to simple stable or periodic patterns.
3. Class 3: rules that lead to chaotic patterns.
4. Class 4: rules that lead to complex, long-lived transient patterns.

Rules in the fourth class lie near (in a sense not yet fully understood [5]) a critical boundary between classes 2 and 3.
They lead eventually to asymptotic behaviour in class 2 (or possibly 3); what distinguishes them is the length of the transient. It is classes 3 and 4 that we are interested in here.

More specifically, for class 3 we expect that after the (short) initial transients, the motion is confined to some sort of attractor. Different attractors may be reached for a given rule, depending on initial conditions. For such systems we will focus on the dynamics on these attractors, not on the short transients. We will want to know what we can learn from a given history about the attractor characterizing it, about the asymptotic dynamics of the system generally (i.e. about all attractors), and, if possible, about the underlying rule.

For class 4 CA, in contrast, only the transients are of interest. Different initial conditions will give rise to very different transient histories; indeed, this sensitivity is the dynamical basis for the capability for universal computation that has been proved for some of these systems. Here we will want to know what we can learn from a portion of such a history about its future, as well as about the underlying rule.

2 REPRESENTING A CA AS A NETWORK

Any Boolean function of N arguments can be implemented by a Σ-Π unit of order P ≤ N with a threshold activation function, i.e. there exist weights w_{j1 j2 ... jP} such that

    f(S_1, S_2, ..., S_N) = sgn [ sum_{j1, j2, ..., jP} w_{j1 j2 ... jP} S_{j1} S_{j2} ... S_{jP} ]    (1)

The indices j_k run over the sites in the neighbourhood (1 to N) and zero, which labels a constant formal bias unit S_0 = 1. Because the updating rule we are looking for is the same for the entire lattice, the weight w_{j1 ... jP} doesn't depend on i. Furthermore, because of the discrete nature of the outputs, the weights that implement a given rule are not unique; rather, there is a region of weight space for each rule.
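As a concrete instance of equation (1) (our own illustration, not an example from the paper): rule 90, one of the chaotic rules simulated later, sets each new state to the XOR of the two neighbours, which in the ±1 coding is a single second-order product term with weight -1:

```python
import numpy as np

def sigma_pi_rule90(S):
    """Second-order Sigma-Pi realization of Wolfram rule 90 in +-1 coding:
    S_i(t+1) = sgn(-S_{i-1} S_{i+1}), i.e. the XOR of the two neighbours.
    The only nonzero weight is w_{(i-1)(i+1)} = -1; boundaries are periodic."""
    return np.sign(-np.roll(S, 1) * np.roll(S, -1)).astype(int)

def rule90_table(S):
    """Reference implementation via the Boolean rule table (0/1 coding)."""
    b = (S + 1) // 2
    return 2 * (np.roll(b, 1) ^ np.roll(b, -1)) - 1
```

Since the argument of sgn here is always ±1, the threshold never has to break a tie; iterating `sigma_pi_rule90` from a random chain produces the familiar rule-90 history.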
Although we could work with other architectures, it is natural to study networks with the same structure as the CA to be simulated. We therefore make a lattice of formal Σ-Π neurons with short-range connections, which update themselves according to

    V_i(t+1) = g [ sum_{j1 ... jP} w_{j1 ... jP} V_{j1}(t) ... V_{jP}(t) ]    (2)

In these investigations, we have assumed that we know a priori what the relevant neighbourhood size is, thereby fixing the connectivity of the network. At the end of the day, we will take the limit where the gain of the activation function g becomes infinite. However, during learning we use finite gain and continuous-valued units.

We know that the order P of our Σ-Π units need not be higher than the neighbourhood size N. However, in most cases a smaller P will do. More precisely, a network with any P > N/2 can in principle (i.e. given the right learning algorithm and sufficient training examples) implement almost all possible rules. This is an asymptotic result for large N but is already quite accurate for N = 3, where only two of the 256 possible rules are not implementable by a second-order unit, and N = 5, where we found from simple learning experiments that 99.87% of 10000 randomly-chosen rules could be implemented by a third-order unit.

3 LEARNING

Having chosen a suitable value of P, we can begin our main task: training the network to simulate a CA, with the training examples {S_i(t) → S_i(t+1)} taken from a particular known history.

The translational invariance of the CA suggests that weight sharing is appropriate in the learning algorithm. On the other hand, we can imagine situations in which we did not possess a priori knowledge that the CA rule was the same for all units, or where we only had access to the automaton state in one neighbourhood. This case is analogous to the conventional time series extrapolation paradigm, where we typically only have access to a few variables in a large system.
The difference is that here the accessible variables are binary rather than continuous. In these situations we should, or are constrained to, learn without each unit having access to error information at other units. In what follows we will perform the training both with and without weight sharing. The differences in what can be learned in the two cases will give interesting information about the CA dynamics being simulated.

Most of our results are for chaotic (class 3) CA. For these systems, the training history is taken after initial transients have died out. Thus many of the 2^N possible examples necessary to specify the rule at each site may be missing from the training set, and it is possible that our training procedure will not result in the network learning the underlying rule of the original system. It might instead learn another rule that coincides with the true one on the training examples. This is even more likely if we are not using weight sharing, because then a unit at one site does not have access to examples from the training history at other sites.

However, we may relax our demand on the network, asking only that it evolve exactly like the original system when it is started in a configuration the original system could be in after transients have died out (i.e. on an attractor of the original system). Thus we are restricting the test set in a way that is "fairer" to the network, given the instruction it has received. Of course, if the CA has more than one attractor, several rules which yield the same evolution on one attractor need not do so on another one. It is therefore possible that a network can learn the attractor of the training history (i.e. will simulate the original system correctly on a part of the history subsequent to the training sequence) but will not be found to evolve correctly when tested on data from another attractor.
For class 4 automata, we cannot formulate the distinctions between different levels of learning meaningfully in terms of attractors, since the object of interest is the transient portion of the history. Nevertheless, we can still ask whether a network trained on part of the transient can learn the full rule, whether it can simulate the dynamics for other initial conditions, or whether it can extrapolate the training history. We therefore distinguish three degrees of successful learning:

1. Learning the rule, where the network evolves exactly like the original system from any initial configuration.
2. Learning the dynamics, the intermediate case where the network can simulate the original system exactly after transients, irrespective of initial conditions, despite not having learned the full rule.
3. Learning to continue the dynamics, where the successful simulation of the original system is only achieved for the particular initial condition used to generate the training history.

Our networks are recurrent, but because they have no hidden units, they can be trained by a simple variant of the delta-rule algorithm. It can be obtained formally from gradient descent on a modified cross entropy

    E = (1/2) sum_{i,t} [ (1 + S_i(t)) log( (1 + S_i(t)) / (1 + V_i(t)) ) + (1 - S_i(t)) log( (1 - S_i(t)) / (1 - V_i(t)) ) ] theta[-S_i(t) V_i(t)]    (3)

We used the online version:

    delta w_{j1 ... jP} = eta theta[-S_i(t+1) V_i(t+1)] [S_i(t+1) - V_i(t+1)] V_{j1}(t) ... V_{jP}(t)    (4)

This is like an extension of the Adatron algorithm [6] to Σ-Π units, but with the added feature that we are using a nonlinear activation function.

The one-dimensional N = 3 automata we simulated were the 9 legal chaotic ones identified by Wolfram [4]. Using his system for labeling the rules, these are rules 18, 22, 54, 90, 122, 126, 146, 150, and 182. We used networks of order P = 3 so that all rules were learnable. (Rule 150 would not have been learnable by a second-order net.)
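The update (4) can be exercised on a toy case (our construction, not the paper's experiment; for simplicity we use the hard-threshold limit g = sgn throughout, so the theta factor simply gates updates to misclassified examples): a single order-2 Σ-Π unit with shared weights, trained on all eight neighbourhood configurations of rule 90.

```python
import itertools
import numpy as np

def features(s, order=2):
    """Products of up to `order` entries of (S_0 = 1, S_1, ..., S_N);
    including the bias S_0 makes the lower-order terms available too."""
    x = np.concatenate(([1.0], s))
    combos = itertools.combinations_with_replacement(range(len(x)), order)
    return np.array([np.prod(x[list(c)]) for c in combos])

def train_sigma_pi(examples, order=2, eta=0.5, epochs=50):
    """Online rule of eq. (4): update only when the hard output
    sgn(w . f) disagrees with the target."""
    w = np.zeros(len(features(examples[0][0], order)))
    for _ in range(epochs):
        for s, target in examples:
            f = features(s, order)
            out = np.sign(w @ f) or 1.0      # break the sgn(0) tie
            if out != target:                # theta[-S V] gate
                w += eta * (target - out) * f
    return w

# all 8 neighbourhood configurations of rule 90 in the +-1 coding
examples = []
for bits in itertools.product([0, 1], repeat=3):
    s = np.array([2.0 * b - 1.0 for b in bits])
    examples.append((s, 2 * (bits[0] ^ bits[2]) - 1))  # XOR of the neighbours
w = train_sigma_pi(examples)
```

Because the target rule is a single second-order product (weight -1 on the S_1 S_3 feature), the examples are linearly separable in the product-feature space and this perceptron-style training converges in a handful of epochs.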
Each network was a chain 60 units long, subjected to periodic boundary conditions. The training histories {S_i(t)} were 1000 steps long, beginning 100 steps after randomly chosen initial configurations. To test for learning the rules, all neighbourhood configurations were checked at every site. To test for learning the dynamics, the CA were reinitialized with different random starting configurations and run 100 steps to eliminate transients, after which new test histories of length 100 steps were constructed. Networks were then tested on 100 such histories. The test set for continuing the dynamics was made simply by allowing the CA that had generated the training set to continue for 100 more steps. There are no class 4 CA among the one-dimensional N = 3 systems. As an example of such a rule, we chose the Game of Life, which is defined on a square lattice with a neighbourhood size N = 9 and has been proved capable of universal computation (see, e.g. [7, 8]). We worked with a lattice of 60 x 60 units. The training history for the Game of Life consisted of 200 steps in the transient. The trained networks were tested, as in the case of the chaotic one-dimensional systems, on all possible configurations at every site (learning the rule), on other transient histories generated from different initial conditions (learning the dynamics), and on the evolution of the original system immediately following the training history (learning to continue the dynamics).

4 RESULTS

With weight sharing, it proved possible to learn the dynamics for all 9 of the one-dimensional chaotic rules very easily. In fact, it took no more than 10 steps of the training history to achieve this. Learning the underlying rules proved harder. After training on the histories of 1000 steps, the networks were able to do so in only 4 of the 9 cases. No qualitative difference between the two groups of patterns is evident to us from looking at their histories (Fig. 1).
Nevertheless, we conclude that their ergodic properties must be different, at least quantitatively. Life was also easy with weight sharing. Our network succeeded in learning the underlying rule starting almost anywhere in the long transient.

Figure 1: Histories of the 4 one-dimensional rules that could be learned (top) and the 5 that could not (bottom). (Learning with weight sharing.)

Without weight sharing, all learning naturally proved more difficult. While it was possible to learn to continue the dynamics for all the one-dimensional chaotic rules, it proved impossible except in one case (rule 22) to learn the dynamics within the training history of 1000 steps. The networks failed on about 25% of the test histories. It was never possible to learn the underlying rule. Thus, apparently these chaotic states are not as homogeneous as they appear (at least on the time scale of the training period). Life is also difficult without weight sharing. Our network was unable even to continue the dynamics from histories of several hundred steps in the transient (Fig. 2).

5 DISCUSSION

In previous studies of learning chaotic behaviour in single-variable time series (e.g. [1, 2, 3]), the test to which networks have been put has been to extrapolate the training series, i.e. to continue the dynamics. We have found that this is also possible in cellular automata for all the chaotic rules we have studied, even when only local information about the training history is available to the units. Thus, the CA evolution history at any site is rich enough to permit error-free extrapolation. However, local training data are not sufficient (except in one system, rule 22) to permit our networks to pass the more stringent test of learning the dynamics. Thus, viewed from any single site, the different attractors of these systems are dissimilar enough that data from one do not permit generalization to another.
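For reference, the Game of Life update rule used as the class 4 example can be sketched in a few lines. This is a generic implementation, not the authors' code; the toroidal wrap mirrors the periodic boundary conditions imposed on the one-dimensional chains.

```python
def life_step(grid):
    """One synchronous update of the Game of Life on a toroidal grid."""
    n, m = len(grid), len(grid[0])
    nxt = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            live = sum(grid[(i + di) % n][(j + dj) % m]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            # birth on exactly 3 live neighbours; survival on 2 or 3
            nxt[i][j] = 1 if live == 3 or (grid[i][j] == 1 and live == 2) else 0
    return nxt

# A "blinker" oscillates with period 2, a quick correctness check
blinker = [[0] * 5 for _ in range(5)]
for j in (1, 2, 3):
    blinker[2][j] = 1
assert life_step(blinker) != blinker
assert life_step(life_step(blinker)) == blinker
```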
Learning Cellular Automaton Dynamics with Neural Networks

Figure 2: The original Game of Life CA (left) and the network (right), both 20 steps after the end of the training history. (Training done without weight sharing.)

With the access to training data from other sites implied by weight sharing, the situation changes dramatically. Learning the dynamics is then very easy, implying that all possible asymptotic local dynamics that could occur for any initial condition actually do occur somewhere in the system in any given history. Furthermore, with weight sharing, not only the dynamics but also the underlying rule can be learned for some rules. This suggests that these rules are ergodic, in the sense that all configurations occur somewhere in the system at some time. This division of the chaotic rules into two classes according to this global ergodicity is a new finding. Turning to our class 4 example, Life proves to be impossible without weight sharing, even by our most lenient test, continuing the dynamics. Thus, although one might be tempted to think that the transient in Life is so long that it can be treated operationally as if it were a chaotic attractor, it cannot. For real chaotic attractors, both in the CA studied here and in continuous dynamical systems, networks can learn to continue the dynamics on the basis of local data, while in Life they cannot. On the other hand, the result that the rule of Life is easy to learn with weight sharing implies that, looked at globally, the history of the transient is quite rich. Somewhere in the system, it contains sufficient information (together with the a priori knowledge that a second-order network is sufficient) to allow us to predict the evolution from any configuration correctly. This study is a very preliminary one and raises more questions than it answers.
We would like to know whether the results we have obtained for these few simple systems are generic to complex and chaotic CA. To answer this question we will have to study systems in higher dimensions and with larger updating neighbourhoods. Perhaps significant universal patterns will only begin to emerge for large neighbourhoods (cf. [5]). However, we have identified some questions to ask about these problems.

Wulff and Hertz

References

[1] A. Lapedes and R. Farber, Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling, Tech. Rept. LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, NM, USA
[2] A. S. Weigend, B. A. Huberman and D. E. Rumelhart, Int. J. Neural Systems 1, 193-209 (1990)
[3] K. Stokbro, D. K. Umberger and J. A. Hertz, Complex Systems 4, 603-622 (1991)
[4] S. Wolfram, Theory and Applications of Cellular Automata (World Scientific, 1986)
[5] C. G. Langton, pp. 12-37 in Emergent Computation (S. Forrest, ed.), MIT Press/North Holland, 1991
[6] J. K. Anlauf and M. Biehl, Europhys. Letters 10, 687 (1989)
[7] H. V. McIntosh, Physica D 45, 105-121 (1990)
[8] S. Wolfram, Physica D 10, 1-35 (1984)
1992
Directional-Unit Boltzmann Machines

Richard S. Zemel, Computer Science Dept., University of Toronto, Toronto, ONT M5S 1A4
Christopher K. I. Williams, Computer Science Dept., University of Toronto, Toronto, ONT M5S 1A4
Michael C. Mozer, Computer Science Dept., University of Colorado, Boulder, CO 80309-0430

Abstract

We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values in a cyclic range, between 0 and 2π radians. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic DUBM, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings.

Many kinds of information can naturally be represented in terms of angular, or directional, variables. A circular range forms a suitable representation for explicitly directional information, such as wind direction, as well as for information where the underlying range is periodic, such as days of the week or months of the year. In computer vision, tangent fields and optic flow fields are represented as fields of oriented line segments, each of which can be described by a magnitude and direction.
Directions can also be used to represent a set of symbolic labels, e.g., object label A at 0, and object label B at π/2 radians. We discuss below some advantages of representing symbolic labels with directional units. These and many other phenomena can be usefully encoded using a directional representation: a polar-coordinate representation of complex values in which the phase parameter indicates a direction between 0 and 2π radians. We have devised a general formulation of networks of stochastic directional units. This paper describes a directional-unit Boltzmann machine (DUBM), which is a novel generalization of a Boltzmann machine (Ackley, Hinton and Sejnowski, 1985) in which the units are not binary, but instead take on directional values between 0 and 2π.

1 STOCHASTIC DUBM

A stochastic directional unit takes on values on the unit circle. We associate with unit j a random variable Z_j; a particular state of j is described by a complex number with magnitude one and direction, or phase, τ_j: z_j = e^{iτ_j}. The weights of a DUBM also take on complex values. The weight from unit k to unit j is w_jk = b_jk e^{iθ_jk}. We constrain the weight matrix W to be Hermitian: W^T = W*, where the diagonal elements of the matrix are zero, and the asterisk indicates the complex conjugate operation. Note that if the components are real, then W^T = W, which is a real symmetric matrix. Thus, the Hermitian form is a natural generalization of weight symmetry to the complex domain. This definition of W leads to a Hermitian quadratic form that generalizes the real quadratic form of the Hopfield energy function:

E(z) = -(1/2) z*^T W z = -(1/2) Σ_{j,k} z_j z_k* w_jk   (1)

where z is the vector of the units' complex states in a particular global configuration. Noest (1988) independently proposed this energy function.
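Two consequences of this energy function, that a Hermitian W makes E real and that E is unchanged by a global rotation of all phases (a property the paper returns to below), can be checked directly. The following pure-Python sketch is illustrative only, using an arbitrary random Hermitian weight matrix.

```python
import cmath, random

random.seed(0)
n = 5

# Random Hermitian weight matrix with zero diagonal: w[k][j] = conj(w[j][k])
w = [[0j] * n for _ in range(n)]
for j in range(n):
    for k in range(j + 1, n):
        w[j][k] = complex(random.gauss(0, 1), random.gauss(0, 1))
        w[k][j] = w[j][k].conjugate()

def energy(z):
    # E(z) = -(1/2) sum_{j,k} z_j z_k^* w_jk   (Equation 1)
    return -0.5 * sum(z[j] * z[k].conjugate() * w[j][k]
                      for j in range(n) for k in range(n))

# Random configuration of unit-magnitude directional states z_j = e^{i tau_j}
z = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(n)]

E = energy(z)
assert abs(E.imag) < 1e-12          # Hermitian W makes the energy real

# Global rotation invariance: shifting every phase by the same amount
z_rot = [zj * cmath.exp(0.7j) for zj in z]
assert abs(energy(z_rot) - E) < 1e-9
```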
It is similar to that used in Fradkin, Huberman, and Shenker's (1978) generalization of the XY model of statistical mechanics to allow arbitrary weight phases θ_jk, and to coupled oscillator models, e.g., Baldi and Meir (1990). We can define a probability distribution over the possible states of a stochastic network using the Boltzmann factor. In a DUBM, we can describe the energy as a function of the state of a particular unit j:

E(Z_j = z_j) = -(1/2) [ Σ_k z_j z_k* w_jk + Σ_k z_k z_j* w_kj ]

We define

x_j = Σ_k z_k w_jk*

to be the net input to unit j, where a_j and α_j denote the magnitude and phase of x_j, respectively. Applying the Boltzmann factor, we find that the probability that unit j is in a particular state is proportional to:

P(Z_j = z_j) ∝ e^{-β E(Z_j = z_j)} = e^{β a_j cos(τ_j - α_j)}   (2)

where β is the reciprocal of the system temperature.

Figure 1: A circular normal density function laid over a unit circle. The dots along the circle represent samples of the circular normal random variable Z_j. The expected direction of Z_j, τ̄_j, is π/4; r_j is its resultant length.

This probability distribution for a unit's state corresponds to a distribution known as the von Mises, or circular normal, distribution (Mardia, 1972). Two parameters completely characterize this distribution: a mean direction τ̄ ∈ (0, 2π] and a concentration parameter m > 0 that behaves like the reciprocal of the variance of a Gaussian distribution on a linear random variable. The probability density function of a circular normal random variable Z is (see footnote 1):

p(τ; τ̄, m) = (1 / (2π I_0(m))) e^{m cos(τ - τ̄)}   (3)

From Equations 2 and 3, we see that if a unit adopts states according to its contribution to the system energy, it will be a circular normal variable with mean direction α_j and concentration parameter m_j = β a_j. These parameters are directly determined by the net input to the unit. Figure 1 shows a circular normal density function for Z_j, the state of unit j.
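As a quick numerical check (illustrative code, not from the paper), I_0 can be evaluated from its integral representation and the density of Equation 3 then integrates to one over a full cycle:

```python
import math

def bessel_i0(m, steps=2000):
    # I_0(m) = (1/pi) * integral_0^pi exp(m cos th) dth, by the trapezoid rule
    h = math.pi / steps
    s = 0.5 * (math.exp(m) + math.exp(-m))
    s += sum(math.exp(m * math.cos(i * h)) for i in range(1, steps))
    return s * h / math.pi

def von_mises_pdf(tau, mean, m, i0):
    # Equation 3, with the normalizer I_0(m) passed in
    return math.exp(m * math.cos(tau - mean)) / (2 * math.pi * i0)

m, mean = 3.0, math.pi / 4        # arbitrary illustrative parameters
i0 = bessel_i0(m)

# The density integrates to one over one full cycle
h = 2 * math.pi / 4000
total = sum(von_mises_pdf(k * h, mean, m, i0) * h for k in range(4000))
assert abs(bessel_i0(0.0) - 1.0) < 1e-9    # I_0(0) = 1
assert abs(total - 1.0) < 1e-4
```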
This figure also shows the expected value of its stochastic state, which we define as:

y_j = <Z_j> = r_j e^{iγ_j}   (4)

where γ_j, the phase of y_j, is the mean direction and r_j, the magnitude of y_j, is the resultant length. For a circular normal random variable, γ_j = τ̄_j, and r_j = I_1(m_j)/I_0(m_j) (see footnote 2). When samples of Z_j are concentrated on a small arc about the mean (see Figure 1), r_j will approach length one. This corresponds to a large concentration parameter (m_j = β a_j). Conversely, for small m_j, the distribution approaches the uniform distribution on the circle, and the resultant length falls toward zero. For a uniform distribution, r_j = 0. Note that the concentration parameter for a unit's circular normal density function is proportional to β, the reciprocal of the system temperature. Higher temperatures will thus have the effect of making this distribution more uniform, just as they do in a binary-unit Boltzmann machine.

[Footnote 1: The normalization factor I_0(m) is the modified Bessel function of the first kind and order zero. An integral representation of this function is I_0(m) = (1/π) ∫_0^π e^{m cos θ} dθ. It can be computed by numerical routines.]
[Footnote 2: An integral representation of the modified Bessel function of the first kind and order k is I_k(m) = (1/π) ∫_0^π e^{m cos θ} cos(kθ) dθ. Note that I_1(m) = dI_0(m)/dm.]

2 EMERGENT PROPERTIES OF A DUBM

A network of directional units as defined above has two important emergent properties. The first property is that the magnitude of the net input to unit j describes the extent to which its various inputs "agree". Intuitively, one can think of each component z_k w_jk* of the sum that comprises x_j as predicting a phase for unit j. When the phases of these components are equal, the magnitude of x_j, a_j, is maximized. If these phase predictions are far apart, then they will act to cancel each other out, and produce a small a_j. Given x_j, we can compute the expected value of the output of unit j.
The expected direction of the unit roughly represents the weighted average of the phase predictions, while the resultant length is a monotonic function of a_j and hence describes the agreement between the various predictions. The key idea here is that the resultant length directly describes the degree of certainty in the expected direction of unit j. Thus, a DUBM naturally incorporates a representation of the system's confidence in a value. This ability to combine several sources of evidence, and not only represent a value but also describe the certainty of that value, is an important property that may be useful in a variety of domains. The second emergent property is that the DUBM energy is globally rotation-invariant: E is unaffected when the same rotation is applied to all units' states in the network. For each DUBM configuration, there is an equivalence class of configurations which have the same energy. In a similar way, we find that the magnitude of x_j is rotation-invariant. That is, when we translate the phases of all units but one by some phase, the magnitude of that unit's net input is unaffected. This property underlies one of the key advantages of the representation: both the magnitude of a unit's state and the system energy depend on the relative rather than absolute phases of the units.

3 DETERMINISTIC DUBM

Just as in deterministic binary-unit Boltzmann machines (Peterson and Anderson, 1987; Hinton, 1989), we can greatly reduce the computational time required to run a large stochastic system if we invoke the mean-field approximation, which states that once the system has reached equilibrium, the stochastic variables can be approximated by their mean values. In this approximation, the variables are treated as independent, and the system probability distribution is simply the product of the probability distributions for the individual units.
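Both points, that |x_j| measures agreement among the phase predictions and that the resultant length grows monotonically with the concentration, can be illustrated numerically. The sketch below is illustrative only, computing the Bessel functions from the integral representations given in the footnotes:

```python
import cmath, math

# Agreement: per-connection phase predictions that share a phase reinforce,
# while opposed predictions cancel, so the net-input magnitude measures
# consensus among the inputs.
agree = [cmath.exp(1j * math.pi / 3)] * 3
conflict = [cmath.exp(0j), cmath.exp(1j * math.pi)]
assert abs(abs(sum(agree)) - 3.0) < 1e-12
assert abs(sum(conflict)) < 1e-12

def bessel_i(k, m, steps=4000):
    # I_k(m) = (1/pi) * integral_0^pi exp(m cos th) cos(k th) dth (trapezoid)
    h = math.pi / steps
    f = lambda th: math.exp(m * math.cos(th)) * math.cos(k * th)
    s = 0.5 * (f(0.0) + f(math.pi)) + sum(f(i * h) for i in range(1, steps))
    return s * h / math.pi

def resultant_length(m):
    # r = I_1(m)/I_0(m): the certainty attached to the unit's mean direction
    return bessel_i(1, m) / bessel_i(0, m)

# r is monotone in the concentration m = beta * a_j: near zero for small m
# (near-uniform, low certainty), approaching 1 for large m (high certainty)
assert resultant_length(0.01) < 0.05
assert resultant_length(1.0) < resultant_length(5.0) < resultant_length(20.0)
assert resultant_length(20.0) > 0.95
```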
Gislen, Peterson, and Soderberg (1992) originally proposed a mean-field theory for networks of directional (or "rotor") units, but only considered the case of real-valued weights. They derived the mean-field consistency equations by using the saddle-point method. Our approach provides an alternative, perhaps more intuitive derivation, due to the use of the circular normal distribution. We can directly describe these mean values based on the circular normal interpretation. We still denote the net input to a unit j as x_j:

x_j = Σ_k y_k w_jk* = a_j e^{iα_j}   (5)

Once equilibrium has been reached, the state of unit j is y_j, the expected value of Z_j given the mean-field approximation:

y_j = (I_1(β a_j) / I_0(β a_j)) e^{iα_j}   (6)

In the stochastic as well as the deterministic system, units evolve to minimize the free energy, F = <E> - TH. The calculation of H, the entropy of the system, follows directly from the circular normal distribution and the mean-field approximation. We can derive mean-field consistency equations for x_j and y_j by minimizing the mean-field free energy, F_MF, with respect to each variable independently. The resulting equations match the mean-field equations (Equations 5 and 6) derived directly from the circular normal probability density function. They also match the special case derived by Gislen et al. for real-valued weights. We have implemented a DUBM using the mean-field approximation. We solve for a consistent set of x and y values by performing synchronous updates of the discrete-time approximation of the set of differential equations based on the net input to each unit j. We update the x_j variables using the following differential equation:

dx_j/dt = -x_j + Σ_k y_k w_jk*   (7)

which has Equation 5 as its steady-state solution. In the simulations, we use simulated annealing to help find good minima of F_MF.
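A minimal sketch of this relaxation (not the authors' implementation) is shown below for an autoassociative net with Hebbian outer-product weights, anticipating the experiments of Section 5.1. It uses the zero-temperature limit, in which the Bessel-function ratio of Equation 6 saturates to one so that each update reduces to y_j = x_j / |x_j|; the pattern, network size, and iteration count are arbitrary.

```python
import cmath, random

random.seed(1)
n = 12

# One stored directional pattern xi; Hebbian weights w_jk = conj(xi_j) xi_k
# give a Hermitian W (w_kj = conj(w_jk)) with zero diagonal, and make the
# stored pattern, up to a global rotation, a minimum of the energy.
xi = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(n)]
w = [[xi[j].conjugate() * xi[k] if j != k else 0j for k in range(n)]
     for j in range(n)]

# Synchronous mean-field relaxation (Equations 5-7) in the zero-temperature
# limit: x_j = sum_k y_k w_jk*, then y_j = x_j / |x_j|.
y = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(n)]
for _ in range(50):
    x = [sum(w[j][k].conjugate() * y[k] for k in range(n)) for j in range(n)]
    y = [xj / abs(xj) for xj in x]

# The fixed point is the stored pattern up to a global rotation, so all the
# relative phases y_j conj(xi_j) should agree.
rel = [y[j] * xi[j].conjugate() for j in range(n)]
assert all(abs(r - rel[0]) < 1e-6 for r in rel)
```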
Just as for the Hopfield binary-state network, it can be shown that the free energy always decreases during the dynamical evolution described in Equation 7 (Zemel, Williams and Mozer, 1992). The equilibrium solutions are free energy minima.

4 DUBM LEARNING

The units in a DUBM can be arranged in a variety of architectures. The appropriate method for determining weight values for the network depends on the particular class of network architecture. In an autoassociative network containing a single set of interconnected units, the weights can be set directly from the training patterns. If hidden units are required to perform a task, then an algorithm for learning the weights is required. We use an algorithm that generalizes the Boltzmann machine training algorithm (Ackley, Hinton and Sejnowski, 1985; Peterson and Anderson, 1987) to these networks. As in the standard Boltzmann machine learning algorithm, the partial derivative of the objective function with respect to a weight depends on the difference between the partials of two mean-field free energies: one when both input and output units are clamped, and the other when only the input units are clamped. On a given training case, for each of these stages we let the network settle to equilibrium and then calculate the following derivatives:

∂F_MF/∂b_jk = -r_j r_k cos(γ_j - γ_k + θ_jk)
∂F_MF/∂θ_jk = r_j r_k b_jk sin(γ_j - γ_k + θ_jk)

The learning algorithm uses these gradients to find weight values that will minimize the objective over a training set.

5 EXPERIMENTAL RESULTS

We present below some illustrative examples to show that an adaptive network of directional units can be used in a range of paradigms, including associative memory, input/output mappings, and pattern completion.

5.1 SIMPLE AUTOASSOCIATIVE DUBM

The first set of experiments considers a simple autoassociative DUBM, which contains no hidden units and in which the units are fully connected.
As in a standard Hopfield network, the weights are set directly from the training patterns; they equal the superposition of the outer products of the patterns. We have run several experiments with simple autoassociative DUBMs. The empirical results parallel those for binary-unit autoassociative networks. We find, for example, that a network containing 30 fully interconnected units is capable of reliably settling from a corrupted version of one of 4 stored patterns to a state near the pattern. These patterns thus form stable attractors, as the network can perform pattern completion and clean-up from noisy inputs. The rotation-invariance property of the energy function allows any rotated version of a training pattern to also act as an attractor. The network's performance rapidly degrades for more than 4 orthogonal patterns; the patterns themselves no longer act as fixed points, and many random initial states end in states far from any stored pattern. In addition, more orthogonal patterns can be stored than random patterns. See Noest (1988) for an analysis of the capacity of an autoassociative DUBM with sparse and asymmetric connections.

5.2 LEARNING INPUT/OUTPUT MAPPINGS

We have also used the mean-field DUBM learning algorithm to learn the weights in networks containing hidden units. We have experimented with a task that is well-suited to a directional representation. There is a single-jointed robot arm, anchored at a point, as shown in Figure 2. The input consists of two angles: the angle between the first arm segment and the positive x-axis (λ), and the angle between the two arm segments (ρ). The two segments each have a fixed length, A and B; these are not explicitly given to the network. The output is the angle between the line connecting the two ends of the arm and the x-axis (μ).
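A sketch for generating training triples for this task follows directly from the arm geometry; it is illustrative only, with arbitrary segment lengths, and uses atan2 rather than a plain arctangent so that the quadrant of the answer is resolved automatically.

```python
import math, random

def target_angle(lam, rho, A=1.0, B=0.7):
    # mu = arctan( (A sin(lam) - B sin(lam + rho)) /
    #              (A cos(lam) - B cos(lam + rho)) )
    return math.atan2(A * math.sin(lam) - B * math.sin(lam + rho),
                      A * math.cos(lam) - B * math.cos(lam + rho))

# Sanity check: with rho = pi the two segments are collinear, so the arm is
# one straight line of length A + B and mu coincides with lam.
assert abs(target_angle(0.3, math.pi) - 0.3) < 1e-12

# A small synthetic training set of (lambda, rho, mu) triples
random.seed(2)
cases = [(lam, rho, target_angle(lam, rho))
         for lam, rho in ((random.uniform(0, 2 * math.pi),
                           random.uniform(0, 2 * math.pi))
                          for _ in range(5))]
```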
This target angle is related in a complex, non-linear way to the input angles; the network must learn to approximate the following trigonometric relationship:

μ = arctan( (A sin λ - B sin(λ + ρ)) / (A cos λ - B cos(λ + ρ)) )

Figure 2: A sample training case for the robot arm problem. The arm consists of two fixed-length segments, A and B, and is anchored on the x-axis. The two angles, λ and ρ, are given as input for each case, and the target output is the angle μ.

With 500 training cases, a DUBM with 2 input units and 8 hidden units is able to learn the task so that it can accurately estimate μ for novel patterns. The learning requires 200 iterations of a conjugate gradient training algorithm. On each of 100 testing patterns, the resultant length of the output unit exceeds .85, and the mean error on the angle is less than .05 radians. The network can also learn the task with as few as 5 hidden units, with a concomitant decrease in learning speed. The compact nature of this network shows that the directional units form a natural, efficient representation for this problem.

5.3 COMPLEX PATTERN COMPLETION

Our earlier work described a large-scale DUBM that attacks a difficult problem in computer vision: image segmentation. In MAGIC (Mozer et al., 1992), directional values are used to represent alternative labels that can be assigned to image features. The goal of MAGIC is to learn to assign appropriate object labels to a set of image features (e.g., edge segments) based on a set of examples. The idea is that the features of a given object should have consistent phases, with each object taking on its own phase. The units in the network are arranged into two layers, feature and hidden, and the computation proceeds by randomly initializing the phases of the units in the feature layer and settling on a labeling through a relaxation procedure.
The units in the hidden layer learn to detect spatially local configurations of the image features that are labeled in a consistent manner across the training examples. MAGIC successfully learns to segment novel scenes consisting of overlapping geometric objects. The emergent DUBM properties described above are essential to MAGIC's ability to perform this task. The complex weights are necessary in MAGIC, as the weights encode statistical regularities in the relationships between image features, e.g., that two features typically belong to the same object (i.e., have similar phase values) or to different objects (i.e., are out of phase). The fact that a unit's resultant length reflects the certainty in a phase label allows the system to decide which phase labels to use when updating labels of neighboring features: the initially random phases are ignored, while confident labels are propagated. Finally, the rotation-invariance property allows the system to assign labels to features in a manner consistent with the relationships described in the weights, where it is the relative rather than absolute phases of the units that are important.

6 CURRENT DIRECTIONS

We are currently extending this work in a number of directions. We are extending the definition of a DUBM to combine binary and directional units (Radford Neal, personal communication). This expanded representation may be useful in domains with directional data that is not present everywhere. For example, it can be directly applied to the object labeling problem explored in MAGIC. The binary aspect of the unit can describe whether a particular image feature is present or absent. This may enable the system to handle various complications, particularly labeling across gaps along the contour of an object. Finally, we are applying a DUBM network to the interesting and challenging problem of time-series prediction of wind directions.
Acknowledgements

The authors thank Geoffrey Hinton for his generous support and guidance. We thank Radford Neal, Peter Dayan, Conrad Galland, Sue Becker, Steve Nowlan, and other members of the Connectionist Research Group at the University of Toronto for helpful comments regarding this work. This research was supported by a grant from the Information Technology Research Centre of Ontario to Geoffrey Hinton, and NSF Presidential Young Investigator award IRI-9058450 and grant 90-21 from the James S. McDonnell Foundation to MM.

References

Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169.
Baldi, P. and Meir, R. (1990). Computing with arrays of coupled oscillators: An application to preattentive texture discrimination. Neural Computation, 2(4):458-471.
Fradkin, E., Huberman, B. A., and Shenker, S. H. (1978). Gauge symmetries in random magnetic systems. Physical Review B, 18(9):4789-4814.
Gislen, L., Peterson, C., and Soderberg, B. (1992). Rotor neurons: Basic formalism and dynamics. Neural Computation, 4(5):737-745.
Hinton, G. E. (1989). Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation, 1(2):143-150.
Mardia, K. V. (1972). Statistics of Directional Data. Academic Press, London.
Mozer, M. C., Zemel, R. S., Behrmann, M., and Williams, C. K. I. (1992). Learning to segment images using dynamic feature binding. Neural Computation, 4(5):650-665.
Noest, A. J. (1988). Phasor neural networks. In Neural Information Processing Systems, pages 584-591, New York. AIP.
Peterson, C. and Anderson, J. R. (1987). A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019.
Zemel, R. S., Williams, C. K. I., and Mozer, M. C. (1992). Adaptive networks of directional units. Technical Report CRG-TR-92-2, University of Toronto.
1992
Probability Estimation from a Database Using a Gibbs Energy Model

John W. Miller, Microsoft Research (9/1051), One Microsoft Way, Redmond, WA 98052
Rodney M. Goodman, Dept. of Electrical Engineering (116-81), California Institute of Technology, Pasadena, CA 91125

Abstract

We present an algorithm for creating a neural network which produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network tied or outperformed published results on a variety of databases.

1 INTRODUCTION

This paper addresses the problem of modeling a discrete database. The database is viewed as a collection of independent samples from a probability distribution. This distribution is called the underlying distribution. In contrast, the empirical distribution is the distribution obtained if you take independent random samples from the database (with replacement). The task of creating a probability model can be separated into two parts. The first part is the problem of choosing statistics of the samples which are expected to accurately represent the underlying distribution. The second part is the problem of choosing a model which is consistent with these statistics. Under reasonable assumptions, the optimal solution to the second problem is the method of Maximum Entropy. For a broad class of statistics, the Maximum Entropy solution is a Gibbs probability distribution (Slepian, 1972).
In this paper, the background and theoretical result of a transformation from joint statistics to a Gibbs energy (or network weight) representation is presented. We then outline the experimental test results of an efficient algorithm implementing this transform without using gradient descent iteration.

2 BACKGROUND

Define a set T to be the set of attributes (or fields) in a database. For a particular entry (or record) of the database, define the associated set of attribute values to be the configuration ω of the attributes. The set of attribute values associated with a subset b ⊆ T is called a sub-configuration ω_b. Using this set notation the Gibbs probability distribution may be defined:

P(ω) = Z^{-1} e^{V_T(ω)}   (1)

where

V_T(ω) = Σ_{b ⊆ T} J_b(ω)   (2)

The function V is called the energy. The function J_b, called the potential function, defines a real value for every sub-configuration of the set b. Z is the normalizing constant that makes the sum of probabilities of all configurations equal to unity. Prior work in the neural network literature using the Gibbs distribution (such as the Boltzmann Machine) has primarily used second-order models (J_b = 0 if |b| > 2) (Hinton, 1986). By adding new attributes not in the original database, second-order potentials have been used to model complex distributions. The work presented in this paper, in contrast, uses higher-order potentials to model complex probability distributions. We begin by considering the case where every potential of every order is used to model the distribution. The Principle of Inclusion-Exclusion from set theory states that the following two equations are equivalent:

g(A) = Σ_{b ⊆ A} f(b)   (3)

f(A) = Σ_{b ⊆ A} (-1)^{|A-b|} g(b)   (4)

The method of inverting an equation from the form of (3) into one in the form of (4) is a special case of Mobius Inversion.
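This inclusion-exclusion inversion is easy to check mechanically. The sketch below (illustrative only) builds g from an arbitrary random f over the subsets of a four-element set via (3) and recovers f exactly via (4):

```python
import random
from itertools import chain, combinations

def subsets(A):
    # All subsets of A, including the empty set and A itself
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(A), r) for r in range(len(A) + 1))]

random.seed(3)
T = frozenset({0, 1, 2, 3})
f = {b: random.uniform(-1, 1) for b in subsets(T)}

# Equation 3: g(A) = sum over b subset of A of f(b)
g = {A: sum(f[b] for b in subsets(A)) for A in subsets(T)}

# Equation 4 (Mobius inversion) recovers f from g:
# f(A) = sum over b subset of A of (-1)^{|A - b|} g(b)
for A in subsets(T):
    recovered = sum((-1) ** len(A - b) * g[b] for b in subsets(A))
    assert abs(recovered - f[A]) < 1e-9
```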
Clifford-Hammersley (Kindermann, 1980) used this relation to invert formula (2):

J_A(ω) = Σ_{b ⊆ A} (-1)^{|A-b|} V_b(ω)   (5)

Define the probability of a sub-configuration, p(ω_b), to be the probability that the attributes in set b take on the values defined in the configuration ω. Using (1) to describe the probability distribution of sub-configurations, equation (5) can be written:

J_A(ω) = Σ_{b ⊆ A} (-1)^{|A-b|} ln(p(ω_b))   (6)

3 A TRANSFORMATION TO GIBBS POTENTIALS

Equation (6) provides a technique for modeling distributions by potential functions rather than directly through the observable joint statistics of sets of attributes. If the model is truncated by setting high-order potentials to zero, then the energy model becomes an estimate of the model obtained by collecting the joint statistics, rather than an exact equivalent. If equation (6) is used directly, the error in the energy due to setting all potentials of order d to zero grows quickly with d. For this reason (6) must be normalized if it is going to be used in a truncated modeling scheme. A normalized version of equation (2) that corrects for the unequal number of potentials of different orders is:

V_A(ω) = Σ_{b ⊆ A} C(|A|-1, |b|-1)^{-1} J_b(ω)   (7)

where C(m, n) denotes "m choose n". This equation can be inverted to show the surprising result, a weight associated with ω_A:

J_A(ω) = ln(p(ω_A)) - (|A|-1)^{-1} Σ_{t ∈ A, b = A-t} ln(p(ω_b))   (8)

For example, with three attribute values {x, y, z}, the following potentials are defined:

J_{x} = ln(p(x))
J_{y} = ln(p(y))
J_{z} = ln(p(z))
J_{xy} = ln( p(xy) / (p(x)p(y)) )
J_{yz} = ln( p(yz) / (p(y)p(z)) )
J_{xz} = ln( p(xz) / (p(x)p(z)) )
J_{xyz} = ln( p(xyz) / sqrt(p(xy)p(yz)p(xz)) )

For a given database sample, a potential is activated if all of its defined attribute values are true for the sample. The weighted sum of all activated potentials recovers an approximation of the probability of the database sample.
If all potentials of every order have been used to create the model, then this approximation is exactly the probability of the sample in the empirical distribution. The correct weighting is given by equation (7). For example, it is easily verified that:

    \ln(p(xyz)) = \binom{2}{2}^{-1} J_{xyz} + \binom{2}{1}^{-1} (J_{xy} + J_{xz} + J_{yz}) + \binom{2}{0}^{-1} (J_{x} + J_{y} + J_{z}).

Miller and Goodman

The Gibbs model truncated to second order potentials would estimate the probability in this example by:

    \ln(p(xyz)) \approx \binom{2}{1}^{-1} (J_{xy} + J_{xz} + J_{yz}) + \binom{2}{0}^{-1} (J_{x} + J_{y} + J_{z}) = \ln \sqrt{p(xy)p(yz)p(xz)}.

4 PROOF OF THE INVERSION FORMULA

Theorem: Let T be a finite set. Each element of T will be called an attribute. Each attribute can take on one of a finite set of states called attribute values. A collection of attribute values for every element of T is called a configuration w. For all A \subseteq T (including both the empty set A = \emptyset and the full set A = T), let V_A(w) and J_A(w) be functions mapping the states of the elements of A to the real numbers. Define \binom{m}{n} = m! / ((m-n)! n!) to be "m choose n." Let V_\emptyset(w) = 0, J_\emptyset(w) = 0, and let V_A(w) = J_A(w) if |A| = 1. Then for |A| > 1:

    V_A(w) = \sum_{b \subseteq A} \binom{|A|-1}{|b|-1}^{-1} J_b(w)    (9)

and

    J_A(w) = V_A(w) - (|A|-1)^{-1} \sum_{b \subset A, |b| = |A|-1} V_b(w)    (10)

are equivalent in that any assignment of V_A and J_A values for all A \subseteq T will satisfy (9) if and only if they also satisfy (10).

Proof: Let \mathcal{J} be any assignment of the values J_A(w) for all A \subseteq T. Let \mathcal{V} be any assignment of all the values V_A(w) for all A \subseteq T. Then clearly (9) maps any assignment \mathcal{J} to a unique \mathcal{V}. We will represent this mapping by the function f, so (9) is abbreviated \mathcal{V} = f(\mathcal{J}). Similarly (10) maps any assignment \mathcal{V} to a unique \mathcal{J}. Equation (10) will be abbreviated \mathcal{J} = g(\mathcal{V}). The result of Lemma C1 below, applied with the value D set to n, shows that f(g(\mathcal{V})) = \mathcal{V}. In Lemma C2 below, it is shown g(f(\mathcal{J})) = \mathcal{J}. Therefore the equations (9) and (10) are inverse one-to-one mappings and the association of assignments between \mathcal{J} and \mathcal{V} are identical for the two equations.
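Equations (7) and (8) can be exercised numerically on the three-attribute example. The marginals below are hypothetical numbers chosen only to make the identities visible; the identities themselves hold for any positive values:

```python
import math

ln = math.log
# Toy marginals for the three-attribute example {x, y, z}
# (hypothetical numbers, chosen only to exercise the formulas).
p = {'x': 0.5, 'y': 0.4, 'z': 0.3,
     'xy': 0.25, 'yz': 0.12, 'xz': 0.15,
     'xyz': 0.06}

# Potentials from eq. (8):
J = {a: ln(p[a]) for a in 'xyz'}
J['xy'] = ln(p['xy'] / (p['x'] * p['y']))
J['yz'] = ln(p['yz'] / (p['y'] * p['z']))
J['xz'] = ln(p['xz'] / (p['x'] * p['z']))
J['xyz'] = ln(p['xyz'] / math.sqrt(p['xy'] * p['yz'] * p['xz']))

# The weighted sum of eq. (7) recovers the full joint log-probability exactly:
full = J['xyz'] + 0.5 * (J['xy'] + J['yz'] + J['xz']) + (J['x'] + J['y'] + J['z'])

# Truncating to second order leaves only an estimate (the geometric mean
# of the pairwise marginals):
second_order = 0.5 * (J['xy'] + J['yz'] + J['xz']) + (J['x'] + J['y'] + J['z'])
```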
Q.E.D.

Lemma C1: Rather than simply showing f(g(\mathcal{V})) = \mathcal{V}, a more general result will be shown. Since the number of potentials of a given order increases exponentially with the order, it is useful to approximate the energy of a configuration by defining a maximum order D such that all potentials of greater order are assumed to be zero: J_b(w) = 0 for all b such that |b| > D. Let \hat{V}_A(w) be the resulting approximation to the energy V_A(w). Let |A| = n.

Given

    J_A(w) = V_A(w) - (n-1)^{-1} \sum_{b \subset A, |b| = n-1} V_b(w)    (11)

and the order D approximation to equation (7):

    \hat{V}_A(w) = \sum_{i=1}^{D} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} J_b(w),

then

    \hat{V}_A(w) = \binom{n-1}{D-1}^{-1} \sum_{b \subseteq A, |b| = D} V_b(w).    (12)

Note: For the case D = n, the approximation is exact, \hat{V}_A(w) = V_A(w), and so f(g(\mathcal{V})) = \mathcal{V} is shown.

The lemma's result has a simple interpretation. The energy of a configuration is approximated by a scaled average of the energies of the subconfigurations of order D. Using equation (1) to relate energies to probabilities shows that the estimated probability is a scaled geometric mean of the order D marginal probabilities.

Proof: We start with the given equation for \hat{V}_A(w):

    \hat{V}_A(w) = \sum_{i=1}^{D} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} J_b(w).

Use equation (11) to substitute J_b(w) out of the equation:

    \hat{V}_A(w) = \sum_{i=1}^{D} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} ( V_b(w) - (i-1)^{-1} \sum_{c \subset b, |c| = |b|-1} V_c(w) ).

Separate the term in the first sum where i = D:

    \hat{V}_A(w) = \binom{n-1}{D-1}^{-1} \sum_{b \subseteq A, |b| = D} V_b(w) + \sum_{i=1}^{D-1} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} V_b(w)
                 - \sum_{i=1}^{D} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} \sum_{c \subset b, |c| = |b|-1} (i-1)^{-1} V_c(w).

By subtracting the right hand side of (12) from both sides, and noting that the inner summation over c has no terms when i = 1, we see that it is sufficient to show

    \sum_{i=1}^{D-1} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} V_b(w) = \sum_{i=2}^{D} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} \sum_{c \subset b, |c| = i-1} (i-1)^{-1} V_c(w).

The right hand side inner double summation counts a given V_c(w) once for every b such that c \subset b \subseteq A with i = |b| = |c| + 1. This occurs exactly |A| - |c| = n - i + 1 times.
Thus

    \sum_{i=1}^{D-1} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} V_b(w) = \sum_{i=2}^{D} \binom{n-1}{i-1}^{-1} \frac{n-i+1}{i-1} \sum_{c \subseteq A, |c| = i-1} V_c(w).

Now perform a change of variables. Let j = i - 1 on the right hand side:

    \sum_{i=1}^{D-1} \binom{n-1}{i-1}^{-1} \sum_{b \subseteq A, |b| = i} V_b(w) = \sum_{j=1}^{D-1} \binom{n-1}{j}^{-1} \frac{n-j}{j} \sum_{c \subseteq A, |c| = j} V_c(w).

Clearly both sides are identical since

    \binom{n-1}{j-1}^{-1} = \binom{n-1}{j}^{-1} \frac{n-j}{j}.

Q.E.D.

Lemma C2: g(f(\mathcal{J})) = \mathcal{J}. Let |A| = n. It is sufficient to show that substituting V_b out of (10) using (9) yields an identity:

    J_A(w) = V_A(w) - \sum_{b \subset A, |b| = n-1} (n-1)^{-1} V_b(w)
           = \sum_{b \subseteq A} \binom{n-1}{|b|-1}^{-1} J_b(w) - (n-1)^{-1} \sum_{b \subset A, |b| = n-1} \sum_{c \subseteq b} \binom{n-2}{|c|-1}^{-1} J_c(w).

Separate the term in the first sum for which b = A:

    J_A(w) = J_A(w) + \sum_{b \subset A, b \neq A} \binom{n-1}{|b|-1}^{-1} J_b(w) - (n-1)^{-1} \sum_{b \subset A, |b| = n-1} \sum_{c \subseteq b} \binom{n-2}{|c|-1}^{-1} J_c(w).

Subtract J_A(w) from both sides. The right hand side double sum counts a given J_c(w) once for every b such that c \subseteq b \subset A with |b| = |A| - 1 = n - 1. This occurs |A| - |c| = n - |c| times. It is sufficient to show

    \sum_{c \subset A, c \neq A} \binom{n-1}{|c|-1}^{-1} J_c(w) = \sum_{c \subset A, c \neq A} \frac{n-|c|}{n-1} \binom{n-2}{|c|-1}^{-1} J_c(w).

Both sides are identical since:

    \binom{n-1}{l-1}^{-1} = \frac{n-l}{n-1} \binom{n-2}{l-1}^{-1}.

Q.E.D.

5 USING THE INVERSION FORMULA TO SET NETWORK WEIGHTS

Our method of probability estimation is to first collect empirical frequencies of patterns (subconfigurations) from the database. (An efficient hash table implementation of the algorithm is described in (Miller, 1993). The basic idea is to remove from the database a pattern with low potential whenever there is a hash collision which prevents a new pattern count from being stored.) Second, interpreting these frequencies as probabilities, we convert each pattern frequency to a potential using equation (8). We assume patterns with unknown or uncalculated frequencies have zero potential. Low order patterns which never occur are assigned a large negative potential (this approximation is needed to model events with zero probability in the empirical distribution).
Finally, we calculate the probability of any new pattern not in the training set using the neural network implementation of equations (7) and (1).

6 RESULTS

One way to validate the performance of a probability model is to test its performance as a classifier. The probability model is used as a classifier by calculating the probabilities of each unknown class value together with the known attribute values. The most probable combination is then chosen as the predicted class. Used as a classifier, the Gibbs model tied or outperformed published results on a variety of databases. Table 1 outlines results on three datasets taken from the UC Irvine archive (Murphy, 1992). The Gibbs model results were collected from the very first experiment using the algorithm with the datasets. No difficult parameter adjustment is necessary to get the algorithm to classify at these rates. The iris database has 4 real-valued attributes. Each attribute was quantized into a decile ranking for use by the algorithm.

7 CONCLUSION

A new method of extracting a Gibbs probability model from a database has been presented. The approach uses the Principle of Inclusion-Exclusion to invert a set of collected statistics into a set of potentials for a Gibbs energy model. A hash table implementation is used to efficiently process database records in order to collect the most important potentials, or weights, which can be stored in the available memory. Although the model is designed to give accurate probability estimates rather than simply class labels, the model in practice works well as a classifier on a variety of databases.

Acknowledgements

This work is funded in part by DARPA and ONR under grant N00014-92-J-1860.

Table 1: Summary of Classification Results

    Database       A   C  R    Train  Test  Trials  Gibbs Rate  Compare
    House Voting   16  2  435  335    100   50      95.3%       95%
    Iris           4   3  150  120    30    100     96.3%       n.a.
    Iris           4   3  150  149    1     1000    97.1%       98.0%
    Breast Cancer  9   2  699  599    100   100     97.3%       n.a.
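As a concrete (and much simplified) illustration of using the truncated model as a classifier, the sketch below builds a second-order model from toy records and scores each candidate class value. The smoothing constant and the unseen-pattern penalty are our assumptions, standing in for the paper's hash table and large-negative-potential scheme:

```python
import math
from collections import Counter
from itertools import combinations

def train(records, alpha=0.1):
    """Estimate first- and second-order potentials J from records
    (each record is a dict {attribute: value})."""
    n = len(records)
    singles, pairs = Counter(), Counter()
    for r in records:
        items = sorted(r.items())
        singles.update(items)
        pairs.update(combinations(items, 2))
    J = {}
    for s, c in singles.items():
        J[(s,)] = math.log((c + alpha) / (n + alpha))      # J_b = ln p(b)
    for (s1, s2), c in pairs.items():
        p12 = (c + alpha) / (n + alpha)
        p1 = (singles[s1] + alpha) / (n + alpha)
        p2 = (singles[s2] + alpha) / (n + alpha)
        J[(s1, s2)] = math.log(p12 / (p1 * p2))            # eq. (8) with |A| = 2
    return J

def classify(J, record, class_attr, class_values):
    """Choose the class value whose completed record has the highest
    second-order truncated energy (weights 1 and 1/(n-1) as in eq. (7))."""
    best, best_score = None, -math.inf
    for v in class_values:
        items = sorted(dict(record, **{class_attr: v}).items())
        n = len(items)
        e = sum(J.get((s,), math.log(1e-6)) for s in items)  # unseen: big penalty
        e += sum(J.get(pr, 0.0) for pr in combinations(items, 2)) / (n - 1)
        if e > best_score:
            best, best_score = v, e
    return best

# Toy data: the feature value determines the class.
records = [{'f': 'a', 'cls': 'A'}] * 8 + [{'f': 'b', 'cls': 'B'}] * 8
J = train(records)
```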
    Breast Cancer  9   2  369  200    169   100     95.7%       93.7%

A = Attribute count in the database, excluding the class attribute
C = Class count
R = Record count
Train = Number of records used to create the energy for one trial
Test = Number of records tested in a single trial
Trials = Number of independent train-test trials used to calculate the rate
Gibbs Rate = Gibbs energy model classification rate
Compare = Baseline classification result of other methods (Schlimmer, 1987), (Weiss, 1992), (Zhang, 1992) respectively

References

D. Slepian, "On Maxentropic Discrete Stationary Processes," Bell System Technical Journal, 51, pp. 629-653, 1972.
G. E. Hinton and T. J. Sejnowski, "Learning and Relearning in Boltzmann Machines," in Parallel Distributed Processing, Vol. I, pp. 282-317, Cambridge, MA: MIT Press, 1986.
R. Kindermann and J. L. Snell, Markov Random Fields and their Applications, Providence, RI: American Mathematical Society, 1980.
J. W. Miller, "Building Probabilistic Models from Databases," California Institute of Technology, Ph.D. Thesis, 1993.
P. Murphy and D. Aha, UCI Repository of Machine Learning Databases [machine-readable data repository at ics.uci.edu in directory /pub/machine-learning-databases]. Irvine, CA: University of California, Department of Information and Computer Science, 1992.
J. C. Schlimmer, "Concept Acquisition Through Representational Adjustment," University of California at Irvine, Ph.D. Thesis, 1987.
S. Weiss and I. Kapouleas, "An Empirical Comparison of Pattern Recognition, Neural Nets, and Machine Learning Classification Methods," in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Vol. 1, pp. 781-787, Los Gatos, CA: Morgan Kaufmann, 1992.
J. Zhang, "Selecting Typical Instances in Instance-Based Learning," in Proceedings of the Ninth International Machine Learning Conference, Aberdeen, Scotland, pp. 470-479, San Mateo, CA: Morgan Kaufmann, 1992.
Word Space

Hinrich Schütze
Center for the Study of Language and Information
Ventura Hall
Stanford, CA 94305-4115

Abstract

Representations for semantic information about words are necessary for many applications of neural networks in natural language processing. This paper describes an efficient, corpus-based method for inducing distributed semantic representations for a large number of words (50,000) from lexical cooccurrence statistics by means of a large-scale linear regression. The representations are successfully applied to word sense disambiguation using a nearest neighbor method.

1 Introduction

Many tasks in natural language processing require access to semantic information about lexical items and text segments. For example, a system processing the sound sequence /rɛkənaɪsbiːtʃ/ needs to know the topic of the discourse in order to decide which of the plausible hypotheses for analysis is the right one: e.g. "wreck a nice beach" or "recognize speech". Similarly, a mail filtering program has to know the topical significance of words to do its job properly. Traditional semantic representations are ill-suited for artificial neural networks since they presume a varying number of elements in representations for different words, which is incompatible with a fixed input window. Their localist nature also poses problems because semantic similarity (for example between dog and cat) may be hidden in inheritance hierarchies and complicated feature structures. Neural networks perform best when similarity of targets corresponds to similarity of inputs; traditional symbolic representations do not have this property. Microfeatures have been widely used to overcome these problems. However, microfeature representations have to be encoded by hand and don't scale up to large vocabularies. This paper presents an efficient method for deriving vector representations for words from lexical cooccurrence counts in a large text corpus.
Proximity of vectors in the space (measured by the normalized correlation coefficient) corresponds to semantic similarity. Lexical cooccurrence can be easily measured. However, for a vocabulary of 50,000 words, there are 2,500,000,000 possible cooccurrence counts to keep track of. While many of these are zero, the number of non-zero counts is still huge. On the other hand, in any document collection most of these counts are small and therefore unreliable. Therefore, letter fourgrams are used here to bootstrap the representations. Cooccurrence statistics are collected for 5,000 selected fourgrams. Since each of the 5,000 fourgrams is frequent, counts are more reliable than cooccurrence counts for rare words. The 5000-by-5000 matrix used for this purpose is manageable. A vector for a lexical item is computed as the sum of fourgram vectors that occur close to it in the text. This process of confusion yields representations of words that are fine-grained enough to reflect semantic differences between the various case and inflectional forms a word may have in the corpus.

The paper is organized as follows. Section 2 discusses related work. Section 3 describes the derivation of the vector representations. Section 4 performs an evaluation. The final section concludes.

2 Related Work

Two kinds of semantic representations commonly used in connectionism are microfeatures (e.g. Waltz and Pollack 1985, McClelland and Kawamoto 1986) and localist schemes in which there is a separate node for each word (e.g. Cottrell 1989). Neither approach scales up well enough in its original form to be applicable to large vocabularies and a wide variety of topics. Gallant (1991) and Gallant et al. (1992) present a less labor-intensive method based on microfeatures, but the features for core stems still have to be encoded by hand for each new document collection. The derivation of the Word Space presented here is fully automatic.
It also uses feature vectors to represent words, but the features cannot be interpreted on their own. Vector similarity is the only information present in Word Space: semantically related words are close, unrelated words are distant. The emphasis on semantic similarity rather than decomposition into interpretable features is similar to Kawamoto (1988). Scholtes (1991) uses a two-dimensional Kohonen map to represent semantic similarity. While a Kohonen map can deal with non-linearities (in contrast to the singular value decomposition used below), a space of much higher dimensionality is likely to capture more of the complexity of semantic relatedness present in natural language. Scholtes' idea to use n-grams to reduce the number of initial features for the semantic representations is extended here by looking at n-gram cooccurrence statistics rather than occurrence in documents (cf. (Kimbrell 1988) for the use of n-grams in information retrieval). An important goal of many schemes of semantic representation is to find a limited number of semantic classes (e.g. classical thesauri such as Roget's, Crouch 1990, Brown et al. 1990). Instead, a multidimensional space is constructed here, in which each word has its own individual representation. Any clustering into classes introduces artificial boundaries that cut off words from part of their semantic neighborhood. In large classes, there will be members "from opposite sides of the class" that are only distantly related. So any class size is problematic, since words are either separated from close neighbors or lumped together with distant terms. Conversely, a multidimensional space does not make such an arbitrary classification necessary.

[Figure 1: A line from the New York Times with selected fourgrams. The line "governor quits knights of columbus over bishop's abortion gag rule" is shown with the selected fourgrams that occur in it: GOVE _QUI NIGH OLUM SHOP ABOR RUL VERN QUIT HTS LUMB HOP BORT RULE ERNO ORTI ULE_ RNOR RTIO.]
3 Derivation of the Vector Representations

Fourgram selection. There are about 600,000 possible fourgrams if the empty space, numbers and non-alphanumeric characters are included as "special letters". Of these, 95,000 occurred in 5 months of the New York Times. They were reduced to 5000 by first deleting all rare ones (frequency less than 1000) and then redundant and uninformative fourgrams as described below. If there is a group of fourgrams that occurs in only one word, all but one is deleted. For instance, the fourgrams BAGH, AGHD, GHDA, HDAD tend to occur together in Baghdad, so three of them will be deleted. The rationale for this move is that cooccurrence information about one of the fourgrams can be fully derived from each of the others, so that an index in the matrix would be wasted if more than one of them was included. The relative frequency of one fourgram occurring after another was calculated with fivegrams. For instance, the relative frequency of AGHD following BAGH is the frequency of the fivegram BAGHD divided by the frequency of the fourgram BAGH. Most fourgrams occur predominantly in three or four stems or words. Uninformative fourgrams are sequences such as RETI or TION that are part of so many different words (resigned, residents, retirements, resisted, ...; abortion, desperation, construction, detention, ...) that knowledge about cooccurrence with them carries almost no semantic information. Such fourgrams are therefore useless and are deleted. Again, fivegrams were used to identify fourgrams that occurred frequently in many stems. A set of 6290 fourgrams remained after these deletions. To reduce it to the required size of 5000, the most frequent 300 and the least frequent 990 were also deleted. Figure 1 shows a line from the New York Times and which of the 5000 selected fourgrams occurred in it.

Computation of fourgram vectors.
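The two counting steps above (fourgram extraction and the fivegram-based following-frequency used to detect redundant fourgrams) can be sketched as follows; the function names are ours and the input text is a toy stand-in for the newspaper corpus:

```python
from collections import Counter

def fourgrams(text):
    """Slide a four-character window over the text (space counts as a letter)."""
    t = text.upper()
    return [t[i:i + 4] for i in range(len(t) - 3)]

def follow_freq(text, g1, g2):
    """Relative frequency with which fourgram g2 directly follows g1,
    estimated from fivegram counts as in the paper:
    count(g1 + last letter of g2) / count(g1)."""
    t = text.upper()
    four = Counter(t[i:i + 4] for i in range(len(t) - 3))
    five = Counter(t[i:i + 5] for i in range(len(t) - 4))
    if four[g1] == 0 or g1[1:] != g2[:3]:
        return 0.0
    return five[g1 + g2[3]] / four[g1]

# The Baghdad example: AGHD always follows BAGH, so the fourgrams are
# redundant and all but one can be deleted.
r = follow_freq("BAGHDAD BAGHDAD", "BAGH", "AGHD")
```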
The computation of word vectors described below depends on fourgram vectors that accurately reflect semantic similarity in the sense of being used to describe the same contents. Consequently, one needs to be able to compare the sets of contexts two fourgrams occur in. For this purpose, a collocation matrix for fourgrams was collected such that the entry a_{i,j} counts the number of times that fourgram i occurs at most 200 fourgrams to the left of fourgram j. Two columns in this matrix are similar if the contexts the corresponding fourgrams are used in are similar. The counts were determined using five months of the New York Times (June - October 1990). The resulting collocation matrix is dense: only 2% of entries are zeros, because almost any two fourgrams cooccur. Only 10% of entries are smaller than 10, so that culling small counts would not increase the sparseness of the matrix. Consequently, any computation that employs the fourgram vectors directly would be inefficient. For this reason, a singular value decomposition was performed and 97 singular values extracted (cf. Deerwester et al. 1990) using an algorithm from SVDPACK (Berry 1992). Each fourgram can then be represented by a vector of 97 real values. Since the singular value decomposition finds the best least-squares approximation of the original space in 97 dimensions, two fourgram vectors will be similar if their original vectors in the collocation matrix are similar. The reduced fourgram vectors can be efficiently used for confusion as described in the following section.

Computation of word vectors. We can think of fourgrams as highly ambiguous terms. Therefore, they are inadequate if used directly as input to a neural net. We have to get back from fourgrams to words. For the experiment reported here, cooccurrence information was used for a second time to achieve this goal: in this case cooccurrence of a target word with any of the 5000 fourgrams.
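The reduction step can be sketched with NumPy's SVD (a stand-in for SVDPACK; the matrix here is a small random surrogate for the 5000-by-5000 collocation matrix, and keeping 10 dimensions stands in for the paper's 97):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 40))          # surrogate collocation matrix (40 "fourgrams")

k = 10                            # the paper keeps 97 singular values
U, s, Vt = np.linalg.svd(A, full_matrices=False)
fourgram_vecs = U[:, :k] * s[:k]  # one k-dimensional vector per fourgram

# The rank-k reconstruction is the best least-squares approximation of A;
# its Frobenius error is exactly the norm of the discarded singular values.
A_k = fourgram_vecs @ Vt[:k, :]
err = np.linalg.norm(A - A_k)
```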
For each of the selected words (see below), a context vector was computed for every position at which it occurred in the text. A context vector was defined as the sum of all defined fourgram vectors in a window of 1001 fourgrams centered around the target word. The context vectors were then normalized and summed. This sum of vectors is the vector representation of the target word. It is the confusion of all its uses in the corpus. More formally, if C(w) is the set of positions in the corpus at which w occurs and if \varphi(f) is the vector representation for fourgram f, then the vector representation r(w) of w is defined as (the dot stands for normalization):

    r(w) = \sum_{i \in C(w)} ( \sum_{f close to i} \varphi(f) )^{\cdot}

The treatment of words is case-sensitive. The following terminology will be used: a surface form is the string of characters as it occurs in the text; a lemma is either lower case or upper case: all letters are lower case with the possible exception of the first; word is used as a case-insensitive term. So every word has exactly two lemmas. A lemma of length n has up to 2^n surface forms. Almost every lower case lemma can be realized as an upper case surface form. But upper case lemmas are hardly ever realized as lower case surface forms. The confusion vectors were computed for all 54,366 lemmas that occurred at least 10 times in 18 months of the New York Times News Service (May 1989 - October 1990, about 50 million words). Table 1 lists the percentage of lower case and upper case lemmas, and the distribution of lemmas with respect to words.

Table 1: The distribution of lower and upper case in words and lemmas.

    lemmas       number  percent     words                  number  percent
    lower case   32549   60%         lower case lemma only  23766   52%
    upper case   21817   40%         upper case lemma only  13034   29%
                                     both lemmas            8783    19%
    total        54366   100%        total                  45583   100%
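The confusion step for r(w) can be sketched as follows. Here fourgram_ids is the corpus as a sequence of fourgram indices, with -1 marking fourgrams outside the selected 5000; the names are ours:

```python
import numpy as np

def word_vector(positions, fourgram_ids, fourgram_vecs, window=1001):
    """r(w): for each occurrence of w, sum the fourgram vectors in a window
    centered on it, normalize that context vector (the dot in the formula),
    and sum the normalized context vectors."""
    half = window // 2
    dim = fourgram_vecs.shape[1]
    total = np.zeros(dim)
    for i in positions:
        lo, hi = max(0, i - half), min(len(fourgram_ids), i + half + 1)
        ctx = np.zeros(dim)
        for f in fourgram_ids[lo:hi]:
            if f >= 0:                      # skip undefined fourgrams
                ctx += fourgram_vecs[f]
        norm = np.linalg.norm(ctx)
        if norm > 0:
            total += ctx / norm
    return total

# One occurrence at position 1, window of 3 fourgrams, toy 3-d fourgram vectors:
v = word_vector([1], [0, 1, 2, 0], np.eye(3), window=3)
```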
Table 2: Ten random and one selected word and their nearest neighbors.

    burglar:         burglars thief rob mugging stray robbing lookout chase C) ate thieves
    disable:         deter intercept repel halting surveillance shield maneuvers
    disenchantment:  disenchanted sentiment resentment grudging mindful unenthusiastic
    domestically:    domestic auto/-s importers/-ed threefold inventories drastically cars
    Dour:            melodies/-dic Jazzie danceable reggae synthesizers Soul funk tunes
    grunts:          heap into ragged goose neatly pulls buzzing rake odd rough
    kid:             dad kidding mom ok buddies Mom Oh Hey hey mama
    S.O.B.:          Confessions Jill Julie biography Judith Novak Lois Learned Pulitzer
    Ste.:            dry oyster whisky hot filling rolls lean float bottle ice
    workforce:       jobs employ/-s/-ed/-ing attrition workers clerical labor hourly
    keeping:         hoping bring wiping could some would other here rest have

4 Evaluation

Table 2 shows a random sample of 10 words and their ten nearest neighbors in Word Space (or fewer depending on how many would fit in the table). The neighbors are listed in order of proximity to the head word. burglar, disenchantment, kid, and workforce are closely related to almost all of their nearest neighbors. The same is true for disable, domestically, and Dour, if we regard as the goal to come up with a characterization of semantic similarity in a corpus (as opposed to the language in general). In the New York Times, the military use of disable dominates: Iraq's military, oil pipelines and ships are disabled. Similarly, domestic usually refers to the domestic market, and only one person named Dour occurs in the newspaper: the Senegalese jazz musician Youssou N'Dour. So these three cases can also be counted as successes. The topic/content of grunts is moderately well characterized by other objects like goose and rake that one would also expect on a farm. Finally, little useful information can be extracted for S.O.B. and Ste. S.O.B. mainly occurs in articles about
the bestseller "Confessions of an S.O.B." Since it is not used literally, its semantics don't come out very well. The neighbors of Ste. are for the most part words associated with water, because the name of the river "Ste.-Marguerite" in Quebec (popular for salmon fishing) is the most frequent context for Ste. Since the significance of Ste. depends heavily on the name it occurs in, its usefulness as a contributor of semantic information is limited, so its poor characterization should probably not be seen as problematic. The word keeping has been added to the table to show that the vector representations of words that can be used in a wide variety of contexts are not interesting. Table 3 shows that it is important for many words to make a distinction between

Table 3: Words for which case or inflection matter. (The number in parentheses is the normalized correlation coefficient between the two forms.)

    pinch (.41):       outs pitch Cone hitting Cary strikeout Whitehurst Teufel Dykstra mound
    Pinch:             unsalted grated cloves pepper teaspoons coarsely parsley Combine cumin
    kappa (.49):       casein protein/-s synthesize liposomes recombinant enzymes amino dna
    Kappa:             Phi Wesleyan graduate cum dean graduating nyu Amherst College Yale
    roe (.54):         cod squid fish salmon flounder lobster haddock lobsters crab chilled
    Roe:               Wade v overturn/-ing uphold/-ing abortion Reproductive overrule
    completion (.73):  complete/-d/-s/-ing complex phase/-s uncompleted incomplete
    completions:       touchdown/-s interception/-s td yardage yarder tds fumble sacked
    ok (.60):          d me I m wouldn t crazy you ain anymore
    oks:               approve/-s/-d/-ing Senate Waxman bill appropriations omnibus
    triad (.52):       warhead/-s ballistic missile/-s ss bombers intercontinental silos
    triads:            Triads Organized Interpol Cosa Crips gangs trafficking smuggling
lower case and upper case and between different inflections. The normalized correlation coefficient between the two case/inflectional forms of the word is indicated in each example.

Table 4: Ten disambiguation experiments using the vector representations.

    word        senses                                  % correct (per sense; all)
    capital/s   goods / seat of government              96  92      95
    interest/s  special attention / financial           94  92      93
    motion/s    movement / proposal                     92  91      92
    plant/s     factory / living being                  94  88      92
    ruling      decision / to exert control             90  91      90
    space       area, volume / outer space              89  90      90
    suit/s      legal action / garments                 94  95      95
    tank/s      combat vehicle / receptacle             97  85      95
    train/s     railroad cars / to teach                94  69      89
    vessel/s    ship / blood vessel / hollow utensil    93  91  86  92

Word sense disambiguation. Word sense disambiguation is a task that many semantic phenomena bear on and is therefore well suited to evaluate the quality of semantic representations. One can use the vector representations for disambiguation in the following way. The context vector of an occurrence of an ambiguous word is defined as the sum of all word vectors occurring in a window around it. The set of context vectors of the word in the training set can be clustered. The clustering programs used were AutoClass (Cheeseman et al. 1988) and Buckshot (Cutting et al. 1992). The clusters found (between 2 and 13) were assigned senses by inspecting a few of their members (10-20). An occurrence of an ambiguous word in the test set was then disambiguated by assigning the sense of the training cluster that was closest to its context vector. Note that this method is unsupervised in that the structure of the "sense space" is analyzed automatically by clustering. See Schütze (1992) for a more detailed description. Table 4 lists the results for ten disambiguation experiments that were performed
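The assignment step of this method (closest training cluster under the normalized correlation coefficient) can be sketched as below; the two-dimensional centroids are hypothetical stand-ins for real cluster centroids:

```python
import numpy as np

def cosine(u, v):
    """Normalized correlation coefficient used to compare vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(context_vec, sense_centroids):
    """Assign the sense whose training-cluster centroid is closest
    to the context vector of the test occurrence."""
    return max(sense_centroids,
               key=lambda s: cosine(context_vec, sense_centroids[s]))

# Hypothetical centroids for the two senses of "suit":
centroids = {"legal action": np.array([1.0, 0.1]),
             "garments": np.array([0.1, 1.0])}
sense = disambiguate(np.array([0.9, 0.2]), centroids)
```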
Training and test sets were taken from the New York Times newswire and were disjoint for each word. These disambiguation results are among the best reported in the literature (e.g. Yarowsky 1992). Apparently, the vector representations respect fine sense distinctions. An interesting question is to what degree the vector representations are distributed. Using the algorithm for disambiguation described above, a set of contexts of suit was clustered and applied to a test text. When the first 30 dimensions were used for clustering the training set, the error rate was 9% in the test set. \\Then only the odd dimensions were used (1,3,5, ... ,27,29) the error was 14%. With only the even dimensions (2,4,6, ... ,28,30), 13% of occurrences in the test set were misclassified. This graceful degradation indicates that the vector representations are distributed. 5 Discussion and Conclusion The linear dimensionality reduction performed here could be a useful preprocessing step for other applications as well. Each of the fourgram features carries a small amount of information. Neglecting individual features degrades performance, but there are so many that they cannot be used directly as input to a neural network. The word sense disambiguation results suggest that no information is lost when only axes of variations extracted by the singular value decomposition are considered instead of the original 5000-dimensional fourgram vectors. Schiitze (Forthcoming) uses the same methodology for the derivation of syntactic representations for words (so that verbs and nouns occupy different regions in syntactic word space). Problems in pattern recognition often have the same characteristics: uniform distribution of information over all input features or pixels and a high-dimensional input space that causes problems in training if the features are used directly. 
A singular value decomposition could be a useful preprocessing step for data of this nature that makes neural nets applicable to high-dimensional problems for which training would otherwise be slow if possible at all. This paper presents Word Space, a new approach to representing semantic information about words derived from lexical cooccurrence statistics. In contrast to microfeature representations, these semantic representations can be summed for a given context to compute a representation of the topic of a text segment. It was shown that semantically related words are close in Word Space and that the vector representations can be used for word sense disambiguation. Word Space could therefore be a promising input representation for applications of neural nets in natural language processing such as information filtering or language modeling in speech recognition.

Acknowledgements

I'm indebted to Mike Berry for SVDPACK, to NASA and RIACS for AutoClass and to the San Diego Supercomputer Center for computing resources. Thanks to Martin Kay, Julian Kupiec, Jan Pedersen, Martin Roscheisen, and Andreas Weigend for help and discussions.

References

Berry, M. W. 1992. Large-scale sparse singular value computations. The International Journal of Supercomputer Applications 6(1):13-49.
Brown, P. F., V. J. D. Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1990. Class-based n-gram models of natural language. Manuscript, IBM.
Cheeseman, P., J. Kelly, M. Self, J. Stutz, W. Taylor, and D. Freeman. 1988. AutoClass: A Bayesian classification system. In Proceedings of the Fifth International Conference on Machine Learning.
Cottrell, G. W. 1989. A Connectionist Approach to Word Sense Disambiguation. London: Pitman.
Crouch, C. J. 1990. An approach to the automatic construction of global thesauri. Information Processing & Management 26(5):629-640.
Cutting, D., D. Karger, J. Pedersen, and J. Tukey. 1992.
Scatter/gather: A cluster-based approach to browsing large document collections. In Proceedings of SIGIR '92.
Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6):391-407.
Gallant, S. I. 1991. A practical approach for representing context and for performing word sense disambiguation using neural networks. Neural Computation 3(3):293-309.
Gallant, S. I., W. R. Caid, J. Carleton, R. Hecht-Nielsen, K. P. Qing, and D. Sudbeck. 1992. HNC's MatchPlus system. In Proceedings of TREC.
Kawamoto, A. H. 1988. Distributed representations of ambiguous words and their resolution in a connectionist network. In S. L. Small, G. W. Cottrell, and M. K. Tanenhaus (Eds.), Lexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology, and Artificial Intelligence. San Mateo, CA: Morgan Kaufmann.
Kimbrell, R. E. 1988. Searching for text? Send an N-gram! Byte Magazine May:297-312.
McClelland, J. L., and A. H. Kawamoto. 1986. Mechanisms of sentence processing: Assigning roles to constituents of sentences. In J. L. McClelland, D. E. Rumelhart, and the PDP Research Group (Eds.), Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models, 272-325. Cambridge, MA: The MIT Press.
Scholtes, J. C. 1991. Unsupervised learning and the information retrieval problem. In Proceedings of the International Joint Conference on Neural Networks.
Schütze, H. 1992. Dimensions of meaning. In Proceedings of Supercomputing '92.
Schütze, H. Forthcoming. Sublexical tagging. In Proceedings of the IEEE International Conference on Neural Networks.
Waltz, D. L., and J. B. Pollack. 1985. A strongly interactive model of natural language interpretation. Cognitive Science 9:51-74.
Yarowsky, D. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora.
In Proceedings of Coling-92.
1992
An Analog VLSI Chip for Radial Basis Functions

Janeen Anderson, John C. Platt, David B. Kirk*
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134

Abstract

We have designed, fabricated, and tested an analog VLSI chip which computes radial basis functions in parallel. We have developed a synapse circuit that approximates a quadratic function. We aggregate these circuits to form radial basis functions. These radial basis functions are then averaged together using a follower aggregator.

1 INTRODUCTION

Radial basis functions (RBFs) are a method for approximating a function from scattered training points [Powell, 1987]. RBFs have been used to solve recognition and prediction problems with a fair amount of success [Lee, 1991] [Moody, 1989] [Platt, 1991]. The first layer of an RBF network computes the distance of the input to the network to a set of stored memories. Each basis function is a non-linear function of a corresponding distance. The basis functions are then added together with second-layer weights to produce the output of the network. The general form of an RBF is

    y_i = Σ_j h_ij φ_j(||I − c_j||),    (1)

where y_i is the output of the network, h_ij is the second-layer weight, φ_j is the non-linearity, c_j is the jth memory stored in the network, and I is the input to the network.

*Current address: Caltech Computer Graphics Group, Caltech 350-74, Pasadena, CA 92115

Many researchers use Gaussians to create basis functions that have a localized effect in input space [Poggio, 1990][Moody, 1989]:

    φ_j = exp(−||I − c_j||² / (2σ²))    (2)

Figure 1: The architecture of a Gaussian RBF network, with first-layer quadratic synapses feeding exponential non-linearities and second-layer linear synapses.

The architecture of a Gaussian RBF network is shown in figure 1. RBFs can be implemented either via software or hardware.
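As a plain-software reference point for equations 1 and 2, a Gaussian RBF network can be sketched in a few lines of Python (the centers, weights, and width below are illustrative values, not taken from the chip):

```python
import numpy as np

def rbf_network(I, centers, h, sigma=1.0):
    """Eqs. (1)-(2): y_i = sum_j h_ij * phi_j, phi_j = exp(-|I - c_j|^2 / (2 sigma^2))."""
    d2 = ((I[None, :] - centers) ** 2).sum(axis=1)   # squared distance to each memory c_j
    phi = np.exp(-d2 / (2.0 * sigma ** 2))           # first layer: Gaussian basis functions
    return h @ phi                                   # second layer: weighted sum

# Two stored memories in a 2-D input space, one network output.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
h = np.array([[1.0, -1.0]])
y = rbf_network(np.array([0.0, 0.0]), centers, h)    # input sits exactly on the first memory
```

With the input on the first center, the first basis function contributes fully and the second is attenuated by its squared distance, so y = 1 − exp(−1) for these illustrative weights.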
If high speed is not necessary, then computing all of the basis functions in software is adequate. However, if an application requires many inputs or high speed, then hardware is required. RBFs use a lot of operations more complex than simply multiplication and addition. For example, a Gaussian RBF requires an exponential for every basis function. Using a partition of unity requires a divide for every basis function. Analog VLSI is an attractive way of computing these complex operations very quickly: we can compute all of the basis functions in parallel, using a few transistors per synapse.

This paper discusses an analog VLSI chip that computes radial basis functions. We discuss how we map the mathematical model of an RBF into compact analog hardware. We then present results from a test chip that was fabricated. We discuss possible applications for the hardware architecture and future theoretical work.

2 MAPPING RADIAL BASIS FUNCTIONS INTO HARDWARE

In order to create an analog VLSI chip, we must map the idea of radial basis functions into transistors. In order to create a high-density chip, the mathematics of RBFs must be modified to be computed more naturally by transistor physics. This section discusses the mapping from Gaussian RBFs into CMOS circuitry.

Figure 2: Circuit diagram for a first-layer neuron, showing three Gaussian synapses and the sense amplifier.

2.1 Computing Quadratic Distance

Ideally, the first-layer synapses in figure 1 would compute a quadratic distance of the input to a stored value. Quadratics go to infinity for large values of their input, hence are hard to build in analog hardware and are not robust against outliers in the input data. Therefore, it is much more desirable to use a saturating non-linearity: we will use a Gaussian for a first-layer synapse, which approximates a quadratic near its peak.
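The "approximates a quadratic near its peak" remark is just the Gaussian's second-order Taylor expansion; a quick numerical check (unit width assumed purely for illustration):

```python
import math

# Near its peak, exp(-u^2/(2 sigma^2)) ~ 1 - u^2/(2 sigma^2): for small deviations u
# from the stored center the synapse behaves quadratically, then it saturates.
sigma = 1.0
errs = [abs(math.exp(-u**2 / (2 * sigma**2)) - (1 - u**2 / (2 * sigma**2)))
        for u in (0.0, 0.1, 0.2, 1.0)]
```

The approximation error is tiny for small u and grows once the Gaussian starts saturating, which is exactly the robustness-to-outliers property the text is after.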
We implement the first-layer Gaussian synapse using an inverter (see figure 2). The current running through each inverter from the voltage rail to ground is a Gaussian function of the inverter's input, with the peak of the Gaussian occurring halfway between the voltage rail and ground [Mead, 1980][Mead, 1992]. To adjust the center of the Gaussian, we place a capacitor between the input to the synapse and the input of the inverter. The inverter thus has a floating gate input. We adjust the charge on the floating gate by using a combination of tunneling and non-avalanche hot electron injection [Anderson, 1990] [Anderson, 1992].

All of the Gaussian synapses for one neuron share a voltage rail. The sense amplifier holds that voltage rail at a particular voltage, V_ref. The output of the sense amplifier is a voltage which is linear in the total current being drawn by the Gaussian synapses. We use a floating gate in the sense amplifier to ensure that the output of the sense amplifier is known when the input to the network is at a known state. Again, we adjust the floating gate via tunneling and injection.

Figure 3 shows the output of the sense amplifier for four different neurons. The data was taken from a real chip, described in section 3. The figure shows that the top of a Gaussian approximates a quadratic reasonably well. Also, the widths and heights of the outputs of each first-layer neuron match very well, because the circuit is operated above threshold.

Figure 3: Measured output of a set of four first-layer neurons. All of the synapses of each neuron are programmed to peak at the same voltage.
The x-axis is the input voltage, and the y-axis is the voltage output of the sense amplifier.

2.2 Computing the Basis Function

To compute a Gaussian basis function, the distance produced by the first layer needs to be exponentiated. Since the output of the sense amplifier is a voltage negatively proportional to the distance, a subthreshold transistor can perform this exponentiation. However, subthreshold circuits can be slow. Also, the choice of a Gaussian basis function is somewhat arbitrary [Poggio, 1990]. Therefore, we choose to adjust the sense amplifier to produce a voltage that is both above and below threshold. The basis function that the chip computes can be expressed as

    s_j = Σ_k Gaussian(I_k − c_jk),    (3)

    φ_j = s_j − θ, if s_j > θ; 0, otherwise,    (4)

where θ is a threshold that is set by how much current is required by the sense amplifier to produce an output equal to the threshold voltage of an N-type transistor.

Equations 3 and 4 have an intuitive explanation. Each first-layer synapse votes on whether its input matched its stored value. The sum of these votes is s_j. If the sum s_j is less than a threshold θ, then the basis function φ_j is zero. However, if the number of votes exceeds the threshold, then the basis function turns on. Therefore, one can adjust the dimensionality of the basis function by adjusting θ: the dimensionality is ⌈N − θ − 1⌉, where N is the number of inputs to the network.

Figure 4 shows how varying θ changes the basis function, for N = 2. The input to the network is a two-dimensional space, represented by location on the page. The value of the basis function is represented by the darkness of the ink. Setting θ = 1 yields the basis function on the left, which is a fuzzy 0-dimensional point. Setting θ = 0 yields the basis function on the right, which is a union of fuzzy 1-dimensional lines.

Figure 4: Examples of two simulated basis functions with differing dimensionality.
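A software sketch of the basis function of equations 3 and 4 (the rectified form of eq. 4 and the synapse width below are my reading of a garbled source, so treat this as illustrative behavior rather than the circuit's transfer function):

```python
import numpy as np

def hw_basis(I, center, theta, sigma=0.35):
    """Eqs. (3)-(4): sum the per-input Gaussian 'votes' s_j, then rectify above theta."""
    s = np.exp(-((I - center) ** 2) / (2 * sigma ** 2)).sum()
    return max(s - theta, 0.0)

c = np.zeros(2)                                             # N = 2 inputs, center at origin
on_center = hw_basis(np.array([0.0, 0.0]), c, theta=1.0)    # both inputs match
one_off   = hw_basis(np.array([0.0, 5.0]), c, theta=1.0)    # one input far off: basis stays off
lax       = hw_basis(np.array([0.0, 5.0]), c, theta=0.0)    # theta = 0 tolerates a missed input
```

Lowering θ lets the basis function turn on even when some inputs miss the stored center, which is the adjustable-dimensionality effect shown in figure 4.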
Having an adjustable dimension for basis functions is useful, because it increases the robustness of the basis function. A Gaussian radial basis function is non-zero only when all elements of the input vector roughly match the center of the Gaussian. By using a hardware basis function, we can allow certain inputs not to match, while still turning on the basis function.

2.3 Blending the Basis Functions

To make the blending of the basis functions easier to implement in analog VLSI, we decided to use an alternative method for basis function combination, called the partition of unity [Moody, 1989]:

    y_i = Σ_j h_ij φ_j / Σ_j φ_j    (5)

The partition of unity suggests that the second layer should compute a weighted average of first-layer outputs, not just a weighted sum. We can compute a weighted average reasonably well with a follower aggregator used in the linear region [Mead, 1989].

Equations 4 and 5 can both be implemented by using a wide-range amplifier as a synapse (see figure 5). The bias of the amplifier is the output of the sense amplifier. That way, the above-threshold non-linearity of the bias transistor is applied to the output of the first layer and implements equation 4. The amplifier then attempts to drag the output of the second-layer neuron towards a stored value h_ij and implements equation 5. We store the value on a floating gate, using tunneling and injection.

The follower aggregator does not implement equation 5 perfectly: the amplifiers saturate, hence introduce a non-linearity. A follower aggregator implements

    Σ_j tanh(α(h_ij − y_i)) φ_j = 0.    (6)

Figure 5: Circuit diagram for second-layer synapses.

We use a capacitive divider to increase the linear range (decrease α) of the amplifiers. However, the non-linearity of the amplifiers may be beneficial, because it reduces the effect of outliers in the stored h_ij values.

3 RESULTS

We fabricated the chip in 2 micron CMOS.
The current version of the chip has 8 inputs, 159 basis functions, and 4 outputs. The chip size is 2.2 millimeters by 9.6 millimeters.

The core radial basis function circuitry works end-to-end. By measuring the output of the sense amplifier, we can measure the response of the first layer, which is shown in figure 3. Experiments show that the average width of the first-layer Gaussians is 0.350 volts, with a standard deviation of 23 millivolts. The centers of the first-layer Gaussians can be programmed more accurately than 15 millivolts, which is the resolution of the test setup for this chip. Further experiments show that the second-layer followers are linear to within 4% over 5 volts. Due to one mis-sized transistor, programming the second layer accurately is difficult. We have successfully tested the chip at 90 kHz, which is the speed limit of the current test setup. We have not yet tested the chip at its full speed. The static power dissipation of the chip is 2 milliwatts.

Figure 6 shows an example of real end-to-end output of the chip. All synapses for each first-layer neuron are programmed to the same value. The first-layer neurons are programmed to a ramp: each neuron is programmed to respond to a voltage 32 millivolts higher than the previous neuron. The second-layer neurons are programmed to values shown by the y-values of the dots in figure 6. The output of the chip is shown as the solid line in figure 6. The output is measured as all of the inputs to the chip are swept simultaneously. The chip splines and smooths out the noisy stored second-layer values. Notice that the stored second-layer values are low for inputs near 2.5 V: the output of the chip is correspondingly lower.

Figure 6: Example of end-to-end output measured from the chip (output voltage versus input voltage).

4 FUTURE WORK

The mathematical model of the hardware network suggests interesting theoretical future work.
There are two novel features of this model: the variable dimensionality of the basis functions, and the non-linearity in the partition of unity. More simulation work needs to be done to see how much of a benefit these features yield.

The chip architecture discussed in this paper is suitable for many medium-dimensional function mapping problems where radial basis functions are appropriate. For example, the chip is useful for high speed control, optical character recognition, and robotics. One application of the chip we have studied further is the anti-aliasing of printed characters, with proportional spacing, multiple fonts, and arbitrary scaling. Each anti-aliased pixel has an intensity which is the integral of the character's partial coverage of that pixel convolved with some filter. The chip could perform a function interpolation for each pixel of each character. The function being interpolated is the intensity integral, based on the subpixel coverage as convolved with the anti-aliasing filter kernel. Figure 7 shows the results of the anti-aliasing of the character using a simulation of the chip.

5 CONCLUSIONS

We have described a multi-layer analog VLSI neural network chip that computes radial basis functions in parallel. We use inverters as first-layer synapses, to compute Gaussians that approximate quadratics. We use follower aggregators as second-layer neurons, to compute the basis functions and to blend the basis functions using a partition of unity. Preliminary experiments with a test chip show that the core radial basis function circuitry works. In the future, we will explore the new basis function model suggested by the hardware and further investigate applications of the chip.

Figure 7: Three images of the letter "a". The image on the left is the high resolution anti-aliased version of the character. The middle image is a smaller version of the left image.
The right image is the chip simulation, trained to be close to the middle image, by using the left image as the training data.

Acknowledgements

We would like to thank Federico Faggin and Carver Mead for their good advice. Thanks to John Lazzaro who gave us a new version of Until, a graphics editor. We would also like to thank Steven Rosenberg and Bo Curry of Hewlett-Packard Laboratories for their suggestions and support.

References

Anderson, J., Mead, C., 1990, MOS Device for Long-Term Learning, U.S. Patent 4,935,702.
Anderson, J., Mead, C., Allen, T., Wall, M., 1992, Adaptable MOS Current Mirror, U.S. Patent 5,160,899.
Lee, Y., 1991, Handwritten Digit Recognition Using k Nearest-Neighbor, Radial Basis Function, and Backpropagation Neural Networks, Neural Computation, vol. 3, no. 3, 440-449.
Mead, C., Conway, L., 1980, Introduction to VLSI Systems, Addison-Wesley, Reading, MA.
Mead, C., 1989, Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA.
Mead, C., Allen, T., Faggin, F., Anderson, J., 1992, Synaptic Element and Array, U.S. Patent 5,083,044.
Moody, J., Darken, C., 1989, Fast Learning in Networks of Locally-Tuned Processing Units, Neural Computation, vol. 1, no. 2, 281-294.
Platt, J., 1991, Learning by Combining Memorization and Gradient Descent, In: Advances in Neural Information Processing 3, Lippman, R., Moody, J., Touretzky, D., eds., Morgan-Kaufmann, San Mateo, CA, 714-720.
Poggio, T., Girosi, F., 1990, Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks, Science, vol. 247, 978-982.
Powell, M. J. D., 1987, Radial Basis Functions for Multivariable Interpolation: A Review, In: Algorithms for Approximation, J. C. Mason, M. G. Cox, eds., Clarendon Press, Oxford.
Predicting Complex Behavior in Sparse Asymmetric Networks

Ali A. Minai and William B. Levy
Department of Neurosurgery
Box 420, Health Sciences Center
University of Virginia
Charlottesville, VA 22908

Abstract

Recurrent networks of threshold elements have been studied intensively as associative memories and pattern-recognition devices. While most research has concentrated on fully-connected symmetric networks, which relax to stable fixed points, asymmetric networks show richer dynamical behavior, and can be used as sequence generators or flexible pattern-recognition devices. In this paper, we approach the problem of predicting the complex global behavior of a class of random asymmetric networks in terms of network parameters. These networks can show fixed-point, cyclical or effectively aperiodic behavior, depending on parameter values, and our approach can be used to set parameters, as necessary, to obtain a desired complexity of dynamics. The approach also provides qualitative insight into why the system behaves as it does and suggests possible applications.

1 INTRODUCTION

Recurrent neural networks of threshold elements have been intensively investigated in recent years, in part because of their interesting dynamics. Most of the interest has focused on networks with symmetric connections, which always relax to stable fixed points (Hopfield, 1982) and can be used as associative memories or pattern-recognition devices. Networks with asymmetric connections, however, have the potential for much richer dynamic behavior and may be used for learning sequences (see, e.g., Amari, 1972; Sompolinsky and Kanter, 1986). In this paper, we introduce an approach for predicting the complex global behavior of an interesting class of random sparse asymmetric networks in terms of network parameters.
This approach can be used to set parameter values, as necessary, to obtain a desired activity level and qualitatively different varieties of dynamic behavior.

2 NETWORK PARAMETERS AND EQUATIONS

A network consists of n identical 0/1 neurons with threshold θ. The fixed pattern of excitatory connectivity between neurons is generated prior to simulation by a Bernoulli process with a probability p of connection from neuron j to neuron i. All excitatory connections have the fixed value w, and there is a global inhibition that is linear in the number of active neurons. If m(t) is the number of active neurons at time t, K the inhibitory weight, y_i(t) the net excitation and z_i(t) the firing status of neuron i at t, and C_ij a 0/1 variable indicating the presence or absence of a connection from j to i, then the equations for i are:

    y_i(t) = w Σ_j C_ij z_j(t−1) / [w Σ_j C_ij z_j(t−1) + K m(t−1)],  1 ≤ m(t−1) ≤ n    (1)

    z_i(t) = 1 if y_i(t) ≥ θ, 0 otherwise,  0 < θ < 1    (2)

If m(t−1) = 0, y_i(t) = 0 for all i. Equation (1) is a simple variant of the shunting inhibition neuron model studied by several researchers, and the network is similar to the one proposed by Marr (Marr, 1971). Note that (1) and (2) can be combined to write the neuron equations in a more familiar subtractive inhibition format. Defining α ≡ θK / ((1−θ)w),

    z_i(t) = 1 if Σ_j C_ij z_j(t−1) − α Σ_j z_j(t−1) ≥ 0, 0 otherwise    (3)

3 NETWORK BEHAVIOR

In this paper, we study the evolution of total activity, m(t), as the system relaxes. From Equation (3), the firing condition for neuron i at time t, given the activity m(t−1) = M at time t−1, is e_i(t) ≡ Σ_j C_ij z_j(t−1) ≥ αM. Thus, in order to fire at time t, neuron i must have at least ⌈αM⌉ active inputs. This allows us to calculate the average firing probability of a neuron given the prior activity M as:

    P{# active inputs ≥ ⌈αM⌉} = Σ_{k=⌈αM⌉}^{M} (M choose k) p^k (1−p)^{M−k} ≡ ρ(M; n, p, α)    (4)
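Equations 1-3 can be simulated directly; a minimal sketch in Python (parameter values are borrowed from the paper's figures, and the random initial state is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, w, K, theta = 120, 0.2, 0.4, 0.016, 0.85
alpha = theta * K / ((1 - theta) * w)        # composite parameter of eq. (3)

C = (rng.random((n, n)) < p).astype(float)   # Bernoulli connectivity C_ij
z = (rng.random(n) < 0.5).astype(float)      # random initial 0/1 firing state
for _ in range(100):
    m = z.sum()
    e = w * (C @ z)                          # net excitatory drive (numerator of eq. 1)
    y = e / (e + K * m) if m > 0 else np.zeros(n)   # shunting inhibition, eq. (1)
    z = (y >= theta).astype(float)           # threshold, eq. (2)
```

Note that y ≥ θ rearranges to Σ_j C_ij z_j − α m ≥ 0, which is exactly the subtractive form of equation 3; with these values α ≈ 0.227, matching the effectively aperiodic regime in the figures.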
If M is large enough, we can use a Gaussian approximation to the binomial distribution and a hyperbolic tangent approximation to the error function to get

    ρ(M; n, p, α) ≈ (1/2)[1 − erf(x/√2)] ≈ (1/2)[1 − tanh(√(2/π) x)]    (5)

where x ≡ (⌈αM⌉ − Mp) / √(Mp(1−p)). Finally, when M is large enough to assume ⌈αM⌉ ≈ αM, we get an even simpler form:

    ρ(M; n, p, α) ≈ (1/2)[1 − tanh(√M / T)],  where  T ≡ (1/(α−p)) √(πp(1−p)/2)    (6)

Assuming that neurons fire independently, as they will tend to do in such large, sparse networks (Minai and Levy, 1992a,b), the network's activity at time t is distributed as

    P{m(t) = N | m(t−1) = M} = (n choose N) ρ(M)^N (1 − ρ(M))^{n−N}    (7)

which leads to a stochastic return map for the activity:

    m(t) = n ρ(m(t−1)) + O(√n)    (8)

In Figure 1, we plot m(t) against m(t−1) for a 120 neuron network and two different values of α. The vertical bars show two standard deviations on either side of n ρ(m(t−1)). It is clear that the network's activity falls within the range predicted by (8). After an initial transient period, the system either switches off permanently (corresponding to the zero activity fixed point) or gets trapped in an O(√n) region around the point m̄ defined by m(t) = m(t−1). We call this the attracting region of the map. The size and location of the attracting region are determined by α and largely dictate the qualitative dynamic behavior of the network. As α ranges from 0 to 1, networks show three kinds of behavior: fixed points, short cycles, and effectively aperiodic dynamics.

Before describing these behaviors, however, we introduce the notion of available neurons. Let k_i be the number of input connections to neuron i (the fan-in of i). Given m(t−1) = M, if k_i < ⌈αM⌉, neuron i cannot possibly meet the firing criterion at time t. Such a neuron is said to be disabled by activity M. The group of neurons not disabled are considered available neurons.
At any specific activity M, there is a unique set, N_a(M), of available neurons in a given network, and only neurons from this set can be active at the next time step. Clearly, N_a(M_1) ⊆ N_a(M_2) if M_1 ≥ M_2. The average size of the available set at a given activity M is

    n_a(M; n, p, α) ≡ n [1 − P{k_i < ⌈αM⌉}] = n Σ_{k=⌈αM⌉}^{n} (n choose k) p^k (1−p)^{n−k}    (9)

Figure 1: Predicted distribution of m(t+1) given m(t), and empirical data (o) for two networks A and B: (a) effectively aperiodic behavior, θ = 0.85, K = 0.016, w = 0.4 ⇒ α ≈ 0.227 (2000 steps of data); (b) high activity cycle, θ = 0.85, K = 0.012, w = 0.4 ⇒ α ≈ 0.17. The vertical bars represent 4 standard deviations of the predicted distribution for each m(t). Note that the empirical values fall in the predicted range.

Figure 2: Activity time-series for three kinds of behavior shown by a 120 neuron network: (a) effectively aperiodic behavior (α ≈ 0.227); (b) high activity cycle (α ≈ 0.17); (c) low activity cycle, θ = 0.85, K ≈ 0.0247, w = 0.4 ⇒ α ≈ 0.35. Graphs (a) and (b) correspond to the data shown in Figure 1.

It can be shown that n_a(M) ≥ n ρ(M), so there are usually enough neurons available to achieve the average activity as per (8). We now describe the three kinds of dynamic behavior exhibited by our networks.

(1) Fixed Point Behavior: If α is very small, m̄ is close to n, inhibition is not strong enough to control activity, and almost all neurons switch on permanently. If α is too large, m̄ is close to 0 and the stochastic dynamics eventually finds, and remains at, the zero activity fixed point.
(2) Effectively Aperiodic Behavior: While deterministic, finite state systems such as our networks cannot show truly aperiodic or chaotic behavior, the time to repetition can be so long as to make the dynamics effectively aperiodic. This occurs when the attracting region is at a moderate activity level, well below the ceiling defined by the number of available neurons. In such a situation, the network, starting from an initial condition, successively visits a very large number of different states, and the activity, m(t), yields an effectively aperiodic time-series of amplitude O(√n), as shown in Figure 2(a).

(3) Cyclical Behavior: If the attracting region is at a high activity level, most of the available neurons must fire at every time step in order to maintain the activity predicted by (8). This forces network states to be very similar to each other, which, in turn, leads to even more similar successor states, and the network settles into a relatively short limit cycle of high activity (Figure 2(b)). When the attracting region is at an activity level just above switch-off, the network can get into a low-activity limit cycle mediated by a very small group of high fan-in neurons (Figure 2(c)). This effect, however, is unstable with regard to initial conditions and the value of α; it is expected to become less significant with increasing network size.

Figure 3: Neuron firing probability histograms for two 120-neuron networks in the effectively aperiodic phase (α ≈ 0.227). Graph (a), with mean 0.287 and variance 0.0689, is for a network with random connectivity generated through a Bernoulli process with p = 0.2, while Graph (b), with mean 0.310 and variance 0.0003, is for a network with a fixed fan-in of exactly 24, which corresponds to the mean fan-in for p = 0.2.
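The regimes above are organized by the return map of equation 8. A sketch of ρ from equations 4 and 6, iterating the noiseless map to locate the attracting region (direct iteration is a shortcut of mine, not the paper's procedure):

```python
import math

def rho_exact(M, p, alpha):
    """Eq. (4): P{at least ceil(alpha*M) of M Bernoulli(p) inputs are active}."""
    kmin = math.ceil(alpha * M)
    return sum(math.comb(M, k) * p**k * (1 - p)**(M - k) for k in range(kmin, M + 1))

def rho_tanh(M, p, alpha):
    """Eq. (6), for alpha > p: rho ~ (1/2)[1 - tanh(sqrt(M)/T)]."""
    T = math.sqrt(math.pi * p * (1 - p) / 2.0) / (alpha - p)
    return 0.5 * (1.0 - math.tanh(math.sqrt(M) / T))

# Iterate the noiseless return map m <- n*rho(m) of eq. (8).
n, p, alpha = 120, 0.2, 0.227
m = n / 2
for _ in range(200):
    m = n * rho_exact(round(m), p, alpha)
```

Because of the ceiling in equation 4, the deterministic iterate need not settle to a single point; it wanders in a narrow band, which is consistent with the "attracting region" described in the text.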
One interesting issue that arises in the context of effectively aperiodic behavior is that of state-space sampling within the O(√n) constraint on activity. We assess this by looking at the histogram of individual neuron firing rates. Figure 3(a) shows the histogram for a 120 neuron network in the effectively aperiodic phase. Clearly, some subspaces are being sampled much more than others and the histogram is very broad. This is mainly due to differences in the fan-in of individual neurons, and will diminish in larger networks. Figure 3(b) shows the neuron firing histogram for a 120 neuron network where each neuron has a fan-in of 24. The sampling is clearly much more "ergodic" and the dynamics less biased towards certain subspaces.

Figure 4: The complete set of non-zero activation values available to two identical neurons i and j with fan-in 24 in a 120-neuron network.

4 ACTIVATION DYNAMICS

While our modeling so far has focused on neural firing, it is instructive to look at the underlying neuron activation values, y_i. If m(t−1) = M, the possible y_i(t) values for a neuron i with fan-in k_i are given by the set

    Y(M, k_i) = { wq / (wq + KM) : max(0, k_i − n + M) ≤ q ≤ min(M, k_i) },  M > 0    (10)

with Y(0, k_i) ≡ {0}. Here q represents the number of active inputs to i, and the set Y_i ≡ ∪_M Y(M, k_i) represents the set of all possible activation values for the neuron. The network's n-dimensional activation state, y(t) ≡ [y_1, y_2, ..., y_n], evolves upon the activation space Y_1 × Y_2 × ... × Y_n, which is an extremely complex but regular object. In Figure 4, we plot a 2-dimensional subspace projection, called a Y-Y plot, of the activation space for a 120-neuron network, excluding the zero states. Both neurons shown have a fan-in of 24.
In actuality, only a small subset of the activation space is sampled due to the constraining effects of the dynamics and the improbability of most q values.

5 RELATING THE ACTIVITY LEVEL TO α

From a practical standpoint, it would be useful to know how the average activity in a network is related to its α parameter. This can be done using the hyperbolic tangent approximation of Equation (6). First, we define the activity level at time t as r(t) ≡ n⁻¹ m(t), i.e., the proportion of active neurons. This is a macrostate variable in the sense of (Amari, 1974). In the long term, the activity level becomes confined to an O(1/√n) region around the value corresponding to the activity fixed point. Thus, it is reasonable to use r̄ as an estimate for the time-averaged activity level ⟨r⟩.

Figure 5: Predicted and empirical activities for 1000 neuron networks with p = 0.05. Each data point is averaged over 7 networks.

To relate m̄ (and thus r̄) to α, we must solve the fixed point equation m̄ = n ρ(m̄). Substituting this and the definition of r̄ into
Understanding such behavior provides insight into the complex possibilities offered by sparse asymmetric networks, especially with regard to modeling such brain regions as the hippocampal CA3 area in mammals. The complex behavior of random asymmetric networks has been discussed before by Parisi (Parisi, 1986), Niitzel (Niitzel, 1991), and others. We show how to control this complexity in our networks by setting parameters appropriately. Acknowledgements: This research was supported by NIMH MH00622 and NIMH MH48161 to WBL, and by the Department of Neurosurgery, University of Virginia, Dr. John A. Jane, Chairman. References S. Amari (1972). Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Trans. on Computers C-21, 1197-1206 S. Amari (1974). A Method of Statistical Neurodynamics. Kybemetik 14,201-215 J.1. Hopfield (1982). Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Nat. Acad. Sci. USA 79, 2554-2558. D. Marr (1971). Simple Memory: A Theory for Archicortex. Phil. Trans. R. Soc. Lond. B 262, 23-81. A.A. Minai and W.B. Levy (1992a). The Dynamics of Sparse Random Networks. In Review. A.A. Minai and W.B. Levy (1992b). Setting the Activity Level in Sparse Random Networks. In Review. K. Niitzel (1991). The Length of Attractors in Asymmetric Random Neural Networks with Deterministic Dynamics. J. Phys. A: Math. Gen 24, LI51-157. G. Parisi (1982). Asymmetric Neural Networks and the Process of Learning. J. Phys. A: Math. Gen. 19, L675-L680. H. Sompolinsky and I. Kanter (1986), Temporal Association in Asymmetric Neural Networks. Phys. Rev. Lett. 57,2861-2864.
Destabilization and Route to Chaos in Neural Networks with Random Connectivity

Bernard Doyon
Unite INSERM 230, Service de Neurologie, CHU Purpan, F-31059 Toulouse Cedex, France

Mathias Quoy
Centre d'Etudes et de Recherches de Toulouse, 2, avenue Edouard Belin, BP 4025, F-31055 Toulouse Cedex, France

Bruno Cessac
Centre d'Etudes et de Recherches de Toulouse, 2, avenue Edouard Belin, BP 4025, F-31055 Toulouse Cedex, France

Manuel Samuelides
Ecole Nationale Superieure de l'Aeronautique et de l'Espace, 10, avenue Edouard Belin, BP 4032, F-31055 Toulouse Cedex, France

Abstract

The occurrence of chaos in recurrent neural networks is supposed to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function but independent of the connectivity, that allows a sustained activity and the occurrence of chaos when reaching a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected infinite-sized networks. Moreover, the route towards chaos is numerically checked to be a quasi-periodic one, whatever the type of the first bifurcation is (Hopf bifurcation, pitchfork or flip).

1 INTRODUCTION

Most studies of recurrent neural networks assume sufficient conditions for convergence. Models with symmetric synaptic connections have dynamical properties strongly connected with those of spin-glasses. In particular, they have relaxational dynamics characterized by the decrease of a function which is analogous to the energy in spin-glasses (or free energy for models subjected to thermal noise). Networks with asymmetric synaptic connections lose this convergence property and can have more complex dynamics.
Researchers nevertheless try to obtain such convergence, because the relaxation to a stable network state is simply interpreted as a stored pattern. However, as pointed out by Hirsch (1989), it might be very interesting, from an engineering point of view, to investigate non-convergent networks, because their dynamical possibilities are much richer for a given number of units. Moreover, the real brain is a highly dynamic system. Recent neurophysiological findings have focused attention on the rich temporal structures (oscillations) of neuronal processes (Gray et al., 1989), which might play an important role in information processing. Chaotic behavior has been found in the nervous system (Gallez & Babloyantz, 1991) and might be implicated in cognitive processes (Skarda & Freeman, 1987).

We have studied the emergent dynamics of a general class of non-convergent networks. Some results are already available in this field. Sompolinsky et al. (1988) established strong theoretical results concerning the occurrence of chaos for fully connected networks in the thermodynamic limit (N → ∞) by using the Dynamic Mean Field Theory. Their model is a continuous-time, continuous-state dynamical system with N fully connected neurons. Each connection Jij is a Gaussian random variable with zero mean and a normalized variance J²/N. As the Jij's are independent, the constant term J² can be seen as the variance of the sum of the weights connected to a given unit. Thus, the global strength of coupling remains constant for each neuron as N increases. The output function of each neuron is sigmoidal with a slope g. Sompolinsky et al. established that, in the limit N → ∞, there is a sharp transition from a stationary state to a chaotic flow. The onset of chaos is given by the critical value gJ = 1. For gJ < 1 the system admits the only fixed point zero, while for gJ > 1 it is chaotic.
The same authors performed simulations on finite and large values of N and showed the existence of an intermediate regime (nonzero stationary states or limit cycles) separating the stationary and the chaotic phase, but the routes to chaos were not systematically explored. The range of gJ where this intermediate behavior is observed shrinks as N increases.

2 THE MODEL

The hypothesis of a fully connected network not being biologically plausible, it is interesting to inspect how far these results can be extended as the dilution increases, for a general class of networks. The model we study is defined as follows: the number of units is N, and K is the fixed number of connections received by one unit (K > 1). There is no connection from one unit to itself. The K connections are randomly selected (with a uniform law) among the N − 1 other units. The state of each neuron i at time t is characterized by its output xi(t), which is a real variable varying between −1 and 1. The discrete and parallel dynamics is given by

xi(t+1) = f( Σj Jij xj(t) ),   with f(u) = tanh(gu),

where Jij is the synaptic weight which couples the output of unit j to the input of unit i. These weights are independent random variables chosen with a uniform law, with zero mean and a normalized variance J²/K. Notice that, with such a normalization, the standard deviation of the sum of the weights afferent to a given neuron is the constant J. One has to distinguish two effects of coupling on the behavior of such a class of models. The first effect is due to the strength of coupling, independent of the number of connections. The second is due to the architecture of coupling, which can be studied by keeping the global synaptic effect of coupling constant. The genericity of our model cancels the peculiar dynamic features which may occur due to geometrical effects. Moreover, it allows us to study the model at different scales of dilution.
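The model just defined is simple enough to sketch directly. The code below is a hedged illustration (not the authors' simulation code): it builds the diluted weight matrix with the uniform zero-mean, variance-J²/K weights described above, iterates the parallel dynamics with f(u) = tanh(gu) as one common sigmoid of slope g with outputs in (−1, 1), and estimates the spectral radius ρ = |λmax| used in the next section. The function names are invented for this sketch.

```python
import numpy as np

def make_network(N, K, J=1.0, rng=None):
    """Diluted weight matrix: each unit receives exactly K connections
    (none from itself), with uniform weights of zero mean and variance
    J**2 / K, as in the paper's model."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.sqrt(3.0 / K) * J            # uniform on [-a, a] has variance a**2/3 = J**2/K
    W = np.zeros((N, N))
    for i in range(N):
        others = np.delete(np.arange(N), i)
        cols = rng.choice(others, size=K, replace=False)
        W[i, cols] = rng.uniform(-a, a, size=K)
    return W

def iterate(W, x, g, steps):
    """Parallel discrete-time dynamics x(t+1) = tanh(g * W @ x(t))."""
    for _ in range(steps):
        x = np.tanh(g * (W @ x))
    return x

def spectral_radius(W):
    """rho = |lambda_max|, the largest eigenvalue modulus of W."""
    return float(np.max(np.abs(np.linalg.eigvals(W))))
```

For g well below 1/ρ the activity relaxes to the zero fixed point; raising g past the values reported in Table 1 produces sustained and then chaotic activity.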
3 FIRST BIFURCATION

For such a system, zero is always a fixed point; for low values of the bifurcation parameter it is the only fixed point, and it is stable. Let λmax denote the eigenvalue of the matrix of synaptic weights with the greatest modulus, and ρ = |λmax| the spectral radius of this matrix. The loss of stability arises when the product gρ becomes larger than 1. Our numerical simulations allow us to state that ρ is approximately equal to J for sufficiently large-sized networks. This statement can be derived rigorously for an approximate regularized model in the thermodynamic limit (Doyon et al., 1993).

Table 1: Mean value of the bifurcation parameter gJ over 30 networks (destabilization of the zero fixed point / onset of chaos).

Connectivity K | N = 128      | N = 256      | N = 512
4              | .954 / 1.337 | .965 / 1.298 | .970 / 1.258
8              | .950 / 1.449 | .966 / 1.301 | .978 / 1.233
16             | .951 / 1.434 | .965 / 1.315 | .969 / 1.239
32             | .961 / 1.360 | .958 / 1.333 | .972 / 1.246

We have studied by intensive simulations on a Cray X-MP computer the statistical spectral distribution for N ranging from 4 to 512 and for K ranging from 2 to 32. Figure 1 shows two examples of spectra (for convenience, J is set to 1). The apparent drawing of a real axis is due to the real eigenvalue density, but the distribution converges to a uniform one over the disk of radius J as N increases. A similar result has been achieved theoretically for full Gaussian matrices (Girko, 1985; Sommers et al., 1988). Thus ρ quickly decreases to J, so the loss of stability arises for a mean gJ value that increases to 1 with increasing size (Tab. 1). For a given N value, ρ is nearly independent of K.

Figure 1: Plot of the unit disk and of the eigenvalues in the complex plane. Left: 100 spectra for N=64, K=4. Right: 10 spectra for N=512, K=4.

Three types of first bifurcation can occur, depending on the eigenvalue λmax:

a) Hopf bifurcation: this corresponds to the appearance of oscillations.
There are two complex conjugate eigenvalues with maximal modulus ρ.

b) Pitchfork bifurcation: if λmax is real and positive, the bifurcation arises when gλmax = 1. Zero loses its stability and two branches of stable equilibria emerge.

c) Flip bifurcation: for λmax real and negative, a flip bifurcation occurs when gλmax = −1. This corresponds to the appearance of a period-two oscillation.

As the network size increases, the proportion of Hopf bifurcations increases, because the proportion of real λmax decreases, nearly independently of K.

4 ROUTE TO CHAOS

To study the following bifurcations, we chose the global observable

m(t) = (1/N) Σi xi(t).

The value m(t) correctly characterizes all types of first bifurcation that can occur. Indeed, the route to chaos is qualitatively well described by this observable, as we checked by studying the xi(t) simultaneously. The onset of chaos was computed by testing the sensitivity to initial conditions of m(t). We observed that the onset of chaos occurs for quite low parameter values. The transient zone from fixed point to chaos shrinks slowly to zero as the network size increases (Tab. 1). The qualitative study of the routes to chaos was made on a span of networks with various connectivities and quite large sizes. The route towards chaos that was observed was a quasi-periodic one in all cases, with some variations due to the particular symmetry x → −x. The following figures are obtained by plotting m(t+1) versus m(t) after discarding the transient (Fig. 2). They are not qualitatively different with a reconstruction in a higher-dimensional space. The dominant features are the following.

Figure 2: Example of the route to chaos when the first bifurcation is a Hopf one (N=128, K=16).
a) After the first bifurcation, the zero fixed point has lost its stability. The series of points (m(t), m(t+1)) densely covers a cycle (gJ = 1.0).
b) After the second Hopf bifurcation: projection of a T² torus (gJ = 1.23).
c) Frequency locking on the T² torus (gJ = 1.247).
d) Chaos (gJ = 1.26).

When the first bifurcation is a Hopf one (Fig. 2a), it is followed by a second Hopf bifurcation (Fig. 2b). Then there is a frequency locking occurring on the T² torus born from the second Hopf bifurcation (Fig. 2c), followed by chaos (Fig. 2d). This route is thus a quasi-periodic one (Ruelle & Takens, 1971; Newhouse et al., 1978).

A slightly different picture emerges when the first bifurcation is followed by a stable resonance, due to discrete time, occurring before the second Hopf bifurcation. Then the limit cycle reduces to periodic points. When the second bifurcation occurs, the resonance persists until chaos is reached.

When the first bifurcation is a pitchfork, it is followed by a Hopf bifurcation for each stable point of the pitchfork (due to the symmetry x → −x). Then a second Hopf bifurcation occurs, followed, via a frequency locking, by chaos. Despite the pitchfork bifurcation, the route is thus again one of quasi-periodicity. Notice that in this case we get two symmetric strange attractors. When gJ increases, the two attractors fuse.

For a first bifurcation of flip type, the route followed is like the one described by Bauer & Martienssen (1989). The flip bifurcation leads to an oscillatory system with two states. A first Hopf bifurcation arises, followed by a second one leading to a quasi-periodic state, followed by a frequency locking preceding chaos.

5 CONCLUSION

We have presented a type of neural network exhibiting chaotic behavior when a bifurcation parameter is increased. As in Sompolinsky's model, gJ is the control parameter of the network dynamics.
The variance of the synaptic weights being normalized, the bifurcation values are nearly independent of the connectivity K. The magnitude of dilution is not important for the behavior. The route to chaos by quasi-periodicity seems to be generic. This suggests that such high-dimensional networks behave like low-dimensional dynamical systems. It could be much simpler to control such networks than expected a priori.

From a biological point of view, we built our model to provide a tool that could be used to investigate the influence of chaotic dynamics on cognitive processes in the brain. We deliberately simplified the biological complexity in order to understand complex dynamics. We think that, if chaos plays a role in cognitive processes, it depends neither on a specific architecture nor on the exact internal modelling of the biological neuron. However, it could be interesting to introduce some biological characteristics into the model. The next step will be to study the influence of non-zero entries on the behavior of the system, leading to the modelling of learning in a chaotic network.

Acknowledgements

This research has been partly supported by the COGNISCIENCE research program of the C.N.R.S. through PRESCOT, the Toulouse network of researchers in Cognitive Sciences.

References

M. Bauer & W. Martienssen. (1989) Quasi-Periodicity Route to Chaos in Neural Networks. Europhys. Lett. 10: 427-431.
B. Doyon, B. Cessac, M. Quoy & M. Samuelides. (1993) Control of the Transition to Chaos in Neural Networks with Random Connectivity. Int. J. Bifurcation and Chaos (in press).
D. Gallez & A. Babloyantz. (1991) Predictability of human EEG: a dynamical approach. Biol. Cybern. 64: 381-392.
V.L. Girko. (1985) Circular Law. Theory Prob. Its Appl. (USSR) 29: 694-706.
C.M. Gray, P. Koenig, A.K. Engel & W. Singer.
(1989) Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338: 334-337.
M.W. Hirsch. (1989) Convergent Activation Dynamics in Continuous Time Networks. Neural Networks 2: 331-349.
S. Newhouse, D. Ruelle & F. Takens. (1978) Occurrence of Strange Axiom A Attractors Near Quasi-Periodic Flows on T^m, m ≥ 3. Commun. Math. Phys. 64: 35-40.
D. Ruelle & F. Takens. (1971) On the nature of turbulence. Commun. Math. Phys. 20: 167-192.
C.A. Skarda & W.J. Freeman. (1987) How brains make chaos in order to make sense of the world. Behav. Brain Sci. 10: 161-195.
H.J. Sommers, A. Crisanti, H. Sompolinsky & Y. Stein. (1988) Spectrum of large random asymmetric matrices. Phys. Rev. Lett. 60: 1895-1898.
H. Sompolinsky, A. Crisanti & H.J. Sommers. (1988) Chaos in random neural networks. Phys. Rev. Lett. 61: 259-262.
Recognition-based Segmentation of On-line Hand-printed Words

M. Schenkel*, H. Weissman, I. Guyon, C. Nohl, D. Henderson
AT&T Bell Laboratories, Holmdel, NJ 07733
* Swiss Federal Institute of Technology, CH-8092 Zurich

Abstract

This paper reports on the performance of two methods for recognition-based segmentation of strings of on-line hand-printed capital Latin characters. The input strings consist of a time-ordered sequence of X-Y coordinates, punctuated by pen-lifts. The methods were designed to work in "run-on mode" where there is no constraint on the spacing between characters. While both methods use a neural network recognition engine and a graph-algorithmic post-processor, their approaches to segmentation are quite different. The first method, which we call INSEG (for input segmentation), uses a combination of heuristics to identify particular pen-lifts as tentative segmentation points. The second method, which we call OUTSEG (for output segmentation), relies on the empirically trained recognition engine for both recognizing characters and identifying relevant segmentation points.

1 INTRODUCTION

We address the problem of writer-independent recognition of hand-printed words from an 80,000-word English dictionary. Several levels of difficulty in the recognition of hand-printed words are illustrated in figure 1. The examples were extracted from our databases (table 1). Except in the cases of boxed or clearly spaced characters, segmenting characters independently of the recognition process yields poor recognition performance. This has motivated us to explore recognition-based segmentation techniques.

Table 1: Databases used for training and testing. DB2 contains words one to five letters long, but only four- and five-letter words are constrained to be legal English words. DB3 contains legal English words of any length from an 80,000-word dictionary.

database | nature                  | data pad used | training set size | test set size | approx. # of donors
DB1      | boxed uppercase letters | AT&T          | 9000              | 1500          | 250
DB2      | short words             | Grid          | 8000              | 1000          | 400
DB3      | English words           | Wacom         | —                 | 600           | 25

Figure 1: Examples of styles that can be found in our databases: (a) boxed (DB1); (b) spaced (DB2); (c) pen-lifts, (d) connected (DB2 and DB3). The line thickness or darkness is alternated at each pen-lift.

The basic principle of recognition-based segmentation is to present the recognizer with many "tentative characters". The recognition scores ultimately determine the string segmentation. We have investigated two different recognition-based segmentation methods which differ in their definition of the tentative characters, but have very similar recognition engines.

The data collection device provides pen trajectory information as a sequence of (x, y) coordinates at regular time intervals (10-15 ms). We use a preprocessing technique which preserves this information by keeping a finely sampled sequence of feature vectors along the pen trajectory (Guyon et al. 1991, Weissman et al. 1992). The recognizer is a Time Delay Neural Network (TDNN) (Lang and Hinton 1988, Waibel et al. 1989, Guyon et al. 1991). There is one output per class, in this case 26 outputs, providing a score for each of the capital letters of the Latin alphabet.

The critical step in the segmentation process is the post-processing, which disentangles the various word hypotheses using the character recognition scores provided by the TDNN. For this purpose, we use conventional dynamic programming algorithms. In addition, we use a dictionary that checks the solution and returns a list of similar legal words. The best word hypothesis, subject to this list, is again chosen by dynamic programming.

Recognition-based segmentation relies on the recognizer to give low confidence scores to wrong tentative characters corresponding to a segmentation mistake. Recognizers trained only on valid characters usually perform poorly on such a task. We use "segmentation-driven training" techniques, which allow wrong tentative characters, produced by the segmentation engine itself, to be trained as negative examples. This additional training has reduced our error rates by more than a factor of two.

In section 2 we describe the INSEG method, which uses tentative characters delineated by heuristic segmentation points. It is expected to be most appropriate for hand-printed capital letters, since nearly all writers separate these letters by pen-lifts. This method was inspired by a similar technique used for Optical Character Recognition (OCR) (Burges et al. 1992). In section 3 we present an alternative method, OUTSEG, which expects the recognition engine to learn empirically (learning by examples) both to recognize characters and to identify relevant segmentation points. This second method bears similarities with the OCR methods proposed by Matan et al. (1991) or Keeler et al. (1991). In section 4 we compare the two methods and present experimental results.

2 SEGMENTATION IN INPUT SPACE

Figure 2 shows the different steps of the INSEG process. Module 1 is used to define "tentative characters" delineated by "tentative cuts" (spaces or pen-lifts). The tentative characters are then handed to module 2, which performs the preprocessing and the scoring of the characters with a TDNN. The recognition results are then gathered into an interpretation graph. In module 3 the best path through that graph is found with the Viterbi algorithm.

Figure 2: Processing steps of the INSEG method.

In figure 3 we show a simplified representation of an interpretation graph built by our system. Each tentative character (denoted {i, j}) has a double index: the tentative cut i at the character starting point and the tentative cut j at the character end point. We denote by X{i, j} the node associated with the score of letter X for the tentative character {i, j}. A path through the graph starts at a node X{0, ·} and ends at a node Y{·, m}, where 0 is the word starting point and m the last pen-lift. In between, only transitions of the kind X{·, i} → Y{i, ·} are allowed, to prevent character overlapping.

To avoid searching through too complex a graph, we need to perform some pruning. The spatial relationship between strokes is used to discard unlikely tentative cuts. For instance, strokes with a large horizontal overlap are bundled. The remaining tentative characters are then grouped in different ways to form alternative tentative characters. Tentative characters separated by a large horizontal spatial interval are never considered for grouping.

Figure 3: Graph obtained with the input segmentation method. The grey shading in each box indicates the recognition scores (the darker, the stronger the recognition score and the higher the recognition confidence).

In table 2 we present the results obtained with the TDNN recognizer used by Guyon et al. (1991), with 4 convolutional layers and 6,252 weights. Characters are preprocessed individually, which provides the network with a fixed-dimension input.

3 SEGMENTATION IN OUTPUT SPACE

In contrast with INSEG, the OUTSEG method does not rely on human-designed segmentation hints: the neural network learns both recognition and segmentation features from examples.
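The best-path search through the INSEG interpretation graph of Section 2 is ordinary dynamic programming over tentative cuts. The following is a minimal sketch with an invented interface (a dict of per-segment letter scores), not the authors' implementation:

```python
def best_segmentation(m, score):
    """Best path through an INSEG-style interpretation graph.

    m     : index of the last tentative cut (cut 0 is the word start).
    score : dict mapping (i, j) -> (letter, log_score), the best letter
            interpretation of the tentative character {i, j}.
    Returns (total_log_score, recognized_string).
    """
    NEG = float('-inf')
    best = [(NEG, '')] * (m + 1)    # best[j] = best-scoring path ending at cut j
    best[0] = (0.0, '')
    for j in range(1, m + 1):
        for i in range(j):
            if (i, j) not in score:
                continue            # segment {i, j} was pruned
            letter, s = score[(i, j)]
            cand = best[i][0] + s
            if cand > best[j][0]:
                best[j] = (cand, best[i][1] + letter)
    return best[m]
```

For instance, with four cuts and segment scores that favor the single-letter readings 'R', 'E', 'E', 'F' over coarser groupings, the search returns the word "REEF" together with the summed log-score of its segments.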
Tentative characters are produced simply: a window is swept over the input sequence in small steps, and at each step the content of the window is taken to be a tentative character. Successive tentative characters usually overlap considerably.

Figure 4: TDNN outputs of the OUTSEG system. The grey curve indicates the best path through the graph, using duration modeling. The word "LOOP" was correctly recognized in spite of the ligatures, which prevent segmentation on the basis of pen-lifts.

In figure 4 we show the outputs of our TDNN recognizer when the word "LOOP" is processed. The main matrix is a simplified representation of our interpretation graph. Tentative character numbers i (i ∈ {1, 2, ..., m}) run along the time direction. Each column contains the scores of all possible interpretations X (X ∈ {A, B, C, ..., Z, nil}) of a given tentative character. The bottom line is the nil interpretation score, which approximates the probability that the present input is not a character (a meaningless character):

P(nil{i} | input) = 1 − (P(A{i} | input) + P(B{i} | input) + ... + P(Z{i} | input))

The connections between nodes reflect a model of character durations. A simple way of enforcing duration is to allow only the following transitions: X{i} → X{i+1}, nil{i} → nil{i+1}, X{i} → nil{i+1}, nil{i} → X{i+1}, where X stands for a given letter. A character interpretation can be followed by the same interpretation, but cannot be followed immediately by another character interpretation: the two must be separated by nil. This permits distinguishing between letter duration and letter repetition (such as the double "O" in our example). The best path in the graph is found by the Viterbi algorithm. In fact, this simple pattern of connections corresponds to a Markov model of duration, with exponential decay. We implemented a slightly fancier model, which allows the generation of any duration distribution (Weissman et al. 1992), to help prevent character omission or insertion. In our experiments, we selected two Poisson distributions to model the character and nil-class durations respectively.

We use a TDNN recognizer with 3 layers and 10,817 weights. The sequence of recognition scores is obtained by sweeping the neural network over the input. Because of the convolutional structure of the TDNN, there are many identical computations between two successive calls of the recognizer, and only about one sixth of the network connections have to be re-evaluated for each new tentative character. As a consequence, although the OUTSEG system processes many more tentative characters than the INSEG system does, the overall computation time is about the same.

4 COMPARISON OF RESULTS AND CONCLUSIONS

Table 2: Comparison of the performance of the two segmentation methods using a TDNN recognizer.

        | Error without dictionary | Error with dictionary
on DB2  | % char. | % word         | % char. | % word
INSEG   | 9       | 18             | 8.5     | 15
OUTSEG  | 10      | 21             | 8       | 17
on DB3  | % char. | % word         | % char. | % word
INSEG   | 8       | 33             | 5       | 13
OUTSEG  | 11      | 48             | 7       | 21

We summarize in table 2 the results obtained with our two segmentation methods. To complement the results obtained with database DB2, we used (without retraining) database DB3 as a control, containing words of any length from the English dictionary. In our current versions, INSEG performs better than OUTSEG. The OUTSEG method can handle connected letters (such as in the example of the word "LOOP" in figure 4), while the INSEG method, which relies on pen-lifts, cannot. However, we found that very few people in the data we collected failed to separate their characters by pen-lifts.
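The duration-constrained Viterbi search of Section 3 can be sketched as a small decoder. This is a hedged illustration with an invented score-matrix interface (log-scores per tentative character and class, nil last), not the authors' code; only the four transition types listed above are allowed, and each maximal non-nil run of the best path is read out as one letter.

```python
import numpy as np

LETTERS = [chr(ord('A') + i) for i in range(26)]
NIL = 26  # index of the nil ("not a character") class

def outseg_viterbi(scores):
    """Viterbi decoding of an OUTSEG-style score matrix.

    scores[t, c] : assumed log-score of class c for tentative character t
                   (columns 0..25 = A..Z, column 26 = nil).
    Returns the recognized word as a string.
    """
    m, n_classes = scores.shape
    delta = np.full((m, n_classes), -np.inf)
    back = np.zeros((m, n_classes), dtype=int)
    delta[0] = scores[0]
    for t in range(1, m):
        for c in range(n_classes):
            # nil may follow any class; a letter may follow only itself or nil
            preds = range(n_classes) if c == NIL else (c, NIL)
            best = max(preds, key=lambda p: delta[t - 1, p])
            delta[t, c] = delta[t - 1, best] + scores[t, c]
            back[t, c] = best
    # backtrack the best path
    c = int(np.argmax(delta[-1]))
    path = [c]
    for t in range(m - 1, 0, -1):
        c = back[t, c]
        path.append(c)
    path.reverse()
    # collapse: each maximal non-nil run is one recognized letter,
    # so a repeated letter must be separated by nil to count twice
    word, prev = [], NIL
    for c in path:
        if c != NIL and c != prev:
            word.append(LETTERS[c])
        prev = c
    return ''.join(word)
```

With a toy score matrix that favors L, L, nil, O, nil, O, nil, P across eight tentative characters, the decoder recovers "LOOP", with the nil steps separating the two O's exactly as in the duration model described above.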
On the other hand, an advantage of the INSEG method is that it can easily be used with recognizers other than the TDNN, whereas the OUTSEG method relies heavily on the convolutional structure of the TDNN for computational efficiency. For comparison, we substituted two other neural network recognizers for the TDNN. These networks use alternative input representations. The OCR-net was designed for Optical Character Recognition (Le Cun et al. 1989) and uses pixel-map inputs. Its first layer performs local line-orientation detection. The orientation-net has an architecture similar to that of the OCR-net, but its first layer is removed and local line-orientation information, directly extracted from the pen trajectory, is transmitted to the second layer (Weissbuch and Le Cun 1992). Without a dictionary, the OCR-net has an error rate more than twice that of the TDNN, but the orientation-net performs similarly. With a dictionary, the orientation-net has a 25% lower error rate than the TDNN. This improvement is attributed to better second and third best recognition choices, which facilitates dictionary use.

Our best results to date (Table 3) were obtained with the INSEG method, using two recognizers combined with a voting scheme: the TDNN and the orientation-net. For comparison purposes we mention the results obtained by a commercial recognizer on the same data. One should note that our dictionary is the same as the one from which the data was drawn, and is probably larger than the one used by the commercial system. Our results are substantially better than those of the commercial system. On an absolute scale they are quite satisfactory if we take into account that the test data was not cleaned at all and that more than 20% of the errors have been identified as patterns written in cursive, misspelled or totally illegible.
We expect the OUTSEG method to work best for cursive handwriting, which does not exhibit trivial segmentation hints, but we do not yet have any direct evidence to support this expectation. Rumelhart (1992) had success with a version of OUTSEG. Work is in progress to extend the capabilities of our systems to cursive writing.

Table 3: Performance of our best system. For comparison, we mention in parentheses the performance obtained by a commercial recognizer on the same data. The performance of the commercial system with dictionary (marked with a *) is penalized because DB2 and DB3 include words not contained in its dictionary.

       | Error without dictionary | Error with dictionary
Method | % char. | % word         | % char. | % word
DB2    | 7 (18)  | 13 (29)        | 7 (17*) | 10 (32*)
DB3    | 6 (20)  | 23 (61)        | 5 (18*) | 11 (49*)

Acknowledgments

We wish to thank the entire Neural Network group at Bell Labs Holmdel for their supportive discussions. Helpful suggestions on the editing of this paper by L. Jackel and B. Boser are gratefully acknowledged. We are grateful to Anne Weissbuch, Yann Le Cun and Jan Ben for giving us their neural networks to try on our INSEG method. We are indebted to Howard Page for providing comparison figures with the commercial recognizer. The experiments were performed with the neural network simulators of B. Boser, Y. Le Cun and L. Bottou, whom we thank for their help and advice.

References

I. Guyon, P. Albrecht, Y. Le Cun, J. Denker and W. Hubbard. Design of a neural network character recognizer for a touch terminal. Pattern Recognition, 24(2), 1991.
H. Weissman, M. Schenkel, I. Guyon, C. Nohl and D. Henderson. Recognition-based Segmentation of On-line Run-on Handprinted Words: Input vs. Output Segmentation. Submitted to Pattern Recognition, October 1992.
K.J. Lang and G.E. Hinton. A time delay neural network architecture for speech recognition. Technical Report CMU-CS-88-152, Carnegie Mellon University, Pittsburgh, PA, 1988.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:328-339, March 1989.
C.J.C. Burges, O. Matan, Y. Le Cun, J. Denker, L.D. Jackel, C.E. Stenard, C.R. Nohl and J.I. Ben. Shortest path segmentation: A method for training neural networks to recognize character strings. In IJCNN'92, volume 3, Baltimore, 1992. IEEE.
O. Matan, C.J.C. Burges, Y. Le Cun and J. Denker. Multi-digit recognition using a Space Displacement Neural Network. In J.E. Moody et al., editor, Advances in Neural Information Processing Systems 4, Denver, 1992. Morgan Kaufmann.
J. Keeler, D.E. Rumelhart and W-K. Leow. Integrated segmentation and recognition of hand-printed numerals. In R. Lippmann et al., editor, Advances in Neural Information Processing Systems 3, pages 557-563, Denver, 1991. Morgan Kaufmann.
Y. Le Cun, L.D. Jackel, B. Boser, J.S. Denker, H.P. Graf, I. Guyon, D. Henderson, R.E. Howard and W. Hubbard. Handwritten digit recognition: Application of neural network chips and automatic learning. IEEE Communications Magazine, pages 41-46, November 1989.
A. Weissbuch and Y. Le Cun. Private communication, 1992.
D. Rumelhart et al. Integrated segmentation and recognition of cursive handwriting. In Third NEC Symposium on Computational Learning and Cognition, Princeton, New Jersey, 1992 (to appear).
Some Estimates of Necessary Number of Connections and Hidden Units for Feed-Forward Networks

Adam Kowalczyk
Telecom Australia, Research Laboratories
770 Blackburn Road, Clayton, Vic. 3168, Australia (a.kowalczyk@trl.oz.au)

Abstract

Feed-forward networks with fixed hidden units (FHU-networks) are compared against the category of remaining feed-forward networks, those with variable hidden units (VHU-networks). Two broad classes of tasks on a finite domain X ⊂ R^n are considered: approximation of every function from an open subset of functions on X, and representation of every dichotomy of X. For the first task it is found that both network categories require the same minimal number of synaptic weights. For the second task, and X in general position, it is shown that VHU-networks with threshold-logic hidden units can have approximately 1/n times as many hidden units as any FHU-network must have.

1 Introduction

A good candidate artificial neural network for short-term memory needs to be: (i) easy to train, (ii) able to support a broad range of tasks in a domain of interest and (iii) simple to implement. The class of feed-forward networks with fixed hidden units (HU) and adjustable synaptic weights at the top layer only (shortly: FHU-networks) is an obvious candidate to consider in this context. This class covers a wide range of networks considered in the past, including the classical perceptron, higher-order networks and non-linear associative mappings. A number of training algorithms were specifically devoted to this category (e.g. perceptron, madaline or pseudoinverse), and a number of hardware solutions were investigated for their implementation (e.g. optical devices [8]).
Leaving aside the non-trivial tasks of constructing the domain-specific HU for an FHU-network [9] and then of optimally loading specific tasks, in this paper we concentrate on assessing the ability of such structures to support a wide range of tasks, in comparison to more complex feed-forward networks with multiple layers of variable HU (VHU-networks). More precisely, on a finite domain X two benchmark tests are considered: approximation of every function from an open subset of functions on X, and representation of every dichotomy of X. Some necessary and sufficient estimates of the minimal necessary numbers of adaptable synaptic weights and of HU are obtained and then combined with some sufficient estimates in [10] to provide the final results. In the Appendix we present an outline of some of our recent results on the extension of the classical Function-Counting Theorem [2] to the multilayer case and discuss some of its implications for assessing network capacities.

2 Statement of the main results

In this paper X will denote a subset of R^n of N points. Of interest to us are multilayer feed-forward networks (shortly: FF-networks) F_w : X → R, depending on the k-tuple w = (w_1, ..., w_k) ∈ R^k of adjustable synaptic weights to be selected on loading the desired tasks into the network. The FF-networks are split into the two categories defined above:

• FHU-networks, with fixed hidden units φ_i : X → R:

F_w(x) := Σ_{i=1}^{k} w_i φ_i(x)   (x ∈ X),   (1)

• VHU-networks, with variable hidden units ψ_{w'',i} : X → R depending on some adjustable synaptic weights w'', where w = (w', w'') ∈ R^{k'} × R^{k''} = R^k:

F_w(x) := Σ_{i=1}^{k'} w'_i ψ_{w'',i}(x)   (x ∈ X).   (2)

Of special interest are situations where the hidden units are built from one or more layers of artificial neurons, which, for simplicity, can be thought of as devices computing simple functions of the form

(y_1, ..., y_m) ∈ R^m ↦ σ(w_{i1} y_1 + w_{i2} y_2 + ... + w_{im} y_m),

where σ : R → R is a non-decreasing squashing function.
Two particular examples of squashing functions are (i) the infinitely differentiable sigmoid function t ↦ (1 + exp(−t))^{−1} and (ii) the step function θ(t), defined as 1 for t ≥ 0 and 0 otherwise. In the latter case the artificial neuron is called a threshold logic neuron (ThL-neuron). In the formulation of results below all biases are treated as synaptic weights attached to links from special constant HUs (= 1).

2.1 Function approximation

The space R^X of all real functions on X has the natural structure of a vector space isomorphic with R^N. We introduce the euclidean norm ||f|| := (Σ_{x∈X} f²(x))^{1/2} on R^X and denote by U ⊂ R^X an open, non-empty subset. We say that the FF-network F_w can approximate a function f on X with accuracy ε > 0 if ||f − F_w|| < ε for a weight vector w ∈ R^k.

Theorem 1 Assume the FF-network F_w is continuously differentiable with respect to the adjustable synaptic weights w ∈ R^k and k < N. If it can approximate any function in U with any accuracy, then for almost every function f ∈ U, if lim_{i→∞} ||F_{w(i)} − f|| = 0, where w(1), w(2), ... ∈ R^k, then lim_{i→∞} ||w(i)|| = ∞.

In the above theorem "almost every" means with the exception of a subset of Lebesgue measure 0 on R^X ≅ R^N. The proof of this theorem relies on Sard's theorem from differential topology (c.f. Section 3). Note that the above theorem is applicable in particular to the popular "back-propagation" network, which is typically built from artificial neurons with the continuously differentiable sigmoid squashing function. The proof of the following theorem uses a different approach, since the network is not differentiably dependent on its synaptic weights to HUs. This theorem applies in particular to the classical FF-networks built from ThL-neurons.
Theorem 2 A FF-network F_w must have ≥ N HU in the top hidden layer if all units of this layer have a finite number of activation levels and the network can approximate any function in U with any accuracy.

The above theorems mean in particular that if we want to achieve an arbitrarily good approximation of any function in U := {f : X → R ; |f(x)| < A}, where A > 0, and we can use one of the VHU-networks of the above type with synaptic weights of restricted magnitude only, then we have to have at least N such weights. However, that many weights are necessary and sufficient to achieve the same with a FHU-network (1) if the functions φ_i are linearly independent on X. So variable hidden units give no advantage in this case.

2.2 Implementation of dichotomies

We say that the FF-network F_w can implement a dichotomy (X_−, X_+) of X if there exists w ∈ R^k such that F_w < 0 on X_− and F_w > 0 on X_+.

Proposition 3 A FHU-network F_w can implement every dichotomy of X if and only if it can exactly compute every function on X. In such a case it must have ≥ N HU in the top hidden layer.

The non-trivial part of the above proposition is the necessity in its first part, i.e. that being able to implement every dichotomy of X requires N (fixed) hidden units. In Section 3.3 we obtain this proposition from a stronger result. Note that the above proposition can be deduced from the classical Function-Counting Theorem [2] and also that an equivalent result is proved directly in [3, Theorem 7.2].

We say that the points of a subdomain X ⊂ R^n are in general position if every hyperplane in R^n contains no more than n points of X. Note that the points of every finite subdomain of R^n are in general position after a sufficiently small perturbation and that the property of being in general position is preserved under sufficiently small perturbations.
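The general-position condition can be tested directly: no hyperplane of R^n may contain more than n of the points, equivalently every subset of n + 1 points must be affinely independent. A brute-force sketch (the function name and the rank test are ours):

```python
from itertools import combinations
import numpy as np

def in_general_position(points):
    """True iff no hyperplane of R^n contains more than n of the given points,
    i.e. every (n+1)-point subset is affinely independent."""
    pts = np.asarray(points, dtype=float)
    n = pts.shape[1]
    for subset in combinations(range(len(pts)), n + 1):
        diffs = pts[list(subset[1:])] - pts[subset[0]]   # directions spanning the affine hull
        if np.linalg.matrix_rank(diffs) < n:             # all n+1 points lie on one hyperplane
            return False
    return True

print(in_general_position([[0, 0], [1, 0], [0, 1], [1, 1]]))  # True: no 3 points collinear
print(in_general_position([[0, 0], [1, 0], [2, 0], [0, 1]]))  # False: 3 collinear points
```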
Note also that the points of a typical N-point subdomain X ⊂ R^n are in general position, where "typical" means with the exception of subdomains X corresponding to a certain subset of Lebesgue measure 0 in the space (R^n)^N of all N-tuples of points from R^n.

It is proved in [10] that for a subdomain X ⊂ R^n of N points in general position a VHU-network having ⌈(N − 1)/n⌉ (adjustable) ThL-neurons in the first (and only) hidden layer can implement every dichotomy of X, where ⌈t⌉ denotes the smallest integer ≥ t. Furthermore, examples are given showing that the above bound is tight. (Note that this paper corrects and gives rigorous proofs of some early results in [1, Lemma 1 and Theorem 1] and also improves [6, Theorem 4].) Combining these results with Proposition 3 we get the following result.

Theorem 4 Assume that all N points of X ⊂ R^n are in general position. In the class of all FF-networks which can implement every dichotomy of X there exists a VHU-network with threshold logic HU having a fraction 1/n + O(1/N) of the number of HU that any FHU-network in this class must have. There are examples of X in general position of any even cardinality N > 0 showing that this estimate is tight.

3 Proofs

Below we identify functions f : X → R with the N-tuples of their values at the N points of X (ordered in a unique manner). Under this identification the FF-network F_w can be regarded as a transformation

    w ∈ R^k ↦ F_w ∈ R^N   (3)

with the range R(F_w) := {F_w ; w ∈ R^k} ⊂ R^N.

3.1 Proof of Theorem 1

In this case the transformation (3) is continuously differentiable. Every value of it is singular since k < N, thus, according to Sard's Theorem [5], R(F_w) ⊂ R^N has Lebesgue measure 0. It is enough to show now that if

    f ∈ U − R(F_w)   (4)

and

    lim_{i→∞} ||F_{w(i)} − f|| = 0 and ||w(i)|| < M,   (5)

for some M > 0, then a contradiction follows.
Actually, from (5) it follows that f belongs to the topological closure cl(R_M) of R_M := {F_w ; w ∈ R^k & ||w|| ≤ M}. However, R_M is a compact set as a continuous image of the closed ball {w ∈ R^k ; ||w|| ≤ M}, so cl(R_M) = R_M. Consequently f ∈ R_M ⊂ R(F_w), which contradicts (4). Q.E.D.

3.2 Proof of Theorem 2

We consider an FF-network for which there exists a finite set V ⊂ R of s points such that ψ_{w'',i}(x) ∈ V for every w'' ∈ R^{k''}, 1 ≤ i ≤ k' and x ∈ X. It is sufficient to show that the set R(F_w) of all functions computable by F_w is not dense in U if k' < N. Actually, we can write R(F_w) as a union

    R(F_w) = ∪_{w'' ∈ R^{k''}} L_{w''},   (6)

where each L_{w''} := {Σ_{i=1}^{k'} w'_i ψ_{w'',i} ; w'_1, ..., w'_{k'} ∈ R} ⊂ R^N is a linear subspace of dimension ≤ k', uniquely determined by the vectors ψ_{w'',i} ∈ V^N ⊂ R^N, i = 1, ..., k'. However, there is a finite number (≤ s^N) of different vectors in V^N, thus there is only a finite number (≤ s^{Nk'}) of different linear subspaces in the family {L_{w''} ; w'' ∈ R^{k''}}. Hence, as k' < N, the union (6) is a closed nowhere dense subset of R^N as a finite union of proper linear subspaces (each of which is a closed and nowhere dense subset). Q.E.D.

3.3 Proof of Proposition 3

We state first a stronger result. We say that a set L of functions on X is convex if for any couple of functions φ_1, φ_2 in L and any α > 0, β > 0, α + β = 1, the function αφ_1 + βφ_2 also belongs to L.

Proposition 5 Let L be a convex set of functions on X = {x_1, x_2, ..., x_N} implementing every dichotomy of X. Then for each i ∈ {1, 2, ..., N} there exists a function φ_i ∈ L such that φ_i(x_i) ≠ 0 and φ_i(x_j) = 0 for 1 ≤ j ≤ N, j ≠ i.

Proof. We define a transformation SGN : R^X → {−1, 0, +1}^N,

    SGN(φ) := (sgn(φ(x_1)), ..., sgn(φ(x_N))),

where sgn(ξ) := −1 if ξ < 0, sgn(0) := 0 and sgn(ξ) := +1 if ξ > 0. We denote by W_k the subset of {−1, 0, +1}^N of all points q = (q_1, ..., q_N) such that Σ_{i=1}^{N} |q_i| = k, for k = 0, 1, ..., N.
We show first that convexity of L implies, for k ∈ {1, 2, ..., N}, the following:

    W_k ⊂ SGN(L) ⇒ W_{k−1} ⊂ SGN(L).   (7)

For the proof assume W_k ⊂ SGN(L) and q = (q_1, ..., q_N) ∈ {−1, 0, +1}^N is such that Σ_{i=1}^{N} |q_i| = k − 1. We need to show that there exists φ ∈ L such that

    SGN(φ) = q.   (8)

The vector q has at least one vanishing entry, say, without loss of generality, q_1 = 0. Let φ_+ and φ_− be two functions in L such that

    SGN(φ_+) = q_+ := (+1, q_2, ..., q_N),
    SGN(φ_−) = q_− := (−1, q_2, ..., q_N).

Such φ_+ and φ_− exist since q_+, q_− ∈ W_k. The function (φ_+ + φ_−)/2 belongs to L as a convex combination of two functions from L and satisfies (8).

Now note that the assumptions of the proposition imply that W_N ⊂ SGN(L). Applying (7) repeatedly we find that W_1 ⊂ SGN(L), which means that for every index i, 1 ≤ i ≤ N, there exists a function φ_i ∈ L with all entries vanishing but the i-th one. Q.E.D.

Now let us see how Proposition 3 follows from the above result. Sufficiency is obvious. For the necessity we observe that the family of functions F_w on X is convex, being a linear space in the case of a FHU-network (1). Now if this network can implement every dichotomy of X, then each function φ_i as in Proposition 5 equals F_{w_i} for some w_i ∈ R^k. Thus R(F_w) = R^N, since those functions make a basis of R^X ≅ R^N. Q.E.D.

4 Discussion of results

Theorem 1 combined with observations in [4] allows us to make the following contribution to the recent controversy on the relevance/irrelevance of Kolmogorov's theorem on representation of continuous functions I^n → R, I := [0, 1] (c.f. [4, 7]), since I^n contains subsets of any cardinality. The FF-networks for approximations of continuous functions on I^n of rising accuracy have to be complex, at least in one of the following ways:

• involve adjustment of a diverging number of synaptic weights and hidden units, or
• require adjustment of synaptic weights of diverging magnitude, or
• involve selection of "pathological" squashing functions.
Thus one can only shift complexity from one kind to another, but not eliminate it completely. Although on theoretical grounds one can easily argue the virtues and simplicity of one kind of complexity over the other, for a genuine hardware implementation any of them poses an equally serious obstacle.

For the classes of FF-networks and benchmark tests considered, the networks with multiple hidden layers have no decisive superiority over the simple structures with fixed hidden units unless the dimensionality of the input space is significant.

5 Appendix: Capacity and Function-Counting Theorem

The above results can be viewed as a step towards estimation of the capacity of networks to memorise dichotomies. We intend to elaborate this subject further now and outline some of our recent results on this matter. A more detailed presentation will be available in future publications.

The capacity of a network in the sense of Cover [2] (Cover's capacity) is defined as the maximal N such that for a randomly selected subset X ⊂ R^n of N points with probability 1 the network can implement 1/2 of all dichotomies of X. For the linear perceptron

    F_w(x) := Σ_{i=1}^{n} w_i x_i   (x ∈ X),   (9)

where w ∈ R^n is the vector of adjustable synaptic weights, the capacity is 2n, and it is 2k for a FHU-network (1) with suitably chosen hidden units φ_1, ..., φ_k. These results are based on the so-called Function-Counting Theorem, proved for the linear perceptron in the sixties (c.f. [2]). Extension of this result to the multilayer case is still an open problem (c.f. T. Cover's talk at NIPS'92). However, we have recently obtained the following partial result in this direction.
Theorem 6 Given a continuous probability density on R^n, for a randomly selected subset X ⊂ R^n of N points the FF-network having the first hidden layer built from h ThL-neurons can implement

    C(N, nh) := 2 Σ_{i=0}^{nh} (N−1 choose i)   (10)

dichotomies of X with a non-zero probability. Such a network can be constructed using nh variable synaptic weights between the input and hidden layer only.

For h = 1 this theorem reduces to its classical form, for which the phrase "with non-zero probability" can be strengthened to "with probability 1" [2]. The proof of the theorem develops Sakurai's idea of utilising the Vandermonde determinant to show the following property of the curve c(t) := (t, t², ..., t^{n−1}), t > 0:

    (*) for any subset X of N points x_1 = c(t_1), ..., x_N = c(t_N), t_1 < t_2 < ... < t_N, any hyperplane in R^n can intersect no more than n different segments [x_i, x_{i+1}] of c.

The first step of the proof is to observe that the property (*) itself implies that the count (10) holds for such a set X. The second and crucial step consists in showing that for a sufficiently small ε > 0, for any selection of points x'_1, ..., x'_N ∈ R^n with ||x'_i − x_i|| < ε for each i, there exists a curve c' passing through these points and also satisfying the property (*).

Theorem 6 implies that in the class of multilayer FF-networks having the first hidden layer built from ThL-neurons only, the single hidden layer networks are the most efficient, since the higher layers have no influence on the number of implemented dichotomies (at least for the class of domains X ⊂ R^n considered). Note that by virtue of (10) and the classical argument of Cover [2], for the class of domains X as in Theorem 6 the capacity of the network considered is 2nh. Thus the following estimates hold.
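The count (10) is easy to evaluate numerically. The sketch below (our code) also checks two landmark values: every one of the 2^N dichotomies is counted while N ≤ nh + 1, and exactly half are counted at N = 2nh + 2, consistent with a capacity of roughly 2nh:

```python
from math import comb

def cover_count(N, nh):
    """Number of dichotomies of N points implementable per (10):
    C(N, nh) = 2 * sum_{i=0}^{nh} binom(N-1, i)."""
    return 2 * sum(comb(N - 1, i) for i in range(nh + 1))

# While N <= nh + 1 every one of the 2^N dichotomies is counted:
print(cover_count(5, 10) == 2 ** 5)     # True
# At N = 2*nh + 2 exactly half of all 2^N dichotomies are counted:
print(cover_count(10, 4) / 2 ** 10)     # 0.5
```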
Corollary 7 In the class of FF-networks with a fixed number h of hidden units, the ratio of the maximal capacity per hidden unit achievable by a FHU-network to the maximal capacity per hidden unit achievable by VHU-networks having ThL-neurons in the first hidden layer only is 2h/2nh = 1/n. The analogous ratio for capacities per variable synaptic weight (in the class of FF-networks with a fixed number s of variable synaptic weights) is ≤ 2s/2s = 1.

Acknowledgement. I thank A. Sakurai of Hitachi Ltd. for helpful comments leading to the improvement of the results of the paper. The permission of the Director, Telecom Australia Research Laboratories, to publish this material is gratefully acknowledged.

References

[1] E. Baum. On the capabilities of multilayer perceptrons. Journal of Complexity, 4:193-215, 1988.
[2] T.M. Cover. Geometrical and statistical properties of linear inequalities with applications to pattern recognition. IEEE Trans. Elec. Comp., EC-14:326-334, 1965.
[3] R.M. Dudley. Central limit theorems for empirical measures. Ann. Probability, 6:899-929, 1978.
[4] F. Girosi and T. Poggio. Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation, 1:465-469, 1989.
[5] M. Golubitsky and V. Guillemin. Stable Mappings and Their Singularities. Springer-Verlag, New York, 1973.
[6] S. Huang and Y. Huang. Bounds on the number of hidden neurons in multilayer perceptrons. IEEE Transactions on Neural Networks, 2:47-55, 1991.
[7] V. Kurkova. Kolmogorov's theorem is relevant. Neural Computation, 1, 1992.
[8] D. Psaltis, C.H. Park, and J. Hong. Higher order associative memories and their optical implementations. Neural Networks, 1:149-163, 1988.
[9] N. Redding, A. Kowalczyk, and T. Downs. Higher order separability and minimal hidden-unit fan-in. In T. Kohonen et al., editors, Artificial Neural Networks, volume 1, pages 25-30. Elsevier, 1991.
[10] A. Sakurai. n-h-1 networks store no less n·h+1 examples, but sometimes no more.
In Proceedings of IJCNN'92, pages III-936-III-941. IEEE, June 1992.

PART VIII
SPEECH AND SIGNAL PROCESSING
The Power of Approximating: a Comparison of Activation Functions

Bhaskar DasGupta
Department of Computer Science
University of Minnesota
Minneapolis, MN 55455-0159
email: dasgupta@cs.umn.edu

Georg Schnitger
Department of Computer Science
The Pennsylvania State University
University Park, PA 16802
email: georg@cs.psu.edu

Abstract

We compare activation functions in terms of the approximation power of their feedforward nets. We consider the case of analog as well as boolean input.

1 Introduction

We consider efficient approximations of a given multivariate function f : [−1, 1]^m → R by feedforward neural networks. We first introduce the notion of a feedforward net. Let Γ be a class of real-valued functions, where each function is defined on some subset of R. A Γ-net C is an unbounded fan-in circuit whose edges and vertices are labeled by real numbers. The real number assigned to an edge (resp. vertex) is called its weight (resp. its threshold). Moreover, to each vertex v an activation function γ_v ∈ Γ is assigned. Finally, we assume that C has a single sink w.

The net C computes a function f_C : [−1, 1]^m → R as follows. The components of the input vector x = (x_1, ..., x_m) ∈ [−1, 1]^m are assigned to the sources of C. Let v_1, ..., v_n be the immediate predecessors of a vertex v. The input for v is then s_v(x) = Σ_{i=1}^{n} w_i y_i − t_v, where w_i is the weight of the edge (v_i, v), t_v is the threshold of v and y_i is the value assigned to v_i. If v is not the sink, then we assign the value γ_v(s_v(x)) to v. Otherwise we assign s_v(x) to v. Then f_C = s_w is the function computed by C, where w is the unique sink of C.

A great deal of work has been done showing that nets of two layers can approximate (in various norms) large function classes (including continuous functions) arbitrarily well (Arai, 1989; Carrol and Dickinson, 1989; Cybenko, 1989; Funahashi, 1989; Gallant and White, 1988; Hornik et al.,
1989; Irie and Miyake, 1988; Lapades and Farber, 1987; Nielson, 1989; Poggio and Girosi, 1989; Wei et al., 1991). Various activation functions have been used, among others the cosine squasher, the standard sigmoid, radial basis functions, generalized radial basis functions, polynomials, trigonometric polynomials and binary thresholds. Still, as we will see, these functions differ greatly in terms of their approximation power when we only consider efficient nets, i.e. nets with few layers and few vertices.

Our goal is to compare activation functions in terms of efficiency and quality of approximation. We measure efficiency by the size of the net (i.e. the number of vertices, not counting input units) and by its number of layers. Another resource of interest is the Lipschitz-bound of the net, which is a measure of its numerical stability. We say that net C has Lipschitz-bound L if all weights and thresholds of C are bounded in absolute value by L and for each vertex v of C and for all inputs x, y ∈ [−1, 1]^m,

    |γ_v(s_v(x)) − γ_v(s_v(y))| ≤ L · |s_v(x) − s_v(y)|.

(Thus we do not demand that the activation function γ_v has Lipschitz-bound L, but only that γ_v has Lipschitz-bound L for the inputs it receives.) We measure the quality of an approximation of function f by function f_C by the Chebychev norm, i.e. by the maximum distance between f and f_C over the input domain [−1, 1]^m.

Let Γ be a class of activation functions. We are particularly interested in the following two questions.

• Given a function f : [−1, 1]^m → R, how well can we approximate f by a Γ-net with d layers, size s, and Lipschitz-bound L? Thus, we are particularly interested in the behavior of the approximation error e(s, d) as a function of size and number of layers. This set-up allows us to investigate how much the approximation error decreases with increased size and/or number of layers.
• Given two classes of activation functions Γ_1 and Γ_2, when do Γ_1-nets and Γ_2-nets have essentially the same "approximation power" with respect to some error function e(s, d)?

We first formalize the notion of "essentially the same approximation power".

Definition 1.1 Let e : N² → R⁺ be a function, and let Γ_1 and Γ_2 be classes of activation functions.

(a) We say that Γ_1 simulates Γ_2 with respect to e if and only if there is a constant k such that for all functions f : [−1, 1]^m → R with Lipschitz-bound 1/e(s, d): if f can be approximated by a Γ_2-net with d layers, size s, Lipschitz-bound 2^s and approximation error e(s, d), then f can also be approximated with error e(s, d) by a Γ_1-net with k(d + 1) layers, size (s + 1)^k and Lipschitz-bound 2^{s^k}.

(b) We say that Γ_1 and Γ_2 are equivalent with respect to e if and only if Γ_2 simulates Γ_1 with respect to e and Γ_1 simulates Γ_2 with respect to e.

In other words, when comparing the approximation power of activation functions, we allow size to increase polynomially and the number of layers to increase by a constant factor, but we insist on at least the same approximation error. Observe that we have linked the approximation error e(s, d) and the Lipschitz-bound of the function to be approximated. The reason is that approximations of functions with high Lipschitz-bound "tend" to have an inversely proportional approximation error. Moreover, observe that the Lipschitz-bounds of the involved nets are allowed to be exponential in the size of the net. We will see in section 3 that for some activation functions far smaller Lipschitz-bounds suffice.

Below we discuss our results. In section 2 we consider the case of tight approximations, i.e. e(s, d) = 2^{−s}. Then in section 3 the more relaxed error model e(s, d) = s^{−d} is discussed.
In section 4 we consider the computation of boolean functions and show that sigmoidal nets can be far more efficient than threshold-nets.

2 Equivalence of Activation Functions for Error e(s, d) = 2^{−s}

We obtain the following result.

Theorem 2.1 The following activation functions are equivalent with respect to error e(s, d) = 2^{−s}:
• the standard sigmoid σ(x) = 1/(1 + exp(−x)),
• any rational function which is not a polynomial,
• any root x^α, provided α is not a natural number,
• the logarithm (for any base b > 1),
• the gaussian e^{−x²},
• the radial basis functions (1 + x²)^α, α < 1, α ≠ 0.

Notable exceptions from the list of functions equivalent to the standard sigmoid are polynomials, trigonometric polynomials and splines. We do obtain an equivalence to the standard sigmoid by allowing splines of degree s as activation functions for nets of size s. (We will always assume that splines are continuous with a single knot only.)

Theorem 2.2 Assume that e(s, d) = 2^{−s}. Then splines (of degree s for nets of size s) and the standard sigmoid are equivalent with respect to e(s, d).

Remark 2.1 (a) Of course, the equivalence of spline-nets and {σ}-nets also holds for binary input. Since threshold-nets can add and multiply m m-bit numbers with constantly many layers and size polynomial in m (Reif, 1987), threshold-nets can efficiently approximate polynomials and splines. Thus, we obtain that {σ}-nets with d layers, size s and Lipschitz-bound L can be simulated by nets of binary thresholds. The number of layers of the simulating threshold-net will increase by a constant factor and its size will increase by a polynomial in (s + n) log(L), where n is the number of input bits. (The inclusion of n accounts for the additional increase in size when approximately computing a weighted sum by a threshold-net.)

(b) If we allow size to increase by a polynomial in s + n, then threshold-nets and {σ}-nets are actually equivalent with respect to error bound 2^{−s}.
This follows, since a threshold function can easily be implemented by a sigmoidal gate (Maass et al., 1991). Thus, if we allow size to increase polynomially (in s + n) and the number of layers to increase by a constant factor, then {σ}-nets with weights that are at most exponential (in s + n) can be simulated by {σ}-nets with weights of size polynomial in s.

{σ}-nets and threshold-nets (respectively nets of linear splines) are not equivalent for analog input. The same applies to polynomials, even if we allow polynomials of degree s as activation functions for nets of size s:

Theorem 2.3 (a) Let sq(x) = x². If a net of linear splines (with d layers and size s) approximates sq(x) over the interval [−1, 1], then its approximation error will be at least s^{−O(d)}.
(b) Let abs(x) = |x|. If a polynomial net with d layers and size s approximates abs(x) over the interval [−1, 1], then the approximation error will be at least s^{−O(d)}.

We will see in Theorem 2.5 that the standard sigmoid (and hence any activation function listed in Theorem 2.1) is capable of approximating sq(x) and abs(x) with error at most 2^{−s} by constant-layer nets of size polynomial in s. Hence the standard sigmoid is properly stronger than linear splines and polynomials.

Finally, we show that sine and the standard sigmoid are inequivalent with respect to error 2^{−s}.

Theorem 2.4 The function sin(λx) can be approximated by a {σ}-net C_λ with d layers, size s = λ^{O(1/d)} and error at most s^{−O(d)}. On the other hand, every {σ}-net with d layers which approximates sin(λx) with error at most 1/2 has to have size at least λ^{O(1/d)}.

Below we sketch the proof of Theorem 2.1. The proof itself will actually be more instructive than the statement of Theorem 2.1. In particular, we will obtain a general criterion that allows us to decide whether a given activation function (or class of activation functions) has at least the approximation power of splines.
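The flavour of this extra power can be seen numerically: a second difference of σ around a point a with σ''(a) ≠ 0 recovers sq(x) = x² from just two sigmoid units and a constant. This is an illustrative sketch of the smoothness argument, not the construction used in the proofs; the centre a = 1 and the step ε are our choices:

```python
import math

def sigma(t):
    return 1.0 / (1.0 + math.exp(-t))

def approx_square(x, a=1.0, eps=1e-3):
    """Approximate x^2 via a second difference of the sigmoid:
    [sigma(a+eps*x) - 2*sigma(a) + sigma(a-eps*x)] / (eps^2 * sigma''(a)) -> x^2."""
    s = sigma(a)
    sigma2 = s * (1 - s) * (1 - 2 * s)   # second derivative of sigma at a (nonzero for a != 0)
    return (sigma(a + eps * x) - 2 * s + sigma(a - eps * x)) / (eps ** 2 * sigma2)

for x in (-1.0, 0.5, 2.0):
    print(x, approx_square(x))           # close to x**2
```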
2.1 Activation Functions with the Approximation Power of Splines

Obviously, any activation function which can efficiently approximate polynomials and the binary threshold will be able to efficiently approximate splines. This follows since a spline can be approximated by the sum p + t · q with polynomials p and q and a binary threshold t. (Observe that we can approximate a product once we can approximately square: (x + y)²/2 − x²/2 − y²/2 = x · y.) Firstly, we will see that any sufficiently smooth activation function is capable of approximating polynomials.

Definition 2.1 Let γ : R → R be a function. We call γ suitable if and only if there exist real numbers α, β (α > 0) and an integer k such that
(a) γ can be represented by the power series Σ_{i=0}^{∞} a_i (x − β)^i for all x ∈ [−α, α]. The coefficients are rationals of the form a_i = p_i/q_i with |p_i|, |q_i| ≤ 2^{ki} (for i > 1).
(b) For each i ≥ 2 there exists j with i ≤ j ≤ i^k and a_j ≠ 0.

Proposition 2.1 Assume that γ is suitable with parameter k. Then, over the domain [−D, D], any degree n polynomial p can be approximated with error ε by a {γ}-net C_p. C_p has 2 layers and size O(n^{2k}); its weights are rational numbers whose numerators and denominators are bounded in absolute value in terms of p_max, (2 + D)^{poly(n)} and the size of γ's derivatives on [−α, α]. Here we have assumed that the coefficients of p are rational numbers with numerator and denominator bounded in absolute value by p_max.

Thus, in order to have at least the approximation power of splines, a suitable activation function has to be able to approximate the binary threshold. This is achieved by the following function class.

Definition 2.2 Let Γ be a class of activation functions and let g : [1, ∞) → R be a function.
(a) We say that g is fast converging if and only if |g(x) − g(x + ε)| = O(ε/x²) for x ≥ 1, ε ≥ 0,

    0 < ∫_1^∞ g(u²) du < ∞ and |∫_{2N}^∞ g(u²) du| = O(1/N) for all N ≥ 1.

(b)
We say that Γ is powerful if and only if at least one function in Γ is suitable and there is a fast converging function g which can be approximated for all s ≥ 1 (over the domain [−2^s, 2^s]) with error 2^{−s} by a Γ-net with a constant number of layers, size polynomial in s and Lipschitz-bound 2^s.

Fast convergence can be checked easily for differentiable functions by applying the mean value theorem. Examples are x^{−α} for α ≥ 1, exp(−x) and σ(−x). Moreover, it is not difficult to show that each function mentioned in Theorem 2.1 is powerful. Hence Theorem 2.1 is a corollary of

Theorem 2.5 Assume that Γ is powerful.
(a) Γ simulates splines with respect to error e(s, d) = 2^{−s}.
(b) Assume that each activation function in Γ can be approximated (over the domain [−2^s, 2^s]) with error 2^{−s} by a spline-net N_s of size s and with constantly many layers. Then Γ is equivalent to splines.

Remark 2.2 Obviously, 1/x is powerful. Therefore Theorem 2.5 implies that constant-layer {1/x}-nets of size s approximate abs(x) = |x| with error 2^{−s}. The degree of the resulting rational function will be polynomial in s. Thus Theorem 2.5 generalizes Newman's approximation of the absolute value by rational functions (Newman, 1964).

3 Equivalence of Activation Functions for Error s^{−d}

The lower bounds in the previous section suggest that the relaxed error bound e(s, d) = s^{−d} is of importance. Indeed, it will turn out that many non-trivial smooth activation functions lead to nets that simulate {σ}-nets, provided the number of input units is counted when determining the size of the net. (We will see in section 4 that linear splines and the standard sigmoid are not equivalent if the number of inputs is not counted.) The concept of threshold-property will be crucial for us.

Definition 3.1 Let Γ be a collection of activation functions. We say that Γ has the threshold-property if there is a constant c such that the following two properties are satisfied for all m > 1.
(a) For each γ ∈ Γ there is a threshold-net T_{γ,m} with c layers and size (s + m)^c which computes the binary representation of γ'(x), where |γ(x) − γ'(x)| ≤ 2^{−m}. The input x of T_{γ,m} is given in binary and consists of 2m + 1 bits; m bits describe the integral part of x, m bits describe its fractional part and one bit indicates the sign. s + m specifies the required number of output bits, i.e. s = ⌈log₂(sup{γ(x) : −2^{m+1} < x < 2^{m+1}})⌉.
(b) There is a Γ-net with c layers, size m^c and Lipschitz-bound 2^{m^c} which approximates the binary threshold over D = [−1, 1] − [−1/m, 1/m] with error 1/m.

We can now state the main result of this section.

Theorem 3.1 Assume that e(s, d) = s^{−d}.
(a) Let Γ be a class of activation functions and assume that Γ has the threshold-property. Then σ and Γ are equivalent with respect to e. Moreover, {σ}-nets only require weights and thresholds of absolute value at most s. (Observe that Γ-nets are allowed to have weights as large as 2^s!)
(b) If Γ and σ are equivalent with respect to error 2^{−s}, then Γ and σ are equivalent with respect to error s^{−d}.
(c) Additionally, the following classes are equivalent to {σ}-nets with respect to e. (We assume throughout that all coefficients, weights and thresholds are bounded by 2^s for nets of size s.)
• polynomial nets (i.e. polynomials of degree s appear as activation functions for nets of size s),
• {γ}-nets, where γ is a suitable function and γ satisfies part (a) of Definition 3.1 (this includes the sine function),
• nets of linear splines.
4 Computing boolean functions As we have seen in Remark 2.1, the binary threshold (respectively linear splines) gains considerable power when computing boolean functions as compared to approximating analog functions. But sigmoidal nets will be far more powerful when only the number of neurons is counted and the number of input units is disregarded. For instance, sigmoidal nets are far more efficient for "squaring", i.e when computing: Mn = {(x, y): x E {O, l}n, y E {O, l}n:l and [xJ2;;::: [y]} (where [z] = L Zi). i Theorem 4.1 A threshold-net computing Mn must have size at least n(logn). But Mn can be computed by a (1'-net with constantly many gates. The previously best known separation of threshold-nets and sigmoidal-nets is due to Maass, Schnitger and Sontag (Maass et al., 1991). But their result only applies to threshold-nets with at most two layers; our result holds without any restriction on the number oflayers. Theorem 4.1 can be generalized to separate threshold-nets and 3-times differentiable activation functions, but this smoothness requirement is more severe than the one assumed in (Maass et al., 1991). 5 Conclusions Our results show that good approximation performance (for error 2-") hinges on two properties, namely efficient approximation of polynomials and efficient approximation of the binary threshold. These two properties are shared by a quite large class of activation functions; i.e. powerful functions. Since (non-polynomial) rational functions are powerful, we were able to generalize Newman's approximation of I x I by rational functions. On the other hand, for a good approximation performance relative to the relaxed error bound s-d it is already sufficient to efficiently approximate the binary threshold. Consequently, the class of equivalent activation functions grows considerably (but only if the number of input units is counted). 
The standard sigmoid is distinguished in that its approximation performance scales with the error bound: if a larger error is allowed, then smaller weights suffice. Moreover, the standard sigmoid is actually more powerful than the binary threshold even when computing boolean functions. In particular, the standard sigmoid is able to take advantage of its (non-trivial) smoothness to allow for more efficient nets.
622 DasGupta and Schnitger
Acknowledgements. We wish to thank R. Paturi, K. Y. Siu and V. P. Roychowdhury for helpful discussions. Special thanks go to W. Maass for suggesting this research, to E. Sontag for continued encouragement and very valuable advice and to J. Lambert for his never-ending patience. The second author gratefully acknowledges partial support by NSF-CCR-9114545.
References
Arai, W. (1989), Mapping abilities of three-layer networks, in "Proc. of the International Joint Conference on Neural Networks", pp. 419-423.
Carrol, S. M., and Dickinson, B. W. (1989), Construction of neural nets using the Radon transform, in "Proc. of the International Joint Conference on Neural Networks", pp. 607-611.
Cybenko, G. (1989), Approximation by superposition of a sigmoidal function, Mathematics of Control, Signals, and Systems, 2, pp. 303-314.
Funahashi, K. (1989), On the approximate realization of continuous mappings by neural networks, Neural Networks, 2, pp. 183-192.
Gallant, A. R., and White, H. (1988), There exists a neural network that does not make avoidable mistakes, in "Proc. of the International Joint Conference on Neural Networks", pp. 657-664.
Hornik, K., Stinchcombe, M., and White, H. (1989), Multilayer feedforward networks are universal approximators, Neural Networks, 2, pp. 359-366.
Irie, B., and Miyake, S. (1988), Capabilities of the three-layered perceptrons, in "Proc. of the International Joint Conference on Neural Networks", pp. 641-648.
Lapedes, A., and Farber, R.
(1987), How neural nets work, in "Advances in Neural Information Processing Systems", pp. 442-456.
Maass, W., Schnitger, G., and Sontag, E. (1991), On the computational power of sigmoid versus boolean threshold circuits, in "Proc. of the 32nd Annual Symp. on Foundations of Computer Science", pp. 767-776.
Newman, D. J. (1964), Rational approximation to |x|, Michigan Math. Journal, 11, pp. 11-14.
Hecht-Nielsen, R. (1989), Theory of backpropagation neural networks, in "Proc. of the International Joint Conference on Neural Networks", pp. 593-611.
Poggio, T., and Girosi, F. (1989), A theory of networks for approximation and learning, Artificial Intelligence Memorandum, no. 1140.
Reif, J. H. (1987), On threshold circuits and polynomial computation, in "Proceedings of the 2nd Annual Structure in Complexity Theory", pp. 118-123.
Wei, Z., Yinglin, Y., and Qing, J. (1991), Approximation property of multi-layer neural networks (MLNN) and its application in nonlinear simulation, in "Proc. of the International Joint Conference on Neural Networks", pp. 171-176.
1992
117
593
Neural Network Model Selection Using Asymptotic Jackknife Estimator and Cross-Validation Method
Yong Liu
Department of Physics and Institute for Brain and Neural Systems
Box 1843, Brown University, Providence, RI, 02912
Abstract
Two theorems and a lemma are presented about the use of the jackknife estimator and the cross-validation method for model selection. Theorem 1 gives the asymptotic form of the jackknife estimator. Combined with the model selection criterion, this asymptotic form can be used to obtain the fit of a model. The model selection criterion we used is the negative of the average predictive likelihood, the choice of which is based on the idea of the cross-validation method. Lemma 1 provides a formula for further exploration of the asymptotics of the model selection criterion. Theorem 2 gives an asymptotic form of the model selection criterion for the regression case, when the parameters optimization criterion has a penalty term. Theorem 2 also proves the asymptotic equivalence of Moody's model selection criterion (Moody, 1992) and the cross-validation method, when the distance measure between response y and regression function takes the form of a squared difference.
1 INTRODUCTION
Selecting a model for a specified problem is the key to generalization based on the training data set. In the context of neural networks, this corresponds to selecting an architecture. There has been a substantial amount of work in model selection (Lindley, 1968; Mallows, 1973; Akaike, 1973; Stone, 1977; Atkinson, 1978; Schwartz, 1978; Zellner, 1984; MacKay, 1991; Moody, 1992; etc.). In Moody's paper (Moody, 1992), the author generalized the Akaike Information Criterion (AIC) (Akaike, 1973) in the regression case and introduced the term effective number of parameters.
It is thus of great interest to see what the link between this criterion and the cross-validation method (Stone, 1974) is and what we can gain from it, given the fact that AIC is asymptotically equivalent to the cross-validation method (Stone, 1977).
In the method of cross-validation (Stone, 1974), a data set with one data point deleted from the original training data set is used to estimate the parameters of a model by optimizing a parameters optimization criterion. The optimal parameters thus obtained are called the jackknife estimator (Miller, 1974). Then the predictive likelihood of the deleted data point is calculated, based on the estimated parameters. This is repeated for each data point in the original training data set. The fit of the model, or the model selection criterion, is chosen as the negative of the average of these predictive likelihoods. However, the computational cost of estimating parameters for the different data point deletions is expensive. In section 2, we obtain an asymptotic formula (Theorem 1) for the jackknife estimator based on optimizing a parameters optimization criterion with one data point deleted from the training data set. This somewhat relieves the computational cost mentioned above. This asymptotic formula can be used to obtain the model selection criterion by plugging it into the criterion. Furthermore, in section 3, we obtain the asymptotic form of the model selection criterion for the general case (Lemma 1) and for the special case when the parameters optimization criterion has a penalty term (Theorem 2). We also prove the equivalence of Moody's model selection criterion (Moody, 1992) and the cross-validation method (Theorem 2). Only sketchy proofs are given when these theorems and lemma are introduced. The details of the proofs are given in section 4.
2 APPROXIMATE JACKKNIFE ESTIMATOR
Let the parameters optimization criterion, with data set w = {(x_i, y_i), i = 1, ...
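The leave-one-out procedure described above is easy to state in code. The sketch below is my own illustration (not from the paper): it fits the mean of a unit-variance Gaussian, refits with each point deleted in turn (the jackknife estimator), and averages the negative predictive log-likelihoods.

```python
import math

def fit_mean(data):
    # MLE of theta for the model y ~ N(theta, 1)
    return sum(data) / len(data)

def log_lik(y, theta):
    # log f(y | theta) for a unit-variance Gaussian
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (y - theta) ** 2

def loo_cv_criterion(data):
    # Negative average predictive log-likelihood:
    # for each i, refit on the data with point i deleted
    # (the jackknife estimator), then score the deleted point.
    n = len(data)
    total = 0.0
    for i in range(n):
        theta_minus_i = fit_mean(data[:i] + data[i + 1:])
        total += log_lik(data[i], theta_minus_i)
    return -total / n
```

Brute-force refitting like this costs one full fit per data point, which is exactly the expense the paper's asymptotic formula is meant to relieve.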
, n} and parameters θ, be C_w(θ), and let w_{-i} denote the data set with the i-th data point deleted from w. If we denote θ̂ and θ̂_{-i} as the optimal parameters for the criteria C_w(θ) and C_{w_{-i}}(θ), respectively, ∇_θ as the derivative with respect to θ and superscript t as transpose, we have the following theorem about the relationship between θ̂ and θ̂_{-i}.

Theorem 1 If the criterion function C_w(θ) is an infinite-order differentiable function and its derivatives are bounded around θ̂, the estimator θ̂_{-i} (also called the jackknife estimator (Miller, 1974)) can be approximated as

θ̂_{-i} − θ̂ ≈ (∇_θ∇_θ^t C_w(θ̂) − ∇_θ∇_θ^t C_i(θ̂))^{-1} ∇_θ C_i(θ̂)   (1)

in which C_i(θ) = C_w(θ) − C_{w_{-i}}(θ).

Proof. Use the Taylor expansion of the equation ∇_θ C_{w_{-i}}(θ̂_{-i}) = 0 around θ̂. Ignore terms higher than the second order. □

Example 1: Using the generalized maximum likelihood method from Bayesian analysis¹ (Berger, 1985), if π(θ) is the prior on the parameters and the observations are mutually independent, with distributions modeled as y|x ~ f(y|x, θ), the parameters optimization criterion is

C_w(θ) = Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ) + log π(θ).   (2)

Thus C_i(θ) = log f(y_i|x_i, θ). If we ignore the influence of the deleted data point in the denominator of equation 1, we have

θ̂_{-i} − θ̂ ≈ (∇_θ∇_θ^t C_w(θ̂))^{-1} ∇_θ log f(y_i|x_i, θ̂).   (3)

Example 2: In the special case of Example 1, with noninformative prior π(θ) = 1, the criterion is the ordinary log-likelihood function, thus

θ̂_{-i} − θ̂ ≈ [ Σ_{(x_j,y_j)∈w} ∇_θ∇_θ^t log f(y_j|x_j, θ̂) ]^{-1} ∇_θ log f(y_i|x_i, θ̂).   (4)

3 CROSS-VALIDATION METHOD AND MODEL SELECTION CRITERION

Hereafter we use the negative of the average predictive likelihood, or

T_m(w) = −(1/n) Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ̂_{-i})   (5)

as the model selection criterion, in which n is the size of the training data set w, m ∈ M denotes the parametric probability model f(y|x, θ), and M is the set of all the models in consideration.
It is well known that T_m(w) is an unbiased estimator of r(θ₀, θ̂(·)), the risk of using the model m and estimator θ̂ when the true parameters are θ₀ and the training data set is w (Stone, 1974; Efron and Gong, 1983; etc.), i.e.,

r(θ₀, θ̂(·)) = E{T_m(w)} = E{−log f(y|x, θ̂(w))} = E{ −(1/k) Σ_{(x_j,y_j)∈w_n} log f(y_j|x_j, θ̂(w)) }   (6)

in which w_n = {(x_j, y_j), j = 1, ..., k} is the test data set, and θ̂(·) is an implicit function of the training data set w: it is the estimator we decide to use after we have observed the training data set w. The expectation above is taken over the randomness of w, x, y and w_n. The optimal model will be the one that minimizes this criterion. This procedure of using θ̂_{-i} and T_m(w) to obtain an estimate of the risk is often called the cross-validation method (Stone, 1974; Efron and Gong, 1983).

Remark: After we have obtained θ̂ for a model, we can use equation 1 to calculate θ̂_{-i} for each i, and put the resulting θ̂_{-i} into equation 5 to get the fit of the model; thus we will be able to compare different models m ∈ M.

¹ Strictly speaking, it is a method to find the posterior mode.

Lemma 1 If the probability model f(y|x, θ), as a function of θ, is differentiable up to infinite order and its derivatives are bounded around θ̂, the approximation to the model selection criterion, equation 5, can be written as

T_m(w) ≈ −(1/n) Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ̂) − (1/n) Σ_{(x_i,y_i)∈w} ∇_θ^t log f(y_i|x_i, θ̂)(θ̂_{-i} − θ̂)   (7)

Proof. Ignoring the terms higher than the second order of the Taylor expansion of log f(y_j|x_j, θ̂_{-i}) around θ̂ will yield the result. □

Example 2 (continued): Using equation 4, we have, for the model selection criterion,

T_m(w) ≈ −(1/n) Σ_{(x_i,y_i)∈w} log f(y_i|x_i, θ̂) − (1/n) Σ_{(x_i,y_i)∈w} ∇_θ^t log f(y_i|x_i, θ̂) A^{-1} ∇_θ log f(y_i|x_i, θ̂)   (8)

in which A = Σ_{(x_j,y_j)∈w} ∇_θ∇_θ^t log f(y_j|x_j, θ̂). If the model f(y|x, θ) is the true one, the second term is asymptotically equal to p/n, where p is the number of parameters in the model.
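This AIC connection can be checked numerically for the unit-variance Gaussian-mean model (p = 1 parameter): the cross-validation criterion and the per-point AIC quantity −(1/n)·log-likelihood + p/n agree up to a gap that shrinks like O(1/n²). The script below is my own numeric illustration under that model, not code from the paper.

```python
import math, random

def neg_avg_pred_loglik(data):
    # Leave-one-out criterion T_m(w) for y ~ N(theta, 1).
    n = len(data)
    s = sum(data)
    total = 0.0
    for y in data:
        theta_minus = (s - y) / (n - 1)  # exact jackknife estimator
        total += -0.5 * math.log(2.0 * math.pi) - 0.5 * (y - theta_minus) ** 2
    return -total / n

def aic_per_point(data):
    # -(1/n) * log-likelihood at the MLE, plus p/n with p = 1 parameter.
    n = len(data)
    theta = sum(data) / n
    loglik = sum(-0.5 * math.log(2.0 * math.pi) - 0.5 * (y - theta) ** 2
                 for y in data)
    return -loglik / n + 1.0 / n

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(200)]
gap = abs(neg_avg_pred_loglik(sample) - aic_per_point(sample))
```

For this model the gap can be worked out in closed form; with n = 200 it is on the order of a few thousandths, and it vanishes as n grows, which is the asymptotic equivalence of Stone (1977) in miniature.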
So, up to the factor 1/n, the model selection criterion is the negative log-likelihood plus the number of parameters of the model. This is the well-known Akaike Information Criterion (AIC) (Akaike, 1973).

Example 1 (continued): Consider the probability model

f(y|x, θ) = β exp( −(1/(2σ²)) E(y, η_θ(x)) )   (9)

in which β is a normalization factor and E(y, η_θ(x)) is a distance measure between y and the regression function η_θ(x). E(·), as a function of θ, is assumed differentiable. Denoting²

U(θ, λ, w) = Σ_{(x_i,y_i)∈w} E(y_i, η_θ(x_i)) − 2σ² log π(θ|λ),

we have the following theorem.

Theorem 2 For the model specified in equation 9 and the parameters optimization criterion specified in equation 2 (Example 1), under regularity conditions, the unbiased estimator of

E{ (1/k) Σ_{(x_j,y_j)∈w_n} E(y_j, η_θ̂(x_j)) }   (10)

asymptotically equals

(1/n) Σ_{(x_i,y_i)∈w} E(y_i, η_θ̂(x_i)) + (1/n) Σ_{(x_i,y_i)∈w} ∇_θ^t E(y_i, η_θ̂(x_i)) {∇_θ∇_θ^t U(θ̂, λ, w)}^{-1} ∇_θ E(y_i, η_θ̂(x_i)).   (11)

² For example, π(θ|λ) = N(0, σ²/λ); this corresponds to U(θ, λ, w) = Σ_{(x_i,y_i)∈w} E(y_i, η_θ(x_i)) + λθ² + const(λ, σ²).

For the case when E(y, η_θ(x)) = (y − η_θ(x))², we get, for the asymptotic equivalent of equation 11,

Ê(θ̂, w) + (2σ²/n) Σ_{(x_i,y_i)∈w} ∂η_θ̂(x_i)/∂y_i   (12)

in which w = {(x_i, y_i), i = 1, ..., n} is the training data set, w_n = {(x_j, y_j), j = 1, ..., k} is the test data set, and Ê(θ, w) = (1/n) Σ_{(x_i,y_i)∈w} E(y_i, η_θ(x_i)).

Proof. This result comes directly from Theorem 1 and Lemma 1. Some asymptotic techniques have to be used. □

Remark: The result in equation 12 was first proposed by Moody (Moody, 1992). The effective number of parameters formulated in his paper corresponds to the summation in equation 12. Since the result in this theorem comes directly from the asymptotics of the cross-validation method and the jackknife estimator, it gives the equivalence proof between Moody's model selection criterion and the cross-validation method.
The detailed proof of this theorem, presented in section 4, is in spirit the same as the one presented in Stone's paper about the proof of the asymptotic equivalence of AIC and the cross-validation method (Stone, 1977).

4 DETAILED PROOF OF LEMMAS AND THEOREMS

In order to prove Theorem 1, Lemma 1 and Theorem 2, we first present three auxiliary lemmas.

Lemma 2 For random variable sequences z_n and y_n, if lim_{n→∞} z_n = z and lim_{n→∞} y_n = z, then z_n and y_n are asymptotically equivalent.

Proof. This comes from the definition of asymptotic equivalence, because asymptotically the two random variables will behave the same as the random variable z. □

Lemma 3 Consider the summation Σ_i h(x_i, y_i) g(x_i, z). If E(h(x, y)|x, z) is a constant c independent of x, y, z, then the summation is asymptotically equivalent to c Σ_i g(x_i, z).

Proof. According to the law of large numbers,

lim_{n→∞} (1/n) Σ_i h(x_i, y_i) g(x_i, z) = E(h(x, y) g(x, z)) = E( E(h(x, y)|x, z) g(x, z) ) = c E(g(x, z))

which is the same as the limit of (c/n) Σ_i g(x_i, z). Using Lemma 2, we get the result of this lemma. □

Lemma 4 If η_θ(·) and g(θ, ·) are differentiable up to the second order, and the model y = η_θ(x) + ε with ε ~ N(0, σ²) is the true model, the second derivative with respect to θ of

U(θ, λ, w) = Σ_{i=1}^n E(y_i, η_θ(x_i)) + g(θ, λ),

evaluated at the minimum of U, i.e. θ̂, is asymptotically independent of the random variables {y_i, i = 1, ..., n}.

Proof. Explicit calculation of the second derivative of U with respect to θ, evaluated at θ̂, gives

∇_θ∇_θ^t U(θ̂, λ, w) = 2 Σ_{i=1}^n ∇_θη_θ̂(x_i) ∇_θ^t η_θ̂(x_i) − 2 Σ_{i=1}^n (y_i − η_θ̂(x_i)) ∇_θ∇_θ^t η_θ̂(x_i) + ∇_θ∇_θ^t g(θ̂, λ).

As n approaches infinity, the effect of the second term in U vanishes, θ̂ approaches the mean squared error estimator with an infinite amount of data points, or the true parameters θ₀ of the model (consistency of the MSE estimator (Jennrich, 1969)), and E(y − η_θ̂(x)) approaches E(y − η_{θ₀}(x)), which is 0. According to Lemma 2 and Lemma 3, the second term of this second derivative vanishes asymptotically.
So as n approaches infinity, the second derivative of U with respect to θ, evaluated at θ̂, approaches

∇_θ∇_θ^t U(θ₀, λ, w) = 2 Σ_{i=1}^n ∇_θη_{θ₀}(x_i) ∇_θ^t η_{θ₀}(x_i) + ∇_θ∇_θ^t g(θ₀, λ)

which is independent of {y_i, i = 1, ..., n}. According to Lemma 2, the result of this lemma is readily obtained. □

Now we give the detailed proofs of Theorem 1, Lemma 1 and Theorem 2.

Proof of Theorem 1. The jackknife estimator θ̂_{-i} satisfies ∇_θ C_{w_{-i}}(θ̂_{-i}) = 0. The Taylor expansion of the left side of this equation around θ̂ gives

∇_θ C_{w_{-i}}(θ̂) + ∇_θ∇_θ^t C_{w_{-i}}(θ̂)(θ̂_{-i} − θ̂) + O(|θ̂_{-i} − θ̂|²) = 0.

According to the definitions of θ̂ and θ̂_{-i}, their difference is a small quantity. Also, because of the boundedness of the derivatives, we can ignore higher order terms in the Taylor expansion and get the approximation

θ̂_{-i} − θ̂ ≈ −(∇_θ∇_θ^t C_{w_{-i}}(θ̂))^{-1} ∇_θ C_{w_{-i}}(θ̂).

Since θ̂ satisfies ∇_θ C_w(θ̂) = 0, we can rewrite this equation and obtain equation 1. □

Proof of Lemma 1. The Taylor expansion of log f(y_i|x_i, θ̂_{-i}) around θ̂ is

log f(y_i|x_i, θ̂_{-i}) = log f(y_i|x_i, θ̂) + ∇_θ^t log f(y_i|x_i, θ̂)(θ̂_{-i} − θ̂) + O(|θ̂_{-i} − θ̂|²).

Putting this into equation 5 and ignoring higher order terms, by the same argument as that presented in the proof of Theorem 1, we readily get equation 7. □

Proof of Theorem 2. Up to an additive constant dependent only on λ and σ², the optimization criterion, or equation 2, can be rewritten as

C_w(θ) = −(1/(2σ²)) U(θ, λ, w).   (13)

Now putting equations 9 and 13 into equation 3, we get

θ̂_{-i} − θ̂ ≈ {∇_θ∇_θ^t U(θ̂, λ, w)}^{-1} ∇_θ E(y_i, η_θ̂(x_i)).   (14)

Putting equation 14 into equation 7, we get, for the model selection criterion,

T_m(w) = (1/n) Σ_{(x_i,y_i)∈w} (1/(2σ²)) E(y_i, η_θ̂(x_i)) + (1/n) Σ_{(x_i,y_i)∈w} (1/(2σ²)) ∇_θ^t E(y_i, η_θ̂(x_i)) {∇_θ∇_θ^t U(θ̂, λ, w)}^{-1} ∇_θ E(y_i, η_θ̂(x_i)).   (15)

Recall the discussion associated with equation 6; now, up to an additive constant,

E{ −(1/k) Σ_{(x_j,y_j)∈w_n} log f(y_j|x_j, θ̂) } = E{ (1/k) Σ_{(x_j,y_j)∈w_n} (1/(2σ²)) E(y_j, η_θ̂(x_j)) }.   (16)

After some simple algebra, we can obtain the unbiased estimator of equation 10. The result is equation 15 multiplied by 2σ², or equation 11. Thus we prove the first part of the theorem. Now consider the case when

E(y, η_θ(x)) = (y − η_θ(x))².   (17)

The second term of equation 11 now becomes

(1/n) Σ_{(x_i,y_i)∈w} 4(y_i − η_θ̂(x_i))² ∇_θ^t η_θ̂(x_i) {∇_θ∇_θ^t U(θ̂, λ, w)}^{-1} ∇_θ η_θ̂(x_i).   (18)

As n approaches infinity, θ̂ approaches the true parameters θ₀, ∇_θη_θ̂(x_i) approaches ∇_θη_{θ₀}(x_i), and E((y − η_θ̂(x))²) asymptotically equals σ². Using Lemma 4 and Lemma 3, we get, for the asymptotic equivalent of equation 18,

(σ²/n) Σ_{(x_i,y_i)∈w} 2∇_θ^t η_θ̂(x_i) {∇_θ∇_θ^t U(θ̂, λ, w)}^{-1} 2∇_θ η_θ̂(x_i).   (19)

If we use the notation Ê(θ, w) = (1/n) Σ_{(x_i,y_i)∈w} E(y_i, η_θ(x_i)), with E(y, η_θ(x)) of the form specified in equation 17, we can get

(∂/∂y_i) ∇_θ [n Ê(θ, w)] = −2 ∇_θ η_θ(x_i).   (20)

Combining this with equation 19 and equation 11, we can readily obtain equation 12.

5 SUMMARY

In this paper, we used asymptotics to obtain the jackknife estimator, which can be used to get the fit of a model by plugging it into the model selection criterion. Based on the idea of the cross-validation method, we used the negative of the average predictive likelihood as the model selection criterion. We also obtained the asymptotic form of the model selection criterion and proved that when the parameters optimization criterion is the mean squared error plus a penalty term, this asymptotic form is the same as the form presented by (Moody, 1992). This also served to prove the asymptotic equivalence of this criterion to the method of cross-validation.

Acknowledgements

The author thanks all the members of the Institute for Brain and Neural Systems, in particular, Professor Leon N Cooper for reading the draft of this paper, and Dr. Nathan Intrator, Michael P.
Perrone and Harel Shouval for helpful comments. This research was supported by grants from NSF, ONR and ARO.
References
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Petrov and Czaki, editors, Proceedings of the 2nd International Symposium on Information Theory, pages 267-281.
Atkinson, A. C. (1978). Posterior probabilities for choosing a regression model. Biometrika, 65:39-48.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer-Verlag.
Efron, B. and Gong, G. (1983). A leisurely look at the bootstrap, the jackknife and cross-validation. Amer. Stat., 37:36-48.
Jennrich, R. (1969). Asymptotic properties of nonlinear least squares estimators. Ann. Math. Stat., 40:633-643.
Lindley, D. V. (1968). The choice of variables in multiple regression (with discussion). J. Roy. Stat. Soc., Ser. B, 30:31-66.
MacKay, D. (1991). Bayesian methods for adaptive models. PhD thesis, California Institute of Technology.
Mallows, C. L. (1973). Some comments on Cp. Technometrics, 15:661-675.
Miller, R. G. (1974). The jackknife - a review. Biometrika, 61:1-15.
Moody, J. E. (1992). The effective number of parameters: an analysis of generalization and regularization in nonlinear learning systems. In Moody, J. E., Hanson, S. J., and Lippmann, R. P., editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann.
Schwartz, G. (1978). Estimating the dimension of a model. Ann. Stat., 6:461-464.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions (with discussion). J. Roy. Stat. Soc., Ser. B.
Stone, M. (1977). An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. J. Roy. Stat. Soc., Ser. B, 39(1):44-47.
Zellner, A. (1984). Posterior odds ratios for regression hypotheses: General considerations and some specific results. In Zellner, A., editor, Basic Issues in Econometrics, pages 275-305. University of Chicago Press.
1992
118
594
Context-Dependent Multiple Distribution Phonetic Modeling with MLPs
Michael Cohen, SRI International, Menlo Park, CA 94025
Horacio Franco, SRI International
Nelson Morgan, Intl. Computer Science Inst., Berkeley, CA 94704
David Rumelhart, Stanford University, Stanford, CA 94305
Victor Abrash, SRI International
Abstract
A number of hybrid multilayer perceptron (MLP)/hidden Markov model (HMM) speech recognition systems have been developed in recent years (Morgan and Bourlard, 1990). In this paper, we present a new MLP architecture and training algorithm which allows the modeling of context-dependent phonetic classes in a hybrid MLP/HMM framework. The new training procedure smooths MLPs trained at different degrees of context dependence in order to obtain a robust estimate of the context-dependent probabilities. Tests with the DARPA Resource Management database have shown substantial advantages of the context-dependent MLPs over earlier context-independent MLPs, and have shown substantial advantages of this hybrid approach over a pure HMM approach.
1 INTRODUCTION
Hidden Markov models are used in most current state-of-the-art continuous-speech recognition systems. A hidden Markov model (HMM) is a stochastic finite state machine with two sets of probability distributions. Associated with each state is a probability distribution over transitions to next states and a probability distribution over output symbols (often referred to as observation probabilities). When applied to continuous speech, the observation probabilities are typically used to model local speech features such as spectra, and the transition probabilities are used to model the displacement of these features through time.
HMMs of individual phonetic segments (phones) can be concatenated to model words, and word models can be concatenated, according to a grammar, to model sentences, resulting in a finite state representation of acoustic-phonetic, phonological, and syntactic structure. The HMM approach is limited by the need for strong statistical assumptions that are unlikely to be valid for speech. Previous work by Morgan and Bourlard (1990) has shown both theoretically and practically that some of these limitations can be overcome by using multilayer perceptrons (MLPs) to estimate the HMM state-dependent observation probabilities. In addition to relaxing the restrictive independence assumptions of traditional HMMs, this approach results in a reduction in the number of parameters needed for detailed phonetic modeling as a result of increased sharing of model parameters between phonetic classes. Recently, this approach was applied to the SRI-DECIPHER™ system, a state-of-the-art continuous speech recognition system (Cohen et al., 1990), using an MLP to provide estimates of context-independent posterior probabilities of phone classes, which were then converted to HMM context-independent state observation likelihoods using Bayes' rule (Renals et al., 1992). In this paper, we describe refinements of the system to model phonetic classes with a sequence of context-dependent probabilities.
Context-dependent modeling: The realization of individual phones in continuous speech is highly dependent upon phonetic context. For example, the sound of the vowel /ae/ in the words "map" and "tap" is different, due to the influence of the preceding phone. These context effects are referred to as "coarticulation". Experience with HMM technology has shown that using context-dependent phonetic models improves recognition accuracy significantly (Schwartz et al., 1985).
This is so because acoustic correlates of coarticulatory effects are explicitly modeled, producing sharper and less overlapping probability density functions for the different phone classes. Context-dependent HMMs use different probability distributions for every phone in every different relevant context. This practice causes problems that are due to the reduced amount of data available to train phones in highly specific contexts, resulting in models that are not robust and generalize poorly. The solution to this problem used by many HMM systems is to train models at many different levels of context-specificity, including biphone (conditioned only on the phone immediately to the left or right), generalized biphone (conditioned on the broad class of the phone to the left or right), triphone (conditioned on the phone to the left and the right), generalized triphone, and word-specific phone. Models conditioned by more specific contexts are linearly smoothed with more general models. The "deleted interpolation" algorithm (Jelinek and Mercer, 1980) provides linear weighting coefficients for the observation probabilities with different degrees of context dependence by maximizing the likelihood of the different models over new, unseen data. This approach cannot be directly extended to MLP-based systems because averaging the weights of two MLPs does not result in an MLP with the average performance. It would be possible to use this approach to average the probabilities that are output from different MLPs; however, since the MLP training algorithm is a discriminant procedure, it would be desirable to use a discriminant or error-based procedure to smooth the MLP probabilities together. An earlier approach to context-dependent phonetic modeling with MLPs was proposed by Bourlard et al. (1992). It is based on factoring the context-dependent likelihood and uses a set of binary inputs to the network to specify context classes.
The number of parameters and the computational load using this approach are not much greater than those for the original context-independent net. The context-dependent modeling approach we present here uses a different factoring of the desired context-dependent likelihoods, a network architecture that shares the input-to-hidden layer among the context-dependent classes to reduce the number of parameters, and a training procedure that smooths networks with different degrees of context-dependence in order to achieve robustness in probability estimates.
Multidistribution modeling: Experience with HMM-based systems has shown the importance of modeling phonetic units with a sequence of distributions rather than a single distribution. This allows the model to capture some of the dynamics of phonetic segments. The SRI-DECIPHER™ system models most phones with a sequence of three HMM states. Our initial hybrid system used only a single MLP output unit for each HMM phonetic class. This output unit supplied the probability for all the states of the associated phone model. Our initial attempt to extend the hybrid system to the modeling of a sequence of distributions for each phone involved increasing the number of output units from 69 (corresponding to phone classes) to 200 (corresponding to the states of the HMM phone models). This resulted in an increase in word-recognition error rate by almost 30%. Experiments at ICSI had a similar result (personal communication). The higher error rate seemed to be due to the discriminative nature of the MLP training algorithm. The new MLP, with 200 output units, was attempting to discriminate subphonetic classes, corresponding to HMM states. As a result, the MLP was attempting to discriminate into separate classes acoustic vectors that corresponded to the same phone and, in many cases, were very similar but were aligned with different HMM states.
There were likely to have been many cases in which almost identical acoustic training vectors were labeled as a positive example in one instance and a negative example in another for the same output class. The appropriate level at which to train discrimination is likely to be the level of the phone (or higher) rather than the subphonetic HMM-state level (to which these output units correspond). The new architecture presented here accomplishes this by training separate output layers for each of the three HMM states, resulting in a network trained to discriminate at the phone level, while allowing three distributions to model each phone. This approach is combined with the context-dependent modeling approach, described in Section 3.
2 HYBRID MLP/HMM
The SRI-DECIPHER™ system is a phone-based, speaker-independent, continuous-speech recognition system, based on semicontinuous (tied Gaussian mixture) HMMs (Cohen et al., 1990). The system extracts four features from the input speech waveform, including 12th-order mel cepstrum, log energy, and their smoothed derivatives. The front end produces the 26 coefficients for these four features for each 10 ms frame of speech. Training of the phonetic models is based on maximum-likelihood estimation using the forward-backward algorithm (Levinson et al., 1983). Recognition uses the Viterbi algorithm (Levinson et al., 1983) to find the HMM state sequence (corresponding to a sentence) with the highest probability of generating the observed acoustic sequence. The hybrid MLP/HMM DECIPHER™ system substitutes (scaled) probability estimates computed with MLPs for the tied-mixture HMM state-dependent observation probability densities. No changes are made in the topology of the HMM system. The initial hybrid system used an MLP to compute context-independent phonetic probabilities for the 69 phone classes in the DECIPHER™ system.
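The Viterbi search mentioned above can be sketched compactly. The toy version below is my own illustration in log probabilities (the DECIPHER™ implementation is of course far more elaborate): it finds the most likely state path given per-frame observation log-likelihoods, a transition matrix, and initial-state scores.

```python
def viterbi(obs_loglik, log_trans, log_init):
    # obs_loglik[t][s] : log P(observation at frame t | state s)
    # log_trans[p][s]  : log P(state s at t | state p at t-1)
    # log_init[s]      : log P(state s at t = 0)
    n_states = len(log_init)
    delta = [log_init[s] + obs_loglik[0][s] for s in range(n_states)]
    backptr = []
    for t in range(1, len(obs_loglik)):
        new_delta, ptr = [], []
        for s in range(n_states):
            best = max(range(n_states),
                       key=lambda p: delta[p] + log_trans[p][s])
            ptr.append(best)
            new_delta.append(delta[best] + log_trans[best][s]
                             + obs_loglik[t][s])
        backptr.append(ptr)
        delta = new_delta
    # backtrack from the best final state
    state = max(range(n_states), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(backptr):
        state = ptr[state]
        path.append(state)
    path.reverse()
    return path
```

Because everything is additive in the log domain, any per-frame constant (such as log P(y_t)) shifts all paths equally and cannot change the winning state sequence — which is exactly why the hybrid system may discard P(y_t).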
Separate probabilities were not computed for the different states of phone models. During the Viterbi recognition search, the probability of acoustic vector y_t given the phone class q_j, P(y_t|q_j), is required for each HMM state. Since MLPs can compute Bayesian posterior probabilities, we compute the required HMM probabilities using

P(y_t|q_j) = P(q_j|y_t) P(y_t) / P(q_j).   (1)

The factor P(q_j|y_t) is the posterior probability of phone class q_j given the input vector y at time t. This is computed by a backpropagation-trained (Rumelhart et al., 1986) three-layer feed-forward MLP. P(q_j) is the prior probability of phone class q_j and is estimated by counting class occurrences in the examples used to train the MLP. P(y_t) is common to all states for any given time frame, and can therefore be discarded in the Viterbi computation, since it will not change the optimal state sequence used to get the recognized string. The MLP has an input layer of 234 units, spanning 9 frames (with 26 coefficients for each) of cepstra, delta-cepstra, log-energy, and delta-log-energy that are normalized to have zero mean and unit variance. The hidden layer has 1000 units, and the output layer has 69 units, one for each context-independent phonetic class in the DECIPHER™ system. Both the hidden and output layers consist of sigmoidal units. The MLP is trained to estimate P(q_j|y_t), where q_j is the class associated with the middle frame of the input window. Stochastic gradient descent is used. The training signal is provided by the HMM DECIPHER™ system previously trained by the forward-backward algorithm. Forced Viterbi alignments (alignments to the known word string) for every training sentence provide phone labels, among 69 classes, for every frame of speech. The target distribution is defined as 1 for the index corresponding to the phone class label and 0 for the other classes.
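Equation (1), with P(y_t) dropped, amounts to dividing each MLP posterior by its class prior, which is most convenient in the log domain. A minimal sketch of that conversion (illustrative function names, not SRI code):

```python
import math

def scaled_log_likelihoods(posteriors, priors):
    # Convert MLP posteriors P(q_j | y_t) into scaled likelihoods
    # P(y_t | q_j) / P(y_t) = P(q_j | y_t) / P(q_j)   (Bayes' rule, eq. 1).
    # P(y_t) is common to all states at frame t, so dropping it leaves
    # the Viterbi path unchanged.
    return [math.log(p) - math.log(q) for p, q in zip(posteriors, priors)]
```

A class whose posterior exceeds its prior gets a positive log score (the frame is evidence for that phone); a class whose posterior merely equals its prior scores zero.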
A minimum relative entropy between the posterior target distribution and the posterior output distribution is used as a training criterion. With this training criterion and target distribution, assuming enough parameters in the MLP, enough training data, and that the training does not get stuck in a local minimum, the MLP outputs will approximate the posterior class probabilities P(q_j|y_t) (Morgan and Bourlard, 1990). Frame classification on an independent cross-validation set is used to control the learning rate and to decide when to stop training, as in Renals et al. (1992). The initial learning rate is kept constant until cross-validation performance increases less than 0.5%, after which it is reduced as 1/2^n until performance increases no further.
3 CONTEXT-DEPENDENCE
Our initial implementation of context-dependent MLPs models generalized biphone phonetic categories. We chose a set of eight left and eight right generalized biphone phonetic-context classes, based principally on place of articulation and acoustic characteristics. The context-dependent architecture is shown in Figure 1. A separate output layer (consisting of 69 output units corresponding to 69 context-dependent phonetic classes) is trained for each context. The context-dependent MLP can be viewed as a set of MLPs, one for each context, which have the same input-to-hidden weights. Separate sets of context-dependent output layers are used to model context effects in different states of HMM phone models, thereby combining the modeling of multiple phonetic distributions and context-dependence. During training and recognition, speech frames aligned with first states of HMM phones are associated with the appropriate left context output layer, those aligned with last states of HMM phones are associated with the appropriate right context output layer,
and middle states of three-state models are associated with the context-independent output layer. As a result, since the training proceeds (as before) as if each output layer were part of an independent net, the system learns discrimination between the different phonetic classes within an output layer (which now corresponds to a specific context and HMM-state position), but does not learn discrimination between occurrences of the same phone in different contexts or between the different states of the same HMM phone.

Figure 1: Context-Dependent MLP (234 inputs, 1000 hidden units, separate left- and right-context output layers sharing the hidden layer)

3.1 CONTEXT-DEPENDENT FACTORING

In a context-dependent HMM, every state is associated with a specific phone class and context. During the Viterbi recognition search, P(Y_t | q_j, c_k) (the probability of acoustic vector Y_t given the phone class q_j in the context class c_k) is required for each state. We compute the required HMM probabilities using

P(Y_t | q_j, c_k) = P(q_j | Y_t, c_k) P(Y_t | c_k) / P(q_j | c_k)    (2)

where P(Y_t | c_k) can be factored again as

P(Y_t | c_k) = P(c_k | Y_t) P(Y_t) / P(c_k)    (3)

The factor P(q_j | Y_t, c_k) is the posterior probability of phone class q_j given the input vector Y_t and the context class c_k. To compute this factor, we consider the conditioning on c_k in (2) as restricting the set of input vectors only to those produced in the context c_k. If M is the number of context classes, this implementation uses a set of M MLPs (all sharing the same input-to-hidden layer) similar to those used in the context-independent case, except that each MLP is trained using only input-output examples obtained from the corresponding context c_k.

Cohen, Franco, Morgan, Rumelhart, and Abrash

Every context-specific net performs a simpler classification than in the context-independent case because, within a context, the acoustics corresponding to different phones have less overlap. P(c_k | Y_t) is computed by a second MLP.
A three-layer feed-forward MLP is used which has 1000 hidden units and an output unit corresponding to each context class. P(q_j | c_k) and P(c_k) are estimated by counting over the training examples. Finally, P(Y_t) is common to all states for any given time frame, and can therefore be discarded in the Viterbi computation, since it will not change the optimal state sequence used to get the recognized string.

3.2 CONTEXT-DEPENDENT TRAINING AND SMOOTHING

We use the following method to achieve robust training of context-specific nets: An initial context-independent MLP is trained, as described in Section 2, to estimate the context-independent posterior probabilities over the N phone classes. After the context-independent training converges, the resulting weights are used to initialize the weights going to the context-specific output layers. Context-dependent training proceeds by backpropagating error only from the appropriate output layer for each training example. Otherwise, the training procedure is similar to that for the context-independent net, using stochastic gradient descent and a relative-entropy training criterion. Overall classification performance evaluated on an independent cross-validation set is used to determine the learning rate and stopping point. Only hidden-to-output weights are adjusted during context-dependent training. We can view the separate output layers as belonging to independent nets, each one trained on a non-overlapping subset of the original training data. Every context-specific net would asymptotically converge to the context-conditioned posteriors P(q_j | Y_t, c_k) given enough training data and training iterations. As a result of the initialization, the net starts by estimating P(q_j | Y_t), and from that point it follows a trajectory in weight space, incrementally moving away from the context-independent parameters as long as classification performance on the cross-validation set improves.
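The initialize-then-selectively-backpropagate scheme of Section 3.2 can be sketched in a few lines of numpy. This is an illustrative toy (layer sizes shrunk, a single gradient step, and cross-entropy standing in for the relative-entropy criterion), not the actual DECIPHER training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Shared input-to-hidden weights, frozen during context-dependent training
W_ih = rng.normal(scale=0.1, size=(234, 50))     # toy: 50 hidden units, not 1000
W_ho_ci = rng.normal(scale=0.1, size=(50, 69))   # context-independent output layer

# One output layer per context, initialized from the CI layer (Section 3.2)
n_contexts = 8
W_ho = [W_ho_ci.copy() for _ in range(n_contexts)]

def train_step(x, label, context, lr=0.1):
    """Backpropagate error only through the output layer matching the
    frame's context; W_ih and the other output layers are untouched.
    Returns the target class posterior before the update."""
    h = np.tanh(x @ W_ih)
    p = softmax(h @ W_ho[context])
    target = np.zeros(69)
    target[label] = 1.0
    # gradient of cross-entropy w.r.t. pre-softmax activations is (p - target)
    W_ho[context] -= lr * np.outer(h, p - target)
    return p[label]

x = rng.normal(size=234)
before = train_step(x, label=3, context=2)
after = softmax(np.tanh(x @ W_ih) @ W_ho[2])[3]
```

After the step, only context 2's output layer has moved away from its context-independent initialization; the others still equal `W_ho_ci`, which is what makes the smoothing toward the CI solution possible.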
As a result, the net retains useful information from the context-independent initial conditions. In this way, we perform a type of nonlinear smoothing between the pure context-independent parameters and the pure context-dependent parameters.

4 EVALUATION

Training and recognition experiments were conducted using the speaker-independent, continuous-speech, DARPA Resource Management database. The vocabulary size is 998 words. Tests were run both with a word-pair (perplexity 60) grammar and with no grammar. The training set for the HMM system and for the MLP consisted of the 3990 sentences that make up the standard DARPA speaker-independent training set for the Resource Management task. The 600 sentences making up the Resource Management February 89 and October 89 test sets were used for cross-validation during both the context-independent and context-dependent MLP training, and for tuning HMM system parameters (e.g., word transition weight).

Table 1: Percent Word Error and Parameter Count with Word-Pair Grammar

          CIMLP   CDMLP   HMM     MIXED
Feb91     5.~     4.7     3.~     3.2
Sep92a    10.9    7.6     10.1    7.7
Sep92b    9.5     6.6     7.0     5.7
# Parms   300K    1400K   5500K   6100K

Table 2: Percent Word Error with No Grammar

          CIMLP   CDMLP   HMM     MIXED
Feb91     24.7    18.4    19.3    15.9
Sep92a    31.5    27.1    29.2    25.4
Sep92b    30.9    24.9    26.6    21.5

Table 1 presents word recognition error and number of system parameters for four different versions of the system, for three different Resource Management test sets using the word-pair grammar. Table 2 presents word recognition error for the corresponding tests with no grammar (the numbers of system parameters are the same as those shown in Table 1). Comparing the context-independent MLP (CIMLP) to the context-dependent MLP (CDMLP) shows improvements with CDMLP in all six tests, ranging from a 15% to 30% reduction in word error. The CDMLP system combines multiple-distribution modeling with the context-dependent modeling technique.
The CDMLP system performs better than the context-dependent HMM (CDHMM) system in five out of the six tests. The MIXED system uses a weighted mixture of the logs of state observation likelihoods provided by the CIMLP and the CDHMM (Renals et al., 1992). This system shows the best recognition performance so far achieved with the DECIPHER(TM) system on the Resource Management database. In all six tests, it performs significantly better than the pure CDHMM system.

5 DISCUSSION

The results shown in Tables 1 and 2 suggest that MLP estimation of HMM observation likelihoods can improve the performance of standard HMMs. These results also suggest that systems that use MLP-based probability estimation make more efficient use of their parameters than standard HMM systems. In standard HMMs, most of the parameters in the system are in the observation distributions associated with the individual states of phone models. MLPs use representations that are more distributed in nature, allowing more sharing of representational resources and better allocation of representational resources based on training. In addition, since MLPs are trained to discriminate between classes, they focus on modeling boundaries between classes rather than class internals. One should keep in mind that the reduction in memory needs that may be attained by replacing HMM distributions with MLP-based estimates must be traded off against increased computational load during both training and recognition. The MLP computations during training and recognition are much larger than the corresponding Gaussian mixture computations for HMM systems. The results also show that the context-dependent modeling approach presented here substantially improves performance over the earlier context-independent MLP.
In addition, the context-dependent MLP performed better than the context-dependent HMM in five out of the six tests, although the CDMLP is a far simpler system than the CDHMM, with approximately a factor of four fewer parameters and modeling of only generalized biphone phonetic contexts. The CDHMM uses a range of context-dependent models, including generalized and specific biphone, triphone, and word-specific phone models. The fact that context-dependent MLPs can perform as well as or better than context-dependent HMMs while using less specific models suggests that they may be more vocabulary-independent, which is useful when porting systems to new tasks. In the near future we will test the CDMLP system on new vocabularies. The MLP smoothing approach described here can be extended to the modeling of finer context classes. A hierarchy of context classes can be defined in which each context class at one level is included in a broader class at a higher level. The context-specific MLP at a given level in the hierarchy is initialized with the weights of a previously trained context-specific MLP at the next higher level, and then finer context training can proceed as described in Section 3.2. The distributed representation used by MLPs is exploited in the context-dependent modeling approach by sharing the input-to-hidden layer weights between all context classes. This sharing substantially reduces the number of parameters to train and the amount of computation required during both training and recognition. In addition, we do not adjust the input-to-hidden weights during the context-dependent phase of training, assuming that the features provided by the hidden layer activations are relatively low-level and are appropriate for context-dependent as well as context-independent modeling.
The large decrease in cross-validation error observed going from context-independent to context-dependent MLPs (30.6% to 21.4%) suggests that the features learned by the hidden layer during the context-independent training phase, combined with the extra modeling power of the context-specific hidden-to-output layers, were adequate to capture the more detailed context-specific phone classes. The best performance shown in Tables 1 and 2 is that of the MIXED system, which combines CIMLP and CDHMM probabilities. The CDMLP probabilities can also be combined with CDHMM probabilities; however, we hope that the planned extension of our CDMLP system to finer contexts will lead to a better system than the MIXED system without the need for such mixing, resulting in a simpler system. The context-dependent MLP shown here has more than 1,400,000 weights. We were able to robustly train such a large network by using a cross-validation set to determine when to stop training, sharing many of the weights between context classes, and smoothing context-dependent with context-independent MLPs using the approach described in Section 3.2. In addition, the Ring Array Processor (RAP) special-purpose hardware, developed at ICSI (Morgan et al., 1992), allowed rapid training of such large networks on large data sets. In order to reduce the number of weights in the MLP, we are currently exploring alternative architectures which apply the smoothing techniques described here to binary context inputs.

6 CONCLUSIONS

MLP-based probability estimation can be useful for both improving recognition accuracy and reducing memory needs for HMM-based speech recognition systems. These benefits, however, must be weighed against increased computational requirements. We have presented a new MLP architecture and training procedure for modeling context-dependent phonetic classes with a sequence of distributions.
Tests using the DARPA Resource Management database have shown improvements in recognition performance using this new approach, modeling only generalized biphone context categories. These results suggest that: sharing input-to-hidden weights between context categories (and not retraining them during the context-dependent training phase) results in a hidden-layer representation which is adequate for context-dependent as well as context-independent modeling; error-based smoothing of context-independent and context-dependent weights is effective for training a robust model; and using separate output layers and hidden-to-output weights corresponding to different context classes and different states of HMM phone models is adequate to capture acoustic effects which change throughout the production of individual phonetic segments.

Acknowledgements

The work reported here was partially supported by DARPA Contract MDA904-90-C5253. Discussions with Herve Bourlard were very helpful.

References

H. Bourlard, N. Morgan, C. Wooters, and S. Renals (1992), "CDNN: A Context Dependent Neural Network for Continuous Speech Recognition," ICASSP, pp. 349-352, San Francisco.

M. Cohen, H. Murveit, J. Bernstein, P. Price, and M. Weintraub (1990), "The DECIPHER Speech Recognition System," ICASSP, pp. 77-80, Albuquerque, New Mexico.

F. Jelinek and R. Mercer (1980), "Interpolated estimation of Markov source parameters from sparse data," in Pattern Recognition in Practice, E. Gelsema and L. Kanal, Eds. Amsterdam: North-Holland, pp. 381-397.

S. Levinson, L. Rabiner, and M. Sondhi (1983), "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition," Bell Syst. Tech. Journal 62, pp. 1035-1074.

N. Morgan and H. Bourlard (1990), "Continuous Speech Recognition Using Multilayer Perceptrons with Hidden Markov Models," ICASSP, pp. 413-416, Albuquerque, New Mexico.

N. Morgan, J. Beck, P. Kohn, J. Bilmes, E. Allman, and J. Beer (1992),
"The Ring Array Processor (RAP): A Multiprocessing Peripheral for Connectionist Applications," Journal of Parallel and Distributed Computing, pp. 248-259.

S. Renals, N. Morgan, M. Cohen, and H. Franco (1992), "Connectionist Probability Estimation in the DECIPHER Speech Recognition System," ICASSP, pp. 601-604, San Francisco.

D. Rumelhart, G. Hinton, and R. Williams (1986), "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations of the Microstructure of Cognition, vol. 1: Foundations, D. Rumelhart & J. McClelland, Eds. Cambridge: MIT Press.

R. Schwartz, Y. Chow, O. Kimball, S. Roucos, M. Krasner, and J. Makhoul (1985), "Context-dependent modeling for acoustic-phonetic recognition of continuous speech," ICASSP, pp. 1205-1208.
1992
STIMULUS ENCODING BY MULTIDIMENSIONAL RECEPTIVE FIELDS IN SINGLE CELLS AND CELL POPULATIONS IN V1 OF AWAKE MONKEY

Edward Stern
Center for Neural Computation and Department of Neurobiology, Life Sciences Institute, Hebrew University, Jerusalem, Israel

Ad Aertsen
Institut fur Neuroinformatik, Ruhr-Universitat-Bochum, Bochum, Germany

Eilon Vaadia
Center for Neural Computation and Physiology Department, Hadassah Medical School, Hebrew University, Jerusalem, Israel

Shaul Hochstein
Center for Neural Computation and Department of Neurobiology, Life Sciences Institute, Hebrew University, Jerusalem, Israel

ABSTRACT

Multiple single-neuron responses were recorded from a single electrode in V1 of alert, behaving monkeys. Drifting sinusoidal gratings were presented in the cells' overlapping receptive fields, and the stimulus was varied along several visual dimensions. The degree of dimensional separability was calculated for a large population of neurons, and found to be a continuum. Several cells showed different temporal response dependencies to variation of different stimulus dimensions, i.e., the tuning of the modulated firing was not necessarily the same as that of the mean firing rate. We describe a multidimensional receptive field, and use simultaneously recorded responses to compute a multi-neuron receptive field, describing the information processing capabilities of a group of cells. Using dynamic correlation analysis, we propose several computational schemes for multidimensional spatiotemporal tuning for groups of cells. The implications for neuronal coding of stimuli are discussed.

INTRODUCTION

The receptive field is perhaps the most useful concept for understanding neuronal information processing. The ideal definition of the receptive field is that set of stimuli which cause a change in the neuron's firing properties.
However, as with many such concepts, the use of the receptive field in describing the behavior of sensory neurons falls short of the ideal. The classical method for describing the receptive field has been to measure the "tuning curve," i.e., the response of the neuron as a function of the value of one dimension of the stimulus. This presents a problem because the sensory world is multidimensional: for example, even a simple visual stimulus, such as a patch of a sinusoidal grating, may vary in location, orientation, spatial frequency, temporal frequency, movement direction and speed, phase, contrast, color, etc. Does the tuning to one dimension remain constant when other dimensions are varied? I.e., are the dimensions linearly separable? It is not unreasonable to expect inseparability: consider an oriented, spatially discrete receptive field. The excitation generated by passing a bar through the receptive field will of course change with orientation. However, the shape of this tuning curve will depend upon the bar width, which is related to the spatial frequency. This effect has not been studied quantitatively, however. If interactions among dimensions exist, do they account for a large portion of the cell's response variance? Are there discrete populations of cells, with some cells showing interactions among dimensions and others not? These questions have clear implications for the problem of neural coding. Related to the question of dimensional separability is that of stimulus encoding: given that the receptive field is multidimensional in nature, how can the cell maximize the amount of stimulus information it encodes? Does the neuron use a single code to represent all the stimulus dimensions? It is possible that interactions lead to greater uncertainty in stimulus identification. Does the small number of visual cortical cells encode all the possible combinations of stimuli using only spike rate as the dependent variable?
We present data indicating that more information is indeed present in the neuronal response, and propose a new approach for its utilization. The final problem that we address is the following: clearly, many cells participate in the stimulus encoding process. Arriving at a valid concept of a multidimensional receptive field, can we generalize this concept to more than one cell, introducing the notion of a multi-cellular receptive field?

METHODS

Drifting sinusoidal gratings were presented for 500 msec to the central 10 degrees of the visual field of monkeys performing a fixation task. The gratings were varied in orientation, spatial frequency, temporal frequency, and movement direction. We recorded from up to 3 cells simultaneously with a single electrode in the monkey's primary visual cortex (V1). The cells described in this study were well separated, using a template-matching procedure. The responses of the neurons were plotted as Peri-Stimulus Time Histograms (PSTHs) and their parameters quantified (Abeles, 1982), and offline Fourier analysis and time-dependent cross-correlation analysis (Aertsen et al., 1989) were performed.

RESULTS

Recording the responses of visual cortical neurons to stimuli varied over a number of dimensions, we found that in some cases the tuning curve to one dimension depended on the value of another dimension. Figure 1A shows the spatial-frequency tuning curve of a single cell measured at 2 different stimulus orientations. When the orientation of the stimulus is 72 degrees, the peak response is at a spatial frequency of 4.5 cycles/degree (cpd), while at an orientation of 216 degrees, the spatial frequency of peak response is 2.3 cpd.
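The PSTH quantification used in the Methods can be sketched in numpy. This is a minimal illustration with made-up bin width and spike times, not the authors' analysis code:

```python
import numpy as np

def psth(spike_trains, t_start=0.0, t_stop=0.5, bin_ms=10.0):
    """Peri-Stimulus Time Histogram: mean firing rate (spikes/s)
    per bin, averaged over stimulus repetitions.
    spike_trains: list of arrays of spike times (s), one per trial."""
    n_bins = int(round((t_stop - t_start) / (bin_ms / 1000.0)))
    edges = np.linspace(t_start, t_stop, n_bins + 1)
    counts = np.zeros(n_bins)
    for train in spike_trains:
        counts += np.histogram(train, bins=edges)[0]
    # divide by (trials x bin duration) to get a rate in spikes/s
    return counts / (len(spike_trains) * bin_ms / 1000.0)

# two repetitions of a 500-ms stimulus, spike times in seconds
trials = [np.array([0.005, 0.012, 0.251]), np.array([0.003, 0.255])]
rates = psth(trials)
```

Averaging over repetitions is what turns the raw spike counts into the per-bin firing rates shown under each PSTH in the figures.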
If the responses to different visual dimensions were truly linearly separable, the tuning curve to any single dimension would have the same shape and, in particular, the same position of peak, despite any variations in other dimensions. If the tuning curves are not parallel, then interactions must exist between dimensions. Clearly, this is an example of a cell whose responses are not linearly separable. In order to quantify the inseparability phenomenon, analyses of variance were performed, using spike rate as the dependent variable and the visual dimensions of the stimuli as the independent variables. We then measured the amount of interaction as a percentage of the total between-conditions variance divided by the residuals.

Figure 1: Dimensional Inseparability of Visual Cortical Neurons. A: An example of dimensional inseparability in the response of a single cell (normalized firing rate vs. spatial frequency in cpd, at ORI=72 and ORI=216); B: Histogram of dimensional inseparability as a percentage of total response variance.

The resulting histogram for 69 cells is shown in Figure 1B. Although there are several cells with non-significant interactions, i.e., linearly separable dimensions, these are not the majority of cells. The amount of dimensional inseparability seems to be a continuum. We suggest that separability is a significant variable in the coding capability of the neurons, which must be taken into account when modeling the representation of sensory information by cortical neural networks.
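The two-way variance decomposition behind this analysis can be illustrated as follows. Note this simplified sketch reports the interaction sum of squares as a share of the total between-conditions sum of squares (one mean rate per condition), rather than the ratio to the residuals used in the paper:

```python
import numpy as np

def interaction_share(rates):
    """rates[i, j]: mean spike rate at orientation i, spatial frequency j.
    Returns the orientation x SF interaction sum of squares as a
    percentage of the total between-conditions sum of squares."""
    grand = rates.mean()
    row = rates.mean(axis=1, keepdims=True) - grand   # main effect of rows
    col = rates.mean(axis=0, keepdims=True) - grand   # main effect of columns
    inter = rates - grand - row - col                 # interaction residual
    ss_total = ((rates - grand) ** 2).sum()
    return 100.0 * (inter ** 2).sum() / ss_total

# additive (separable) table: no interaction at all
separable = np.add.outer(np.array([1.0, 2.0, 3.0]), np.array([10.0, 20.0]))
sep_share = interaction_share(separable)

# crossed table: all between-conditions variance is interaction
crossed = np.array([[1.0, 2.0], [2.0, 1.0]])
cross_share = interaction_share(crossed)
```

A linearly separable cell would sit near 0% on this measure, while a strongly inseparable one would approach 100%, matching the continuum seen in Figure 1B.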
We found that the time course of the response was not always constant, but varied with stimulus parameters. Cortical cell responses may have components which are sustained (constant over time), transient (with a peak near stimulus onset and/or offset), or modulated (varying with the stimulus period). For example, Figure 2 shows the responses of a single neuron in V1 to 50 stimuli, varying in orientation and spatial frequency. Each response is plotted as a PSTH, and the stippled bar under the PSTH indicates the time of the stimulus presentation (500 msec).

Figure 2: Spatial Frequency/Orientation Tuning of Responses of V1 Cell

The numbers beneath each PSTH are the firing rate averaged over the response time, and the standard deviation of the response over repetitions of the stimulus (in this case 40). Clearly, the cell is orientation selective, and the neuronal response is also tuned to spatial frequency. The stimulus eliciting the highest firing rate is ORI=252 degrees, SF=3.2 cycles/degree (cpd). However, when looking at the responses to lower spatial frequencies, we see a modulation in the PSTH. The modulation, when present, has 2 peaks, corresponding to the temporal frequency of the stimulus grating (4 cycles/second). Therefore, although the response rate of the cell is lower at low spatial frequencies than for other stimuli, the spike train carries additional information about another stimulus dimension.
If the visual neuron is considered as a linear system, the predicted response to a drifting sinusoidal grating would be a (rectified) sinusoid of the same (temporal) frequency as that of the stimulus, i.e., a modulated response (Enroth-Cugell & Robson, 1966; Hochstein & Shapley, 1976; Spitzer & Hochstein, 1988). However, as seen in Figure 2, in some stimulus regimes the cell's response deviates from linearity. We conclude that the linearity or nonlinearity of the response is dependent upon the stimulus conditions (Spitzer & Hochstein, 1985). A modulated response is one that would be expected from simple cells, while the sustained response seen at higher spatial frequencies is that expected from complex cells. Our data therefore suggest that the simple/complex cell categorization is not complete. A further example of response time-course dependence on stimulus parameters is seen in Figure 3A. In this case, the stimulus was varied in spatial frequency and temporal frequency, while other dimensions were held constant. Again, as spatial frequency is raised, the modulation of the PSTH gives way to a more sustained response. Furthermore, as temporal frequency is raised, both the sustained and the modulated responses are replaced by a single transient response. When present, the frequency of the modulation follows that of the temporal frequency of the stimulus. Fourier analysis of the response histograms (Figure 3B) reveals that the DC and fundamental component (FC) are not tuned to the same stimulus values (arrows indicating peaks). We propose that this information may be available to the cell readout, enabling the single cell to encode multiple stimulus dimensions simultaneously. Thus, a complete description of the receptive field must be multidimensional in nature. Furthermore, in light of the evidence that the spike train is not constant, one of the dimensions which must be used to display the receptive field must be time.
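The separation of a PSTH into its DC and fundamental component (FC) can be sketched with an FFT. The synthetic 4 Hz response below is illustrative, not recorded data:

```python
import numpy as np

def dc_and_fc(psth_rates, bin_s, stim_freq_hz):
    """DC (mean rate) and amplitude of the fundamental component
    at the stimulus temporal frequency, from an FFT of the PSTH."""
    n = len(psth_rates)
    spec = np.fft.rfft(psth_rates) / n
    freqs = np.fft.rfftfreq(n, d=bin_s)
    k = np.argmin(np.abs(freqs - stim_freq_hz))  # bin nearest the stimulus TF
    # one-sided spectrum: double the non-DC amplitude
    return spec[0].real, 2.0 * np.abs(spec[k])

# synthetic modulated response: 4 Hz grating, 10-ms bins, 1 s of data
t = np.arange(100) * 0.01
rates = 30.0 + 20.0 * np.cos(2 * np.pi * 4.0 * t)
dc, fc = dc_and_fc(rates, 0.01, 4.0)
```

Computing the DC and FC separately across the stimulus grid is what allows the two measures to reveal different tuning peaks, as in Figure 3B.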
Figure 4 shows one method of displaying a multidimensional response map, with time along the abscissa (in 10 msec bins) and orientation along the ordinate. In the top two figures, the z axis, represented in gray-scale, is the number of counts (spikes) per bin. Therefore, each line is a PSTH, with counts (bin height) coded by shading. In this example, cell 2 (upper picture) is tuned to orientation, with peaks at 90 and 270 degrees. The cell is only slightly direction selective, as represented by the fact that the 2 areas of high activity are similarly shaded. However, there is a transient peak at 270 degrees which is absent at 90 degrees.

Figure 3: A. TF/SF Tuning of response of V1 cell. B. Tuning of DC and FC of response to stimulus parameters.

The middle picture, representing a simultaneously recorded cell, shows a different pattern of activity. The orientation tuning of this cell is similar to that of cell 2, but it has stronger directional selectivity (towards 90 degrees). In this case, the transient is also at 90 degrees. The bottom picture shows the joint activity of these 2 cells. Rather than each line being a PSTH, each line is a Joint PSTH (JPSTH; Aertsen et al., 1989). This histogram represents the time-dependent correlated activity of a pair of cells. It is equivalent to sliding a window across a spike train of one neuron and asking when a spike from another neuron falls within the window.

Figure 4: Response Maps (counts/bin; SF=4.5 cpd). Top, Middle: Single-cell Multidimensional Receptive Fields; Bottom: Multi-Cell Multidimensional Receptive Field.

The size of the window can be varied; here we used 2 msec. Therefore, we are asking when these cells fire within 2 msec of each other, and how this is connected to the stimulus. The z axis is now coincidences per bin. We may consider this the logical AND activity of these cells; if there is a cell receiving information from both of these neurons, this is the receptive field which would describe its input. Clearly, it is different from each of the 2 individual cells. In our results, it is more narrowly tuned, and the tuning cannot be predicted from the individual components. We emphasize that this is the "raw" JPSTH, which is not corrected for stimulus effects, common input, or normalized. This is because we want a measure comparable to the PSTHs themselves, to compare a multi-unit receptive field to its single-unit components. In this case, however, a significant (p<0.01; Palm et al., 1988) "mono-directional" interaction is present. For a more complete description of the receptive field, this type of figure, shown here for one spatial frequency only, can be shown for all spatial frequencies as "slices" along a fourth axis. However, space limitations prevent us from presenting this multidimensional aspect of the multicellular receptive field.

CONCLUSIONS

We have shown that interactions among stimulus dimensions account for a significant proportion of the response variance of V1 cells. The variance of the interactions itself may be a useful parameter when considering a population response, as the amount and location of the dimensional inseparability varies among cells.
We have also shown that different temporal characteristics of the spike trains can be tuned to different dimensions, and add to the encoding capabilities of the cell in a neurobiologically realistic manner. Finally, we use these results to generate multidimensional receptive fields, for single cells and small groups of cells. We emphasize that this can be generalized to larger populations of cells, and to compute the population responses of cells that may be meaningful for the cortex as a biological neuronal network.

Acknowledgements

We thank Israel Nelken, Hagai Bergman, Volodya Yakovlev, Moshe Abeles, Peter Hillman, Robert Shapley and Valentino Braitenberg for helpful discussions. This study was supported by grants from the U.S.-Israel Bi-National Science Foundation (BSF) and the Israel Academy of Sciences.

References

1. Abeles, M. Quantification, Smoothing, and Confidence Limits for Single Units' Histograms. J. Neurosci. Methods 5, 317-325, 1982.
2. Aertsen, A.M.H.J., Gerstein, G.L., Habib, M.K., and Palm, G. Dynamics of Neuronal Firing Correlation: Modulation of "Effective Connectivity." J. Neurophysiol. 61(5), 900-917, 1989.
3. Enroth-Cugell, C. and Robson, J.G. The Contrast Sensitivity of Retinal Ganglion Cells of the Cat. J. Physiol. Lond. 187, 517-552, 1966.
4. Hochstein, S. and Shapley, R.M. Linear and Nonlinear Spatial Subunits in Y Cat Retinal Ganglion Cells. J. Physiol. Lond. 262, 265-284, 1976.
5. Palm, G., Aertsen, A.M.H.J. and Gerstein, G.L. On the Significance of Correlations Among Neuronal Spike Trains. Biol. Cybern. 59, 1-11, 1988.
6. Spitzer, H. and Hochstein, S. Simple and Complex-Cell Response Dependencies on Stimulation Parameters. J. Neurophysiol. 53, 1244-1265, 1985.
7. Spitzer, H. and Hochstein, S. Complex Cell Receptive Field Models. Prog. in Neurobiology 31, 285-309, 1988.
1992
On the Use of Evidence in Neural Networks

David H. Wolpert
The Santa Fe Institute
1660 Old Pecos Trail
Santa Fe, NM 87501

Abstract

The Bayesian "evidence" approximation has recently been employed to determine the noise and weight-penalty terms used in back-propagation. This paper shows that for neural nets it is far easier to use the exact result than it is to use the evidence approximation. Moreover, unlike the evidence approximation, the exact result neither has to be re-calculated for every new data set, nor requires the running of computer code (the exact result is closed form). In addition, it turns out that the evidence procedure's MAP estimate for neural nets is, in toto, approximation error. Another advantage of the exact analysis is that it does not lead one to incorrect intuition, like the claim that using evidence one can "evaluate different priors in light of the data". This paper also discusses sufficiency conditions for the evidence approximation to hold, why it can sometimes give "reasonable" results, etc.

1 THE EVIDENCE APPROXIMATION

It has recently become popular to consider the problem of training neural nets from a Bayesian viewpoint (Buntine and Weigend 1991, MacKay 1992). The usual way of doing this starts by assuming that there is some underlying target function f from R^n to R, parameterized by an N-dimensional weight vector w. We are provided with a training set L of noise-corrupted samples of f. Our goal is to make a guess for w, basing that guess only on L. Now assume we have i.i.d. additive Gaussian noise, resulting in P(L | w, β) ∝ exp(−β χ²(w, L)), where χ²(w, L) is the usual sum-squared training set error, and β reflects the noise level. Assume further that P(w | α) ∝ exp(−α W(w)), where W(w) is the sum of the squares of the weights. If the values of α and β are known and fixed, to the values α_t and β_t respectively, then P(w)
Bayes' theorem then tells us that the posterior is proportional to the product of the likelihood and the prior, i.e., P(w | L) ∝ P(L | w) P(w). Consequently, finding the maximum a posteriori (MAP) w - the w which maximizes P(w | L) - is equivalent to finding the w minimizing χ²(w, L) + (α_t / β_t) W(w). This can be viewed as a justification for performing gradient descent with weight-decay.

One of the difficulties with the foregoing is that we almost never know α_t and β_t in real-world problems. One way to deal with this is to estimate α_t and β_t, for example via a technique like cross-validation. In contrast, a Bayesian approach to this problem would be to set priors over α and β, and then examine the consequences for the posterior of w. This Bayesian approach is the starting point for the "evidence" approximation created by Gull (Gull 1989). One makes three assumptions, for P(w | γ), P(L | w, γ), and P(γ). (For simplicity of the exposition, from now on the two hyperparameters α and β will be expressed as the two components of the single vector γ.) The quantity of interest is the posterior:

P(w | L) = ∫ dγ P(w, γ | L) = ∫ dγ [{P(w, γ | L) / P(γ | L)} × P(γ | L)]   (1)

The evidence approximation suggests that if P(γ | L) is sharply peaked about γ = γ̂, while the term in curly brackets is smooth about γ = γ̂, then one can approximate the w-dependence of P(w | L) as P(w, γ̂ | L) / P(γ̂ | L) ∝ P(L | w, γ̂) P(w | γ̂). In other words, with the evidence approximation, one sets the posterior by taking P(w) = P(w | γ̂) and P(L | w) = P(L | w, γ̂), where γ̂ is the MAP γ. P(L | γ) = ∫ dw [P(L | w, γ) P(w | γ)] is known as the "evidence" for L given γ. For relatively smooth P(γ), the peak of P(γ | L) is the same as the peak of the evidence (hence the name "evidence approximation"). Although the current discussion will only explicitly consider using evidence to set hyperparameters like α and β,
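The correspondence between MAP estimation and gradient descent with weight-decay noted above can be sketched in a few lines. This is a toy illustration with made-up data; the linear model, `alpha_t` and `beta_t` below are assumptions for the sketch, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: m noisy samples of a linear target (made up).
m, N = 20, 5
X = rng.normal(size=(m, N))
y = X @ rng.normal(size=N) + 0.1 * rng.normal(size=m)

alpha_t, beta_t = 0.5, 10.0  # assumed known hyperparameters

def chi2(w):
    return np.sum((y - X @ w) ** 2)  # sum-squared training error

def W(w):
    return np.sum(w ** 2)            # sum of squared weights

def objective(w):
    # Minimizing this is equivalent to maximizing the posterior P(w | L).
    return chi2(w) + (alpha_t / beta_t) * W(w)

# Plain gradient descent; the weight-decay term comes from the prior.
lam = alpha_t / beta_t
w = np.zeros(N)
lr = 1.0 / (2 * np.linalg.eigvalsh(X.T @ X + lam * np.eye(N)).max())
for _ in range(5000):
    grad = -2 * X.T @ (y - X @ w) + 2 * lam * w
    w -= lr * grad

# The descent solution matches the closed-form ridge-regression MAP estimate.
w_map = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
print(np.allclose(w, w_map, atol=1e-8))
```

The decay coefficient is exactly the ratio α_t/β_t of the prior and noise hyperparameters, which is why the rest of the paper is concerned with how those hyperparameters get set.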
most of what will be said also applies to the use of evidence to set other characteristics of the learner, like its architecture.

MacKay has applied the evidence approximation to finding the posterior for the neural net P(w | α) and P(L | w, β) recounted above, combined with a P(γ) = P(α, β) which is uniform over all α and β from 0 to +∞ (MacKay 1992). In addition to the error introduced by the evidence approximation, additional error is introduced by his need to approximate γ̂. MacKay states that although he expects his approximation for γ̂ to be valid, "it is a matter of further research to establish [conditions for] this approximation to be reliable".

2 THE EXACT CALCULATION

It is always true that the exact posterior is given by

P(w) = ∫ dγ P(w | γ) P(γ);  P(L | w) = ∫ dγ {P(L | w, γ) × P(w | γ) × P(γ)} / P(w);
P(w | L) ∝ ∫ dγ {P(L | w, γ) × P(w | γ) × P(γ)}   (2)

where the proportionality constant, being independent of w, is irrelevant.

Using the neural net P(w | α) and P(L | w, β) recounted above, and MacKay's P(γ), it is trivial to use equation 2 to calculate that P(w) ∝ [W(w)]^−(N/2 + 1), where N is the number of weights. Similarly, with m the number of pairs in L, P(L | w) ∝ [χ²(w, L)]^−(m/2 + 1). (See (Wolpert 1992) and (Buntine and Weigend 1991), and allow the output values in L to range from −∞ to +∞.) These two results give us the exact expression for the posterior P(w | L). In contrast, the evidence-approximated posterior ∝ exp[−α′(L) W(w) − β′(L) χ²(w, L)].

It is illuminating to compare this exact calculation to the calculation based on the evidence approximation. A good deal of relatively complicated mathematics followed by some computer-based numerical estimation is necessary to arrive at the answer given by the evidence approximation. (This is due to the need to approximate γ̂.)
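The "simple gaussian integral" behind these closed forms can be made explicit. With the normalized gaussian prior P(w | α) = (α/π)^{N/2} e^{−αW(w)} and MacKay's flat P(α), the marginalization in equation 2 is a Gamma integral:

```latex
P(w) \;=\; \int_0^\infty d\alpha\, P(w \mid \alpha)\, P(\alpha)
\;\propto\; \int_0^\infty d\alpha\, \alpha^{N/2}\, e^{-\alpha W(w)}
\;=\; \Gamma\!\left(\tfrac{N}{2}+1\right)\,[W(w)]^{-(N/2+1)}
\;\propto\; [W(w)]^{-(N/2+1)} .
```

The identical computation with β and χ²(w, L) in place of α and W(w) gives P(L | w) ∝ [χ²(w, L)]^−(m/2 + 1).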
In contrast, to perform the exact calculation one only need evaluate a simple gaussian integral, which can be done in closed form, and in particular one doesn't need to perform any computer-based numerical estimation. In addition, with the evidence procedure γ̂ must be re-evaluated for each new data set, which means that the formula giving the posterior must be re-derived every time one uses a new data set. In contrast, the exact calculation's formula for the posterior holds for any data set; no re-calculations are required. So as a practical tool, the exact calculation is both far simpler and quicker to use than the calculation based on the evidence approximation.

Another advantage of the exact calculation, of course, is that it is exact. Indeed, consider the simple case where the noise is fixed, i.e., P(γ) = P(γ₁) δ(γ₂ − β_t), so that the only term we must "deal with" is γ₁ = α. Set all other distributions as in (MacKay 1992). For this case, the w-dependence of the exact posterior can be quite different from that of the evidence-approximated posterior. In particular, note that the MAP estimate based on the exact calculation is w = 0. This is, of course, a silly answer, and reflects the poor choice of distributions made in (MacKay 1992). In particular, it directly reflects the un-normalizability of MacKay's P(α). However the important point is that this is the exactly correct answer for those distributions. On the other hand, the evidence procedure will result in an MAP estimate of argmin_w [χ²(w, L) + (α′ / β′) W(w)], where α′ and β′ are derived from L. This answer will often be far from the correct answer of w = 0. Note also that the evidence approximation's answer will vary, perhaps greatly, with L, whereas the correct answer is L-independent. Finally, since the correct answer is w = 0, the difference between the evidence procedure's answer and the correct answer is equal to the evidence procedure's answer.
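Because the exact marginalization is closed form, it is also easy to check numerically. The sketch below (a hypothetical small N and arbitrary W values; quadrature over a truncated α range) verifies that integrating the gaussian prior against a flat P(α) reproduces the [W(w)]^−(N/2 + 1) dependence:

```python
import numpy as np
from math import gamma

N = 4  # a hypothetical, small number of weights

def trapezoid(f_vals, xs):
    # Composite trapezoid rule on a uniform grid.
    dx = xs[1] - xs[0]
    return (f_vals.sum() - 0.5 * (f_vals[0] + f_vals[-1])) * dx

def marginal_numeric(W_val):
    # P(w) ∝ ∫ dα α^(N/2) exp(−α W(w)), flat P(α) truncated far out.
    alphas = np.linspace(0.0, 2000.0, 500_001)
    return trapezoid(alphas ** (N / 2) * np.exp(-alphas * W_val), alphas)

def marginal_closed(W_val):
    # Γ(N/2 + 1) · [W(w)]^−(N/2 + 1), up to w-independent constants.
    return gamma(N / 2 + 1) * W_val ** -(N / 2 + 1)

# Only the w-dependence matters: ratios for two weight vectors must agree.
r_num = marginal_numeric(1.0) / marginal_numeric(2.5)
r_exact = marginal_closed(1.0) / marginal_closed(2.5)
print(abs(r_num / r_exact - 1) < 1e-6)
```

No γ̂ ever has to be estimated here; the same closed form serves for every data set.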
In other words, although there exist scenarios for which the evidence approximation is valid, neural nets with flat P(γ₁) is not one of them; for this scenario, the evidence procedure's answer is in toto approximation error. (A possible reason for this is presented in section 4.)

If one were to use a more reasonable P(α), uniform only from 0 to an upper cut-off α_max, the results would be essentially the same, for large enough α_max. The effect on the exact posterior, to first order, is to introduce a small region around w = 0 in which P(w) behaves like a decaying exponential in W(w) (the exponent being set by α_max) rather than like [W(w)]^−(N/2 + 1) (T. Wallstrom, private communication). For large enough α_max, the region is small enough so that the exact posterior still has a peak very close to 0. On the other hand, for large enough α_max, there is no change in the evidence procedure's answer. (Generically, the major effect on the evidence procedure of modifying P(γ) is not to change its guess for P(w | L), but rather to change the associated error, i.e., change whether sufficiency conditions for the validity of the approximation are met. See below.)

Even with a normalizable prior, the evidence procedure's answer is still essentially all approximation error. Consider again the case where the prior over both α and β is uniform. With the evidence approximation, the log of the posterior is −{χ²(w, L) + (α′ / β′) W(w)}, where α′ and β′ are set by the data. On the other hand, the exact calculation shows that the log of the posterior is really given by −{ln[χ²(w, L)] + ((N+2) / (m+2)) ln[W(w)]}. What's interesting about this is not simply the logarithms, absent from the evidence approximation's answer, but also the factor multiplying the term involving the "weight penalty" quantity W(w). In the evidence approximation, this factor is data-dependent, whereas in the exact calculation it only depends on the number of data.
Moreover, the value of this factor in the exact calculation tells us that if the number of weights increases, or alternatively the number of training examples decreases, the "weight penalty" term becomes more important, and fitting the training examples becomes less important. (It is not at all clear that this trade-off between N and m is reflected in (α′ / β′), the corresponding factor from the evidence approximation.) As before, if we have upper cut-offs on P(γ), so that the MAP estimate may be reasonable, things don't change much. For such a scenario, the N vs. m trade-off governing the relative importance of W(w) and χ²(w, L) still holds, but only to lowest order, and only in the region sufficiently far from the α-singularities (like w = 0), so that P(w | L) behaves like [W(w)]^−(N/2 + 1) × [χ²(w, L)]^−(m/2 + 1).

All of this notwithstanding, the evidence approximation has been reported to give good results in practice. This should not be all that surprising. There are many procedures which are formally illegal but which still give reasonable advice. (Some might classify all of non-Bayesian statistics that way.) The evidence procedure fixes γ to a single value, essentially by maximum likelihood. That's not unreasonable, just usually illegal (as well as far more laborious than the correct Bayesian procedure).

In addition, the tests of the evidence approximation reported in (MacKay 1992) are not all that convincing. For paper 1, the evidence approximation gives α′ = 2.5. For any other α in an interval extending three orders of magnitude about this α′, test set error is essentially unchanged (see figure 5 of (MacKay 1992)). Since such error is what we're ultimately interested in, this is hardly a difficult test of the evidence approximation. In paper 2 of (MacKay 1992) the initial use of the evidence approximation is "a failure of Bayesian prediction"; P(γ | L) doesn't correlate with test set error (see figure 7).
MacKay addresses this by arguing that poor Bayesian results are never wrong, but only "an opportunity to learn" (in contrast to poor non-Bayesian results?). Accordingly, he modifies the system while looking at the test set, to get his desired correlation on the test set. To do this legally, he should have instead modified his system while looking at a validation set, separate from the test set. However if he had done that, it would have raised the question of why one should use evidence at all; since one is already assuming that behavior on a validation set corresponds to behavior on a test set, why not just set α and β via cross-validation?

3 EVIDENCE AND THE PRIOR

Consider again equation 1. Since γ̂ depends on the data L, it would appear that when the evidence approximation is valid, the data determines the prior, or as MacKay puts it, "the modern Bayesian ... does not assign the priors - many different priors can be ... compared in the light of the data by evaluating the evidence" (MacKay 1992). If this were true, it would remove perhaps the most major objection which has been raised concerning Bayesian analysis - the need to choose priors in a subjective manner, independent of the data.

However the exact P(w) given by equation 2 is data-independent. So one has chosen the prior, in a subjective way. The evidence procedure is simply providing a data-dependent approximation to a data-independent quantity. In no sense does the evidence procedure allow one to side-step the need to make subjective assumptions which fix P(w).

Since the true P(w) doesn't vary with L whereas the evidence approximation's P(w) does, one might suspect that that approximation to P(w) can be quite poor, even when the evidence approximation to the posterior is good. Indeed, if P(w | γ₁) is exponential, there is no non-pathological scenario for which the evidence approximation to P(w) is correct:

Theorem 1: Assume that P(w | γ₁) ∝ e^−γ₁ U(w).
Then the only way that one can have P(w) ∝ e^−a U(w) for some constant a is if P(γ₁) = 0 for all γ₁ ≠ a.

Proof: Our proposed equality is exp(−a × U) = ∫ dγ₁ {P(γ₁) × exp(−γ₁ × U)} (the normalization factors having all been absorbed into P(γ₁)). We must find an a and a normalizable P(γ₁) such that this equality holds for all allowed U. Let u be such an allowed value of U. Take the derivative with respect to U of both sides of the proposed equality t times, and evaluate for U = u. The result is aᵗ = ∫ dγ₁ (γ₁ᵗ × R(γ₁)) for any integer t ≥ 0, where R(γ₁) ≡ P(γ₁) exp(u(a − γ₁)). Using this, we see that ∫ dγ₁ ((γ₁ − a)² × R(γ₁)) = 0. Since both R(γ₁) and (γ₁ − a)² are nowhere negative, this means that for all γ₁ for which (γ₁ − a)² ≠ 0, R(γ₁) must equal zero. Therefore R(γ₁) must equal zero for all γ₁ ≠ a. QED.

Since the evidence approximation for the prior is always wrong, how can its approximation for the posterior ever be good? To answer this, write P(w | L) = P(L | w) × [P′(w) + E(w)] / P(L), where P′(w) is the evidence approximation to P(w). (It is assumed that we know the likelihood exactly.) This means that P(w | L) − {P(L | w) × P′(w) / P(L)}, the error in the evidence procedure's estimate for the posterior, equals P(L | w) × E(w) / P(L). So we can have arbitrarily large E(w) and not introduce sizable error into the posterior of w, but only for those w for which P(L | w) is small. As L varies, the w with non-negligible likelihood vary, and the γ such that for those w P(w | γ) is a good approximation to P(w) varies. When it works, the γ̂ given by the evidence approximation reflects this changing of γ with L.

4 SUFFICIENCY CONDITIONS FOR EVIDENCE TO WORK

Note that regardless of how peaked the evidence is, −{χ²(w, L) + (α′ / β′) W(w)} ≠ −{ln[χ²(w, L)] + ((N+2) / (m+2)) ln[W(w)]}; the evidence approximation always has non-negligible error for neural nets used with flat P(γ).
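Theorem 1 can be sanity-checked numerically. A mixture of exponentials in U is strictly log-convex in U, while a single exponential e^−aU is log-linear, so the two can never coincide. A sketch with a hypothetical two-point P(γ₁):

```python
import numpy as np

# Hypothetical prior over γ₁ concentrated on two values.
gammas = np.array([0.5, 2.0])
probs = np.array([0.5, 0.5])

def marginal(U):
    # P(w) as a function of U(w): ∫ dγ₁ P(γ₁) exp(−γ₁ U).
    return np.sum(probs * np.exp(-gammas * U))

U_vals = [1.0, 1.5, 2.0]
logs = np.log([marginal(u) for u in U_vals])

# Strict convexity of log P in U: the central second difference is positive.
# A single exponential exp(−aU) would give exactly zero here.
second_diff = logs[0] - 2 * logs[1] + logs[2]
print(second_diff > 0)
```

So no single value a can reproduce the marginal prior, exactly as the theorem asserts for any P(γ₁) not concentrated on one point.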
To understand this, one must carefully elucidate a set of sufficiency conditions necessary for the evidence approximation to be valid. (Unfortunately, this has never been done before. A direct consequence is that no one has ever checked, formally, that a particular use of the evidence approximation is justified.) One such set of sufficiency conditions, the one implicit in all attempts to date to justify the evidence approximation (i.e., the one implicit in the logic of equation 1), is the following:

(i) P(γ | L) is sharply peaked about a particular γ, γ̂.
(ii) P(w, γ | L) / P(γ | L) varies slowly around γ = γ̂.
(iii) P(w, γ | L) is infinitesimal for all γ sufficiently far from γ̂.

Formally, condition (iii) can be taken to mean that there exists a not too large positive constant k, and a small positive constant δ, such that |P(w | L) − k ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L)| is bounded by a small constant ε for all w. (As stated, (iii) has k = 1. This will almost always be the case in practice and will usually be assumed, but it is not needed to prove theorem 2.) Condition (ii) can be taken to mean that across [γ̂ − δ, γ̂ + δ], |P(w | γ, L) − P(w | γ̂, L)| < τ, for some small positive constant τ, for all w. (Here and throughout this paper, when γ is multi-dimensional, δ is taken to be a small positive vector.)

Theorem 2: When conditions (i), (ii), and (iii) hold, P(w | L) ≈ P(L | w, γ̂) × P(w | γ̂), up to an (irrelevant) overall proportionality constant.

Proof: Condition (iii) gives |P(w | L) − k ∫_{γ̂−δ}^{γ̂+δ} dγ [P(w | γ, L) × P(γ | L)]| < ε for all w. However |k ∫_{γ̂−δ}^{γ̂+δ} dγ [P(w | γ, L) × P(γ | L)] − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < τk × ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L), by condition (ii). If we now combine these two results, we see that |P(w | L) − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < ε + τk × ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L).
Since the integral is bounded by 1, |P(w | L) − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < ε + τk. Since the integral is independent of w, up to an overall proportionality constant (that integral times k) the w-dependence of P(w | L) can be approximated by that of P(w | γ̂, L) ∝ P(L | w, γ̂) × P(w | γ̂), incurring error less than ε + τk. Take k not too large and both ε and τ small. QED.

Note that the proof would go through even if P(γ | L) were not peaked about γ̂, or if P(γ | L) were peaked about some point far from the γ̂ for which (ii) and (iii) hold; nowhere in the proof is the definition of γ̂ from condition (i) used. However in practice, when condition (iii) is met, k = 1, P(γ | L) falls to 0 outside of [γ̂ − δ, γ̂ + δ], and P(w | γ, L) stays reasonably bounded for all such γ. (If this weren't the case, then P(w | γ, L) would have to fall to 0 outside of [γ̂ − δ, γ̂ + δ], something which is rarely true.) So we see that we could either just give conditions (ii) and (iii), or we could give (i), (ii), and the extra condition that P(w | γ, L) is bounded small enough so that condition (iii) is met. (In addition, one can prove that if the evidence approximation is valid, then conditions (i) and (ii) give condition (iii).)

In any case, it should be noted that conditions (i) and (ii) by themselves are not sufficient for the evidence approximation to be valid. To see this, have w be one-dimensional, and let P(w, γ | L) = 0 both for {|γ − γ̂| < δ, |w − w*| < ν} and for {|γ − γ̂| > δ, |w − w*| > ν}. Let it be constant everywhere else (within certain bounds of allowed γ and w). Then for both δ and ν small, conditions (i) and (ii) hold: the evidence is peaked about γ̂, and τ = 0. Yet for the true MAP w, w*, the evidence approximation fails badly. (Generically, this scenario will also result in a big error if rather than using the evidence-approximated posterior to guess the MAP w, one instead uses it to evaluate the posterior-averaged f, ∫ df f P(f | L).)

Gull mentions only condition (i).
MacKay also mentions condition (ii), but not condition (iii). Neither author plugs in for ε and τ, or in any other way uses their distributions to infer bounds on the error accompanying their use of the evidence approximation.

Since by (i) P(γ | L) is sharply peaked about γ̂, one would expect that for (ii) to hold P(w, γ | L) must also be sharply peaked about γ̂. Although this line of reasoning can be formalized, it turns out to be easier to prove the result using sufficiency condition (iii):

Theorem 3: If condition (iii) holds, then for all w such that P(w | L) > c > ε, for each component i of γ, P(w, γᵢ | L) must have a γᵢ-peak somewhere within δᵢ[1 + 2ε / (c − ε)] of (γ̂)ᵢ.

Proof: Condition (iii) with k = 1 tells us that P(w | L) − ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L) < ε. Extending the integrals over the components γ_{j≠i} gives P(w | L) − ∫_{(γ̂−δ)ᵢ}^{(γ̂+δ)ᵢ} dγᵢ P(w, γᵢ | L) < ε. From now on the i subscript on γ and δ will be implicit. We have ε > ∫_{γ̂+δ}^{γ̂+δ+r} dγ P(w, γ | L) for any scalar r > 0. Assume that P(w, γ | L) doesn't have a peak anywhere in [γ̂ − δ, γ̂ + δ + r]. Without loss of generality, assume also that P(w, γ̂ + δ | L) ≥ P(w, γ̂ − δ | L). These two assumptions mean that for any γ ∈ [γ̂ + δ, γ̂ + δ + r], the value of P(w, γ | L) exceeds the maximal value it takes on in the interval [γ̂ − δ, γ̂ + δ]. Therefore ∫_{γ̂+δ}^{γ̂+δ+r} dγ P(w, γ | L) ≥ (r / 2δ) × ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L). This means that ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L) < 2δε / r. But since P(w | L) < ε + ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L), this means that P(w | L) < ε(1 + 2δ / r). So if P(w | L) > c > ε, r < 2δε / (c − ε), and there must be a peak of P(w, γ | L) within δ(1 + 2ε / (c − ε)) of γ̂. QED.

So for those w with non-negligible posterior, for ε small, the γ-peak of P(w, γ | L) ∝ P(L | w, γ) × P(w | γ) × P(γ) must lie essentially within the peak of P(γ | L).
Therefore:

Theorem 4: Assume that P(w | γ₁) = exp(−γ₁ U(w)) / Z₁(γ₁) for some function U(·), P(L | w, γ₂) = exp(−γ₂ V(w, L)) / Z₂(γ₂, w) for some function V(·, ·), and P(γ) = P(γ₁)P(γ₂). (The Zᵢ act as normalization constants.) Then if condition (iii) holds, for all w with non-negligible posterior the γ-solution to the equations

−U(w) + ∂_{γ₁} [ln(P(γ₁)) − ln(Z₁(γ₁))] = 0
−V(w, L) + ∂_{γ₂} [ln(P(γ₂)) − ln(Z₂(γ₂, w))] = 0

must lie within the γ-peak of P(γ | L).

Proof: P(w, γ | L) ∝ {P(γ₁) × P(γ₂) × exp[−γ₁ U(w) − γ₂ V(w, L)]} / {Z₁(γ₁) × Z₂(γ₂, w)}. For both i = 1 and i = 2, evaluate ∂_{γᵢ} {∫ dγ_{j≠i} P(w, γ | L)}, and set it equal to zero. This gives the two equations. Now define "the γ-peak of P(γ | L)" to mean a cube with i-component width δᵢ[1 + 2ε / (c − ε)], centered on γ̂, where having a "non-negligible posterior" means P(w | L) > c. Applying theorem 3, we get the result claimed. QED.

In particular, in MacKay's scenario, P(γ) is uniform, W(w) = Σ_{i=1}^{N} (wᵢ)², and V(w, L) = χ²(w, L). Therefore Z₁ and Z₂ are proportional to (γ₁)^−N/2 and (γ₂)^−m/2 respectively. This means that if the vector {γ₁, γ₂} = {N / [2W(w)], m / [2χ²(w, L)]} does not lie within the peak of the evidence for the MAP w, condition (iii) does not hold. That γ₁ / γ₂ must approximately equal [N χ²(w, L)] / [m W(w)] should not be too surprising. If we set the w-gradient of both the evidence-approximated and exact P(w | L) to zero, and demand that the same w, w′, solves both equations, we get γ₁ / γ₂ = [(N + 2) χ²(w′, L)] / [(m + 2) W(w′)]. (Unfortunately, if one continues and evaluates ∂²P(w | L) / ∂wᵢ∂wⱼ at w′, often one finds that it has opposite signs for the two posteriors, a graphic failure of the evidence approximation.)

It is not clear from the provided neural net data whether this condition is met in (MacKay 1992). However it appears that the corresponding condition is not met, for γ₁ at least, for the scenario in (Gull 1992) in which the evidence approximation is used with U(·) being the entropy.
(See (Strauss et al. 1993, Wolpert et al. 1993).) Since conditions (i) through (iii) are sufficient conditions, not necessary ones, this does not prove that Gull's use of evidence is invalid. (It is still an open problem to delineate the full iff for when the evidence approximation is valid, though it appears that matching of peaks as in theorem 3 is necessary. See (Wolpert et al. 1993).) However this does mean that the justification offered by Gull for his use of evidence is apparently invalid. It might also help explain why Gull's results were "visually disappointing and ... clearly ... 'over-fitted'", to use his terms.

The first equation in theorem 4 can be used to set restrictions on the set of w which both have non-negligible posterior and for which condition (iii) holds. Consider for example MacKay's scenario, where that equation says that N / [2W(w)] must lie within the width of the evidence peak. If the evidence peak is sharp, this means that unless all w with non-negligible posterior have essentially the same W(w), condition (iii) can not hold for all of them.

Finally, if for some reason one wishes to know γ̂, theorem 4 can sometimes be used to circumvent the common difficulty of evaluating P(γ | L). To do this, one assumes that conditions (i) through (iii) hold. Then one finds any w with a non-negligible posterior (say by use of the evidence approximation coupled with approximations to P(γ | L)) and uses it in theorem 4 to find a γ which must lie within the peak of P(γ | L), and therefore must lie close to the correct value of γ̂.

To summarize, there might be scenarios in which the exact calculation of the quantity of interest is intractable, so that some approximation like evidence is necessary. Alternatively, if one's choice of P(w | γ), P(γ), and P(L | w, γ) is poor, the evidence approximation would be useful if the error in that approximation somehow "cancels" error in the choice of distributions.
However if one believes one's choice of distributions, and if the quantity of interest is P(w | L), then at a minimum one should check conditions (i) through (iii) before using the evidence approximation. When one is dealing with neural nets, one needn't even do that; the exact calculation is quicker and simpler than using the evidence approximation.

Acknowledgments

This work was done at the SFI and was supported in part by NLM grant F37 LM00011. I would like to thank Charlie Strauss and Tim Wallstrom for stimulating discussion.

References

Buntine, W., Weigend, A. (1991). Bayesian back-propagation. Complex Systems, 5, 603.

Gull, S.F. (1989). Developments in maximum entropy data analysis. In "Maximum-entropy and Bayesian methods", J. Skilling (Ed.). Kluwer Academic publishers.

MacKay, D.J.C. (1992). Bayesian Interpolation. A Practical Framework for Backpropagation Networks. Neural Computation, 4, 415 and 448.

Strauss, C.E.M., Wolpert, D.H., Wolf, D.R. (1993). Alpha, Evidence, and the Entropic Prior. In "Maximum-entropy and Bayesian methods", A. Mohammed-Djafari (Ed.). Kluwer Academic publishers. In press.

Wolpert, D.H. (1992). A Rigorous Investigation of "Evidence" and "Occam Factors" in Bayesian Reasoning. SFI TR 92-03-13. Submitted.

Wolpert, D.H., Strauss, C.E.M., Wolf, D.R. (1993). On evidence and the marginalization of alpha in the entropic prior. In preparation.

PART VI

NETWORK DYNAMICS AND CHAOS
Automatic Capacity Tuning of Very Large VC-dimension Classifiers

I. Guyon
AT&T Bell Labs, 50 Fremont St., 6th floor, San Francisco, CA 94105
isabelle@neural.att.com

B. Boser*
EECS Department, University of California, Berkeley, CA 94720
boser@eecs.berkeley.edu

V. Vapnik
AT&T Bell Labs, Room 4G-314, Holmdel, NJ 07733
vlad@neural.att.com

Abstract

Large VC-dimension classifiers can learn difficult tasks, but are usually impractical because they generalize well only if they are trained with huge quantities of data. In this paper we show that even high-order polynomial classifiers in high dimensional spaces can be trained with a small amount of training data and yet generalize better than classifiers with a smaller VC-dimension. This is achieved with a maximum margin algorithm (the Generalized Portrait). The technique is applicable to a wide variety of classifiers, including Perceptrons, polynomial classifiers (sigma-pi unit networks) and Radial Basis Functions. The effective number of parameters is adjusted automatically by the training algorithm to match the complexity of the problem. It is shown to equal the number of those training patterns which are closest to the decision boundary (supporting patterns). Bounds on the generalization error and the speed of convergence of the algorithm are given. Experimental results on handwritten digit recognition demonstrate good generalization compared to other algorithms.

1 INTRODUCTION

Both experimental evidence and theoretical studies [1] link the generalization of a classifier to the error on the training examples and the capacity of the classifier.

*Part of this work was done while B. Boser was at AT&T Bell Laboratories. He is now at the University of California, Berkeley.

Classifiers with a large number of adjustable parameters, and therefore large capacity, likely learn the training set without error, but exhibit poor generalization.
Conversely, a classifier with insufficient capacity might not be able to learn the task at all. The goal of capacity tuning methods is to find the optimal capacity which minimizes the expected generalization error for a given amount of training data. Capacity tuning techniques include: starting with a low capacity system and allocating more parameters as needed, or starting with a large capacity system and eliminating unnecessary adjustable parameters with regularization. The first method requires searching in the space of classifier structures, which possibly contains many local minima. The second method is computationally inefficient since it does not avoid adjusting a large number of parameters, although the effective number of parameters may be small.

With the method proposed in this paper, the capacity of some very large VC-dimension classifiers is adjusted automatically in the process of training. The problem is formulated as a quadratic programming problem which has a single global minimum. Only the effective parameters get adjusted during training, which ensures computational efficiency.

1.1 MAXIMUM MARGIN AND SUPPORTING PATTERNS

Here is a familiar problem: given a limited number of training examples from two classes A and B, find the linear decision boundary which yields the best generalization performance. When the training data is scarce, there usually exist many errorless separations (figure 1.1). This is especially true when the dimension of input space (i.e. the number of tunable parameters) is large compared to the number of training examples. The question arises: which of these solutions should one choose? The one solution that achieves the largest possible margin between the decision boundary and the training patterns (figure 1.2) is optimal in the "minimax" sense [2] (see section 2.2). This choice is intuitively justifiable: a new example from class A is likely to fall within or near the convex envelope of the examples of class A (and similarly for class B).
By providing the largest possible "safety" margin, we minimize the chances that examples from class A and B cross the border to the wrong side.

An important property of the maximum margin solution is that it is only dependent upon a restricted number of training examples, called supporting patterns (or informative patterns). These are those examples which lie on the margin and therefore are closest to the decision boundary (figure 1.2). The number m of linearly independent supporting patterns satisfies the inequality:

m ≤ min(N + 1, p).   (1)

In this inequality, (N + 1) is the number of adjustable parameters and equals the Vapnik-Chervonenkis dimension (VC-dimension) [2], and p is the number of training examples. In reference [3], we show that the generalization error is bounded by m/p and therefore m is a measure of complexity of the learning problem. Because m is bounded by p and is generally a lot smaller than p, the maximum margin solution obtains good generalization even when the problem is grossly underdetermined, i.e. the number of training patterns p is much smaller than the number of adjustable parameters, N + 1. In section 2.3 we show that the existence of supporting patterns is advantageous for computational reasons as well.

Figure 1: Linear separations. (1) When many linear decision rules separate the training set, which one to choose? (2) The maximum margin solution. The distance to the decision boundary of the closest training patterns is maximized. The grey shading indicates the margin area in which no pattern falls. The supporting patterns (in white) lie on the margin.
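The definitions above can be illustrated on a tiny hand-built example (the data and separator below are assumptions for illustration, not from the paper). For these four points the closest opposite-class pair is (1, 0) and (−1, 0), so the maximum-margin hyperplane is x₁ = 0 with margin 1, and those two points are the supporting patterns:

```python
import numpy as np

# Hand-built toy set: class A at (1,0), (2,1); class B at (-1,0), (-2,-1).
X = np.array([[1.0, 0.0], [2.0, 1.0], [-1.0, 0.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

# Maximum-margin boundary for this configuration, found by inspection:
# D(x) = w·x + b with w = (1, 0), b = 0.
w, b = np.array([1.0, 0.0]), 0.0

# Geometric margin of each pattern: y_i (w·x_i + b) / ||w||.
margins = y * (X @ w + b) / np.linalg.norm(w)
margin = margins.min()

# Supporting patterns are exactly the patterns lying on the margin.
support = np.flatnonzero(np.isclose(margins, margin))
print(margin, support.tolist())  # 1.0 [0, 2]

# Consistent with inequality (1): m <= min(N + 1, p) = min(3, 4) = 3.
assert len(support) <= min(X.shape[1] + 1, len(X))
```

Here m = 2 of the p = 4 patterns determine the solution; the other two could move anywhere outside the margin area without changing the boundary.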
1.2 NON-LINEAR CLASSIFIERS

Although algorithms that maximize the margin between classes have been known for many years [4, 2], they have so far, for computational reasons, been limited to the special case of finding linear separations and consequently to relatively simple classification problems. In this paper, we present an extension of one of these maximum margin training algorithms, the "Generalized Portrait Method" (GP) [2], to various non-linear classifiers, including Perceptrons, polynomial classifiers (sigma-pi unit networks) and kernel classifiers (Radial Basis Functions) (figure 2). The new algorithm efficiently trains very high VC-dimension classifiers with a huge number of tunable parameters. Despite the large number of free parameters, the solution exhibits good generalization due to the inherent regularization of the maximum margin cost function.

As an example, let us consider the case of a second order polynomial classifier. Its decision surface is described by the following equation:

Σ_i w_i x_i + Σ_{i,j} w_ij x_i x_j + b = 0.   (2)

The w_i, w_ij and b are adjustable parameters, and the x_i are the coordinates of a pattern x. If n is the dimension of input pattern x, the number of adjustable parameters of the second order polynomial classifier is [n(n+1)/2] + 1. In general, the number of adjustable parameters of a q-th order polynomial is of the order of N ≈ n^q.

The GP algorithm has been tested on the problem of handwritten digit recognition. The input patterns consist of 16 x 16 pixel images (n = 256). The results achieved with polynomial classifiers of order q are summarized in table 1. Also listed is the number of adjustable parameters, N.

Table 1: Handwritten digit recognition experiments.

  order q            1       2        3        4        5
  parameters N       256     3·10^4   8·10^7   4·10^9   1·10^12
  error (test set)   10.5%   5.8%     5.2%     4.9%     5.2%

The first database (DB1) consists of 1200 clean images recorded from ten subjects. Half of this data is used for training, and the other half is used to evaluate the generalization performance. The other database (DB2) consists of 7300 images for training and 2000 for testing and has been recorded from actual mail pieces. We use ten polynomial classification functions of order q, separating one class against all others. We list the number N of adjustable parameters, the error rates on the test set and the average number <m> of supporting patterns per separating hypersurface. The results compare favorably to neural network classifiers which minimize the mean squared error with backpropagation. For the one layer network (linear classifier), the error on the test set is 12.7% on DB1 and larger than 25% on DB2. The lowest error rate for DB2, 4.9%, obtained with a fourth order polynomial, is comparable to the 5.1% error obtained with a multi-layer neural network with a sophisticated architecture trained and tested on the same data [6].

The quantity N increases rapidly with q and quickly reaches a level that is computationally intractable for algorithms that explicitly compute each parameter [5]. Moreover, as N increases, the learning problem becomes grossly underdetermined: the number of training patterns (p = 600 for DB1 and p = 7300 for DB2) becomes very small compared to N. Nevertheless, good generalization is achieved as shown by the experimental results listed in the table. This is a consequence of the inherent regularization of the algorithm.

An important concern is the sensitivity of the maximum margin solution to the presence of outliers in the training data. It is indeed important to remove undesired outliers (such as meaningless or mislabeled patterns) to get the best generalization performance. Conversely, "good" outliers (such as examples of rare styles) must be kept.
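The parameter counts above are easy to check; a short sketch (not from the paper) that evaluates the [n(n+1)/2] + 1 formula for n = 256 and the N ≈ n^q growth quoted in the text:

```python
def second_order_params(n: int) -> int:
    """Adjustable parameters of a second order polynomial classifier,
    using the [n(n+1)/2] + 1 count quoted in the text."""
    return n * (n + 1) // 2 + 1

# For 16 x 16 pixel images, n = 256:
n = 256
print(second_order_params(n))   # 32897, i.e. roughly the 3.10^4 of table 1

# Order-of-magnitude growth N ~ n^q for higher order classifiers:
for q in range(1, 6):
    print(q, n ** q)            # n^5 = 2^40, roughly the 1.10^12 of table 1
```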
Cleaning techniques have been developed, based on the re-examination by a human supervisor of those supporting patterns which result in the largest increase of the margin when removed and which are therefore the most likely candidates for outliers [3]. In our experiments on DB2 with linear classifiers, the error rate on the test set dropped from 15.2% to 10.5% after cleaning the training data (not the test data).

2 ALGORITHM DESIGN

The properties of the GP algorithm arise from merging two separate ideas: training in dual space and minimizing the maximum loss. For large VC-dimension classifiers (N >> p), the first idea reduces the number of effective parameters to be actually computed from N to p. The second idea reduces it from p to m.

2.1 DUALITY

We seek a decision function for pattern vectors x of dimension n belonging to either of two classes A and B. The input to the training algorithm is a set of p examples x_k with labels y_k:

(x_1, y_1), (x_2, y_2), ..., (x_p, y_p),  where y_k = 1 if x_k ∈ class A and y_k = -1 if x_k ∈ class B.   (3)

From these training examples the algorithm finds the parameters of the decision function D(x) during a learning phase. After training, the classification of unknown patterns is predicted according to the following rule:

x ∈ A if D(x) > 0;  x ∈ B otherwise.   (4)

We limit ourselves to classifiers that are linear in their parameters, but not restricted to linear dependences on their input components, such as Perceptrons and kernel-based classifiers. Perceptrons [5] have a decision function defined as:

D(x) = w · φ(x) + b = Σ_{i=1}^{N} w_i φ_i(x) + b,   (5)

where the φ_i are predefined functions of x, and the w_i and b are the adjustable parameters of the decision function. This definition encompasses that of polynomial classifiers. In that particular case, the φ_i are products of components of vector x (see equation 2).
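For concreteness, here is a sketch of the second order decision function (2) and the classification rule (4), with made-up toy weights (none of these values come from the paper):

```python
import numpy as np

def poly2_decision(x, w, W, b):
    """Second order polynomial decision function, equation (2):
    D(x) = sum_i w_i x_i + sum_{i,j} w_ij x_i x_j + b."""
    return w @ x + x @ W @ x + b

def classify(D):
    """Classification rule, equation (4): class A if D(x) > 0, else B."""
    return "A" if D > 0 else "B"

# Made-up parameters for a 2-dimensional toy problem.
w = np.array([1.0, -1.0])
W = np.array([[0.5, 0.0],
              [0.0, 0.5]])
b = -1.0
x = np.array([2.0, 1.0])

D = poly2_decision(x, w, W, b)   # 1*2 - 1*1 + 0.5*4 + 0.5*1 - 1 = 2.5
print(classify(D))               # A
```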
Kernel-based classifiers have a decision function defined as:

D(x) = Σ_{k=1}^{p} α_k K(x_k, x) + b.   (6)

The coefficients α_k and the bias b are the parameters to be adjusted and the x_k are the training patterns. The function K is a predefined kernel, for example a potential function [7] or any Radial Basis Function (see for instance [8]).

Perceptrons and RBFs are often considered two very distinct approaches to classification. However, for a number of training algorithms, the resulting decision function can be cast either in the form of equation (5) or (6). This has been pointed out in the literature for the Perceptron and potential function algorithms [7], for polynomial classifiers trained with the pseudo-inverse [9] and, more recently, for regularization algorithms and RBFs [8]. In those cases, Perceptrons and RBFs constitute dual representations of the same decision function.

The duality principle can be understood simply in the case of Hebb's learning rule. The weight vector of a linear Perceptron (φ_i(x) = x_i), trained with Hebb's rule, is simply the average of all training patterns x_k, multiplied by their class membership polarity y_k:

w = (1/p) Σ_{k=1}^{p} y_k x_k.

Substituting this solution into equation (5), we obtain the dual representation

D(x) = w · x + b = (1/p) Σ_{k=1}^{p} y_k x_k · x + b.

The corresponding kernel classifier has kernel K(x, x') = x · x' and the dual parameters α_k are equal to (1/p) y_k. In general, a training algorithm for Perceptron classifiers admits a dual kernel representation if its solution is a linear combination of the training patterns in φ-space:

w = Σ_{k=1}^{p} α_k φ(x_k).   (7)

Reciprocally, a kernel classifier admits a dual Perceptron representation if the kernel function possesses a finite (or infinite) expansion of the form:

K(x, x') = Σ_i φ_i(x) φ_i(x').   (8)

Such is the case, for instance, for some symmetric kernels [10]. Examples of kernels K(x, x') that we have been using include the following:
K(x, x') = (x·x' + 1)^q                      (polynomial of order q),
K(x, x') = tanh(γ x·x')                      (neural units),
K(x, x') = exp(γ x·x') - 1                   (exponential),
K(x, x') = exp(-||x - x'||^2 / γ)            (gaussian RBF),
K(x, x') = exp(-||x - x'|| / γ)              (exponential RBF),
K(x, x') = (x·x' + 1)^q exp(-||x - x'||/γ)   (mixed polynomial & RBF).   (9)

These kernels have positive parameters (the integer q or the real number γ) which can be determined with a Structural Risk Minimization or Cross-Validation procedure (see for instance [2]). More elaborate kernels incorporating known invariances of the data could also be used.

The GP algorithm computes the maximum margin solution in the kernel representation. This is crucial for making the computation tractable when training very large VC-dimension classifiers. Training a classifier in the kernel representation is computationally advantageous when the dimension N of the vectors w (or the VC-dimension N + 1) is large compared to the number of parameters α_k, which equals the number of training patterns p. This is always true if the kernel function possesses an infinite expansion (8). The experimental results listed in table 1 indicate that this argument holds in practice even for low order polynomial expansions when the dimension n of input space is sufficiently large.

2.2 MINIMIZING THE MAXIMUM LOSS

The margin, defined as the Euclidean distance between the decision boundary and the closest training patterns in φ-space, can be computed as

M = min_k y_k D(x_k) / ||w||.   (10)

The goal of the maximum margin training algorithm is to find the decision function D(x) which maximizes M, that is, the solution of the optimization problem

max_w min_k y_k D(x_k) / ||w||.   (11)

The solution w of this problem depends only on those patterns which are on the margin, i.e. the ones that are closest to the decision boundary, called supporting patterns.
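The duality relations of section 2.1 are easy to verify numerically. A sketch with random toy data (not an experiment from the paper): first the Hebb-rule primal/dual equivalence, then an explicit feature map realizing expansion (8) for the polynomial kernel with q = 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hebb-rule duality: w = (1/p) sum_k y_k x_k vs. the kernel form ---
p, n = 20, 5
X = rng.normal(size=(p, n))              # training patterns x_k
y = rng.choice([-1.0, 1.0], size=p)      # class polarities y_k
w = (y @ X) / p                          # primal Perceptron weights
x = rng.normal(size=n)                   # a test pattern
dual = sum((y[k] / p) * (X[k] @ x) for k in range(p))
assert np.isclose(w @ x, dual)           # forms (5) and (6) agree

# --- Expansion (8) for K(x, x') = (x.x' + 1)^2 in two dimensions ---
def phi(u):
    """Explicit feature map with (x.x' + 1)^2 = phi(x) . phi(x')."""
    u1, u2 = u
    s = np.sqrt(2.0)
    return np.array([1.0, s * u1, s * u2, u1 ** 2, s * u1 * u2, u2 ** 2])

a = np.array([0.7, -1.3])
b = np.array([2.0, 0.5])
K = (a @ b + 1.0) ** 2
assert np.isclose(K, phi(a) @ phi(b))    # kernel = inner product of features
```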
It can be shown that w can indeed be represented as a linear combination of the supporting patterns in φ-space [4, 2, 3] (see section 2.3).

In the classical framework of loss minimization, problem (11) is equivalent to minimizing (over w) the maximum loss over the training examples. This "minimax" approach contrasts with training algorithms which minimize the average loss. For example, backpropagation minimizes the mean squared error (MSE), which is the average loss over the training examples. The benefit of minimax algorithms is that the solution is a function of only a restricted number of training patterns, namely the supporting patterns. This results in high computational efficiency in those cases when the number m of supporting patterns is small compared to both the total number of training patterns p and the dimension N of φ-space.

2.3 THE GENERALIZED PORTRAIT

The GP algorithm consists in formulating problem (11) in the dual α-space as the quadratic programming problem of maximizing the cost function

J(α, b) = Σ_{k=1}^{p} α_k (1 - b y_k) - (1/2) α · H · α,

under the constraints α_k > 0 [4, 2]. The p x p square matrix H has elements:

H_kl = y_k y_l K(x_k, x_l),

where K(x, x') is a kernel, such as the ones proposed in (9), which can be expanded as in (8). Examples are shown in figure 2. K(x, x') is not restricted to the dot product K(x, x') = x · x' as in the original formulation of the GP algorithm [2]. In order for a unique solution to exist, H must be positive definite.

The bias b can be either fixed or optimized together with the parameters α_k. The latter case introduces another constraint: Σ_k y_k α_k = 0 [4].

The quadratic programming problem thus defined can be solved efficiently by standard numerical methods [11]. Numerical computation can be further reduced by processing iteratively small chunks of data [2].
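The dual construction can be sketched end to end on a toy problem. The code below is not the GP implementation: it maximizes J(α) = Σ_k α_k - ½ α·H·α with the bias fixed (absorbed into the kernel by the +1 term) using plain projected gradient ascent under α_k ≥ 0, which suffices for illustration:

```python
import numpy as np

# Toy linearly separable problem
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],          # class A
              [-2.0, -2.0], [-3.0, -2.0], [-2.0, -3.0]])   # class B
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

K = X @ X.T + 1.0        # kernel x.x' + 1 (the +1 absorbs the bias)
H = np.outer(y, y) * K   # H_kl = y_k y_l K(x_k, x_l)

alpha = np.zeros(len(y))
lr = 0.01
for _ in range(5000):    # projected gradient ascent on J(alpha)
    alpha += lr * (1.0 - H @ alpha)
    alpha = np.maximum(alpha, 0.0)   # enforce alpha_k >= 0

def D(x):
    """Decision function in the kernel representation."""
    return sum(y[k] * alpha[k] * (X[k] @ x + 1.0) for k in range(len(y)))

# Every training pattern ends up on the correct side of the boundary.
margins = np.array([y[k] * D(X[k]) for k in range(len(y))])
assert np.all(margins > 0)
```

Only the patterns with non-zero α_k (the supporting patterns) contribute to D(x), which is what makes classification cost depend on m rather than p.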
The computational time is linear in the dimension n of x-space (not the dimension N of φ-space) and in the number p of training examples, and polynomial in the number m <= min(N + 1, p) of supporting patterns. It can be theoretically proven that it is a polynomial in m of order lower than 10, but experimentally an order 2 was observed.

[Figure 2: Non-linear separations. Decision boundaries obtained by maximizing the margin in φ-space (see text). The grey shading indicates the margin area projected back to x-space. The supporting patterns (white) lie on the margin. (1) Polynomial classifier of order two (sigma-pi unit network), with kernel K(x, x') = (x·x' + 1)^2. (2) Kernel classifier (RBF) with kernel K(x, x') = exp(-||x - x'||/10).]

Only the supporting patterns appear in the solution with non-zero weight α_k*:

D(x) = Σ_k y_k α_k* K(x_k, x) + b.   (12)

Substituting (8) in D(x), we obtain:

w = Σ_k y_k α_k* φ(x_k).   (13)

Using the kernel representation, with a factorized kernel (such as those in 9), the classification time is linear in n (not N) and in m (not p).

3 CONCLUSIONS

We presented an algorithm to train polynomial classifiers and Radial Basis Functions in high dimensional spaces which has remarkable computational and generalization performance. The algorithm seeks the solution with the largest possible margin on both sides of the decision boundary. The properties of the algorithm arise from the fact that the solution is a function of only a small number of supporting patterns, namely those training examples that are closest to the decision boundary. The generalization error of the maximum margin classifier is bounded by the ratio of the number of linearly independent supporting patterns and the number of training examples.
This bound is tighter than a bound based on the VC-dimension of the classifier family. For further improvement of the generalization error, outliers corresponding to supporting patterns with large α_k can be eliminated automatically or with the assistance of a supervisor. This feature suggests other interesting applications of the maximum margin algorithm for database cleaning.

Acknowledgements

We wish to thank our colleagues at UC Berkeley and AT&T Bell Laboratories for many suggestions and stimulating discussions. Comments by L. Bottou, C. Cortes, S. Sanders, S. Solla and A. Zakhor are gratefully acknowledged. We are especially indebted to R. Baldick and D. Hochbaum for investigating the polynomial convergence property, S. Hein for providing the code for constrained nonlinear optimization, and D. Haussler and M. Warmuth for help and advice regarding performance bounds.

References

[1] I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla. Structural risk minimization for character recognition. In J. Moody et al., editors, NIPS 4, San Mateo, CA, 1992. Morgan Kaufmann.
[2] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, New York, 1982.
[3] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, July 1992. ACM.
[4] P. F. Lambert. Designing pattern recognizers with extremal paradigm information. In S. Watanabe, editor, Methodologies of Pattern Recognition, pages 359-391. Academic Press, 1969.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley and Sons, 1973.
[6] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zipcode recognition. Neural Computation, 1(4):541-551, 1989.
[7] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer.
Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[8] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978-982, February 1990.
[9] T. Poggio. On optimal nonlinear associative recall. Biol. Cybern., 19:201, 1975.
[10] G. F. Roach. Green's Functions. Cambridge University Press, Cambridge, second edition, 1982.
[11] D. Luenberger. Linear and Non-linear Programming. Addison-Wesley, 1984.
Interposing an ontogenic model between Genetic Algorithms and Neural Networks

Richard K. Belew
rik@cs.ucsd.edu
Cognitive Computer Science Research Group
Computer Science & Engr. Dept. (0014)
University of California - San Diego
La Jolla, CA 92093

Abstract

The relationship between learning, development and evolution in Nature is taken seriously, to suggest a model of the developmental process whereby the genotypes manipulated by the Genetic Algorithm (GA) might be expressed to form phenotypic neural networks (NNets) that then go on to learn. ONTOL is a grammar for generating polynomial NNets for time-series prediction. Genomes correspond to an ordered sequence of ONTOL productions and define a grammar that is expressed to generate a NNet. The NNet's weights are then modified by learning, and the individual's prediction error is used to determine GA fitness. A new gene doubling operator appears critical to the formation of new genetic alternatives in the preliminary but encouraging results presented.

1 Introduction

Two natural phenomena, the learning done by individuals' nervous systems and the evolution done by populations of individuals, have served as the basis of distinct classes of adaptive algorithms: neural networks (NNets) and Genetic Algorithms (GAs), respectively. Interactions between learning and evolution in Nature suggest that combining NNet and GA algorithmic techniques might also yield interesting hybrid algorithms.

[Figure 1: Polynomial networks. The space of polynomial networks is parameterized by dimension (how much history is used) and degree.]

Taking the analogy to learning and evolution seriously, we propose that the missing feature is the developmental process whereby the genotypes manipulated by the GA are expressed to form phenotypic NNets that then go on to learn. Previous attempts to use the GA to search for good NNet topologies have foundered exactly because they have assumed an overly direct genotype-to-phenotype correspondence.
This research is therefore consistent with other NNet research into the physiology of neural development [3], as well as with research into "constructive" methods for changing network topologies adaptively during the training process [4]. Additional motivation derives from the growing body of neuroscience demonstrating the importance of developmental processes as the shapers of effective learning networks. Cognitively, the resolution of false dichotomies like "nature/nurture" and "nativist/empiricist" also depends on a richer language for describing the way genetically determined characteristics and within-lifetime changes by individuals can interact.

Because GAs and NNets are each complicated technologies in their own right, and because the focus of the current research is a model of development that can span between them, three major simplifications have been imposed for the preliminary research reported here. First, in order to stay close to the mathematical theory of functional approximation, we restrict the form of our NNets to what can be called "polynomial networks" (cf. [7]). That is, we will consider networks with a first layer of linear units (i.e., terms in the polynomial that are simply weighted inputs x_i), a second layer with units that form products of first-layer units, a third layer with units that form products of second-layer units, etc.; see Figure 1, and below for an example. As depicted in Figure 1, the space of polynomial networks can be viewed as two-dimensional, parameterized by dimension (i.e., how much history of the time series is used) and degree. There remains the problem of finding the best parameter values for this particular polynomial form. Much of classical optimization theory and more recent NNet research is concerned with various methods for performing this task.
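The size of this two-dimensional space grows quickly. A back-of-the-envelope sketch (the binomial count below is a standard combinatorial fact, not a formula from the paper): the number of distinct monomials of degree at most m over H lagged inputs is C(H + m, m).

```python
from math import comb

def num_terms(H: int, m: int) -> int:
    """Distinct monomials of degree <= m in H variables, including the
    constant term: C(H + m, m)."""
    return comb(H + m, m)

print(num_terms(10, 5))   # 3003 candidate terms, even for modest H and m
```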
Previous research has demonstrated that the global sampling behavior of the GA works very effectively with any gradient, local search technique [2]. The second major simplification, then, is that for the time being we use only the most simple-minded gradient method: first-order, fixed-step gradient descent. Analytically, this is the most tractable, and the general algorithm design can readily replace this with any other local search technique.

The final simplification is that we focus on one of the most parsimonious of problems, time series prediction: the GA is used to evolve NNets that are good at predicting X_{t+1} given access to an unbounded history X_t, X_{t-1}, X_{t-2}, .... Polynomial approximations of an arbitrary time series can vary in two dimensions: the extent to which they rely on this history (e.g., how far back in time), and their degree. The Stone-Weierstrass Approximation Theorem guarantees that, within this two-dimensional space, there exists some polynomial that will match the desired temporal sequence to arbitrary precision. The problem, of course, is that over a history H and allowing degree-m terms there exist O(H^m) terms, far too many to search effectively. From the perspective of function approximation, then, this work corresponds to a particular heuristic for searching for the correct polynomial form, the parameters of which will be tuned with a gradient technique.

2 Expression of the ONTOL grammar

Every multi-cellular organism has the problem of using a single genetic description contained in the first germ cell as the specification for all of its various cell types. The genome therefore appears to contain a set of developmental instructions, subsets of which become "relevant" to the particular context in which each developing cell finds itself.
If we imagine that each cell type is a unique symbol in some alphabet, and that the mature organism is a string of symbols, it becomes very natural to model the developmental process as a (context-sensitive) grammar generating this string [6, 5]. The initial germ cell becomes the start symbol. A series of production rules specify the expansion (mitosis) of this non-terminal (cell) into two other symbols that then develop according to the same set of genetically-determined rules, until all cells are in a mature, terminal state.

ONTOL is a grammar for generating cells in the two-dimensional space of polynomial networks. The left hand side (LHS) of productions in this grammar define conditions on the cells' internal Clock state and on the state of its eight Moore neighbors. The RHS of the production defines one of five cell-state update actions that are performed if the LHS condition is satisfied: a cell can mitosize either left or down (MLeft, MDown), meaning that this adjacent cell now becomes filled with an identical copy; Die (i.e., disappear entirely); Tick (simply decrement its internal Clock state); or Terminate (cease development). Only terminating cells form synaptic connections, and only to adjacent neighbors.

The developmental process is begun by placing a single "gamete" cell at the origin of the 2d polyspace, with its Clock state initialized to a maximal value MaxClock = 4; this state is decremented every time a gene is fired. If and when a gene causes this cell to undergo mitosis, a new cell, either to the left or below the original cell, is created. Critically, the same set of genetic instructions contained in the original gametic cell are used to control transitions of all its progeny cells (much like a
cellular automaton's transition table), even though the differing contexts of each cell are likely to cause different genes to be applied in different cells. Figure 2 shows a trace of this developmental process: each snap-shot shows the Clock states of all active (non-terminated) cells, the coordinates of the cell being expressed, and the gene used to control its expression.

[Figure 2: Logistic genome, engineered. A trace of the developmental process: successive snap-shots of cell Clock states, annotated with the (Deg, Dim) coordinates of the cell being expressed and the gene applied at each step.]

3 Experimental design

Each generation begins by developing and evaluating each genotype in the population. First, each genome in the population is expressed to form an executable Lisp lambda expression computing a polynomial and a corresponding set of initial weights for each of its terms. If this expression can be performed successfully and the individual is viable (i.e., its genome can be interpreted to build a well-formed network), the individual is exposed to NTrain sequential instances of the time series. Fitness is then defined to be its cumulative error on the next NTest time steps.

After the entire population has been evaluated, the next generation is formed according to a relatively conventional genetic algorithm: more successful individuals are differentially reproduced and genetic operators are applied to these to experiment with novel, but similar, alternatives. Each genome is cloned zero, one or more times using a proportional selection algorithm that guarantees the expected number of offspring is proportional to an individual's relative fitness. Variation is introduced into the population by mutation and recombination genetic operators that explore new genes and genomic combinations. Four types of mutation were applied, with the probability of a mutation proportional to genome length. First, some random portion of an extant gene might be randomly altered, e.g., changing an initial weight, adding or deleting a constraint on a condition, changing the gene's action.
Because a gene's order in the genome can affect its probability of being expressed, a second form of mutation permutes the order of the genes on the genome. A third class of mutation removes genes from the genome, always "trimming" them from the end. Combined with the expression mechanism's bias towards the head of the genomic list, this trimming operation creates a pressure towards putting genes critical to early ontogeny near the head. The final and critical form of mutation randomly selects a gene to be doubled: a duplicate copy of the gene is constructed and inserted at a randomly selected position in the genome. After all mutations have been performed, cross-over is performed between pairs of individuals.

[Figure 3: Population Minimum and Average Fitness over the first 800 generations.]

4 Experiments

To demonstrate, consider the problem of predicting a particularly difficult time series, the chaotic logistic map: X_t = 4.0 X_{t-1} - 4.0 X_{t-1}^2. The example of Figure 2 showed an ONTOL genome engineered to produce the desired logistic polynomial. This "genetically engineered" solution is merely evidence that a genetic solution exists that can be interpreted to form the desired phenotypic form; the real test is of course whether the GA can find it or something similar.

Early generations are not encouraging. Figure 3 shows the minimum (i.e., best) prediction error and population average error for the first 800 generations of a typical simulation. Initial progress is rapid because in the initial, randomly constructed population, fully half of the individuals are not even viable.
These are strongly selected against, of course, and within the first two or three generations at least 95% of each generation is viable. For the next several hundred generations, however, all of ONTOL's developmental machinery appears for naught, as the dominant phenotypic individuals are the most "simplistic" linear, first-degree approximators of the form w1 x1 + w0. Even here, however, the GA is able to work in conjunction with the gradient learning process to achieve Baldwin-like effects, optimizing w0 and w1 [1]. The simulation reaches a "simplistic plateau," then, as it converges on a population composed of the best predictors the simplistic linear, first-degree network topology permits for this time series.

[Figure 4: Complex polynomials. The number of nonlinear polynomials in the population over the first 800 generations.]

In the background, however, genetic operators are continuing to explore a wide variety of genotypic forms that all have the property of generating roughly the same simplistic phenotypes. Figure 4 shows that there are significant numbers of "complex" polynomials¹ in early generations, and some of these have much higher than average fitness.² On average, however, genes leading to complex phenotypes lead to poorer approximations than the simplistic ones, and are quickly culled.

¹ I.e., either nonlinear terms or higher dimensional dependence on the past.
² Note the good solutions in the first 50 generations, as well as subsequent dips during the simplistic plateau.
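The target relation for these experiments, X_t = 4 X_{t-1} - 4 X_{t-1}^2, can be recovered directly from generated data. A sketch (not Belew's code, and using an exact least-squares solve rather than the paper's fixed-step gradient descent) fitting a polynomial with terms X_{t-1} and X_{t-1}^2:

```python
import numpy as np

# Generate the chaotic logistic time series
T = 200
x = np.empty(T)
x[0] = 0.3
for t in range(1, T):
    x[t] = 4.0 * x[t - 1] - 4.0 * x[t - 1] ** 2

# Fit x_t = w1 * x_{t-1} + w2 * x_{t-1}^2 by least squares
A = np.column_stack([x[:-1], x[:-1] ** 2])
w, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
print(w)   # recovers the engineered coefficients (4, -4)
```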
[Figure 5: Genome length, under selection and under neutral drift, over the first 800 generations.]

A critical aspect of the redundancy introduced by gene doubling is that old genetic material is freed to mutate into new forms without threatening the phenotype's viability. When compared to a population of mediocre, simplistic networks, any complex networks able to provide more accurate predictions have much higher fitness, and eventually are able to take over the population. Around generation 400, then, Figure 3 shows the fitness dropping from the simplistic plateau, and Figure 4 shows the number of complex polynomials increasing. Many of these individuals' genomes indeed encode grammars that form polynomials of the desired functional form.

A surprising feature of these simulations is that while the genes leading to complex phenotypes are present from the beginning and continue to be explored during the simplistic plateau, it takes many generations before these genes are successfully composed into robust, consistently viable genotypes. How do the complex genotypes discovered in later generations differ from those in the initial population? One piece of the answer is revealed in Figure 5: later genomes are much longer. All 100 individuals in the initial population have exactly five genes, and so the initial "gene pool" size is 500. In the experiments just described, this number grows asymptotically to approximately 6000 total genes (i.e., 60 per individual, on average) during the simplistic plateau, and then explodes a second time to more than 10,000 as the population converts to complex polynomials. It appears that gene duplication creates a very constructive form of redundancy: multiple copies of critical genes help the genotype maintain the more elaborate development programs required to form complex phenotypes. Micro-analysis of the most successful individuals in later generations supports this view.
While many parts of their genomes appear inconsequential (for example, relative to the engineered genome of Figure 2), both the MDown gene and the two-element Terminate genes, critical to forming polynomials that are "morphologically isomorphic" with the correct solution, are consistently present.

This hypothesis is also supported by results from a second experiment, also plotted in Figure 5. Recall that the increase in genome size caused by gene doubling is offset by a trimming mutation that periodically shortens a genome. The curve labelled "Neutral" shows the results of these opposing operations when the next generation is formed randomly, rather than being selected for better prediction. Under neutral selection, genome size grows slightly from its initial size, but gene doubling and genome trimming then quickly reach equilibrium. When we select for better predictors, however, longer genomes are clearly preferred, at least up to a point. The apparent asymptote accompanying the simplistic plateau suggests that if these simulations were extended, the length of complex genotypes would also stabilize.

Acknowledgements

I gratefully acknowledge the warm and stimulating research environments provided by Domenico Parisi and colleagues at the Psychological Institute, CNR, Rome, Italy, and Jean-Arcady Meyer and colleagues in the Groupe de BioInformatique, Ecole Normale Superieure, Paris, France.

References

[1] R. K. Belew. Evolution, learning and culture: computational metaphors for adaptive search. Complex Systems, 4(1):11-49, 1990.
[2] R. K. Belew, J. McInerney, and N. N. Schraudolph. Evolving networks: Using the Genetic Algorithm with connectionist learning. In Proc. Second Artificial Life Conference, pages 511-547, New York, 1991. Addison-Wesley.
[3] J. D. Cowan and A. E. Friedman. Development and regeneration of eye-brain maps: A computational model. In Advances in Neural Info. Proc. Systems 2, pages 92-99. Morgan Kaufmann, 1990.
[4] S. E. Fahlman and C. Lebiere.
The Cascade-Correlation learning architecture. In D. S. Touretzky, editor, Advances in Neural Info. Proc. Systems 2, pages 524-532. Morgan Kaufmann, 1990.
[5] H. Kitano. Designing neural networks using genetic algorithms with graph generation system. Complex Systems, 4(4), 1990.
[6] A. Lindenmayer and G. Rozenberg. Automata, Languages, Development. North-Holland, Amsterdam, 1976.
[7] T. D. Sanger, R. S. Sutton, and C. J. Matheus. Iterative construction of sparse polynomials. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Info. Proc. Systems 4, pages 1064-1071. Morgan Kaufmann, 1992.
Bayesian Learning via Stochastic Dynamics

Radford M. Neal
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 1A4

Abstract

The attempt to find a single "optimal" weight vector in conventional network training can lead to overfitting and poor generalization. Bayesian methods avoid this, without the need for a validation set, by averaging the outputs of many networks with weights sampled from the posterior distribution given the training data. This sample can be obtained by simulating a stochastic dynamical system that has the posterior as its stationary distribution.

1 CONVENTIONAL AND BAYESIAN LEARNING

I view neural networks as probabilistic models, and learning as statistical inference. Conventional network learning finds a single "optimal" set of network parameter values, corresponding to maximum likelihood or maximum penalized likelihood inference. Bayesian inference instead integrates the predictions of the network over all possible values of the network parameters, weighting each parameter set by its posterior probability in light of the training data.

1.1 NEURAL NETWORKS AS PROBABILISTIC MODELS

Consider a network taking a vector of real-valued inputs, x, and producing a vector of real-valued outputs, ŷ, perhaps computed using hidden units. Such a network architecture corresponds to a function, f, with ŷ = f(x, w), where w is a vector of connection weights. If we assume the observed outputs, y, are equal to ŷ plus Gaussian noise of standard deviation σ, the network defines the conditional probability for an observed output vector given an input vector as follows:

P(y | x, σ) ∝ exp(-|y - f(x, w)|^2 / 2σ^2)   (1)

The probability of the outputs in a training set (x_1, y_1), ..., (x_n, y_n) given this fixed noise level is therefore

P(y_1, ..., y_n | x_1, ..., x_n, σ) ∝ exp(-Σ_c |y_c - f(x_c, w)|^2 / 2σ^2)   (2)

Often σ is unknown.
A Bayesian approach to handling this is to give σ a vague prior distribution and then integrate it away, giving the following probability for the training set (see (Buntine and Weigend, 1991) or (Neal, 1992) for details):

P(y1, ..., yn | x1, ..., xn) ∝ (s0 + Σ_c |y_c − f(x_c, w)|²)^(−(m0+n)/2)   (3)

where s0 and m0 are parameters of the prior for σ.

1.2 CONVENTIONAL LEARNING

Conventional backpropagation learning tries to find the weight vector that assigns the highest probability to the training data, or equivalently, that minimizes minus the log probability of the training data. When σ is assumed known, we can use (2) to obtain the following objective function to minimize:

M(w) = Σ_c |y_c − f(x_c, w)|² / 2σ²   (4)

When σ is unknown, we can instead minimize the following, derived from (3):

M(w) = ((m0+n)/2) log(s0 + Σ_c |y_c − f(x_c, w)|²)   (5)

Conventional learning often leads to the network overfitting the training data, modeling the noise rather than the true regularities. This can be alleviated by stopping learning when the performance of the network on a separate validation set begins to worsen rather than improve. Another way to avoid overfitting is to include a weight decay term in the objective function, as follows:

M'(w) = λ|w|² + M(w)   (6)

Here, the data fit term, M(w), may come from either (4) or (5). We must somehow find an appropriate value for λ, perhaps, again, using a separate validation set.
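The two data-fit terms and the weight decay objective can be sketched as follows. Note that the exponent of (3) is reconstructed here as −(m0 + n)/2, and all names are illustrative; the exact form should be checked against Buntine and Weigend (1991):

```python
import numpy as np

def M_known_sigma(residuals, sigma):
    # Eq. (4): minus log likelihood when sigma is known, up to a constant
    return np.sum(residuals ** 2) / (2.0 * sigma ** 2)

def M_unknown_sigma(residuals, s0=0.1, m0=0.1):
    # Eq. (5): minus log of (3), up to a constant, with sigma integrated away
    n = residuals.size
    return 0.5 * (m0 + n) * np.log(s0 + np.sum(residuals ** 2))

def M_decay(w, residuals, lam, sigma=None):
    # Eq. (6): weight decay plus either data-fit term
    fit = M_known_sigma(residuals, sigma) if sigma is not None \
        else M_unknown_sigma(residuals)
    return lam * np.sum(np.asarray(w) ** 2) + fit
```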
We might, for example, give each weight a Gaussian prior of standard deviation σ_w:

P(w) ∝ exp(−|w|² / 2σ_w²)   (7)

We can then obtain the posterior distribution over weight vectors given the training cases (x1, y1), ..., (xn, yn) using Bayes' Theorem:

P(w | (x1, y1), ..., (xn, yn)) ∝ P(w) P(y1, ..., yn | x1, ..., xn, w)   (8)

Based on the training data, the best prediction for the output vector in a test case with input vector x*, assuming squared-error loss, is

ŷ* = ∫ f(x*, w) P(w | (x1, y1), ..., (xn, yn)) dw   (9)

A full predictive distribution for the outputs in the test case can also be obtained, quantifying the uncertainty in the above prediction.

2 INTEGRATION BY MONTE CARLO METHODS

Integrals such as that of (9) are difficult to evaluate. Buntine and Weigend (1991) and MacKay (1992) approach this problem by approximating the posterior distribution by a Gaussian. Instead, I evaluate such integrals using Monte Carlo methods.

If we randomly select weight vectors, w0, ..., w_{N−1}, each distributed according to the posterior, the prediction for a test case can be found by approximating the integral of (9) by the average output of networks with these weights:

ŷ* ≈ (1/N) Σ_t f(x*, w_t)   (10)

This formula is valid even if the w_t are dependent, though a larger sample may then be needed to achieve a given error bound. Such a sample can be obtained by simulating an ergodic Markov chain that has the posterior as its stationary distribution. The early part of the chain, before the stationary distribution has been reached, is discarded. Subsequent vectors are used to estimate the integral.

2.1 FORMULATING THE PROBLEM IN TERMS OF ENERGY

Consider the general problem of obtaining a sample of (dependent) vectors, q_t, with probabilities given by P(q). For Bayesian network learning, q will be the weight vector, or other parameters from which the weights can be obtained, and the distribution of interest will be the posterior.
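The Monte Carlo estimate (10) is just an average of network outputs over sampled weight vectors; a minimal sketch, with names of my own:

```python
import numpy as np

def bayes_predict(f, x_star, weight_samples):
    """Eq. (10): approximate the predictive mean (9) by averaging the
    outputs of networks whose weights are drawn from the posterior."""
    return np.mean([f(x_star, w) for w in weight_samples], axis=0)
```

With dependent samples from a Markov chain the estimate remains valid, but more samples are needed for the same accuracy.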
It will be convenient to express this probability distribution in terms of a potential energy function, E(q), chosen so that

P(q) ∝ exp(−E(q))   (11)

A momentum vector, p, of the same dimensions as q, is also introduced, and defined to have a kinetic energy of ½|p|². The sum of the potential and kinetic energies is the Hamiltonian:

H(q, p) = E(q) + ½|p|²   (12)

From the Hamiltonian, we define a joint probability distribution over q and p (phase space) as follows:

P(q, p) ∝ exp(−H(q, p))   (13)

The marginal distribution for q in (13) is that of (11), from which we wish to sample. We can therefore proceed by sampling from this joint distribution for q and p, and then just ignoring the values obtained for p.

2.2 HAMILTONIAN DYNAMICS

Sampling from the distribution (13) can be split into two subproblems: first, to sample uniformly from a surface where H, and hence the probability, is constant, and second, to visit points of differing H with the correct probabilities. The solutions to these subproblems can then be interleaved to give an overall solution.

The first subproblem can be solved by simulating the Hamiltonian dynamics of the system, in which q and p evolve through a fictitious time, τ, according to the following equations:

dq/dτ = ∂H/∂p = p,   dp/dτ = −∂H/∂q = −∇E(q)   (14)

This dynamics leaves H constant, and preserves the volumes of regions of phase space. It therefore visits points on a surface of constant H with uniform probability. When simulating this dynamics, some discrete approximation must be used. The leapfrog method exactly maintains the preservation of phase space volume.
Given a size for the time step, ε, an iteration of the leapfrog method goes as follows:

p(τ + ε/2) = p(τ) − (ε/2)∇E(q(τ))
q(τ + ε) = q(τ) + ε p(τ + ε/2)   (15)
p(τ + ε) = p(τ + ε/2) − (ε/2)∇E(q(τ + ε))

2.3 THE STOCHASTIC DYNAMICS METHOD

To create a Markov chain that converges to the distribution of (13), we must interleave leapfrog iterations, which keep H (approximately) constant, with steps that can change H. It is convenient for the latter to affect only p, since it enters into H in a simple way. This general approach is due to Andersen (1980). I use stochastic steps of the following form to change H:

p' = αp + (1 − α²)^½ n   (16)

where 0 < α < 1, and n is a random vector with components picked independently from Gaussian distributions of mean zero and standard deviation one. One can show that these steps leave the distribution of (13) invariant. Alternating these stochastic steps with dynamical leapfrog steps will therefore sample values for q and p with close to the desired probabilities. In so far as the discretized dynamics does not keep H exactly constant, however, there will be some degree of bias, which will be eliminated only in the limit as ε goes to zero.

It is best to use a value of α close to one, as this reduces the random walk aspect of the dynamics. If the random term in (16) is omitted, the procedure is equivalent to ordinary batch mode backpropagation learning with momentum.

2.4 THE HYBRID MONTE CARLO METHOD

The bias introduced into the stochastic dynamics method by using an approximation to the dynamics is eliminated in the Hybrid Monte Carlo method of Duane, Kennedy, Pendleton, and Roweth (1987). This method is a variation on the algorithm of Metropolis, et al. (1953), which generates a Markov chain by considering randomly-selected changes to the state. A change is always accepted if it lowers the energy (H), or leaves it unchanged.
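The leapfrog update (15) and the stochastic momentum step (16) can be sketched generically (this is not the paper's code; grad_E stands for ∇E):

```python
import numpy as np

def leapfrog_step(q, p, grad_E, eps):
    """One iteration of (15): half-step the momentum, full-step the
    position, then half-step the momentum again. Exactly volume-preserving,
    but conserves H only approximately (the error vanishes as eps -> 0)."""
    p_half = p - 0.5 * eps * grad_E(q)
    q_new = q + eps * p_half
    p_new = p_half - 0.5 * eps * grad_E(q_new)
    return q_new, p_new

def momentum_step(p, alpha, rng):
    """Stochastic step (16): p' = alpha * p + sqrt(1 - alpha^2) * n.
    Since alpha^2 + (1 - alpha^2) = 1, this leaves the unit Gaussian
    momentum distribution of (13) invariant."""
    n = rng.standard_normal(np.shape(p))
    return alpha * p + np.sqrt(1.0 - alpha ** 2) * n
```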
If it increases the energy, it is accepted with probability exp(−ΔH), and is rejected otherwise, with the old state then being repeated. In the Hybrid Monte Carlo method, candidate changes are produced by picking a random value for p from its distribution given by (13) and then performing some predetermined number of leapfrog steps. If the leapfrog method were exact, H would be unchanged, and these changes would always be accepted. Since the method is actually only approximate, H sometimes increases, and changes are sometimes rejected, exactly cancelling the bias introduced by the approximation. Of course, if the errors are very large, the acceptance probability will be very low, and it will take a long time to reach and explore the stationary distribution. To avoid this, we need to choose a step size (ε) that is small enough.

3 RESULTS ON A TEST PROBLEM

I use the "robot arm" problem of MacKay (1992) for testing. The task is to learn the mapping from two real-valued inputs, x1 and x2, to two real-valued outputs, y1 and y2, given by

ŷ1 = 2.0 cos(x1) + 1.3 cos(x1 + x2)   (17)
ŷ2 = 2.0 sin(x1) + 1.3 sin(x1 + x2)   (18)

Gaussian noise of mean zero and standard deviation 0.05 is added to (ŷ1, ŷ2) to give the observed position, (y1, y2). The training and test sets each consist of 200 cases, with x1 picked randomly from the ranges [−1.932, −0.453] and [+0.453, +1.932], and x2 from the range [0.534, 3.142].

A network with 16 sigmoidal hidden units was used. The output units were linear. Like MacKay, I group weights into three categories: input to hidden, bias to hidden, and hidden/bias to output. MacKay gives separate priors to weights in each category, finding an appropriate value of σ_w for each. I fix σ_w to one, but multiply each weight by a scale factor associated with its category before using it, giving an equivalent effect. For conventional training with weight decay, I use an analogous scheme with three weight decay constants (λ in (6)).
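For concreteness, training cases for this task might be generated as below. The paper does not say how x1 is divided between its two ranges, so picking each range with equal probability is an assumption of mine:

```python
import numpy as np

def robot_arm_data(n_cases, rng, noise_sd=0.05):
    """Data for the robot arm task of (17)-(18): noisy positions of a
    two-joint arm with link lengths 2.0 and 1.3."""
    sign = rng.choice([-1.0, 1.0], size=n_cases)   # assumed 50/50 range split
    x1 = sign * rng.uniform(0.453, 1.932, size=n_cases)
    x2 = rng.uniform(0.534, 3.142, size=n_cases)
    y1 = 2.0 * np.cos(x1) + 1.3 * np.cos(x1 + x2)
    y2 = 2.0 * np.sin(x1) + 1.3 * np.sin(x1 + x2)
    Y = np.stack([y1, y2], axis=1) + noise_sd * rng.standard_normal((n_cases, 2))
    return np.stack([x1, x2], axis=1), Y
```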
In all cases, I assume that the true value of σ is not known. I therefore use (3) for the training set probability, and (5) for the data fit term in conventional training. I set s0 = m0 = 0.1, which corresponds to a very vague prior for σ.

3.1 PERFORMANCE OF CONVENTIONAL LEARNING

Conventional backpropagation learning was tested on the robot arm problem to gauge how difficult it is to obtain good generalization with standard methods.

Figure 1: Conventional backpropagation learning (a) with no weight decay, (b) with carefully-chosen weight decay constants. The solid lines give the squared error on the training data, the dotted lines the squared error on the test data.

Fig. 1(a) shows results obtained without using weight decay. Error on the test set declined initially, but then increased with further training. To achieve good results, the point where the test error reaches its minimum would have to be identified using a separate validation set.

Fig. 1(b) shows results using good weight decay constants, one for each category of weights, taken from the Bayesian runs described below. In this case there is no need to stop learning early, but finding the proper weight decay constants by non-Bayesian methods would be a problem. Again, a validation set seems necessary, as well as considerable computation.

Use of a validation set is wasteful, since data that could otherwise be included in the training set must be excluded.
Standard techniques for avoiding this, such as "N-fold" cross-validation, are difficult to apply to neural networks.

3.2 PERFORMANCE OF BAYESIAN LEARNING

Bayesian learning was first tested using the unbiased Hybrid Monte Carlo method. The parameter vector in the simulations (q) consisted of the unscaled network weights together with the scale factors for the three weight categories. The actual weight vector (w) was obtained by multiplying each unscaled weight by the scale factor for its category.

Each Hybrid Monte Carlo run consisted of 500 Metropolis steps. For each step, a trajectory consisting of 1000 leapfrog iterations with ε = 0.00012 was computed, and accepted or rejected based on the change in H at its end-point. Each run therefore required 500,000 batch gradient evaluations, and took approximately four hours on a machine rated at about 25 MIPS.

Fig. 2(a) shows the training and test error for the early portion of one Hybrid Monte Carlo run. After initially declining, these values fluctuate about an average. Though not apparent in the figure, some quantities (notably the scale factors) require a hundred or more steps to reach their final distribution. The first 250 steps of each run were therefore discarded as not being from the stationary distribution.

Fig. 2(b) shows the training and test set errors produced by networks with weight vectors taken from the last 250 steps of the same run. Also shown is the error on the test set using the average of the outputs of all these networks, that is, the estimate given by (10) for the Bayesian prediction of (9). For the run shown, this test set error using averaged outputs is 0.00559, which is (slightly) better than any results obtained using conventional training. Note that with Bayesian training no validation set is necessary. The analogues of the weight decay constants, the weight scale factors, are found during the course of the simulation.

Figure 2: Bayesian learning using Hybrid Monte Carlo: (a) early portion of run, (b) last 250 iterations. The solid lines give the squared error on the training set, the dotted lines the squared error on the test set, for individual networks. The dashed line in (b) is the test error when using the average of the outputs of all 250 networks.

Another advantage of the Bayesian approach is that it can provide an indication of how uncertain the predictions for test cases are. Fig. 3 demonstrates this.

Figure 3: Predictive distribution for outputs. The two regions from which training data was drawn are outlined. Circles indicate the true, noise-free outputs for a grid of cases in the input space. The dots in the vicinity of each circle (often piled on top of it) are the outputs of every fifth network from the last 250 iterations of a Hybrid Monte Carlo run.
As one would expect, the uncertainty is greater for test cases with inputs outside the region where training data was supplied.

3.3 STOCHASTIC DYNAMICS VS. HYBRID MONTE CARLO

The uncorrected stochastic dynamics method will have some degree of systematic bias, due to inexact simulation of the dynamics. Is the amount of bias introduced of any practical importance, however?

Figure 4: Bayesian learning using uncorrected stochastic dynamics (a) Training and test error for the last 250 iterations of a run with ε = 0.00012, (b) potential energy (E) for a run with ε = 0.00030. Note the two peaks where the dynamics became unstable.

To help answer this question, the stochastic dynamics method was run with parameters analogous to those used in the Hybrid Monte Carlo runs. The step size of ε = 0.00012 used in those runs was chosen to be as large as possible while keeping the number of trajectories rejected low (about 10%). A smaller step size would not give competitive results, so this value was used for the stochastic dynamics runs as well. A value of 0.999 for α in (16) was chosen as being (loosely) equivalent to the use of trajectories 1000 iterations long in the Hybrid Monte Carlo runs.

The results shown in Fig. 4(a) are comparable to those obtained using Hybrid Monte Carlo in Fig. 2(b). Fig. 4(b) shows that with a larger step size the uncorrected stochastic dynamics method becomes unstable. Large step sizes also cause problems for the Hybrid Monte Carlo method, however, as they lead to high rejection rates. The Hybrid Monte Carlo method may be the more robust choice in some circumstances, but uncorrected stochastic dynamics can also give good results.
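The uncorrected sampler compared here interleaves single leapfrog iterations (15) with the momentum perturbation (16) and applies no Metropolis test; a self-contained sketch (illustrative names; the bias grows as the step size ε is increased):

```python
import numpy as np

def stochastic_dynamics(grad_E, q0, eps, alpha, n_steps, rng):
    """Uncorrected stochastic dynamics: no accept/reject step, so the
    stationary distribution is only approximately exp(-E(q))."""
    q = np.asarray(q0, dtype=float)
    p = rng.standard_normal(q.shape)
    samples = []
    for _ in range(n_steps):
        # partial momentum replacement, Eq. (16)
        p = alpha * p + np.sqrt(1.0 - alpha ** 2) * rng.standard_normal(q.shape)
        # one leapfrog iteration, Eq. (15)
        p = p - 0.5 * eps * grad_E(q)
        q = q + eps * p
        p = p - 0.5 * eps * grad_E(q)
        samples.append(q.copy())
    return np.array(samples)
```

On a standard Gaussian target (E(q) = |q|²/2), a small ε yields samples whose mean and variance are close to 0 and 1.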
As it is simpler, the stochastic dynamics method may be better for hardware implementation, and is a more plausible starting point for any attempt to relate Bayesian methods to biology. Numerous other variations on these methods are possible as well, some of which are discussed in (Neal, 1992).

References

Andersen, H. C. (1980) "Molecular dynamics simulations at constant pressure and/or temperature", Journal of Chemical Physics, vol. 72, pp. 2384-2393.

Buntine, W. L. and Weigend, A. S. (1991) "Bayesian back-propagation", Complex Systems, vol. 5, pp. 603-643.

Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987) "Hybrid Monte Carlo", Physics Letters B, vol. 195, pp. 216-222.

MacKay, D. J. C. (1992) "A practical Bayesian framework for backpropagation networks", Neural Computation, vol. 4, pp. 448-472.

Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953) "Equation of state calculations by fast computing machines", Journal of Chemical Physics, vol. 21, pp. 1087-1092.

Neal, R. M. (1992) "Bayesian training of backpropagation networks by the hybrid Monte Carlo method", Technical Report CRG-TR-92-1, Dept. of Computer Science, University of Toronto.