Schema (column, type, min to max as reported by the dataset viewer):

node_id     int64    0 to 76.9k
label       int64    0 to 39
text        string   lengths 13 to 124k
neighbors   list     lengths 0 to 3.32k
mask        string   4 classes (Train / Validation / Test appear in this excerpt)

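The rows below follow this schema. As a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face `datasets` library; the repository id "user/citation-nodes" and the split name are placeholders, not the real ones:

```python
# Minimal loading sketch. The dataset id "user/citation-nodes" and the
# split name "train" are assumptions, not the actual repository.
from datasets import load_dataset

ds = load_dataset("user/citation-nodes", split="train")

record = ds[0]
print(record["node_id"], record["label"], record["mask"])
print(record["text"][:80])        # "Title: ... Abstract: ..."
print(len(record["neighbors"]))   # out-degree of this node in the citation graph
```
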
node_id: 594
label: 0
text: Title: Design and Implementation of a Replay Framework based on a Partial Order Planner Abstract: In this paper we describe the design and implementation of the derivation replay framework, dersnlp+ebl (Derivational snlp+ebl), which is based within a partial order planner. dersnlp+ebl replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (ebl) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, dersnlp+ebl normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of dersnlp+ebl in improving planning performance on randomly-generated problems drawn from a complex domain.
neighbors: [593, 1122, 1194, 1621]
mask: Test

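Each record's text field packs the title and abstract into one string with the "Title: ... Abstract: ..." convention visible above. A small helper, sketched here under that assumption (the function name is illustrative), can split it back apart:

```python
# Sketch: split a record's text field into (title, abstract), assuming the
# "Title: ... Abstract: ..." convention seen in these rows. Splits on the
# first occurrence of " Abstract: "; rows with fused front matter may need
# extra cleanup.
def split_text(text: str) -> tuple[str, str]:
    title, _, abstract = text.partition(" Abstract: ")
    return title.removeprefix("Title: ").strip(), abstract.strip()

title, abstract = split_text(
    "Title: Design and Implementation of a Replay Framework based on a "
    "Partial Order Planner Abstract: In this paper we describe ..."
)
assert title.startswith("Design and Implementation")
```
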
node_id: 595
label: 2
text: Title: LEARNING TO CONTROL FAST-WEIGHT MEMORIES: AN ALTERNATIVE TO DYNAMIC RECURRENT NETWORKS (Neural Computation, 4(1):131-139, 1992) Abstract: Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.
neighbors: [121, 233]
mask: Test

node_id: 596
label: 6
text: Title: The Disk-Covering Method for Tree Reconstruction Abstract: Evolutionary tree reconstruction is a very important step in many biological research problems, and yet is extremely difficult for a variety of computational, statistical, and scientific reasons. In particular, the reconstruction of very large trees containing significant amounts of divergence is especially challenging. We present in this paper a new tree reconstruction method, which we call the Disk-Covering Method (DCM), which can be used to recover accurate estimates of the evolutionary tree for otherwise intractable datasets. DCM obtains a decomposition of the input dataset into small overlapping sets of closely related taxa, reconstructs trees on these subsets (using a "base" phylogenetic method of choice), and then combines the subtrees into one tree on the entire set of taxa. Because the subproblems analyzed by DCM are smaller, computationally expensive methods such as maximum likelihood estimation can be used without incurring too much cost. At the same time, because the taxa within each subset are closely related, even very simple methods (such as neighbor-joining) are much more likely to be highly accurate. The result is that DCM-boosted methods are typically faster and more accurate as compared to "naive" use of the same method. In this paper we describe the basic ideas and techniques in DCM, and demonstrate the advantages of DCM experimentally by simulating sequence evolution on a variety of trees.
neighbors: [299]
mask: Train

node_id: 597
label: 5
text: Title: Learning Semantic Grammars with Constructive Inductive Logic Programming Abstract: Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and outperform previous approaches based on connectionist techniques.
neighbors: [106, 204, 224, 434, 675]
mask: Train

node_id: 598
label: 5
text: Title: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures Abstract: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%.
neighbors: [158, 302, 307, 428, 432]
mask: Train

node_id: 599
label: 2
text: Title: A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl Abstract: In the barn owl, the self-organization of the auditory map of space in the external nucleus of the inferior colliculus (ICx) is strongly influenced by vision, but the nature of this interaction is unknown. In this paper a biologically plausible and minimalistic model of ICx self-organization is proposed where the ICx receives a learn signal based on the owl's visual attention. When the visual attention is focused in the same spatial location as the auditory input, the learn signal is turned on, and the map is allowed to adapt. A two-dimensional Kohonen map is used to model the ICx, and simulations were performed to evaluate how the learn signal would affect the auditory map. When the primary area of visual attention was shifted to different spatial locations, the auditory map shifted to the corresponding location. The shift was complete when done early in the development and partial when done later. Similar results have been observed in the barn owl with its visual field modified with prisms. Therefore, the simulations suggest that a learn signal, based on visual attention, is a possible explanation for the auditory plasticity.
neighbors: [747]
mask: Validation

node_id: 600
label: 2
text: Title: Separating hippocampal maps Spatial Functions of the Hippocampal Formation and the Abstract: The place fields of hippocampal cells in old animals sometimes change when an animal is removed from and then returned to an environment [Barnes et al., 1997]. The ensemble correlation between two sequential visits to the same environment shows a strong bimodality for old animals (near 0, indicative of remapping, and greater than 0.7, indicative of a similar representation between experiences), but a strong unimodality for young animals (greater than 0.7, indicative of a similar representation between experiences). One explanation for this is the multi-map hypothesis in which multiple maps are encoded in the hippocampus: old animals may sometimes be returning to the wrong map. A theory proposed by Samsonovich and McNaughton (1997) suggests that the Barnes et al. experiment implies that the maps are pre-wired in the CA3 region of hippocampus. Here, we offer an alternative explanation in which orthogonalization properties in the dentate gyrus (DG) region of hippocampus interact with errors in self-localization (reset of the path integrator on re-entry into the environment) to produce the bimodality.
neighbors: [205, 745, 747, 1052]
mask: Train

node_id: 601
label: 4
text: Title: Active Gesture Recognition using Partially Observable Markov Decision Processes Abstract: (M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 367; appeared in Proc. 13th IEEE Intl. Conference on Pattern Recognition (ICPR '96), Vienna, Austria.) We present a foveated gesture recognition system that guides an active camera to foveate salient features based on a reinforcement learning paradigm. Using vision routines previously implemented for an interactive environment, we determine the spatial location of salient body parts of a user and guide an active camera to obtain images of gestures or expressions. A hidden-state reinforcement learning paradigm based on the Partially Observable Markov Decision Process (POMDP) is used to implement this visual attention. The attention module selects targets to foveate based on the goal of successful recognition, and uses a new multiple-model Q-learning formulation. Given a set of target and distractor gestures, our system can learn where to foveate to maximally discriminate a particular gesture.
neighbors: [3, 552, 565, 611, 1741]
mask: Validation

node_id: 602
label: 1
text: Title: Every Niching Method has its Niche: Fitness Sharing and Implicit Sharing Compared Abstract: Various extensions to the Genetic Algorithm (GA) attempt to find all or most optima in a search space containing several optima. Many of these emulate natural speciation. For co-evolutionary learning to succeed in a range of management and control problems, such as learning game strategies, such methods must find all or most optima. However, suitable comparison studies are rare. We compare two similar GA speciation methods, fitness sharing and implicit sharing. Using a realistic letter classification problem, we find they have advantages under different circumstances. Implicit sharing covers optima more comprehensively, when the population is large enough for a species to form at each optimum. With a population not large enough to do this, fitness sharing can find the optima with larger basins of attraction, and ignore the peaks with narrow bases, while implicit sharing is more easily distracted. This indicates that for a speciated GA trying to find as many near-global optima as possible, implicit sharing works well only if the population is large enough. This requires prior knowledge of how many peaks exist.
neighbors: [163, 1114, 2334]
mask: Train

node_id: 603
label: 0
text: Title: METHOD-SPECIFIC KNOWLEDGE COMPILATION: TOWARDS PRACTICAL DESIGN SUPPORT SYSTEMS Abstract: Modern knowledge systems for design typically employ multiple problem-solving methods which in turn use different kinds of knowledge. The construction of a heterogeneous knowledge system that can support practical design thus raises two fundamental questions: how to accumulate huge volumes of design information, and how to support heterogeneous design processing? Fortunately, partial answers to both questions exist separately. Legacy databases already contain huge amounts of general-purpose design information. In addition, modern knowledge systems typically characterize the kinds of knowledge needed by specific problem-solving methods quite precisely. This leads us to hypothesize method-specific data-to-knowledge compilation as a potential mechanism for integrating heterogeneous knowledge systems and legacy databases for design. In this paper, first we outline a general computational architecture called HIPED for this integration. Then, we focus on the specific issue of how to convert data accessed from a legacy database into a form appropriate to the problem-solving method used in a heterogeneous knowledge system. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The limited experiment indicates the computational feasibility of method-specific data-to-knowledge compilation, but also raises additional research issues.
neighbors: [540, 670, 1047, 1640]
mask: Test

node_id: 604
label: 2
text: Title: First experiments using a mixture of nonlinear experts for time series prediction Abstract: This paper investigates the advantages and disadvantages of the mixture of experts (ME) model (introduced to the connectionist community in [JJNH91] and applied to time series analysis in [WM95]) on two time series where the dynamics is well understood. The first series is a computer-generated series, consisting of a mixture between a noise-free process (the quadratic map) and a noisy process (a composition of a noisy linear autoregressive process and a hyperbolic tangent). There are three main results: (1) the ME model produces significantly better results than single networks; (2) it discovers the regimes correctly and also allows us to characterize the sub-processes through their variances; and (3) due to the correct matching of the noise level of the model to that of the data, it avoids overfitting. The second series is the laser series used in the Santa Fe competition; the ME model also obtains excellent out-of-sample predictions, allows for analysis and shows no overfitting.
neighbors: [74, 668]
mask: Test

node_id: 605
label: 2
text: Title: Learning Viewpoint Invariant Representations of Faces in an Attractor Network Abstract: In natural visual experience, different views of an object tend to appear in close temporal proximity as an animal manipulates the object or navigates around it. We investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contains successive views of faces of ten individuals as they change pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fall into the same basin of attraction. We use an independent component (ICA) representation of the faces for the input patterns (Bell & Sejnowski, 1995). The ICA representation has advantages over the principal component representation (PCA) for viewpoint-invariant recognition both with and without the attractor network, suggesting that ICA is a better representation than PCA for object recognition.
neighbors: [476, 576, 676]
mask: Test

node_id: 606
label: 1
text: Title: Analysis of the Numerical Effects of Parallelism on a Parallel Genetic Algorithm Abstract: This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments provide preliminary evidence that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Our analysis shows that this improvement is due to (1) decreased synchronization costs and (2) high numerical efficiency (e.g. fewer function evaluations) for the asynchronous GAs. This analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs.
neighbors: [163, 537]
mask: Test

node_id: 607
label: 2
text: Title: A Support Vector Machine Approach to Decision Trees Abstract: Key ideas from statistical learning theory and support vector machines are generalized to decision trees. A support vector machine is used for each decision in the tree. The "optimal" decision tree is characterized, and both a primal and dual space formulation for constructing the tree are proposed. The result is a method for generating logically simple decision trees with multivariate linear or nonlinear decisions. The preliminary results indicate that the method produces simple trees that generalize well with respect to other decision tree algorithms and single support vector machines.
neighbors: [438, 821, 1055, 1306]
mask: Test

node_id: 608
label: 2
text: Title: Regularization Theory and Neural Networks Architectures Abstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one hidden layer. (This paper will appear in Neural Computation, vol. 7, pages 219-269, 1995.)
neighbors: [133, 179, 394, 611, 680, 975, 2230]
mask: Validation

node_id: 609
label: 2
text: Title: Interactive Segmentation of Three-dimensional Medical Images (Extended abstract) Abstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one hidden layer. (This paper will appear in Neural Computation, vol. 7, pages 219-269, 1995.)
neighbors: [719, 747]
mask: Train

node_id: 610
label: 2
text: Title: Figure 1: The architecture of a Kohonen network. Each input neuron is fully connected with Abstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one hidden layer. (This paper will appear in Neural Computation, vol. 7, pages 219-269, 1995.)
neighbors: [427, 747]
mask: Train

node_id: 611
label: 2
text: Title: Learning networks for face analysis and synthesis Abstract: This paper presents an overview of the face-related projects in our group. The unifying theme underlying our work is the use of example-based learning methods for both analyzing and synthesizing face images. We label the example face images (and for the problem of face detection, "near miss" faces as well) with descriptive parameters for pose, expression, identity, and face vs. non-face. Then, by using example-based learning techniques, we develop networks for performing analysis tasks such as pose and expression estimation, face recognition, and face detection in cluttered scenes. In addition to these analysis applications, we show how the example-based technique can also be used as a novel method for image synthesis that is for computer graphics.
neighbors: [28, 133, 179, 368, 386, 413, 511, 579, 601, 608, 633, 716, 718, 719, 864, 899, 938, 1079, 1103, 1116, 1265, 1315, 1352, 1488, 1493, 1499, 1564, 1668, 1732, 1755, 1763, 1935, 2050, 2230, 2260, 2325, 2340, 2352, 2378, 2385, 2501, 2505, ...
mask: Validation

node_id: 612
label: 0
text: Title: Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases Abstract: This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good "lessons" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
neighbors: [289, 629, 1046, 1348, 2568]
mask: Validation

node_id: 613
label: 2
text: Title: A Generalized Hidden Markov Model for the Recognition of Human Genes in DNA Abstract: We present a statistical model of genes in DNA. A Generalized Hidden Markov Model (GHMM) provides the framework for describing the grammar of a legal parse of a DNA sequence (Stormo & Haussler 1994). Probabilities are assigned to transitions between states in the GHMM and to the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. Given a new candidate sequence, the best parse is deduced from the model using a dynamic programming algorithm to identify the path through the model with maximum probability. The GHMM is flexible and modular, so new sensors and additional states can be inserted easily. In addition, it provides simple solutions for integrating cardinality constraints, reading frame constraints, "indels", and homology searching. The description and results of an implementation of such a gene-finding model, called Genie, is presented. The exon sensor is a codon frequency model conditioned on windowed nucleotide frequency and the preceding codon. Two neural networks are used, as in (Brunak, Engelbrecht, & Knudsen 1991), for splice site prediction. We show that this simple model performs quite well. For a cross-validated standard test set of 304 genes [ftp://www-hgc.lbl.gov/pub/genesets] in human DNA, our gene-finding system identified up to 85% of protein-coding bases correctly with a specificity of 80%. 58% of exons were exactly identified with a specificity of 51%. Genie is shown to perform favorably compared with several other gene-finding systems.
neighbors: [14, 232, 268, 616, 2107, 2496, 2571]
mask: Validation

node_id: 614
label: 6
text: Title: Optimality and Domination in Repeated Games with Bounded Players Abstract: We examine questions of optimality and domination in repeated stage games where one or both players may draw their strategies only from (perhaps different) computationally bounded sets. We also consider optimality and domination under bounded convergence rates of the infinite payoff. We develop a notion of a "grace period" to handle the problem of vengeful strategies.
neighbors: [615]
mask: Validation

node_id: 615
label: 6
text: Title: Efficient Algorithms for Learning to Play Repeated Games Against Computationally Bounded Adversaries Abstract: We study the problem of efficiently learning to play a game optimally against an unknown adversary chosen from a computationally bounded class. We both contribute to the line of research on playing games against finite automata, and expand the scope of this research by considering new classes of adversaries. We introduce the natural notions of games against recent history adversaries (whose current action is determined by some simple boolean formula on the recent history of play), and games against statistical adversaries (whose current action is determined by some simple function of the statistics of the entire history of play). In both cases we give efficient algorithms for learning to play penny-matching and a more difficult game called contract. We also give the most powerful positive result to date for learning to play against finite automata, an efficient algorithm for learning to play any game against any finite automaton with probabilistic actions and low cover time.
neighbors: [54, 98, 555, 556, 614]
mask: Train

node_id: 616
label: 2
text: Title: A Decision Tree System for Finding Genes in DNA Abstract: MORGAN is an integrated system for finding genes in vertebrate DNA sequences. MORGAN uses a variety of techniques to accomplish this task, the most distinctive of which is a decision tree classifier. The decision tree system is combined with new methods for identifying start codons, donor sites, and acceptor sites, and these are brought together in a frame-sensitive dynamic programming algorithm that finds the optimal segmentation of a DNA sequence into coding and noncoding regions (exons and introns). The optimal segmentation is dependent on a separate scoring function that takes a subsequence and assigns to it a score reflecting the probability that the sequence is an exon. The scoring functions in MORGAN are sets of decision trees that are combined to give a probability estimate. Experimental results on a database of 570 vertebrate DNA sequences show that MORGAN has excellent performance by many different measures. On a separate test set, it achieves an overall accuracy of 95%, with a correlation coefficient of 0.78 and a sensitivity and specificity for coding bases of 83% and 79%. In addition, MORGAN identifies 58% of coding exons exactly; i.e., both the beginning and end of the coding regions are predicted correctly. This paper describes the MORGAN system, including its decision tree routines and the algorithms for site recognition, and its performance on a benchmark database of vertebrate DNA.
neighbors: [268, 438, 613, 2046]
mask: Train

node_id: 617
label: 2
text: Title: The Use of Neural Networks to Support "Intelligent" Scientific Computing Abstract: In this paper we report on the use of backpropagation-based neural networks to implement a phase of the computational intelligence process of the PYTHIA [3] expert system for supporting the numerical simulation of applications modelled by partial differential equations (PDEs). PYTHIA is an exemplar-based reasoning system that provides advice on what method and parameters to use for the simulation of a specified PDE based application. When advice is requested, the characteristics of the given model are matched with the characteristics of previously seen classes of models. The performance of various solution methods on previously seen similar classes of models is then used as a basis for predicting what method to use. Thus, a major step of the reasoning process in PYTHIA involves the analysis and categorization of models into classes of models based on their characteristics. In this study we demonstrate the use of neural networks to identify the class of predefined models whose characteristics match the ones of the specified PDE based application.
neighbors: [226]
mask: Validation

node_id: 618
label: 6
text: Title: Inductive learning of compact rule sets by using efficient hypotheses reduction Abstract: A method is described which reduces the hypothesis space with an efficient and easily interpretable reduction criterion called α-reduction. A learning algorithm based on α-reduction is described and analyzed using probably approximately correct (PAC) learning results. The results are obtained by reducing a rule set to an equivalent set of kDNF formulas. The goal of the learning algorithm is to induce a compact rule set describing the basic dependencies within a set of data. The reduction is based on a criterion which is very flexible and gives a semantic interpretation of the rules which fulfill the criterion. Comparison with syntactical hypothesis reduction shows that the α-reduction improves search and has a smaller probability of misclassification.
neighbors: [478, 638]
mask: Train

node_id: 619
label: 3
text: Title: MEDIATING INSTRUMENTAL VARIABLES Abstract: A method is described which reduces the hypothesis space with an efficient and easily interpretable reduction criterion called α-reduction. A learning algorithm based on α-reduction is described and analyzed using probably approximately correct (PAC) learning results. The results are obtained by reducing a rule set to an equivalent set of kDNF formulas. The goal of the learning algorithm is to induce a compact rule set describing the basic dependencies within a set of data. The reduction is based on a criterion which is very flexible and gives a semantic interpretation of the rules which fulfill the criterion. Comparison with syntactical hypothesis reduction shows that the α-reduction improves search and has a smaller probability of misclassification.
neighbors: [260]
mask: Train

node_id: 620
label: 2
text: Title: How Receptive Field Parameters Affect Neural Learning Abstract: We identify the three principal factors affecting the performance of learning by networks with localized units: unit noise, sample density, and the structure of the target function. We then analyze the effect of unit receptive field parameters on these factors and use this analysis to propose a new learning algorithm which dynamically alters receptive field properties during learning.
neighbors: [747]
mask: Train

node_id: 621
label: 4
text: Title: Reinforcement Learning Methods for Continuous-Time Markov Decision Problems Abstract: Semi-Markov Decision Problems are continuous time generalizations of discrete time Markov Decision Problems. A number of reinforcement learning algorithms have been developed recently for the solution of Markov Decision Problems, based on the ideas of asynchronous dynamic programming and stochastic approximation. Among these are TD(λ), Q-learning, and Real-time Dynamic Programming. After reviewing semi-Markov Decision Problems and Bellman's optimality equation in that context, we propose algorithms similar to those named above, adapted to the solution of semi-Markov Decision Problems. We demonstrate these algorithms by applying them to the problem of determining the optimal control for a simple queueing system. We conclude with a discussion of circumstances under which these algorithms may be usefully applied.
neighbors: [471, 552, 565, 738, 1859]
mask: Train

node_id: 622
label: 2
text: Title: CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS Abstract: There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) [1] has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and Two Spirals.
neighbors: [74, 345, 377]
mask: Validation

node_id: 623
label: 3
text: Title: State-Space Abstraction for Anytime Evaluation of Probabilistic Networks Abstract: One important factor determining the computational complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an anytime procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the procedure exhibits a smooth improvement in approximation quality as computation time increases. This suggests that state-space abstraction is one more useful control parameter for designing real-time probabilistic reasoners.
neighbors: [637, 1064, 1172, 1937, 2140, 2341]
mask: Test

node_id: 624
label: 6
text: Title: Measuring the Difficulty of Specific Learning Problems Abstract: Existing complexity measures from contemporary learning theory cannot be conveniently applied to specific learning problems (e.g., training sets). Moreover, they are typically non-generic, i.e., they necessitate making assumptions about the way in which the learner will operate. The lack of a satisfactory, generic complexity measure for learning problems poses difficulties for researchers in various areas; the present paper puts forward an idea which may help to alleviate these. It shows that supervised learning problems fall into two, generic, complexity classes only one of which is associated with computational tractability. By determining which class a particular problem belongs to, we can thus effectively evaluate its degree of generic difficulty.
neighbors: [11, 112, 163, 572, 659, 700]
mask: Test

node_id: 625
label: 2
text: Title: Statistical Biases in Backpropagation Learning. Keywords: Cognitive Science, Pattern recognition. Abstract: The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise.
neighbors: [659, 700]
mask: Validation

node_id: 626
label: 2
text: Title: Scatter-partitioning RBF network for function regression and image segmentation: Preliminary results Abstract: Scatter-partitioning Radial Basis Function (RBF) networks increase their number of degrees of freedom with the complexity of an input-output mapping to be estimated on the basis of a supervised training data set. Due to its superior expressive power, a scatter-partitioning Gaussian RBF (GRBF) model, termed Supervised Growing Neural Gas (SGNG), is selected from the literature. SGNG employs a one-stage error-driven learning strategy and is capable of generating and removing both hidden units and synaptic connections. A slightly modified SGNG version is tested as a function estimator when the training surface to be fitted is an image, i.e., a 2-D signal whose size is finite. The relationship between the generation, by the learning system, of disjointed maps of hidden units and the presence, in the image, of pictorially homogeneous subsets (segments) is investigated. Unfortunately, the examined SGNG version performs poorly both as function estimator and image segmenter. This may be due to an intrinsic inadequacy of the one-stage error-driven learning strategy to adjust structural parameters and output weights simultaneously but consistently. In the framework of RBF networks, further studies should investigate the combination of two-stage error-driven learning strategies with synapse generation and removal criteria. (Internal report of the paper entitled "Image segmentation with scatter-partitioning RBF networks: A feasibility study," to be presented at the conference Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation, part of SPIE's International Symposium on Optical Science, Engineering and Instrumentation, 19-24 July 1998, San Diego, CA.)
neighbors: [261, 687]
mask: Validation

node_id: 627
label: 2
text: Title: Learning Symbolic Rules Using Artificial Neural Networks Abstract: A distinct advantage of symbolic learning algorithms over artificial neural networks is that typically the concept representations they form are more easily understood by humans. One approach to understanding the representations formed by neural networks is to extract symbolic rules from trained networks. In this paper we describe and investigate an approach for extracting rules from networks that uses (1) the NofM extraction algorithm, and (2) the network training method of soft weight-sharing. Previously, the NofM algorithm had been successfully applied only to knowledge-based neural networks. Our experiments demonstrate that our extracted rules generalize better than rules learned using the C4.5 system. In addition to being accurate, our extracted rules are also reasonably comprehensible.
neighbors: [338, 1057, 1270, 1562, 1679]
mask: Test

node_id: 628
label: 2
text: Title: Figure 8: time complexity of unit parallelism measured on MANNA (theoretical prediction; axes: #nodes N vs. time) Abstract: Our experience showed us that flexibility in expressing a parallel algorithm for simulating neural networks is desirable even if it is not possible then to obtain the most efficient solution for any single training algorithm. We believe that the advantages of a clear and easy to understand program outweigh the disadvantages of approaches allowing only for a specific machine or neural network algorithm. We currently investigate if other neural network models are worthwhile being parallelized, and how the resulting parallel algorithms can be composed of a few common basic building blocks and the logarithmic tree as an efficient communication structure.
neighbors: [747]
mask: Train

node_id: 629
label: 0
text: Title: Using Introspective Reasoning to Select Learning Strategies Abstract: In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declarative representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling.
neighbors: [221, 612, 1535, 1537]
mask: Train

node_id: 630
label: 2
text: Title: Input to State Stabilizability for Parameterized Families of Systems. Key Words: Nonlinear stability, Robust control. Abstract: In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declarative representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling.
neighbors: [408, 447]
mask: Validation

node_id: 631
label: 2
text: Title: Extracting Provably Correct Rules from Artificial Neural Networks Abstract: Although connectionist learning procedures have been applied successfully to a variety of real-world scenarios, artificial neural networks have often been criticized for exhibiting a low degree of comprehensibility. Mechanisms that automatically compile neural networks into symbolic rules offer a promising perspective to overcome this practical shortcoming of neural network representations. This paper describes an approach to neural network rule extraction based on Validity Interval Analysis (VI-Analysis). VI-Analysis is a generic tool for extracting symbolic knowledge from Backpropagation-style artificial neural networks. It does this by propagating whole intervals of activations through the network in both the forward and backward directions. In the context of rule extraction, these intervals are used to prove or disprove the correctness of conjectured rules. We describe techniques for generating and testing rule hypotheses, and demonstrate these using some simple classification tasks including the MONK's benchmark problems. Rules extracted by VI-Analysis are provably correct. No assumptions are made about the topology of the network at hand, nor about the procedure employed for training the network.
neighbors: [383, 1057, 1562]
mask: Validation

node_id: 632
label: 3
text: Title: Toward Optimal Feature Selection Abstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with large numbers of features.
neighbors: [401, 430, 635, 683, 1582, 1909, 2508]
mask: Train

node_id: 633
label: 4
text: Title: Chapter 1: Reinforcement Learning for Planning and Control Abstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with large numbers of features.
neighbors: [197, 294, 565, 566, 611, 1459]
mask: Train

node_id: 634
label: 0
text: Title: Oblivious Decision Trees and Abstract Cases Abstract: In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research.
neighbors: [256, 430, 635, 686, 1513, 1570, 2593]
mask: Train

node_id: 635
label: 6
text: Title: Learning Boolean Concepts in the Presence of Many Irrelevant Features Abstract: In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research.
neighbors: [89, 109, 172, 177, 208, 256, 264, 375, 381, 430, 436, 442, 474, 632, 634, 640, 651, 660, 683, 686, 722]
mask: Validation

node_id: 636
label: 4
text: Title: Robot Shaping: Developing Situated Agents through Learning Abstract: Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to "shape" a robot to perform a predefined target behavior. We connect both simulated and real robots to ALECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, the hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents.
neighbors: [118, 294, 552, 764, 1573, 2027, 2173, 2174, 2204, 2233, 2672, 2687]
mask: Train

node_id: 637
label: 3
text: Title: Computational complexity reduction for BN2O networks using similarity of states Abstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network (a two-layer network often used in diagnostic problems) can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity.
neighbors: [515, 623]
mask: Test

node_id: 638
label: 6
text: Title: Learning a set of primitive actions with an Induction of decision trees. Machine Learning, 1(1):81-106, Abstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network (a two-layer network often used in diagnostic problems) can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity.
neighbors: [89, 227, 264, 294, 296, 348, 375, 383, 441, 449, 521, 618, 701, 722, 779, 862, 1027, 1189, 1312, 1327, 1460, 1468, 1539, 1808, 1895, 1919, 2045, 2146, 2171, 2409, 2426, 2465, 2541, 2673]
mask: Train

node_id: 639
label: 3
text: Title: Bayesian Unsupervised Learning of Higher Order Structure Abstract: Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers higher order structure using EM and Gibbs sampling. The model can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher levels. We demonstrate the performance of the algorithm on benchmark problems.
neighbors: [250, 584, 1763]
mask: Test

640
6
Title: Learning in the Presence of Malicious Errors Abstract: In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems.
[ 20, 109, 130, 199, 287, 459, 549, 574, 591, 635, 732, 1363, 1897, 2054, 2475 ]
Test
641
3
Title: Comparing Bayesian Model Class Selection Criteria by Discrete Finite Mixtures Abstract: We investigate the problem of computing the posterior probability of a model class, given a data sample and a prior distribution for possible parameter settings. By a model class we mean a group of models which all share the same parametric form. In general this posterior may be very hard to compute for high-dimensional parameter spaces, which is usually the case with real-world applications. In the literature several methods for computing the posterior approximately have been proposed, but the quality of the approximations may depend heavily on the size of the available data sample. In this work we are interested in testing how well the approximative methods perform in real-world problem domains. In order to conduct such a study, we have chosen the model family of finite mixture distributions. With certain assumptions, we are able to derive the model class posterior analytically for this model family. We report a series of model class selection experiments on real-world data sets, where the true posterior and the approximations are compared. The empirical results support the hypothesis that the approximative techniques can provide good estimates of the true posterior, especially when the sample size grows large.
[ 376, 484, 1739 ]
Train
642
3
Title: Constructing Bayesian finite mixture models by the EM algorithm Abstract: Report C-1996-9, University of Helsinki, Department of Computer Science. In this paper we explore the use of finite mixture models for building decision support systems capable of sound probabilistic inference. Finite mixture models have many appealing properties: they are computationally efficient in the prediction (reasoning) phase, they are universal in the sense that they can approximate any problem domain distribution, and they can handle multimodality well. We present a formulation of the model construction problem in the Bayesian framework for finite mixture models, and describe how Bayesian inference is performed given such a model. The model construction problem can be seen as missing data estimation, and we describe a realization of the Expectation-Maximization (EM) algorithm for finding good models. To prove the feasibility of our approach, we report cross-validated empirical results on several publicly available classification problem datasets, and compare our results to corresponding results obtained by alternative techniques, such as neural networks and decision trees. The comparison is based on the best results reported in the literature on the datasets in question. It appears that, using the theoretically sound Bayesian framework suggested here, the other reported results can be outperformed with relatively little effort.
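For readers who want the shape of the EM loop mentioned above, here is a self-contained one-dimensional Gaussian-mixture sketch; the paper's models, priors, and datasets are richer, and the initialization and fixed iteration count here are arbitrary choices:

import numpy as np

def em_gaussian_mixture(x, k, n_iter=100, seed=0):
    """Bare-bones EM for a 1-D mixture of k Gaussians (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.choice(x, size=k, replace=False)  # means initialized from data
    var = np.full(k, np.var(x))                # shared initial variance
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to w_k * N(x_n | mu_k, var_k)
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

x = np.concatenate([np.random.default_rng(1).normal(-2, 0.5, 300),
                    np.random.default_rng(2).normal(3, 1.0, 300)])
print(em_gaussian_mixture(x, k=2))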
[ 484, 697 ]
Train
643
0
Title: Modeling Case-based Planning for Repairing Reasoning Failures Abstract: One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems. We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component.
[ 49, 50, 150, 222, 583, 1121, 1904, 2489 ]
Train
644
4
Title: Multi-criteria reinforcement learning Abstract: We consider multi-criteria sequential decision making problems, where the criteria are ordered according to their importance. Structural properties of these problems are touched upon, and reinforcement learning algorithms which learn asymptotically optimal decisions are derived. Computer experiments confirm the theoretical results and provide further insight into the learning processes.
[ 210, 552, 565 ]
Train
645
3
Title: On the Markov Equivalence of Chain Graphs, Undirected Graphs, and Acyclic Digraphs Abstract: Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg's (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.
[ 51, 211, 312, 742 ]
Validation
646
3
Title: Constructing Computationally Efficient Bayesian Models via Unsupervised Clustering (in: Probabilistic Reasoning and Bayesian Belief Networks) Abstract: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.
[ 450 ]
Train
647
3
Title: Balls and Urns Abstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.
[ 81 ]
Test
648
2
Title: Reprint of: Sontag, E.D., "Remarks on stabilization and input-to-state stability" Abstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.
[ 693 ]
Train
649
0
Title: Concept Learning and Heuristic Classification in Weak-Theory Domains Abstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.
[ 96, 166, 457, 479, 752, 1643, 2123, 2294, 2403, 2560, 2581 ]
Train
650
4
Title: Learning to Use Selective Attention and Short-Term Memory in Sequential Tasks Abstract: This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [ Ron et al., 1994 ], Parti-game [ Moore, 1993 ], G-algorithm [ Chapman and Kaelbling, 1991 ], and Variable Resolution Dynamic Programming [ Moore, 1991 ]. It builds on Utile Suffix Memory [ McCallum, 1995c ], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states, far fewer than the 2500^3 states that would have resulted from a fixed-sized history-window approach.
[ 472, 483, 656, 1006, 1557 ]
Test
651
2
Title: A Monotonic Measure for Optimal Feature Selection Abstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.
[ 430, 635 ]
Train
652
5
Title: ARB: A Hardware Mechanism for Dynamic Reordering of Memory References Abstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.
[ 86, 249, 373 ]
Train
653
4
Title: Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email Abstract: This paper describes a novel method by which a dialogue agent can learn to choose an optimal dialogue strategy. While it is widely agreed that dialogue strategies should be formulated in terms of communicative intentions, there has been little work on automatically optimizing an agent's choices when there are multiple ways to realize a communicative intention. Our method is based on a combination of learning algorithms and empirical evaluation techniques. The learning component of our method is based on algorithms for reinforcement learning, such as dynamic programming and Q-learning. The empirical component uses the PARADISE evaluation framework (Walker et al., 1997) to identify the important performance factors and to provide the performance function needed by the learning algorithm. We illustrate our method with a dialogue agent named ELVIS (EmaiL Voice Interactive System), that supports access to email over the phone. We show how ELVIS can learn to choose among alternate strategies for agent initiative, for reading messages, and for summarizing email folders.
[ 552, 2480 ]
Train
654
5
Title: Incremental Reduced Error Pruning Abstract: This paper outlines some problems that may occur with Reduced Error Pruning in Inductive Logic Programming, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of this algorithm cannot be recommended for domains with a very specific concept description.
[ 135, 1260 ]
Train
655
2
Title: Determining Mental State from EEG Signals Using Parallel Implementations of Neural Networks Abstract: EEG analysis has played a key role in the modeling of the brain's cortical dynamics, but relatively little effort has been devoted to developing EEG as a limited means of communication. If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequences of these mental states. EEG pattern recognition is a difficult problem and hinges on the success of finding representations of the EEG signals in which the patterns can be distinguished. In this article, we report on a study comparing three EEG representations: the unprocessed signals, a reduced-dimensional representation using the Karhunen-Loeve transform, and a frequency-based representation. Classification is performed with a two-layer neural network implemented on a CNAPS server (128 processors, SIMD architecture) by Adaptive Solutions, Inc. Execution time comparisons show over a hundred-fold speedup over a Sun Sparc 10. The best classification accuracy on untrained samples is 73% using the frequency-based representation.
[ 198, 747 ]
Train
656
4
Title: Reinforcement Learning: A Survey Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
[ 148, 650, 657, 773 ]
Validation
657
4
Title: Adding Memory to XCS Abstract: We add internal memory to the XCS classifier system. We then test XCS with internal memory, named XCSM, in non-Markovian environments with two and four aliasing states. Experimental results show that XCSM can easily converge to optimal solutions in simple environments; moreover, XCSM's performance is very stable with respect to the size of the internal memory involved in learning. However, the results we present give evidence that in more complex non-Markovian environments, XCSM may fail to evolve an optimal solution. Our results suggest that this happens because the exploration strategies currently employed with XCS are not adequate to guarantee convergence to an optimal policy with XCSM in complex non-Markovian environments.
[ 656, 1581 ]
Train
658
1
Title: Hill Climbing with Learning (An Abstraction of Genetic Algorithm) Abstract: A simple modification of the standard hill climbing optimization algorithm that takes learning features into account is discussed. The basic concept of this approach is the so-called probability vector, whose single entries determine the probabilities of appearance of '1' entries in n-bit vectors. This vector is used for the random generation of n-bit vectors that form a neighborhood (specified by the given probability vector). Within the neighborhood a few best solutions (with smallest functional values of a minimized function) are recorded. The feature of learning is introduced here so that the probability vector is updated by a formal analogue of the Hebbian learning rule, well-known in the theory of artificial neural networks. The process is repeated until the probability vector entries are close either to zero or to one. The resulting probability vector unambiguously determines an n-bit vector which may be interpreted as an optimal solution of the given optimization task. Resemblance with genetic algorithms is discussed. The effectiveness of the proposed method is illustrated by an example of looking for global minima of a highly multimodal function.
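The probability-vector scheme described here is concrete enough to sketch. A rough Python rendering under assumed parameter values (population size, elite count, and learning rate are invented for the example, and the update is a simple move of the vector toward the elite mean):

import numpy as np

def prob_vector_hill_climb(f, n_bits, pop=50, best=5, lr=0.05,
                           iters=200, seed=0):
    """Minimize f over n-bit vectors with a learned probability vector
    (a sketch of the scheme described above; parameters are arbitrary)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                 # entry i = Pr(bit i == 1)
    for _ in range(iters):
        # Sample a neighborhood of candidate bit-vectors from p.
        cand = (rng.random((pop, n_bits)) < p).astype(int)
        scores = np.array([f(c) for c in cand])
        elite = cand[np.argsort(scores)[:best]]  # keep the few best
        # Hebbian-style update: move p toward the mean of the elite.
        p += lr * (elite.mean(axis=0) - p)
    return (p > 0.5).astype(int)

# Toy multimodal-ish objective: Hamming distance to a hidden target.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
f = lambda c: np.sum(c != target)
print(prob_vector_hill_climb(f, n_bits=len(target)))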
[ 163, 427, 1577 ]
Train
659
2
Title: Trading Spaces: Computation, Representation and the Limits of Uninformed Learning Abstract: Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. The order of names is arbitrary.
[ 11, 163, 397, 624, 625, 695 ]
Test
660
6
Title: Efficient Algorithms for Identifying Relevant Features Abstract: This paper describes efficient methods for exact and approximate implementation of the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This bias is useful for learning domains where many irrelevant features are present in the training data. We first introduce FOCUS-2, a new algorithm that exactly implements the MIN-FEATURES bias. This algorithm is empirically shown to be substantially faster than the FOCUS algorithm previously given in [ Almuallim and Dietterich, 1991 ]. We then introduce the Mutual-Information-Greedy, Simple-Greedy and Weighted-Greedy algorithms, which apply efficient heuristics for approximating the MIN-FEATURES bias. These algorithms employ greedy heuristics that trade optimality for computational efficiency. Experimental studies show that the learning performance of ID3 is greatly improved when these algorithms are used to preprocess the training data by eliminating the irrelevant features from ID3's consideration. In particular, the Weighted-Greedy algorithm provides an excellent and efficient approximation of the MIN-FEATURES bias.
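As a hedged illustration of the Mutual-Information-Greedy idea, the sketch below scores each feature independently by mutual information with the class and keeps the top k; the helper names and toy data are assumptions, and the paper's algorithms differ in detail:

import numpy as np

def mutual_information(x, y):
    """I(X; Y) for two discrete arrays, in nats (illustrative helper)."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def greedy_mi_selection(X, y, k):
    """Pick the k features with highest individual mutual information
    with the class label (a sketch, not the paper's exact algorithm)."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]

# Toy data: feature 0 determines the label, features 1-3 are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = X[:, 0]
print(greedy_mi_selection(X, y, k=2))  # feature 0 should rank first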
[ 635 ]
Validation
661
3
Title: A Statistical Approach to Decision Tree Modeling Abstract: A statistical approach to decision tree modeling is described. In this approach, each decision in the tree is modeled parametrically as is the process by which an output is generated from an input and a sequence of decisions. The resulting model yields a likelihood measure of goodness of fit, allowing ML and MAP estimation techniques to be utilized. An efficient algorithm is presented to estimate the parameters in the tree. The model selection problem is presented and several alternative proposals are considered. A hidden Markov version of the tree is described for data sequences that have temporal dependencies.
[ 71, 74, 76, 1191 ]
Train
662
2
Title: Free Energy Minimization Algorithm for Decoding and Cryptanalysis Abstract: We consider three binary vectors: a source vector s of length N, a noise vector n, and an observation z satisfying z = As + n (mod 2), where A is a binary matrix. Our task is to infer s given z and A, and given assumptions about the statistical properties of s and n. This problem arises in the decoding of a noisy signal transmitted using a linear code A, and in the inference of the sequence of a linear feedback shift register (LFSR) from noisy observations [1, 2].
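A toy rendering of this problem setup, with exhaustive search standing in for the free-energy minimization the abstract actually proposes (the sizes, noise rate, and variable names are illustrative assumptions):

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, M = 8, 16                                   # toy source and observation lengths
A = rng.integers(0, 2, size=(M, N))            # binary code matrix
s_true = rng.integers(0, 2, size=N)
noise = (rng.random(M) < 0.1).astype(int)      # sparse binary noise n
z = (A @ s_true + noise) % 2                   # observation z = A s + n (mod 2)

# Brute force stands in for the free-energy method: pick the s whose
# codeword A s differs from z in the fewest places (ML under sparse noise).
best = min(product([0, 1], repeat=N),
           key=lambda s: np.sum((A @ np.array(s)) % 2 != z))
print(np.array_equal(np.array(best), s_true))  # usually True at low noise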
[ 181, 518 ]
Validation
663
2
Title: Perceptual Development and Learning: From Behavioral, Neurophysiological, and Morphological Evidence To Computational Models Abstract: An intelligent system has to be capable of adapting to a constantly changing environment. It therefore ought to be capable of learning from its perceptual interactions with its surroundings. This requires a certain amount of plasticity in its structure. Any attempt to model the perceptual capabilities of a living system or, for that matter, to construct a synthetic system of comparable abilities, must therefore account for such plasticity through a variety of developmental and learning mechanisms. This paper examines some results from neuroanatomical, morphological, as well as behavioral studies of the development of visual perception; integrates them into a computational framework; and suggests several interesting experiments with computational models that can yield insights into the development of visual perception. In order to understand the development of information processing structures in the brain, one needs knowledge of the changes it undergoes from birth to maturity in the context of a normal environment. However, knowledge of its development in aberrant settings is also extremely useful, because it reveals the extent to which the development is a function of environmental experience (as opposed to genetically determined pre-wiring). Accordingly, we consider development of the visual system under both normal and restricted rearing conditions. The role of experience in the early development of the sensory systems in general, and the visual system in particular, has been widely studied through a variety of experiments involving carefully controlled manipulation of the environment presented to an animal. Extensive reviews of such results can be found in (Mitchell, 1984; Movshon, 1981; Hirsch, 1986; Boothe, 1986; Singer, 1986). Some examples of manipulation of visual experience are total pattern deprivation (e.g., dark rearing), selective deprivation of a certain class of patterns (e.g., vertical lines), monocular deprivation in animals with binocular vision, etc. Extensive studies involving behavioral deficits resulting from total visual pattern deprivation indicate that the deficits arise primarily as a result of impairment of visual information processing in the brain. The results of these experiments suggest specific developmental or learning mechanisms that may be operating at various stages of development, and at different levels in the system. We will discuss some of these mechanisms below.
[ 496, 501, 2393 ]
Train
664
2
Title: Diffusion of Context and Credit Information in Markovian Models Abstract: This paper studies the problem of ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it makes the task of learning to represent long-term context for sequential data very difficult. This phenomenon hurts the forward propagation of long-term context information, as well as learning a hidden state representation to represent long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., the transition probability matrices are sparse and the model essentially deterministic. The results found in this paper apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.
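The diffusion phenomenon is easy to reproduce numerically: powers of a "soft" stochastic transition matrix converge to identical rows (the start state is forgotten), while a near-deterministic matrix preserves the distinction. A small demo (the matrix entries are chosen purely for illustration):

import numpy as np

def mixing_demo(T, steps=50):
    """Show how powers of a stochastic transition matrix T wash out the
    initial state: rows of T^steps converge toward a single stationary row.
    Returns the per-column spread across rows (0 means fully mixed)."""
    P = np.linalg.matrix_power(T, steps)
    return P.max(axis=0) - P.min(axis=0)

# A 'soft' matrix diffuses context quickly...
soft = np.array([[0.6, 0.4],
                 [0.5, 0.5]])
# ...while a near-deterministic one (entries near 0 or 1) preserves it.
hard = np.array([[0.99, 0.01],
                 [0.01, 0.99]])
print(mixing_demo(soft))   # spread near 0: the start state is forgotten
print(mixing_demo(hard))   # spread stays large: context survives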
[ 350, 978 ]
Test
665
2
Title: Requirements and use of neural networks for industrial applications Abstract: Modern industry needs flexible, adaptive and fault-tolerant methods for information processing. Several applications have shown that neural networks fulfill these requirements. In this paper, application areas in which neural networks have been successfully used are presented. Then a kind of checklist is described, which covers the different steps in applying neural networks. The paper finishes with a discussion of some neural network projects done in the research group Interactive Planning at the Research Center for Computer Science (FZI).
[ 294, 427, 747 ]
Train
666
2
Title: NeuroPipe: a neural network based system for pipeline inspection Abstract: Gas, oil and other pipelines need to be inspected for corrosion and other defects at regular intervals. For this application, Pipetronix GmbH (PTX) Karlsruhe has developed a special ultrasonic probe. Based on the wall thicknesses recorded by this so-called pipe pig, the Research Center for Computer Science (FZI) has developed, in cooperation with PTX, an automatic inspection system called NeuroPipe. NeuroPipe has the task of detecting defects such as metal loss. The kernel of this inspection tool is a neural classifier which was trained using manually collected defect examples. The following paper focuses on the aspects of successful use of learning methods in an industrial application.
[ 745 ]
Train
667
2
Title: Recognizing Handwritten Digits Using Mixtures of Linear Models Abstract: We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance.
[ 257, 480, 867, 1356, 1928, 1974, 2072, 2227, 2270, 2570 ]
Train
668
2
Title: Nonlinear gated experts for time series: discovering regimes and avoiding overfitting Abstract: In: International Journal of Neural Systems 6 (1995), pp. 373-399. URL of this paper: ftp://ftp.cs.colorado.edu/pub/Time-Series/MyPapers/experts.ps.Z, or http://www.cs.colorado.edu/~andreas/Time-Series/MyPapers/experts.ps.Z. University of Colorado Computer Science Technical Report CU-CS-798-95. In the analysis and prediction of real-world systems, two of the key problems are nonstationarity (often in the form of switching between regimes), and overfitting (particularly serious for noisy processes). This article addresses these problems using gated experts, consisting of a (nonlinear) gating network, and several (also nonlinear) competing experts. Each expert learns to predict the conditional mean, and each expert adapts its width to match the noise level in its regime. The gating network learns to predict the probability of each expert, given the input. This article focuses on the case where the gating network bases its decision on information from the inputs. This can be contrasted to hidden Markov models where the decision is based on the previous state(s) (i.e., on the output of the gating network at the previous time step), as well as to averaging over several predictors. In contrast, gated experts soft-partition the input space. This article discusses the underlying statistical assumptions, derives the weight update rules, and compares the performance of gated experts to standard methods on three time series: (1) a computer-generated series, obtained by randomly switching between two nonlinear processes, (2) a time series from the Santa Fe Time Series Competition (the light intensity of a laser in chaotic state), and (3) the daily electricity demand of France, a real-world multivariate problem with structure on several time scales. The main results are (1) the gating network correctly discovers the different regimes of the process, (2) the widths associated with each expert are important for the segmentation task (and they can be used to characterize the sub-processes), and (3) there is less overfitting compared to single networks (homogeneous multi-layer perceptrons), since the experts learn to match their variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data.
[ 263, 310, 604, 975, 1315, 1508, 2413, 2414, 2562, 2683 ]
Train
669
5
Title: Drug design by machine learning: Modelling drug activity Abstract: This paper describes an approach to modelling drug activity using machine learning tools. Some experiments in modelling the quantitative structure-activity relationship (QSAR) using a standard Hansch method and the machine learning system Golem have already been reported in the literature. The paper describes the results of applying two other machine learning systems, Magnus Assistant and Retis, to the same data. The results achieved by the machine learning systems are better than the results of the Hansch method; therefore, machine learning tools can be considered very promising for solving this kind of problem. The given results also illustrate the variation in performance of the different machine learning systems applied to this drug design problem.
[ 509 ]
Validation
670
0
Title: Rule Based Database Integration in HIPED Heterogeneous Intelligent Processing in Engineering Design Abstract: In this paper we describe one aspect of our research in the project called HIPED, which addressed the problem of performing design of engineering devices by accessing heterogeneous databases. The front end of the HIPED system consisted of interactive KRITIK, a multimodal reasoning system that combined case-based and model-based reasoning to solve a design problem. This paper focuses on the backend processing, where five types of queries received from the front end are evaluated by mapping them appropriately using the "facts" about the schemas of the underlying databases and "rules" that establish the correspondence among the data in these databases in terms of relationships such as equivalence, overlap and set containment. The uniqueness of our approach stems from the fact that the mapping process is very forgiving, in that the query received from the front end is evaluated with respect to a large number of possibilities. These possibilities are encoded in the form of rules that consider various ways in which the tokens in the given query may match relation names, attribute names, or values in the underlying tables. The approach has been implemented using the CORAL deductive database system as the rule processing engine.
[ 603 ]
Train
671
4
Title: Learning to Achieve Goals Abstract: Temporal difference methods solve the temporal credit assignment problem for reinforcement learning. An important subproblem of general reinforcement learning is learning to achieve dynamic goals. Although existing temporal difference methods, such as Q learning, can be applied to this problem, they do not take advantage of its special structure. This paper presents the DG-learning algorithm, which learns efficiently to achieve dynamically changing goals and exhibits good knowledge transfer between goals. In addition, this paper shows how traditional relaxation techniques can be applied to the problem. Finally, experimental results are given that demonstrate the superiority of DG learning over Q learning in a moderately large, synthetic, non-deterministic domain.
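A minimal goal-conditioned tabular Q-learning sketch in the spirit of learning action values per goal; it is not the DG-learning update itself, and the chain world, learning rate, and episode budget are invented for the example:

import numpy as np

# Goal-conditioned tabular Q-learning on a small deterministic chain.
n_states, n_goals = 8, 8
actions = [-1, +1]                              # move left / right
Q = np.zeros((n_goals, n_states, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(2000):
    g = rng.integers(n_goals)                   # goal for this episode
    s = rng.integers(n_states)
    for _ in range(50):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[g, s]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == g else 0.0
        # Reaching the goal is terminal; otherwise bootstrap as usual.
        target = r if s2 == g else r + gamma * Q[g, s2].max()
        Q[g, s, a] += alpha * (target - Q[g, s, a])
        s = s2
        if s == g:
            break

# Greedy policy for goal 6 starting at state 1 should walk right.
print(np.argmax(Q[6, 1]))   # 1 == move right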
[ 552, 562, 565, 566, 1459, 1599 ]
Test
672
6
Title: Cryptographic Limitations on Learning Boolean Formulae and Finite Automata Abstract: In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring. This research was conducted while the author was at Harvard University and supported by an A.T.&T. Bell Laboratories scholarship. Also supported by grants ONR-N00014-85-K-0445, NSF-DCR-8606366 and NSF-CCR-89-02500, DAAL03-86-K-0171, DARPA AFOSR 89-0506, and by SERC.
[ 109, 130, 456, 535, 549, 574, 591, 924, 1003, 1004, 1293, 1343, 1363, 1386, 1460, 2004, 2040, 2329, 2360, 2653, 2696 ]
Train
673
2
Title: Increasing Consensus Accuracy in DNA Fragment Assemblies by Incorporating Fluorescent Trace Representations Abstract: We present a new method for determining the consensus sequence in DNA fragment assemblies. The new method, Trace-Evidence, directly incorporates aligned ABI trace information into consensus calculations via our previously described representation, TraceData Classifications. The new method extracts and sums evidence indicated by the representation to determine consensus calls. Using the Trace-Evidence method results in automatically produced consensus sequences that are more accurate and less ambiguous than those produced with standard majority-voting methods. Additionally, these improvements are achieved with less coverage than required by the standard methods: using Trace-Evidence and a coverage of only three, error rates are as low as those with a coverage of over ten sequences.
[ 194 ]
Test
674
2
Title: Transfer in a Connectionist Model of the Acquisition of Morphology Abstract: The morphological systems of natural languages are replete with examples of the same devices used for multiple purposes: (1) the same type of morphological process (for example, suffixation for both noun case and verb tense) and (2) identical morphemes (for example, the same suffix for English noun plural and possessive). These sorts of similarity would be expected to convey advantages on language learners in the form of transfer from one morphological category to another. Connectionist models of morphology acquisition have been faulted for their supposed inability to represent phonological similarity across morphological categories and hence to facilitate transfer. This paper describes a connectionist model of the acquisition of morphology which is shown to exhibit transfer of this type. The model treats the morphology acquisition problem as one of learning to map forms onto meanings and vice versa. As the network learns these mappings, it makes phonological generalizations which are embedded in connection weights. Since these weights are shared by different morphological categories, transfer is enabled. In a set of experiments with artificial stimuli, networks were trained first on one morphological task (e.g., tense) and then on a second (e.g., number). It is shown that in the context of suffixation, prefixation, and template rules, the second task is facilitated when the second category either makes use of the same forms or the same general process type (e.g., prefixation) as the first.
[ 427 ]
Train
675
5
Title: Combining FOIL and EBG to Speed-up Logic Programs Abstract: This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.
[ 344, 597, 1429, 1442, 1445, 2215, 2339 ]
Validation
676
2
Title: A Model of Invariant Object Recognition in the Visual System Abstract: Neurons in the ventral stream of the primate visual system exhibit responses to the images of objects which are invariant with respect to natural transformations such as translation, size, and view. Anatomical and neurophysiological evidence suggests that this is achieved through a series of hierarchical processing areas. In an attempt to elucidate the manner in which such representations are established, we have constructed a model of cortical visual processing which seeks to parallel many features of this system, specifically the multi-stage hierarchy with its topologically constrained convergent connectivity. Each stage is constructed as a competitive network utilising a modified Hebb-like learning rule, called the trace rule, which incorporates previous as well as current neuronal activity. The trace rule enables neurons to learn about whatever is invariant over short time periods (e.g. 0.5 s) in the representation of objects as the objects transform in the real world. The trace rule enables neurons to learn the statistical invariances about objects during their transformations, by associating together representations which occur close together in time. We show that by using the trace rule training algorithm the model can indeed learn to produce transformation invariant responses to natural stimuli such as faces.
[ 605, 1014, 1056, 1091 ]
Train
677
2
Title: Recurrent Neural Networks for Missing or Asynchronous Data Abstract: In this paper we propose recurrent neural networks with feedback into the input units for handling two types of data analysis problems. On the one hand, this scheme can be used for static data when some of the input variables are missing. On the other hand, it can also be used for sequential data, when some of the input variables are missing or are available at different frequencies. Unlike in the case of probabilistic models (e.g. Gaussian) of the missing variables, the network does not attempt to model the distribution of the missing variables given the observed variables. Instead it is a more "discriminant" approach that fills in the missing variables for the sole purpose of minimizing a learning criterion (e.g., to minimize an output error).
[ 71 ]
Train
678
2
Title: Algebraic Transformations of Objective Functions Abstract: Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of the winner-take-all network from O(N^2) connections to O(N). Also susceptible are inexact graph-matching, random dot matching, convolutions and coordinate transformations, and sorting. Simulations show that fixpoint-preserving transformations may be applied repeatedly and elaborately, and the example networks still robustly converge.
[ 528 ]
Validation
679
0
Title: Developing Case-Based Reasoning for Structural Design Abstract: The use of case-based reasoning as a process model of design involves the subtasks of recalling previously known designs from memory and adapting these design cases or subcases to fit the current design context. The development of this process model for a particular design domain proceeds in parallel with the development of a representation for the cases, the case memory organisation, and the design knowledge needed in addition to specific designs. The selection of a particular representational paradigm for these types of information, and the details of its use for a particular problemsolving domain, depend on the intended use of the information to be represented and the project information available, as well as the nature of the domain. In this paper we describe the development and implementation of four case-based design systems: CASECAD, CADSYN, WIN, and DEMEX. Each system is described in terms of the content, organisation, and source of case memory, and the implementation of case recall and case adaptation. A comparison of these systems considers the relative advantages and disadvantages of the implementations.
[ 30, 32 ]
Train
680
2
Title: Cortical Mechanisms of Visual Recognition and Learning: A Hierarchical Kalman Filter Model Abstract: We describe a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. Each hierarchical level in the network predicts the current visual recognition state at a lower level and adapts its own recognition state using the residual error between the prediction and the actual lower-level state. Simultaneously, the network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors. The Kalman filter model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas, and assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. Previous work elucidated the usefulness of this model in explaining neurophysiological phenomena such as endstopping and other related extra-classical receptive field effects. In this paper, in addition to providing a more detailed exposition of the model, we present a variety of experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences in the presence of varying amounts of occlusion, background clutter, and noise.
[ 74, 90, 608, 747 ]
Test
681
1
Title: Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution. Abstract: Individual lifetime learning can 'guide' an evolving population to areas of high fitness in genotype space through an evolutionary phenomenon known as the Baldwin effect (Baldwin, 1896; Hinton & Nowlan, 1987). It is the accepted wisdom that this guiding speeds up the rate of evolution. By highlighting another interaction between learning and evolution, which will be termed the Hiding effect, it will be argued here that this depends on the measure of evolutionary speed one adopts. The Hiding effect shows that learning can reduce the selection pressure between individuals by 'hiding' their genetic differences. There is thus a trade-off between the Baldwin effect and the Hiding effect to determine learning's influence on evolution, and two factors that contribute to this trade-off, the cost of learning and landscape epistasis, are investigated experimentally.
[ 402, 712, 2309 ]
Validation
682
4
Title: Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators Abstract: This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.
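A bare-bones racing loop in the spirit of this abstract: every surviving parameter setting gets one more evaluation per round, and settings whose running mean is clearly worse than the leader's are dropped early. The elimination margin below is a crude stand-in for the paper's statistical machinery, and all names and the toy task are assumptions:

import numpy as np

def race(candidates, evaluate, rounds=30, seed=0):
    """Racing over hyper-parameter settings: spend evaluations only on
    settings still plausibly best (illustrative elimination rule)."""
    rng = np.random.default_rng(seed)
    scores = {c: [] for c in candidates}
    alive = list(candidates)
    for _ in range(rounds):
        case = rng.random()                      # one more validation instance
        for c in alive:
            scores[c].append(evaluate(c, case))
        means = {c: np.mean(scores[c]) for c in alive}
        best = min(means.values())
        # Drop candidates whose mean error exceeds the leader's by a margin
        # that shrinks as evidence accumulates.
        margin = 2.0 / np.sqrt(len(scores[alive[0]]))
        alive = [c for c in alive if means[c] <= best + margin]
    return min(alive, key=lambda c: np.mean(scores[c]))

# Toy task: pick the bandwidth whose squared error to an ideal 0.3 is lowest.
evaluate = lambda h, case: (h - 0.3) ** 2 + 0.05 * case
print(race([0.1, 0.3, 0.5, 0.9], evaluate))      # expected: 0.3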
[ 88, 1791, 2430 ]
Train
683
0
Title: On the Greediness of Feature Selection Algorithms Abstract: Based on our analysis and experiments using real-world datasets, we find that the greediness of forward feature selection algorithms does not severely corrupt the accuracy of function approximation using the selected input features, but improves the efficiency significantly. Hence, we propose three greedier algorithms in order to further enhance the efficiency of the feature selection processing. We provide empirical results for linear regression, locally weighted regression and k-nearest-neighbor models. We also propose to use these algorithms to develop an offline Chinese and Japanese handwriting recognition system with automatically configured, local models.
[ 430, 632, 635, 686, 1860 ]
Train
684
3
Title: Finding Overlapping Distributions with MML Abstract: This paper considers an aspect of mixture modelling. Significantly overlapping distributions require more data for their parameters to be accurately estimated than well separated distributions. For example, two Gaussian distributions are considered to significantly overlap when their means are within three standard deviations of each other. If insufficient data is available, only a single component distribution will be estimated, although the data originates from two component distributions. We consider how much data is required to distinguish two component distributions from one distribution in mixture modelling using the minimum message length (MML) criterion. First, we perform experiments which show the MML criterion performs well relative to other Bayesian criteria. Second, we make two improvements to the existing MML estimates, that improve its performance with overlapping distributions.
[ 161, 525, 779, 1425, 1550 ]
Train
685
2
Title: Performance of the GCel-512 and PowerXPlorer for parallel neural network simulations Abstract: This report presents new results from work performed in the framework of the IC3A programme. Using the GCel-512 and the PowerXPlorer made available by the UvA, a performance prediction model for several neural network simulations could be validated quantitatively, both for a larger processor grid and for a different target parallel processor configuration. The performance prediction model and its application to a popular neural network model (backpropagation), decomposed via network decomposition, will be discussed here. Using the model, the suitability of the GCel-512 and PowerXPlorer is discussed in terms of performance, speedup, efficiency and scalability.
[ 217, 747 ]
Validation
686
0
Title: Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms Abstract: With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.
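The random mutation hill climbing variant is straightforward to sketch: mutate one prototype index at a time and keep the mutation whenever 1-NN training accuracy does not drop. A small numpy rendering (function names, iteration budget, and the toy two-cluster data are assumptions, not the paper's experiments):

import numpy as np

def rmhc_prototypes(X, y, m, iters=500, seed=0):
    """Random mutation hill climbing over index-sets of m prototypes
    (a sketch of the search loop described above)."""
    rng = np.random.default_rng(seed)

    def accuracy(idx):
        # 1-NN classification of all training points by the prototypes.
        d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        return np.mean(y[idx][np.argmin(d, axis=1)] == y)

    idx = rng.choice(len(X), size=m, replace=False)
    acc = accuracy(idx)
    for _ in range(iters):
        new = idx.copy()
        new[rng.integers(m)] = rng.integers(len(X))   # mutate one slot
        new_acc = accuracy(new)
        if new_acc >= acc:                            # greedy accept
            idx, acc = new, new_acc
    return idx, acc

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(rmhc_prototypes(X, y, m=3))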
[ 44, 116, 119, 223, 225, 318, 381, 430, 634, 635, 683, 760, 1273, 1698, 2541, 2557 ]
Train
687
2
Title: Growing Cell Structures: A Self-organizing Network for Unsupervised and Supervised Learning Abstract: We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible, in contrast to earlier approaches, to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented which are better than any results previously published. Submitted for publication.
[ 110, 626, 741, 745, 1157, 1564, 1700, 1704, 2381, 2683 ]
Train
688
4
Title: Exploration and Model Building in Mobile Robot Domains Abstract: I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours.
[ 60, 207, 252, 412, 552, 566, 835, 1248 ]
Train
689
1
Title: Where Genetic Algorithms Excel Abstract: We analyze the performance of a Genetic Algorithm (GA) we call Culling and a variety of other algorithms on a problem we refer to as Additive Search Problem (ASP). ASP is closely related to several previously well studied problems, such as the game of Mastermind and additive fitness functions. We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Culling is efficient on ASP, highly noise tolerant, and the best known approach in some regimes. Noisy ASP is the first problem we are aware of where a Genetic Type Algorithm bests all known competitors. Standard GA's, by contrast, perform much more poorly on ASP than hillclimbing and other approaches even though the Schema theorem holds for ASP. We generalize ASP to k-ASP to study whether GA's will achieve `implicit parallelism' in a problem with many more schemata. GA's fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a Mean Field Theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GA's can beat competing methods.
[ 163, 967, 1580, 1826 ]
Train
690
0
Title: Instance Pruning Techniques Abstract: The nearest neighbor algorithm and its derivatives are often quite successful at learning a concept from a training set and providing good generalization on subsequent input vectors. However, these techniques often retain the entire training set in memory, resulting in large memory requirements and slow execution speed, as well as a sensitivity to noise. This paper provides a discussion of issues related to reducing the number of instances retained in memory while maintaining (and sometimes improving) generalization accuracy, and mentions algorithms other researchers have used to address this problem. It presents three intuitive noise-tolerant algorithms that can be used to prune instances from the training set. In experiments on 29 applications, the algorithm that achieves the highest reduction in storage also results in the highest generalization accuracy of the three methods.
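The paper's three pruning algorithms are not reproduced here, but Wilson-style editing gives the flavor of noise-tolerant instance pruning: drop any instance that disagrees with the majority label of its k nearest neighbors. A sketch (the toy data and the injected noisy label are assumptions for the demo):

import numpy as np

def prune_noisy_instances(X, y, k=3):
    """Keep an instance only if the majority label of its k nearest
    neighbors agrees with its own label (a simple stand-in for the
    paper's more refined algorithms)."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]              # skip the instance itself
        votes = np.bincount(y[nn], minlength=y.max() + 1)
        if np.argmax(votes) == y[i]:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[0] = 1                                         # inject one noisy label
kept = prune_noisy_instances(X, y)
print(len(kept), 0 in kept)                      # noisy instance is dropped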
[ 445, 2457 ]
Train
691
4
Title: Reinforcement Learning in the Multi-Robot Domain Abstract: This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environments such as the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use of behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the approach on a group of four mobile robots learning a foraging task.
[ 552, 565, 1557, 1649, 2658 ]
Test
692
6
Title: Decision Tree Induction: How Effective is the Greedy Heuristic? Abstract: Most existing decision tree systems use a greedy approach to induce trees: locally optimal splits are induced at every node of the tree. Although the greedy approach is suboptimal, it is believed to produce reasonably good trees. In the current work, we attempt to verify this belief. We quantify the goodness of greedy tree induction empirically, using the popular decision tree algorithms C4.5 and CART. We induce decision trees on thousands of synthetic data sets and compare them to the corresponding optimal trees, which in turn are found using a novel map coloring idea. We measure the effect on greedy induction of variables such as the underlying concept complexity, training set size, noise and dimensionality. Our experiments show, among other things, that the expected classification cost of a greedily induced tree is consistently very close to that of the optimal tree.
[ 296, 378, 438, 1191 ]
Train
693
2
Title: Further Facts About Input to State Stabilization Abstract: Report SYCON-88-15. Previous results about input to state stabilizability are shown to hold even for systems which are not linear in controls, provided that a more general type of feedback be allowed. Applications to certain stabilization problems and coprime factorizations, as well as comparisons to other results on input to state stability, are also briefly discussed.
[ 531, 648, 1471, 1501 ]
Train