Schema of the sampled records (column statistics recovered from the source table):

column     type    values
node_id    int64   0 to 76.9k
label      int64   0 to 39
text       string  lengths 13 to 124k characters
neighbors  list    lengths 0 to 3.32k entries
mask       string  4 distinct values (Train, Validation, and Test appear in this excerpt)
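Before the sample records, a minimal sketch of how a table with this schema could be loaded and partitioned by the mask column, assuming the records are published as a Hugging Face dataset; the repo id "user/citation-graph" is a placeholder, since this excerpt does not name the dataset.

```python
# Minimal sketch, not the dataset's documented loader: "user/citation-graph"
# is a hypothetical repo id standing in for wherever this table is hosted.
from datasets import load_dataset

ds = load_dataset("user/citation-graph", split="train")  # all rows; splits live in `mask`

# Count rows per split as encoded by the mask column.
for split_name in ("Train", "Validation", "Test"):
    subset = ds.filter(lambda row, s=split_name: row["mask"] == s)
    print(split_name, len(subset))
```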
node_id: 2294 | label: 0
Title: Cooperative Bayesian and Case-Based Reasoning for Solving Multiagent Planning Tasks Abstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.
neighbors: [66, 649, 1140, 2380, 2529] | mask: Train

node_id: 2295 | label: 1
Title: Diplomarbeit A Genetic Algorithm for the Topological Optimization of Neural Networks Abstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.
neighbors: [163, 427, 881, 2667] | mask: Train

node_id: 2296 | label: 1
Title: TECHNIQUES FOR REDUCING THE DISRUPTION OF SUPERIOR BUILDING BLOCKS IN GENETIC ALGORITHMS Abstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.
neighbors: [145, 163, 2248, 2251] | mask: Test

node_id: 2297 | label: 6
Title: Efficient Construction of Networks for Learned Representations with General to Specific Relationships Abstract: Machine learning systems often represent concepts or rules as sets of attribute-value pairs. Many learning algorithms generalize or specialize these concept representations by removing or adding pairs. Thus concepts are created that have general to specific relationships. This paper presents algorithms to connect concepts into a network based on their general to specific relationships. Since any concept can access related concepts quickly, the resulting structure allows increased efficiency in learning and reasoning. The time complexity of one set of learning models improves from O(n log n) to O(log n) (where n is the number of nodes) when using the general to specific structure.
neighbors: [2304] | mask: Validation

node_id: 2298 | label: 1
Title: Convergence Analysis of Canonical Genetic Algorithms Abstract: This paper analyzes the convergence properties of the canonical genetic algorithm (CGA) with mutation, crossover and proportional reproduction applied to static optimization problems. It is proved by means of homogeneous finite Markov chain analysis that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. But variants of CGAs that always maintain the best solution in the population, either before or after selection, are shown to converge to the global optimum due to the irreducibility property of the underlying original nonconvergent CGA. These results are discussed with respect to the schema theorem.
neighbors: [163, 2518] | mask: Train

node_id: 2299 | label: 0
Title: Case Retrieval Nets Applied to Large Case Bases Abstract: This article presents some experimental results obtained from applying the Case Retrieval Net approach to large case bases. The obtained results suggest that CRNs can successfully handle case bases larger than considered in other reports.
neighbors: [75, 1854, 1855] | mask: Train

node_id: 2300 | label: 5
Title: Applying a Machine Learning Workbench: Experience with Agricultural Databases Abstract: This paper reviews our experience with the application of machine learning techniques to agricultural databases. We have designed and implemented a machine learning workbench, WEKA, which permits rapid experimentation on a given dataset using a variety of machine learning schemes, and has several facilities for interactive investigation of the data: preprocessing attributes, evaluating and comparing the results of different schemes, and designing comparative experiments to be run offline. We discuss the partnership between agricultural scientist and machine learning researcher that our experience has shown to be vital to success. We review in some detail a particular agricultural application concerned with the culling of dairy herds.
neighbors: [479, 1337, 2636] | mask: Validation

node_id: 2301 | label: 3
Title: Representing Preferences as Ceteris Paribus Comparatives Abstract: Decision-theoretic preferences specify the relative desirability of all possible outcomes of alternative plans. In order to express general patterns of preference holding in a domain, we require a language that can refer directly to preferences over classes of outcomes as well as individuals. We present the basic concepts of a theory of meaning for such generic comparatives to facilitate their incremental capture and exploitation in automated reasoning systems. Our semantics lifts comparisons of individuals to comparisons of classes "other things being equal" by means of contextual equivalences, equivalence relations among individuals that vary with the context of application. We discuss implications of the theory for representing preference information.
neighbors: [1901, 1907, 2588] | mask: Validation

node_id: 2302 | label: 1
Title: Genes, Phenes and the Baldwin Effect: Learning and Evolution in a Simulated Population Abstract: The Baldwin Effect, first proposed in the late nineteenth century, suggests that the course of evolutionary change can be influenced by individually learned behavior. The existence of this effect is still a hotly debated topic. In this paper clear evidence is presented that learning-based plasticity at the phenotypic level can and does produce directed changes at the genotypic level. This research confirms earlier experimental work done by others, notably Hinton & Nowlan (1987). Further, the amount of plasticity of the learned behavior is shown to be crucial to the size of the Baldwin Effect: either too little or too much and the effect disappears or is significantly reduced. Finally, for learnable traits, the case is made that over many generations it will become easier for the population as a whole to learn these traits (i.e. the phenotypic plasticity of these traits will increase). In this gradual transition from a genetically driven population to one driven by learning, the importance of the Baldwin Effect decreases.
neighbors: [403, 1353, 1880, 2104, 2111] | mask: Train

node_id: 2303 | label: 0
Title: Case-based reactive navigation: A case-based method for on-line selection and adaptation of reactive control parameters Abstract: This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including "box canyon" environments known to be problematic for reactive control systems in general. Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, 1992.
neighbors: [858, 1951, 2035] | mask: Train

node_id: 2304 | label: 2
Title: GENERALIZATION BY CONTROLLED EXPANSION OF EXAMPLES Abstract: SG (Specific to General) is a learning system that derives general rules from specific examples. SG learns incrementally with good speed and generalization. The SG network is built of many simple nodes that adapt to the problem being learned. Learning is done without requiring user adjustment of sensitive parameters and noise is tolerated with graceful degradation in performance. Nodes learn important features in the input space and then monitor the ability of the features to predict output values. Learning is O(n log n) for each example, where n is the number of nodes in the network, and the number of inputs and output values are treated as constants. An enhanced network topology reduces time complexity to O(log n). Empirical results show that the model gives good generalization and that learning converges in a small number of training passes.
neighbors: [908, 2297] | mask: Test

node_id: 2305 | label: 4
Title: Analytical Mean Squared Error Curves for Temporal Difference Learning Abstract: We provide analytical expressions governing changes to the bias and variance of the lookup table estimators provided by various Monte Carlo and temporal difference value estimation algorithms with offline updates over trials in absorbing Markov reward processes. We have used these expressions to develop software that serves as an analysis tool: given a complete description of a Markov reward process, it rapidly yields an exact mean-square-error curve, the curve one would get from averaging together sample mean-square-error curves from an infinite number of learning trials on the given problem. We use our analysis tool to illustrate classes of mean-square-error curve behavior in a variety of example reward processes, and we show that although the various temporal difference algorithms are quite sensitive to the choice of step-size and eligibility-trace parameters, there are values of these parameters that make them similarly competent, and generally good.
neighbors: [321, 2150, 2183] | mask: Train

node_id: 2306 | label: 2
Title: On the Applicability of Neural Network and Machine Learning Methodologies to Natural Language Processing Abstract: We examine the inductive inference of a complex grammar specifically, we consider the task of training a model to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. We investigate the following models: feed-forward neural networks, Fransconi-Gori-Soda and Back-Tsoi locally recurrent networks, Elman, Narendra & Parthasarathy, and Williams & Zipser recurrent networks, Euclidean and edit-distance nearest-neighbors, simulated annealing, and decision trees. The feed-forward neural networks and non-neural network machine learning models are included primarily for comparison. We address the question: How can a neural network, with its distributed nature and gradient descent based iterative calculations, possess linguistic capability which is traditionally handled with symbolic computation and recursive processes? Initial simulations with all models were only partially successful by using a large temporal window as input. Models trained in this fashion did not learn the grammar to a significant degree. Attempts at training recurrent networks with small temporal input windows failed until we implemented several techniques aimed at improving the convergence of the gradient descent training algorithms. We discuss the theory and present an empirical study of a variety of models and learning algorithms which highlights behaviour not present when attempting to learn a simpler grammar.
neighbors: [427, 2049, 2594] | mask: Validation

node_id: 2307 | label: 2
Title: Parallel Gradient Distribution in Unconstrained Optimization Abstract: A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicate a speedup of the order of the number of processors employed.
neighbors: [406, 1772, 1939] | mask: Train

node_id: 2308 | label: 0
Title: Problem Formulation, Program Synthesis and Program Transformation Techniques for Simulation, Optimization and Constraint Satisfaction (Research Statement) Abstract: A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicate a speedup of the order of the number of processors employed.
neighbors: [240, 2652] | mask: Train

node_id: 2309 | label: 4
Title: EVOLVING SENSORS IN ENVIRONMENTS OF CONTROLLED COMPLEXITY Abstract: Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment, and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of "latent energy environments" (LEE) models to define environments of carefully controlled complexity which allow us to state bounds for random and optimal behaviors that are independent of strategies for achieving the behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of "contact" and "ambient" sensors, and variants where the NNets are not allowed to learn, learn via error correction from internal prediction, and via reinforcement learning. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a tradeoff between the genetic "robustness" of sensors and their informativeness to a learning system.
neighbors: [403, 538, 681, 1325, 1969] | mask: Validation

node_id: 2310 | label: 0
Title: Machine Learning: An Annotated Bibliography for the 1995 AI Statistics Tutorial on Machine Learning (Version 1) Abstract: This is a brief annotated bibliography that I wanted to make available to the attendees of my Machine Learning tutorial at the 1995 AI & Statistics Workshop. These slides are available in my WWW pages under slides. Please contact me if you have any questions. Please also note the date (listed above) on which this was most recently updated. While I plan to make occasional updates to this file, it is bound to be outdated quickly. Also, I apologize for the lack of figures, but my time on this project is limited and the slides should compensate. Finally, this bibliography is, by definition, This book is now out of date. Both Pat Langley and Tom Mitchell are in the process of writing textbooks on this subject, but we're still waiting for them. Until then, I suggest looking at both the Readings and the recent ML conference proceedings (both International and European). There are also a few introductory papers on this subject, though I haven't gotten around to putting them in here yet. However, Pat Langley and Dennis Kibler (1988) have written a good paper on ML as an empirical science, and Pat has written several editorials of use to the ML author (Langley 1986; 1987; 1990). incomplete, and I've left out many other references that may be of some use.
neighbors: [66, 318, 2583, 2607] | mask: Train

node_id: 2311 | label: 3
Title: Bayesian MARS Abstract: A Bayesian approach to multivariate adaptive regression spline (MARS) fitting (Friedman, 1991) is proposed. This takes the form of a probability distribution over the space of possible MARS models which is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995). The generated sample of MARS models produced is shown to have good predictive power when averaged and allows easy interpretation of the relative importance of predictors to the overall fit.
neighbors: [161, 2285, 2448] | mask: Train

node_id: 2312 | label: 5
Title: Theory-Guided Induction of Logic Programs by Inference of Regular Languages recursive clauses. merlin on the Abstract: resent allowed sequences of resolution steps for the initial theory. There are, however, many characterizations of allowed sequences of resolution steps that cannot be expressed by a set of resolvents. One approach to this problem is presented, the system mer-lin, which is based on an earlier technique for learning finite-state automata that represent allowed sequences of resolution steps. merlin extends the previous technique in three ways: i) negative examples are considered in addition to positive examples, ii) a new strategy for performing generalization is used, and iii) a technique for converting the learned automaton to a logic program is included. Results from experiments are presented in which merlin outperforms both a system using the old strategy for performing generalization, and a traditional covering technique. The latter result can be explained by the limited expressiveness of hypotheses produced by covering and also by the fact that covering needs to produce the correct base clauses for a recursive definition before
neighbors: [521, 1082, 1259, 2587] | mask: Train

node_id: 2313 | label: 3
Title: PERFECT SIMULATION OF CONDITIONALLY SPECIFIED MODELS Abstract: We discuss how the ideas of producing perfect simulations based on coupling from the past for finite state space models naturally extend to multivariate distributions with infinite or uncountable state spaces such as auto-gamma, auto-Poisson and auto-negative-binomial models, using Gibbs sampling in combination with sandwiching methods originally introduced for perfect simulation of point processes.
neighbors: [2208] | mask: Train

node_id: 2314 | label: 2
Title: GENERAL CLASSES OF CONTROL-LYAPUNOV FUNCTIONS Abstract: The main result of this paper establishes the equivalence between null asymptotic controllability of nonlinear finite-dimensional control systems and the existence of continuous control-Lyapunov functions (clf's) defined by means of generalized derivatives. In this manner, one obtains a complete characterization of asymptotic controllability, applying in principle to a far wider class of systems than Artstein's Theorem (which relates closed-loop feedback stabilization to the existence of smooth clf's). The proof relies on viability theory and optimal control techniques. 1. Introduction. In this paper, we study systems of the general form
neighbors: [2187] | mask: Train

node_id: 2315 | label: 6
Title: Metric Entropy and Minimax Risk in Classification Abstract: We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the
neighbors: [109, 2287] | mask: Train

node_id: 2316 | label: 1
Title: Guided Crossover: A New Operator for Genetic Algorithm Based Optimization Abstract: Genetic algorithms (GAs) have been extensively used in different domains as a means of doing global optimization in a simple yet reliable manner. They have a much better chance of getting to global optima than gradient based methods which usually converge to local sub optima. However, GAs have a tendency of getting only moderately close to the optima in a small number of iterations. To get very close to the optima, the GA needs a very large number of iterations. Whereas gradient based optimizers usually get very close to local optima in a relatively small number of iterations. In this paper we describe a new crossover operator which is designed to endow the GA with gradient-like abilities without actually computing any gradients and without sacrificing global optimality. The operator works by using guidance from all members of the GA population to select a direction for exploration. Empirical results in two engineering design domains and across both binary and floating point representations demonstrate that the operator can significantly improve the steady state error of the GA optimizer.
neighbors: [163, 743, 744, 2030, 2659] | mask: Validation

node_id: 2317 | label: 1
Title: Cellular Encoding Applied to Neurocontrol Abstract: Neural networks are trained for balancing 1 and 2 poles attached to a cart on a fixed track. For one variant of the single pole system, only pole angle and cart position variables are supplied as inputs; the network must learn to compute velocities. All of the problems are solved using a fixed architecture and using a new version of cellular encoding that evolves an application specific architecture with real-valued weights. The learning times and generalization capabilities are compared for neural networks developed using both methods. After a post processing simplification, topologies produced by cellular encoding were very simple and could be analyzed. Architectures with no hidden units were produced for the single pole and the two pole problem when velocity information is supplied as an input. Moreover, these linear solutions display good generalization. For all the control problems, cellular encoding can automatically generate architectures whose complexity and structure reflect the features of the problem to solve.
neighbors: [1353, 2429, 2624] | mask: Validation

node_id: 2318 | label: 3
Title: EXACT TRANSITION PROBABILITIES FOR THE INDEPENDENCE METROPOLIS SAMPLER Abstract: A recent result of Jun Liu's has shown how to compute explicitly the eigenvalues and eigenvectors for the Markov chain derived from a special case of the Hastings sampling algorithm, known as the independence Metropolis sampler. In this note, we show how to extend the result to obtain exact n-step transition probabilities for any n. This is done first for a chain on a finite state space, and then extended to a general (discrete or continuous) state space. The paper concludes with some implications for diagnostic tests of convergence of Markov chain samplers.
neighbors: [491, 1977, 2153] | mask: Train

node_id: 2319 | label: 0
Title: Learning When Reformulation is Appropriate for Iterative Design Abstract: It is well known that search-space reformulation can improve the speed and reliability of numerical optimization in engineering design. We argue that the best choice of reformulation depends on the design goal, and present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. We tested our technique in the domain of racing-yacht-hull design, where each reformulation corresponds to incorporating constraints into the search space. We applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session. We then used these rules to choose an appropriate reformulation for each of a set of test cases. Our experimental results show that using these reformulations improves both the speed and the reliability of design optimization, outperforming competing methods and approaching the best performance possible.
neighbors: [227, 2131, 2479] | mask: Train

node_id: 2320 | label: 6
Title: Inserting the best known bounds for weighted bipar tite matching [11], with 1=2 p polynomial-time Abstract: we apply the reduction to their two core children, the total sum of their matching weights becomes O(n), and if for each comparison of a spine node and a critical node we apply the reduction to the core child of the spine node, the total sum of their matching weights becomes O(n). With regards to the O( 2 ) comparisons of two critical nodes, their sum cannot exceed O( 2 n) in total weight. Thus, since we have a total of O(n) edges involved in the matchings, in time O(n), we can reduce the total sum of the matching weights to O( 2 n). Theorem 6.6 Let M : IR 1 ! IR be a monotone function bounding the time complexity UWBM. Moreover, let M satisfy that M (x) = x 1+" f(x), where " 0 is a constant, f (x) = O(x o(1) ), f is monotone, and for some constants b 1 ; b 2 , 8x; y b 1 : f(xy) b 2 f (x)f (y). Then, with = " p in time O(n 1+o(1) + M (n)). Proof: We spend O(n polylog n + time(UWBM(k 2 n))) on the matchings. So, by Theorem 5.10, we have that Comp-Core-Trees can be computed in time O(n polylog n + time(UWBM(k 2 n))). Applying Theo 4 = 16, we get Corollary 6.7 MAST is computable in time O(n 1:5 log n). [4] W.H.E. Day. Computational complexity of inferring phylogenies from dissimilarity matrices. Bulletin of Mathematical Biology, 49(4):461-467, 1987. [5] M. Farach, S. Kannan, and T. Warnow. A robust model for finding optimal evolutionary trees. Al-gorithmica, 1994. In press. See also STOC '93.
neighbors: [1853, 2511] | mask: Train

node_id: 2321 | label: 2
Title: Asymptotic Controllability Implies Feedback Stabilization Abstract: |
neighbors: [2186, 2187, 2370] | mask: Train

node_id: 2322 | label: 2
Title: PHONETIC CLASSIFICATION OF TIMIT SEGMENTS PREPROCESSED WITH LYON'S COCHLEAR MODEL USING A SUPERVISED/UNSUPERVISED HYBRID NEURAL NETWORK Abstract: We report results on vowel and stop consonant recognition with tokens extracted from the TIMIT database. Our current system differs from others doing similar tasks in that we do not use any specific time normalization techniques. We use a very detailed biologically motivated input representation of the speech tokens - Lyon's cochlear model as implemented by Slaney [20]. This detailed, high dimensional representation, known as a cochleagram, is classified by either a back-propagation or by a hybrid supervised/unsupervised neural network classifier. The hybrid network is composed of a biologically motivated unsupervised network and a supervised back-propagation network. This approach produces results comparable to those obtained by others without the addition of time normalization.
neighbors: [359, 2498, 2499, 2500] | mask: Test

node_id: 2323 | label: 3
Title: PHONETIC CLASSIFICATION OF TIMIT SEGMENTS PREPROCESSED WITH LYON'S COCHLEAR MODEL USING A SUPERVISED/UNSUPERVISED HYBRID NEURAL NETWORK Abstract: MOU 130: Feasibility study of fully autonomous vehicles using decision-theoretic control Final Report
neighbors: [492, 788, 1268, 2419] | mask: Train

node_id: 2324 | label: 6
Title: APPLICATION OF ESOP MINIMIZATION IN MACHINE LEARNING AND KNOWLEDGE DISCOVERY Abstract: This paper presents a new application of an Exclusive-Sum-Of-Products (ESOP) minimizer EXORCISM-MV-2: to Machine Learning, and particularly, in Pattern Theory. An analysis of various logic synthesis programs has been conducted at Wright Laboratory for machine learning applications. Creating a robust and efficient Boolean minimizer for machine learning that would minimize a decomposed function cardinality (DFC) measure of functions would help to solve practical problems in application areas that are of interest to the Pattern Theory Group especially those problems that require strongly unspecified multiple-valued-input functions with a large number of variables. For many functions, the complexity minimization of EXORCISM-MV-2 is better than that of Espresso. For small functions, they are worse than those of the Curtis-like Decomposer. However, EXORCISM is much faster, can run on problems with more variables, and significant DFC improvements have also been found. We analyze the cases when EXORCISM is worse than Espresso and propose new improvements for strongly unspecified functions.
neighbors: [1161, 2326] | mask: Validation

node_id: 2325 | label: 2
Title: Incremental Polynomial Model-Controller Network: a self organising non-linear controller Abstract: The aim of this study is to present the "Incremental Polynomial Model-Controller Network" (IPMCN). This network is composed of controllers each one attached to a model used for its indirect design. At each instant the controller connected to the model performing the best is selected. An automatic network construction algorithm is described in this study. It makes the IPMCN a self-organising non-linear controller. However the emphasis is on the polynomial controllers that are the building blocks of the IPMCN. From an analysis of the properties of polynomial functions for system modelling it is shown that multiple low order odd polynomials are very suitable to model non-linear systems. A closed loop reference model method to design a controller from an odd polynomial model is then described. The properties of the IPMCN are illustrated according to a second order system having both system states y and ẏ involving non-linear behaviour. It shows that as a component of a network or alone, a low order odd polynomial controller performs much better than a linear adaptive controller. Moreover, the number of controllers is significantly reduced with the increase of the polynomial order of the controllers and an improvement of the control performance is proportional to the decrease of the number of controllers. In addition, the clustering free approach, applied for the selection of the controllers, makes the IPMCN insensitive to the number of quantities involving nonlinearity in the system. The use of local controllers capable of handling systems with complex dynamics will make this scheme one of the most effective approaches for the control of non-linear systems.
neighbors: [611, 745, 2538] | mask: Train

node_id: 2326 | label: 6
Title: Pattern Theoretic Feature Extraction and Constructive Induction Abstract: This paper offers a perspective on features and pattern finding in general. This perspective is based on a robust complexity measure called Decomposed Function Cardinality. A function decomposition algorithm for minimizing this complexity measure and finding the associated features is outlined. Results from experiments with this algorithm are also summarized.
neighbors: [317, 508, 2324] | mask: Train

node_id: 2327 | label: 6
Title: A Comparison of New and Old Algorithms for A Mixture Estimation Problem Abstract: We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used as a penalty term. The square distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update (EG_η). Curiously, when a second order Taylor expansion of the relative entropy is used, we arrive at an update EM_η which, for η = 1, gives the usual EM update. Experimentally, both the EM_η-update and the EG_η-update for η > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EG_η algorithm.
neighbors: [76, 1924, 2015, 2034] | mask: Train

node_id: 2328 | label: 4
Title: A Comparison of Direct and Model-Based Reinforcement Learning Abstract: This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals.
neighbors: [1782] | mask: Validation

node_id: 2329 | label: 6
Title: Programming Research Group A LEARNABILITY MODEL FOR UNIVERSAL REPRESENTATIONS Abstract: This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals.
neighbors: [672, 1290, 1428, 1918, 2080, 2589] | mask: Train

node_id: 2330 | label: 1
Title: A comparison of the fixed and floating building block representation in the genetic algorithm Abstract: This article compares the traditional, fixed problem representation style of a genetic algorithm (GA) with a new floating representation in which the building blocks of a problem are not fixed at specific locations on the individuals of the population. In addition, the effects of non-coding segments on both of these representations is studied. Non-coding segments are a computational model of non-coding DNA and floating building blocks mimic the location independence of genes. The fact that these structures are prevalent in natural genetic systems suggests that they may provide some advantages to the evolutionary process. Our results show that there is a significant difference in how GAs solve a problem in the fixed and floating representations. GAs are able to maintain a more diverse population with the floating representation. The combination of non-coding segments and floating building blocks appears to encourage a GA to take advantage of its parallel search and recombination abilities.
neighbors: [1631, 1696, 1769, 2598, 2604] | mask: Validation

node_id: 2331 | label: 6
Title: Improved Hoeffding-Style Performance Guarantees for Accurate Classifiers Abstract: We extend Hoeffding bounds to develop superior probabilistic performance guarantees for accurate classifiers. The original Hoeffding bounds on classifier accuracy depend on the accuracy itself as a parameter. Since the accuracy is not known a priori, the parameter value that gives the weakest bounds is used. We present a method that loosely bounds the accuracy using the old method and uses the loose bound as an improved parameter value for tighter bounds. We show how to use the bounds in practice, and we generalize the bounds for individual classifiers to form uniform bounds over multiple classifiers.
neighbors: [571, 2694] | mask: Train

node_id: 2332 | label: 1
Title: Improved Hoeffding-Style Performance Guarantees for Accurate Classifiers Abstract: Evolving Cooperative Groups: Preliminary Results Abstract Multi-agent systems require coordination of sources with distinct expertise to perform complex tasks effectively. In this paper, we use co-evolutionary approach using genetic algorithms to evolve multiple individuals who can effectively cooperate to solve a common problem. We concurrently run a GA for each individual in the group. In this paper, we experiment with a room painting domain which requires cooperation of two agents. We have used two mechanisms for evaluating an individual in one population: (a) pair it randomly with members from the other population, (b) pair it with members of the other population in a shared memory containing the best pairs found so far. Both the approaches are successful in generating optimal behavior patterns. However, our preliminary results exhibit a slight edge for the shared memory approach.
neighbors: [1117, 2334] | mask: Validation

node_id: 2333 | label: 6
Title: Recursive Automatic Algorithm Selection for Inductive Learning Abstract: COINS Technical Report 94-61 August 1994
neighbors: [102, 318, 2135, 2583] | mask: Train

node_id: 2334 | label: 1
Title: New Methods for Competitive Coevolution Abstract: We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.
neighbors: [54, 209, 602, 1588, 1790, 1832, 1836, 1917, 2103, 2332, 2353] | mask: Test

node_id: 2335 | label: 2
Title: Function Approximation with Neural Networks and Local Methods: Bias, Variance and Smoothness Abstract: We review the use of global and local methods for estimating a function mapping R^m -> R^n from samples of the function containing noise. The relationship between the methods is examined and an empirical comparison is performed using the multi-layer perceptron (MLP) global neural network model, the single nearest-neighbour model, a linear local approximation (LA) model, and the following commonly used datasets: the Mackey-Glass chaotic time series, the Sunspot time series, British English Vowel data, TIMIT speech phonemes, building energy prediction data, and the sonar dataset. We find that the simple local approximation models often outperform the MLP. No criterion such as classification/prediction, size of the training set, dimensionality of the training set, etc. can be used to distinguish whether the MLP or the local approximation method will be superior. However, we find that if we consider histograms of the k-NN density estimates for the training datasets then we can choose the best performing method a priori by selecting local approximation when the spread of the density histogram is large and choosing the MLP otherwise. This result correlates with the hypothesis that the global MLP model is less appropriate when the characteristics of the function to be approximated vary throughout the input space. We discuss the results, the smoothness assumption often made in function approximation, and the bias/variance dilemma.
neighbors: [74, 2378] | mask: Test

node_id: 2336 | label: 2
Title: A Fast Kohonen Net Implementation for Abstract: We present an implementation of Kohonen Self-Organizing Feature Maps for the Spert-II vector microprocessor system. The implementation supports arbitrary neural map topologies and arbitrary neighborhood functions. For small networks, as used in real-world tasks, a single Spert-II board is measured to run Kohonen net classification at up to 208 million connections per second (MCPS). On a speech coding benchmark task, Spert-II performs on-line Kohonen net training at over 100 million connection updates per second (MCUPS). This represents almost a factor of 10 improvement compared to previously reported implementations. The asymptotic peak speed of the system is 213 MCPS and 213 MCUPS.
neighbors: [745, 2579] | mask: Train

node_id: 2337 | label: 2
Title: Learning to segment images using dynamic feature binding an isolated object in an image is Abstract: Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units.
neighbors: [2218, 2606, 2610] | mask: Train

node_id: 2338 | label: 0
Title: Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier Abstract: The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence.
neighbors: [1024, 1478, 1986, 2127, 2443, 2677] | mask: Test

node_id: 2339 | label: 5
Title: An intelligent search method using Inductive Logic Programming Abstract: We propose a method to use Inductive Logic Programming to give heuristic functions for searching goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation "better-choice", which describes that an operator is better than others in a state. We also give a way to use the "better-choice" relation as a heuristic function. The method can use any logic program as background knowledge to induce heuristics, and induced heuristics has high readability. The paper inspects the method by applying to a puzzle.
neighbors: [344, 675, 2126] | mask: Test

node_id: 2340 | label: 2
Title: Generalization to local remappings of the visuomotor coordinate transformation Abstract: We propose a method to use Inductive Logic Programming to give heuristic functions for searching goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation "better-choice", which describes that an operator is better than others in a state. We also give a way to use the "better-choice" relation as a heuristic function. The method can use any logic program as background knowledge to induce heuristics, and induced heuristics has high readability. The paper inspects the method by applying to a puzzle.
neighbors: [611, 1935] | mask: Train

node_id: 2341 | label: 3
Title: Dynamic Belief Networks for Discrete Monitoring Abstract: We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem.
neighbors: [559, 623, 788, 1172, 1757, 1842, 2425] | mask: Train

node_id: 2342 | label: 6
Title: The Power of Decision Tables Abstract: We evaluate the power of decision tables as a hypothesis space for supervised learning algorithms. Decision tables are one of the simplest hypothesis spaces possible, and usually they are easy to understand. Experimental results show that on artificial and real-world domains containing only discrete features, IDTM, an algorithm inducing decision tables, can sometimes outperform state-of-the-art algorithms such as C4.5. Surprisingly, performance is quite good on some datasets with continuous features, indicating that many datasets used in machine learning either do not require these features, or that these features have few values. We also describe an incremental method for performing cross-validation that is applicable to incremental learning algorithms including IDTM. Using incremental cross-validation, it is possible to cross-validate a given dataset and IDTM in time that is linear in the number of instances, the number of features, and the number of label values. The time for incremental cross-validation is independent of the number of folds chosen, hence leave-one-out cross-validation and ten-fold cross-validation take the same time.
neighbors: [381, 497, 1270, 2151, 2197, 2577, 2593] | mask: Validation

node_id: 2343 | label: 3
Title: Feature Subset Selection Using the Wrapper Method: Overfitting and Dynamic Search Space Topology Abstract: In the wrapper approach to feature subset selection, a search for an optimal set of features is made using the induction algorithm as a black box. The estimated future performance of the algorithm is the heuristic guiding the search. Statistical methods for feature subset selection including forward selection, backward elimination, and their stepwise variants can be viewed as simple hill-climbing techniques in the space of feature subsets. We utilize best-first search to find a good feature subset and discuss overfitting problems that may be associated with searching too many feature subsets. We introduce compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets. We show that compound operators unify previous approaches that deal with relevant and irrelevant features. The improved feature subset selection yields significant improvements for real-world datasets when using the ID3 and the Naive-Bayes induction algorithms.
neighbors: [208, 430, 1337, 1618, 2443] | mask: Train

node_id: 2344 | label: 2
Title: A Neural Network Based Head Tracking System Abstract: We have constructed an inexpensive, video-based, motorized tracking system that learns to track a head. It uses real time graphical user inputs or an auxiliary infrared detector as supervisory signals to train a convolutional neural network. The inputs to the neural network consist of normalized luminance and chrominance images and motion information from frame differences. Subsampled images are also used to provide scale invariance. During the online training phase, the neural network rapidly adjusts the input weights depending upon the reliability of the different channels in the surrounding environment. This quick adaptation allows the system to robustly track a head even when other objects are moving within a cluttered background.
neighbors: [2707] | mask: Test

node_id: 2345 | label: 6
Title: The Hardness of Problems on Thin Colored Graphs Abstract: In this paper, we consider the complexity of a number of combinatorial problems; namely, Intervalizing Colored Graphs (DNA physical mapping), Triangulating Colored Graphs (perfect phylogeny), (Directed) (Modified) Colored Cutwidth, Feasible Register Assignment and Module Allocation for graphs of bounded treewidth. Each of these problems has as a characteristic a uniform upper bound on the tree or path width of the graphs in "yes"-instances. For all of these problems with the exceptions of feasible register assignment and module allocation, a vertex or edge coloring is given as part of the input. Our main results are that the parameterized variant of each of the considered problems is hard for the complexity classes W [t] for all t 2 Z + . We also show that Intervalizing Colored Graphs, Triangulating Colored Graphs, and
neighbors: [2005, 2418, 2511] | mask: Train

node_id: 2346 | label: 6
Title: Parity: The Problem that Won't Go Away Abstract: It is well-known that certain learning methods (e.g., the perceptron learning algorithm) cannot acquire complete, parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings may be sometimes dismissed on the grounds that it is `impossible' to generalise over such mappings, or that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral and that statistical neutrality is a property which we should expect to encounter frequently in real-world contexts. It also shows that the generalization failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e., when claims about the impossibility of generalization are particularly suspect.
neighbors: [397, 1301, 1595, 1967] | mask: Train

node_id: 2347 | label: 1
Title: Chapter 4 Empirical comparison of stochastic algorithms Empirical comparison of stochastic algorithms in a graph Abstract: There are several stochastic methods that can be used for solving NP-hard optimization problems approximatively. Examples of such algorithms include (in order of increasing computational complexity) stochastic greedy search methods, simulated annealing, and genetic algorithms. We investigate which of these methods is likely to give best performance in practice, with respect to the computational effort each requires. We study this problem empirically by selecting a set of stochastic algorithms with varying computational complexity, and by experimentally evaluating for each method how the goodness of the results achieved improves with increasing computational time. For the evaluation, we use a graph optimization problem, which is closely related to several real-world practical problems. To get a wider perspective of the goodness of the achieved results, the stochastic methods are also compared against special-case greedy heuristics. This investigation suggests that although genetic algorithms can provide good results, simpler stochastic algorithms can achieve similar performance more quickly.
neighbors: [1303, 2254, 2265] | mask: Train

node_id: 2348 | label: 3
Title: Sequential Importance Sampling for Nonparametric Bayes Models: The Next Generation Running Title: SIS for Nonparametric Bayes Abstract: There are two generations of Gibbs sampling methods for semi-parametric models involving the Dirichlet process. The first generation suffered from a severe drawback; namely that the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators. Steve MacEachern is Associate Professor, Department of Statistics, Ohio State University, Merlise Clyde is Assistant Professor, Institute of Statistics and Decision Sciences, Duke University, and Jun Liu is Assistant Professor, Department of Statistics, Stanford University. The work of the second author was supported in part by the National Science Foundation grants DMS-9305699 and DMS-9626135, and that of the last author by the National Science Foundation grants DMS-9406044, DMS-9501570, and the Terman Fellowship.
neighbors: [1783, 2682] | mask: Train

node_id: 2349 | label: 2
Title: No Free Lunch for Early Stopping Abstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.
neighbors: [1929] | mask: Train

node_id: 2350 | label: 6
Title: Exact Learning of -DNF Formulas with Malicious Membership Queries Abstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.
neighbors: [1004, 2168] | mask: Train

node_id: 2351 | label: 2
Title: ERROR STABILITY PROPERTIES OF GENERALIZED GRADIENT-TYPE ALGORITHMS Abstract: We present a unified framework for convergence analysis of the generalized subgradient-type algorithms in the presence of perturbations. One of the principal novel features of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an ε-stationary set of the problem, where ε depends on the magnitude of perturbations. Characterization of those attraction sets is given in the general (nonsmooth and nonconvex) case. The results are further strengthened for convex, weakly sharp and strongly convex problems. Our analysis extends and unifies previously known results on convergence and stability properties of gradient and subgradient methods, including their incremental, parallel and "heavy ball" modifications. The first author is supported in part by CNPq grant 300734/95-6. Research of the second author was supported in part by the International Science Foundation Grant NBY000, the International Science Foundation and Russian Government Grant NBY300 and the Russian Foundation for Fundamental Research Grant N 95-01-01448. † Instituto de Matematica Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro, RJ, CEP 22460-320, Brazil. Email: solodov@impa.br. ‡ Operations Research Department, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia, 119899.
[ 1772 ]
Train
2,352
2
Title: Power System Security Margin Prediction Using Radial Basis Function Networks Abstract: Dr. McCalley's research is partially supported through grants from the National Science Foundation and Pacific Gas and Electric Company. Dr. Honavar's research is partially supported through grants from the National Science Foundation and the John Deere Foundation. This paper will appear in: Proceedings of the 29th Annual North American Power Symposium, Oct. 13-14, 1997, Laramie, Wyoming.
[ 611, 2055 ]
Validation
2,353
1
Title: Dynamics of Co-evolutionary Learning Abstract: Co-evolutionary learning, which involves the embedding of adaptive learning agents in a fitness environment which dynamically responds to their progress, is a potential solution for many technological chicken-and-egg problems, and is at the heart of several recent and surprising successes, such as Sims' artificial robot and Tesauro's backgammon player. We recently solved the two spirals problem, a difficult neural network benchmark classification problem, using the genetic programming primitives set up by [Koza, 1992]. Instead of using absolute fitness, we use a relative fitness [Angeline & Pollack, 1993] based on a competition for coverage of the data set. As the population reproduces, the fitness function driving the selection changes, and subproblem niches are opened, rather than crowded out. The solutions found by our method have a symbiotic structure which suggests that by holding niches open, crossover is better able to discover modular building blocks.
[ 415, 2334 ]
Test
2,354
6
Title: The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs Abstract: We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots, which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs.
[ 400, 453, 555, 556, 2360, 2455 ]
Train
2,355
2
Title: CONVIS: Action Oriented Control and Visualization of Neural Networks Introduction and Technical Description Abstract: We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots, which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs.
[ 217, 763, 1879 ]
Train
2,356
6
Title: Learning With Unreliable Boundary Queries Abstract: We introduce a model for learning from examples and membership queries in situations where the boundary between positive and negative examples is somewhat ill-defined. In our model, queries near the boundary of a target concept may receive incorrect or "don't care" responses, and the distribution of examples has zero probability mass on the boundary region. The motivation behind our model is that in many cases the boundary between positive and negative examples is complicated or "fuzzy." However, one may still hope to learn successfully, because the typical examples that one sees do not come from that region. We present several positive results in this new model. We show how to learn the intersection of two arbitrary halfspaces when membership queries near the boundary may be answered incorrectly. Our algorithm is an extension of an algorithm of Baum [7, 6] that learns the intersection of two halfspaces whose bounding planes pass through the origin in the PAC-with-membership-queries model. We also describe algorithms for learning several subclasses of monotone DNF formulas.
[ 1105, 1705, 2246 ]
Train
2,357
2
Title: Local Multivariate Binary Processors (running title) Abstract: We thank Sue Becker, Peter Hancock and Darragh Smyth for helpful comments on this work. The work of Dario Floreano and Bill Phillips was supported by a Network Grant from the Human Capital and Mobility Programme of the European Community.
[ 1656, 2499 ]
Train
2,358
2
Title: Physiological Gain Leads to High ISI Variability in a Simple Model of a Cortical Regular Spiking Cell Abstract: To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky and Koch, 1993), it is critical to examine the dynamics of their neuronal integration as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick et al., 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency vs. injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory "1/√N" and "random walk" pictures that have previously been proposed. When ISIs are dominated by post-spike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state, and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady state behavior is predominant and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
[ 2503 ]
Train
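A leaky integrate-and-fire simulation in the spirit of the entry above fits in a few lines. This is an illustrative sketch only: the RC constants, reset voltage, and input statistics below are placeholders, not the values the paper fits to cortical regular spiking cells.

import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 5.0                                 # time step and duration (s)
tau, R = 20e-3, 40e6                              # membrane time constant (s), resistance (ohm)
v_rest, v_th, v_reset = -70e-3, -54e-3, -60e-3    # rest, threshold, post-spike reset (V)
mu_I, sigma_v = 0.45e-9, 0.03                     # mean drive (A), voltage noise intensity (V/sqrt(s))

v, spike_times = v_rest, []
for i in range(int(T / dt)):
    drift = (v_rest - v + R * mu_I) / tau         # leaky integration toward v_rest + R*I
    v += drift * dt + sigma_v * np.sqrt(dt) * rng.normal()
    if v >= v_th:
        v = v_reset                               # post-spike voltage reset
        spike_times.append(i * dt)

isi = np.diff(spike_times)
print(f"{len(spike_times)} spikes, rate {len(spike_times) / T:.1f} Hz, ISI CV {isi.std() / isi.mean():.2f}")

Raising the reset toward threshold shortens the post-spike recovery phase and pushes the ISI coefficient of variation toward the Poisson-like regime the abstract describes.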
2,359
0
Title: Computer-Supported Argumentation for Cooperative Design on the World-Wide Web Abstract: This paper describes an argumentation system for cooperative design applications on the Web. The system provides experts involved in such procedures with means of expressing and weighing their individual arguments and preferences, in order to argue for or against the selection of a certain choice. It supports defeasible and qualitative reasoning in the presence of ill-structured information. Argumentation is performed through a set of discourse acts which call a variety of procedures for the propagation of information in the corresponding discussion graph. The paper also reports on the integration of Case-Based Reasoning techniques, used to resolve current design issues by considering previous similar situations, and the specification of similarity measures between the various argumentation items, the aim being to estimate the variations among opinions of the designers involved in cooperative design.
[ 66, 2520 ]
Train
2,360
6
Title: Efficient Learning of Typical Finite Automata from Random Walks (Extended Abstract) Abstract: This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the "typical" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings.
[ 400, 556, 574, 672, 1006, 1386, 2004, 2040, 2273, 2354 ]
Train
2,361
1
Title: Program Search with a Hierarchical Variable Length Representation: Genetic Programming, Simulated Annealing and Hill Climbing Abstract: This paper presents a comparison of Genetic Programming (GP) with Simulated Annealing (SA) and Stochastic Iterated Hill Climbing (SIHC) based on a suite of program discovery problems which have previously been tackled only with GP. All three search algorithms employ the hierarchical variable length representation for programs brought into recent prominence with the GP paradigm [8]. We feel it is not intuitively obvious that mutation-based adaptive search can handle program discovery; yet, to date, for each GP problem we have tried, SA or SIHC also works.
[ 139, 163, 2175, 2216 ]
Train
2,362
3
Title: Outperforming the Gibbs sampler empirical estimator for nearest neighbor random fields Abstract: Given a Markov chain sampling scheme, does the standard empirical estimator make best use of the data? We show that this is not so and construct better estimators. We restrict attention to nearest neighbor random fields and to Gibbs samplers with deterministic sweep, but our approach applies to any sampler that uses reversible variable-at-a-time updating with deterministic sweep. The structure of the transition distribution of the sampler is exploited to construct further empirical estimators that are combined with the standard empirical estimator to reduce asymptotic variance. The extra computational cost is negligible. When the random field is spatially homogeneous, symmetrizations of our estimator lead to further variance reduction. The performance of the estimators is evaluated in a simulation study of the Ising model.
[ 1870, 2510 ]
Test
2,363
1
Title: Modeling the Evolution of Motivation Abstract: In order for learning to improve the adaptiveness of an animal's behavior and thus direct evolution in the way Baldwin suggested, the learning mechanism must incorporate an innate evaluation of how the animal's actions influence its reproductive fitness. For example, many circumstances that damage an animal, or otherwise reduce its fitness are painful and tend to be avoided. We refer to the mechanism by which an animal evaluates the fitness consequences of its actions as a "motivation system," and argue that such a system must evolve along with the behaviors it evaluates. We describe simulations of the evolution of populations of agents instantiating a number of different architectures for generating action and learning, in worlds of differing complexity. We find that in some cases, members of the populations evolve motivation systems that are accurate enough to direct learning so as to increase the fitness of the actions the agents perform. Furthermore, the motivation systems tend to incorporate systematic distortions in their representations of the worlds they inhabit; these distortions can increase the adaptiveness of the behavior generated.
[ 129, 163, 1719, 1969, 2165 ]
Test
2,364
0
Title: Automatic Phonetic Transcription of Words Based On Sparse Data Abstract: The relation between the orthography and the phonology of a language has traditionally been modelled by hand-crafted rule sets. Machine-learning (ML) approaches offer a means to gather this knowledge automatically. Problems arise when the training material is sparse. Generalising from sparse data is a well-known problem for many ML algorithms. We present experiments in which connectionist, instance-based, and decision-tree learning algorithms are applied to a small corpus of Scottish Gaelic. We find that instance-based learning in the ib1-ig algorithm yields the best generalisation performance, and that most algorithms tested perform tolerably well. Given the availability of a lexicon, even if it is sparse, ML is a valuable and efficient tool for automatic phonetic transcription of written text.
[ 862, 1644, 1812 ]
Train
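The ib1-ig algorithm named in the entry above is, at bottom, nearest-neighbor classification over symbolic features weighted by their information gain. Here is a hedged sketch of that idea; the three-letter windows and phoneme labels are invented toy data, not the Gaelic corpus.

import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain(feature_col, labels):
    h = entropy(labels)                 # class entropy before the split
    n = len(labels)
    for v in set(feature_col):
        idx = [i for i, x in enumerate(feature_col) if x == v]
        h -= len(idx) / n * entropy([labels[i] for i in idx])
    return h

def ib1_ig_predict(train_X, train_y, query):
    weights = [info_gain([row[j] for row in train_X], train_y)
               for j in range(len(query))]
    # weighted overlap distance: each mismatching feature costs its IG weight
    dists = [sum(w for w, a, b in zip(weights, row, query) if a != b)
             for row in train_X]
    return train_y[int(np.argmin(dists))]

X = [("c", "a", "t"), ("c", "a", "r"), ("b", "a", "t")]   # hypothetical letter windows
y = ["k", "k", "b"]                                        # hypothetical phoneme labels
print(ib1_ig_predict(X, y, ("c", "a", "p")))               # -> "k"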
2,365
5
Title: A Reduced Multipipeline Machine Description that Preserves Scheduling Constraints Abstract: High performance compilers increasingly rely on accurate modeling of the machine resources to efficiently exploit the instruction level parallelism of an application. In this paper, we propose a reduced machine description that results in faster detection of resource contentions while preserving the scheduling constraints present in the original machine description. The proposed approach reduces a machine description in an automated, error-free, and efficient fashion. Moreover, it fully supports schedulers that backtrack and process operations in arbitrary order. Reduced descriptions for the DEC Alpha 21064, MIPS R3000/R3010, and Cydra 5 result in 4 to 7 times faster detection of resource contentions and require 22 to 90% of the memory storage used by the original machine descriptions.
[ 2189, 2190, 2668 ]
Train
2,366
3
Title: Choice of Thresholds for Wavelet Shrinkage Estimate of the Spectrum Abstract: We study the problem of estimating the log spectrum of a stationary Gaussian time series by thresholding the empirical wavelet coefficients. We propose the use of thresholds t_{j,n} depending on the sample size n, the wavelet basis and the resolution level j. At fine resolution levels (j = 1, 2, ...), we propose t_{j,n} = α_j log n, where the constants {α_j} are level-dependent. The purpose of this thresholding level is to make the reconstructed log-spectrum as nearly noise-free as possible. In addition to being pleasant from a visual point of view, the noise-free character leads to attractive theoretical properties over a wide range of smoothness assumptions. Previous proposals set much smaller thresholds and did not enjoy these properties.
[ 1910, 2081, 2458, 2506, 2575 ]
Train
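The level-dependent rule reconstructed above can be mimicked with standard wavelet tooling. The sketch below uses PyWavelets on a plain noisy signal rather than the log-periodogram the paper treats, and the per-level α_j values are placeholders, not the constants the paper derives.

import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 1024
signal = np.sin(2 * np.pi * 4 * np.linspace(0, 1, n))
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)     # [cA5, cD5, cD4, ..., cD1]
alphas = [0.5, 0.6, 0.7, 0.8, 0.9]               # hypothetical level-dependent alpha_j
for j, alpha in enumerate(alphas, start=1):
    t_jn = alpha * np.log(n)                     # threshold t_{j,n} = alpha_j log n
    coeffs[-j] = pywt.threshold(coeffs[-j], t_jn, mode="soft")

denoised = pywt.waverec(coeffs, "db4")[:n]
print(f"noisy MSE {np.mean((noisy - signal) ** 2):.4f}, denoised MSE {np.mean((denoised - signal) ** 2):.4f}")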
2,367
6
Title: Data Mining using MLC++, a Machine Learning Library in C++ http://www.sgi.com/Technology/mlc Abstract: Data mining algorithms including machine learning, statistical analysis, and pattern recognition techniques can greatly improve our understanding of data warehouses that are now becoming more widespread. In this paper, we focus on classification algorithms and review the need for multiple classification algorithms. We describe a system called MLC++, which was designed to help choose the appropriate classification algorithm for a given dataset by making it easy to compare the utility of different algorithms on a specific dataset of interest. MLC++ not only provides a workbench for such comparisons, but also provides a library of C++ classes to aid in the development of new algorithms, especially hybrid algorithms and multi-strategy algorithms. Such algorithms are generally hard to code from scratch. We discuss design issues, interfaces to other programs, and visualization of the resulting classifiers.
[ 1833, 2577 ]
Train
2,368
4
Title: Reinforcement Learning with Modular Neural Networks for Control Abstract: Reinforcement learning methods can be applied to control problems with the objective of optimizing the value of a function over time. They have been used to train single neural networks that learn solutions to whole tasks. Jacobs and Jordan [5] have shown that a set of expert networks combined via a gating network can more quickly learn tasks that can be decomposed. Even the decomposition can be learned. Inspired by Boyan's work on modular neural networks for learning with temporal-difference methods [4], we modify the reinforcement learning algorithm called Q-Learning to train a modular neural network to solve a control problem. The resulting algorithm is demonstrated on the classical pole-balancing problem. The advantage of such a method is that it makes it possible to deal with complex dynamic control problems effectively by using task decomposition and competitive learning.
[ 85, 465, 2642 ]
Train
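For reference, the Q-Learning rule that the entry above distributes across a modular network is compact on its own. The sketch below is plain tabular Q-learning on a made-up two-state task, not the pole balancer; it shows only the core update.

import numpy as np

n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(3)

def step(s, a):
    # toy dynamics: action 1 in state 0 pays off and moves to state 1
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

s = 0
for _ in range(2000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
print(Q)

The modular variant replaces the table with expert networks plus a gating network that learns which expert should supply Q(s, a) in each region of the state space.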
2,369
0
Title: Case-Based Sonogram Classification Abstract: This report replicates and extends results reported by Naval Air Warfare Center (NAWC) personnel on the automatic classification of sonar images. They used novel case-based reasoning systems in their empirical studies, but did not obtain comparative analyses using standard classification algorithms. Therefore, the quality of the NAWC results was unknown. We replicated the NAWC studies and also tested several other classifiers (i.e., both case-based and otherwise) from the machine learning literature. These comparisons and their ramifications are detailed in this paper. Next, we investigated Fala and Walker's two suggestions for future work (i.e., on combining their similarity functions and on an alternative case representation). Finally, we describe several ways to incorporate additional domain-specific knowledge when applying case-based classifiers to similar tasks.
[ 256, 426, 2607 ]
Validation
2,370
2
Title: CONTROL-LYAPUNOV FUNCTIONS FOR TIME-VARYING SET STABILIZATION Abstract: This paper shows that, for time-varying systems, global asymptotic controllability to a given closed subset of the state space is equivalent to the existence of a continuous control-Lyapunov function with respect to the set.
[ 2321 ]
Train
2,371
0
Title: Learning Adaptation Strategies by Introspective Reasoning about Memory Search Abstract: In case-based reasoning systems, the case adaptation process is traditionally controlled by static libraries of hand-coded adaptation rules. This paper proposes a method for learning adaptation knowledge in the form of adaptation strategies of the type developed and hand-coded by Kass [90] . Adaptation strategies differ from standard adaptation rules in that they encode general memory search procedures for finding the information needed during case adaptation; this paper focuses on the issues involved in learning memory search procedures to form the basis of new adaptation strategies. It proposes a method that starts with a small library of abstract adaptation rules and uses introspective reasoning about the system's memory organization to generate the memory search plans needed to apply those rules. The search plans are then packaged with the original abstract rules to form new adaptation strategies for future use. This process allows a CBR system not only to learn about its domain, by storing the results of case adaptation, but also to learn how to apply the cases in its memory more effectively.
[ 583, 1126, 2489 ]
Test
2,372
0
Title: Goal-Driven Learning: Fundamental Issues (A Symposium Report) Abstract: In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning. * Appears in AI Magazine, 14(4):67-72, 1993
[ 1126, 2398, 2489 ]
Train
2,373
2
Title: Evaluating Neural Network Predictors by Bootstrapping Abstract: We present a new method, inspired by the bootstrap, whose goal it is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between training, cross-validation, and test sets) is significantly larger than the variation due to different network conditions (such as architecture and initial weights). Furthermore, this method allows us to forecast a probability distribution, as opposed to the traditional case of just a single value at each time step. We demonstrate this on a strictly held-out test set that includes the 1987 stock market crash. We also compare the performance of the class of neural networks to identically bootstrapped linear models.
[ 916, 1366, 2374, 2413, 2414, 2507 ]
Train
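The resampling scheme in the entry above can be imitated with any model that is cheap to refit. In this sketch (synthetic data, with a linear least-squares model standing in for the neural network), the object of interest is the spread of predictions across re-splits rather than the model itself.

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=300)

preds = []
for b in range(50):                     # 50 bootstrap-style re-splits
    idx = rng.permutation(300)
    tr = idx[:200]                      # this split's training set
    w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
    preds.append(X @ w)                 # predict at every point for comparability
preds = np.array(preds)

# per-point spread across resamplings = sensitivity to the data split
print("mean split-induced std of predictions:", preds.std(axis=0).mean())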
2,374
2
Title: Predictions with Confidence Intervals (Local Error Bars) Abstract: We present a new method for obtaining local error bars, i.e., estimates of the confidence in the predicted value that depend on the input. We approach this problem of nonlinear regression in a maximum likelihood framework. We demonstrate our technique first on computer generated data with locally varying, normally distributed target noise. We then apply it to the laser data from the Santa Fe Time Series Competition. Finally, we extend the technique to estimate error bars for iterated predictions, and apply it to the exact competition task where it gives the best performance to date.
[ 916, 1366, 2373, 2507, 2513 ]
Validation
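The entry above casts local error bars as maximum-likelihood regression with an input-dependent noise variance. A minimal sketch follows, with hand-derived gradients for the Gaussian negative log-likelihood; the linear mean and log-variance parameterizations are toy assumptions where the paper uses network outputs.

import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, size=500)
y = np.sin(x) + (0.1 + 0.2 * np.abs(x)) * rng.normal(size=500)   # noise grows with |x|

# mean(x) = a*x + b and log-variance(x) = c*|x| + d (toy parametric forms)
a = b = c = d = 0.0
lr = 0.01
for _ in range(3000):
    mu = a * x + b
    logv = c * np.abs(x) + d
    inv_v = np.exp(-logv)
    resid = y - mu
    g_mu = -resid * inv_v                        # d NLL / d mu
    g_logv = 0.5 * (1.0 - resid ** 2 * inv_v)    # d NLL / d log-variance
    a -= lr * np.mean(g_mu * x)
    b -= lr * np.mean(g_mu)
    c -= lr * np.mean(g_logv * np.abs(x))
    d -= lr * np.mean(g_logv)

print(f"fitted noise std at x=0: {np.exp(d / 2):.2f}, at |x|=2: {np.exp((2 * c + d) / 2):.2f}")

The per-input standard deviation exp(logv/2) is the local error bar attached to each prediction.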
2,375
3
Title: Minimax Bayes, asymptotic minimax and sparse wavelet priors Abstract: Pinsker (1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hilbert space. This 'Minimax Bayes' method can be applied to a variety of global non-parametric estimation settings with parameter spaces far from ellipsoidal. For example, it leads to a theory of exact asymptotic minimax estimation over norm balls in Besov and Triebel spaces using simple co-ordinatewise estimators and wavelet bases. This paper outlines some features of the method common to several applications. In particular, we derive new results on the exact asymptotic minimax risk over weak ℓ_p balls in R^n as n → ∞, and also for a class of 'local' estimators on the Triebel scale. By its very nature, the method reveals the structure of asymptotically least favorable distributions. Thus we may simulate 'least favorable' sample paths. We illustrate this for estimation of a signal in Gaussian white noise over norm balls in certain Besov spaces. In wavelet bases, when p < 2, the least favorable priors are sparse, and the resulting sample paths are strikingly different from those observed in Pinsker's ellipsoidal setting (p = 2). Acknowledgements. I am grateful for many conversations with David Donoho and Carl Taswell, and to a referee for helpful comments. This work was supported in part by NSF grants DMS 84-51750, 9209130, and NIH PHS grant GM21215-12.
[ 1910, 2416, 2661 ]
Train
2,376
2
Title: Multimodality Exploration in Training an Unsupervised Projection Pursuit Neural Network Abstract: Graphical inspection of multimodality is demonstrated using unsupervised lateral-inhibition neural networks. Three projection pursuit indices are compared on low-dimensional simulated and real-world data: principal components [22], Legendre polynomial [6] and projection pursuit network [16].
[ 359, 2422, 2499, 2500 ]
Train
2,377
3
Title: Adaptive proposal distribution for random walk Metropolis algorithm Abstract: The choice of a suitable MCMC method and, further, the choice of a proposal distribution are known to be crucial for the convergence of the Markov chain. However, in many cases the choice of an effective proposal distribution is difficult. As a remedy we suggest a method called Adaptive Proposal (AP). Although the stationary distribution of the AP algorithm is slightly biased, it appears to provide an efficient tool for, e.g., reasonably low-dimensional problems, as typically encountered in non-linear regression problems in the natural sciences. As a realistic example we include a successful application of the AP algorithm in parameter estimation for the satellite instrument GOMOS. In this paper we also present a comprehensive test procedure and systematic performance criteria for comparing the Adaptive Proposal algorithm with more traditional Metropolis algorithms.
[ 468, 491, 2025 ]
Train
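The adaptation idea in the entry above, re-estimating the random-walk proposal from the chain's own history, looks roughly like the sketch below. The 2-D Gaussian target, adaptation interval, and scaling constant are illustrative choices, not the paper's settings, and, as the entry notes, this kind of adaptation slightly biases the stationary distribution.

import numpy as np

rng = np.random.default_rng(6)
target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])    # correlated target makes adaptation matter
target_prec = np.linalg.inv(target_cov)

def log_target(x):
    return -0.5 * x @ target_prec @ x

d = 2
x = np.zeros(d)
prop_cov = np.eye(d)                                # initial, deliberately mismatched proposal
scale = 2.4 ** 2 / d                                # common random-walk scaling heuristic
chain, accepted = [], 0
for i in range(1, 10001):
    prop = x + rng.multivariate_normal(np.zeros(d), scale * prop_cov)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x, accepted = prop, accepted + 1
    chain.append(x.copy())
    if i % 500 == 0 and i >= 1000:
        # adapt: replace the proposal covariance with the recent empirical covariance
        prop_cov = np.cov(np.array(chain[-1000:]).T) + 1e-6 * np.eye(d)

print(f"acceptance rate {accepted / 10000:.2f}")
print("sample covariance:", np.cov(np.array(chain[5000:]).T).round(2))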
2,378
2
Title: Priors, Stabilizers and Basis Functions: from regularization to radial, tensor and additive splines Abstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular we had discussed how standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions and some forms of Projection Pursuit Regression. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we show the relation between activation functions of the Gaussian and sigmoidal type by considering the simple case of the kernel G(x) = |x|. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are: a) Radial Basis Functions that generalize into Hyper Basis Functions, b) some tensor product splines, and c) additive splines that generalize into schemes of the type of ridge approximation, hinge functions and one-hidden-layer perceptrons. This paper describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041 (which includes funds from DARPA provided under the HPCC program); and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Whitaker College, Massachusetts Institute of Technology. © Massachusetts Institute of Technology, 1993
[ 611, 1668, 2050, 2335 ]
Validation
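Of the schemes the entry above unifies, the plain Radial Basis Functions network is the quickest to write down. This sketch fits Gaussian bases by ridge-regularized least squares; the centers, width, and regularization constant are arbitrary illustrative choices (a Hyper Basis Functions scheme would adapt the centers too).

import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-3, 3, size=100)
y = np.sinc(x) + 0.05 * rng.normal(size=100)

centers = np.linspace(-3, 3, 15)        # fixed Gaussian centers
width, lam = 0.5, 1e-3                  # basis width and regularization strength

def design(xs):
    # Gaussian basis expansion: one column per center
    return np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

G = design(x)
# regularized normal equations: (G^T G + lam I) w = G^T y
w = np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)

x_test = np.linspace(-3, 3, 7)
print(np.round(design(x_test) @ w, 3))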
2,379
1
Title: The Evolution of Size in Variable Length Representations Abstract: In many cases program lengths increase (known as bloat, fluff and increasing structural complexity) during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the new subtree-based mutation operator. As predicted, bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length-neutral mutation; however, bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat: 1) search operators with no length bias tend to sample bigger trees, and 2) competition within populations favours longer programs as they can usually reproduce more accurately.
[ 1984, 2206 ]
Validation
2,380
3
Title: Massively Parallel Case-Based Reasoning with Probabilistic Similarity Metrics Abstract: We propose a probabilistic case-space metric for the case matching and case adaptation tasks. Central to our approach is a probability propagation algorithm adopted from Bayesian reasoning systems, which allows our case-based reasoning system to perform theoretically sound probabilistic reasoning. The same probability propagation mechanism actually offers a uniform solution to both the case matching and case adaptation problems. We also show how the algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system. We argue that using this kind of an approach, the difficult problem of case indexing can be completely avoided. Pp. 144-154 in Topics in Case-Based Reasoning, edited by Stefan Wess, Klaus-Dieter Althoff and Michael M. Richter. Volume 837, Lecture Notes in Artificial Intelligence.
[ 215, 288, 485, 1838, 2294, 2514, 2561 ]
Train
2,381
2
Title: Evolving Artificial Neural Networks using the Baldwin Effect Abstract: This paper describes how, through simple means, a genetic search towards optimal neural network architectures can be improved, both in convergence speed and in the quality of the final result. This result can be theoretically explained with the Baldwin effect, which is implemented here not just by the learning process of the network alone, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, both helping and improving on simple genetic search.
[ 687, 1606, 2667 ]
Train
2,382
2
Title: Trees and Splines in Survival Analysis Abstract: Technical Report No. 275, Revised March 30, 1995, University of Washington Department of Statistics, Seattle, Washington 98195. During the past few years several nonparametric alternatives to the Cox proportional hazards model have appeared in the literature. These methods extend techniques that are well known from regression analysis to the analysis of censored survival data. In this paper we discuss methods based on (partition) trees and (polynomial) splines, analyze two datasets using both Survival Trees [1] and HARE [2], and compare the strengths and weaknesses of the two methods. One of the strengths of HARE is that its model fitting procedure has an implicit check for proportionality of the underlying hazards model. It also provides an explicit model for the conditional hazards function, which makes it very convenient to obtain graphical summaries. On the other hand, the tree-based methods automatically partition a dataset into groups of cases that are similar in survival history. Results obtained by survival trees and HARE are often complementary. Trees and splines in survival analysis should provide the data analyst with two useful tools when analyzing survival data.
[ 2013 ]
Train
2,383
2
Title: Alternative Discrete-Time Operators and Their Application to Nonlinear Models Abstract: Technical Report No. 275, Revised March 30, 1995, University of Washington Department of Statistics, Seattle, Washington 98195. During the past few years several nonparametric alternatives to the Cox proportional hazards model have appeared in the literature. These methods extend techniques that are well known from regression analysis to the analysis of censored survival data. In this paper we discuss methods based on (partition) trees and (polynomial) splines, analyze two datasets using both Survival Trees [1] and HARE [2], and compare the strengths and weaknesses of the two methods. One of the strengths of HARE is that its model fitting procedure has an implicit check for proportionality of the underlying hazards model. It also provides an explicit model for the conditional hazards function, which makes it very convenient to obtain graphical summaries. On the other hand, the tree-based methods automatically partition a dataset into groups of cases that are similar in survival history. Results obtained by survival trees and HARE are often complementary. Trees and splines in survival analysis should provide the data analyst with two useful tools when analyzing survival data.
[ 427, 1820 ]
Test
2,384
3
Title: ESTIMATING FUNCTIONS OF PROBABILITY DISTRIBUTIONS FROM A FINITE SET OF SAMPLES. Part II: Bayes Estimators Abstract: This paper is the second in a series of two on the problem of estimating a function of a probability distribution from a finite set of samples of that distribution. In the first paper, the Bayes estimator for a function of a probability distribution was introduced, the optimal properties of the Bayes estimator were discussed, and the Bayes and frequency-counts estimators for the Shannon entropy were derived and graphically contrasted. In the current paper the analysis of the first paper is extended by the derivation of Bayes estimators for several other functions of interest in statistics and information theory. These functions are (powers of) the mutual information, chi-squared for tests of independence, variance, covariance, and average. Finding Bayes estimators for several of these functions requires extensions to the analytical techniques developed in the first paper, and these extensions form the main body of this paper. This paper extends the analysis in other ways as well, for example by enlarging the class of potential priors beyond the uniform prior assumed in the first paper. In particular, the use of the entropic and Dirichlet priors is considered.
[ 2460 ]
Train
2,385
2
Title: Receptive Fields for Vision: from Hyperacuity to Object Recognition Abstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. * Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.
[ 611, 2499, 2676 ]
Train
2,386
2
Title: Optimising Local Hebbian Learning: use the δ-rule Abstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. * Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.
[ 2387 ]
Validation
2,387
2
Title: CNN: a Neural Architecture that Learns Multiple Transformations of Spatial Representations Abstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. * Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.
[ 2386 ]
Test
2,388
2
Title: Combining Neural Network Forecasts on Wavelet-Transformed Time Series Abstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. * Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.
[ 427, 2575 ]
Test
2,389
3
Title: On Computing the Largest Fraction of Missing Information for the EM Algorithm and the Worst Linear Function for Data Augmentation Abstract: We address the problem of computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation. These are the largest eigenvalue and its associated eigenvector for the Jacobian of the EM operator at a maximum likelihood estimate, which are important for assessing convergence in iterative simulation. An estimate of the largest fraction of missing information is available from the EM iterates; this is often adequate since only a few figures of accuracy are needed. In some instances the EM iteration also gives an estimate of the worst linear function. We show that the power method for eigencomputation can be used to compute efficient and accurate estimates of both quantities. Unlike eigenvalue decomposition, the power method computes only the largest eigenvalue and eigenvector of a matrix; it can take advantage of a good eigenvector estimate as an initial value, and it can be terminated after only a few figures of accuracy are obtained. Moreover, the matrix products needed in the power method can be computed by extrapolation, obviating the need to form the Jacobian of the EM operator. We give results of simulation studies on multivariate normal data showing this approach becomes more efficient as the data dimension increases than methods that use a finite-difference approximation to the Jacobian, which is the only general-purpose alternative available. * Funded by National Institutes of Health Small Business Innovation Research Grant 5R44CA65147-03, and by Office of Naval Research contracts N00014-96-1-0192 and N00014-96-1-0330. We are indebted to Tim Hesterberg, Jim Schimert, Doug Clarkson, Anne Greenbaum, and Adrian Raftery for comments and discussion that helped advance this research and improve this paper.
[ 345, 2421 ]
Test
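The eigencomputation in the entry above reduces to the classical power iteration. The sketch below runs it on an explicit symmetric matrix and checks it against a dense eigensolver; the paper instead forms the matrix-vector products implicitly, by extrapolating EM steps, which this toy version does not attempt.

import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(5, 5))
A = A @ A.T                        # symmetric PSD, so the dominant eigenvalue is real

v = rng.normal(size=5)
v /= np.linalg.norm(v)
for _ in range(200):
    w = A @ v                      # one matrix-vector product per iteration
    lam = v @ w                    # Rayleigh-quotient estimate of the eigenvalue
    v = w / np.linalg.norm(w)

print(f"power method: {lam:.6f}  vs  numpy eigvalsh: {np.linalg.eigvalsh(A)[-1]:.6f}")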
2,390
2
Title: A HIERARCHICAL COMMUNITY OF EXPERTS Abstract: We describe a directed acyclic graphical model that contains a hierarchy of linear units and a mechanism for dynamically selecting an appropriate subset of these units to model each observation. The non-linear selection mechanism is a hierarchy of binary units each of which gates the output of one of the linear units. There are no connections from linear units to binary units, so the generative model can be viewed as a logistic belief net (Neal 1992) which selects a skeleton linear model from among the available linear units. We show that Gibbs sampling can be used to learn the parameters of the linear and binary units even when the sampling is so brief that the Markov chain is far from equilibrium.
[ 36, 74, 76, 2227 ]
Validation
2,391
6
Title: A Note on Learning from Multiple-Instance Examples Abstract: We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity Õ(d^2 r/ε^2), saving roughly a factor of r over the results of Auer et al. (1997).
[ 507, 2427, 2548 ]
Validation
2,392
1
Title: Hierarchical Learning with Procedural Abstraction Mechanisms Abstract: We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity Õ(d^2 r/ε^2), saving roughly a factor of r over the results of Auer et al. (1997).
[ 1925 ]
Train
2,393
2
Title: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks (CN) Abstract: The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost.
[ 496, 663, 1813, 1896, 1952, 2029 ]
Train