Columns (dataset schema):

  node_id     int64     values 0 .. 76.9k
  label       int64     values 0 .. 39
  text        string    lengths 13 .. 124k
  neighbors   list      lengths 0 .. 3.32k
  mask        string    4 classes
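Read as a table, each record below carries these five fields in order. A minimal Python sketch of the record structure follows; the `Node` class and the helper names are illustrative only (not part of the dump), and the values are copied from the first two records shown below, with `text` truncated:

```python
from dataclasses import dataclass


@dataclass
class Node:
    # Field names follow the column schema above.
    node_id: int     # id, 0 .. ~76.9k in the full data
    label: int       # class id, 0 .. 39 in the full data
    text: str        # "Title: ... Abstract: ..."
    neighbors: list  # node_ids of linked records
    mask: str        # split name; 4 classes in the full data


# Values copied from the first two records below (text truncated).
nodes = [
    Node(1494, 2, "Title: Avoiding Saturation By Trajectory Reparameterization ...",
         [1282], "Validation"),
    Node(1495, 1, "Title: Clique Detection via Genetic Programming ...",
         [995, 1231], "Validation"),
]

# Index by id and group by split, as a consumer of the dump might.
by_id = {n.node_id: n for n in nodes}
splits = {}
for n in nodes:
    splits.setdefault(n.mask, []).append(n.node_id)

print(sorted(splits["Validation"]))  # [1494, 1495]
```

Only Train, Validation, and Test appear in this excerpt, so the fourth mask class cannot be inferred from it.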
node_id: 1,494   label: 2
Title: Avoiding Saturation By Trajectory Reparameterization Abstract: The problem of trajectory tracking in the presence of input constraints is considered. The desired trajectory is reparameterized on a slower time scale in order to avoid input saturation. Necessary conditions that the reparameterizing function must satisfy are derived. The deviation from the nominal trajectory is minimized by formulating the problem as an optimal control problem.
[ 1282 ]
Validation
node_id: 1,495   label: 1
Title: Clique Detection via Genetic Programming Topics in Combinatorial Optimization Abstract: Genetic Programming is utilized as a technique for detecting cliques in a network. Candidate cliques are represented in lists, and the lists are manipulated such that larger cliques are formed from the candidates. The clique detection problem has some interesting implications to the Strongly Typed Genetic Programming paradigm, namely in forming a class hierarchy. The problem is also useful in that it is easy to add noise.
[ 995, 1231 ]
Validation
node_id: 1,496   label: 0
Title: Adaptive Similarity Assessment for Case-Based Explanation.* Abstract: Genetic Programming is utilized as a technique for detecting cliques in a network. Candidate cliques are represented in lists, and the lists are manipulated such that larger cliques are formed from the candidates. The clique detection problem has some interesting implications to the Strongly Typed Genetic Programming paradigm, namely in forming a class hierarchy. The problem is also useful in that it is easy to add noise.
[ 1125 ]
Train
node_id: 1,497   label: 0
Title: Combining Rules and Cases to Learn Case Adaptation Abstract: Computer models of case-based reasoning (CBR) generally guide case adaptation using a fixed set of adaptation rules. A difficult practical problem is how to identify the knowledge required to guide adaptation for particular tasks. Likewise, an open issue for CBR as a cognitive model is how case adaptation knowledge is learned. We describe a new approach to acquiring case adaptation knowledge. In this approach, adaptation problems are initially solved by reasoning from scratch, using abstract rules about structural transformations and general memory search heuristics. Traces of the processing used for successful rule-based adaptation are stored as cases to enable future adaptation to be done by case-based reasoning. When similar adaptation problems are encountered in the future, these adaptation cases provide task- and domain-specific guidance for the case adaptation process. We present the tenets of the approach concerning the relationship between memory search and case adaptation, the memory search process, and the storage and reuse of cases representing adaptation episodes. These points are discussed in the context of ongoing research on DIAL, a computer model that learns case adaptation knowledge for case-based disaster response planning.
[ 580, 901, 1126, 1163, 1212 ]
Test
node_id: 1,498   label: 0
Title: INFERENTIAL THEORY OF LEARNING: Developing Foundations for Multistrategy Learning Abstract: The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learner's knowledge by exploring the learner's experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learner's experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inference: deduction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated by an example. MTL dynamically adapts strategies to the learning task, defined by the input information, the learner's background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization.
[ 163, 289, 582, 1049, 1071, 1163, 1351, 1483, 1534, 2158, 2398, 2450 ]
Validation
node_id: 1,499   label: 2
Title: Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers Abstract: The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold such as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application. This report describes research done at the Center for Biological and Computational Learning, the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, and at AT&T Bell Laboratories (now AT&T Research, and Lucent Technologies Bell Laboratories). Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. BS thanks the M.I.T. for hospitality during a three-week visit in March 1995, where this work was started. At the time of the study, BS, CB, and VV were with AT&T Bell Laboratories, NJ; KS, FG, PN, and TP were with the Massachusetts Institute of Technology. 
KS is now with the Department of Information Systems and Computer Science at the National University of Singapore, Lower Kent Ridge Road, Singapore 0511; CB and PN are with Lucent Technologies, Bell Laboratories, NJ; VV is with AT&T Research, NJ. BS was supported by the Studienstiftung des deutschen Volkes; CB was supported by ARPA under ONR contract number N00014-94-C-0186. We thank A. Smola for useful discussions. Please direct correspondence to Bernhard Schölkopf, bs@mpik-tueb.mpg.de, Max-Planck-Institut für biologische Kybernetik, Spemannstr. 38, 72076 Tübingen, Germany.
[ 611, 1050, 1306, 1310 ]
Train
node_id: 1,500   label: 6
Title: On the Induction of Intelligible Ensembles Abstract: Ensembles of classifiers, e.g. decision trees, often exhibit greater predictive accuracy than single classifiers alone. Bagging and boosting are two standard ways of generating and combining multiple classifiers. Unfortunately, the increase in predictive performance is usually linked to a dramatic decrease in intelligibility: ensembles are more or less black boxes comparable to neural networks. So far, attempts at pruning ensembles have not been very successful, typically reducing ensembles to only about half their size. This paper describes a different approach which both tries to keep ensemble sizes small during induction and also limits the complexity of single classifiers rigorously. Single classifiers are decision stumps of a prespecified maximal depth. They are combined by majority voting. Ensembles are induced and pruned by a simple hill-climbing procedure. These ensembles can reasonably be transformed into equivalent decision trees. We conduct some empirical evaluation to investigate both predictive accuracies and classifier complexities.
[ 1238, 1267, 1484, 2180 ]
Train
node_id: 1,501   label: 2
Title: A Characterization of Integral Input to State Stability Abstract: Just as input to state stability (iss) generalizes the idea of finite gains with respect to supremum norms, the new notion of integral input to state stability (iiss) generalizes the concept of finite gain when using an integral norm on inputs. In this paper, we obtain a necessary and sufficient characterization of the iiss property, expressed in terms of dissipation inequalities.
[ 447, 693, 1471 ]
Validation
node_id: 1,502   label: 3
Title: Belief Networks, Hidden Markov Models, and Markov Random Fields: a Unifying View Abstract: The use of graphs to represent independence structure in multivariate probability models has been pursued in a relatively independent fashion across a wide variety of research disciplines since the beginning of this century. This paper provides a brief overview of the current status of such research with particular attention to recent developments which have served to unify such seemingly disparate topics as probabilistic expert systems, statistical physics, image analysis, genetics, decoding of error-correcting codes, Kalman filters, and speech recognition with Markov models.
[ 577, 772, 1288, 1393 ]
Validation
node_id: 1,503   label: 3
Title: Belief Revision in Probability Theory Abstract: In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.
[ 1108, 1276, 1308, 1504, 1507, 1525 ]
Train
node_id: 1,504   label: 3
Title: From Inheritance Relation to Non-Axiomatic Logic Abstract: At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision.
[ 1108, 1276, 1308, 1415, 1503, 1506, 1507, 1525 ]
Train
node_id: 1,505   label: 6
Title: Probably Approximately Optimal Derivation Strategies Abstract: An inference graph can have many "derivation strategies", each a particular ordering of the steps involved in reducing a given query to a sequence of database retrievals. An "optimal strategy" for a given distribution of queries is a complete strategy whose "expected cost" is minimal, where the expected cost depends on the conditional probabilities that each requested retrieval succeeds, given that a member of this class of queries is posed. This paper describes the PAO algorithm that first uses a set of training examples to approximate these probability values, and then uses these estimates to produce a "probably approximately optimal" strategy, i.e., given any ε, δ > 0, PAO produces a strategy whose cost is within ε of the cost of the optimal strategy, with probability greater than 1 − δ. This paper also shows how to obtain these strategies in time polynomial in 1/ε, 1/δ and the size of the inference graph, for many important classes of graphs, including all and-or trees.
[ 251, 865, 932, 2560 ]
Train
node_id: 1,506   label: 3
Title: Non-Axiomatic Reasoning System (Version 2.2) Abstract: NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network.
[ 1108, 1276, 1308, 1415, 1504, 1507, 1525 ]
Train
node_id: 1,507   label: 3
Title: A Unified Treatment of Uncertainties Abstract: "Uncertainty in artificial intelligence" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it's hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then shows why some existing approaches cannot be used in such a situation. Then a new approach, Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.
[ 1108, 1308, 1503, 1504, 1506 ]
Train
node_id: 1,508   label: 2
Title: Segmenting Time Series using Gated Experts with Simulated Annealing Abstract: Many real-world time series are multi-stationary, where the underlying data generating process (DGP) switches between different stationary subprocesses, or modes of operation. An important problem in modeling such systems is to discover the underlying switching process, which entails identifying the number of subprocesses and the dynamics of each subprocess. For many time series, this problem is ill-defined, since there are often no obvious means to distinguish the different subprocesses. We discuss the use of nonlinear gated experts to perform the segmentation and system identification of the time series. Unlike standard gated experts methods, however, we use concepts from statistical physics to enhance the segmentation for high-noise problems where only a few experts are required.
[ 668, 1724 ]
Validation
node_id: 1,509   label: 2
Title: Synthetic Aperture Radar Processing by a Multiple Scale Neural System for Boundary and Surface Representation Abstract: Many real-world time series are multi-stationary, where the underlying data generating process (DGP) switches between different stationary subprocesses, or modes of operation. An important problem in modeling such systems is to discover the underlying switching process, which entails identifying the number of subprocesses and the dynamics of each subprocess. For many time series, this problem is ill-defined, since there are often no obvious means to distinguish the different subprocesses. We discuss the use of nonlinear gated experts to perform the segmentation and system identification of the time series. Unlike standard gated experts methods, however, we use concepts from statistical physics to enhance the segmentation for high-noise problems where only a few experts are required.
[ 589, 592, 1144 ]
Train
node_id: 1,510   label: 6
Title: The Problem with Noise and Small Disjuncts Abstract: Systems that learn from examples often create a disjunctive concept definition. The disjuncts in the concept definition which cover only a few training examples are referred to as small disjuncts. The problem with small disjuncts is that they are more error prone than large disjuncts, but may be necessary to achieve a high level of predictive accuracy [Holte, Acker, and Porter, 1989]. This paper extends previous work done on the problem of small disjuncts by taking noise into account. It investigates the assertion that it is hard to learn from noisy data because it is difficult to distinguish between noise and true exceptions. In the process of evaluating this assertion, insights are gained into the mechanisms by which noise affects learning. Two domains are investigated. The experimental results in this paper suggest that for both Shapiro's chess endgame domain [Shapiro, 1987] and for the Wisconsin breast cancer domain [Wolberg, 1990], the assertion is true, at least for low levels (5-10%) of class noise.
[ 790, 1234, 2057 ]
Validation
node_id: 1,511   label: 2
Title: In Stable Dynamic Parameter Adaptation Abstract:
[ 1357 ]
Train
node_id: 1,512   label: 3
Title: Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule Abstract: A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. A particular bootstrap method, the 632+ rule, is shown to substantially outperform cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric, and apply to any possible prediction rule: however we only study classification problems with 0-1 loss in detail. Our simulations include "smooth" prediction rules like Fisher's Linear Discriminant Function, and unsmooth ones like Nearest Neighbors.
[ 949, 999, 1087, 1112, 1235, 1267, 1335, 1463, 1608, 1671 ]
Train
node_id: 1,513   label: 0
Title: Fast NP Chunking Using Memory-Based Learning Techniques Abstract: In this paper we discuss the application of Memory-Based Learning (MBL) to fast NP chunking. We first discuss the application of a fast decision tree variant of MBL (IGTree) on the dataset described in (Ramshaw and Marcus, 1995), which consists of roughly 50,000 test and 200,000 train items. In a second series of experiments we used an architecture of two cascaded IGTrees. In the second level of this cascaded classifier we added context predictions as extra features so that incorrect predictions from the first level can be corrected, yielding a 97.2% generalisation accuracy with training and testing times in the order of seconds to minutes.
[ 634, 785, 862, 1328, 1812 ]
Train
node_id: 1,514   label: 6
Title: Is Consistency Harmful? Abstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems.
[ 429, 1333 ]
Test
node_id: 1,515   label: 4
Title: Evolving Optimal Populations with XCS Classifier Systems Abstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems.
[ 163, 936, 961, 988, 1447, 1581, 1711 ]
Train
node_id: 1,516   label: 1
Title: Solving 3-SAT by GAs Adapting Constraint Weights Abstract: Handling NP complete problems with GAs is a great challenge. In particular the presence of constraints makes finding solutions hard for a GA. In this paper we present a problem independent constraint handling mechanism, Stepwise Adaptation of Weights (SAW), and apply it for solving the 3-SAT problem. Our experiments prove that the SAW mechanism substantially increases GA performance. Furthermore, we compare our SAW-ing GA with the best heuristic technique we could trace, WGSAT, and conclude that the GA is superior to the heuristic method.
[ 833, 1136, 1218 ]
Train
node_id: 1,517   label: 2
Title: Robust Interpretation of Neural-Network Models Abstract: Artificial Neural Networks seem very promising for regression and classification, especially for large covariate spaces. These methods represent a non-linear function as a composition of low dimensional ridge functions and therefore appear to be less sensitive to the dimensionality of the covariate space. However, due to the non-uniqueness of a global minimum and the existence of (possibly) many local minima, the model revealed by the network is not stable. We introduce a method to interpret neural network results which uses novel robustification techniques. This results in a robust interpretation of the model employed by the network. Simulated data from known models is used to demonstrate the interpretability results and to demonstrate the effects of different regularization methods on the robustness of the model. Graphical methods are introduced to present the interpretation results. We further demonstrate how interaction between covariates can be revealed. From this study we conclude that the interpretation method works well, but that NN models may sometimes be misinterpreted, especially if the approximations to the true model are less robust.
[ 1612 ]
Validation
node_id: 1,518   label: 1
Title: Feature selection through Functional Links with Evolutionary Computation for Neural Networks Abstract: In this paper we describe different ways to select and transform features using evolutionary computation. The features are intended to serve as inputs to a feedforward network. The first way is the selection of features using a standard genetic algorithm, and the solution found specifies whether a certain feature should be present or not. We show that for the prediction of unemployment rates in various European countries, this is a successful approach. In fact, this kind of selection of features is a special case of so-called functional links. Functional links transform the input pattern space to a new pattern space. As functional links one can use polynomials, or more general functions. Both can be found using evolutionary computation. Polynomial functional links are found by evolving a coding of the powers of the polynomial. For symbolic functions we can use genetic programming. Genetic programming finds the symbolic functions that are to be applied to the inputs. We compare the workings of the latter two methods on two artificial datasets, and on a real-world medical image dataset.
[ 1536 ]
Train
node_id: 1,519   label: 5
Title: What online Machine Learning can do for Knowledge Acquisition A Case Study Abstract: This paper reports on the development of a realistic knowledge-based application using the MOBAL system. Some problems and requirements resulting from industrial-caliber tasks are formulated. A step-by-step account of the construction of a knowledge base for such a task demonstrates how the interleaved use of several learning algorithms in concert with an inference engine and a graphical interface can fulfill those requirements. Design, analysis, revision, refinement and extension of a working model are combined in one incremental process. This illustrates the balanced cooperative modeling approach. The case study is taken from the telecommunications domain and more precisely deals with security management in telecommunications networks. MOBAL would be used as part of a security management tool for acquiring, validating and refining a security policy. The modeling approach is compared with other approaches, such as KADS and stand-alone machine learning.
[ 963, 1177 ]
Train
node_id: 1,520   label: 2
Title: Equivariant adaptive source separation Abstract: Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Closed-form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach.
[ 59, 354, 570, 834, 839, 872, 873, 874, 920, 1072, 1200, 1211, 1245, 1246, 1258, 1524, 1526, 1709 ]
Train
node_id: 1,521   label: 6
Title: Improving Bagging Performance by Increasing Decision Tree Diversity Abstract: Ensembles of decision trees often exhibit greater predictive accuracy than single trees alone. Bagging and boosting are two standard ways of generating and combining multiple trees. Boosting has been empirically determined to be the more effective of the two, and it has recently been proposed that this may be because it produces more diverse trees than bagging. This paper reports empirical findings that strongly support this hypothesis. We enforce greater decision tree diversity in bagging by a simple modification of the underlying decision tree learner that utilizes randomly-generated decision stumps of predefined depth as the starting point for tree induction. The modified procedure yields very competitive results while still retaining one of the attractive properties of bagging: all iterations are independent. Additionally, we also investigate a possible integration of bagging and boosting. All these ensemble-generating procedures are compared empirically on various domains.
[ 70, 1185, 1237, 1484 ]
Train
node_id: 1,522   label: 6
Title: Improving Bagging Performance by Increasing Decision Tree Diversity Abstract: ARCING THE EDGE Leo Breiman Technical Report 486, Statistics Department University of California, Berkeley CA. 94720 Abstract Recent work has shown that adaptively reweighting the training set, growing a classifier using the new weights, and combining the classifiers constructed to date can significantly decrease generalization error. Procedures of this type were called arcing by Breiman [1996]. The first successful arcing procedure was introduced by Freund and Schapire [1995, 1996] and called Adaboost. In an effort to explain why Adaboost works, Schapire et al. [1997] derived a bound on the generalization error of a convex combination of classifiers in terms of the margin. We introduce a function called the edge, which differs from the margin only if there are more than two classes. A framework for understanding arcing algorithms is defined. In this framework, we see that the arcing algorithms currently in the literature are optimization algorithms which minimize some function of the edge. A relation is derived between the optimal reduction in the maximum value of the edge and the PAC concept of weak learner. Two algorithms are described which achieve the optimal reduction. Tests on both synthetic and real data cast doubt on the Schapire et al. explanation. There is recent empirical evidence that significant reductions in generalization error can be gotten by growing a number of different classifiers on the same training set and letting these vote for the best class. Freund and Schapire ([1995], [1996]) proposed an algorithm called AdaBoost which adaptively reweights the training set in a way based on the past history of misclassifications, constructs a new classifier using the current weights, and uses the misclassification rate of this classifier to determine the size of its vote. In a number of empirical studies on many data sets using trees (CART or C4.5) as the base classifier (Drucker and Cortes [1995], Quinlan [1996], Freund and Schapire [1996], Breiman [1996]) AdaBoost produced dramatic decreases in generalization error compared to using a single tree. Error rates were reduced to the point where tests on some well-known data sets gave the result that CART plus AdaBoost did significantly better than any other of the commonly used classification methods (Breiman [1996]). Meanwhile, empirical results showed that other methods of adaptive resampling (or reweighting) and combining (called "arcing" by Breiman [1996]) also led to low test set error rates. An algorithm called arc-x4 (Breiman [1996]) gave error rates almost identical to Adaboost. Ji and Ma [1997] worked with classifiers consisting of randomly selected hyperplanes and, using a different method of adaptive resampling and unweighted voting, also got low error rates. Thus, there are at least three arcing algorithms extant, all of which give excellent classification accuracy.
[ 569, 1484 ]
Validation
node_id: 1,523   label: 1
Title: A Generalized Permutation Approach to Job Shop Scheduling with Genetic Algorithms Abstract: In order to sequence the tasks of a job shop problem (JSP) on a number of machines related to the technological machine order of jobs, a new representation technique mathematically known as "permutation with repetition" is presented. The main advantage of this single chromosome representation is in analogy to the permutation scheme of the traveling salesman problem (TSP) that it cannot produce illegal sets of operation sequences (infeasible symbolic solutions). As a consequence of the representation scheme a new crossover operator preserving the initial scheme structure of permutations with repetition will be sketched. Its behavior is similar to the well known Order-Crossover for simple permutation schemes. Actually the GOX operator for permutations with repetition arises from a Generalisation of OX. Computational experiments show, that GOX passes the information from a couple of parent solutions efficiently to offspring solutions. Together, the new representation and GOX support the cooperative aspect of the genetic search for scheduling problems strongly.
[ 343, 813, 815, 880, 1060, 1136 ]
Train
1,524
2
Title: BLIND SEPARATION OF DELAYED SOURCES BASED ON INFORMATION MAXIMIZATION Abstract: Recently, Bell and Sejnowski have presented an approach to blind source separation based on the information maximization principle. We extend this approach into more general cases where the sources may have been delayed with respect to each other. We present a network architecture capable of coping with such sources, and we derive the adaptation equations for the delays and the weights in the network by maximizing the information transferred through the network. Examples using wideband sources such as speech are presented to illustrate the algorithm.
[ 570, 576, 1243, 1245, 1381, 1520, 1526 ]
Validation
1,525
3
Title: Reference Classes and Multiple Inheritances Abstract: The reference class problem in probability theory and the multiple inheritances (extensions) problem in non-monotonic logics can be referred to as special cases of conflicting beliefs. The current solution accepted in the two domains is the specificity priority principle. By analyzing an example, several factors (ignored by the principle) are found to be relevant to the priority of a reference class. A new approach, Non-Axiomatic Reasoning System (NARS), is discussed, where these factors are all taken into account. It is argued that the solution provided by NARS is better than the solutions provided by probability theory and non-monotonic logics.
[ 1276, 1415, 1503, 1504, 1506 ]
Train
1,526
2
Title: Working Paper IS-97-22 (Information Systems) A First Application of Independent Component Analysis to Extracting Structure Abstract: This paper discusses the application of a modern signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further promising applications to risk management since ICA focuses on higher order statistics.
[ 576, 839, 1520, 1524 ]
Train
1,527
3
Title: A THEORY OF INFERRED CAUSATION Abstract: This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation.
[ 211, 260, 419, 827, 909, 971, 1086, 1240, 1543, 1747, 1894, 2076, 2088, 2166, 2221, 2420, 2524, 2525, 2561 ]
Validation
1,528
5
Title: Using Qualitative Models to Guide Inductive Learning Abstract: This paper presents a method for using qualitative models to guide inductive learning. Our objectives are to induce rules which are not only accurate but also explainable with respect to the qualitative model, and to reduce learning time by exploiting domain knowledge in the learning process. Such explainability is essential both for practical application of inductive technology, and for integrating the results of learning back into an existing knowledge-base. We apply this method to two process control problems, a water tank network and an ore grinding process used in the mining industry. Surprisingly, in addition to achieving explainability, the classificational accuracy of the induced rules is also increased. We show how the value of the qualitative models can be quantified in terms of their equivalence to additional training examples, and finally discuss possible extensions.
[ 151, 426, 1487 ]
Train
1,529
4
Title: Explanation Based Learning: A Comparison of Symbolic and Neural Network Approaches Abstract: Explanation based learning has typically been considered a symbolic learning method. An explanation based learning method that utilizes purely neural network representations (called EBNN) has recently been developed, and has been shown to have several desirable properties, including robustness to errors in the domain theory. This paper briefly summarizes the EBNN algorithm, then explores the correspondence between this neural network based EBL method and EBL methods based on symbolic representations.
[ 565, 882, 1314 ]
Validation
1,530
1
Title: Performance of Multi-Parent Crossover Operators on Numerical Function Optimization Problems Abstract: The multi-parent scanning crossover, generalizing the traditional uniform crossover, and diagonal crossover, generalizing 1-point (n-point) crossovers, were introduced in [5]. In subsequent publications, see [6, 18, 19], several aspects of multi-parent recombination are discussed. Due to space limitations, however, a full overview of experimental results showing the performance of multi-parent GAs on numerical optimization problems has never been published. This technical report is meant to fill this gap and make results available.
[ 145, 163, 1218, 1424, 2089 ]
Train
1,531
0
Title: NACODAE: Navy Conversational Decision Aids Environment Abstract: This report documents NACODAE, the Navy Conversational Decision Aids Environment being developed at the Navy Center for Applied Research in Artificial Intelligence (NCARAI), which is a branch of the Naval Research Laboratory. NACODAE is a software prototype that is being developed under the Practical Advances in Case-Based Reasoning project, which is funded by the Office for Naval Research, for the purpose of assisting Navy and other DoD personnel in decision aids tasks such as system maintenance, operational training, crisis response planning, logistics, fault diagnosis, target classification, and meteorological nowcasting. Implemented in Java, NACODAE can be used on any machine containing a Java virtual machine (e.g., PCs, Unix). This document describes and exemplifies NACODAE's capabilities. Our goal is to transition this tool to operational personnel, and to continue its enhancement through user feedback and by testing recent research advances in case-based reasoning and related areas.
[ 66, 887, 983, 1154 ]
Test
1,532
3
Title: Automated Decomposition of Model-based Learning Problems Abstract: A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional, model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
[ 327, 558, 577, 782, 925, 1480, 1487 ]
Test
1,533
1
Title: Evolving Visual Routines Abstract: Traditional machine vision assumes that the vision system recovers a complete, labeled description of the world [ Marr, 1982 ] . Recently, several researchers have criticized this model and proposed an alternative model which considers perception as a distributed collection of task-specific, task-driven visual routines [ Aloimonos, 1993, Ullman, 1987 ] . Some of these researchers have argued that in natural living systems these visual routines are the product of natural selection [ Ramachandran, 1985 ] . So far, researchers have hand-coded task-specific visual routines for actual implementations (e.g. [ Chapman, 1993 ] ). In this paper we propose an alternative approach in which visual routines for simple tasks are evolved using an artificial evolution approach. We present results from a series of runs on actual camera images, in which simple routines were evolved using Genetic Programming techniques [ Koza, 1992 ] . The results obtained are promising: the evolved routines are able to correctly classify up to 93% of the images, which is better than the best algorithm we were able to write by hand.
[ 781, 846, 900, 970, 1277, 1730 ]
Train
1,534
0
Title: The Use of Explicit Goals for Knowledge to Guide Inference and Learning Abstract: Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
[ 289, 1122, 1148, 1163, 1278, 1498, 1556, 1597 ]
Validation
1,535
0
Title: Decision Models: A Theory of Volitional Explanation Abstract: This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations, as well as the process by which an understander builds the explanations. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used and evaluated.
[ 289, 629, 1348, 1537 ]
Train
1,536
1
Title: Representation and Evolution of Neural Networks Abstract: An evolutionary approach for developing improved neural network architectures is presented. It is shown that it is possible to use genetic algorithms for the construction of backpropagation networks for real world tasks. Therefore a network representation is developed with certain properties. Results with various application are presented.
[ 163, 207, 1518, 1728, 2504 ]
Test
1,537
0
Title: Incremental Learning of Explanation Patterns and their Indices Abstract: This paper describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Recent work in AI has dealt with the issue of using past explanations stored in the reasoner's memory to understand novel situations. However, this process assumes that past explanations are well understood and provide good "lessons" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Instead, it is reasonable to assume that the reasoner would have gaps in its knowledge base. By reasoning about a new situation, the reasoner should be able to fill in these gaps as new information came in, reorganize its explanations in memory, and gradually evolve a better understanding of its domain. We present a story understanding program that retrieves past explanations from situations already in memory, and uses them to build explanations to understand novel stories about terrorism. In doing so, the system refines its understanding of the domain by filling in gaps in these explanations, by elaborating the explanations, or by learning new indices for the explanations. This is a type of incremental learning since the system improves its explanatory knowledge of the domain in an incremental fashion rather than by learning new XPs as a whole.
[ 289, 629, 1348, 1535, 1556 ]
Train
1,538
2
Title: Analysis of Drifting Dynamics with Neural Network Hidden Markov Models Abstract: We present a method for the analysis of nonstationary time series with multiple operating modes. In particular, it is possible to detect and to model both a switching of the dynamics and a less abrupt, time-consuming drift from one mode to another. This is achieved in two steps. First, an unsupervised training method provides prediction experts for the inherent dynamical modes. Then, the trained experts are used in a hidden Markov model that allows drifts to be modeled. An application to physiological wake/sleep data demonstrates that analysis and modeling of real-world time series can be improved when the drift paradigm is taken into account.
[ 1724 ]
Train
1,539
5
Title: Finding new rules for incomplete theories: Explicit biases for induction with contextual information. In Proceedings Abstract: addressed in KBANN (which translates a theory into a neural-net, refines it using backpropagation, and then retranslates the result back into rules) by adding extra hidden units and connections to the initial network; however, this would require predetermining the num In this paper, we have presented constructive induction techniques recently added to the EITHER theory refinement system. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory than span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than any other existing theory refinement system.
[ 136, 378, 449, 638, 924, 2091 ]
Train
1,540
4
Title: MultiPlayer Residual Advantage Learning With General Function Approximation Abstract: A new algorithm, advantage learning, is presented that improves on advantage updating by requiring that a single function be learned rather than two. Furthermore, advantage learning requires only a single type of update, the learning update, while advantage updating requires two different types of updates, a learning update and a normalization update. The reinforcement learning system uses the residual form of advantage learning. An application of reinforcement learning to a Markov game is presented. The testbed has continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step, each player chooses one of two possible actions: turn left or turn right, resulting in a 90 degree instantaneous change in the aircraft's heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm, Incremental Delta-Delta (IDD), which extends Jacobs (1988) Delta-Delta for use in incremental training, and differs from Sutton's Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable for use with general function approximation systems. The advantage learning algorithm for optimal control is modified for games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft testbed validate theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second order method has been used with residual algorithms.
Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning testbed described above and for a supervised learning testbed. The results of these experiments demonstrate that IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone.
[ 565, 842, 1045, 1118, 1378, 1443, 1459 ]
Train
1,541
1
Title: Unsupervised Learning with the Soft-Means Algorithm Abstract: This note describes a useful adaptation of the `peak seeking' regime used in unsupervised learning processes such as competitive learning and `k-means'. The adaptation enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data.
[ 1395 ]
Test
1,542
0
Title: Protein Sequencing Experiment Planning Using Analogy Abstract: Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. Planning is interleaved with experiment execution. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods.
[ 801 ]
Test
1,543
3
Title: Belief Networks Revisited Abstract: Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods.
[ 1324, 1527 ]
Train
1,544
1
Title: Monitoring in Embedded Agents Abstract: Finding good monitoring strategies is an important process in the design of any embedded agent. We describe the nature of the monitoring problem, point out what makes it difficult, and show that while periodic monitoring strategies are often the easiest to derive, they are not always the most appropriate. We demonstrate mathematically and empirically that for a wide class of problems, the so-called "cupcake problems", there exists a simple strategy, interval reduction, that outperforms periodic monitoring. We also show how features of the environment may influence the choice of the optimal strategy. The paper concludes with some thoughts about a monitoring strategy taxonomy, and what its defining features might be.
[ 163, 566, 1206 ]
Train
1,545
3
Title: Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management Abstract: This paper discusses issues related to Bayesian network model learning for unbalanced binary classification tasks. In general, the primary focus of current research on Bayesian network learning systems (e.g., K2 and its variants) is on the creation of the Bayesian network structure that fits the database best. It turns out that when applied with a specific purpose in mind, such as classification, the performance of these network models may be very poor. We demonstrate that Bayesian network models should be created to meet the specific goal or purpose intended for the model. We first present a goal-oriented algorithm for constructing Bayesian networks for predicting uncollectibles in telecommunications risk-management datasets. Second, we argue and demonstrate that current Bayesian network learning methods may fail to perform satisfactorily in real life applications since they do not learn models tailored to a specific goal or purpose. Third, we discuss the performance of goal oriented K2 and its variant.
[ 1086, 1582, 1909 ]
Validation
1,546
4
Title: Analytical Mean Squared Error Curves in Temporal Difference Learning Abstract: We have calculated analytical expressions for how the bias and variance of the estimators provided by various temporal difference value estimation algorithms change with o*ine updates over trials in absorbing Markov chains using lookup table representations. We illustrate classes of learning curve behavior in various chains, and show the manner in which TD is sensitive to the choice of its step size and eligibility trace parameters.
[ 63, 565, 1376 ]
Validation
1,547
2
Title: Misclassification Minimization Abstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that "counts" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.
[ 142, 227, 427, 1283 ]
Train
1,548
3
Title: Free energy coding Abstract: In this paper, we introduce a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed free energy approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The expectation-maximization parameter estimation algorithms minimize this effective codeword length. We illustrate the performance of free energy coding on a simple problem where a compression factor of two is gained by using the new method.
[ 76, 1291, 1374 ]
Train
1,549
3
Title: Explaining Predictions in Bayesian Networks and Influence Diagrams Abstract: As Bayesian Networks and Influence Diagrams are being used more and more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, the ones designed to explain predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing and evaluating such explanations and present an algorithm to compute them.
[ 339, 1602 ]
Train
1,550
6
Title: MDL and MML Similarities and Differences (Introduction to Minimum Encoding Inference Part III). Tech Report 207, Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
[ 84, 157, 684, 1158, 1199, 1238, 1419, 1425, 1427, 1555, 1624 ]
Train
1,551
2
Title: A Performance Analysis of the CNS-1 on Large, Dense Backpropagation Networks Connectionist Network Supercomputer Abstract: We determine in this study the sustained performance of the CNS-1 during training and evaluation of large multilayered feedforward neural networks. Using a sophisticated coding, the 128-node machine would achieve up to 111 Giga connections per second (GCPS) and 22 Giga connection updates per second (GCUPS). During recall the machine would achieve 87% of the peak multiply-accumulate performance. The training of large nets is less efficient than the recall but only by a factor of 1.5 to 2. The benchmark is parallelized and the machine code is optimized before analyzing the performance. Starting from an optimal parallel algorithm, CNS specific optimizations still reduce the run time by a factor of 4 for recall and by a factor of 3 for training. Our analysis also yields some strategies for code optimization. The CNS-1 is still in design, and therefore we have to model the run time behavior of the memory system and the interconnection network. This gives us the option of changing some parameters of the CNS-1 system in order to analyze their performance impact.
[ 272, 914 ]
Test
1,552
0
Title: Case-Based Seeding for an Interactive Crisis Response Assistant Abstract: In this paper, we present an interactive, case-based approach to crisis response that provides users with the ability to rapidly develop good responses while allowing them to retain ultimate control over the decision-making process. We have implemented this approach in Inca, an INteractive Crisis Assistant for planning and scheduling in crisis domains. Inca relies on case-based methods to seed the response development process with initial candidate solutions drawn from previous cases. The human user then interacts with Inca to adapt these solutions to the current situation. We will discuss this interactive approach to crisis response using an artificial hazardous materials domain, Haz-Mat, that we developed for the purpose of evaluating candidate assistant mechanisms for crisis response.
[ 1212, 1553, 1554 ]
Test
1,553
0
Title: Learning to Predict User Operations for Adaptive Scheduling Abstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling.
[ 82, 901, 1552, 1554 ]
Train
1,554
0
Title: CABINS A Framework of Knowledge Acquisition and Iterative Revision for Schedule Improvement and Reactive Repair Abstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling.
[ 901, 951, 1401, 1552, 1553 ]
Train
1,555
3
Title: Bayesian and Information-Theoretic Priors for Bayesian Network Parameters Abstract: We consider Bayesian and information-theoretic approaches for determining non-informative prior distributions in a parametric model family. The information-theoretic approaches are based on the recently modified definition of stochastic complexity by Rissanen, and on the Minimum Message Length (MML) approach by Wallace. The Bayesian alternatives include the uniform prior, and the equivalent sample size priors. In order to be able to empirically compare the different approaches in practice, the methods are instantiated for a model family of practical importance, the family of Bayesian networks.
[ 558, 1158, 1550 ]
Train
1,556
0
Title: A Goal-Based Approach to Intelligent Information Retrieval Abstract: Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems. The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system's inferential abilities and current state. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.
[ 1534, 1537 ]
Test
1,557
4
Title: Using Communication to Reduce Locality in Distributed Multi-Agent Learning Abstract: This paper attempts to bridge the fields of machine learning, robotics, and distributed AI. It discusses the use of communication in reducing the undesirable effects of locality in fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. Two key problems, hidden state and credit assignment, are addressed by applying local undirected broadcast communication in a dual role: as sensing and as reinforcement. The methodology is demonstrated on two multi-robot learning experiments. The first describes learning a tightly-coupled coordination task with two robots, the second a loosely-coupled task with four robots learning social rules. Communication is used to 1) share sensory data to overcome hidden state and 2) share reinforcement to overcome the credit assignment problem between the agents and bridge the gap between local/individual and global/group payoff.
[ 650, 691, 1649, 1687 ]
Validation
1,558
1
Title: How good are genetic algorithms at finding large cliques: an experimental study Abstract: This paper investigates the power of genetic algorithms at solving the MAX-CLIQUE problem. We measure the performance of a standard genetic algorithm on an elementary set of problem instances consisting of embedded cliques in random graphs. We indicate the need for improvement, and introduce a new genetic algorithm, the multi-phase annealed GA, which exhibits superior performance on the same problem set. As we scale up the problem size and test on "hard" benchmark instances, we notice a degraded performance in the algorithm caused by premature convergence to local minima. To alleviate this problem, a sequence of modifications is implemented ranging from changes in input representation to systematic local search. The most recent version, called union GA, incorporates the features of union cross-over, greedy replacement, and diversity enhancement. It shows a marked speed-up in the number of iterations required to find a given solution, as well as some improvement in the clique size found. We discuss issues related to the SIMD implementation of the genetic algorithms on a Thinking Machines CM-5, which was necessitated by the intrinsically high time complexity (O(n^3)) of the serial algorithm for computing one iteration. Our preliminary conclusions are: (1) a genetic algorithm needs to be heavily customized to work "well" for the clique problem; (2) a GA is computationally very expensive, and its use is only recommended if it is known to find larger cliques than other algorithms; (3) although our customization effort is bringing forth continued improvements, there is no clear evidence, at this time, that a GA will have better success in circumventing local minima.
[ 163, 1136, 2564 ]
Test
1,559
2
Title: Active Learning with Statistical Models Abstract: For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
[ 71, 740, 929, 1664, 1683, 1697, 1703, 1791, 2658 ]
Train
1,560
4
Title: DESIGN AND ANALYSIS OF EFFICIENT REINFORCEMENT LEARNING ALGORITHMS Abstract: For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
[ 456, 535, 791, 1161, 2198 ]
Train
1,561
6
Title: Characterizing Rational versus Exponential Learning Curves Abstract: We consider the standard problem of learning a concept from random examples. Here a learning curve can be defined to be the expected error of a learner's hypotheses as a function of training sample size. Haussler, Littlestone and Warmuth have shown that, in the distribution free setting, the smallest expected error a learner can achieve in the worst case over a concept class C converges rationally to zero error (i.e., Θ(1/t) for training sample size t). However, recently Cohn and Tesauro have demonstrated how exponential convergence can often be observed in experimental settings (i.e., average error decreasing as e^{-Θ(t)}). By addressing a simple non-uniformity in the original analysis, this paper shows how the dichotomy between rational and exponential worst case learning curves can be recovered in the distribution free theory. These results support the experimental findings of Cohn and Tesauro: for finite concept classes, any consistent learner achieves exponential convergence, even in the worst case; but for continuous concept classes, no learner can exhibit sub-rational convergence for every target concept and domain distribution. A precise boundary between rational and exponential convergence is drawn for simple concept chains. Here we show that somewhere dense chains always force rational convergence in the worst case, but exponential convergence can always be achieved for nowhere dense chains.
[ 967 ]
Test
1,562
2
Title: Using Sampling and Queries to Extract Rules from Trained Neural Networks Abstract: Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.
[ 355, 627, 631, 1637 ]
Test
1,563
1
Title: Fast EquiPartitioning of Rectangular Domains using Stripe Decomposition Abstract: This paper presents a fast algorithm that provides optimal or near optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size M x N into P equal area regions while minimizing the total perimeter of the regions. The approach taken here is to divide the grid into stripes that can be filled completely with an integer number of regions. This striping method gives rise to a knapsack integer program that can be efficiently solved by existing codes. The solution of the knapsack problem is then used to generate the grid region assignments. An implementation of the algorithm partitioned a 1000 x 1000 grid into 1000 regions to a provably optimal solution in less than one second. With sufficient memory to hold the M x N grid array, extremely large minimum perimeter problems can be solved easily.
[ 53, 357, 803 ]
Test
1,564
2
Title: GROWING RADIAL BASIS FUNCTION NETWORKS Abstract: This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adaptive control applications than the more popular backpropagation networks. The first algorithm has been derived from a previous method developed by Fritzke, while the second one has been inspired by the CART algorithm developed by Breiman for generating regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performances. An evaluation on the standard case study of the Mackey-Glass temporal series is reported.
[ 611, 687, 745, 899, 1672 ]
Validation
1,565
1
Title: Evolving Fuzzy Prototypes for Efficient Data Clustering Abstract: number of prototypes used to represent each class, the position of each prototype within its class and the membership function associated with each prototype. This paper proposes a novel, evolutionary approach to data clustering and classification which overcomes many of the limitations of traditional systems. The approach rests on the optimisation of both the number and positions of fuzzy prototypes using a real-valued genetic algorithm (GA). Because the GA acts on all of the classes at once, the system benefits naturally from global information about possible class interactions. In addition, the concept of a receptive field for each prototype is used to replace the classical distance-based membership function by an infinite fuzzy support, multidimensional, Gaussian function centred over the prototype and with unique variance in each dimension, reflecting the tightness of the cluster. Hence, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP. Most data clustering algorithms, including the popular K-means algorithm, require a priori knowledge about the problem domain to fix the number and starting positions of the prototypes. Although such knowledge may be assumed for domains whose dimensionality is fairly small or whose underlying structure is relatively intuitive, it is clearly much less accessible in hyper-dimensional settings, where the number of input parameters may be very large. Classical systems also suffer from the fact that they can only define clusters for one class at a time. Hence, no account is made of potential interactions among classes. These drawbacks are further compounded by the fact that the ensuing classification is typically based on a fixed, distance-based membership function for all prototypes. 
This paper proposes a novel approach to data clustering and classification which overcomes the aforementioned limitations of traditional systems. The model is based on the genetic evolution of fuzzy prototypes. A real-valued genetic algorithm (GA) is used to optimise both the number and positions of prototypes. Because the GA acts on all of the classes at once and measures fitness as classification accuracy, the system naturally profits from global information about class interaction. The concept of a receptive field for each prototype is also presented and used to replace the classical, fixed distance-based function by an infinite fuzzy support membership function. The new membership function is inspired by that used in the hidden layer of RBF networks. It consists of a multidimensional Gaussian function centred over the prototype and with a unique variance in each dimension that reflects the tightness of the cluster. During classification, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP.
[ 899, 1088 ]
Test
1,566
6
Title: Worst-case Quadratic Loss Bounds for Prediction Using Linear Functions and Gradient Descent Abstract: In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We prove worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions.
[ 453, 1124, 1567 ]
Test
1,567
6
Title: Improved Bounds about On-line Learning of Smooth Functions of a Single Variable Abstract: We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F_q is the set of all absolutely continuous functions f from [0, 1] to R such that ||f'||_q <= 1, and opt(F_q, m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q >= 2, opt(F_q, m) = Θ(
[ 1124, 1358, 1566 ]
Train
1,568
0
Title: The Utility of Feature Weighting in Nearest-Neighbor Algorithms Abstract: Nearest-neighbor algorithms are known to depend heavily on their distance metric. In this paper, we investigate the use of a weighted Euclidean metric in which the weight for each feature comes from a small set of options. We describe Diet, an algorithm that directs search through a space of discrete weights using cross-validation error as its evaluation function. Although a large set of possible weights can reduce the learner's bias, it can also lead to increased variance and overfitting. Our empirical study shows that, for many data sets, there is an advantage to weighting features, but that increasing the number of possible weights beyond two (zero and one) has very little benefit and sometimes degrades performance.
[ 430, 1053, 1328 ]
Train
1,569
5
Title: Estimating Attributes: Analysis and Extensions of RELIEF Abstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.
[ 208, 430, 877, 1008, 1010, 1011, 1073, 1182, 1327, 1486, 1587, 1609, 1679, 1684, 1721, 1726 ]
Train
1,570
0
Title: Average-Case Analysis of a Nearest Neighbor Algorithm Abstract: In this paper we present an average-case analysis of the nearest neighbor algorithm, a simple induction method that has been studied by many researchers. Our analysis assumes a conjunctive target concept, noise-free Boolean attributes, and a uniform distribution over the instance space. We calculate the probability that the algorithm will encounter a test instance that is distance d from the prototype of the concept, along with the probability that the nearest stored training case is distance e from this test instance. From this we compute the probability of correct classification as a function of the number of observed training cases, the number of relevant attributes, and the number of irrelevant attributes. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.
[ 634, 1109, 1111, 1164, 1339, 1678 ]
Train
1,571
1
Title: Average-Case Analysis of a Nearest Neighbor Algorithm Abstract: Eugenic Evolution for Combinatorial Optimization John William Prior Report AI98-268 May 1998
[ 163, 343, 1216, 1218, 2202 ]
Train
1,572
1
Title: The Coevolution of Mutation Rates Abstract: In order to better understand life, it is helpful to look beyond the envelop of life as we know it. A simple model of coevolution was implemented with the addition of genes for longevity and mutation rate in the individuals. This made it possible for a lineage to evolve to be immortal. It also allowed the evolution of no mutation or extremely high mutation rates. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes.
[ 163, 1139, 1598 ]
Train
1,573
4
Title: Genetics-based Machine Learning and Behaviour Based Robotics: A New Synthesis Abstract: As complexity grows, the learning task becomes more difficult. We face this problem using an architecture based on learning classifier systems. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns to exhibit structural properties of animal behavioural organization, as proposed by ethologists.
[ 163, 636, 764, 910, 1432, 1481, 1673, 2174 ]
Train
1,574
0
Title: Probabilistic Instance-Based Learning Abstract: Traditional instance-based learning methods base their predictions directly on (training) data that has been stored in the memory. The predictions are based on weighting the contributions of the individual stored instances by a distance function implementing a domain-dependent similarity metric. This basic approach suffers from three drawbacks: computationally expensive prediction when the database grows large, overfitting in the presence of noisy data, and sensitivity to the selection of a proper distance function. We address all these issues by giving a probabilistic interpretation to instance-based learning, where the goal is to approximate predictive distributions of the attributes of interest. In this probabilistic view the instances are not individual data items but probability distributions, and we perform Bayesian inference with a mixture of such prototype distributions. We demonstrate the feasibility of the method empirically for a wide variety of public domain classification data sets.
[ 484, 1017 ]
Test
1,575
1
Title: A Comparative Study of Genetic Search Abstract: We present a comparative study of genetic algorithms and their search properties when treated as a combinatorial optimization technique. This is done in the context of the NP-hard problem MAX-SAT, the comparison being relative to the Metropolis process, and by extension, simulated annealing. Our contribution is two-fold. First, we show that for large and difficult MAX-SAT instances, the contribution of cross-over to the search process is marginal. Little is lost if it is dispensed altogether, running mutation and selection as an enlarged Metropolis process. Second, we show that for these problem instances, genetic search consistently performs worse than simulated annealing when subject to similar resource bounds. The correspondence between the two algorithms is made more precise via a decomposition argument, and provides a framework for interpreting our results.
[ 163, 1136, 1305 ]
Train
1,576
6
Title: What do Constructive Learners Really Learn? Abstract: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary if the initial representation is inadequate or inappropriate. However, the distinction between constructive and non-constructive methods appears to be highly ambiguous. Several conventional definitions of the process of constructive induction appear to include all conceivable learning processes. In this paper I argue that the process of constructive learning should be identified with that of relational learning (i.e., I suggest that
[ 375, 426, 1266, 1595 ]
Validation
1,577
1
Title: Fast Probabilistic Modeling for Combinatorial Optimization Abstract: Probabilistic models have recently been utilized for the optimization of large combinatorial search problems. However, complex probabilistic models that attempt to capture inter-parameter dependencies can have prohibitive computational costs. The algorithm presented in this paper, termed COMIT, provides a method for using probabilistic models in conjunction with fast search techniques. We show how COMIT can be used with two very different fast search algorithms: hillclimbing and Population-based incremental learning (PBIL). The resulting algorithms maintain many of the benefits of probabilistic modeling, with far less computational expense. Extensive empirical results are provided; COMIT has been successfully applied to jobshop scheduling, traveling salesman, and knapsack problems. This paper also presents a review of probabilistic modeling for combi natorial optimization.
[ 343, 427, 658, 1580 ]
Train
1,578
5
Title: SFOIL: Stochastic Approach to Inductive Logic Programming Abstract: Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from the local optimization problem. The present paper describes a system named SFOIL, which tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called Markovian neural network. Various tests were performed on benchmark and real-world domains. The results show both advantages and weaknesses of the stochastic approach.
[ 877, 1010, 1061, 1182, 1622, 1651 ]
Test
1,579
2
Title: A Radial Basis Function Approach to Financial Time Series Analysis Abstract: Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from the local optimization problem. The present paper describes a system named SFOIL, which tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called Markovian neural network. Various tests were performed on benchmark and real-world domains. The results show both advantages and weaknesses of the stochastic approach.
[ 1103 ]
Test
1,580
1
Title: MIMIC: Finding Optima by Estimating Probability Densities Abstract: In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space and, in turn, to refine our estimate of the structure. Our technique obtains significant speed gains over other randomized optimization procedures.
[ 689, 1577, 1625 ]
Train
1,581
4
Title: A Study of the Generalization Capabilities of XCS Abstract: We analyze the generalization behavior of the XCS classifier system in environments in which only a few generalizations can be done. Experimental results presented in the paper evidence that the generalization mechanism of XCS can prevent it from learning even simple tasks in such environments. We present a new operator, named Specify, which contributes to the solution of this problem. XCS with the Specify operator, named XCSS, is compared to XCS in terms of performance and generalization capabilities in different types of environments. Experimental results show that XCSS can deal with a greater variety of environments and that it is more robust than XCS with respect to population size.
[ 657, 764, 1447, 1515, 1711 ]
Train
1,582
3
Title: Efficient Learning of Selective Bayesian Network Classifiers Abstract: In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.
[ 632, 1086, 1545, 1908, 1909, 2017, 2677 ]
Test
1,583
2
Title: Evolutionary Design of Neural Architectures: A Preliminary Taxonomy and Guide to Literature Abstract: In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.
[ 900, 2396, 2563 ]
Validation
1,584
0
Title: Towards a Theory of Optimal Similarity Measures Abstract: The effectiveness of a case-based reasoning system is known to depend critically on its similarity measure. However, it is not clear whether there are elusive and esoteric similarity measures which might improve the performance of a case-based reasoner if substituted for the more commonly used measures. This paper therefore deals with the problem of choosing the best similarity measure, in the limited context of instance-based learning of classifications of a discrete example space. We consider both `fixed' similarity measures and `learnt' ones. In the former case, we give a definition of a similarity measure which we believe to be `optimal' w.r.t. the current prior distribution of target concepts and prove its optimality within a restricted class of similarity measures. We then show how this `optimal' similarity measure is instantiated by some specific prior distributions, and conclude that a very simple similarity measure is as good as any other in these cases. In a further section, we then show how our definition leads naturally to a conjecture about the way of learning a similarity measure from the
[ 1164, 1328, 1626, 2037, 2151 ]
Validation
1,585
4
Title: Q-Learning for Bandit Problems Abstract: Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very-large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty years ago by Gittins [Gittins & Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called "Gittins indices." Katehakis and Veinott [Katehakis & Veinott, 1987] have shown that the Gittins index for a process in state i may be interpreted as a particular component of the maximum-value function associated with the "restart-in-i" process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Gittins indices on-line without the aid of a process model; it suggests utilizing process-state-specific Q-learning agents to solve their respective restart-in-state-i subproblems, and includes an example in which the online reinforcement learning approach is applied to a problem of stochastic scheduling, one instance drawn from a wide class of problems that may be formulated as bandit problems.
[ 565, 738, 804 ]
Train
1,586
6
Title: On the Boosting Ability of Top-Down Decision Tree Learning Algorithms Abstract: We analyze the performance of top-down algorithms for decision tree learning, such as those employed by the widely used C4.5 and CART software packages. Our main result is a proof that such algorithms are boosting algorithms. By this we mean that if the functions that label the internal nodes of the decision tree can weakly approximate the unknown target function, then the top-down algorithms we study will amplify this weak advantage to build a tree achieving any desired level of accuracy. The bounds we obtain for this amplification show an interesting dependence on the splitting criterion used by the top-down algorithm. More precisely, if the functions used to label the internal nodes have error 1/2 - γ as approximations to the target function, then for the splitting criteria used by CART and C4.5, trees of size (1/ε)^O(1/(γ²ε²)) and (1/ε)^O(log(1/ε)/γ²) (respectively) suffice to drive the error below ε. Thus (for example), a small constant advantage over random guessing is amplified to any larger constant advantage with trees of constant size. For a new splitting criterion suggested by our analysis, the much stronger γ A preliminary version of this paper appears in Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, pages 459-468, ACM Press, 1996. Authors' addresses: M. Kearns, AT&T Research, 600 Mountain Avenue, Room 2A-423, Murray Hill, New Jersey 07974; electronic mail mkearns@research.att.com. Y. Mansour, Department of Computer Science, Tel Aviv University, Tel Aviv, Israel; electronic mail mansour@math.tau.ac.il. Y. Mansour was supported in part by the Israel Science Foundation, administered by the Israel Academy of Science and Humanities, and by a grant of the Israeli Ministry of Science and Technology.
[ 1388 ]
Train
1,587
5
Title: A counter example to the stronger version of the binary tree hypothesis Abstract: The paper describes a counter example to the hypothesis which states that a greedy decision tree generation algorithm that constructs binary decision trees and branches on a single attribute-value pair rather than on all values of the selected attribute will always lead to a tree with fewer leaves for any given training set. We show also that RELIEFF is less myopic than other impurity functions and that it enables the induction algorithm that generates binary decision trees to reconstruct optimal (the smallest) decision trees in more cases.
[ 1569 ]
Train
1,588
1
Title: Automatic Modularization by Speciation Abstract: Real-world problems are often too difficult to be solved by a single monolithic system. There are many examples of natural and artificial systems which show that a modular approach can reduce the total complexity of the system while solving a difficult problem satisfactorily. The success of modular artificial neural networks in speech and image processing is a typical example. However, designing a modular system is a difficult task. It relies heavily on human experts and prior knowledge about the problem. There is no systematic and automatic way to form a modular system for a problem. This paper proposes a novel evolutionary learning approach to designing a modular system automatically, without human intervention. Our starting point is speciation, using a technique based on fitness sharing. While speciation in genetic algorithms is not new, no effort has been made towards using a speciated population as a complete modular system. We harness the specialized expertise in the species of an entire population, rather than a single individual, by introducing a gating algorithm. We demonstrate our approach to automatic modularization by improving co-evolutionary game learning. Following earlier researchers, we learn to play iterated prisoner's dilemma. We review some problems of earlier co-evolutionary learning, and explain their poor generalization ability and sudden mass extinctions. The generalization ability of our approach is significantly better than past efforts. Using the specialized expertise of the entire speciated population though a gating algorithm, instead of the best individual, is the main contributor to this improvement.
[ 1114, 1117, 2334 ]
Train
1,589
4
Title: Learning to Sense Selectively in Physical Domains Abstract: In this paper we describe an approach to representing, using, and improving sensory skills for physical domains. We present Icarus, an architecture that represents control knowledge in terms of durative states and sequences of such states. The system operates in cycles, activating a state that matches the environmental situation and letting that state control behavior until its conditions fail or until finding another matching state with higher priority. Information about the probability that conditions will remain satisfied minimizes demands on sensing, as does knowledge about the durations of states and their likely successors. Three statistical learning methods let the system gradually reduce sensory load as it gains experience in a domain. We report experimental evaluations of this ability on three simulated physical tasks: flying an aircraft, steering a truck, and balancing a pole. Our experiments include lesion studies that identify the reduction in sensing due to each of the learning mechanisms and others that examine the effect of domain characteristics.
[ 910 ]
Test
1,590
1
Title: The Exploitation of Cooperation in Iterated Prisoner's Dilemma Abstract: We follow Axelrod [2] in using the genetic algorithm to play Iterated Prisoner's Dilemma. Each member of the population (i.e., each strategy) is evaluated by how it performs against the other members of the current population. This creates a dynamic environment in which the algorithm is optimising to a moving target instead of the usual evaluation against some fixed set of strategies, causing an "arms race" of innovation [3]. We conduct two sets of experiments. The first set investigates what conditions evolve the best strategies. The second set studies the robustness of the strategies thus evolved, that is, are the strategies useful only in the round robin of its population or are they effective against a wide variety of opponents? Our results indicate that the population has nearly always converged by about 250 generations, by which time the bias in the population has almost always stabilised at 85%. Our results confirm that cooperation almost always becomes the dominant strategy [1, 2]. We can also confirm that seeding the population with expert strategies is best done in small amounts so as to leave the initial population with plenty of genetic diversity [7]. The lack of robustness in strategies produced in the round robin evaluation is demonstrated by some examples of a population of naïve cooperators being exploited by a defect-first strategy. This causes a sudden but ephemeral decline in the population's average score, but it recovers when less naïve cooperators emerge and do well against the exploiting strategies. This example of runaway evolution is brought back to reality by a suitable mutation, reminiscent of punctuated equilibria [12]. We find that a way to reduce such naïveté is to make the GA population play against an extra,
[ 163, 910, 965 ]
Train
1,591
2
Title: Unsupervised Learning by Convex and Conic Coding Abstract: Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer.
[ 33, 36, 871, 954, 1050, 1701 ]
Train
1,592
2
Title: A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction Abstract: Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. DOLCE consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. DOLCE is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. DOLCE learns to recover the discrete state with maximum a posteriori probability from the noisy state. Simulations show that DOLCE leads to a significant improvement in generalization performance over earlier neural net approaches to FSM induction.
[ 405, 1161, 1176, 1293, 1298, 1734 ]
Train
1,593
3
Title: Boltzmann Chains and Hidden Markov Models Abstract: We propose a statistical mechanical framework for the modeling of discrete time series. Maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights. We call these networks Boltzmann chains and show that they contain hidden Markov models (HMMs) as a special case. Our framework also motivates new architectures that address particular shortcomings of HMMs. We look at two such architectures: parallel chains that model feature sets with disparate time scales, and looped networks that model long-term dependencies between hidden states. For these networks, we show how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing. The necessary computations are done by exact decimation procedures from statistical mechanics.
[ 978, 1116, 1288, 1437, 1461 ]
Validation