Each record below has five fields, written in the order node_id | label | text | neighbors | mask: node_id (int64, 0 to 76.9k), label (int64, 0 to 39), text (a string of 13 to 124k characters holding a paper title and abstract), neighbors (a list of 0 to 3.32k node ids), and mask (a string with 4 distinct values; Train, Validation and Test appear in the rows below).
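Before the records themselves, here is a minimal sketch of how rows in this shape might be consumed: it builds an adjacency map from the neighbors lists and groups node ids by their mask value. The records list, its field names, and the build_graph helper are illustrative assumptions for this preview, not the dataset's actual loader or API; the two sample entries copy values from the first two rows below.

```python
from collections import defaultdict

# Two illustrative records following the schema above (field names are assumptions).
records = [
    {"node_id": 1694, "label": 1,
     "text": "Title: Strategy Adaptation by Competing Subpopulations ...",
     "neighbors": [793, 1455], "mask": "Train"},
    {"node_id": 1695, "label": 0,
     "text": "Title: Analogical Problem Solving by Adaptation of Schemes ...",
     "neighbors": [1354], "mask": "Train"},
]

def build_graph(rows):
    """Collect each node's neighbor list and group node ids by their mask value."""
    adjacency = {}               # node_id -> list of neighbor node_ids
    splits = defaultdict(list)   # mask value -> node_ids in that split
    for row in rows:
        adjacency[row["node_id"]] = list(row["neighbors"])
        splits[row["mask"]].append(row["node_id"])
    return adjacency, splits

adjacency, splits = build_graph(records)
print(adjacency[1694])  # -> [793, 1455]
print(dict(splits))     # -> {'Train': [1694, 1695]}
```

In a fuller pipeline the neighbors lists would typically be flattened into an edge list and the mask column used to select training, validation and test nodes; the sketch above only shows the per-record structure.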
1,694 | 1 | Title: Strategy Adaptation by Competing Subpopulations
Abstract: The breeder genetic algorithm BGA depends on a set of control parameters and genetic operators. In this paper it is shown that strategy adaptation by competing subpopulations makes the BGA more robust and more efficient. Each subpopulation uses a different strategy which competes with other subpopulations. Numerical results are pre sented for a number of test functions. | [
793, 1455 ] | Train |
1,695 | 0 | Title: Analogical Problem Solving by Adaptation of Schemes
Abstract: We present a computational approach to the acquisition of problem schemes by learning by doing and to their application in analogical problem solving. Our work has its background in automatic program construction and relies on the concept of recursive program schemes. In contrast to the usual approach to cognitive modelling where computational models are designed to fit specific data we propose a framework to describe certain empirically established characteristics of human problem solving and learning in a uniform and formally sound way. | [
1354 ] | Train |
1,696 | 1 | Title: The Royal Road for Genetic Algorithms: Fitness Landscapes and GA Performance
Abstract: Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. | [
163, 1114, 1334, 1769, 1771, 1872, 1943, 1971, 2175, 2250, 2330 ] | Train |
1,697 | 2 | Title: Neural Network Exploration Using Optimal Experiment Design
Abstract: We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300. | [
16, 804, 929, 1559, 1599, 1664, 1667, 1683, 1703 ] | Train |
1,698 | 0 | Title: CBET: a Case Base Exploration Tool
Abstract: We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300. | [
66, 430, 686, 1626, 2597 ] | Train |
1,699 | 0 | Title: On the Usefulness of Re-using Diagnostic Solutions
Abstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. | [
799, 1215, 2656 ] | Test |
1,700 | 2 | Title: Growing a Hypercubical Output Space in a Self-Organizing Feature Map
Abstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. | [
687, 745, 1157 ] | Validation |
1,701 | 2 | Title: Pattern analysis and synthesis in attractor neural networks
Abstract: The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons. One popular approach to sensory processing is based on generative models, which assume that sensory input patterns are synthesized from some underlying hidden variables. For example, the sounds of speech can be synthesized from a sequence of phonemes, and images of a face can be synthesized from pose and lighting variables. Hidden variables are useful because they constitute a simpler representation of the variables that are visible in the sensory input. Using a generative model for sensory processing requires a method of pattern analysis. Given a sensory input pattern, analysis is the recovery of the hidden variables from which it was synthesized. In other words, analysis and synthesis are inverses of each other. There are a number of approaches to pattern analysis. In analysis-by-synthesis, the synthetic model is embedded inside a negative feedback loop[1]. Another approach is to construct a separate analysis model[2]. This paper explores a third approach, in which visible-hidden pairs are embedded as attractive fixed points, or attractors, in the state space of a recurrent neural network. The attractors can be regarded as memories stored in the network, and analysis and synthesis as forms of pattern completion by recall of a memory. The approach is illustrated with linear and nonlinear network architectures. In both networks, the synthetic model is linear, as in principal | [
33, 36, 832, 1591 ] | Test |
1,702 | 6 | Title: Decision Graphs An Extension of Decision Trees
Abstract: Technical Report No: 92/173 (C) Jonathan Oliver 1992 Shortened appeared in AI and Statistics 1993[14] Abstract: In this paper, we examine Decision Graphs, a generalization of decision trees. We present an inference scheme to construct decision graphs using the Minimum Message Length Principle. Empirical tests demonstrate that this scheme compares favourably with other decision tree inference schemes. This work provides a metric for comparing the relative merit of the decision tree and decision graph formalisms for a particular domain. | [
1161, 1199 ] | Train |
1,703 | 4 | Title: REINFORCEMENT DRIVEN INFORMATION ACQUISITION IN NON-DETERMINISTIC ENVIRONMENTS
Abstract: For an agent living in a non-deterministic Markov environment (NME), what is, in theory, the fastest way of acquiring information about its statistical properties? The answer is: to design "optimal" sequences of "experiments" by performing action sequences that maximize expected information gain. This notion is implemented by combining concepts from information theory and reinforcement learning. Experiments show that the resulting method, reinforcement driven information acquisition, can explore certain NMEs much faster than conventional random exploration. | [
740, 1559, 1697 ] | Train |
1,704 | 2 | Title: LBG-U method for vector quantization an improvement over LBG inspired from neural networks
Abstract: Internal Report 97-01 | [
687, 745, 1157 ] | Train |
1,705 | 6 | Title: Learning from Incomplete Boundary Queries Using Split Graphs and Hypergraphs (Extended Abstract)
Abstract: We consider learnability with membership queries in the presence of incomplete information. In the incomplete boundary query model introduced by Blum et al. [7], it is assumed that membership queries on instances near the boundary of the target concept may receive a "don't know" answer. We show that zero-one threshold functions are efficiently learnable in this model. The learning algorithm uses split graphs when the boundary region has radius 1, and their generalization to split hypergraphs (for which we give a split-finding algorithm) when the boundary region has constant radius greater than 1. We use a notion of indistinguishability of concepts that is appropriate for this model. | [
459, 1364, 1469, 2356 ] | Test |
1,706 | 0 | Title: A Performance Model for Knowledge-based Systems
Abstract: Most techniques for verification and validation are directed at functional properties of programs. However, other properties of programs are also essential. This paper describes a model for the average computing time of a KADS knowledge-based system based on its structure. An example taken from an existing knowledge-based system is used to demonstrate the use of the cost-model in designing the system. | [
799, 1385, 1635 ] | Validation |
1,707 | 0 | Title: Supporting Combined Human and Machine Planning: The Prodigy 4.0 User Interface Version 2.0*
Abstract: Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. * This research is sponsored as part of the DARPA/RL Knowledge Based Planning and Scheduling Initiative under grant number F30602-95-1-0018. A short version of this document appeared as Cox, M. T., & Veloso, M. M. (1997). Supporting combined human and machine planning: An interface for planning by analogical reasoning. In D. B. Leake & E. Plaza (Eds.), Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning (pp. 531-540). Berlin: Springer-Verlag. | [
824, 825, 1215 ] | Test |
1,708 | 6 | Title: A Simpler Look at Consistency
Abstract: One of the major goals of most early concept learners was to find hypotheses that were perfectly consistent with the training data. It was believed that this goal would indirectly achieve a high degree of predictive accuracy on a set of test data. Later research has partially disproved this belief. However, the issue of consistency has not yet been resolved completely. We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. | [
1333 ] | Train |
1,709 | 2 | Title: NEURAL NETWORK APPROACH TO BLIND SEPARATION AND ENHANCEMENT OF IMAGES
Abstract: In this contribution we propose a new solution for the problem of blind separation of sources (for one dimensional signals and images) in the case that not only the waveform of sources is unknown, but also their number. For this purpose multi-layer neural networks with associated adaptive learning algorithms are developed. The primary source signals can have any non-Gaussian distribution, i.e. they can be sub-Gaussian and/or super-Gaussian. Computer experiments are presented which demonstrate the validity and high performance of the proposed approach. | [
59, 570, 839, 1520 ] | Train |
1,710 | 2 | Title: A Non-linear Information Maximisation Algorithm that Performs Blind Separation.
Abstract: A new learning algorithm is derived which performs online stochastic gradient ascent in the mutual information between outputs and inputs of a network. In the absence of a priori knowledge about the `signal' and `noise' components of the input, propagation of information depends on calibrating network non-linearities to the detailed higher-order moments of the input density functions. By incidentally minimising mutual information between outputs, as well as maximising their individual entropies, the network `fac-torises' the input into independent components. As an example application, we have achieved near-perfect separation of ten digitally mixed speech signals. Our simulations lead us to believe that our network performs better at blind separation than the Herault-Jutten network, reflecting the fact that it is derived rigorously from the mutual information objective. | [
576, 1450, 1656 ] | Train |
1,711 | 4 | Title: Environments with Classifier Systems (Experiments on Adding Memory to XCS)
Abstract: Pier Luca Lanzi Technical Report N. 97.45 October 17 th , 1997 | [
1515, 1581 ] | Validation |
1,712 | 6 | Title: An Efficient Extension to Mixture Techniques for Prediction and Decision Trees
Abstract: We present a method for maintaining mixtures of prunings of a prediction or decision tree that extends the "node-based" prunings of [Bun90, WST95, HS97] to the larger class of edge-based prunings. The method includes an efficient online weight allocation algorithm that can be used for prediction, compression and classification. Although the set of edge-based prunings of a given tree is much larger than that of node-based prunings, our algorithm has similar space and time complexity to that of previous mixture algorithms for trees. Using the general on-line framework of Freund and Schapire [FS97], we prove that our algorithm maintains correctly the mixture weights for edge-based prunings with any bounded loss function. We also give a similar algorithm for the logarithmic loss function with a corresponding weight allocation algorithm. Finally, we describe experiments comparing node-based and edge-based mixture models for estimating the probability of the next word in English text, which show the ad vantages of edge-based models. | [
453, 569, 1025, 1290 ] | Train |
1,713 | 3 | Title: A simulation approach to convergence rates for Markov chain Monte Carlo algorithms
Abstract: Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis-Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. Rosenthal (1995b) presents a method to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in Rosenthal's theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it can not provide the guarantees offered by analytical proof. Acknowledgements. We thank Brad Carlin for assistance and encouragement. | [
41, 115, 138, 416, 468, 889, 892, 1716, 1870, 1978, 1982, 1991, 1992, 2002, 2153, 2510 ] | Train |
1,714 | 3 | Title: Uncertain Inferences and Uncertain Conclusions
Abstract: Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty in inference can sometimes outweigh the loss of flexibility it entails. | [
1458 ] | Validation |
1,715 | 1 | Title: A Sampling-Based Heuristic for Tree Search Applied to Grammar Induction
Abstract: In the field of Operation Research and Artificial Intelligence, several stochastic search algorithms have been designed based on the theory of global random search (Zhigljavsky 1991). Basically, those techniques iteratively sample the search space with respect to a probability distribution which is updated according to the result of previous samples and some predefined strategy. Genetic Algorithms (GAs) (Goldberg 1989) or Greedy Randomized Adaptive Search Procedures (GRASP) (Feo & Resende 1995) are two particular instances of this paradigm. In this paper, we present SAGE, a search algorithm based on the same fundamental mechanisms as those techniques. However, it addresses a class of problems for which it is difficult to design transformation operators to perform local search because of intrinsic constraints in the definition of the problem itself. For those problems, a procedural approach is the natural way to construct solutions, resulting in a state space represented as a tree or a DAG. The aim of this paper is to describe the underlying heuristics used by SAGE to address problems belonging to that class. The performance of SAGE is analyzed on the problem of grammar induction and its successful application to problems from the recent Abbadingo DFA learning competition is presented. | [
163, 793, 1386, 1734 ] | Train |
1,716 | 3 | Title: Analysis of the Gibbs sampler for a model related to James-Stein estimators
Abstract: Summary. We analyze a hierarchical Bayes model which is related to the usual empirical Bayes formulation of James-Stein estimators. We consider running a Gibbs sampler on this model. Using previous results about convergence rates of Markov chains, we provide rigorous, numerical, reasonable bounds on the running time of the Gibbs sampler, for a suitable range of prior distributions. We apply these results to baseball data from Efron and Morris (1975). For a different range of prior distributions, we prove that the Gibbs sampler will fail to converge, and use this information to prove that in this case the associated posterior distribution is non-normalizable. Acknowledgements. I am very grateful to Jun Liu for suggesting this project, and to Neal Madras for suggesting the use of the Submartingale Convergence Theorem herein. I thank Kate Cowles and Richard Tweedie for helpful conversations, and thank the referees for useful comments. | [
41, 138, 892, 1713, 1982, 1991, 2153 ] | Train |
1,717 | 1 | Title: 3 Representation Issues in Neighborhood Search and Evolutionary Algorithms
Abstract: Evolutionary Algorithms are often presented as general purpose search methods. Yet, we also know that no search method is better than another over all possible problems and that in fact there is often a good deal of problem specific information involved in the choice of problem representation and search operators. In this paper we explore some very general properties of representations as they relate to neighborhood search methods. In particular, we looked at the expected number of local optima under a neighborhood search operator when averaged overall possible representations. The number of local optima under a neighborhood search operator for standard Binary and standard binary reflected Gray codes is developed and explored as one measure of problem complexity. We also relate number of local optima to another metric, , designed to provide one measure of complexity with respect to a simple genetic algorithm. Choosing a good representation is a vital component of solving any search problem. However, choosing a good representation for a problem is as difficult as choosing a good search algorithm for a problem. Wolpert and Macready's (1995) No Free Lunch (NFL) theorem proves that no search algorithm is better than any other over all possible discrete functions. Radcliffe and Surry (1995) extend these notions to also cover the idea that all representations are equivalent when their behavior is considered on average over all possible functions. To understand these results, we first outline some of the simple assumptions behind this theorem. First, assume the optimization problem is discrete; this describes all combinatorial optimization problems-and really all optimization problems being solved on computers since computers have finite precision. Second, we ignore the fact that we can resample points in the space. The "No Free Lunch" result can be stated as follows: | [
163, 941, 1380, 1441 ] | Validation |
1,718 | 2 | Title: PREDICTING SUNSPOTS AND EXCHANGE RATES WITH CONNECTIONIST NETWORKS
Abstract: We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. | [
28, 157, 201, 810, 1079, 1373, 1867, 2138, 2413, 2414, 2582 ] | Test |
1,719 | 1 | Title: An Analysis of the MAX Problem in Genetic Programming hold only in some cases, in
Abstract: We present a detailed analysis of the evolution of genetic programming (GP) populations using the problem of finding a program which returns the maximum possible value for a given terminal and function set and a depth limit on the program tree (known as the MAX problem). We confirm the basic message of [ Gathercole and Ross, 1996 ] that crossover together with program size restrictions can be responsible for premature convergence to a suboptimal solution. We show that this can happen even when the population retains a high level of variety and show that in many cases evolution from the sub-optimal solution to the solution is possible if sufficient time is allowed. In both cases theoretical models are presented and compared with actual runs. | [
1216, 1257, 1911, 2175, 2261, 2363 ] | Train |
1,720 | 5 | Title: Least Generalizations and Greatest Specializations of Sets of Clauses
Abstract: The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one non-tautologous function-free clause (among other, not necessarily function-free clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set that is to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of existence and non-existence of greatest specializations in each of the six ordered languages. | [
849 ] | Test |
1,721 | 5 | Title: Machine learning in blood group determination of Danish Jersey cattle (causal probabilistic network). Dobljene mreze
Abstract: In the following paper we approach the problem with different machine learning algorithms and show that they can be compared with causal probabilistic networks in terms of performance and comprehensibility. | [
1569 ] | Validation |
1,722 | 3 | Title: BAYESIAN TIME SERIES: Models and Computations for the Analysis of Time Series in the Physical Sciences
Abstract: This articles discusses developments in Bayesian time series mod-elling and analysis relevant in studies of time series in the physical and engineering sciences. With illustrations and references, we discuss: Bayesian inference and computation in various state-space models, with examples in analysing quasi-periodic series; isolation and modelling of various components of error in time series; decompositions of time series into significant latent subseries; nonlinear time series models based on mixtures of auto-regressions; problems with errors and uncertainties in the timing of observations; and the development of non-linear models based on stochastic deformations of time scales. | [
99, 1619, 1723 ] | Test |
1,723 | 3 | Title: Modelling and robustness issues in Bayesian time series analysis
Abstract: Some areas of recent development and current interest in time series are noted, with some discussion of Bayesian modelling efforts motivated by substantial practical problems. The areas include non-linear auto-regressive time series modelling, measurement error structures in state-space modelling of time series, and issues of timing uncertainties and time deformations. Some discussion of the needs and opportunities for work on non/semi-parametric models and robustness issues is given in each context. | [
1619, 1722 ] | Test |
1,724 | 2 | Title: Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics
Abstract: We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training proceedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction. | [
1079, 1492, 1508, 1538 ] | Train |
1,725 | 2 | Title: DIFFERENTIALLY GENERATED NEURAL NETWORK CLASSIFIERS ARE EFFICIENT
Abstract: Differential learning for statistical pattern classification is described in [5]; it is based on the classification figure-of-merit (CFM) objective function described in [9, 5]. We prove that differential learning is asymptotically efficient, guaranteeing the best generalization allowed by the choice of hypothesis class (see below) as the training sample size grows large, while requiring the least classifier complexity necessary for Bayesian (i.e., minimum probability-of-error) discrimination. Moreover, differential learning almost always guarantees the best generalization allowed by the choice of hypothesis class for small training sample sizes. | [
921, 1265 ] | Train |
1,726 | 5 | Title: Prognosing the Survival Time of the Patients with the Anaplastic Thyroid Carcinoma with Machine Learning
Abstract: Anaplastic thyroid carcinoma is a rare but very aggressive tumor. Many factors that might influence the survival of patients have been suggested. The aim of our study was to determine which of the factors, known at the time of admission to the hospital, might predict survival of patients with anaplastic thyroid carcinoma. Our aim was also to assess the relative importance of the factors and to identify potentially useful decision and regression trees generated by machine learning algorithms. Our study included 126 patients (90 females and 36 males; mean age was 66.7 years) with anaplastic thyroid carcinoma treated at the Institute of Oncology Ljubljana from 1972 to 1992. Patients were classified into categories according to 11 attributes: sex, age, history, physical findings, extent of disease on admission, and tumor morphology. In this paper we compare the machine learning approach with the previous statistical evaluations on the problem (uni-variate and multivariate analysis) and show that it can provide more thorough analysis and improve understanding of the data. | [
314, 1182, 1569, 1679, 1684 ] | Train |
1,727 | 6 | Title: Machine Learning, 22(1/2/3):95-121, 1996. On the Worst-case Analysis of Temporal-difference Learning Algorithms
Abstract: We study the behavior of a family of learning algorithms based on Sutton's method of temporal differences. In our on-line learning framework, learning takes place in a sequence of trials, and the goal of the learning algorithm is to estimate a discounted sum of all the reinforcements that will be received in the future. In this setting, we are able to prove general upper bounds on the performance of a slightly modified version of Sutton's so-called TD() algorithm. These bounds are stated in terms of the performance of the best linear predictor on the given training sequence, and are proved without making any statistical assumptions of any kind about the process producing the learner's observed training sequence. We also prove lower bounds on the performance of any algorithm for this learning problem, and give a similar analysis of the closely related problem of learning to predict in a model in which the learner must produce predictions for a whole batch of observations before receiving reinforcement. | [
565, 738, 1376 ] | Validation |
1,728 | 1 | Title: Dynamic Parameter Encoding for Genetic Algorithms
Abstract: The common use of static binary place-value codes for real-valued parameters of the phenotype in Holland's genetic algorithm (GA) forces either the sacrifice of representational precision for efficiency of search or vice versa. Dynamic Parameter Encoding (DPE) is a mechanism that avoids this dilemma by using convergence statistics derived from the GA population to adaptively control the mapping from fixed-length binary genes to real values. DPE is shown to be empirically effective and amenable to analysis; we explore the problem of premature convergence in GAs through two convergence models. | [
129, 163, 168, 1110, 1249, 1474, 1536, 1775 ] | Train |
1,729 | 1 | Title: Adapting Crossover in a Genetic Algorithm
Abstract: Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Despite theoretical analysis, however, it appears difficult to predict when a particular crossover form will be optimal for a given problem. This paper describes an adaptive genetic algorithm that decides, as it runs, which form is optimal. | [
163, 1016, 1110 ] | Train |
1,730 | 1 | Title: Evolving Edge Detectors with Genetic Programming edge detectors for 1-D signals and image profiles. The
Abstract: images. We apply genetic programming techniques to the production of high | [
1034, 1533 ] | Train |
1,731 | 3 | Title: A Gibbs Sampling Approach to Cointegration
Abstract: This paper reviews the application of Gibbs sampling to a cointegrated VAR system. Aggregate imports and import prices for Belgium are modelled using two cointegrating relations. Gibbs sampling techniques are used to estimate from a Bayesian perspective the cointegrating relations and their weights in the VAR system. Extensive use of spectral analysis is made to get insight into convergence issues. | [
888 ] | Test |
1,732 | 2 | Title: Improving the Performance of Radial Basis Function Networks by Learning Center Locations
Abstract: This paper reviews the application of Gibbs sampling to a cointegrated VAR system. Aggregate imports and import prices for Belgium are modelled using two cointegrating relations. Gibbs sampling techniques are used to estimate from a Bayesian perspective the cointegrating relations and their weights in the VAR system. Extensive use of spectral analysis is made to get insight into convergence issues. | [
611, 853, 1493, 1644, 2225, 2423 ] | Train |
1,733 | 3 | Title: Decision Analysis by Augmented Probability Simulation
Abstract: We provide a generic Monte Carlo method to find the alternative of maximum expected utility in a decision analysis. We define an artificial distribution on the product space of alternatives and states, and show that the optimal alternative is the mode of the implied marginal distribution on the alternatives. After drawing a sample from the artificial distribution, we may use exploratory data analysis tools to approximately identify the optimal alternative. We illustrate our method for some important types of influence diagrams. (Decision Analysis, Influence Diagrams, Markov chain Monte Carlo, Simulation) | [
41, 904 ] | Train |
1,734 | 1 | Title: A Stochastic Search Approach to Grammar Induction
Abstract: This paper describes a new sampling-based heuristic for tree search named SAGE and presents an analysis of its performance on the problem of grammar induction. This last work has been inspired by the Abbadingo DFA learning competition [14] which took place between Mars and November 1997. SAGE ended up as one of the two winners in that competition. The second winning algorithm, first proposed by Rod-ney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared to SAGE. | [
163, 793, 1249, 1592, 1715 ] | Test |
1,735 | 0 | Title: Supporting Conversational Case-Based Reasoning in an Integrated Reasoning Framework Conversational Case-Based Reasoning
Abstract: Conversational case-based reasoning (CCBR) has been successfully used to assist in case retrieval tasks. However, behavioral limitations of CCBR motivate the search for integrations with other reasoning approaches. This paper briefly describes our group's ongoing efforts towards enhancing the inferencing behaviors of a conversational case-based reasoning development tool named NaCoDAE. In particular, we focus on integrating NaCoDAE with machine learning, model-based reasoning, and generative planning modules. This paper defines CCBR, briefly summarizes the integrations, and explains how they enhance the overall system. Our research focuses on enhancing the performance of conversational case-based reasoning (CCBR) systems (Aha & Breslow, 1997). CCBR is a form of case-based reasoning where users initiate problem solving conversations by entering an initial problem description in natural language text. This text is assumed to be a partial rather than a complete problem description. The CCBR system then assists in eliciting refinements of this description and in suggesting solutions. Its primary purpose is to provide a focus of attention for the user so as to quickly provide a solution(s) for their problem. Figure 1 summarizes the CCBR problem solving cycle. Cases in a CCBR library have three components: | [
983, 1002, 1626 ] | Train |
1,736 | 1 | Title: Evolving Cooperation Strategies
Abstract: The identification, design, and implementation of strategies for cooperation is a central research issue in the field of Distributed Artificial Intelligence (DAI). We propose a novel approach to the construction of cooperation strategies for a group of problem solvers based on the Genetic Programming (GP) paradigm. GPs are a class of adaptive algorithms used to evolve solution structures that optimize a given evaluation criterion. Our approach is based on designing a representation for cooperation strategies that can be manipulated by GPs. We present results from experiments in the predator-prey domain, which has been extensively studied as a easy-to-describe but difficult-to-solve cooperation problem domain. The key aspect of our approach is the minimal reliance on domain knowledge and human intervention in the construction of good cooperation strategies. Promising comparison results with prior systems lend credence to the viability of this ap proach. | [
1178, 1690, 1737 ] | Train |
1,737 | 1 | Title: A Simulation of Adaptive Agents in a Hostile Environment
Abstract: In this paper we use the genetic programming technique to evolve programs to control an autonomous agent capable of learning how to survive in a hostile environment. In order to facilitate this goal, agents are run through random environment configurations. Randomly generated programs, which control the interaction of the agent with its environment, are recombined to form better programs. Each generation of the population of agents is placed into the Simulator with the ultimate goal of producing an agent capable of surviving any environment. The environment that an agent is presented consists of other agents, mines, and energy. The goal of this research is to construct a program which when executed will allow an agent (or agents) to correctly sense, and mark, the presence of items (energy and mines) in any environment. The Simulator determines the raw fitness of each agent by interpreting the associated program. General programs are evolved to solve this problem. Different environmental setups are presented to show the generality of the solution. These environments include one agent in a fixed environment, one agent in a fluctuating environment, and multiple agents in a fluctuating environment cooperating together. The genetic programming technique was extremely successful. The average fitness per generation in all three environments tested showed steady improvement. Programs were successfully generated that enabled an agent to handle any possible environment. | [
380, 415, 1178, 1736 ] | Train |
1,738 | 1 | Title: Evolving nonTrivial Behaviors on Real Robots: an Autonomous Robot that Picks up Objects
Abstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. | [
219, 538, 1264, 1689 ] | Train |
1,739 | 6 | Title: Model Selection based on Minimum Description Length
Abstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. | [
641, 848 ] | Train |
1,740 | 1 | Title: As mentioned in the introduction, an encod-ing/crossover pair makes a spectrum of geographical linkages. A
Abstract: It is open as to which chromosomal dimension performs best. Although higher-dimensional encodings (whether real or imaginary) can preserve more geographical gene linkages, we suspect that too high a dimension would not perform desirably. We are studying the question of which dimension of encoding is best for a given instance. It is likely that the optimal dimension is somehow dependent on the chromosome size and the input graph topology; interactions with the flexibility of crossover are yet unknown. The interaction of these considerations with the number of cuts used in the crossover is also an open issue. * In relocating genes onto a multi-dimensional chromosome, the simplest way is via a sequential assignment such as row-major order. Section 4 showed that performance improves when a DFS-row-major reembedding is used for two- and three-dimensional encodings. We suspect that this phenomenon will be consistent for higher-dimensional cases, and hope to perform more detailed investigations in the future. Although DFS reordering proved to be helpful for both linear encodings [19] and multi-dimensional encodings, we do not believe DFS-row-major reembedding is a good approach for the multi-dimensional cases since the row-major embedding is so simplistic. We are considering alternative 2-dimensional and 3-dimensional reembeddings which will hopefully provide further improvement. [4] T. N. Bui and B. R. Moon. Hyperplane synthesis for genetic algorithms. In Fifth International Conference on Genetic Algorithms, pages 102-109, July 1993. [5] T. N. Bui and B. R. Moon. Analyzing hyperplane synthesis in genetic algorithms using clustered schemata. In International Conference on Evolutionary Computation, Oct. 1994. Lecture Notes in Computer Science, 866:108-118, Springer-Verlag. | [
1136, 1305 ] | Test |
1,741 | 4 | Title: Reinforcement Learning Algorithm for Partially Observable Markov Decision Problems
Abstract: Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, neither generally the algorithms nor the analyses continue to be usable. We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous|even for a discrete action space|our algorithm is computationally tractable. | [
492, 565, 601, 738, 1841 ] | Validation |
1,742 | 3 | Title: On MCMC Sampling in Hierarchical Longitudinal Models SUMMARY
Abstract: Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples. | [
2456 ] | Validation |
1,743 | 1 | Title: Feature Selection Methods: Genetic Algorithms vs. Greedy-like Search
Abstract: This paper presents a comparison between two feature selection methods, the Importance Score (IS) which is based on a greedy-like search and a genetic algorithm-based (GA) method, in order to better understand their strengths and limitations and their area of application. The results of our experiments show a very strong relation between the nature of the data and the behavior of both systems. The Importance Score method is more efficient when dealing with little noise and small number of interacting features, while the genetic algorithms can provide a more robust solution at the expense of increased computational effort. Keywords. feature selection, machine learning, genetic algorithms, search. | [
2192 ] | Validation |
1,744 | 2 | Title: Simple Synchrony Networks: Learning Generalisations across Syntactic Constituents
Abstract: This paper describes a training algorithm for Simple Synchrony Networks (SSNs), and reports on experiments in language learning using a recursive grammar. The SSN is a new connectionist architecture combining a technique for learning about patterns across time, Simple Recurrent Networks (SRNs), with Temporal Synchrony Variable Binding (TSVB). The use of TSVB means the SSN can learn about entities in the training set, and generalise this information to entities in the test set. In the experiments, the network is trained on sentences with up to one embedded clause, and with some words restricted to certain classes of constituent. During testing, the network generalises information learned to sentences with up to three embedded clauses, and with words appearing in any constituent. These results demonstrate that SSNs learn generalisations across syntactic constituents. | [
2041 ] | Test |
1,745 | 1 | Title: Learning Recursive Sequences via Evolution of Machine-Language Programs
Abstract: We use directed search techniques in the space of computer programs to learn recursive sequences of positive integers. Specifically, the integer sequences of squares, x 2 ; cubes, x 3 ; factorial, x!; and Fibonacci numbers are studied. Given a small finite prefix of a sequence, we show that three directed searches|machine-language genetic programming with crossover, exhaustive iterative hill climbing, and a hybrid (crossover and hill climbing)|can automatically discover programs that exactly reproduce the finite target prefix and, moreover, that correctly produce the remaining sequence up to the underlying machine's precision. Our machine-language representation is generic|it contains instructions for arithmetic, register manipulation and comparison, and control flow. We also introduce an output instruction that allows variable-length sequences as result values. Importantly, this representation does not contain recursive operators; recursion, when needed, is automatically synthesized from primitive instructions. For a fixed set of search parameters (e.g., instruction set, program size, fitness criteria), we compare the efficiencies of the three directed search techniques on the four sequence problems. For this parameter set, an evolutionary-based search always outperforms exhaustive hill climbing as well as undirected random search. Since only the prefix of the target sequence is variable in our experiments, we posit that this approach to sequence induction is potentially quite general. | [
2175, 2641 ] | Validation |
1,746 | 2 | Title: DISCRETE-TIME TRANSITIVITY AND ACCESSIBILITY: ANALYTIC SYSTEMS 1
Abstract: This paper studies the problem, and establishes the desired implication for analytic systems in several cases: (i) compact state space, (ii) under a Poisson stability condition, and (iii) in a generic sense. In addition, the paper studies accessibility properties of the "control sets" recently introduced in the context of dynamical systems studies. Finally, various examples and counterexamples are provided relating the various Lie algebras introduced in past work. | [
1948 ] | Train |
1,747 | 3 | Title: FROM BAYESIAN NETWORKS TO CAUSAL NETWORKS
Abstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information for decision analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the theory of Bayesian networks and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. As an example, we show how the effect of smoking on lung cancer can be quantified from non-experimental data, using a minimal set of qualitative assumptions. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. fl Portions of this paper were presented at the 49th Session of the International Statistical Institute, Florence, Italy, August 25 - September 3, 1993. | [
419, 1324, 1326, 1527, 2160, 2434, 2559 ] | Test |
1,748 | 6 | Title: A Quantum Computational Learning Algorithm
Abstract: An interesting classical result due to Jackson allows polynomial-time learning of the function class DNF using membership queries. Since in most practical learning situations access to a membership oracle is unrealistic, this paper explores the possibility that quantum computation might allow a learning algorithm for DNF that relies only on example queries. A natural extension of Fourier-based learning into the quantum domain is presented. The algorithm requires only an example oracle, and it runs in O( 2 n ) time, a result that appears to be classically impossible. The algorithm is unique among quantum algorithms in that it does not assume a priori knowledge of a function and does not operate on a superposition that includes all possible basis states. | [
456, 2182 ] | Train |
1,749 | 3 | Title: Algebraic Techniques for Efficient Inference in Bayesian Networks
Abstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. These algorithms use graph-theoretic techniques to analyze and exploit network topology. In this paper, we examine the problem of efficient probabilistic inference in a belief network as a combinatorial optimization problem, that of finding an optimal factoring given an algebraic expression over a set of probability distributions. We define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and present simple, easily implemented algorithms with excellent performance. We also show how use of an algebraic perspective permits significant extension to the belief net representation. | [
2164 ] | Validation |
1,750 | 5 | Title: Modeling Superscalar Processors via Statistical Simulation
Abstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. These algorithms use graph-theoretic techniques to analyze and exploit network topology. In this paper, we examine the problem of efficient probabilistic inference in a belief network as a combinatorial optimization problem, that of finding an optimal factoring given an algebraic expression over a set of probability distributions. We define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and present simple, easily implemented algorithms with excellent performance. We also show how use of an algebraic perspective permits significant extension to the belief net representation. | [
2106
] | Test |
1,751 | 2 | Title: Global self organization of all known protein sequences reveals inherent biological signatures
Abstract: A global classification of all currently known protein sequences is performed. Every protein sequence is partitioned into segments of 50 amino acids and a dynamic-programming distance is calculated between each pair of segments. This space of segments is first embedded into Euclidean space with small metric distortion. A novel self-organized cross-validated clustering algorithm is then applied to the embedded space with Euclidean distances. The resulting hierarchical tree of clusters offers a new representation of protein sequences and families, which compares favorably with the most updated classifications based on functional and structural protein data. Motifs and domains such as the Zinc Finger, EF hand, Homeobox, EGF-like and others are automatically correctly identified. A novel representation of protein families is introduced, from which functional biological kinship of protein families can be deduced, as demonstrated for the transporters family. | [
2691
] | Train |
1,752 | 0 | Title: EXPLANATORY INTERFACE IN INTERACTIVE DESIGN ENVIRONMENTS
Abstract: Explanation is an important issue in building computer-based interactive design environments in which a human designer and a knowledge system may cooperatively solve a design problem. We consider the two related problems of explaining the system's reasoning and the design generated by the system. In particular, we analyze the content of explanations of design reasoning and design solutions in the domain of physical devices. We describe two complementary languages: task-method-knowledge models for explaining design reasoning, and structure-behavior-function models for explaining device designs. INTERACTIVE KRITIK is a computer program that uses these representations to visually illustrate the system's reasoning and the result of a design episode. The explanation of design reasoning in INTERACTIVE KRITIK is in the context of the evolving design solution, and, similarly, the explanation of the design solution is in the context of the design reasoning. | [
2706
] | Validation |
1,753 | 2 | Title: Object Oriented Design of a BP Neural Network Simulator and Implementation on the Connection Machine (CM-5)
Abstract: In this paper we describe the implementation of the backpropagation algorithm by means of an object oriented library (ARCH). The use of this library relieves the user from the details of a specific parallel programming paradigm and at the same time allows a greater portability of the generated code. To provide a comparison with existing solutions, we survey the most relevant implementations of the algorithm proposed so far in the literature, both on dedicated and general purpose computers. Extensive experimental results show that the use of the library does not hurt the performance of our simulator; on the contrary, our implementation on a Connection Machine (CM-5) is comparable with the fastest in its category. | [
2268
] | Train |
1,754 | 4 | Title: The Neural Network House: An Overview
Abstract: Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage. However, home residents who are ignorant of the physics of energy utilization cannot design environmental control strategies, but neither can energy management experts who are ignorant of the behavior patterns of the inhabitants. Adaptive control seems the only alternative. We have begun building an adaptive control system that can infer appropriate rules of operation for home comfort systems based on the lifestyle of the inhabitants and energy conservation goals. Recent research has demonstrated the potential of neural networks for intelligent control. We are constructing a prototype control system in an actual residence using neural network reinforcement learning and prediction techniques. The residence is equipped with sensors to provide information about environmental conditions (e.g., temperatures, ambient lighting level, sound and motion in each room) and actuators to control the gas furnace, electric space heaters, gas hot water heater, lighting, motorized blinds, ceiling fans, and dampers in the heating ducts. This paper presents an overview of the project as it now stands. | [
1867,
1869
] | Validation |
1,755 | 2 | Title: Learning Controllers from Examples a motivation for searching alternative, empirical techniques for generating controllers.
Abstract: Today there is a great interest in discovering methods that allow a faster design and development of real-time control software. Control theory helps when linear controllers have to be developed, but it does not support the generation of non-linear controllers, which are needed in many cases (such as in compliant motion control). In this paper, it is discussed how Machine Learning has been applied to this problem, using Fuzzy Controllers, Radial Basis Function, and Locally Receptive Field Function Approximators. Three integrated learning algorithms, two of which are original, are described and then tried on two experimental test cases. The first test case is provided by an industrial robot KUKA IR-361 engaged in the "peg-into-hole" task, while the second is a classical prediction task on the Mackey-Glass chaotic series. From the experimental comparison, it appears that both Fuzzy Controllers and RBFNs synthesised from examples are excellent approximators, and that they can be even more accurate than MLPs. | [
611,
2432
] | Validation |
1,756 | 1 | Title: Soft Computing: the Convergence of Emerging Reasoning Technologies
Abstract: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provides us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. | [
168,
745,
1663,
2603,
2613
] | Train |
1,757 | 3 | Title: A case study in dynamic belief networks: monitoring walking, fall prediction and detection
Abstract: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. | [
1268,
1842,
2221,
2341
] | Train |
1,758 | 4 | Title: Simultaneous Learning of Control Laws and Local Environment Representations for Intelligent Navigation Robots
Abstract: Two issues of an intelligent navigation robot have been addressed in this work. First is the robot's ability to learn a representation of the local environment and use this representation to identify which local environment it is in. This is done by first extracting features from the sensors which are more informative than just distances of obstacles in various directions. Using these features a reduced ring representation (RRR) of the local environment is derived. As the robot navigates, it learns the RRR signatures of all the new environment types it encounters. For the purpose of identification, a ring matching criterion is proposed where the robot tries to match the RRR from the sensory input to one of the RRRs in its library. The second issue addressed is that of learning hill climbing control laws in the local environments. Unlike conventional neuro-controllers, a reinforcement learning framework, where the robot first learns a model of the environment and then learns the control law in terms of a neural network, is proposed here. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken. Three key results shown in this work are that (1) the robot is able to build its library of RRR signatures perfectly even with significant sensor noise for eight different local environments, (2) it was able to identify its local environment with an accuracy of more than 96%, once the library is built, and (3) the robot was able to learn adequate hill climbing control laws which take it to the distinctive state of the local environment for five different environment types. | [
2412
] | Train |
1,759 | 3 | Title: Belief Maintenance in Bayesian Networks
Abstract: Two issues of an intelligent navigation robot have been addressed in this work. First is the robot's ability to learn a representation of the local environment and use this representation to identify which local environment it is in. This is done by first extracting features from the sensors which are more informative than just distances of obstacles in various directions. Using these features a reduced ring representation (RRR) of the local environment is derived. As the robot navigates, it learns the RRR signatures of all the new environment types it encounters. For purpose of identification, a ring matching criteria is proposed where the robot tries to match the RRR from the sensory input to one of the RRRs in its library. The second issue addressed is that of learning hill climbing control laws in the local environments. Unlike conventional neuro-controllers, a reinforcement learning framework, where the robot first learns a model of the environment and then learns the control law in terms of a neural network is proposed here. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken. Three key results shown in this work are that (1) The robot is able to build its library of RRR signatures perfectly even with significant sensor noise for eight different local environ-mets, (2) It was able to identify its local environment with an accuracy of more than 96%, once the library is build, and (3) the robot was able to learn adequate hill climbing control laws which take it to the distinctive state of the local environment for five different environment types. | [
2288,
2697,
2698
] | Train |
1,760 | 2 | Title: Parallel Environments for Implementing Neural Networks
Abstract: As artificial neural networks (ANNs) gain popularity in a variety of application domains, it is critical that these models run fast and generate results in real time. Although a number of implementations of neural networks are available on sequential machines, most of these implementations require an inordinate amount of time to train or run ANNs, especially when the ANN models are large. One approach for speeding up the implementation of ANNs is to implement them on parallel machines. This paper surveys the area of parallel environments for the implementations of ANNs, and prescribes desired characteristics to look for in such implementations. | [
427,
1879
] | Train |
1,761 | 3 | Title: An extension of Fill's exact sampling algorithm to non-monotone chains*
Abstract: We provide an extension of Fill's (1998) exact sampler algorithm. Our algorithm is similar to Fill's, however it makes no assumptions regarding stochastic monotonicity, discreteness of the state space, the existence of densities, etc. We illustrate our algorithm on a simple example. | [
2208
] | Train |
1,762 | 6 | Title: Improved Uniform Test Error Bounds
Abstract: We derive distribution-free uniform test error bounds that improve on VC-type bounds for validation. We show how to use knowledge of test inputs to improve the bounds. The bounds are sharp, but they require intense computation. We introduce a method to trade sharpness for speed of computation. Also, we compute the bounds for several test cases. | [
571,
2495,
2694
] | Train |
1,763 | 2 | Title: A Brief History of Connectionism
Abstract: Connectionist research is firmly established within the scientific community, especially within the multi-disciplinary field of cognitive science. This diversity, however, has created an environment which makes it difficult for connectionist researchers to remain aware of recent advances in the field, let alone understand how the field has developed. This paper attempts to address this problem by providing a brief guide to connectionist research. The paper begins by defining the basic tenets of connectionism. Next, the development of connectionist research is traced, commencing with connectionism's philosophical predecessors, moving to early psychological and neuropsychological influences, followed by the mathematical and computing contributions to connectionist research. Current research is then reviewed, focusing specifically on the different types of network architectures and learning rules in use. The paper concludes by suggesting that neural network research, at least in cognitive science, should move towards models that incorporate the relevant functional principles inherent in neurobiological systems. | [
407,
611,
639,
745,
2611
] | Train |
1,764 | 3 | Title: Some remarks on Scheiblechner's treatment of ISOP models.
Abstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Sijtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. | [
1770,
1938
] | Train |
1,765 | 3 | Title: A Characterization of Monotone Unidimensional Latent Variable Models
Abstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Si-jtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. | [
1770,
1938
] | Train |
1,766 | 2 | Title: Computational Models of Sensorimotor Integration Computational Maps and Motor Control.
Abstract: The sensorimotor integration system can be viewed as an observer attempting to estimate its own state and the state of the environment by integrating multiple sources of information. We describe a computational framework capturing this notion, and some specific models of integration and adaptation that result from it. Psychophysical results from two sensorimotor systems, subserving the integration and adaptation of visuo-auditory maps, and estimation of the state of the hand during arm movements, are presented and analyzed within this framework. These results suggest that: (1) Spatial information from visual and auditory systems is integrated so as to reduce the variance in localization. (2) The effects of a remapping in the relation between visual and auditory space can be predicted from a simple learning rule. (3) The temporal propagation of errors in estimating the hand's state is captured by a linear dynamic observer, providing evidence for the existence of an internal model which simulates the dynamic behavior of the arm. | [
427,
477,
1810
] | Train |
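Point (1) of the abstract above, integrating two spatial cues so as to reduce localization variance, is usually formalized as inverse-variance weighting. The display below is the textbook version and is offered only as a plausible reading of the claim, not as the authors' exact model:

```latex
% Minimum-variance combination of a visual estimate x_V and an auditory estimate x_A:
\hat{x} \;=\; \frac{\sigma_A^{2}\, x_V + \sigma_V^{2}\, x_A}{\sigma_V^{2} + \sigma_A^{2}},
\qquad
\operatorname{Var}(\hat{x}) \;=\; \left(\frac{1}{\sigma_V^{2}} + \frac{1}{\sigma_A^{2}}\right)^{-1}
\;\le\; \min\!\left(\sigma_V^{2}, \sigma_A^{2}\right).
```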
1,767 | 4 | Title: Incremental Evolution of Complex General Behavior
Abstract: Several researchers have demonstrated how complex action sequences can be learned through neuro-evolution (i.e. evolving neural networks with genetic algorithms). However, complex general behavior such as evading predators or avoiding obstacles, which is not tied to specific environments, turns out to be very difficult to evolve. Often the system discovers mechanical strategies (such as moving back and forth) that help the agent cope, but are not very effective, do not appear believable and would not generalize to new environments. The problem is that a general strategy is too difficult for the evolution system to discover directly. This paper proposes an approach where such complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general. The task transitions are implemented through successive stages of delta-coding (i.e. evolving modifications), which allows even converged populations to adapt to the new task. The method is tested in the stochastic, dynamic task of prey capture, and compared with direct evolution. The incremental approach evolves more effective and more general behavior, and should also scale up to harder tasks. | [
500,
2257
] | Train |
1,768 | 4 | Title: Evolving Neural Networks to Play Go
Abstract: Go is a difficult game for computers to master, and the best go programs are still weaker than the average human player. Since the traditional game playing techniques have proven inadequate, new approaches to computer go need to be studied. This paper presents a new approach to learning to play go. The SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge. On a 9 x 9 go board, networks that were able to defeat a simple computer opponent were evolved within a few hundred generations. Most significantly, the networks exhibited several aspects of general go playing, which suggests the approach could scale up well. | [
2257
] | Train |
1,769 | 1 | Title: Testing the Robustness of the Genetic Algorithm on the Floating Building Block Representation.
Abstract: Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance. | [
1696,
2330
] | Test |
1,770 | 2 | Title: A Survey of Theory and Methods of Invariant Item Ordering
Abstract: This work was initiated while Junker was visiting the University of Utrecht with the support of a Carnegie Mellon University Faculty Development Grant, and the generous hospitality of the Social Sciences Faculty, University of Utrecht. Additional support was provided by the Office of Naval Research, Cognitive Sciences Division, Grant N00014-87-K-0277 and the National Institute of Mental Health, Training Grant MH15758. | [
1764,
1765,
1938
] | Train |
1,771 | 1 | Title: When Will a Genetic Algorithm Outperform Hill Climbing?
Abstract: We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple "Royal Road" function. We then analyze an "idealized" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA. | [
1696,
1775,
1872
] | Train |
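For reference, the random-mutation hill climber (RMHC) analyzed in comparisons like the one above is only a few lines long. The sketch below is a generic bitstring version with the fitness function left abstract; the function name and the toy fitness used in the example are assumptions, not code from the paper.

```python
import random

def rmhc(fitness, n_bits, n_evals, rng=random.Random(0)):
    """Random-mutation hill climbing on a bitstring.

    Repeatedly flip one randomly chosen bit and keep the mutant
    whenever its fitness is at least as good as the current string.
    """
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best_fit = fitness(current)
    for _ in range(n_evals):
        candidate = current[:]
        candidate[rng.randrange(n_bits)] ^= 1   # flip a single random bit
        f = fitness(candidate)
        if f >= best_fit:                       # accept ties, as in the usual RMHC
            current, best_fit = candidate, f
    return current, best_fit

# Toy example: maximize the number of ones (not a Royal Road function).
print(rmhc(sum, n_bits=20, n_evals=2000)[1])
```

A Royal Road function would be substituted for `sum` to reproduce the kind of comparison the abstract describes.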
1,772 | 2 | Title: New Inexact Parallel Variable Distribution Algorithms
Abstract: We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining "secondary" variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are the key features of the algorithm that has not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order 2 and strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves flexibility of PVD approach. | [
2307,
2351
] | Train |
1,773 | 2 | Title: Canonical Momenta Indicators of Financial Markets and Neocortical EEG
Abstract: A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. | [
2545
] | Train |
1,774 | 2 | Title: Networks of Spiking Neurons: The Third Generation of Neural Network Models
Abstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e. threshold gates) or on sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. | [
990,
1891,
1968,
2619
] | Train |
1,775 | 1 | Title: GENETIC ALGORITHMS AND VERY FAST SIMULATED REANNEALING: A COMPARISON
Abstract: We compare Genetic Algorithms (GA) with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima. GA previously has been demonstrated to be competitive with other standard Boltzmann-type simulated annealing techniques. Presenting a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, strongly suggests that VFSR can be expected to be orders of magnitude more efficient than GA. | [
163,
1728,
1771,
1793,
1795,
2082,
2116
] | Train |
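The "statistically guaranteed" convergence claim for VFSR rests on its annealing schedule, which shrinks exponentially in the annealing index rather than logarithmically as Boltzmann-type annealing requires. The schedule below is the commonly cited published form, reproduced here as a reference and worth checking against the original VFSR/ASA papers:

```latex
% Very fast simulated reannealing temperature schedule for parameter i
% in a D-dimensional search space, with annealing-time index k:
T_i(k) \;=\; T_{i0}\, \exp\!\left(-c_i\, k^{1/D}\right),
% versus the T(k) \propto T_0 / \ln k schedule needed for the classical
% Boltzmann-annealing convergence guarantee.
```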
1,776 | 5 | Title: Extending Theory Refinement to M-of-N Rules
Abstract: In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the Either propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend Either to refine M-of-N rules. The resulting algorithm, Neither (New Either), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of Neither, we present experimental results from two real-world domains. | [
136,
1595,
2543
] | Train |
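An M-of-N rule fires when at least M of its N listed antecedents hold, which is why it cannot be expressed compactly as a single conjunction or disjunction. The snippet below is a minimal illustrative encoding of such a rule, not Neither's internal representation:

```python
def m_of_n(m, antecedents):
    """Return a rule that fires when at least m of the antecedents are true.

    Each antecedent is a predicate over an example (e.g. a dict of features).
    """
    def rule(example):
        return sum(1 for a in antecedents if a(example)) >= m
    return rule

# Example: a "2-of-3" rule over boolean features f1, f2, f3.
rule = m_of_n(2, [lambda e: e["f1"], lambda e: e["f2"], lambda e: e["f3"]])
print(rule({"f1": True, "f2": False, "f3": True}))   # True  (2 of 3 hold)
print(rule({"f1": True, "f2": False, "f3": False}))  # False (only 1 holds)
```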
1,777 | 3 | Title: Representing Aggregate Belief through the Competitive Equilibrium of a Securities Market
Abstract: We consider the problem of belief aggregation: given a group of individual agents with probabilistic beliefs over a set of of uncertain events, formulate a sensible consensus or aggregate probability distribution over these events. Researchers have proposed many aggregation methods, although on the question of which is best the general consensus is that there is no consensus. We develop a market-based approach to this problem, where agents bet on uncertain events by buying or selling securities contingent on their outcomes. Each agent acts in the market so as to maximize expected utility at given securities prices, limited in its activity only by its own risk aversion. The equilibrium prices of goods in this market represent aggregate beliefs. For agents with constant risk aversion, we demonstrate that the aggregate probability exhibits several desirable properties, and is related to independently motivated techniques. We argue that the market-based approach provides a plausible mechanism for belief aggregation in multiagent systems, as it directly addresses self-motivated agent incentives for participation and for truthfulness, and can provide a decision-theoretic foundation for the "expert weights" often employed in centralized pooling techniques. | [
1802,
2064
] | Test |
1,778 | 2 | Title: SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES WELL-KNOWN FEATURE DETECTORS Neural Computation, 1996 (accepted)
Abstract: Predictability minimization (PM; Schmidhuber, 1992) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there were only toy applications of PM. In this paper, we apply semilinear PM to static real world images and find: without a teacher and without any significant preprocessing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression. | [
731,
2024
] | Test |
1,779 | 4 | Title: REINFORCEMENT LEARNING WITH SELF-MODIFYING POLICIES
Abstract: A learner's modifiable components are called its policy. An algorithm that modifies the policy is a learning algorithm. If the learning algorithm has modifiable components represented as part of the policy, then we speak of a self-modifying policy (SMP). SMPs can modify the way they modify themselves etc. They are of interest in situations where the initial learning algorithm itself can be improved by experience; this is what we call "learning to learn". How can we force some (stochastic) SMP to trigger better and better self-modifications? The success-story algorithm (SSA) addresses this question in a lifelong reinforcement learning context. During the learner's life-time, SSA is occasionally called at times computed according to SMP itself. SSA uses backtracking to undo those SMP-generated SMP-modifications that have not been empirically observed to trigger lifelong reward accelerations (measured up until the current SSA call; this evaluates the long-term effects of SMP-modifications, setting the stage for later SMP-modifications). SMP-modifications that survive SSA represent a lifelong success history. Until the next SSA call, they build the basis for additional SMP-modifications. Solely by self-modifications our SMP/SSA-based learners solve a complex task in a partially observable environment (POE) whose state space is far bigger than most reported in the POE literature. | [
2007
] | Train |
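The backtracking step that both this record and the next describe can be phrased as a stack invariant: every surviving checkpoint must mark the start of a period whose reward per unit time is strictly higher than that of every older surviving checkpoint. The sketch below states that invariant in code; the data layout, the rate computation, and the `undo` callback are assumptions made for illustration, not the published pseudocode.

```python
def ssa_backtrack(stack, now, total_reward, undo):
    """Success-story style backtracking (a sketch, not the original pseudocode).

    stack : list of (time, reward_at_that_time, modification), oldest first,
            recording each still-valid self-modification.
    undo  : callback that reverses a popped modification.

    Pops (and undoes) recent modifications until the reward per unit time
    measured since each surviving entry increases strictly from older to
    newer entries -- the "success story" property described in the abstract.
    """
    def rate(entry):
        t, r, _ = entry
        return (total_reward - r) / max(now - t, 1e-9)

    def story_ok():
        rates = [rate(e) for e in stack]
        return all(a < b for a, b in zip(rates, rates[1:]))

    while stack and not story_ok():
        _, _, modification = stack.pop()
        undo(modification)
    return stack
```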
1,780 | 4 | Title: Machine Learning, Shifting Inductive Bias with Success-Story Algorithm, Adaptive Levin Search, and Incremental Self-Improvement
Abstract: We study task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias (changes of the learner's policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the "success-story algorithm" (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails. | [
2007
] | Validation |
1,781 | 5 | Title: Learning Singly-Recursive Relations from Small Datasets
Abstract: The inductive logic programming system LOPSTER was created to demonstrate the advantage of basing induction on logical implication rather than θ-subsumption. LOPSTER's sub-unification procedures allow it to induce recursive relations using a minimum number of examples, whereas inductive logic programming algorithms based on θ-subsumption require many more examples to solve induction tasks. However, LOPSTER's input examples must be carefully chosen; they must be along the same inverse resolution path. We hypothesize that an extension of LOPSTER can efficiently induce recursive relations without this requirement. We introduce a generalization of LOPSTER named CRUSTACEAN that has this capability and empirically evaluate its ability to induce recursive relations. | [
1819,
2663
] | Validation |
1,782 | 4 | Title: Least-Squares Temporal Difference Learning
Abstract: Submitted to NIPS-98. TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto [5] eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique. | [
134,
295,
565,
566,
2328
] | Validation |
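The generalization of LSTD to arbitrary λ that the abstract describes amounts to accumulating an eligibility trace inside the usual least-squares statistics. The sketch below assumes a linear feature map and a single stream of transitions; the function signature and the small ridge term are illustrative choices, not the paper's notation.

```python
import numpy as np

def lstd_lambda(trajectory, phi, n_features, gamma=0.95, lam=0.5, reg=1e-6):
    """LSTD(lambda) value-function estimation from observed transitions.

    trajectory : iterable of (state, reward, next_state, done) tuples
    phi        : feature map, state -> np.ndarray of length n_features
    Returns the weight vector theta with V(s) ~= phi(s) @ theta.
    """
    A = reg * np.eye(n_features)          # small ridge term keeps A invertible
    b = np.zeros(n_features)
    z = np.zeros(n_features)              # eligibility trace
    for state, reward, next_state, done in trajectory:
        f, f_next = phi(state), phi(next_state)
        if done:
            f_next = np.zeros(n_features)
        z = gamma * lam * z + f
        A += np.outer(z, f - gamma * f_next)
        b += z * reward
        if done:
            z = np.zeros(n_features)      # reset the trace at episode boundaries
    return np.linalg.solve(A, b)
```

Setting lam=0 recovers the Bradtke and Barto formulation mentioned in the abstract, while lam=1 corresponds to its supervised-regression reading.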
1,783 | 3 | Title: Importance Sampling
Abstract: Technical Report No. 9805, Department of Statistics, University of Toronto. Simulated annealing, moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions, has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. | [
48,
2348
] | Train |
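The importance weights produced by one annealing run have a simple closed form. The display below uses the indexing commonly associated with annealed importance sampling (f_n the tractable starting distribution, f_0 the distribution of interest); the convention is an assumption and should be checked against the report itself:

```latex
% One annealing run: draw x_{n-1} ~ f_n, then apply Markov transitions
% T_{n-1}, ..., T_1 (each leaving the corresponding f_j invariant) to obtain
% x_{n-2}, ..., x_0.  The run's importance weight is
w \;=\; \frac{f_{n-1}(x_{n-1})}{f_{n}(x_{n-1})}\,
        \frac{f_{n-2}(x_{n-2})}{f_{n-1}(x_{n-2})}\,
        \cdots\,
        \frac{f_{0}(x_{0})}{f_{1}(x_{0})}.
% Averaging w over independent runs estimates the ratio of normalizing
% constants Z_0 / Z_n; weighted averages of h(x_0) estimate E_{f_0}[h].
```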
1,784 | 1 | Title: Genetic Programming and Redundancy
Abstract: The Genetic Programming optimization method (GP) elaborated by John Koza [ Koza, 1992 ] is a variant of Genetic Algorithms. The search space of the problem domain consists of computer programs represented as parse trees, and the crossover operator is realized by an exchange of subtrees. Empirical analyses show that large parts of those trees are never used or evaluated, which means that these parts of the trees are irrelevant for the solution or redundant. This paper is concerned with the identification of the redundancy occurring in GP. It starts with a mathematical description of the behavior of GP and the conclusions drawn from that description among others explain the "size problem" which denotes the phenomenon that the average size of trees in the population grows with time. | [
55,
163,
380,
844,
1184,
2199,
2688,
2705
] | Train |
1,785 | 1 | Title: A DISCUSSION ON SOME DESIGN PRINCIPLES FOR EFFICIENT CROSSOVER OPERATORS FOR GRAPH COLORING PROBLEMS
Abstract: A year ago, a new metaheuristic for graph coloring problems was introduced by Costa, Hertz and Dubuis. They have shown, with computer experiments, some clear indication of the benefits of this approach. Graph coloring has many applications especially in the areas of scheduling, assignments and timetabling. The metaheuristic can be classified as a memetic algorithm since it is based on a population search in which periods of local optimization are interspersed with phases in which new configurations are created from earlier well-developed configurations or local minima of the previous iterative improvement process. The new population is created using crossover operators as in genetic algorithms. In this paper we discuss how a methodology inspired by Competitive Analysis may be relevant to the problem of designing better crossover operators. ABSTRACT (translated from the Portuguese): In the past year a new metaheuristic for the graph coloring problem was presented by Costa, Hertz and Dubuis. They showed, with computational experiments, some clear indications of the benefits of this new technique. Graph coloring has many applications, especially in the areas of task scheduling, assignment and timetabling. The metaheuristic can be classified as a memetic algorithm, since it is based on a population search whose periods of local optimization are interspersed with phases in which new configurations are created from good configurations or local minima of earlier iterations. The new population is created using crossover operations as in genetic algorithms. In this article we show how a methodology based on Competitive Analysis can be relevant for constructing crossover operations. | [
2564
] | Validation |
1,786 | 6 | Title: Decision Trees: Equivalence and Propositional Operations
Abstract: For the well-known concept of decision trees as it is used for inductive inference we study the natural concept of equivalence: two decision trees are equivalent if and only if they represent the same hypothesis. We present a simple efficient algorithm to establish whether two decision trees are equivalent or not. The complexity of this algorithm is bounded by the product of the sizes of both decision trees. The hypothesis represented by a decision tree is essentially a boolean function, just like a proposition. Although every boolean function can be represented in this way, we show that disjunctions and conjunctions of decision trees can not efficiently be represented as decision trees, and simply shaped propositions may require exponential size for representation as decision trees. | [
2207
] | Validation |
1,787 | 2 | Title: An integrated approach to the study of object features in visual recognition
Abstract: We propose to assess the relevance of theories of synaptic modification as models of feature extraction in human vision, by using masks derived from synaptic weight patterns to occlude parts of the stimulus images in psychophysical experiments. In the experiment reported here, we found that a mask derived from principal component analysis of object images was more effective in reducing the generalization performance of human subjects than a mask derived from another method of feature extraction (BCM), based on higher-order statistics of the images. | [
359,
2499,
2676
] | Train |
1,788 | 2 | Title: Path-integral evolution of chaos embedded in noise: Duffing neocortical analog
Abstract: A two dimensional time-dependent Duffing oscillator model of macroscopic neocortex exhibits chaos for some ranges of parameters. We embed this model in moderate noise, typical of the context presented in real neocortex, using PATHINT, a non-Monte-Carlo path-integral algorithm that is particularly adept in handling nonlinear Fokker-Planck systems. This approach shows promise to investigate whether chaos in neocortex, as predicted by such models, can survive in noisy contexts. | [
1795,
2178,
2181
] | Train |
1,789 | 2 | Title: Pruning with generalization based weight saliencies: γOBD, γOBS
Abstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N)-storage scheme extending OBD, as well as an O(N^2) scheme extending OBS. We illustrate the viability of the approach on prediction of a chaotic time series. | [
2469
] | Train |
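For context, the training-error saliencies that OBD and OBS assign to a weight w_q, which the abstract above replaces with generalization-based estimates, are the standard second-order expressions below (H denotes the Hessian of the training error):

```latex
% Optimal Brain Damage (diagonal Hessian approximation):
s^{\mathrm{OBD}}_{q} \;=\; \tfrac{1}{2}\, H_{qq}\, w_q^{2}
% Optimal Brain Surgeon (full Hessian, with re-adjustment of the remaining weights):
s^{\mathrm{OBS}}_{q} \;=\; \frac{w_q^{2}}{2\,\left[H^{-1}\right]_{qq}}
```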
1,790 | 1 | Title: Using Genetic Programming to Evolve Board Evaluation Functions
Abstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N )-storage scheme extending OBD, as well as an O(N 2 ) scheme extending OBS. We illustrate the viability of the approach on pre diction of a chaotic time series. | [
22,
415,
523,
565,
2334
] | Validation |
1,791 | 4 | Title: Q2: Memory-based active learning for optimizing noisy continuous functions
Abstract: This paper introduces a new algorithm, Q2, for optimizing the expected output of a multi-input noisy continuous function. Q2 is designed to need only a few experiments, it avoids strong assumptions on the form of the function, and it is autonomous in that it requires little problem-specific tweaking. Four existing approaches to this problem (response surface methods, numerical optimization, supervised learning, and evolutionary methods) all have inadequacies when the requirement of "black box" behavior is combined with the need for few experiments. Q2 uses instance-based determination of a convex region of interest for performing experiments. In conventional instance-based approaches to learning, a neighborhood was defined by proximity to a query point. In contrast, Q2 defines the neighborhood by a new geometric procedure that captures the size and | [
682,
1559,
1859
] | Train |
1,792 | 6 | Title: Using n^2-classifier in constructive induction
Abstract: In this paper, we propose a multi-classification approach for constructive induction. The idea of an improvement of classification accuracy is based on iterative modification of input data space. This process is independently repeated for each pair of n classes. Finally, it gives (n^2 - n)/2 input data subspaces of attributes dedicated for optimal discrimination of appropriate pairs of classes. We use genetic algorithms as a constructive induction engine. A final classification is obtained by a weighted majority voting rule, according to the n^2-classifier approach. The computational experiment was performed on a medical data set. The obtained results point out the advantage of using a multi-classification model (n^2-classifier) in constructive induction in relation to the analogous single-classifier approach. | [
163,
430,
582,
2067
] | Test |
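The n^2-classifier scheme reduces an n-class problem to (n^2 - n)/2 pairwise problems and combines their outputs by weighted majority voting. The sketch below shows only that combination step; the per-pair constructive induction, the choice of weights, and the toy pairwise rules go beyond the abstract and are assumptions here.

```python
from itertools import combinations

def pairwise_vote(example, pair_classifiers, classes, weights=None):
    """Combine (n^2 - n)/2 pairwise classifiers by weighted majority voting.

    pair_classifiers : dict mapping (class_i, class_j) -> callable that
                       returns class_i or class_j for a given example
    weights          : optional dict with one weight per pair (default 1.0)
    """
    votes = {c: 0.0 for c in classes}
    for pair in combinations(classes, 2):
        winner = pair_classifiers[pair](example)
        votes[winner] += (weights or {}).get(pair, 1.0)
    return max(votes, key=votes.get)

# Toy usage with three classes and trivial pairwise rules:
clfs = {
    ("a", "b"): lambda x: "a" if x["size"] > 2 else "b",
    ("a", "c"): lambda x: "a" if x["size"] > 5 else "c",
    ("b", "c"): lambda x: "b" if x["size"] > 4 else "c",
}
print(pairwise_vote({"size": 3}, clfs, ["a", "b", "c"]))  # votes a:1, c:2 -> "c"
```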
1,793 | 2 | Title: STATISTICAL MECHANICS OF COMBAT WITH HUMAN FACTORS
Abstract: This highly interdisciplinary project extends previous work in combat modeling and in control-theoretic descriptions of decision-making human factors in complex activities. A previous paper has established the first theory of the statistical mechanics of combat (SMC), developed using modern methods of statistical mechanics, baselined to empirical data gleaned from the National Training Center (NTC). This previous project has also established a JANUS(T)-NTC computer simulation/wargame of NTC, providing a statistical "what-if" capability for NTC scenarios. This mathematical formulation is ripe for control-theoretic extension to include human factors, a methodology previously developed in the context of teleoperated vehicles. Similar NTC scenarios differing at crucial decision points will be used for data to model the influence of decision making on combat. The results may then be used to improve present human factors and C^2 algorithms in computer simulations/wargames. Our approach is to "subordinate" the SMC nonlinear stochastic equations, fitted to NTC scenarios, to establish the zeroth order description of that combat. In practice, an equivalent mathematical-physics representation is used, more suitable for numerical and formal work, i.e., a Lagrangian representation. Theoretically, these equations are nested within a larger set of nonlinear stochastic operator-equations which include C^3 human factors, e.g., supervisory decisions. In this study, we propose to perturb this operator theory about the SMC zeroth order set of equations. Then, subsets of scenarios fit to zeroth order, originally considered to be similarly degenerate, can be further split perturbatively to distinguish C^3 decision-making influences. New methods of Very Fast Simulated Re-Annealing (VFSR), developed in the previous project, will be used for fitting these models to empirical data. | [
1775,
1795,
2082,
2178,
2181
] | Train |