Dataset schema (column summaries as reported by the dataset viewer):

  node_id    int64    values 0 to 76.9k
  label      int64    values 0 to 39
  text       string   lengths 13 to 124k characters
  neighbors  list     lengths 0 to 3.32k entries
  mask       string   4 distinct classes (Train, Validation, and Test appear below)
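Each record below flattens these five fields in order: node_id, label, text, neighbors, mask. As a minimal sketch of how one flattened record could be parsed back into a typed structure — the raw dict shape and the `parse_row` helper are assumptions for illustration, not part of the dataset's own API:

```python
from dataclasses import dataclass


@dataclass
class Node:
    node_id: int
    label: int
    text: str
    neighbors: list[int]
    mask: str  # one of 4 classes; Train / Validation / Test appear in this chunk


def parse_row(raw: dict) -> Node:
    """Parse one flattened record; field names follow the schema above."""
    return Node(
        node_id=int(raw["node_id"].replace(",", "")),  # "1,794" -> 1794
        label=int(raw["label"]),
        text=raw["text"],
        # "[ 1795, 2178, 2545 ]" -> [1795, 2178, 2545]
        neighbors=[int(t) for t in raw["neighbors"].strip("[] ").split(",") if t.strip()],
        mask=raw["mask"],
    )


# Values copied from the first record below (node 1,794); the text cell is abbreviated
raw = {
    "node_id": "1,794",
    "label": "2",
    "text": "Title: ... Abstract: ...",
    "neighbors": "[ 1795, 2178, 2545 ]",
    "mask": "Validation",
}
```

The comma-stripping and bracket-stripping steps mirror exactly how the values are rendered in the rows below.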
node_id: 1,794
label: 2
Title: NONLINEAR NONEQUILIBRIUM NONQUANTUM NONCHAOTIC STATISTICAL MECHANICS OF NEOCORTICAL INTERACTIONS Abstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent
neighbors: [ 1795, 2178, 2545 ]
mask: Validation

node_id: 1,795
label: 2
Title: Application of statistical mechanics methodology to term-structure bond-pricing models, Mathl. Comput. Modelling Abstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent
neighbors: [ 1775, 1788, 1793, 1794, 2082, 2178, 2181, 2545 ]
mask: Test
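The neighbor lists are reciprocal where both endpoints fall in this chunk: node 1,794 lists 1,795, and node 1,795's list includes 1,794. A small sketch, using only values copied from the two records above, of collapsing such per-node lists into an undirected edge set:

```python
# Neighbor lists copied verbatim from records 1,794 and 1,795 above
neighbors = {
    1794: [1795, 2178, 2545],
    1795: [1775, 1788, 1793, 1794, 2082, 2178, 2181, 2545],
}

# Sort each endpoint pair so (u, v) and (v, u) collapse to one undirected edge
edges = {tuple(sorted((u, v))) for u, vs in neighbors.items() for v in vs}

# The reciprocal 1794 <-> 1795 entries contribute a single edge,
# so the 3 + 8 listed neighbors yield 10 distinct edges
```

The same set-of-sorted-pairs trick deduplicates any symmetric adjacency listing, whatever subset of records it is applied to.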
node_id: 1,796
label: 1
Title: Evaluating and Improving Steady State Evolutionary Algorithms on Constraint Satisfaction Problems Abstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent
neighbors: [ 833, 1030, 1917, 2001 ]
mask: Validation

node_id: 1,797
label: 1
Title: Improving the Performance of Evolutionary Optimization by Dynamically Scaling the Evaluation Function Abstract: Traditional evolutionary optimization algorithms assume a static evaluation function, according to which solutions are evolved. Incremental evolution is an approach through which a dynamic evaluation function is scaled over time in order to improve the performance of evolutionary optimization. In this paper, we present empirical results that demonstrate the effectiveness of this approach for genetic programming. Using two domains, a two-agent pursuit-evasion game and the Tracker [6] trail-following task, we demonstrate that incremental evolution is most successful when applied near the beginning of an evolutionary run. We also show that incremental evolution can be successful when the intermediate evaluation functions are more difficult than the target evaluation function, as well as when they are easier than the target function.
neighbors: [ 1221, 1409, 2200 ]
mask: Validation

node_id: 1,798
label: 2
Title: Toward a unified theory of spatiotemporal processing in the retina Abstract: Traditional evolutionary optimization algorithms assume a static evaluation function, according to which solutions are evolved. Incremental evolution is an approach through which a dynamic evaluation function is scaled over time in order to improve the performance of evolutionary optimization. In this paper, we present empirical results that demonstrate the effectiveness of this approach for genetic programming. Using two domains, a two-agent pursuit-evasion game and the Tracker [6] trail-following task, we demonstrate that incremental evolution is most successful when applied near the beginning of an evolutionary run. We also show that incremental evolution can be successful when the intermediate evaluation functions are more difficult than the target evaluation function, as well as when they are easier than the target function.
neighbors: [ 2120 ]
mask: Validation

node_id: 1,799
label: 1
Title: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes Abstract: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover based algorithms are in both cases outperformed by mutation based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms.
neighbors: [ 163, 727, 1424, 2205 ]
mask: Validation

node_id: 1,800
label: 3
Title: Rational Belief Revision (Preliminary Report) Abstract: Theories of rational belief revision recently proposed by Alchourron, Gardenfors, Makinson, and Nebel illuminate many important issues but impose unnecessarily strong standards for correct revisions and make strong assumptions about what information is available to guide revisions. We reconstruct these theories according to an economic standard of rationality in which preferences are used to select among alternative possible revisions. By permitting multiple partial specifications of preferences in ways closely related to preference-based nonmonotonic logics, the reconstructed theory employs information closer to that available in practice and offers more flexible ways of selecting revisions. We formally compare this new conception of rational belief revision with the original theories, adapt results about universal default theories to prove that there is unlikely to be any universal method of rational belief revision, and examine formally how different limitations on rationality affect belief revision.
neighbors: [ 342, 1901, 1907, 1994, 1995, 2016 ]
mask: Test

node_id: 1,801
label: 2
Title: A FAMILY OF FIXED-POINT ALGORITHMS FOR INDEPENDENT COMPONENT ANALYSIS Abstract: Independent Component Analysis (ICA) is a statistical signal processing technique whose main applications are blind source separation, blind deconvolution, and feature extraction. Estimation of ICA is usually performed by optimizing a 'contrast' function based on higher-order cumulants. In this paper, it is shown how almost any error function can be used to construct a contrast function to perform the ICA estimation. In particular, this means that one can use contrast functions that are robust against outliers. As a practical method for finding the relevant extrema of such contrast functions, a fixed-point iteration scheme is then introduced. The resulting algorithms are quite simple and converge fast and reliably. These algorithms also enable estimation of the independent components one-by-one, using a simple deflation scheme.
neighbors: [ 576, 1067, 1814, 1821 ]
mask: Train

node_id: 1,802
label: 3
Title: Toward a Market Model for Bayesian Inference Abstract: We present a methodology for representing probabilistic relationships in a general-equilibrium economic model. Specifically, we define a precise mapping from a Bayesian network with binary nodes to a market price system where consumers and producers trade in uncertain propositions. We demonstrate the correspondence between the equilibrium prices of goods in this economy and the probabilities represented by the Bayesian network. A computational market model such as this may provide a useful framework for investigations of belief aggregation, distributed probabilistic inference, resource allocation under uncertainty, and other problems of decentralized uncertainty.
neighbors: [ 1777, 2064 ]
mask: Train

node_id: 1,803
label: 3
Title: Change Point and Change Curve Modeling in Stochastic Processes and Spatial Statistics Abstract: 1 This article will appear in Volume 1, no. 4 (1994) of Journal of Applied Statistical Science. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. This research was supported by ONR contract no. N-00014-91-J-1074, by NIH Grant no. 5R01HD26330-02, by the Ministere de la Recherche et de l'Espace, Paris, by the Universite de Paris VI, and by INRIA, Rocquencourt, France. Raftery thanks the latter two institutions, Paul Deheuvels and Gilles Celeux for hearty hospitality during his Paris sabbatical in which this article was written. This article was prepared for presentation at the Conference on Applied Change Point Analysis, University of Maryland-Baltimore, March 17-18, 1993. Parts of this article review collaborative research with others to whom I would like to express my appreciation, namely Volkan Akman, Jeff Banfield, Nhu Le, Steven Lewis, Doug Martin, Fionn Murtagh, Ross Taplin and Simon Tavare.
neighbors: [ 84, 99, 1913 ]
mask: Validation

node_id: 1,804
label: 0
Title: CHARADE: a Platform for Emergencies Management Systems Abstract: This paper describes the functional architecture of CHARADE, a software platform devoted to the development of a new generation of intelligent environmental decision support systems. The CHARADE platform is based on a task-oriented approach to system design and on the exploitation of a new architecture for problem solving that integrates case-based reasoning and constraint reasoning. The platform is developed in an object-oriented environment, and upon it a demonstrator will be developed for managing first-intervention attack on forest fires.
neighbors: [ 1805, 2289 ]
mask: Train

node_id: 1,805
label: 0
Title: Planning in a Complex Real Domain Abstract: Dimensions of complexity raised during the definition of a system aimed at supporting the planning of initial attack to forest fires are presented and discussed: the complexity deriving from the highly dynamic and unpredictable domain of forest fire, the complexity related to the identification and integration of planning techniques suitable to this domain, the complexity of addressing the problem of taking into account the role of the user to be supported by the system, and finally the complexity of an architecture able to integrate different subsystems. In particular, we focus on the severe constraints to the definition of a planning approach posed by the fire-fighting domain, constraints which cannot be satisfied completely by any of the current planning paradigms. We propose an approach based on the integration of skeletal planning and case-based reasoning techniques with constraint reasoning. More specifically, temporal constraints are used in two steps of the planning process: plan fitting and adaptation, and resource scheduling. Work on the development of the system software architecture with an OOD methodology is in progress.
neighbors: [ 1804, 2289 ]
mask: Test

node_id: 1,806
label: 2
Title: MBP on T0: mixing floating- and fixed-point formats in BP learning Abstract: We examine the efficient implementation of back prop type algorithms on T0 [4], a vector processor with a fixed point engine, designed for neural network simulation. A matrix formulation of back prop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve an asymptotically optimal performance on T0 (about 0.8 GOPS) for both forward and backward phases, which is not possible with the standard on-line method. Since high efficiency is futile if convergence is poor (due to the use of fixed point arithmetic), we use a mixture of fixed and floating point operations. The key observation is that the precision of fixed point is sufficient for good convergence, if the range is appropriately chosen. Though the most expensive computations are implemented in fixed point, we achieve a rate of convergence that is comparable to the floating point version. The time taken for conversion between fixed and floating point is also shown to be reasonable.
neighbors: [ 2279, 2570 ]
mask: Train

node_id: 1,807
label: 1
Title: A Preliminary Investigation of Evolution as a Form Design Strategy Abstract: We describe the preliminary version of our investigative software, GGE (Generative Genetic Explorer), in which genetic operations interact with AutoCAD to generate novel 3D forms for the architect. GGE allows us to assess how evolutionary algorithms should be tailored to suit architectural CAD tasks.
neighbors: [ 163, 2490 ]
mask: Validation

node_id: 1,808
label: 6
Title: Where Do SE-trees Perform? (Part I) Abstract: As a classifier, a Set Enumeration (SE) tree can be viewed as a generalization of decision trees. We empirically characterize domains in which SE-trees are particularly advantageous relative to decision trees. Specifically, we show that:
neighbors: [ 638, 2210 ]
mask: Train

node_id: 1,809
label: 0
Title: Rule Generation and Compaction in the wwtp Abstract: In this paper we discuss our approach to learning classification rules from data. We sketch out two modules of our architecture, namely LINNEO+ and GAR. LINNEO+, a knowledge acquisition tool for ill-structured domains, automatically generates classes from examples, working incrementally with an unsupervised strategy. LINNEO+'s output, a representation of the conceptual structure of the domain in terms of classes, is the input to GAR, which is used to generate a set of classification rules for the original training set. GAR can generate both conjunctive and disjunctive rules. Herein we present an application of these techniques to data obtained from a real wastewater treatment plant in order to help the construction of a rule base. This rule base will be used for a knowledge-based system that aims to supervise the whole process.
neighbors: [ 2071, 2585 ]
mask: Train

node_id: 1,810
label: 2
Title: Computation and Psychophysics of Sensorimotor Integration Abstract: In this paper we discuss our approach to learning classification rules from data. We sketch out two modules of our architecture, namely LINNEO+ and GAR. LINNEO+, a knowledge acquisition tool for ill-structured domains, automatically generates classes from examples, working incrementally with an unsupervised strategy. LINNEO+'s output, a representation of the conceptual structure of the domain in terms of classes, is the input to GAR, which is used to generate a set of classification rules for the original training set. GAR can generate both conjunctive and disjunctive rules. Herein we present an application of these techniques to data obtained from a real wastewater treatment plant in order to help the construction of a rule base. This rule base will be used for a knowledge-based system that aims to supervise the whole process.
neighbors: [ 1766 ]
mask: Validation

node_id: 1,811
label: 2
Title: Disambiguation and Grammar as Emergent Soft Constraints Abstract: When reading a sentence such as "The diplomat threw the ball in the ballpark for the princess" our interpretation changes from a dance event to baseball and back to dance. Such on-line disambiguation happens automatically and appears to be based on dynamically combining the strengths of association between the keywords and the two senses. Subsymbolic neural networks are very good at modeling such behavior. They learn word meanings as soft constraints on interpretation, and dynamically combine these constraints to form the most likely interpretation. On the other hand, it is very difficult to show how systematic language structures such as relative clauses could be processed in such a system. The network would only learn to associate them to specific contexts and would not be able to process new combinations of them. A closer look at understanding embedded clauses shows that humans are not very systematic in processing grammatical structures either. For example, "The girl who the boy who the girl who lived next door blamed hit cried" is very difficult to understand, whereas "The car that the man who the dog that had rabies bit drives is in the garage" is not. This difference emerges from the same semantic constraints that are at work in the disambiguation task. In this chapter we will show how the subsymbolic parser can be combined with high-level control that allows the system to process novel combinations of relative clauses systematically, while still being sensitive to the semantic constraints.
neighbors: [ 204, 427, 2410 ]
mask: Train

node_id: 1,812
label: 2
Title: GENERALIZATION PERFORMANCE OF BACKPROPAGATION LEARNING ON A SYLLABIFICATION TASK Abstract: We investigated the generalization capabilities of backpropagation learning in feed-forward and recurrent feed-forward connectionist networks on the assignment of syllable boundaries to orthographic representations in Dutch (hyphenation). This is a difficult task because phonological and morphological constraints interact, leading to ambiguity in the input patterns. We compared the results to different symbolic pattern matching approaches, and to an exemplar-based generalization scheme, related to a k-nearest neighbour approach, but using a similarity metric weighted by the relative information entropy of positions in the training patterns. Our results indicate that the generalization performance of backpropagation learning for this task is not better than that of the best symbolic pattern matching approaches, and of exemplar-based generalization.
neighbors: [ 783, 785, 862, 1155, 1407, 1513, 2364 ]
mask: Validation

node_id: 1,813
label: 2
Title: Pruning Strategies for the MTiling Constructive Learning Algorithm Abstract: We present a framework for incorporating pruning strategies in the MTiling constructive neural network learning algorithm. Pruning involves elimination of redundant elements (connection weights or neurons) from a network and is of considerable practical interest. We describe three elementary sensitivity based strategies for pruning neurons. Experimental results demonstrate a moderate to significant reduction in the network size without compromising the network's generalization performance.
neighbors: [ 503, 1818, 2393 ]
mask: Train

node_id: 1,814
label: 2
Title: Independent Component Analysis by General Non-linear Hebbian-like Learning Rules Abstract: A number of neural learning rules have been recently proposed for Independent Component Analysis (ICA). The rules are usually derived from information-theoretic criteria such as maximum entropy or minimum mutual information. In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather surprisingly, practically any non-linear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the non-linearity so as to optimize desired statistical or numerical criteria.
neighbors: [ 570, 576, 834, 1067, 1801 ]
mask: Validation

node_id: 1,815
label: 2
Title: Submitted to Circuits, Systems and Signal Processing Neural Network Constructive Algorithms: Trading Generalization for Learning Efficiency? Abstract: There are currently several types of constructive, or growth, algorithms available for training a feed-forward neural network. This paper describes and explains the main ones, using a fundamental approach to the multi-layer perceptron problem-solving mechanisms. The claimed convergence properties of the algorithms are verified using just two mapping theorems, which consequently enables all the algorithms to be unified under a basic mechanism. The algorithms are compared and contrasted and the deficiencies of some highlighted. The fundamental reasons for the actual success of these algorithms are extracted, and used to suggest where they might most fruitfully be applied. A suspicion that they are not a panacea for all current neural network difficulties, and that one must somewhere along the line pay for the learning efficiency they promise, is developed into an argument that their generalization abilities will lie on average below that of back-propagation.
neighbors: [ 238, 253, 489, 2670, 2671 ]
mask: Validation

node_id: 1,816
label: 4
Title: Generalized Prioritized Sweeping Abstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping.
neighbors: [ 558, 559, 566, 1934, 2485 ]
mask: Train

node_id: 1,817
label: 2
Title: Selection of Distance Metrics and Feature Subsets for k-Nearest Neighbor Classifiers Abstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping.
neighbors: [ 2464 ]
mask: Train

node_id: 1,818
label: 2
Title: Constructive Neural Network Learning Algorithms for Multi-Category Real-Valued Pattern Classification Abstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping.
neighbors: [ 1813, 2029, 2073 ]
mask: Validation

node_id: 1,819
label: 5
Title: The Difficulties of Learning Logic Programs with Cut Abstract: As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is unfeasible. An alternative solution is to generate first a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. As a conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and current induction techniques should probably be restricted to purely declarative logic languages.
neighbors: [ 224, 1781, 2580 ]
mask: Train

node_id: 1,820
label: 2
Title: The Gamma MLP for Speech Phoneme Recognition Abstract: We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe (de Vries & Principe 1992)) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes. We find that both the inclusion of gamma filters in all layers, and the inclusion of synaptic gains, improves the performance of the Gamma MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi IIR MLP architectures, and a local approximation scheme. We find that the Gamma MLP results in a substantial reduction in error rates.
neighbors: [ 2383, 2569 ]
mask: Train

node_id: 1,821
label: 2
Title: One-unit Learning Rules for Independent Component Analysis Abstract: Neural one-unit learning rules for the problem of Independent Component Analysis (ICA) and blind source separation are introduced. In these new algorithms, every ICA neuron develops into a separator that finds one of the independent components. The learning rules use very simple constrained Hebbian/anti-Hebbian learning in which decorrelating feedback may be added. To speed up the convergence of these stochastic gradient descent rules, a novel computationally efficient fixed-point algorithm is introduced.
neighbors: [ 834, 1801 ]
mask: Train

node_id: 1,822
label: 2
Title: Book Review: New Kids on the Block Abstract: Connectionist Models is a collection of forty papers representing a wide variety of research topics in connectionism. The book is distinguished by a single feature: the papers are almost exclusively contributions of graduate students active in the field. The students were selected by a rigorous review process and participated in a two-week-long summer school devoted to connectionism 2 . As the ambitious editors state in the foreword: These are bold claims and, if true, the reader is presented with an exciting opportunity to sample the frontiers of connectionism. Their words imply two ways to approach the book. The book must be read not just as a random collection of scientific papers, but also as a challenge to evaluate a controversial field. 2 This summer school is actually the third in a series, previous ones being held in 1986 and 1988. The proceedings of the 1988 summer school (which I had the privilege of participating in) are reviewed by Nigel Goddard in [4]. Continuing the pattern, a fourth school is scheduled to be held in 1993 in Boulder, CO.
neighbors: [ 1656, 2662 ]
mask: Test

node_id: 1,823
label: 6
Title: The Complexity of Theory Revision Abstract: A knowledge-based system uses its database (a.k.a. its "theory") to produce answers to the queries it receives. Unfortunately, these answers may be incorrect if the underlying theory is faulty. Standard "theory revision" systems use a given set of "labeled queries" (each a query paired with its correct answer) to transform the given theory, by adding and/or deleting either rules and/or antecedents, into a related theory that is as accurate as possible. After formally defining the theory revision task and bounding its sample complexity, this paper addresses the task's computational complexity. It first proves that, unless P = N P , no polynomial time algorithm can identify the optimal theory, even given the exact distribution of queries, except in the most trivial of situations. It also shows that, except in such trivial situations, no polynomial-time algorithm can produce a theory whose inaccuracy is even close (i.e., within a particular polynomial factor) to optimal. These results justify the standard practice of hill-climbing to a locally-optimal theory, based on a given set of labeled samples.
neighbors: [ 2487, 2580 ]
mask: Test

node_id: 1,824
label: 6
Title: Constructing Conjunctions using Systematic Search on Decision Trees Abstract: This paper investigates a dynamic path-based method for constructing conjunctions as new attributes for decision tree learning. It searches for conditions (attribute-value pairs) from paths to form new attributes. Compared with other hypothesis-driven new attribute construction methods, the new idea of this method is that it carries out systematic search with pruning over each path of a tree to select conditions for generating a conjunction. Therefore, conditions for constructing new attributes are dynamically decided during search. Empirical evaluation in a set of artificial and real-world domains shows that the dynamic path-based method can improve the performance of selective decision tree learning in terms of both higher prediction accuracy and lower theory complexity. In addition, it shows some performance advantages over a fixed path-based method and a fixed rule-based method for learning decision trees.
[ 102, 1595, 2006, 2675 ]
Train
1,825
2
Title: GUESSING CAN OUTPERFORM MANY LONG TIME LAG ALGORITHMS Abstract: Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alternative methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing.
[ 68, 121, 978, 979, 1966 ]
Train
1,826
1
Title: A Computational View of Population Genetics (preliminary version) Abstract: This paper contributes to the study of nonlinear dynamical systems from a computational perspective. These systems are inherently more powerful than their linear counterparts (such as Markov chains), which have had a wide impact in Computer Science, and they seem likely to play an increasing role in future. However, there are as yet no general techniques available for handling the computational aspects of discrete nonlinear systems, and even the simplest examples seem very hard to analyze. We focus in this paper on a class of quadratic systems that are widely used as a model in population genetics and also in genetic algorithms. These systems describe a process where random matings occur between parental chromosomes via a mechanism known as "crossover": i.e., children inherit pieces of genetic material from different parents according to some random rule. Our results concern two fundamental quantitative properties of crossover systems: 1. We develop a general technique for computing the
[ 689, 2630 ]
Train
1,827
6
Title: Efficient Algorithms for Inverting Evolution Abstract: Evolution is a stochastic process which operates on the DNA of species. The evolutionary process leaves tell-tale signs in the DNA which can be used to construct phylogenies, or evolutionary trees, for a set of species. Maximum Likelihood Estimation (MLE) methods seek the evolutionary tree which is most likely to have produced the DNA under consideration. While these methods are widely accepted and intellectually satisfying, they have been computationally intractable. In this paper, we address the intractability of MLE methods as follows. We introduce a metric on stochastic process models of evolution. We show that this metric is meaningful by proving that in order for any algorithm to distinguish between two stochastic models that are close according to this metric, it needs to be given a lot of observations. We complement this result with a simple and efficient algorithm for inverting the stochastic process of evolution, that is, for building the tree from observations on the DNA of the species. Our result can be viewed as a result on the PAC-learnability of the class of distributions produced by tree-like processes. Though there have been many heuristics suggested for this problem, our algorithm is the first one with a guaranteed convergence rate, and further, this rate is within a polynomial of the lower-bound rate we establish. Ours is also the first polynomial-time algorithm which is guaranteed to converge at all to the correct tree.
[ 299, 574, 1962, 2083, 2110, 2224 ]
Train
1,828
4
Title: Learning in Continuous Domains with Delayed Rewards Abstract: Much has been done to develop learning techniques for delayed reward problems in worlds where the actions and/or states are approximated by discrete representations. Although this is acceptable in some applications, there are many more situations where such an approximation is difficult and unnatural. For instance, in applications such as robotics, where real machines interact with the real world, learning techniques that use real-valued continuous quantities are required. Presented in this paper is an extension to Q-learning that uses both real-valued states and actions. This is achieved by introducing activation strengths for each actuator system of the robot. This allows all actuators to be active to some continuous degree simultaneously. Learning occurs by incrementally adapting both the expected future reward to goal evaluation function and the gradients of that function with respect to each actuator system.
[ 567, 2014, 2018 ]
Test
1,829
3
Title: On the Connection Between Stochastic Smoothing, Filtering and Estimation with Incomplete Data Abstract: Connections between stochastic smoothing/filtering and estimation with incomplete data are investigated. It is shown, under the right censoring scheme, that the Kaplan-Meier estimator can be characterized as a moment estimate based on a stochastic filter/smoother (a pseudo-filter/smoother). Motivated by this result, a potentially useful martingale approach for estimation and convergence with incomplete data is proposed: estimators are characterized as pseudo-stochastic smoothers (which sometimes reduce to filters), which are described by a (system of) stochastic integral equation(s); recent results in convergence of stochastic integrals and stochastic differential equations are then applied to address convergence issues. As an illustration, the double censoring problem is revisited under this framework, a closed-form estimator is proposed, and convergence properties are studied. Martingale theory plays a vital role in the entire analysis. This approach is in essence a self-consistency method.
[ 2421 ]
Validation
1,830
0
Title: Inductive CBR for Customer Support Abstract: Over the past few years, the telecommunications paradigm has been shifting rapidly from hardware to middleware. In particular, the traditional issues of service characteristics and network control are being replaced by the modern, customer-driven issues of network and service management (e.g., electronic commerce, one-stop shops). An area of service management which has extremely high visibility and negative impact when managed badly is that of problem handling. Problem handling is a very knowledge intensive activity, particularly nowadays with the increase in number and complexity of services becoming available. Trials at several BT support centres have already demonstrated the potential of case-based reasoning technology in improving current practice for problem detection and diagnosis. A major cost involved in implementing a case-based system is in the manual building of the initial case base and then in the subsequent maintenance of that case base over time. This paper shows how inductive machine learning can be combined with case-based reasoning to produce an intelligent system capable of both extracting knowledge from raw data automatically and reasoning from that knowledge. In addition to discovering knowledge in existing data repositories, the integrated system may be used to acquire and revise knowledge continually. Experiments with the suggested integrated approach demonstrate promise and justify the next step.
[ 2466, 2585 ]
Train
1,831
1
Title: Some Training Subset Selection Methods for Supervised Learning in Genetic Programming Abstract: When using the Genetic Programming (GP) Algorithm on a difficult problem with a large set of training cases, a large population size is needed and a very large number of function-tree evaluations must be carried out. This paper describes how to reduce the number of such evaluations by selecting a small subset of the training data set on which to actually carry out the GP algorithm. Three subset selection methods described in the paper are: Dynamic Subset Selection (DSS), using the current GP run to select 'difficult' and/or disused cases; Historical Subset Selection (HSS), using previous GP runs; and Random Subset Selection (RSS). GP, GP+DSS, GP+HSS, and GP+RSS are compared on a large classification problem. GP+DSS can produce better results in less than 20% of the time taken by GP. GP+HSS can nearly match the results of GP, and, perhaps surprisingly, GP+RSS can occasionally approach the results of GP. GP and GP+DSS are then compared on a smaller problem, and a hybrid Dynamic Fitness Function (DFF), based on DSS, is proposed.
[ 163, 1832, 1836 ]
Train
1,832
1
Title: Tackling the Boolean Even N Parity Problem with Genetic Programming and Limited-Error Fitness standard GP Abstract: This paper presents Limited Error Fitness (LEF), a modification to the standard supervised learning approach in Genetic Programming (GP), in which an individual's fitness score is based on how many cases remain uncovered in the ordered training set after the individual exceeds an error limit. The training set order and the error limit are both altered dynamically in response to the performance of the fittest individual in the previous generation.
[ 55, 415, 1831, 1836, 2334 ]
Train
1,833
6
Title: Pruning Decision Trees with Misclassification Costs Abstract: We describe an experimental study of pruning methods for decision tree classifiers when the goal is minimizing loss rather than error. In addition to two common methods for error minimization, CART's cost-complexity pruning and C4.5's error-based pruning, we study the extension of cost-complexity pruning to loss and one pruning variant based on the Laplace correction. We perform an empirical comparison of these methods and evaluate them with respect to loss. We found that applying the Laplace correction to estimate the probability distributions at the leaves was beneficial to all pruning methods. Unlike in error minimization, and somewhat surprisingly, performing no pruning led to results that were on par with other methods in terms of the evaluation criteria. The main advantage of pruning was in the reduction of the decision tree size, sometimes by a factor of ten. While no method dominated others on all datasets, even for the same domain different pruning mechanisms are better for different loss matrices.
[ 2367 ]
Train
1,834
1
Title: Genetic Algorithms, Tournament Selection, and the Effects of Noise Abstract: IlliGAL Report No. 95006 July 1995
[ 1905 ]
Test
1,835
6
Title: Implementation Issues in the Fourier Transform Algorithm Abstract: The Fourier transform of boolean functions has come to play an important role in proving many important learnability results. We aim to demonstrate that the Fourier transform technique is also a useful and practical algorithm, in addition to being a powerful theoretical tool. We describe the more prominent changes we have introduced to the algorithm, ones that were crucial and without which the performance of the algorithm would severely deteriorate. One of the benefits we present is the confidence level for each prediction, which measures the likelihood that the prediction is correct.
[ 2011, 2182 ]
Validation
1,836
1
Title: Small Populations over Many Generations can beat Large Populations over Few Generations in Genetic Programming Abstract: This paper looks at the use of small populations in Genetic Programming (GP), where the trend in the literature appears to be towards using as large a population as possible, which requires more memory resources and CPU-usage is less efficient. Dynamic Subset Selection (DSS) and Limited Error Fitness (LEF) are two different, adaptive variations of the standard supervised learning method used in GP. This paper compares the performance of GP, GP+DSS, and GP+LEF, on a 958 case classification problem, using a small population size of 50. A similar comparison between GP and GP+DSS is done on a larger and messier 3772 case classification problem. For both problems, GP+DSS with the small population size consistently produces a better answer using fewer tree evaluations than other runs using much larger populations. Even standard GP can be seen to perform well with the much smaller population size, indicating that it is certainly worth an exploratory run or three with a small population size before assuming that a large population size is necessary. It is an interesting notion that smaller can mean faster and better.
[ 415, 1831, 1832, 2334 ]
Train
1,837
5
Title: Data Mining and Knowledge Discovery, Adaptive Fraud Detection Abstract: One method for detecting fraud is to check for suspicious changes in user behavior. This paper describes the automatic design of user profiling methods for the purpose of fraud detection, using a series of data mining techniques. Specifically, we use a rule-learning program to uncover indicators of fraudulent behavior from a large database of customer transactions. Then the indicators are used to create a set of monitors, which profile legitimate customer behavior and indicate anomalies. Finally, the outputs of the monitors are used as features in a system that learns to combine evidence to generate high-confidence alarms. The system has been applied to the problem of detecting cellular cloning fraud based on a database of call records. Experiments indicate that this automatic approach performs better than hand-crafted methods for detecting fraud. Furthermore, this approach can adapt to the changing conditions typical of fraud detection environments.
[ 382, 2132 ]
Validation
1,838
3
Title: Learning in neural networks with Bayesian prototypes Abstract: Given a set of samples of a probability distribution on a set of discrete random variables, we study the problem of constructing a good approximative neural network model of the underlying probability distribution. Our approach is based on an unsupervised learning scheme where the samples are first divided into separate clusters, and each cluster is then coded as a single vector. These Bayesian prototype vectors consist of conditional probabilities representing the attribute-value distribution inside the corresponding cluster. Using these prototype vectors, it is possible to model the underlying joint probability distribution as a simple Bayesian network (a tree), which can be realized as a feedforward neural network capable of probabilistic reasoning. In this framework, learning means choosing the size of the prototype set, partitioning the samples into the corresponding clusters, and constructing the cluster prototypes. We describe how the prototypes can be determined, given a partition of the samples, and present a method for evaluating the likelihood of the corresponding Bayesian tree. We also present a greedy heuristic for searching through the space of different partition schemes with different numbers of clusters, aiming at an optimal approximation of the probability distribution.
[ 485, 719, 1908, 2380, 2514 ]
Test
1,839
1
Title: Context Preserving Crossover in Genetic Programming. Abstract: This paper introduces two new crossover operators for Genetic Programming (GP). Contrary to the regular GP crossover, the operators presented attempt to preserve the context in which subtrees appeared in the parent trees. A simple coordinate scheme for nodes in an S-expression tree is proposed, and crossovers are only allowed between nodes with exactly or partially matching coordinates.
[ 290, 2688, 2705 ]
Validation
1,840
1
Title: Hierarchical Genetic Programming (HGP) extensions discover, modify, and exploit subroutines to accelerate the evolution of Abstract: A fundamental problem in learning from observation and interaction with an environment is defining a good representation, that is a representation which captures the underlying structure and functionality of the domain. This chapter discusses an extension of the genetic programming (GP) paradigm based on the idea that subroutines obtained from blocks of good representations act as building blocks and may enable a faster evolution of even better representations. This GP extension algorithm is called adaptive representation through learning (ARL). It has built-in mechanisms for (1) creation of new subroutines through discovery and generalization of blocks of code; (2) deletion of subroutines. The set of evolved subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL was successfully tested on the problem of controlling an agent in a dynamic and non-deterministic environment. Results with the automatic discovery of subroutines show the potential to better scale up the GP technique to complex problems. While HGP approaches improve the efficiency and scalability of genetic programming (GP) for many applications [Koza, 1994b], several issues remain unresolved. The scalability of HGP techniques could be further improved by solving two such issues. One is the characterization of the value of subroutines. Current methods for HGP do not attempt to decide what is relevant, i.e. which blocks of code or subroutines may be worth giving special attention, but employ genetic operations on subroutines at random points. The other issue is the time-course of the generation of new subroutines. Current HGP techniques do not make informed choices to automatically decide when creation or modification of subroutines is advantageous or necessary. 
The Adaptive Representation through Learning (ARL) algorithm copes with both of these problems. The what issue is addressed by relying on local measures such as parent-offspring differential fitness and block activation in order to discover useful subroutines and by learning which subroutines are useful. The when issue is addressed by relying on global population measures such as population entropy in order to predict when search reaches local optima and escape them. ARL co-evolves a set of subroutines which extends the set of problem primitives.
[ 2688, 2705 ]
Train
1,841
4
Title: Learning Without State-Estimation in Partially Observable Markovian Decision Processes Abstract: Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately, all of the theory and much of the practice (see Barto et al., 1983, for an exception) of RL is limited to Markovian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incompletely known to the learning agent. In this paper we consider only partially observable MDPs (POMDPs), a useful class of non-Markovian decision processes. Most previous approaches to such problems have combined computationally expensive state-estimation techniques with learning control. This paper investigates learning in POMDPs without resorting to any form of state estimation. We present results about what TD(0) and Q-learning will do when applied to POMDPs. It is shown that the conventional discounted RL framework is inadequate to deal with POMDPs. Finally we develop a new framework for learning without state-estimation in POMDPs by including stochastic policies in the search space, and by defining the value or utility of a distribution over states.
[ 564, 1741 ]
Train
1,842
3
Title: Fall Diagnosis using Dynamic Belief Networks Abstract: The task is to monitor walking patterns and give early warning of falls using foot switch and mercury trigger sensors. We describe a dynamic belief network model for fall diagnosis which, given evidence from sensor observations, outputs beliefs about the current walking status and makes predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored.
[ 1268, 1757, 2341 ]
Train
1,843
0
Title: Goal-based Explanation Evaluation 1 Abstract: 1 I would like to thank my dissertation advisor, Roger Schank, for his very valuable guidance on this research, and to thank the Cognitive Science reviewers for their helpful comments on a draft of this paper. The research described here was conducted primarily at Yale University, supported in part by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N0014-85-K-0108 and by the Air Force Office of Scientific Research under contract F49620-88-C-0058.
[ 2626 ]
Validation
1,844
4
Title: Two Methods for Hierarchy Learning in Reinforcement Environments Abstract: This paper describes two methods for hierarchically organizing temporal behaviors. The first is more intuitive: grouping together common sequences of events into single units so that they may be treated as individual behaviors. This system immediately encounters problems, however, because the units are binary, meaning the behaviors must execute completely or not at all, and this hinders the construction of good training algorithms. The system also runs into difficulty when more than one unit is (or should be) active at the same time. The second system is a hierarchy of transition values. This hierarchy dynamically modifies the values that specify the degree to which one unit should follow another. These values are continuous, allowing the use of gradient descent during learning. Furthermore, many units are active at the same time as part of the system's normal functioning.
[ 1845, 1979 ]
Test
1,845
4
Title: ON LEARNING HOW TO LEARN LEARNING STRATEGIES Abstract: This paper introduces the "incremental self-improvement paradigm". Unlike previous methods, incremental self-improvement encourages a reinforcement learning system to improve the way it learns, and to improve the way it improves the way it learns ..., without significant theoretical limitations; the system is able to "shift its inductive bias" in a universal way. Its major features are: (1) There is no explicit difference between "learning", "meta-learning", and other kinds of information processing. Using a Turing machine equivalent programming language, the system itself occasionally executes self-delimiting, initially highly random "self-modification programs" which modify the context-dependent probabilities of future action sequences (including future self-modification programs). (2) The system keeps only those probability modifications computed by "useful" self-modification programs: those which bring about more payoff (reward, reinforcement) per time than all previous self-modification programs. (3) The computation of payoff per time takes into account all the computation time required for learning; the entire system life is considered: boundaries between learning trials are ignored (if there are any). A particular implementation based on the novel paradigm is presented. It is designed to exploit what conventional digital machines are good at: fast storage addressing, arithmetic operations, etc. Experiments illustrate the system's mode of operation. Keywords: Self-improvement, self-reference, introspection, machine-learning, reinforcement learning. Note: This is the revised and extended version of an earlier report from November 24, 1994.
[ 68, 979, 1844, 1979 ]
Train
1,846
2
Title: A Neural Network Architecture for High-Speed Database Query Processing Abstract: Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly used in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches.
[ 1847, 1927, 2537 ]
Train
1,847
2
Title: A Neural Network Architecture for Syntax Analysis Abstract: Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly used in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches.
[ 1846, 1927 ]
Test
1,848
6
Title: On the Power of Equivalence Queries Abstract: In 1990, Angluin showed that no class exhibiting a combinatorial property called "approximate fingerprints" can be identified exactly using polynomially many Equivalence queries (of polynomial size). Here we show that this is a necessary condition: every class without approximate fingerprints has an identification strategy that makes a polynomial number of Equivalence queries. Furthermore, if the class is "honest" in a technical sense, the computational power required by the strategy is within the polynomial-time hierarchy, so proving non-learnability is at least as hard as showing P ≠ NP.
[ 2483 ]
Train
1,849
5
Title: Profile-Driven Instruction Level Parallel Scheduling with Application to Super Blocks Abstract: Code scheduling to exploit instruction level parallelism (ILP) is a critical problem in compiler optimization research, in light of the increased use of long-instruction-word machines. Unfortunately, optimum scheduling is computationally intractable, and one must resort to carefully crafted heuristics in practice. If the scope of application of a scheduling heuristic is limited to basic blocks, considerable performance loss may be incurred at block boundaries. To overcome this obstacle, basic blocks can be coalesced across branches to form larger regions such as super blocks. In the literature, these regions are typically scheduled using algorithms that are either oblivious to profile information (under the assumption that the process of forming the region has fully utilized the profile information), or use the profile information as an addendum to classical scheduling techniques. We believe that even for the simple case of linear code regions such as super blocks, additional performance improvement can be gained by utilizing the profile information in scheduling as well. We propose a general paradigm for converting any profile-insensitive list scheduler to a profile-sensitive scheduler. Our technique is developed via a theoretical analysis of a simplified abstract model of the general problem of profile-driven scheduling over any acyclic code region, yielding a scoring measure for ranking branch instructions. The ranking digests the profile information and has the useful property that scheduling with respect to rank is provably good for minimizing the expected completion time of the region, within the limits of the abstraction. While the ranking scheme is computationally intractable in the most general case, it is practicable for super blocks and suggests the heuristic that we present in this paper for profile-driven scheduling of super blocks.
Experiments show that our heuristic offers substantial performance improvement over prior methods on a range of integer benchmarks and several machine models.
[ 2163 ]
Test
1,850
1
Title: Genetic Programming for Pedestrians Abstract: We propose an extension to the Genetic Programming paradigm which allows users of traditional Genetic Algorithms to evolve computer programs. To this end, we have to introduce mechanisms like transcription, editing and repairing into Genetic Programming. We demonstrate the feasibility of the approach by using it to develop programs for the prediction of sequences of integer numbers.
[ 163, 2554 ]
Validation
1,851
2
Title: Faster Learning in Multi-Layer Networks by Handling Abstract: Generalized delta rule, popularly known as back-propagation (BP) [9, 5], is probably one of the most widely used procedures for training multi-layer feed-forward networks of sigmoid units. Despite reports of success on a number of interesting problems, BP can be excruciatingly slow in converging on a set of weights that meet the desired error criterion. Several modifications for improving the learning speed have been proposed in the literature [2, 4, 8, 1, 6]. BP is known to suffer from the phenomenon of flat spots [2]. The slowness of BP is a direct consequence of these flat spots together with the formulation of the BP learning rule. This paper proposes a new approach to minimizing the error that is suggested by the mathematical properties of the conventional error function and that effectively handles flat spots occurring in the output layer. The robustness of the proposed technique is demonstrated on a number of data-sets widely studied in the machine learning community.
[ 503, 1896 ]
Validation
1,852
3
Title: On Sequential Simulation-Based Methods for Bayesian Filtering Abstract: In this report, we present an overview of sequential simulation-based methods for Bayesian filtering of nonlinear and non-Gaussian dynamic models. It includes in a general framework numerous methods proposed independently in various areas of science and proposes some original developments.
[ 99, 2592 ]
Train
1,853
6
Title: Construction of Phylogenetic Trees, Science, 99-113; Fitting the Gene Lineage Into Its Species Lineage Abstract: Farach, M. and Thorup, M. 1993. Fast Comparison of Evolutionary Trees, Technical Report 93-46, DIMACS, Rutgers University, Piscataway, NJ.
[ 1861, 2320 ]
Train
1,854
0
Title: Case Retrieval Nets: Basic Ideas and Extensions Abstract: An efficient retrieval of a relatively small number of relevant cases from a huge case base is a crucial subtask of Case-Based Reasoning. In this article, we present Case Retrieval Nets (CRNs), a memory model that has recently been developed for this task. The main idea is to apply a spreading activation process to a net-like case memory in order to retrieve cases being similar to a posed query case. We summarize the basic ideas of CRNs, suggest some useful extensions, and present some initial experimental results which suggest that CRNs can successfully handle case bases larger than considered usually in the CBR community.
[ 75, 1855, 1864, 1976, 2048, 2122, 2299, 2482, 2645 ]
Train
1,855
0
Title: Applying Case Retrieval Nets to Diagnostic Tasks in Technical Domains Abstract: This paper presents Objectdirected Case Retrieval Nets, a memory model developed for an application of Case-Based Reasoning to the task of technical diagnosis. The key idea is to store cases, i.e. observed symptoms and diagnoses, in a network and to enhance this network with an object model encoding knowledge about the devices in the application domain.
[ 75, 1854, 1864, 2075, 2122, 2299, 2482 ]
Train
1,856
3
Title: Identifiability, Improper Priors and Gibbs Sampling for Generalized Linear Models Abstract: Alan E. Gelfand is a Professor in the Department of Statistics at the University of Connecticut, Storrs, CT 06269. Sujit K. Sahu is a Lecturer at the School of Mathematics, University of Wales, Cardiff, CF2 4YH, UK. The research of the first author was supported in part by NSF grant DMS 9301316, while the second author was supported in part by an EPSRC grant from the UK. The authors thank Brad Carlin, Kate Cowles, Gareth Roberts and an anonymous referee for valuable comments.
[ 2421 ]
Train
1,857
3
Title: Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification Abstract: Technical Report No. 9702, Department of Statistics, University of Toronto Abstract. Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models for classification applications can be implemented by sampling as well for latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
[ 125, 160, 2020, 2540, 2681 ]
Train
1,858
6
Title: NP-Completeness of Minimum Rule Sets Abstract: Rule induction systems seek to generate rule sets which are optimal in the complexity of the rule set. This paper develops a formal proof of the NP-Completeness of the problem of generating the simplest rule set (MIN RS) which accurately predicts examples in the training set for a particular type of generalization algorithm and complexity measure. The proof is then informally extended to cover a broader spectrum of complexity measures and learning algorithms.
[ 2481, 2528 ]
Train
1,859
4
Title: Self-Improving Factory Simulation using Continuous-time Average-Reward Reinforcement Learning Abstract: Many factory optimization problems, from inventory control to scheduling and reliability, can be formulated as continuous-time Markov decision processes. A primary goal in such problems is to find a gain-optimal policy that minimizes the long-run average cost. This paper describes a new average-reward algorithm called SMART for finding gain-optimal policies in continuous time semi-Markov decision processes. The paper presents a detailed experimental study of SMART on a large unreliable production inventory problem. SMART outperforms two well-known reliability heuristics from industrial engineering. A key feature of this study is the integration of the reinforcement learning algorithm directly into two commercial discrete-event simulation packages, ARENA and CSIM, paving the way for this approach to be applied to many other factory optimization problems for which there already exist simulation models.
[ 471, 548, 554, 565, 621, 1791 ]
Train
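The SMART update described in this abstract can be illustrated in a few lines (a minimal sketch, not the authors' implementation; the learning rate `alpha` and the `update` interface are illustrative assumptions):

```python
from collections import defaultdict

def make_smart_agent(actions, alpha=0.1):
    """Minimal SMART-style learner for a continuous-time (semi-Markov)
    decision process.  Q maps (state, action) -> value; rho estimates the
    long-run average reward rate as cumulative reward over cumulative
    sojourn time, accumulated only on greedy (non-exploratory) steps."""
    Q = defaultdict(float)
    stats = {"reward": 0.0, "time": 0.0, "rho": 0.0}

    def update(s, a, reward, sojourn, s_next, greedy):
        best_next = max(Q[(s_next, b)] for b in actions)
        # Average-reward TD target: reward earned over the sojourn,
        # net of what the current average rate rho would have earned.
        target = reward - stats["rho"] * sojourn + best_next
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        if greedy:
            stats["reward"] += reward
            stats["time"] += sojourn
            stats["rho"] = stats["reward"] / stats["time"]

    return Q, stats, update
```

A gain-optimal policy then acts greedily with respect to Q, and rho tracks the average reward of the policy being followed.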
1,860
0
Title: Efficient Locally Weighted Polynomial Regression Predictions Abstract: Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few thousand datapoints the computational expense of predictions is daunting. We discuss drawbacks with previous approaches to dealing with this problem, and present a new algorithm based on a multiresolution search of a quickly-constructible augmented kd-tree. Without needing to rebuild the tree, we can make fast predictions with arbitrary local weighting functions, arbitrary kernel widths and arbitrary queries. The paper begins with a new, faster algorithm for exact LWPR predictions. Next we introduce an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy losses. Increasing a certain approximation parameter achieves greater speedups still, but with a correspondingly larger accuracy degradation. This is nevertheless useful during operations such as the early stages of model selection and locating optima of a fitted surface. We also show how the approximations can permit real-time query-specific optimization of the kernel width. We conclude with a brief discussion of potential extensions for tractable instance-based learning on datasets that are too large to fit in a computer's main memory.
[ 548, 683, 906, 2428, 2430, 2658 ]
Train
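The exact prediction that the paper's kd-tree methods accelerate can be sketched as a dense O(n) baseline (a sketch only; the Gaussian kernel, bandwidth `h`, and degree-1 polynomial are illustrative assumptions):

```python
import numpy as np

def lwr_predict(X, y, query, h=1.0):
    """Exact locally weighted linear regression at a single query point:
    weight every training case by a Gaussian kernel on its distance to
    the query, then solve the weighted least-squares problem.  This is
    the O(n) computation that tree-based approximations speed up."""
    d = np.linalg.norm(X - query, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)
    A = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    WA = A * w[:, None]                        # row-weighted design matrix
    beta = np.linalg.solve(A.T @ WA, A.T @ (w * y))
    return np.append(query, 1.0) @ beta
```

On exactly linear data the weighted fit recovers the line for any positive weights, so the kernel width only matters once the target is non-linear.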
1,861
6
Title: A Six-Point Condition for Ordinal Matrices keywords: additive, algorithm, evolution, ordinal, phylogeny Abstract: Ordinal assertions in an evolutionary context are of the form "species s is more similar to species x than to species y" and can be deduced from a distance matrix M of interspecies dissimilarities (M[s; x] < M[s; y]). Given species x and y, the ordinal binary character c_xy of M is defined by c_xy(s) = 1 if and only if M[s; x] < M[s; y], for all species s. In this paper we present several results concerning the inference of evolutionary trees or phylogenies from ordinal assertions. In particular, we present: a six-point condition that characterizes those distance matrices whose ordinal binary characters are pairwise compatible (this characterization is analogous to the four-point condition for additive matrices); an optimal O(n^2) algorithm, where n is the number of species, for recovering a phylogeny that realizes the ordinal binary characters of a distance matrix that satisfies the six-point condition; and an NP-completeness result on determining if there is a phylogeny that realizes k or more of the ordinal binary characters of a given distance matrix.
[ 1853 ]
Train
1,862
6
Title: Continuous-valued Xof-N Attributes Versus Nominal Xof-N Attributes for Constructive Induction: A Case Study Abstract: An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. In this paper, we explore the characteristics and performance of continuous-valued Xof-N attributes versus nominal Xof-N attributes for constructive induction. Nominal Xof-Ns are more representationally powerful than continuous-valued Xof-Ns, but the former suffer the "fragmentation" problem, although some mechanisms such as subsetting can help to solve the problem. Two approaches to constructive induction using continuous-valued Xof-Ns are described. Continuous-valued Xof-Ns perform better than nominal ones on domains that need Xof-Ns with only one cut point. On domains that need Xof-N representations with more than one cut point, nominal Xof-Ns perform better than continuous-valued ones. Experimental results on a set of artificial and real-world domains support these statements.
[ 1595, 1644, 1863, 1964, 2675 ]
Test
1,863
6
Title: Effects of Different Types of New Attribute on Constructive Induction Abstract: This paper studies the effects on decision tree learning of constructing four types of attribute (conjunctive, disjunctive, Mof-N, and Xof-N representations). To reduce effects of other factors such as tree learning methods, new attribute search strategies, evaluation functions, and stopping criteria, a single tree learning algorithm is developed. With different option settings, it can construct four different types of new attribute, but all other factors are fixed. The study reveals that conjunctive and disjunctive representations have very similar performance in terms of prediction accuracy and theory complexity on a variety of concepts. Moreover, the study demonstrates that the stronger representation power of Mof-N than conjunction and disjunction and the stronger representation power of Xof-N than these three types of new attribute can be reflected in the performance of decision tree learning.
[ 1595, 1644, 1862, 1964 ]
Test
1,864
0
Title: An Investigation of Marker-Passing Algorithms for Analogue Retrieval Abstract: If analogy and case-based reasoning systems are to scale up to very large case bases, it is important to analyze the various methods used for retrieving analogues to identify the features of the problem for which they are appropriate. This paper reports on one such analysis, a comparison of retrieval by marker passing or spreading activation in a semantic network with Knowledge-Directed Spreading Activation, a method developed to be well-suited for retrieving semantically distant analogues from a large knowledge base. The analysis has two complementary components: (1) a theoretical model of the retrieval time based on a number of problem characteristics, and (2) experiments showing how the retrieval time of the approaches varies with the knowledge base size. These two components, taken together, suggest that KDSA is more likely than SA to be able to scale up to retrieval in large knowledge bases.
[ 1854, 1855, 2122, 2276 ]
Train
1,865
2
Title: DNA Sequence Classification Using Compression-Based Induction Abstract: DIMACS Technical Report 95-04 April 1995
[ 2107 ]
Train
1,866
2
Title: A Model of Rapid Memory Formation in the Hippocampal System Abstract: Our ability to remember events and situations in our daily life demonstrates our ability to rapidly acquire new memories. There is a broad consensus that the hippocampal system (HS) plays a critical role in the formation and retrieval of such memories. A computational model is described that demonstrates how the HS may rapidly transform a transient pattern of activity representing an event or a situation into a persistent structural encoding via long-term potentiation and long-term depression.
[ 2272 ]
Train
1,867
2
Title: A comparison of neural net and conventional techniques for lighting control Abstract: We compare two techniques for lighting control in an actual room equipped with seven banks of lights and photoresistors to detect the lighting level at four sensing points. Each bank of lights can be independently set to one of sixteen intensity levels. The task is to determine the device intensity levels that achieve a particular configuration of sensor readings. One technique we explored uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique we examined uses a conventional feedback control loop. The neural network approach appears superior both in that it does not require experimentation on the fly (and hence fluctuating light intensity levels during settling, and lengthy settling times) and in that it can deal with complex interactions that conventional control techniques do not handle well. This comparison was performed as part of the "Adaptive House" project, which is described briefly. Further directions for control in the
[ 1718, 1754 ]
Train
1,868
3
Title: Convergence in Norm for Alternating Expectation-Maximization (EM) Type Algorithms Abstract: We provide a sufficient condition for convergence of a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng and Rubin's ECM and Fessler and Hero's SAGE. The condition for monotone convergence can be used to establish norms under which the distance between successive iterates and the limit point of the EM-type algorithm approaches zero monotonically. For illustration, we apply our results to estimation of Poisson rate parameters in emission tomography and establish that in the final iterations the logarithm of the EM iterates converge monotonically in a weighted Euclidean norm.
[ 2421 ]
Test
1,869
2
Title: Refining PID Controllers using Neural Networks Abstract: The Kbann approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. We extend this idea further by presenting the Manncon algorithm by which the mathematical equations governing a PID controller determine the topology and initial weights of a network, which is further trained using backpropagation. We apply this method to the task of controlling the outflow and temperature of a water tank, producing statistically-significant gains in accuracy over both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of the network produces statistically less variation in testset accuracy when compared to networks initialized with small random numbers.
[ 1754, 2409 ]
Train
1,870
3
Title: Von Mises type statistics for single site updated local interaction random fields Abstract: Random field models in image analysis and spatial statistics usually have local interactions. They can be simulated by Markov chains which update a single site at a time. The updating rules typically condition on only a few neighboring sites. If we want to approximate the expectation of a bounded function, can we make better use of the simulations than through the empirical estimator? We describe symmetrizations of the empirical estimator which are computationally feasible and can lead to considerable variance reduction. The method is reminiscent of the idea behind generalized von Mises statistics. To simplify the exposition, we consider mainly nearest neighbor random fields and the Gibbs sampler.
[ 1713, 2362 ]
Validation
1,871
2
Title: 3D Object Recognition Using Unsupervised Feature Extraction Abstract: Intrator (1990) proposed a feature extraction method that is related to recent statistical theory (Huber, 1985; Friedman, 1987), and is based on a biologically motivated model of neuronal plasticity (Bienenstock et al., 1982). This method has been recently applied to feature extraction in the context of recognizing 3D objects from single 2D views (Intrator and Gold, 1991). Here we describe experiments designed to analyze the nature of the extracted features, and their relevance to the theory and psychophysics of object recognition.
[ 359, 2499 ]
Test
1,872
1
Title: Modeling Building-Block Interdependency Dynamical and Evolutionary Machine Organization Group Abstract: The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests.
[ 163, 1257, 1696, 1771 ]
Train
1,873
2
Title: Achieving Super Computer Performance with a DSP Array Processor Abstract: The MUSIC system (MUlti Signal processor system with Intelligent Communication) is a parallel distributed memory architecture based on digital signal processors (DSP). A system with 60 processor elements is operational. It has a peak performance of 3.8 GFlops, an electrical power consumption of less than 800 W (including forced air cooling) and fits into a 19" rack. Two applications (the back-propagation algorithm for neural net learning and molecular dynamics simulations) run about 6 times faster than on a CRAY Y-MP and 2 times faster than on a NEC SX-3. A sustained performance of more than 1 GFlops is reached. The selling price of such a system would be in the range of about 300'000 US$.
[ 1998 ]
Validation
1,874
2
Title: DYNAMICAL BEHAVIOR OF ARTIFICIAL NEURAL NETWORKS WITH RANDOM WEIGHTS Abstract: In this paper we report a Monte Carlo study of the dynamics of large untrained, feedforward neural networks with randomly chosen weights and feedback. The analysis consists of looking at the percentage of systems that exhibit chaos, the distribution of largest Lyapunov exponents, and the distribution of correlation dimensions. As the systems become more complex (increasing inputs and neurons), the probability of chaos approaches unity. The correlation dimension is typically much smaller than the system dimension.
[ 1920 ]
Validation
1,875
6
Title: On the Effect of Analog Noise in Discrete-Time Analog Computations Abstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the
[ 407, 1891, 2439, 2553 ]
Validation
1,876
3
Title: An Improved Model for Spatially Correlated Binary Responses Abstract: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates).
[ 2634 ]
Train
1,877
0
Title: Learning High Utility Rules by Incorporating Search Control Guidance Committee Abstract: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates).
[ 251, 414, 578, 717, 2215 ]
Test
1,878
1
Title: Evolving Deterministic Finite Automata Using Cellular Encoding Abstract: This paper presents a method for the evolution of deterministic finite automata using genetic programming and cellular encoding. Programs are evolved that ... initial single-state zygote. The results ...
[ 2571, 2624 ]
Train
1,879
2
Title: Rochester Connectionist Simulator Abstract: Specifying, constructing and simulating structured connectionist networks requires significant programming effort. System tools can greatly reduce the effort required, and by providing a conceptual structure within which to work, make large and complex network simulations possible. The Rochester Connectionist Simulator is a system tool designed to aid specification, construction and simulation of connectionist networks. This report describes this tool in detail: the facilities provided and how to use them, as well as details of the implementation. Through this we hope not only to make designing and verifying connectionist networks easier, but also to encourage the development and refinement of connectionist research tools themselves.
[ 763, 1760, 2355 ]
Test
1,880
1
Title: On the relationship between distributed group-behaviour and the behavioural complexity of individuals Abstract:
[ 2302 ]
Train
1,881
5
Title: Integrity Constraints in ILP using a Monte Carlo approach Abstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely inductive program synthesis, where relatively small and sparse example sets are a more realistic scenario. Integrity constraints are first-order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy theorem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by random generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence, programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples.
[ 344, 2449, 2450 ]
Train
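The Monte Carlo consistency check can be sketched with a propositional stand-in for the logic program (the predicate interface and the toy over-general `even` definition are illustrative assumptions, not the paper's system):

```python
import random

def monte_carlo_check(program, constraint, sample_query, n_trials=1000, seed=0):
    """Probe the induced program with randomly generated ground queries:
    any query the program proves but the integrity constraint forbids is
    a witnessed inconsistency.  Finding none after n_trials is only
    probabilistic evidence of consistency, which is the trade-off the
    method accepts to avoid full theorem-proving."""
    rng = random.Random(seed)
    violations = []
    for _ in range(n_trials):
        q = sample_query(rng)              # random ground query
        if program(q) and not constraint(q):
            violations.append(q)           # proved but forbidden
    return violations
```

An over-general hypothesis is typically caught after only a handful of random queries, which is why running times stay close to those with ground negative examples.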
1,882
0
Title: Symposium Title: Tutorial Discourse What Makes Human Explanations Effective? Abstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely inductive program synthesis, where relatively small and sparse example sets are a more realistic scenario. Integrity constraints are first-order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy theorem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by random generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence, programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples.
[ 1989 ]
Train
1,883
1
Title: A Trade Network Game With Endogenous Partner Selection 1 Abstract: This paper develops an evolutionary trade network game (TNG) that combines evolutionary game play with endogenous partner selection. Successive generations of resource-constrained buyers and sellers choose and refuse trade partners on the basis of continually updated expected payoffs. Trade partner selection takes place in accordance with a modified Gale-Shapley matching mechanism, and trades are implemented using trade strategies evolved via a standardly specified genetic algorithm. The trade partnerships resulting from the matching mechanism are shown to be core stable and Pareto optimal in each successive trade cycle. Nevertheless, computer experiments suggest that these static optimality properties may be inadequate measures of optimality from an evolutionary perspective.
[ 2177 ]
Test
1,884
2
Title: Using Neural Networks to Identify Jets Abstract: A neural network method for identifying the ancestor of a hadron jet is presented. The idea is to find an efficient mapping between certain observed hadronic kinematical variables and the quark/gluon identity. This is done with a neuronic expansion in terms of a network of sigmoidal functions using a gradient descent procedure, where the errors are back-propagated through the network. With this method we are able to separate gluon from quark jets originating from Monte Carlo generated e+e- events with ~85% accuracy. The result is independent of the MC model used. This approach for isolating the gluon jet is then used to study the so-called string effect. In addition, heavy quarks (b and c) in e+e- reactions can be identified at the 50% level by just observing the hadrons. In particular we are able to separate b-quarks with an efficiency and purity which is comparable with what is expected from vertex detectors. We also speculate on how the neural network method can be used to disentangle different hadronization schemes by compressing the dimensionality of the state space of hadrons.
[ 745, 1885, 1886, 1902 ]
Train
1,885
2
Title: LU TP 90-3 Finding Gluon Jets with a Neural Trigger Abstract: Using a neural network classifier we are able to separate gluon from quark jets originating from Monte Carlo generated e+e- events with 85-90% accuracy. PACS numbers: 13.65.+i, 12.38.Qk, 13.87.Fh
[ 745, 1884, 1886, 1902 ]
Train
1,886
2
Title: LU TP 91-4 Self-organizing Networks for Extracting Jet Features Abstract: Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organizing networks is in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics: hadronization model discrimination and separation of b, c and light quarks.
[ 745, 1884, 1885, 1902 ]
Validation
1,887
2
Title: LU TP 93-13 On Langevin Updating in Multilayer Perceptrons Abstract: The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, that often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect.
[ 1902, 2258 ]
Validation
1,888
6
Title: Approximating Hyper-Rectangles: Learning and Pseudo-random Sets Abstract: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of approximating high-dimensional rectangles that arise in PAC learning and pseudorandomness.
[ 109, 507, 2427 ]
Train
1,889
2
Title: The Functional Transfer of Knowledge for Coronary Artery Disease Diagnosis Abstract: A distinction between two forms of task knowledge transfer, representational and functional, is reviewed, followed by a discussion of eta-MTL, a modified version of the multiple task learning (MTL) neural network method of functional transfer. The eta-MTL method employs a separate learning rate, eta_k, for each task output node k. eta_k varies as a function of a measure of relatedness, R_k, between the kth task and the primary task of interest. An eta-MTL network is applied to a diagnostic domain of four levels of coronary artery disease. Results of experiments demonstrate the ability of eta-MTL to develop a predictive model for one level of disease which has superior diagnostic ability over models produced by either single task learning or standard multiple task learning.
[ 562, 730, 2586, 2648 ]
Test
1,890
1
Title: Genetic Algorithms for Adaptive Planning of Path and Trajectory of a Mobile Robot in 2D Terrains Abstract: This paper proposes genetic algorithms (GAs) for path planning and trajectory planning of an autonomous mobile robot. Our GA-based approach has the advantage of adaptivity: the GAs work even if the environment is time-varying or unknown. Therefore, it is suitable for both off-line and on-line motion planning. We first present a GA for path planning in a 2D terrain. Simulation results on the performance and adaptivity of the GA on randomly generated terrains are shown. Then, we discuss extensions of the GA for solving both path planning and trajectory planning simultaneously.
[ 163, 1060, 2039 ]
Validation
1,891
2
Title: Vapnik-Chervonenkis Dimension of Recurrent Neural Networks Abstract: Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k >> w. Ignoring multiplicative constants, the main results say roughly the following: * For architectures with activation = any fixed nonlinear polynomial, the VC dimension is proportional to wk. * For architectures with activation = any fixed piecewise polynomial, the VC dimension is between wk and w^2 k. * For architectures with activation = H (threshold nets), the VC dimension is between w log(k/w) and min{wk log(wk), w^2 + w log(wk)}. * For the standard sigmoid sigma(x) = 1/(1 + e^-x), the VC dimension is between wk and w^4 k^2. An earlier version of this paper has appeared in Proc. 3rd European Workshop on Computational Learning Theory, LNCS 1208, pages 223-237, Springer, 1997.
[ 58, 200, 206, 411, 1149, 1774, 1875 ]
Validation
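The bounds quoted in this abstract can be written out as display math (a reconstruction of the notation from the prose; multiplicative constants are suppressed as in the paper, with "between a and b" rendered as a pair of inequalities up to constants):

```latex
\begin{aligned}
\sigma &\text{ a fixed nonlinear polynomial:} & \mathrm{VCdim} &\asymp wk\\
\sigma &\text{ a fixed piecewise polynomial:} & wk \;\lesssim\; \mathrm{VCdim} &\;\lesssim\; w^{2}k\\
\sigma &= H \text{ (threshold):} & w\log(k/w) \;\lesssim\; \mathrm{VCdim} &\;\lesssim\; \min\{wk\log wk,\; w^{2}+w\log wk\}\\
\sigma(x) &= \frac{1}{1+e^{-x}} \text{ (sigmoid):} & wk \;\lesssim\; \mathrm{VCdim} &\;\lesssim\; w^{4}k^{2}
\end{aligned}
```

Here w is the number of weights and k the input-sequence length, so unlike the feedforward case every bound grows with k.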
1,892
0
Title: Abstraction and Decomposition in Hillclimbing Design Optimization Abstract: The performance of hillclimbing design optimization can be improved by abstraction and decomposition of the design space. Methods for automatically finding and exploiting such abstractions and decompositions are presented in this paper. A technique called "Operator Importance Analysis" finds useful abstractions. It does so by determining which of a given set of operators are the most important for a given class of design problems. Hillclimbing search runs faster when performed using this this smaller set of operators. A technique called "Operator Interaction Analysis" finds useful decompositions. It does so by measuring the pairwise interaction between operators. It uses such measurements to form an ordered partition of the operator set. This partition can then be used in a "hierarchic" hillclimbing algorithm which runs faster than ordinary hillclimbing with an unstructured operator set. We have implemented both techniques and tested them in the domain of racing yacht hull design. Our experimental results show that these two methods can produce substantial speedups with little or no loss in quality of the resulting designs.
[ 2131 ]
Validation
1,893
2
Title: Learning NonLinearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks Abstract: This paper investigates an algorithm for the construction of decision trees comprised of linear threshold units and also presents a novel algorithm for the learning of non-linearly separable boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard BackPropagation on a sample problem in which many irrelevant attributes are introduced. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on nonoptimal (larger than necessary) networks is also explored.
[ 102, 1895, 1908 ]
Train
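Littlestone's Winnow, cited above for coping with many irrelevant attributes, can be sketched in its basic form (a minimal sketch; the fixed pass count and the textbook choices of threshold n and update factor 2 are assumptions here, not taken from the paper):

```python
def winnow_train(examples, n_features, factor=2.0, passes=20):
    """Basic Winnow: weights start at 1, threshold at n_features.
    Updates are mistake-driven and multiplicative: promote active
    weights on a missed positive, demote them on a false positive.
    Weights of irrelevant attributes are driven down exponentially
    fast, which is the property the paper exploits."""
    w = [1.0] * n_features
    theta = float(n_features)
    for _ in range(passes):
        for x, label in examples:          # x is a 0/1 feature vector
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if pred == 0 and label == 1:   # missed positive: promote
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
            elif pred == 1 and label == 0: # false positive: demote
                w = [wi / factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta
```

On a disjunction of two out of six attributes, for instance, the two relevant weights are promoted until the target is separated while the four irrelevant ones stay put or shrink.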