| node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, 13-124k chars) | neighbors (list, 0-3.32k entries) | mask (4 classes) |
|---|---|---|---|---|
1,094 | 2 | Title: Plasticity in cortical neuron properties: Modeling the effects of an NMDA antagonist and a GABA agonist
Abstract: Infusion of a GABA agonist (Reiter & Stryker, 1988) and infusion of an NMDA receptor antagonist (Bear et al., 1990), in the primary visual cortex of kittens during monocular deprivation, shifts ocular dominance toward the closed eye, in the cortical region near the infusion site. This reverse ocular dominance shift has been previously modeled by variants of a covariance synaptic plasticity rule (Bear et al., 1990; Clothiaux et al., 1991; Miller et al., 1989; Reiter & Stryker, 1988). Kasamatsu et al. (1997, 1998) showed that infusion of an NMDA receptor antagonist in adult cat primary visual cortex changes ocular dominance distribution, reduces binocularity, and reduces orientation and direction selectivity. This paper presents a novel account of the effects of these pharmacological treatments, based on the EXIN synaptic plasticity rules (Marshall, 1995), which include both an instar afferent excitatory and an outstar lateral inhibitory rule. Functionally, the EXIN plasticity rules enhance the efficiency, discrimination, and context-sensitivity of a neural network's representation of perceptual patterns (Marshall, 1995; Marshall & Gupta, 1998). The EXIN model decreases lateral inhibition from neurons outside the infusion site (control regions) to neurons inside the infusion region, during monocular deprivation. In the model, plasticity in afferent pathways to neurons affected by the pharmacological treatments is assumed to be blocked, as opposed to previous models (Bear et al., 1990; Miller et al., 1989; Reiter & Stryker, 1988), in which afferent pathways from the open eye to neurons in the infusion region are weakened. The proposed model is consistent with results suggesting that long-term plasticity can be blocked by NMDA antagonists or by postsynaptic hyperpolarization (Bear et al., 1990; Dudek & Bear, 1992; Goda & Stevens, 1996; Kirkwood et al., 1993). Since the role of plasticity in lateral inhibitory pathways in producing cortical plasticity has not received much attention, several predictions are made based on the EXIN lateral inhibitory plasticity rule. | [
355,
1659,
2085,
2228
] | Validation |
1,095 | 6 | Title: Learning Unions of Boxes with Membership and Equivalence Queries
Abstract: We present two algorithms that use membership and equivalence queries to exactly identify the concepts given by the union of s discretized axis-parallel boxes in d-dimensional discretized Euclidean space where each coordinate can have n discrete values. The first algorithm receives at most sd counterexamples and uses time and membership queries polynomial in s and log n for d any constant. Further, all equivalence queries made can be formulated as the union of O(sd log s) axis-parallel boxes. Next, we introduce a new complexity measure that better captures the complexity of a union of boxes than simply the number of boxes and dimensions. Our new measure, σ, is the number of segments in the target polyhedron where a segment is a maximum portion of one of the sides of the polyhedron that lies entirely inside or entirely outside each of the other halfspaces defining the polyhedron. We then present an improvement of our first algorithm that uses time and queries polynomial in σ and log n. The hypothesis class used here is decision trees of height at most 2sd. Further we can show that the time and queries used by this algorithm are polynomial in d and log n for s any constant, thus generalizing the exact learnability of DNF formulas with a constant number of terms. In fact, this single algorithm is efficient for either s or d constant. | [
792,
798,
1360,
1433,
1456
] | Validation |
1,096 | 1 | Title: Pruning backpropagation neural networks using modern stochastic optimization techniques
Abstract: Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimization. This paper focuses on the key issues associated with the problem of pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study presented considers a number of aspects associated with network training that may alter the behavior of a stochastic topology optimizer. Enhancements are discussed that can improve topology searches. Simulation results for the two mentioned stochastic optimization methods applied to nonlinear system identification are presented and compared with a simple random search. | [
1069
] | Train |
1,097 | 3 | Title: Belief Propagation and Revision in Networks with Loops
Abstract: Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay a foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady state beliefs. Furthermore we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a "balanced network" and show simulation results comparing belief revision and update in such networks. We show that the Turbo code structure is balanced and present simulations on a toy Turbo code problem indicating the decoding obtained by belief revision at convergence is significantly more likely to be correct. This report describes research done at the Center for Biological and Computational Learning and the Department of Brain and Cognitive Sciences of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. YW was also supported by NEI R01 EY11005 to E. H. Adelson | [
1393
] | Train |
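Row 1,098's abstract describes local belief propagation run on networks with loops. As a reader's aid, here is a minimal sketch of synchronous sum-product message passing on a pairwise network; the function and variable names (loopy_bp, phi, psi) are illustrative assumptions, not from the paper, and the paper's single-loop analysis is not reproduced.

```python
import numpy as np

def loopy_bp(phi, psi, n_iters=50):
    """phi: {node: unary potential vector}; psi: {(i, j): pairwise potential
    matrix of shape (len(phi[i]), len(phi[j]))}. Returns approximate beliefs."""
    # initialise uniform messages in both directions along every edge
    msgs = {}
    for (i, j) in psi:
        msgs[(i, j)] = np.ones(len(phi[j])) / len(phi[j])
        msgs[(j, i)] = np.ones(len(phi[i])) / len(phi[i])

    def incoming(i, exclude=None):
        # product of unary potential and all messages into i, except from `exclude`
        prod = phi[i].copy()
        for (a, b) in msgs:
            if b == i and a != exclude:
                prod = prod * msgs[(a, b)]
        return prod

    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            m = pot.T @ incoming(i, exclude=j)  # marginalise out node i's states
            new[(i, j)] = m / m.sum()           # normalise for numerical stability
        msgs = new

    return {i: (b := incoming(i)) / b.sum() for i in phi}
```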
1,098 | 1 | Title: Scheduling Maintenance of Electrical Power Transmission Networks Using Genetic Programming
Abstract: Previous work showed the combination of a Genetic Algorithm using an order or permutation chromosome combined with hand coded "Greedy" Optimizers can readily produce an optimal schedule for a four node test problem [Langdon, 1995]. Following this the same GA has been used to find low cost schedules for the South Wales region of the UK high voltage power network. This paper describes the evolution of the best known schedule for the base South Wales problem using Genetic Programming starting from the hand coded heuristics used with the GA. | [
163,
343,
1305,
1911
] | Train |
1,099 | 1 | Title: A Methodology for Processing Problem Constraints in Genetic Programming
Abstract: Search mechanisms of artificial intelligence combine two elements: representation, which determines the search space, and a search mechanism, which actually explores the space. Unfortunately, many searches may explore redundant and/or invalid solutions. Genetic programming refers to a class of evolutionary algorithms based on genetic algorithms but utilizing a parameterized representation in the form of trees. These algorithms perform searches based on simulation of nature. They face the same problems of redundant/invalid subspaces. These problems have just recently been addressed in a systematic manner. This paper presents a methodology devised for the public domain genetic programming tool lil-gp. This methodology uses data typing and semantic information to constrain the representation space so that only valid, and possibly unique, solutions will be explored. The user enters problem-specific constraints, which are transformed into a normal set. This set is checked for feasibility, and subsequently it is used to limit the space being explored. The constraints can determine valid, possibly unique space. Moreover, they can also be used to exclude subspaces the user considers uninteresting, using some problem-specific knowledge. A simple example is followed thoroughly to illustrate the constraint language, transformations, and the normal set. Experiments with boolean 11-multiplexer illustrate practical applications of the method to limit redundant space exploration by utilizing problem-specific knowledge. Supported by a grant from NASA/JSC: NAG 9-847. | [
163,
1178
] | Train |
1,100 | 2 | Title: Observability of Linear Systems with Saturated Outputs
Abstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured. | [
1464
] | Train |
1,101 | 0 | Title: On Reasoning from Data
Abstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured. | [
843,
1111,
1328
] | Train |
1,102 | 5 | Title: Automated Refinement of First-Order Horn-Clause Domain Theories
Abstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. | [
136,
985,
1174,
1370,
1413,
1479
] | Validation |
1,103 | 2 | Reference: [39] <author> Yoda, M. </author> <year> (1994). </year> <title> Predicting the Tokyo stock market. </title> <editor> In Deboeck, G.J. (Ed.) </editor> <year> (1994). </year> <title> Trading on the Edge. </title> <address> New York: </address> <publisher> Wiley., </publisher> <pages> 66-79. </pages> <institution> VITA Graduate School Southern Illinois University Daniel Nikolaev Nikovski Date of Birth: </institution> <address> April 13, 1969 606 West College Street, Apt.4, Rm. 6, Carbondale, Illinois 62901 150 Hristo Botev Boulevard, Apt. </address> <month> 54, </month> <title> 4004 Plovdiv, Bulgaria Technical University - Sofia, Bulgaria Engineer of Computer Systems and Control Thesis Title: Adaptive Computation Techniques for Time Series Analysis Major Professor: </title> <journal> Dr. Mehdi Zargham </journal>
Abstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. | [
74,
427,
611,
1079,
1579
] | Train |
1,104 | 6 | Title: Feature Generation for Sequence Categorization
Abstract: The problem of sequence categorization is to generalize from a corpus of labeled sequences procedures for accurately labeling future unlabeled sequences. The choice of representation of sequences can have a major impact on this task, and in the absence of background knowledge a good representation is often not known and straightforward representations are often far from optimal. We propose a feature generation method (called FGEN) that creates Boolean features that check for the presence or absence of heuristically selected collections of subsequences. We show empirically that the representation computed by FGEN improves the accuracy of two commonly used learning systems (C4.5 and Ripper) when the new features are added to existing representations of sequence data. We show the superiority of FGEN across a range of tasks selected from three domains: DNA sequences, Unix command sequences, and English text. | [
1260,
1262
] | Test |
1,105 | 6 | Title: PAC Learning Intersections of Halfspaces with Membership Queries (Extended Abstract)
Abstract: | [
591,
798,
1026,
2356
] | Validation |
1,106 | 1 | Title: Genetic Algorithms for Combinatorial Optimization: The Assembly Line Balancing Problem
Abstract: Genetic algorithms are one example of the use of a random element within an algorithm for combinatorial optimization. We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our test-bed problems is discussed. We carry out extensive computational testing to find appropriate values for the various parameters associated with this genetic algorithm. These experiments underscore the importance of the correct choice of a scaling parameter and mutation rate to ensure the good performance of a genetic algorithm. We also describe a parallel implementation of the genetic algorithm and give some comparisons between the parallel and serial implementations. Both versions of the algorithm are shown to be effective in producing good solutions for problems of this type (with appropriately chosen parameters). | [
163,
1065,
1305
] | Train |
1,107 | 0 | Title: The Possible Contribution of AI to the Avoidance of Crises and Wars: Using CBR Methods
Abstract: This paper presents the application of Case-Based Reasoning methods to the KOSIMO data base of international conflicts. A Case-Based Reasoning tool - VIE-CBR has been developed and used for the classification of various outcome variables, like political, military, and territorial outcome, solution modalities, and conflict intensity. In addition, the case retrieval algorithms are presented as an interactive, user-modifiable tool for intelligently searching the conflict data base for precedent cases. | [
1328,
1617
] | Test |
1,108 | 3 | Title: Confidence as Higher Order Uncertainty
Abstract: ... proposed for handling higher order uncertainty, including the Bayesian approach, ... | [
1503,
1504,
1506,
1507
] | Train |
1,109 | 6 | Title: Inductive Bias in Case-Based Reasoning Systems
Abstract: In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm, using the case-based representation ⟨CB, σ⟩. We first consider a 'naive' case-based learning algorithm CB1(H) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1(H) as a 'weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to the concept being learnt, we define a second case-based learning 'algorithm' CB2 which learns using the best possible similarity measure that might be inferred for the chosen target concept. While CB2 is not an executable learning strategy (since the chosen similarity measure is defined in terms of a priori knowledge of the actual target concept) it allows us to assess in the limit the maximum possible contribution of this approach to case-based learning. Also, in addition to illustrating the role of inductive bias, the definition of CB2 simplifies the general problem of establishing which functions might be represented in the form ⟨CB, σ⟩. Reasoning about the case-based representation in this special case has therefore been a little more straightforward than in the general case of CB1(H), allowing more substantial results regarding representable functions and sample complexity to be presented for CB2. In assessing these results, we are forced to conclude that case-based learning is not the best approach to learning the chosen concept space (the space of monomial functions). We discuss, however, how our study has demonstrated, in the context of case-based learning, the operation of concepts well known in machine learning such as inductive bias and the trade-off between computational complexity and sample complexity. | [
1328,
1570,
2151
] | Train |
1,110 | 1 | Title: On The State of Evolutionary Computation
Abstract: In the past few years the evolutionary computation landscape has been rapidly changing as a result of increased levels of interaction between various research groups and the injection of new ideas which challenge old tenets. The effect has been simultaneously exciting, invigorating, annoying, and bewildering to the old-timers as well as the new-comers to the field. Emerging out of all of this activity are the beginnings of some structure, some common themes, and some agreement on important open issues. We attempt to summarize these emergent properties in this paper. | [
163,
793,
1016,
1728,
1729
] | Train |
1,111 | 0 | Title: Towards a Better Understanding of Memory-Based Reasoning Systems
Abstract: We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever | [
258,
1101,
1328,
1339,
1412,
1570,
2292
] | Test |
1,112 | 2 | Title: Flexible Metric Nearest Neighbor Classification
Abstract: The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled "training" objects that are closest to it. Closeness is usually defined in terms of a metric distance on the Euclidean space with the input measurement variables as axes. The metric chosen to define this distance can strongly affect performance. An optimal choice depends on the problem at hand as characterized by the respective class distributions on the input measurement space, and within a given problem, on the location of the unknown object in that space. In this paper new types of K-nearest-neighbor procedures are described that estimate the local relevance of each input variable, or their linear combinations, for each individual point to be classified. This information is then used to separately customize the metric used to define distance from that object in finding its nearest neighbors. These procedures are a hybrid between regular K-nearest-neighbor methods and tree-structured recursive partitioning techniques popular in statistics and machine learning. | [
101,
719,
926,
1073,
1403,
1512,
1618,
2415
] | Test |
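The flexible-metric idea in row 1,112 is easiest to see against a plain weighted K-NN baseline. The sketch below is an illustrative assumption, not Friedman's actual local-relevance estimator: it only shows where per-feature relevance weights enter the distance computation.

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, k=5, weights=None):
    # `weights` plays the role of the locally estimated feature relevances;
    # the paper estimates them per query point, here they are simply given.
    w = np.ones(X_train.shape[1]) if weights is None else np.asarray(weights)
    dists = np.sqrt((((X_train - x) ** 2) * w).sum(axis=1))  # weighted metric
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]    # plurality vote
```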
1,113 | 1 | Title: Staged Hybrid Genetic Search for Seismic Data Imaging
Abstract: Seismic data interpretation problems are typically solved using computationally intensive local search methods which often result in inferior solutions. Here, a traditional hybrid genetic algorithm is compared with different staged hybrid genetic algorithms on the geophysical imaging static corrections problem. The traditional hybrid genetic algorithm used here applied local search to every offspring produced by genetic search. The staged hybrid genetic algorithms were designed to temporally separate the local and genetic search components into distinct phases so as to minimize interference between the two search methods. The results show that some staged hybrid genetic algorithms produce higher quality solutions while using significantly less computational time for this problem. | [
163,
1153,
1380
] | Test |
1,114 | 1 | Title: Using Genetic Algorithms to Explore Pattern Recognition in the Immune System
Abstract: This paper describes an immune system model based on binary strings. The purpose of the model is to study the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of the model. The paper reports simulation experiments on two pattern recognition problems that are relevant to natural immune systems. Finally, it reviews the relation between the model and explicit fitness sharing techniques for genetic algorithms, showing that the immune system model implements a form of implicit fitness sharing. | [
163,
602,
1117,
1261,
1371,
1588,
1603,
1696
] | Train |
1,115 | 2 | Title: The locally linear Nested Network for robot manipulation
Abstract: We present a method for accurate representation of high-dimensional unknown functions from random samples drawn from its input space. The method builds representations of the function by recursively splitting the input space in smaller subspaces, while in each of these subspaces a linear approximation is computed. The representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that a good generalisation is available as well as more accurate representations in some subareas. Therefore, fast and accurate learning are combined in this method. | [
820,
1252
] | Validation |
1,116 | 2 | Title: Equivalence of Linear Boltzmann Chains and Hidden Markov Models
Abstract: Several authors have made a link between hidden Markov models for time series and energy-based models (Luttrell 1989, Williams 1990, Saul and Jordan 1995). Saul and Jordan (1995) discuss a linear Boltzmann chain model with state-state transition energies A_{ii'} (going from state i to state i') and symbol emission energies B_{ij}, under which the probability of an entire state sequence {i_l, j_l}, l = 1..L, is a Boltzmann distribution with normalisation constant Z. Whilst any HMM can be written as a linear Boltzmann chain by setting exp(A_{ii'}) = a_{ii'} and exp(B_{ij}) = b_{ij}, with the exponentials of the initial-state energies equal to the initial-state probabilities, not all linear Boltzmann chains can be represented as HMMs (Saul and Jordan 1995). However, the difference between the two models is minimal. To be precise, if the final hidden | [
31,
611,
708,
736,
1593
] | Train |
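The parameter correspondence quoted in row 1,116 (exp(A_{ii'}) = a_{ii'}, exp(B_{ij}) = b_{ij}) amounts to taking logs of the HMM parameters. A one-way sketch under that stated convention; the function name and the handling of the initial-state terms are assumptions:

```python
import numpy as np

def hmm_to_boltzmann_chain(a, b, pi):
    """a[i, i']: transition probs; b[i, j]: emission probs; pi[i]: initial probs.
    Returns the corresponding Boltzmann-chain energies under exp(energy) = prob."""
    A = np.log(a)       # state-state transition energies
    B = np.log(b)       # symbol emission energies
    init = np.log(pi)   # initial-state energies (notation assumed)
    return A, B, init
```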
1,117 | 1 | Title: A Coevolutionary Approach to Learning Sequential Decision Rules
Abstract: We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emergent problem decompositions. | [
247,
562,
910,
1114,
1225,
1261,
1588,
1603,
2089,
2332
] | Train |
1,118 | 2 | Title: Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta
Abstract: Appropriate bias is widely viewed as the key to efficient learning and generalization. I present a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience. The IDBD algorithm is developed for the case of a simple, linear learning system: the LMS or delta rule with a separate learning-rate parameter for each input. The IDBD algorithm adjusts the learning-rate parameters, which are an important form of bias for this system. Because bias in this approach is adapted based on previous learning experience, the appropriate testbeds are drifting or non-stationary learning tasks. For particular tasks of this type, I show that the IDBD algorithm performs better than ordinary LMS and in fact finds the optimal learning rates. The IDBD algorithm extends and improves over prior work by Jacobs and by me in that it is fully incremental and has only a single free parameter. This paper also extends previous work by presenting a derivation of the IDBD algorithm as gradient descent in the space of learning-rate parameters. Finally, I offer a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation. | [
134,
1540
] | Train |
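Row 1,118's abstract specifies IDBD closely enough to sketch: per-input learning rates alpha_i = exp(beta_i) are adapted by a meta-level gradient step with a single free parameter theta. This follows Sutton's (1992) published update; treat it as a sketch rather than a reference implementation.

```python
import numpy as np

def idbd_step(w, beta, h, x, y, theta=0.01):
    """One IDBD update for a linear (LMS) unit with per-input learning rates."""
    delta = y - w @ x                        # prediction error of the LMS unit
    beta = beta + theta * delta * x * h      # meta step on the log learning rates
    alpha = np.exp(beta)                     # per-input learning rates
    w = w + alpha * delta * x                # base-level LMS update
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h
```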
1,119 | 2 | Title: Adaptive Parameter Pruning in Neural Networks
Abstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization. An open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This paper presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. The results of extensive experimentation indicate that lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. Results of statistical significance tests comparing autoprune to the new method lprune as well as to backpropagation with early stopping are given for 14 different problems. | [
881,
1203,
2405
] | Train |
1,120 | 2 | Title: ICSIM: An Object-Oriented Connectionist Simulator
Abstract: ICSIM is a connectionist net simulator being developed at ICSI and written in Sather. It is object-oriented to meet the requirements for flexibility and reuse of homogeneous and structured connectionist nets and to allow the user to encapsulate efficient customized implementations perhaps running on dedicated hardware. Nets are composed by combining off-the-shelf library classes and if necessary by specializing some of their behaviour. General user interface classes allow a uniform or customized graphic presentation of the nets being modeled. | [
1677,
2275
] | Train |
1,121 | 0 | Title: Generic Teleological Mechanisms and their Use in Case Adaptation
Abstract: In experience-based (or case-based) reasoning, new problems are solved by retrieving and adapting the solutions to similar problems encountered in the past. An important issue in experience-based reasoning is to identify different types of knowledge and reasoning useful for different classes of case-adaptation tasks. In this paper, we examine a class of non-routine case-adaptation tasks that involve patterned insertions of new elements in old solutions. We describe a model-based method for solving this task in the context of the design of physical devices. The method uses knowledge of generic teleological mechanisms (GTMs) such as cascading. Old designs are adapted to meet new functional specifications by accessing and instantiating the appropriate GTM. The Kritik2 system evaluates the computational feasibility and sufficiency of this method for design adaptation. | [
540,
643,
806,
1046,
1138,
1344,
1640
] | Validation |
1,122 | 0 | Title: A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems
Abstract: The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. | [
578,
594,
717,
799,
1194,
1534
] | Train |
1,123 | 0 | Title: MAC/FAC: A Model of Similarity-based Retrieval
Abstract: We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical remindings. Our model, called MAC/FAC (for "many are called but few are chosen") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data. | [
75,
539,
541,
1176,
1188,
1354,
1483,
1674,
1680
] | Train |
1,124 | 6 | Title: ON-LINE LEARNING OF LINEAR FUNCTIONS
Abstract: We present an algorithm for the on-line learning of linear functions which is optimal to within a constant factor with respect to bounds on the sum of squared errors for a worst case sequence of trials. The bounds are logarithmic in the number of variables. Furthermore, the algorithm is shown to be optimally robust with respect to noise in the data (again to within a constant factor). Key words. Machine learning; computational learning theory; on-line learning; linear functions; worst-case loss bounds; adaptive filter theory. Subject classifications. 68T05. | [
453,
1566,
1567
] | Validation |
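For flavour of the on-line protocol analysed in row 1,124, here is a generic Widrow-Hoff (LMS) learner that accumulates the sum of squared errors over a trial sequence. This is an assumed baseline, not the paper's algorithm or its worst-case-optimal learning rate.

```python
import numpy as np

def online_lms(trials, dim, eta=0.1):
    """trials: iterable of (x, y) pairs arriving one at a time."""
    w = np.zeros(dim)
    total_sq_loss = 0.0
    for x, y in trials:
        err = y - w @ x            # predict, then observe the true label
        total_sq_loss += err ** 2  # the quantity worst-case bounds control
        w += eta * err * x         # gradient step on the squared error
    return w, total_sq_loss
```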
1,125 | 0 | Title: Constructive Similarity Assessment: Using Stored Cases to Define New Situations
Abstract: A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description. Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. | [
166,
817,
818,
857,
1483,
1496
] | Validation |
1,126 | 0 | Title: Towards A Computer Model of Memory Search Strategy Learning
Abstract: Much recent research on modeling memory processes has focused on identifying useful indices and retrieval strategies to support particular memory tasks. Another important question concerning memory processes, however, is how retrieval criteria are learned. This paper examines the issues involved in modeling the learning of memory search strategies. It discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. It discusses an implementation of that model, and, based on the lessons learned from that implementation, points towards issues and directions in refining the model. | [
580,
583,
923,
1212,
1497,
2371,
2372,
2489
] | Train |
1,127 | 1 | Title: Recombination Operator, its Correlation to the Fitness Landscape and Search Performance
Abstract: The author reserves all other publication and other rights in association with the copyright in the thesis, and except as hereinbefore provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatever without the author's prior written permission. | [
163,
728,
793,
938,
1153
] | Train |
1,128 | 3 | Title: On Structured Variational Approximations
Abstract: The problem of approximating a probability distribution occurs frequently in many areas of applied mathematics, including statistics, communication theory, machine learning, and the theoretical analysis of complex systems such as neural networks. Saul and Jordan (1996) have recently proposed a powerful method for efficiently approximating probability distributions known as structured variational approximations. In structured variational approximations, exact algorithms for probability computation on tractable substructures are combined with variational methods to handle the interactions between the substructures which make the system as a whole intractable. In this note, I present a mathematical result which can simplify the derivation of structured variational approximations in the exponential family of distributions. | [
76,
1288,
1393
] | Train |
1,129 | 2 | Title: A Self-Organizing Binary Decision Tree For Incrementally Defined Rule Based
Abstract: This paper presents an ASOCS (adaptive self-organizing concurrent system) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on adaptive algorithm 3 (AA3) and details its architecture and learning algorithm. It has advantages over previous ASOCS models in simplicity, implementability, and cost. An ASOCS can operate in either a data processing mode or a learning mode. During the data processing mode, an ASOCS acts as a parallel hardware circuit. In learning mode, rules expressed as boolean conjunctions are incrementally presented to the ASOCS. All ASOCS learning algorithms incorporate a new rule in a distributed fashion in a short, bounded time. | [
26,
809,
814,
919,
1080,
1190,
1222
] | Train |
1,130 | 1 | Title: Dynamic Hill Climbing: Overcoming the limitations of optimization techniques
Abstract: This paper describes a novel search algorithm, called dynamic hill climbing, that borrows ideas from genetic algorithms and hill climbing techniques. Unlike both genetic and hill climbing algorithms, dynamic hill climbing has the ability to dynamically change its coordinate frame during the course of an optimization. Furthermore, the algorithm moves from a coarse-grained search to a fine-grained search of the function space by changing its mutation rate and uses a diversity-based distance metric to ensure that it searches new regions of the space. Dynamic hill climbing is empirically compared to a traditional genetic algorithm using De Jong's well-known five function test suite [4] and is shown to vastly surpass the performance of the genetic algorithm, often finding better solutions using only 1% as many function evaluations. | [
163,
959,
1334
] | Test |
1,131 | 1 | Title: ADAPTIVE TESTING OF CONTROLLERS FOR AUTONOMOUS VEHICLES
Abstract: Autonomous vehicles are likely to require sophisticated software controllers to maintain vehicle performance in the presence of vehicle faults. The test and evaluation of complex software controllers is expected to be a challenging task. The goal of this effort is to apply machine learning techniques from the field of artificial intelligence to the general problem of evaluating an intelligent controller for an autonomous vehicle. The approach involves subjecting a controller to an adaptively chosen set of fault scenarios within a vehicle simulator, and searching for combinations of faults that produce noteworthy performance by the vehicle controller. The search employs a genetic algorithm. We illustrate the approach by evaluating the performance of a subsumption-based controller for an autonomous vehicle. The preliminary evidence suggests that this approach is an effective alternative to manual testing of sophisticated software controllers. | [
910,
966,
1253
] | Test |
1,132 | 6 | Title: A Theory of Unsupervised Speedup Learning
Abstract: Speedup learning seeks to improve the efficiency of search-based problem solvers. In this paper, we propose a new theoretical model of speedup learning which captures systems that improve problem solving performance by solving a user-given set of problems. We also use this model to motivate the notion of "batch problem solving," and argue that it is more congenial to learning than sequential problem solving. Our theoretical results are applicable to all serially decomposable domains. We empirically validate our results in the domain of Eight Puzzle. | [
1309
] | Train |
1,133 | 3 | Title: A Fast Non-Parametric Density Estimation Algorithm
Abstract: Non-parametric density estimation is the problem of approximating the values of a probability density function, given samples from the associated distribution. Non-parametric estimation finds applications in discriminant analysis, cluster analysis, and flow calculations based on Smoothed Particle Hydrodynamics. Usual estimators make use of kernel functions, and require on the order of n^2 arithmetic operations to evaluate the density at n sample points. We describe a sequence of special weight functions which requires almost linear number of operations in n for the same computation. | [
719,
1666
] | Test |
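Row 1,133 targets the O(n^2) cost of evaluating a kernel density estimate at all n sample points. A minimal sketch of that naive baseline follows; the Gaussian kernel and bandwidth are assumptions, and the paper's almost-linear weight-function scheme is not reproduced here.

```python
import numpy as np

def kde_at_samples(samples, bandwidth=0.5):
    """Evaluate a 1-D Gaussian KDE at every sample point: O(n^2) pairs."""
    n = len(samples)
    u = (samples[:, None] - samples[None, :]) / bandwidth   # all pairwise gaps
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)        # Gaussian kernel
    return k.sum(axis=1) / (n * bandwidth)                  # density estimates
```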
1,134 | 1 | Title: Discontinuity in evolution: how different levels of organization imply pre-adaptation
Abstract: Non-parametric density estimation is the problem of approximating the values of a probability density function, given samples from the associated distribution. Non-parametric estimation finds applications in discriminant analysis, cluster analysis, and flow calculations based on Smoothed Particle Hydrodynamics. Usual estimators make use of kernel functions, and require on the order of n^2 arithmetic operations to evaluate the density at n sample points. We describe a sequence of special weight functions which requires almost linear number of operations in n for the same computation. | [
1264,
2281
] | Test |
1,135 | 6 | Title: Learning First-Order Acyclic Horn Programs from Entailment
Abstract: In this paper, we consider learning first-order Horn programs from entailment. In particular, we show that any subclass of first-order acyclic Horn programs with constant arity is exactly learnable from equivalence and entailment membership queries provided it allows a polynomial-time subsumption procedure and satisfies some closure conditions. One consequence of this is that first-order acyclic determinate Horn programs with constant arity are exactly learnable from equivalence and entailment membership queries. | [
1174,
1442,
1444
] | Train |
1,136 | 1 | Title: Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems
Abstract: Paradigms for using neural networks (NNs) and genetic algorithms (GAs) to heuristically solve boolean satisfiability (SAT) problems are presented. Since SAT is NP-Complete, any other NP-Complete problem can be transformed into an equivalent SAT problem in polynomial time, and solved via either paradigm. This technique is illustrated for hamiltonian circuit (HC) problems. | [
163,
727,
800,
935,
1018,
1030,
1060,
1063,
1142,
1286,
1333,
1516,
1523,
1558,
1575,
1594,
1740
] | Train |
1,137 | 4 | Title: Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates
Abstract: Fully cooperative multiagent systemsthose in which agents share a joint utility modelis of special interest in AI. A key problem is that of ensuring that the actions of individual agents are coordinated, especially in settings where the agents are autonomous decision makers. We investigate approaches to learning coordinated strategies in stochastic domains where an agent's actions are not directly observable by others. Much recent work in game theory has adopted a Bayesian learning perspective to the more general problem of equilibrium selection, but tends to assume that actions can be observed. We discuss the special problems that arise when actions are not observable, including effects on rates of convergence, and the effect of action failure probabilities and asymmetries. We also use likelihood estimates as a means of generalizing fictitious play learning models in our setting. Finally, we propose the use of maximum likelihood as a means of removing strategies from consideration, with the aim of convergence to a conventional equilibrium, at which point learning and deliberation can cease. | [
558,
1459,
1687
] | Train |
1,138 | 0 | Title: Learning Generic Mechanisms from Experiences for Analogical Reasoning
Abstract: Humans appear to often solve problems in a new domain by transferring their expertise from a more familiar domain. However, making such cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstractions used by designers. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from problem-solving experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we show that mental models in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we show how the model-based learning method integrated with similarity-based learning addresses the issues in generalization from experiences. | [
806,
1121,
1344,
1420,
1597,
2706
] | Train |
1,139 | 1 | Title: Optimal Mutation Rates in Genetic Search
Abstract: The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1/l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism. | [
163,
780,
793,
1018,
1572,
1598
] | Train |
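The (1+1)-GA with mutation rate 1/l discussed in row 1,139 is compact enough to sketch on the counting-ones problem; the mutation-rate schedules studied in the paper are omitted and all names are illustrative.

```python
import numpy as np

def one_plus_one_ea(l=50, generations=2000, seed=0):
    """(1+1)-EA on counting ones with the standard 1/l bit-flip rate."""
    rng = np.random.default_rng(seed)
    parent = rng.integers(0, 2, size=l)
    for _ in range(generations):
        flips = rng.random(l) < 1.0 / l           # flip each bit w.p. 1/l
        child = np.where(flips, 1 - parent, parent)
        if child.sum() >= parent.sum():           # keep the better string
            parent = child
    return parent
```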
1,140 | 1 | Title: LEARNING ROBOT BEHAVIORS USING GENETIC ALGORITHMS
Abstract: Genetic Algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. The approach to learning behaviors for robots described here reflects a particular methodology for learning via a simulation model. The motivation is that making mistakes on real systems may be costly or dangerous. In addition, time constraints might limit the number of experiences during learning in the real world, while in many cases, the simulation model can be made to run faster than real time. Since learning may require experimenting with behaviors that might occasionally produce unacceptable results if applied to the real world, or might require too much time in the real environment, we assume that hypothetical behaviors will be evaluated in a simulation model (the off-line system). As illustrated in Figure 1, the current best behavior can be placed in the real, on-line system, while learning continues in the off-line system [1]. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The expectation is that behaviors learned in these simulations will be useful in real-world environments. Previous studies have illustrated that knowledge learned under simulation is robust and might be applicable to the real world if the simulation is more general (i.e. has more noise, more varied conditions, etc.) than the real world environment [2]. Where this is not possible, it is important to identify the differences between the simulation and the world and note the effect upon the learning process. The research reported here continues to examine this hypothesis. The next section very briefly explains the learning algorithm (and gives pointers to where more extensive documentation can be found). After that, the actual robot is described. Then we describe the simulation of the robot. The task | [
811,
910,
964,
965,
966,
981,
1311,
2294
] | Train |
1,141 | 3 | Title: Bayesian Graphical Modeling for Intelligent Tutoring Systems
Abstract: Conventional Intelligent Tutoring Systems (ITS) do not acknowledge uncertainty about the student's knowledge. Yet, both the outcome of any teaching intervention and the exact state of the student's knowledge are uncertain. In recent years, researchers have made startling progress in the management of uncertainty in knowledge-based systems. Building on these developments, we describe an ITS architecture that explicitly models uncertainty. This will facilitate more accurate student modeling and provide ITS's which can learn. | [
1172,
1240,
1241
] | Test |
1,142 | 1 | Title: A NN Algorithm for Boolean Satisfiability Problems
Abstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a neural network algorithm (NNSAT) with GSAT [4], a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are difficult for traditional satisfiability algorithms. Results suggest that NNSAT scales better as the number of variables increases, solving at least as many hard SAT problems. | [
1018,
1136
] | Train |
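GSAT, the comparison algorithm in row 1,142, is a greedy local search with random restarts. A compact sketch follows; the clause encoding and parameter defaults are assumptions.

```python
import random

def n_satisfied(clauses, assign):
    # a clause is a list of signed ints: 2 means x2, -3 means NOT x3
    return sum(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
    for _ in range(max_tries):                      # random restarts
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if n_satisfied(clauses, assign) == len(clauses):
                return assign                       # satisfying assignment found
            def gain(v):                            # clauses satisfied if v flips
                assign[v] = not assign[v]
                s = n_satisfied(clauses, assign)
                assign[v] = not assign[v]
                return s
            best = max(range(1, n_vars + 1), key=gain)
            assign[best] = not assign[best]         # greedily flip best variable
    return None                                     # no assignment found
```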
1,143 | 1 | Title: Neural Networks in an Artificial Life Perspective
Abstract: In the last few years several researchers within the Artificial Life and Mobile Robotics community used Artificial Neural Networks. Explicitly viewing Neural Networks in an Artificial Life perspective has a number of consequences that make research on what we will call Artificial Life Neural Networks (ALNNs) rather different from traditional connectionist research. The aim of the paper is to make the differences between ALNNs and "classical" neural networks explicit. | [
1404,
2165,
2429
] | Train |
1,144 | 2 | Title: VIEWNET ARCHITECTURES FOR INVARIANT 3-D OBJECT LEARNING AND RECOGNITION FROM MULTIPLE 2-D VIEWS
Abstract: The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system (Fuzzy ARTMAP) that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence over time from 3-D object category nodes as multiple 2-D views are experienced. VIEWNET was benchmarked on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft, including small frontal views, with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% correct with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex. | [
592,
1509
] | Test |
1,145 | 2 | Title: A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses
Abstract: Recent interest has come about in deriving various neural network architectures for modelling time-dependent signals. A number of algorithms have been published for multilayer perceptrons with synapses described by finite impulse response (FIR) and infinite impulse response (IIR) filters (the latter case is also known as Locally Recurrent Globally Feedforward Networks). The derivations of these algorithms have used different approaches in calculating the gradients, and in this note, we present a short, but unifying account of how these different algorithms compare for the FIR case, both in derivation, and performance. New algorithms are subsequently presented. Simulation results have been performed to benchmark these algorithms. In this note, results are compared for the Mackey-Glass chaotic time series against a number of other methods including a standard multilayer perceptron, and a local approximation method. | [
1323
] | Train |
1,146 | 2 | Title: The optimal number of learning samples and hidden units in function approximation with a feedforward network
Abstract: This paper presents a methodology to estimate the optimal number of learning samples and the number of hidden units needed to obtain a desired accuracy of a function approximation by a feedforward network. The representation error and the generalization error, components of the total approximation error, are analyzed and the approximation accuracy of a feedforward network is investigated as a function of the number of hidden units and the number of learning samples. Based on the asymptotical behavior of the approximation error, an asymptotical model of the error function (AMEF) is introduced of which the parameters can be determined experimentally. An alternative model of the error function, which includes theoretical results about general bounds of approximation, is also analyzed. In combination with knowledge about the computational complexity of the learning rule an optimal learning set size and number of hidden units can be found resulting in a minimum computation time for a given desired precision of the approximation. This approach was applied to optimize the learning of the camera-robot mapping of a visually guided robot arm and a complex logarithm function approximation. | [
820,
1676
] | Train |
1,147 | 3 | Title: Decomposable graphical Gaussian model determination
Abstract: We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets. | [
161,
772,
1240,
1241,
1347
] | Train |
1,148 | 0 | Title: Opportunistic Reasoning: A Design Perspective
Abstract: An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal match. We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system. | [
30,
285,
486,
1355,
1534,
1597
] | Train |
1,149 | 2 | Title: What Size Neural Network Gives Optimal Generalization? Convergence Properties of Backpropagation
Abstract: One of the most important aspects of any machine learning paradigm is how it scales according to problem size and complexity. Using a task with known optimal training error, and a pre-specified maximum number of training updates, we investigate the convergence of the backpropagation algorithm with respect to a) the complexity of the required function approximation, b) the size of the network in relation to the size required for an optimal solution, and c) the degree of noise in the training data. In general, for a) the solution found is worse when the function to be approximated is more complex, for b) oversized networks can result in lower training and generalization error in certain cases, and for c) the use of committee or ensemble techniques can be more beneficial as the level of noise in the training data is increased. For the experiments we performed, we do not obtain the optimal solution in any case. We further support the observation that larger networks can produce better training and generalization error using a face recognition example where a network with many more parameters than training points generalizes better than smaller networks. | [
912,
1150,
1323,
1891,
2044
] | Test |
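As a hedged illustration of the kind of experiment this abstract reports, the sketch below trains MLPs of several hidden-layer sizes on a synthetic task with a known target and prints training and test scores; the data, sizes, and sklearn trainer are stand-ins, not the report's setup.

```python
# Illustrative only: compare training/test fit across network sizes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=400)   # known target + noise
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in (2, 10, 50, 200):
    net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    print(hidden, round(net.score(X_tr, y_tr), 3), round(net.score(X_te, y_te), 3))
```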
1,150 | 2 | Title: Lessons in Neural Network Training: Overfitting May be Harder
Abstract: For many reasons, neural networks have become very popular AI machine learning models. Two of the most important aspects of machine learning models are how well the model generalizes to unseen data, and how well the model scales with problem complexity. Using a controlled task with known optimal training error, we investigate the convergence of the backpropagation (BP) algorithm. We find that the optimal solution is typically not found. Furthermore, we observe that networks larger than might be expected can result in lower training and generalization error. This result is supported by another real world example. We further investigate the training behavior by analyzing the weights in trained networks (excess degrees of freedom are seen to do little harm and to aid convergence), and contrasting the interpolation characteristics of multi-layer perceptron neural networks (MLPs) and polynomial models (overfitting behavior is very different; the MLP is often biased towards smoother solutions). Finally, we analyze relevant theory outlining the reasons for significant practical differences. These results bring into question common beliefs about neural network training regarding convergence and optimal network size, suggest alternate guidelines for practical use (lower fear of excess degrees of freedom), and help to direct future work (e.g. methods for creation of more parsimonious solutions, importance of the MLP/BP bias and possibly worse performance of improved training algorithms). | [
912,
1149,
1323,
1630
] | Train |
1,151 | 5 | Title: Learning Classification Rules Using Lattices
Abstract: This paper presents a novel induction algorithm, Rulearner, which induces classification rules using a Galois lattice as an explicit map through the search space of rules. The construction of lattices from data is initially discussed and the use of these structures in inducing classification rules is examined. The Rulearner system is shown to compare favorably with commonly used symbolic learning methods which use heuristics rather than an explicit map to guide their search through the rule space. Furthermore, our learning system is shown to be robust in the presence of noisy data. The Rulearner system is also capable of learning both decision lists as well as unordered rule sets and thus allows for comparisons of these different learning paradigms within the same algorithmic framework. | [
1350
] | Train |
1,152 | 0 | Title: Fish and Shrink. A next step towards efficient case retrieval in large scaled case bases
Abstract: Keywords: Case-Based Reasoning, case retrieval, case representation This paper deals with the retrieval of useful cases in case-based reasoning. It focuses on the questions of what "useful" could mean and how the search for useful cases can be organized. We present the new search algorithm Fish and Shrink that is able to search quickly through the case base, even if the aspects that define usefulness are spontaneously combined at query time. We compare Fish and Shrink to other algorithms and show that most of them make an implicit closed world assumption. We finally refer to a realization of the presented idea in the context of the prototype of the FABEL-Project. The scenery is as follows. Previously collected cases are stored in a large scaled case base. An expert describes his problem and gives the aspects in which the requested case should be similar. The similarity measure thus given spontaneously shall now be used to explore the case base within a short time, shall present a required number of cases and make sure that none of the other cases is more similar. The question is now how to prepare the previously collected cases and how to define a retrieval algorithm which is able to deal with spontaneously user-defined similarity measures. | [
1453
] | Test |
1,153 | 1 | Title: Evolution in Time and Space The Parallel Genetic Algorithm
Abstract: The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is successful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see that a PGA tries to jump from two local minima to a third, still better local minimum, by using the crossover operator. This jump is (probabilistically) successful if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation. | [
163,
856,
942,
1063,
1065,
1070,
1077,
1113,
1127,
1204,
1205,
1219,
1257,
1410,
1455,
1611,
1675
] | Test |
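A toy rendering (not the paper's implementation) of the PGA's two modifications, assuming a one-max fitness: individuals live on a 2-D torus, each independently mates with its fittest von Neumann neighbour, and a one-bit hill-climbing attempt stands in for lifetime improvement.

```python
# Toy PGA: distributed neighbourhood mating plus local hill-climbing.
import numpy as np

rng = np.random.default_rng(1)
SIDE, NBITS = 8, 20
pop = rng.integers(0, 2, size=(SIDE, SIDE, NBITS))
fitness = lambda g: g.sum()                      # one-max as a stand-in

def step(pop):
    new = pop.copy()
    for i in range(SIDE):
        for j in range(SIDE):
            nbrs = [((i - 1) % SIDE, j), ((i + 1) % SIDE, j),
                    (i, (j - 1) % SIDE), (i, (j + 1) % SIDE)]
            mate = max(nbrs, key=lambda p: fitness(pop[p]))  # local selection
            cut = rng.integers(1, NBITS)                     # one-point crossover
            child = np.concatenate([pop[i, j, :cut], pop[mate][cut:]])
            k = rng.integers(NBITS)                          # one-bit hill-climb
            trial = child.copy(); trial[k] ^= 1
            new[i, j] = trial if fitness(trial) >= fitness(child) else child
    return new

for _ in range(30):
    pop = step(pop)
print(max(fitness(pop[i, j]) for i in range(SIDE) for j in range(SIDE)))
```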
1,154 | 0 | Title: Case-Based Learning: Beyond Classification of Feature Vectors
Abstract: The dominant theme of case-based research at recent ML conferences has been on classifying cases represented by feature vectors. However, other useful tasks can be targeted, and other representations are often preferable. We review the recent literature on case-based learning, focusing on alternative performance tasks and more expressive case representations. We also highlight topics in need of additional research. | [
819,
1531
] | Train |
1,155 | 0 | Title: Memory-Based Lexical Acquisition and Processing
Abstract: Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach on a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described. | [
783,
785,
862,
1328,
1407,
1601,
1812
] | Test |
1,156 | 3 | Title: A NEW SEQUENTIAL SIMULATED ANNEALING METHOD
Abstract: Let H be a function not explicitly defined, but approximable by a sequence (H n ) n0 of functional estimators. In this context we propose a new sequential algorithm to optimise asymptotically H using stepwise estimators H n . We prove under mild conditions the almost sure convergence in law of this algorithm. | [
1013
] | Validation |
1,157 | 2 | Title: Some Competitive Learning Methods (Some additions and refinements are planned for
Abstract: Let H be a function not explicitly defined, but approximable by a sequence (H n ) n0 of functional estimators. In this context we propose a new sequential algorithm to optimise asymptotically H using stepwise estimators H n . We prove under mild conditions the almost sure convergence in law of this algorithm. | [
687,
741,
745,
1700,
1704
] | Train |
1,158 | 3 | Title: Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data
Abstract: In this paper we study a new information-theoretically justified approach to missing data estimation for multivariate categorical data. The approach discussed is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in our case is the set of multinomial models with some independence assumptions. Based on the given model class assumption an information-theoretic criterion can be derived to select between the different complete data matrices. Intuitively this general criterion, called stochastic complexity, represents the shortest code length needed for coding the complete data matrix relative to the model class chosen. Using this information-theoretic criterion, the missing data problem is reduced to a search problem, i.e., finding the data completion with minimal stochastic complexity. In the experimental part of the paper we present empirical results of the approach using two real data sets, and compare these results to those achieved by commonly used techniques such as case deletion and imputing sample averages. | [
1550,
1555
] | Validation |
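A much-simplified stand-in for the search the preceding abstract describes, assuming independent multinomial columns and a crude two-part code: each candidate completion of a missing cell is scored by the column's code length, and the most compressive value wins.

```python
# Sketch: fill a missing categorical cell with the value that minimises
# an approximate two-part multinomial code length for the column.
import numpy as np

def column_code_length(col):
    vals, counts = np.unique(col, return_counts=True)
    n = counts.sum()
    data_bits = -(counts * np.log2(counts / n)).sum()
    model_bits = 0.5 * len(vals) * np.log2(n)     # rough parameter cost
    return data_bits + model_bits

def impute_cell(col, missing_idx, alphabet):
    best, best_len = None, np.inf
    for v in alphabet:
        trial = col.copy()
        trial[missing_idx] = v
        L = column_code_length(trial)
        if L < best_len:
            best, best_len = v, L
    return best

col = np.array([0, 0, 1, 0, 2, 0, -1])            # -1 marks the missing entry
print(impute_cell(col, 6, alphabet=[0, 1, 2]))    # most compressive completion
```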
1,159 | 1 | Title: An evolutionary tabu search algorithm and the NHL scheduling problem
Abstract: We present in this paper a new evolutionary procedure for solving general optimization problems that combines efficiently the mechanisms of genetic algorithms and tabu search. In order to explore the solution space properly interaction phases are interspersed with periods of optimization in the algorithm. An adaptation of this search principle to the National Hockey League (NHL) problem is discussed. The hybrid method developed in this paper is well suited for Open Shop Scheduling problems (OSSP). The results obtained appear to be quite satisfactory. | [
163,
1485,
2564
] | Test |
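A skeleton of the hybrid search principle in the preceding abstract, interspersing recombination (interaction) phases with tabu-search optimization periods; `crossover`, `neighbours`, and `cost` are problem-specific stubs the reader must supply, so this is a sketch rather than the NHL scheduler.

```python
# Skeleton of the GA/tabu hybrid: interaction phases alternate with
# tabu-search improvement periods. Solutions should be hashable (tuples).
import random

def tabu_improve(sol, neighbours, cost, iters=50, tenure=7):
    best, tabu = sol, []
    for _ in range(iters):
        cands = [n for n in neighbours(sol) if n not in tabu]
        if not cands:
            break
        sol = min(cands, key=cost)                # best admissible move
        tabu = (tabu + [sol])[-tenure:]           # fixed-length tabu list
        if cost(sol) < cost(best):
            best = sol
    return best

def hybrid(pop, crossover, neighbours, cost, generations=20):
    for _ in range(generations):
        pop = [crossover(*random.sample(pop, 2)) for _ in pop]   # interaction
        pop = [tabu_improve(s, neighbours, cost) for s in pop]   # optimisation
    return min(pop, key=cost)
```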
1,160 | 6 | Title: PFSA Modelling of Behavioural Sequences by Evolutionary Programming
Abstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a "can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator. | [
1166
] | Train |
1,161 | 6 | Title: Inductive Learning by Selection of Minimal Complexity Representations
Abstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a "can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator. | [
1560,
1592,
1702,
2324,
2423,
2657
] | Train |
1,162 | 3 | Title: Signal Processing and Communications Reversible Jump Sampler for Autoregressive Time Series, Employing Full Conditionals to
Abstract: Technical Report CUED/F-INFENG/TR. 304 We use reversible jump Markov chain Monte Carlo (MCMC) methods (Green 1995) to address the problem of model order uncertainty in autoregressive (AR) time series within a Bayesian framework. Efficient model jumping is achieved by proposing model space moves from the full conditional density for the AR parameters, which is obtained analytically. This is compared with an alternative method, for which the moves are cheaper to compute, in which proposals are made only for the new parameters in each move. Results are presented for both synthetic and audio time series. | [
1613
] | Test |
1,163 | 0 | Title: Case-Based Planning to Learn
Abstract: Learning can be viewed as a problem of planning a series of modifications to memory. We adopt this view of learning and propose the applicability of the case-based planning methodology to the task of planning to learn. We argue that relatively simple, fine-grained primitive inferential operators are needed to support flexible planning. We show that it is possible to obtain the benefits of case-based reasoning within a planning to learn framework. | [
1497,
1498,
1534
] | Test |
1,164 | 6 | Title: PAC Analyses of a `Similarity Learning' IBL Algorithm
Abstract: VS-CBR [14] is a simple instance-based learning algorithm that adjusts a weighted similarity measure as well as collecting cases. This paper presents a `PAC' analysis of VS-CBR, motivated by the PAC learning framework, which demonstrates two main ideas relevant to the study of instance-based learners. Firstly, the hypothesis spaces of a learner on different target concepts can be compared to predict the difficulty of the target concepts for the learner. Secondly, it is helpful to consider the `constituent parts' of an instance-based learner: to explore separately how many examples are needed to infer a good similarity measure and how many examples are needed for the case base. Applying these approaches, we show that VS-CBR learns quickly if most of the variables in the representation are irrelevant to the target concept and more slowly if there are more relevant variables. The paper relates this overall behaviour to the behaviour of the constituent parts of VS-CBR. | [
1570,
1584,
1626
] | Train |
1,165 | 5 | Title: Discovering Compressive Partial Determinations in Mixed Numerical and Symbolic Domains
Abstract: Partial determinations are an interesting form of dependency between attributes in a relation. They generalize functional dependencies by allowing exceptions. We modify a known MDL formula for evaluating such partial determinations to allow for its use in an admissible heuristic in exhaustive search. Furthermore we describe an efficient preprocessing-based approach for handling numerical attributes. An empirical investigation tries to evaluate the viability of the presented ideas. | [
430,
1327
] | Train |
1,166 | 6 | Title: Assessment of candidate pfsa models induced from symbol datasets
Abstract: The induction of the optimal finite state machine explanation from symbol strings is known to be at least NP-complete. However, satisfactory approximately optimal explanations may be found by the use of Evolutionary Programming. It has been shown that an information theoretic measure of finite state machine explanations can be used as the fitness function required for the evaluation of candidate explanations during the search for a near-optimal explanation. It is not obvious from the measure which class of explanation will be favoured over others during the search. By empirical studies it is possible to gain some insight into the dimensions the measure is optimising. In general, for probabilistic finite state machines, explanations assessed by a minimum message length estimator with the minimum number of transitions will be favoured over other explanations. The information measure will also favour explanations with uneven distributions of frequencies on transitions from a node suggesting that repeated sequences in symbol strings will be preferred as an explanation. Approximate bounds for acceptance of explanations and the length of string required for induction to be successful are also derived by considerations of the simplest possible and random explanations and their information measure. | [
1160
] | Train |
1,167 | 1 | Title: Evolving Globally Synchronized Cellular Automata
Abstract: How does an evolutionary process interact with a decentralized, distributed system in order to produce globally coordinated behavior? Using a genetic algorithm (GA) to evolve cellular automata (CAs), we show that the evolution of spontaneous synchronization, one type of emergent coordination, takes advantage of the underlying medium's potential to form embedded particles. The particles, typically phase defects between synchronous regions, are designed by the evolutionary process to resolve frustrations in the global phase. We describe in detail one typical solution discovered by the GA, delineating the discovered synchronization algorithm in terms of embedded particles and their interactions. We also use the particle-level description to analyze the evolutionary sequence by which this solution was discovered. Our results have implications both for understanding emergent collective behavior in natural systems and for the automatic programming of decentralized spatially extended multiprocessor systems. | [
1330,
1331,
1332
] | Test |
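A toy version of the setup in the preceding abstract, shrunk to elementary (radius-1) CAs and mutation-plus-truncation selection rather than the paper's radius-3 rules and full GA: a rule is scored by how often random initial conditions reach a globally synchronised, alternating state.

```python
# Toy GA evolving elementary CA rules toward global synchronisation.
import numpy as np

rng = np.random.default_rng(2)
N, STEPS, TRIALS = 49, 100, 20

def run_ca(rule, state, steps):
    for _ in range(steps):
        idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
        state = rule[idx]                      # rule table lookup per cell
    return state

def fitness(rule):
    ok = 0
    for _ in range(TRIALS):
        s = run_ca(rule, rng.integers(0, 2, N), STEPS)
        # synchronised: uniform state that flips globally on the next step
        ok += len(set(s)) == 1 and np.array_equal(run_ca(rule, s, 1), 1 - s)
    return ok / TRIALS

pop = [rng.integers(0, 2, 8) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [np.where(rng.random(8) < 0.1, 1 - p, p) for p in pop[:10]]
print(fitness(pop[0]))
```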
1,168 | 3 | Title: Bootstrapping Z Estimators
Abstract: We prove a general bootstrap theorem for possibly infinite-dimensional Z-estimators which builds on the recent infinite-dimensional Z-theorem due to Van der Vaart (1995). Our result extends finite-dimensional results of this type for the bootstrap due to Arcones and Gine (1992), Lele (1991), and Newton and Raftery (1994). We sketch three examples of models with infinite-dimensional parameter spaces Θ as applications of our general theorem. | [
802
] | Train |
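A finite-dimensional toy of the bootstrapped Z-estimator idea, assuming the simplest estimating function psi(x, t) = x - t (whose root is the sample mean): the bootstrap redraws the data and re-solves the estimating equation to approximate the estimator's sampling law.

```python
# Toy bootstrapped Z-estimator: theta solves sum(psi(x_i, theta)) = 0.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
x = rng.exponential(2.0, size=200)

def z_estimate(sample):
    psi = lambda t: np.mean(sample - t)             # estimating function
    return brentq(psi, sample.min(), sample.max())  # root = Z-estimate

boot = [z_estimate(rng.choice(x, size=len(x))) for _ in range(500)]
print("estimate:", z_estimate(x), " bootstrap s.e.:", np.std(boot))
```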
1,169 | 2 | Title: Individual and Collective Prognostic Prediction
Abstract: The prediction of survival time or recurrence time is an important learning problem in medical domains. The Recurrence Surface Approximation (RSA) method is a natural, effective method for predicting recurrence times using censored input data. This paper introduces the Survival Curve RSA (SC-RSA), an extension to the RSA approach which produces accurate predicted rates of recurrence, while maintaining accuracy on individual predicted recurrence times. The method is applied to the problem of breast cancer recurrence using two different datasets. | [
524,
1284,
1454
] | Validation |
1,170 | 6 | Title: Selective sampling using the Query by Committee algorithm Running title: Selective sampling using Query by Committee
Abstract: We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. Keywords: selective sampling, query learning, Bayesian Learning, experimental design | [
517,
1198
] | Train |
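A minimal sketch of the committee filter on a linearly separable toy stream, with two sklearn Perceptrons (differing only in their shuffling seed) standing in for independent version-space samples; a point is queried only when the members disagree.

```python
# Toy two-member query-by-committee filter on a streamed 2-D problem.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0])
X_lab, y_lab = [], []

for _ in range(500):
    x = rng.normal(size=2)
    votes = []
    if len(set(y_lab)) == 2:                    # need both classes to fit
        for seed in (0, 1):                     # two committee members
            m = Perceptron(random_state=seed).fit(np.array(X_lab),
                                                  np.array(y_lab))
            votes.append(int(m.predict(x[None])[0]))
    if len(set(votes)) != 1:                    # disagreement (or cold start)
        X_lab.append(x)                         # -> query the true label
        y_lab.append(int(x @ w_true > 0))

print("queried", len(X_lab), "of 500 streamed points")
```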
1,171 | 2 | Title: Nonlinear Component Analysis as a Kernel Eigenvalue Problem
Abstract: A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible 5-pixel products in 16×16 images. We give the derivation of the method and present first experimental results on polynomial feature extraction for pattern recognition. | [
1050
] | Train |
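The method sketched in numpy under a polynomial kernel, following the steps named in the abstract: build the kernel matrix, centre it in feature space, eigendecompose, and normalise so the projections carry the usual scaling.

```python
# Minimal kernel PCA with a polynomial kernel (X.Y)^d.
import numpy as np

def kernel_pca(X, k=2, degree=2):
    n = len(X)
    K = (X @ X.T) ** degree                        # polynomial kernel matrix
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # centring in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalise coefficients
    return Kc @ alphas                             # component scores for X

X = np.random.default_rng(4).normal(size=(100, 2))
print(kernel_pca(X).shape)                         # (100, 2)
```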
1,172 | 3 | Title: Introduction to the Special Section on Knowledge-Based Construction of Probabilistic and Decision Models (IEEE Transactions
Abstract: Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms (belief networks, influence diagrams, and their variants) provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach, which we call knowledge-based model construction (KBMC), is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance | [
623,
915,
1141,
2108,
2341
] | Test |
1,173 | 6 | Title: Dynamical Selection of Learning Algorithms
Abstract: Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain have met with limited success. This paper proposes a new approach to predicting a given example's class by locating it in the "example space" and then choosing the best learner(s) in that region of the example space to make predictions. The regions of the example space are defined by the prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region. This dynamic approach to learning algorithm selection is compared to other methods for selecting from multiple learning algorithms. The approach is then extended to weight rather than select the algorithms according to their past performance in a given region. Both approaches are further evaluated on a set of ten domains and compared to several other meta-learning strategies. Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain (e.g. [Aha92, Breiman84]) or for a portion of the domain ([Brodley93, Brodley94]) have met with limited success. This paper proposes a new approach that dynamically selects a learning algorithm for each example by locating it in the "example space" and then choosing the best learner(s) for prediction in that part of the example space. The regions of the example space are formed by the observed prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region which is defined by the "cross-validation history." This paper introduces DS, a method for the dynamic selection of a learning algorithm(s). We call it "dynamic" because the learning algorithm(s) used to classify a novel example depends on that example. Preliminary experimentation motivated DW, an extension to DS that dynamically weights the learners' predictions according to their regional accuracy. Further experimentation compares DS and DW to a collection of other meta-learning strategies such as cross-validation ([Breiman84]) and various forms of stacking ([Wolpert92]). In this phase of the experimentation, the meta-learners have six constituent learners which are heterogeneous in their search and representation methods (e.g. a rule learner, CN2 [Clark89]; a decision tree learner, C4.5 [Quinlan93]; an oblique decision tree learner, OC1 [Murthy93]; an instance-based learner, PEBLS [Cost93]; a k-nearest neighbor learner). | [
318,
1328,
2583
] | Train |
1,174 | 6 | Title: LEARNING CONCEPTS BY ASKING QUESTIONS
Abstract: Two important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept. | [
303,
414,
893,
902,
1033,
1074,
1102,
1135,
1297
] | Train |
1,175 | 1 | Title: Complex Environments to Complex Behaviors FROM COMPLEX ENVIRONMENTS TO COMPLEX BEHAVIORS
Abstract: Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of "energy" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. | [
1325,
1628
] | Train |
1,176 | 2 | Title: Distributed Representations and Nested Compositional Structure
Abstract: Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of "energy" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. | [
254,
1123,
1285,
1354,
1592,
2272
] | Test |
1,177 | 5 | Title: An Efficient Subsumption Algorithm for Inductive Logic Programming
Abstract: In this paper we investigate the efficiency of θ-subsumption, the basic provability relation in ILP. As deciding whether D θ-subsumes C is NP-complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that θ-subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k-local Horn clauses and show that θ-subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined, to give an efficient reasoning procedure for determinate k-local Horn clauses, an ILP-problem recently suggested to be polynomially predictable by Cohen (1993) by a simple counting argument. We finally outline how the θ-reduction algorithm, an essential part of every lgg ILP-learning algorithm, can be improved by these ideas. | [
1180,
1519,
1620,
1627
] | Test |
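To make the relation concrete, here is a brute-force θ-subsumption test for function-free clauses (exponential in general, which is exactly why the restrictions studied above matter); literals are (predicate, args) tuples and variables are strings starting with '?'.

```python
# Brute-force theta-subsumption: D subsumes C iff some substitution theta
# maps every literal of D onto a literal of C.
def subsumes(D, C, theta=None):
    theta = theta or {}
    if not D:
        return True
    (pred, args), rest = D[0], D[1:]
    for cpred, cargs in C:
        if cpred != pred or len(cargs) != len(args):
            continue
        t, ok = dict(theta), True
        for a, b in zip(args, cargs):
            if a.startswith('?'):
                if t.setdefault(a, b) != b:   # bindings must stay consistent
                    ok = False; break
            elif a != b:                      # constants must match exactly
                ok = False; break
        if ok and subsumes(rest, C, t):
            return True
    return False

D = [('parent', ('?x', '?y')), ('parent', ('?y', '?z'))]
C = [('parent', ('ann', 'bob')), ('parent', ('bob', 'cal'))]
print(subsumes(D, C))   # True: ?x->ann, ?y->bob, ?z->cal
```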
1,178 | 1 | Title: Strongly Typed Genetic Programming
Abstract: BBN Technical Report #7866: Abstract Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate syntactically correct parse trees. Key concepts for STGP are generic functions, which are not true strongly typed functions but rather templates for classes of such functions, and generic data types, which are analogous. To illustrate STGP, we present four examples involving vector/matrix manipulation and list manipulation: (1) the multi-dimensional least-squares regression problem, (2) the multi-dimensional Kalman filter, (3) the list manipulation function NTH, and (4) the list manipulation function MAPCAR. | [
163,
854,
956,
995,
1034,
1099,
1230,
1231,
1232,
1362,
1476,
1688,
1690,
1736,
1737
] | Validation |
1,179 | 2 | Title: Even with Arbitrary Transfer Functions, RCC Cannot Compute Certain FSA
Abstract: Existing proofs demonstrating the computational limitations of the Recurrent Cascade Correlation (RCC) Network (Fahlman, 1991) explicitly limit their results to units having sigmoidal or hard-threshold transfer functions (Giles et al., 1995; and Kremer, 1996). The proof given here shows that, for any given finite, discrete, deterministic transfer function used by the units of an RCC network, there are finite-state automata (FSA) that the network cannot model, no matter how many units are used. The proof applies equally well to continuous transfer functions with a finite number of fixed-points, such as the sigmoid function. | [
946
] | Test |
1,180 | 5 | Title: Efficient Algorithms for θ-Subsumption
Abstract: θ-subsumption is a decidable but incomplete approximation of logic implication, important to inductive logic programming and theorem proving. We show that by context-based elimination of possible matches a certain superset of the determinate clauses can be tested for subsumption in polynomial time. We discuss the relation between subsumption and the clique problem, showing in particular that using additional prior knowledge about the substitution space only a small fraction of the search space can be identified as possibly containing globally consistent solutions, which leads to an effective pruning rule. We present empirical results, demonstrating that a combination of both of the above approaches provides an extreme reduction of computational effort. | [
1177,
1620
] | Train |
1,181 | 6 | Title: Learning Sparse Perceptrons
Abstract: We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features. Our algorithm, which is based on a hypothesis-boosting method, is able to PAC-learn a relatively natural class of target concepts. Moreover, the algorithm appears to work well in practice: on a set of three problem domains, the algorithm produces classifiers that utilize small numbers of features yet exhibit good generalization performance. Perhaps most importantly, our algorithm generates concept descriptions that are easy for humans to understand. | [
25,
569,
1431
] | Validation |
1,182 | 5 | Title: Overcoming the myopia of inductive learning algorithms with RELIEFF
Abstract: Current inductive machine learning algorithms typically use greedy search with limited lookahead. This prevents them from detecting significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELIEFF, an extension of RELIEF developed by Kira and Rendell [10], [11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems and the results are compared with some other well known machine learning algorithms. Excellent results on artificial data sets and two real world problems show the advantage of the presented approach to inductive learning. | [
1010,
1073,
1569,
1578,
1684,
1726
] | Validation |
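A compact sketch of RELIEF, the binary ancestor of RELIEFF, on a parity-style target that myopic impurity measures cannot see: weights grow for attributes that separate each instance's nearest miss from its nearest hit. The refinements RELIEFF adds (k nearest neighbours, multi-class handling, missing values) are omitted.

```python
# Basic RELIEF attribute estimation (Kira & Rendell's two-class form).
import numpy as np

def relief(X, y, n_iter=100, rng=np.random.default_rng(5)):
    X = (X - X.min(0)) / (X.max(0) - X.min(0))   # scale diffs to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(1)
        d[i] = np.inf                            # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(5)
X = rng.random((200, 4))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)  # parity of attrs 0 and 1
print(relief(X, y).round(3))                     # weights 0 and 1 dominate
```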
1,183 | 4 | Title: Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition
Abstract: This paper describes the MAXQ method for hierarchical reinforcement learning based on a hierarchical decomposition of the value function and derives conditions under which the MAXQ decomposition can represent the optimal value function. We show that for certain execution models, the MAXQ decomposition will produce better policies than Feudal Q learning. | [
562,
738,
1193,
1202
] | Train |
1,184 | 1 | Title: Causality in Genetic Programming
Abstract: Causality relates changes in the structure of an object with the effects of such changes, that is changes in the properties or behavior of the object. This paper analyzes the concept of causality in Genetic Programming (GP) and suggests how it can be used in adapting control parameters for speeding up GP search. We first analyze the effects of crossover to show the weak causality of the GP representation and operators. Hierarchical GP approaches based on the discovery and evolution of functions amplify this phenomenon. However, selection gradually retains strongly causal changes. Causality is correlated to search space exploitation and is discussed in the context of the exploration-exploitation tradeoff. The results described argue for a bottom-up GP evolutionary thesis. Finally, new developments based on the idea of GP architecture evolution (Koza, 1994a) are discussed from the causality perspective. | [
120,
141,
781,
844,
860,
1362,
1784,
2199
] | Validation |
1,185 | 6 | Title: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants
Abstract: Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only "hard" areas but also outliers and noise. | [
1000,
1521
] | Train |
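A tiny bagging baseline in the spirit of the study above, assuming sklearn decision trees and synthetic data: bootstrap-trained trees vote, and the ensemble's test accuracy is compared with a single tree's.

```python
# Minimal bagging-by-voting comparison against a single decision tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

votes = np.zeros((len(X_te), 2))
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))          # bootstrap replicate
    t = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    votes[np.arange(len(X_te)), t.predict(X_te)] += 1

single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("single tree:", (single.predict(X_te) == y_te).mean())
print("bagged 25  :", (votes.argmax(1) == y_te).mean())
```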
1,186 | 3 | Title: Rationality and Intelligence
Abstract: The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper outlines a gradual evolution in the formal conception of rationality that brings it closer to our informal conception of intelligence and simultaneously reduces the gap between theory and practice. Some directions for future research are indicated. | [
492,
591,
1268,
1309
] | Validation |
1,187 | 6 | Title: Rationality and Intelligence
Abstract: Design and Evaluation of the RISE 1.0 Learning System Pedro Domingos pedrod@ics.uci.edu Technical Report 94-34 August 30, 1994 | [
426,
1234
] | Train |
1,188 | 2 | Title: Estimating analogical similarity by dot-products of Holographic Reduced Representations.
Abstract: Models of analog retrieval require a computationally cheap method of estimating similarity between a probe and the candidates in a large pool of memory items. The vector dot-product operation would be ideal for this purpose if it were possible to encode complex structures as vector representations in such a way that the superficial similarity of vector representations reflected underlying structural similarity. This paper describes how such an encoding is provided by Holographic Reduced Representations (HRRs), which are a method for encoding nested relational structures as fixed-width distributed representations. The conditions under which structural similarity is reflected in the dot-product rankings of | [
1123,
1354
] | Train |
1,189 | 6 | Title: Figure 3: Average model size accepted from random prefix-closed samples of various size, and
Abstract: that is based on Angluin's L* algorithm. The algorithm maintains a model consistent with its past examples. When a new counterexample arrives it tries to extend the model in a minimal fashion. We conducted a set of experiments where random automata that represent different strategies were generated, and the algorithm tried to learn them based on prefix-closed samples of their behavior. The algorithm managed to learn very compact models that agree with the samples. The size of the sample had a small effect on the size of the model. The experimental results suggest that for random prefix-closed samples the algorithm behaves well. However, following Angluin's result on the difficulty of learning almost uniform complete samples [ Angluin, 1978 ], it is obvious that our algorithm does not solve the complexity issue of inferring a DFA from a general prefix-closed sample. We are currently looking for classes of prefix-closed samples in which US-L* behaves well. [ Carmel and Markovitch, 1994 ] D. Carmel and S. Markovitch. The M* algorithm: Incorporating opponent models into adversary search. Technical Report CIS report 9402, Technion, March 1994. [ Carmel and Markovitch, 1995 ] D. Carmel and S. Markovitch. Unsupervised learning of finite automata: A practical approach. Technical Report CIS report 9504, Technion, March 1995. [ Shoham and Tennenholtz, 1994 ] Y. Shoham and M. Tennenholtz. Co-Learning and the evolution of social activity. Technical Report STAN-CS-TR-94-1511, Stanford University, Department of Computer Science, 1994. | [
638,
1643,
1687
] | Train |
1,190 | 2 | Title: Analysis of the Convergence and Generalization of AA1
Abstract: that is based on Angluin's L* algorithm. The algorithm maintains a model consistent with its past examples. When a new counterexample arrives it tries to extend the model in a minimal fashion. We conducted a set of experiments where random automata that represent different strategies were generated, and the algorithm tried to learn them based on prefix-closed samples of their behavior. The algorithm managed to learn very compact models that agree with the samples. The size of the sample had a small effect on the size of the model. The experimental results suggest that for random prefix-closed samples the algorithm behaves well. However, following Angluin's result on the difficulty of learning almost uniform complete samples [ Angluin, 1978 ], it is obvious that our algorithm does not solve the complexity issue of inferring a DFA from a general prefix-closed sample. We are currently looking for classes of prefix-closed samples in which US-L* behaves well. [ Carmel and Markovitch, 1994 ] D. Carmel and S. Markovitch. The M* algorithm: Incorporating opponent models into adversary search. Technical Report CIS report 9402, Technion, March 1994. [ Carmel and Markovitch, 1995 ] D. Carmel and S. Markovitch. Unsupervised learning of finite automata: A practical approach. Technical Report CIS report 9504, Technion, March 1995. [ Shoham and Tennenholtz, 1994 ] Y. Shoham and M. Tennenholtz. Co-Learning and the evolution of social activity. Technical Report STAN-CS-TR-94-1511, Stanford University, Department of Computer Science, 1994. | [
809,
1129,
1321
] | Train |
1,191 | 6 | Title: Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms
Abstract: The term "bias" is widely used|and with different meanings|in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance|rather than appropriate or inappropriate machine learning bias|is an important cause of poor performance for decision tree algorithms. | [
661,
692,
1053,
1290,
2423
] | Validation |
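A hedged illustration of measuring statistical bias and variance: many trees are trained on independently redrawn training sets and their predictions decomposed at fixed test points. The squared-loss (regression) form of the decomposition is used here for simplicity; the paper's 0/1-loss setting requires a different decomposition.

```python
# Estimate bias^2 and variance of a decision tree by repeated resampling.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
f = lambda x: np.sin(3 * x)                      # known target function
x_test = np.linspace(-1, 1, 50)[:, None]

preds = []
for _ in range(200):                             # 200 independent training sets
    X = rng.uniform(-1, 1, (100, 1))
    y = f(X[:, 0]) + 0.3 * rng.normal(size=100)
    preds.append(DecisionTreeRegressor(max_depth=4).fit(X, y).predict(x_test))
preds = np.array(preds)

bias2 = ((preds.mean(0) - f(x_test[:, 0])) ** 2).mean()
variance = preds.var(0).mean()
print(f"bias^2 ~ {bias2:.4f}   variance ~ {variance:.4f}")
```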
1,192 | 4 | Title: Roles of Macro-Actions in Accelerating Reinforcement Learning
Abstract: We analyze the use of built-in policies, or macro-actions, as a form of domain knowledge that can improve the speed and scaling of reinforcement learning algorithms. Such macro-actions are often used in robotics, and macro-operators are also well-known as an aid to state-space search in AI systems. The macro-actions we consider are closed-loop policies with termination conditions. The macro-actions can be chosen at the same level as primitive actions. Macro-actions commit the learning agent to act in a particular, purposeful way for a sustained period of time. Overall, macro-actions may either accelerate or retard learning, depending on the appropriateness of the macro-actions to the particular task. We analyze their effect in a simple example, breaking the acceleration effect into two parts: 1) the effect of the macro-action in changing exploratory behavior, independent of learning, and 2) the effect of the macro-action on learning, independent of its effect on behavior. In our example, both effects are significant, but the latter appears to be larger. Finally, we provide a more complex gridworld illustration of how appropriately chosen macro-actions can accelerate overall learning. | [
321,
875,
2150,
2473
] | Train |
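The backup that makes closed-loop macro-actions usable alongside primitive actions is the SMDP-style Q-learning update; below is a sketch with hypothetical `env` and macro interfaces (env.step, macro.act, macro.done), accumulating the discounted in-macro return and bootstrapping with gamma^k at termination.

```python
# Sketch only: SMDP-style Q-learning backup for one macro-action execution.
# Hypothetical interfaces: env.step(a) -> (next_state, reward);
# macro.act(s) -> primitive action; macro.done(s) -> termination test.
# Q is a dict of dicts: Q[state][action_or_macro] -> value.
def smdp_q_update(Q, s, macro, env, gamma=0.99, alpha=0.1):
    g, discount, k, state = 0.0, 1.0, 0, s
    while True:
        state, r = env.step(macro.act(state))
        g += discount * r            # discounted return inside the macro
        discount *= gamma
        k += 1
        if macro.done(state):
            break
    target = g + (gamma ** k) * max(Q[state].values())
    Q[s][macro] += alpha * (target - Q[s][macro])
    return state
```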
1,193 | 4 | Title: Reinforcement Learning with Hierarchies of Machines
Abstract: We present a new approach to reinforcement learning in which the policies considered by the learning process are constrained by hierarchies of partially specified machines. This allows for the use of prior knowledge to reduce the search space and provides a framework in which knowledge can be transferred across problems and in which component solutions can be recombined to solve larger and more complicated problems. Our approach can be seen as providing a link between reinforcement learning and behavior-based or teleo-reactive approaches to control. We present provably convergent algorithms for problem-solving and learning with hierarchical machines and demonstrate their effectiveness on a problem with several thousand states. | [
1183
] | Train |