| node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, 13-124k chars) | neighbors (list, 0-3.32k entries) | mask (string, 4 classes) |
|---|---|---|---|---|
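The rows below can be consumed programmatically. A minimal sketch of reading such a dump, assuming it has been exported as JSON Lines with the field names above (the file name and export format are assumptions, not part of this dataset's tooling):

```python
import json
from collections import Counter

def load_rows(path):
    """Yield one node record per line of a hypothetical JSON Lines export.

    Each record is assumed to carry the five columns shown above:
    node_id (int), label (int), text (str), neighbors (list of int node_ids),
    and mask (a split name such as "Train", "Validation", or "Test").
    """
    with open(path) as f:
        for line in f:
            row = json.loads(line)
            yield (row["node_id"], row["label"], row["text"],
                   row["neighbors"], row["mask"])

# Example: tally rows per split ("nodes.jsonl" is a placeholder name).
splits = Counter(mask for *_, mask in load_rows("nodes.jsonl"))
print(splits)
```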
394 | 2 | Title: Prior Information and Generalized Questions
Abstract: This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. This research is sponsored by a grant from the National Science Foundation under contract ASC-9217041 and a grant from ONR/ARPA under contract N00014-92-J-1879. The author was supported by a Postdoctoral Fellowship (Le 1014/1-1) from the Deutsche Forschungsgemeinschaft and an NSF/CISE Postdoctoral Fellowship. | [
125,
133,
608
] | Train |
395 | 1 | Title: Evolving Graphs and Networks with Edge Encoding: Preliminary Report
Abstract: We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress. | [
163,
189,
191
] | Validation |
396 | 5 | Title: Geometric Comparison of Classifications and Rule Sets
Abstract: We present a technique for evaluating classifications by geometric comparison of rule sets. Rules are represented as objects in an n-dimensional hyperspace. The similarity of classes is computed from the overlap of the geometric class descriptions. The system produces a correlation matrix that indicates the degree of similarity between each pair of classes. The technique can be applied to classifications generated by different algorithms, with different numbers of classes and different attribute sets. Experimental results from a case study in a medical domain are included. | [
378,
478
] | Train |
397 | 2 | Title: Truth-from-Trash Learning and the Mobot
Abstract: As natural resources become less abundant, we naturally become more interested in, and more adept at, utilisation of waste materials. In doing this we are bringing to bear a ploy which is of key importance in learning, or so I argue in this paper. In the `Truth from Trash' model, learning is viewed as a process which uses environmental feedback to assemble fortuitous sensory predispositions (sensory `trash') into useful information vehicles, i.e., `truthful' indicators of salient phenomena. The main aim will be to show how a computer implementation of the model has been used to enhance (through learning) the strategic abilities of a simulated, football-playing mobot. | [
659,
747,
2346
] | Validation |
398 | 3 | Title: Axioms of Causal Relevance
Abstract: This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z constant." The axiomatization of causal irrelevance is contrasted with the axiomatization of informational irrelevance, as in "Learning X will not alter our belief in Y, once we know Z." Two versions of causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure that is governed by just two trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems about causal relevance. | [
248,
419,
776
] | Train |
399 | 2 | Title: Representing and Learning Visual Schemas in Neural Networks for Scene Analysis
Abstract: Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on the other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures. | [
15,
427,
1250,
1251
] | Train |
400 | 6 | Title: Learning Algorithms with Applications to Robot Navigation and Protein Folding
Abstract: Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on the other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures. | [
14,
258,
555,
2354,
2360
] | Test |
401 | 3 | Title: Learning Limited Dependence Bayesian Classifiers
Abstract: We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general extreme. While much work has been carried out along the two ends of this spectrum, there has been surprisingly little done along the middle. We analyze the assumptions made as one moves along this spectrum and show the tradeoffs between model accuracy and learning speed which become critical to consider in a variety of data mining domains. We then present a general induction algorithm that allows for traversal of this spectrum depending on the available computational power for carrying out induction and show its application in a number of domains with different properties. | [
442,
577,
632,
2462
] | Validation |
402 | 1 | Title: The Evolutionary Cost of Learning
Abstract: Traits that are acquired by members of an evolving population during their lifetime, through adaptive processes such as learning, can become genetically specified in later generations. Thus there is a change in the level of learning in the population over evolutionary time. This paper explores the idea that as well as the benefits to be gained from learning, there may also be costs to be paid for the ability to learn. It is these costs that supply the selection pressure for the genetic assimilation of acquired traits. Two models are presented that attempt to illustrate this assertion. The first uses Kauffman's NK fitness landscapes to show the effect that both explicit and implicit costs have on the assimilation of learnt traits. A characteristic `hump' is observed in the graph of the level of plasticity in the population showing that learning is first selected for and then against as evolution progresses. The second model is a practical example in which neural network controllers are evolved for a small mobile robot. Results from this experiment also show the hump. | [
163,
219,
403,
538,
681
] | Train |
403 | 1 | Title: Landscapes, Learning Costs and Genetic Assimilation.
Abstract: The evolution of a population can be guided by phenotypic traits acquired by members of that population during their lifetime. This phenomenon, known as the Baldwin Effect, can speed the evolutionary process as traits that are initially acquired become genetically specified in later generations. This paper presents conditions under which this genetic assimilation can take place. As well as the benefits that lifetime adaptation can give a population, there may be a cost to be paid for that adaptive ability. It is the evolutionary trade-off between these costs and benefits that provides the selection pressure for acquired traits to become genetically specified. It is also noted that genotypic space, in which evolution operates, and phenotypic space, on which adaptive processes (such as learning) operate, are, in general, of a different nature. To guarantee an acquired characteristic can become genetically specified, then these spaces must have the property of neighbourhood correlation which means that a small distance between two individuals in phenotypic space implies that there is a small distance between the same two individuals in genotypic space. | [
402,
2104,
2302,
2309
] | Validation |
404 | 2 | Title: EE380L (Neural Networks for Pattern Recognition): POp Trees
Abstract: Decision Trees have been widely used for classification/regression tasks. They are relatively much faster to build as compared to Neural Networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful when the boundary of classes submerge in each other, or when there is noise in the input data. In addition, they provide us with a confidence measure. We allow option nodes in our trees. Again, instead of uniform voting, we learn the weightage of every subtree. | [
102
] | Test |
405 | 2 | Title: Finite State Machines and Recurrent Neural Networks Automata and Dynamical Systems Approaches
Abstract: Decision Trees have been widely used for classification/regression tasks. They are relatively much faster to build as compared to Neural Networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful when the boundary of classes submerge in each other, or when there is noise in the input data. In addition, they provide us with a confidence measure. We allow option nodes in our trees. Again, instead of uniform voting, we learn the weightage of every subtree. | [
753,
1285,
1592
] | Train |
406 | 2 | Title: Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
Abstract: The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converging, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay. | [
230,
311,
427,
2307
] | Train |
407 | 2 | Title: Constructing Deterministic Finite-State Automata in Recurrent Neural Networks
Abstract: Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contributes to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e., the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a "programmed" neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. | [
512,
1298,
1763,
1875,
2439
] | Test |
408 | 2 | Title: Constructing Deterministic Finite-State Automata in Recurrent Neural Networks
Abstract: Report SYCON-93-09 Recent Results on Lyapunov-theoretic Techniques for Nonlinear Stability ABSTRACT This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) includes arbitrary bounded disturbances acting on the system, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets. As a corollary of the obtained Converse Theorem, we show that the well-known Lyapunov sufficient condition for "input-to-state stability" is also necessary, settling positively an open question raised by several authors during the past few years. | [
630
] | Train |
409 | 2 | Title: Extraction of Rules from Discrete-Time Recurrent Neural Networks
Abstract: The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network, and we introduce a heuristic that permits us to choose among the consistent DFAs the model which best approximates the learned regular grammar. Keywords: Recurrent Neural Networks, Grammatical Inference, Regular Languages, Deterministic Finite-State Automata, Rule Extraction, Generalization Performance, Model Selection, Occam's Razor. Technical Report CS-TR-3465 and UMIACS-TR-95-54, University of Maryland, College Park, MD 20742. Accepted for publication in Neural Networks. | [
753,
1298,
1606,
2582
] | Validation |
410 | 4 | Title: High-Performance Job-Shop Scheduling With A Time-Delay TD(λ) Network
Abstract: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm TD(λ). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-TD(λ) network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them. | [
2,
82,
298,
305,
565
] | Validation |
411 | 2 | Title: POWER OF NEURAL NETS
Abstract: Report SYCON-91-11 ABSTRACT This paper deals with the simulation of Turing machines by neural networks. Such networks are made up of interconnections of synchronously evolving processors, each of which updates its state according to a "sigmoidal" linear combination of the previous states of all units. The main result states that one may simulate all Turing machines by nets, in linear time. In particular, it is possible to give a net made up of about 1,000 processors which computes a universal partial-recursive function. (This is an update of Report SYCON-91-08; new results include the simulation in linear time of binary-tape machines, as opposed to the unary alphabets used in the previous version.) | [
512,
536,
1470,
1600,
1891,
2232,
2582,
2594
] | Train |
412 | 4 | Title: The Influence of Domain Properties on the Performance of Real-Time Search Algorithms
Abstract: This research is sponsored by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations. | [
552,
688
] | Train |
413 | 2 | Title: Convergence Rates of Approximation by Translates
Abstract: In this paper we consider the problem of approximating a function belonging to some function space by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991) we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/√n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L2 norm and in the L1 norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, at the Artificial Intelligence Laboratory and at the Department of Mathematics, University of Trento, Italy. Gabriele Anzellotti is with the Department of Mathematics, University of Trento, Italy. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation (S1-801534-2). Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. © Massachusetts Institute of Technology, 1992 | [
611
] | Train |
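Row 413's headline claim, restated as a display equation (a paraphrase of the abstract, with f the target function and G the translated basis function; the precise function-space assumptions, norms, and constants are in the paper, not here):

```latex
\[
  \inf_{c_1,\dots,c_n,\; t_1,\dots,t_n}
  \Big\| f - \sum_{i=1}^{n} c_i \, G(\cdot - t_i) \Big\|_{L^2}
  = O\!\left(\tfrac{1}{\sqrt{n}}\right)
\]
```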
414 | 0 | Title: Acquiring Recursive and Iterative Concepts with Explanation-Based Learning
Abstract: University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. | [
440,
908,
1174,
1442,
1445,
1877
] | Train |
415 | 1 | Title: Competitive Environments Evolve Better Solutions for Complex Tasks
Abstract: University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. | [
163,
188,
209,
523,
712,
789,
995,
1737,
1790,
1832,
1836,
2103,
2353,
2664
] | Train |
416 | 3 | Title: A note on convergence rates of Gibbs sampling for nonparametric mixtures
Abstract: We consider a mixture model where the mixing distribution is random and is given a Dirichlet process prior. We describe the general structure of two Gibbs sampling algorithms that are useful for approximating Bayesian inferences in this problem. When the kernel f(x | θ) of the mixture is bounded, we show that the Markov chains resulting from the Gibbs sampling are uniformly ergodic, and we provide an explicit rate bound. Unfortunately, the bound is not sharp in general; improving the bound appreciably, however, seems quite difficult. | [
137,
138,
1713,
1913,
2510
] | Train |
417 | 5 | Title: Constructing Intermediate Concepts by Decomposition of Real Functions
Abstract: In learning from examples it is often useful to expand an attribute-vector representation by intermediate concepts. The usual advantage of such structuring of the learning problem is that it makes the learning easier and improves the comprehensibility of induced descriptions. In this paper, we develop a technique for discovering useful intermediate concepts when both the class and the attributes are real-valued. The technique is based on a decomposition method originally developed for the design of switching circuits and recently extended to handle incompletely specified multi-valued functions. It was also applied to machine learning tasks. In this paper, we introduce the modifications needed to decompose real functions and to present them in symbolic form. The method is evaluated on a number of test functions. The results show that the method correctly decomposes fairly complex functions. The decomposition hierarchy does not depend on a given repertoire of basic functions (background knowledge). | [
317,
508
] | Validation |
418 | 6 | Title: Heterogeneous Uncertainty Sampling for Supervised Learning
Abstract: Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previously labeled instances. These methods can greatly reduce the number of instances that an expert need label. One problem with this approach is that the classifier best suited for an application may be too expensive to train or use during the selection of instances. We test the use of one classifier (a highly efficient probabilistic one) to select examples for training another (the C4.5 rule induction program). Despite being chosen by this heterogeneous approach, the uncertainty samples yielded classifiers with lower error rates than random samples ten times larger. | [
135,
479,
740,
1198,
1269,
1312
] | Train |
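Row 418 describes heterogeneous uncertainty sampling: a cheap probabilistic classifier scores the unlabeled pool, and the instances it is least sure about are labeled and used to train a more expensive learner. A minimal sketch of the selection step, with scikit-learn's Gaussian naive Bayes standing in for the paper's efficient probabilistic classifier (the library choice and the binary-class simplification are assumptions, not the authors' implementation):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def uncertainty_sample(X_labeled, y_labeled, X_pool, batch_size=10):
    """Return indices of the pool instances the probe classifier is least sure of.

    Binary case: an instance is maximally uncertain when its top predicted
    class probability is closest to 0.5. The selected instances would then
    be labeled and handed to the expensive learner (C4.5 in the abstract).
    """
    probe = GaussianNB().fit(X_labeled, y_labeled)
    top_prob = probe.predict_proba(X_pool).max(axis=1)  # in [0.5, 1] for 2 classes
    margin = top_prob - 0.5                             # small margin = uncertain
    return np.argsort(margin)[:batch_size]
```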
419 | 3 | Title: On the Testability of Causal Models with Latent and Instrumental Variables
Abstract: Certain causal models involving unmeasured variables induce no independence constraints among the observed variables but imply, nevertheless, inequality constraints on the observed distribution. This paper derives a general formula for such inequality constraints as induced by instrumental variables, that is, exogenous variables that directly affect some variables but not all. With the help of this formula, it is possible to test whether a model involving instrumental variables may account for the data, or, conversely, whether a given variable can be deemed instrumental. | [
248,
260,
398,
1326,
1527,
1747
] | Test |
420 | 2 | Title: Sample Size Calculations for Smoothing Splines Based on Bayesian Confidence Intervals
Abstract: Bayesian confidence intervals of a smoothing spline are often used to distinguish two curves. In this paper, we provide an asymptotic formula for sample size calculations based on Bayesian confidence intervals. Approximations and simulations on special functions indicate that this asymptotic formula is reasonably accurate. Key Words: Bayesian confidence intervals; sample size; smoothing spline. Address: Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110. Tel.: (805)893-4870. Fax: (805)893-2334. E-mail: yuedong@pstat.ucsb.edu. Supported by the National Institute of Health under Grants R01 EY09946, P60 DK20572 and P30 HD18258. | [
10,
192,
193,
439,
519,
2214,
2590
] | Train |
421 | 6 | Title: Improved Boosting Algorithms Using Confidence-rated Predictions
Abstract: We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper. | [
255
] | Validation |
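Row 421's abstract analyzes AdaBoost when weak hypotheses output real-valued confidences. A sketch of one round's weight update in that setting, using the closed-form alpha from Schapire and Singer's analysis (this is a generic illustration, not the paper's code):

```python
import numpy as np

def boost_round(w, y, h):
    """One confidence-rated AdaBoost round.

    w: example weights summing to 1; y: labels in {-1, +1};
    h: weak-hypothesis outputs in [-1, +1] on the training sample.
    alpha = 0.5 * ln((1 + r) / (1 - r)) minimizes the normalizer
    Z = sum_i w_i * exp(-alpha * y_i * h_i) exactly for binary-valued h,
    and minimizes an upper bound on Z for confidence-rated h in [-1, +1].
    """
    r = float(np.sum(w * y * h))            # weighted correlation, in [-1, 1]
    alpha = 0.5 * np.log((1 + r) / (1 - r))
    w_new = w * np.exp(-alpha * y * h)      # upweight examples h got wrong
    return w_new / w_new.sum(), alpha
```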
422 | 1 | Title: Genetic Self-Learning
Abstract: Evolutionary Algorithms are direct random search algorithms which imitate the principles of natural evolution as a method to solve adaptation (learning) tasks in general. As such they have several features in common which can be observed on the genetic and phenotypic level of living species. In this paper the algorithms' capability of adaptation or learning in a wider sense is demonstrated, and it is focused on Genetic Algorithms to illustrate the learning process on the population level (first level learning), and on Evolution Strategies to demonstrate the learning process on the meta-level of strategy parameters (second level learning). | [
163,
1069,
1685,
1691
] | Train |
423 | 3 | Title: LEARNING BAYESIAN NETWORKS WITH LOCAL STRUCTURE
Abstract: We examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability distributions (CPDs) that quantify these networks. This increases the space of possible models, enabling the representation of CPDs with a variable number of parameters. The resulting learning procedure induces models that better emulate the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures and provide an empirical evaluation of the proposed learning procedure. This evaluation indicates that learning curves characterizing this procedure converge faster, in the number of training instances, than those of the standard procedure, which ignores the local structure of the CPDs. Our results also show that networks learned with local structures tend to be more complex (in terms of arcs), yet require fewer parameters. | [
62,
557,
558,
1290,
1934,
2425
] | Test |
424 | 6 | Title: Validation of Voting Committees
Abstract: This paper contains a method to bound the test errors of voting committees with members chosen from a pool of trained classifiers. There are so many prospective committees that validating them directly does not achieve useful error bounds. Because there are fewer classifiers than prospective committees, it is better to validate the classifiers individually, then use linear programming to infer committee error bounds. We test the method using credit card data. Also, we extend the method to infer bounds for classifiers in general. | [
571
] | Test |
425 | 4 | Title: Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil
Abstract: An accurate simulation of a heating coil is used to compare the performance of a PI controller, a neural network trained to predict the steady-state output of the PI controller, a neural network trained to minimize the n-step ahead error between the coil output and the set point, and a reinforcement learning agent trained to minimize the sum of the squared error over time. Although the PI controller works very well for this task, the neural networks do result in improved performance. | [
85,
565
] | Train |
426 | 5 | Title: Rule Induction with CN2: Some Recent Improvements
Abstract: The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. | [
29,
303,
318,
335,
836,
937,
1061,
1187,
1275,
1486,
1528,
1576,
2126,
2369,
2431
] | Validation |
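Row 426 mentions replacing CN2's entropy heuristic with the Laplacian error estimate. The standard form of that estimate for a rule covering n examples, n_c of them in the predicted class, with k classes, is (n_c + 1)/(n + k); a tiny generic illustration (not the CN2 source):

```python
def laplace_accuracy(n_covered, n_correct, n_classes):
    """Laplace accuracy estimate for a rule: (n_c + 1) / (n + k).

    Unlike the raw accuracy n_c / n, this discounts rules supported by
    very few examples, which is what makes it a safer search heuristic.
    """
    return (n_correct + 1) / (n_covered + n_classes)

# A rule covering 3 examples, all correctly classified, in a 2-class task:
# raw accuracy = 1.0, Laplace estimate = (3 + 1) / (3 + 2) = 0.8.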
427 | 2 | Title: Book Review Introduction to the Theory of Neural Computation Reviewed by: 2
Abstract: Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models | [
18,
146,
201,
202,
205,
230,
250,
304,
312,
350,
353,
391,
399,
406,
461,
477,
493,
494,
526,
561,
584,
587,
610,
658,
665,
674,
696,
698,
823,
916,
946,
954,
1037,
1103,
1274,
1283,
1284,
1318,
1340,
1399,
1405,
1488,
1547,
1577,... | Train |
428 | 5 | Title: Selective Eager Execution on the PolyPath Architecture
Abstract: Control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. In this paper we present Selective Eager Execution (SEE), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. We present the micro-architecture of the PolyPath processor, which is an extension of an aggressive superscalar, out-of-order architecture. The PolyPath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences. Performance results of our detailed execution-driven, pipeline-level simulations show that the SEE concept achieves a potential average performance improvement of 48% on the SPECint95 benchmarks. A realistic implementation with a dynamic branch confidence estimator can improve performance by as much as 36% for the go benchmark, and an average of 14% on SPECint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor. Moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic. | [
184,
302,
432,
433,
598
] | Test |
429 | 3 | Title: Classifiers: A Theoretical and Empirical Study
Abstract: This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision theoretic solution to the learning task. Comparative experiments with the algorithm and the several mature AI and statistical families of tree learning algorithms currently in use show the derived Bayesian algorithm is consistently as good or better, although sometimes at computational cost. Using the same strategy, we can design algorithms for many other supervised and model learning tasks given just a probabilistic representation for the kind of knowledge to be learned. As an illustration, a second learning algorithm is derived for learning Bayesian networks from data. Implications to incremental learning and the use of multiple models are also discussed. | [
29,
1290,
1514
] | Test |
430 | 6 | Title: Irrelevant Features and the Subset Selection Problem
Abstract: We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets. | [
52,
89,
119,
172,
177,
208,
223,
236,
256,
381,
524,
547,
632,
634,
635,
651,
683,
686,
1020,
1165,
1207,
1270,
1284,
1568,
1569,
1617,
1637,
1698,
1792,
2033,
2137,
2197,
2343,
2487,
2557,
2593
] | Test |
431 | 6 | Title: Rule-based Machine Learning Methods for Functional Prediction
Abstract: We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance. | [
156,
1608,
2137
] | Train |
432 | 5 | Title: Limited Dual Path Execution
Abstract: This work presents a hybrid branch predictor scheme that uses a limited form of dual path execution along with dynamic branch prediction to improve execution times. The ability to execute down both paths of a conditional branch enables the branch penalty to be minimized; however, relying exclusively on dual path execution is infeasible because instruction fetch rates far exceed the capability of the pipeline to retire a single branch before others must be processed. By using confidence information, available in the dynamic branch prediction state tables, a limited form of dual path execution becomes feasible. This reduces the burden on the branch predictor by allowing predictions of low confidence to be avoided. In this study we present a new approach to gather branch prediction confidence with little or no overhead, and use this confidence mechanism to determine whether dual path execution or branch prediction should be used. Comparing this hybrid predictor model to the dynamic branch predictor shows a dramatic decrease in misprediction rate, which translates to a reduction in runtime of over 20%. These results imply that dual path execution, which often is thought to be an excessively resource-consuming method, may be a worthy approach if restricted with an appropriate predicting set. | [
165,
184,
302,
428,
433,
598
] | Train |
433 | 5 | Title: Threaded Multiple Path Execution
Abstract: This paper presents Threaded Multi-Path Execution (TME), which exploits existing hardware on a Simultaneous Multi-threading (SMT) processor to speculatively execute multiple paths of execution. When there are fewer threads in an SMT processor than hardware contexts, threaded multi-path execution uses spare contexts to fetch and execute code along the less likely path of hard-to-predict branches. This paper describes the hardware mechanisms needed to enable an SMT processor to efficiently spawn speculative threads for threaded multi-path execution. The Mapping Synchronization Bus is described, which enables the spawning of these multiple paths. Policies are examined for deciding which branches to fork, and for managing competition between primary and alternate path threads for critical resources. Our results show that TME increases the single program performance of an SMT with eight thread contexts by 14%-23% on average, depending on the misprediction penalty, for programs with a high misprediction rate. | [
158,
184,
428,
432
] | Test |
434 | 0 | Title: Computational Learning in Humans and Machines
Abstract: In this paper we review research on machine learning and its relation to computational models of human learning. We focus initially on concept induction, examining five main approaches to this problem, then consider the more complex issue of learning sequential behaviors. After this, we compare the rhetoric that sometimes appears in the machine learning and psychological literature with the growing evidence that different theoretical paradigms typically produce similar results. In response, we suggest that concrete computational models, which currently dominate the field, may be less useful than simulations that operate at a more abstract level. We illustrate this point with an abstract simulation that explains a challenging phenomenon in the area of category learning, and we conclude with some general observations about such abstract models. | [
597,
1339,
2473
] | Train |
435 | 2 | Title: Homology Detection via Family Pairwise Search
Abstract: The function of an unknown biological sequence can often be accurately inferred by identifying sequences homologous to the original sequence. Given a query set of known homologs, there exist at least three general classes of techniques for finding additional homologs: pairwise sequence comparisons, motif analysis, and hidden Markov modeling. Pairwise sequence comparisons are typically employed when only a single query sequence is known. Hidden Markov models (HMMs), on the other hand, are usually trained with sets of more than 100 sequences. Motif-based methods fall in between these two extremes. | [
0,
8,
14,
258,
751
] | Train |
436 | 6 | Title: Pattern Theoretic Knowledge Discovery
Abstract: Future research directions in Knowledge Discovery in Databases (KDD) include the ability to extract an overlying concept relating useful data. Current limitations involve the search complexity to find that concept and what it means to be "useful." The Pattern Theory research crosses over in a natural way to the aforementioned domain. The goal of this paper is threefold. First, we present a new approach to the problem of learning by Discovery and robust pattern finding. Second, we explore the current limitations of a Pattern Theoretic approach as applied to the general KDD problem. Third, we exhibit its performance with experimental results on binary functions, and we compare those results with C4.5. This new approach to learning demonstrates a powerful method for finding patterns in a robust manner. | [
635
] | Train |
437 | 2 | Title: A Gentle Guide to Multiple Alignment
Abstract: Prerequisites. An understanding of the dynamic programming (edit distance) approach to pairwise sequence alignment is useful for parts 1.3, 1.4, and 2. Also, familiarity with the use of Internet resources would be helpful for part 3. For the former, see Chapters 1.1 - 1.3, and for the latter, see Chapter 2 of the Hypertext Book of the GNA-VSNS Biocomputing Course at http://www.techfak.uni-bielefeld.de/bcd/Curric/welcome.html. General Rationale. You will understand why Multiple Alignment is considered a challenging problem, you will study approaches that try to reduce the number of steps needed to calculate the optimal solution, and you will study fast heuristics. In a case study involving immunoglobulin sequences, you will study multiple alignments obtained from WWW servers, recapitulating results from an original paper. Revision History. Version 1.01 on 17 Sep 1995. Expanded Ex.9. Updated Ex.46. Revised Solution Sheet -re- Ex.3+12. Marked more Exercises by "A" (to be submitted to the Instructor). Various minor clarifications in content | [
14
] | Test |
438 | 6 | Title: A System for Induction of Oblique Decision Trees
Abstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. | [
21,
79,
96,
227,
296,
391,
550,
607,
616,
692,
710,
720
] | Train |
439 | 2 | Title: Adaptive tuning of numerical weather prediction models: Randomized GCV in three and four dimensional data assimilation
Abstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. | [
97,
192,
388,
420
] | Validation |
440 | 4 | Title: Hierarchical Explanation-Based Reinforcement Learning
Abstract: Explanation-Based Reinforcement Learning (EBRL) was introduced by Dietterich and Flann as a way of combining the ability of Reinforcement Learning (RL) to learn optimal plans with the generalization ability of Explanation-Based Learning (EBL) (Dietterich & Flann, 1995). We extend this work to domains where the agent must order and achieve a sequence of subgoals in an optimal fashion. Hierarchical EBRL can effectively learn optimal policies in some of these sequential task domains even when the subgoals weakly interact with each other. We also show that when a planner that can achieve the individual subgoals is available, our method converges even faster. | [
367,
414,
562
] | Train |
441 | 0 | Title: BRACE: A Paradigm For the Discretization of Continuously Valued Data
Abstract: Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning. | [
297,
638
] | Validation |
442 | 3 | Title: Searching for dependencies in Bayesian classifiers
Abstract: Naive Bayesian classifiers which make independence assumptions perform remarkably well on some data sets but poorly on others. We explore ways to improve the Bayesian classifier by searching for dependencies among attributes. We propose and evaluate two algorithms for detecting dependencies among attributes and show that the backward sequential elimination and joining algorithm provides the most improvement over the naive Bayesian classifier. The domains on which the most improvement occurs are those domains on which the naive Bayesian classifier is significantly less accurate than a decision tree learner. This suggests that the attributes used in some common databases are not independent conditioned on the class, and that the violations of the independence assumption that affect the accuracy of the classifier can be detected from training data. The Bayesian classifier (Duda & Hart, 1973) is a probabilistic method for classification. It can be used to determine the probability that an example j belongs to class C_i given values of attributes of an example represented as a set of n nominally-valued attribute-value pairs of the form A_k = V_kj. The conditional probabilities P(A_k = V_kj | C_i) may be estimated from the training data. To determine the most likely class of a test example, the probability of each class is computed with Equation 1. A classifier created in this manner is sometimes called a simple (Langley, 1993) or naive (Kononenko, 1990) Bayesian classifier. One important evaluation metric for machine learning methods is the predictive accuracy on unseen examples. This is measured by randomly selecting a subset of the examples in a database to use as training examples and reserving the remainder to be used as test examples. In the case of the simple Bayesian classifier, the training examples are used to estimate probabilities, and Equation 1 is then used to classify the test examples. | [
401,
635,
1908
] | Train |
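Row 442's abstract walks through the simple Bayesian classifier's scoring rule (its Equation 1): score each class C_i by P(C_i) times the product over attributes of P(A_k = V_kj | C_i). A generic log-space sketch of that computation (illustrative only; the probability tables and their estimation from training counts are assumed given):

```python
import math

def simple_bayes_posteriors(example, priors, cond_probs):
    """Score classes by P(C_i) * prod_k P(A_k = v_k | C_i), then normalize.

    example: {attribute: value}; priors: {class: P(C_i)};
    cond_probs: {(class, attribute, value): P(A_k = v | C_i)},
    all estimated beforehand from training-data counts.
    Log-space accumulation avoids floating-point underflow.
    """
    log_scores = {}
    for c, prior in priors.items():
        s = math.log(prior)
        for attr, value in example.items():
            s += math.log(cond_probs[(c, attr, value)])
        log_scores[c] = s
    top = max(log_scores.values())
    unnorm = {c: math.exp(s - top) for c, s in log_scores.items()}
    z = sum(unnorm.values())
    return {c: u / z for c, u in unnorm.items()}  # most likely class = argmax
```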
443 | 2 | Title: Parameterization studies for the SAM and HMMER methods of hidden Markov model generation
Abstract: Multiple sequence alignment of distantly related viral proteins remains a challenge to all currently available alignment methods. The hidden Markov model approach offers a new, flexible method for the generation of multiple sequence alignments. The results of studies attempting to infer appropriate parameter constraints for the generation of de novo HMMs for globin, kinase, aspartic acid protease, and ribonuclease H sequences by both the SAM and HMMER methods are described. | [
14
] | Train |
444 | 2 | Title: Fools Gold: Extracting Finite State Machines From Recurrent Network Dynamics
Abstract: Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network, the next step is to understand the information processing carried out by the network. Some researchers (Giles et al., 1992; Watrous & Kuhn, 1992; Cleeremans et al., 1989) have resorted to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes two conditions, sensitivity to initial conditions and frivolous computational explanations due to discrete measurements (Kolen & Pollack, 1993), which allow these extraction methods to return illusionary finite state descriptions. | [
144,
753
] | Train |
445 | 0 | Title: Bias and the Probability of Generalization
Abstract: In order to be useful, a learning algorithm must be able to generalize well when faced with inputs not previously presented to the system. A bias is necessary for any generalization, and as shown by several researchers in recent years, no bias can lead to strictly better generalization than any other when summed over all possible functions or applications. This paper provides examples to illustrate this fact, but also explains how a bias or learning algorithm can be better than another in practice when the probability of the occurrence of functions is taken into account. It shows how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization, and identifies several of the conditions that should be considered when attempting to select an appropriate bias for a particular problem. | [
318,
690
] | Validation |
446 | 4 | Title: H-learning: A Reinforcement Learning Method to Optimize Undiscounted Average Reward
Abstract: In this paper, we introduce a model-based reinforcement learning method called H-learning, which optimizes undiscounted average reward. We compare it with three other reinforcement learning methods in the domain of scheduling Automatic Guided Vehicles, transportation robots used in modern manufacturing plants and facilities. The four methods differ along two dimensions. They are either model-based or model-free, and optimize discounted total reward or undiscounted average reward. Our experimental results indicate that H-learning is more robust with respect to changes in the domain parameters, and in many cases, converges in fewer steps to better average reward per time step than all the other methods. An added advantage is that unlike the other methods it does not have any parameters to tune. | [
552,
554
] | Train |
447 | 2 | Title: A Smooth Converse Lyapunov Theorem for Robust Stability
Abstract: This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) allows arbitrary bounded time-varying parameters in the system description, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets. | [
630,
1471,
1501
] | Test |
448 | 1 | Title: Grounding Robotic Control with Genetic Neural Networks
Abstract: Technical Report AI94-223 May 1994 Abstract An important but often neglected problem in the field of Artificial Intelligence is that of grounding systems in their environment such that the representations they manipulate have inherent meaning for the system. Since humans rely so heavily on semantics, it seems likely that the grounding is crucial to the development of truly intelligent behavior. This study investigates the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. Both the topology and weights of the neural networks are optimized through genetic algorithms. Although such comprehensive optimization is difficult, the empirical evidence gathered here shows that the method is not only tractable but quite fruitful. In the experiments, the agents evolved a wall-following control strategy and were able to transfer it to novel environments. Their behavior suggests that they were also learning to build cognitive maps. | [
163,
191
] | Validation |
449 | 0 | Title: Correcting Imperfect Domain Theories: A Knowledge-Level Analysis
Abstract: Explanation-Based Learning [Mitchell et al., 1986; DeJong and Mooney, 1986] has shown promise as a powerful analytical learning technique. However, EBL is severely hampered by the requirement of a complete and correct domain theory for successful learning to occur. Clearly, in non-trivial domains, developing such a domain theory is a nearly impossible task. Therefore, much research has been devoted to understanding how an imperfect domain theory can be corrected and extended during system performance. In this paper, we present a characterization of this problem, and use it to analyze past research in the area. Past characterizations of the problem (e.g, [Mitchell et al., 1986; Rajamoney and DeJong, 1987]) have viewed the types of performance errors caused by a faulty domain theory as primary. In contrast, we focus primarily on the types of knowledge deficiencies present in the theory, and from these derive the types of performance errors that can result. Correcting the theory can be viewed as a search through the space of possible domain theories, with a variety of knowledge sources that can be used to guide the search. We examine the types of knowledge used by a variety of past systems for this purpose. The hope is that this analysis will indicate the need for a "universal weak method" of domain theory correction, in which different sources of knowledge for theory correction can be freely and flexibly combined. | [
479,
566,
638,
1539
] | Test |
450 | 3 | Title: Mapping Bayesian Networks to Boltzmann Machines
Abstract: We study the task of finding a maximum a posteriori (MAP) instantiation of Bayesian network variables, given a partial value assignment as an initial constraint. This problem is known to be NP-hard, so we concentrate on a stochastic approximation algorithm, simulated annealing. This stochastic algorithm can be realized as a sequential process on the set of Bayesian network variables, where only one variable is allowed to change at a time. Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Boltzmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task. | [
646,
954,
2558
] | Validation |
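The row above contrasts sequential simulated annealing with its parallel Boltzmann-machine realization. Below is a sketch of the sequential baseline only, under assumed interfaces: `log_score` returning the log of the unnormalized joint probability (a sum of log CPT entries), a `clamp` dict of evidence variables, and a linear cooling schedule are all illustrative choices, not the paper's.

```python
import math
import random

def anneal_map(variables, domains, log_score, clamp, steps=20000, t0=2.0, t_min=0.05):
    """Sketch of sequential simulated annealing for a MAP assignment in a
    Bayesian network. Interfaces and schedule are illustrative assumptions."""
    state = {v: random.choice(domains[v]) for v in variables}
    state.update(clamp)                               # evidence stays fixed
    cur = log_score(state)
    best, best_score = dict(state), cur
    free = [v for v in variables if v not in clamp]
    for k in range(steps):
        t = max(t_min, t0 * (1 - k / steps))          # linear cooling schedule
        v = random.choice(free)                       # change one variable at a time
        old = state[v]
        state[v] = random.choice([x for x in domains[v] if x != old] or [old])
        new = log_score(state)
        # Metropolis acceptance: always take improvements, sometimes take losses.
        if new >= cur or random.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best_score:
                best, best_score = dict(state), cur
        else:
            state[v] = old                            # reject: undo the flip
    return best
```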
451 | 4 | Title: Parameterized Heuristics for Intelligent Adaptive Network Routing in Large Communication Networks
Abstract: Parameterized heuristics offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. This paper describes a framework for heuristic routing in large networks. The effectiveness of the heuristic routing mechanism upon which Quo Vadis is based is described as part of a simulation study within a network with grid topology. A formal analysis of the underlying principles is presented through the incremental design of a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. This paper carefully derives the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identifies the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research. | [
552,
2537,
2666
] | Test |
452 | 3 | Title: Principal Curve Clustering With Noise
Abstract: Technical Report 317, Department of Statistics, University of Washington. Derek Stanford is Graduate Research Assistant and Adrian E. Raftery is Professor of Statistics and Sociology, both at the Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, USA. E-mail: stanford@stat.washington.edu and raftery@stat.washington.edu. Web: http://www.stat.washington.edu/raftery. This research was supported by ONR grants N00014-96-1-0192 and N00014-96-1-0330. The authors are grateful to Simon Byers, Gilles Celeux and Christian Posse for helpful discussions. | [
117,
347,
513
] | Train |
453 | 6 | Title: How to Use Expert Advice (Extended Abstract)
Abstract: We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We give implications of this result on the performance of batch learning algorithms in a PAC setting which improve on the best results currently known in this context. We also extend our analysis to the case in which log loss is used instead of the expected number of mistakes. | [
9,
514,
549,
591,
706,
876,
1025,
1124,
1269,
1358,
1449,
1566,
1661,
1712,
2034,
2059,
2092,
2098,
2099,
2156,
2354,
2455,
2618
] | Train |
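The expert-advice abstract above is concrete enough for a small worked sketch. This is a generic exponentially weighted forecaster, assuming probability-valued expert predictions and absolute loss; the fixed `eta` is illustrative, whereas the paper's analysis tunes the learning rate against the best expert's loss to obtain the square-root regret bound with matching constants.

```python
import math
import random

def weighted_prediction(expert_preds, outcomes, eta=0.5):
    """Sketch of an exponentially weighted forecaster over binary experts.

    expert_preds[t][i]: expert i's probability that bit t is 1.
    outcomes[t]: the true bit, revealed after each round.
    Returns the forecaster's total number of mistakes.
    """
    n = len(expert_preds[0])
    w = [1.0] * n                                         # uniform initial weights
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(w)
        p = sum(wi * pi for wi, pi in zip(w, preds)) / total  # weighted vote
        guess = 1 if random.random() < p else 0               # randomized prediction
        mistakes += int(guess != y)
        # Multiplicatively penalize each expert by its absolute loss this round.
        w = [wi * math.exp(-eta * abs(pi - y)) for wi, pi in zip(w, preds)]
    return mistakes
```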
454 | 0 | Title: Towards Formalizations in Case-Based Reasoning for Synthesis
Abstract: This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (Cbr) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to incorporate formally generalization, abstraction, geometrical transformation, and their combinations into Cbr. | [
183,
539,
541,
1368
] | Train |
455 | 4 | Title: Learning from an Automated Training Agent
Abstract: A learning agent employing reinforcement learning is hindered because it only receives the critic's sparse and weakly informative training information. We present an approach in which an automated training agent may also provide occasional instruction to the learner in the form of actions for the learner to perform. The learner has access to both the critic's feedback and the trainer's instruction. In the experiments, we vary the level of the trainer's interaction with the learner, from allowing the trainer to instruct the learner at almost every time step, to not allowing the trainer to respond at all. We also vary a parameter that controls how the learner incorporates the trainer's actions. The results show significant reductions in the average number of training trials necessary to learn to perform the task. | [
374,
552
] | Validation |
456 | 6 | Title: Boosting a weak learning algorithm by majority (to be published in Information and Computation)
Abstract: We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances. | [
25,
549,
569,
672,
1296,
1560,
1748,
2099,
2455,
2653
] | Train |
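Since the row above describes the boosting construction in prose, here is a schematic booster: train weak hypotheses on reweighted samples and combine them by unweighted majority vote. The exponential reweighting below is a simplification; the actual boosting-by-majority weights come from a binomial recurrence in the number of rounds and the weak-learning edge, and `weak_learn` is an assumed callable returning a hypothesis h(x) in {0, 1}.

```python
import math
import random

def boost_by_majority(weak_learn, examples, labels, rounds=25):
    """Schematic majority-vote booster; reweighting scheme is illustrative."""
    m = len(examples)
    w = [1.0 / m] * m
    hyps = []
    for _ in range(rounds):
        # Draw a training sample according to the current example weights.
        sample = random.choices(range(m), weights=w, k=m)
        h = weak_learn([examples[i] for i in sample], [labels[i] for i in sample])
        hyps.append(h)
        # Upweight the examples this hypothesis gets wrong, downweight the rest.
        w = [wi * (math.e if h(examples[i]) != labels[i] else 1.0 / math.e)
             for i, wi in enumerate(w)]
        z = sum(w)
        w = [wi / z for wi in w]                       # renormalize
    def majority(x):
        votes = sum(1 if h(x) == 1 else -1 for h in hyps)
        return 1 if votes >= 0 else 0                  # unweighted majority vote
    return majority
```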
457 | 0 | Title: A Computational Model of Ratio Decidendi
Abstract: This paper proposes a model of ratio decidendi as a justification structure consisting of a series of reasoning steps, some of which relate abstract predicates to other abstract predicates and some of which relate abstract predicates to specific facts. This model satisfies an important set of characteristics of ratio decidendi identified from the jurisprudential literature. In particular, the model shows how the theory under which a case is decided controls its precedential effect. By contrast, a purely exemplar-based model of ratio decidendi fails to account for the dependency of precedential effect on the theory of decision. | [
166,
649
] | Train |
458 | 2 | Title: Quantifying neighbourhood preservation in topographic mappings
Abstract: Mappings that preserve neighbourhood relationships are important in many contexts, from neurobiology to multivariate data analysis. It is important to be clear about precisely what is meant by preserving neighbourhoods. At least three issues have to be addressed: how neighbourhoods are defined, how a perfectly neighbourhood preserving mapping is defined, and how an objective function for measuring discrepancies from perfect neighbourhood preservation is defined. We review several standard methods, and using a simple example mapping problem show that the different assumptions of each lead to non-trivially different answers. We also introduce a particular measure for topographic distortion, which has the form of a quadratic assignment problem. Many previous methods are closely related to this measure, which thus serves to unify disparate approaches. | [
745,
747
] | Train |
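The abstract above says its distortion measure has the form of a quadratic assignment problem: a sum over point pairs of a term coupling input-space distance with map-space distance. The sketch below shows that general form with illustrative choices (Gaussian input weighting, squared map distance); the paper's specific functions differ.

```python
import numpy as np

def qap_distortion(d_in, d_out, sigma=1.0):
    """Quadratic-assignment-style topographic distortion: sum over all pairs
    of f(input distance) * g(output distance). f and g here are illustrative.

    d_in[i, j], d_out[i, j]: pairwise distances before and after the mapping.
    """
    f = np.exp(-d_in ** 2 / (2 * sigma ** 2))   # neighbourhood weight in input space
    g = d_out ** 2                              # penalty grows with map distance
    return float(np.sum(f * g))
```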
459 | 6 | Title: Pac Learning, Noise, and Geometry
Abstract: This paper describes the probably approximately correct model of concept learning, paying special attention to the case where instances are points in Euclidean n-space. The problem of learning from noisy training data is also studied. | [
109,
130,
267,
640,
1705
] | Train |
460 | 4 | Title: Learning Roles: Behavioral Diversity in Robot Teams
Abstract: This paper describes research investigating behavioral specialization in learning robot teams. Each agent is provided a common set of skills (motor schema-based behavioral assemblages) from which it builds a task-achieving strategy using reinforcement learning. The agents learn individually to activate particular behavioral assemblages given their current situation and a reward signal. The experiments, conducted in robot soccer simulations, evaluate the agents in terms of performance, policy convergence, and behavioral diversity. The results show that in many cases, robots will automatically diversify by choosing heterogeneous behaviors. The degree of diversification and the performance of the team depend on the reward structure. When the entire team is jointly rewarded or penalized (global reinforcement), teams tend towards heterogeneous behavior. When agents are provided feedback individually (local reinforcement), they converge to identical policies. | [
148,
281
] | Test |
461 | 2 | Title: Product Unit Learning
Abstract: Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason for this is that one encounters some problems when using standard backpropagation to train networks containing these units. This report examines these problems, and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units has more local minima than that of corresponding networks with summation units. For this reason, a combination of local and global training algorithms was found to provide the most reliable convergence. We then investigate how `hints' can be added to the training algorithm. By extracting a common frequency from the input weights, and training this frequency separately, we show that convergence can be accelerated. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) [13] system. Using these candidate units resulted in smaller networks which trained faster than when any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units with four different nonlinear activation functions, which compete for addition to the network, was used. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen above any of the other transfer functions. | [
427
] | Train |
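Since the row above describes product units only in prose, here is a minimal sketch of the unit itself. The positive-input restriction and the gradient helper are my simplifications; handling negative inputs is where the training difficulties discussed in the abstract arise.

```python
import numpy as np

def product_unit(x, w):
    """A product unit computes prod_i x_i ** w_i = exp(sum_i w_i * log x_i),
    letting the network learn higher-order input combinations directly.
    This sketch assumes strictly positive inputs; negative inputs require
    a complex-logarithm treatment in the original formulation."""
    return np.exp(np.dot(w, np.log(x)))

def product_unit_grad(x, w):
    """Gradient with respect to the exponents w: d/dw_i = log(x_i) * output."""
    return np.log(x) * product_unit(x, w)

x = np.array([2.0, 0.5, 3.0])
w = np.array([1.0, -1.0, 0.5])
print(product_unit(x, w))   # 2.0**1 * 0.5**-1 * 3.0**0.5, approximately 6.93
```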
462 | 2 | Title: Draft Symbolic Representation of Neural Networks
Abstract: An early and shorter version of this paper has been accepted for presentation at IJCAI'95. | [
187,
1644,
2582
] | Test |
463 | 4 | Title: A Cognitive Model of Learning to Navigate
Abstract: Our goal is to develop a cognitive model of how humans acquire skills on complex cognitive tasks. We are pursuing this goal by designing computational architectures for the NRL Navigation task, which requires competent sensorimotor coordination. In this paper, we analyze the NRL Navigation task in depth. We then use data from experiments with human subjects learning this task to guide us in constructing a cognitive model of skill acquisition for the task. Verbal protocol data augments the black box view provided by execution traces of inputs and outputs. Computational experiments allow us to explore a space of alternative architectures for the task, guided by the quality of fit to human performance data. | [
3,
333,
477,
483,
564
] | Validation |
464 | 3 | Title: On the Logic of Iterated Belief Revision
Abstract: We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by proposing four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Contrary to the AGM framework, the proposed postulates characterize belief revision as a process which may depend on elements of an epistemic state that are not necessarily captured by a belief set. We also show that a simple modification to the AGM framework can allow belief revision to be a function of epistemic states. We establish a model-based representation theorem which characterizes the proposed postulates and constrains, in turn, the way in which entrenchment orderings may be transformed under iterated belief revision. | [
275,
279,
467,
573
] | Validation |
465 | 4 | Title: Strategy Learning with Multilayer Connectionist Representations
Abstract: Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. | [
85,
103,
118,
294,
466,
523,
565,
566,
2027,
2368,
2672
] | Train |
466 | 4 | Title: On the Computational Economics of Reinforcement Learning
Abstract: Following terminology used in adaptive control, we distinguish between indirect learning methods, which learn explicit models of the dynamic structure of the system to be controlled, and direct learning methods, which do not. We compare an existing indirect method, which uses a conventional dynamic programming algorithm, with a closely related direct reinforcement learning method by applying both methods to an infinite horizon Markov decision problem with unknown state-transition probabilities. The simulations show that although the direct method requires much less space and dramatically less computation per control action, its learning ability in this task is superior to, or compares favorably with, that of the more complex indirect method. Although these results do not address how the methods' performances compare as problems become more difficult, they suggest that given a fixed amount of computational power available per control action, it may be better to use a direct reinforcement learning method augmented with indirect techniques than to devote all available resources to a computationally costly indirect method. Comprehensive answers to the questions raised by this study depend on many factors making up the economic context of the computation. | [
16,
294,
465,
552,
565,
566
] | Validation |
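The direct/indirect distinction in the abstract above maps onto a simple code contrast. Both fragments are generic sketches, not the paper's experimental setup: `Q` is a dictionary of action values, and `model[(s, a)]` is assumed to map (next_state, reward) pairs to estimated probabilities.

```python
from collections import defaultdict

# Direct method: Q-learning stores only action values; one cheap update
# per control action, with no explicit model of transition probabilities.
def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.95):
    target = r + gamma * max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Indirect method: learn a model, then run dynamic-programming sweeps.
# Each sweep touches every state: far more computation per control action,
# in exchange for squeezing more information out of the same experience.
def dp_sweep(V, model, states, actions, gamma=0.95):
    for s in states:
        V[s] = max(
            sum(p * (r + gamma * V[s2]) for (s2, r), p in model[(s, a)].items())
            for a in actions
        )

# Usage sketch for the direct method:
Q = defaultdict(float)
q_update(Q, s='s0', a='left', r=1.0, s2='s1', actions=['left', 'right'])
```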
467 | 3 | Title: A Knowledge-Based Framework for Belief Change Part I: Foundations
Abstract: We propose a general framework in which to study belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes φ if he knows that φ is true in all the worlds he considers most plausible. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin [HF89] for modeling knowledge in multi-agent systems. We show that our framework is quite expressive and lets us model in a natural way a number of different scenarios for belief change. For example, we show how we can capture an analogue to prior probabilities, which can be updated by "conditioning". In a related paper, we show how the two best studied scenarios, belief revision and belief update, fit into the framework. | [
270,
276,
342,
464,
495,
729,
2000,
2016
] | Train |
468 | 3 | Title: Adaptive Markov Chain Monte Carlo through Regeneration
Abstract: Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a Markov chain having π as its stationary distribution. For computational efficiency, the Markov chain should be rapidly mixing. This can sometimes be achieved only by careful design of the transition kernel of the chain, on the basis of a detailed preliminary exploratory analysis of π. An alternative approach might be to allow the transition kernel to adapt whenever new features of π are encountered during the MCMC run. However, if such adaptation occurs infinitely often, the stationary distribution of the chain may be disturbed. We describe a framework, based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often, but which does not disturb the stationary distribution of the chain or the consistency of sample-path averages. Key Words: Adaptive method; Bayesian inference; Gibbs sampling; Markov chain Monte Carlo; | [
182,
491,
896,
1713,
2377
] | Train |
469 | 2 | Title: Interpolation Models with Multiple
Abstract: A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all these attributes of the interpolant: its `complexity', `flexibility', `smoothness', `characteristic scale length', and `characteristic amplitude'. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of `conditional convexity' when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error. | [
78,
160,
214
] | Test |
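As a worked restatement of the single-(α, β) setup that the abstract above generalizes, a standard regularized interpolation objective looks like the following. The quadratic forms are illustrative choices on my part; the paper's contribution is to let α vary spatially.

```latex
% One flavour of simplicity: a single alpha and beta.
M(f) \;=\; \beta \sum_{i} \bigl(t_i - f(x_i)\bigr)^{2}
      \;+\; \alpha \int \bigl\lVert \nabla f(x) \bigr\rVert^{2}\, dx
% The ratio alpha/beta alone sets smoothness and amplitude everywhere.
% The spatially varying models of the paper replace alpha with a field:
M(f) \;=\; \beta \sum_{i} \bigl(t_i - f(x_i)\bigr)^{2}
      \;+\; \int \alpha(x)\, \bigl\lVert \nabla f(x) \bigr\rVert^{2}\, dx
```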
470 | 0 | Title: What Daimler-Benz has learned as an industrial partner from the Machine Learning Project StatLog
Abstract: The author of this paper was co-ordinator of the Machine Learning project StatLog during 1990-1993. The project was supported financially by the European Community. The main aim of StatLog was to evaluate different learning algorithms on real industrial and commercial applications. As an industrial partner and contributor, Daimler-Benz introduced several applications to StatLog, among them fault diagnosis, letter and digit recognition, credit scoring, and prediction of the number of registered trucks. We have learned many lessons from this project, which have affected our application-oriented research in the field of Machine Learning (ML) at Daimler-Benz. In particular, we have found that more research is necessary to prepare ML algorithms to handle real industrial and commercial applications. In this paper we briefly describe the Daimler-Benz applications in StatLog, discuss the shortcomings of the applied ML algorithms, and finally outline the fields where we think further research is necessary. | [
478
] | Test |
471 | 4 | Title: Improving Elevator Performance Using Reinforcement Learning
Abstract: This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility. | [
2,
103,
295,
621,
1045,
1632,
1859
] | Validation |
472 | 4 | Title: Category: Control, Navigation and Planning. Key words: Reinforcement learning, Exploration, Hidden state. Prefer oral presentation.
Abstract: This paper presents Fringe Exploration, a technique for efficient exploration in partially observable domains. The key idea (applicable to many exploration techniques) is to keep statistics in the space of possible short-term memories, instead of in the agent's current state space. Experimental results in a partially observable maze and in a difficult driving task with visual routines show dramatic performance improvements. | [
552,
566,
650,
1006
] | Validation |
473 | 4 | Title: Improving Policies without Measuring Merits
Abstract: Performing policy iteration in dynamic programming should only require knowledge of relative rather than absolute measures of the utility of actions: what Baird (1993) calls the advantages of actions at states. Nevertheless, existing methods in dynamic programming (including Baird's) compute some form of absolute utility function. For smooth problems, advantages satisfy two differential consistency conditions (including the requirement that they be free of curl), and we show that enforcing these can lead to appropriate policy improvement solely in terms of advantages. | [
552,
1459
] | Train |
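To make the abstract's point concrete in symbols (my notation, following Baird's usual definition rather than the paper's exact one): advantages are relative utilities, and greedy policy improvement never needs the absolute scale.

```latex
% Advantage of action a in state s, relative to the best action there:
A(s,a) \;=\; Q(s,a) \;-\; \max_{a'} Q(s,a') \;\le\; 0
% Policy improvement needs only the argmax, i.e. relative information:
\pi'(s) \;=\; \operatorname*{arg\,max}_{a} A(s,a)
        \;=\; \operatorname*{arg\,max}_{a} Q(s,a)
% Adding any function of s alone to Q leaves A, and hence pi', unchanged.
```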
474 | 2 | Title: Protein Structure Prediction: Selecting Salient Features from Large Candidate Pools
Abstract: We introduce a parallel approach, "DT-Select," for selecting features used by inductive learning algorithms to predict protein secondary structure. DT-Select is able to rapidly choose small, nonredundant feature sets from pools containing hundreds of thousands of potentially useful features. It does this by building a decision tree, using features from the pool, that classifies a set of training examples. The features included in the tree provide a compact description of the training data and are thus suitable for use as inputs to other inductive learning algorithms. Empirical experiments in the protein secondary-structure task, in which sets of complex features chosen by DT-Select are used to augment a standard artificial neural network representation, yield surprisingly little performance gain, even though features are selected from very large feature pools. We discuss some possible reasons for this result. | [
635,
698
] | Train |
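The DT-Select idea in the row above (grow one tree over a huge candidate pool, then keep exactly the features the tree uses) is easy to sketch with scikit-learn. Everything below is illustrative: the size cap via `max_leaf_nodes`, the random data, and the three-class labels standing in for helix/strand/coil are my assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dt_select(X_pool, y, max_features=30):
    """X_pool: (n_examples, n_candidate_features) matrix of the full pool.
    Returns indices of the features the fitted tree actually split on."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_features + 1, random_state=0)
    tree.fit(X_pool, y)
    used = np.unique(tree.tree_.feature)
    used = used[used >= 0]            # negative entries mark leaf nodes; drop them
    return used                       # small, nonredundant feature subset

X_pool = np.random.rand(200, 10000)   # a large candidate-feature pool
y = np.random.randint(0, 3, 200)      # placeholder class labels
print(dt_select(X_pool, y))
```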
475 | 1 | Title: Basic PSugal: an extension package for the development of Distributed Genetic Algorithms
Abstract: This paper presents the extension package developed by the author at the Faculty of Sciences and Technology of the New University of Lisbon, designed for experimentation with Coarse-Grained Distributed Genetic Algorithms (DGA). The package was implemented as an extension to the Basic Sugal system, developed by Andrew Hunter at the University of Sunderland, U.K., which is primarily intended to be used in the research of Sequential or Serial Genetic Algorithms (SGA). | [
168
] | Train |
476 | 2 | Title: A self-organizing multiple-view representation of 3D objects
Abstract: We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects. | [
605,
1056,
1091
] | Validation |
477 | 4 | Title: Forward models: Supervised learning with a distal teacher
Abstract: Internal models of the environment have an important role to play in adaptive systems in general and are of particular importance for the supervised learning paradigm. In this paper we demonstrate that certain classical problems associated with the notion of the "teacher" in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system. In particular, we show how supervised learning algorithms can be utilized in cases in which an unknown dynamical system intervenes between actions and desired outcomes. Our approach applies to any supervised learning algorithm that is capable of learning in multi-layer networks. *This paper is a revised version of MIT Center for Cognitive Science Occasional Paper #40. We wish to thank Michael Mozer, Andrew Barto, Robert Jacobs, Eric Loeb, and James McClelland for helpful comments on the manuscript. This project was supported in part by BRSG 2 S07 RR07047-23 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health, by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by a grant from the Human Frontier Science Program, and by grant N00014-90-J-1942 awarded by the Office of Naval Research. | [
229,
294,
333,
427,
463,
565,
566,
745,
1645,
1766,
2409,
2658
] | Train |
478 | 0 | Title: An Improved Algorithm for Incremental Induction of Decision Trees
Abstract: Technical Report 94-07, February 7, 1994 (updated April 25, 1994). This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. | [
52,
96,
215,
218,
227,
274,
303,
396,
470,
479,
497,
520,
523,
565,
568,
618,
754
] | Train |
479 | 0 | Title: Learning physical descriptions from functional definitions, examples, Learning from examples: The effect of different conceptual
Abstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. | [
92,
136,
147,
418,
449,
478,
649,
1354,
1627,
2300,
2636
] | Train |
480 | 2 | Title: Modelling the Manifolds of Images of Handwritten Digits
Abstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. | [
257,
667,
2270,
2570
] | Train |
481 | 6 | Title: The Weighted Majority Algorithm
Abstract: * This research was primarily conducted while this author was at the University of California at Santa Cruz with support from ONR grant N00014-86-K-0454, and at Harvard University, supported by ONR grant N00014-85-K-0445 and DARPA grant AFOSR-89-0506. Current address: NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. E-mail address: nickl@research.nj.nec.com. † Supported by ONR grants N00014-86-K-0454 and N00014-91-J-1162. Part of this research was done while this author was on sabbatical at Aiken Computation Laboratory, Harvard, with partial support from ONR grants N00014-85-K-0445 and N00014-86-K-0454. Address: Department of Computer Science, University of California at Santa Cruz. E-mail address: manfred@cs.ucsc.edu. | [
9
] | Train |
482 | 0 | Title: Simple Selection of Utile Control Rules in Speedup Learning
Abstract: Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver. | [
13,
251,
551,
578
] | Train |
483 | 4 | Title: The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces
Abstract: Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game-theory and computational geometry to efficiently and adaptively concentrate high resolution only on critical areas. The current version of the algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces. Future versions will be designed to find a solution that optimizes a real-valued criterion. Many simulated problems have been tested, ranging from two-dimensional to nine-dimensional state-spaces, including mazes, path planning, non-linear dynamics, and planar snake robots in restricted spaces. In all cases, a good solution is found in less than ten trials and a few minutes. | [
277,
294,
367,
463,
552,
566,
650,
749,
933
] | Train |
484 | 3 | Title: Comparing Predictive Inference Methods for Discrete Domains
Abstract: Predictive inference is seen here as the process of determining the predictive distribution of a discrete variable, given a data set of training examples and the values for the other problem domain variables. We consider three approaches for computing this predictive distribution, and assume that the joint probability distribution for the variables belongs to a set of distributions determined by a set of parametric models. In the simplest case, the predictive distribution is computed by using the model with the maximum a posteriori (MAP) posterior probability. In the evidence approach, the predictive distribution is obtained by averaging over all the individual models in the model family. In the third case, we define the predictive distribution by using Rissanen's new definition of stochastic complexity. Our experiments performed with the family of Naive Bayes models suggest that when using all the data available, the stochastic complexity approach produces the most accurate predictions in the log-score sense. However, when the amount of available training data is decreased, the evidence approach clearly outperforms the two other approaches. The MAP predictive distribution is clearly inferior in the log-score sense to the two more sophisticated approaches, but for the 0/1-score the MAP approach may still in some cases produce the best results. | [
641,
642,
1574
] | Validation |
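The three predictive approaches in the row above can be contrasted on the smallest possible example: one Bernoulli variable with a uniform Beta(1, 1) prior (my toy setup, not the paper's Naive Bayes experiments). The evidence approach averages over all parameters and stays hedged when data are scarce, which is exactly the regime where the abstract reports it winning.

```python
def map_predictive(heads, tails):
    # Plug in the single most probable parameter; with a uniform prior the
    # MAP estimate coincides with the maximum-likelihood estimate.
    return heads / (heads + tails) if heads + tails > 0 else 0.5

def evidence_predictive(heads, tails):
    # Average over all parameters: the Beta-Bernoulli marginal gives
    # Laplace's rule of succession.
    return (heads + 1) / (heads + tails + 2)

print(map_predictive(2, 0))       # 1.0: overconfident after only 2 examples
print(evidence_predictive(2, 0))  # 0.75: hedged by averaging over models
```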
485 | 3 | Title: Bayesian Case-Based Reasoning with Neural Networks
Abstract: Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In this paper we introduce a neural network architecture for efficient case-based reasoning. We show how a rigorous Bayesian probability propagation algorithm can be implemented as a feedforward neural network and adapted for CBR. In our approach the efficient indexing problem of CBR is naturally implemented by the parallel architecture, and heuristic matching is replaced by a probability metric. This allows our CBR to perform theoretically sound Bayesian reasoning. We also show how the probability propagation actually offers a solution to the adaptation problem in a very natural way. | [
711,
1838,
2380,
2514,
2561
] | Train |
486 | 0 | Title: CASE-BASED CREATIVE DESIGN
Abstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. | [
64,
231,
284,
285,
1148,
1278,
1597,
2276
] | Train |
487 | 2 | Title: Language as a dynamical system
Abstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. | [
291,
538
] | Train |
488 | 6 | Title: Prediction, Learning, Uniform Convergence, and Scale-sensitive Dimensions
Abstract: We present a new general-purpose algorithm for learning classes of [0; 1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each ε > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0; 1]-valued functions to be agnostically learnable to within ε, and to be an ε-uniform Glivenko-Cantelli class. | [
109,
549,
591,
2053
] | Train |
489 | 2 | Title: Multiple Network Systems (Minos) Modules: Task Division and Module Discrimination
Abstract: It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent systems. These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models, the second step is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second point, namely: processing reliability and hiding of wetware details. Presented is an architecture for a type of neural expert module, named an Authority. An Authority consists of a number of Minos modules. Each of the Minos modules in an Authority has the same processing capabilities, but varies with respect to its particular specialization to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest confidence is believed, and it is the answer and confidence quotient that are transmitted to other levels in a system hierarchy. | [
46,
238,
301,
747,
1815,
2670
] | Train |
490 | 4 | Title: Learning policies for partially observable environments: Scaling up
Abstract: Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable to find satisfactory policies for problems with more than a dozen states. After a brief review of pomdp's, this paper discusses several simple solution methods and shows that all are capable of finding near-optimal policies for a selection of extremely small pomdp's taken from the learning literature. In contrast, we show that none are able to solve a slightly larger and noisier problem based on robot navigation. We find that a combination of two novel approaches performs well on these problems and suggest methods for scaling to even larger and more complicated domains. | [
5,
6,
45,
213,
220,
492,
734
] | Validation |
491 | 3 | Title: Self Regenerative Markov Chain Monte Carlo
Abstract: We propose a new method of construction of Markov chains with a given stationary distribution π. This method is based on construction of an auxiliary chain with some other stationary distribution and picking elements of this auxiliary chain a suitable number of times. The proposed method has many advantages over its rivals. It is easy to implement; it provides a simple analysis; it can be faster and more efficient than the currently available techniques and it can also be adapted during the course of the simulation. We make theoretical and numerical comparisons of the characteristics of the proposed algorithm with some other MCMC techniques. | [
182,
468,
2318,
2377
] | Train |
492 | 4 | Title: Approximating Optimal Policies for Partially Observable Stochastic Domains
Abstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case, where state information is only partially observable (Partially Observable Markov Decision Processes, POMDPs), has received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. | [
45,
213,
490,
565,
734,
1186,
1741,
2323,
2419
] | Validation |
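Underlying every method in the row above is the exact belief-state update, which is compact enough to show. Array shapes and names here are assumptions for the sketch, not the paper's notation.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Exact POMDP belief update:
       b'(s') ∝ O[a][s', o] * sum_s b(s) * T[a][s, s'].
    b: length-|S| probability vector; T[a]: |S| x |S| transition matrix for
    action a; O[a]: |S| x |Obs| observation matrix for the resulting state.
    """
    pred = b @ T[a]              # predict: distribution over next states
    new_b = pred * O[a][:, o]    # correct: weight by observation likelihood
    return new_b / new_b.sum()   # renormalize (the sum is P(o | b, a) > 0)

# Tiny usage: two states, two actions, two observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.8, 0.2], [0.3, 0.7]]])
print(belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, O=O))
```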
493 | 1 | Title: Parallel Search for Neural Network Under the guidance of
Abstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. | [
427
] | Validation |