node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k items) | mask (4 classes)
|---|---|---|---|---|
494 | 2 | Title: Connectionist Modeling of the Fast Mapping Phenomenon
Abstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), has received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. | [
427,
747
] | Train |
495 | 3 | Title: Abduction to Plausible Causes: An Event-based Model of Belief Update
Abstract: The Katsuno and Mendelzon (KM) theory of belief update has been proposed as a reasonable model for revising beliefs about a changing world. However, the semantics of update relies on information which is not readily available. We describe an alternative semantical view of update in which observations are incorporated into a belief set by: a) explaining the observation in terms of a set of plausible events that might have caused that observation; and b) predicting further consequences of those explanations. We also allow the possibility of conditional explanations. We show that this picture naturally induces an update operator conforming to the KM postulates under certain assumptions. However, we argue that these assumptions are not always reasonable, and they restrict our ability to integrate update with other forms of revision when reasoning about action. Some parts of this report appeared in preliminary form as An Event-Based Abductive Model of Update, Proc. of Tenth Canadian Conf. on AI, Banff, Alta., (1994). | [
270,
339,
342,
467
] | Validation |
496 | 2 | Title: BRAIN-STRUCTURED CONNECTIONIST NETWORKS THAT PERCEIVE AND LEARN
Abstract: This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes), subject to brain-like constraints (e.g., local receptive fields, global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone. | [
501,
663,
1896,
2393
] | Train |
497 | 6 | Title: Decision Tree Induction Based on Efficient Tree Restructuring
Abstract: The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications. | [
478,
2342
] | Test |
498 | 3 | Title: A variational approach to Bayesian logistic regression models and their extensions
Abstract: We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data thereby yielding a posterior predictive model. The results are readily extended to (binary) belief networks. For belief networks we also derive closed form posteriors in the presence of missing values. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates. | [
107,
108,
250
] | Train |
499 | 3 | Title: IMPROVING THE MEAN FIELD APPROXIMATION VIA THE USE OF MIXTURE DISTRIBUTIONS
Abstract: Mean field methods provide computationally efficient approximations to posterior probability distributions for graphical models. Simple mean field methods make a completely factorized approximation to the posterior, which is unlikely to be accurate when the posterior is multimodal. Indeed, if the posterior is multimodal, only one of the modes can be captured. To improve the mean field approximation in such cases, we employ mixture models as posterior approximations, where each mixture component is a factorized distribution. We describe efficient methods for optimizing the parameters in these models. | [
250,
1287,
1288
] | Train |
500 | 4 | Title: 2-D Pole Balancing with Recurrent Evolutionary Networks
Abstract: The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. In this paper we present a more difficult version of the classic problem, where the cart and pole can move in a plane. We demonstrate a neuroevolution system (Enforced Sub-Populations, or ESP) that can solve this difficult problem without velocity information. | [
247,
563,
1767,
2444
] | Train |
501 | 2 | Title: Some Biases for Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns
Abstract: This paper introduces and explores some representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN), i.e., massively parallel networks of simple computing elements. It examines learning mechanisms that constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the basic structures and processes used in such networks to ensure the parsimony of learned representations by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed. | [
174,
496,
503,
663
] | Train |
502 | 4 | Title: Fast Online Q(λ)
Abstract: Q(λ)-learning uses TD(λ)-methods to accelerate Q-learning. The update complexity of previous online Q(λ) implementations based on lookup-tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed. | [
294,
565,
567,
747,
2536
] | Train |
503 | 2 | Title: Generative Learning Structures and Processes for Generalized Connectionist Networks
Abstract: Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative, for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, e.g., rather slow learning and the need for an a priori choice of a network architecture. Several alternative designs, as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks, are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms and directions for future research are outlined. | [
174,
501,
1813,
1851,
1952,
2029,
2396
] | Train |
504 | 2 | Title: MANIAC: A Next Generation Neurally Based Autonomous Road Follower
Abstract: The use of artificial neural networks in the domain of autonomous vehicle navigation has produced promising results. ALVINN [Pomerleau, 1991] has shown that a neural system can drive a vehicle reliably and safely on many different types of roads, ranging from paved paths to interstate highways. Even with these impressive results, several areas within the neural paradigm for autonomous road following still need to be addressed. These include transparent navigation between roads of different type, simultaneous use of different sensors, and generalization to road types which the neural system has never seen. The system presented here addresses these issues with a modular neural architecture which uses pre-trained ALVINN networks and a connectionist superstructure to robustly drive on many different types of roads. | [
702
] | Test |
505 | 2 | Title: From Isolation to Cooperation: An Alternative View of a System of Experts
Abstract: We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to adjust the size and shape of the receptive field in which its predictions are valid, and also to adjust its bias on the importance of individual input dimensions. The size and shape adjustment corresponds to finding a local distance metric, while the bias adjustment accomplishes local dimensionality reduction. We derive asymptotic results for our method. In a variety of simulations we demonstrate the properties of the algorithm with respect to interference, learning speed, prediction accuracy, feature detection, and task-oriented incremental learning. | [
74,
134
] | Train |
506 | 4 | Title: Dynamic Non-Bayesian Decision Making
Abstract: The model of a non-Bayesian agent who faces a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. In such a model the environment state (controlled by Nature) may change arbitrarily, and the feedback/reward function is initially unknown. The agent is not Bayesian; that is, he forms a prior probability neither on the state selection strategy of Nature nor on his reward function. A policy for the agent is a function which assigns an action to every history of observations and actions. Two basic feedback structures are considered. In one of them, the perfect monitoring case, the agent is able to observe the previous environment state as part of his feedback, while in the other, the imperfect monitoring case, all that is available to the agent is the reward obtained. Both of these settings refer to partially observable processes, where the current environment state is unknown. Our main result refers to the competitive ratio criterion in the perfect monitoring case. We prove the existence of an efficient stochastic policy that ensures that the competitive ratio is obtained at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is further shown that such an optimal policy does not exist in the imperfect monitoring case. Moreover, it is proved that in the perfect monitoring case there does not exist a deterministic policy that satisfies our long run optimality criterion. In addition, we discuss the maxmin criterion and prove that a deterministic efficient optimal strategy does exist in the imperfect monitoring case under this criterion. Finally we show that our approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area. | [
514
] | Train |
507 | 6 | Title: PAC Learning Axis-aligned Rectangles with Respect to Product Distributions from Multiple-instance Examples
Abstract: We describe a polynomial-time algorithm for learning axis-aligned rectangles in Q^d with respect to product distributions from multiple-instance examples in the PAC model. Here, each example consists of n elements of Q^d together with a label indicating whether any of the n points is in the rectangle to be learned. We assume that there is an unknown product distribution D over Q^d such that all instances are independently drawn according to D. The accuracy of a hypothesis is measured by the probability that it would incorrectly predict whether one of n more points drawn from D was in the rectangle to be learned. Our algorithm achieves accuracy ε with probability 1 − δ in | [
109,
549,
798,
1888,
2391,
2427,
2548
] | Validation |
508 | 6 | Title: Machine Learning by Function Decomposition
Abstract: We present a new machine learning method that, given a set of training examples, induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This effectively decomposes the problem into smaller, less complex problems. The method is inspired by the Boolean function decomposition approach to the design of digital circuits. To cope with high time complexity of finding an optimal decomposition, we propose a suboptimal heuristic algorithm. The method, implemented in program HINT (HIerarchy Induction Tool), is experimentally evaluated using a set of artificial and real-world learning problems. It is shown that the method performs well both in terms of classification accuracy and discovery of meaningful concept hierarchies. | [
317,
417,
523,
2326
] | Test |
509 | 5 | Title: The Bayesian Approach to Tree-Structured Regression
Abstract: In the context of inductive learning, the Bayesian approach turned out to be very successful in estimating probabilities of events when there are only a few learning examples. The m-probability estimate was developed to handle such situations. In this paper we present the m-distribution estimate, an extension to the m-probability estimate which, besides the estimation of probabilities, also covers the estimation of probability distributions. We focus on its application in the construction of regression trees. The theoretical results were incorporated into a system for automatic induction of regression trees. The results of applying the upgraded system to several domains are presented and compared to previous results. | [
314,
669
] | Test |
510 | 3 | Title: The Bayesian Approach to Tree-Structured Regression
Abstract: TECHNICAL REPORT NO. 967 August 1996 | [
192,
356,
519,
2223
] | Validation |
511 | 3 | Title: Learning from incomplete data
Abstract: Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives: the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977), both for the estimation of mixture components and for coping with the missing data. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense. The authors were supported in part by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. Zoubin Ghahramani was supported by a grant from the McDonnell-Pew Foundation. Michael I. Jordan is a NSF Presidential Young Investigator. | [
74,
611
] | Train |
512 | 2 | Title: Fault-Tolerant Implementation of Finite-State Automata in Recurrent Neural Networks
Abstract: Recently, we have proven that the dynamics of any deterministic finite-state automata (DFA) with n states and m input symbols can be implemented in a sparse second-order recurrent neural network (SORNN) with n + 1 state neurons and O(mn) second-order weights and sigmoidal discriminant functions [5]. We investigate how that constructive algorithm can be extended to fault-tolerant neural DFA implementations where faults in an analog implementation of neurons or weights do not affect the desired network performance. We show that tolerance to weight perturbation can be achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however, requires duplication of the network resources. This result has an impact on the construction of neural DFAs with a dense internal representation of DFA states. | [
407,
411
] | Train |
513 | 3 | Title: Detecting Features in Spatial Point Processes with Clutter via Model-Based Clustering
Abstract: Technical Report No. 295 Department of Statistics, University of Washington October, 1995 1 Abhijit Dasgupta is a graduate student at the Department of Biostatistics, University of Washington, Box 357232, Seattle, WA 98195-7232, and his e-mail address is dasgupta@biostat.washington.edu. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, and his e-mail address is raftery@stat.washington.edu. This research was supported by Office of Naval Research Grant no. N-00014-91-J-1074. The authors are grateful to Peter Guttorp, Girardeau Henderson and Robert Muise for helpful discussions. | [
117,
155,
452
] | Train |
514 | 6 | Title: Gambling in a rigged casino: The adversarial multi-armed bandit problem
Abstract: In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T^{-1/2}), and we give an improved rate of convergence when the best arm has fairly low payoff. We also prove a general matching lower bound on the best possible performance of any algorithm in our setting. In addition, we consider a setting in which the player has a team of experts advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary. | [
453,
506,
569
] | Validation |
515 | 3 | Title: Sensitivities: An Alternative to Conditional Probabilities for Bayesian Belief Networks
Abstract: We show an alternative way of representing a Bayesian belief network by sensitivities and probability distributions. This representation is equivalent to the traditional representation by conditional probabilities, but makes dependencies between nodes apparent and intuitively easy to understand. We also propose a QR matrix representation for the sensitivities and/or conditional probabilities which is more efficient, in both memory requirements and computational speed, than the traditional representation for computer-based implementations of probabilistic inference. We use sensitivities to show that for a certain class of binary networks, the computation time for approximate probabilistic inference with any positive upper bound on the error of the result is independent of the size of the network. Finally, as an alternative to traditional algorithms that use conditional probabilities, we describe an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neighbors. | [
637,
2164
] | Validation |
516 | 2 | Title: A Supercomputer for Neural Computation
Abstract: The requirement to train large neural networks quickly has prompted the design of a new massively parallel supercomputer using custom VLSI. This design features 128 processing nodes, communicating over a mesh network connected directly to the processor chip. Studies show peak performance in the range of 160 billion arithmetic operations per second. This paper presents the case for custom hardware that combines neural network-specific features with a general programmable machine architecture, and briefly describes the design in progress. | [
272
] | Train |
517 | 6 | Title: Active Learning with Committees for Text Categorization
Abstract: In many real-world domains like text categorization, supervised learning requires a large number of training examples. In this paper we describe an active learning method that uses a committee of learners to reduce the number of training examples required for learning. Our approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. Our experiments in text categorization using a committee of Winnow-based learners demonstrate that this approach can reduce the number of labeled training examples required over that used by a single Winnow learner by 1-2 orders of magnitude. This paper is not under review or accepted for publication in another conference or journal. Acknowledgements: The availability of the Reuters-22173 corpus [Reuters] and of the | STAT Data Manipulation and Analysis Programs [Perlman] has greatly assisted in our research to date. | [
164,
1170,
1198,
2509
] | Train |
518 | 6 | Title: Developments in Probabilistic Modelling with Neural Networks: Ensemble Learning
Abstract: In this paper I give a review of ensemble learning using a simple example. | [
76,
181,
662,
766,
2532
] | Train |
519 | 2 | Title: Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Diabetic
Abstract: In this paper I give a review of ensemble learning using a simple example. | [
10,
190,
192,
193,
280,
420,
510,
705,
2223,
2448,
2549,
2590,
2608
] | Train |
520 | 2 | Title: CANCER DIAGNOSIS AND PROGNOSIS VIA LINEAR-PROGRAMMING-BASED MACHINE LEARNING
Abstract: In this paper I give a review of ensemble learning using a simple example. | [
142,
230,
478,
524,
719
] | Train |
521 | 5 | Title: Covering vs. Divide-and-Conquer for Top-Down Induction of Logic Programs
Abstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in more compact hypotheses being found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learning recursive definitions. | [
156,
344,
638,
1081,
1082,
1259,
2312
] | Train |
522 | 6 | Title: THE DISCOVERY OF ALGORITHMIC PROBABILITY
Abstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in more compact hypotheses being found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learning recursive definitions. | [
68,
525
] | Train |
523 | 1 | Title: Some studies in machine learning using the game of checkers. IBM Journal, 3(3):211-229, 1959. Some
Abstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in more compact hypotheses being found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learning recursive definitions. | [
54,
163,
188,
277,
283,
415,
465,
478,
508,
540,
551,
565,
717,
870,
882,
910,
961,
1214,
1616,
1676,
1687,
1790,
1921,
1931,
2408,
2442,
2480,
2551,
2600,
2642
] | Validation |
524 | 2 | Title: An Inductive Learning Approach to Prognostic Prediction
Abstract: This paper introduces the Recurrence Surface Approximation, an inductive learning method based on linear programming that predicts recurrence times using censored training examples, that is, examples in which the available training output may be only a lower bound on the "right answer." This approach is augmented with a feature selection method that chooses an appropriate feature set within the context of the linear programming generalizer. Computational results in the field of breast cancer prognosis are shown. A straightforward translation of the prediction method to an artificial neural network model is also proposed. | [
430,
520,
1169,
1454
] | Train |
525 | 6 | Title: MML mixture modelling of multi-state, Poisson, von Mises circular and Gaussian distributions
Abstract: Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference (Wallace and Boulton (1968), Wallace and Freeman (1987)), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob (Wallace and Boulton (1968), Wallace (1986), Wallace and Dowe (1994)), uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multi-variate data from Gaussian, discrete multi-state, Poisson and von Mises circular distributions. | [
522,
684,
1419,
1425,
1427
] | Test |
526 | 2 | Title: MML mixture modelling of multi-state, Poisson, von Mises circular and Gaussian distributions
Abstract: 11] M.H. Overmars. A random approach to motion planning. Technical Report RUU-CS-92-32, Department of Computer Science, Utrecht University, October 1992. | [
427,
747
] | Train |
527 | 2 | Title: VISIT: An Efficient Computational Model of Human Visual Attention
Abstract: One of the challenges for models of cognitive phenomena is the development of efficient and flexible interfaces between low level sensory information and high level processes. For visual processing, researchers have long argued that an attentional mechanism is required to perform many of the tasks required by high level vision. This thesis presents VISIT, a connectionist model of covert visual attention that has been used as a vehicle for studying this interface. The model is efficient, flexible, and biologically plausible. The complexity of the network is linear in the number of pixels. Effective parallel strategies are used to minimize the number of iterations required. The resulting system is able to efficiently solve two tasks that are particularly difficult for standard bottom-up models of vision: computing spatial relations and visual search. Simulations show that the network's behavior matches much of the known psychophysical data on human visual attention. The general architecture of the model also closely matches the known physiological data on the human attention system. Various extensions to VISIT are discussed, including methods for learning the component modules. | [
747,
1656,
1968,
2606,
2662
] | Train |
528 | 2 | Title: Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks
Abstract: A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions [1, 2]. This mathematical result helped to establish the paradigm of neural computation with fixed point attractors [3]. But in reality, interactions between neurons in the brain are asymmetric. Furthermore, the dynamical behaviors seen in the brain are not confined to fixed point attractors, but also include oscillations and complex nonperiodic behavior. These other types of dynamics can be realized by asymmetric networks, and may be useful for neural computation. For these reasons, it is important to understand the global behavior of asymmetric neural networks. The interaction between an excitatory neuron and an inhibitory neuron is clearly asymmetric. Here we consider a class of networks that incorporates this fundamental asymmetry of the brain's microcircuitry. Networks of this class have distinct populations of excitatory and inhibitory neurons, with antisymmetric interactions between populations. | [
678
] | Validation |
529 | 2 | Title: Capacity of SDM
Abstract: Report R95:12 ISRN : SICS-R--95/12-SE ISSN : 0283-3638 Abstract A more efficient way of reading the SDM memory is presented. This is accomplished by using implicit information, hitherto not utilized, to find the information-carrying units and thus removing unnecessary noise when reading the memory. | [
340,
341,
709
] | Train |
530 | 1 | Title: operations: operation machine duration
Abstract: Report R95:12 ISRN : SICS-R--95/12-SE ISSN : 0283-3638 Abstract A more efficient way of reading the SDM memory is presented. This is accomplished by using implicit information, hitherto not utilized, to find the information-carrying units and thus removing unnecessary noise when reading the memory. | [
163,
343
] | Train |
531 | 2 | Title: FEEDBACK STABILIZATION OF NONLINEAR SYSTEMS
Abstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. | [
693,
1490,
2187
] | Validation |
532 | 3 | Title: Hierarchical Selection Models with Applications in Meta-Analysis
Abstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. | [
27
] | Train |
533 | 3 | Title: Estimating Ratios of Normalizing Constants for Densities with Different Dimensions
Abstract: In Bayesian inference, a Bayes factor is defined as the ratio of posterior odds versus prior odds where posterior odds is simply a ratio of the normalizing constants of two posterior densities. In many practical problems, the two posteriors have different dimensions. For such cases, the current Monte Carlo methods such as the bridge sampling method (Meng and Wong 1996), the path sampling method (Gelman and Meng 1994), and the ratio importance sampling method (Chen and Shao 1994) cannot directly be applied. In this article, we extend importance sampling, bridge sampling, and ratio importance sampling to problems of different dimensions. Then we find global optimal importance sampling, bridge sampling, and ratio importance sampling in the sense of minimizing asymptotic relative mean-square errors of estimators. Implementation algorithms, which can asymptotically achieve the optimal simulation errors, are developed and two illustrative examples are also provided. | [
41,
777
] | Validation |
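As a minimal illustration of the same-dimension case described above, plain importance sampling (not the paper's optimal bridge or ratio-importance estimators) can estimate the ratio of normalizing constants of two unnormalized 1-D Gaussians, where the true ratio is the ratio of their standard deviations; the densities here are assumptions chosen for the example:

```python
import math
import random

random.seed(0)

def q(x, mean, sd):
    # Unnormalized Gaussian; its true normalizing constant is sd * sqrt(2*pi).
    return math.exp(-(x - mean) ** 2 / (2.0 * sd ** 2))

def estimate_ratio(n=200000):
    # Z2/Z1 = E_{p1}[q2(X)/q1(X)] with X drawn from the normalized q1.
    # Sampling from the wider density (sd = 2) keeps the estimator's
    # variance finite; sampling the other way around would not.
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)
        total += q(x, 0.0, 1.0) / q(x, 0.0, 2.0)
    return total / n

ratio = estimate_ratio()  # true value: Z2/Z1 = 1/2
```

The choice of which density to sample from is exactly the kind of design decision the paper optimizes: a poor choice can make the asymptotic mean-square error infinite.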
534 | 0 | Title: Massively Parallel Matching of Knowledge Structures
Abstract: As knowledge bases used for AI systems increase in size, access to relevant information is the dominant factor in the cost of inference. This is especially true for analogical (or case-based) reasoning, in which the ability of the system to perform inference is dependent on efficient and flexible access to a large base of exemplars (cases) judged likely to be relevant to solving a problem at hand. In this chapter we discuss a novel algorithm for efficient associative matching of relational structures in large semantic networks. The structure matching algorithm uses massively parallel hardware to search memory for knowledge structures matching a given probe structure. The algorithm is built on top of PARKA, a massively parallel knowledge representation system which runs on the Connection Machine. We are currently exploring the utility of this algorithm in CaPER, a case-based planning system. | [
313
] | Train |
535 | 6 | Title: Sequential PAC Learning
Abstract: We consider the use of "on-line" stopping rules to reduce the number of training examples needed to pac-learn. Rather than collect a large training sample that can be proved sufficient to eliminate all bad hypotheses a priori, the idea is instead to observe training examples one-at-a-time and decide "on-line" whether to stop and return a hypothesis, or continue training. The primary benefit of this approach is that we can detect when a hypothesizer has actually "converged," and halt training before the standard fixed-sample-size bounds. This paper presents a series of such sequential learning procedures for: distribution-free pac-learning, "mistake-bounded to pac" conversion, and distribution-specific pac-learning, respectively. We analyze the worst case expected training sample size of these procedures, and show that this is often smaller than existing fixed sample size bounds, while providing the exact same worst case pac-guarantees. We also provide lower bounds that show these reductions can at best involve constant (and possibly log) factors. However, empirical studies show that these sequential learning procedures actually use many times fewer training examples in practice. | [
109,
672,
761,
1560
] | Train |
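The stopping-rule idea above can be sketched with a standard mistake-bounded-to-PAC style acceptance test: accept the current hypothesis only if it survives m = ceil(ln(1/delta)/eps) consecutive examples without a mistake. The simulation below is an illustrative sketch under that simplified rule, not one of the paper's actual procedures:

```python
import math
import random

random.seed(3)

def sequential_accept(error_rate, eps=0.1, delta=0.05):
    # Observe examples one at a time; a mistake (prob = error_rate) would
    # trigger a hypothesis update, so we reject and report examples seen.
    m = math.ceil(math.log(1.0 / delta) / eps)
    streak = 0
    seen = 0
    while streak < m:
        seen += 1
        if random.random() < error_rate:
            return False, seen
        streak += 1
    return True, seen

# A hypothesis that makes no mistakes is accepted after exactly m examples.
accepted, n = sequential_accept(error_rate=0.0)
```

With eps = 0.1 and delta = 0.05, m = ceil(ln(20)/0.1) = 30, so a converged hypothesizer halts after 30 examples rather than a much larger fixed-sample-size bound.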
536 | 2 | Title: Dimension of Recurrent Neural Networks
Abstract: DIMACS Technical Report 96-56 December 1996 | [
58,
200,
206,
411
] | Train |
537 | 1 | Title: Adaptive Global Optimization with Local Search
Abstract: DIMACS Technical Report 96-56 December 1996 | [
357,
606,
1204
] | Test |
538 | 1 | Title: Learning and evolution in neural networks
Abstract: DIMACS Technical Report 96-56 December 1996 | [
15,
129,
402,
487,
1036,
1204,
1689,
1738,
2165,
2193,
2309
] | Train |
539 | 0 | Title: Structural Similarity as Guidance in Case-Based Design
Abstract: This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (Cbr). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Both together enlarge the overall problem solving performance of Cbr and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach. | [
183,
454,
541,
1123,
1209,
1210,
1453,
1665
] | Train |
540 | 0 | Title: A Model-Based Approach to Blame-Assignment in Design
Abstract: We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device, the design results in an undesirable behavior, a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment. | [
523,
543,
603,
1121,
1640
] | Validation |
541 | 0 | Title: Task-Oriented Knowledge Acquisition and Reasoning for Design Support Systems
Abstract: We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. ? This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, Munchen, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe. | [
183,
454,
539,
1123
] | Validation |
542 | 2 | Title: Comparison of Bayesian and Neural Net Unsupervised Classification Techniques
Abstract: Unsupervised classification is the classification of data into a number of classes in such a way that data in each class are all similar to each other. In the past there have been few if any studies done to compare the performance of different unsupervised classification techniques. In this paper we review Bayesian and neural net approaches to unsupervised classification and present results of experiments that we did to compare Autoclass, a Bayesian classification system, and ART2, a neural net classification algorithm. | [
747,
779,
1203
] | Train |
543 | 0 | Title: Meta-Cases: Explaining Case-Based Reasoning
Abstract: AI research on case-based reasoning has led to the development of many laboratory case-based systems. As we move towards introducing these systems into work environments, explaining the processes of case-based reasoning is becoming an increasingly important issue. In this paper we describe the notion of a meta-case for illustrating, explaining and justifying case-based reasoning. A meta-case contains a trace of the processing in a problem-solving episode, and provides an explanation of the problem-solving decisions and a (partial) justification for the solution. The language for representing the problem-solving trace depends on the model of problem solving. We describe a task-method-knowledge (TMK) model of problem-solving and describe the representation of meta-cases in the TMK language. We illustrate this explanatory scheme with examples from Interactive Kritik, a computer-based de sign and learning environment presently under development. | [
540
] | Train |
544 | 2 | Title: Minimum-Risk Profiles of Protein Families Based on Statistical Decision Theory
Abstract: Statistical decision theory provides a principled way to estimate amino acid frequencies in conserved positions of a protein family. The goal is to minimize the risk function, or the expected squared-error distance between the estimates and the true population frequencies. The minimum-risk estimates are obtained by adding an optimal number of pseudocounts to the observed data. Two formulas are presented, one for pseudocounts based on marginal amino acid frequencies and one for pseudocounts based on the observed data. Experimental results show that profiles constructed using minimal-risk estimates are more discriminating than those constructed using existing methods. | [
0,
14,
258,
751
] | Test |
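The pseudocount idea in the abstract above can be sketched as shrinking observed column frequencies toward background (marginal) frequencies. Note the pseudocount weight is left as a free parameter here, whereas the paper derives the risk-minimizing value; the alphabet and counts below are made up for illustration:

```python
def pseudocount_estimate(counts, background, weight):
    # Shrink observed frequencies toward background frequencies by adding
    # `weight` pseudocounts distributed according to the background.
    n = sum(counts.values())
    return {a: (counts.get(a, 0) + weight * background[a]) / (n + weight)
            for a in background}

# Toy alignment column over a hypothetical 3-letter alphabet.
background = {"A": 0.5, "B": 0.3, "C": 0.2}
counts = {"A": 8, "B": 2}
est = pseudocount_estimate(counts, background, weight=5.0)
```

The estimate stays a proper distribution, and unseen letters such as "C" receive nonzero probability, which is the point of adding pseudocounts.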
545 | 2 | Title: Characterising Innateness in Artificial and Natural Learning
Abstract: The purpose of this paper is to propose a refinement of the notion of innateness. If we merely identify innateness with bias, then we obtain a poor characterisation of this notion, since any learning device relies on a bias that makes it choose a given hypothesis instead of another. We show that our intuition of innateness is better captured by a characteristic of bias, related to isotropy. Generalist models of learning are shown to rely on an isotropic bias, whereas the bias of specialised models, which include some specific a priori knowledge about what is to be learned, is necessarily anisotropic. The so-called generalist models, however, turn out to be specialised in some way: they learn symmetrical forms preferentially, and have strictly no deficiencies in their learning ability. Because some learning beings do not always show these two properties, such generalist models may be sometimes ruled out as bad candidates for cognitive modelling. | [
747
] | Validation |
546 | 1 | Title: GREQE a Diplome des Etudes Approfondies en Economie Mathematique et Econometrie A Genetic Algorithm for
Abstract: The purpose of this paper is to propose a refinement of the notion of innateness. If we merely identify innateness with bias, then we obtain a poor characterisation of this notion, since any learning device relies on a bias that makes it choose a given hypothesis instead of another. We show that our intuition of innateness is better captured by a characteristic of bias, related to isotropy. Generalist models of learning are shown to rely on an isotropic bias, whereas the bias of specialised models, which include some specific a priori knowledge about what is to be learned, is necessarily anisotropic. The so-called generalist models, however, turn out to be specialised in some way: they learn symmetrical forms preferentially, and have strictly no deficiencies in their learning ability. Because some learning beings do not always show these two properties, such generalist models may be sometimes ruled out as bad candidates for cognitive modelling. | [
163
] | Train |
547 | 2 | Title: Expectation-Based Selective Attention for Visual Monitoring and Control of a Robot Vehicle
Abstract: Reliable vision-based control of an autonomous vehicle requires the ability to focus attention on the important features in an input scene. Previous work with an autonomous lane following system, ALVINN [Pomerleau, 1993], has yielded good results in uncluttered conditions. This paper presents an artificial neural network based learning approach for handling difficult scenes which will confuse the ALVINN system. This work presents a mechanism for achieving task-specific focus of attention by exploiting temporal coherence. A saliency map, which is based upon a computed expectation of the contents of the inputs in the next time step, indicates which regions of the input retina are important for performing the task. The saliency map can be used to accentuate the features which are important for the task, and de-emphasize those which are not. | [
74,
430
] | Train |
548 | 4 | Title: Value Function Based Production Scheduling
Abstract: Production scheduling, the problem of sequentially configuring a factory to meet forecasted demands, is a critical problem throughout the manufacturing industry. The requirement of maintaining product inventories in the face of unpredictable demand and stochastic factory output makes standard scheduling models, such as job-shop, inadequate. Currently applied algorithms, such as simulated annealing and constraint propagation, must employ ad-hoc methods such as frequent replanning to cope with uncertainty. In this paper, we describe a Markov Decision Process (MDP) formulation of production scheduling which captures stochasticity in both production and demands. The solution to this MDP is a value function which can be used to generate optimal scheduling decisions online. A simple example illustrates the theoretical superiority of this approach over replanning-based methods. We then describe an industrial application and two reinforcement learning methods for generating an approximate value function on this domain. Our results demonstrate that in both deterministic and noisy scenarios, value function approximation is an effective technique. | [
82,
552,
565,
1859,
1860
] | Train |
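The value-function approach above rests on solving an MDP exactly when the state space is small. A self-contained value-iteration sketch on a hypothetical two-state "invest vs. wait" MDP (a stand-in for the factory model, which is not specified in the abstract):

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-8):
    # transition(s, a) -> list of (prob, next_state); reward(s, a) -> float.
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a)
                       + gamma * sum(p * v[t] for p, t in transition(s, a))
                       for a in actions)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

# Hypothetical chain: investing from "low" reaches "high" half the time;
# "high" pays 1 per step but decays back to "low" with probability 0.1.
states = ["low", "high"]
actions = ["wait", "invest"]

def transition(s, a):
    if s == "high":
        return [(0.9, "high"), (0.1, "low")]
    if a == "invest":
        return [(0.5, "high"), (0.5, "low")]
    return [(1.0, "low")]

def reward(s, a):
    return (1.0 if s == "high" else 0.0) - (0.2 if a == "invest" else 0.0)

v = value_iteration(states, actions, transition, reward)
```

The resulting value function ranks states, and acting greedily with respect to it recovers the optimal policy, which is how the scheduling decisions are generated online.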
549 | 6 | Title: Efficient Distribution-free Learning of Probabilistic Concepts
Abstract: In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior|thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. | [
287,
453,
456,
488,
507,
574,
591,
640,
672
] | Train |
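A probabilistic concept as defined above can be simulated directly: the same input is sometimes labeled positive and sometimes negative, and a learner estimates the conditional label probabilities from data. The domain and target probabilities below are invented for illustration:

```python
import random

random.seed(1)

# A p-concept on a 3-element domain: Pr[label = 1 | x] for each input x.
target = {"x1": 0.9, "x2": 0.5, "x3": 0.1}

def sample(n):
    xs = list(target)
    return [(x, 1 if random.random() < target[x] else 0)
            for x in (random.choice(xs) for _ in range(n))]

def estimate(examples):
    # Empirical-frequency estimate of Pr[label = 1 | x].
    pos, tot = {}, {}
    for x, y in examples:
        pos[x] = pos.get(x, 0) + y
        tot[x] = tot.get(x, 0) + 1
    return {x: pos[x] / tot[x] for x in tot}

est = estimate(sample(60000))
```

On a finite domain the empirical frequencies converge to the target probabilities; the paper's contribution is doing this efficiently for structured p-concept classes over large domains.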
550 | 6 | Title: LEARNING BY USING DYNAMIC FEATURE COMBINATION AND SELECTION
Abstract: | [
438,
569,
1422,
2423
] | Test |
551 | 4 | Title: Utilization Filtering: a method for reducing the inherent harmfulness of deductively learned knowledge
Abstract: This paper highlights a phenomenon that causes deductively learned knowledge to be harmful when used for problem solving. The problem occurs when deductive problem solvers encounter a failure branch of the search tree. The backtracking mechanism of such problem solvers will force the program to traverse the whole subtree thus visiting many nodes twice - once by using the deductively learned rule and once by using the rules that generated the learned rule in the first place. We suggest an approach called utilization filtering to solve that problem. Learners that use this approach submit to the problem solver a filter function together with the knowledge that was acquired. The function decides for each problem whether to use the learned knowledge and what part of it to use. We have tested the idea in the context of a lemma learning system, where the filter uses the probability of a subgoal failing to decide whether to turn lemma usage off. Experiments show an improvement of performance by a factor of 3. This paper is concerned with a particular type of harmful redundancy that occurs in deductive problem solvers that employ backtracking in their search procedure, and use deductively learned knowledge to accelerate the search. The problem is that in failure branches of the search tree, the backtracking mechanism of the problem solver forces exploration of the whole subtree. Thus, the search procedure will visit many states twice - once by using the deductively learned rule, and once by using the search path that produced the rule in the first place. | [
482,
523,
2215,
2473
] | Train |
552 | 4 | Title: Learning to Act using Real-Time Dynamic Programming
Abstract: The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). | [
2,
16,
57,
60,
85,
92,
167,
173,
210,
220,
239,
277,
298,
305,
306,
311,
367,
370,
374,
412,
446,
451,
455,
466,
472,
473,
483,
548,
554,
559,
575,
601,
621,
636,
644,
653,
671,
688,
691,
723,
738,
749
] | Train |
553 | 2 | Title: Object Selection Based on Oscillatory Correlation
Abstract: Technical Report OSU-CISRC-12/96 - TR67, 1996. One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Because of global connectivity, WTA networks, however, do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a new architecture that maintains spatial relations between input features. This selection network builds on LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks) dynamics and slow inhibition. In an input scene with many objects (patterns), the network selects the largest object. This system can be easily adjusted to select several largest objects, which then alternate in time. We further show that a two-stage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. As a special case, the selection network without local excitation gives rise to a new form of oscillatory WTA. | [
123,
2459
] | Train |
554 | 4 | Title: Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes
Abstract: Reinforcement learning (RL) has become a central paradigm for solving learning-control problems in robotics and artificial intelligence. RL researchers have focussed almost exclusively on problems where the controller has to maximize the discounted sum of payoffs. However, as emphasized by Schwartz (1993), in many problems, e.g., those for which the optimal behavior is a limit cycle, it is more natural and computationally advantageous to formulate tasks so that the controller's objective is to maximize the average payoff received per time step. In this paper I derive new average-payoff RL algorithms as stochastic approximation methods for solving the system of equations associated with the policy evaluation and optimal control questions in average-payoff RL tasks. These algorithms are analogous to the popular TD and Q-learning algorithms already developed for the discounted-payoff case. One of the algorithms derived here is a significant variation of Schwartz's R-learning algorithm. Preliminary empirical results are presented to validate these new algorithms. | [
167,
294,
306,
446,
552,
565,
875,
1859
] | Train |
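The average-payoff formulation above can be illustrated with a tabular TD-style evaluation rule that tracks a gain estimate rho alongside bias values h(s). This is a simplified sketch in the spirit of, but not identical to, Schwartz's R-learning: on a deterministic two-state cycle with rewards 1 and 0, the true average payoff per step is 0.5.

```python
# Average-payoff policy evaluation on a deterministic two-state cycle.
alpha, beta = 0.05, 0.01  # step sizes for bias values h and gain rho
h = [0.0, 0.0]
rho = 0.0
s = 0
for _ in range(20000):
    s_next = 1 - s
    r = 1.0 if s == 0 else 0.0
    # TD error relative to the average payoff rather than a discounted sum.
    delta = r - rho + h[s_next] - h[s]
    h[s] += alpha * delta
    rho += beta * delta
    s = s_next
```

At the fixed point both transitions have zero TD error, which forces rho = 0.5 and a bias difference h[0] - h[1] = 0.5, with no discount factor anywhere in the update.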
555 | 6 | Title: Exactly Learning Automata with Small Cover Time
Abstract: We present algorithms for exactly learning unknown environments that can be described by deterministic finite automata. The learner performs a walk on the target automaton, where at each step it observes the output of the state it is at, and chooses a labeled edge to traverse to the next state. We assume that the learner has no means of a reset, and we also assume that the learner does not have access to a teacher that answers equivalence queries and gives the learner counterexamples to its hypotheses. We present two algorithms, one assumes that the outputs observed by the learner are always correct and the other assumes that the outputs might be erroneous. The running times of both algorithms are polynomial in the cover time of the underlying graph of the target automaton. | [
400,
556,
615,
2354
] | Test |
556 | 6 | Title: The Power of a Pebble: Exploring and Mapping Directed Graphs
Abstract: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble, a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic. | [
555,
615,
2354,
2360
] | Test |
557 | 3 | Title: On the Sample Complexity of Learning Bayesian Networks
Abstract: In recent years there has been an increasing interest in learning Bayesian networks from data. One of the most effective methods for learning such networks is based on the minimum description length (MDL) principle. Previous work has shown that this learning procedure is asymptotically successful: with probability one, it will converge to the target distribution, given a sufficient number of samples. However, the rate of this convergence has been hitherto unknown. In this work we examine the sample complexity of MDL based learning procedures for Bayesian networks. We show that the number of samples needed to learn an ε-close approximation (in terms of entropy distance) with confidence δ is O((1/ε)^(4/3) log(1/ε) log(1/δ) log log(1/δ)). This means that the sample complexity is a low-order polynomial in the error threshold and sub-linear in the confidence bound. We also discuss how the constants in this term depend on the complexity of the target distribution. Finally, we address questions of asymptotic minimality and propose a method for using the sample complexity results to speed up the learning process. | [
423,
558
] | Train |
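Assuming the bound has the form O((1/ε)^(4/3) log(1/ε) log(1/δ) log log(1/δ)) up to an unspecified constant (the exact formula is not fully legible in the text), it can be evaluated numerically to see the claimed low-order polynomial dependence on 1/ε and sub-linear dependence on 1/δ:

```python
import math

def mdl_sample_bound(eps, delta, c=1.0):
    # Assumed form of the bound with an arbitrary constant c; valid for
    # delta < 1/e so that log(log(1/delta)) is positive.
    return (c * (1.0 / eps) ** (4.0 / 3.0) * math.log(1.0 / eps)
            * math.log(1.0 / delta) * math.log(math.log(1.0 / delta)))

n_loose = mdl_sample_bound(eps=0.1, delta=0.05)
n_tight = mdl_sample_bound(eps=0.01, delta=0.05)
```

Tightening ε by a factor of 10 multiplies the bound by roughly 10^(4/3) ≈ 21.5 (times a log factor), while tightening δ costs only logarithmic factors.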
558 | 3 | Title: A Tutorial on Learning With Bayesian Networks
Abstract: Technical Report MSR-TR-95-06 | [
376,
423,
557,
905,
1137,
1532,
1555,
1641,
1816,
1934,
2034,
2463,
2660
] | Train |
559 | 4 | Title: Scaling Up Average Reward Reinforcement Learning by Approximating the Domain Models and the Value Function
Abstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply | [
34,
167,
552,
1378,
1816,
2341
] | Train |
560 | 6 | Title: Bayesian Methods for Adaptive Models
Abstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply | [
78,
157,
246,
740,
938,
955,
1375,
1452,
2287
] | Test |
561 | 2 | Title: Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network
Abstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply | [
427,
745,
747
] | Train |
562 | 4 | Title: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks
Abstract: Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm. | [
60,
252,
370,
440,
671,
1117,
1183,
1401,
1889,
2014,
2018
] | Train |
563 | 4 | Title: Evolving Obstacle Avoidance Behavior in a Robot Arm
Abstract: Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate. This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations. | [
37,
38,
163,
219,
247,
500
] | Train |
564 | 4 | Title: Reinforcement Learning with Soft State Aggregation
Abstract: It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approximator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. Preliminary empirical results are also presented. | [
294,
463,
565,
738,
1841
] | Train |
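The soft-state-aggregation idea in the row above (node 564) can be sketched as TD(0) over cluster values, where a state's value is a membership-weighted average of cluster parameters. The two-state chain, the membership table, and all parameter values below are illustrative assumptions, not the paper's exact algorithm.

```python
def soft_td0(episodes, membership, n_clusters, alpha=0.1, gamma=1.0):
    # V(s) = sum_x P(x|s) * theta[x]; the TD error is credited to each
    # cluster in proportion to the state's soft membership in it.
    theta = [0.0] * n_clusters

    def V(s):
        return sum(membership(s, x) * theta[x] for x in range(n_clusters))

    for episode in episodes:  # episode: list of (state, reward, next_state)
        for s, r, s_next in episode:
            target = r + (gamma * V(s_next) if s_next is not None else 0.0)
            err = target - V(s)
            for x in range(n_clusters):
                theta[x] += alpha * membership(s, x) * err
    return theta

# Hypothetical chain A -> B -> terminal with reward 1 on the last step;
# A belongs fully to cluster 0, B is split evenly between clusters 0 and 1.
member = {("A", 0): 1.0, ("A", 1): 0.0, ("B", 0): 0.5, ("B", 1): 0.5}
theta = soft_td0([[("A", 0.0, "B"), ("B", 1.0, None)]] * 500,
                 lambda s, x: member[(s, x)], n_clusters=2)
```

With this toy chain both cluster values settle near 1, since every state's expected return is 1.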
565 | 4 | Title: Machine Learning Learning to Predict by the Methods of Temporal Differences Keywords: Incremental learning, prediction,
Abstract: This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. | [
2,
34,
57,
60,
82,
85,
92,
103,
118,
128,
173,
239,
244,
283,
294,
295,
305,
306,
333,
367,
385,
410,
425,
465,
466,
477,
478,
492,
502,
523,
548,
554,
564,
566,
575,
601,
621,
633,
644,
671,
691,
738,
773,
842,
882,
910,
10... | Train |
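The temporal-difference idea summarized in the abstract above (credit assigned via the difference between temporally successive predictions rather than final outcomes) can be sketched as a tabular TD(0) predictor. The two-state chain, step size, and episode count below are illustrative assumptions.

```python
def td0_predict(episodes, alpha=0.1, gamma=1.0):
    # Tabular TD(0): move each state's value toward the immediately
    # following prediction, not toward the eventual outcome.
    V = {}
    for episode in episodes:  # episode: list of (state, reward, next_state)
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            td_error = r + gamma * v_next - V.get(s, 0.0)
            V[s] = V.get(s, 0.0) + alpha * td_error
    return V

# Toy chain: A -> B -> terminal, reward 1 on the final step.
episodes = [[("A", 0.0, "B"), ("B", 1.0, None)]] * 200
V = td0_predict(episodes)
```

After repeated episodes both predictions approach the true return of 1: V["B"] learns directly from the reward, and V["A"] bootstraps from V["B"].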
566 | 4 | Title: Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming
Abstract: This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments. | [
16,
34,
173,
186,
274,
294,
321,
333,
449,
465,
466,
472,
477,
483,
565,
588,
633,
671,
688,
699,
733,
858,
1447,
1459,
1544,
1643,
1782,
1816,
2221,
2480,
2485,
2658
] | Validation |
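The Q-learning that the Dyna-Q abstract above builds on can be sketched as a minimal tabular loop (the model-free part only, without Dyna's planning steps over a learned model). The three-cell corridor environment, action set, and parameters below are illustrative assumptions.

```python
import random

def q_learning(env_step, start, actions, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1):
    # Tabular one-step Q-learning with epsilon-greedy exploration.
    Q = {}
    for _ in range(episodes):
        s = start
        while s is not None:
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q.get((s, b), 0.0)))
            r, s_next = env_step(s, a)
            best_next = (0.0 if s_next is None
                         else max(Q.get((s_next, b), 0.0) for b in actions))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0))
            s = s_next
    return Q

# Hypothetical corridor: cells 0 and 1, goal reached by "right" from cell 1.
def step(s, a):
    if a == "right":
        return (1.0, None) if s == 1 else (0.0, s + 1)
    return (0.0, max(0, s - 1))

random.seed(0)
Q = q_learning(step, start=0, actions=["left", "right"])
```

The learned values match the discounted returns of the greedy policy: 1 for acting "right" at cell 1 and 0.9 one step earlier.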
567 | 4 | Title: Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
Abstract: On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ. | [
21,
277,
385,
502,
970,
1828
] | Train |
568 | 4 | Title: Online Learning with Random Representations
Abstract: We consider the requirements of online learning, learning which must be done incrementally and in realtime, with the results of learning available soon after each new example is acquired. Despite the abundance of methods for learning from examples, there are few that can be used effectively for online learning, e.g., as components of reinforcement learning systems. Most of these few, including radial basis functions, CMACs, Kohonen's self-organizing maps, and those developed in this paper, share the same structure. All expand the original input representation into a higher dimensional representation in an unsupervised way, and then map that representation to the final answer using a relatively simple supervised learner, such as a perceptron or LMS rule. Such structures learn very rapidly and reliably, but have been thought either to scale poorly or to require extensive domain knowledge. To the contrary, some researchers (Rosenblatt, 1962; Gallant & Smith, 1987; Kanerva, 1988; Prager & Fallside, 1988) have argued that the expanded representation can be chosen largely at random with good results. The main contribution of this paper is to develop and test this hypothesis. We show that simple random-representation methods can perform as well as nearest-neighbor methods (while being more suited to online learning), and significantly better than backpropagation. We find that the size of the random representation does increase with the dimensionality of the problem, but not unreasonably so, and that the required size can be reduced substantially using unsupervised-learning techniques. Our results suggest that randomness has a useful role to play in online supervised learning and constructive induction. | [
478,
843
] | Train |
569 | 6 | Title: A decision-theoretic generalization of on-line learning and an application to boosting how the weight-update rule
Abstract: We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n | [
255,
456,
514,
550,
710,
767,
1000,
1025,
1092,
1181,
1269,
1273,
1430,
1457,
1522,
1712,
1986,
2099
] | Train |
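The multiplicative weight-update rule in the abstract above (node 569) can be sketched as the Hedge procedure: play the normalized weights as a distribution, then scale each option's weight by beta raised to its loss. The two-expert toy losses and the choice beta = 0.5 below are illustrative assumptions.

```python
def hedge(loss_rounds, n_experts, beta=0.5):
    # Hedge: weights start uniform; after each round, expert i's weight
    # is multiplied by beta**loss_i, so low-loss experts dominate.
    w = [1.0] * n_experts
    history = []
    for losses in loss_rounds:  # each losses[i] lies in [0, 1]
        total = sum(w)
        history.append([wi / total for wi in w])  # distribution played this round
        w = [wi * beta ** li for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w], history

# Toy data: expert 0 always incurs loss 0, expert 1 always loss 1.
final, history = hedge([[0.0, 1.0]] * 10, n_experts=2)
```

Starting from the uniform distribution, ten rounds shrink the bad expert's share by a factor of 2^10, so nearly all weight ends up on expert 0.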
570 | 2 | Title: A New Learning Algorithm for Blind Signal Separation
Abstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations. | [
59,
169,
212,
576,
872,
874,
1067,
1200,
1243,
1245,
1246,
1258,
1381,
1520,
1524,
1709,
1814,
1922,
2026
] | Train |
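The equivariant natural-gradient separation rule described above can be sketched in batch form. The tanh score function, learning rate, iteration count, Laplacian test sources, and mixing matrix below are illustrative assumptions, not the paper's exact on-line algorithm (which uses a different activation derived from the Gram-Charlier expansion).

```python
import numpy as np

def separate(X, lr=0.02, iters=4000, seed=0):
    # Natural-gradient ICA step: W <- W + lr * (I - E[f(y) y^T]) W,
    # with f = tanh (a common choice for super-Gaussian sources);
    # the expectation is estimated over the whole batch X (signals x samples).
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(iters):
        Y = W @ X
        W += lr * (np.eye(n) - np.tanh(Y) @ Y.T / T) @ W
    return W

rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 2000))          # two independent super-Gaussian sources
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # "unknown" mixing matrix (toy)
W = separate(A @ S)
Y = W @ A @ S                            # recovered signals
```

Each output ends up strongly correlated with exactly one source (up to permutation and scale), which is all blind separation can promise.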
571 | 6 | Title: The Central Classifier Bound: A New Error Bound for the Classifier Chosen by Early Stopping Key
Abstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations. | [
19,
424,
1762,
2331,
2495,
2694
] | Validation |
572 | 2 | Title: Avoiding Overfitting with BP-SOM
Abstract: Overfitting is a well-known problem in the fields of symbolic and connectionist machine learning. It describes the deterioration of generalisation performance of a trained model. In this paper, we investigate the ability of a novel artificial neural network, bp-som, to avoid overfitting. bp-som is a hybrid neural network which combines a multi-layered feed-forward network (mfn) with Kohonen's self-organising maps (soms). During training, supervised back-propagation learning and unsupervised som learning cooperate in finding adequate hidden-layer representations. We show that bp-som outperforms standard backpropagation, and also back-propagation with a weight decay when dealing with the problem of overfitting. In addition, we show that bp-som succeeds in preserving generalisation performance under hidden-unit pruning, where both other methods fail. | [
112,
624,
747,
881
] | Train |
573 | 3 | Title: Iterated Revision and Minimal Change of Conditional Beliefs
Abstract: We describe a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. In particular, this model ensures that an agent makes as few changes as possible to the conditional component of its belief set. Adopting the Ramsey test, minimal conditional revision provides acceptance conditions for arbitrary right-nested conditionals. We show that problem of determining acceptance of any such nested conditional can be reduced to acceptance tests for unnested conditionals. Thus, iterated revision can be accomplished in a virtual manner, using uniterated revision. | [
270,
464
] | Test |
574 | 6 | Title: On the Learnability of Discrete Distributions (extended abstract)
Abstract: We describe a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. In particular, this model ensures that an agent makes as few changes as possible to the conditional component of its belief set. Adopting the Ramsey test, minimal conditional revision provides acceptance conditions for arbitrary right-nested conditionals. We show that problem of determining acceptance of any such nested conditional can be reduced to acceptance tests for unnested conditionals. Thus, iterated revision can be accomplished in a virtual manner, using uniterated revision. | [
242,
549,
640,
672,
1006,
1827,
1962,
2040,
2360,
2475
] | Train |
575 | 4 | Title: Issues in Using Function Approximation for Reinforcement Learning
Abstract: Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failuresnamely, a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expected it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings. | [
173,
552,
565,
738,
843,
882,
1378,
2485
] | Train |
576 | 2 | Title: An information-maximisation approach to blind separation and blind deconvolution
Abstract: We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. * Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. | [
43,
59,
212,
293,
330,
354,
355,
570,
605,
726,
731,
834,
839,
863,
874,
1014,
1067,
1200,
1245,
1258,
1381,
1524,
1526,
1710,
1801,
1814,
1922,
1932,
2026,
2552
] | Train |
577 | 3 | Title: Operations for Learning with Graphical Models decomposition techniques and the demonstration that graphical models provide
Abstract: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. | [
250,
312,
389,
401,
1502,
1532,
2034,
2492,
2660
] | Test |
578 | 0 | Title: AN EMPIRICAL APPROACH TO SOLVING THE GENERAL UTILITY PROBLEM IN SPEEDUP LEARNING
Abstract: The utility problem in speedup learning describes a common behavior of machine learning methods: the eventual degradation of performance due to increasing amounts of learned knowledge. The shape of the learning curve (cost of using a learning method vs. number of training examples) over several domains suggests a parameterized model relating performance to the amount of learned knowledge and a mechanism to limit the amount of learned knowledge for optimal performance. Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver. Experimentation illustrates the existence of a minimum (representing least cost) in the learning curve which is reached after a few training examples. Stress is placed on controlling the amount of learned knowledge as opposed to which knowledge. An attempt is also made to relate domain characteristics to the shape of the learning curve. | [
13,
482,
1122,
1333,
1877
] | Train |
579 | 2 | Title: Comparison of Kernel Estimators, Perceptrons, and Radial-Basis Functions for OCR and Speech Classification
Abstract: We compare kernel estimators, single and multi-layered perceptrons and radial-basis functions for the problems of classification of handwritten digits and speech phonemes. By taking two different applications and employing many techniques, we report here a two-dimensional study whereby a domain-independent assessment of these learning methods can be possible. We consider a feed-forward network with one hidden layer. As examples of the local methods, we use kernel estimators like k-nearest neighbor (k-nn), Parzen windows, generalized k-nn, and Grow and Learn (Condensed Nearest Neighbor). We have also considered fuzzy k-nn due to its similarity. As distributed networks, we use linear perceptron, pairwise separating linear perceptron, and multilayer perceptrons with sigmoidal hidden units. We also tested the radial-basis function network which is a combination of local and distributed networks. Four criteria are taken for comparison: Correct classification of the test set, network size, learning time, and the operational complexity. We found that perceptrons when the architecture is suitable, generalize better than local, memory-based kernel estimators but require longer training and more precise computation. Local networks are simple, learn very quickly and acceptably, but use more memory. | [
611,
696,
747
] | Train |
580 | 0 | Title: Learning to Improve Case Adaptation by Introspective Reasoning and CBR
Abstract: In current CBR systems, case adaptation is usually performed by rule-based methods that use task-specific rules hand-coded by the system developer. The ability to define those rules depends on knowledge of the task and domain that may not be available a priori, presenting a serious impediment to endowing CBR systems with the needed adaptation knowledge. This paper describes ongoing research on a method to address this problem by acquiring adaptation knowledge from experience. The method uses reasoning from scratch, based on introspective reasoning about the requirements for successful adaptation, to build up a library of adaptation cases that are stored for future reuse. We describe the tenets of the approach and the types of knowledge it requires. We sketch initial computer implementation, lessons learned, and open questions for further study. | [
581,
922,
1126,
1212,
1215,
1497
] | Train |
581 | 0 | Title: Representing Self-knowledge for Introspection about Memory Search
Abstract: This position paper sketches a framework for modeling introspective reasoning and discusses the relevance of that framework for modeling introspective reasoning about memory search. It argues that effective and flexible memory processing in rich memories should be built on five types of explicitly represented self-knowledge: knowledge about information needs, relationships between different types of information, expectations for the actual behavior of the information search process, desires for its ideal behavior, and representations of how those expectations and desires relate to its actual performance. This approach to modeling memory search is both an illustration of general principles for modeling introspective reasoning and a step towards addressing the problem of how a reasoner human or machinecan acquire knowledge about the properties of its own knowledge base. | [
49,
50,
222,
580
] | Train |
582 | 0 | Title: In Machine Learning: A Multistrategy Approach, Vol. IV Macro and Micro Perspectives of Multistrategy Learning
Abstract: Machine learning techniques are perceived to have a great potential as means for the acquisition of knowledge; nevertheless, their use in complex engineering domains is still rare. Most machine learning techniques have been studied in the context of knowledge acquisition for well defined tasks, such as classification. Learning for these tasks can be handled by relatively simple algorithms. Complex domains present difficulties that can be approached by combining the strengths of several complementing learning techniques, and overcoming their weaknesses by providing alternative learning strategies. This study presents two perspectives, the macro and the micro, for viewing the issue of multistrategy learning. The macro perspective deals with the decomposition of an overall complex learning task into relatively well-defined learning tasks, and the micro perspective deals with designing multistrategy learning techniques for supporting the acquisition of knowledge for each task. The two perspectives are discussed in the context of | [
259,
818,
1498,
1792
] | Train |
583 | 0 | Title: Introspective reasoning using meta-explanations for multistrategy learning
Abstract: In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task. | [
50,
64,
284,
643,
1126,
1214,
1278,
2371,
2398,
2568
] | Train |
584 | 3 | Title: A MEAN FIELD LEARNING ALGORITHM FOR UNSUPERVISED NEURAL NETWORKS
Abstract: We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered sigmoid belief networks. We show how to (approximately) infer the statistics of these networks without resort to sampling. This is done by solving the mean field equations, which relate the statistics of each unit to those of its Markov blanket. Using these statistics as target values, the weights in the network are adapted by a local delta rule. We evaluate the strengths and weaknesses of these networks for problems in statistical pattern recognition. | [
250,
427,
639
] | Test |
585 | 5 | Title: An investigation of noise-tolerant relational concept learning algorithms
Abstract: We discuss the types of noise that may occur in relational learning systems and describe two approaches to addressing noise in a relational concept learning algorithm. We then evaluate each approach experimentally. | [
335,
378,
911,
1061,
1275,
2091,
2290,
2291
] | Validation |
586 | 2 | Title: Neural Learning of Chaotic Dynamics: The Error Propagation Algorithm trains a neural network to identify
Abstract: Technical Report UMIACS-TR-97-77 and CS-TR-3843 Abstract | [
28
] | Test |
587 | 2 | Title: NONPARAMETRIC SELECTION OF INPUT VARIABLES FOR CONNECTIONIST LEARNING
Abstract: Technical Report UMIACS-TR-97-77 and CS-TR-3843 Abstract | [
88,
214,
427,
2239
] | Train |
588 | 4 | Title: LEARNING TO AVOID COLLISIONS: A REINFORCEMENT LEARNING PARADIGM FOR MOBILE ROBOT NAVIGATION
Abstract: The paper describes a self-learning control system for a mobile robot. Based on sensor information the control system has to provide a steering signal in such a way that collisions are avoided. Since in our case no `examples' are available, the system learns on the basis of an external reinforcement signal which is negative in case of a collision and zero otherwise. We describe the adaptive algorithm which is used for a discrete coding of the state space, and the adaptive algorithm for learning the correct mapping from the input (state) vector to the output (steering) signal. | [
186,
294,
566,
699,
747
] | Train |
589 | 2 | Title: BRIGHTNESS PERCEPTION, ILLUSORY CONTOURS, AND CORTICOGENICULATE FEEDBACK
Abstract: * Partially supported by the Advanced Research Projects Agency (AFOSR 90-0083). † Partially supported by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0499), the Advanced Research Projects Agency (ONR N00014-92-J-4015), and the Office of Naval Research (ONR N00014-91-J-4100). ‡ Partially funded by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0334) and the Office of Naval Research (ONR N00014-91-J-4100 and ONR N00014-94-1-0597). | [
282,
592,
1509
] | Train |
590 | 2 | Title: APPROXIMATION IN L_p(R^d) FROM SPACES SPANNED BY THE PERTURBED INTEGER TRANSLATES OF
Abstract: May 14, 1995 Abstract. The problem of approximating smooth L_p-functions from spaces spanned by the integer translates of a radially symmetric function is very well understood. In case the points of translation, ffi, are scattered throughout R^d, the approximation problem is only well understood in the "stationary" setting. In this work, we treat the "non-stationary" setting under the assumption that ffi is a small perturbation of Z^d. Our results, which are similar in many respects to the known results for the case ffi = Z^d, apply specifically to the examples of the Gauss kernel and the Generalized Multiquadric. | [
364,
365,
366
] | Train |
591 | 6 | Title: Toward Efficient Agnostic Learning
Abstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. | [
199,
287,
453,
488,
549,
640,
672,
848,
1032,
1105,
1186,
1358,
2054,
2155,
2182,
2475,
2690
] | Test |
592 | 2 | Title: FIGURE-GROUND SEPARATION BY VISUAL CORTEX Encyclopedia of Neuroscience
Abstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. | [
282,
589,
1144,
1509
] | Validation |
593 | 0 | Title: THE DESIGN AND IMPLEMENTATION OF A CASE-BASED PLANNING FRAMEWORK WITHIN A PARTIAL-ORDER PLANNER
Abstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. | [
300,
594
] | Train |