node_id (int64, 0-76.9k) | label (int64, 0-39) | text (string, 13-124k chars) | neighbors (list, 0-3.32k items) | mask (string, 4 classes)
|---|---|---|---|---|
1,894 | 3 | Title: Causal inference, path analysis, and recursive structural equations models. In C. Clogg, editor, Sociological Methodology,
Abstract: [Lipid Research Clinic Program 84] Lipid Research Clinic Program. The Lipid Research Clinics Coronary Primary Prevention Trial results, parts I and II. Journal of the American Medical Association, 251(3):351-374, January 1984. [Pearl 93] Judea Pearl. Aspects of graphical models connected with causality. Technical Report R-195-LL, Cognitive Systems Laboratory, UCLA, June 1993. Submitted to Biometrika (June 1993). Short version in Proceedings of the 49th Session of the International Statistical Institute: Invited papers, Florence, Italy, August 1993, Tome LV, Book 1, pp. 391-401. | [
827,
909,
1527,
2144,
2524
] | Train |
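Every row in this table shares one schema: a node id, an integer class label, the paper text, a list of neighbor node ids, and a split mask. As a minimal sketch of turning such records into a labeled graph, assuming the rows are available as Python dicts with these five fields (the `rows` stub below is an illustrative placeholder, not real data):

```python
from collections import defaultdict

# Illustrative stub records mirroring the table's schema.
rows = [
    {"node_id": 1894, "label": 3, "text": "Title: ...", "neighbors": [827, 909, 1527], "mask": "Train"},
    {"node_id": 1895, "label": 2, "text": "Title: ...", "neighbors": [102, 638, 1893], "mask": "Train"},
]

adjacency = defaultdict(set)           # undirected view of the citation links
labels, splits = {}, defaultdict(list)
for r in rows:
    labels[r["node_id"]] = r["label"]
    splits[r["mask"]].append(r["node_id"])
    for nb in r["neighbors"]:
        adjacency[r["node_id"]].add(nb)
        adjacency[nb].add(r["node_id"])

print(len(splits["Train"]), "training nodes;",
      sum(map(len, adjacency.values())) // 2, "edges")
```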
1,895 | 2 | Title: Generating Neural Networks Through the Induction of Threshold Logic Unit Trees (Extended Abstract)
Abstract: We investigate the generation of neural networks through the induction of binary trees of threshold logic units (TLUs). Initially, we describe the framework for our tree construction algorithm and how such trees can be transformed into an isomorphic neural network topology. Several methods for learning the linear discriminant functions at each node of the tree structure are examined and shown to produce accuracy results that are comparable to classical information theoretic methods for constructing decision trees (which use single feature tests at each node). Our TLU trees, however, are smaller and thus easier to understand. Moreover, we show that it is possible to simultaneously learn both the topology and weight settings of a neural network simply using the training data set that we are given. | [
102,
638,
1893
] | Train |
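The construction this abstract describes, a decision tree whose internal nodes are learned linear discriminants, can be sketched in a few lines. This is a simplified illustration assuming binary {0,1} labels and a plain perceptron rule at each node, not the authors' exact algorithm:

```python
import numpy as np

def train_tlu(X, y, epochs=50, lr=0.1):
    """Train one threshold logic unit with the perceptron rule ({0,1} labels)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = float(xi @ w > 0)
            w += lr * (yi - pred) * xi
    return w

def grow_tlu_tree(X, y, depth=0, max_depth=5):
    """Recursively split the data with learned linear discriminants."""
    if depth == max_depth or len(np.unique(y)) == 1:
        return {"leaf": int(round(float(y.mean())))}
    w = train_tlu(X, y)
    side = np.hstack([X, np.ones((len(X), 1))]) @ w > 0
    if side.all() or (~side).all():             # degenerate split: stop here
        return {"leaf": int(round(float(y.mean())))}
    return {"w": w,
            "pos": grow_tlu_tree(X[side], y[side], depth + 1, max_depth),
            "neg": grow_tlu_tree(X[~side], y[~side], depth + 1, max_depth)}
```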
1,896 | 2 | Title: Experiments with the Cascade-Correlation Algorithm
Abstract: Technical Report # 91-16 July 1991; Revised August 1991 | [
496,
1851,
2393
] | Test |
1,897 | 6 | Title: On Learning Visual Concepts and DNF Formulae
Abstract: We consider the problem of learning DNF formulae in the mistake-bound and the PAC models. We develop a new approach, which is called polynomial explainability, that is shown to be useful for learning some new subclasses of DNF (and CNF) formulae that were not known to be learnable before. Unlike previous learnability results for DNF (and CNF) formulae, these subclasses are not limited in the number of terms or in the number of variables per term; yet, they contain the subclasses of k-DNF and k-term-DNF (and the corresponding classes of CNF) as special cases. We apply our DNF results to the problem of learning visual concepts and obtain learning algorithms for several natural subclasses of visual concepts that appear to have no natural boolean counterpart. On the other hand, we show that learning some other natural subclasses of visual concepts is as hard as learning the class of all DNF formulae. We also consider the robustness of these results under various types of noise. | [
25,
640,
732,
1003,
2146,
2182
] | Train |
1,898 | 3 | Title: Accounting for Context in Plan Recognition, with Application to Traffic Monitoring
Abstract: Typical approaches to plan recognition start from a representation of an agent's possible plans, and reason evidentially from observations of the agent's actions to assess the plausibility of the various candidates. A more expansive view of the task (consistent with some prior work) accounts for the context in which the plan was generated, the mental state and planning process of the agent, and consequences of the agent's actions in the world. We present a general Bayesian framework encompassing this view, and focus on how context can be exploited in plan recognition. We demonstrate the approach on a problem in traffic monitoring, where the objective is to induce the plan of the driver from observation of vehicle movements. Starting from a model of how the driver generates plans, we show how the highway context can appropriately influence the recognizer's interpretation of observed driver behavior. | [
278,
1268,
2108,
2140
] | Train |
1,899 | 3 | Title: Logarithmic Time Parallel Bayesian Inference
Abstract: I present a parallel algorithm for exact probabilistic inference in Bayesian networks. For polytree networks with n variables, the worst-case time complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write parallel random-access machine) with n processors, for any constant number of evidence variables. For arbitrary networks, the time complexity is O(r^(3w) log n) for n processors, or O(w log n) for r^(3w) n processors, where r is the maximum range of any variable, and w is the induced width (the maximum clique size), after moralizing and triangulating the network. | [
327,
2292
] | Test |
1,900 | 3 | Title: Learning Convex Sets of Probability from Data
Abstract: Several theories of inference and decision employ sets of probability distributions as the fundamental representation of (subjective) belief. This paper investigates a frequentist connection between empirical data and convex sets of probability distributions. Building on earlier work by Walley and Fine, a framework is advanced in which a sequence of random outcomes can be described as being drawn from a convex set of distributions, rather than just from a single distribution. The extra generality can be detected from observable characteristics of the outcome sequence. The paper presents new asymptotic convergence results paralleling the laws of large numbers in probability theory, and concludes with a comparison between this approach and approaches based on prior subjective constraints. © 1997 Carnegie Mellon University | [
2492
] | Train |
1,901 | 3 | Title: Learning Convex Sets of Probability from Data
Abstract: This reproduces a report submitted to Rome Laboratory on October 27, 1994. © Copyright 1994 by Jon Doyle. All rights reserved. Freely available via http://www.medg.lcs.mit.edu/doyle. Final Report on Rational Distributed Reason Maintenance. Abstract: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the sub-plans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance (the standard method for belief revision) choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. We develop revision methods aiming to revise only those beliefs and plans worth revising, and to tolerate incoherence and ungroundedness when these are judged less detrimental than a costly revision effort. We use an artificial market economy in planning and revision tasks to arrive at overall judgments of worth, and present a representation for qualitative preferences that permits capture of common forms of dominance information. | [
1800,
2301
] | Train |
1,902 | 2 | Title: LU TP 91-25 Mass Reconstruction with a Neural Network
Abstract: A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W → qq̄, where W-bosons are produced in pp reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using "intelligent" variables in instances when the number of training instances is limited. | [
1884,
1885,
1886,
1887
] | Train |
1,903 | 2 | Title: DNA: A New ASOCS Model With Improved Implementation Potential
Abstract: A new class of high-speed, self-adaptive, massively parallel computing models called ASOCS (Adaptive Self-Organizing Concurrent Systems) has been proposed. Current analysis suggests that there may be problems implementing ASOCS models in VLSI using the hierarchical network structures originally proposed. The problems are not inherent in the models, but rather in the technology used to implement them. This has led to the development of a new ASOCS model called DNA (Discriminant-Node ASOCS) that does not depend on a hierarchical node structure for success. Three areas of the DNA model are briefly discussed in this paper: DNA's flexible nodes, how DNA overcomes problems other models have allocating unused nodes, and how DNA operates during processing and learning. | [
2612
] | Train |
1,904 | 0 | Title: Using Case-Based Reasoning for Mobile Robot Navigation
Abstract: This paper presents an approach to mobile robot path planning using case-based reasoning together with map-based path planning. The map-based path planner is used to seed the case-base with innovative solutions. The case-base stores the paths and information about their traversability. When planning a route, those paths that prior experience indicates are least risky are preferred. | [
643,
2556
] | Test |
1,905 | 1 | Title: Determining Successful Negotiation Strategies: An Evolutionary Approach
Abstract: To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular, we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. | [
55,
163,
1834
] | Test |
1,906 | 3 | Title: Bayesian Estimation and Model Choice in Item Response Models
Abstract: To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular, we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. | [
2421
] | Train |
1,907 | 3 | Title: Toward Rational Planning and Replanning: Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences
Abstract: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance (the standard method for belief revision) choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch or by revising a previous plan, changing only the portions of the plan actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues. This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replanning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information. Consequently, the problem of revising plans to account for changed conditions has much | [
1800,
1995,
2301
] | Train |
1,908 | 3 | Title: Induction of Selective Bayesian Classifiers
Abstract: In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features. We hypothesize that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not. We report experimental results on six natural domains, including comparisons with decision-tree induction, that support these hypotheses. In closing, we discuss other approaches to extending naive Bayesian classifiers and outline some directions for future research. | [
442,
1582,
1647,
1838,
1893,
2514,
2561
] | Test |
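The greedy search over feature subsets that this abstract couples with naive Bayes is straightforward to sketch. Assuming scikit-learn for the classifier and cross-validated accuracy as the selection score (a convenience, not the paper's exact protocol):

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def select_features(X, y, cv=5):
    """Greedy forward selection: repeatedly add the feature that most improves
    cross-validated naive Bayes accuracy; stop when no candidate helps."""
    remaining = list(range(X.shape[1]))
    chosen, best_score = [], 0.0
    while remaining:
        acc, f = max(
            (cross_val_score(GaussianNB(), X[:, chosen + [f]], y, cv=cv).mean(), f)
            for f in remaining
        )
        if acc <= best_score:        # no improvement: stop searching
            break
        best_score, chosen = acc, chosen + [f]
        remaining.remove(f)
    return chosen, best_score
```

Embedding the selection step this way directly targets the correlated-feature failure mode: a redundant feature never improves the held-out score, so it is never added.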
1,909 | 3 | Title: A Comparison of Induction Algorithms for Selective and non-Selective Bayesian Classifiers
Abstract: In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, high-predictive-accuracy networks. We compare the performance of this classifier with selective and non-selective naive Bayesian classifiers. We show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases analyzed, and hence is an enhancement of the naive Bayesian classifier. Relative to the non-selective Bayesian network classifier, our selective Bayesian network classifier generates networks that are computationally simpler to evaluate and that display predictive accuracy comparable to that of Bayesian networks which model all features. | [
632,
1545,
1582,
2677
] | Validation |
1,910 | 3 | Title: Minimax Estimation via Wavelet Shrinkage
Abstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p ≤ q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, ...) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from the AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled "Wavelets and Optimal Function Estimation", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. | [
705,
1668,
2081,
2159,
2242,
2366,
2375,
2416,
2488,
2506,
2575,
2661
] | Test |
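The nonlinear shrinkage step at the heart of this estimator is compact in code. A sketch using PyWavelets with soft thresholding at the universal threshold sigma * sqrt(2 log n), the simple VisuShrink-style rule standing in for the minimax-tuned thresholds the paper develops:

```python
import numpy as np
import pywt

def wavelet_shrink(y, wavelet="db4", level=4):
    """Denoise a 1-D signal by soft-thresholding its wavelet coefficients."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Estimate the noise level from the finest-scale coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(y)))    # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)

# Usage: recover a smooth signal from noisy samples.
t = np.linspace(0, 1, 1024)
noisy = np.sin(4 * np.pi * t) + 0.3 * np.random.randn(1024)
denoised = wavelet_shrink(noisy)
```

The spatial adaptivity the abstract mentions is visible here: coefficients survive thresholding only where the signal has real local structure.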
1,911 | 1 | Title: Genetic Programming and Data Structures
Abstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p ≤ q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, ...) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from the AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled "Wavelets and Optimal Function Estimation", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. | [
290,
860,
1098,
1719,
2087,
2206
] | Validation |
1,912 | 2 | Title: Theory of Correlations in Stochastic Neural Networks
Abstract: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising several highly connected subpopulations and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order 1/N, N being the size of the interacting populations. Using the weakness of the cross-correlations, general equations which express the matrix of cross-correlations in terms of the mean neuronal activities and the effective interaction matrix are presented. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (non-orthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for the comparison between the contribution from the internal common input and that from the direct | [
304,
1932
] | Train |
1,913 | 3 | Title: A Note on the Dirichlet Process Prior in Bayesian Nonparametric Inference with Partial Exchangeability 1
Abstract: Technical Report no. 297 Department of Statistics University of Washington 1 Sonia Petrone is Assistant Professor, Universita di Pavia, Dipartimento di Economia Politica e Metodi Quantitativi, I-27100 Pavia, Italy and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322. This research was supported by ONR grant no. N-00014-91-J-1074 and by grants from MURST, Rome. | [
416,
1803
] | Test |
1,914 | 2 | Title: Local Feedforward Networks
Abstract: Although feedforward neural networks are well suited to function approximation, in some applications networks experience problems when learning a desired function. One problem is interference which occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To understand these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed that incorporates not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multi-layer perceptron (MLP) networks that employ the back-prop learning algorithm, we address a familiar misconception that sigmoidal networks are inherently non-local by demonstrating that given a sufficiently large number of adjustable parameters, sigmoidal MLPs can be made arbitrarily local while retaining the ability to represent any continuous function on a compact domain. | [
427,
2176
] | Train |
1,915 | 2 | Title: A Mixture of Experts Model Exhibiting Prosopagnosia
Abstract: A considerable body of evidence from prosopagnosia, a deficit in face recognition dissociable from nonface object recognition, indicates that the visual system devotes a specialized functional area to mechanisms appropriate for face processing. We present a modular neural network composed of two expert networks and one mediating gate network with the task of learning to recognize the faces of 12 individuals and classifying 36 nonface objects as members of one of three classes. While learning the task, the network tends to divide labor between the two expert modules, with one expert specializing in face processing and the other specializing in nonface object processing. After training, we observe the network's performance on a test set as one of the experts is progressively damaged. The results roughly agree with data reported for prosopagnosic patients: as damage to the face expert increases, the network's face recognition performance decreases dramatically while its object classification performance drops slowly. We conclude that data-driven competitive learning between two unbiased functional units can give rise to localized face processing, and that selective damage in such a system could underlie prosopagnosia. | [
1981,
2497
] | Test |
1,916 | 2 | Title: Modeling Cortical Plasticity Based on Adapting Lateral Interaction
Abstract: A neural network model called LISSOM for the cooperative self-organization of afferent and lateral connections in cortical maps is applied to modeling cortical plasticity. After self-organization, the LISSOM maps are in a dynamic equilibrium with the input, and reorganize like the cortex in response to simulated cortical lesions and intracortical microstimulation. The model predicts that adapting lateral interactions are fundamental to cortical reorganization, and suggests techniques to hasten recovery following sensory cortical surgery. | [
2400
] | Train |
1,917 | 1 | Title: Go and Genetic Programming Playing Go with Filter Functions
Abstract: A neural network model called LISSOM for the cooperative self-organization of afferent and lateral connections in cortical maps is applied to modeling cortical plasticity. After self-organization, the LISSOM maps are in a dynamic equilibrium with the input, and reorganize like the cortex in response to simulated cortical lesions and intracortical microstimulation. The model predicts that adapting lateral interactions are fundamental to cortical reorganization, and suggests techniques to hasten recovery following sensory cortical surgery. | [
1796,
2334
] | Train |
1,918 | 6 | Title: Stochastic Logic Programs
Abstract: One way to represent a machine learning algorithm's bias over the hypothesis and instance space is as a pair of probability distributions. This approach has been taken both within Bayesian learning schemes and the framework of U-learnability. However, it is not obvious how an Inductive Logic Programming (ILP) system should best be provided with a probability distribution. This paper extends the results of a previous paper by the author which introduced stochastic logic programs as a means of providing a structured definition of such a probability distribution. Stochastic logic programs are a generalisation of stochastic grammars. A stochastic logic program consists of a set of labelled clauses p : C where p is from the interval [0, 1] and C is a range-restricted definite clause. A stochastic logic program P has a distributional semantics, that is one which assigns a probability distribution to the atoms of each predicate in the Herbrand base of the clauses in P. These probabilities are assigned to atoms according to an SLD-resolution strategy which employs a stochastic selection rule. It is shown that the probabilities can be computed directly for fail-free logic programs and by normalisation for arbitrary logic programs. The stochastic proof strategy can be used to provide three distinct functions: 1) a method of sampling from the Herbrand base which can be used to provide selected targets or example sets for ILP experiments, 2) a measure of the information content of examples or hypotheses; this can be used to guide the search in an ILP system and 3) a simple method for conditioning a given stochastic logic program on samples of data. Functions 1) and 3) are used to measure the generality of hypotheses in the ILP system Progol4.2. This supports an implementation of a Bayesian technique for learning from positive examples only. * This paper is an extension of a paper with the same title which appeared in [12] | [
1290,
2329
] | Train |
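Since stochastic logic programs generalise stochastic grammars, the sampling function the abstract lists as 1) can be illustrated with a grammar-like toy program: each predicate has labelled clauses whose probability labels drive the stochastic selection rule. The program below is a hypothetical example for illustration, not taken from the paper:

```python
import random

# Toy stochastic "program": each predicate maps to labelled clauses
# (probability, list of subgoals, emitted token). Hypothetical example.
SLP = {
    "s":  [(1.0, ["np", "vp"], None)],
    "np": [(0.5, [], "alice"), (0.5, [], "bob")],
    "vp": [(0.7, ["likes_np"], None), (0.3, [], "sleeps")],
    "likes_np": [(1.0, ["np"], "likes")],
}

def sample(goal):
    """Stochastic selection rule: pick a clause with its label's probability,
    then derive its subgoals recursively (an always-succeeding derivation)."""
    probs, clauses = zip(*[(p, (body, tok)) for p, body, tok in SLP[goal]])
    body, tok = random.choices(clauses, weights=probs)[0]
    out = [tok] if tok else []
    for sub in body:
        out += sample(sub)
    return out

print(" ".join(sample("s")))   # e.g. "alice likes bob"
```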
1,919 | 5 | Title: Inductive Constraint Logic and the Mutagenesis Problem
Abstract: A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on the mutagenesis problem. These experiments illustrate some of the differences with other systems, and indicate that our approach should work at least as well as the more classical approaches. | [
638,
1007,
2426,
2431
] | Train |
1,920 | 2 | Title: On the Probability of Chaos in Large Dynamical Systems: A Monte Carlo Study
Abstract: In this paper we report the result of a Monte Carlo study on the probability of chaos in large dynamical systems. We use neural networks as the basis functions for the system dynamics and choose parameter values for the networks randomly. Our results show that as the dimension of the system and the complexity of the network increase, the probability of chaotic dynamics increases to 100%. Since neural networks are dense in the set of dynamical systems, our conclusion is that most large systems are chaotic. | [
1874
] | Test |
1,921 | 1 | Title: Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming
Abstract: The design (synthesis) of analog electrical circuits starts with a high-level statement of the circuit's desired behavior and requires creating a circuit that satisfies the specified design goals. Analog circuit synthesis entails the creation of both the topology and the sizing (numerical values) of all of the circuit's components. The difficulty of the problem of analog circuit synthesis is well known and there is no previously known general automated technique for synthesizing an analog circuit from a high-level statement of the circuit's desired behavior. This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a lowpass filter, a crossover (woofer and tweeter) filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperature-sensing circuit, and a voltage reference circuit. The problem-specific information required for each of the eight problems is minimal and consists primarily of the number of inputs and outputs of the desired circuit, the types of available components, and a fitness measure that restates the high-level statement of the circuit's desired behavior as a measurable mathematical quantity. The eight genetically evolved circuits constitute an instance of an evolutionary computation technique producing results on a task that is usually thought of as requiring human intelligence. The fact that a single uniform approach yielded a satisfactory design for each of the eight circuits as well as the fact that a satisfactory design was created on the first or second run of each problem are evidence for the general applicability of genetic programming for solving the problem of automatic synthesis of analog electrical circuits. | [
523,
1408,
1931,
2402
] | Train |
1,922 | 3 | Title: Maximum Likelihood and Covariant Algorithms for Independent Component Analysis
Abstract: Bell and Sejnowski (1995) have derived a blind signal processing algorithm for a non-linear feedforward network from an information maximization viewpoint. This paper first shows that the same algorithm can be viewed as a maximum likelihood algorithm for the optimization of a linear generative model. Third, this paper gives a partial proof of the "folk-theorem" that any mixture of sources with high-kurtosis histograms is separable by the classic ICA algorithm. | [
570,
576,
2026
] | Train |
1,923 | 2 | Title: EM Algorithms for PCA and SPCA
Abstract: I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions. | [
71,
1928,
2114,
2227
] | Train |
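In the zero-noise limit of the model described here, each EM iteration reduces to two linear least-squares solves, which is where the space and time efficiency comes from. A minimal numpy sketch of that limiting case (variable names are ours, not the paper's notation):

```python
import numpy as np

def em_pca(Y, k, iters=100):
    """EM for PCA, noise-free case: Y is (d, n) centered data, k components."""
    d, n = Y.shape
    C = np.random.randn(d, k)                     # initial loading matrix
    for _ in range(iters):
        X = np.linalg.solve(C.T @ C, C.T @ Y)     # E-step: infer latents
        C = Y @ X.T @ np.linalg.inv(X @ X.T)      # M-step: re-fit loadings
    return np.linalg.qr(C)[0]                     # orthonormal leading subspace

# Usage: leading 2-D subspace of 10-D data, no full eigendecomposition needed.
Y = np.random.randn(10, 500)
Y -= Y.mean(axis=1, keepdims=True)
W = em_pca(Y, k=2)
```

Note that each iteration touches only (d x k) and (k x n) quantities, never a full (d x d) covariance, which is the efficiency claim made in the abstract.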
1,924 | 6 | Title: Training Algorithms for Hidden Markov Models Using Entropy Based Distance Functions
Abstract: We present new algorithms for parameter estimation of HMMs. By adapting a framework used for supervised learning, we construct iterative algorithms that maximize the likelihood of the observations while also attempting to stay close to the current estimated parameters. We use a bound on the relative entropy between the two HMMs as a distance measure between them. The result is new iterative training algorithms which are similar to the EM (Baum-Welch) algorithm for training HMMs. The proposed algorithms are composed of a step similar to the expectation step of Baum-Welch and a new update of the parameters which replaces the maximization (re-estimation) step. The algorithm takes only negligibly more time per iteration and an approximated version uses the same expectation step as Baum-Welch. We evaluate experimentally the new algorithms on synthetic and natural speech pronunciation data. For sparse models, i.e. models with relatively small number of non-zero parameters, the proposed algorithms require significantly fewer iterations. | [
345,
2040,
2327
] | Test |
1,925 | 1 | Title: Boolean Functions Fitness Spaces
Abstract: We investigate the distribution of performance of the Boolean functions of 3 Boolean inputs (particularly that of the parity functions), and of the always-on-6 and even-6 parity functions. We use enumeration, uniform Monte-Carlo random sampling and sampling random full trees. As expected, XOR dramatically changes the fitness distributions. In all cases, once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. However the distribution of the performance of full trees is different from that of asymmetric trees and varies with tree depth. We consider but reject testing the No Free Lunch (NFL) theorems on these functions. | [
2133,
2206,
2392
] | Test |
1,926 | 3 | Title: Analysis of a Non-Reversible Markov Chain Sampler
Abstract: Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help. | [
748,
1941
] | Validation |
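A standard toy that shows why non-reversibility helps, in the spirit of the chain analysed in this report (though not its exact construction): lift a random walk on a ring of n states with a persistent direction variable. The persistent motion covers the ring in roughly n steps where the diffusive, reversible walk needs on the order of n squared:

```python
import random

def lifted_walk(n, steps, flip_prob=0.05):
    """Non-reversible walk on Z_n: keep moving in one direction, reversing
    only with small probability, instead of diffusing back and forth."""
    x, direction = 0, 1
    visits = [0] * n
    for _ in range(steps):
        if random.random() < flip_prob:
            direction = -direction   # rare reversals keep the uniform target
        x = (x + direction) % n
        visits[x] += 1
    return visits

# The chain circulates, so all 50 states are visited within a few hundred
# steps; a simple symmetric walk would need on the order of 50**2 steps.
print(lifted_walk(50, 10_000)[:5])
```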
1,927 | 2 | Title: A Neural Architecture for Content as well as Address-Based Storage and Recall: Theory and Applications
Abstract: Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help. | [
1846,
1847,
2537
] | Validation |
1,928 | 2 | Title: Mixtures of Probabilistic Principal Component Analysers
Abstract: Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. | [
74,
667,
1923,
2114,
2124,
2570
] | Train |
1,929 | 2 | Title: Geometry of Early Stopping in Linear Networks
Abstract: | [
2349
] | Train |
1,930 | 1 | Title: A NEW METHODOLOGY FOR REDUCING BRITTLENESS IN GENETIC PROGRAMMING
Abstract: [...] optimized maneuvers for an extended two-dimensional [...] programs were independently evolved using fixed and randomly-generated fitness cases. These programs were subsequently tested against a large, representative fixed population of pursuers to determine their relative effectiveness. This paper describes the implementation of both the original and modified systems, and summarizes the results of these tests. | [
2512
] | Test |
1,931 | 1 | Title: AUTOMATED DESIGN OF BOTH THE TOPOLOGY AND SIZING OF ANALOG CIRCUITS
Abstract: This paper describes an automated process for designing analog electrical circuits based on the principles of natural selection, sexual recombination, and developmental biology. The design process starts with the random creation of a large population of program trees composed of circuit-constructing functions. Each program tree specifies the steps by which a fully developed circuit is to be progressively developed from a common embryonic circuit appropriate for the type of circuit that the user wishes to design. Each fully developed circuit is translated into a netlist, simulated using a modified version of SPICE, and evaluated as to how well it satisfies the user's design requirements. The fitness measure is a user-written computer program that may incorporate any calculable characteristic or combination of characteristics of the circuit, including the circuit's behavior in the time domain, its behavior in the frequency domain, its power consumption, the number of components, cost of components, or surface area occupied by its components. The population of program trees is genetically bred over a series of many generations using genetic programming. Genetic programming is driven by a fitness measure and employs genetic operations such as Darwinian reproduction, sexual recombination (crossover), and occasional mutation to create offspring. This automated evolutionary process produces both the topology of the circuit and the numerical values for each component. This paper describes how genetic programming can evolve the circuit for a difficult-to-design low-pass filter. | [
523,
1408,
1921,
2277,
2402,
2624
] | Train |
1,932 | 2 | Title: Constrained Optimization for Neural Map Formation: A Unifying Framework for Weight Growth and Normalization
Abstract: Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics that abstract from the neural activity dynamics by an adiabatic approximation, and constrained optimization from which equations governing weight dynamics can be derived. Constrained optimization uses an objective function, from which a weight growth rule can be derived as a gradient flow, and some constraints, from which normalization rules are derived. In this paper we present an example of how an optimization problem can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from two types of objective function terms and two types of constraints. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same optimization problem. Several examples illustrate how the constrained optimization framework can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics. | [
18,
576,
745,
1912,
2024
] | Validation |
1,933 | 3 | Title: Continuous sigmoidal belief networks trained using slice sampling
Abstract: Real-valued random hidden variables can be useful for modelling latent structure that explains correlations among observed variables. I propose a simple unit that adds zero-mean Gaussian noise to its input before passing it through a sigmoidal squashing function. Such units can produce a variety of useful behaviors, ranging from deterministic to binary stochastic to continuous stochastic. I show how "slice sampling" can be used for inference and learning in top-down networks of these units and demonstrate learning on two simple problems. | [
36,
748,
2660
] | Test |
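The proposed unit is essentially a one-liner: add zero-mean Gaussian noise to the input, then squash it. A sketch (parameter names are ours) showing how the noise level moves the unit between near-deterministic and near-binary behavior:

```python
import numpy as np

def stochastic_sigmoid_unit(x, sigma, rng=np.random.default_rng()):
    """Add zero-mean Gaussian noise, then squash through a sigmoid: small
    sigma gives a nearly deterministic unit, large sigma pushes the output
    distribution toward binary 0/1 draws."""
    return 1.0 / (1.0 + np.exp(-(x + sigma * rng.normal(size=np.shape(x)))))

near_deterministic = stochastic_sigmoid_unit(np.zeros(5), sigma=0.1)
near_binary = stochastic_sigmoid_unit(np.zeros(5), sigma=8.0)
```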
1,934 | 3 | Title: Sequential Update of Bayesian Network Structure
Abstract: There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks where both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations. We formally describe our approach including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through an empirical study, and extend it to the case of missing data. | [
76,
423,
558,
1816,
2463
] | Train |
1,935 | 2 | Title: Observations on Cortical Mechanisms for Object Recognition and Learning
Abstract: This paper sketches several aspects of a hypothetical cortical architecture for visual object recognition, based on a recent computational model. The scheme relies on modules for learning from examples, such as Hyperbf-like networks, as its basic components. Such models are not intended to be precise theories of the biological circuitry but rather to capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. We will describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. A number of predictions, testable with physiological techniques, are made. This memo describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by grants from the Office of Naval Research under contracts N00014-92-J-1879 and N00014-93-1-0385; and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ARPA contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at MIT's Whitaker College. | [
611,
2340,
2499
] | Train |
1,936 | 1 | Title: COLLECTIVE ADAPTATION: THE SHARING OF BUILDING BLOCKS
Abstract: This paper sketches several aspects of a hypothetical cortical architecture for visual object recognition, based on a recent computational model. The scheme relies on modules for learning from examples, such as Hyperbf-like networks, as its basic components. Such models are not intended to be precise theories of the biological circuitry but rather to capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. We will describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. A number of predictions, testable with physiological techniques, are made. This memo describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by grants from the Office of Naval Research under contracts N00014-92-J-1879 and N00014-93-1-0385; and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ARPA contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at MIT's Whitaker College. | [
1943
] | Train |
1,937 | 3 | Title: Using Qualitative Relationships for Bounding Probability Distributions
Abstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. | [
107,
389,
623,
1064,
2293
] | Validation |
1,938 | 3 | Title: Latent and manifest monotonicity in item response models
Abstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. | [
1764,
1765,
1770
] | Test |
1,939 | 2 | Title: Serial and Parallel Multicategory Discrimination
Abstract: A parallel algorithm is proposed for a fundamental problem of machine learning, that of multicategory discrimination. The algorithm is based on minimizing an error function associated with a set of highly structured linear inequalities. These inequalities characterize piecewise-linear separation of k sets by the maximum of k affine functions. The error function has a Lipschitz continuous gradient that allows the use of fast serial and parallel unconstrained minimization algorithms. A serial quasi-Newton algorithm is considerably faster than previous linear programming formulations. A parallel gradient distribution algorithm is used to parallelize the error-minimization problem. Preliminary computational results are given for both a DECstation | [
2307
] | Train |
1,940 | 1 | Title: A Comparison of Crossover and Mutation in Genetic Programming
Abstract: This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of building blocks in GP. | [
860,
2175,
2220,
2250
] | Test |
1,941 | 3 | Title: SUPPRESSING RANDOM WALKS IN MARKOV CHAIN MONTE CARLO USING ORDERED OVERRELAXATION
Abstract: Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n 2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model. | [
748,
1926
] | Test |
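The order-statistics update at the core of this method is short: draw K values from the full conditional, pool them with the current value, and jump to the value of mirrored rank. A sketch for any conditional we can sample directly (the Gaussian in the usage lines is an illustrative stand-in; the paper's broader point is that only the CDF and inverse CDF are needed in general):

```python
import numpy as np

def ordered_overrelax(x_old, sample_conditional, K=10,
                      rng=np.random.default_rng()):
    """One ordered-overrelaxation update: draw K values from the full
    conditional, pool them with the current value, and return the pooled
    value whose rank mirrors the current one, moving through the center
    of the conditional rather than diffusing around it."""
    pool = np.append(sample_conditional(K, rng), x_old)
    order = np.argsort(pool)
    r = int(np.where(order == K)[0][0])   # rank of x_old (stored at index K)
    return pool[order[K - r]]

# Usage with an illustrative Gaussian full conditional:
draw = lambda k, rng: rng.normal(loc=2.0, scale=1.0, size=k)
x = 5.0
for _ in range(3):
    x = ordered_overrelax(x, draw)        # successive updates overshoot the mean
```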
1,942 | 3 | Title: On Functional Relation between Recognition Error and Class-Selective Reject
Abstract: This report reviews various optimum decision rules for pattern recognition, namely, Bayes rule, Chow's rule (optimum error-reject tradeoff), and a recently proposed class-selective rejection rule. The latter provides an optimum tradeoff between the error rate and the average number of (selected) classes. A new general relation between the error rate and the average number of classes is presented. The error rate can directly be computed from the class-selective reject function, which in turn can be estimated from unlabelled patterns, by simply counting the rejects. Theoretical as well as practical implications are discussed and some future research directions are proposed. | [
2573
] | Validation |
1,943 | 1 | Title: Distributed Collective Adaptation Applied to a Hard Combinatorial Optimization Problem
Abstract: We utilize collective memory to integrate weak and strong search heuristics to find cliques in FC, a family of graphs. We construct FC such that pruning of partial solutions will be ineffective. Each weak heuristic maintains a local cache of the collective memory. We examine the impact on the distributed search from the various characteristics of the distribution of the collective memory, the search algorithms, and our family of graphs. We find the distributed search performs better than the individuals, even though the space of partial solutions is combinatorially explosive. | [
163,
1696,
1936
] | Train |
1,944 | 5 | Title: Knowledge Acquisition with a Knowledge-Intensive Machine Learning System
Abstract: In this paper, we investigate the integration of knowledge acquisition and machine learning techniques. We argue that existing machine learning techniques can be made more useful as knowledge acquisition tools by allowing the expert to have greater control over and interaction with the learning process. We describe a number of extensions to FOCL (a multistrategy Horn-clause learning program) that have greatly enhanced its power as a knowledge acquisition tool, paying particular attention to the utility of maintaining a connection between a rule and the set of examples explained by the rule. The objective of this research is to make the modification of a domain theory analogous to the use of a spread sheet. A prototype knowledge acquisition tool, FOCL-1-2-3, has been constructed in order to evaluate the strengths and weaknesses of this approach. | [
1259,
2091
] | Train |
1,945 | 3 | Title: Defining Relative Likelihood in Partially-Ordered Preferential Structures
Abstract: Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning. | [
342,
729,
1993
] | Train |
1,946 | 1 | Title: Plateaus and Plateau Search in Boolean Satisfiability Problems: When to Give Up Searching and Start Again
Abstract: We empirically investigate the properties of the search space and the behavior of hill-climbing search for solving hard, random Boolean satisfiability problems. In these experiments it was frequently observed that rather than attempting to escape from plateaus by extensive search, it was better to completely restart from a new random initial state. The optimum point to terminate search and restart was determined empirically over a range of problem sizes and complexities. The growth rate of the optimum cutoff is faster than linear with the number of features, although the exact growth rate was not determined. Based on these empirical results, a simple run-time heuristic is proposed to determine when to give up searching a plateau and restart. This heuristic closely approximates the empirically determined optimum values over a range of problem sizes and complexities, and consequently allows the search algorithm to automatically adjust its strategy for each particular problem without prior knowledge of the problem's complexity. | [
1030,
2516
] | Train |
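The restart policy described in record 1,946 is straightforward to prototype. Below is a minimal, illustrative GSAT-style hill climber with a flip cutoff and random restarts; the function name, cutoff values, and the toy formula are invented for this sketch and are not code or parameters from the paper.

```python
import random

def gsat_with_restarts(clauses, n_vars, max_flips=200, max_restarts=50, seed=0):
    """Greedy hill climbing for SAT that abandons a plateau after
    `max_flips` flips and restarts from a fresh random assignment.
    `clauses` holds tuples of DIMACS-style literals (positive/negative ints)."""
    rng = random.Random(seed)

    def unsat(assign):
        # a clause is satisfied if any literal agrees with the assignment
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_restarts):
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):  # the cutoff the abstract tunes empirically
            if unsat(assign) == 0:
                return assign
            # flip the variable whose flip leaves the fewest unsatisfied clauses
            best_v, best = None, float("inf")
            for v in assign:
                assign[v] = not assign[v]
                score = unsat(assign)
                assign[v] = not assign[v]
                if score < best:
                    best_v, best = v, score
            assign[best_v] = not assign[best_v]
    return None  # give up: no satisfying assignment found

# toy instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(gsat_with_restarts([(1, -2), (2, 3), (-1, -3)], 3))
```

Here `max_flips` plays the role of the empirically determined cutoff; the paper's run-time heuristic would set it adaptively rather than as a fixed constant.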
1,947 | 2 | Title: Rapid Quality Estimation of Neural Network Input Representations
Abstract: The choice of an input representation for a neural network can have a profound impact on its accuracy in classifying novel instances. However, neural networks are typically computationally expensive to train, making it difficult to test large numbers of alternative representations. This paper introduces fast quality measures for neural network representations, allowing one to quickly and accurately estimate which of a collection of possible representations for a problem is the best. We show that our measures for ranking representations are more accurate than a previously published measure, based on experiments with three difficult, real-world pattern recognition problems. | [
2557
] | Train |
1,948 | 2 | Title: Rapid Quality Estimation of Neural Network Input Representations
Abstract: FURTHER RESULTS ON CONTROLLABILITY PROPERTIES OF DISCRETE-TIME NONLINEAR SYSTEMS. Controllability questions for discrete-time nonlinear systems are addressed in this paper. In particular, we continue the search for conditions under which the group-like notion of transitivity implies the stronger and semigroup-like property of forward accessibility. We show that this implication holds, pointwise, for states which have a weak Poisson stability property, and globally, if there exists a global "attractor" for the system. | [
1746
] | Train |
1,949 | 2 | Title: A Globally Convergent Inexact Newton Method for Systems of Monotone Equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods
Abstract: We propose an algorithm for solving systems of monotone equations which combines Newton, proximal point, and projection methodologies. An important property of the algorithm is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions. Moreover, under standard assumptions the local superlinear rate of convergence is achieved. As opposed to classical globalization strategies for Newton methods, for computing the stepsize we do not use line search aimed at decreasing the value of some merit function. Instead, line search in the approximate Newton direction is used to construct an appropriate hyperplane which separates the current iterate from the solution set. This step is followed by projecting the current iterate onto this hyperplane, which ensures global convergence of the algorithm. Computational cost of each iteration of our method is of the same order as that of the classical damped Newton method. The crucial advantage is that our method is truly globally convergent. In particular, it cannot get trapped in a stationary point of a merit function. The presented algorithm is motivated by the hybrid projection-proximal point method proposed in [25]. | [
1960
] | Train |
1,950 | 1 | Title: Cultural Transmission of Information in Genetic Programming
Abstract: This paper shows how the performance of a genetic programming system can be improved through the addition of mechanisms for non-genetic transmission of information between individuals (culture). Teller has previously shown how genetic programming systems can be enhanced through the addition of memory mechanisms for individual programs [Teller 1994]; in this paper we show how Teller's memory mechanism can be changed to allow for communication between individuals within and across generations. We show the effects of indexed memory and culture on the performance of a genetic programming system on a symbolic regression problem, on Koza's Lawnmower problem, and on Wumpus world agent problems. We show that culture can reduce the computational effort required to solve all of these problems. We conclude with a discussion of possible improvements. | [
2220,
2226
] | Train |
1,951 | 0 | Title: What Kind of Adaptation do CBR Systems Need?: A Review of Current Practice
Abstract: This paper reviews a large number of CBR systems to determine when and what sort of adaptation is currently used. Three taxonomies are proposed: an adaptation-relevant taxonomy of CBR systems, a taxonomy of the tasks performed by CBR systems and a taxonomy of adaptation knowledge. To the extent that the set of existing systems reflects constraints on what is feasible, this review shows interesting dependencies between different system-types, the tasks these systems achieve and the adaptation needed to meet system goals. The CBR system designer may find the partition of CBR systems and the division of adaptation knowledge suggested by this paper useful. Moreover, this paper may help focus the initial stages of systems development by suggesting (on the basis of existing work) what types of adaptation knowledge should be supported by a new system. In addition, the paper provides a framework for the preliminary evaluation and comparison of systems. | [
2303
] | Train |
1,952 | 2 | Title: Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms
Abstract: Constructive learning algorithms offer an approach to incremental construction of near-minimal artificial neural networks for pattern classification. Examples of such algorithms include Tower, Pyramid, Upstart, and Tiling algorithms which construct multilayer networks of threshold logic units (or, multilayer perceptrons). These algorithms differ in terms of the topology of the networks that they construct, which in turn biases the search for a decision boundary that correctly classifies the training set. This paper presents an analysis of such algorithms from a geometrical perspective. This analysis helps in a better characterization of the search bias employed by the different algorithms in relation to the geometrical distribution of examples in the training set. Simple experiments with non-linearly separable training sets support the results of mathematical analysis of such algorithms. This suggests the possibility of designing more efficient constructive algorithms that dynamically choose among different biases to build near-minimal networks for pattern classification. | [
503,
2029,
2393,
2396
] | Train |
1,953 | 2 | Title: that fits the asymptotics of the problem. References
Abstract: [1] D. Aldous and P. Shields. A diffusion limit for a class of randomly growing binary trees. Probability Theory, 79:509-542, 1988. [2] R. Breathnach, C. Benoist, K. O'Hare, F. Gannon, and P. Chambon. Ovalbumin gene: Evidence for leader sequence in mRNA and DNA sequences at the exon-intron boundaries. Proceedings of the National Academy of Science, 75:4853-4857, 1978. [3] S. Brunak, J. Engelbrecht, and S. Knudsen. Prediction of human mRNA donor and acceptor sites from the DNA sequence. Journal of Molecular Biology, 220:49, 1991. [4] Jack Cohen and Ian Stewart. The information in your hand. The Mathematical Intelligencer, 13(3), 1991. [5] R. G. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, Inc., 1968. [6] Ali Hariri, Bruce Weber, and John Olmstead. On the validity of Shannon-information calculations for molecular biological sequence. Journal of Theoretical Biology, 147:235-254, 1990. [7] W. B. Davenport Jr. and W. L. Root. An Introduction to the Theory of Random Signals and Noise. McGraw-Hill, 1958. [8] Andrzej Konopka and John Owens. Complexity charts can be used to map functional domains in DNA. Gene Anal. Techn., 6, 1989. [9] S.M. Mount. A catalogue of splice-junction sequences. Nucleic Acids Research, 10:459-472, 1982. [10] H.M. Seidel, D.L. Pompliano, and J.R. Knowles. Exons as microgenes? Science, 257, September 1992. [11] C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27:379-423, 623-656, 1948. [12] Peter S. Shenkin, Batu Erman, and Lucy D. Mastrandrea. Information-theoretical entropy as a measure of sequence variability. Proteins, 11(4):297, 1991. [13] R. Staden. Measurements of the effects that coding for a protein has on a DNA sequence and their use for finding genes. Nucleic Acids Research, 12:551-567, 1984. [14] J.A. Steitz. Snurps. Scientific American, 258(6), June 1988. [15] H. van Trees. Detection, estimation and modulation theory. Wiley, 1971. [16] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. Argetsinger Steitz, and A. M. Weiner. Molecular Biology of the Gene. Benjamin/Cummings, Menlo Park, CA, fourth edition, 1987. [17] A.D. Wyner and A.J. Wyner. An improved version of the Lempel-Ziv algorithm. IEEE Transactions on Information Theory. [18] A.J. Wyner. String Matching Theorems and Applications to Data Compression and Statistics. PhD thesis, Stanford University, 1993. [19] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, IT-23(3):337-343, 1977. | [
2107
] | Train |
1,954 | 4 | Title: TD Models: Modeling the World at a Mixture of Time Scales
Abstract: Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem, that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). | [
98,
321,
978,
2150,
2183,
2222
] | Test |
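Record 1,954 builds on ordinary temporal-difference prediction. As background, here is a minimal tabular TD(0) sketch for the fixed-behavior prediction problem; it shows single-scale value prediction only, not the paper's multi-scale TD models, and the function name, parameters, and toy chain are assumptions made for the example.

```python
def td0_predict(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0) for the prediction problem with fixed behavior:
    learn V(s) from (state, reward, next_state) transitions.
    A next_state of None marks a terminal transition."""
    V = {}
    for episode in episodes:
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            td_error = r + gamma * v_next - V.get(s, 0.0)  # one-step TD error
            V[s] = V.get(s, 0.0) + alpha * td_error
    return V

# two-state toy chain: A -> B -> terminal, reward 1 on the final step
episodes = [[("A", 0.0, "B"), ("B", 1.0, None)]] * 100
print(td0_predict(episodes))  # V(B) approaches 1, V(A) approaches gamma * V(B)
```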
1,955 | 5 | Title: Abstract
Abstract: This paper is a scientific comparison of two code generation techniques with identical goals: generation of the best possible software pipelined code for computers with instruction level parallelism. Both are variants of modulo scheduling, a framework for generation of software pipelines pioneered by Rau and Glaser [RaGl81], but are otherwise quite dissimilar. One technique was developed at Silicon Graphics and is used in the MIPSpro compiler. This is the production compiler for SGI's systems, which are based on the MIPS R8000 processor [Hsu94]. It is essentially a branch-and-bound enumeration of possible schedules with extensive pruning. This method is heuristic because of the way it prunes and also because of the interaction between register allocation and scheduling. The second technique aims to produce optimal results by formulating the scheduling and register allocation problem as an integrated integer linear programming (ILP) problem. This idea has received much recent exposure in the literature [AlGoGa95, Feautrier94, GoAlGa94a, GoAlGa94b, Eichenberger95], but to our knowledge all previous implementations have been too preliminary for detailed measurement and evaluation. In particular, we believe this to be the first published measurement of runtime performance for ILP-based generation of software pipelines. A particularly valuable result of this study was evaluation of the heuristic pipelining technology in the SGI compiler. One of the motivations behind the McGill research was the hope that optimal software pipelining, while not in itself practical for use in production compilers, would be useful for their evaluation and validation. Our comparison has indeed provided a quantitative validation of the SGI compiler's pipeliner, leading us to increased confidence in both techniques. | [
2149,
2190,
2194
] | Train |
1,956 | 5 | Title: Instructions
Abstract: Paper and BibTeX entry are available at http://www.complang.tuwien.ac.at/papers/. This paper was published in: Compiler Construction (CC '94), Springer LNCS 786, 1994, pages 158-171. Delayed Exceptions: Speculative Execution of Trapping Instructions. Abstract: Superscalar processors, which execute basic blocks sequentially, cannot use much instruction level parallelism. Speculative execution has been proposed to execute basic blocks in parallel. A pure software approach suffers from low performance, because exception-generating instructions cannot be executed speculatively. We propose delayed exceptions, a combination of hardware and compiler extensions that can provide high performance and correct exception handling in compiler-based speculative execution. Delayed exceptions exploit the fact that exceptions are rare. The compiler assumes the typical case (no exceptions), schedules the code accordingly, and inserts run-time checks and fix-up code that ensure correct execution when exceptions do happen. | [
735,
2527,
2649
] | Test |
1,957 | 4 | Title: AVERAGED REWARD REINFORCEMENT LEARNING APPLIED TO FUZZY RULE TUNING
Abstract: Fuzzy rules for control can be effectively tuned via reinforcement learning. Reinforcement learning is a weak learning method, which only requires information on the success or failure of the control application. The tuning process allows people to generate fuzzy rules which are unable to accurately perform control and have them tuned to be rules which provide smooth control. This paper explores a new simplified method of using reinforcement learning for the tuning of fuzzy control rules. It is shown that the learned fuzzy rules provide smoother control in the pole balancing domain than another approach. | [
565,
2536
] | Train |
1,958 | 1 | Title: Automatic Generation of Adaptive Programs. In From Animals to Animats
Abstract: Fuzzy rules for control can be effectively tuned via reinforcement learning. Reinforcement learning is a weak learning method, which only requires information on the success or failure of the control application. The tuning process allows people to generate fuzzy rules which are unable to accurately perform control and have them tuned to be rules which provide smooth control. This paper explores a new simplified method of using reinforcement learning for the tuning of fuzzy control rules. It is shown that the learned fuzzy rules provide smoother control in the pole balancing domain than another approach. | [
2220,
2226
] | Train |
1,959 | 1 | Title: Evolution-based Discovery of Hierarchical Behaviors
Abstract: Procedural representations of control policies have two advantages when facing the scale-up problem in learning tasks. First they are implicit, with potential for inductive generalization over a very large set of situations. Second they facilitate modularization. In this paper we compare several randomized algorithms for learning modular procedural representations. The main algorithm, called Adaptive Representation through Learning (ARL) is a genetic programming extension that relies on the discovery of subroutines. ARL is suitable for learning hierarchies of subroutines and for constructing policies to complex tasks. ARL was successfully tested on a typical reinforcement learning problem of controlling an agent in a dynamic and nondeterministic environment where the discovered subroutines correspond to agent behaviors. | [
120,
177,
2259
] | Train |
1,960 | 2 | Title: A HYBRID PROJECTION-PROXIMAL POINT ALGORITHM. Journal of Convex Analysis (accepted for publication)
Abstract: We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. Additionally, the presented analysis yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation, and is somewhat more intuitive than the classical proof. | [
1949
] | Validation |
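The separate-and-project idea in record 1,960 can be sketched in a few lines. The code below is a hedged illustration, not the authors' algorithm: all names and the inner solver are our own, it takes a (nearly exact) proximal step y, forms the separating hyperplane with normal F(y), and projects the iterate onto it. With an exact proximal step the projection simply returns y, so the construction only earns its keep when the subproblem is solved inexactly, as in the paper.

```python
import numpy as np

def hybrid_proximal_projection(F, x0, mu=1.0, tol=1e-8, max_iter=100):
    """Find a zero of a monotone map F by proximal steps plus projection
    onto the separating hyperplane {z : <F(y), z - y> = 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # approximately solve the proximal subproblem y + mu*F(y) = x
        # via damped fixed-point sweeps (a contraction for this toy F)
        y = x.copy()
        for _ in range(200):
            y = y - 0.5 * (y + mu * F(y) - x)
        g = F(y)
        if np.linalg.norm(g) < tol:
            return y
        # project x onto the hyperplane through y with normal F(y);
        # monotonicity puts every zero of F on the far side of it
        x = x - (g @ (x - y)) / (g @ g) * g
    return x

# monotone toy map F(z) = z + arctan(z); unique zero at z = 0
print(hybrid_proximal_projection(lambda z: z + np.arctan(z), np.array([5.0])))
```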
1,961 | 5 | Title: Resource Spackling: A Framework for Integrating Register Allocation in Local and Global Schedulers
Abstract: We present Resource Spackling, a framework for integrating register allocation and instruction scheduling that is based on a Measure and Reduce paradigm. The technique measures the resource requirements of a program and uses the measurements to distribute code for better resource allocation. The technique is applicable to the allocation of different types of resources. A program's resource requirements for both register and functional unit resources are first measured using a unified representation. These measurements are used to find areas where resources are either under or over utilized, called resource holes and excessive sets, respectively. Conditions are determined for increasing resource utilization in the resource holes. These conditions are applicable to both local and global code motion. | [
2100,
2527
] | Train |
1,962 | 6 | Title: Learning Distributions from Random Walks
Abstract: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions. | [
574,
1827,
2509
] | Train |
1,963 | 5 | Title: Learning Problem-Oriented Decision Structures from Decision Rules: The AQDT-2 System
Abstract: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions. | [
286,
378,
2195
] | Test |
1,964 | 6 | Title: Constructing Nominal Xof-N Attributes
Abstract: Most constructive induction researchers focus only on new boolean attributes. This paper reports a new constructive induction algorithm, called XofN, that constructs new nominal attributes in the form of Xof-N representations. An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of Xof-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity. | [
102,
1595,
1644,
1862,
1863,
2675
] | Validation |
1,965 | 1 | Title: Constructing Nominal Xof-N Attributes
Abstract: Co-evolution of Pursuit and Evasion II: Simulation Methods and Results. In a previous SAB paper [10], we presented the scientific rationale for simulating the coevolution of pursuit and evasion strategies. Here, we present an overview of our simulation methods and some results. Our most notable results are as follows. First, co-evolution works to produce good pursuers and good evaders through a pure bootstrapping process, but both types are rather specially adapted to their opponents' current counter-strategies. Second, eyes and brains can also co-evolve within each simulated species; for example, pursuers usually evolved eyes on the front of their bodies (like cheetahs), while evaders usually evolved eyes pointing sideways or even backwards (like gazelles). Third, both kinds of coevolution are promoted by allowing spatially distributed populations, gene duplication, and an explicitly spatial morphogenesis program for eyes and brains that allows bilateral symmetry. The paper concludes by discussing some possible applications of simulated pursuit-evasion coevolution in biology and entertainment. | [
712,
757,
2089
] | Train |
1,966 | 2 | Title: LONG SHORT-TERM MEMORY Neural Computation 9(8):1735-1780, 1997
Abstract: Learning to store information over extended time intervals via recurrent backpropagation takes a very long time, mostly due to insufficient, decaying error back flow. We briefly review Hochreiter's 1991 analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called "Long Short-Term Memory" (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through "constant error carrousels" within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with RTRL, BPTT, Recurrent Cascade-Correlation, Elman nets, and Neural Sequence Chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long time lag tasks that have never been solved by previous recurrent network algorithms. | [
1825
] | Test |
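As a companion to record 1,966, here is a single forward step of an LSTM cell in NumPy. A hedge is in order: this is the now-standard formulation with a forget gate, which postdates the 1997 paper (the original cell effectively fixed the forget gate at 1); the weight shapes and the toy driver are invented for the sketch.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of an LSTM cell: multiplicative input, forget,
    and output gates around an additive cell state (the "constant error
    carrousel"). W maps [x; h_prev] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(c_prev)
    i = 1 / (1 + np.exp(-z[:n]))        # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate (post-1997 addition)
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:])                # candidate cell update
    c = f * c_prev + i * g              # additive path -> constant error flow
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h)
```

The additive update of `c` is what lets error flow back over long lags without vanishing, which is the property the abstract attributes to the constant error carrousel.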
1,967 | 6 | Title: Separability is a Learner's Best Friend
Abstract: Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert's analysis of the Perceptron learning method. The concept forms a novel dimension along which to conceptualise learning methods. The present paper shows how geometric separability can be defined and demonstrates that it accurately predicts the performance of at least one empirical learning method. | [
695,
2346
] | Test |
1,968 | 2 | Title: A Delay-Line Based Motion Detection Chip
Abstract: Inspired by a visual motion detection model for the rabbit retina and by a computational architecture used for early audition in the barn owl, we have designed a chip that employs a correlation model to report the one-dimensional field motion of a scene in real time. Using subthreshold analog VLSI techniques, we have fabricated and successfully tested a 8000 transistor chip using a standard MOSIS process. | [
527,
1774,
2619
] | Train |
1,969 | 4 | Title: Generalization and scaling in reinforcement learning
Abstract: In associative reinforcement learning, an environment generates input vectors, a learning system generates possible output vectors, and a reinforcement function computes feedback signals from the input-output pairs. The task is to discover and remember input-output pairs that generate rewards. Especially difficult cases occur when rewards are rare, since the expected time for any algorithm can grow exponentially with the size of the problem. Nonetheless, if a reinforcement function possesses regularities, and a learning algorithm exploits them, learning time can be reduced below that of non-generalizing algorithms. This paper describes a neural network algorithm called complementary reinforcement back-propagation (CRBP), and reports simulation results on problems designed to offer differing opportunities for generalization. | [
2051,
2200,
2309,
2363
] | Train |
1,970 | 2 | Title: The Canonical Distortion Measure in Feature Space and 1-NN Classification
Abstract: We prove that the Canonical Distortion Measure (CDM) [2, 3] is the optimal distance measure to use for 1 nearest-neighbour (1-NN) classification, and show that it reduces to squared Euclidean distance in feature space for function classes that can be expressed as linear combinations of a fixed set of features. PAC-like bounds are given on the sample-complexity required to learn the CDM. An experiment is presented in which a neural network CDM was learnt for a Japanese OCR environment and then used to do 1-NN classification. | [
2486
] | Test |
1,971 | 1 | Title: Voting for Schemata
Abstract: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem. | [
163,
995,
1696,
2211
] | Test |
1,972 | 3 | Title: ON MCMC METHODS IN BAYESIAN REGRESSION ANALYSIS AND MODEL SELECTION
Abstract: The objective of statistical data analysis is not only to describe the behaviour of a system, but also to propose, construct (and then to check) a model of observed processes. Bayesian methodology offers one possible approach to the estimation of unknown components of the model (its parameters or functional components) in a framework of a chosen model type. However, in many instances the evaluation of the Bayes posterior distribution (which is the basis of Bayesian solutions) is difficult and practically intractable (even with the help of numerical approximations). In such cases the Bayesian analysis may be performed with the help of intensive simulation techniques called `Markov chain Monte Carlo'. The present paper reviews the best known approaches to MCMC generation. It deals with several typical situations of data analysis and model construction where MCMC methods have been successfully applied. Special attention is devoted to the problem of selection of an optimal regression model constructed from regression splines or from other functional units. | [
2620
] | Train |
1,973 | 1 | Title: ON MCMC METHODS IN BAYESIAN REGRESSION ANALYSIS AND MODEL SELECTION
Abstract: [1] R.K. Belew, J. McInerney, and N. Schraudolph, Evolving networks: using the genetic algorithm with connectionist learning, in Artificial Life II, SFI Studies in the Science of Complexity, C.G. Langton, C. Taylor, J.D. Farmer, S. Rasmussen Eds., vol. 10, Addison-Wesley, 1991. [2] M. McInerney, and A.P. Dhawan, Use of genetic algorithms with back propagation in training of feed-forward neural networks, in IEEE International Conference on Neural Networks, vol. 1, pp. 203-208, 1993. [3] F.Z. Brill, D.E. Brown, and W.N. Martin, Fast genetic selection of features for neural network classifiers, IEEE Transactions on Neural Networks, vol. 3, no. 2, pp. 324-328, 1992. [4] F. Dellaert, and J. Vandewalle, Automatic design of cellular neural networks by means of genetic algorithms: finding a feature detector, in The Third IEEE International Workshop on Cellular Neural Networks and Their Applications, IEEE, New Jersey, pp. 189-194, 1994. [5] D.E. Moriarty, and R. Miikkulainen, Efficient reinforcement learning through symbiotic evolution, Machine Learning, vol. 22, pp. 11-33, 1996. [6] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. [7] D. Whitley, The GENITOR algorithm and selective pressure, in Proceedings of the Third International Conference on Genetic Algorithms, J.D. Schaffer Ed., Morgan Kaufmann, San Mateo, CA, 1989, pp. 116-121. [8] van Camp, D., T. Plate and G.E. Hinton (1992). The Xerion Neural Network Simulator and Documentation. Department of Computer Science, University of Toronto, Toronto. | [
129,
247,
2451
] | Test |
1,974 | 2 | Title: Data Mining for Association Rules with Unsupervised Neural Networks
Abstract: results for Gaussian mixture models and factor analysis are discussed. | [
36,
667,
2227
] | Train |
1,975 | 4 | Title: Associative Reinforcement Learning: Functions in k-DNF
Abstract: An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. The algorithms are compared with existing methods in empirical trials and are shown to have very good performance. | [
427,
565,
2408,
2655,
2689
] | Test |
1,976 | 0 | Title: Utilizing Connectionist Learning Procedures in Symbolic Case Retrieval Nets
Abstract: This paper describes a method which, under certain circumstances, makes it possible to automatically learn or adjust similarity measures. For this, ideas from connectionist learning procedures, in particular those related to Hebbian learning, are combined with a Case-Based Reasoning engine. | [
1854
] | Train |
1,977 | 3 | Title: A note on acceptance rate criteria for CLTs for Hastings-Metropolis algorithms
Abstract: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. I would like to thank Kerrie Mengersen, Jeff Rosenthal and Richard Tweedie for useful conversations on the subject of this paper. | [
2153,
2219,
2318,
2510
] | Train |
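Record 1,977's examples concern samplers defined through a rejection probability, and a minimal independence sampler makes the mechanics concrete: a rejection leaves the chain at its current state, which is exactly the event whose probability the note studies. The sketch below is our own; the target/proposal pair is a toy assumption, not one of the paper's examples.

```python
import numpy as np

def independence_sampler(log_target, proposal_rvs, log_proposal, n, seed=0):
    """Metropolis-Hastings independence sampler: proposals are drawn
    independently of the current state; a rejected proposal keeps the
    chain at its current state."""
    rng = np.random.default_rng(seed)
    x = proposal_rvs(rng)
    out, rejects = [], 0
    for _ in range(n):
        y = proposal_rvs(rng)
        # log acceptance ratio: [pi(y) q(x)] / [pi(x) q(y)]
        log_alpha = (log_target(y) - log_target(x)) - (log_proposal(y) - log_proposal(x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        else:
            rejects += 1  # chain stays put
        out.append(x)
    return np.array(out), rejects / n

# toy: N(0,1) target with a heavier-tailed N(0, 3^2) proposal
log_t = lambda x: -0.5 * x * x
log_q = lambda x: -0.5 * (x / 3.0) ** 2
samples, rej = independence_sampler(log_t, lambda r: 3.0 * r.standard_normal(), log_q, 5000)
print(samples.mean(), samples.std(), "rejection rate:", rej)
```

Note that unnormalized log densities suffice, since the normalization constants cancel in the acceptance ratio.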
1,978 | 3 | Title: Two convergence properties of hybrid samplers
Abstract: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. I would like to thank Kerrie Mengersen, Jeff Rosenthal and Richard Tweedie for useful conversations on the subject of this paper. | [
1713,
2510
] | Test |
1,979 | 4 | Title: ENVIRONMENT-INDEPENDENT REINFORCEMENT ACCELERATION. "The difference between time and space is that you can't reuse time."
Abstract: A reinforcement learning system with limited computational resources interacts with an unrestricted, unknown environment. Its goal is to maximize cumulative reward, to be obtained throughout its limited, unknown lifetime. System policy is an arbitrary modifiable algorithm mapping environmental inputs and internal states to outputs and new internal states. The problem is: in realistic, unknown environments, each policy modification process (PMP) occurring during system life may have unpredictable influence on environmental states, rewards and PMPs at any later time. Existing reinforcement learning algorithms cannot properly deal with this. Neither can naive exhaustive search among all policy candidates, not even in the case of very small search spaces. In fact, a reasonable way of measuring performance improvements in such general (but typical) situations is missing. I define such a measure based on the novel "reinforcement acceleration criterion" (RAC). At a given time, RAC is satisfied if the beginning of each completed PMP that computed a currently valid policy modification has been followed by long-term acceleration of average reinforcement intake (the computation time for later PMPs is taken into account). I present a method called "environment-independent reinforcement acceleration" (EIRA) which is guaranteed to achieve RAC. EIRA neither cares whether the system's policy allows for changing itself, nor whether there are multiple, interacting learning systems. Consequences are: (1) a sound theoretical framework for "meta-learning" (because the success of a PMP recursively depends on the success of all later PMPs, for which it is setting the stage). (2) A sound theoretical framework for multi-agent learning. The principles have been implemented (1) in a single system using an assembler-like programming language to modify its own policy, and (2) in a system consisting of multiple agents, where each agent is in fact just a connection in a fully recurrent reinforcement learning neural net. A by-product of this research is a general reinforcement learning algorithm for such nets. Preliminary experiments illustrate the theory. | [
68,
1844,
1845
] | Train |
1,980 | 1 | Title: An Overview of Evolutionary Computation
Abstract: Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computer-based problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research. | [
262,
758,
2202,
2457
] | Train |
1,981 | 2 | Title: Task and Spatial Frequency Effects on Face Specialization
Abstract: There is strong evidence that face processing is localized in the brain. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Is neural specialization innate or learned? We suggest that this specialization could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Further, we suggest that the specialization arises as an interaction between task requirements and developmental constraints. In this paper, we present a feed-forward computational model of visual processing, in which two modules compete to classify input stimuli. When one module receives low spatial frequency information and the other receives high spatial frequency information, and the task is to identify the faces while simply classifying the objects, the low frequency network shows a strong specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that an innately-specified face processing module is unnecessary. | [
1915,
2491
] | Train |
1,982 | 3 | Title: Theoretical rates of convergence for Markov chain Monte Carlo
Abstract: We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally. | [
41,
1713,
1716,
2153
] | Train |
1,983 | 3 | Title: Correlated Action Effects in Decision Theoretic Regression
Abstract: Much recent research in decision theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction, achieves this by means of a decision theoretic analog of goal regression, using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach. | [
2078
] | Train |
1,984 | 1 | Title: Better Trained Ants
Abstract: The problem of programming an artificial ant to follow the Santa Fe trail has been repeatedly used as a benchmark problem. Recently we have shown that the performance of several techniques is not much better than the best performance obtainable using uniform random search. We suggested that this could be because the program fitness landscape is difficult for hill climbers and the problem is also difficult for Genetic Algorithms as it contains multiple levels of deception. Here we redefine the problem so the ant is obliged to traverse the trail in approximately the correct order. A simple genetic programming system, with no size or depth restriction, is shown to perform approximately three times better with the improved training function. | [
2206,
2379
] | Train |
1,985 | 1 | Title: ABSTRACT
Abstract: In general, the machine learning process can be accelerated through the use of heuristic knowledge about the problem solution. For example, monomorphic typed Genetic Programming (GP) uses type information to reduce the search space and improve performance. Unfortunately, monomorphic typed GP also loses the generality of untyped GP: the generated programs are only suitable for inputs with the specified type. Polymorphic typed GP improves over monomorphic and untyped GP by allowing the type information to be expressed in a more generic manner, and yet still imposes constraints on the search space. This paper describes a polymorphic GP system which can generate polymorphic programs: programs which take inputs of more than one type and produce outputs of more than one type. We also demonstrate its operation through the generation of the map polymorphic program. | [
995,
1231,
2065
] | Train |
1,986 | 6 | Title: BOOSTING AND NAIVE BAYESIAN LEARNING
Abstract: Although so-called naive Bayesian classification makes the unrealistic assumption that the values of the attributes of an example are independent given the class of the example, this learning method is remarkably successful in practice, and no uniformly better learning method is known. Boosting is a general method of combining multiple classifiers due to Yoav Freund and Rob Schapire. This paper shows that boosting applied to naive Bayesian classifiers yields combination classifiers that are representationally equivalent to standard feedforward multilayer perceptrons. (An ancillary result is that naive Bayesian classification is a nonparametric, nonlinear generalization of logistic regression.) As a training algorithm, boosted naive Bayesian learning is quite different from backpropagation, and has definite advantages. Boosting requires only linear time and constant space, and hidden nodes are learned incrementally, starting with the most important. On the real-world datasets on which the method has been tried so far, generalization performance is as good as or better than the best published result using any other learning method. Unlike all other standard learning algorithms, naive Bayesian learning, with and without boosting, can be done in logarithmic time with a linear number of parallel computing units. Accordingly, these learning methods are highly plausible computationally as models of animal learning. Other arguments suggest that they are plausible behaviorally also. | [
70,
569,
1329,
2338,
2462
] | Validation |
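A compact way to see the combination record 1,986 describes is to run AdaBoost.M1 over weighted Bernoulli naive Bayes base learners. The sketch below is our own minimal NumPy rendering under that reading (binary features, two classes); it is not the paper's code, and the smoothing constant, round count, and toy data are assumptions.

```python
import numpy as np

def boosted_naive_bayes(X, y, rounds=10, eps=1e-9):
    """AdaBoost.M1 over weighted Bernoulli naive Bayes classifiers.
    X: (n, d) array of 0/1 features; y: (n,) array of 0/1 labels."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                    # example weights
    models, alphas = [], []
    for _ in range(rounds):
        # weighted class prior and per-feature Bernoulli parameters
        p1 = w[y == 1].sum()
        theta = np.stack([
            (w[y == c] @ X[y == c] + eps) / (w[y == c].sum() + 2 * eps)
            for c in (0, 1)])
        def predict(Xq, p1=p1, theta=theta):   # bind this round's parameters
            log_lik = Xq @ np.log(theta).T + (1 - Xq) @ np.log(1 - theta).T
            return (log_lik[:, 1] + np.log(p1)
                    > log_lik[:, 0] + np.log(1 - p1)).astype(int)
        pred = predict(X)
        err = w[pred != y].sum()
        if err >= 0.5 or err == 0:             # stop if no useful weak learner
            models.append(predict); alphas.append(1.0)
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(np.where(pred == y, -alpha, alpha))
        w /= w.sum()                           # reweight toward mistakes
        models.append(predict); alphas.append(alpha)
    def ensemble(Xq):
        votes = sum(a * (2 * m(Xq) - 1) for a, m in zip(alphas, models))
        return (votes > 0).astype(int)
    return ensemble

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 8))
y = (X[:, 0] | X[:, 1]).astype(int)
clf = boosted_naive_bayes(X, y)
print("train accuracy:", (clf(X) == y).mean())
```

Each boosting round contributes one weighted naive Bayes "hidden unit" to the final vote, which loosely mirrors the abstract's equivalence with a multilayer perceptron whose hidden nodes are learned incrementally.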
1,987 | 0 | Title: Improving Minority Class Prediction Using Case-Specific Feature Weights
Abstract: This paper addresses the problem of handling skewed class distributions within the case-based learning (CBL) framework. We first present as a baseline an information-gain-weighted CBL algorithm and apply it to three data sets from natural language processing (NLP) with skewed class distributions. Although overall performance of the baseline CBL algorithm is good, we show that the algorithm exhibits poor performance on minority class instances. We then present two CBL algorithms designed to improve the performance of minority class predictions. Each variation creates test-case-specific feature weights by first observing the path taken by the test case in a decision tree created for the learning task, and then using path-specific information gain values to create an appropriate weight vector for use during case retrieval. When applied to the NLP data sets, the algorithms are shown to significantly increase the accuracy of minority class predictions while maintaining or improving overall classification accuracy. | [
2607
] | Test |
1,988 | 3 | Title: Trellis-Constrained Codes
Abstract: We introduce a class of iteratively decodable trellis-constrained codes as a generalization of turbocodes, low-density parity-check codes, serially-concatenated convolutional codes, and product codes. In a trellis-constrained code, multiple trellises interact to define the allowed set of codewords. As a result of these interactions, the minimum-complexity single trellis for the code can have a state space that grows exponentially with block length. However, as with turbocodes and low-density parity-check codes, a decoder can approximate bit-wise maximum a posteriori decoding by using the sum-product algorithm on the factor graph that describes the code. We present two new families of codes, homogeneous trellis-constrained codes and ring-connected trellis-constrained codes, and give results that show these codes perform in the same regime as do turbocodes and low-density parity-check codes. | [
2401
] | Train |
1,989 | 0 | Title: Participating in Instructional Dialogues: Finding and Exploiting Relevant Prior Explanations
Abstract: In this paper we present our research on identifying and modeling the strategies that human tutors use for integrating previous explanations into current explanations. We have used this work to develop a computational model that has been partially implemented in an explanation facility for an existing tutoring system known as SHERLOCK. We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in instructional dialogues produced by human tutors. When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Thus, strategies for using the dialogue history in generating explanations are of great importance to research in natural language generation for tutorial applications. The goal of our work is to produce a computational model of the effects of discourse context on explanations in instructional dialogues, and to implement this model in an intelligent tutoring system that maintains a dialogue history and uses it in planning its explanations. Based on a study of human-human instructional dialogues, we have developed a taxonomy that classifies the types of contextual effects that occur in our data according to the explanatory functions they serve (16). In this paper, we focus on one important category from our taxonomy: situations in which the tutor explicitly refers to a previous explanation in order to point out similarities (differences) between the material currently being explained and material presented in earlier explanation(s). We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in instructional dialogues produced by human tutors. By building a computer system that has this capability as an optional facility that can be enabled or disabled, we will be able to systematically evaluate our hypothesis that this is a useful tutoring strategy. In order to test our hypotheses about the effects of previous discourse on explanations, we are building an explanation component for an existing intelligent training system, Sherlock (11). Sherlock is an intelligent coached practice environment for training avionics technicians to troubleshoot complex electronic equipment. Using Sherlock, trainees solve problems with minimal tutor interaction and then review their troubleshooting behavior in a post-problem reflective follow-up session (rfu) where the tutor | [
1882
] | Train |
1,990 | 2 | Title: A FIXED SIZE STORAGE O(n^3) TIME COMPLEXITY LEARNING ALGORITHM FOR FULLY RECURRENT CONTINUALLY RUNNING NETWORKS
Abstract: The RTRL algorithm for fully recurrent continually running networks (Robinson and Fallside, 1987) (Williams and Zipser, 1989) requires O(n^4) computations per time step, where n is the number of non-input units. I describe a method suited for on-line learning which computes exactly the same gradient and requires fixed-size storage of the same order but has an average time complexity per time step of O(n^3). | [
121,
201,
233,
2093
] | Validation |
1,991 | 3 | Title: APPLICATIONS OF CHEEGER'S CONSTANT TO THE CONVERGENCE RATE OF MARKOV CHAINS ON R^n
Abstract: Quantitative geometric rates of convergence for reversible Markov chains are closely related to Cheeger's constant, which is hard to calculate for general state spaces. This article describes a geometric argument to bound Cheeger's constant for chains on bounded subsets of R^n. | [
1713,
1716,
2510
] | Validation |
1,992 | 3 | Title: Estimating L_1 Error of Kernel Estimator: Monitoring Convergence of Markov Samplers
Abstract: In many Markov chain Monte Carlo problems, the target density function is known up to a normalization constant. In this paper, we take advantage of this knowledge to facilitate the convergence diagnostic of a Markov sampler by estimating the L_1 error of a kernel estimator. Firstly, we propose an estimator of the normalization constant which is shown to be asymptotically normal under mixing and moment conditions. Secondly, the L_1 error of the kernel estimator is estimated using the normalization constant estimator, and the ratio of the estimated L_1 error to the true L_1 error is shown to converge to 1 in probability under similar conditions. Thirdly, we propose a sequential plot of the estimated L_1 error as a tool to monitor the convergence of the Markov sampler. Finally, a 2-dimensional bimodal example is given to illustrate the proposal, and two Markov samplers are compared in the example using the proposed diagnostic plot. (Bin Yu is Assistant Professor, Department of Statistics, University of California, Berkeley, CA 94720-3860. Research supported in part by the Junior Faculty Research Grant from University of California at Berkeley, grants DAAL03-91-G-007 and DAAH04-94-G-0232 from the Army Research Office, and grant DMS-9322817 from the National Science Foundation. The author is very grateful to Professors Peter Bickel and Andrew Gelman for many helpful discussions and their comments on the draft. Special thanks are due to Mr. Sam Buttrey for his help on simulation, to Professor Per Mykland and Mr. Karl Broman for commenting on the draft, and to two anonymous referees.) | [
1713,
2153
] | Validation |
1,993 | 3 | Title: Plausibility Measures and Default Reasoning
Abstract: We introduce a new approach to modeling uncertainty based on plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We focus on one application of plausibility measures in this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. | [
276,
342,
729,
1945,
2546
] | Test |