Dataset columns (with the viewer's summary statistics):

node_id: int64, range 0 to 76.9k
label: int64, range 0 to 39
text: string, lengths 13 to 124k characters
neighbors: list, lengths 0 to 3.32k
mask: string, 4 classes
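Read literally, each record below is one node of a citation-graph node-classification dataset. A minimal sketch of how such a record could be represented and regrouped by the mask column; the class and function names are illustrative, not part of the dataset:

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative record type: the five field names and types follow the
# schema above; everything else (names, docstrings) is an assumption.
@dataclass
class Node:
    node_id: int          # 0 .. ~76.9k
    label: int            # class index, 0 .. 39
    text: str             # "Title: ... Abstract: ..."
    neighbors: List[int]  # node_ids this node links to
    mask: str             # split assignment, e.g. "Train" / "Validation" / "Test"

def split_by_mask(nodes: List[Node]) -> Dict[str, List[Node]]:
    """Group records by the mask column to recover the dataset splits."""
    groups: Dict[str, List[Node]] = {}
    for n in nodes:
        groups.setdefault(n.mask, []).append(n)
    return groups
```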
node_id: 1,394 | label: 2
Title: Vapnik-Chervonenkis entropy of the spherical perceptron Abstract: Perceptron learning of randomly labeled patterns is analyzed using a Gibbs distribution on the set of realizable labelings of the patterns. The entropy of this distribution is an extension of the Vapnik-Chervonenkis (VC) entropy, reducing to it exactly in the limit of infinite temperature. The close relationship between the VC and Gardner entropies can be seen within the replica formalism. There has been recent progress towards understanding the relationship between the statistical physics and Vapnik-Chervonenkis (VC) approaches to learning theory [1, 2, 3, 4]. The two approaches can be unified in a statistical mechanics based on the VC entropy. This paper treats the case of learning randomly labeled patterns, or the capacity problem, and extends some of the results of previous work [5, 6] to finite temperature. As will be explained in a companion paper, this extension is important for treating the generalization problem, which arises in the context of learning patterns labeled by a target rule. Our general framework is illustrated for the simple perceptron sgn(w·x), which maps an N-dimensional real-valued input x to a ±1-valued output. Given a sample X = (x_1, ..., x_m) of inputs, the weight vector w determines a labeling L = (l_1, ..., l_m) of the sample via l_i = sgn(w·x_i). The weight vector w defines a normal hyperplane that separates the positive from the negative examples. The training error of a labeling L with respect to a reference labeling L^0 is defined by (1/m) Σ_i 1[l_i ≠ l_i^0], i.e. the fraction of labels on which the two labelings differ. We consider the case in which the reference labeling is chosen at random, and address the issue of
neighbors: [ 967 ] | mask: Validation
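The training error defined in the abstract above (the fraction of labels on which L and the reference L^0 disagree) can be sketched directly; the function name is illustrative:

```python
def training_error(L, L0):
    """Fraction of positions where labeling L differs from the reference
    labeling L0 (labels are +1/-1), as defined in the abstract."""
    assert len(L) == len(L0)
    return sum(1 for a, b in zip(L, L0) if a != b) / len(L)
```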
node_id: 1,395 | label: 1
Title: Learning Where To Go without Knowing Where That Is: The Acquisition of a Non-reactive Mobot Abstract: In the path-imitation task, one agent traces out a path through a second agent's sensory field. The second agent then has to reproduce that path exactly, i.e. move through the sequence of locations visited by the first agent. This is a non-trivial behaviour whose acquisition might be expected to involve special-purpose (i.e., strongly biased) learning machinery. However, the present paper shows this is not the case. The behaviour can be acquired using a fairly primitive learning regime provided that the agent's environment can be made to pass through a specific sequence of dynamic states.
neighbors: [ 1541 ] | mask: Test

node_id: 1,396 | label: 1
Title: Evolving a Generalised Behaviour: Artificial Ant Problem Revisited Abstract: This research aims to demonstrate that a solution for the artificial ant problem [4] is very likely to be non-general, relying on the specific characteristics of the Santa Fe trail. It then presents a consistent method that promotes producing general solutions. Using the concepts of training and testing from machine learning research, the method can be useful in producing general behaviours for simulation environments.
neighbors: [ 970 ] | mask: Train

node_id: 1,397 | label: 3
Title: Speech Recognition with Dynamic Bayesian Networks Abstract: Dynamic Bayesian networks (DBNs) are a useful tool for representing complex stochastic processes. Recent developments in inference and learning in DBNs allow their use in real-world applications. In this paper, we apply DBNs to the problem of speech recognition. The factored state representation enabled by DBNs allows us to explicitly represent long-term articulatory and acoustic context in addition to the phonetic-state information maintained by hidden Markov models (HMMs). Furthermore, it enables us to model the short-term correlations among multiple observation streams within single time-frames. Given a DBN structure capable of representing these long- and short-term correlations, we applied the EM algorithm to learn models with up to 500,000 parameters. The use of structured DBN models decreased the error rate by 12 to 29% on a large-vocabulary isolated-word recognition task, compared to a discrete HMM; it also improved significantly on other published results for the same task. This is the first successful application of DBNs to a large-scale speech recognition problem. Investigation of the learned models indicates that the hidden state variables are strongly correlated with acoustic properties of the speech signal.
neighbors: [ 905, 1287, 1393 ] | mask: Train

node_id: 1,398 | label: 2
Title: Self-Organizing Sets of Experts Abstract: We describe and evaluate multi-network connectionist systems composed of "expert" networks. By preprocessing training data with a competitive learning network, the system automatically organizes the decomposition into expert subtasks. Using several different types of challenging problem, we assess this approach: the degree to which the automatically generated experts really are specialists on a predictable subset of the overall task, and how such decompositions compare with equivalent single networks. In addition, we assess the utility of this approach alongside, and in competition with, non-expert multiversion systems. Previously developed measures of 'diversity' for such systems are also applied to provide a quantitative assessment of the degree of specialization obtained in an expert-net ensemble. We show that on both types of problem, abstract well-defined and data-defined, the automatic decomposition does produce an effective set of specialist networks which together can support a high level of performance. Curiously, the study does not provide any support for a differential of effectiveness between the two classes of problem: continuous, homogeneous functions and discrete, discontinuous functions.
neighbors: [ 1383 ] | mask: Test

node_id: 1,399 | label: 2
Title: Parametric regression; 1.1 Learning problem; the model f_ŵ, where ŵ is in turn an estimator Abstract: Let us present briefly the learning problem we will address in this chapter and the following. The ultimate goal is the modelling of a mapping f: x → y from a multidimensional input x to an output y. The output can be multi-dimensional, but we will mostly address situations where it is a one-dimensional real value. Furthermore, we should take into account the fact that we scarcely ever observe the actual true mapping y = f(x). This is due to perturbations such as, e.g., observational noise. We will rather have a joint probability p(x, y). We expect this probability to be peaked for values of x and y corresponding to the mapping. We focus on automatic learning by example. A set D of data sampled from the joint distribution p(x, y) = p(y|x) p(x) is collected. With the help of this set, we try to identify a model of the data, parameterised by a set of parameters w: f_w: x → ŷ. 1.2 Learning and optimisation The fit of the model to the system at a given point x is measured using a criterion representing the distance from the model prediction ŷ to the system, e(y, f_w(x)). This is the local risk. The performance of the model is measured by the expected risk. This quantity represents the ability to yield good performance for all the possible situations (i.e. (x, y) pairs) and is thus called the generalisation error. The optimal set
neighbors: [ 427, 1463 ] | mask: Train
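The abstract above has lost its displayed equations. Assuming the standard notation it describes (local risk e, model f_w, joint density p(x, y)), the generalisation error it refers to is presumably the expected risk:

```latex
R(w) \;=\; \mathbb{E}_{(x,y)\sim p}\!\left[\, e\bigl(y, f_{w}(x)\bigr) \right]
     \;=\; \int e\bigl(y, f_{w}(x)\bigr)\, p(x, y)\;\mathrm{d}x\,\mathrm{d}y .
```

This reconstruction is term-by-term consistent with the abstract's own wording: e(y, f_w(x)) is the local risk at one (x, y) pair, and R(w) averages it over all possible situations.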
node_id: 1,400 | label: 6
Title: Towards Robust Model Selection using Estimation and Approximation Error Bounds Abstract: Let us present briefly the learning problem we will address in this chapter and the following. The ultimate goal is the modelling of a mapping f: x → y from a multidimensional input x to an output y. The output can be multi-dimensional, but we will mostly address situations where it is a one-dimensional real value. Furthermore, we should take into account the fact that we scarcely ever observe the actual true mapping y = f(x). This is due to perturbations such as, e.g., observational noise. We will rather have a joint probability p(x, y). We expect this probability to be peaked for values of x and y corresponding to the mapping. We focus on automatic learning by example. A set D of data sampled from the joint distribution p(x, y) = p(y|x) p(x) is collected. With the help of this set, we try to identify a model of the data, parameterised by a set of parameters w: f_w: x → ŷ. 1.2 Learning and optimisation The fit of the model to the system at a given point x is measured using a criterion representing the distance from the model prediction ŷ to the system, e(y, f_w(x)). This is the local risk. The performance of the model is measured by the expected risk. This quantity represents the ability to yield good performance for all the possible situations (i.e. (x, y) pairs) and is thus called the generalisation error. The optimal set
neighbors: [ 848, 967 ] | mask: Test

node_id: 1,401 | label: 0
Title: Using Case-Based Reasoning as a Reinforcement Learning Framework for Optimization with Changing Criteria Abstract: Practical optimization problems such as job-shop scheduling often involve optimization criteria that change over time. Repair-based frameworks have been identified as flexible computational paradigms for difficult combinatorial optimization problems. Since the control problem of repair-based optimization is severe, Reinforcement Learning (RL) techniques can be potentially helpful. However, some of the fundamental assumptions made by traditional RL algorithms are not valid for repair-based optimization. Case-Based Reasoning (CBR) compensates for some of the limitations of traditional RL approaches. In this paper, we present a Case-Based Reasoning RL approach, implemented in the CABINS system, for repair-based optimization. We chose job-shop scheduling as the testbed for our approach. Our experimental results show that CABINS is able to effectively solve problems with changing optimization criteria which are not known to the system and exist only implicitly, in an extensional manner, in the case base.
neighbors: [ 562, 565, 951, 1554, 2605 ] | mask: Validation

node_id: 1,402 | label: 2
Title: A model f_w depending on a set of parameters w is used to estimate Abstract: We have introduced earlier the use of regularisation in the learning procedure. It should now be understood that regularisation is most often a necessity to increase the quality of the results. Even when the unregularised solution is acceptable, it is likely that some regularisation will produce an improvement in performance. There does not exist any method giving directly the best value for the regularisation parameter λ, even in the linear case. The topic of this chapter is thus to propose some methods to estimate the best value. The best λ being the one that leads to the smallest generalisation error, the methods presented and compared here propose estimators of the generalisation error. This estimation can then be used to approximate the best regularisation level. In sections 3.2 to 3.4 we present validation-based techniques. They estimate the generalisation error on the basis of some extra data. In sections 3.6 to 3.9, we deal with algebraic estimates of this error, which do not use any extra data but rely on a number of assumptions. The contribution of this chapter is to present all these techniques and analyse them on the same ground. We also present some short derivations clarifying the links between different estimators of the generalisation error, as well as a comparison between them. During the course of this chapter, the error will be the quadratic difference. For the validation-based methods, it is possible to consider any kind of error without modification of the method. On the other hand, the algebraic estimates are specific to the quadratic cost; adapting them to another cost function would require deriving new expressions for the estimators.
neighbors: [ 847 ] | mask: Validation
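To make the procedure in the abstract above concrete, here is a toy sketch of validation-based selection of the regularisation level for a one-parameter ridge model. All names and data are illustrative; the thesis's own estimators are more elaborate:

```python
def fit_ridge(train, lam):
    """Ridge fit of the scalar model y ~ w*x: w = Sxy / (Sxx + lambda)."""
    sxy = sum(x * y for x, y in train)
    sxx = sum(x * x for x, _ in train)
    return sxy / (sxx + lam)

def val_error(w, val):
    """Mean squared error of the fitted model on held-out data."""
    return sum((y - w * x) ** 2 for x, y in val) / len(val)

def best_lambda(train, val, grid):
    """Pick the regularisation level with the smallest validation error."""
    return min(grid, key=lambda lam: val_error(fit_ridge(train, lam), val))
```

On slightly noisy data a small nonzero λ can beat the unregularised fit on the validation set, which is exactly the chapter's point.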
node_id: 1,403 | label: 3
Title: DART/HYESS User's Guide: recursive covering approach to local learning Abstract:
neighbors: [ 1112 ] | mask: Train

node_id: 1,404 | label: 1
Title: A Hybrid GP/GA Approach for Co-evolving Controllers and Robot Bodies to Achieve Fitness-Specified Tasks Abstract: Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper.
neighbors: [ 163, 219, 755, 1143 ] | mask: Validation

node_id: 1,405 | label: 2
Title: Thesis presented by: STATISTICAL LEARNING AND REGULARISATION FOR REGRESSION; Application to system identification and time Abstract: Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper.
neighbors: [ 427, 1463 ] | mask: Train

node_id: 1,406 | label: 2
Title: Lag-space estimation in time-series modelling; keep track of cases where the estimation of P Abstract: When m = 0 (no delays), we set A_0(δ) = {(j, k); j ≠ k}, such that P_m(ε|δ) depends only on ε. The estimated probabilities above become quite noisy when the numbers of elements in the sets A_m and B_m are small. For this reason, we estimate the standard deviation of P_m(ε|δ). Notice that this estimate is the empirical average of a binomial variable (either a given couple satisfies the conditions on δ and ε, or it does not). The standard deviation is then estimated easily by: Generally speaking, P_m(ε|δ) increases with ε (laxer output test), and when δ approaches 0 (stricter input condition). Let us now define by P_m(ε) the maximum over δ of P_m(ε|δ): P_m(ε) = max_{δ>0} P_m(ε|δ). The dependability index is defined as: P_0(ε) represents how much data passes the continuity test when no input information is available. This dependability index measures how much of the remaining continuity information is associated with involving input i_m. This index is then averaged over ε with respect to the probability (1 − P_0(ε)): ∫ m(ε) (1 − P_0(ε)) dε (4.8). It is clear that m(ε), and therefore its average, should be positive quantities. Furthermore, if the system is deterministic, the dependability is zero after a certain number of inputs, so the sum of averages saturates. If the system is also noise-free, they sum up to 1. For any m greater than the embedding dimension: refers to results obtained using this method. 4.6 Statistical variable selection Statistical variable selection (or feature selection) encompasses a number of techniques aimed at choosing a relevant subset of input variables in a regression or a classification problem. As in the rest of this document, we will limit ourselves to considerations related to the regression problem, even though most methods discussed below apply to classification as well. Variable selection can be seen as part of the data analysis problem: the selection (or discard) of a variable tells us about the relevance of the associated measurement to the modelled system. In a general setting, this is a purely combinatorial problem: given V possible variables, there are 2^V possible subsets (including the empty set and the full set) of these variables. Given a performance measure, such as prediction error, the only optimal scheme is to test all these subsets and choose the one that gives the best performance. It is easy to see that such an exhaustive scheme is only viable when the number of variables is rather low: identifying 2^V models when we have more than a few variables requires too much computation. A number of techniques have been devised to overcome this combinatorial limit. Some of them use an iterative, locally optimal technique to construct an estimate of the relevant subset in a number of steps. We will refer to them as stepwise selection methods, not to be confused with stepwise regression, a subset of these methods that we will address below. In forward selection, we start with an empty set of variables. At each step, we select a candidate variable using a selection criterion, check whether this variable should be added to the set, and iterate until a given stop condition is reached. Conversely, backward elimination methods start with the full set of input variables. At each step, the least significant variable is selected according to a selection criterion. If this variable is irrelevant, it is removed and the process is iterated until a stop condition is reached. It is easy to devise examples where the inclusion of a variable causes a previously included variable to become irrelevant. It thus seems appropriate to consider running a backward elimination each time a new variable is added by forward selection. This combination of both approaches is known as stepwise regression in the linear regression con
neighbors: [ 1463 ] | mask: Train
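The forward-selection loop described in the abstract above, as a generic sketch; score is any lower-is-better criterion (e.g. estimated prediction error) supplied by the caller, and all names are illustrative:

```python
def forward_selection(variables, score):
    """Greedy forward selection: start from the empty set; at each step,
    add the candidate that most improves the score (lower is better);
    stop as soon as no candidate improves it."""
    selected, remaining = [], list(variables)
    best = score(())
    while remaining:
        cand = min(remaining, key=lambda v: score(tuple(selected) + (v,)))
        s = score(tuple(selected) + (cand,))
        if s >= best:          # stop condition: no improvement
            break
        selected.append(cand)
        remaining.remove(cand)
        best = s
    return selected
```

Backward elimination is the mirror image (start from the full set and drop the least significant variable), and interleaving the two gives the stepwise regression scheme the abstract mentions.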
node_id: 1,407 | label: 0
Title: ABSTRACTION CONSIDERED HARMFUL: LAZY LEARNING OF LANGUAGE PROCESSING Abstract: When m = 0 (no delays), we set A_0(δ) = {(j, k); j ≠ k}, such that P_m(ε|δ) depends only on ε. The estimated probabilities above become quite noisy when the numbers of elements in the sets A_m and B_m are small. For this reason, we estimate the standard deviation of P_m(ε|δ). Notice that this estimate is the empirical average of a binomial variable (either a given couple satisfies the conditions on δ and ε, or it does not). The standard deviation is then estimated easily by: Generally speaking, P_m(ε|δ) increases with ε (laxer output test), and when δ approaches 0 (stricter input condition). Let us now define by P_m(ε) the maximum over δ of P_m(ε|δ): P_m(ε) = max_{δ>0} P_m(ε|δ). The dependability index is defined as: P_0(ε) represents how much data passes the continuity test when no input information is available. This dependability index measures how much of the remaining continuity information is associated with involving input i_m. This index is then averaged over ε with respect to the probability (1 − P_0(ε)): ∫ m(ε) (1 − P_0(ε)) dε (4.8). It is clear that m(ε), and therefore its average, should be positive quantities. Furthermore, if the system is deterministic, the dependability is zero after a certain number of inputs, so the sum of averages saturates. If the system is also noise-free, they sum up to 1. For any m greater than the embedding dimension: refers to results obtained using this method. 4.6 Statistical variable selection Statistical variable selection (or feature selection) encompasses a number of techniques aimed at choosing a relevant subset of input variables in a regression or a classification problem. As in the rest of this document, we will limit ourselves to considerations related to the regression problem, even though most methods discussed below apply to classification as well. Variable selection can be seen as part of the data analysis problem: the selection (or discard) of a variable tells us about the relevance of the associated measurement to the modelled system. In a general setting, this is a purely combinatorial problem: given V possible variables, there are 2^V possible subsets (including the empty set and the full set) of these variables. Given a performance measure, such as prediction error, the only optimal scheme is to test all these subsets and choose the one that gives the best performance. It is easy to see that such an exhaustive scheme is only viable when the number of variables is rather low: identifying 2^V models when we have more than a few variables requires too much computation. A number of techniques have been devised to overcome this combinatorial limit. Some of them use an iterative, locally optimal technique to construct an estimate of the relevant subset in a number of steps. We will refer to them as stepwise selection methods, not to be confused with stepwise regression, a subset of these methods that we will address below. In forward selection, we start with an empty set of variables. At each step, we select a candidate variable using a selection criterion, check whether this variable should be added to the set, and iterate until a given stop condition is reached. Conversely, backward elimination methods start with the full set of input variables. At each step, the least significant variable is selected according to a selection criterion. If this variable is irrelevant, it is removed and the process is iterated until a stop condition is reached. It is easy to devise examples where the inclusion of a variable causes a previously included variable to become irrelevant. It thus seems appropriate to consider running a backward elimination each time a new variable is added by forward selection. This combination of both approaches is known as stepwise regression in the linear regression con
neighbors: [ 783, 862, 1155, 1626, 1812 ] | mask: Validation

node_id: 1,408 | label: 1
Title: Use of Architecture-Altering Operations to Dynamically Adapt a Three-Way Analog Source Identification Circuit to Accommodate Abstract: We used genetic programming to evolve both the topology and the sizing (numerical values) for each component of an analog electrical circuit that can correctly classify an incoming analog electrical signal into three categories. Then, the repertoire of sources was dynamically changed by adding a new source during the run. The paper describes how the
neighbors: [ 1249, 1921, 1931 ] | mask: Validation

node_id: 1,409 | label: 1
Title: Evolution of Mapmaking: Learning, planning, and memory using Genetic Programming Abstract: An essential component of an intelligent agent is the ability to observe, encode, and use information about its environment. Traditional approaches to Genetic Programming have focused on evolving functional or reactive programs with only a minimal use of state. This paper presents an approach for investigating the evolution of learning, planning, and memory using Genetic Programming. The approach uses a multi-phasic fitness environment that enforces the use of memory and allows fairly straightforward comprehension of the evolved representations . An illustrative problem of 'gold' collection is used to demonstrate the usefulness of the approach. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans.
neighbors: [ 129, 789, 958, 1797, 2220, 2600, 2703 ] | mask: Test

node_id: 1,410 | label: 1
Title: Genetic Algorithm Programming Environments Abstract: Interest in genetic algorithms is expanding rapidly. This paper reviews software environments for programming genetic algorithms (GAs). As background, we initially preview genetic algorithm models and their programming. Next we classify GA software environments into three main categories: Application-oriented, Algorithm-oriented and ToolKits. For each category of GA programming environment we review their common features and present a case study of a leading environment.
neighbors: [ 163, 1153 ] | mask: Validation

node_id: 1,411 | label: 2
Title: Connection Pruning with Static and Adaptive Pruning Schedules Abstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (e.g. OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required.
neighbors: [ 881, 1203, 2203 ] | mask: Train

node_id: 1,412 | label: 0
Title: EXPLORING A FRAMEWORK FOR INSTANCE BASED LEARNING AND NAIVE BAYESIAN CLASSIFIERS Abstract: The relative performance of different methods for classifier learning varies across domains. Some recent Instance Based Learning (IBL) methods, such as IB1-MVDM*, use similarity measures based on conditional class probabilities. These probabilities are a key component of Naive Bayes methods. Given this commonality of approach, it is of interest to consider how the differences between the two methods are linked to their relative performance in different domains. Here we interpret Naive Bayes in an IBL-like framework, identifying differences between Naive Bayes and IB1-MVDM* in this framework. Experiments on variants of IB1-MVDM* that lie between it and Naive Bayes in the framework are conducted on sixteen domains. The results strongly suggest that the relative performance of Naive Bayes and IB1-MVDM* is linked to the extent to which each class can be satisfactorily represented by a single instance in the IBL framework. However, this is not the only factor that appears significant.
neighbors: [ 1111, 1328, 1431 ] | mask: Train

node_id: 1,413 | label: 0
Title: Automatic student modeling and bug library construction using theory refinement (Ph.D.); symbolic revision Abstract: ASSERT demonstrates how theory refinement techniques developed in machine learning can be used to effectively build student models for intelligent tutoring systems. This application is unique since it inverts the normal goal of theory refinement from correcting errors in a knowledge base to introducing them. A comprehensive experiment involving a large number of students interacting with an automated tutor for teaching concepts in C++ programming was used to evaluate the approach. This experiment demonstrated the ability of theory refinement to generate more accurate student models than raw induction, as well as the ability of the resulting models to support individualized feedback that actually improves students' subsequent performance. Carr, B. and Goldstein, I. (1977). Overlays: a theory of modeling for computer aided instruction. Technical Report A.I. Memo 406, Cambridge, MA: MIT. Sandberg, J. and Barnard, Y. (1993). Education and technology: What do we know? And where is AI? Artificial Intelligence Communications, 6(1):47-58.
neighbors: [ 136, 1102 ] | mask: Test

node_id: 1,414 | label: 3
Title: Tractable Inference for Complex Stochastic Processes Abstract: The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state: a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.
neighbors: [ 788, 945, 1268, 1287, 1288, 1393 ] | mask: Train

node_id: 1,415 | label: 3
Title: A New Approach for Induction: From a Non-Axiomatic Logical Point of View Abstract: Non-Axiomatic Reasoning System (NARS) is designed to be a general-purpose intelligent reasoning system, which is adaptive and works under insufficient knowledge and resources. This paper focuses on the components of NARS that contribute to the system's induction capacity, and shows how the traditional problems in induction are addressed by the system. The NARS approach to induction uses a term-oriented formal language with an experience-grounded semantics that consistently interprets various types of uncertainty. An induction rule generates conclusions from common instances of terms, and a revision rule combines evidence from different sources. In NARS, induction and other types of inference, such as deduction and abduction, are based on the same semantic foundation, and they cooperate in inference activities of the system. The system's control mechanism makes knowledge-driven, context-dependent inference possible.
neighbors: [ 1504, 1506, 1525 ] | mask: Validation

node_id: 1,416 | label: 0
Title: Synergy and Commonality in Case-Based and Constraint-Based Reasoning Abstract: Although Case-Based Reasoning (CBR) is a natural formulation for many problems, our previous work on CBR as applied to design made it apparent that there were elements of the CBR paradigm that prevented it from being more widely applied. At the same time, we were evaluating Constraint Satisfaction techniques for design, and found a commonality in motivation between repair-based constraint satisfaction problems (CSP) and case adaptation. This led us to combine the two methodologies in order to gain the advantages of CSP for case-based reasoning, allowing CBR to be more widely and flexibly applied. In combining the two methodologies, we found some unexpected synergy and commonality between the approaches. This paper describes the synergy and commonality that emerged as we combined case-based and constraint-based reasoning, and gives a brief overview of our continuing and future work on exploiting the emergent synergy when combining these reasoning modes.
neighbors: [ 922, 923 ] | mask: Test

node_id: 1,417 | label: 2
Title: Ridge Regression Learning Algorithm in Dual Variables Abstract: In this paper we study a dual version of the Ridge Regression procedure. It allows us to perform non-linear regression by constructing a linear regression function in a high dimensional feature space. The feature space representation can result in a large increase in the number of parameters used by the algorithm. In order to combat this "curse of dimensionality", the algorithm allows the use of kernel functions, as used in Support Vector methods. We also discuss a powerful family of kernel functions which is constructed using the ANOVA decomposition method from the kernel corresponding to splines with an infinite number of nodes. This paper introduces a regression estimation algorithm which is a combination of these two elements: the dual version of Ridge Regression is applied to the ANOVA enhancement of the infinite-node splines. Experimental results are then presented (based on the Boston Housing data set) which indicate the performance of this algorithm relative to other algorithms.
[ 1421 ]
Train
1,418
2
Title: BCM Network develops Orientation Selectivity and Ocular Dominance in Natural Scene Environment. Abstract: A two-eye visual environment is used in training a network of BCM neurons. We study the effect of misalignment between the synaptic density functions from the two eyes, on the formation of orientation selectivity and ocular dominance in a lateral inhibition network. The visual environment we use is composed of natural images. We show that for the BCM rule a natural image environment with binocular cortical misalignment is sufficient for producing networks with orientation selective cells and ocular dominance columns. This work is an extension of our previous single cell misalignment model (Shouval et al., 1996).
[ 726, 989, 2499 ]
Train
1,419
3
Title: BAYESIAN ESTIMATION OF THE VON MISES CONCENTRATION PARAMETER Abstract: The von Mises distribution is a maximum entropy distribution. It corresponds to the distribution of an angle of a compass needle in a uniform magnetic field of direction μ, with concentration parameter κ. The concentration parameter, κ, is the ratio of the field strength to the temperature of thermal fluctuations. Previously, we obtained a Bayesian estimator for the von Mises distribution parameters using the information-theoretic Minimum Message Length (MML) principle. Here, we examine a variety of Bayesian estimation techniques by examining the posterior distribution in both polar and Cartesian co-ordinates. We compare the MML estimator with these fellow Bayesian techniques, and a range of Classical estimators. We find that the Bayesian estimators outperform the Classical estimators.
[ 525, 1550 ]
Validation
1,420
0
Title: Design, Analogy, and Creativity Abstract:
[ 1047, 1138, 1354 ]
Validation
1,421
2
Title: Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV Abstract: Prepared for the NIPS 97 Workshop on Support Vector Machines. Research sponsored in part by NSF under Grant DMS-9704758 and in part by NEI under Grant R01 EY09946. This is a second revised and corrected version of a report of the same number and title dated November 29, 1997.
[ 821, 1417 ]
Train
1,422
2
Title: Generating Accurate and Diverse Members of a Neural-Network Ensemble Abstract: Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called Addemup that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that Addemup is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
[ 550, 826, 828, 1223, 1237, 1273, 1657 ]
Train
1,423
2
Title: Comparison of Regression Methods, Symbolic Induction Methods and Neural Networks in Morbidity Diagnosis and Mortality Abstract: Classifier induction algorithms differ on what inductive hypotheses they can represent, and on how they search their space of hypotheses. No classifier is better than another for all problems: they have selective superiority. This paper empirically compares six classifier induction algorithms on the diagnosis of equine colic and the prediction of its mortality. The classification is based on simultaneously analyzing sixteen features measured from a patient. The relative merits of the algorithms (linear regression, decision trees, nearest neighbor classifiers, the Model Class Selection system, logistic regression (with and without feature selection), and neural nets) are qualitatively discussed, and the generalization accuracies quantitatively analyzed.
[ 1328, 2583 ]
Train
1,424
1
Title: Multi-parent's niche: n-ary crossovers on NK-landscapes Abstract: Using the multi-parent diagonal and scanning crossover in GAs, reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions than those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes that allow for systematic characterization and user control of ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems.
[ 1218, 1530, 1799 ]
Train
1,425
3
Title: Unsupervised Learning Using MML Abstract: This paper discusses the unsupervised learning problem. An important part of the unsupervised learning problem is determining the number of constituent groups (components or classes) which best describes some data. We apply the Minimum Message Length (MML) criterion to the unsupervised learning problem, modifying an earlier such MML application. We give an empirical comparison of criteria prominent in the literature for estimating the number of components in a data set. We conclude that the Minimum Message Length criterion performs better than the alternatives on the data considered here for unsupervised learning tasks.
[ 525, 684, 1550 ]
Test
1,426
0
Title: REPRESENTING PHYSICAL AND DESIGN KNOWLEDGE IN INNOVATIVE DESIGN Abstract:
[ 1354 ]
Train
1,427
3
Title: SINGLE FACTOR ANALYSIS BY MML ESTIMATION Abstract: The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likelihood (ML) analysis. Unlike ML, the MML estimators remain consistent when used to estimate both the factor loadings and factor scores. Tests on simulated data show the MML estimates to be on average more accurate than the ML estimates when the former exist. If the data show little evidence for a factor, the MML estimate collapses. It is shown that the condition for the existence of an MML estimate is essentially that the log likelihood ratio in favour of the factor model exceed the value expected under the null (no-factor) hypothesis.
[ 525, 1550 ]
Train
1,428
5
Title: Inverse entailment and Progol Abstract: This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses.
[ 1312, 1601, 1622, 2079, 2229, 2329, 2424, 2493, 2539, 2617 ]
Train
1,429
5
Title: Learning First-Order Definitions of Functions Abstract: First-order learning involves finding a clause-form definition of a relation from examples of the relation and relevant background information. In this paper, a particular first-order learning system is modified to customize it for finding definitions of functional relations. This restriction leads to faster learning times and, in some cases, to definitions that have higher predictive accuracy. Other first-order learning systems might benefit from similar specialization.
[ 675, 1208, 1601 ]
Test
1,430
2
Title: Adaptive Boosting of Neural Networks for Character Recognition Abstract: Technical Report #1072, Département d'Informatique et Recherche Opérationnelle, Université de Montréal. Boosting is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4], in particular decision trees [1, 2, 6]. In this paper we use AdaBoost to improve the performance of neural networks applied to character recognition tasks. We compare training methods based on sampling the training set and weighting the cost function. Our system achieves about 1.4% error on a data base of online handwritten digits from more than 200 writers. Adaptive boosting of a multi-layer network achieved 2% error on the UCI Letters offline characters data set.
[ 569, 1356, 1484 ]
Validation
1,431
2
Title: Learning to Represent Codons: A Challenge Problem for Constructive Induction Abstract: The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. Systems that perform constructive induction are able to change their representation by constructing new features. We describe an important, real-world problem (finding genes in DNA) that we believe offers an interesting challenge to constructive-induction researchers. We report experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for constructive induction because the relationship between the two representations is well known, and because the representational shift involved in constructing the better representation is not imposing.
[ 698, 861, 1181, 1412 ]
Train
1,432
1
Title: EVOLUTIONARY ALGORITHMS IN ROBOTICS Abstract:
[ 910, 964, 965, 1573 ]
Train
1,433
6
Title: Learning Boxes in High Dimension Abstract: We present exact learning algorithms that learn several classes of (discrete) boxes in {0, ..., ℓ-1}^n. In particular we learn: (1) The class of unions of O(log n) boxes in time poly(n, log ℓ) (solving an open problem of [16, 12]; in [3] this class is shown to be learnable in time poly(n, ℓ)). (2) The class of unions of disjoint boxes in time poly(n, t, log ℓ), where t is the number of boxes. (Previously this was known only in the case where all boxes are disjoint in one of the dimensions; in [3] this class is shown to be learnable in time poly(n, t, ℓ).) In particular our algorithm learns the class of decision trees over n variables, that take values in {0, ..., ℓ-1}, with comparison nodes in time poly(n, t, log ℓ), where t is the number of leaves (this was an open problem in [9] which was shown in [4] to be learnable in time poly(n, t, ℓ)). (3) The class of unions of O(1)-degenerate boxes (that is, boxes that depend only on O(1) variables) in time poly(n, t, log ℓ) (generalizing the learnability of O(1)-DNF and of boxes in O(1) dimensions). The algorithm for this class uses only equivalence queries and it can also be used to learn the class of unions of O(1) boxes (from equivalence queries only). A preliminary version of this paper appeared in the proceedings of the EuroCOLT '97 conference, published in volume 1208 of Lecture Notes in Artificial Intelligence, pages 3-15. Springer-Verlag, 1997.
[ 798, 1026, 1095 ]
Validation
1,434
5
Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming Abstract: This paper demonstrates the capabilities of Foidl, an inductive logic programming (ILP) system whose distinguishing characteristics are the ability to produce first-order decision lists, the use of an output completeness assumption as a substitute for negative examples, and the use of intensional background knowledge. The development of Foidl was originally motivated by the problem of learning to generate the past tense of English verbs; however, this paper demonstrates its superior performance on two different sets of benchmark ILP problems. Tests on the finite element mesh design problem show that Foidl's decision lists enable it to produce generally more accurate results than a range of methods previously applied to this problem. Tests with a selection of list-processing problems from Bratko's introductory Prolog text demonstrate that the combination of implicit negatives and intensionality allow Foidl to learn correct programs from far fewer examples than Foil.
[ 1208 ]
Test
1,435
2
Title: Further Results on Controllability of Recurrent Neural Networks Abstract: This paper studies controllability properties of recurrent neural networks. The new contributions are: (1) an extension of a previous result to a slightly different model, (2) a formulation and proof of a necessary and sufficient condition, and (3) an analysis of a low-dimensional case.
[ 1028, 1042, 1043 ]
Test
1,436
3
Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming Abstract: Figure 9: Results for various optimizations. Figure 10: Results with and without markov boundary scoring.
[ 895 ]
Train
1,437
3
Title: Coupled hidden Markov models for complex action recognition Abstract: © MIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 407, 20 Nov 96. We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task: classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and a clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signal: that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions.
[ 787, 891, 1287, 1393, 1593 ]
Validation
1,438
4
Title: Learning from undiscounted delayed rewards Abstract: The general framework of reinforcement learning has been proposed by several researchers for both the solution of optimization problems and the realization of adaptive control schemes. To allow for an efficient application of reinforcement learning in either of these areas, it is necessary to solve both the structural and the temporal credit assignment problem. In this paper, we concentrate on the latter, which is usually tackled through the use of learning algorithms that employ discounted rewards. We argue that for realistic problems this kind of solution is not satisfactory, since it does not address the effect of noise originating from different experiences and does not allow for an easy explanation of the parameters involved in the learning process. As a possible solution, we propose to keep the delayed reward undiscounted, but to discount the actual adaptation rate. Empirical results show that, depending on the kind of discount used, a more stable convergence and even an increase in performance can be obtained.
[ 294, 565, 807 ]
Test
1,439
1
Title: Minimum-Perimeter Domain Assignment Abstract: For certain classes of problems defined over two-dimensional domains with grid structure, optimization problems involving the assignment of grid cells to processors present a nonlinear network model for the problem of partitioning tasks among processors so as to minimize interprocessor communication. Minimizing interprocessor communication in this context is shown to be equivalent to tiling the domain so as to minimize total tile perimeter, where each tile corresponds to the collection of tasks assigned to some processor. A tight lower bound on the perimeter of a tile as a function of its area is developed. We then show how to generate minimum-perimeter tiles. By using assignments corresponding to near-rectangular minimum-perimeter tiles, closed form solutions are developed for certain classes of domains. We conclude with computational results with parallel high-level genetic algorithms that have produced good (and sometimes provably optimal) solutions for very large perimeter minimization problems.
[ 53, 803 ]
Validation
1,440
4
Title: Value Function Approximations and Job-Shop Scheduling Abstract: We report a successful application of TD(λ) with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses.
[ 82, 565, 1378 ]
Validation
1,441
1
Title: Nonlinearity, Hyperplane Ranking and the Simple Genetic Algorithm Abstract: Several metrics are used in empirical studies to explore the mechanisms of convergence of genetic algorithms. The metric is designed to measure the consistency of an arbitrary ranking of hyperplanes in a partition with respect to a target string. Walsh coefficients can be calculated for small functions in order to characterize sources of linear and nonlinear interactions. A simple deception measure is also developed to look closely at the effects of increasing nonlinearity of functions. Correlations between the metric and the deception measure are discussed, and relationships between the metric and the convergence behavior of a simple genetic algorithm are studied over large sets of functions with varying degrees of nonlinearity.
[ 941, 1638, 1717 ]
Test
1,442
5
Title: Learning Goal-Decomposition Rules using Exercises Abstract: Exercises are problems ordered in increasing order of difficulty. Teaching problem-solving through exercises is a widely used pedagogic technique. A computational reason for this is that the knowledge gained by solving simple problems is useful in efficiently solving more difficult problems. We adopt this approach of learning from exercises to acquire search-control knowledge in the form of goal-decomposition rules (d-rules). D-rules are first order, and are learned using a new "generalize-and-test" algorithm which is based on inductive logic programming techniques. We demonstrate the feasibility of the approach by applying it in two planning domains.
[ 344, 414, 675, 1135, 1444, 1445 ]
Validation
1,443
4
Title: Residual Q-Learning Applied to Visual Attention Abstract: Foveal vision features imagers with graded acuity coupled with context sensitive sensor gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision operates more efficiently than uniform acuity vision because resolution is treated as a dynamically allocatable resource, but requires a more refined visual attention mechanism. We demonstrate that reinforcement learning (RL) significantly improves the performance of foveal visual attention, and of the overall vision system, for the task of model based target recognition. A simulated foveal vision system is shown to classify targets with fewer fixations by learning strategies for the acquisition of visual information relevant to the task, and learning how to generalize these strategies in ambiguous and unexpected scenario conditions.
[ 1540 ]
Test
1,444
6
Title: Learning Horn Definitions with Equivalence and Membership Queries Abstract: A Horn definition is a set of Horn clauses with the same head literal. In this paper, we consider learning non-recursive, function-free first-order Horn definitions. We show that this class is exactly learnable from equivalence and membership queries. It follows then that this class is PAC learnable using examples and membership queries. Our results have been shown to be applicable to learning efficient goal-decomposition rules in planning domains.
[ 1135, 1442 ]
Train
1,445
5
Title: Theory-guided Empirical Speedup Learning of Goal Decomposition Rules Abstract: Speedup learning is the study of improving the problem-solving performance with experience and from outside guidance. We describe here a system that successfully combines the best features of Explanation-based learning and empirical learning to learn goal decomposition rules from examples of successful problem solving and membership queries. We demonstrate that our system can efficiently learn effective decomposition rules in three different domains. Our results suggest that theory-guided empirical learning can overcome the problems of purely explanation-based learning and purely empirical learning, and be an effective speedup learning method.
[ 344, 414, 675, 1442 ]
Test
1,446
2
Title: STABILIZATION WITH SATURATED ACTUATORS, A WORKED EXAMPLE: F-8 LONGITUDINAL FLIGHT CONTROL Abstract: The authors and coworkers recently proved general theorems on the global stabilization of linear systems subject to control saturation. This paper develops in detail an explicit design for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model. This paper represents the first detailed derivation of a controller using the techniques in question, and the results are very encouraging.
[ 948, 1282 ]
Train
1,447
4
Title: Model of the Environment to Avoid Local Learning Abstract: Pier Luca Lanzi. Technical Report N. 97.46, December 20th, 1997.
[ 566, 1515, 1581 ]
Test
1,448
0
Title: Automatic Storage and Indexing of Plan Derivations based on Replay Failures Abstract: When a case-based planner is retrieving a previous case in preparation for solving a new similar problem, it is often not aware of all of the implicit features of the new problem situation which determine if a particular case may be successfully applied. This means that some cases may fail to improve the planner's performance. By detecting and explaining these case failures as they occur, retrieval may be improved incrementally. In this paper we provide a definition of case failure for the case-based planner, dersnlp (derivation replay in snlp), which solves new problems by replaying its previous plan derivations. We provide explanation-based learning (EBL) techniques for detecting and constructing the reasons for the case failure. We also describe how the case library is organized so as to incorporate this failure information as it is produced. Finally we present an empirical study which demonstrates the effectiveness of this approach in improving the performance of dersnlp.
[ 1621 ]
Train
1,449
6
Title: Adaptive Mixtures of Probabilistic Transducers Abstract: We describe and analyze a mixture model for supervised learning of probabilistic transducers. We devise an on-line learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. We also present an application of the model for inducing a noun phrase recognizer.
[ 453, 1025 ]
Train
1,450
2
Title: Plasticity-Mediated Competitive Learning Abstract: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computation-ally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.
[ 731, 808, 1710 ]
Train
1,451
2
Title: On the Computation of the Induced L2 Norm of Single Input Linear Systems with Saturation Abstract:
[ 1272, 1281, 1346, 1604 ]
Train
1,452
2
Title: Bayesian Training of Backpropagation Networks by the Hybrid Monte Carlo Method Abstract: It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the "Hybrid Monte Carlo" method. This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of "free energy" differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks.
[ 157, 560, 1289, 1375 ]
Train
1,453
0
Title: Automatic Indexing, Retrieval and Reuse of Topologies in Architectural Layouts Abstract: Former layouts contain much of the know-how of architects. A generic and automatic way to formalize this know-how in order to use it by a computer would save a lot of effort and money. However, there seems to be no such way. The only access to the know-how is the layouts themselves. When developing a generic software tool to reuse former layouts, you cannot consider every part of the architectural domain or things like personal style. Tools used today only consider small parts of the architectural domain. Any personal style is ignored. Isn't it possible to build a basic tool which is adjusted by the content of the former layouts, but may be extended incrementally by modeling as much of the domain as desirable? This paper describes a reuse tool that performs this task, focusing on topological and geometrical binary relations.
[ 539, 1152, 1210 ]
Train
1,454
2
Title: A Neural Network Model for Prognostic Prediction Abstract: An important and difficult prediction task in many domains, particularly medical decision making, is that of prognosis. Prognosis presents a unique set of problems to a learning system when some of the outputs are unknown. This paper presents a new approach to prognostic prediction, using ideas from nonparametric statistics to fully utilize all of the available information in a neural architecture. The technique is applied to breast cancer prognosis, resulting in flexible, accurate models that may play a role in preventing unnecessary surgeries.
[ 524, 1169 ]
Train
1,455
1
Title: Self-Adaptation in Genetic Algorithms Abstract: In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogenous items which adapt during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs.
[ 793, 1069, 1153, 1685, 1694 ]
Train
1,456
6
Title: An Interactive Model of Teaching Abstract: Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin [1]. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas.
[ 308, 1095, 1343, 1469 ]
Train
1,457
2
Title: Actively Searching for an Effective Neural-Network Ensemble Abstract: A neural-network ensemble is a very successful technique where the outputs of a set of separately trained neural network are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that Addemup is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
[ 163, 569, 826, 828, 1237, 1462 ]
Train
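The accuracy-plus-diversity objective described in the Addemup abstract above can be illustrated with a greedy stand-in for its genetic search: from a pool of candidate classifiers, repeatedly add the one that most improves a score rewarding both ensemble accuracy and pairwise disagreement. The predictions, labels, and the weighting `lam` are all made up for illustration; Addemup itself evolves trained networks with a genetic algorithm rather than selecting greedily.

```python
# Greedy sketch of an accuracy + diversity objective (hypothetical data).
# Addemup's real search is a genetic algorithm over trained networks.

def ensemble_vote(members, preds, i):
    votes = sum(preds[m][i] for m in members)
    return 1 if votes >= 0 else -1

def score(members, preds, ys, lam=0.5):
    """Ensemble accuracy plus lam times mean pairwise disagreement."""
    n = len(ys)
    acc = sum(ensemble_vote(members, preds, i) == ys[i] for i in range(n)) / n
    pairs = [(a, b) for a in members for b in members if a < b]
    if not pairs:
        return acc
    div = sum(sum(preds[a][i] != preds[b][i] for i in range(n)) / n
              for a, b in pairs) / len(pairs)
    return acc + lam * div

# predictions of 4 candidate networks on 5 validation examples
preds = [[1, 1, -1, -1, 1],
         [1, 1, -1, 1, -1],
         [1, -1, 1, -1, 1],
         [-1, 1, -1, -1, 1]]
ys = [1, 1, -1, -1, 1]

members = [0]  # seed with the most accurate single network (net 0 here)
while len(members) < 3:
    best = max((m for m in range(len(preds)) if m not in members),
               key=lambda m: score(members + [m], preds, ys))
    members.append(best)
print(sorted(members))
```

Note how the second pick favors net 3, which is slightly less similar to net 0 than the equally accurate alternatives: disagreement breaks the tie between candidates with identical ensemble accuracy.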
1,458
3
Title: Sequential Thresholds: Context Sensitive Default Extensions Abstract: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both.
[ 838, 1714 ]
Train
1,459
4
Title: Generalized Markov Decision Processes: Dynamic-programming and Reinforcement-learning Algorithms Abstract: The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence.
[ 45, 57, 167, 210, 473, 566, 633, 671, 749, 775, 804, 1137, 1540, 1687, 2078, 2221, 2404 ]
Train
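The generalized-MDP abstract above builds on standard value iteration, which can be sketched on a toy deterministic chain MDP. The states, actions, discount factor, and rewards below are invented for illustration; the paper's generalized operators subsume this ordinary Bellman backup as a special case.

```python
# Value iteration on a small deterministic chain MDP: states 0..3,
# actions "left"/"right"; moving right from state 2 into the absorbing
# state 3 pays +1, everything else pays 0. Toy model, not from the paper.

GAMMA = 0.9
STATES = [0, 1, 2, 3]
ACTIONS = ["left", "right"]

def step(s, a):
    """Deterministic transition and reward; state 3 is absorbing."""
    if s == 3:
        return s, 0.0
    s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
    reward = 1.0 if (s == 2 and a == "right") else 0.0
    return s2, reward

def value_iteration(tol=1e-8):
    v = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            # Bellman backup: best one-step reward plus discounted value
            best = max(step(s, a)[1] + GAMMA * v[step(s, a)[0]]
                       for a in ACTIONS)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

v = value_iteration()
print({s: round(v[s], 4) for s in STATES})  # → {0: 0.81, 1: 0.9, 2: 1.0, 3: 0.0}
```

The contraction property mentioned in the abstract is what guarantees this loop converges: each sweep shrinks the distance to the fixed point by at least a factor of GAMMA.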
1,460
6
Title: Learning from Queries and Examples with Tree-structured Bias Abstract: Incorporating declarative bias or prior knowledge into learning is an active research topic in machine learning. Tree-structured bias specifies the prior knowledge as a tree of "relevance" relationships between attributes. This paper presents a learning algorithm that implements tree-structured bias, i.e., learns any target function probably approximately correctly from random examples and membership queries if it obeys a given tree-structured bias. The theoretical predictions of the paper are empirically validated.
[ 638, 672, 924 ]
Train
1,461
2
Title: Learning in Boltzmann Trees Abstract: We introduce a large family of Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N -bit parity and the detection of hidden symmetries.
[ 304, 954, 1288, 1357, 1593 ]
Test
1,462
2
Title: Learning from Bad Data Abstract: The data describing resolutions to telephone network local loop "troubles," from which we wish to learn rules for dispatching technicians, are notoriously unreliable. Anecdotes abound detailing reasons why a resolution entered by a technician would not be valid, ranging from sympathy to fear to ignorance to negligence to management pressure. In this paper, we describe four different approaches to dealing with the problem of "bad" data in order first to determine whether machine learning has promise in this domain, and then to determine how well machine learning might perform. We then offer evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place.
[ 1457, 1657 ]
Train
1,463
3
Title: Bias, variance and prediction error for classification rules Abstract: We study the notions of bias and variance for classification rules. Following Efron (1978) we develop a decomposition of prediction error into its natural components. Then we derive bootstrap estimates of these components and illustrate how they can be used to describe the error behaviour of a classifier in practice. In the process we also obtain a bootstrap estimate of the error of a "bagged" classifier.
[ 931, 999, 1053, 1361, 1399, 1405, 1406, 1512 ]
Train
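The "bagged" classifier mentioned at the end of the abstract above can be sketched by fitting a simple 1-D threshold rule on bootstrap resamples and aggregating by majority vote. The data, the threshold learner, and the number of resamples are all made up; the paper's bootstrap decomposition of prediction error into bias and variance components is more refined than this.

```python
import random

# Minimal bagging sketch: bootstrap-resample the training set, refit a
# threshold classifier each time, and predict by majority vote.
# Toy separable data, chosen only for illustration.

random.seed(0)

def fit_threshold(xs, ys):
    """Pick the threshold minimizing training error (predict +1 above it)."""
    best_t, best_err = None, float("inf")
    for t in xs:
        err = sum(1 for x, y in zip(xs, ys) if (1 if x > t else -1) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(xs, ys, x, n_boot=25):
    votes = 0
    n = len(xs)
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        t = fit_threshold([xs[i] for i in idx], [ys[i] for i in idx])
        votes += 1 if x > t else -1
    return 1 if votes >= 0 else -1

xs = [0.1, 0.2, 0.4, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
result = [bagged_predict(xs, ys, x) for x in (0.05, 0.5, 0.95)]
print(result)
```

Averaging over resamples reduces the variance component of the error: individual bootstrap fits place the threshold at different points, but the vote smooths those fluctuations out.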
1,464
2
Title: LINEAR SYSTEMS WITH SIGN-OBSERVATIONS Abstract: This paper deals with systems that are obtained from linear time-invariant continuous-or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner.
[ 200, 1021, 1100, 1254, 1470 ]
Train
1,465
0
Title: REPRO: Supporting Flowsheet Design by Case-Base Retrieval Abstract: The Case-Based Reasoning (CBR) paradigm is very close to designer behavior during conceptual design, and seems to be a fruitful computer-aided design approach if a library of design cases is available. The goal of this paper is to present the general framework of a case-based retrieval system, REPRO, that supports chemical process design. Crucial problems such as case representation and the structural similarity measure are described in detail. The experimental results and the expert evaluation show the usefulness of the described system on real-world problems. The paper ends with a discussion of open research problems and future work.
[ 1354 ]
Train
1,466
1
Title: A STUDY OF CROSSOVER OPERATORS IN GENETIC PROGRAMMING Abstract: Holland's analysis of the sources of power of genetic algorithms has served as guidance for the applications of genetic algorithms for more than 15 years. The technique of applying a recombination operator (crossover) to a population of individuals is a key to that power. Nevertheless, there have been a number of contradictory results concerning crossover operators with respect to overall performance. Recently, for example, genetic algorithms were used to design neural network modules and their control circuits. In these studies, a genetic algorithm without crossover outperformed a genetic algorithm with crossover. This report re-examines these studies, and concludes that the results were caused by a small population size. New results are presented that illustrate the effectiveness of crossover when the population size is larger. From a performance standpoint, the results indicate that better neural networks can be evolved in a shorter time if the genetic algorithm uses crossover.
[ 728, 943, 1016, 1650 ]
Test
1,467
1
Title: Adaptive Strategy Selection for Concept Learning Abstract: In this paper, we explore the use of genetic algorithms (GAs) to construct a system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The performance of this system is compared with that of two other concept learners (NEWGEM and C4.5) on a suite of target concepts. From this comparison, we identify strategies responsible for the success of these concept learners. We then implement a subset of these strategies within GABIL to produce a multistrategy concept learner. Finally, this multistrategy concept learner is further enhanced by allowing the GAs to adaptively select the appropriate strategies.
[ 163, 793, 1333 ]
Train
1,468
6
Title: Preventing "Overfitting" of Cross-Validation Data Abstract: Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted, such as by noise, and if the set of hypotheses we are selecting from is large, then "folklore" also warns about "overfitting" the cross-validation data [Klockars and Sax, 1986, Tukey, 1949, Tukey, 1953]. In this paper, we explain how this "overfitting" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computationally efficient form of leave-one-out cross-validation to select such a hypothesis. Finally, we present experimental results for one domain, that show LOOCVCV consistently outperforming selection of the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets.
[ 638, 847, 848 ]
Test
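The leave-one-out estimate underlying LOOCVCV can be sketched for a 1-nearest-neighbour rule on made-up 1-D data: hold out each point in turn, predict it from the rest, and report the fraction misclassified. The data below is hypothetical, and the paper's actual contribution (deliberately selecting a hypothesis with higher cross-validation error) is not reproduced here.

```python
# Leave-one-out cross-validation of a 1-nearest-neighbour rule on toy
# 1-D data. Illustrates the LOO estimate only, not the LOOCVCV selector.

def nn_predict(train, x):
    """Label of the nearest training point to x."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def loo_error(points):
    errors = 0
    for i, (x, y) in enumerate(points):
        rest = points[:i] + points[i + 1:]  # hold out point i
        errors += nn_predict(rest, x) != y
    return errors / len(points)

points = [(0.1, 0), (0.2, 0), (0.5, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
print(loo_error(points))  # fraction of held-out points misclassified
```

Here the two points straddling the class boundary (0.5 and 0.6) are each misclassified when held out, giving an estimate of 2/6.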
1,469
6
Title: Learning with Queries but Incomplete Information (Extended Abstract) Abstract: We investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. By incomplete we mean that some of the membership queries may be answered by I don't know. This model is a worst-case version of the incomplete membership query model of Angluin and Slonim. It attempts to model practical learning situations, including an experiment of Lang and Baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. We present algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. Compared to the complete information case, the query complexity increases by an additive term linear in the number of I don't know answers received. We also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model.
[ 1003, 1004, 1364, 1456, 1661, 1705 ]
Test
1,470
2
Title: Interconnected Automata and Linear Systems: A Theoretical Framework in Discrete-Time In Hybrid Systems III: Verification Abstract: This paper summarizes the definitions and several of the main results of an approach to hybrid systems, which combines finite automata and linear systems, developed by the author in the early 1980s. Some related more recent results are briefly mentioned as well.
[ 411, 1037, 1464 ]
Test
1,471
2
Title: New Characterizations of Input to State Stability Abstract: We present new characterizations of the Input to State Stability property. As a consequence of these results, we show the equivalence between the ISS property and several (apparent) variations proposed in the literature.
[ 447, 693, 1281, 1282, 1501, 1633 ]
Train
1,472
2
Title: A SUCCESSIVE LINEAR PROGRAMMING APPROACH FOR INITIALIZATION AND REINITIALIZATION AFTER DISCONTINUITIES OF DIFFERENTIAL ALGEBRAIC EQUATIONS Abstract: Determination of consistent initial conditions is an important aspect of the solution of differential algebraic equations (DAEs). Specification of inconsistent initial conditions, even if they are slightly inconsistent, often leads to a failure in the initialization problem. In this paper, we present a Successive Linear Programming (SLP) approach for the solution of the DAE derivative array equations for the initialization problem. The SLP formulation handles roundoff errors and inconsistent user specifications among others and allows for reliable convergence strategies that incorporate variable bounds and trust region concepts. A new consistent set of initial conditions is obtained by minimizing the deviation of the variable values from the specified ones. For problems with discontinuities caused by a step change in the input functions, a new criterion is presented for identifying the subset of variables which are continuous across the discontinuity. The LP formulation is then applied to determine a consistent set of initial conditions for further solution of the problem in the domain after the discontinuity. Numerous example problems are solved to illustrate these concepts.
[ 878 ]
Test
1,473
1
Title: Evolving Non-Determinism: An Inventive and Efficient Tool for Optimization and Discovery of Strategies Abstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model proposes an inventive way to explore the space of states that, combined with the use of simulated co-evolution, remedies some drawbacks of these previous techniques and even allows this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from scratch some sorting networks as good as the best known for the 16-input problem. It even improved a 25-year-old result for the 13-input problem by one comparator! For the Solitaire game, END evolved a strategy comparable to a human designed strategy.
[ 380, 1249, 1474 ]
Train
1,474
1
Title: Incremental Co-evolution of Organisms: A New Approach for Optimization and Discovery of Strategies Abstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model presented in this paper proposes an inventive way to explore the space of states that, using the simulated "incremental" co-evolution of some organisms, remedies some drawbacks of these previous techniques and even allows this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from "scratch" some sorting networks as good as the best known for the 16-input problem. It even improved a 25-year-old result for the 13-input problem by one comparator. For the Solitaire game, END evolved a strategy comparable to a human designed strategy.
[ 380, 1249, 1473, 1728 ]
Train
1,475
0
Title: Within the Letter of the Law: open-textured planning Abstract: Most case-based reasoning systems have used a single "best" or "most similar" case as the basis for a solution. For many problems, however, there is no single exact solution. Rather, there is a range of acceptable answers. We use cases not only as a basis for a solution, but also to indicate the boundaries within which a solution can be found. We solve problems by choosing some point within those boundaries. In this paper, I discuss this use of cases with illustrations from chiron, a system I have implemented in the domain of personal income tax planning.
[ 801, 1642 ]
Validation
1,476
1
Title: Program Optimization for Faster Genetic Programming Abstract: We have used genetic programming to develop efficient image processing software. The ultimate goal of our work is to detect certain signs of breast cancer that cannot be detected with current segmentation and classification methods. Traditional techniques do a relatively good job of segmenting and classifying small-scale features of mammograms, such as micro-calcification clusters. Our strongly-typed genetic programs work on a multi-resolution representation of the mammogram, and they are aimed at handling features at medium and large scales, such as stellated lesions and architectural distortions. The main problem is efficiency. We employ program optimizations that speed up the evolution process by more than a factor of ten. In this paper we present our genetic programming system, and we describe our optimization techniques.
[ 1178, 1277 ]
Train
1,477
2
Title: Two Constructive Methods for Designing Compact Feedforward Networks of Threshold Units Abstract: We propose two algorithms for constructing and training compact feedforward networks of linear threshold units. The Shift procedure constructs networks with a single hidden layer while the PTI constructs multilayered networks. The resulting networks are guaranteed to perform any given task with binary or real-valued inputs. The various experimental results reported for tasks with binary and real inputs indicate that our methods compare favorably with alternative procedures deriving from similar strategies, both in terms of size of the resulting networks and of their generalization properties.
[ 1252 ]
Validation
1,478
3
Title: Lazy Bayesian Trees Abstract: The naive Bayesian classifier is simple and effective, but its attribute independence assumption is often violated in the real world. A number of approaches have been developed that seek to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local Bayesian classifier at each leaf instead of predicting a single class. However, Bayesian tree learning still suffers from the replication and fragmentation problems of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes a novel lazy Bayesian tree learning algorithm. For each test example, it conceptually builds a most appropriate Bayesian tree. In practice, only one path with a local Bayesian classifier at its leaf is created. Experiments with a wide variety of real-world and artificial domains show that this new algorithm has significantly lower overall prediction error rates than a naive Bayesian classifier, C4.5, and a Bayesian tree learning algorithm.
[ 1335, 1336, 2338 ]
Train
1,479
3
Title: Revising Bayesian Network Parameters Using Backpropagation Abstract: The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique.
[ 136, 1102, 2017, 2543 ]
Test
1,480
2
Title: Learning in the Presence of Prior Knowledge: A Case Study Using Model Calibration Abstract: Computational models of natural systems often contain free parameters that must be set to optimize the predictive accuracy of the models. This process, called calibration, can be viewed as a form of supervised learning in the presence of prior knowledge. In this view, the fixed aspects of the model constitute the prior knowledge, and the goal is to learn correct values for the free parameters. We report on a series of attempts to learn parameter values for a global vegetation model called MAPSS (Mapped Atmosphere-Plant-Soil System) developed by our collaborator, Ron Neilson. Unfortunately, attempts to apply standard machine learning methods, specifically global error functions and gradient descent search, do not work with MAPSS, because the constraints introduced by the structure of the model (the prior knowledge) create a very difficult non-linear optimization problem. Successful calibration of MAPSS required taking a divide-and-conquer approach in which subsets of the parameters were calibrated while others were held constant. This approach was made possible by carefully selecting training sets that exercised only portions of the model and by designing error functions for each part that had desirable properties. The automated calibration tool that we have developed is currently being applied to calibrate MAPSS against a global climate data set.
[ 924, 1532 ]
Test
1,481
1
Title: An Evolutionary Approach to Learning in Robots Abstract: Evolutionary learning methods have been found to be useful in several areas in the development of intelligent robots. In the approach described here, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge engineering effort. This paper presents some initial results of applying the SAMUEL genetic learning system to a collision avoidance and navigation task for mobile robots.
[ 910, 964, 965, 966, 1573 ]
Validation
1,482
6
Title: Generalizations of the Bias/Variance Decomposition for Prediction Error Abstract: The bias and variance of a real valued random variable, using squared error loss, are well understood. However because of recent developments in classification techniques it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the systematic and variable parts of our predictor. After providing some examples we conclude with a discussion of the various definitions that have been proposed.
[ 1484 ]
Train
1,483
0
Title: Context-Based Similarity Applied to Retrieval of Relevant Cases Abstract: Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use a user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to the query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or it may even be impossible) to specify query requests precisely and completely, resulting in a situation known as fuzzy querying. This is usually not a problem for small domains, but for large repositories which store various information (multifunctional information bases or federated databases), request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exist for locating exact matches. Finding relevant partial matches might be a problem. This document proposes context-based similarity as a basis for flexible retrieval. Historical background on research in similarity assessment is presented and used as motivation for a formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctional information bases.
[ 857, 1123, 1125, 1498, 2052, 2060 ]
Test
1,484
6
Title: Experiments with a New Boosting Algorithm Abstract: In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
[ 822, 931, 1057, 1092, 1197, 1220, 1237, 1430, 1482, 1500, 1521, 1522, 1692 ]
Train
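The boosting scheme evaluated in the abstract above can be illustrated with a bare-bones AdaBoost over 1-D threshold "stumps". The data and the weak learner are invented for illustration; the paper's experiments boost decision trees, attribute-value tests, and nearest-neighbour classifiers rather than stumps.

```python
import math

# Bare-bones AdaBoost with exhaustive 1-D threshold "stumps" on toy data.

def stump_predict(threshold, sign, x):
    """Predict `sign` for x above the threshold, `-sign` otherwise."""
    return sign if x > threshold else -sign

def train_adaboost(xs, ys, rounds):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, sign)
    candidates = sorted(set(xs))
    for _ in range(rounds):
        # pick the stump with the lowest weighted error
        best = None
        for t in candidates:
            for s in (+1, -1):
                err = sum(w for w, x, y in zip(weights, xs, ys)
                          if stump_predict(t, s, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = max(err, 1e-10)  # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        # reweight: misclassified examples gain weight, then renormalize
        weights = [w * math.exp(-alpha * y * stump_predict(t, s, x))
                   for w, x, y in zip(weights, xs, ys)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

xs = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, 1, -1, 1, 1]
model = train_adaboost(xs, ys, rounds=3)
print([predict(model, x) for x in xs])  # → [-1, -1, 1, -1, 1, 1]
```

No single threshold separates this labeling, but three reweighted stumps combine to fit it exactly: each round concentrates weight on the examples the previous stumps got wrong.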
1,485
2
Title: Maximizing the Robustness of a Linear Threshold Classifier with Discrete Weights Abstract: Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights.
[ 1159, 1252 ]
Train
1,486
5
Title: Induction of decision trees and Bayesian classification applied to diagnosis of sport injuries Abstract: Machine learning techniques can be used to extract knowledge from data stored in medical databases. In our application, various machine learning algorithms were used to extract diagnostic knowledge to support the diagnosis of sport injuries. The applied methods include variants of the Assistant algorithm for top-down induction of decision trees, and variants of the Bayesian classifier. The available dataset was insufficient for reliable diagnosis of all sport injuries considered by the system. Consequently, expert-defined diagnostic rules were added and used as pre-classifiers or as generators of additional training instances for injuries with few training examples. Experimental results show that the classification accuracy and the explanation capability of the naive Bayesian classifier with the fuzzy discretization of numerical attributes was superior to other methods and was estimated as the most appropriate for practical use.
[ 426, 1008, 1569 ]
Train
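The naive Bayesian classifier family compared in the abstract above can be sketched for categorical attributes with Laplace smoothing. The attribute names, values, and labels below are invented (not the sports-injury data), and the smoothing denominator assumes two possible values per attribute.

```python
import math
from collections import defaultdict

# Minimal categorical naive Bayes with Laplace smoothing, on toy data.
# Assumes binary-valued attributes for the "+2" smoothing denominator.

def train_nb(rows, labels):
    n = len(rows)
    class_counts = defaultdict(int)
    feat_counts = defaultdict(int)   # (class, attr_index, value) -> count
    for row, y in zip(rows, labels):
        class_counts[y] += 1
        for i, v in enumerate(row):
            feat_counts[(y, i, v)] += 1
    return class_counts, feat_counts, n

def predict_nb(model, row):
    class_counts, feat_counts, n = model
    best, best_score = None, float("-inf")
    for y, cy in class_counts.items():
        score = math.log(cy / n)  # log prior
        for i, v in enumerate(row):
            # log likelihood with add-one (Laplace) smoothing
            score += math.log((feat_counts[(y, i, v)] + 1) / (cy + 2))
        if score > best_score:
            best, best_score = y, score
    return best

# hypothetical two-attribute records: (symptom, onset) -> diagnosis
rows = [("swelling", "acute"), ("swelling", "chronic"),
        ("stiffness", "chronic"), ("stiffness", "acute")]
labels = ["sprain", "sprain", "overuse", "sprain"]
model = train_nb(rows, labels)
print(predict_nb(model, ("swelling", "acute")))  # → sprain
```

Working in log space avoids underflow when many attributes are multiplied together, and the smoothing keeps unseen attribute-value combinations from zeroing out a class.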
1,487
2
Title: A Divide-and-Conquer Approach to Learning from Prior Knowledge Abstract: This paper introduces a new machine learning task, model calibration, and presents a method for solving a particularly difficult model calibration task that arose as part of a global climate change research project. The model calibration task is the problem of training the free parameters of a scientific model in order to optimize the accuracy of the model for making future predictions. It is a form of supervised learning from examples in the presence of prior knowledge. An obvious approach to solving calibration problems is to formulate them as global optimization problems in which the goal is to find values for the free parameters that minimize the error of the model on training data. Unfortunately, this global optimization approach becomes computationally infeasible when the model is highly nonlinear. This paper presents a new divide-and-conquer method that analyzes the model to identify a series of smaller optimization problems whose sequential solution solves the global calibration problem. This paper argues that methods of this kind, rather than global optimization techniques, will be required in order for agents with large amounts of prior knowledge to learn efficiently.
[ 1528, 1532 ]
Train
1,488
2
Title: Identification and Control of Nonlinear Systems Using Neural Network Models: Design and Stability Analysis Abstract: Report 91-09-01 September 1991 (revised) May 1994
[ 206, 427, 611, 980, 1490, 1668 ]
Test
1,489
5
Title: Dlab: A Declarative Language Bias Formalism Abstract: We describe the principles and functionalities of Dlab (Declarative LAnguage Bias). Dlab can be used in inductive learning systems to define syntactically and traverse efficiently finite subspaces of first order clausal logic, be it a set of propositional formulae, association rules, Horn clauses, or full clauses. A Prolog implementation of Dlab is available by ftp access. Keywords: declarative language bias, concept learning, knowledge discovery
[ 177, 837 ]
Train
1,490
2
Title: FEEDBACK STABILIZATION USING TWO-HIDDEN-LAYER NETS Abstract: This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into "direct" and "inverse" problems. The former correspond to the approximation of continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions, and are often encountered in the context of inverse kinematics determination or in control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one.
[ 206, 531, 1488 ]
Train
1,491
6
Title: Discovery as Autonomous Learning from the Environment Abstract: Discovery involves collaboration among many intelligent activities. However, little is known about how and in what form such collaboration occurs. In this paper, a framework is proposed for autonomous systems that learn and discover from their environment. Within this framework, many intelligent activities such as perception, action, exploration, experimentation, learning, problem solving, and new term construction can be integrated in a coherent way. The framework is presented in detail through an implemented system called LIVE, and is evaluated through the performance of LIVE on several discovery tasks. The conclusion is that autonomous learning from the environment is a feasible approach for integrating the activities involved in a discovery process.
[ 851, 903, 1390, 1605 ]
Train
1,492
6
Title: Predicting Time Series with Support Vector Machines Abstract: Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show an excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by a factor of 37%.
[ 1050, 1724 ]
Train
1,493
2
Title: Evaluation of Pattern Classifiers for Fingerprint and OCR Applications Abstract: In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loeve (K-L) transform of the images is used to generate the input feature set. Similarly for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data consisted of 2,000 training and 2,000 testing images. In addition to evaluation for accuracy, the multilayer perceptron and radial basis function networks were evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem was provided by the probabilistic neural network, where the minimum classification error was 2.5% for OCR and 7.2% for fingerprints.
[ 611, 774, 867, 1732 ]
Train