Column schema (from the dataset viewer):
- node_id: int64, values 0 – 76.9k
- label: int64, values 0 – 39
- text: string, lengths 13 – 124k characters
- neighbors: list, lengths 0 – 3.32k
- mask: string, 4 classes
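Read as a flat dump, each record occupies consecutive lines in column order: node_id, label, text, neighbors, mask. A minimal parsing sketch under that assumption (it presumes single-line text fields, which a few records below violate; `Record` and `parse_records` are hypothetical names, not part of the dataset):

```python
from dataclasses import dataclass

# Hypothetical container mirroring the five columns in the schema above.
@dataclass
class Record:
    node_id: int
    label: int
    text: str
    neighbors: list
    mask: str  # split class, e.g. "Train", "Validation", "Test"

def parse_records(lines):
    """Parse the flat dump: node_id, label, text, neighbors, mask, repeated.

    Assumes each text field fits on one line; records whose text spans
    multiple lines would need those lines joined first.
    """
    records = []
    i = 0
    while i + 4 < len(lines):
        # Neighbors look like "[ 415, 956, 1178 ]" in the dump.
        neighbors = [int(t) for t in lines[i + 3].strip("[] \n").split(",") if t.strip()]
        records.append(Record(
            node_id=int(lines[i].replace(",", "")),  # viewer prints e.g. "1,000"
            label=int(lines[i + 1]),
            text=lines[i + 2],
            neighbors=neighbors,
            mask=lines[i + 4].strip(),
        ))
        i += 5
    return records
```

The comma stripping in `node_id` is only needed because the viewer adds thousands separators from row 1,000 onward; the underlying column is a plain int64.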
node_id: 994, label: 0
Title: Role of Stories 1 Abstract: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice.
neighbors: [ 1354 ], mask: Train
node_id: 995, label: 1
Title: Evolving a Team Abstract: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice.
neighbors: [ 415, 956, 1178, 1230, 1231, 1232, 1495, 1690, 1971, 1985, 2139, 2673 ], mask: Train
node_id: 996, label: 3
Title: Reparameterisation Issues in Mixture Modelling and their bearing on MCMC algorithms Abstract: There is increasing need for efficient estimation of mixture distributions, especially following the explosion in the use of these as modelling tools in many applied fields. We propose in this paper a Bayesian noninformative approach for the estimation of normal mixtures which relies on a reparameterisation of the secondary components of the mixture in terms of divergence from the main component. As well as providing an intuitively appealing representation at the modelling stage, this reparameterisation has important bearing on both the prior distribution and the performance of MCMC algorithms. We compare two possible reparameterisations extending Mengersen and Robert (1996) and show that the reparameterisation which does not link the secondary components together is associated with poor convergence properties of MCMC algorithms.
neighbors: [ 161, 1015 ], mask: Train
node_id: 997, label: 5
Title: A Common LISP Hypermedia Server Abstract: A World-Wide Web (WWW) server was implemented in Common LISP in order to facilitate exploratory programming in the global hypermedia domain and to provide access to complex research programs, particularly artificial intelligence systems. The server was initially used to provide interfaces for document retrieval and for email servers. More advanced applications include interfaces to systems for inductive rule learning and natural-language question answering. Continuing research seeks to more fully generalize automatic form-processing techniques developed for email servers to operate seamlessly over the Web. The conclusions argue that presentation-based interfaces and more sophisticated form processing should be moved into the clients in order to reduce the load on servers and provide more advanced interaction models for users.
neighbors: [ 1271 ], mask: Train
node_id: 998, label: 3
Title: Accounting for Model Uncertainty in Survival Analysis Improves Predictive Performance Abstract: Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful.
neighbors: [ 84, 347, 1240, 1241 ], mask: Train
node_id: 999, label: 3
Title: The out-of-bootstrap method for model averaging and selection Abstract: We propose a bootstrap-based method for model averaging and selection that focuses on training points that are left out of individual bootstrap samples. This information can be used to estimate optimal weighting factors for combining estimates from different bootstrap samples, and also for finding the best subsets in the linear model setting. These proposals provide alternatives to Bayesian approaches to model averaging and selection, requiring less computation and fewer subjective choices.
neighbors: [ 70, 347, 1240, 1463, 1512 ], mask: Validation
node_id: 1000, label: 6
Title: PREDICTION GAMES AND ARCING ALGORITHMS Abstract: Technical Report 504 December 19, 1997 Statistics Department University of California, Berkeley, CA 94720. Abstract The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund and Schapire [1995], [1996]) and others in reducing generalization error has not been well understood. By formulating prediction, both classification and regression, as a game where one player makes a selection from instances in the training set and the other a convex linear combination of predictors from a finite set, existing arcing algorithms are shown to be algorithms for finding good game strategies. An optimal game strategy finds a combined predictor that minimizes the maximum of the error over the training set. A bound on the generalization error for the combined predictors in terms of their maximum error is proven that is sharper than bounds to date. Arcing algorithms are described that converge to the optimal strategy. Schapire et al. [1997] offered an explanation of why Adaboost works in terms of its ability to reduce the margin. Comparing Adaboost to our optimal arcing algorithm shows that their explanation is not valid and that the answer lies elsewhere. In this situation the VC-type bounds are misleading. Some empirical results are given to explore the situation.
neighbors: [ 569, 931, 1185 ], mask: Train
node_id: 1001, label: 0
Title: Is analogical problem solving always analogical? The case for imitation. Abstract: HCRL Technical Report 97
neighbors: [ 1354 ], mask: Train
node_id: 1002, label: 0
Title: A Model-Based Approach for Supporting Dialogue Inferencing in a Conversational Case-Based Reasoner Abstract: Conversational case-based reasoning (CCBR) is a form of interactive case-based reasoning where users input a partial problem description (in text). The CCBR system responds with a ranked solution display, which lists the solutions of stored cases whose problem descriptions best match the user's, and a ranked question display, which lists the unanswered questions in these cases. Users interact with these displays, either refining their problem description by answering selected questions, or selecting a solution to apply. CCBR systems should support dialogue inferencing; they should infer answers to questions that are implied by the problem description. Otherwise, questions will be listed that the user believes they have already answered. The standard approach to dialogue inferencing allows case library designers to insert rules that define implications between the problem description and unanswered questions. However, this approach imposes substantial knowledge engineering requirements. We introduce an alternative approach whereby an intelligent assistant guides the designer in defining a model of their case library, from which implication rules are derived. We detail this approach, its benefits, and explain how it can be supported through an integration with Parka-DB, a fast relational database system. We will evaluate our approach in the context of our CCBR system, named NaCoDAE. This paper appeared at the 1998 AAAI Spring Symposium on Multimodal Reasoning, and is NCARAI TR AIC-97-023. We introduce an integrated reasoning approach in which a model-based reasoning component performs an important inferencing role in a conversational case-based reasoning (CCBR) system named NaCoDAE (Breslow & Aha, 1997) (Figure 1). 
CCBR is a form of case-based reasoning where users enter text queries describing a problem and the system assists in eliciting refinements of it (Aha & Breslow, 1997). Cases have three components:
neighbors: [ 983, 1735 ], mask: Train
node_id: 1003, label: 6
Title: Learning Conjunctions of Horn Clauses Abstract:
neighbors: [ 672, 786, 791, 1004, 1343, 1364, 1469, 1897, 2028, 2146, 2182, 2483 ], mask: Train
node_id: 1004, label: 6
Title: Learning Read-Once Formulas with Queries Abstract: A read-once formula is a boolean formula in which each variable occurs at most once. Such formulas are also called μ-formulas or boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). Our results improve on Valiant's previous results for read-once formulas [26]. We also show that no polynomial time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.
neighbors: [ 672, 786, 791, 1003, 1343, 1364, 1469, 2146, 2350, 2483 ], mask: Test
node_id: 1005, label: 2
Title: Distributed Patterns as Hierarchical Structures Abstract: Recursive Auto-Associative Memory (RAAM) structures show promise as a general representation vehicle that uses distributed patterns. However training is often difficult, which explains, at least in part, why only relatively small networks have been studied. We show a technique for transforming any collection of hierarchical structures into a set of training patterns for a sequential RAAM which can be effectively trained using a simple (Elman-style) recurrent network. Training produces a set of distributed patterns corresponding to the structures.
neighbors: [ 1313 ], mask: Train
node_id: 1006, label: 6
Title: Learning Probabilistic Automata with Variable Memory Length Abstract: We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hypothesis for the correction of corrupted text.
neighbors: [ 242, 472, 574, 650, 876, 1025, 2040, 2360 ], mask: Test
node_id: 1007, label: 5
Title: Applications of a logical discovery engine Abstract: The clausal discovery engine claudien is presented. claudien discovers regularities in data and is a representative of the inductive logic programming paradigm. As such, it represents data and regularities by means of first order clausal theories. Because the search space of clausal theories is larger than that of attribute value representation, claudien also accepts as input a declarative specification of the language bias, which determines the set of syntactically well-formed regularities. Whereas other papers on claudien focus on the semantics or logical problem specification of claudien, on the discovery algorithm, or the PAC-learning aspects, this paper wants to illustrate the power of the resulting technique. In order to achieve this aim, we show how claudien can be used to learn 1) integrity constraints in databases, 2) functional dependencies and determinations, 3) properties of sequences, 4) mixed quantitative and qualitative laws, 5) reverse engineering, and 6) classification rules.
neighbors: [ 344, 837, 1919, 2217, 2426 ], mask: Test
node_id: 1008, label: 5
Title: Induction of decision trees using RELIEFF Abstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms from detecting significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show a strong relation between RELIEF's estimates and impurity functions, which are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive learning and open a wide range of possibilities for using RELIEFF.
neighbors: [ 1011, 1486, 1569 ], mask: Train
node_id: 1009, label: 1
Title: Induction of decision trees using RELIEFF Abstract: An investigation into the dynamics of Genetic Programming applied to chaotic time series prediction is reported. An interesting characteristic of adaptive search techniques is their ability to perform well in many problem domains while failing in others. Because of Genetic Programming's flexible tree structure, any particular problem can be represented in myriad forms. These representations have variegated effects on search performance. Therefore, an aspect of fundamental engineering significance is to find a representation which, when acted upon by Genetic Programming operators, optimizes search performance. We discover, in the case of chaotic time series prediction, that the representation commonly used in this domain does not yield optimal solutions. Instead, we find that the population converges onto one "accurately replicating" tree before other trees can be explored. To correct for this premature convergence we make a simple modification to the crossover operator. In this paper we review previous work with GP time series prediction, pointing out an anomalous result related to overlearning, and report the improvement effected by our modified crossover operator.
neighbors: [ 934, 1079, 2175 ], mask: Train
node_id: 1010, label: 5
Title: Linear Space Induction in First Order Logic with RELIEFF Abstract: Current ILP algorithms typically use variants and extensions of the greedy search. This prevents them from detecting significant relationships between the training objects. Instead of myopic impurity functions, we propose the use of the heuristic based on RELIEF for guidance of ILP algorithms. At each step, in our ILP-R system, this heuristic is used to determine a beam of candidate literals. The beam is then used in an exhaustive search for a potentially good conjunction of literals. From the efficiency point of view we introduce an interesting declarative bias which enables us to keep the growth of the training set, when introducing new variables, within linear bounds (linear with respect to the clause length). This bias prohibits cross-referencing of variables in the variable dependency tree. The resulting system has been tested on various artificial problems. The advantages and deficiencies of our approach are discussed.
neighbors: [ 877, 1011, 1061, 1182, 1569, 1578, 1651 ], mask: Train
node_id: 1011, label: 5
Title: Discretization of continuous attributes using ReliefF Abstract: Many existing learning algorithms expect the attributes to be discrete. Discretization of continuous attributes might be a difficult task even for domain experts. We have tried the non-myopic heuristic measure ReliefF for discretization and compared it with a well known dissimilarity measure and discretizations by experts. An extensive testing with several learning algorithms on six real world databases has shown that none of the discretizations has a clear advantage over the others.
neighbors: [ 1008, 1010, 1569 ], mask: Test
node_id: 1012, label: 4
Title: TDLeaf(λ): Combining Temporal Difference learning with game-tree search. Abstract: In this paper we present TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in both chess and backgammon which demonstrate its utility and provide comparisons with TD(λ) and another less radical variant, TD-directed(λ). In particular, our chess program, KnightCap, used TDLeaf(λ) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success and the relationship between our results and Tesauro's results in backgammon.
neighbors: [ 295, 565, 882 ], mask: Train
node_id: 1013, label: 3
Title: A SEQUENTIAL METROPOLIS-HASTINGS ALGORITHM Abstract: This paper deals with the asymptotic properties of the Metropolis-Hastings algorithm, when the distribution of interest is unknown, but can be approximated by a sequential estimator of its density. We prove that, under very simple conditions, the rate of convergence of the Metropolis-Hastings algorithm is the same as that of the sequential estimator when the latter is introduced as the reversible measure for the Metropolis-Hastings kernel. This problem is a natural extension of previous work on a new simulated annealing algorithm with a sequential estimator of the energy.
neighbors: [ 1156 ], mask: Train
node_id: 1014, label: 2
Title: Viewpoint invariant face recognition using independent component analysis and attractor networks Abstract: We have explored two approaches to recognizing faces across changes in pose. First, we developed a representation of face images based on independent component analysis (ICA) and compared it to a principal component analysis (PCA) representation for face recognition. The ICA basis vectors for this data set were more spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present a model for the development of viewpoint invariant responses to faces from visual experience in a biological system. The temporal continuity of natural visual experience was incorporated into an attractor network model by Hebbian learning following a lowpass temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization of Griniasty et al. (1993), which associates temporally proximal input patterns into basins of attraction. The system acquired representations of faces that were largely independent of pose.
neighbors: [ 576, 676, 1056, 1091 ], mask: Validation
node_id: 1015, label: 3
Title: Bayesian curve fitting using multivariate normal mixtures Abstract: Problems of regression smoothing and curve fitting are addressed via predictive inference in a flexible class of mixture models. Multi-dimensional density estimation using Dirichlet mixture models provides the theoretical basis for semi-parametric regression methods in which fitted regression functions may be deduced as means of conditional predictive distributions. These Bayesian regression functions have features similar to generalised kernel regression estimates, but the formal analysis addresses problems of multivariate smoothing parameter estimation and the assessment of uncertainties about regression functions naturally. Computations are based on multi-dimensional versions of existing Markov chain simulation analysis of univariate Dirichlet mixture models.
neighbors: [ 852, 996, 1338 ], mask: Train
node_id: 1016, label: 1
Title: An Analysis of the Interacting Roles of Population Size and Crossover in Genetic Algorithms Abstract: In this paper we present some theoretical and empirical results on the interacting roles of population size and crossover in genetic algorithms. We summarize recent theoretical results on the disruptive effect of two forms of multi-point crossover: n-point crossover and uniform crossover. We then show empirically that disruption analysis alone is not sufficient for selecting appropriate forms of crossover. However, by taking into account the interacting effects of population size and crossover, a general picture begins to emerge. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.
neighbors: [ 727, 728, 856, 943, 1070, 1110, 1205, 1305, 1466, 1670, 1729 ], mask: Train
node_id: 1017, label: 2
Title: Using Neural Networks for Descriptive Statistical Analysis of Educational Data Abstract: In this paper we discuss the methodological issues of using a class of neural networks called Mixture Density Networks (MDN) for discriminant analysis. MDN models have the advantage of having a rigorous probabilistic interpretation, and they have proven to be a viable alternative as a classification procedure in discrete domains. We will address both the classification and interpretive aspects of discriminant analysis, and compare the approach to the traditional method of linear discriminants as implemented in standard statistical packages. We show that the MDN approach adopted performs well in both aspects. Many of the observations made are not restricted to the particular case at hand, and are applicable to most applications of discriminant analysis in educational research. URL: http://www.cs.Helsinki.FI/research/cosco/
neighbors: [ 74, 157, 1574 ], mask: Train
node_id: 1018, label: 1
Title: Simulated Annealing for Hard Satisfiability Problems Abstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993).
neighbors: [ 1136, 1139, 1142 ], mask: Train
node_id: 1019, label: 6
Title: Bibliography "SMART: Support Management Automated Reasoning Technology for COMPAQ Customer Service," "Instance-Based Learning Algorithms," Machine Abstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993).
neighbors: [ 853, 862, 906, 926, 927, 1256, 1290 ], mask: Validation
node_id: 1020, label: 3
Title: Error-Based and Entropy-Based Discretization of Continuous Features Abstract: We present a comparison of error-based and entropy-based methods for discretization of continuous features. Our study includes both an extensive empirical comparison as well as an analysis of scenarios where error minimization may be an inappropriate discretization criterion. We present a discretization method based on the C4.5 decision tree algorithm and compare it to an existing entropy-based discretization algorithm, which employs the Minimum Description Length Principle, and a recently proposed error-based technique. We evaluate these discretization methods with respect to C4.5 and Naive-Bayesian classifiers on datasets from the UCI repository and analyze the computational complexity of each method. Our results indicate that the entropy-based MDL heuristic outperforms error minimization on average. We then analyze the shortcomings of error-based approaches in comparison to entropy-based methods.
neighbors: [ 430, 1322, 1328, 1329, 1337, 2577 ], mask: Test
node_id: 1021, label: 2
Title: Lemma 2.3 The system is reachable and observable and realizes the same input/output behavior as Abstract: Here we show a similar construction for multiple-output systems, with some modifications. Let = (A; B; C) s be a discrete-time sign-linear system with state space IR n and p outputs. Perform a change of ; where A 1 (n 1 fi n 1 ) is invertible and A 2 (n 2 fi n 2 ) is nilpotent. If (A; B) is a reachable pair and (A; C) is an observable pair, then is minimal in the sense that any other sign-linear system with the same input/output behavior has dimension at least n. But, if n 1 < n, then det A = 0 and is not observable and hence not canonical. Let us find another system ~ (necessarily not sign-linear) which has the same input/output behavior as , but is canonical. Let i be the relative degree of the ith row of the Markov sequence A, and = minf i : i = 1; : : : ; pg. Let the initial state be x. There is a difference between the case when the smallest relative degree is greater or equal to n 2 and the case when < n 2 . Roughly speaking, when n 2 the outputs of the sign-linear system give us information about sign (Cx), sign (CAx), : : : , sign (CA 1 x), which are the first outputs of the sys tem. After that, we can use the inputs and outputs to learn only about x 1 (the first n 1 components of x). When < n 2 , we may be able to use some controls to learn more about x 2 (the last n 2 components of x) before time n 2 when the nilpotency of A 2 has finally Lemma 2.4 Two states x and z are indistinguishable for if and only if (x) = (z). Proof. In the case n 2 , we have only the equations x 1 = z 1 and the equality of the 's. The first ` output terms for are exactly the terms of . So these equalities are satisfied if and only if the first ` output terms coincide for x and z, for any input. 
Equality of everything but the first n 1 components is equivalent to the first n 2 output terms coinciding for x and z, since the jth row of the qth output, for initial state x, for example, is either sign (c j A q x) if j > q, or sign (c j A q x + + A j j u q j +1 + ) if j q in which case we may use the control u q j +1 to identify c j A q x (using Remark 3.3 in [1]).
neighbors: [ 1464 ], mask: Train
node_id: 1022, label: 2
Title: j Abstract: So applying Corollary 4.3 to the second equation in (47), we conclude that From (38), we then get jg(y n + ~ k( y (51), we obtain jy n + ~ k( From (39) we see that the right-hand side of (54) is bounded by . Since the system _ y = A 1 y k( y )b 2 jyj ev N : (55) Now, suppose lim sup t!1 jy(t)j = > 0. Then jyj ev 2. Since j k(y)j Ljyj, we have and using (56) and (57), we obtain j~yj ev 2(~-1 + -2 )L + -2 ffi : (58) (Note that if the right-hand side of (58) is 1 , then the inequality is trivial since we know from (52) that j~yj ev 1 .) From (53), (56), and (58), we have -2 ffi + N ffi > N . However, from (55) we see that (60) still holds. So we established (60) in all cases. From (40) we then get jyj ev 2 Taking the lim sup t!1 of the left-hand side of (61), we have 1 2 + N("-2 + 1)ffi i.e. 2 N("-2 + 1)ffi. Substituting this into (58) and (61), we get j~yj ev ffi, and jyj ev 2 N("-2 + 1)ffi . So, if we take N = 2 N("-2 + 1)(1 + 2L(~-1 + -2 )) + -2 ; the conclusion follows. To complete the proof, we need to deal with the general case of m > 1 inputs. This is done by induction on m, as in the proof in [14], and will be omitted here. 2 [1] Fuller, A.T., "In the large stability of relay and saturated control systems with linear controllers," Int. J. Control, 10(1969): 457-480. [2] Gutman, P-O., and P. Hagander, "A new design of constrained controllers for linear systems," IEEE Transactions on Automat. Contr. AC-30(1985): 22-23. [3] Kosut, R.L., "Design of linear systems with saturating linear control and bounded states," IEEE Trans. Au-tom. Control AC-28(1983): 121-124. [4] Krikelis, N.J., and S.K. Barkas, "Design of tracking systems subject to actuator saturation and integrator wind-up," Int. J. Control 39(1984): 667-682. [5] Schmitendorf, W.E. and B.R. Barmish, "Null controllability of linear systems with constrained controls," SIAM J. Control and Opt. 18(1980): 327-345. 
[6] Slemrod, M., "Feedback stabilization of a linear control system in Hilbert space," Math. Control Signals Systems 2(1989): 265-285. [7] Slotine, J-J.E., and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, 1991. [8] Sontag, E.D., "An algebraic approach to bounded controllability of linear systems," Int. J. Control 39(1984): 181-188. [9] Sontag, E.D., "Remarks on stabilization and input-to-state stability," Proc. IEEE CDC, Tampa, Dec. 1989, IEEE Publications, 1989, pp. 1376-1378. [10] Sontag, E.D., Mathematical Control Theory: Deterministic Finite Dimensional Systems, Springer, New York, 1990. [11] Sontag, E.D., and H.J. Sussmann, "Nonlinear output feedback design for linear systems with saturating controls," Proc. IEEE CDC, Honolulu, Dec. 1990, IEEE Publications, 1990, pp. 3414-3416. [12] Sussmann, H. J. and Y. Yang, "On the stabilizability of multiple integrators by means of bounded feedback controls" Proc. IEEE CDC, Brighton, UK, Dec. 1991, IEEE Publications, 1991: 70-73. [13] Teel, A.R., "Global stabilization and restricted tracking for multiple integrators with bounded controls," Systems and Control Letters 18(1992): 165-171. [14] Yang, Y., H.J. Sussmann and E.D. Sontag, "Stabilization of linear systems with bounded controls," Proc. June 1992 NOLCOS, Bordeaux, (M. Fliess, Ed.), IFAC Publications, pp. 15-20. [15] Yang, Y., Global Stabilization of Linear Systems with Bounded Feedback, Ph. D. Thesis, Mathematics Department, Rutgers University, 1993. j~yj ev M (~-1 + -2 ) + ffi-2 ; (50)
neighbors: [ 1282 ], mask: Test
node_id: 1023, label: 2
Title: Data Reconciliation and Gross Error Detection for Dynamic Systems Abstract: Gross error detection plays a vital role in parameter estimation and data reconciliation for both dynamic and steady state systems. In particular, recent advances in process optimization now allow data reconciliation of dynamic systems and appropriate problem formulations need to be considered for them. Data errors due to either miscalibrated or faulty sensors or just random events nonrepresentative of the underlying statistical distribution, can induce heavy biases in the parameter estimates and in the reconciled data. In this paper we concentrate on robust estimators and exploratory statistical methods which allow us to detect the gross errors as the data reconciliation is performed. These robust methods have the property of being insensitive to departures from ideal statistical distributions and therefore are insensitive to the presence of outliers. Once the regression is done, the outliers can be detected readily by using exploratory statistical techniques. An important feature for performance of the optimization algorithm and uniqueness of the reconciled data is the ability to classify the variables according to their observability and redundancy properties. Here an observable variable is an unmeasured quantity which can be estimated from the measured variables through the physical model while a nonredundant variable is a measured variable which cannot be estimated other than through its measurements. Variable classification can be used as an aid to design instrumentation schemes. In this
[ 878, 1090 ]
Train
1,024
6
Title: On learning hierarchical classifications Abstract: Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the library of congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning category-discriminants in a hard top-down fashion and compare this to a soft approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the prior knowledge it encodes.
[ 74, 1053, 1335, 2338 ]
Validation
1,025
6
Title: Machine Learning 27(1):51-68, 1997. Predicting nearly as well as the best pruning of a decision tree Abstract: Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up "over-fitting" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be "much worse" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [4] results on predicting using "expert advice" (where we view each pruning as an "expert") to obtain an algorithm that has provably low prediction loss, but that is computationally infeasible. Next, we generalize and apply a method developed by Buntine [3], [2] and Willems, Shtarkov and Tjalkens [20], [21] to derive a very efficient implementation of this procedure.
[ 453, 569, 876, 1006, 1238, 1290, 1388, 1449, 1712 ]
Train
1,026
6
Title: Noise-Tolerant Parallel Learning of Geometric Concepts Abstract: We present several efficient parallel algorithms for PAC-learning geometric concepts in a constant-dimensional space that are robust even against malicious misclassification noise of any rate less than 1/2. In particular we consider the class of geometric concepts defined by a polynomial number of (d-1)-dimensional hyperplanes against an arbitrary distribution where each hyperplane has a slope from a set of known slopes, and the class of geometric concepts defined by a polynomial number of (d-1)-dimensional hyperplanes (of unrestricted slopes) against a product distribution. Next we define a complexity measure of any set S of (d-1)-dimensional surfaces that we call the variant of S and prove that the class of geometric concepts defined by surfaces of polynomial variant can be efficiently learned in parallel under a product distribution (even under malicious misclassification noise). Finally, we describe how boosting techniques can be used so that our algorithms' dependence on ε and δ does not depend on d.
[ 1105, 1433 ]
Train
1,027
6
Title: Pessimistic decision tree pruning based on tree size Abstract: In this work we develop a new criterion to perform pessimistic decision tree pruning. Our method is theoretically sound and is based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. We show that our criterion is very well motivated, from the theory side, and performs very well in practice. The accuracy of the new criterion is comparable to that of the current method used in C4.5.
[ 56, 322, 378, 638, 1322, 1336, 1388 ]
Train
1,028
2
Title: NETWORKS, FUNCTION DETERMINES FORM Abstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights.
[ 206, 1037, 1042, 1435, 1610 ]
Test
1,029
3
Title: Subregion-Adaptive Integration of Functions Having a Dominant Peak Abstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights.
[ 1090, 2456 ]
Train
1,030
1
Title: Solving Combinatorial Problems Using Evolutionary Algorithms Abstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights.
[ 163, 1136, 1796, 1946 ]
Train
1,031
2
Title: Protein Secondary Structure Modelling with Probabilistic Networks (Extended Abstract) Abstract: In this paper we study the performance of probabilistic networks in the context of protein sequence analysis in molecular biology. Specifically, we report the results of our initial experiments applying this framework to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we describe here is our ability to perform detailed experiments where we can experiment with different models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks. We believe that probabilistic methods are comparable to other methods in prediction quality. In addition, the predictions generated by our methods have precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks thereby allowing biologists to study and experiment with different causal models in a convenient manner.
[ 258, 1328 ]
Train
1,032
6
Title: Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation Abstract: In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for rather limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability, and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class.
[ 591, 847, 848, 967, 1335 ]
Train
1,033
0
Title: Observation and Generalisation in a Simulated Robot World Abstract: This paper describes a program which observes the behaviour of actors in a simulated world and uses these observations as guides to conducting experiments. An experiment is a sequence of actions carried out by an actor in order to support or weaken the case for a generalisation of a concept. A generalisation is attempted when the program observes a state of the world which is similar to some previous state. A partial matching algorithm is used to find substitutions which enable the two states to be unified. The generalisation of the two states is their unifier.
[ 1174 ]
Test
1,034
1
Title: 1 GP-COM: A Distributed, Component-Based Genetic Programming System in C++ Abstract: Widespread adoption of Genetic Programming techniques as a domain-independent problem solving tool depends on a good underlying software structure. A system is presented that mirrors the conceptual makeup of a GP system. Consisting of a loose collection of software components, each with strict interface definitions and roles, the system maximises flexibility and minimises effort when applied to a new problem domain.
[ 1178, 1730 ]
Validation
1,035
1
Title: An Empirical Investigation of Multi-Parent Recombination Operators in Evolution Strategies Abstract: Widespread adoption of Genetic Programming techniques as a domain-independent problem solving tool depends on a good underlying software structure. A system is presented that mirrors the conceptual makeup of a GP system. Consisting of a loose collection of software components, each with strict interface definitions and roles, the system maximises flexibility and minimises effort when applied to a new problem domain.
[ 714, 1216, 1218 ]
Validation
1,036
1
Title: Adaptive Behavior in Competing Co-Evolving Species Abstract: Co-evolution of competitive species provides an interesting testbed to study the role of adaptive behavior because it provides unpredictable and dynamic environments. In this paper we experimentally investigate some arguments for the co-evolution of different adaptive protean behaviors in competing species of predators and prey. Both species are implemented as simulated mobile robots (Kheperas) with infrared proximity sensors, but the predator has an additional vision module whereas the prey has a maximum speed set to twice that of the predator. Different types of variability during life for neurocontrollers with the same architecture and genetic length are compared. It is shown that simple forms of proteanism affect co-evolutionary dynamics and that prey exploit noisy controllers to generate random trajectories, whereas predators benefit from directional-change controllers to improve pursuit behavior.
[ 538, 712, 1662 ]
Train
1,037
2
Title: OBSERVABILITY IN RECURRENT NEURAL NETWORKS Abstract: Report SYCON-92-07rev ABSTRACT We obtain a characterization of observability for a class of nonlinear systems which appear in neural networks research.
[ 427, 1028, 1042, 1470, 1610 ]
Validation
1,038
2
Title: Brief Papers Computing Second Derivatives in Feed-Forward Networks: A Review Abstract: The calculation of second derivatives is required by recent training and analysis techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with |w| weights, simply writing the full matrix of second derivatives requires O(|w|^2) operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires of the order of 2h + 2 backward/forward-propagation passes where h is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or cross-entropy error and Softmax units, etc.).
[ 157, 916, 1196 ]
Train
1,039
0
Title: Functional Programming by Analogy Abstract: In this paper we describe how the principles of problem solving by analogy can be applied to the domain of functional program synthesis. For this reason, we treat programs as syntactical structures. We discuss two different methods to handle these structures: (a) a graph metric for determining the distance between two program schemes, and (b) the Structure Mapping Engine (an existing system to examine analogical processing). Furthermore we show experimental results and discuss them.
[ 1354 ]
Validation
1,040
0
Title: Learning from Examples: Reminding or Heuristic Switching? Abstract: In this paper we describe how the principles of problem solving by analogy can be applied to the domain of functional program synthesis. For this reason, we treat programs as syntactical structures. We discuss two different methods to handle these structures: (a) a graph metric for determining the distance between two program schemes, and (b) the Structure Mapping Engine (an existing system to examine analogical processing). Furthermore we show experimental results and discuss them.
[ 1354 ]
Test
1,041
2
Title: The Potential of Prototype Styles of Generalization Abstract: There are many ways for a learning system to generalize from training set data. This paper presents several generalization styles using prototypes in an attempt to provide accurate generalization on training set data for a wide variety of applications. These generalization styles are efficient in terms of time and space, and lend themselves well to massively parallel architectures. Empirical results of generalizing on several real-world applications are given, and these results indicate that the prototype styles of generalization presented have potential to provide accurate generalization for many applications.
[ 1321 ]
Train
1,042
2
Title: Recurrent Neural Networks: Some Systems-Theoretic Aspects Abstract: This paper provides an exposition of some recent research regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, observability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned. (Supported in part by US Air Force Grant AFOSR-94-0293.)
[ 206, 1028, 1037, 1043, 1435 ]
Train
1,043
2
Title: Complete Controllability of Continuous-Time Recurrent Neural Networks Abstract: This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent.
[ 1042, 1435 ]
Train
1,044
2
Title: Word Perfect Corp. LIA: A Location-Independent Transformation for ASOCS Adaptive Algorithm 2 Abstract: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. ANNs that use dynamic topologies have shown ability to overcome many of these problems. Adaptive Self Organizing Concurrent Systems (ASOCS) are a class of learning models with inherently dynamic topologies. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing learning models that use dynamic topologies efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents the Location-Independent ASOCS (LIA) model as a LIT for ASOCS Adaptive Algorithm 2. The description of LIA gives formal definitions for LIA algorithms. Because LIA implements basic ASOCS mechanisms, these definitions provide a formal description of basic ASOCS mechanisms in general, in addition to LIA.
[ 809, 812, 814, 1341, 1365 ]
Validation
1,045
4
Title: Spurious Solutions to the Bellman Equation Abstract: Reinforcement learning algorithms often work by finding functions that satisfy the Bellman equation. This yields an optimal solution for prediction with Markov chains and for controlling a Markov decision process (MDP) with a finite number of states and actions. This approach is also frequently applied to Markov chains and MDPs with infinite states. We show that, in this case, the Bellman equation may have multiple solutions, many of which lead to erroneous predictions and policies (Baird, 1996). Algorithms and conditions are presented that guarantee a single, optimal solution to the Bellman equation.
[ 471, 1540 ]
Train
1,046
0
Title: Model-Based Learning of Structural Indices to Design Cases Abstract: A major issue in case-based systems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization.
[ 612, 1121, 1344, 1345, 2706 ]
Test
1,047
0
Title: GIT-CC-92/60 A Model-Based Approach to Analogical Reasoning and Learning in Design Abstract: A major issue in case-based systems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization.
[ 603, 1344, 1345, 1348, 1354, 1355, 1420 ]
Train
1,048
0
Title: Learning to Classify Observed Motor Behavior Abstract: We present a representational format for observed movements. The representation has a temporal structure relating components of a single complex movement. We also present OXBOW, an unsupervised learning system, which constructs classes of these movements. Empirical results indicate that the system builds abstract movement concepts with appropriate component structure allowing it to predict the latter portions of a partially observed movement.
[ 984 ]
Test
1,049
5
Title: Machine Learning and Inference Abstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: one for the best representation space, and two for the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.
[ 1292, 1329, 1498 ]
Train
1,050
6
Title: Extracting Support Data for a Given Task Abstract: We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small (≈ 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited.
[ 1171, 1306, 1389, 1492, 1499, 1591 ]
Train
1,051
2
Title: Interpreting neuronal population activity by reconstruction: A unified framework with application to hippocampal place cells Abstract: Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population, and second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be, which were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running.
In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain could feasibly use to solve related problems.
[ 1052, 2576 ]
Train
1,052
2
Title: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory Abstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.
[ 600, 832, 1051, 1066 ]
Train
1,053
6
Title: Bias Plus Variance Decomposition for Zero-One Loss Functions Abstract: We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.
[ 853, 931, 1024, 1191, 1197, 1463, 1568, 1607 ]
Test
1,054
1
Title: Evolving Sorting Networks using Genetic Programming and Rapidly Reconfigurable Field-Programmable Gate Arrays Convergent Design, L.L.C. Abstract: This paper describes ongoing work involving the use of the Xilinx XC6216 rapidly reconfigurable field-programmable gate array to evolve sorting networks using genetic programming. We successfully evolved a network for sorting seven items that employs two fewer steps than the sorting network described in a 1962 patent and that has the same number of steps as the seven-sorter devised by Floyd and Knuth subsequent to the patent.
[ 1249 ]
Validation
1,055
2
Title: Parsimonious Least Norm Approximation Abstract: A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error ||Ax - b - p||_1. Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error ||Ax - b - p||_1, and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well as by combinatorially choosing an optimal solution with a specific number of nonzero elements.
[ 607, 1059, 1284 ]
Train
1,056
2
Title: Learning Viewpoint Invariant Face Representations from Visual Experience by Temporal Association Abstract: In natural visual experience, different views of an object or face tend to appear in close temporal proximity. A set of simulations is presented which demonstrate how viewpoint invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations explored the interaction of temporal smoothing of activity signals with Hebbian learning (Foldiak, 1991) in both a feed-forward system and a recurrent system. The recurrent system was a generalization of a Hopfield network with a lowpass temporal filter on all unit activities. Following training on sequences of graylevel images of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint invariant.
[ 476, 676, 1014 ]
Train
1,057
2
Title: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks Abstract: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists.
[ 627, 631, 1307, 1359, 1484 ]
Validation
1,058
2
Title: Statistical Theory of Overtraining Is Cross-Validation Asymptotically Effective? Abstract: A statistical theory for overtraining is proposed. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping we answer the question: In what ratio the examples should be divided into training and testing sets in order to obtain the optimum performance. In the non-asymptotic region cross-validated early stopping always decreases the generalization error. Our large scale simulations done on a CM5 are in nice agreement with our analytical findings.
[ 1211, 2454 ]
Train
1,059
2
Title: Mathematical Programming in Data Mining Abstract: Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.
[ 1055 ]
Train
1,060
1
Title: An Overview of Genetic Algorithms Part 1, Fundamentals Abstract: Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.
[ 163, 237, 965, 1136, 1523, 1890, 2039 ]
Train
1,061
5
Title: Stochastic search in inductive concept learning Abstract: Concept learning can be viewed as search of the space of concept descriptions. The hypothesis language determines the search space. In standard inductive learning algorithms, the structure of the search space is determined by generalization/specialization operators. Algorithms perform locally optimal search by using a hill-climbing and/or a beam-search strategy. To overcome this limitation, concept learning can be viewed as stochastic search of the space of concept descriptions. The proposed stochastic search method is based on simulated annealing which is known as a successful means for solving combinatorial optimization problems. The stochastic search method, implemented in a rule learning system ATRIS, is based on a compact and efficient representation of the problem and the appropriate operators for structuring the search space. Furthermore, by heuristic pruning of the search space, the method enables also handling of imperfect data. The paper introduces the stochastic search method, describes the ATRIS learning algorithm and gives results of the experiments.
[ 378, 426, 585, 1010, 1244, 1578, 1651 ]
Train
1,062
6
Title: Exponentially many local minima for single neurons Abstract: We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension.
[ 930, 1254, 1323, 2651 ]
Train
1,063
1
Title: An Analysis of the Effects of Neighborhood Size and Shape on Local Selection Algorithms Abstract: The increasing availability of finely-grained parallel architectures has resulted in a variety of evolutionary algorithms (EAs) in which the population is spatially distributed and local selection algorithms operate in parallel on small, overlapping neighborhoods. The effects of design choices regarding the particular type of local selection algorithm as well as the size and shape of the neighborhood are not particularly well understood and are generally tested empirically. In this paper we extend the techniques used to more formally analyze selection methods for sequential EAs and apply them to local neighborhood models, resulting in a much clearer understanding of the effects of neighborhood size and shape.
[ 1065, 1136, 1153, 1628 ]
Train
1,064
3
Title: Incremental Tradeoff Resolution in Qualitative Probabilistic Networks Abstract: Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determinate qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods.
[ 332, 623, 952, 1937 ]
Train
1,065
1
Title: A Survey of Parallel Genetic Algorithms Abstract: IlliGAL Report No. 97003 May 1997
[ 163, 1063, 1106, 1153, 1279, 1305 ]
Validation
1,066
2
Title: The Rectified Gaussian Distribution Abstract: A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.
[ 36, 1052 ]
Validation
1,067
2
Title: A Fast Fixed-Point Algorithm for Independent Component Analysis Abstract: This paper will appear in Neural Computation, 9:1483-1492, 1997. We introduce a novel fast algorithm for Independent Component Analysis, which can be used for blind source separation and feature extraction. It is shown how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and is fast to converge to the most accurate solution allowed by the data. The algorithm finds, one at a time, all non-Gaussian independent components, regardless of their probability distributions. The computations can be performed either in batch mode or in a semi-adaptive manner. The convergence of the algorithm is rigorously proven, and the convergence speed is shown to be cubic. Some comparisons to gradient based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.
[ 570, 576, 834, 839, 1801, 1814 ]
Validation
1,068
2
Title: Neuronal Goals: Efficient Coding and Coincidence Detection Abstract: Barlow's seminal work on minimal entropy codes and unsupervised learning is reiterated. In particular, the need to transmit the probability of events is put in a practical neuronal framework for detecting suspicious events. A variant of the BCM learning rule [15] is presented together with some mathematical results suggesting optimal minimal entropy coding.
[ 359, 726, 989, 2499, 2500 ]
Train
1,069
1
Title: Extended Selection Mechanisms in Genetic Algorithms Abstract:
[ 163, 422, 793, 1096, 1455, 1685 ]
Train
1,070
1
Title: Extended Selection Mechanisms in Genetic Algorithms Abstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993
[ 163, 793, 1016, 1153 ]
Train
1,071
5
Title: Machine Learning and Inference Abstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: onefor the best representation space, and twofor the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.
[ 1292, 1329, 1498 ]
Test
1,072
2
Title: The Nonlinear PCA Learning Rule and Signal Separation Mathematical Analysis Abstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: onefor the best representation space, and twofor the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.
[ 354, 839, 1520 ]
Validation
1,073
5
Title: An adaptation of Relief for attribute estimation in regression Abstract: Heuristic measures for estimating the quality of attributes mostly assume the independence of attributes so in domains with strong dependencies between attributes their performance is poor. Relief and its extension ReliefF are capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view. We present the analysis of Reli-efF which lead us to its adaptation to regression (continuous class) problems. The experiments on artificial and real-world data sets show that Re-gressional ReliefF correctly estimates the quality of attributes in various conditions, and can be used for non-myopic learning of the regression trees. Regressional ReliefF and ReliefF provide a unified view on estimating the attribute quality in regression and classification.
[ 314, 1112, 1182, 1569, 1636 ]
Train
1,074
6
Title: Inductive Logic Programming Abstract: A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of the parent subjects of Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the increasing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generali-sation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR.
[ 109, 1174 ]
Train
1,075
2
Title: Bayesian Learning in Feed Forward Neural Networks Abstract: Bayesian methods are applicable to complex modeling tasks. In this review, the principles of Bayesian inference are presented and applied to neural network models. Several approximate implementations are discussed, and their advantages over conventional fre-quentist model training and selection are outlined. It is argued that Bayesian methods are preferable to traditional approaches, although empirical evidence for this is still sparse.
[ 157, 1340 ]
Validation
1,076
3
Title: Learning Belief Networks from Data: An Information Theory Based Approach Abstract: This paper presents an efficient algorithm for learning Bayesian belief networks from databases. The algorithm takes a database as input and constructs the belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set that is large enough, this algorithm can generate a belief network very close to the underlying model, and at the same time, enjoys the time complexity of O(N^4) on conditional independence (CI) tests. When the data set has a normal DAG-Faithful (see Section 3.2) probability distribution, the algorithm guarantees that the structure of a perfect map [Pearl, 1988] of the underlying dependency model is generated. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The results show that this algorithm is accurate and efficient. The proof of correctness and the analysis of complexity are also presented.
[ 1078, 1086, 2461 ]
Train
1,077
1
Title: A Search Space Analysis of the Job Shop Scheduling Problem Abstract: A computational study for the Job Shop Scheduling Problem is presented. Thereby emphasis is put on the structure of the solution space as it appears for adaptive search. A statistical analysis of the search spaces reveals the impacts of inherent properties of the problem on adaptive heuristics.
[ 1153 ]
Train
1,078
3
Title: An Algorithm for Bayesian Belief Network Construction from Data Abstract: This paper presents an efficient algorithm for constructing Bayesian belief networks from databases. The algorithm takes a database and an attributes ordering (i.e., the causal attributes of an attribute should appear earlier in the order) as input and constructs a belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set which is large enough and has a DAG-Isomorphic probability distribution, this algorithm guarantees that the perfect map [1] of the underlying dependency model is generated, and at the same time, enjoys the time complexity of O(N^2) on conditional independence (CI) tests. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The correctness proof and the analysis of computational complexity are also presented. We also discuss the features of our work and relate it to previous works.
[ 1076, 1086, 2461 ]
Train
1,079
2
Title: Nonlinear Prediction of Chaotic Time Series Using Support Vector Machines Abstract: A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM.
[ 611, 864, 975, 1009, 1103, 1315, 1389, 1718, 1724 ]
Train
1,080
2
Title: A Multi-Chip Module Implementation of a Neural Network Abstract: The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model which requires a dense interconnect technology for effective implementation; this requirement is fulfilled by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to apply to other neural network and connectionist models.
[ 809, 812, 814, 1129, 1321 ]
Train
1,081
5
Title: Specialization of Recursive Predicates Abstract: When specializing a recursive predicate in order to exclude a set of negative examples without excluding a set of positive examples, it may not be possible to specialize or remove any of the clauses in a refutation of a negative example without excluding any positive exam ples. A previously proposed solution to this problem is to apply program transformation in order to obtain non-recursive target predicates from recursive ones. However, the application of this method prevents recursive specializations from being found. In this work, we present the algorithm spectre ii which is not limited to specializing non-recursive predicates. The key idea upon which the algorithm is based is that it is not enough to specialize or remove clauses in refutations of negative examples in order to obtain correct specializations, but it is sometimes necessary to specialize clauses that appear only in refutations of positive examples. In contrast to its predecessor spectre, the new algorithm is not limited to specializing clauses defining one predicate only, but may specialize clauses defining multiple predicates. Furthermore, the positive and negative examples are no longer required to be instances of the same predicate. It is proven that the algorithm produces a correct specialization when all positive examples are logical consequences of the original program, there is a finite number of derivations of positive and negative examples and when no positive and negative examples have the same sequence of input clauses in their refutations.
[ 521, 1082, 1259 ]
Train
1,082
5
Title: Specialization of Logic Programs by Pruning SLD-Trees Abstract: program w.r.t. positive and negative examples can be viewed as the problem of pruning an SLD-tree such that all refutations of negative examples and no refutations of positive examples are excluded. It is shown that the actual pruning can be performed by applying unfolding and clause removal. The algorithm spectre is presented, which is based on this idea. The input to the algorithm is, besides a logic program and positive and negative examples, a computation rule, which determines the shape of the SLD-tree that is to be pruned. It is shown that the generality of the resulting specialization is dependent on the computation rule, and experimental results are presented from using three different computation rules. The experiments indicate that the computation rule should be formulated so that the number of applications of unfolding is kept as low as possible. The algorithm, which uses a divide-and-conquer method, is also compared with a covering algorithm. The experiments show that a higher predictive accuracy can be achieved if the focus is on discriminating positive from negative examples rather than on achieving a high coverage of positive examples only.
[ 521, 1081, 1259, 2312 ]
Train
1,083
3
Title: Inference in Cognitive Maps Abstract: Cognitive mapping is a qualitative decision modeling technique developed over twenty years ago by political scientists, which continues to see occasional use in social science and decision-aiding applications. In this paper, I show how cognitive maps can be viewed in the context of more recent formalisms for qualitative decision modeling, and how the latter provide a firm semantic foundation that can facilitate the development of more powerful inference procedures as well as extensions in expressiveness for models of this sort.
[ 1660 ]
Train
1,084
0
Title: Continuous Case-Based Reasoning Abstract: Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as online sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task. This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research.
[ 858, 991, 2035 ]
Validation
1,085
5
Title: Reports of the GMU Machine Learning and Inference HOW DID AQ FACE THE EAST-WEST CHALLENGE? Abstract: The East-West Challenge is the title of the second international competition of machine learning programs, organized in the Fall 1994 by Donald Michie, Stephen Muggleton, David Page and Ashwin Srinivasan from Oxford University. The goal of the competition was to solve the TRAINS problems, that is to discover the simplest classification rules for train-like structured objects. The rule complexity was judged by a Prolog program that counted the number of various components in the rule expressed in the from of Prolog Horn clauses. There were 65 entries from several countries submitted to the competition. The GMU teams entry was generated by three members of the AQ family of learning programs: AQ-DT, INDUCE and AQ17-HCI. The paper analyses the results obtained by these programs and compares them to those obtained by other learning programs. It also presents ideas for further research that were inspired by the competition. One of these ideas is a challenge to the machine learning community to develop a measure of knowledge complexity that would adequately capture the cognitive complexity of knowledge. A preliminary measure of such cognitive complexity, called Ccomplexity, different from the Prolog-complexity (P-complexity) used in the competition, is briefly discussed. The authors thank Professors Donald Michie, Steve Muggleton, David Page and Ashwin Srinivasan for organizing the West-East Challenge competition of machine learning programs, which provided us with a stimulating challenge for our learning programs and inspired new ideas for improving them. The authors also thank Nabil Allkharouf and Ali Hadjarian for their help and suggestions in the efforts to solve problems posed by the competition. This research was conducted in the Center for Machine Learning and Inference at George Mason University. 
The Center's research is supported in part by the Advanced Research Projects Agency under Grant No. N00014-91-J-1854, administered by the Office of Naval Research, and Grant No. F49620-92-J-0549, administered by the Air Force Office of Scientific Research, in part by the Office of Naval Research under Grant No. N00014-91-J-1351, and in part by the National Science Foundation under Grants No. IRI-9020266, CDA-9309725 and DMI-9496192.
[ 1292 ]
Test
1,086
3
Title: An Algorithm for the Construction of Bayesian Network Structures from Data Abstract: Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non CI based method. Results of preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algo rithm performance issues and open problems.
[ 1076, 1078, 1240, 1527, 1545, 1582, 1641 ]
Train
1,087
3
Title: The covariance inflation criterion for adaptive model selection Abstract: We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the dataset. This criterion can be applied to general prediction problems (for example regression or classification), and to general prediction rules (for example stepwise regression, tree-based models and neural nets). As a byproduct we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection.
[ 1512 ]
Train
1,088
2
Title: SUPERVISED COMPETITIVE LEARNING FOR FINDING POSITIONS OF RADIAL BASIS FUNCTIONS Abstract: This paper introduces the magnetic neural gas (MNG) algorithm, which extends unsupervised competitive learning with class information to improve the positioning of radial basis functions. The basic idea of MNG is to discover heterogeneous clusters (i.e., clusters with data from different classes) and to migrate additional neurons towards them. The discovery is effected by a heterogeneity coefficient associated with each neuron and the migration is guided by introducing a kind of magnetic effect. The performance of MNG is tested on a number of data sets, including the thyroid data set. Results demonstrate promise.
[ 1565 ]
Validation
1,089
0
Title: Modeling Analogical Problem Solving in a Production System Architecture Abstract: This research is supported by a National Science Foundation Fellowship awarded to Dario Salvucci and Office of Naval Research grant N00014-96-1-0491 awarded to John Anderson. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Office of Naval Research, or the United States government.
[ 1354 ]
Validation
1,090
2
Title: Inference in Dynamic Error-in-Variable-Measurement Problems Abstract: Efficient algorithms have been developed for estimating model parameters from measured data, even in the presence of gross errors. In addition to point estimates of parameters, however, assessments of uncertainty are needed. Linear approximations provide standard errors, but these can be misleading when applied to models that are substantially nonlinear. To overcome this difficulty, "profiling" methods have been developed for the case in which the regressor variables are error free. In this paper we extend profiling methods to Error-in-Variable-Measurement (EVM) models. We use Laplace's method to integrate out the incidental parameters associated with the measurement errors, and then apply profiling methods to obtain approximate confidence contours for the parameters. This approach is computationally efficient, requiring few function evaluations, and can be applied to large scale problems. It is useful when the certain measurement errors (e.g., input variables) are relatively small, but not so small that they can be ignored.
[ 1023, 1029 ]
Test
1,091
2
Title: Implicit learning in 3D object recognition: The importance of temporal context Abstract: A novel architecture and set of learning rules for cortical self-organization is proposed. The model is based on the idea that multiple information channels can modulate one another's plasticity. Features learned from bottom-up information sources can thus be influenced by those learned from contextual pathways, and vice versa. A maximum likelihood cost function allows this scheme to be implemented in a biologically feasible, hierarchical neural circuit. In simulations of the model, we first demonstrate the utility of temporal context in modulating plasticity. The model learns a representation that categorizes people's faces according to identity, independent of viewpoint, by taking advantage of the temporal continuity in image sequences. In a second set of simulations, we add plasticity to the contextual stream and explore variations in the architecture. In this case, the model learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features. This model provides a tenable account of how people may perform 3D object recognition in a hierarchical, bottom-up fashion.
[ 476, 676, 1014, 1288 ]
Train
1,092
6
Title: Pruning Adaptive Boosting ICML-97 Final Draft Abstract: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the "weak" algorithm to be "boosted." Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost.
[ 569, 1484 ]
Validation
1,093
2
Title: The role of afferent excitatory and lateral inhibitory synaptic plasticity in visual cortical ocular dominance Abstract: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the "weak" algorithm to be "boosted." Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost.
[ 122, 355, 1659, 2085, 2228 ]
Train