Abstract: We first pursue the study of how hierarchy provides a well-adapted tool for the analysis of change. Then, using a time sequence-constrained hierarchical clustering, we develop the practical aspects of a new approach to wavelet regression. This provides a new way to link hierarchical relationships in a multivariate time series data set with external signals. Violence data from the Colombian conflict in the years 1990 to 2004 is used throughout. We conclude with some proposals for further study on the relationship between social violence and market forces, viz. between the Colombian conflict and the US narcotics market.
Title: The Semantics of Kalah Game
Abstract: The present work consisted in developing a board game. There are the traditional ones (Monopoly, Cluedo, etc.), but those which interest us leave less room to chance (luck) than to strategy, as in the game of chess. Kalah is an old African game; its rules are simple, but the strategies to be used are very complex to implement. Of course, they rest on a strongly mathematical basis, as in the film "Rain Man", where one can see that gambling can be beaten with strategies based on mathematical theories. Artificial Intelligence gives a machine the possibility "of thinking" and therefore allows it to make decisions. In our work, we use it to give the computer the means to choose its best move.
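The abstract names no search method, but the classic way to let a machine choose its best move in a two-player, zero-sum game such as Kalah is minimax search over the game tree with a heuristic evaluation at a depth cutoff. The Python sketch below is a minimal illustration under that assumption; legal_moves, apply_move and evaluate are hypothetical hooks that a concrete Kalah implementation would supply (Kalah's extra-turn rule would then be folded into apply_move and the turn bookkeeping).

    # Minimal minimax sketch for a two-player, zero-sum board game.
    # `legal_moves`, `apply_move`, and `evaluate` are hypothetical hooks
    # that a concrete Kalah implementation would provide.
    def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state), None
        best_move = None
        if maximizing:
            best_value = float("-inf")
            for move in moves:
                value, _ = minimax(apply_move(state, move), depth - 1, False,
                                   legal_moves, apply_move, evaluate)
                if value > best_value:
                    best_value, best_move = value, move
        else:
            best_value = float("inf")
            for move in moves:
                value, _ = minimax(apply_move(state, move), depth - 1, True,
                                   legal_moves, apply_move, evaluate)
                if value < best_value:
                    best_value, best_move = value, move
        return best_value, best_move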
Title: Model selection for weakly dependent time series forecasting
Abstract: Observing a stationary time series, we propose a two-step procedure for the prediction of its next value. The first step follows the machine learning paradigm and consists in determining a set of possible predictors as randomized estimators in (possibly numerous) different predictive models. The second step follows the model selection paradigm and consists in choosing one predictor with good properties among all the predictors of the first step. We study our procedure for two different types of observations: causal Bernoulli shifts and bounded weakly dependent processes. In both cases, we give oracle inequalities: the risk of the chosen predictor is close to the best prediction risk over all predictive models that we consider. We apply our procedure to predictive models such as linear predictors, neural network predictors and non-parametric autoregressive predictors.
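As a concrete illustration of the second (model-selection) step only: the Python sketch below picks, among already fitted candidate predictors, the one with the smallest empirical one-step-ahead risk on a recent window of the series. It is a simplified stand-in for the paper's randomized estimators and oracle analysis, not the authors' exact procedure.

    import numpy as np

    def select_predictor(predictors, series, window):
        """Pick the predictor minimizing the empirical one-step-ahead
        squared risk over the last `window` points of the series."""
        risks = []
        for predict in predictors:  # each maps a history array to a forecast
            errors = [(series[t] - predict(series[:t])) ** 2
                      for t in range(len(series) - window, len(series))]
            risks.append(np.mean(errors))
        return predictors[int(np.argmin(risks))]

    # Toy candidates: a last-value predictor and a linear extrapolation.
    series = np.sin(0.3 * np.arange(200)) + 0.1 * np.random.randn(200)
    candidates = [lambda h: h[-1], lambda h: 2 * h[-1] - h[-2]]
    best = select_predictor(candidates, series, window=50)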
Title: Ptarithmetic
Abstract: The present article introduces ptarithmetic (short for "polynomial time arithmetic") -- a formal number theory similar to the well known Peano arithmetic, but based on the recently born computability logic (see http://www.cis.upenn.edu/~giorgi/cl.html) instead of classical logic. The formulas of ptarithmetic represent interactive computational problems rather than just true/false statements, and their "truth" is understood as existence of a polynomial time solution. The system of ptarithmetic elaborated in this article is shown to be sound and complete. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a polynomial time solution and, furthermore, such a solution can be effectively extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a polynomial time solution is represented by some theorem T of the system. The paper is self-contained, and can be read without any previous familiarity with computability logic.
Title: Writing Positive/Negative-Conditional Equations Conveniently
Abstract: We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
Title: Bayesian inference of a negative quantity from positive measurement results
Abstract: In this paper Bayesian analysis is applied to assign a probability density to the value of a quantity having a definite sign. This analysis is logically consistent with the results, positive or negative, of repeated measurements. The results are used to estimate the atom density shift in a caesium fountain clock. A comparison with the classical statistical analysis is also reported, and the advantages of the Bayesian approach for the realization of the time unit are discussed.
Title: ASF+ --- An ASF-like Specification Language
Abstract: Maintaining the main aspects of the algebraic specification language ASF as presented in [Bergstra&al.89], we have extended ASF with the following concepts: While names once exported in ASF must stay visible up to the top of the module hierarchy, ASF+ permits a more sophisticated hiding of signature names. The erroneous merging of distinct structures that occurs when importing different actualizations of the same parameterized module in ASF is avoided in ASF+ by a more adequate form of parameter binding. The new "Namensraum" (namespace) concept of ASF+ permits the specifier, on the one hand, to directly identify the origin of hidden names and, on the other, to decide whether an imported module is only to be accessed or whether an important property of it is to be modified. In the first case he can access one single globally provided version; in the second he has to import a copy of the module. Finally, ASF+ permits semantic conditions on parameters and the specification of tasks for a theorem prover.
Title: Likelihood-based inference for max-stable processes
Abstract: The last decade has seen max-stable processes emerge as a common tool for the statistical modeling of spatial extremes. However, their application is complicated due to the unavailability of the multivariate density function, and so likelihood-based methods remain far from providing a complete and flexible framework for inference. In this article we develop inferentially practical, likelihood-based methods for fitting max-stable processes derived from a composite-likelihood approach. The procedure is sufficiently reliable and versatile to permit the simultaneous modeling of marginal and dependence parameters in the spatial context at a moderate computational cost. The utility of this methodology is examined via simulation, and illustrated by the analysis of U.S. precipitation extremes.
Title: Syntactic variation of support verb constructions
Abstract: We report experiments about the syntactic variations of support verb constructions, a special type of multiword expressions (MWEs) containing predicative nouns. In these expressions, the noun can occur with or without the verb, with no clear-cut semantic difference. We extracted from a large French corpus a set of examples of the two situations and derived statistical results from these data. The extraction involved large-coverage language resources and finite-state techniques. The results show that, most frequently, predicative nouns occur without a support verb. This fact has consequences on methods of extracting or recognising MWEs.
Title: Risk Bounds for CART Classifiers under a Margin Condition
Abstract: Risk bounds for Classification and Regression Tree (CART, Breiman et al., 1984) classifiers are obtained under a margin condition in the binary supervised classification framework. These risk bounds hold conditionally on the construction of the maximally deep binary tree and make it possible to prove that the linear penalty used in the CART pruning algorithm is valid under a margin condition. It is also shown that, conditionally on the construction of the maximal tree, the final selection by test sample does not dramatically alter the estimation accuracy of the Bayes classifier. In the two-class classification framework, the risk bounds proved here, obtained by using penalized model selection, validate the CART algorithm, which is used in many data mining applications such as biology, medicine or image coding.
Title: Error-Correcting Tournaments
Abstract: We present a family of pairwise tournaments reducing $k$-class classification to binary classification. These reductions are provably robust against a constant fraction of binary errors. The results improve on the PECOC construction with an exponential improvement in computation, from $O(k)$ to $O(\log_2 k)$, and the removal of a square root in the regret dependence, matching the best possible computation and regret up to a constant.
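The paper's construction adds error-correcting redundancy, but the core reduction can be illustrated with a plain single-elimination tournament: deciding one of $k$ classes takes $O(\log_2 k)$ sequential rounds of pairwise binary decisions. In this hypothetical sketch, duel(a, b, x) stands for a trained binary classifier returning the winner of the pair (a, b) on input x.

    def tournament_predict(classes, duel, x):
        """Single-elimination tournament over the class labels; `duel` is a
        pairwise binary classifier. Uses O(log2 k) sequential rounds."""
        round_ = list(classes)
        while len(round_) > 1:
            next_round = [duel(round_[i], round_[i + 1], x)
                          for i in range(0, len(round_) - 1, 2)]
            if len(round_) % 2 == 1:      # odd label out gets a bye
                next_round.append(round_[-1])
            round_ = next_round
        return round_[0]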
Title: Symbolic Computing with Incremental Mindmaps to Manage and Mine Data Streams - Some Applications
Abstract: In our understanding, a mind-map is an adaptive engine that basically works incrementally on the foundation of existing transactional streams. Generally, mind-maps consist of symbolic cells that are connected with each other and that become either stronger or weaker depending on the transactional stream. Following the underlying biological principle, these symbolic cells and their connections may adaptively survive or die, forming different cell agglomerates of arbitrary size. In this work, we aim to demonstrate the suitability of mind-maps in diverse application scenarios, for example serving as an underlying management system that represents normal and abnormal traffic behaviour in computer networks, supporting the detection of user behaviour within search engines, or acting as a hidden communication layer for natural language interaction.
Title: An Exact Algorithm for the Stratification Problem with Proportional Allocation
Abstract: We report a new optimal solution to the statistical stratification problem under proportional sampling allocation among strata. Consider a finite population of N units, a random sample of n units selected from this population, and a number L of strata. We then have to define which units belong to each stratum so as to minimize the variance of a total estimator for one desired variable of interest in each stratum, and consequently to reduce the overall variance for that quantity. To solve this problem, an exact algorithm based on the concept of a minimal path in a graph is proposed and assessed. Computational results using real data from IBGE (the Brazilian Central Statistical Office) are provided.
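To make the minimal-path idea concrete: on the sorted values of the stratification variable, each candidate stratum corresponds to an edge whose weight is that stratum's variance contribution, and a cheapest path with exactly L edges defines the stratification. The dynamic-programming sketch below uses stratum size times within-stratum variance as the edge weight, a common simplification under proportional allocation and not necessarily the paper's exact objective.

    import numpy as np

    def stratify(values, L):
        """Split sorted `values` into L contiguous strata minimizing the sum
        of N_h * S2_h, via a cheapest path with exactly L edges over
        boundary positions (dynamic programming)."""
        x = np.sort(np.asarray(values, dtype=float))
        n = len(x)
        cost = lambda i, j: (j - i) * x[i:j].var()  # edge weight of stratum x[i:j]
        INF = float("inf")
        best = [[INF] * (n + 1) for _ in range(L + 1)]
        prev = [[None] * (n + 1) for _ in range(L + 1)]
        best[0][0] = 0.0
        for h in range(1, L + 1):                   # layer h = h-th stratum
            for j in range(h, n + 1):
                for i in range(h - 1, j):
                    c = best[h - 1][i] + cost(i, j)
                    if c < best[h][j]:
                        best[h][j], prev[h][j] = c, i
        bounds, j = [], n                           # walk predecessors back
        for h in range(L, 0, -1):
            i = prev[h][j]
            bounds.append((i, j))
            j = i
        return bounds[::-1], best[L][n]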
Title: Progress in Computer-Assisted Inductive Theorem Proving by Human-Orientedness and Descente Infinie?
Abstract: In this short position paper we briefly review the development history of automated inductive theorem proving and computer-assisted mathematical induction. We think that the current low expectations of progress in this field result from a faulty, narrow-scope historical projection. Our main motivation is to explain--on an abstract but hopefully sufficiently descriptive level--why we believe that future progress in the field will result from human-orientedness and descente infinie.
Title: Weighted least squares methods for prediction in the functional data linear model
Abstract: The problem of prediction in functional linear regression is conventionally addressed by reducing dimension via the standard principal component basis. In this paper we show that an alternative basis chosen through weighted least-squares, or weighted least-squares itself, can be more effective when the experimental errors are heteroscedastic. We give a concise theoretical result which demonstrates the effectiveness of this approach, even when the model for the variance is inaccurate, and we explore the numerical properties of the method. We show too that the advantages of the suggested adaptive techniques are not found only in low-dimensional aspects of the problem; rather, they accrue almost equally among all dimensions.
Title: On calibration of design weights
Abstract: In the present investigation, we build a bridge between the generalized regression (GREG) estimator due to Deville and Sarndal (1992) and the linear regression estimator due to Hansen, Hurwitz and Madow (1953) in the presence of a single auxiliary variable. The bridge confirms that the sum of the calibrated weights should equal the sum of the design weights, as pointed out by Singh (2003, 2004, 2006) and Stearns and Singh (2008). An important modification to statistical packages such as GES and SUDAAN is suggested.
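For readers unfamiliar with calibration, the chi-square-distance case of Deville and Sarndal (1992) makes the bridge concrete: with design weights $d_i$, auxiliary values $x_i$, known total $X$ and tuning constants $q_i$, minimizing $\sum_i (w_i - d_i)^2/(d_i q_i)$ subject to $\sum_i w_i x_i = X$ yields

    $$ w_i = d_i\left(1 + \frac{q_i x_i \big(X - \sum_j d_j x_j\big)}{\sum_j d_j q_j x_j^{2}}\right). $$

These $w_i$ do not automatically sum to $\sum_i d_i$; enforcing the property stressed in the abstract amounts to adding $\sum_i w_i = \sum_i d_i$ as a further calibration equation (equivalently, including a constant auxiliary variable).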
Title: Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression
Abstract: The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
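To illustrate the kind of Lanczos shortcut the abstract refers to: a few Lanczos steps produce a small tridiagonal matrix whose eigenvalues give a Gauss-quadrature estimate of the trace of powers of the kernel matrix in quadratic time. The sketch below is generic stochastic Lanczos quadrature with a single probe vector, shown for intuition only; it is not the authors' exact KPLS algorithm.

    import numpy as np

    def lanczos(K, v0, m):
        """m Lanczos steps on symmetric K; returns the tridiagonal
        coefficients (diagonal alphas, off-diagonal betas)."""
        v = v0 / np.linalg.norm(v0)
        v_prev = np.zeros_like(v)
        alphas, betas, beta = [], [], 0.0
        for _ in range(m):
            w = K @ v - beta * v_prev
            alpha = v @ w
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:
                break
            v_prev, v = v, w / beta
        return np.array(alphas), np.array(betas[:-1])

    def trace_power_estimate(K, p, m=20, seed=0):
        """Single-probe stochastic Lanczos quadrature estimate of tr(K^p):
        n times the Ritz values to the p-th power, weighted by the squared
        first components of the tridiagonal eigenvectors."""
        rng = np.random.default_rng(seed)
        n = K.shape[0]
        a, b = lanczos(K, rng.standard_normal(n), m)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        theta, U = np.linalg.eigh(T)
        return n * np.sum((U[0, :] ** 2) * theta ** p)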
Title: Learning rules from multisource data for cardiac monitoring
Abstract: This paper formalises the concept of learning symbolic rules from multisource data in a cardiac monitoring context. Our sources, electrocardiograms and arterial blood pressure measures, describe cardiac behaviours from different viewpoints. To learn interpretable rules, we use an Inductive Logic Programming (ILP) method. We develop an original strategy to cope with the dimensionality issues caused by using this ILP technique on a rich multisource language. The results show that our method greatly improves the feasibility and the efficiency of the process while staying accurate. They also confirm the benefits of using multiple sources to improve the diagnosis of cardiac arrhythmias.
Title: Domain Adaptation: Learning Bounds and Algorithms
Abstract: This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.
Title: Escaping the curse of dimensionality with a tree-based regressor
Abstract: We present the first tree-based regressor whose convergence rate depends only on the intrinsic dimension of the data, namely its Assouad dimension. The regressor uses the RPtree partitioning procedure, a simple randomized variant of k-d trees.
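For intuition, an RP-tree split projects the data onto a random direction and splits near the median of the projections; a query descends to a leaf and is answered by the leaf's mean response. The sketch below implements only this projection-median split (the actual RPtree rule also uses a second, distance-to-mean split type), so it conveys the flavor rather than the paper's exact regressor.

    import numpy as np

    def rp_tree_regress(X, y, x_query, min_leaf=10, rng=None):
        """Answer a regression query by recursively splitting at a perturbed
        median of projections onto a random direction; return the mean
        response of the query's leaf."""
        rng = rng if rng is not None else np.random.default_rng(0)
        if len(y) <= min_leaf:
            return float(np.mean(y))
        u = rng.standard_normal(X.shape[1])
        u /= np.linalg.norm(u)
        proj = X @ u
        split = np.median(proj) + 0.1 * np.std(proj) * rng.uniform(-1, 1)
        left = proj <= split
        side = left if float(x_query @ u) <= split else ~left
        if side.sum() in (0, len(y)):        # degenerate split: stop here
            return float(np.mean(y))
        return rp_tree_regress(X[side], y[side], x_query, min_leaf, rng)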
Title: A Systematic Approach to Artificial Agents
Abstract: Agents and agent systems are becoming more and more important in the development of a variety of fields such as ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems and intelligent robotics. Improving our basic knowledge of agents is therefore essential. We take a systematic approach and present an extended classification of artificial agents which can be useful for understanding what artificial agents are and what they can become in the future. The aim of this classification is to give insights into what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution.
Title: Online Multi-task Learning with Hard Constraints
Abstract: We discuss multi-task online learning when a decision maker has to deal simultaneously with M tasks. The tasks are related, which is modeled by imposing that the M-tuple of actions taken by the decision maker needs to satisfy certain constraints. We give natural examples of such restrictions and then discuss a general class of tractable constraints, for which we introduce computationally efficient ways of selecting actions, essentially by reducing to an on-line shortest path problem. We briefly discuss "tracking" and "bandit" versions of the problem and extend the model in various ways, including non-additive global losses and uncountably infinite sets of tasks.
Title: Syntactic Confluence Criteria for Positive/Negative-Conditional Term Rewriting Systems
Abstract: We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactical separation into constructor and non-constructor symbols; Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems; the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems; the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs; and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied to the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactical separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper may also serve as a practically useful overview of the syntactical means for showing confluence of conditional term rewriting systems.
Title: Context tree selection and linguistic rhythm retrieval from written texts
Abstract: The starting point of this article is the question "How to retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese. These texts are first encoded according to some basic rhythmic features of the sentences which can be automatically retrieved. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of optimally choosing the penalty constant when using the BIC to select a variable length Markov chain. Besides proving the consistency of the smallest maximizer criterion as the sample size diverges, we also present a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
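As a severely simplified illustration of penalized likelihood selection in this setting, the sketch below scores a fixed-order Markov chain (a special case of a context tree) by log-likelihood minus a BIC-type penalty with constant c. The paper's smallest maximizer criterion instead works over genuine variable-length context trees and scans the constant c rather than fixing it in advance.

    import numpy as np
    from collections import Counter

    def bic_markov(seq, order, alphabet_size, c=1.0):
        """Penalized log-likelihood of a fixed-order Markov chain:
        loglik - c * (#free parameters) * log(n) / 2."""
        n = len(seq)
        contexts = [tuple(seq[i:i + order]) for i in range(n - order)]
        trans = Counter(zip(contexts, seq[order:]))  # (context, symbol) counts
        ctx_totals = Counter()
        for (ctx, _), cnt in trans.items():
            ctx_totals[ctx] += cnt
        loglik = sum(cnt * np.log(cnt / ctx_totals[ctx])
                     for (ctx, _), cnt in trans.items())
        n_params = len(ctx_totals) * (alphabet_size - 1)
        return loglik - c * n_params * np.log(n) / 2

    # Example: compare orders 1..3 on a toy encoded rhythmic sequence.
    seq = "ababbababbabbaabab" * 20
    scores = {k: bic_markov(seq, k, alphabet_size=2) for k in (1, 2, 3)}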
Title: A Self-Contained and Easily Accessible Discussion of the Method of Descente Infinie and Fermat's Only Explicitly Known Proof by Descente Infinie
Abstract: We present the only proof of Pierre Fermat by descente infinie that is known to exist today. As the text of its Latin original requires active mathematical interpretation, it is more a proof sketch than a proper mathematical proof. We discuss descente infinie from the mathematical, logical, historical, linguistic, and refined logic-historical points of view. We provide the required preliminaries from number theory and develop a self-contained proof in a modern form, which nevertheless is intended to follow Fermat's ideas closely. We then annotate an English translation of Fermat's original proof with terms from the modern proof. Including all important facts, we present a concise and self-contained discussion of Fermat's proof sketch, which is easily accessible to laymen in number theory as well as to laymen in the history of mathematics, and which provides new clarification of the Method of Descente Infinie to the experts in these fields. Last but not least, this paper fills a gap regarding the easy accessibility of the subject.
Title: lim+, delta+, and Non-Permutability of beta-Steps
Abstract: Using a human-oriented formal example proof of the (lim+) theorem, i.e. that the sum of limits is the limit of the sum, which is of value for reference on its own, we exhibit a non-permutability of beta-steps and delta+-steps (according to Smullyan's classification), which is not visible with non-liberalized delta-rules and not serious with further liberalized delta-rules, such as the delta++-rule. Besides a careful presentation of the search for a proof of (lim+) with several pedagogical intentions, the main subject is to explain why the order of beta-steps plays such a practically important role in some calculi.
Title: An Algebraic Dexter-Based Hypertext Reference Model
Abstract: We present the first formal algebraic specification of a hypertext reference model. It is based on the well-known Dexter Hypertext Reference Model and includes modifications reflecting the development of hypertext since the advent of the WWW. Our hypertext model was developed as a product model with the aim of automatically supporting the design process, and it is extended to a model of hypertext systems in order to be able to describe the state transitions in this process. While the specification should be easy to read for non-experts in algebraic specification, it guarantees a unique understanding and enables a close connection to logic-based development and verification.
Title: Space-time covariance functions with compact support
Abstract: We characterize completely the Gneiting class of space-time covariance functions and give more relaxed conditions on the involved functions. We then show necessary conditions for the construction of compactly supported functions of the Gneiting type. These conditions are very general since they do not depend on the Euclidean norm. Finally, we discuss a general class of positive definite functions, used for multivariate Gaussian random fields. For this class, we show necessary criteria for its generator to be compactly supported.
Title: Target Detection via Network Filtering
Abstract: A method of "network filtering" has been proposed recently to detect the effects of certain external perturbations on the interacting members in a network. However, with large networks, the goal of detection seems a priori difficult to achieve, especially since the number of observations available often is much smaller than the number of variables describing the effects of the underlying network. Under the assumption that the network possesses a certain sparsity property, we provide a formal characterization of the accuracy with which the external effects can be detected, using a network filtering system that combines Lasso regression in a sparse simultaneous equation model with simple residual analysis. We explore the implications of the technical conditions underlying our characterization in the context of various network topologies, and we illustrate our method using simulated data.
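One plausible reading of this pipeline, for illustration only and not the authors' exact estimator: fit a sparse simultaneous-equation model by regressing each node on all the others with the Lasso, then flag nodes whose residuals are systematically shifted as candidate targets of the external perturbation.

    import numpy as np
    from sklearn.linear_model import Lasso

    def network_filter(X, alpha=0.05):
        """Lasso-regress each column of X (n samples x p nodes) on the
        remaining columns, then return a standardized mean residual per
        node; large |z| hints at an external effect on that node."""
        n, p = X.shape
        residuals = np.empty((n, p))
        for j in range(p):
            others = np.delete(X, j, axis=1)
            model = Lasso(alpha=alpha).fit(others, X[:, j])
            residuals[:, j] = X[:, j] - model.predict(others)
        return residuals.mean(axis=0) / (residuals.std(axis=0) / np.sqrt(n))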
Title: Statistical Inference of Functional Connectivity in Neuronal Networks using Frequent Episodes
Abstract: Identifying the spatio-temporal network structure of brain activity from multi-neuronal data streams is one of the biggest challenges in neuroscience. Repeating patterns of precisely timed activity across a group of neurons is potentially indicative of a microcircuit in the underlying neural tissue. Frequent episode discovery, a temporal data mining framework, has recently been shown to be a computationally efficient method of counting the occurrences of such patterns. In this paper, we propose a framework to determine when the counts are statistically significant by modeling the counting process. Our model allows direct estimation of the strengths of functional connections between neurons with improved resolution over previously published methods. It can also be used to rank the patterns discovered in a network of neurons according to their strengths and begin to reconstruct the graph structure of the network that produced the spike data. We validate our methods on simulated data and present analysis of patterns discovered in data from cultures of cortical neurons.
Title: Full First-Order Sequent and Tableau Calculi With Preservation of Solutions and the Liberalized delta-Rule but Without Skolemization
Abstract: We present a combination of raising, explicit variable dependency representation, the liberalized delta-rule, and preservation of solutions for first-order deductive theorem proving. Our main motivation is to provide the foundation for our work on inductive theorem proving, where the preservation of solutions is indispensable.
Title: Hilbert's epsilon as an Operator of Indefinite Committed Choice
Abstract: Paul Bernays and David Hilbert carefully avoided overspecification of Hilbert's epsilon-operator and axiomatized only what was relevant for their proof-theoretic investigations. Semantically, this left the epsilon-operator underspecified. In the meantime, there have been several suggestions for semantics of the epsilon as a choice operator. After reviewing the literature on semantics of Hilbert's epsilon operator, we propose a new semantics with the following features: We avoid overspecification (such as right-uniqueness), but admit indefinite choice, committed choice, and classical logics. Moreover, our semantics for the epsilon supports proof search optimally and is natural in the sense that it not only mirrors some cases of referential interpretation of indefinite articles in natural language, but may also contribute to the philosophy of language. Finally, we ask whether our epsilon within our free-variable framework can serve as a paradigm useful in the specification and computation of semantics of discourses in natural language.
Title: Innovated higher criticism for detecting sparse signals in correlated noise
Abstract: Higher criticism is a method for detecting signals that are both sparse and weak. Although first proposed in cases where the noise variables are independent, higher criticism also has reasonable performance in settings where those variables are correlated. In this paper we show that performance can be improved by using a modified approach that exploits the potential advantages correlation has to offer. Indeed, it turns out that the case of independent noise is the most difficult of all from a statistical viewpoint, and that more accurate signal detection (for a given level of signal sparsity and strength) can be obtained when correlation is present. We characterize the advantages of correlation by showing how to incorporate them into the definition of an optimal detection boundary. The boundary has particularly attractive properties when correlation decays at a polynomial rate or the correlation matrix is Toeplitz.
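For reference, the classical higher-criticism statistic of Donoho and Jin, computed from p-values, is sketched below; the innovated version of this paper additionally transforms the observations with (a banded version of) the inverse covariance matrix before applying such a statistic.

    import numpy as np

    def higher_criticism(pvalues):
        """Classical HC statistic: the maximum standardized excess of small
        p-values over their uniform expectation."""
        p = np.clip(np.sort(np.asarray(pvalues)), 1e-12, 1 - 1e-12)
        n = len(p)
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
        return float(np.max(hc[: max(1, n // 2)]))  # maximize over first half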
Title: Uniqueness of Low-Rank Matrix Completion by Rigidity Theory
Abstract: The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model with applications in collaborative filtering, computer vision and control. Most recent work had been focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries and proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem from both the mathematical and algorithmic point of view is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to an efficient randomized algorithm for testing both local and global unique completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix.