Title: Using survival curves for comparison of ordinal qualitative data in clinical studies
|
Abstract: Background and Objective: The survival-agreement plot was proposed and improved to assess the reliability of a quantitative measure. We propose the use of survival analysis as an alternative non-parametric approach for the comparison of ordinal qualitative data. Study Design and Setting: Two case studies are presented. The first is a randomized, double-blind, placebo-controlled clinical trial investigating the safety and efficacy of silymarin/methionine for chronic hepatitis C. The second is a prospective study to identify gustatory alterations due to chorda tympani nerve involvement in patients with chronic otitis media without prior surgery. Results: No significant difference was detected between the two treatments for chronic hepatitis C (p > 0.5). On the other hand, a significant association with gustatory alterations was observed between the healthy side and the affected side of the face in patients with chronic otitis media (p < 0.05). Conclusion: The proposed method can serve as an alternative to standard statistical tests for comparing samples of ordinal qualitative variables. This approach has the advantage of being more familiar to clinical researchers.
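A minimal sketch of the idea above: ordinal categories are treated as event "times", so two samples can be compared through their empirical survival curves. All data and level labels below are hypothetical, not taken from the studies.

```python
# Sketch: treat ordinal scores as "survival times" so two samples can be
# compared via their empirical survival curves. Data are hypothetical.

def survival_curve(scores, levels):
    """S(k) = proportion of subjects whose ordinal score exceeds level k."""
    n = len(scores)
    return {k: sum(1 for s in scores if s > k) / n for k in levels}

levels = [0, 1, 2, 3]           # e.g. symptom severity grades
group_a = [0, 1, 1, 2, 3, 3]    # hypothetical treatment arm
group_b = [0, 0, 1, 1, 2, 2]    # hypothetical placebo arm

curve_a = survival_curve(group_a, levels)
curve_b = survival_curve(group_b, levels)

for k in levels:
    print(k, round(curve_a[k], 3), round(curve_b[k], 3))
```

A formal comparison of the two step functions (e.g. a log-rank test) would then proceed exactly as in a standard survival analysis.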
|
Title: Better Global Polynomial Approximation for Image Rectification
|
Abstract: When using images to locate objects, there is the problem of correcting for distortion and misalignment in the images. An elegant way of solving this problem is to generate an error correcting function that maps points in an image to their corrected locations. We generate such a function by fitting a polynomial to a set of sample points. The objective is to identify a polynomial that passes "sufficiently close" to these points with "good" approximation of intermediate points. In the past, it has been difficult to achieve good global polynomial approximation using only sample points. We report on the development of a global polynomial approximation algorithm for solving this problem. Key Words: Polynomial approximation, interpolation, image rectification.
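A minimal sketch of the least-squares fitting step described above, assuming a one-dimensional quadratic for brevity (a real rectifier would fit a two-dimensional polynomial in both image coordinates; the sample points below are hypothetical).

```python
# Minimal sketch of fitting a correction polynomial to sample points by
# least squares via the normal equations, using only the standard library.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the normal equations."""
    rows = [[1.0, x, x * x] for x in xs]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve(AtA, Atb)

# Distorted sample positions and their known corrected positions (hypothetical).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.2, 4.1, 9.2, 16.1]   # roughly y = x^2 plus a small distortion
a, b, c = fit_quadratic(xs, ys)
print(round(a, 3), round(b, 3), round(c, 3))
```

The paper's concern is precisely the gap between this interpolation-style fit, which passes "sufficiently close" to the samples, and good global behavior between them.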
|
Title: Guarded resolution for answer set programming
|
Abstract: We describe a variant of the resolution rule of proof and show that it is complete for the stable semantics of logic programs. We show applications of this result.
|
Title: Influence diagnostics in Birnbaum-Saunders nonlinear regression models
|
Abstract: We consider the issue of assessing influence of observations in the class of Birnbaum-Saunders nonlinear regression models, which is useful in lifetime data analysis. Our results generalize those in Galea et al. [2004, Influence diagnostics in log-Birnbaum-Saunders regression models. Journal of Applied Statistics, 31, 1049-1064] which are confined to Birnbaum-Saunders linear regression models. Some influence methods, such as the local influence, total local influence of an individual and generalized leverage are discussed. Additionally, the normal curvatures of local influence are derived under various perturbation schemes.
|
Title: On the Peaking Phenomenon of the Lasso in Model Selection
|
Abstract: I briefly report on some unexpected results that I obtained when optimizing the model parameters of the Lasso. In simulations with a varying observations-to-variables ratio n/p, I typically observe a strong peak in the test error curve at the transition point n/p = 1. This peaking phenomenon is well documented in scenarios that involve the inversion of the sample covariance matrix, and as I illustrate in this note, it is also the source of the peak for the Lasso. The key problem is the parametrization of the Lasso penalty (as, e.g., in the current R package lars), and I present a solution in terms of a normalized Lasso parameter.
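The parametrization issue is easiest to see in the orthonormal-design case, where the Lasso solution is a soft-thresholding of the least-squares coefficients; an absolute penalty lambda and the normalized parameter s = ||beta(lambda)||_1 / ||beta_OLS||_1 then describe the same path. A hypothetical sketch (not the code behind the note's simulations):

```python
# Hedged sketch: soft-thresholding view of the Lasso under an orthonormal
# design, contrasting the absolute penalty lambda with a normalized
# parameter s = ||beta(lambda)||_1 / ||beta_OLS||_1 in [0, 1].

def soft_threshold(z, lam):
    """Lasso coefficient for an orthonormal design: shrink z toward 0 by lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_path_point(beta_ols, lam):
    beta = [soft_threshold(b, lam) for b in beta_ols]
    l1_ols = sum(abs(b) for b in beta_ols)
    s = sum(abs(b) for b in beta) / l1_ols      # normalized parameter
    return beta, s

beta_ols = [3.0, -1.5, 0.5]
beta, s = lasso_path_point(beta_ols, 1.0)
print(beta, round(s, 3))   # the smallest coefficient is zeroed first
```

Because s is scale-free, sweeping it gives comparable penalty grids across different n/p, which is the kind of normalization the note advocates.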
|
Title: Limits of Learning about a Categorical Latent Variable under Prior Near-Ignorance
|
Abstract: In this paper, we consider Walley's coherent theory of (epistemic) uncertainty, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, satisfying some principles that can arguably be regarded as necessary in a state of ignorance, while still allowing learning to take place. This paper provides new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting in which the variable of interest is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems. We regard these results as a strong form of evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems.
|
Title: Adaptive Learning with Binary Neurons
|
Abstract: An efficient incremental learning algorithm for classification tasks, called NetLines, well adapted to both binary and real-valued input patterns, is presented. It generates small compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting, without improving the generalization performance.
|
Title: Temporal data mining for root-cause analysis of machine faults in automotive assembly lines
|
Abstract: Engine assembly is a complex and heavily automated distributed-control process, with large amounts of fault data logged every day. We describe an application of temporal data mining for analyzing fault logs in an engine assembly plant. The frequent episode discovery framework is a model-free method that can be used to deduce (temporal) correlations among events from the logs in an efficient manner. In addition to being theoretically elegant and computationally efficient, frequent episodes are also easy to interpret in the form of actionable recommendations. Incorporation of domain-specific information is critical to successful application of the method for analyzing fault logs in the manufacturing domain. We show how domain-specific knowledge can be incorporated using heuristic rules that act as pre-filters and post-filters to frequent episode discovery. The system described here is currently being used in one of the engine assembly plants of General Motors and is planned for adaptation in other plants. To the best of our knowledge, this paper presents the first real, large-scale application of temporal data mining in the manufacturing domain. We believe that the ideas presented in this paper can help practitioners engineer tools for analysis in other similar or related application domains as well.
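The episode-counting core of such a system can be sketched as follows. The matcher counts non-overlapped serial episodes within a time window and is only an illustration; event names, times and the window size are made up, not taken from the plant logs.

```python
# Illustrative sketch of windowed serial-episode counting, loosely in the
# spirit of frequent episode discovery. All data are hypothetical.

def count_serial_episode(log, episode, window):
    """Count non-overlapped occurrences of the event sequence `episode`
    in `log` (a time-sorted list of (time, event) pairs) such that all
    events of one occurrence fall within `window` time units."""
    count, pos, n = 0, 0, len(log)
    while pos < n:
        idx, start_t, end_j = 0, None, None
        j = pos
        while j < n and idx < len(episode):
            t, ev = log[j]
            if ev == episode[idx]:
                if idx == 0:
                    start_t = t
                if t - start_t <= window:
                    idx += 1
                    end_j = j
                else:
                    break              # occurrence would exceed the window
            j += 1
        if idx == len(episode):
            count += 1
            pos = end_j + 1            # non-overlapped counting
        else:
            pos += 1
    return count

# Hypothetical log: does "A then FAULT within 4 time units" recur?
log = [(0, "A"), (1, "B"), (2, "FAULT"), (5, "A"), (9, "FAULT"), (10, "B")]
print(count_serial_episode(log, ["A", "FAULT"], window=4))   # 2
```

The domain-specific pre-filters and post-filters described above would restrict which event types and episodes are fed to and reported from such a counter.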
|
Title: Quality Classifiers for Open Source Software Repositories
|
Abstract: Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The 'successfulness' of projects is defined in terms of the confidence with which the classifier predicts that they could be ported into popular OSS projects (such as FreeBSD, Gentoo Portage).
|
Title: Continuous Strategy Replicator Dynamics for Multi-Agent Learning
|
Abstract: The problem of multi-agent learning and adaptation has attracted a great deal of attention in recent years. It has been suggested that the dynamics of multi-agent learning can be studied using replicator equations from population biology. Most existing studies have been limited to discrete strategy spaces with a small number of available actions. In many cases, however, the choices available to agents are better characterized by continuous spectra. This paper suggests a generalization of the replicator framework that allows one to study the adaptive dynamics of Q-learning agents with continuous strategy spaces. Instead of probability vectors, agents' strategies are now characterized by probability measures over continuous variables. As a result, the ordinary differential equations of the discrete case are replaced by a system of coupled integro-differential replicator equations that describe the mutual evolution of individual agent strategies. We derive a set of functional equations describing the steady state of the replicator dynamics, examine their solutions for several two-player games, and confirm our analytical results using simulations.
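As background for the continuous-strategy generalization, the discrete-strategy replicator dynamics that the paper starts from can be sketched as a simple Euler iteration. The payoff matrix below is hypothetical.

```python
# Minimal discrete-time replicator update for a two-action symmetric game.
# Hawk-Dove style payoffs (hypothetical): the interior fixed point attracts.

def replicator_step(x, payoff, dt=0.1):
    """One Euler step of dx_i/dt = x_i * (f_i - f_bar) for mixed strategy x."""
    k = len(x)
    f = [sum(payoff[i][j] * x[j] for j in range(k)) for i in range(k)]
    f_bar = sum(x[i] * f[i] for i in range(k))
    return [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(k)]

payoff = [[0.0, 3.0],
          [1.0, 2.0]]
x = [0.9, 0.1]
for _ in range(500):
    x = replicator_step(x, payoff)
print([round(v, 3) for v in x])   # → [0.5, 0.5], the mixed equilibrium
```

In the continuous-strategy setting discussed above, the probability vector x becomes a probability measure and the sums become integrals, yielding the integro-differential system of the paper.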
|
Title: Characterizations of Stable Model Semantics for Logic Programs with Arbitrary Constraint Atoms
|
Abstract: This paper studies the stable model semantics of logic programs with (abstract) constraint atoms and their properties. We introduce a succinct abstract representation of these constraint atoms, in which each constraint atom is represented compactly. We show two applications. First, under this representation of constraint atoms, we generalize the Gelfond-Lifschitz transformation and apply it to define stable models (also called answer sets) for logic programs with arbitrary constraint atoms. The resulting semantics turns out to coincide with the one defined by Son et al., which is based on a fixpoint approach. One advantage of our approach is that it can be applied, in a natural way, to define stable models for disjunctive logic programs with constraint atoms, which may appear in the disjunctive head as well as in the body of a rule. As a result, our approach to the stable model semantics for logic programs with constraint atoms generalizes a number of previous approaches. Second, we show that our abstract representation of constraint atoms provides a means to characterize dependencies of atoms in a program with constraint atoms, so that some standard characterizations and properties relying on these dependencies in the past for logic programs with ordinary atoms can be extended to logic programs with constraint atoms.
|
Title: Dictionary Identification - Sparse Matrix-Factorisation via $\ell_1$-Minimisation
|
Abstract: This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via $\ell_1$-minimisation. The problem can also be seen as factorising a $\ddim \times \nsig$ matrix $Y=(y_1 \dots y_\nsig), y_n\in \R^\ddim$ of training signals into a $\ddim \times \natoms$ dictionary matrix $\dico$ and a $\natoms \times \nsig$ coefficient matrix $\X=(x_1 \dots x_\nsig), x_n \in \R^\natoms$, which is sparse. The exact question studied here is when a dictionary coefficient pair $(\dico,\X)$ can be recovered as a local minimum of a (nonconvex) $\ell_1$-criterion with input $Y=\dico \X$. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples $\nsig$ grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. $\nsig \approx C \natoms \log \natoms$, in contrast to previous approaches requiring combinatorially many samples.
|
Title: FaceBots: Steps Towards Enhanced Long-Term Human-Robot Interaction by Utilizing and Publishing Online Social Information
|
Abstract: Our project aims at supporting the creation of sustainable and meaningful longer-term human-robot relationships by creating embodied robots with face recognition and natural language dialogue capabilities, which exploit and publish social information available on the web (Facebook). Our main underlying experimental hypothesis is that such relationships can be significantly enhanced if the human and the robot are gradually creating a pool of shared episodic memories that they can co-refer to (shared memories), and if they are both embedded in a social web of other humans and robots they both know and encounter (shared friends). In this paper, we present such a robot, which, as we will show, achieves two significant novelties.
|
Title: Classification and categorical inputs with treed Gaussian process models
|
Abstract: Recognizing the successes of treed Gaussian process (TGP) models as an interpretable and thrifty model for nonparametric regression, we seek to extend the model to classification. Both treed models and Gaussian processes (GPs) have, separately, enjoyed great success in application to classification problems. An example of the former is Bayesian CART. In the latter, real-valued GP output may be utilized for classification via latent variables, which provide classification rules by means of a softmax function. We formulate a Bayesian model averaging scheme to combine these two models and describe a Monte Carlo method for sampling from the full posterior distribution with joint proposals for the tree topology and the GP parameters corresponding to latent variables at the leaves. We concentrate on efficient sampling of the latent variables, which is important to obtain good mixing in the expanded parameter space. The tree structure is particularly helpful for this task and also for developing an efficient scheme for handling categorical predictors, which commonly arise in classification problems. Our proposed classification TGP (CTGP) methodology is illustrated on a collection of synthetic and real data sets. We assess performance relative to existing methods and thereby show how CTGP is highly flexible, offers tractable inference, produces rules that are easy to interpret, and performs well out of sample.
|
Title: Correspondence: The use of cost information when defining critical values for prediction of rare events using logistic regression and similar methods
|
Abstract: Balancing a rare and serious possibility against a more common and less serious one is a familiar problem in many situations, such as the prediction of rare diseases. The relative costs of forecasting errors can be used with any prediction method that gives an estimated probability of a future event. The probability at which the likely cost (defined as cost × probability) of a possible false negative is exactly equal to that of a possible false positive gives the relevant cutpoint, and all subjects with a probability of disease greater than this have a positive test result. All standard methods of logistic regression will give the log-odds and hence the predicted probability of a positive outcome for every subject.
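The cost-balancing rule above has a closed form: setting C_FN·p = C_FP·(1−p) gives the cutpoint p* = C_FP/(C_FP + C_FN). A minimal sketch, with hypothetical costs and predicted probabilities:

```python
# Sketch of the cost-balancing cutpoint: classify positive when the
# predicted probability exceeds p* = c_fp / (c_fp + c_fn).

def cost_cutpoint(c_fn, c_fp):
    """Probability at which the expected cost of a false negative equals
    that of a false positive: c_fn * p = c_fp * (1 - p)."""
    return c_fp / (c_fp + c_fn)

def classify(probs, c_fn, c_fp):
    p_star = cost_cutpoint(c_fn, c_fp)
    return [int(p > p_star) for p in probs]

# A missed rare disease is taken to be 9 times as costly as a false alarm,
# so the cutpoint drops from 0.5 to 0.1 (hypothetical numbers).
probs = [0.02, 0.08, 0.15, 0.40, 0.70]
print(cost_cutpoint(c_fn=9, c_fp=1))      # 0.1
print(classify(probs, c_fn=9, c_fp=1))    # [0, 0, 1, 1, 1]
```

With equal costs the rule reduces to the familiar 0.5 threshold; asymmetric costs simply slide the cutpoint toward the cheaper error.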
|
Title: Fuzzy Mnesors
|
Abstract: A fuzzy mnesor space is a semimodule over the positive real numbers. It can be used as a theoretical framework for fuzzy sets. Hence we can prove a great number of properties of fuzzy sets without referring to the membership functions.
|
Title: An Application of Proof-Theory in Answer Set Programming
|
Abstract: We apply proof-theoretic techniques in Answer Set Programming. The main results include: 1. A characterization of continuity properties of the Gelfond-Lifschitz operator for logic programs. 2. A propositional characterization of stable models of logic programs (without referring to loop formulas).
|
Title: Gaussian Belief with dynamic data and in dynamic network
|
Abstract: In this paper we analyse Belief Propagation over a Gaussian model in a dynamic environment. Recently, this has been proposed as a method to average local measurement values by a distributed protocol ("Consensus Propagation", Moallemi & Van Roy, 2006), where the average is available for read-out at every single node. In the case that the underlying network is constant but the values to be averaged fluctuate ("dynamic data"), convergence and accuracy are determined by the spectral properties of an associated Ruelle-Perron-Frobenius operator. For Gaussian models on Erdos-Renyi graphs, numerical computation points to a spectral gap remaining in the large-size limit, implying exceptionally good scalability. In a model where the underlying network also fluctuates ("dynamic network"), averaging is more effective than in the dynamic data case. Altogether, this implies very good performance of these methods in very large systems, and opens a new field of statistical physics of large (and dynamic) information systems.
|
Title: Nonparametric Covariate Adjustment for Receiver Operating Characteristic Curves
|
Abstract: The accuracy of a diagnostic test is typically characterised using the receiver operating characteristic (ROC) curve. Summarising indexes such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often additional information is available on some of the covariates which are known to influence the accuracy of such measures. We propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and non-normal errors are discussed and analysed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the general noise case we propose a covariate-adjusted Mann-Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient for implementation. This provides a generalisation of the Mann-Whitney approach for comparing two populations by taking covariate effects into account. We derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and MSE consistency. The usefulness of the proposed methods is demonstrated through simulated and real data examples.
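The covariate-free Mann-Whitney estimator that the proposed method generalizes can be sketched in a few lines; the scores below are hypothetical.

```python
# Sketch of the Mann-Whitney estimate of AUC: the proportion of
# (diseased, healthy) pairs in which the diseased score is higher,
# counting ties as one half.

def auc_mann_whitney(pos, neg):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

pos = [3.1, 2.8, 3.9, 2.8]   # hypothetical scores, diseased group
neg = [1.9, 2.8, 2.5, 1.2]   # hypothetical scores, healthy group
print(auc_mann_whitney(pos, neg))   # 0.9375
```

The covariate-adjusted estimator of the paper replaces the raw samples with working samples constructed, via nonparametric regression, at the covariate value of interest before applying the same pairwise comparison.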
|
Title: Regularized estimation of large-scale gene association networks using graphical Gaussian models
|
Abstract: Graphical Gaussian models are popular tools for the estimation of (undirected) gene association networks from microarray data. A key issue when the number of variables greatly exceeds the number of samples is the estimation of the matrix of partial correlations. Since the (Moore-Penrose) inverse of the sample covariance matrix leads to poor estimates in this scenario, standard methods are inappropriate and adequate regularization techniques are needed. In this article, we investigate a general framework for combining regularized regression methods with the estimation of Graphical Gaussian models. This framework includes various existing methods as well as two new approaches based on ridge regression and adaptive lasso, respectively. These methods are extensively compared both qualitatively and quantitatively within a simulation study and through an application to six diverse real data sets. In addition, all proposed algorithms are implemented in the R package "parcor", available from the R repository CRAN.
|
Title: Small-sample corrections for score tests in Birnbaum-Saunders regressions
|
Abstract: In this paper we deal with the issue of performing accurate small-sample inference in the Birnbaum-Saunders regression model, which can be useful for modeling lifetime or reliability data. We derive a Bartlett-type correction for the score test and numerically compare the corrected test with the usual score test, the likelihood ratio test and its Bartlett-corrected version. Our simulation results suggest that the corrected test we propose is more reliable than the other tests.
|
Title: Feasibility of random basis function approximators for modeling and control
|
Abstract: We discuss the role of random basis function approximators in modeling and control. We analyze the published work on random basis function approximators and demonstrate that their favorable error rate of convergence O(1/n) is guaranteed only with very substantial computational resources. We also discuss implications of our analysis for applications of neural networks in modeling and control.
|
Title: A FORTRAN coded regular expression Compiler for IBM 1130 Computing System
|
Abstract: REC (Regular Expression Compiler) is a concise programming language which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes an interpreter of REC written in FORTRAN.
|
Title: Soft Motion Trajectory Planner for Service Manipulator Robot
|
Abstract: Human interaction introduces two main constraints: safety and comfort. A service robot manipulator therefore cannot be controlled like an industrial robotic manipulator, where personnel are isolated from the robot's work envelope. In this paper, we present a soft-motion trajectory planner that tries to ensure these constraints are satisfied. This planner can be used on-line to establish visual and force control loops suitable in the presence of humans. The cubic trajectories built by this planner are good candidates as output of a manipulation task planner. The resulting system is then homogeneous from task planning to robot control. The soft-motion trajectory planner limits jerk, acceleration and velocity in Cartesian space using quaternions. Experimental results carried out on a Mitsubishi PA10-6CE arm are presented.
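A rest-to-rest cubic segment of the kind such a planner might produce, together with its closed-form velocity and acceleration peaks, can be sketched as follows. The limits and duration are hypothetical; this is not the planner's code.

```python
# Sketch of a point-to-point cubic trajectory with zero boundary velocity,
# plus its peak velocity and acceleration, which a soft-motion planner
# would check against the allowed limits.

def cubic_trajectory(q0, q1, T):
    """q(t) = q0 + (q1-q0)*(3(t/T)^2 - 2(t/T)^3): rest-to-rest motion."""
    d = q1 - q0
    def q(t):
        s = t / T
        return q0 + d * (3 * s**2 - 2 * s**3)
    v_max = 1.5 * d / T          # peak velocity, reached at t = T/2
    a_max = 6.0 * d / T**2       # peak |acceleration|, at the endpoints
    return q, v_max, a_max

q, v_max, a_max = cubic_trajectory(q0=0.0, q1=1.0, T=2.0)
print(round(q(0.0), 3), round(q(1.0), 3), round(q(2.0), 3))  # 0.0 0.5 1.0
print(v_max, a_max)                                           # 0.75 1.5
```

Given velocity and acceleration bounds, the planner can simply increase T until both peaks fall below their limits.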
|
Title: A Large-Deviation Analysis of the Maximum-Likelihood Learning of Markov Tree Structures
|
Abstract: The problem of maximum-likelihood (ML) estimation of discrete tree-structured distributions is considered. Chow and Liu established that ML-estimation reduces to the construction of a maximum-weight spanning tree using the empirical mutual information quantities as the edge weights. Using the theory of large-deviations, we analyze the exponent associated with the error probability of the event that the ML-estimate of the Markov tree structure differs from the true tree structure, given a set of independently drawn samples. By exploiting the fact that the output of ML-estimation is a tree, we establish that the error exponent is equal to the exponential rate of decay of a single dominant crossover event. We prove that in this dominant crossover event, a non-neighbor node pair replaces a true edge of the distribution that is along the path of edges in the true tree graph connecting the nodes in the non-neighbor pair. Using ideas from Euclidean information theory, we then analyze the scenario of ML-estimation in the very noisy learning regime and show that the error exponent can be approximated as a ratio, which is interpreted as the signal-to-noise ratio (SNR) for learning tree distributions. We show via numerical experiments that in this regime, our SNR approximation is accurate.
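The Chow-Liu construction referred to above (empirical mutual information as edge weights, then a maximum-weight spanning tree) can be sketched as follows, with made-up binary samples.

```python
# Sketch of the Chow-Liu step: empirical mutual information as edge
# weights, then a maximum-weight spanning tree via Kruskal's algorithm.
# The binary samples are hypothetical.

from collections import Counter
from math import log

def mutual_information(samples, i, j):
    """Empirical mutual information (in nats) between variables i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(samples, num_vars):
    """Maximum-weight spanning tree over empirical mutual informations."""
    edges = sorted(((mutual_information(samples, i, j), i, j)
                    for i in range(num_vars)
                    for j in range(i + 1, num_vars)), reverse=True)
    parent = list(range(num_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # keep the edge if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Hypothetical chain: X0-X1 and X1-X2 strongly dependent, X0 and X2
# empirically independent, so the ML tree should recover (0,1) and (1,2).
samples = [(0, 0, 0), (0, 0, 0), (0, 0, 1), (1, 1, 1),
           (1, 1, 1), (1, 1, 0), (0, 1, 1), (1, 0, 0)]
print(sorted(chow_liu_tree(samples, 3)))   # [(0, 1), (1, 2)]
```

The crossover events analyzed in the paper are exactly the cases where, by sampling noise, a non-edge pair acquires a larger empirical mutual information than a true edge on the path connecting it.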
|
Title: Statistical Automatic Summarization in Organic Chemistry
|
Abstract: We present an oriented numerical summarizer algorithm, applied to producing automatic summaries of scientific documents in Organic Chemistry. We present its implementation, named Yachs (Yet Another Chemistry Summarizer), which combines specific document pre-processing with a sentence scoring method relying on the statistical properties of documents. We show that Yachs achieves the best results among several other summarizers on a corpus of Organic Chemistry articles.
|
Title: The Modular Audio Recognition Framework (MARF) and its Applications: Scientific and Software Engineering Notes
|
Abstract: MARF is an open-source research platform and a collection of voice/sound/speech/text and natural language processing (NLP) algorithms written in Java and arranged into a modular and extensible framework facilitating the addition of new algorithms. MARF can run distributively over the network and may act as a library in applications or be used as a source for learning and extension. A few example applications are provided to show how to use the framework. There is an API reference in the Javadoc format, as well as this set of accompanying notes with a detailed description of the architectural design, algorithms, and applications. MARF and its applications are released under a BSD-style license and are hosted at SourceForge.net. This document provides the details and insight on the internals of MARF and some of the mentioned applications.
|
Title: Auditing a collection of races simultaneously
|
Abstract: A collection of races in a single election can be audited as a group by auditing a random sample of batches of ballots and combining observed discrepancies in the races represented in those batches in a particular way: the maximum across-race relative overstatement of pairwise margins (MARROP). A risk-limiting audit for the entire collection of races can be built on this ballot-based auditing using a variety of probability sampling schemes. The audit controls the familywise error rate (the chance that one or more incorrect outcomes fail to be corrected by a full hand count) at a cost that can be lower than that of controlling the per-comparison error rate with independent audits. The approach is particularly efficient if batches are drawn with probability proportional to a bound on the MARROP (PPEB sampling).
|
Title: Concept Stability for Constructing Taxonomies of Web-site Users
|
Abstract: Owners of a web-site are often interested in analyzing groups of users of their site. Information on these groups can help optimize the structure and contents of the site. In this paper we use an approach based on formal concepts for constructing taxonomies of user groups. To reduce the huge number of concepts that arise in applications, we employ the stability index of a concept, which describes how the group given by a concept extent differs from other such groups. We analyze the resulting taxonomies of user groups for three target websites.
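The stability index can be computed by brute force for a tiny formal context, which makes the definition concrete: it is the share of subsets of a concept's extent whose intent equals the concept's intent. The user-by-section context below is made up.

```python
# Brute-force sketch of the stability index of a formal concept.
# The toy user-by-section context is hypothetical.

from itertools import chain, combinations

def intent(objects, context):
    """Attributes shared by all given objects (all attributes if empty)."""
    attrs = set().union(*context.values())
    for o in objects:
        attrs &= context[o]
    return attrs

def stability(extent, concept_intent, context):
    """Fraction of subsets of the extent whose intent equals the concept's."""
    subsets = chain.from_iterable(combinations(extent, r)
                                  for r in range(len(extent) + 1))
    total, hits = 0, 0
    for s in subsets:
        total += 1
        if intent(s, context) == concept_intent:
            hits += 1
    return hits / total

# Users and the site sections they visit (hypothetical).
context = {"u1": {"news", "forum"},
           "u2": {"news", "forum"},
           "u3": {"news", "wiki"}}
print(stability(["u1", "u2"], {"news", "forum"}, context))   # 0.75
```

Real contexts are far too large for this enumeration, which is why practical tools approximate or bound the index, but the semantics is the same: stable concepts keep their intent even when some supporting users are dropped.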
|
Title: Locally most powerful sequential tests of a simple hypothesis vs one-sided alternatives
|
Abstract: Let $X_1,X_2,\dots$ be a discrete-time stochastic process with a distribution $P_\theta$, $\theta\in\Theta$, where $\Theta$ is an open subset of the real line. We consider the problem of testing a simple hypothesis $H_0:$ $\theta=\theta_0$ versus a composite alternative $H_1:$ $\theta>\theta_0$, where $\theta_0\in\Theta$ is some fixed point. The main goal of this article is to characterize the structure of locally most powerful sequential tests in this problem. For any sequential test $(\psi,\phi)$ with a (randomized) stopping rule $\psi$ and a (randomized) decision rule $\phi$, let $\alpha(\psi,\phi)$ be the type I error probability, $\dot \beta_0(\psi,\phi)$ the derivative, at $\theta=\theta_0$, of the power function, and $\mathscr N(\psi)$ the average sample number of the test $(\psi,\phi)$. Then we are concerned with the problem of maximizing $\dot \beta_0(\psi,\phi)$ in the class of all sequential tests such that $$ \alpha(\psi,\phi)\leq \alpha\quad\text{and}\quad \mathscr N(\psi)\leq \mathscr N, $$ where $\alpha\in[0,1]$ and $\mathscr N\geq 1$ are some restrictions. It is supposed that $\mathscr N(\psi)$ is calculated under some fixed distribution of the process $X_1,X_2,\dots$ (not necessarily coinciding with one of the $P_\theta$). The structure of optimal sequential tests is characterized.
|
Title: Supplementary material for Markov equivalence for ancestral graphs
|
Abstract: We prove that the criterion for Markov equivalence provided by Zhao et al. (2005) may involve a set of features of a graph that is exponential in the number of vertices.
|
Title: Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit
|
Abstract: Motivated by the philosophy and phenomenal success of compressed sensing, the problem of reconstructing a matrix from a sampling of its entries has attracted much attention recently. Such a problem can be viewed as an information-theoretic variant of the well-studied matrix completion problem, and the main objective is to design an efficient algorithm that can reconstruct a matrix by inspecting only a small number of its entries. Although this is an impossible task in general, Cand\`es and co-authors have recently shown that under a so-called incoherence assumption, a rank $r$ $n\times n$ matrix can be reconstructed using semidefinite programming (SDP) after one inspects $O(nr\log^6n)$ of its entries. In this paper we propose an alternative approach that is much more efficient and can reconstruct a larger class of matrices by inspecting a significantly smaller number of the entries. Specifically, we first introduce a class of so-called stable matrices and show that it includes all those that satisfy the incoherence assumption. Then, we propose a randomized basis pursuit (RBP) algorithm and show that it can reconstruct a stable rank $r$ $n\times n$ matrix after inspecting $O(nr\log n)$ of its entries. Our sampling bound is only a logarithmic factor away from the information-theoretic limit and is essentially optimal. Moreover, the runtime of the RBP algorithm is bounded by $O(nr^2\log n+n^2r)$, which compares very favorably with the $\Omega(n^4r^2\log^{12}n)$ runtime of the SDP-based algorithm. Perhaps more importantly, our algorithm will provide an exact reconstruction of the input matrix in polynomial time. By contrast, the SDP-based algorithm can only provide an approximate one in polynomial time.
|
Title: Acquisition of morphological families and derivational series from a machine readable dictionary
|
Abstract: The paper presents a linguistic and computational model aiming at making the morphological structure of the lexicon emerge from the formal and semantic regularities of the words it contains. The model is word-based. The proposed morphological structure consists of (1) binary relations that connect each headword with words that are morphologically related, and especially with the members of its morphological family and its derivational series, and of (2) the analogies that hold between the words. The model has been tested on the lexicon of French using the TLFi machine readable dictionary.
|