Title: The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation
|
Abstract: A basic feature of many field experiments is that investigators are only able to randomize clusters of individuals--such as households, communities, firms, medical practices, schools or classrooms--even when the individual is the unit of interest. To recoup the resulting efficiency loss, some studies pair similar clusters and randomize treatment within pairs. However, many other studies avoid pairing, in part because of claims in the literature, echoed by clinical trials standards organizations, that this matched-pair, cluster-randomization design has serious problems. We argue that all such claims are unfounded. We also prove that the estimator recommended for this design in the literature is unbiased only in situations when matching is unnecessary; its standard error is also invalid. To overcome this problem without modeling assumptions, we develop a simple design-based estimator with much improved statistical properties. We also propose a model-based approach that includes some of the benefits of our design-based estimator as well as the estimator in the literature. Our methods also address individual-level noncompliance, which is common in applications but not allowed for in most existing methods. We show that from the perspective of bias, efficiency, power, robustness or research costs, and in large or small samples, pairing should be used in cluster-randomized experiments whenever feasible; failing to do so is equivalent to discarding a considerable fraction of one's data. We develop these techniques in the context of a randomized evaluation we are conducting of the Mexican Universal Health Insurance Program.
|
Title: Comment: The Essential Role of Pair Matching
|
Abstract: Comment on "The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation" [arXiv:0910.3752]
|
Title: Comment: The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation
|
Abstract: Comment on "The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation" [arXiv:0910.3752]
|
Title: Rejoinder: Matched Pairs and the Future of Cluster-Randomized Experiments
|
Abstract: Rejoinder to "The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation" [arXiv:0910.3752]
|
Title: Dynamics of the Orthoglide parallel robot
|
Abstract: Recursive matrix relations for the kinematics and dynamics of the Orthoglide parallel robot, which has three concurrent prismatic actuators, are established in this paper. The actuators are arranged along a Cartesian coordinate system with fixed orientation, which means that the actuating directions are normal to each other. The three identical legs connecting to the moving platform likewise lie in three mutually perpendicular planes. Knowing the position and the translational motion of the platform, we develop the inverse kinematics problem and determine the position, velocity and acceleration of each element of the robot. Further, the principle of virtual work is used in the inverse dynamics problem. Matrix equations yield iterative expressions and graphs for the input forces and the powers of the three actuators.
|
Title: How to Complete an Interactive Configuration Process?
|
Abstract: When configuring customizable software, it is useful to provide interactive tool-support that ensures that the configuration does not breach given constraints. But, when is a configuration complete and how can the tool help the user to complete it? We formalize this problem and relate it to concepts from non-monotonic reasoning well researched in Artificial Intelligence. The results are interesting for both practitioners and theoreticians. Practitioners will find a technique facilitating an interactive configuration process and experiments supporting feasibility of the approach. Theoreticians will find links between well-known formal concepts and a concrete practical application.
|
Title: Path placement optimization of manipulators based on energy consumption: application to the orthoglide 3-axis
|
Abstract: This paper deals with the optimal path placement for a manipulator based on energy consumption. It proposes a methodology to determine the optimal location of a given test path within the workspace of a manipulator with minimal electric energy used by the actuators while taking into account the geometric, kinematic and dynamic constraints. The proposed methodology is applied to the Orthoglide 3-axis, a three-degree-of-freedom translational parallel kinematic machine (PKM), as an illustrative example.
|
Title: Singularity Analysis of Lower-Mobility Parallel Manipulators Using Grassmann-Cayley Algebra
|
Abstract: This paper introduces a methodology for the geometric singularity analysis of manipulators whose legs apply both actuation forces and constraint moments to their moving platform. Lower-mobility parallel manipulators, and parallel manipulators in which some legs have no spherical joint, are such manipulators. The geometric conditions associated with the dependency of six Pl\"ucker vectors of finite lines or lines at infinity constituting the rows of the inverse Jacobian matrix are formulated using Grassmann-Cayley Algebra. Accordingly, the singularity conditions are obtained in vector form. This study is illustrated with the singularity analysis of four manipulators.
|
Title: Swarm Intelligence
|
Abstract: Biologically inspired computing is an area of computer science which exploits the advantageous properties of biological systems. It is the amalgamation of computational intelligence and collective intelligence. Biologically inspired mechanisms have already proved successful in achieving major advances in a wide range of problems in computing and communication systems. The family of bio-inspired computing techniques includes artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, DNA computing and quantum computing, among others. This article gives an introduction to swarm intelligence.
|
Title: Sparsification and feature selection by compressive linear regression
|
Abstract: The Minimum Description Length (MDL) principle states that the optimal model for a given data set is the one that compresses it best. Due to practical limitations the model can be restricted to a class such as linear regression models, which we address in this study. As in other formulations such as the LASSO and forward stepwise regression, we are interested in sparsifying the feature set while preserving generalization ability. We derive a well-principled set of codes for both parameters and error residuals, along with smooth approximations to the lengths of these codes, so as to allow gradient-descent optimization of the description length. We then show that sparsification and feature selection with our approach is faster than the LASSO on several datasets from the UCI and StatLib repositories, with favorable generalization accuracy, while being fully automatic: it requires neither cross-validation nor tuning of regularization hyper-parameters, and even allows for a nonlinear expansion of the feature set followed by sparsification.
|
Title: The Geometry of Generalized Binary Search
|
Abstract: This paper investigates the problem of determining a binary-valued function through a sequence of strategically selected queries. The focus is Generalized Binary Search (GBS), a well-known greedy algorithm for this problem. At each step, GBS selects a query that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. This paper develops novel incoherence and geometric conditions under which GBS achieves the information-theoretically optimal query complexity; i.e., given a collection of N hypotheses, GBS terminates with the correct function after no more than a constant times log N queries. Furthermore, a noise-tolerant version of GBS is developed that also achieves the optimal query complexity. These results are applied to learning halfspaces, a problem arising routinely in image processing and machine learning.
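The greedy splitting rule described in the abstract can be sketched in a few lines. The toy hypothesis class below (threshold functions on a small grid) is an illustrative assumption, not the paper's setting:

```python
def generalized_binary_search(hypotheses, queries, oracle):
    # Greedy GBS: repeatedly ask the query that most evenly splits the
    # set of still-viable hypotheses, then discard the inconsistent ones.
    viable = list(hypotheses)
    n_queries = 0
    while len(viable) > 1:
        # The most balanced query minimises |sum_h h(q)| over viable h.
        x = min(queries, key=lambda q: abs(sum(h(q) for h in viable)))
        y = oracle(x)                      # observe the true label at x
        viable = [h for h in viable if h(x) == y]
        n_queries += 1
    return viable[0], n_queries

# Toy hypothesis class: threshold functions on the grid {0, ..., 7}.
def make_threshold(t):
    return lambda x: 1 if x >= t else -1

hypotheses = [make_threshold(t) for t in range(8)]
target = make_threshold(5)                 # the unknown function
found, n = generalized_binary_search(hypotheses, range(8), target)
```

With 8 hypotheses the loop finishes after about log2(8) = 3 queries, matching the logarithmic query complexity the paper analyzes.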
|
Title: Stochastic epidemic models: a survey
|
Abstract: This paper is a survey of stochastic epidemic models. A simple stochastic epidemic model is defined, and exact and asymptotic properties of the model (relying on a large community) are presented. The purpose of modelling is illustrated by studying effects of vaccination and also in terms of inference procedures for important parameters, such as the basic reproduction number and the critical vaccination coverage. Several generalizations towards realism, e.g. multitype and household epidemic models, are also presented, as is a model for endemic diseases.
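As an illustration of the kind of simple stochastic epidemic model such a survey starts from, here is a minimal Reed-Frost chain-binomial simulation; the community size and per-contact infection probability below are illustrative choices, not values from the paper:

```python
import random

def reed_frost(n_susceptible, n_infected, p, rng):
    # Reed-Frost chain-binomial epidemic: in each generation, every
    # infective independently infects each susceptible with probability p.
    s, i, total = n_susceptible, n_infected, n_infected
    while i > 0:
        escape = (1.0 - p) ** i    # P(a susceptible avoids all i infectives)
        new_i = sum(1 for _ in range(s) if rng.random() > escape)
        s -= new_i
        total += new_i
        i = new_i
    return total                   # final size of the epidemic

rng = random.Random(0)
# Final sizes of 1000 simulated epidemics in a community of 100
# (basic reproduction number roughly 99 * 0.03, i.e. about 3).
final_sizes = [reed_frost(99, 1, 0.03, rng) for _ in range(1000)]
```

The resulting final-size distribution is bimodal, the minor-outbreak/major-outbreak dichotomy that the exact and asymptotic results in the survey make precise.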
|
Title: Tutorial on ABC rejection and ABC SMC for parameter estimation and model selection
|
Abstract: In this tutorial we schematically illustrate four algorithms: (1) ABC rejection for parameter estimation (2) ABC SMC for parameter estimation (3) ABC rejection for model selection on the joint space (4) ABC SMC for model selection on the joint space.
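Algorithm (1), ABC rejection for parameter estimation, can be sketched compactly. The toy target below (inferring a normal mean from a sample mean), the prior range, and the tolerance are all illustrative assumptions, not the tutorial's examples:

```python
import random

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws, rng):
    # ABC rejection: draw theta from the prior, simulate data, and keep
    # theta whenever the simulated data fall within eps of the observed data.
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        x_sim = simulate(theta, rng)
        if distance(x_sim, observed) <= eps:
            accepted.append(theta)
    return accepted                        # approximate posterior sample

# Toy example: infer the mean of a Normal(theta, 1) from a sample mean of 2.0.
rng = random.Random(0)
prior_sample = lambda r: r.uniform(-5.0, 5.0)
simulate = lambda theta, r: sum(r.gauss(theta, 1.0) for _ in range(30)) / 30
posterior = abc_rejection(2.0, prior_sample, simulate,
                          lambda a, b: abs(a - b), 0.2, 5000, rng)
```

The accepted draws concentrate around the true mean; shrinking eps trades acceptance rate for posterior accuracy, which is exactly the tension ABC SMC is designed to manage.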
|
Title: Effect of indirect dependencies on "A mutual information minimization approach for a class of nonlinear recurrent separating systems"
|
Abstract: In a recent paper [4], Duarte and Jutten investigated the Blind Source Separation (BSS) problem for the nonlinear mixing model that they introduced in that paper. They proposed to solve this problem by using information-theoretic tools, more precisely by minimizing the mutual information (MI) of the outputs of the separating structure. When applying the MI approach to BSS problems, one usually determines the analytical expressions of the derivatives of the MI with respect to the parameters of the considered separating model. Until now, these calculations have mainly been reported in the literature for linear mixtures. They are more complex for nonlinear mixtures, due to dependencies between the considered quantities. Moreover, the notations commonly employed by the BSS community in such calculations may become misleading when applied to nonlinear mixtures, because of the above-mentioned dependencies. We claim that the calculations reported in [4] contain an error, because they did not take all these dependencies into account. In this document, we therefore explain this phenomenon by showing the effect of indirect dependencies on the application of the MI approach to the mixing and separating models considered in [4]. We thus introduce a corrected expression of the gradient of the considered MI-based BSS criterion. This corrected gradient may then, for example, be used to optimize the adaptive coefficients of the considered separating system by means of the well-known gradient descent algorithm. As explained hereafter, this investigation has some similarities with an analysis that we previously reported in another arXiv document [3]. However, the two investigations concern different problems (mixture and separating structure, mathematical tools: see paper).
|
Title: Outlier Elimination for Robust Ellipse and Ellipsoid Fitting
|
Abstract: In this paper, an outlier elimination algorithm for ellipse/ellipsoid fitting is proposed. This two-stage algorithm employs a proximity-based outlier detection algorithm (using the graph Laplacian), followed by a model-based outlier detection algorithm similar to random sample consensus (RANSAC). These two stages compensate for each other so that outliers of various types can be eliminated with reasonable computation. The outlier elimination algorithm considerably improves the robustness of ellipse/ellipsoid fitting as demonstrated by simulations.
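The model-based second stage is similar to RANSAC, whose consensus logic can be sketched as follows; for brevity this toy version fits a 2-D line rather than an ellipse or ellipsoid, and the point set and tolerance are made up for illustration:

```python
import random

def ransac_inliers(points, n_rounds, tol, rng):
    # RANSAC-style model-based outlier elimination, sketched with a 2-D
    # line model: fit a model to a minimal random sample, count the points
    # within tol of it, and keep the largest consensus set.
    best = []
    for _ in range(n_rounds):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2            # line a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0.0:
            continue                       # degenerate sample
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best                            # the rest are declared outliers

rng = random.Random(0)
points = [(float(x), 2.0 * x + 1.0) for x in range(20)] + [(3.0, 40.0), (15.0, -9.0)]
kept = ransac_inliers(points, n_rounds=50, tol=0.5, rng=rng)
```

For ellipse fitting, the minimal sample and the point-to-model distance change, but the sample-fit-count-keep loop is the same; the paper's proximity-based first stage exists precisely to cut down the outlier fraction this loop must tolerate.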
|
Title: Self-concordant analysis for logistic regression
|
Abstract: Most of the non-asymptotic theoretical work in regression is carried out for the square loss, where estimators can be obtained through closed-form expressions. In this paper, we use and extend tools from the convex optimization literature, namely self-concordant functions, to provide simple extensions of theoretical results for the square loss to the logistic loss. We apply the extension techniques to logistic regression with regularization by the $\ell_2$-norm and regularization by the $\ell_1$-norm, showing that new results for binary classification through logistic regression can be easily derived from corresponding results for least-squares regression.
|
Title: On approximation of smoothing probabilities for hidden Markov models
|
Abstract: We consider the smoothing probabilities of a hidden Markov model (HMM). We show that, under fairly general conditions on the HMM, exponential forgetting still holds, and the smoothing probabilities can be well approximated by those of a double-sided HMM. This makes it possible to use ergodic theorems. As an application, we consider pointwise maximum a posteriori segmentation and show that the corresponding risks converge.
|
Title: Competing with Gaussian linear experts
|
Abstract: We study the problem of online regression. We prove a theoretical bound on the square loss of Ridge Regression. We do not make any assumptions about input vectors or outcomes. We also show that Bayesian Ridge Regression can be thought of as an online algorithm competing with all the Gaussian linear experts.
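A minimal sketch of the online protocol with Ridge Regression follows; the regularization constant and the toy data stream are illustrative assumptions, and the loss bound proved in the paper is of course not reproduced by the code:

```python
def solve(A, b):
    # Solve A x = b by Gauss-Jordan elimination with partial pivoting
    # (fine for the small, well-conditioned systems used here).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

class OnlineRidge:
    # Online ridge regression: maintain A = reg*I + sum x x^T and
    # b = sum y x; the prediction for x is x . A^{-1} b.
    def __init__(self, dim, reg=1.0):
        self.A = [[reg if i == j else 0.0 for j in range(dim)] for i in range(dim)]
        self.b = [0.0] * dim

    def predict(self, x):
        w = solve(self.A, self.b)
        return sum(wi * xi for wi, xi in zip(w, x))

    def update(self, x, y):
        for i in range(len(x)):
            self.b[i] += y * x[i]
            for j in range(len(x)):
                self.A[i][j] += x[i] * x[j]

# Toy stream generated by y = 2*x1 - x2 (an illustrative target).
model = OnlineRidge(dim=2, reg=0.1)
for x, y in [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((2.0, 1.0), 3.0)] * 30:
    model.update(x, y)
```

The predict-then-update cycle is the online setting of the paper; the Bayesian interpretation arises because the same A and b define the posterior of a Gaussian linear model.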
|
Title: Bayesian Core: The Complete Solution Manual
|
Abstract: This solution manual contains the unabridged and original solutions to all the exercises proposed in Bayesian Core, along with R programs when necessary.
|
Title: Sum of Us: Strategyproof Selection from the Selectors
|
Abstract: We consider directed graphs over a set of n agents, where an edge (i,j) is taken to mean that agent i supports or trusts agent j. Given such a graph and an integer k\leq n, we wish to select a subset of k agents that maximizes the sum of indegrees, i.e., a subset of k most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a k-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for k \in 1,...,n-1, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.
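The optimization target (not the strategyproof mechanisms themselves) can be sketched as the naive top-k-indegree rule that the paper's mechanisms approximate; the small support graph below is a made-up example:

```python
def top_k_indegree(edges, n, k):
    # Non-strategyproof baseline: pick the k agents with the highest
    # indegree (sum of incoming support edges), ties broken by index.
    indeg = [0] * n
    for i, j in edges:
        indeg[j] += 1
    ranked = sorted(range(n), key=lambda a: (-indeg[a], a))
    chosen = ranked[:k]
    return chosen, sum(indeg[a] for a in chosen)

# Made-up support graph over 4 agents: (i, j) means i supports j.
edges = [(0, 1), (2, 1), (3, 1), (1, 2), (3, 2), (0, 3)]
chosen, score = top_k_indegree(edges, n=4, k=2)
```

This rule is optimal but manipulable: an agent can drop outgoing edges to demote rivals. The paper's impossibility result says no deterministic strategyproof rule achieves any finite fraction of this score, which is why its positive result is randomized.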
|
Title: Parallelization of the LBG Vector Quantization Algorithm for Shared Memory Systems
|
Abstract: This paper proposes a parallel approach for the Vector Quantization (VQ) problem in image processing. VQ deals with codebook generation from the input training data set and replacement of any arbitrary data with the nearest codevector. Most of the efforts in VQ have been directed towards designing parallel search algorithms for the codebook, and little has hitherto been done on parallelizing the procedure for obtaining an optimum codebook. The proposed parallel algorithm addresses the problem of designing an optimum codebook using the traditional LBG type of vector quantization algorithm on shared memory systems, with efficient usage of parallel processors. Using the codebook formed from a training set, any arbitrary input data is replaced with the nearest codevector from the codebook. The effectiveness of the proposed algorithm is demonstrated.
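The sequential LBG-style refinement that the paper parallelizes can be sketched as a Lloyd-type loop; this single-threaded version with a crude initialisation illustrates only the codebook update, not the shared-memory parallelization:

```python
def lbg_codebook(data, k, n_iter=20):
    # LBG/Lloyd refinement (sequential sketch): assign each training
    # vector to its nearest codevector, then move each codevector to
    # the centroid of its cell, and repeat.
    codebook = [list(v) for v in data[:k]]     # crude initialisation
    for _ in range(n_iter):
        cells = [[] for _ in range(k)]
        for v in data:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            cells[i].append(v)
        for c, cell in enumerate(cells):
            if cell:                           # keep empty cells unchanged
                codebook[c] = [sum(col) / len(cell) for col in zip(*cell)]
    return codebook

def quantize(v, codebook):
    # Replace an arbitrary input vector with its nearest codevector.
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, c)))

train = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
codebook = lbg_codebook(train, k=2)
```

The expensive part, the nearest-codevector assignment over the whole training set, is embarrassingly parallel across training vectors, which is what a shared-memory parallelization exploits.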
|
Title: A $p$-adic RanSaC algorithm for stereo vision using Hensel lifting
|
Abstract: A $p$-adic variation of the Ran(dom) Sa(mple) C(onsensus) method for solving the relative pose problem in stereo vision is developed. From two 2-adically encoded images a random sample of five pairs of corresponding points is taken, and the equations for the essential matrix are solved by lifting solutions modulo 2 to the 2-adic integers. A recently devised $p$-adic hierarchical classification algorithm imitating the known LBG quantisation method classifies the solutions for all the samples after having determined the number of clusters using the known intra-inter validity of clusterings. In the successful case, a cluster ranking will determine the cluster containing a 2-adic approximation to the "true" solution of the problem.
|
Title: Artificial Immune Systems
|
Abstract: The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information processing biological system has caught the attention of computer science in recent years. A novel computational intelligence technique, inspired by immunology, has emerged, called Artificial Immune Systems. Several concepts have been extracted from the immune system and applied to solve real-world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We will then show illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of Artificial Immune Systems to other well-known algorithms, areas for future work, tips & tricks and a list of resources will round this tutorial off. It should be noted that as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template, so actual implementations may differ somewhat from the examples given here.
|
Title: Articulation and Clarification of the Dendritic Cell Algorithm
|
Abstract: The Dendritic Cell algorithm (DCA) is inspired by recent work in innate immunity. In this paper a formal description of the DCA is given. The DCA is described in detail, and its use as an anomaly detector is illustrated within the context of computer security. A port scan detection task is performed to substantiate the influence of signal selection on the behaviour of the algorithm. Experimental results provide a comparison of differing input signal mappings.
|
Title: An Iterative Shrinkage Approach to Total-Variation Image Restoration
|
Abstract: The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a case, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information is essential for image restoration, rendering it stable and robust to noise. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based on the method of iterative shrinkage (also known as iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae.
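The two building blocks named above, a gradient (filtering) step and soft thresholding, can be illustrated on the simplest possible case. The sketch below applies the shrinkage recursion to an l1-penalized denoising problem with an identity operator, as a stand-in for the paper's TV setting:

```python
def soft_threshold(x, t):
    # Soft thresholding (shrinkage): the proximal operator of t*|x|.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista_l1_denoise(y, lam, step=1.0, n_iter=100):
    # Iterative shrinkage: a gradient step on the quadratic data term
    # 0.5*||x - y||^2, followed by componentwise soft thresholding,
    # minimising 0.5*||x - y||^2 + lam*||x||_1.
    x = [0.0] * len(y)
    for _ in range(n_iter):
        x = [soft_threshold(xi - step * (xi - yi), step * lam)
             for xi, yi in zip(x, y)]
    return x
```

With an identity operator the recursion converges in one step, but the same loop structure carries over when the gradient step involves a nontrivial linear filter, which is the first-order character the abstract highlights.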
|
Title: Two-sample Bayesian Nonparametric Hypothesis Testing
|
Abstract: In this article we describe Bayesian nonparametric procedures for two-sample hypothesis testing. Namely, given two sets of samples $\mathbf{y}^{(1)} \sim F^{(1)}$ and $\mathbf{y}^{(2)} \sim F^{(2)}$, with $F^{(1)}, F^{(2)}$ unknown, we wish to evaluate the evidence for the null hypothesis $H_0: F^{(1)} \equiv F^{(2)}$ versus the alternative $H_1: F^{(1)} \neq F^{(2)}$. Our method is based upon a nonparametric P\'olya tree prior centered either subjectively or using an empirical procedure. We show that the P\'olya tree prior leads to an analytic expression for the marginal likelihood under the two hypotheses and hence an explicit measure of the probability of the null, $\mathrm{P}(H_0 \mid \mathbf{y}^{(1)}, \mathbf{y}^{(2)})$.
|
Title: Nonparametric methods for volatility density estimation
|
Abstract: Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on discretely sampled continuous time processes and discrete time models will be discussed. The key insight for the analysis is a transformation of the volatility density estimation problem to a deconvolution model for which standard methods exist. Three types of nonparametric density estimators are reviewed: the Fourier-type deconvolution kernel density estimator, a wavelet deconvolution density estimator and a penalized projection estimator. The performance of these estimators will be compared. Key words: stochastic volatility models, deconvolution, density estimation, kernel estimator, wavelets, minimum contrast estimation, mixing
|
Title: A Gradient Descent Algorithm on the Grassmann Manifold for Matrix Completion
|
Abstract: We consider the problem of reconstructing a low-rank matrix from a small subset of its entries. In this paper, we describe the implementation of an efficient algorithm called OptSpace, based on singular value decomposition followed by local manifold optimization, for solving the low-rank matrix completion problem. It has been shown that if the number of revealed entries is large enough, the output of singular value decomposition gives a good estimate for the original matrix, so that local optimization reconstructs the correct matrix with high probability. We present numerical results which show that this algorithm can reconstruct the low rank matrix exactly from a very small subset of its entries. We further study the robustness of the algorithm with respect to noise, and its performance on actual collaborative filtering datasets.
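The first stage, a truncated SVD of the (rescaled) zero-filled observed matrix, can be sketched as follows; the trimming step and the manifold refinement of OptSpace are omitted, and the rank-1 toy problem is an illustrative assumption:

```python
import numpy as np

def svd_initial_estimate(M_zero_filled, mask, rank):
    # First stage of SVD-based completion: rescale the zero-filled matrix
    # by the sampling fraction and keep its top-`rank` singular triplets.
    # (OptSpace additionally trims over-represented rows/columns and then
    # refines this estimate by optimization on the Grassmann manifold.)
    p = mask.mean()                        # fraction of revealed entries
    U, s, Vt = np.linalg.svd(M_zero_filled / p, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Illustrative rank-1 problem with roughly 70% of entries revealed.
rng = np.random.default_rng(0)
u, v = rng.normal(size=(20, 1)), rng.normal(size=(1, 20))
M = u @ v
mask = rng.random(M.shape) < 0.7
M_hat = svd_initial_estimate(M * mask, mask, rank=1)
```

The rescaling by the sampling fraction makes the zero-filled matrix an unbiased estimate of M entrywise, which is why its leading singular subspace already approximates the true one and gives local optimization a good starting point.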
|
Title: Artificial Immune Tissue using Self-Organizing Networks
|
Abstract: As introduced by Bentley et al. (2005), artificial immune systems (AIS) lack tissue, which is present in one form or another in all living multi-cellular organisms. Some have argued that this concept in the context of AIS brings little novelty to the already saturated field of immune-inspired computational research. This article aims to show that such a component of an AIS has the potential to give a data processing algorithm an advantage in terms of data pre-processing, clustering and extraction of features desired by the immune-inspired system. The proposed tissue algorithm is based on self-organizing networks, such as the self-organizing maps (SOM) developed by Kohonen (1996), and an analogy of the so-called Toll-Like Receptors (TLR) affecting the activation function of the clusters developed by the SOM.
|
Title: The UNED systems at SENSEVAL-2
|
Abstract: We have participated in the SENSEVAL-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task. Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first-sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains a strong challenge.
|
Title: Word Sense Disambiguation Based on Mutual Information and Syntactic Patterns
|
Abstract: This paper describes a hybrid system for WSD, presented to the English all-words and lexical-sample tasks, that relies on two different unsupervised approaches. The first one selects the senses according to mutual-information proximity between a context word and a variant of the sense. The second heuristic analyzes the examples of use in the glosses of the senses, so that simple syntactic patterns are inferred. These patterns are matched against the disambiguation contexts. We show that the first heuristic obtains a precision and recall of .58 and .35 respectively in the all-words task, while the second obtains .80 and .25. The high precision obtained motivates deeper research into these techniques. Results for the lexical-sample task are also provided.
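As an illustration of a mutual-information proximity score of the kind the first heuristic relies on, here is a sketch computing pointwise mutual information (PMI) from windowed co-occurrence counts; the mini-corpus and the normalisation by the token total are illustrative simplifications, not the system's actual scoring function:

```python
import math
from collections import Counter

def make_pmi(corpus, window=2):
    # Pointwise mutual information from windowed co-occurrence counts:
    # pmi(w, v) = log( count(w, v) * N / (count(w) * count(v)) ),
    # using the token total N as a common normalisation shortcut.
    word_counts = Counter()
    pair_counts = Counter()
    total = 0
    for sent in corpus:
        word_counts.update(sent)
        total += len(sent)
        for i, w in enumerate(sent):
            for v in sent[i + 1:i + 1 + window]:
                pair_counts[tuple(sorted((w, v)))] += 1
    def pmi(w, v):
        pair = pair_counts[tuple(sorted((w, v)))]
        if pair == 0:
            return float("-inf")           # never co-occurred
        return math.log(pair * total / (word_counts[w] * word_counts[v]))
    return pmi

corpus = [["bank", "river", "water"],
          ["bank", "money", "loan"],
          ["river", "water", "fish"]]
pmi = make_pmi(corpus)
```

A sense variant scoring a high PMI with the context words is the kind of evidence the first heuristic uses to pick a sense; at the scale of a 277-million-word corpus these counts become reliable.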
|
Title: Straight to the Source: Detecting Aggregate Objects in Astronomical Images with Proper Error Control
|
Abstract: The next generation of telescopes will acquire terabytes of image data on a nightly basis. Collectively, these large images will contain billions of interesting objects, which astronomers call sources. The astronomers' task is to construct a catalog detailing the coordinates and other properties of the sources. The source catalog is the primary data product for most telescopes and is an important input for testing new astrophysical theories, but to construct the catalog one must first detect the sources. Existing algorithms for catalog creation are effective at detecting sources, but do not have rigorous statistical error control. At the same time, there are several multiple testing procedures that provide rigorous error control, but they are not designed to detect sources that are aggregated over several pixels. In this paper, we propose a technique that does both, by providing rigorous statistical error control on the aggregate objects themselves rather than the pixels. We demonstrate the effectiveness of this approach on data from the Chandra X-ray Observatory Satellite. Our technique effectively controls the rate of false sources, yet still detects almost all of the sources detected by procedures that do not have such rigorous error control and have the advantage of additional data in the form of follow up observations, which will not be available for upcoming large telescopes. In fact, we even detect a new source that was missed by previous studies. The statistical methods developed in this paper can be extended to problems beyond Astronomy, as we will illustrate with an example from Neuroimaging.
|
Title: The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah
|
Abstract: (ABRIDGED) In previous work, two platforms have been developed for testing computer-vision algorithms for robotic planetary exploration (McGuire et al. 2004b,2005; Bartolo et al. 2007). The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon color, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone-camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colors to test this algorithm. The algorithm robustly recognized previously-observed units by their color, while requiring only a single image or a few images to learn colors as familiar, demonstrating its fast learning capability.
|
Title: Anomaly Detection with Score functions based on Nearest Neighbor Graphs
|