Title: Adapting Heuristic Mastermind Strategies to Evolutionary Algorithms
Abstract: The art of solving the Mastermind puzzle was initiated by Donald Knuth and is already more than 30 years old; despite that, it still receives much attention in operational research and computer games journals, not to mention the nature-inspired stochastic algorithm literature. In this paper we suggest a strategy that allows nature-inspired algorithms to obtain results as good as those based on exhaustive search strategies; in order to do that, we first review, compare and improve current approaches to solving the puzzle; then we test one of these strategies with an estimation of distribution algorithm. Finally, we look for a strategy that falls short of being exhaustive, and is thus amenable to inclusion in nature-inspired algorithms (such as evolutionary or particle swarm algorithms). This paper shows that incorporating local entropy into the fitness function of the evolutionary algorithm makes it a better player than a random one, and gives a rule of thumb on how to incorporate the best heuristic strategies into evolutionary algorithms without incurring an excessive computational cost.
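The abstract does not spell out how the local-entropy term is computed, so the following is only a rough, hedged illustration of an entropy-style score for candidate guesses: the Shannon entropy of the feedback partition a guess induces over the codes still consistent with past answers. The feedback function, board size and colour count below are placeholder assumptions, not the paper's setup.

```python
from collections import Counter
from itertools import product
from math import log2

def feedback(guess, code):
    """Mastermind feedback: (black pegs, white pegs)."""
    black = sum(g == c for g, c in zip(guess, code))
    common = sum(min(guess.count(s), code.count(s)) for s in set(guess))
    return black, common - black

def entropy_score(guess, consistent):
    """Shannon entropy of the feedback partition the guess induces over
    the currently consistent codes (higher = more informative guess)."""
    counts = Counter(feedback(guess, code) for code in consistent)
    n = len(consistent)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Toy board: 3 pegs, 4 colours; all codes are still consistent.
codes = list(product(range(4), repeat=3))
best = max(codes, key=lambda g: entropy_score(g, codes))
print(best, round(entropy_score(best, codes), 3))
```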
Title: Learning an Interactive Segmentation System
Abstract: Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user - a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.
Title: A Model-Based Approach to Predicting Predator-Prey & Friend-Foe Relationships in Ant Colonies
Abstract: Understanding predator-prey relationships among insects is a challenging task in the domain of insect-colony research. This is due to several factors involved, such as determining whether a particular behavior is the result of a predator-prey interaction, a friend-foe interaction or another kind of interaction. In this paper, we analyze a series of predator-prey and friend-foe interactions in two colonies of carpenter ants to better understand and predict such behavior. Using the data gathered, we have also come up with a preliminary model for predicting such behavior under the specific conditions the experiment was conducted in. In this paper, we present the results of our data analysis as well as an overview of the processes involved.
Title: Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
Abstract: A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under nonparametric additive models, it is shown that under some mild technical conditions the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) is also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
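As a rough illustration of the screening idea (not the authors' NIS procedure), one can rank covariates by the goodness of fit of a marginal regression of the response on each covariate alone; below, a low-degree polynomial stands in for the nonparametric smoother, and the ranking statistic, cut-off and toy data are placeholders.

```python
import numpy as np

def marginal_screen(X, y, degree=3, top_k=10):
    """Rank covariates by how well a marginal fit (here a low-degree
    polynomial, standing in for a nonparametric smoother) explains y."""
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        coeffs = np.polyfit(X[:, j], y, degree)        # marginal smoother
        resid = y - np.polyval(coeffs, X[:, j])
        scores[j] = 1.0 - resid.var() / y.var()        # marginal R^2
    return np.argsort(scores)[::-1][:top_k]

# Toy data: only the first two of 200 covariates matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=100)
print(marginal_screen(X, y, top_k=5))
```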
Title: The Gaussian Surface Area and Noise Sensitivity of Degree-$d$ Polynomials
Abstract: We provide asymptotically sharp bounds for the Gaussian surface area and the Gaussian noise sensitivity of polynomial threshold functions. In particular we show that if $f$ is a degree-$d$ polynomial threshold function, then its Gaussian sensitivity at noise rate $\epsilon$ is less than some quantity asymptotic to $\frac{2d\sqrt{\epsilon}}{\pi}$ and the Gaussian surface area is at most $\frac{d}{\sqrt{2\pi}}$. Furthermore these bounds are asymptotically tight as $\epsilon\to 0$ and $f$ the threshold function of a product of $d$ distinct homogeneous linear functions.
Title: Condition Number Analysis of Kernel-based Density Ratio Estimation
Abstract: The ratio of two probability densities can be used for solving various machine learning tasks such as covariate shift adaptation (importance sampling), outlier detection (likelihood-ratio test), and feature selection (mutual information). Recently, several methods of directly estimating the density ratio have been developed, e.g., kernel mean matching, maximum likelihood density ratio estimation, and least-squares density ratio fitting. In this paper, we consider a kernelized variant of the least-squares method and investigate its theoretical properties from the viewpoint of the condition number using smoothed analysis techniques--the condition number of the Hessian matrix determines the convergence rate of optimization and the numerical stability. We show that the kernel least-squares method has a smaller condition number than a version of kernel mean matching and other M-estimators, implying that the kernel least-squares method has preferable numerical properties. We further give an alternative formulation of the kernel least-squares estimator which is shown to possess an even smaller condition number. Numerical studies agree with our theoretical analysis.
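A minimal sketch of a least-squares density-ratio fit with Gaussian kernels and ridge regularisation, in the spirit of the methods discussed; the exact kernelised estimator analysed in the paper may differ, and the bandwidth, regularisation and toy data below are arbitrary assumptions.

```python
import numpy as np

def gauss_kernel(X, C, sigma):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def ls_density_ratio(x_num, x_den, sigma=1.0, lam=1e-3):
    """Least-squares fit of r(x) = p_num(x) / p_den(x) as a kernel model
    r(x) = sum_l theta_l k(x, c_l), with centres at the numerator samples."""
    C = x_num                                   # kernel centres
    Phi_den = gauss_kernel(x_den, C, sigma)     # (n_den, b)
    Phi_num = gauss_kernel(x_num, C, sigma)     # (n_num, b)
    H = Phi_den.T @ Phi_den / len(x_den)        # E_den[k k^T]
    h = Phi_num.mean(axis=0)                    # E_num[k]
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return lambda X: gauss_kernel(X, C, sigma) @ theta

rng = np.random.default_rng(0)
x_num = rng.normal(0.0, 1.0, size=(200, 1))
x_den = rng.normal(0.5, 1.5, size=(200, 1))
ratio = ls_density_ratio(x_num, x_den)
print(ratio(np.zeros((1, 1))))   # estimated density ratio at x = 0
```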
Title: Intrusion Detection In Mobile Ad Hoc Networks Using GA Based Feature Selection
Abstract: Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology and lack of a centralized monitoring point. It is therefore important to search for new architectures and mechanisms to protect wireless networks and mobile computing applications. Intrusion detection systems (IDS) analyze network activities by means of audit data and use patterns of well-known attacks or normal profiles to detect potential attacks. There are two analysis methods: misuse detection and anomaly detection. Misuse detection is not effective against unknown attacks, so the anomaly detection method is used here. In this approach, audit data is collected from each mobile node after simulating the attack and compared with the normal behavior of the system. If there is any deviation from normal behavior, the event is considered an attack. Some of the features of the collected audit data may be redundant or contribute little to the detection process, so it is essential to select the important features to increase the detection rate. This paper focuses on implementing two feature selection methods, namely Markov blanket discovery and a genetic algorithm. In the genetic algorithm approach, a Bayesian network is constructed over the collected features and a fitness function is calculated; features are selected based on their fitness values. Markov blanket discovery also uses a Bayesian network, and features are selected based on the minimum description length. During the evaluation phase, the performance of both approaches is compared based on detection rate and false alarm rate.
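A generic, hedged sketch of GA-based feature selection over binary feature masks; the Bayesian-network-based fitness used in the paper is replaced here by a simple placeholder score, and all parameters and data are illustrative assumptions.

```python
import numpy as np

def ga_feature_selection(fitness, n_features, pop_size=30, n_gen=50,
                         p_mut=0.02, seed=0):
    """Generic GA over boolean feature masks. `fitness` maps a mask to a
    score to maximise (a stand-in for a Bayesian-network-based fitness)."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_features)) < 0.5
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents.
        parents = pop[[max(rng.integers(pop_size, size=2),
                           key=lambda i: scores[i]) for _ in range(pop_size)]]
        # One-point crossover on consecutive pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_features)
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Bit-flip mutation.
        children ^= rng.random(children.shape) < p_mut
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# Placeholder fitness: reward correlation with a synthetic label, penalise size.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(float)

def fitness(mask):
    if not mask.any():
        return -1.0
    corr = np.abs(np.corrcoef(X[:, mask].mean(axis=1), y)[0, 1])
    return corr - 0.01 * mask.sum()

print(np.flatnonzero(ga_feature_selection(fitness, n_features=20)))
```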
Title: Multi-valued Action Languages in CLP(FD)
Abstract: Action description languages, such as A and B, are expressive instruments introduced for formalizing planning domains and planning problem instances. The paper starts by proposing a methodology to encode an action language (with conditional effects and static causal laws), a slight variation of B, using Constraint Logic Programming over Finite Domains. The approach is then generalized to raise the use of constraints to the level of the action language itself. A prototype implementation has been developed, and the preliminary results are presented and discussed. To appear in Theory and Practice of Logic Programming (TPLP)
Title: Variational Bayesian Inference and Complexity Control for Stochastic Block Models
Abstract: It is now widely accepted that knowledge can be acquired from networks by clustering their vertices according to connection profiles. Many methods have been proposed and in this paper we concentrate on the Stochastic Block Model (SBM). The clustering of vertices and the estimation of SBM model parameters have been subject to previous work and numerous inference strategies such as variational Expectation Maximization (EM) and classification EM have been proposed. However, SBM still suffers from a lack of criteria to estimate the number of components in the mixture. To our knowledge, only one model-based criterion, ICL, has been derived for SBM in the literature. It relies on an asymptotic approximation of the Integrated Complete-data Likelihood, and recent studies have shown that it tends to be too conservative in the case of small networks. To tackle this issue, we propose a new criterion that we call ILvb, based on a non-asymptotic approximation of the marginal likelihood. We describe how the criterion can be computed through a variational Bayes EM algorithm.
Title: Representing human and machine dictionaries in Markup languages
Abstract: In this chapter we present the main issues in representing machine readable dictionaries in XML, and in particular according to the Text Encoding Initiative (TEI) guidelines.
Title: Projection Pursuit through $\Phi$-Divergence Minimisation
Abstract: Consider a density defined on a set of very large dimension. It is quite difficult to estimate this density from a data set; however, the problem can be addressed through a projection pursuit methodology. Touboul's article "Projection Pursuit Through Relative Entropy Minimization", 2009, demonstrates the value of the author's method in a very simple case: the factorization of a density into an elliptical component and some residual density. Touboul's work is based on minimizing relative entropy. In the present article, we aim at extending this methodology to the $\Phi$-divergence. Furthermore, we also consider the case when the density to be factorized is estimated from an i.i.d. sample, and we then propose a test for the factorization of the estimated density. Applications include a new goodness-of-fit test for elliptical copulas.
Title: Complexity of Propositional Abduction for Restricted Sets of Boolean Functions
Abstract: Abduction is a fundamental and important form of non-monotonic reasoning. Given a knowledge base explaining how the world behaves it aims at finding an explanation for some observed manifestation. In this paper we focus on propositional abduction, where the knowledge base and the manifestation are represented by propositional formulae. The problem of deciding whether there exists an explanation has been shown to be SigmaP2-complete in general. We consider variants obtained by restricting the allowed connectives in the formulae to certain sets of Boolean functions. We give a complete classification of the complexity for all considerable sets of Boolean functions. In this way, we identify easier cases, namely NP-complete and polynomial cases; and we highlight sources of intractability. Further, we address the problem of counting the explanations and draw a complete picture for the counting complexity.
Title: Notes to Robert et al.: Model criticism informs model choice and model comparison
Abstract: In their letter to PNAS and a comprehensive set of notes on arXiv [arXiv:0909.5673v2], Christian Robert, Kerrie Mengersen and Carla Chen (RMC) represent our approach to model criticism in situations when the likelihood cannot be computed as a way to "contrast several models with each other". In addition, RMC argue that model assessment with Approximate Bayesian Computation under model uncertainty (ABCmu) is unduly challenging and question its Bayesian foundations. We disagree, clarify that ABCmu is a probabilistically sound and powerful tool for criticizing a model against aspects of the observed data, and discuss further the utility of ABCmu.
Title: Multi-Way, Multi-View Learning
Abstract: We extend multi-way, multivariate ANOVA-type analysis to cases where one covariate is the view, with features of each view coming from different, high-dimensional domains. The different views are assumed to be connected by having paired samples; this is a common setup in recent bioinformatics experiments, of which we analyze metabolite profiles in different conditions (disease vs. control and treatment vs. untreated) in different tissues (views). We introduce a multi-way latent variable model for this new task, by extending the generative model of Bayesian canonical correlation analysis (CCA) both to take multi-way covariate information into account as population priors, and by reducing the dimensionality by an integrated factor analysis that assumes the metabolites to come in correlated groups.
Title: On Backtracking in Real-time Heuristic Search
Abstract: Real-time heuristic search algorithms are suitable for situated agents that need to make their decisions in constant time. Since the original work by Korf nearly two decades ago, numerous extensions have been suggested. One of the most intriguing extensions is the idea of backtracking wherein the agent decides to return to a previously visited state as opposed to moving forward greedily. This idea has been empirically shown to have a significant impact on various performance measures. The studies have been carried out in particular empirical testbeds with specific real-time search algorithms that use backtracking. Consequently, the extent to which the trends observed are characteristic of backtracking in general is unclear. In this paper, we present the first entirely theoretical study of backtracking in real-time heuristic search. In particular, we present upper bounds on the solution cost exponential and linear in a parameter regulating the amount of backtracking. The results hold for a wide class of real-time heuristic search algorithms that includes many existing algorithms as a small subclass.
Title: Variational Inducing Kernels for Sparse Convolved Multiple Output Gaussian Processes
Abstract: Interest in multioutput kernel methods is increasing, whether under the guise of multitask learning, multisensor networks or structured output data. From the Gaussian process perspective a multioutput Mercer kernel is a covariance function over correlated output functions. One way of constructing such kernels is based on convolution processes (CP). A key problem for this approach is efficient inference. Alvarez and Lawrence (2009) recently presented a sparse approximation for CPs that enabled efficient inference. In this paper, we extend this work in two directions: we introduce the concept of variational inducing functions to handle potential non-smooth functions involved in the kernel CP construction and we consider an alternative approach to approximate inference based on variational methods, extending the work by Titsias (2009) to the multiple output case. We demonstrate our approaches on prediction of school marks, compiler performance and financial time series.
Title: Composite Binary Losses
Abstract: We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses and give a complete characterisation of the relationship between proper losses and ``classification calibrated'' losses. We also consider the question of the ``best'' surrogate binary loss. We introduce a precise notion of ``best'' and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of ``surrogate tuning''. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.
Title: New Generalization Bounds for Learning Kernels
Abstract: This paper presents several novel generalization bounds for the problem of learning kernels based on the analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log(p) dependency on the number of kernels, p, which is considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a linear combination of p base kernels with an $L_2$ regularization whose dependency on p is only in $p^{1/4}$.
Title: Optimal construction of k-nearest neighbor graphs for identifying noisy clusters
Abstract: We study clustering algorithms based on neighborhood graphs on a random sample of data points. The question we ask is how such a graph should be constructed in order to obtain optimal clustering results. Which type of neighborhood graph should one choose, mutual k-nearest neighbor or symmetric k-nearest neighbor? What is the optimal parameter k? In our setting, clusters are defined as connected components of the t-level set of the underlying probability distribution. Clusters are said to be identified in the neighborhood graph if connected components in the graph correspond to the true underlying clusters. Using techniques from random geometric graph theory, we prove bounds on the probability that clusters are identified successfully, both in a noise-free and in a noisy setting. Those bounds lead to several conclusions. First, k has to be chosen surprisingly high (rather of the order n than of the order log n) to maximize the probability of cluster identification. Secondly, the major difference between the mutual and the symmetric k-nearest neighbor graph occurs when one attempts to detect the most significant cluster only.
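A minimal sketch of the two graph constructions compared in the paper, with clusters read off as connected components; brute-force distances and scipy's connected-components routine are used for brevity, and the toy data and choice of k are illustrative only.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def knn_adjacency(X, k):
    """Boolean matrix A[i, j] = True iff j is among the k nearest neighbours of i."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]
    A = np.zeros(D.shape, dtype=bool)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = True
    return A

def knn_graph_clusters(X, k, mutual=True):
    A = knn_adjacency(X, k)
    G = (A & A.T) if mutual else (A | A.T)   # mutual vs symmetric kNN graph
    n_comp, labels = connected_components(csr_matrix(G), directed=False)
    return n_comp, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(knn_graph_clusters(X, k=8, mutual=True)[0])   # ideally 2 components
```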
Title: Matching 2-D Ellipses to 3-D Circles with Application to Vehicle Pose Estimation
Abstract: Finding the three-dimensional representation of all or a part of a scene from a single two dimensional image is a challenging task. In this paper we propose a method for identifying the pose and location of objects with circular protrusions in three dimensions from a single image and a 3d representation or model of the object of interest. To do this, we present a method for identifying ellipses and their properties quickly and reliably with a novel technique that exploits intensity differences between objects and a geometric technique for matching an ellipse in 2d to a circle in 3d. We apply these techniques to the specific problem of determining the pose and location of vehicles, particularly cars, from a single image. We have achieved excellent pose recovery performance on artificially generated car images and show promising results on real vehicle images. We also make use of the ellipse detection method to identify car wheels from images, with a very high successful match rate.
Title: A Geometric Proof of Calibration
Abstract: We provide yet another proof of the existence of calibrated forecasters; it has two merits. First, it is valid for an arbitrary finite number of outcomes. Second, it is short and simple, and it follows from a direct application of Blackwell's approachability theorem to a carefully chosen vector-valued payoff function and convex target set. Our proof captures the essence of existing proofs based on approachability (e.g., the proof by Foster, 1999 in the case of binary outcomes) and highlights the intrinsic connection between approachability and calibration.
Title: Geometric Representations of Random Hypergraphs
Abstract: A parametrization of hypergraphs based on the geometry of points in $\mathbb{R}^d$ is developed. Informative prior distributions on hypergraphs are induced through this parametrization by priors on point configurations via spatial processes. This prior specification is used to infer conditional independence models or Markov structure of multivariate distributions. Specifically, we can recover both the junction tree factorization as well as the hyper Markov law. This approach offers greater control on the distribution of graph features than Erd\"os-R\'enyi random graphs, supports inference of factorizations that cannot be retrieved by a graph alone, and leads to new Metropolis-Hastings Markov chain Monte Carlo algorithms with both local and global moves in graph space. We illustrate the utility of this parametrization and prior specification using simulations.
Title: A Survey of Paraphrasing and Textual Entailment Methods
Abstract: Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.
Title: On the de la Garza Phenomenon
Abstract: Deriving optimal designs for nonlinear models is in general challenging. One crucial step is to determine the number of support points needed. Current tools handle this on a case-by-case basis. Each combination of model, optimality criterion and objective requires its own proof. The celebrated de la Garza Phenomenon states that under a (p-1)th-degree polynomial regression model, any optimal design can be based on at most p design points, the minimum number of support points such that all parameters are estimable. Does this conclusion also hold for nonlinear models? If the answer is yes, it would be relatively easy to derive any optimal design, analytically or numerically. In this paper, a novel approach is developed to address this question. Using this new approach, it can be easily shown that the de la Garza phenomenon exists for many commonly studied nonlinear models, such as the Emax model, exponential model, three- and four-parameter log-linear models, Emax-PK1 model, as well as many classical polynomial regression models. The proposed approach unifies and extends many well-known results in the optimal design literature. It has four advantages over current tools: (i) it can be applied to many forms of nonlinear models; to continuous or discrete data; to data with homogeneous or non-homogeneous errors; (ii) it can be applied to any design region; (iii) it can be applied to multiple-stage optimal design; and (iv) it can be easily implemented.
Title: P values, confidence intervals, or confidence levels for hypotheses?
Abstract: Null hypothesis significance tests and p values are widely used despite very strong arguments against their use in many contexts. Confidence intervals are often recommended as an alternative, but these do not achieve the objective of assessing the credibility of a hypothesis, and the distinction between confidence and probability is an unnecessary confusion. This paper proposes a more straightforward (probabilistic) definition of confidence, and suggests how the idea can be applied to whatever hypotheses are of interest to researchers. The relative merits of the different approaches are discussed using a series of illustrative examples: usually confidence based approaches seem more transparent and useful, but there are some contexts in which p values may be appropriate. I also suggest some methods for converting results from one format to another. (The attractiveness of the idea of confidence is demonstrated by the widespread persistence of the completely incorrect idea that p=5% is equivalent to 95% confidence in the alternative hypothesis. In this paper I show how p values can be used to derive meaningful confidence statements, and the assumptions underlying the derivation.) Key words: Confidence interval, Confidence level, Hypothesis testing, Null hypothesis significance tests, P value, User friendliness.
Title: Bootstrapping Confidence Levels for Hypotheses about Quadratic (U-Shaped) Regression Models
Abstract: Bootstrapping can produce confidence levels for hypotheses about quadratic regression models - such as whether the U-shape is inverted, and the location of optima. The method has several advantages over conventional methods: it provides more, and clearer, information, and is flexible - it could easily be applied to a wide variety of different types of models. The utility of the method can be enhanced by formulating models with interpretable coefficients, such as the location and value of the optimum. Keywords: Bootstrap resampling; Confidence level; Quadratic model; Regression, U-shape.
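A minimal sketch of the idea under simple assumptions: refit the quadratic on case-resampled bootstrap samples and report the proportion of fits supporting a hypothesis (here, an inverted U, i.e. a negative quadratic coefficient). The paper's reparametrisation in terms of the optimum's location and value is not reproduced, and the toy data are arbitrary.

```python
import numpy as np

def bootstrap_confidence(x, y, n_boot=2000, seed=0):
    """Bootstrap confidence level that the fitted quadratic
    y = a*x^2 + b*x + c is an inverted U (a < 0)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    hits = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample cases with replacement
        a, b, c = np.polyfit(x[idx], y[idx], 2)
        hits += a < 0
    return hits / n_boot

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 80)
y = -0.5 * x ** 2 + x + rng.normal(0, 1, 80)
print(bootstrap_confidence(x, y))                  # near 1 for a clear inverted U
```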
Title: Horvitz-Thompson estimators for functional data: asymptotic confidence bands and optimal allocation for stratified sampling
Abstract: When dealing with very large datasets of functional data, survey sampling approaches are useful in order to obtain estimators of simple functional quantities, without being obliged to store all the data. We propose here a Horvitz--Thompson estimator of the mean trajectory. In the context of a superpopulation framework, we prove under mild regularity conditions that we obtain uniformly consistent estimators of the mean function and of its variance function. With additional assumptions on the sampling design we state a functional Central Limit Theorem and deduce asymptotic confidence bands. Stratified sampling is studied in detail, and we also obtain a functional version of the usual optimal allocation rule considering a mean variance criterion. These techniques are illustrated by means of a test population of N=18902 electricity meters for which we have individual electricity consumption measures every 30 minutes over one week. We show that, compared to simple random sampling without replacement, stratification can both substantially improve the accuracy of the estimators and reduce the width of the global confidence bands.
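The basic estimator behind the paper: with first-order inclusion probabilities pi_i, the mean trajectory is estimated pointwise by (1/N) * sum over sampled units of Y_i(t)/pi_i. The sketch below uses simple random sampling without replacement (pi_i = n/N) on a synthetic population; the curve model and sample size are illustrative assumptions, not the paper's data.

```python
import numpy as np

def horvitz_thompson_mean(Y_sample, pi_sample, N):
    """Pointwise HT estimator of the population mean trajectory.
    Y_sample: (n, T) sampled curves; pi_sample: (n,) inclusion probabilities."""
    return (Y_sample / pi_sample[:, None]).sum(axis=0) / N

# Toy population of N curves observed at T time points.
rng = np.random.default_rng(0)
N, n, T = 18902, 500, 336                      # e.g. one week at 30-minute steps
t = np.linspace(0, 1, T)
population = rng.gamma(2.0, 1.0, (N, 1)) * np.sin(2 * np.pi * t)[None, :]

idx = rng.choice(N, size=n, replace=False)     # SRSWOR: pi_i = n / N
pi = np.full(n, n / N)
mu_hat = horvitz_thompson_mean(population[idx], pi, N)
print(np.abs(mu_hat - population.mean(axis=0)).max())   # max pointwise error
```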
Title: Speech Recognition Oriented Vowel Classification Using Temporal Radial Basis Functions
Abstract: The recent resurgence of interest in spatio-temporal neural networks as a speech recognition tool motivates the present investigation. In this paper an approach was developed based on temporal radial basis functions (TRBF), which offer several advantages: few parameters, fast convergence and time invariance. This application aims to identify vowels taken from natural speech samples from the TIMIT corpus of American speech. We report a recognition accuracy of 98.06 percent in training and 90.13 percent in testing on a subset of 6 vowel phonemes, with the possibility of expanding the vowel set in the future.
Title: Modeling and Application of Series Elastic Actuators for Force Control Multi Legged Robots
Abstract: Series Elastic Actuators provide many benefits in force control of robots in unconstrained environments. These benefits include high force fidelity, extremely low impedance, low friction, and good force control bandwidth. Series Elastic Actuators employ a novel mechanical design architecture which goes against the common machine design principle of "stiffer is better". A compliant element is placed between the gear train and driven load to intentionally reduce the stiffness of the actuator. A position sensor measures the deflection, and the force output is accurately calculated using Hooke's Law (F=Kx). A control loop then servos the actuator to the desired output force. The resulting actuator has inherent shock tolerance, high force fidelity and extremely low impedance. These characteristics are desirable in many applications including legged robots, exoskeletons for human performance amplification, robotic arms, haptic interfaces, and adaptive suspensions. We describe several variations of Series Elastic Actuators that have been developed using both electric and hydraulic components.
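A toy illustration of the force law and the outer force loop described above; the stiffness, gain, control period and simplified motor model are placeholder values, not those of any particular actuator in the paper.

```python
# Toy force-control loop for a series elastic actuator (SEA).
# Spring deflection x is measured; output force follows Hooke's law F = K * x.
K = 5000.0     # spring stiffness [N/m] (placeholder)
KP = 0.001     # gain mapping force error [N] to motor velocity command [m/s]
DT = 0.001     # control period [s]

def control_step(x_deflection, f_desired):
    """One cycle of the outer force loop: compute force from deflection via
    Hooke's law, then command the motor to shrink the force error."""
    f_actual = K * x_deflection          # F = K x
    error = f_desired - f_actual
    motor_velocity_cmd = KP * error      # servo toward the desired force
    return f_actual, motor_velocity_cmd

# Crude simulation: the motor side integrates the velocity command,
# with the load held fixed so the command changes the spring deflection.
x = 0.0
for _ in range(2000):
    f, v_cmd = control_step(x, f_desired=100.0)
    x += v_cmd * DT
print(round(f, 2))   # converges toward the 100 N setpoint
```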
Title: A Novel Feature Extraction for Robust EMG Pattern Recognition
Abstract: Various types of noise are a major problem in the recognition of electromyography (EMG) signals; hence, noise removal is a significant step in EMG signal analysis. White Gaussian noise (WGN) is used to represent interference in this paper. Generally, WGN is difficult to remove using typical filtering, and solutions for removing it are limited. In addition, noise removal is an important step before performing feature extraction, which is used in EMG-based recognition. This research aims to present novel features that tolerate WGN, so that a noise removal algorithm is not needed. Two novel mean and median frequencies (MMNF and MMDF) are presented for robust feature extraction. Sixteen existing features and the two novel ones are evaluated in a noisy environment. WGN with various signal-to-noise ratios (SNRs), i.e. 20-0 dB, was added to the original EMG signal. The results showed that MMNF performed very well, especially on weak EMG signals, compared with the others. The error of MMNF on weak EMG signals with very high noise (0 dB SNR) is about 5-10 percent, followed closely by MMDF and Histogram, whereas the error of the other features is more than 20 percent. On strong EMG signals, the error of MMNF is also lower than that of the other features. Moreover, the combination of MMNF, Histogram of EMG and Willison amplitude is used as the feature vector in a classification task. The experimental results show better recognition in noisy environments than other successful feature candidates. These results demonstrate that MMNF can be used as a new robust feature.
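For reference, the standard mean-frequency (MNF) and median-frequency (MDF) definitions computed from a power spectrum are sketched below; the paper's modified MMNF/MMDF variants are not reproduced, and the surrogate signal and SNR setup are illustrative assumptions.

```python
import numpy as np

def mean_median_frequency(signal, fs):
    """Mean frequency (power-weighted average frequency) and median frequency
    (frequency splitting the spectral power into two equal halves)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mnf = (freqs * spectrum).sum() / spectrum.sum()
    cum = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
    return mnf, mdf

# Crude EMG surrogate plus white Gaussian noise at roughly 0 dB SNR.
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 1, 1 / fs)
emg = rng.normal(size=t.size) * np.sin(2 * np.pi * 1.5 * t)
noisy = emg + rng.normal(scale=emg.std(), size=t.size)
print(mean_median_frequency(noisy, fs))
```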
Title: Performance Analysis of AIM-K-means & K-means in Quality Cluster Generation
Abstract: Among all partition-based clustering algorithms, K-means is the most popular and well-known method. It generally shows impressive results even on considerably large data sets, and its computational complexity does not suffer from the size of the data set. The main disadvantage of this clustering method is the selection of the initial means: if the user does not have adequate knowledge about the data set, it may lead to erroneous results. The algorithm Automatic Initialization of Means (AIM), an extension of K-means, has been proposed to overcome the problem of initial mean generation. In this paper an attempt has been made to compare the performance of the two algorithms through implementation.
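A small demonstration of the initial-means sensitivity that AIM is designed to address: plain Lloyd-style K-means run from a good and a poor set of starting centres. This is not the AIM algorithm itself, and the data and starting points are arbitrary.

```python
import numpy as np

def kmeans(X, init_centers, n_iter=100):
    """Plain Lloyd iterations; the final clustering depends on the starting centres."""
    centers = init_centers.astype(float).copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return centers, labels, sse

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.4, (100, 2)) for m in ((0, 0), (4, 0), (2, 3))])

good = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
poor = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.2]])   # all in one cluster
print(kmeans(X, good)[2], kmeans(X, poor)[2])  # poor init -> typically higher SSE
```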
Title: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
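A compact GP-UCB sketch on a one-dimensional grid with a squared-exponential kernel; the exploration constant beta is a fixed placeholder rather than the schedule prescribed by the paper's regret analysis, and the kernel hyperparameters and test function are arbitrary.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-2):
    """GP posterior mean and variance on a grid (unit prior variance)."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(x_grid, x_obs)
    mu = k_star @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.einsum('ij,ij->i', k_star @ np.linalg.inv(K), k_star)
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, x_grid, n_rounds=30, beta=4.0, noise=1e-2, seed=0):
    """GP-UCB: repeatedly query the point maximising mu(x) + sqrt(beta) * sigma(x)."""
    rng = np.random.default_rng(seed)
    x_obs = np.array([x_grid[rng.integers(len(x_grid))]])
    y_obs = np.array([f(x_obs[-1]) + noise * rng.normal()])
    for _ in range(n_rounds):
        mu, var = gp_posterior(x_obs, y_obs, x_grid, noise)
        x_next = x_grid[np.argmax(mu + np.sqrt(beta * var))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next) + noise * rng.normal())
    return x_obs, y_obs

x_grid = np.linspace(0, 1, 200)
x_obs, y_obs = gp_ucb(lambda x: np.sin(6 * x), x_grid)
print(x_obs[np.argmax(y_obs)])   # should approach the maximiser of sin(6x), ~0.26
```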
Title: Restricted Eigenvalue Conditions on Subgaussian Random Matrices
Abstract: It is natural to ask: what kinds of matrices satisfy the Restricted Eigenvalue (RE) condition? In this paper, we associate the RE condition (Bickel-Ritov-Tsybakov 09) with the complexity of a subset of the sphere in $\mathbb{R}^p$, where $p$ is the dimensionality of the data, and show that a class of random matrices with independent rows, but not necessarily independent columns, satisfy the RE condition when the sample size is above a certain lower bound. Here we explicitly introduce an additional covariance structure to the class of random matrices already known to satisfy the Restricted Isometry Property as defined in Candes and Tao 05 (and hence the RE condition), in order to compose a broader class of random matrices for which the RE condition holds. In this case, tools from geometric functional analysis for characterizing the intrinsic low-dimensional structures associated with the RE condition have been crucial in analyzing the sample complexity and understanding its statistical implications for high dimensional data.
Title: The assessment and planning of non-inferiority trials for retention of effect hypotheses - towards a general approach