Title: A Bernstein-type inequality for stochastic processes of quadratic forms of Gaussian variables
|
Abstract: We introduce a Bernstein-type inequality which serves to uniformly control quadratic forms of Gaussian variables. The latter can, for example, be used to derive sharp model selection criteria for linear estimation in linear regression and linear inverse problems via penalization, and we do not exclude that its scope of application can be made even broader.
|
Title: Extension of Path Probability Method to Approximate Inference over Time
|
Abstract: There has been a tremendous growth in publicly available digital video footage over the past decade. This has necessitated the development of new techniques in computer vision geared towards efficient analysis, storage and retrieval of such data. Many mid-level computer vision tasks such as segmentation, object detection, tracking, etc. involve an inference problem based on the available video data. Video data has a high degree of spatial and temporal coherence, a property that must be intelligently leveraged in order to obtain better results. Graphical models, such as Markov Random Fields, have emerged as a powerful tool for such inference problems. They are naturally suited for expressing the spatial dependencies present in video data. It is, however, not clear how to extend the existing techniques to the problem of inference over time. This thesis explores the Path Probability Method, a variational technique in statistical mechanics, in the context of graphical models and approximate inference problems. It extends the method to a general framework for problems involving inference in time, resulting in the DynBP algorithm. We explore the relation of this algorithm to existing techniques and find it competitive with existing approaches. The main contributions of this thesis are the extended GBP algorithm, the extension of Path Probability Methods to the DynBP algorithm, and the relationship between them. We have also explored some applications in computer vision involving temporal evolution, with promising results.
|
Title: Randomized Algorithms for Large scale SVMs
|
Abstract: We propose a randomized algorithm for training Support Vector Machines (SVMs) on large datasets. Using ideas from random projections, we show that the combinatorial dimension of SVMs is $O(\log n)$ with high probability. This estimate of the combinatorial dimension is used to derive an iterative algorithm, called RandSVM, which at each step calls an existing solver to train an SVM on a randomly chosen subset of size $O(\log n)$. The algorithm has probabilistic guarantees and is capable of training SVMs with kernels for both classification and regression problems. Experiments done on synthetic and real-life data sets demonstrate that the algorithm scales up existing SVM learners without loss of accuracy.
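The iterative scheme described above lends itself to a short sketch. The following is a minimal, hypothetical illustration (not the authors' RandSVM implementation), assuming scikit-learn's SVC as the inner solver, labels in {-1, +1}, and an illustrative margin-violation rule for growing the working set:

```python
import numpy as np
from sklearn.svm import SVC

def rand_svm(X, y, subset_size, n_iter=20, seed=0):
    """Illustrative RandSVM-style loop (hypothetical details): train an SVM
    on a small working set of size O(log n), then enlarge the working set
    with points that violate the current margin, and repeat.
    Assumes labels y are in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    work = rng.choice(n, size=min(subset_size, n), replace=False)
    model = None
    for _ in range(n_iter):
        model = SVC(kernel="rbf").fit(X[work], y[work])
        margins = y * model.decision_function(X)
        violators = np.where(margins < 1.0)[0]   # points outside the margin
        if violators.size == 0:
            break                                # every point is consistent
        extra = rng.choice(violators,
                           size=min(subset_size, violators.size),
                           replace=False)
        work = np.union1d(work, extra)
    return model

# model = rand_svm(X, y, subset_size=int(10 * np.log(len(X))))
```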
|
Title: Random scattering of bits by prediction
|
Abstract: We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter, which is defined as the ratio between the lengths of the compressed and uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density $\rho$ while bad learners have a high $\rho$. Bad learners generate mistake sequences that are atypically complex or diverge stochastically from a purely random Bernoulli sequence. Good learners generate typically complex sequences with low divergence from Bernoulli sequences, and they include mistake sequences generated by the Bayes optimal predictor. In the static algorithmic interference model, the learner acts as a static structure which "scatters" the bits of an input sequence (to be predicted) in proportion to its information density $\rho$, thereby deforming its randomness characteristics.
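The information density $\rho$ defined above, the ratio of compressed to uncompressed size of the file containing the decision rule, can be estimated with any general-purpose compressor. The sketch below uses zlib and pickle purely for illustration; the abstract does not specify which compressor or serialization the authors used:

```python
import pickle
import zlib

def information_density(decision_rule) -> float:
    """rho = (length of compressed file) / (length of uncompressed file)
    for a serialized decision rule; compressor and serializer are
    illustrative choices, not those of the paper."""
    raw = pickle.dumps(decision_rule)
    return len(zlib.compress(raw, level=9)) / len(raw)

# A very regular (low-order) rule compresses well, i.e. gives a small rho:
print(information_density({"weights": [0.0] * 1000}))
```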
|
Title: FDR control with adaptive procedures and FDR monotonicity
|
Abstract: The steep rise in availability and usage of high-throughput technologies in biology brought with it a clear need for methods to control the False Discovery Rate (FDR) in multiple tests. Benjamini and Hochberg (BH) introduced in 1995 a simple procedure and proved that it provided a bound on the expected value, $\leq q$. Since then, many authors tried to improve the BH bound, with one approach being designing adaptive procedures, which aim at estimating the number of true null hypothesis in order to get a better FDR bound. Our two main rigorous results are the following: (i) a theorem that provides a bound on the FDR for adaptive procedures that use any estimator for the number of true hypotheses ($m_0$), (ii) a theorem that proves a monotonicity property of general BH-like procedures, both for the case where the hypotheses are independent. We also propose two improved procedures for which we prove FDR control for the independent case, and demonstrate their advantages over several available bounds, on simulated data and on a large number of gene expression data sets. Both applications are simple and involve a similar amount of computation as the original BH procedure. We compare the performance of our proposed procedures with BH and other procedures and find that in most cases we get more power for the same level of statistical significance.
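For reference, the original (non-adaptive) BH step-up procedure mentioned above fits in a few lines. The adaptive variants studied in the paper additionally plug in an estimate of $m_0$, which is not shown in this sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical BH step-up procedure: sort the p-values, find the largest
    k with p_(k) <= k*q/m, and reject the k hypotheses with the smallest
    p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest index meeting the bound
        rejected[order[:k + 1]] = True
    return rejected

# Example with hypothetical p-values:
# mask = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6], q=0.05)
```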
|
Title: Kinematic calibration of Orthoglide-type mechanisms from observation of parallel leg motions
|
Abstract: The paper proposes a new calibration method for parallel manipulators that allows efficient identification of the joint offsets using observations of the manipulator leg parallelism with respect to the base surface. The method employs a simple and low-cost measuring system, which evaluates deviation of the leg location during motions that are assumed to preserve the leg parallelism for the nominal values of the manipulator parameters. Using the measured deviations, the developed algorithm estimates the joint offsets that are treated as the most essential parameters to be identified. The validity of the proposed calibration method and efficiency of the developed numerical algorithms are confirmed by experimental results. The sensitivity of the measurement methods and the calibration accuracy are also studied.
|
Title: A Method for Extraction and Recognition of Isolated License Plate Characters
|
Abstract: A method to extract and recognize isolated characters in license plates is proposed. In the extraction stage, the proposed method detects isolated characters by using a Difference-of-Gaussian (DOG) function. The DOG function, similar to the Laplacian of Gaussian function, was proven to produce the most stable image features compared to a range of other possible image functions. The candidate characters are extracted by performing connected component analysis on DOG images at different scales. In the recognition stage, a novel feature vector named the accumulated gradient projection vector (AGPV) is used to compare the candidate character with the standard ones. The AGPV is calculated by first projecting pixels of similar gradient orientations onto specific axes, and then accumulating the projected gradient magnitudes along each axis. In the experiments, the AGPVs are shown to be invariant to image scaling and rotation, and robust to noise and illumination change.
|
Title: Stiffness Analysis Of Multi-Chain Parallel Robotic Systems
|
Abstract: The paper presents a new stiffness modelling method for multi-chain parallel robotic manipulators with flexible links and compliant actuating joints. In contrast to other works, the method involves a FEA-based link stiffness evaluation and employs a new solution strategy for the kinetostatic equations, which allows computing the stiffness matrix for singular postures and taking into account the influence of external forces. The advantages of the developed technique are confirmed by application examples, which deal with the stiffness analysis of a parallel manipulator of the Orthoglide family.
|
Title: Maximum Entropy Estimation for Survey sampling
|
Abstract: Calibration methods have been widely studied in survey sampling over the last decades. Viewing calibration as an inverse problem, we extend the calibration technique by using a maximum entropy method. Finding the optimal weights is achieved by considering random weights and looking for a discrete distribution which maximizes an entropy under the calibration constraint. This method provides a new framework for the computation of such estimates and for the investigation of their statistical properties.
|
Title: Efficient Calculation of P-value and Power for Quadratic Form Statistics in Multilocus Association Testing
|
Abstract: We address the asymptotic and approximate distributions of a large class of test statistics with quadratic forms used in association studies. The statistics of interest do not necessarily follow a chi-square distribution and take the general form $D=X^T A X$, where $X$ follows the multivariate normal distribution, and $A$ is a general similarity matrix which may or may not be positive semi-definite. We show that $D$ can be written as a linear combination of independent chi-square random variables, whose distribution can be approximated by a chi-square or the difference of two chi-square distributions. In the setting of association testing, our methods are especially useful in two situations. First, for a genome screen, the required significance level is much smaller than 0.05 due to multiple comparisons, and estimation of p-values using permutation procedures is particularly challenging. An efficient and accurate estimation procedure would therefore be useful. Second, in a candidate gene study based on haplotypes, when phase is unknown, a computationally expensive method, the EM algorithm, is usually required to infer haplotype frequencies. Because the EM algorithm is needed for each permutation, this results in a substantial computational burden, which can be eliminated with our mathematical solution. We assess the practical utility of our method using extensive simulation studies based on two example statistics and apply it to find the sample size needed for a typical candidate gene association study when phase information is not available. Our method can be applied to any quadratic form statistic and therefore should be of general interest.
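The decomposition described above can be turned into a p-value computation along the following lines: for $X \sim N(0, \Sigma)$, $D = X^T A X$ is distributed as $\sum_i \lambda_i \chi^2_1$, where the $\lambda_i$ are the eigenvalues of $A\Sigma$. The sketch below matches the first two moments to a scaled chi-square (a Satterthwaite-type approximation); it is an illustrative stand-in, not necessarily the exact approximation used in the paper, and is most reliable when all $\lambda_i$ are nonnegative:

```python
import numpy as np
from scipy.stats import chi2

def qform_pvalue(d_obs, A, Sigma):
    """Approximate P(X^T A X >= d_obs) for X ~ N(0, Sigma).
    Step 1: D is a weighted sum of chi2(1) variables with weights equal to
            the eigenvalues of A @ Sigma.
    Step 2: match the first two moments to a scaled chi-square
            (Satterthwaite-type approximation; illustrative only)."""
    lam = np.linalg.eigvals(A @ Sigma).real
    mean = lam.sum()                   # E[D]   = sum(lambda_i)
    var = 2.0 * (lam ** 2).sum()       # Var[D] = 2 * sum(lambda_i^2)
    scale = var / (2.0 * mean)
    dof = 2.0 * mean ** 2 / var
    return chi2.sf(d_obs / scale, df=dof)

# Hypothetical example with an exchangeable similarity matrix:
# p = qform_pvalue(12.3, A=np.eye(5) + 0.2, Sigma=np.eye(5))
```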
|
Title: Efficient Simulation of a Bivariate Exponential Conditionals Distribution
|
Abstract: The bivariate distribution with exponential conditionals (BEC) was introduced by Arnold and Strauss [Bivariate distributions with exponential conditionals, J. Amer. Statist. Assoc. 83 (1988) 522--527]. This work presents a simple and fast algorithm for simulating random variates from this density.
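The paper's fast algorithm is not detailed in the abstract, but a naive rejection sampler for one common parameterization of the BEC density, $f(x,y) \propto \exp(-ax - by - cxy)$ for $x, y > 0$, is easy to write and serves as a correctness baseline: propose from independent exponentials and accept with probability $\exp(-cxy)$.

```python
import numpy as np

def sample_bec(a, b, c, size, seed=None):
    """Naive rejection sampler for a BEC density proportional to
    exp(-a*x - b*y - c*x*y), x, y > 0, c >= 0 (one common
    parameterization). Proposal: independent Exp(a), Exp(b); accept with
    probability exp(-c*x*y). A baseline, not the paper's fast algorithm."""
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < size:
        x = rng.exponential(1.0 / a)
        y = rng.exponential(1.0 / b)
        if rng.random() < np.exp(-c * x * y):
            samples.append((x, y))
    return np.array(samples)

# pairs = sample_bec(a=1.0, b=2.0, c=0.5, size=1000)
```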
|
Title: Towards Multimodal Content Representation
|
Abstract: Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance: - the understanding of possibly imprecise, partial or ambiguous multimodal input; - the generation of coordinated, cohesive, and coherent multimodal presentations; - the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g. text, audio, video). The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic representation framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group on multimodal meaning representation that was active during the workshop (see http://www.dfki.de/ wahlster/Dagstuhl_Multi_Modality, Working Group 4).
|
Title: Rumors in a Network: Who's the Culprit?
|
Abstract: We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with a variant of the popular SIR model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability goes to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like.
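On a tree, rumor centrality is commonly computed as $N!$ divided by the product of the sizes of all subtrees obtained by rooting the tree at the candidate node, and the source estimate is the node maximizing this quantity; the abstract only names the quantity, so take this formula as an assumption from the rumor-source literature. A straightforward (non-optimized) sketch, using networkx for the graph structure:

```python
import math
import networkx as nx

def rumor_centrality(tree: nx.Graph, v) -> float:
    """Rumor centrality of node v in a tree: N! divided by the product of
    the sizes of all subtrees when the tree is rooted at v (formula assumed
    from the rumor-source literature, not stated in the abstract)."""
    sizes = {}

    def subtree_size(node, parent):
        s = 1
        for nb in tree.neighbors(node):
            if nb != parent:
                s += subtree_size(nb, node)
        sizes[node] = s
        return s

    subtree_size(v, None)
    return math.factorial(tree.number_of_nodes()) / math.prod(sizes.values())

# The estimated source is the node maximizing rumor centrality:
# src = max(infected_tree.nodes, key=lambda u: rumor_centrality(infected_tree, u))
```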
|
Title: The meta book and size-dependent properties of written language
|
Abstract: Evidence is given for a systematic text-length dependence of the power-law index gamma of a single book. The estimated gamma values are consistent with a monotonic decrease from 2 to 1 with increasing length of a text. A direct connection to an extended Heaps' law is explored. The infinite-book limit is, as a consequence, proposed to be given by gamma = 1 instead of the value gamma = 2 expected if Zipf's law were ubiquitously applicable. In addition, we explore the idea that the systematic text-length dependence can be described by a meta book concept, which is an abstract representation reflecting the word-frequency structure of a text. According to this concept, the word-frequency distribution of a text of a certain length written by a single author has the same characteristics as a text of the same length pulled out from an imaginary complete infinite corpus written by the same author.
|
Title: Telling cause from effect based on high-dimensional observations
|
Abstract: We describe a method for inferring linear causal relations among multi-dimensional variables. The idea is to use an asymmetry between the distributions of cause and effect that occurs if both the covariance matrix of the cause and the structure matrix mapping cause to the effect are independently chosen. The method works for both stochastic and deterministic causal relations, provided that the dimensionality is sufficiently high (in some experiments, 5 was enough). It is applicable to Gaussian as well as non-Gaussian data.
|
Title: Initialization Free Graph Based Clustering
|
Abstract: This paper proposes an original approach to clustering multi-component data sets, including an estimation of the number of clusters. From the construction of a minimal spanning tree with Prim's algorithm, and the assumption that the vertices are approximately distributed according to a Poisson distribution, the number of clusters is estimated by thresholding the Prim's trajectory. The corresponding cluster centroids are then computed in order to initialize the generalized Lloyd's algorithm, also known as $K$-means, which circumvents initialization problems. Some results are derived for evaluating the false positive rate of our cluster detection algorithm, with the help of approximations relevant in Euclidean spaces. Metrics used for measuring similarity between multi-dimensional data points are based on symmetrical divergences. The use of these informational divergences together with the proposed method leads to better results, compared to other clustering methods, for the problem of astrophysical data processing. Some applications of this method in the multi/hyper-spectral imagery domain, to a satellite view of Paris and to an image of the planet Mars, are also presented. In order to demonstrate the usefulness of divergences in our problem, the method with an informational divergence as the similarity measure is compared with the same method using classical metrics. In the astrophysics application, we also compare the method with spectral clustering algorithms.
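A minimal sketch of the pipeline described above: grow a minimum spanning tree with Prim's algorithm, cut the trajectory of accepted edge lengths where it jumps, and seed K-means with the resulting group centroids. The sketch uses plain Euclidean distances and a fixed threshold in place of the informational divergences and Poisson-based test advocated in the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def prim_trajectory_clustering(X, threshold):
    """Grow an MST with Prim's algorithm, cut the trajectory of accepted
    edge lengths wherever it exceeds `threshold` (a stand-in for the
    Poisson-based test), and seed K-means with the resulting centroids."""
    n = X.shape[0]
    dist = cdist(X, X)                 # Euclidean; the paper uses divergences
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()              # cheapest connection to the current tree
    order, lengths = [0], []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        order.append(j)
        lengths.append(best[j])        # the Prim trajectory
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    cuts = np.where(np.asarray(lengths) > threshold)[0]
    labels = np.zeros(n, dtype=int)
    for c, seg in enumerate(np.split(np.asarray(order), cuts + 1)):
        labels[seg] = c
    k = len(cuts) + 1
    centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return KMeans(n_clusters=k, init=centroids, n_init=1).fit(X).labels_

# labels = prim_trajectory_clustering(X, threshold=2.5)  # threshold is data-dependent
```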
|
Title: Manipulation and gender neutrality in stable marriage procedures
|
Abstract: The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst the Gale-Shapley algorithm is computationally easy to manipulate, we prove that there exist stable marriage procedures which are NP-hard to manipulate. We also consider the relationship between voting theory and stable marriage procedures, showing that voting rules which are NP-hard to manipulate can be used to define stable marriage procedures which are themselves NP-hard to manipulate. Finally, we consider the issue that stable marriage procedures like Gale-Shapley favour one gender over the other, and we show how to use voting rules to make any stable marriage procedure gender neutral.
|
Title: Dealing with incomplete agents' preferences and an uncertain agenda in group decision making via sequential majority voting
|
Abstract: We consider multi-agent systems where agents' preferences are aggregated via sequential majority voting: each decision is taken by performing a sequence of pairwise comparisons where each comparison is a weighted majority vote among the agents. Incompleteness in the agents' preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. In addition, there may be uncertainty about how the preferences are aggregated. For example, the agenda (a tree whose leaves are labelled with the decisions being compared) may not yet be known or fixed. We therefore study how to determine collectively optimal decisions (also called winners) when preferences may be incomplete, and when the agenda may be uncertain. We show that it is computationally easy to determine if a candidate decision always wins, or may win, whatever the agenda. On the other hand, it is computationally hard to know whether a candidate decision wins in at least one agenda for at least one completion of the agents' preferences. These results hold even if the agenda must be balanced so that each candidate decision faces the same number of majority votes. Such results are useful for reasoning about preference elicitation. They help understand the complexity of tasks such as determining if a decision can be taken collectively, as well as knowing if the winner can be manipulated by appropriately ordering the agenda.
|
Title: Elicitation strategies for fuzzy constraint problems with missing preferences: algorithms and experimental studies
|
Abstract: Fuzzy constraints are a popular approach to handle preferences and over-constrained problems in scenarios where one needs to be cautious, such as in medical or space applications. We consider here fuzzy constraint problems where some of the preferences may be missing. This models, for example, settings where agents are distributed and have privacy issues, or where there is an ongoing preference elicitation process. In this setting, we study how to find a solution which is optimal irrespective of the missing preferences. In the process of finding such a solution, we may elicit preferences from the user if necessary. However, our goal is to ask the user as little as possible. We define a combined solving and preference elicitation scheme with a large number of different instantiations, each corresponding to a concrete algorithm which we compare experimentally. We compute both the number of elicited preferences and the "user effort", which may be larger, as it contains all the preference values the user has to compute to be able to respond to the elicitation requests. While the number of elicited preferences is important when the concern is to communicate as little information as possible, the user effort measures also the hidden work the user has to do to be able to communicate the elicited preferences. Our experimental results show that some of our algorithms are very good at finding a necessarily optimal solution while asking the user for only a very small fraction of the missing preferences. The user effort is also very small for the best algorithms. Finally, we test these algorithms on hard constraint problems with possibly missing constraints, where the aim is to find feasible solutions irrespective of the missing constraints.
|
Title: Flow-Based Propagators for the SEQUENCE and Related Global Constraints
|
Abstract: We propose new filtering algorithms for the SEQUENCE constraint and some extensions of the SEQUENCE constraint based on network flows. We enforce domain consistency on the SEQUENCE constraint in $O(n^2)$ time down a branch of the search tree. This improves upon the best existing domain consistency algorithm by a factor of $O(\log n)$. The flows used in these algorithms are derived from a linear program. Some of them differ from the flows used to propagate global constraints like GCC since the domains of the variables are encoded as costs on the edges rather than capacities. Such flows are efficient for maintaining bounds consistency over large domains and may be useful for other global constraints.
|
Title: The Weighted CFG Constraint
|
Abstract: We introduce the weighted CFG constraint and propose a propagation algorithm that enforces domain consistency in $O(n^3|G|)$ time. We show that this algorithm can be decomposed into a set of primitive arithmetic constraints without hindering propagation.
|
Title: Prediction of Ordered Random Effects in a Simple Small Area Model
|
Abstract: Prediction of a vector of ordered parameters or part of it arises naturally in the context of Small Area Estimation (SAE). For example, one may want to estimate the parameters associated with the top ten areas, the best or worst area, or a certain percentile. We use a simple SAE model to show that estimation of ordered parameters by the corresponding ordered estimates of each area separately does not yield good results with respect to MSE. Shrinkage-type predictors, with an appropriate amount of shrinkage for the particular problem of ordered parameters, are considerably better, and their performance is close to that of the optimal predictors, which cannot in general be computed explicitly.
|
Title: Discrete MDL Predicts in Total Variation
|
Abstract: The Minimum Description Length (MDL) principle selects the model that has the shortest code for data plus model. We show that for a countable class of models, MDL predictions are close to the true distribution in a strong sense. The result is completely general: no independence, ergodicity, stationarity, identifiability, or other assumption on the model class needs to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically predict (merge with) the true measure in the class in total variation distance. Implications for non-i.i.d. domains like time-series forecasting, discriminative learning, and reinforcement learning are discussed.
|
Title: Scalable Inference for Latent Dirichlet Allocation
|
Abstract: We investigate the problem of learning a topic model - the well-known Latent Dirichlet Allocation - in a distributed manner, using a cluster of C processors and dividing the corpus to be learned equally among them. We propose a simple approximate method that can be tuned, trading speed for accuracy according to the task at hand. Our approach is asynchronous, and therefore suitable for clusters of heterogeneous machines.
|
Title: Nonparametric inference for competing risks current status data with continuous, discrete or grouped observation times
|
Abstract: New methods and theory have recently been developed to nonparametrically estimate cumulative incidence functions for competing risks survival data subject to current status censoring. In particular, the limiting distribution of the nonparametric maximum likelihood estimator and a simplified "naive estimator" have been established under certain smoothness conditions. In this paper, we establish the large-sample behavior of these estimators in two additional models, namely when the observation time distribution has discrete support and when the observation times are grouped. These asymptotic results are applied to the construction of confidence intervals in the three different models. The methods are illustrated on two data sets regarding the cumulative incidence of (i) different types of menopause from a cross-sectional sample of women in the United States and (ii) subtype-specific HIV infection from a sero-prevalence study in injecting drug users in Thailand.
|
Title: Hybrid Intrusion Detection and Prediction multiAgent System HIDPAS
|
Abstract: This paper proposes an intrusion detection and prediction system based on uncertain and imprecise inference networks, together with its implementation. Given a history of sessions, we propose a supervised learning method coupled with a classifier that extracts the knowledge needed to identify whether an intrusion is present in a session and, if so, to recognize its type and to predict the possible intrusions that will follow it. The proposed system takes into account the uncertainty and imprecision that can affect the statistical data in the history. Systematically using a single probability distribution to represent this type of knowledge presupposes overly rich subjective information and risks being partly arbitrary. One of the first objectives of this work was therefore to ensure consistency between the way we represent information and the information we actually have.
|
Title: Eignets for function approximation on manifolds
|
Abstract: Let $\XX$ be a compact, smooth, connected, Riemannian manifold without boundary, and $G:\XX\times\XX\to \RR$ be a kernel. Analogous to a radial basis function network, an eignet is an expression of the form $\sum_{j=1}^M a_j G(\circ,y_j)$, where $a_j\in\RR$, $y_j\in\XX$, $1\le j\le M$. We describe a deterministic, universal algorithm for constructing an eignet for approximating functions in $L^p(\mu;\XX)$ for a general class of measures $\mu$ and kernels $G$. Our algorithm yields linear operators. Using the minimal separation amongst the centers $y_j$ as the cost of approximation, we give modulus of smoothness estimates for the degree of approximation by our eignets, and show by means of a converse theorem that these are the best possible. We also give estimates on the coefficients $a_j$ in terms of the norm of the eignet. Finally, we demonstrate that if any sequence of eignets satisfies the optimal estimates for the degree of approximation of a smooth function, measured in terms of the minimal separation, then the derivatives of the eignets also approximate the corresponding derivatives of the target function in an optimal manner.
|
Title: SpicyMKL
|
Abstract: We propose a new optimization algorithm for Multiple Kernel Learning (MKL) called SpicyMKL, which is applicable to general convex loss functions and general types of regularization. The proposed SpicyMKL iteratively solves smooth minimization problems; thus, there is no need to solve SVM, LP, or QP internally. SpicyMKL can be viewed as a proximal minimization method and converges super-linearly. The cost of the inner minimization is roughly proportional to the number of active kernels. Therefore, when we aim for a sparse kernel combination, our algorithm scales well with an increasing number of kernels. Moreover, we give a general block-norm formulation of MKL that includes non-sparse regularizations, such as elastic-net and $\ell_p$-norm regularizations. Extending SpicyMKL, we propose an efficient optimization method for the general regularization framework. Experimental results show that our algorithm is faster than existing methods, especially when the number of kernels is large (> 1000).
|
Title: On the Scope of the Universal-Algebraic Approach to Constraint Satisfaction
|
Abstract: The universal-algebraic approach has proved a powerful tool in the study of the complexity of CSPs. This approach has previously been applied to the study of CSPs with finite or (infinite) omega-categorical templates, and relies on two facts. The first is that in finite or omega-categorical structures A, a relation is primitive positive definable if and only if it is preserved by the polymorphisms of A. The second is that every finite or omega-categorical structure is homomorphically equivalent to a core structure. In this paper, we present generalizations of these facts to infinite structures that are not necessarily omega-categorical. (This abstract has been severely curtailed by the space constraints of arXiv -- please read the full abstract in the article.) Finally, we present applications of our general results to the description and analysis of the complexity of CSPs. In particular, we give general hardness criteria based on the absence of polymorphisms that depend on more than one argument, and we present a polymorphism-based description of those CSPs that are first-order definable (and therefore can be solved in polynomial time).
|
Title: Breaking Generator Symmetry
|
Abstract: Dealing with large numbers of symmetries is often problematic. One solution is to focus on just symmetries that generate the symmetry group. Whilst there are special cases where breaking just the symmetries in a generating set is complete, there are also cases where no irredundant generating set eliminates all symmetry. However, focusing on just generators improves tractability. We prove that it is polynomial in the size of the generating set to eliminate all symmetric solutions, but NP-hard to prune all symmetric values. Our proof considers row and column symmetry, a common type of symmetry in matrix models where breaking just generator symmetries is very effective. We show that propagating a conjunction of lexicographical ordering constraints on the rows and columns of a matrix of decision variables is NP-hard.
|
Title: Bounding the Sensitivity of Polynomial Threshold Functions
|
Abstract: We give the first non-trivial upper bounds on the average sensitivity and noise sensitivity of polynomial threshold functions. More specifically, for a Boolean function $f$ on $n$ variables equal to the sign of a real, multivariate polynomial of total degree $d$, we prove: 1) the average sensitivity of $f$ is at most $O(n^{1-1/(4d+6)})$ (we also give a combinatorial proof of the bound $O(n^{1-1/2^d})$); 2) the noise sensitivity of $f$ with noise rate $\delta$ is at most $O(\delta^{1/(4d+6)})$. Previously, only bounds for the linear case were known. Along the way we show new structural theorems about random restrictions of polynomial threshold functions obtained via hypercontractivity. These structural results may be of independent interest as they provide a generic template for transforming problems related to polynomial threshold functions defined on the Boolean hypercube to polynomial threshold functions defined in Gaussian space.
|
Title: Dirichlet Process Mixtures of Generalized Linear Models
|
Abstract: We propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings.
|
Title: Learning Gaussian Tree Models: Analysis of Error Exponents and Extremal Structures
|
Abstract: The problem of learning tree-structured Gaussian graphical models from independent and identically distributed (i.i.d.) samples is considered. The influence of the tree structure and the parameters of the Gaussian distribution on the learning rate as the number of samples increases is discussed. Specifically, the error exponent corresponding to the event that the estimated tree structure differs from the actual unknown tree structure of the distribution is analyzed. Finding the error exponent reduces to a least-squares problem in the very noisy learning regime. In this regime, it is shown that the extremal tree structure that minimizes the error exponent is the star for any fixed set of correlation coefficients on the edges of the tree. If the magnitudes of all the correlation coefficients are less than 0.63, it is also shown that the tree structure that maximizes the error exponent is the Markov chain. In other words, the star and the chain graphs represent the hardest and the easiest structures to learn in the class of tree-structured Gaussian graphical models. This result can also be intuitively explained by correlation decay: pairs of nodes which are far apart, in terms of graph distance, are unlikely to be mistaken as edges by the maximum-likelihood estimator in the asymptotic regime.
|