Columns: id, title, categories, abstract
1206.1892
Computing the degree of a lattice ideal of dimension one
math.AC cs.IT math.AG math.IT
We show that the degree of a graded lattice ideal of dimension 1 is the order of the torsion subgroup of the quotient group of the lattice. This gives an efficient method to compute the degree of this type of lattice ideals.
1206.1898
A Nonparametric Conjugate Prior Distribution for the Maximizing Argument of a Noisy Function
stat.ML cs.AI math.ST stat.TH
We propose a novel Bayesian approach to solve stochastic optimization problems that involve finding extrema of noisy, nonlinear functions. Previous work has focused on representing possible functions explicitly, which leads to a two-step procedure of first, doing inference over the function space and second, finding the extrema of these functions. Here we skip the representation step and directly model the distribution over extrema. To this end, we devise a non-parametric conjugate prior based on a kernel regressor. The resulting posterior distribution directly captures the uncertainty over the maximum of the unknown function. We illustrate the effectiveness of our model by optimizing a noisy, high-dimensional, non-convex objective function.
1206.1903
Mechanism Designs for Stochastic Resources for Renewable Energy Integration
cs.GT cs.SI
Among the many challenges of integrating renewable energy sources into the existing power grid, is the challenge of integrating renewable energy generators into the power systems economy. Electricity markets currently are run in a way that participating generators must supply contracted amounts. And yet, renewable energy generators such as wind power generators cannot supply contracted amounts with certainty. Thus, alternative market architectures must be considered where there are aggregator entities who participate in the electricity market by buying power from the renewable energy generators, and assuming risk of any shortfall from contracted amounts. In this paper, we propose auction mechanisms that can be used by the aggregators for procuring stochastic resources, such as wind power. The nature of stochastic resources is different from classical resources in that such a resource is only available stochastically. The distribution of the generation is private information, and the system objective is to truthfully elicit such information. We introduce a variant of the VCG mechanism for this problem. We also propose a non-VCG mechanism with a contracted-payment-plus-penalty payoff structure. We generalize the basic mechanisms in various ways. We then consider the setting where there are two classes of players to demonstrate the difficulty of auction design in such scenarios. We also consider an alternative architecture where the generators need to fulfill any shortfall from the contracted amount by buying from the spot market.
1206.1926
The hardest logic puzzle ever becomes even tougher
math.LO cs.AI
"The hardest logic puzzle ever" presented by George Boolos became a target for philosophers and logicians who tried to modify it and make it even tougher. I propose further modification of the original puzzle where part of the available information is eliminated but the solution is still possible. The solution also gives interesting ideas on logic behind discovery of unknown language.
1206.1932
Theoretical approach and impact of correlations on the critical packet generation rate in traffic dynamics on complex networks
physics.soc-ph cs.SI
Using the formalism of the biased random walk in random uncorrelated networks with arbitrary degree distributions, we develop a theoretical approach to the critical packet generation rate in traffic based on a routing strategy with local information. We explain the microscopic origins of the transition from the free-flow to the jammed phase and discuss how the topology of a node's neighbourhood affects the transport capacity in uncorrelated and correlated networks.
1206.1948
The Capacity of Less Noisy Cognitive Interference Channels
cs.IT math.IT
The fundamental limits of the cognitive interference channel (CIC) with two transmitter-receiver pairs have been under exploration for several years. In this paper, we study the discrete memoryless cognitive interference channel (DM-CIC), in which the cognitive transmitter non-causally knows the message of the primary transmitter. The capacity of this channel is not known in general; it is known only in some special cases. Inspired by the concept of the less noisy broadcast channel (BC), we introduce the notion of the less noisy cognitive interference channel. Unlike the BC, due to the inherent asymmetry of the cognitive channel, two different less noisy channels can be distinguished; these are named the primary-less-noisy and cognitive-less-noisy channels. We derive the capacity region for the latter case by introducing inner and outer bounds on the capacity of the DM-CIC and showing that these bounds coincide for the cognitive-less-noisy channel. Having established the capacity region, we prove that superposition coding is the optimal encoding technique.
1206.1953
Improvement of Loadability in Distribution System Using Genetic Algorithm
cs.SY cs.NE
In recent decades, with the development of power systems, delivering electrical energy to consumers while keeping voltage variations within limits has become a very important problem. A good way to improve the transfer and distribution of electrical power is to generate energy near the loads, using small units connected to the distribution system known as "decentralized generation" or "dispersed generation" (DG). Deregulation of the power industry and the development of renewable energy sources are the most important factors driving this type of electricity generation. Today, DG plays a key role in electrical distribution systems: for example, it can improve reliability indices, enhance stability, and reduce losses. A key problem in using DG is the allocation of these sources within distribution networks. Loadability of distribution systems, and its improvement, has a significant effect on the operation of power systems; however, the placement of distributed generation sources with the aim of improving the distribution system's loadability index has not previously been considered. We show how DG placement and allocation with a genetic algorithm optimization method can maximize the loadability of power systems. The method is implemented on IEEE standard benchmarks, and the results show the effectiveness of the proposed algorithm. Other benefits of DG at the selected positions are also studied and compared.
1206.1971
A Connectionist Network Approach to Find Numerical Solutions of Diophantine Equations
cs.NE
The paper introduces a connectionist network approach to finding numerical solutions of Diophantine equations, as an attempt to address Hilbert's famous tenth problem. The proposed methodology uses a three-layer feed-forward neural network with backpropagation as a sequential learning procedure to find numerical solutions of a class of Diophantine equations. It uses a dynamically constructed network architecture in which the number of nodes in the input layer is chosen based on the number of variables in the equation. The powers of the given Diophantine equation are taken as input to the input layer. Training starts with random initial integral weights, which are updated based on the backpropagation of the error values at the output layer. The optimization of the weights is augmented by adding a momentum factor to the network. The optimized weights of the connections between the input layer and the hidden layer are taken as a numerical solution of the given Diophantine equation. The procedure is validated on Diophantine equations with different numbers of variables and different powers.
1206.1973
Communications-Inspired Projection Design with Application to Compressive Sensing
cs.IT math.IT
We consider the recovery of an underlying signal x \in C^m based on projection measurements of the form y=Mx+w, where y \in C^l and w is measurement noise; we are interested in the case l < m. It is assumed that the signal model p(x) is known and that w ~ CN(w;0,S_w) for known S_w. The objective is to design a projection matrix M \in C^(l x m) to maximize key information-theoretic quantities with operational significance, including the mutual information between the signal and the projections I(x;y) or the Renyi entropy of the projections h_a(y) (Shannon entropy is a special case). By capitalizing on explicit characterizations of the gradients of the information measures with respect to the projection matrix, where we also partially extend the well-known results of Palomar and Verdu from the mutual information to the Renyi entropy domain, we unveil the key operations carried out by the optimal projection designs: mode exposure and mode alignment. Experiments are considered for the case of compressive sensing (CS) applied to imagery. In this context, we demonstrate the performance improvement possible through the application of the novel projection designs in relation to conventional ones, as well as justification for a fast online projection design method with which state-of-the-art adaptive CS signal recovery is achieved.
1206.2009
Developing a model for a text database indexed pedagogically for teaching the Arabic language
cs.CL
In this thesis we designed an indexing model for the Arabic language, adapting the standards used for describing learning resources (LOM and its application profiles) to learning conditions such as students' educational levels and levels of understanding, and to the pedagogical context, taking into account representative elements of the text, the text's length, and so on. In particular, we highlight the specificity of the Arabic language, a complex language characterized by its flexion, its vowelization and its agglutination.
1206.2010
Temporal expression normalisation in natural language texts
cs.CL cs.IR
Automatic annotation of temporal expressions is a research challenge of great interest in the field of information extraction. In this report, I describe a novel rule-based architecture, built on top of a pre-existing system, which is able to normalise temporal expressions detected in English texts. Gold standard temporally-annotated resources are limited in size and this makes research difficult. The proposed system outperforms the state-of-the-art systems with respect to TempEval-2 Shared Task (value attribute) and achieves substantially better results with respect to the pre-existing system on top of which it has been developed. I will also introduce a new free corpus consisting of 2822 unique annotated temporal expressions. Both the corpus and the system are freely available on-line.
1206.2027
Adaptive Fractional PID Controller for Robot Manipulator
nlin.AO cs.RO
A fractional adaptive PID (FPID) controller for a robot manipulator is proposed. The PID parameters are optimized by a genetic algorithm. The proposed controller is found to be robust, by means of simulation, in a tracking task. The validity of the proposed controller is shown by simulation of a two-link robot manipulator, and the result is compared with an integer-order adaptive PID controller. We find that when the error signals in the learning stage are bounded, the trajectory of the robot converges asymptotically to the desired one.
1206.2032
Timely Coordination in a Multi-Agent System
cs.MA cs.GT cs.LO
In a distributed algorithm, multiple processes, or agents, work toward a common goal. More often than not, the actions of some agents are dependent on the previous execution (if not also on the outcome) of the actions of other agents. The resulting interdependencies between the timings of the actions of the various agents give rise to the study of methods for timely coordination of these actions. In this work, we formulate and mathematically analyze "Timely-Coordinated Response" - a novel multi-agent coordination problem in which the time difference between each pair of actions may be constrained by upper and/or lower bounds. This problem generalizes coordination problems previously studied by Halpern and Moses and by Ben-Zvi and Moses. We optimally solve timely-coordinated response in two ways: using a generalization of the fixed-point approach of Halpern and Moses, and using a generalization of the "syncausality" approach of Ben-Zvi and Moses. We constructively show the equivalence of the solutions yielded by both approaches, and by combining them, derive strengthened versions of known results for some previously-defined special cases of this problem. Our analysis is conducted under minimal assumptions: we work in a continuous-time model with possibly infinitely many agents. The general results we obtain for this model reduce to stronger ones for discrete-time models with only finitely many agents. In order to distill the properties of such models that are significant to this reduction, we define several classes of naturally-occurring models, which in a sense separate the different results. We present both a more practical optimal solution, as well as a surprisingly simple condition for solvability, for timely coordinated response under these models. Finally, we show how our results generalize the results known for previously-studied special cases of this problem.
1206.2058
Dimension Reduction by Mutual Information Discriminant Analysis
cs.CV cs.IT cs.LG math.IT
In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter matrix and minimize the within-class scatter matrix. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). Because it is not always easy to obtain an accurate estimate of high-dimensional MI, we propose an efficient method for feature extraction that is based on one-dimensional MI estimations. We refer to this algorithm as mutual information discriminant analysis (MIDA). Its performance was evaluated on UCI databases. The results indicate that MIDA provides robust performance over different data sets with different characteristics and that MIDA always performs better than, or at least comparably to, the best-performing algorithms.
1206.2059
NP-hardness of polytope M-matrix testing and related problems
math.OC cs.CC cs.SY math.DS
In this note we prove NP-hardness of the following problem: Given a set of matrices, is there a convex combination of those that is a nonsingular M-matrix? Via known characterizations of M-matrices, our result establishes NP-hardness of several fundamental problems in systems analysis and control, such as testing the instability of an uncertain dynamical system, and minimizing the spectral radius of an affine matrix function.
1206.2061
Comments on "On Approximating Euclidean Metrics by Weighted t-Cost Distances in Arbitrary Dimension"
cs.NA cs.CV
Mukherjee (Pattern Recognition Letters, vol. 32, pp. 824-831, 2011) recently introduced a class of distance functions called weighted t-cost distances that generalize m-neighbor, octagonal, and t-cost distances. He proved that weighted t-cost distances form a family of metrics and derived an approximation for the Euclidean norm in $\mathbb{Z}^n$. In this note we compare this approximation to two previously proposed Euclidean norm approximations and demonstrate that the empirical average errors given by Mukherjee are significantly optimistic in $\mathbb{R}^n$. We also propose a simple normalization scheme that improves the accuracy of his approximation substantially with respect to both average and maximum relative errors.
1206.2068
Revolvable Indoor Panoramas Using a Rectified Azimuthal Projection
cs.CV math.DG
We present an algorithm for converting an indoor spherical panorama into a photograph with a simulated overhead view. The resulting image will have an extremely wide field of view covering up to $4\pi$ steradians of the spherical panorama. We argue that our method complements the stereographic projection commonly used in the "little planet" effect. The stereographic projection works well in creating little planets of outdoor scenes; whereas our method is a well-suited counterpart for indoor scenes. The main innovation of our method is the introduction of a novel azimuthal map projection that can smoothly blend between the stereographic projection and the Lambert azimuthal equal-area projection. Our projection has an adjustable parameter that allows one to control and compromise between distortions in shape and distortions in size within the projected panorama. This extra control parameter gives our projection the ability to produce superior results over the stereographic projection.
1206.2082
Dimension Independent Similarity Computation
cs.DS cs.AI cs.DC
We present a suite of algorithms for Dimension Independent Similarity Computation (DISCO) to compute all pairwise similarities between very high-dimensional sparse vectors. All of our results are provably independent of dimension, meaning that apart from the initial cost of trivially reading in the data, all subsequent operations are independent of the dimension; thus the dimension can be very large. We study the Cosine, Dice, Overlap, and Jaccard similarity measures. For Jaccard similarity we include an improved version of MinHash. Our results are geared toward the MapReduce framework. We empirically validate our theorems at large scale using data from the social networking site Twitter. At the time of writing, our algorithms are live in production at twitter.com.
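The classic MinHash estimator that underlies the Jaccard algorithm above can be sketched in a few lines. This is only the textbook estimator, not the paper's improved MapReduce version; the affine hash family and the two example sets are invented for illustration:

```python
import random

def make_hash_funcs(k, prime=2_147_483_647, seed=0):
    # k random affine hash functions x -> (a*hash(x) + b) mod prime.
    rng = random.Random(seed)
    params = [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(k)]
    return [lambda x, a=a, b=b: (a * hash(x) + b) % prime for a, b in params]

def minhash_signature(s, hash_funcs):
    # One minimum per hash function; two signatures agree at a position
    # with probability equal to the Jaccard similarity of the sets.
    return [min(h(x) for x in s) for h in hash_funcs]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

hashes = make_hash_funcs(256)
A, B = set(range(0, 100)), set(range(50, 150))   # true Jaccard = 50/150
est = estimate_jaccard(minhash_signature(A, hashes),
                       minhash_signature(B, hashes))
```

With 256 hash functions the estimate concentrates around 1/3, with a standard error of roughly 0.03.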
1206.2097
Bursty communication patterns facilitate spreading in a threshold-based epidemic dynamics
physics.soc-ph cond-mat.stat-mech cs.SI
Records of social interactions provide us with new sources of data for understanding how interaction patterns affect collective dynamics. Such human activity patterns are often bursty, i.e., they consist of short periods of intense activity followed by long periods of silence. Burstiness has been shown to affect spreading phenomena; it accelerates epidemic spreading in some cases and slows it down in others. We investigate a model of history-dependent contagion in which repeated interactions between susceptible and infected individuals within a short period of time are needed for a susceptible individual to contract the infection. We carry out numerical simulations on real temporal network data and find that bursty activity patterns facilitate epidemic spreading in our model.
1206.2123
Extending Term Suggestion with Author Names
cs.IR cs.DL
Term suggestion or recommendation modules can help users formulate their queries by mapping their personal vocabularies onto the specialized vocabulary of a digital library. When we examined actual user queries of the social sciences digital library Sowiport, we found that nearly one third of the users were explicitly looking for author names rather than terms. Common term recommenders neglect this fact. By picking up the idea of polyrepresentation, we show that in a standardized IR evaluation setting we can significantly increase retrieval performance by adding topically related author names to the query. This positive effect only appears when the query is additionally expanded with thesaurus terms; just adding the author names to a query often causes a query drift that worsens the results.
1206.2126
Improving Retrieval Results with discipline-specific Query Expansion
cs.IR cs.DL
Choosing the right terms to describe an information need is becoming more difficult as the amount of available information increases. Search Term Recommendation (STR) systems can help to overcome these problems. This paper evaluates the benefits that may be gained from the use of STRs in Query Expansion (QE). We create 17 STRs, 16 based on specific disciplines and one giving general recommendations, and compare their retrieval performance. The main findings are: (1) QE with specific STRs leads to significantly better results than QE with the general STR; (2) QE with specific STRs selected by a heuristic topic-classification mechanism also leads to better results than the general STR; however, (3) selecting the best-matching specific STR automatically remains the major challenge of this process.
1206.2130
An information-theoretic proof of Nash's inequality
cs.IT math-ph math.FA math.IT math.MP
We show that an information-theoretic property of Shannon's entropy power, known as concavity of entropy power, can be fruitfully employed to prove inequalities in sharp form. In particular, the concavity of entropy power implies the logarithmic Sobolev inequality, and Nash's inequality with the sharp constant.
1206.2138
Comparative Analysis of Peak Correlation Characteristics of Non-Orthogonal Spreading Codes for Wireless Systems
cs.IT math.IT
The performance of a CDMA-based wireless system depends largely on the characteristics of its pseudo-random spreading codes. The spreading codes should be chosen carefully to ensure the highest possible peak value of the auto-correlation function and low correlation peaks (side-lobes) at non-zero time shifts. At the same time, a zero cross-correlation value at all time shifts is required in order to eliminate the effect of multiple-access interference at the receiver. No code family possesses both characteristics simultaneously, so this paper makes an exhaustive effort to evaluate the peak correlation characteristics of various non-orthogonal spreading codes and to suggest a suitable solution.
1206.2145
The stability of networks --- towards a structural dynamical systems theory
nlin.CD cs.SI physics.soc-ph
The need to build a link between the structure of a complex network and the dynamical properties of the corresponding complex system (comprised of multiple low-dimensional systems) has recently become apparent. Several attempts to tackle this problem have been made, all focusing on either the controllability or the synchronisability of the network --- usually analyzed by way of the master stability function or the graph Laplacian. We take a different approach. Using basic tools from dynamical systems theory, we show that the dynamical stability of a network can easily be defined in terms of the eigenvalues of a homologue of the network adjacency matrix. This allows us to compute the stability of a network as a quantity derived from the eigenspectrum of the adjacency matrix. Numerical experiments show that this quantity is very closely related to, and can even be predicted from, the standard structural network properties. Following from this, we show that the stability of large network systems can be understood via an analytic study of the eigenvalues of their fixed points --- even for a very large number of fixed points.
1206.2190
Communication-Efficient Parallel Belief Propagation for Latent Dirichlet Allocation
cs.LG
This paper presents a novel communication-efficient parallel belief propagation (CE-PBP) algorithm for training latent Dirichlet allocation (LDA). Based on the synchronous belief propagation (BP) algorithm, we first develop a parallel belief propagation (PBP) algorithm on a parallel architecture. Because extensive communication delay often causes low efficiency in parallel topic modeling, we further use Zipf's law to reduce the total communication cost in PBP. Extensive experiments on different data sets demonstrate that CE-PBP achieves higher topic modeling accuracy and reduces communication cost by more than 80% compared with the state-of-the-art parallel Gibbs sampling (PGS) algorithm.
1206.2197
Complex Orthogonal Matching Pursuit and Its Exact Recovery Conditions
cs.IT math.IT math.NA stat.ML
In this paper, we present new results on using orthogonal matching pursuit (OMP) to solve the sparse approximation problem over redundant dictionaries in complex cases (i.e., complex measurement vector, complex dictionary and complex additive white Gaussian noise (CAWGN)). A sufficient condition under which OMP can recover the optimal representation of an exactly sparse signal in the complex case is proposed in both noiseless and bounded-Gaussian-noise settings. Similar to the exact recovery condition (ERC) results in the real case, we extend them to the complex case and derive the corresponding ERC. We leverage this theory to show that OMP succeeds for k-sparse signals from a class of complex dictionaries. In addition, an application with the geometrical theory of diffraction (GTD) model is presented for the complex case. Finally, simulation experiments illustrate the validity of the theoretical analysis.
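For readers unfamiliar with the algorithm under analysis, here is a dependency-free sketch of standard real-valued OMP (greedy atom selection followed by a least-squares re-fit over the selected support); the complex case studied in the paper replaces these inner products with conjugate ones. The dictionary and signal below are toy examples, not from the paper:

```python
def solve_normal_equations(A, y):
    # Least squares via the normal equations (A^T A) c = A^T y,
    # using Gaussian elimination with partial pivoting.
    m, k = len(A), len(A[0])
    G = [[sum(A[i][p] * A[i][q] for i in range(m)) for q in range(k)] for p in range(k)]
    b = [sum(A[i][p] * y[i] for i in range(m)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = G[r][col] / G[col][col]
            for c in range(col, k):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (b[r] - sum(G[r][c] * x[c] for c in range(r + 1, k))) / G[r][r]
    return x

def omp(D, y, k, tol=1e-10):
    # Greedily pick k atoms (columns of D), re-fitting all coefficients
    # by least squares on the selected support at every step.
    m, n = len(D), len(D[0])
    residual, support, coeffs = y[:], [], []
    for _ in range(k):
        scores = [abs(sum(D[i][j] * residual[i] for i in range(m))) for j in range(n)]
        j = max((jj for jj in range(n) if jj not in support), key=lambda jj: scores[jj])
        support.append(j)
        A = [[D[i][s] for s in support] for i in range(m)]
        coeffs = solve_normal_equations(A, y)
        residual = [y[i] - sum(A[i][t] * coeffs[t] for t in range(len(support)))
                    for i in range(m)]
        if sum(r * r for r in residual) < tol:
            break
    return dict(zip(support, coeffs))

# Toy dictionary: columns are e1, e2, e3 and a mixed atom.
D = [[1.0, 0.0, 0.0, 0.7071],
     [0.0, 1.0, 0.0, 0.7071],
     [0.0, 0.0, 1.0, 0.0]]
y = [2.0, 0.0, 3.0]          # = 2*e1 + 3*e3
recovered = omp(D, y, 2)     # -> {2: 3.0, 0: 2.0}
```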
1206.2199
Predicting link directions via a recursive subgraph-based ranking
physics.soc-ph cs.SI
Link directions are essential to the functionality of networks and their prediction is helpful towards a better knowledge of directed networks from incomplete real-world data. We study the problem of predicting the directions of some links by using the existence and directions of the rest of links. We propose a solution by first ranking nodes in a specific order and then predicting each link as stemming from a lower-ranked node towards a higher-ranked one. The proposed ranking method works recursively by utilizing local indicators on multiple scales, each corresponding to a subgraph extracted from the original network. Experiments on real networks show that the directions of a substantial fraction of links can be correctly recovered by our method, which outperforms either purely local or global methods.
1206.2220
Representations of Genetic Tables, Bimagic Squares, Hamming Distances and Shannon Entropy
cs.IT math.IT
In this paper we establish relations between the genetic tables and magic and bimagic squares. Connections with Hamming distances and binomial coefficients are also established, and the idea of the Gray code is applied. The Shannon entropy of magic squares of order 4x4, 8x8 and 16x16 is also calculated, and some comparisons are made. Symmetry among restriction enzymes having four letters is also studied.
1206.2248
Fast Cross-Validation via Sequential Testing
cs.LG stat.ML
With the increasing size of today's data sets, finding the right parameter configuration in model selection via cross-validation can be an extremely time-consuming task. In this paper we propose an improved cross-validation procedure which uses nonparametric testing coupled with sequential analysis to determine the best parameter set on linearly increasing subsets of the data. By eliminating underperforming candidates quickly and keeping promising candidates as long as possible, the method speeds up the computation while preserving the capability of the full cross-validation. Theoretical considerations underline the statistical power of our procedure. The experimental evaluation shows that our method reduces the computation time by a factor of up to 120 compared to a full cross-validation with a negligible impact on the accuracy.
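A minimal sketch of the elimination idea described above, using a fixed score margin in place of the paper's nonparametric sequential test; the candidate configurations and the score model are invented for illustration:

```python
def sequential_selection(configs, evaluate, steps, margin=0.02):
    """Evaluate candidates on linearly growing data subsets; at each step,
    drop any candidate trailing the current best score by more than `margin`.
    (Illustration only: the paper uses a nonparametric sequential test,
    not a fixed margin.)"""
    alive = list(configs)
    for step in range(1, steps + 1):
        scores = {c: evaluate(c, step) for c in alive}
        best = max(scores.values())
        alive = [c for c in alive if scores[c] >= best - margin]
        if len(alive) == 1:
            break   # clear winner: no need to touch the full data set
    return alive

# Toy score model: accuracy approaches each config's true quality
# as the data subset (indexed by `step`) grows.
quality = {"A": 0.90, "B": 0.85, "C": 0.70}
def evaluate(config, step):
    return quality[config] - 0.5 / step

winners = sequential_selection(quality, evaluate, steps=10)   # -> ["A"]
```

The speed-up comes from the same source as in the paper: underperformers are eliminated after seeing only a small fraction of the data.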
1206.2262
Community-detection cellular automata with local and long-range connectivity
nlin.CG cs.SI physics.soc-ph
We explore a community-detection cellular automata algorithm based on information diffusion followed by a non-linear processing phase whose dynamics is inspired by human heuristics. The main point of the method is to furnish different "views" of the clustering levels from an individual point of view. We apply the method to networks with local connectivity and long-range rewiring.
1206.2276
Irregular Product Codes
cs.IT math.IT
We consider irregular product codes. In this class of codes, each codeword is represented by a matrix, and the entries in each row (column) of the matrix must come from a component row (column) code. As opposed to (standard) product codes, we require neither that all component row codes nor that all component column codes be the same. As we will see, relaxing this requirement can provide some additional attractive features, including 1) allowing some regions of the codeword to be more error-resilient, 2) allowing a more refined spectrum of rates for finite lengths, with improved performance at some of these rates, and 3) more interaction between row and column codes during decoding. We study these codes over erasure channels. We find that for any $0 < \epsilon < 1$ and for many rate distributions on component row codes, there is a matching rate distribution on component column codes such that an irregular product code based on MDS codes with those rate distributions has asymptotic rate $1 - \epsilon$ and can decode on erasure channels (of alphabet size equal to the alphabet size of the component MDS codes) with erasure probability $< \epsilon$.
1206.2292
An Intercell Interference Model based on Scheduling for Future Generation Wireless Networks (Part 1 and Part 2)
cs.IT math.IT math.PR
This technical report is divided into two parts. The first part presents a novel framework for modeling the uplink and downlink intercell interference (ICI) in a multiuser cellular network. The proposed framework assists in quantifying the impact of various fading channel models and multiuser scheduling schemes on the uplink and downlink ICI. First, we derive a semi-analytical expression for the distribution of the location of the scheduled user in a given cell, considering a wide range of scheduling schemes. Based on this, we derive the distribution and moment generating function (MGF) of the ICI from a single interfering cell. We then determine the MGF of the cumulative ICI observed from all interfering cells and derive explicit MGF expressions for three typical fading models. Finally, we utilize the obtained expressions to numerically evaluate important network performance metrics such as the outage probability, ergodic capacity and average fairness. Monte-Carlo simulation results are provided to demonstrate the efficacy of the derived analytical expressions. (The first part of the technical report is currently submitted to IEEE Transactions on Wireless Communications.) The second part of the technical report deals with the statistical modeling of uplink ICI considering greedy scheduling with power adaptation based on channel conditions. The derived model is utilized to numerically evaluate important network performance metrics such as ergodic capacity, average fairness and average power preservation. In parallel with the literature, we show that greedy scheduling with power adaptation reduces the ICI and the average power consumption of users, and enhances the average fairness among users, compared to the case without power adaptation.
1206.2322
A Fast HRRP Synthesis Algorithm with Sensing Dictionary in GTD Model
cs.IT math.IT
To achieve a high range resolution profile (HRRP), the geometric theory of diffraction (GTD) parametric model is widely used in stepped-frequency radar systems. In this paper, a fast synthetic range profile algorithm, called orthogonal matching pursuit with sensing dictionary (OMP-SD), is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. Because it employs the prior information that targets are sparsely distributed in the range space, the synthetic range profile (SRP) can be obtained even in the presence of data loss. In addition, introducing the sensing dictionary (SD) reduces the computational complexity while mitigating model mismatch: the complexity decreases from O(MNDK) flops for OMP to O(M(N+D)K) flops for OMP-SD. Simulation experiments illustrate the algorithm's advantages in both the additive white Gaussian noise (AWGN) and noiseless situations.
1206.2347
Uncertain and Approximative Knowledge Representation to Reasoning on Classification with a Fuzzy Networks Based System
cs.AI
The approach described here allows the use of a fuzzy object-based representation of imprecise and uncertain knowledge. This representation is of great practical interest because it makes it possible to reason about classification with a fuzzy semantic network based system. For instance, the distinction between necessary, possible, and user classes makes it possible to account for exceptions that may appear in the fuzzy knowledge base, and it facilitates the integration of users' objects into the base. We describe the theoretical aspects of the architecture of the whole experimental A.I. system we built to provide effective on-line assistance to users of new technological systems: understanding "how it works" and "how to complete tasks" from queries in fairly natural language. In our model, procedural semantic networks describe the knowledge of an "ideal" expert, while fuzzy sets describe the approximate and uncertain knowledge of novice users in fuzzy semantic networks, which serve to match the fuzzy labels of a query with categories from our "ideal" expert.
1206.2362
Applying Compression to a Game's Network Protocol
cs.IT math.IT
This report presents the results of applying different compression algorithms to the network protocol of an online game. The algorithm implementations compared are zlib, liblzma and my own implementation based on LZ77 and a variation of adaptive Huffman coding. The comparison data was collected from the game TomeNET. The results show that adaptive coding is especially useful for compressing large amounts of very small packets.
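The gain from a shared compression history over per-packet compression, which the report measures for game traffic, can be illustrated with a small sketch using zlib's streaming API (the packet contents below are made up for illustration):

```python
import zlib

def compare(packets):
    """Bytes used when compressing each packet alone vs. one shared stream."""
    independent = sum(len(zlib.compress(p)) for p in packets)
    comp = zlib.compressobj()
    streamed = 0
    for p in packets:
        # Z_SYNC_FLUSH emits a complete, decodable chunk per packet while
        # keeping the compression history window across packets.
        streamed += len(comp.compress(p)) + len(comp.flush(zlib.Z_SYNC_FLUSH))
    return independent, streamed

# Many small, similar packets, as is typical for an online game's protocol.
packets = [("MOVE player=%d x=%d y=%d" % (i % 8, i, 2 * i)).encode()
           for i in range(200)]
indep, stream = compare(packets)
```

Tiny packets compressed independently carry per-packet header overhead and have no history to reference, so the shared stream comes out much smaller — the same reason adaptive coding pays off in the report's measurements.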
1206.2369
Networks in Motion
physics.soc-ph cond-mat.dis-nn cs.SI nlin.AO q-bio.MN
Feature article on how networks that govern communication, growth, herd behavior, and other key processes in nature and society are becoming increasingly amenable to modeling, forecast, and control.
1206.2372
PRISMA: PRoximal Iterative SMoothing Algorithm
math.OC cs.LG
Motivated by learning problems including max-norm regularized matrix completion and clustering, robust PCA and sparse inverse covariance selection, we propose a novel optimization algorithm for minimizing a convex objective which decomposes into three parts: a smooth part, a simple non-smooth Lipschitz part, and a simple non-smooth non-Lipschitz part. We use a time variant smoothing strategy that allows us to obtain a guarantee that does not depend on knowing in advance the total number of iterations nor a bound on the domain.
1206.2437
A Novel Windowing Technique for Efficient Computation of MFCC for Speaker Recognition
cs.CV
In this paper, we propose a novel family of windowing techniques to compute Mel frequency cepstral coefficients (MFCC) for automatic speaker recognition from speech. The proposed method is based on a fundamental property of the discrete time Fourier transform (DTFT) related to differentiation in the frequency domain. Classical windowing schemes such as the Hamming window are modified to obtain derivatives of the discrete time Fourier transform coefficients. It has been mathematically shown that the slope and phase of the power spectrum are inherently incorporated in the newly computed cepstrum. Speaker recognition systems based on our proposed family of window functions are shown to attain substantial and consistent performance improvement over the baseline single-tapered Hamming window as well as a recently proposed multitaper windowing technique.
1206.2459
R\'enyi Divergence and Kullback-Leibler Divergence
cs.IT math.IT math.ST stat.ML stat.TH
R\'enyi divergence is related to R\'enyi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by R\'enyi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the R\'enyi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of R\'enyi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of $\sigma$-algebras and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.
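For discrete distributions, the order-$\alpha$ divergence, its order-1 limit (Kullback-Leibler), and its monotonicity in the order are easy to check numerically. A minimal sketch, using Rényi's definition directly:

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) = log(sum_i p_i^alpha * q_i^(1-alpha)) / (alpha - 1).
    The alpha -> 1 limit is the Kullback-Leibler divergence."""
    if abs(alpha - 1.0) < 1e-9:
        # KL divergence, with the convention 0 * log(0/q) = 0
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1.0)
```

Evaluating near $\alpha = 1$ recovers KL, and the values increase with the order, consistent with the properties reviewed in the paper.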
1206.2465
Search Strategies of Library Search Experts
cs.IR cs.DL
Search engines like Google, Yahoo or Bing are an excellent support for finding documents, but this strength also imposes a limitation. As they are optimized for document retrieval tasks, they perform less well when it comes to more complex search needs. Complex search tasks are usually described as open-ended, abstract and poorly defined information needs with a multifaceted character. In this paper we will present the results of an experiment carried out with information professionals from libraries and museums in the course of a search contest. The aim of the experiment was to analyze the search strategies of experienced information workers trying to tackle search tasks of varying complexity and get qualitative results on the impact of time pressure on such an experiment.
1206.2478
On the Exact BER of Bit-Wise Demodulators for One-Dimensional Constellations
cs.IT math.IT
The optimal bit-wise demodulator for M-ary pulse amplitude modulation (PAM) over the additive white Gaussian noise channel is analyzed in terms of uncoded bit-error rate (BER). New closed-form BER expressions for 4-PAM with any labeling are developed. Moreover, closed-form BER expressions for 11 out of 23 possible bit patterns for 8-PAM are presented, which enable us to obtain the BER for 8-PAM with some of the most popular labelings, including the binary reflected Gray code and the natural binary code. Numerical results show that, regardless of the labeling, there is no difference between the optimal demodulator and the symbol-wise demodulator for any BER of practical interest (below 0.1).
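The paper derives exact closed-form BER expressions; the qualitative effect of the labeling can be reproduced with a quick Monte Carlo sketch of a symbol-wise (nearest-level) demodulator. The SNR convention and sample size below are my own choices, not the paper's:

```python
import math
import random

LEVELS = [-3.0, -1.0, 1.0, 3.0]   # 4-PAM, average symbol energy 5
GRAY = ["00", "01", "11", "10"]   # binary reflected Gray code
NBC = ["00", "01", "10", "11"]    # natural binary code

def ber(labels, esn0_db, n=100000, seed=7):
    """Uncoded BER of 4-PAM over AWGN with nearest-level demodulation."""
    rng = random.Random(seed)
    # noise std from Es/N0 (Es = 5, noise variance N0/2 per dimension)
    sigma = math.sqrt(5.0 / (2.0 * 10 ** (esn0_db / 10.0)))
    errors = 0
    for _ in range(n):
        k = rng.randrange(4)
        r = LEVELS[k] + rng.gauss(0.0, sigma)
        khat = min(range(4), key=lambda j: abs(r - LEVELS[j]))
        errors += sum(a != b for a, b in zip(labels[k], labels[khat]))
    return errors / (2.0 * n)
```

With the Gray code, the dominant adjacent-level errors each flip a single bit, whereas the natural binary code flips two bits on the middle boundary, so its BER comes out higher.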
1206.2484
Architecture for Automated Tagging and Clustering of Song Files According to Mood
cs.IR cs.MM
Music is one of the basic human needs for recreation and entertainment. Song files are now digitized and digital libraries are expanding continuously, which makes it difficult to recall a particular song. Thus the need for a classification system other than genre is obvious, and a mood-based classification system serves this purpose well. In this paper we present a well-defined architecture to classify songs into different mood-based categories using audio content analysis, the affective value of song lyrics to map a song onto a psychology-based emotion space, and information from online sources. In the audio content analysis we use music features such as intensity, timbre, and rhythm, including their subfeatures, to map music into a two-dimensional emotion space. In the lyrics-based classification a one-dimensional emotion space is used. Both results are merged onto a two-dimensional emotion space, which classifies the song into a particular mood category. Finally, clusters of mood-based song files are formed and arranged according to data acquired from various Internet sources.
1206.2491
Rewritable storage channels with hidden state
cs.IT math.IT
Many storage channels admit reading and rewriting of the content at a given cost. We consider rewritable channels with a hidden state which models the unknown characteristics of the memory cell. In addition to mitigating the effect of the write noise, rewrites can help the write controller obtain a better estimate of the hidden state. The paper has two contributions. The first is a lower bound on the capacity of a general rewritable channel with hidden state. The lower bound is obtained using a coding scheme that combines Gelfand-Pinsker coding with superposition coding. The rewritable AWGN channel is discussed as an example. The second contribution is a simple coding scheme for a rewritable channel where the write noise and hidden state are both uniformly distributed. It is shown that this scheme is asymptotically optimal as the number of rewrites gets large.
1206.2510
Generic Subsequence Matching Framework: Modularity, Flexibility, Efficiency
cs.MM cs.DS cs.IR
Subsequence matching has proven to be an ideal approach for solving many problems related to data mining and similarity retrieval. It has been shown that almost any data class (audio, image, biometrics, signals) is or can be represented by some kind of time series or string of symbols, which can serve as input for various subsequence matching approaches. The variety of data types, specific tasks, and their partial or full solutions is so wide that choosing, implementing, and parametrizing a suitable solution for a given task may be complicated and time-consuming; a possibly fruitful combination of fragments from different research areas may be neither obvious nor easy to realize. The leading authors in this field also mention the implementation bias that makes a proper comparison of competing approaches difficult. We therefore present a new generic Subsequence Matching Framework (SMF) that tries to overcome these problems through a uniform frame that simplifies and speeds up the design, development, and evaluation of subsequence matching systems. We identify several relatively separate subtasks that are solved differently across the literature, and SMF enables them to be combined in a straightforward manner, achieving new quality and efficiency. The framework can be used in many application domains and its components can be reused effectively. Its strictly modular architecture and openness also enable the involvement of efficient solutions from different fields, for instance efficient metric-based indexes. This is an extended version of a paper published at DEXA 2012.
1206.2517
Assessing the Quality of Wikipedia Pages Using Edit Longevity and Contributor Centrality
cs.SI cs.CY
In this paper we address the challenge of assessing the quality of Wikipedia pages using scores derived from edit contribution and contributor authoritativeness measures. The hypothesis is that pages with significant contributions from authoritative contributors are likely to be high-quality pages. Contributions are quantified using edit longevity measures and contributor authoritativeness is scored using centrality metrics in either the Wikipedia talk or co-author networks. The results suggest that it is useful to take into account the contributor authoritativeness when assessing the information quality of Wikipedia content. The percentile visualization of the quality scores provides some insights about the anomalous articles, and can be used to help Wikipedia editors to identify Start and Stub articles that are of relatively good quality.
1206.2523
Binary Jumbled String Matching for Highly Run-Length Compressible Texts
cs.DS cs.IR
The Binary Jumbled String Matching problem is defined as: Given a string $s$ over $\{a,b\}$ of length $n$ and a query $(x,y)$, with $x,y$ non-negative integers, decide whether $s$ has a substring $t$ with exactly $x$ $a$'s and $y$ $b$'s. Previous solutions created an index of size O(n) in a pre-processing step, which was then used to answer queries in constant time. The fastest algorithms for construction of this index have running time $O(n^2/\log n)$ [Burcsi et al., FUN 2010; Moosa and Rahman, IPL 2010], or $O(n^2/\log^2 n)$ in the word-RAM model [Moosa and Rahman, JDA 2012]. We propose an index constructed directly from the run-length encoding of $s$. The construction time of our index is $O(n+\rho^2\log \rho)$, where O(n) is the time for computing the run-length encoding of $s$ and $\rho$ is the length of this encoding---this is no worse than previous solutions if $\rho = O(n/\log n)$ and better if $\rho = o(n/\log n)$. Our index $L$ can be queried in $O(\log \rho)$ time. While $|L|= O(\min(n, \rho^{2}))$ in the worst case, preliminary investigations have indicated that $|L|$ may often be close to $\rho$. Furthermore, the algorithm for constructing the index is conceptually simple and easy to implement. In an attempt to shed light on the structure and size of our index, we characterize it in terms of the prefix normal forms of $s$ introduced in [Fici and Lipt\'ak, DLT 2011].
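The indexes discussed above rest on an interval property of binary strings: among all substrings of a fixed length, every count of a's between the minimum and maximum is achieved, because a sliding window changes the count by at most one. A naive-construction sketch of such an index (quadratic build, for illustration only; the paper's contribution is building it from the run-length encoding instead):

```python
def build_index(s):
    """For each substring length m of s, store the min and max number of a's."""
    n = len(s)
    pref = [0]
    for ch in s:
        pref.append(pref[-1] + (ch == "a"))
    idx = {}
    for m in range(1, n + 1):
        counts = [pref[i + m] - pref[i] for i in range(n - m + 1)]
        idx[m] = (min(counts), max(counts))
    return idx

def query(idx, x, y):
    """Does some substring contain exactly x a's and y b's?"""
    m = x + y
    if m == 0:
        return True
    if m not in idx:
        return False
    lo, hi = idx[m]
    return lo <= x <= hi
```

Each query is a constant-time range check; the paper's index stores the same min/max information but is built in time depending on the run-length encoding length $\rho$ rather than $n^2$.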
1206.2526
Analysis of Inpainting via Clustered Sparsity and Microlocal Analysis
math.FA cs.IT math.IT math.NA
Recently, compressed sensing techniques in combination with both wavelet and directional representation systems have been very effectively applied to the problem of image inpainting. However, a mathematical analysis of these techniques which reveals the underlying geometrical content is completely missing. In this paper, we provide the first comprehensive analysis in the continuum domain utilizing the novel concept of clustered sparsity, which besides leading to asymptotic error bounds also makes the superior behavior of directional representation systems over wavelets precise. First, we propose an abstract model for problems of data recovery and derive error bounds for two different recovery schemes, namely l_1 minimization and thresholding. Second, we set up a particular microlocal model for an image governed by edges inspired by seismic data, as well as a particular mask to model the missing data, namely a linear singularity masked by a horizontal strip. Applying the abstract estimate in the case of wavelets and of shearlets, we prove that -- provided the size of the missing part is asymptotic to the size of the analyzing functions -- asymptotically precise inpainting can be obtained for this model. Finally, we show that shearlets can fill strictly larger gaps than wavelets in this model.
1206.2528
Ordinary Search Engine Users assessing Difficulty, Effort, and Outcome for Simple and Complex Search Tasks
cs.IR
Search engines are the preferred tools for finding information on the Web. They are advancing to be the common helpers to answer any of our search needs. We use them to carry out simple look-up tasks and also to work on rather time consuming and more complex search tasks. Yet, we do not know very much about the user performance while carrying out those tasks -- especially not for ordinary users. The aim of this study was to get more insight into whether Web users manage to assess difficulty, time effort, query effort, and task outcome of search tasks, and if their judging performance relates to task complexity. Our study was conducted with a systematically selected sample of 56 people with a wide demographic background. They carried out a set of 12 search tasks with commercial Web search engines in a laboratory environment. The results confirm that it is hard for normal Web users to judge the difficulty and effort to carry out complex search tasks. The judgments are more reliable for simple tasks than for complex ones. Task complexity is an indicator for judging performance.
1206.2544
Education in Conflict Zones: a Web and Mobility Approach
cs.CY cs.SI
We propose a new framework for education in conflict zones, considering the explosive growth of social media, web services, and mobile Internet over the past decade. Moreover, we focus on one conflict zone, Afghanistan, as a case study, because of its alarmingly high illiteracy rate, lack of qualified teachers, rough terrain, and relatively high mobile penetration of over 50%. In several of Afghanistan's provinces, it is hard to currently sustain the traditional bricks-and-mortar school model, due to numerous incidents of schools, teachers, and students being attacked because of the ongoing insurgency and political instability. Our model improves the virtual school model, by addressing most of its disadvantages, to provide students in Afghanistan with an opportunity to achieve standardised education, even when the security situation does not allow them to attend traditional schools. One of the biggest advantages of this model is that it is sufficiently robust to deal with gender discrimination, imposed by culture or politics of the region.
1206.2550
Core percolation on complex networks
cond-mat.stat-mech cs.SI physics.soc-ph
As a fundamental structural transition in complex networks, core percolation is related to a wide range of important problems. Yet, previous theoretical studies of core percolation have focused on the classical Erd\H{o}s-R\'enyi random networks with Poisson degree distribution, which are quite unlike many real-world networks with scale-free or fat-tailed degree distributions. Here we show that core percolation can be analytically studied for complex networks with arbitrary degree distributions. We derive the condition for core percolation and find that purely scale-free networks have no core for any degree exponent. We show that for undirected networks, if core percolation occurs then it is always continuous, while for directed networks it becomes discontinuous when the in- and out-degree distributions are different. We also apply our theory to real-world directed networks and find, surprisingly, that they often have much larger core sizes than random models. These findings would help us better understand the interesting interplay between the structural and dynamical properties of complex networks.
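The core in question is the remainder of the standard greedy leaf-removal procedure: repeatedly delete a leaf together with its neighbor (and that neighbor's edges), pruning any nodes left isolated. A small sketch of that procedure for undirected graphs without self-loops (my own implementation, adjacency given as a dict of neighbor sets):

```python
def core_size(adj):
    """Greedy leaf removal: delete each leaf's neighbor with all its edges,
    prune isolated nodes, repeat; the surviving subgraph is the core."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    while True:
        for u in [u for u in adj if not adj[u]]:
            del adj[u]                           # prune isolated nodes
        leaf = next((u for u in adj if len(adj[u]) == 1), None)
        if leaf is None:
            return len(adj)                      # no leaves left: core found
        nb = next(iter(adj[leaf]))
        for w in adj.pop(nb):                    # remove the leaf's neighbor
            adj[w].discard(nb)                   # ...and all its edges
```

Trees and stars have an empty core, while any graph of minimum degree two (a triangle, a clique) survives intact, which matches the intuition that the core captures the densely entangled part of the network.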
1206.2568
LP decoding of expander codes: a simpler proof
cs.IT math.IT
A code $C \subseteq \mathbb{F}_2^n$ is a $(c,\epsilon,\delta)$-expander code if it has a Tanner graph, where every variable node has degree $c$, and every subset of variable nodes $L_0$ such that $|L_0|\leq \delta n$ has at least $\epsilon c |L_0|$ neighbors. Feldman et al. (IEEE IT, 2007) proved that LP decoding corrects $\frac{3\epsilon-2}{2\epsilon-1} \cdot (\delta n-1)$ errors of $(c,\epsilon,\delta)$-expander code, where $\epsilon > 2/3+\frac{1}{3c}$. In this paper, we provide a simpler proof of their result and show that this result holds for every expansion parameter $\epsilon > 2/3$.
1206.2587
Exploiting Particle Swarm Optimization in Multiple Faults Fuzzy Detection
cs.NE cs.SY
In this paper, an on-line multiple-fault detection approach is first proposed. For efficiency, an optimal design of the membership functions is required; the proposed approach is therefore improved using the Particle Swarm Optimization (PSO) technique. The inputs of the proposed approaches are residuals representing the numerical evaluation of analytical redundancy relations, generated through bond graph modeling. The results of the fuzzy detection modules are displayed as a colored causal graph. A comparison between the results obtained using PSO and those given by Genetic Algorithms (GA) is finally made. The experiments focus on a simulation of the three-tank hydraulic system, a benchmark in the diagnosis domain.
1206.2599
A tale of two cities. Vulnerabilities of the London and Paris transit networks
physics.soc-ph cs.SI physics.data-an
This paper analyses the impact of random failure or attack on the public transit networks of London and Paris in a comparative study. In particular, we analyze how the dysfunction or removal of sets of stations or links (rails, roads, etc.) affects the connectivity properties within these networks. We show how accumulating dysfunction leads to emergent phenomena that cause the transportation system to break down as a whole. Simulating different directed attack strategies, we find minimal strategies with high impact and identify a priori criteria that correlate with the resilience of these networks. To demonstrate our approach, we choose the London and Paris public transit networks. Our quantitative analysis is performed within the framework of complex network theory, a methodological tool that has emerged recently as an interdisciplinary approach joining methods and concepts of the theory of random graphs, percolation, and statistical physics. In conclusion, we demonstrate that, taking cascading effects into account, the network integrity is controlled for both networks by less than 0.5% of the stations, i.e., 19 for Paris and 34 for London.
1206.2627
Image Similarity Using Sparse Representation and Compression Distance
cs.CV
A new line of research uses compression methods to measure the similarity between signals. Two signals are considered similar if one can be compressed significantly when the information of the other is known. The existing compression-based similarity methods, although successful in the discrete one dimensional domain, do not work well in the context of images. This paper proposes a sparse representation-based approach to encode the information content of an image using information from the other image, and uses the compactness (sparsity) of the representation as a measure of its compressibility (how much can the image be compressed) with respect to the other image. The more sparse the representation of an image, the better it can be compressed and the more it is similar to the other image. The efficacy of the proposed measure is demonstrated through the high accuracies achieved in image clustering, retrieval and classification.
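The generic compression-based similarity idea the paper starts from, the normalized compression distance, is easy to state for one-dimensional data (sketch below uses zlib as the compressor; the paper's contribution is to replace this with sparse-representation compressibility for images, where such compressors fail):

```python
import zlib

def ncd(x, y):
    """Normalized compression distance: small when knowing one signal
    helps compress the other, i.e., when the signals are similar."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Concatenating a signal with itself compresses almost for free, so the self-distance is near zero, while unrelated data yields a distance near one.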
1206.2656
A Construction of Quantum LDPC Codes from Cayley Graphs
cs.IT math.CO math.IT
We study a construction of Quantum LDPC codes proposed by MacKay, Mitchison and Shokrollahi. It is based on the Cayley graph of $\mathbb{F}_2^n$ together with a set of generators regarded as the columns of the parity-check matrix of a classical code. We give a general lower bound on the minimum distance of the Quantum code in $\mathcal{O}(dn^2)$ where d is the minimum distance of the classical code. When the classical code is the $[n, 1, n]$ repetition code, we are able to compute the exact parameters of the associated Quantum code which are $[[2^n, 2^{\frac{n+1}{2}}, 2^{\frac{n-1}{2}}]]$.
1206.2669
Information-Theoretically Secure Three-Party Computation with One Corrupted Party
cs.CR cs.IT math.IT
The problem in which one of three pairwise interacting parties is required to securely compute a function of the inputs held by the other two, when one party may arbitrarily deviate from the computation protocol (active behavioral model), is studied. An information-theoretic characterization of unconditionally secure computation protocols under the active behavioral model is provided. A protocol for Hamming distance computation is provided and shown to be unconditionally secure under both active and passive behavioral models using the information-theoretic characterization. The difference between the notions of security under the active and passive behavioral models is illustrated through the BGW protocol for computing quadratic and Hamming distances; this protocol is secure under the passive model, but is shown to be not secure under the active model.
1206.2691
IDS: An Incremental Learning Algorithm for Finite Automata
cs.LG cs.DS cs.FL
We present a new algorithm IDS for incremental learning of deterministic finite automata (DFA). This algorithm is based on the concept of distinguishing sequences introduced in (Angluin81). We give a rigorous proof that two versions of this learning algorithm correctly learn in the limit. Finally we present an empirical performance analysis that compares these two algorithms, focussing on learning times and different types of learning queries. We conclude that IDS is an efficient algorithm for software engineering applications of automata learning, such as testing and model inference.
1206.2720
Temporal percolation of the susceptible network in an epidemic spreading
physics.soc-ph cs.SI
In this work, we study the evolution of the susceptible individuals during the spread of an epidemic modeled by the susceptible-infected-recovered (SIR) process on top of complex networks. Using an edge-based compartmental approach and percolation tools, we find that a time-dependent quantity $\Phi_S(t)$, namely, the probability that a given neighbor of a node is susceptible at time $t$, is the control parameter of a node void percolation process involving those nodes of the network not reached by the disease. We show that there exists a critical time $t_c$ above which the giant susceptible component is destroyed. As a consequence, in order to preserve a macroscopic connected fraction of the network composed of healthy individuals, which guarantees its functionality, any mitigation strategy should be implemented before this critical time $t_c$. Our theoretical results are confirmed by extensive simulations of the SIR process.
1206.2728
Effect of Closed Paths in Complex networks on Six Degrees of Separation and Disorder
physics.soc-ph cs.SI
The Milgram condition proposed by Aoyama et al. plays an important role in the analysis of "six degrees of separation". We have shown that the relations between the Milgram condition and the generalized clustering coefficient, which we introduced as an index for measuring the number of closed paths, are quite different in scale-free networks (Barabasi and Albert) and small-world networks (Watts and Strogatz, Watts). This implies that the effect of closed paths on information propagation differs between the two kinds of networks. In this article, we first investigate this difference and pursue the question of what the crucial mathematical quantity for information propagation is. We find that a sort of "disorder" plays a more important role in information propagation than the partially closed paths included in a network. We then inquire into this in more detail by introducing two types of intermediate networks, and find that the average local clustering coefficient and the generalized clustering coefficients $C_{(q)}$ have different functions and meanings; in particular, $C_{(q)}$ is closely related to the propagation of information on networks. Lastly, we show that the realizability of six degrees of separation in networks can be understood in a unified way through disorder.
1206.2733
Parrondo Paradox in Scale Free Networks
physics.soc-ph cs.SI
Parrondo's paradox occurs in sequences of games in which a winning expectation may be obtained by playing the games in a random order, even though each game in the sequence may be losing when played individually. Several variations of Parrondo's games with this paradoxical property have been introduced. In this paper, I examine whether Parrondo's paradox occurs in scale-free networks. Two models are discussed through theoretical analysis and computer simulations. As a result, I prove that Parrondo's paradox occurs only in the second model.
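The classical (non-network) form of the paradox can be verified exactly: with the capital-dependent game B, each individual game has non-positive drift, yet a fair random mixture of A and B drifts upward. A sketch computing the drift from the stationary distribution of capital mod 3 (standard Parrondo parameters; this is the baseline that network variants of the games modify):

```python
def drift(win_prob, iters=5000):
    """Average gain per play of a game whose win probability depends on
    capital mod 3, via power iteration for the stationary distribution."""
    pi = [1.0 / 3.0] * 3
    for _ in range(iters):
        nxt = [0.0] * 3
        for s in range(3):
            p = win_prob(s)
            nxt[(s + 1) % 3] += pi[s] * p          # win: capital + 1
            nxt[(s - 1) % 3] += pi[s] * (1.0 - p)  # lose: capital - 1
        pi = nxt
    return sum(pi[s] * (2.0 * win_prob(s) - 1.0) for s in range(3))

EPS = 0.005
game_a = lambda s: 0.5 - EPS                      # slightly losing coin flip
game_b = lambda s: (0.1 if s == 0 else 0.75) - EPS  # capital-dependent game
mixed = lambda s: 0.5 * game_a(s) + 0.5 * game_b(s)  # random A/B mixture
```

Mixing changes the stationary occupation of the "bad" state (capital divisible by 3), which is what flips the sign of the expected gain.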
1206.2742
Online open neuroimaging mass meta-analysis
cs.DL cs.AI stat.AP
We describe a system for meta-analysis where a wiki stores numerical data in a simple format and a web service performs the numerical computation. We initially apply the system on multiple meta-analyses of structural neuroimaging data results. The described system allows for mass meta-analysis, e.g., meta-analysis across multiple brain regions and multiple mental disorders.
1206.2802
Critical behavior in a cross-situational lexicon learning scenario
physics.soc-ph cond-mat.stat-mech cs.AI
The associationist account for early word-learning is based on the co-occurrence between objects and words. Here we examine the performance of a simple associative learning algorithm for acquiring the referents of words in a cross-situational scenario affected by noise produced by out-of-context words. We find a critical value of the noise parameter $\gamma_c$ above which learning is impossible. We use finite-size scaling to show that the sharpness of the transition persists across a region of order $\tau^{-1/2}$ about $\gamma_c$, where $\tau$ is the number of learning trials, as well as to obtain the learning error (scaling function) in the critical region. In addition, we show that the distribution of durations of periods when the learning error is zero is a power law with exponent -3/2 at the critical point.
1206.2807
An efficient hierarchical graph based image segmentation
cs.CV
Hierarchical image segmentation provides a region-oriented scale-space, i.e., a set of image segmentations at different detail levels in which the segmentations at finer levels are nested with respect to those at coarser levels. Most image segmentation algorithms, such as region merging algorithms, rely on a criterion for merging that does not lead to a hierarchy and for which tuning the parameters can be difficult. In this work, we propose a hierarchical graph-based image segmentation relying on a criterion popularized by Felzenszwalb and Huttenlocher. We illustrate our method on both real and synthetic images, showing its efficiency, ease of use, and robustness.
1206.2866
Efficient scheduling using complex networks
physics.soc-ph cs.CE cs.SY
We consider the problem of efficiently scheduling the production of goods for a model steel manufacturing company. We propose a new approach for solving this classic problem, using techniques from the statistical physics of complex networks in conjunction with depth-first search to generate a successful, flexible, schedule. The schedule generated by our algorithm is more efficient and outperforms schedules selected at random from those observed in real steel manufacturing processes. Finally, we explore whether the proposed approach could be beneficial for long term planning.
1206.2898
Long-Range Navigation on Complex Networks using L\'evy Random Walks
cond-mat.stat-mech cs.SI physics.soc-ph
We introduce a strategy of navigation in undirected networks, including regular, random, and complex networks, that is inspired by L\'evy random walks, generalizing previous navigation rules. We obtained exact expressions for the stationary probability distribution, the occupation probability, the mean first passage time, and the average time to reach a node on the network. We found that the long-range navigation using the L\'evy random walk strategy, compared with the normal random walk strategy, is more efficient at reducing the time to cover the network. The dynamical effect of using the L\'evy walk strategy is to transform a large-world network into a small world. Our exact results provide a general framework that connects two important fields: L\'evy navigation strategies and dynamics on complex networks.
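The long-range rule underlying these walks assigns jump weights that decay with the shortest-path distance, $w_{ij} = d_{ij}^{-\alpha}$. Because the weights are symmetric, the stationary occupation probability has the closed form $\pi_i \propto \sum_j d_{ij}^{-\alpha}$, which a few lines can verify (a pure-Python sketch on a small graph; the exponent and graph are arbitrary):

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def levy_walk(adj, alpha):
    """Transition matrix P with P[i][j] ~ d(i,j)^(-alpha) and the analytic
    stationary distribution pi_i ~ sum_j d(i,j)^(-alpha)."""
    n = len(adj)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d = bfs_distances(adj, i)
        for j in range(n):
            if j != i:
                w[i][j] = d[j] ** (-alpha)
    strength = [sum(row) for row in w]
    p = [[w[i][j] / strength[i] for j in range(n)] for i in range(n)]
    total = sum(strength)
    return p, [s / total for s in strength]
```

Large $\alpha$ recovers the normal nearest-neighbor random walk, while small $\alpha$ allows long jumps, which is the mechanism that effectively turns a large world into a small one.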
1206.2944
Practical Bayesian Optimization of Machine Learning Algorithms
stat.ML cs.LG
Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.
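The GP-based loop described here — fit a Gaussian process to the observed scores, then pick the next hyperparameter by maximizing an acquisition function — can be sketched in a few dozen lines using expected improvement on a 1-D grid. The RBF kernel, length-scale, and toy objective below are my own illustrative choices, not the paper's:

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x, y, grid, ls=0.2, jitter=1e-6):
    """GP posterior mean and std on the grid (zero prior mean, unit variance)."""
    k = rbf(x, x, ls) + jitter * np.eye(len(x))
    ks = rbf(grid, x, ls)
    kinv = np.linalg.inv(k)
    mu = ks @ kinv @ y
    var = 1.0 - np.sum((ks @ kinv) * ks, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(f, grid, n_iter=15, seed=0):
    """Maximize f over the grid: GP surrogate + expected improvement."""
    rng = np.random.default_rng(seed)
    x = [float(v) for v in rng.choice(grid, size=2, replace=False)]
    y = [f(v) for v in x]
    for _ in range(n_iter):
        mu, sigma = gp_posterior(np.array(x), np.array(y), grid)
        x_next = grid[int(np.argmax(expected_improvement(mu, sigma, max(y))))]
        x.append(float(x_next))
        y.append(f(x_next))
    best = int(np.argmax(y))
    return x[best], y[best]
```

A real system along the paper's lines would additionally marginalize the kernel hyperparameters and account for evaluation cost, but even this bare loop homes in on the optimum of a noise-free toy objective with a handful of evaluations.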
1206.2959
Collaborative High Accuracy Localization in Mobile Multipath Environments
cs.NI cs.IT cs.RO math.IT
We study the problem of high accuracy localization of mobile nodes in a multipath-rich environment where sub-meter accuracies are required. We employ a peer-to-peer framework where the vehicles/nodes can get pairwise multipath-degraded ranging estimates in local neighborhoods together with a fixed number of anchor nodes. The challenge is to overcome the multipath barrier with redundancy in order to provide the desired accuracies, especially under severe multipath conditions when the fraction of received signals corrupted by multipath is dominant. We invoke an analytical graphical model framework based on particle filtering and reveal its high accuracy localization promise through simulations. We also address design questions such as "How many anchors and what fraction of line-of-sight (LOS) measurements are needed to achieve a specified target accuracy?", by analytically characterizing the performance improvement in localization accuracy as a function of the number of nodes in the network and the fraction of LOS measurements. In particular, for a static node placement, we show that the Cramer-Rao Lower Bound (CRLB), a fundamental lower bound on the localization accuracy, can be expressed as a product of two factors - a scalar function that depends only on the parameters of the noise distribution and a matrix that depends only on the geometry of node locations and the underlying connectivity graph. Further, a simplified expression is obtained for the CRLB that helps deduce the scaling behavior of the estimation error as a function of the number of agents and anchors in the network. The bound suggests that even a small fraction of LOS measurements can provide significant improvements. Conversely, a small fraction of NLOS measurements can significantly degrade the performance. The analysis is extended to the mobile setting and the performance is compared with the derived CRLB.
1206.2960
Modeling two-language competition dynamics
physics.soc-ph cs.SI
During the last decade, much attention has been paid to language competition in the complex systems community, that is, to how the fractions of speakers of several competing languages evolve in time. In this paper we review recent advances in this direction and focus on three aspects. First, we consider the shift from two-state models to three-state models that include the possibility of bilingual individuals. Understanding the role played by bilingualism is essential in sociolinguistics. In particular, the question addressed is whether bilingualism facilitates the coexistence of languages. Second, we analyze the effect of social interaction networks and physical barriers. Finally, we show how to analyze the issue of bilingualism from a game-theoretical perspective.
1206.2961
Epistemic view of quantum states and communication complexity of quantum channels
quant-ph cs.IT math.IT
The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.
1206.2974
On Constrained Randomized Quantization
cs.IT math.IT
Randomized (dithered) quantization is a method capable of achieving white reconstruction error independent of the source. Dithered quantizers have traditionally been considered within their natural setting of uniform quantization. In this paper we extend conventional dithered quantization to nonuniform quantization, via a subterfuge: dithering is performed in the companded domain. Closed-form necessary conditions for optimality of the compressor and expander mappings are derived for both fixed and variable rate randomized quantization. Numerically, mappings are optimized by iteratively imposing these necessary conditions. The framework is extended to include an explicit constraint that deterministic or randomized quantizers yield reconstruction error that is uncorrelated with the source. Surprising theoretical results show a direct and simple connection between the optimal constrained quantizers and their unconstrained counterparts. Numerical results for the Gaussian source provide strong evidence that the proposed constrained randomized quantizer outperforms the conventional dithered quantizer, as well as the constrained deterministic quantizer. Moreover, the proposed constrained quantizer renders the reconstruction error nearly white. In the second part of the paper, we investigate whether uncorrelated reconstruction error requires random coding to achieve asymptotic optimality. We show that for a Gaussian source, the optimal vector quantizer of asymptotically high dimension whose quantization error is uncorrelated with the source is indeed random. Thus, random encoding in this setting of rate-distortion theory is not merely a tool to characterize performance bounds, but a required property of quantizers that approach such bounds.
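The basic construction, dithering in the companded domain, can be sketched directly: compress the source, apply a subtractively dithered uniform quantizer, then expand. The mu-law compander and all parameter values below are illustrative assumptions, not the paper's optimized mappings:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, Delta = 255.0, 1.0 / 32.0            # compander parameter, quantizer step (assumptions)

def compress(x):                         # mu-law compressor, [-1,1] -> [-1,1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(u):                           # inverse (expander) mapping
    return np.sign(u) * np.expm1(np.abs(u) * np.log1p(mu)) / mu

x = np.clip(rng.normal(0.0, 0.25, 100_000), -1, 1)   # bounded source
dither = rng.uniform(-Delta / 2, Delta / 2, x.size)

u = compress(x)
q = Delta * np.round((u + dither) / Delta) - dither  # subtractive dithering
xhat = expand(q)
err = xhat - x

# With subtractive dither, the companded-domain error q - u is uniform on
# [-Delta/2, Delta/2] and independent of the source.
corr = float(np.corrcoef(x, err)[0, 1])
```

On this example the correlation between the source and the reconstruction error is small but not exactly zero, which is consistent with the abstract's point: conventional dithered quantization extended by companding does not by itself enforce the uncorrelatedness constraint that the proposed constrained quantizer imposes.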
1206.2994
Towards Optimality in Transform Coding
cs.IT math.IT
It is well-known for transform coding of multivariate Gaussian sources, that the Karhunen-Lo\`eve transform (KLT) minimizes the mean square error distortion. However, finding the optimal transform for general non-Gaussian sources has been an open problem for decades, despite several important advances that provide some partial answers regarding KLT optimality. In this paper, we present a necessary and sufficient condition for optimality of a transform when high resolution, variable rate quantizers are employed. We hence present not only a complete characterization of when KLT is optimal, but also a determining condition for optimality of a general (non-KLT) transform. This necessary and sufficient condition is shown to have direct connections to the well studied source separation problem. This observation can impact source separation itself, as illustrated with a new optimality result. We combine the transform optimality condition with algorithmic tools from source separation, to derive a practical numerical method to search for the optimal transform in source coding. Then, we focus on multiterminal settings, for which {\it conditional} KLT was shown to possess certain optimality properties for Gaussian sources. We derive the optimal orthogonal transform for the setting where side information is only available to the decoder, along with new specialized results specific to the conditions for optimality of conditional KLT. Finally, we consider distributed source coding where two correlated sources are to be transform coded separately but decoded jointly. We derive the necessary and sufficient condition of optimality of the orthogonal transforms. We specialize to find the optimal orthogonal transforms, in this setting, for specific source densities, including jointly Gaussian sources.
1206.3002
Study of the Importance of Adequacy to Robot Verbal and Non Verbal Communication in Human-Robot interaction
cs.RO cs.HC
The Robadom project aims at creating a homecare robot that helps and assists people in their daily life, either by doing tasks for the human or by helping manage the day's organization. A robot can take on this kind of role only if it is accepted by humans. Before considering the robot's appearance, we decided to evaluate the importance of the relation between verbal and nonverbal communication during a human-robot interaction, in order to determine the situations in which the robot is accepted. We carried out two experiments to study this acceptance. The first experiment studied the importance of the robot's nonverbal behavior in relation to its verbal behavior. The second experiment studied the capability of a robot to provide a correct human-robot interaction.
1206.3014
Round-Robin Streaming with Generations
cs.IT math.IT
We consider three types of application layer coding for streaming over lossy links: random linear coding, systematic random linear coding, and structured coding. The file being streamed is divided into sub-blocks (generations). Code symbols are formed by combining data belonging to the same generation, and transmitted in a round-robin fashion. We compare the schemes based on delivery packet count, net throughput, and energy consumption for a range of generation sizes. We determine these performance measures both analytically and in an experimental configuration. We find our analytical predictions to match the experimental results. We show that coding at the application layer brings about a significant increase in net data throughput, and thereby reduction in energy consumption due to reduced communication time. On the other hand, on devices with constrained computing resources, heavy coding operations cause packet drops in higher layers and negatively affect the net throughput. We find from our experimental results that low-rate MDS codes are best for small generation sizes, whereas systematic random linear coding has the best net throughput and lowest energy consumption for larger generation sizes due to its low decoding complexity.
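The random-linear-coding scheme over generations described above can be sketched end to end: the file is split into generations, random GF(2) combinations are transmitted round-robin over a lossy link, and each generation is decoded by Gaussian elimination once enough independent packets arrive. All sizes and the loss rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
G, n_gens, L, loss = 4, 3, 8, 0.3        # generation size, #generations, symbol bits, loss rate
data = rng.integers(0, 2, (n_gens, G, L), dtype=np.uint8)

def gf2_rref(M, ncols):
    """Row-reduce M over GF(2), pivoting only on the first ncols columns."""
    M, rank = M.copy(), 0
    for col in range(ncols):
        piv = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not piv:
            continue
        M[[rank, piv[0]]] = M[[piv[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return M, rank

coeffs = [[] for _ in range(n_gens)]     # received coding vectors per generation
payloads = [[] for _ in range(n_gens)]   # corresponding coded symbols
decoded = [None] * n_gens
sent = 0
while any(d is None for d in decoded):
    for g in range(n_gens):              # round-robin over generations
        if decoded[g] is not None:
            continue
        sent += 1
        c = rng.integers(0, 2, G, dtype=np.uint8)        # random coding vector
        if not c.any() or rng.random() < loss:           # all-zero vector or packet lost
            continue
        coeffs[g].append(c)
        payloads[g].append(((c[:, None] & data[g]).sum(axis=0) % 2).astype(np.uint8))
        A = np.array(coeffs[g], dtype=np.uint8)
        P = np.array(payloads[g], dtype=np.uint8)
        R, rank = gf2_rref(np.hstack([A, P]), G)
        if rank == G:                    # full rank: top G rows read [I | data]
            decoded[g] = R[:G, G:]
```

The packet counter `sent` plays the role of the delivery packet count in the abstract's comparison; a systematic variant would simply send the G uncoded symbols of each generation first.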
1206.3027
Social Networks, Functional Differentiation of Society, and Data Protection
cs.CY cs.SI physics.soc-ph
Most scholars, politicians, and activists are following individualistic theories of privacy and data protection. In contrast, some of the pioneers of the data protection legislation in Germany like Adalbert Podlech, Paul J. M\"uller, and Ulrich Dammann used a systems theory approach. Following Niklas Luhmann, the aim of data protection is (1) maintaining the functional differentiation of society against the threats posed by the possibilities of modern information processing, and (2) countering undue information power by organized social players. It could be, therefore, no surprise that the first data protection law in the German state of Hesse contained rules to protect the individual as well as the balance of power between the legislative and the executive body of the state. Social networks like Facebook or Google+ do not only endanger their users by exposing them to other users or the public. They constitute, first and foremost, a threat to society as a whole by collecting information about individuals, groups, and organizations from different social systems and combining them in a centralized data bank. They transgress the boundaries between social systems that act as a shield against total visibility and transparency of the individual and protect the freedom and the autonomy of the people. Without enforcing structural limitations on the organizational use of collected data by the social network itself or the company behind it, social networks pose the worst totalitarian peril for western societies since the fall of the Soviet Union.
1206.3029
Asymptotic Outage Probability Analysis for General Fixed-Gain Amplify-and-Forward Multihop Relay Systems
cs.IT math.IT
In this paper, we present an analysis of the outage probability for fixed-gain amplify-and-forward (AF) multihop relay links operating in the high SNR regime. Our analysis exploits properties of Mellin transforms to derive an asymptotic approximation that is accurate even when the per-hop channel gains adhere to completely different fading models. The main result contained in the paper is a general expression for the outage probability, which is a functional of the Mellin transforms of the per-hop channel gains. Furthermore, we explicitly calculate the asymptotic outage probability for four different systems, whereby in each system the per-hop channels adhere to either a Nakagami-m, Weibull, Rician, or Hoyt fading profile, but where the distributional parameters may differ from hop to hop. This analysis leads to our second main result, which is a semi-general closed-form formula for the outage probability of general fixed-gain AF multihop systems. We exploit this formula to analyze an example scenario for a four-hop system where the per-hop channels follow the four aforementioned fading models, i.e., the first channel is Nakagami-m fading, the second is Weibull fading, and so on. Finally, we provide simulation results to corroborate our analysis.
1206.3037
Generalized voter-like models on heterogeneous networks
physics.soc-ph cs.SI
We describe a generalization of the voter model on complex networks that encompasses different sources of degree-related heterogeneity and that is amenable to direct analytical solution by applying the standard methods of heterogeneous mean-field theory. Our formalism allows for a compact description of previously proposed heterogeneous voter-like models, and represents a basic framework within which we can rationalize the effects of heterogeneity in voter-like models, as well as implement novel sources of heterogeneity, not previously considered in the literature.
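A minimal instance of the heterogeneous mean-field machinery the abstract refers to is the node-update voter model on an uncorrelated network, where x_k(t) is the density of +1 voters among nodes of degree k and dx_k/dt = omega - x_k, with omega the degree-weighted global density. The degree distribution and initial condition below are illustrative assumptions:

```python
import numpy as np

k = np.array([2.0, 4.0, 8.0, 16.0])      # degree classes
Pk = np.array([0.4, 0.3, 0.2, 0.1])      # their probabilities
kmean = float((k * Pk).sum())

def omega(x):                            # degree-weighted density sum_k k P(k) x_k / <k>
    return float((k * Pk * x).sum() / kmean)

x = np.array([0.9, 0.1, 0.5, 0.3])       # initial +1 densities per degree class
w0, dt = omega(x), 0.01
for _ in range(5000):                    # forward-Euler integration of dx_k/dt = omega - x_k
    x = x + dt * (omega(x) - x)
```

The integration exhibits the two standard mean-field facts: omega is conserved by the dynamics, and every degree class relaxes exponentially to that conserved value.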
1206.3038
On the Covering Radius of Some Modular Codes
cs.IT math.IT math.RA
This paper gives lower and upper bounds on the covering radius of codes over $\Z_{2^s}$ with respect to the homogeneous distance. We also determine the covering radius of various Repetition codes and Simplex codes (Type $\alpha$ and Type $\beta$) and their duals, and give bounds on the covering radii of MacDonald codes of both types over $\Z_4$.
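For a small case the covering radius can be checked by brute force. The sketch below computes it for the $\Z_4$ repetition code under the homogeneous weight (wt(0)=0, wt(1)=wt(3)=1, wt(2)=2); the block length is an illustrative choice, not one of the paper's parameter families:

```python
from itertools import product

HOM = {0: 0, 1: 1, 2: 2, 3: 1}           # homogeneous weight on Z4

def d_hom(x, y):                          # homogeneous distance between tuples over Z4
    return sum(HOM[(a - b) % 4] for a, b in zip(x, y))

n = 4                                     # block length (illustrative)
code = [(a,) * n for a in range(4)]       # Z4 repetition code of length n

# Covering radius: max over all vectors of the distance to the nearest codeword.
cov = max(min(d_hom(x, c) for c in code) for x in product(range(4), repeat=n))
```

Here the brute force returns n = 4: the vector (0, 1, 2, 3) lies at homogeneous distance n from every codeword, while averaging the four codeword distances (which always sum to 4n) shows no vector can be farther.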
1206.3043
A Metapopulation Model for Chikungunya Including Populations Mobility on a Large-Scale Network
cs.SI math.DS physics.soc-ph
In this work we study the influence of population mobility on the spread of a vector-borne disease. We focus on the chikungunya epidemic that occurred in 2005-2006 on R\'eunion Island, Indian Ocean, France, and validate our models with real epidemic data from the event. We propose a metapopulation model that combines a high-resolution patch model of the island with realistic population densities and mobility models for humans (based on real-motion data) and mosquitoes. In this metapopulation network, two models are coupled: one for the dynamics of the mosquito population and one for the transmission of the disease. A high-resolution numerical model is created from real geographical, demographical and mobility data. The island is modeled with an 18,000-node metapopulation network. Numerical results show the impact of the geographical environment and population mobility on the spread of the disease. The model is finally validated against real epidemic data from the R\'eunion event.
1206.3065
Stability Analysis and Controller Design for a Linear System with Duhem Hysteresis Nonlinearity
math.OC cs.SY
In this paper, we investigate the stability of a feedback interconnection between a linear system and a Duhem hysteresis operator, where the linear system and the Duhem hysteresis operator satisfy either the counter-clockwise (CCW) or clockwise (CW) input-output dynamics. More precisely, we present sufficient conditions for the stability of the interconnected system that depend on the CW or CCW properties of the linear system and the Duhem operator. Based on these results we introduce a control design methodology for stabilizing a linear plant with a hysteretic actuator or sensor without requiring precise information on the hysteresis operator.
1206.3072
Statistical Consistency of Finite-dimensional Unregularized Linear Classification
cs.LG stat.ML
This manuscript studies statistical properties of linear classifiers obtained through minimization of an unregularized convex risk over a finite sample. Although the results are explicitly finite-dimensional, inputs may be passed through feature maps; in this way, in addition to treating the consistency of logistic regression, this analysis also handles boosting over a finite weak learning class with, for instance, the exponential, logistic, and hinge losses. In this finite-dimensional setting, it is still possible to fit arbitrary decision boundaries: scaling the complexity of the weak learning class with the sample size leads to the optimal classification risk almost surely.
1206.3075
Cascades on clique-based graphs
physics.soc-ph cond-mat.stat-mech cs.SI
We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of highly clustered random graphs introduced by Gleeson [J. P. Gleeson, Phys. Rev. E 80, 036107 (2009)]. A condition for the existence of global cascades is also derived. Applications of this approach include analyses of percolation and of Watts's model. We show how our techniques can be used to study the effects of in-group bias in cascades on social networks.
1206.3078
Mining Educational Data Using Classification to Decrease Dropout Rate of Students
cs.IR
In the last two decades, the number of Higher Education Institutions (HEI) has grown rapidly in India. Since most of these institutions are privately run, cut-throat competition has arisen among them in attracting students for admission. This is why institutions tend to focus on the number of students rather than on the quality of education. This paper presents a data mining application that generates predictive models for engineering students' dropout management. Given records of incoming students, the predictive model can produce a short, accurate list identifying the students most likely to need support from the dropout-prevention program. The results show that the machine learning algorithm is able to establish an effective predictive model from the existing student dropout data.
1206.3099
Sparse Distributed Learning Based on Diffusion Adaptation
cs.LG cs.DC
This article proposes diffusion LMS strategies for distributed estimation over adaptive networks that are able to exploit sparsity in the underlying system model. The approach relies on convex regularization, common in compressive sensing, to enhance the detection of sparsity via a diffusive process over the network. The resulting algorithms endow networks with learning abilities and allow them to learn the sparse structure from the incoming data in real-time, and also to track variations in the sparsity of the model. We provide convergence and mean-square performance analysis of the proposed method and show under what conditions it outperforms the unregularized diffusion version. We also show how to adaptively select the regularization parameter. Simulation results illustrate the advantage of the proposed filters for sparse data recovery.
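The adapt-then-combine structure with an l1 "zero-attracting" term can be sketched concretely. The ring topology, step sizes, and sparse model below are illustrative assumptions; the paper's algorithms cover more general regularizers, combination weights, and analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
K, M = 8, 20                             # number of nodes, filter length
mu_step, rho = 0.01, 1e-4                # LMS step size, l1 regularization strength
w_true = np.zeros(M)
w_true[[2, 7, 15]] = [1.0, -0.5, 0.8]    # sparse unknown model

# Ring topology: each node combines with itself and its two neighbours.
neighbors = [[(j - 1) % K, j, (j + 1) % K] for j in range(K)]

W = np.zeros((K, M))                     # per-node estimates
for _ in range(3000):
    psi = np.empty_like(W)
    for j in range(K):                   # adapt: LMS step plus zero-attracting term
        u = rng.normal(size=M)           # regressor at node j
        d = u @ w_true + 0.01 * rng.normal()   # noisy local measurement
        e = d - u @ W[j]
        psi[j] = W[j] + mu_step * e * u - rho * np.sign(W[j])
    for j in range(K):                   # combine: average over the neighbourhood
        W[j] = psi[neighbors[j]].mean(axis=0)
```

The subgradient term `-rho * np.sign(W[j])` is the convex-regularization ingredient borrowed from compressive sensing: it drives inactive taps toward zero while the diffusion (combine) step propagates information across the network.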
1206.3111
The third open Answer Set Programming competition
cs.AI
Answer Set Programming (ASP) is a well-established paradigm of declarative programming in close relationship with other declarative formalisms such as SAT Modulo Theories, Constraint Handling Rules, FO(.), PDDL and many others. Since its first informal editions, ASP systems have been compared in the now well-established ASP Competition. The Third (Open) ASP Competition, as the sequel to the ASP Competition Series held at the University of Potsdam in Germany (2006-2007) and at the University of Leuven in Belgium in 2009, took place at the University of Calabria (Italy) in the first half of 2011. Participants competed on a pre-selected collection of benchmark problems, taken from a variety of domains as well as real world applications. The Competition ran on two tracks: the Model and Solve (M&S) Track, based on an open problem encoding and open language, and open to any kind of system based on a declarative specification paradigm; and the System Track, run on the basis of fixed, public problem encodings, written in a standard ASP language. This paper discusses the format of the Competition and the rationale behind it, then reports the results for both tracks. Finally, we discuss a comparison with the second ASP Competition and with state-of-the-art solutions for some of the benchmark domains. To appear in Theory and Practice of Logic Programming (TPLP).
1206.3120
Convexity Conditions for 802.11 WLANs
cs.NI cs.IT math.IT
In this paper we characterise the maximal convex subsets of the (non-convex) rate region in 802.11 WLANs. In addition to being of intrinsic interest as a fundamental property of 802.11 WLANs, this characterisation can be exploited to allow the wealth of convex optimisation approaches to be applied to 802.11 WLANs.
1206.3133
Multi-terminal Secrecy in a Linear Non-coherent Packetized Networks
cs.IT cs.CR math.IT
We consider a group of m+1 trusted nodes that aim to create a shared secret key K over a network in the presence of a passive eavesdropper, Eve. We assume a linear non-coherent network coding broadcast channel (over a finite field F_q) from one of the honest nodes (i.e., Alice) to the rest of them, including Eve. All of the trusted nodes can also discuss over a cost-free public channel, which is also overheard by Eve. For this setup, we propose upper and lower bounds on the secret key generation capacity, assuming that the field size q is very large. For the case of two trusted terminals (m = 1), our upper and lower bounds match, and we thus have a complete characterization of the secrecy capacity in the large field size regime.
1206.3137
Identifiability and Unmixing of Latent Parse Trees
stat.ML cs.LG
This paper explores unsupervised learning of parsing models along two directions. First, which models are identifiable from infinite data? We use a general technique for numerically checking identifiability based on the rank of a Jacobian matrix, and apply it to several standard constituency and dependency parsing models. Second, for identifiable models, how do we estimate the parameters efficiently? EM suffers from local optima, while recent work using spectral methods cannot be directly applied since the topology of the parse tree varies across sentences. We develop a strategy, unmixing, which deals with this additional complexity for restricted classes of parsing models.
1206.3138
On Modulo-Sum Computation over an Erasure Multiple Access Channel
cs.IT math.IT
We study computation of a modulo-sum of two binary source sequences over a two-user erasure multiple access channel. The channel is modeled as a binary-input, erasure multiple access channel, which can be in one of three states - either the channel output is a modulo-sum of the two input symbols, or the channel output equals the input symbol on the first link and an erasure on the second link, or vice versa. The associated state sequence is independent and identically distributed. We develop a new upper bound on the sum-rate by revealing only part of the state sequence to the transmitters. Our coding scheme is based on the compute and forward and the decode and forward techniques. When a (strictly) causal feedback of the channel state is available to the encoders, we show that the modulo-sum capacity is increased. Extensions to the case of lossy reconstruction of the modulo-sum and to channels involving additional states are also treated briefly.
1206.3189
A New Representation for the Symbol Error Rate
cs.IT math.IT
The symbol error rate of the minimum distance detector for an arbitrary multi-dimensional constellation impaired by additive white Gaussian noise is characterized as the product of a completely monotone function with a non-negative power of the signal to noise ratio. This representation is also shown to apply to cases when the impairing noise is compound Gaussian. Using this general result, it is proved that the symbol error rate is completely monotone if the rank of its constellation matrix is either one or two. Further, a necessary and sufficient condition for the complete monotonicity of the symbol error rate of a constellation of any dimension is also obtained. Applications to stochastic ordering of wireless system performance are also discussed.
1206.3204
Improved Spectral-Norm Bounds for Clustering
cs.LG cs.DS
Aiming to unify known results about clustering mixtures of distributions under separation conditions, Kumar and Kannan [2010] introduced a deterministic condition for clustering datasets. They showed that this single deterministic condition encompasses many previously studied clustering assumptions. More specifically, their proximity condition requires that in the target $k$-clustering, the projection of a point $x$ onto the line joining its cluster center $\mu$ and some other center $\mu'$, is a large additive factor closer to $\mu$ than to $\mu'$. This additive factor can be roughly described as $k$ times the spectral norm of the matrix representing the differences between the given (known) dataset and the means of the (unknown) target clustering. Clearly, the proximity condition implies center separation -- the distance between any two centers must be as large as the above mentioned bound. In this paper we improve upon the work of Kumar and Kannan along several axes. First, we weaken the center separation bound by a factor of $\sqrt{k}$, and secondly we weaken the proximity condition by a factor of $k$. Using these weaker bounds we still achieve the same guarantees when all points satisfy the proximity condition. We also achieve better guarantees when only $(1-\epsilon)$-fraction of the points satisfy the weaker proximity condition. The bulk of our analysis relies only on center separation under which one can produce a clustering which (i) has low error, (ii) has low $k$-means cost, and (iii) has centers very close to the target centers. Our improved separation condition allows us to match the results of the Planted Partition Model of McSherry [2001], improve upon the results of Ostrovsky et al. [2006], and improve separation results for mixture of Gaussian models in a particular setting.
1206.3231
CORL: A Continuous-state Offset-dynamics Reinforcement Learner
cs.LG stat.ML
Continuous state spaces and stochastic, switching dynamics characterize a number of rich, real-world domains, such as robot navigation across varying terrain. We describe a reinforcement-learning algorithm for learning in these domains and prove for certain environments that the algorithm is probably approximately correct with a sample complexity that scales polynomially with the state-space dimension. Unfortunately, no optimal planning techniques exist in general for such problems; instead we use fitted value iteration to solve the learned MDP, and include the error due to approximate planning in our bounds. Finally, we report an experiment using a robotic car driving over varying terrain to demonstrate that these dynamics representations adequately capture real-world dynamics and that our algorithm can be used to efficiently solve such problems.
1206.3232
AND/OR Importance Sampling
cs.AI
The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling; thereby providing a theoretical justification for preferring it over importance sampling. Our empirical evaluation demonstrates that AND/OR importance sampling is far more accurate than importance sampling in many cases.
1206.3233
Speeding Up Planning in Markov Decision Processes via Automatically Constructed Abstractions
cs.AI
In this paper, we consider planning in stochastic shortest path (SSP) problems, a subclass of Markov Decision Problems (MDP). We focus on medium-size problems whose state space can be fully enumerated. This problem has numerous important applications, such as navigation and planning under uncertainty. We propose a new approach for constructing a multi-level hierarchy of progressively simpler abstractions of the original problem. Once computed, the hierarchy can be used to speed up planning by first finding a policy for the most abstract level and then recursively refining it into a solution to the original problem. This approach is fully automated and delivers a speed-up of two orders of magnitude over a state-of-the-art MDP solver on sample problems while returning near-optimal solutions. We also prove theoretical bounds on the loss of solution optimality resulting from the use of abstractions.
1206.3234
Adaptive Inference on General Graphical Models
cs.DS cs.AI
Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional probabilities and dependencies in logarithmic time. We give experimental results for an implementation of our algorithm, and demonstrate its potential performance benefit in the study of protein structure.
1206.3235
Identifying reasoning patterns in games
cs.GT cs.AI
We present an algorithm that identifies the reasoning patterns of agents in a game, by iteratively examining the graph structure of its Multi-Agent Influence Diagram (MAID) representation. If the decision of an agent participates in no reasoning patterns, then we can effectively ignore that decision for the purpose of calculating a Nash equilibrium for the game. In some cases, this can lead to exponential time savings in the process of equilibrium calculation. Moreover, our algorithm can be used to enumerate the reasoning patterns in a game, which can be useful for constructing more effective computerized agents interacting with humans.
1206.3236
Learning Inclusion-Optimal Chordal Graphs
cs.LG cs.DS stat.ML
Chordal graphs can be used to encode dependency models that are representable by both directed acyclic and undirected graphs. This paper discusses a very simple and efficient algorithm to learn the chordal structure of a probabilistic model from data. The algorithm is a greedy hill-climbing search algorithm that uses the inclusion boundary neighborhood over chordal graphs. In the limit of a large sample size and under appropriate hypotheses on the scoring criterion, we prove that the algorithm will find a structure that is inclusion-optimal when the dependency model of the data-generating distribution can be represented exactly by an undirected graph. The algorithm is evaluated on simulated datasets.
1206.3237
Clique Matrices for Statistical Graph Decomposition and Parameterising Restricted Positive Definite Matrices
cs.DM cs.LG stat.ML
We introduce Clique Matrices as an alternative representation of undirected graphs, being a generalisation of the incidence matrix representation. Here we use clique matrices to decompose a graph into a set of possibly overlapping clusters, defined as well-connected subsets of vertices. The decomposition is based on a statistical description which encourages clusters to be well connected and few in number. Inference is carried out using a variational approximation. Clique matrices also play a natural role in parameterising positive definite matrices under zero constraints on elements of the matrix. We show that clique matrices can parameterise all positive definite matrices restricted according to a decomposable graph and form a structured Factor Analysis approximation in the non-decomposable case.
1206.3238
Greedy Block Coordinate Descent for Large Scale Gaussian Process Regression
cs.LG stat.ML
We propose a variable decomposition algorithm, greedy block coordinate descent (GBCD), to make dense Gaussian process regression practical for large-scale problems. GBCD breaks a large-scale optimization into a series of small subproblems. The challenge in variable decomposition algorithms is identifying the subproblem (the active set of variables) that yields the largest improvement. We analyze the limitations of existing methods and cast active set selection as a zero-norm constrained optimization problem that we solve using greedy methods. By directly estimating the decrease in the objective function, we obtain not only efficient approximate solutions for GBCD, but are also able to show that the method is globally convergent. Empirical comparisons against competing dense methods such as Conjugate Gradient or SMO show that GBCD is an order of magnitude faster. Comparisons against sparse GP methods show that GBCD is both accurate and capable of handling datasets of 100,000 samples or more.
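A minimal sketch of the block coordinate descent idea applied to the GP posterior system (K + σ²I)α = y. This is an illustrative simplification, not the paper's implementation: the greedy rule here selects the block by gradient magnitude rather than by the estimated objective decrease the abstract describes.

```python
import numpy as np

def rbf_kernel(X, ell=1.0):
    """Squared-exponential kernel matrix for inputs X (n x d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gbcd_solve(K, y, block=5, iters=300):
    """Greedy block coordinate descent on f(a) = 0.5 a'Ka - a'y,
    whose minimiser solves K a = y (the GP posterior weights)."""
    alpha = np.zeros_like(y)
    for _ in range(iters):
        g = K @ alpha - y                      # gradient of f
        idx = np.argsort(-np.abs(g))[:block]   # greedy active set
        # solve the block subproblem exactly on the chosen coordinates
        alpha[idx] -= np.linalg.solve(K[np.ix_(idx, idx)], g[idx])
    return alpha
```

Each iteration touches only a small block of variables, which is what makes the approach attractive when the full dense system is too large to factorise at once.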
1206.3239
On Identifying Total Effects in the Presence of Latent Variables and Selection Bias
stat.ME cs.AI stat.AP
Assume that cause-effect relationships between variables can be described by a directed acyclic graph and the corresponding linear structural equation model. We consider the problem of identifying total effects in the presence of latent variables and selection bias between a treatment variable and a response variable. Pearl and his colleagues provided the back-door criterion, the front-door criterion (Pearl, 2000) and the conditional instrumental variable method (Brito and Pearl, 2002) as identifiability criteria for total effects in the presence of latent variables, but not in the presence of selection bias. To solve this problem, we propose new graphical identifiability criteria for total effects based on identifiable factor models. The results of this paper are useful for identifying total effects in observational studies and provide a new viewpoint on the identification conditions of factor models.
1206.3240
Complexity of Inference in Graphical Models
cs.DS cs.AI
It is well-known that inference in graphical models is hard in the worst case, but tractable for models with bounded treewidth. We ask whether treewidth is the only structural criterion of the underlying graph that enables tractable inference. In other words, is there some class of structures with unbounded treewidth in which inference is tractable? Subject to a combinatorial hypothesis due to Robertson et al. (1994), we show that low treewidth is indeed the only structural restriction that can ensure tractability. Thus, even for the "best case" graph structure, there is no inference algorithm with complexity polynomial in the treewidth.
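To make the treewidth connection concrete: on a chain-structured model (treewidth 1), variable elimination computes the partition function at cost linear in the number of variables, whereas naive summation is exponential. A small illustrative sketch (not from the paper):

```python
import numpy as np

def chain_partition_elim(psis):
    """Eliminate variables left to right along a chain of binary
    variables; each step is a 2x2 matrix-vector product, so the total
    cost is linear in the chain length (treewidth 1)."""
    msg = np.ones(2)
    for psi in psis:      # psi[i, j] is the factor on (x_t, x_{t+1})
        msg = msg @ psi   # sum out x_t
    return msg.sum()      # sum out the last variable

def chain_partition_brute(psis):
    """Brute-force sum over all 2^n joint assignments, for comparison."""
    n = len(psis) + 1
    total = 0.0
    for assign in range(2 ** n):
        bits = [(assign >> k) & 1 for k in range(n)]
        weight = 1.0
        for t, psi in enumerate(psis):
            weight *= psi[bits[t], bits[t + 1]]
        total += weight
    return total
```

Both functions compute the same quantity; only the elimination version scales to long chains, which is the bounded-treewidth tractability the abstract refers to.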