Columns: id (string, 9-16 chars); title (string, 4-278 chars); categories (string, 5-104 chars); abstract (string, 6-4.09k chars)
1210.4875
A Theory of Goal-Oriented MDPs with Dead Ends
cs.AI
Stochastic Shortest Path (SSP) MDPs are a problem class widely studied in AI, especially in probabilistic planning. They describe a wide range of scenarios but make the restrictive assumption that the goal is reachable from any state, i.e., that dead-end states do not exist. Because of this, SSPs are unable to model various scenarios that may have catastrophic events (e.g., an airplane possibly crashing if it flies into a storm). Even though MDP algorithms have been used for solving problems with dead ends, a principled theory of SSP extensions that would allow dead ends, including theoretically sound algorithms for solving such MDPs, has been lacking. In this paper, we propose three new MDP classes that admit dead ends under increasingly weaker assumptions. We present Value Iteration-based as well as the more efficient heuristic search algorithms for optimally solving each class, and explore theoretical relationships between these classes. We also conduct a preliminary empirical study comparing the performance of our algorithms on different MDP classes, especially on scenarios with unavoidable dead ends.
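The simplest way to admit dead ends, treating a dead-end state as carrying a fixed finite penalty, can be sketched with plain value iteration on a toy MDP. The states, costs, transition probabilities, and penalty below are illustrative assumptions for the sketch, not an example from the paper:

```python
# A minimal value-iteration sketch for a goal-oriented MDP with a dead-end
# state, under a "finite penalty" treatment of dead ends: entering the dead
# end costs a fixed penalty instead of an infinite cost. The tiny MDP below
# is a made-up illustration.

PENALTY = 25.0  # assumed finite cost of ending up in the dead end

# transitions[state][action] = list of (probability, next_state, step_cost)
transitions = {
    "s0": {
        "safe":  [(1.0, "s1", 2.0)],                       # slow but sure
        "risky": [(0.8, "goal", 1.0), (0.2, "dead", 1.0)], # may hit the dead end
    },
    "s1": {
        "go": [(1.0, "goal", 1.0)],
    },
}
V = {"s0": 0.0, "s1": 0.0, "goal": 0.0, "dead": PENALTY}

for _ in range(100):  # Bellman backups to (near) convergence
    for s, acts in transitions.items():
        V[s] = min(
            sum(p * (c + V[ns]) for p, ns, c in outcomes)
            for outcomes in acts.values()
        )

# Greedy action at s0 under the converged values.
q = {
    a: sum(p * (c + V[ns]) for p, ns, c in outcomes)
    for a, outcomes in transitions["s0"].items()
}
best = min(q, key=q.get)
```

With this penalty the risky action costs 0.8·1 + 0.2·26 = 6.0 in expectation, so the safe route (total cost 3.0) is preferred; lowering the penalty would flip the decision, which is exactly why the choice of dead-end semantics matters.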
1210.4876
Active Imitation Learning via Reduction to I.I.D. Active Learning
cs.LG stat.ML
In standard passive imitation learning, the goal is to learn a target policy by passively observing full execution trajectories of it. Unfortunately, generating such trajectories can require substantial expert effort and be impractical in some cases. In this paper, we consider active imitation learning with the goal of reducing this effort by querying the expert about the desired action at individual states, which are selected based on answers to past queries and the learner's interactions with an environment simulator. We introduce a new approach based on reducing active imitation learning to i.i.d. active learning, which can leverage progress in the i.i.d. setting. Our first contribution is to analyze reductions for both non-stationary and stationary policies, showing that the label complexity (number of queries) of active imitation learning can be substantially less than passive learning. Our second contribution is to introduce a practical algorithm inspired by the reductions, which is shown to be highly effective in four test domains compared to a number of alternatives.
1210.4877
Incentive Decision Processes
cs.GT cs.MA
We consider Incentive Decision Processes, where a principal seeks to reduce its costs due to another agent's behavior, by offering incentives to the agent for alternate behavior. We focus on the case where a principal interacts with a greedy agent whose preferences are hidden and static. Though IDPs can be directly modeled as partially observable Markov decision processes (POMDPs), we show that it is possible to directly reduce or approximate the IDP as a polynomially-sized MDP: when this representation is approximate, we prove the resulting policy is boundedly-optimal for the original IDP. Our empirical simulations demonstrate the performance benefit of our algorithms over simpler approaches, and also demonstrate that our approximate representation results in a significantly faster algorithm whose performance is extremely close to the optimal policy for the original IDP.
1210.4878
Join-graph based cost-shifting schemes
cs.AI
We develop several algorithms taking advantage of two common approaches for bounding MPE queries in graphical models: minibucket elimination and message-passing updates for linear programming relaxations. The two methods are closely related, and each offers a useful perspective on the other; our hybrid approaches attempt to balance the advantages of each. We demonstrate the power of our hybrid algorithms through extensive empirical evaluation. Most notably, a Branch and Bound search guided by the heuristic function calculated by one of our new algorithms has recently won first place in the PASCAL2 inference challenge.
1210.4879
Causal Discovery of Linear Cyclic Models from Multiple Experimental Data Sets with Overlapping Variables
stat.ME cs.AI stat.ML
Much of scientific data is collected as randomized experiments intervening on some and observing other variables of interest. Quite often, a given phenomenon is investigated in several studies, and different sets of variables are involved in each study. In this article we consider the problem of integrating such knowledge, inferring as much as possible concerning the underlying causal structure with respect to the union of observed variables from such experimental or passive observational overlapping data sets. We do not assume acyclicity or joint causal sufficiency of the underlying data generating model, but we do restrict the causal relationships to be linear and use only second order statistics of the data. We derive conditions for full model identifiability in the most generic case, and provide novel techniques for incorporating an assumption of faithfulness to aid in inference. In each case we seek to establish what is and what is not determined by the data at hand.
1210.4880
Inferring Strategies from Limited Reconnaissance in Real-time Strategy Games
cs.AI cs.GT cs.LG
In typical real-time strategy (RTS) games, enemy units are visible only when they are within sight range of a friendly unit. Knowledge of an opponent's disposition is limited to what can be observed through scouting. Information is costly, since units dedicated to scouting are unavailable for other purposes, and the enemy will resist scouting attempts. It is important to infer as much as possible about the opponent's current and future strategy from the available observations. We present a dynamic Bayes net model of strategies in the RTS game Starcraft that combines a generative model of how strategies relate to observable quantities with a principled framework for incorporating evidence gained via scouting. We demonstrate the model's ability to infer unobserved aspects of the game from realistic observations.
1210.4881
Tightening Fractional Covering Upper Bounds on the Partition Function for High-Order Region Graphs
cs.LG stat.ML
In this paper we present a new approach for tightening upper bounds on the partition function. Our upper bounds are based on fractional covering bounds on the entropy function, and result in a concave program to compute these bounds and a convex program to tighten them. To solve these programs effectively for general region graphs we utilize the entropy barrier method, decomposing the original programs via their duals and solving them with a dual block optimization scheme. The entropy barrier method provides an elegant framework for generalizing the message-passing scheme to high-order region graphs, as well as for solving the block dual steps in closed form. This is key to computational practicality for large problems with thousands of regions.
1210.4882
A Maximum Likelihood Approach For Selecting Sets of Alternatives
cs.AI
We consider the problem of selecting a subset of alternatives given noisy evaluations of the relative strength of different alternatives. We wish to select a k-subset (for a given k) that provides a maximum likelihood estimate for one of several objectives, e.g., containing the strongest alternative. Although this problem is NP-hard, we show that when the noise level is sufficiently high, intuitive methods provide the optimal solution. We thus generalize classical results about singling out one alternative and identifying the hidden ranking of alternatives by strength. Extensive experiments show that our methods perform well in practical settings.
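One "intuitive method" of the kind described above is to score each alternative by its number of wins in noisy pairwise comparisons and keep the top k. The hidden strengths and the noise model in this sketch are assumptions for the demo, not the paper's setup:

```python
import random

random.seed(0)

# Toy noisy subset selection: score alternatives by pairwise wins, keep top k.
# Strengths and the comparison noise model are made up for illustration.

strengths = {"a": 0.9, "b": 0.7, "c": 0.5, "d": 0.3}  # hidden strengths

def noisy_compare(x, y):
    """Return the winner; the stronger item wins with probability ~ its share."""
    px = strengths[x] / (strengths[x] + strengths[y])
    return x if random.random() < px else y

wins = {x: 0 for x in strengths}
items = list(strengths)
for _ in range(2000):                  # many noisy evaluations
    x, y = random.sample(items, 2)
    wins[noisy_compare(x, y)] += 1

top2 = sorted(items, key=wins.get, reverse=True)[:2]  # the selected 2-subset
```

With enough comparisons the win counts separate the two strongest alternatives from the rest, even though each individual comparison is quite noisy.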
1210.4883
A Model-Based Approach to Rounding in Spectral Clustering
cs.LG cs.NA stat.ML
In spectral clustering, one defines a similarity matrix for a collection of data points, transforms the matrix to get the Laplacian matrix, finds the eigenvectors of the Laplacian matrix, and obtains a partition of the data using the leading eigenvectors. The last step is sometimes referred to as rounding, where one needs to decide how many leading eigenvectors to use, to determine the number of clusters, and to partition the data points. In this paper, we propose a novel method for rounding. The method differs from previous methods in three ways. First, we relax the assumption that the number of clusters equals the number of eigenvectors used. Second, when deciding the number of leading eigenvectors to use, we not only rely on information contained in the leading eigenvectors themselves, but also use subsequent eigenvectors. Third, our method is model-based and solves all three subproblems of rounding using a class of graphical models called latent tree models. We evaluate our method on both synthetic and real-world data. The results show that our method works correctly in the ideal case where between-cluster similarity is 0, and degrades gracefully as one moves away from the ideal case.
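The classical pipeline the abstract describes can be sketched end to end, with the simplest possible rounding (thresholding the second eigenvector at zero for a 2-way split). The synthetic points and the Gaussian-kernel bandwidth are made up for illustration; the paper's contribution is a model-based replacement for exactly this last rounding step:

```python
import numpy as np

# Similarity matrix -> Laplacian -> eigenvectors -> rounding, on two synthetic
# blobs. Data and bandwidth are assumptions for the demo.

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (10, 2)),   # cluster A around (0, 0)
                 rng.normal(3.0, 0.3, (10, 2))])  # cluster B around (3, 3)

# Gaussian similarity matrix with bandwidth 1.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# eigh returns eigenvalues in ascending order; column 1 is the Fiedler vector.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

labels = (fiedler > 0).astype(int)  # rounding: sign threshold
```

For two well-separated blobs the Fiedler vector takes opposite signs on the two groups, so the sign threshold recovers the partition; the hard design questions (how many eigenvectors, how many clusters) only arise in less ideal data.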
1210.4884
A Spectral Algorithm for Latent Junction Trees
cs.LG stat.ML
Latent variable models are an elegant framework for capturing rich probabilistic dependencies in many applications. However, current approaches typically parametrize these models using conditional probability tables, and learning relies predominantly on local search heuristics such as Expectation Maximization. Using tensor algebra, we propose an alternative parameterization of latent variable models (where the model structures are junction trees) that still allows for computation of marginals among observed variables. While this novel representation leads to a moderate increase in the number of parameters for junction trees of low treewidth, it lets us design a local-minimum-free algorithm for learning this parameterization. The main computation of the algorithm involves only tensor operations and SVDs which can be orders of magnitude faster than EM algorithms for large datasets. To our knowledge, this is the first provably consistent parameter learning technique for a large class of low-treewidth latent graphical models beyond trees. We demonstrate the advantages of our method on synthetic and real datasets.
1210.4885
A Case Study in Complexity Estimation: Towards Parallel Branch-and-Bound over Graphical Models
cs.AI
We study the problem of complexity estimation in the context of parallelizing an advanced Branch and Bound-type algorithm over graphical models. The algorithm's pruning power makes load balancing, one crucial element of every distributed system, very challenging. We propose using a statistical regression model to identify and tackle disproportionally complex parallel subproblems, the cause of load imbalance, ahead of time. The proposed model is evaluated and analyzed on various levels and shown to yield robust predictions. We then demonstrate its effectiveness for load balancing in practice.
1210.4886
Exploiting Structure in Cooperative Bayesian Games
cs.GT cs.AI
Cooperative Bayesian games (BGs) can model decision-making problems for teams of agents under imperfect information, but require space and computation time that is exponential in the number of agents. While agent independence has been used to mitigate these problems in perfect information settings, we propose a novel approach for BGs based on the observation that BGs additionally possess a different type of structure, which we call type independence. We propose a factor graph representation that captures both forms of independence and present a theoretical analysis showing that non-serial dynamic programming cannot effectively exploit type independence, while Max-Sum can. Experimental results demonstrate that our approach can tackle cooperative Bayesian games of unprecedented size.
1210.4887
Hilbert Space Embeddings of POMDPs
cs.LG cs.AI stat.ML
A nonparametric approach for policy learning for POMDPs is proposed. The approach represents distributions over the states, observations, and actions as embeddings in feature spaces, which are reproducing kernel Hilbert spaces. Distributions over states given the observations are obtained by applying the kernel Bayes' rule to these distribution embeddings. Policies and value functions are defined on the feature space over states, which leads to a feature space expression for the Bellman equation. Value iteration may then be used to estimate the optimal value function and associated policy. Experimental results confirm that the correct policy is learned using the feature space representation.
1210.4888
Local Structure Discovery in Bayesian Networks
cs.LG cs.AI stat.ML
Learning a Bayesian network structure from data is an NP-hard problem and thus exact algorithms are feasible only for small data sets. Therefore, network structures for larger networks are usually learned with various heuristics. Another approach to scaling up the structure learning is local learning. In local learning, the modeler has one or more target variables that are of special interest; he wants to learn the structure near the target variables and is not interested in the rest of the variables. In this paper, we present a score-based local learning algorithm called SLL. We conjecture that our algorithm is theoretically sound in the sense that it is optimal in the limit of large sample size. Empirical results suggest that SLL is competitive when compared to the constraint-based HITON algorithm. We also study the prospects of constructing the network structure for the whole node set based on local results by presenting two algorithms and comparing them to several heuristics.
1210.4889
Learning STRIPS Operators from Noisy and Incomplete Observations
cs.LG cs.AI stat.ML
Agents learning to act autonomously in real-world domains must acquire a model of the dynamics of the domain in which they operate. Learning domain dynamics can be challenging, especially where an agent only has partial access to the world state, and/or noisy external sensors. Even in standard STRIPS domains, existing approaches cannot learn from noisy, incomplete observations typical of real-world domains. We propose a method which learns STRIPS action models in such domains, by decomposing the problem into first learning a transition function between states in the form of a set of classifiers, and then deriving explicit STRIPS rules from the classifiers' parameters. We evaluate our approach on simulated standard planning domains from the International Planning Competition, and show that it learns useful domain descriptions from noisy, incomplete observations.
1210.4890
The Complexity of Approximately Solving Influence Diagrams
cs.AI
Influence diagrams allow for intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the tree-width and the cardinality of the variables are bounded the problem admits a fully polynomial-time approximation scheme.
1210.4891
Hokusai - Sketching Streams in Real Time
cs.DB cs.DS
We describe Hokusai, a real-time system which is able to capture frequency information for streams of arbitrary sequences of symbols. The algorithm uses the CountMin sketch as its basis and exploits the fact that sketching is linear. It provides real-time statistics of arbitrary events, e.g. streams of queries as a function of time. We use a factorizing approximation to provide point estimates at arbitrary (time, item) combinations. Queries can be answered in constant time.
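The two CountMin properties the abstract leans on, point estimates that never under-count and linearity of the sketch, can be shown in a few lines. The width and depth below are arbitrary demo values, and this toy is not Hokusai's time-aggregation scheme itself:

```python
import hashlib

# A toy Count-Min sketch: point estimates are never under-counts, and the
# sketch of a merged stream equals the elementwise sum of per-stream sketches.

WIDTH, DEPTH = 64, 4  # arbitrary demo sizes

def _bucket(item, row):
    h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
    return int(h, 16) % WIDTH

def empty():
    return [[0] * WIDTH for _ in range(DEPTH)]

def add(sketch, item, count=1):
    for row in range(DEPTH):
        sketch[row][_bucket(item, row)] += count

def estimate(sketch, item):
    # Min over rows: hash collisions only inflate counters, never shrink them.
    return min(sketch[row][_bucket(item, row)] for row in range(DEPTH))

def merge(a, b):
    # Linearity: sketch(stream1 + stream2) == sketch(stream1) + sketch(stream2).
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

s1, s2 = empty(), empty()
for _ in range(5):
    add(s1, "query")
for _ in range(3):
    add(s2, "query")
merged = merge(s1, s2)
```

Linearity is what makes it cheap to aggregate per-interval sketches into coarser time scales, which is the fact Hokusai exploits for (time, item) estimates.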
1210.4892
Unsupervised Joint Alignment and Clustering using Bayesian Nonparametrics
cs.LG stat.ML
Joint alignment of a collection of functions is the process of independently transforming the functions so that they appear more similar to each other. Typically, such unsupervised alignment algorithms fail when presented with complex data sets arising from multiple modalities or make restrictive assumptions about the form of the functions or transformations, limiting their generality. We present a transformed Bayesian infinite mixture model that can simultaneously align and cluster a data set. Our model and associated learning scheme offer two key advantages: the optimal number of clusters is determined in a data-driven fashion through the use of a Dirichlet process prior, and it can accommodate any transformation function parameterized by a continuous parameter vector. As a result, it is applicable to a wide range of data types, and transformation functions. We present positive results on synthetic two-dimensional data, on a set of one-dimensional curves, and on various image data sets, showing large improvements over previous work. We discuss several variations of the model and conclude with directions for future work.
1210.4893
Sparse Q-learning with Mirror Descent
cs.LG stat.ML
This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in high-dimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both the dual space and primal space, which are linked together using a Legendre transform. Mirror descent can be viewed as a proximal algorithm where the distance generating function used is a Bregman divergence. A new class of proximal-gradient based temporal-difference (TD) methods is presented based on different Bregman divergences, which are more powerful than regular TD learning. Examples of Bregman divergences that are studied include p-norm functions, and Mahalanobis distance based on the covariance of sample gradients. A new family of sparse mirror-descent reinforcement learning methods is proposed, which is able to find sparse fixed points of an l1-regularized Bellman equation at significantly less computational cost than previous methods based on second-order matrix methods. An experimental study of mirror-descent reinforcement learning is presented using discrete and continuous Markov decision processes.
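The dual/primal update the abstract describes is easy to see with the negative-entropy mirror map, whose Bregman divergence is the KL divergence: the gradient step happens in the log (dual) space and is mapped back to the simplex, which reduces to the exponentiated-gradient update. The linear objective and step size are made-up demo choices, not anything from the paper's TD setting:

```python
import math

# Mirror descent with the negative-entropy mirror map on the probability
# simplex: minimize <costs, w>. Each iteration does log(w) - step * grad in
# the dual space, then maps back with exp and renormalizes.

costs = [3.0, 1.0, 2.0]        # made-up linear objective
w = [1.0 / 3.0] * 3            # start at the uniform distribution
step = 0.5

for _ in range(200):
    w = [wi * math.exp(-step * ci) for wi, ci in zip(w, costs)]
    z = sum(w)
    w = [wi / z for wi in w]   # project back onto the simplex
```

The iterate concentrates on the coordinate with the smallest cost, while every iterate stays a valid distribution, which an ordinary (primal-space) gradient step on the simplex would not guarantee.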
1210.4894
Heuristic Ranking in Tightly Coupled Probabilistic Description Logics
cs.AI cs.LO
The Semantic Web effort has steadily been gaining traction in recent years. In particular, Web search companies are realizing that their products need to evolve towards having richer semantic search capabilities. Description logics (DLs) have been adopted as the formal underpinnings for Semantic Web languages used in describing ontologies. Reasoning under uncertainty has recently taken a leading role in this arena, given the nature of data found on the Web. In this paper, we present a probabilistic extension of the DL EL++ (which underlies the OWL2 EL profile) using Markov logic networks (MLNs) as probabilistic semantics. This extension is tightly coupled, meaning that probabilistic annotations in formulas can refer to objects in the ontology. We show that, even though the tightly coupled nature of our language means that many basic operations are data-intractable, we can leverage a sublanguage of MLNs that allows ranking the atomic consequences of an ontology relative to their probability values (called ranking queries) even when these values are not fully computed. We present an anytime algorithm to answer ranking queries, and provide an upper bound on the error that it incurs, as well as a criterion to decide when results are guaranteed to be correct.
1210.4896
Closed-Form Learning of Markov Networks from Dependency Networks
cs.LG cs.AI stat.ML
Markov networks (MNs) are a powerful way to compactly represent a joint probability distribution, but most MN structure learning methods are very slow, due to the high cost of evaluating candidate structures. Dependency networks (DNs) represent a probability distribution as a set of conditional probability distributions. DNs are very fast to learn, but the conditional distributions may be inconsistent with each other and few inference algorithms support DNs. In this paper, we present a closed-form method for converting a DN into an MN, allowing us to enjoy both the efficiency of DN learning and the convenience of the MN representation. When the DN is consistent, this conversion is exact. For inconsistent DNs, we present averaging methods that significantly improve the approximation. In experiments on 12 standard datasets, our methods are orders of magnitude faster than, and often more accurate than, combining conditional distributions using weight learning.
1210.4897
Belief Propagation for Structured Decision Making
cs.AI
Variational inference algorithms such as belief propagation have had tremendous impact on our ability to learn and use graphical models, and give many insights for developing or understanding exact and approximate inference. However, variational approaches have not been widely adopted for decision making in graphical models, often formulated through influence diagrams and including both centralized and decentralized (or multi-agent) decisions. In this work, we present a general variational framework for solving structured cooperative decision-making problems, use it to propose several belief propagation-like algorithms, and analyze them both theoretically and empirically.
1210.4898
Value Function Approximation in Noisy Environments Using Locally Smoothed Regularized Approximate Linear Programs
cs.LG stat.ML
Recently, Petrik et al. demonstrated that L1-Regularized Approximate Linear Programming (RALP) could produce value functions and policies which compared favorably to established linear value function approximation techniques like LSPI. RALP's success primarily stems from the ability to solve the feature selection and value function approximation steps simultaneously. RALP's performance guarantees become looser if sampled next states are used. For very noisy domains, RALP requires an accurate model rather than samples, which can be unrealistic in some practical scenarios. In this paper, we demonstrate this weakness, and then introduce Locally Smoothed L1-Regularized Approximate Linear Programming (LS-RALP). We demonstrate that LS-RALP mitigates inaccuracies stemming from noise even without an accurate model. We show that, given some smoothness assumptions, as the number of samples increases, error from noise approaches zero, and provide experimental examples of LS-RALP's success on common reinforcement learning benchmark problems.
1210.4899
Fast Exact Inference for Recursive Cardinality Models
cs.LG stat.ML
Cardinality potentials are a generally useful class of high order potential that affect probabilities based on how many of D binary variables are active. Maximum a posteriori (MAP) inference for cardinality potential models is well-understood, with efficient computations taking O(D log D) time. Yet efficient marginalization and sampling have not been addressed as thoroughly in the machine learning community. We show that there exists a simple algorithm for computing marginal probabilities and drawing exact joint samples that runs in O(D log^2 D) time, and we show how to frame the algorithm as efficient belief propagation in a low order tree-structured model that includes additional auxiliary variables. We then develop a new, more general class of models, termed Recursive Cardinality models, which take advantage of this efficiency. Finally, we show how to do efficient exact inference in models composed of a tree structure and a cardinality potential. We explore the expressive power of Recursive Cardinality models and empirically demonstrate their utility.
1210.4900
Probability and Asset Updating using Bayesian Networks for Combinatorial Prediction Markets
cs.AI q-fin.TR
A market-maker-based prediction market lets forecasters aggregate information by editing a consensus probability distribution either directly or by trading securities that pay off contingent on an event of interest. Combinatorial prediction markets allow trading on any event that can be specified as a combination of a base set of events. However, explicitly representing the full joint distribution is infeasible for markets with more than a few base events. A factored representation such as a Bayesian network (BN) can achieve tractable computation for problems with many related variables. Standard BN inference algorithms, such as the junction tree algorithm, can be used to update a representation of the entire joint distribution given a change to any local conditional probability. However, in order to let traders reuse assets from prior trades while never allowing assets to become negative, a BN based prediction market also needs to update a representation of each user's assets and find the conditional state in which a user has minimum assets. Users also find it useful to see their expected assets given an edit outcome. We show how to generalize the junction tree algorithm to perform all these computations.
1210.4901
An Approximate Solution Method for Large Risk-Averse Markov Decision Processes
q-fin.PM cs.AI cs.GT
Stochastic domains often involve risk-averse decision makers. While recent work has focused on how to model risk in Markov decision processes using risk measures, it has not addressed the problem of solving large risk-averse formulations. In this paper, we propose and analyze a new method for solving large risk-averse MDPs with hybrid continuous-discrete state spaces and continuous action spaces. The proposed method iteratively improves a bound on the value function using a linearity structure of the MDP. We demonstrate the utility and properties of the method on a portfolio optimization problem.
1210.4902
Efficiently Searching for Frustrated Cycles in MAP Inference
cs.DS cs.LG stat.ML
Dual decomposition provides a tractable framework for designing algorithms for finding the most probable (MAP) configuration in graphical models. However, for many real-world inference problems, the typical decomposition has a large integrality gap, due to frustrated cycles. One way to tighten the relaxation is to introduce additional constraints that explicitly enforce cycle consistency. Earlier work showed that cluster-pursuit algorithms, which iteratively introduce cycle and other higher-order consistency constraints, allow one to exactly solve many hard inference problems. However, these algorithms explicitly enumerate a candidate set of clusters, limiting them to triplets or other short cycles. We solve the search problem for cycle constraints, giving a nearly linear time algorithm for finding the most frustrated cycle of arbitrary length. We show how to use this search algorithm together with the dual decomposition framework and cluster-pursuit. The new algorithm exactly solves MAP inference problems arising from relational classification and stereo vision.
1210.4903
Detecting Change-Points in Time Series by Maximum Mean Discrepancy of Ordinal Pattern Distributions
stat.ME cs.CE
As a new method for detecting change-points in high-resolution time series, we apply Maximum Mean Discrepancy to the distributions of ordinal patterns in different parts of a time series. The main advantage of this approach is its computational simplicity and robustness with respect to (non-linear) monotonic transformations, which makes it particularly well-suited for the analysis of long biophysical time series where the exact calibration of measurement devices is unknown or varies with time. We establish consistency of the method and evaluate its performance in simulation studies. Furthermore, we demonstrate the application to the analysis of electroencephalography (EEG) and electrocardiography (ECG) recordings.
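The two ingredients of this method, ordinal patterns and a discrepancy between their distributions, can be sketched directly. Under a delta kernel on the discrete pattern space, the (biased) squared MMD reduces to the squared L2 distance between the two pattern histograms; the series, windows, and pattern order below are made up for the demo:

```python
import random

# Summarize two parts of a series by their order-3 ordinal pattern
# distributions, then compare them with a delta-kernel MMD (= squared L2
# distance between the histograms).

def ordinal_patterns(series, order=3):
    """Map each length-`order` window to the permutation that sorts it."""
    pats = []
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pats.append(tuple(sorted(range(order), key=lambda j: window[j])))
    return pats

def pattern_mmd(xs, ys):
    """Squared MMD with a delta kernel on the discrete pattern space."""
    keys = set(xs) | set(ys)
    px = {k: xs.count(k) / len(xs) for k in keys}
    py = {k: ys.count(k) / len(ys) for k in keys}
    return sum((px[k] - py[k]) ** 2 for k in keys)

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(300)]            # before the change
trend = [0.5 * t + random.gauss(0, 1) for t in range(300)]  # after the change

across = pattern_mmd(ordinal_patterns(noise), ordinal_patterns(trend))
within = pattern_mmd(ordinal_patterns(noise[:150]), ordinal_patterns(noise[150:]))
```

The discrepancy across the change-point comes out much larger than between two halves of the stationary segment, and because ordinal patterns only depend on the ordering of values, the statistic is unchanged by any monotonic recalibration of the measurements.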
1210.4904
Spectrum Identification using a Dynamic Bayesian Network Model of Tandem Mass Spectra
cs.CE q-bio.QM
Shotgun proteomics is a high-throughput technology used to identify unknown proteins in a complex mixture. At the heart of this process is a prediction task, the spectrum identification problem, in which each fragmentation spectrum produced by a shotgun proteomics experiment must be mapped to the peptide (protein subsequence) which generated the spectrum. We propose a new algorithm for spectrum identification, based on dynamic Bayesian networks, which significantly outperforms the de-facto standard tools for this task: SEQUEST and Mascot.
1210.4905
Latent Composite Likelihood Learning for the Structured Canonical Correlation Model
stat.ML cs.LG
Latent variable models are used to estimate quantities of interest that are observable only up to some measurement error. In many studies, such variables are known but not precisely quantifiable (such as "job satisfaction" in social sciences and marketing, "analytical ability" in educational testing, or "inflation" in economics). This leads to the development of measurement instruments, such as surveys, tests and price indexes, that record noisy indirect evidence for such unobserved variables. In such problems, there are postulated latent variables and a given measurement model. At the same time, other unanticipated latent variables can add further unmeasured confounding to the observed variables. The problem is how to deal with these unanticipated latent variables. In this paper, we provide a method loosely inspired by canonical correlation that makes use of background information concerning the "known" latent variables. Given a partially specified structure, it provides a structure learning approach to detect "unknown unknowns," the confounding effect of potentially infinitely many other latent variables. This is done without explicitly modeling such extra latent factors. Because of the special structure of the problem, we are able to exploit a new variation of composite likelihood fitting to efficiently learn this structure. Validation is provided with experiments on synthetic data and the analysis of a large survey of over 100,000 staff members of the National Health Service of the United Kingdom.
1210.4906
Efficient MRF Energy Minimization via Adaptive Diminishing Smoothing
cs.AI cs.DS
We consider the linear programming relaxation of an energy minimization problem for Markov Random Fields. The dual objective of this problem can be treated as a concave and unconstrained, but non-smooth function. The idea of smoothing the objective prior to optimization was recently proposed in a series of papers. Some of them suggested the idea to decrease the amount of smoothing (so called temperature) while getting closer to the optimum. However, no theoretical substantiation was provided. We propose an adaptive diminishing smoothing algorithm based on the duality gap between relaxed primal and dual objectives and demonstrate the efficiency of our approach with a smoothed version of the Sequential Tree-Reweighted Message Passing (TRW-S) algorithm. The strategy is applicable to other algorithms as well, avoids ad hoc tuning of the smoothing during iterations, and provably guarantees convergence to the optimum.
1210.4907
From imprecise probability assessments to conditional probabilities with quasi additive classes of conditioning events
cs.AI math.PR
In this paper, starting from a generalized coherent (i.e. avoiding uniform loss) interval-valued probability assessment on a finite family of conditional events, we construct conditional probabilities with quasi additive classes of conditioning events which are consistent with the given initial assessment. Quasi additivity assures coherence for the obtained conditional probabilities. In order to reach our goal we define a finite sequence of conditional probabilities by exploiting some theoretical results on g-coherence. In particular, we use solutions of a finite sequence of linear systems.
1210.4909
Active Learning with Distributional Estimates
cs.LG stat.ML
Active Learning (AL) is increasingly important in a broad range of applications. Two main AL principles to obtain accurate classification with few labeled data are refinement of the current decision boundary and exploration of poorly sampled regions. In this paper we derive a novel AL scheme that balances these two principles in a natural way. In contrast to many AL strategies, which are based on an estimated class conditional probability p̂(y|x), a key component of our approach is to view this quantity as a random variable, hence explicitly considering the uncertainty in its estimated value. Our main contribution is a novel mathematical framework for uncertainty-based AL, and a corresponding AL scheme, where the uncertainty in p̂(y|x) is modeled by a second-order distribution. On the practical side, we show how to approximate such second-order distributions for kernel density classification. Finally, we find that over a large number of UCI, USPS and Caltech4 datasets, our AL scheme achieves significantly better learning curves than popular AL methods such as uncertainty sampling and error reduction sampling, when all use the same kernel density classifier.
1210.4910
New Advances and Theoretical Insights into EDML
cs.AI cs.LG stat.ML
EDML is a recently proposed algorithm for learning MAP parameters in Bayesian networks. In this paper, we present a number of new advances and insights on the EDML algorithm. First, we provide the multivalued extension of EDML, originally proposed for Bayesian networks over binary variables. Next, we identify a simplified characterization of EDML that further implies a simple fixed-point algorithm for the convex optimization problem that underlies it. This characterization further reveals a connection between EDML and EM: a fixed point of EDML is a fixed point of EM, and vice versa. We thus also identify a new characterization of EM fixed points, but in the semantics of EDML. Finally, we propose a hybrid EDML/EM algorithm that takes advantage of the improved empirical convergence behavior of EDML, while maintaining the monotonic improvement property of EM.
1210.4911
Multi-objective Influence Diagrams
cs.AI
We describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p, and are typically only partially ordered. These can still be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ε-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user tradeoffs, which also greatly improves the efficiency.
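The maximal-value computation at the heart of such a variable elimination step can be sketched as follows. This is a generic illustration of the Pareto ordering on utility vectors, not the paper's algorithm, and the ε-covering approximation is omitted:

```python
def pareto_maximal(points):
    """Return the maximal (non-dominated) utility vectors in R^p.

    A vector q dominates p under the Pareto ordering if q >= p
    componentwise and q != p.
    """
    return [p for p in points
            if not any(q != p and all(qi >= pi for qi, pi in zip(q, p))
                       for q in points)]

# Four candidate expected-utility vectors with p = 2 objectives.
utilities = [(1, 2), (2, 1), (0, 0), (1, 1)]
print(pareto_maximal(utilities))  # (0, 0) and (1, 1) are dominated
```

Even in this tiny example half the candidates drop out; the paper's point is that in realistic diagrams the surviving maximal set can still be prohibitively large, motivating the ε-covering approximation.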
1210.4912
FHHOP: A Factored Hybrid Heuristic Online Planning Algorithm for Large POMDPs
cs.AI
Planning in partially observable Markov decision processes (POMDPs) remains a challenging topic in the artificial intelligence community, in spite of recent impressive progress in approximation techniques. Previous research has indicated that online planning approaches are promising in handling large-scale POMDP domains efficiently as they make decisions "on demand" instead of proactively for the entire state space. We present a Factored Hybrid Heuristic Online Planning (FHHOP) algorithm for large POMDPs. FHHOP gets its power by combining a novel hybrid heuristic search strategy with a recently developed factored state representation. On several benchmark problems, FHHOP substantially outperformed state-of-the-art online heuristic search approaches in terms of both scalability and quality.
1210.4913
An Improved Admissible Heuristic for Learning Optimal Bayesian Networks
cs.AI cs.LG stat.ML
Recently two search algorithms, A* and breadth-first branch and bound (BFBnB), were developed based on a simple admissible heuristic for learning Bayesian network structures that optimize a scoring function. The heuristic represents a relaxation of the learning problem such that each variable chooses optimal parents independently. As a result, the heuristic may contain many directed cycles and result in a loose bound. This paper introduces an improved admissible heuristic that tries to avoid directed cycles within small groups of variables. A sparse representation is also introduced to store only the unique optimal parent choices. Empirical results show that the new techniques significantly improved the efficiency and scalability of A* and BFBnB on most of the datasets tested in this paper.
1210.4914
Latent Structured Ranking
cs.LG cs.IR stat.ML
Many latent (factorized) models have been proposed for recommendation tasks like collaborative filtering and for ranking tasks like document or image retrieval and annotation. Common to all those methods is that during inference the items are scored independently by their similarity to the query in the latent embedding space. The structure of the ranked list (i.e. considering the set of items returned as a whole) is not taken into account. This can be a problem because the set of top predictions can be either too diverse (containing results that contradict each other) or not diverse enough. In this paper we introduce a method for learning latent structured rankings that improves over existing methods by providing the right blend of predictions at the top of the ranked list. Particular emphasis is put on making this method scalable. Empirical results on large scale image annotation and music recommendation tasks show improvements over existing approaches.
1210.4916
A Cluster-Cumulant Expansion at the Fixed Points of Belief Propagation
cs.AI
We introduce a new cluster-cumulant expansion (CCE) based on the fixed points of iterative belief propagation (IBP). This expansion is similar in spirit to the loop-series (LS) recently introduced in [1]. However, in contrast to the latter, the CCE enjoys the following important qualities: 1) it is defined for arbitrary state spaces, 2) it is easily extended to fixed points of generalized belief propagation (GBP), 3) disconnected groups of variables will not contribute to the CCE, and 4) the accuracy of the expansion empirically improves upon that of the LS. The CCE is based on the same Möbius transform as the Kikuchi approximation, but unlike GBP does not require storing the beliefs of the GBP-clusters nor does it suffer from convergence issues during belief updating.
1210.4917
Fast Graph Construction Using Auction Algorithm
cs.LG stat.ML
In practical machine learning systems, graph based data representation has been widely used in various learning paradigms, ranging from unsupervised clustering to supervised classification. Besides those applications with natural graph or network structure data, such as social network analysis and relational learning, many other applications often involve a critical step in converting data vectors to an adjacency graph. In particular, a sparse subgraph extracted from the original graph is often required due to both theoretical and practical needs. Previous studies clearly show that the performance of different learning algorithms, e.g., clustering and classification, benefits from such sparse subgraphs with balanced node connectivity. However, the existing graph construction methods are either computationally expensive or yield unsatisfactory performance. In this paper, we utilize a scalable method called the auction algorithm and its parallel extension to recover a sparse yet nearly balanced subgraph with significantly reduced computational cost. Empirical study and comparison with state-of-the-art approaches clearly demonstrate the superiority of the proposed method in both efficiency and accuracy.
1210.4918
Dynamic Teaching in Sequential Decision Making Environments
cs.LG cs.AI stat.ML
We describe theoretical bounds and a practical algorithm for teaching a model by demonstration in a sequential decision making environment. Unlike previous efforts that have optimized learners that watch a teacher demonstrate a static policy, we focus on the teacher as a decision maker who can dynamically choose different policies to teach different parts of the environment. We develop several teaching frameworks based on previously defined supervised protocols, such as Teaching Dimension, extending them to handle noise and sequences of inputs encountered in an MDP. We provide theoretical bounds on the learnability of several important model classes in this setting and suggest a practical algorithm for dynamic teaching.
1210.4919
Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought Stressed Plants
cs.LG cs.CE stat.ML
Understanding the adaptation process of plants to drought stress is essential in improving management practices, breeding strategies as well as engineering viable crops for a sustainable agriculture in the coming decades. Hyper-spectral imaging provides a particularly promising approach to gain such understanding since it allows the non-destructive discovery of spectral characteristics of plants governed primarily by scattering and absorption characteristics of the leaf internal structure and biochemical constituents. Several drought stress indices have been derived using hyper-spectral imaging. However, they are typically based on only a few hyper-spectral images, rely on interpretations of experts, and consider only a few wavelengths. In this study, we present the first data-driven approach to discovering spectral drought stress indices, treating it as an unsupervised labeling problem at massive scale. To make use of short range dependencies of spectral wavelengths, we develop an online variational Bayes algorithm for latent Dirichlet allocation with a convolved Dirichlet regularizer. This approach scales to massive datasets and, hence, provides a more objective complement to plant physiological practices. The spectral topics found conform to plant physiological knowledge and can be computed in a fraction of the time compared to existing LDA approaches.
1210.4920
Factorized Multi-Modal Topic Model
cs.LG cs.IR stat.ML
Multi-modal data collections, such as corpora of paired images and text snippets, require analysis methods beyond single-view component and topic models. For continuous observations the current dominant approach is based on extensions of canonical correlation analysis, factorizing the variation into components shared by the different modalities and those private to each of them. For count data, multiple variants of topic models attempting to tie the modalities together have been presented. All of these, however, lack the ability to learn components private to one modality, and consequently will try to force dependencies even between minimally correlating modalities. In this work we combine the two approaches by presenting a novel HDP-based topic model that automatically learns both shared and private topics. The model is shown to be especially useful for querying the contents of one domain given samples of the other.
1210.4981
Foundations and Tools for End-User Architecting
cs.SE cs.HC cs.SI
Within an increasing number of domains an important emerging need is the ability for technically naive users to compose computational elements into novel configurations. Examples include astronomers who create new analysis pipelines to process telescopic data, intelligence analysts who must process diverse sources of unstructured text to discover socio-technical trends, and medical researchers who have to process brain image data in new ways to understand disease pathways. Creating such compositions today typically requires low-level technical expertise, limiting the use of computational methods and increasing the cost of using them. In this paper we describe an approach - which we term end-user architecting - that exploits the similarity between such compositional activities and those of software architects. Drawing on the rich heritage of software architecture languages, methods, and tools, we show how those techniques can be adapted to support end users in composing rich computational systems through domain-specific compositional paradigms and component repositories, without requiring that they have knowledge of the low-level implementation details of the components or the compositional infrastructure. Further, we outline a set of open research challenges that the area of end-user architecting raises.
1210.5031
Semi-Definite Programming Relaxation for Non-Line-of-Sight Localization
cs.IT cs.MA cs.NI math.IT
We consider the problem of estimating the locations of a set of points in a k-dimensional Euclidean space given a subset of the pairwise distance measurements between the points. We focus on the case when some fraction of these measurements can be arbitrarily corrupted by large additive noise. Given that the problem is highly non-convex, we propose a simple semidefinite programming relaxation that can be efficiently solved using standard algorithms. We define a notion of non-contractibility and show that the relaxation gives the exact point locations when the underlying graph is non-contractible. The performance of the algorithm is evaluated on an experimental data set obtained from a network of 44 nodes in an indoor environment and is shown to be robust to non-line-of-sight errors.
1210.5034
Optimal Computational Trade-Off of Inexact Proximal Methods
cs.LG cs.CV cs.NA
In this paper, we investigate the trade-off between convergence rate and computational cost when minimizing a composite functional with proximal-gradient methods, which are popular optimisation tools in machine learning. We consider the case when the proximity operator is computed via an iterative procedure, which provides an approximation of the exact proximity operator. In that case, we obtain algorithms with two nested loops. We show that the strategy that minimizes the computational cost to reach a solution with a desired accuracy in finite time is to set the number of inner iterations to a constant, which differs from the strategy indicated by a convergence rate analysis. In the process, we also present a new procedure called SIP (Speedy Inexact Proximal-gradient algorithm) that is both computationally efficient and easy to implement. Our numerical experiments confirm the theoretical findings and suggest that SIP can be a very competitive alternative to the standard procedure.
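The two-nested-loop structure, with the inner solver run for a constant number of iterations, can be sketched on a toy scalar composite problem. This is an illustrative reconstruction, not the paper's SIP algorithm; the test problem and the Newton-based inner solver are our own choices:

```python
def inexact_prox_gradient(grad_f, inner_step, x0, step, n_outer, n_inner):
    """Proximal-gradient method whose prox is only approximated.

    The inner solver runs for a *constant* number of iterations --
    the strategy the paper argues minimizes total computational cost.
    """
    x = x0
    for _ in range(n_outer):
        v = x - step * grad_f(x)        # forward (gradient) step
        z = v                           # initialise the inner solver
        for _ in range(n_inner):        # approximate backward (prox) step
            z = inner_step(z, v, step)
        x = z
    return x

# Toy composite problem: F(x) = 0.5*(x - 3)**2 + 0.1*x**4.
# The prox of g(x) = 0.1*x**4 solves z - v + 0.4*step*z**3 = 0;
# one Newton update on that equation serves as the inner iteration.
grad_f = lambda x: x - 3.0

def newton_step(z, v, step):
    residual = z - v + 0.4 * step * z ** 3
    return z - residual / (1.0 + 1.2 * step * z ** 2)

x = inexact_prox_gradient(grad_f, newton_step, x0=0.0,
                          step=0.5, n_outer=50, n_inner=3)
print(f"approximate minimizer: {x:.4f}")  # near the stationary point ~1.54
```

Raising `n_inner` makes each outer step more accurate but more expensive, which is exactly the cost/accuracy trade-off the paper analyzes.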
1210.5035
A Comparative Study of State Transition Algorithm with Harmony Search and Artificial Bee Colony
math.OC cs.IT math.IT math.PR
We focus on a comparative study of three recently developed nature-inspired optimization algorithms: state transition algorithm, harmony search and artificial bee colony. Their core mechanisms are introduced and their similarities and differences are described. Then, a suite of 27 well-known benchmark problems is used to investigate the performance of these algorithms, and finally we discuss their general applicability with respect to the structure of optimization problems.
1210.5041
Navigation domain representation for interactive multiview imaging
cs.MM cs.CV
Enabling users to interactively navigate through different viewpoints of a static scene is a new interesting functionality in 3D streaming systems. While it opens exciting perspectives towards rich multimedia applications, it requires the design of novel representations and coding techniques in order to solve the new challenges imposed by interactive navigation. Interactivity clearly brings new design constraints: the encoder is unaware of the exact decoding process, while the decoder has to reconstruct information from incomplete subsets of data, since the server can generally not transmit images for all possible viewpoints due to resource constraints. In this paper, we propose a novel multiview data representation that makes it possible to satisfy bandwidth and storage constraints in an interactive multiview streaming system. In particular, we partition the multiview navigation domain into segments, each of which is described by a reference image and some auxiliary information. The auxiliary information enables the client to recreate any viewpoint in the navigation segment via view synthesis. The decoder is then able to navigate freely in the segment without further data requests to the server; it requests additional data only when it moves to a different segment. We discuss the benefits of this novel representation in interactive navigation systems and further propose a method to optimize the partitioning of the navigation domain into independent segments, under bandwidth and storage constraints. Experimental results confirm the potential of the proposed representation; namely, our system leads to similar compression performance as classical inter-view coding, while it provides the high level of flexibility that is required for interactive streaming. Hence, our new framework represents a promising solution for 3D data representation in novel interactive multimedia services.
1210.5058
Properties of Persistent Mutual Information and Emergence
math-ph cs.IT math.IT math.MP
The persistent mutual information (PMI) is a complexity measure for stochastic processes. It is related to well-known complexity measures like excess entropy or statistical complexity. Essentially it is a variation of the excess entropy, so that it can be interpreted as a specific measure of system internal memory. The PMI was first introduced in 2010 by Ball, Diakonova and MacKay as a measure for (strong) emergence. In this paper we define the PMI mathematically and investigate its relation to excess entropy and statistical complexity. In particular we prove that the excess entropy is an upper bound of the PMI. Furthermore we show some properties of the PMI and calculate it explicitly for some example processes. We also discuss to what extent it is a measure for emergence and compare it with alternative approaches used to formalize emergence.
1210.5117
Distributed and Autonomous Resource and Power Allocation for Wireless Networks
cs.IT cs.NI math.IT
In this paper, a distributed and autonomous technique for resource and power allocation in orthogonal frequency division multiple access (OFDMA) femto-cellular networks is presented. Here, resource blocks (RBs) and their corresponding transmit powers are assigned to the user(s) in each cell individually, without explicit coordination between femto base stations (FBSs). The "allocatability" of each resource is determined utilising only locally available information on the following quantities: the required rate of the user; the quality (i.e., strength) of the desired signal; the frequency-selective fading on each RB; and the level of interference incident on each RB. Using a fuzzy logic system, the time-averaged values of each of these inputs are combined to determine which RBs are most suitable to be allocated in a particular cell, i.e., which resources can be allocated such that the user requested rate(s) in that cell are satisfied. Furthermore, link adaptation (LA) is included, enabling users to adjust to varying channel conditions. A comprehensive study of this system in a femto-cell environment is performed, yielding system performance improvements in terms of throughput, energy efficiency and coverage over state-of-the-art ICIC techniques.
1210.5118
Creating a level playing field for all symbols in a discretization
cs.DS cs.AI
In time series analysis research there is a strong interest in discrete representations of real-valued data streams. One approach that emerged over a decade ago and is still considered state-of-the-art is the Symbolic Aggregate Approximation (SAX) algorithm. This discretization algorithm was the first symbolic approach that mapped a real-valued time series to a symbolic representation guaranteed to lower-bound Euclidean distance. This paper concerns the SAX assumption that the data are highly Gaussian and the use of the standard normal curve to choose partitions to discretize the data. Generally, and certainly in its canonical form, the SAX approach chooses partitions on the standard normal curve that would produce an equal probability for each symbol in a finite alphabet to occur. This procedure is generally valid, as a time series is normalized before the rest of the SAX algorithm is applied. However, there exists a caveat to this assumption of equi-probability due to the intermediate step of Piecewise Aggregate Approximation (PAA). What we show in this paper is that when PAA is applied the distribution of the data is indeed altered, resulting in a shrinking standard deviation that is proportional to the number of points used to create a segment of the PAA representation and to the degree of auto-correlation within the series. Data that exhibit statistically significant auto-correlation are less affected by this shrinking distribution. As the standard deviation of the data contracts, the mean remains the same; however, the distribution is no longer standard normal, and therefore partitions based on the standard normal curve no longer yield equal symbol probabilities.
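The shrinking-variance effect is easy to reproduce numerically. The sketch below is our own illustration, using i.i.d. Gaussian data (zero auto-correlation, the worst case described above); the standard deviation of the PAA output contracts roughly as 1/sqrt(segment size):

```python
import random
import statistics

def paa(series, segment_size):
    """Piecewise Aggregate Approximation: mean of each fixed-size segment."""
    return [statistics.mean(series[i:i + segment_size])
            for i in range(0, len(series) - segment_size + 1, segment_size)]

random.seed(0)
# An i.i.d. standard normal series has no auto-correlation, so it shows
# the strongest version of the shrinking-distribution effect.
series = [random.gauss(0.0, 1.0) for _ in range(50_000)]

for w in (1, 4, 16):
    sd = statistics.stdev(paa(series, w))
    # For i.i.d. data, segment means have standard deviation ~ 1/sqrt(w),
    # so for w > 1 the PAA output is no longer standard normal.
    print(f"segment size {w:2d}: std of PAA output = {sd:.3f}")
```

With auto-correlated data, neighbouring points within a segment are similar, so averaging cancels less variation and the contraction is weaker, matching the paper's observation.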
1210.5128
A Novel Learning Algorithm for Bayesian Network and Its Efficient Implementation on GPU
cs.DC cs.LG
Computational inference of causal relationships underlying complex networks, such as gene-regulatory pathways, is NP-complete due to its combinatorial nature when permuting all possible interactions. Markov chain Monte Carlo (MCMC) has been introduced to sample only part of the combinations while still guaranteeing convergence and traversability, and has therefore become widely used. However, MCMC cannot perform efficiently enough for networks with more than 15-20 nodes because of the computational complexity. In this paper, we use a general purpose processor (GPP) and a general purpose graphics processing unit (GPGPU) to implement and accelerate a novel Bayesian network learning algorithm. With a hash-table-based memory-saving strategy and a novel task assigning strategy, we achieve a 10-fold acceleration per iteration over a serial GPP implementation. Specifically, we use a greedy method to search for the best graph from a given order, and we incorporate a prior component in the scoring function, which further facilitates the search. Overall, we are able to apply this system to networks with more than 60 nodes, allowing inference and modeling of bigger and more complex networks than current methods.
1210.5135
LSBN: A Large-Scale Bayesian Structure Learning Framework for Model Averaging
cs.LG stat.ML
The motivation for this paper is to apply Bayesian structure learning using model averaging to large-scale networks. Currently, the Bayesian model averaging algorithm is applicable to networks with only tens of variables, restrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle arbitrarily large networks by following the principle of divide-and-conquer. The LSBN method comprises three steps. First, LSBN performs the partition using a second-order partition strategy, which achieves more robust results. It then conducts sampling and structure learning within each overlapping community after the community is isolated from other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four state-of-the-art large-scale network structure learning algorithms (ARACNE, PC, Greedy Search and MMHC), LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. Besides, the complete information about the overlapping communities is obtained as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
1210.5161
Predicting Group Evolution in the Social Network
cs.SI physics.soc-ph
Groups - social communities - are important components of entire societies, analysed by means of the social network concept. Their immanent feature is continuous evolution over time. If we know how groups in a social network have evolved, we can use this information to try to predict the next step in a given group's evolution. In the paper, a new approach for group evolution prediction is presented and examined. Experimental studies on four evolving social networks revealed that (i) prediction based on simple input features may be very accurate, (ii) some classifiers are more precise than others, and (iii) the parameters of the group evolution extraction method significantly influence the prediction quality.
1210.5167
Influence of the Dynamic Social Network Timeframe Type and Size on the Group Evolution Discovery
cs.SI physics.soc-ph
New technologies allow vast amounts of data about user interactions to be stored, from which a social network can be created. Additionally, because the times and dates of these activities are usually also stored, the dynamics of such a network can be analysed by splitting it into many timeframes, each representing the state of the network during a specific period of time. One of the most interesting issues is group evolution over time. To track group evolution, the GED method can be used. However, the choice of timeframe type and length might greatly influence the method's results. Therefore, in this paper, the influence of both the timeframe type and the timeframe length on the GED method's results is extensively analysed.
1210.5171
Identification of Group Changes in Blogosphere
cs.SI physics.soc-ph
The paper addresses the problem of change identification in social group evolution. A new SGCI method for discovering stable groups is proposed and compared with the existing GED method. Experimental studies on a Polish blogosphere service revealed that both methods are able to identify similar evolution events even though they use different concepts. Some differences were demonstrated as well.
1210.5180
Shortest Path Discovery in the Multi-layered Social Network
cs.SI physics.soc-ph
Multi-layered social networks consist of a fixed set of nodes linked by multiple connections. These connections may be derived from the different types of user activity logged in an IT system. To calculate any structural measure for multi-layered networks, this multitude of relations should be coped with in a parameterized way. Two separate algorithms for evaluating shortest paths in a multi-layered social network are proposed in the paper. The first one is based on pre-processing - aggregation of multiple links into single multi-layered edges - whereas in the second approach, the multiple edges are processed 'on the fly' in the middle of path discovery. Experimental studies carried out on the DBLP database converted into a multi-layered social network are presented as well.
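The first (pre-processing) approach can be sketched as follows. This is a generic illustration with hypothetical layer names and a min-aggregation rule as the parameterization, not the paper's exact method:

```python
import heapq

def aggregate_layers(multi_edges, combine=min):
    """Pre-processing step: collapse the per-layer weights of each
    node pair into a single multi-layered edge weight."""
    graph = {}
    for (u, v), layer_weights in multi_edges.items():
        w = combine(layer_weights.values())
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    return graph

def dijkstra(graph, source, target):
    """Standard Dijkstra on the aggregated single-layer graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Hypothetical 3-node network with two layers ("email", "coauthor");
# weights can be read as inverse interaction strength.
multi_edges = {
    ("a", "b"): {"email": 1.0, "coauthor": 3.0},
    ("b", "c"): {"email": 2.0},
    ("a", "c"): {"coauthor": 5.0},
}
graph = aggregate_layers(multi_edges)
print(dijkstra(graph, "a", "c"))  # a-b-c beats the direct a-c edge
```

The second ('on the fly') approach would instead apply `combine` to each multi-edge inside the Dijkstra loop, avoiding the pre-processing pass at the cost of repeated aggregation.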
1210.5183
LLR Compression for BICM Systems Using Large Constellations
cs.IT math.IT
Digital video broadcasting (DVB-C2) and other modern communication standards increase diversity by means of a symbol-level interleaver that spans over several codewords. De-interleaving at the receiver requires a large memory, which has a significant impact on the implementation cost. In this paper, we propose a technique that reduces the de-interleaver memory size. By quantizing log-likelihood ratios with bit-specific quantizers and compressing the quantized output, we can significantly reduce the memory size with a negligible increase in computational complexity. Both the quantizer and compressor are designed via a GMI-based maximization procedure. For a typical DVB-C2 scenario, numerical results show that the proposed solution enables a memory saving up to 30%.
1210.5184
A degree centrality in multi-layered social network
cs.SI physics.soc-ph
Multi-layered social networks reflect the complex relationships existing in modern interconnected IT systems. In such a network, each pair of nodes may be linked by many edges that correspond to different communication or collaboration user activities. A multi-layered degree centrality for multi-layered social networks is presented in the paper. Experimental studies were carried out on data collected from a real Web 2.0 site. The multi-layered social network extracted from this data consists of ten distinct layers, and the network analysis was performed for different degree centrality measures.
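One simple way such a measure can be defined is as a weighted sum of per-layer degrees, with the layer weights as parameters. This is our own illustration of the idea, not necessarily the paper's exact definition, and the layer names are hypothetical:

```python
def multilayer_degree(edges_by_layer, layer_weights):
    """Degree centrality across layers: each layer's degree contribution
    is scaled by a user-chosen layer weight (hypothetical scheme)."""
    centrality = {}
    for layer, edges in edges_by_layer.items():
        w = layer_weights.get(layer, 1.0)
        for u, v in edges:
            for node in (u, v):
                centrality[node] = centrality.get(node, 0.0) + w
    return centrality

# Hypothetical two-layer Web 2.0 network: "comments" and "tags".
edges_by_layer = {
    "comments": [("a", "b"), ("a", "c")],
    "tags": [("a", "b")],
}
scores = multilayer_degree(edges_by_layer, {"comments": 1.0, "tags": 0.5})
print(scores)  # node "a" participates in all three edges
```

Varying the layer weights changes the ranking of nodes, which is why the multitude of layers has to be handled in a parameterized way.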
1210.5196
Matrix reconstruction with the local max norm
stat.ML cs.LG
We introduce a new family of matrix norms, the "local max" norms, generalizing existing methods such as the max norm, the trace norm (nuclear norm), and the weighted or smoothed weighted trace norms, which have been extensively used in the literature as regularizers for matrix reconstruction problems. We show that this new family can be used to interpolate between the (weighted or unweighted) trace norm and the more conservative max norm. We test this interpolation on simulated data and on the large-scale Netflix and MovieLens ratings data, and find improved accuracy relative to the existing matrix norms. We also provide theoretical results showing learning guarantees for some of the new norms.
1210.5198
Multiple Hypotheses Iterative Decoding of LDPC in the Presence of Strong Phase Noise
cs.IT math.IT
Many satellite communication systems operating today employ low cost upconverters or downconverters which create phase noise. This noise can severely limit the information rate of the system and pose a serious challenge for the detection systems. Moreover, simple solutions for phase noise tracking such as PLL either require low phase noise or otherwise require many pilot symbols, which reduce the effective data rate. In the last decade we have witnessed a significant amount of research on joint estimation and decoding of phase noise and coded information. These algorithms are based on the factor graph representation of the joint posterior distribution. The framework proposed in [5] allows the design of efficient message passing algorithms which incorporate both the code graph and the channel graph. The use of LDPC or Turbo decoders, as part of iterative message passing schemes, allows the receiver to operate in low SNR regions while requiring fewer pilot symbols. In this paper we propose a multiple hypotheses algorithm for joint detection and estimation of coded information in a strong phase noise channel. We also present a low complexity mixture reduction procedure which maintains very good accuracy for the belief propagation messages.
1210.5215
The scaling of human interactions with city size
physics.soc-ph cs.SI physics.data-an
The size of cities is known to play a fundamental role in social and economic life. Yet, its relation to the structure of the underlying network of human interactions has not been investigated empirically in detail. In this paper, we map society-wide communication networks to the urban areas of two European countries. We show that both the total number of contacts and the total communication activity grow superlinearly with city population size, according to well-defined scaling relations and resulting from a multiplicative increase that affects most citizens. Perhaps surprisingly, however, the probability that an individual's contacts are also connected with each other remains largely unaffected. These empirical results predict a systematic and scale-invariant acceleration of interaction-based spreading phenomena as cities get bigger, which is numerically confirmed by applying epidemiological models to the studied networks. Our findings should provide a microscopic basis towards understanding the superlinear increase of different socioeconomic quantities with city size, that applies to almost all urban systems and includes, for instance, the creation of new inventions or the prevalence of certain contagious diseases.
1210.5219
The Domino Effect in Decentralized Wireless Networks
cs.IT cs.NI math.IT
Convergence of resource allocation algorithms is well covered in the literature, as convergence to a steady state is important for stability and performance. However, research is lacking when it comes to the propagation of changes that occur in a network due to new nodes arriving or old nodes leaving or updating their allocation. As change can propagate through the network in a manner similar to how domino pieces fall, we call this propagation of change the domino effect. In this paper we investigate how change at one node can affect other nodes for a simple power control algorithm. We provide analytical results for a deterministic network as well as a Poisson distributed network through percolation theory, and provide simulation results that highlight some aspects of the domino effect. The difficulty of mitigating this domino effect lies in the fact that to avoid it, one needs to have a margin of tolerance for changes in the network. However, a high margin leads to poor system performance in steady state, and therefore one has to consider a trade-off between performance and propagation of change.
1210.5222
Module Theorem for The General Theory of Stable Models
cs.AI cs.LO
The module theorem by Janhunen et al. demonstrates how to provide a modular structure in answer set programming, where each module has a well-defined input/output interface which can be used to establish the compositionality of answer sets. The theorem is useful in the analysis of answer set programs, and is a basis of incremental grounding and reactive answer set programming. We extend the module theorem to the general theory of stable models by Ferraris et al. The generalization applies to non-ground logic programs allowing useful constructs in answer set programming, such as choice rules, the count aggregate, and nested expressions. Our extension is based on relating the module theorem to the symmetric splitting theorem by Ferraris et al. Based on this result, we reformulate and extend the theory of incremental answer set computation to a more general class of programs.
1210.5240
Tracking Group Evolution in Social Networks
cs.SI physics.soc-ph
Easy access to vast amounts of data, especially data spanning long periods of time, allows a social network to be divided into timeframes, creating a temporal social network. Such a network makes it possible to analyse its dynamics. One aspect of these dynamics is the evolution of social communities, i.e., how a particular group changes over time. To analyse this, the complete group evolution history is needed. That is why in this paper a new method for group evolution extraction, called GED, is presented.
1210.5268
Diffusion of Lexical Change in Social Media
cs.CL cs.SI physics.soc-ph
Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity -- especially with regard to race -- plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified "netspeak" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.
1210.5288
A Scalable Null Model for Directed Graphs Matching All Degree Distributions: In, Out, and Reciprocal
cs.SI physics.soc-ph
Degree distributions are arguably the most important property of real world networks. The classic edge configuration model or Chung-Lu model can generate an undirected graph with any desired degree distribution. This serves as a good null model to compare algorithms or perform experimental studies. Furthermore, there are scalable algorithms that implement these models and they are invaluable in the study of graphs. However, networks in the real world are often directed, and have a significant proportion of reciprocal edges. A stronger relation exists between two nodes when they each point to one another (reciprocal edge) as compared to when only one points to the other (one-way edge). Despite their importance, reciprocal edges have been disregarded by most directed graph models. We propose a null model for directed graphs inspired by the Chung-Lu model that matches the in-, out-, and reciprocal-degree distributions of the real graphs. Our algorithm is scalable and requires $O(m)$ random numbers to generate a graph with $m$ edges. We perform a series of experiments on real datasets and compare with existing graph models.
1210.5290
A numerical framework for diffusion-controlled bimolecular-reactive systems to enforce maximum principles and non-negative constraint
cs.NA cs.CE
We present a novel computational framework for diffusive-reactive systems that satisfies the non-negative constraint and maximum principles on general computational grids. The governing equations for the concentration of reactants and product are written in terms of tensorial diffusion-reaction equations. We restrict our studies to fast irreversible bimolecular reactions. If one assumes that the reaction is diffusion-limited and all chemical species have the same diffusion coefficient, one can employ a linear transformation to rewrite the governing equations in terms of invariants, which are unaffected by the reaction. This results in two uncoupled tensorial diffusion equations in terms of these invariants, which are solved using a novel non-negative solver for tensorial diffusion-type equations. The concentrations of the reactants and the product are then calculated from invariants using algebraic manipulations. The novel aspect of the proposed computational framework is that it will always produce physically meaningful non-negative values for the concentrations of all chemical species. Several representative numerical examples are presented to illustrate the robustness, convergence, and the numerical performance of the proposed computational framework. We will also compare the proposed framework with other popular formulations. In particular, we will show that the Galerkin formulation (which is the standard single-field formulation) does not produce reliable solutions, and the reason can be attributed to the fact that the single-field formulation does not guarantee non-negative solutions. We will also show that the clipping procedure (which produces non-negative solutions but is considered as a variational crime) does not give accurate results when compared with the proposed computational framework.
1210.5292
Low-Complexity Demodulation for Interleaved OFDMA Downlink System Using Circular Convolution
cs.IT math.IT
In this paper, a new low-complexity demodulation scheme is proposed for the interleaved orthogonal frequency division multiple access (OFDMA) downlink system with N subcarriers and M users using circular convolution. In the proposed scheme, each user's signal is extracted from the received interleaved OFDMA signal of M users by circular convolution in the time domain and is then processed by a fast Fourier transform of reduced size N/M. It is shown that the computational complexity of the proposed scheme for the interleaved OFDMA downlink system is much less than that of the conventional one.
1210.5297
Adaptive Differential Feedback in Time-Varying Multiuser MIMO Channels
cs.IT math.IT
In the context of a time-varying multiuser multiple-input-multiple-output (MIMO) system, we design recursive least squares based adaptive predictors and differential quantizers to minimize the sum mean squared error of the overall system. Using the fact that the scalar entries of the left singular matrix of a Gaussian MIMO channel become almost Gaussian distributed even for a small number of transmit antennas, we perform adaptive differential quantization of the relevant singular matrix entries. Compared to the algorithms in the existing differential feedback literature, our proposed quantizer provides three advantages: first, the controller parameters are flexible enough to adapt themselves to different vehicle speeds; second, the model is backward adaptive, i.e., the base station and receiver can agree upon the predictor and variance estimator coefficients without explicit exchange of the parameters; third, it can accurately model the system even when the correlation between two successive channel samples becomes as low as 0.05. Our simulation results show that our proposed method can reduce the required feedback by several kilobits per second for vehicle speeds up to 20 km/h (channel tracker) and 10 km/h (singular vector tracker). The proposed system also outperforms a fixed quantizer with the same feedback overhead in terms of bit error rate up to 30 km/h.
1210.5314
Maximum Likelihood Algorithms for Joint Estimation of Synchronization Impairments and Channel in MIMO-OFDM System
cs.IT math.IT
Maximum Likelihood (ML) algorithms for the joint estimation of synchronization impairments and channel in a Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) system are investigated in this work. A system model that takes into account the effects of carrier frequency offset, sampling frequency offset, symbol timing error, and channel impulse response is formulated. Cram\'{e}r-Rao Lower Bounds for the estimation of continuous parameters are derived, which show the coupling effect among different impairments and the significance of joint estimation. We propose an ML algorithm for the joint estimation of synchronization impairments and channel using a grid search method. To reduce the complexity of the joint grid search in the ML algorithm, a Modified ML (MML) algorithm with multiple one-dimensional searches is also proposed. Further, a Stage-wise ML (SML) algorithm using existing algorithms, which estimate fewer parameters, is also proposed. The performance of the estimation algorithms is studied through numerical simulations, and it is found that the proposed ML and MML algorithms exhibit better performance than the SML algorithm.
1210.5321
The origin of Mayan languages from Formosan language group of Austronesian
cs.CL q-bio.PE
Basic body-part names (BBPNs) were defined as body-part names in the Swadesh basic 200-word list. Non-Mayan cognates of Mayan (MY) BBPNs were extensively searched for by comparison with non-MY vocabulary, including ca. 1300 basic words of 82 AN languages listed by Tryon (1985), etc. The cognates (CGs) thus found in non-MY are listed in Table 1, classified by the language groups to which the most similar cognates (MSCs) of MY BBPNs belong. CGs of MY are classified into 23 mutually unrelated CG-items, of which 17.5 CG-items have their MSCs in Austronesian (AN), giving its closest similarity score (CSS), CSS(AN) = 17.5, which consists of 10.33 MSCs in Formosan, 1.83 MSCs in Western Malayo-Polynesian (W.MP), 0.33 in Central MP, 0.0 in SHWNG, and 5.0 in Oceanic [i.e., CSS(FORM) = 10.33, CSS(W.MP) = 1.83, ..., CSS(OC) = 5.0]. These CSSs for language (sub)groups are also listed in the underlined portion of every section (Sections 1-6) of Table 1. A Chi-square test (degree of freedom = 1) using [Eq. 1] and [Eqs. 2] revealed that MSCs of MY BBPNs are distributed in Formosan with significantly higher frequency (P < 0.001) than in other subgroups of AN, as well as than in non-AN languages. MY is thus concluded to have been derived from Formosan of AN. Eskimo shows some BBPN similarities to FORM and MY.
1210.5323
The performance of orthogonal multi-matching pursuit under RIP
cs.IT cs.LG math.IT math.NA
The orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP). We denote the OMMP with parameter $M$ as OMMP(M), where $M\geq 1$ is an integer. The main difference between OMP and OMMP(M) is that OMMP(M) selects $M$ atoms per iteration, while OMP only adds one atom to the optimal atom set. In this paper, we study the performance of OMMP under RIP. In particular, we show that, when the measurement matrix A satisfies $(9s, 1/10)$-RIP, there exists an absolute constant $M_0\leq 8$ so that OMMP($M_0$) can recover $s$-sparse signals within $s$ iterations. We furthermore prove that, for slowly-decaying $s$-sparse signals, OMMP(M) can recover them within $O(\frac{s}{M})$ iterations for a large class of $M$. In particular, for $M=s^a$ with $a\in [0,1/2]$, OMMP(M) can recover slowly-decaying $s$-sparse signals within $O(s^{1-a})$ iterations. This implies that OMMP can substantially reduce the computational complexity.
1210.5338
Pairwise MRF Calibration by Perturbation of the Bethe Reference Point
cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML
We investigate different ways of generating approximate solutions to the pairwise Markov random field (MRF) selection problem. We focus mainly on the inverse Ising problem, but also discuss the somewhat related inverse Gaussian problem, because both types of MRF are suitable for inference tasks with the belief propagation algorithm (BP) under certain conditions. Our approach consists in taking a Bethe mean-field solution obtained with a maximum spanning tree (MST) of pairwise mutual information, referred to as the \emph{Bethe reference point}, as the basis for further perturbation procedures. We consider three different ways of following this idea: in the first one, we select and calibrate iteratively the optimal links to be added, starting from the Bethe reference point; the second one is based on the observation that the natural gradient can be computed analytically at the Bethe point; in the third one, assuming no local field and using a low temperature expansion, we develop a dual loop joint model based on a well-chosen fundamental cycle basis. We indeed identify a subclass of planar models, which we refer to as \emph{Bethe-dual graph models}, having possibly many loops but characterized by a singly connected dual factor graph, for which the partition function and the linear response can be computed exactly in $O(N)$ and $O(N^2)$ operations, respectively, thanks to a dual weight propagation (DWP) message passing procedure that we set up. When restricted to this subclass of models, the inverse Ising problem, being convex, becomes tractable at any temperature. Experimental tests on various datasets with refined $L_0$ or $L_1$ regularization procedures indicate that these approaches may be competitive and useful alternatives to existing ones.
1210.5374
Timing Constraints Support on Petri-Net Model for Healthcare System Design
cs.SE cs.SY
Healthcare organizations worldwide are facing a number of daunting challenges, forcing systems to benefit from modern technologies and telecom capabilities. Hence, system evolution through extension of the existing information technology infrastructure becomes one of the most challenging aspects of healthcare. In this paper, we present a new architecture for evolving healthcare systems towards a service-oriented architecture. Since healthcare processes exist in a temporal context, techniques for verifying the satisfiability of timing constraints are emerging to enable designers to test and repair design errors. Thanks to a Hierarchical Timed Predicate Petri-Net based conceptual framework, desirable properties such as deadlock-freedom and safety, as well as timing constraints satisfiability, can be easily checked by the designer.
1210.5394
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
cs.LG
We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in-between (interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the $\ell_1$ norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator.
1210.5403
An Experience Report of Large Scale Federations
cs.DB
We present an experimental study of large-scale RDF federations on top of the Bio2RDF data sources, involving 29 data sets with more than four billion RDF triples deployed in a local federation. Our federation is driven by FedX, a highly optimized federation mediator for Linked Data. We discuss design decisions, technical aspects, and experiences made in setting up and optimizing the Bio2RDF federation, and present an exhaustive experimental evaluation of the federation scenario. In addition to a controlled setting with local federation members, we study implications arising in a hybrid setting, where local federation members interact with remote federation members exhibiting higher network latency. The outcome demonstrates the feasibility of federated semantic data management in general and indicates remaining bottlenecks and research opportunities that shall serve as a guideline for future work in the area of federated semantic data processing.
1210.5424
Implementation of Distributed Time Exchange Based Cooperative Forwarding
cs.IT cs.NI math.IT
In this paper, we design and implement time exchange (TE) based cooperative forwarding where nodes use transmission time slots as incentives for relaying. We focus on distributed joint time slot exchange and relay selection in the sum goodput maximization of the overall network. We formulate the design objective as a mixed integer nonlinear programming (MINLP) problem and provide a polynomial time distributed solution of the MINLP. We implement the designed algorithm in the software defined radio enabled USRP nodes of the ORBIT indoor wireless testbed. The ORBIT grid is used as a global control plane for exchange of control information between the USRP nodes. Experimental results suggest that TE can significantly increase the sum goodput of the network. We also demonstrate the performance of a goodput optimization algorithm that is proportionally fair.
1210.5454
Stuck in Traffic (SiT) Attacks: A Framework for Identifying Stealthy Attacks that Cause Traffic Congestion
cs.NI cs.MA
Recent advances in wireless technologies have enabled many new applications in Intelligent Transportation Systems (ITS) such as collision avoidance, cooperative driving, congestion avoidance, and traffic optimization. Due to the vulnerable nature of wireless communication against interference and intentional jamming, ITS face new challenges to ensure the reliability and the safety of the overall system. In this paper, we expose a class of stealthy attacks -- Stuck in Traffic (SiT) attacks -- that aim to cause congestion by exploiting how drivers make decisions based on smart traffic signs. An attacker mounting a SiT attack solves a Markov Decision Process problem to find optimal/suboptimal attack policies in which he/she interferes with a well-chosen subset of signals that are based on the state of the system. We apply Approximate Policy Iteration (API) algorithms to derive potent attack policies. We evaluate their performance on a number of systems and compare them to other attack policies including random, myopic and DoS attack policies. The generated policies, albeit suboptimal, are shown to significantly outperform other attack policies as they maximize the expected cumulative reward from the standpoint of the attacker.
1210.5470
The DoF of Network MIMO with Backhaul Delays
cs.IT math.IT
We consider the problem of downlink precoding for Network (multi-cell) MIMO networks where Transmitters (TXs) are provided with imperfect Channel State Information (CSI). Specifically, each TX receives a delayed channel estimate with the delay being specific to each channel component. This model is particularly adapted to the scenarios where a user feeds back its CSI to its serving base only as it is envisioned in future LTE networks. We analyze the impact of the delay during the backhaul-based CSI exchange on the rate performance achieved by Network MIMO. We highlight how delay can dramatically degrade system performance if existing precoding methods are to be used. We propose an alternative robust beamforming strategy which achieves the maximal performance, in DoF sense. We verify by simulations that the theoretical DoF improvement translates into a performance increase at finite Signal-to-Noise Ratio (SNR) as well.
1210.5474
Disentangling Factors of Variation via Generative Entangling
stat.ML cs.LG cs.NE
Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.
1210.5486
A Lightweight Stemmer for Gujarati
cs.CL
Gujarati is a resource-poor language with almost no language processing tools available. In this paper we present an implementation of a rule-based stemmer for Gujarati. We show the creation of rules for stemming and the morphological richness that Gujarati possesses. We have also evaluated our results by verifying them with a human expert.
1210.5500
Modeling with Copulas and Vines in Estimation of Distribution Algorithms
cs.NE stat.ME
The aim of this work is to study the use of copulas and vines in optimization with Estimation of Distribution Algorithms (EDAs). Two EDAs are built around the multivariate product and normal copulas, and the other two are based on pair-copula decomposition of vine models. We empirically study the effect of the marginal distributions and the dependence structure separately, and show that both aspects play a crucial role in the success of the optimization. The results show that the use of copulas and vines opens new opportunities for more appropriate modeling of search distributions in EDAs.
1210.5502
OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects
q-bio.QM cs.CV
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
1210.5503
Downlink Coordinated Multi-Point with Overhead Modeling in Heterogeneous Cellular Networks
cs.IT math.IT
Coordinated multi-point (CoMP) communication is attractive for heterogeneous cellular networks (HCNs) for interference reduction. However, previous approaches to CoMP face two major hurdles in HCNs. First, they usually ignore the inter-cell overhead messaging delay, although it results in an irreducible performance bound. Second, they consider the grid or Wyner model for base station locations, which is not appropriate for HCN BS locations which are numerous and haphazard. Even for conventional macrocell networks without overlaid small cells, SINR results are not tractable in the grid model nor accurate in the Wyner model. To overcome these hurdles, we develop a novel analytical framework which includes the impact of overhead delay for CoMP evaluation in HCNs. This framework can be used for a class of CoMP schemes without user data sharing. As an example, we apply it to downlink CoMP zero-forcing beamforming (ZFBF), and see significant divergence from previous work. For example, we show that CoMP ZFBF does not increase throughput when the overhead channel delay is larger than 60% of the channel coherence time. We also find that, in most cases, coordinating with only one other cell is nearly optimum for downlink CoMP ZFBF.
1210.5515
Quality of Service Support on High Level Petri-Net Based Model for Dynamic Configuration of Web Service Composition
cs.SE cs.SY
Web services are widely used thanks to their universal interoperability between software assets, platform independence, and loose coupling. Web service composition is one of the most challenging topics in the service computing area. In this paper, an approach based on a High Level Petri-Net model as a dynamic configuration schema for web service composition is proposed to achieve self-adaptation to the run-time environment and self-management of composite web services. For composite-service-based applications, quality of service properties should be considered in addition to functional requirements. This paper presents and proves some quality of service formulas in the context of web service composition. Based on this model and the quality of service properties, a suitable configuration with optimal quality of service can be selected dynamically to reach the goal of automatic service composition. The correctness of the approach is demonstrated by simulation results and corresponding analysis.
1210.5516
Managing Changes in Citizen-Centric Healthcare Service Platform using High Level Petri Net
cs.SE cs.SY
Healthcare organizations are facing a number of daunting challenges, pushing systems to deal with requirement changes and to benefit from modern technologies and telecom capabilities. System evolution through extension of the existing information technology infrastructure becomes one of the most challenging aspects of healthcare, and adaptation to changes is a must. The paper presents a change management framework for a citizen-centric healthcare service platform. A combination of a Petri net model to handle changes and a reconfigurable Petri net model to react to these changes is introduced to fulfill healthcare goals. Thanks to this management framework model, the consistency and correctness of healthcare processes in the presence of frequent changes can be checked and guaranteed.
1210.5517
Design of English-Hindi Translation Memory for Efficient Translation
cs.CL
Developing parallel corpora is an important and difficult activity for Machine Translation, as it requires manual annotation by human translators, and translating the same text again is wasted effort. Tools implementing this exist for European languages, but no such tool is available for Indian languages. In this paper we present a tool for Indian languages which not only reuses previously available translations automatically but also, in cases where a sentence has multiple translations, provides a ranked list of suggested translations for the sentence. Moreover, this tool lets translators save their work globally or locally, so that they may share it with others, which further lightens the task.
1210.5539
Stability of Evolutionary Dynamics on Time Scales
math.DS cs.IT math.IT q-bio.PE
We combine incentive, adaptive, and time-scale dynamics to study multipopulation dynamics on the simplex equipped with a large class of Riemannian metrics, simultaneously generalizing and extending many dynamics commonly studied in dynamic game theory and evolutionary dynamics. Each population has its own geometry, method of adaptation (incentive), and time-scale (discrete, continuous, and others). Using an information-theoretic measure of distance we give a widely-applicable Lyapunov result for the dynamic. We include a wealth of examples leading up to and beyond the main results.
1210.5544
Online Learning in Decentralized Multiuser Resource Sharing Problems
cs.LG
In this paper, we consider the general scenario of resource sharing in a decentralized system when the resource rewards/qualities are time-varying and unknown to the users, and using the same resource by multiple users leads to reduced quality due to resource sharing. Firstly, we consider a user-independent reward model with no communication between the users, where a user gets feedback about the congestion level in the resource it uses. Secondly, we consider user-specific rewards and allow costly communication between the users. The users have a cooperative goal of achieving the highest system utility. There are multiple obstacles in achieving this goal such as the decentralized nature of the system, unknown resource qualities, communication, computation and switching costs. We propose distributed learning algorithms with logarithmic regret with respect to the optimal allocation. Our logarithmic regret result holds under both i.i.d. and Markovian reward models, as well as under communication, computation and switching costs.
1210.5552
Quickest Change Detection
math.ST cs.IT math.IT math.OC math.PR stat.AP stat.TH
The problem of detecting changes in the statistical properties of a stochastic system and time series arises in various branches of science and engineering. It has a wide spectrum of important applications ranging from machine monitoring to biomedical signal processing. In all of these applications the observations being monitored undergo a change in distribution in response to a change or anomaly in the environment, and the goal is to detect the change as quickly as possible, subject to false alarm constraints. In this chapter, two formulations of the quickest change detection problem, Bayesian and minimax, are introduced, and optimal or asymptotically optimal solutions to these formulations are discussed. Then some generalizations and extensions of the quickest change detection problem are described. The chapter is concluded with a discussion of applications and open issues.
1210.5560
Wikipedia Vandalism Detection Through Machine Learning: Feature Review and New Proposals: Lab Report for PAN at CLEF 2010
cs.IR cs.AI
Wikipedia is an online encyclopedia that anyone can edit. In this open model, some people edit with the intent of harming the integrity of Wikipedia. This is known as vandalism. We extend the framework presented in (Potthast, Stein, and Gerling, 2008) for Wikipedia vandalism detection. In this approach, several vandalism-indicating features are extracted from edits in a vandalism corpus and are fed to a supervised learning algorithm. The best performing classifiers were LogitBoost and Random Forest. Our classifier, a Random Forest, obtained an AUC of 0.92236, ranking first in the PAN'10 Wikipedia vandalism detection task.
1210.5581
Hidden Trends in 90 Years of Harvard Business Review
cs.CL cs.DL cs.IR
In this paper, we demonstrate and discuss the results of mining the abstracts of the publications in Harvard Business Review between 1922 and 2012. Techniques for computing n-grams, collocations, basic sentiment analysis, and named-entity recognition were employed to uncover trends hidden in the abstracts. We present findings about international relationships, sentiment in HBR's abstracts, important international companies, influential technological inventions, renowned researchers in management theories, and US presidents, via chronological analyses.
1210.5594
Cross-Entropy Clustering
cs.IT math.IT
We construct a cross-entropy clustering (CEC) theory which finds the optimal number of clusters by automatically removing groups which carry no information. Moreover, our theory gives a simple and efficient criterion to verify cluster validity. Although CEC can be built on an arbitrary family of densities, in the most important case of Gaussian CEC: {\em -- the division into clusters is affine invariant; -- the clustering has the tendency to divide the data into ellipsoid-type shapes; -- the approach is computationally efficient, as we can apply the Hartigan approach.} We also study, with particular attention, clustering based on spherical Gaussian densities and on Gaussian densities with covariance $s \I$. In the latter case we show that as $s$ converges to zero we obtain the classical k-means clustering.
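The abstract above outlines the CEC idea: each point pays the cross-entropy cost of its cluster's density model plus a $-\ln p_i$ penalty, and clusters whose proportion falls below a threshold are removed as carrying no information. The following is a minimal Lloyd-style sketch for the spherical Gaussian case; the threshold `eps`, the initialization, and the iteration scheme are illustrative assumptions, not the paper's exact algorithm (which uses a Hartigan-style approach).

```python
import numpy as np

def spherical_cec(X, k=10, eps=0.05, max_iter=50, seed=0):
    """Lloyd-style sketch of cross-entropy clustering with spherical Gaussians.

    Per-point assignment cost for cluster i:
        -ln p_i + (d/2) ln(2 pi sigma_i^2) + ||x - mu_i||^2 / (2 sigma_i^2)
    Clusters whose proportion p_i drops below eps are removed.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(n, size=k, replace=False)]
    sigma2 = np.full(k, X.var())
    p = np.full(k, 1.0 / k)

    def assign(mu, sigma2, p):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        cost = -np.log(p) + 0.5 * d * np.log(2 * np.pi * sigma2) + d2 / (2 * sigma2)
        return cost.argmin(axis=1)

    for _ in range(max_iter):
        labels = assign(mu, sigma2, p)
        # Re-estimate (p_i, mu_i, sigma_i^2); drop clusters below eps,
        # but always keep at least the largest one.
        stats = []
        for i in range(len(mu)):
            mask = labels == i
            if mask.any():
                c = X[mask]
                s2 = max(((c - c.mean(0)) ** 2).sum() / (mask.sum() * d), 1e-12)
                stats.append((mask.mean(), c.mean(0), s2))
        stats.sort(key=lambda t: -t[0])
        stats = [s for s in stats if s[0] >= eps] or stats[:1]
        p = np.array([s[0] for s in stats])
        p /= p.sum()
        mu = np.vstack([s[1] for s in stats])
        sigma2 = np.array([s[2] for s in stats])
    return assign(mu, sigma2, p), mu
```

Note how cluster removal is automatic: a redundant cluster loses points to better-fitting neighbors, its proportion shrinks, and it is pruned rather than kept artificially alive as in fixed-k methods.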
1210.5626
Compressed Sensing Signal Recovery via Forward-Backward Pursuit
cs.IT math.IT
Recovery of sparse signals from compressed measurements constitutes an l0 norm minimization problem, which is impractical to solve. A number of sparse recovery approaches have appeared in the literature, including l1 minimization techniques, greedy pursuit algorithms, Bayesian methods and nonconvex optimization techniques, among others. This manuscript introduces a novel two-stage greedy approach, called Forward-Backward Pursuit (FBP). FBP is an iterative approach where each iteration consists of consecutive forward and backward stages. The forward step first expands the support estimate by the forward step size, while the following backward step shrinks it by the backward step size. Since the forward step size is larger than the backward step size, the initially empty support estimate grows at the end of each iteration. Forward and backward steps are iterated until the residual power of the observation vector falls below a threshold. This structure of FBP does not require the sparsity level to be known a priori, in contrast to the Subspace Pursuit or Compressive Sampling Matching Pursuit algorithms. FBP recovery performance is demonstrated via simulations, including recovery of random sparse signals with different nonzero coefficient distributions in noisy and noise-free scenarios, in addition to the recovery of a sparse image.
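The forward/backward loop described in the abstract can be sketched as follows. This is a hedged reconstruction from the abstract alone, not the authors' reference code: the correlation-based forward selection, least-squares re-estimation, and the specific default step sizes (`alpha`, `beta`) are assumptions.

```python
import numpy as np

def forward_backward_pursuit(A, y, alpha=3, beta=2, tol=1e-6, max_iter=100):
    """Sketch of the two-stage FBP greedy recovery loop.

    A     : (m, n) measurement matrix
    y     : (m,)  observation vector
    alpha : forward step size (support expansion per iteration)
    beta  : backward step size (support shrinkage, beta < alpha)
    """
    n = A.shape[1]
    support = np.array([], dtype=int)
    residual = y.copy()
    for _ in range(max_iter):
        # Forward step: add the alpha columns most correlated with the residual.
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                      # exclude current support
        support = np.union1d(support, np.argsort(corr)[-alpha:])
        # Least-squares estimate restricted to the enlarged support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # Backward step: drop the beta smallest-magnitude coefficients.
        keep = np.argsort(np.abs(x_s))[beta:]
        support = np.sort(support[keep])
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        # Stop when the residual power falls below the threshold.
        if np.linalg.norm(residual) ** 2 < tol * np.linalg.norm(y) ** 2:
            break
    x = np.zeros(n)
    x[support] = x_s
    return x
```

Since the support grows by a net `alpha - beta` indices per iteration, the loop terminates on the residual threshold rather than on a preset sparsity level, matching the contrast with Subspace Pursuit and CoSaMP noted above.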
1210.5631
Content-boosted Matrix Factorization Techniques for Recommender Systems
stat.ML cs.LG
Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable.
1210.5644
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
cs.CV cs.AI cs.LG
Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.
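To make the model concrete, here is a brute-force sketch of approximate (mean-field style) inference for a fully connected CRF whose pairwise potentials are a single Gaussian kernel with Potts label compatibility. The paper's contribution is evaluating the kernel-weighted sums efficiently (in roughly linear time via high-dimensional filtering), which this O(N^2) sketch deliberately does not implement; the single-kernel form, the Potts compatibility, and all parameter names are illustrative assumptions.

```python
import numpy as np

def dense_crf_meanfield(unary, feats, w=1.0, theta=1.0, n_iters=5):
    """Naive mean-field for a fully connected CRF with one Gaussian kernel.

    unary : (N, L) unary energies (negative log-probabilities)
    feats : (N, F) per-pixel feature vectors (e.g. position + colour)
    Pairwise energy between i and j for differing labels:
        w * exp(-||f_i - f_j||^2 / (2 theta^2))
    """
    N, L = unary.shape
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = w * np.exp(-d2 / (2 * theta ** 2))   # dense N x N Gaussian kernel
    np.fill_diagonal(K, 0.0)                 # no self-interaction
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)        # initialise with unary softmax
    for _ in range(n_iters):
        msg = K @ Q                          # kernel-weighted label marginals
        # Potts model: the energy of label l is the kernel mass that
        # neighbours put on OTHER labels.
        pair = msg.sum(axis=1, keepdims=True) - msg
        Q = np.exp(-unary - pair)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q.argmax(axis=1)
```

On a toy 1-D "image" with one noisy pixel, the dense pairwise term smooths the labeling that the unaries alone would get wrong, which is the effect the experiments above report at full image scale.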
1210.5653
Identifications of concealed weapon in a Human Body
cs.CV
The detection of weapons concealed underneath a person's clothes is very important to improving public security and the safety of public assets such as airports, buildings, and railway stations.
1210.5660
Linear Physical-layer Network Coding in Galois Field for Rayleigh fading 2-Way Relay Channels
cs.IT math.IT
In this paper, we propose a novel linear physical-layer network coding (LPNC) scheme for Rayleigh fading 2-way relay channels (2-WRC). Rather than the simple modulo-2 (bit-XOR) operation, the relay directly maps the superimposed signal of the two users into the linear network-coded combination in GF(2^2) by multiplying the user data by a properly selected generator matrix. We derive the constellation-constrained capacities for LPNC and 5QAM denoise-and-forward (5QAM-DNF) [2] and further explicitly characterize the capacity difference between LPNC and 5QAM-DNF. Based on our analysis and simulation, we highlight that, without employing the irregular 5QAM mapping or sacrificing spectral efficiency, our LPNC in GF(2^2) is superior to the 5QAM-DNF scheme in the low SNR regime, while they achieve equal performance in the moderate-to-high SNR regime.
1210.5670
Typed Answer Set Programming and Inverse Lambda Algorithms
cs.AI cs.LO cs.PL
Our broader goal is to automatically translate English sentences into formulas in appropriate knowledge representation languages as a step towards understanding and thus answering questions with respect to English text. Our focus in this paper is on the language of Answer Set Programming (ASP). Our approach to translating sentences to ASP rules is inspired by Montague's use of lambda calculus formulas as the meaning of words and phrases. With ASP as the target language, the meanings of words and phrases are ASP-lambda formulas. In an earlier work we illustrated our approach by manually developing a dictionary of words and their ASP-lambda formulas. However, such an approach is not scalable. In this paper our focus is on two algorithms that allow one to construct ASP-lambda formulas in an inverse manner. In particular, the two algorithms take as input two lambda-calculus expressions G and H and compute a lambda-calculus expression F such that F applied to G, denoted by F@G, is equal to H; and similarly such that G@F = H. We present correctness and complexity results about these algorithms. To do that we develop the notion of typed ASP-lambda calculus theories and their orders and use it in developing the completeness results. (To appear in Theory and Practice of Logic Programming.)