id | title | categories | abstract
|---|---|---|---|
1309.6838 | Inverse Covariance Estimation for High-Dimensional Data in Linear Time
and Space: Spectral Methods for Riccati and Sparse Models | cs.LG stat.ML | We propose maximum likelihood estimation for learning Gaussian graphical
models with a Gaussian (ell_2^2) prior on the parameters. This is in contrast
to the commonly used Laplace (ell_1) prior for encouraging sparseness. We show
that our optimization problem leads to a Riccati matrix equation, which has a
closed form solution. We propose an efficient algorithm that performs a
singular value decomposition of the training data. Our algorithm is
O(NT^2)-time and O(NT)-space for N variables and T samples. Our method is
tailored to high-dimensional problems (N >> T), in which sparseness-promoting
methods become intractable. Furthermore, instead of obtaining a single solution
for a specific regularization parameter, our algorithm finds the whole solution
path. We show that the method has logarithmic sample complexity under the
spiked covariance model. We also propose sparsification of the dense solution
with provable performance guarantees. We provide techniques for using our
learnt models, such as removing unimportant variables, computing likelihoods
and conditional distributions. Finally, we show promising results on several
gene expression datasets.
|
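The closed-form spectral step described in this abstract can be sketched in a few lines. This is a minimal sketch, assuming the standard penalized maximum-likelihood objective tr(S Omega) - log det Omega + (lam/2)||Omega||_F^2, whose stationarity condition lam*Omega^2 + S*Omega = I is a Riccati equation solvable one eigenvalue of S at a time; the function name and the dense eigendecomposition (rather than the paper's thin SVD of the data for the N >> T regime) are illustrative choices.

```python
import numpy as np

def ridge_precision(X, lam):
    """Closed-form squared-ell_2-penalized precision estimate via a spectral solve.

    Solves the Riccati stationarity condition  lam * Omega^2 + S @ Omega = I
    eigenvalue-wise, since Omega shares the eigenbasis of the sample covariance S.
    """
    T, N = X.shape
    S = X.T @ X / T                      # sample covariance (assumes centered X)
    s, U = np.linalg.eigh(S)             # S = U diag(s) U^T
    # positive root of lam*w^2 + s*w - 1 = 0, applied per eigenvalue
    w = (-s + np.sqrt(s**2 + 4.0 * lam)) / (2.0 * lam)
    return (U * w) @ U.T                 # Omega = U diag(w) U^T
```

Because the solution is a function of S's eigenvalues only, the whole regularization path in lam comes from one decomposition, as the abstract notes.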
1309.6839 | Solving Limited-Memory Influence Diagrams Using Branch-and-Bound Search | cs.AI | A limited-memory influence diagram (LIMID) generalizes a traditional
influence diagram by relaxing the assumptions of regularity and no-forgetting,
allowing a wider range of decision problems to be modeled. Algorithms for
solving traditional influence diagrams are not easily generalized to solve
LIMIDs, however, and only recently have exact algorithms for solving LIMIDs
been developed. In this paper, we introduce an exact algorithm for solving
LIMIDs that is based on branch-and-bound search. Our approach is related to the
approach of solving an influence diagram by converting it to an equivalent
decision tree, with the difference that the LIMID is converted to a much
smaller decision graph that can be searched more efficiently.
|
1309.6840 | Constrained Bayesian Inference for Low Rank Multitask Learning | cs.LG stat.ML | We present a novel approach for constrained Bayesian inference. Unlike
current methods, our approach does not require convexity of the constraint set.
We reduce the constrained variational inference to a parametric optimization
over the feasible set of densities and propose a general recipe for such
problems. We apply the proposed constrained Bayesian inference approach to
multitask learning subject to rank constraints on the weight matrix. Further,
constrained parameter estimation is applied to recover the sparse conditional
independence structure encoded by prior precision matrices. Our approach is
motivated by reverse inference for high dimensional functional neuroimaging, a
domain where the high dimensionality and small number of examples require the
use of constraints to ensure meaningful and effective models. For this
application, we propose a model that jointly learns a weight matrix and the
prior inverse covariance structure between different tasks. We present
experimental validation showing that the proposed approach outperforms strong
baseline models in terms of predictive performance and structure recovery.
|
1309.6841 | Collective Diffusion Over Networks: Models and Inference | cs.SI physics.soc-ph | Diffusion processes in networks are increasingly used to model the spread of
information and social influence. In several applications in computational
sustainability, such as the spread of wildlife, infectious diseases, and traffic
mobility patterns, the observed data often consists of only aggregate
information. In this work, we present new models that generalize standard
diffusion processes to such collective settings. We also present optimization
based techniques that can accurately learn the underlying dynamics of the given
contagion process, including the hidden network structure, by only observing
the time a node becomes active and the associated aggregate information.
Empirically, our technique is highly robust and accurately learns network
structure with more than 90% recall and precision. Results on real-world flu
spread data in the US confirm that our technique can also accurately model
infectious disease spread.
|
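The standard diffusion process that these collective models generalize can be illustrated by the independent-cascade model; this is a baseline sketch only, and the graph structure and the uniform activation probability p are illustrative assumptions, not the paper's model.

```python
import random

def independent_cascade(seeds, edges, p, rng=random, max_steps=100):
    """Run a standard independent-cascade diffusion from a set of seed nodes.

    edges maps each node to its out-neighbours; each newly active node gets one
    chance to activate each still-inactive neighbour with probability p.
    """
    active, frontier = set(seeds), set(seeds)
    for _ in range(max_steps):
        new = set()
        for u in frontier:
            for v in edges.get(u, []):
                if v not in active and rng.random() < p:
                    new.add(v)
        if not new:
            break
        active |= new
        frontier = new
    return active
```

In the collective setting of the abstract, only aggregates of such activations (e.g., counts of active nodes over time) would be observed rather than individual node states.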
1309.6842 | Causal Transportability of Experiments on Controllable Subsets of
Variables: z-Transportability | cs.AI | We introduce z-transportability, the problem of estimating the causal effect
of a set of variables X on another set of variables Y in a target domain from
experiments on any subset of controllable variables Z where Z is an arbitrary
subset of observable variables V in a source domain. z-Transportability
generalizes z-identifiability, the problem of estimating in a given domain the
causal effect of X on Y from surrogate experiments on a set of variables Z such
that Z is disjoint from X. z-Transportability also generalizes
transportability which requires that the causal effect of X on Y in the target
domain be estimable from experiments on any subset of all observable variables
in the source domain. We first generalize z-identifiability to allow cases
where Z is not necessarily disjoint from X. Then, we establish a necessary and
sufficient condition for z-transportability in terms of generalized
z-identifiability and transportability. We provide a correct and complete
algorithm that determines whether a causal effect is z-transportable; and if it
is, produces a transport formula, that is, a recipe for estimating the causal
effect of X on Y in the target domain using information elicited from the
results of experimental manipulations of Z in the source domain and
observational data from the target domain. Our results also show that
do-calculus is complete for z-transportability.
|
1309.6843 | A Sound and Complete Algorithm for Learning Causal Models from
Relational Data | cs.AI | The PC algorithm learns maximally oriented causal Bayesian networks. However,
there is no equivalent complete algorithm for learning the structure of
relational models, a more expressive generalization of Bayesian networks.
Recent developments in the theory and representation of relational models
support lifted reasoning about conditional independence. This enables a
powerful constraint for orienting bivariate dependencies and forms the basis of
a new algorithm for learning structure. We present the relational causal
discovery (RCD) algorithm that learns causal relational models. We prove that
RCD is sound and complete, and we present empirical results that demonstrate
effectiveness.
|
1309.6844 | Evaluating Anytime Algorithms for Learning Optimal Bayesian Networks | cs.AI | Exact algorithms for learning Bayesian networks guarantee to find provably
optimal networks. However, they may fail in difficult learning tasks due to
limited time or memory. In this research we adapt several anytime heuristic
search-based algorithms to learn Bayesian networks. These algorithms find
high-quality solutions quickly, and continually improve the incumbent solution
or prove its optimality before resources are exhausted. Empirical results show
that the anytime window A* algorithm usually finds higher-quality, often
optimal, networks more quickly than other approaches. The results also show
that, surprisingly, generating networks with few parents per variable, though
structurally simpler, are harder to learn than complex generating networks
with more parents per variable.
|
1309.6845 | On the Complexity of Strong and Epistemic Credal Networks | cs.AI | Credal networks are graph-based statistical models whose parameters take
values in a set, instead of being sharply specified as in traditional
statistical models (e.g., Bayesian networks). The computational complexity of
inferences on such models depends on the irrelevance/independence concept
adopted. In this paper, we study inferential complexity under the concepts of
epistemic irrelevance and strong independence. We show that inferences under
strong independence are NP-hard even in trees with ternary variables. We prove
that under epistemic irrelevance the polynomial time complexity of inferences
in credal trees is not likely to extend to more general models (e.g. singly
connected networks). These results clearly distinguish networks that admit
efficient inferences and those where inferences are most likely hard, and
settle several open questions regarding computational complexity.
|
1309.6846 | Learning Periodic Human Behaviour Models from Sparse Data for
Crowdsourcing Aid Delivery in Developing Countries | cs.AI | In many developing countries, half the population lives in rural locations,
where access to essentials such as school materials, mosquito nets, and medical
supplies is restricted. We propose an alternative method of distribution (to
standard road delivery) in which the existing mobility habits of a local
population are leveraged to deliver aid, which raises two technical challenges
in the areas of optimisation and learning. For optimisation, a standard Markov
decision process applied to this problem is intractable, so we provide an exact
formulation that takes advantage of the periodicities in human location
behaviour. To learn such behaviour models from sparse data (i.e., cell tower
observations), we develop a Bayesian model of human mobility. Using real cell
tower data of the mobility behaviour of 50,000 individuals in Ivory Coast, we
find that our model outperforms state-of-the-art approaches in mobility
prediction by at least 25% (in held-out data likelihood). Furthermore, when
incorporating mobility prediction with our MDP approach, we find an 81.3%
reduction in total delivery time versus routine planning that minimises just
the number of participants in the solution path.
|
1309.6847 | Learning Max-Margin Tree Predictors | cs.LG stat.ML | Structured prediction is a powerful framework for coping with joint
prediction of interacting outputs. A central difficulty in using this framework
is that often the correct label dependence structure is unknown. At the same
time, we would like to avoid an overly complex structure that will lead to
intractable prediction. In this work we address the challenge of learning tree
structured predictive models that achieve high accuracy while at the same time
facilitating efficient (linear-time) inference. We start by proving that this
task is in general NP-hard, and then suggest an approximate alternative.
Briefly, our CRANK approach relies on a novel Circuit-RANK regularizer that
penalizes non-tree structures and that can be optimized using a CCCP procedure.
We demonstrate the effectiveness of our approach on several domains and show
that, despite the relative simplicity of the structure, prediction accuracy is
competitive with a fully connected model that is computationally costly at
prediction time.
|
1309.6848 | Tighter Linear Program Relaxations for High Order Graphical Models | cs.AI | Graphical models with High Order Potentials (HOPs) have received considerable
interest in recent years. While there are a variety of approaches to inference
in these models, nearly all of them amount to solving a linear program (LP)
relaxation with unary consistency constraints between the HOP and the
individual variables. In many cases, the resulting relaxations are loose, and
in these cases the results of inference can be poor. It is thus desirable to
look for more accurate ways of performing inference in these models. In this
work, we study the LP relaxations that result from enforcing additional
consistency constraints between the HOP and the rest of the model. We address
theoretical questions about the strength of the resulting relaxations compared
to the relaxations that arise in standard approaches, and we develop practical
and efficient message passing algorithms for optimizing the LPs. Empirically,
we show that the LPs with additional consistency constraints lead to more
accurate inference on some challenging problems that include a combination of
low order and high order terms.
|
1309.6849 | Cyclic Causal Discovery from Continuous Equilibrium Data | cs.LG cs.AI stat.ML | We propose a method for learning cyclic causal models from a combination of
observational and interventional equilibrium data. Novel aspects of the
proposed method are its ability to work with continuous data (without assuming
linearity) and to deal with feedback loops. Within the context of biochemical
reactions, we also propose a novel way of modeling interventions that modify
the activity of compounds instead of their abundance. For computational
reasons, we approximate the nonlinear causal mechanisms by (coupled) local
linearizations, one for each experimental condition. We apply the method to
reconstruct a cellular signaling network from the flow cytometry data measured
by Sachs et al. (2005). We show that our method finds evidence in the data for
feedback loops and that it gives a more accurate quantitative description of
the data at comparable model complexity.
|
1309.6850 | Structured Convex Optimization under Submodular Constraints | cs.LG cs.DS stat.ML | A number of discrete and continuous optimization problems in machine learning
are related to convex minimization problems under submodular constraints. In
this paper, we deal with a submodular function with a directed graph structure,
and we show that a wide range of convex optimization problems under submodular
constraints can be solved much more efficiently than general submodular
optimization methods by a reduction to a maximum flow problem. Furthermore, we
give some applications, including sparse optimization methods, in which the
proposed methods are effective. Additionally, we evaluate the performance of
the proposed method through computational experiments.
|
1309.6851 | Treedy: A Heuristic for Counting and Sampling Subsets | cs.DS cs.AI cs.LG | Consider a collection of weighted subsets of a ground set N. Given a query
subset Q of N, how fast can one (1) find the weighted sum over all subsets of
Q, and (2) sample a subset of Q proportionally to the weights? We present a
tree-based greedy heuristic, Treedy, that for a given positive tolerance d
answers such counting and sampling queries to within a guaranteed relative
error d and total variation distance d, respectively. Experimental results on
artificial instances and in application to Bayesian structure discovery in
Bayesian networks show that approximations yield dramatic savings in running
time compared to exact computation, and that Treedy typically outperforms a
previously proposed sorting-based heuristic.
|
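The two exact queries that Treedy approximates can be stated as a brute-force baseline in a few lines; this sketch (function names are illustrative) is what the heuristic trades exactness for speed against.

```python
import random

def count_query(weighted_sets, Q):
    """Exact counting query: total weight of stored subsets contained in Q."""
    return sum(w for S, w in weighted_sets if S <= Q)

def sample_query(weighted_sets, Q, rng=random):
    """Exact sampling query: draw a stored subset of Q proportionally to weight."""
    eligible = [(S, w) for S, w in weighted_sets if S <= Q]
    total = sum(w for _, w in eligible)
    r = rng.uniform(0.0, total)
    for S, w in eligible:
        r -= w
        if r <= 0:
            return S
    return eligible[-1][0]  # guard against floating-point round-off
```

The baseline scans the whole collection per query; Treedy's tree-based traversal with tolerance d avoids that at the cost of a bounded relative error.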
1309.6852 | Stochastic Rank Aggregation | cs.LG cs.IR stat.ML | This paper addresses the problem of rank aggregation, which aims to find a
consensus ranking among multiple ranking inputs. Traditional rank aggregation
methods are deterministic, and can be categorized into explicit and implicit
methods depending on whether rank information is explicitly or implicitly
utilized. Surprisingly, experimental results on real data sets show that
explicit rank aggregation methods would not work as well as implicit methods,
although rank information is critical for the task. Our analysis indicates that
the major reason might be the unreliable rank information from incomplete
ranking inputs. To solve this problem, we propose to incorporate uncertainty
into rank aggregation and tackle the problem in both unsupervised and
supervised scenarios. We call this novel framework stochastic rank aggregation
(St.Agg for short). Specifically, we introduce a prior distribution on ranks,
and transform the ranking functions or objectives in traditional explicit
methods to their expectations over this distribution. Our experiments on
benchmark data sets show that the proposed St.Agg outperforms the baselines in
both unsupervised and supervised scenarios.
|
1309.6855 | Evaluating computational models of explanation using human judgments | cs.AI | We evaluate four computational models of explanation in Bayesian networks by
comparing model predictions to human judgments. In two experiments, we present
human participants with causal structures for which the models make divergent
predictions and either solicit the best explanation for an observed event
(Experiment 1) or have participants rate provided explanations for an observed
event (Experiment 2). Across two versions of two causal structures and across
both experiments, we find that the Causal Explanation Tree and Most Relevant
Explanation models provide better fits to human data than either Most Probable
Explanation or Explanation Tree models. We identify strengths and shortcomings
of these models and what they can reveal about human explanation. We conclude
by suggesting the value of pursuing computational and psychological
investigations of explanation in parallel.
|
1309.6856 | Approximation of Lorenz-Optimal Solutions in Multiobjective Markov
Decision Processes | cs.AI | This paper is devoted to fair optimization in Multiobjective Markov Decision
Processes (MOMDPs). A MOMDP is an extension of the MDP model for planning under
uncertainty while trying to optimize several reward functions simultaneously.
This applies to multiagent problems when rewards define individual utility
functions, or in multicriteria problems when rewards refer to different
features. In this setting, we study the determination of policies leading to
Lorenz-non-dominated tradeoffs. Lorenz dominance is a refinement of Pareto
dominance that was introduced in Social Choice for the measurement of
inequalities. In this paper, we introduce methods to efficiently approximate
the sets of Lorenz-non-dominated solutions of infinite-horizon, discounted
MOMDPs. The approximations are polynomial-sized subsets of those solutions.
|
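Lorenz dominance, the refinement of Pareto dominance used above, is easy to state concretely: sort the reward vector in increasing order, take cumulative sums, and compare componentwise. A minimal sketch under that standard definition:

```python
import itertools

def lorenz_vector(u):
    """Cumulative sums of the components sorted in increasing order."""
    return list(itertools.accumulate(sorted(u)))

def lorenz_dominates(v, u):
    """True iff v Lorenz-dominates u: its Lorenz vector is componentwise >=
    that of u, with strict inequality somewhere."""
    Lv, Lu = lorenz_vector(v), lorenz_vector(u)
    return all(a >= b for a, b in zip(Lv, Lu)) and any(a > b for a, b in zip(Lv, Lu))
```

For example, the tradeoff (2, 2) Lorenz-dominates (3, 1): both total 4, but (2, 2) distributes the reward more equally, which is the fairness notion the paper optimizes for.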
1309.6857 | Solution Methods for Constrained Markov Decision Process with Continuous
Probability Modulation | cs.AI | We propose solution methods for previously-unsolved constrained MDPs in which
actions can continuously modify the transition probabilities within some
acceptable sets. While many methods have been proposed to solve regular MDPs
with large state sets, there are few practical approaches for solving
constrained MDPs with large action sets. In particular, we show that the
continuous action sets can be replaced by their extreme points when the rewards
are linear in the modulation. We also develop a tractable optimization
formulation for concave reward functions and, surprisingly, also extend it to
non-concave reward functions by using their concave envelopes. We evaluate the
effectiveness of the approach on the problem of managing delinquencies in a
portfolio of loans.
|
1309.6858 | The Supervised IBP: Neighbourhood Preserving Infinite Latent Feature
Models | cs.LG stat.ML | We propose a probabilistic model to infer supervised latent variables in the
Hamming space from observed data. Our model allows simultaneous inference of
the number of binary latent variables, and their values. The latent variables
preserve the neighbourhood structure of the data in the sense that objects in the
same semantic concept have similar latent values, and objects in different
concepts have dissimilar latent values. We formulate the supervised infinite
latent variable problem based on an intuitive principle of pulling objects
together if they are of the same type, and pushing them apart if they are not.
We then combine this principle with a flexible Indian Buffet Process prior on
the latent variables. We show that the inferred supervised latent variables can
be directly used to perform a nearest neighbour search for the purpose of
retrieval. We introduce a new application of dynamically extending hash codes,
and show how to effectively couple the structure of the hash codes with
continuously growing structure of the neighbourhood preserving infinite latent
feature space.
|
1309.6860 | Identifying Finite Mixtures of Nonparametric Product Distributions and
Causal Inference of Confounders | cs.LG cs.AI stat.ML | We propose a kernel method to identify finite mixtures of nonparametric
product distributions. It is based on a Hilbert space embedding of the joint
distribution. The rank of the constructed tensor is equal to the number of
mixture components. We present an algorithm to recover the components by
partitioning the data points into clusters such that the variables are jointly
conditionally independent given the cluster. This method can be used to
identify finite confounders.
|
1309.6862 | Determinantal Clustering Processes - A Nonparametric Bayesian Approach
to Kernel Based Semi-Supervised Clustering | cs.LG stat.ML | Semi-supervised clustering is the task of clustering data points into
clusters where only a fraction of the points are labelled. The true number of
clusters in the data is often unknown and most models require this parameter as
an input. Dirichlet process mixture models are appealing as they can infer the
number of clusters from the data. However, these models do not deal with high
dimensional data well and can encounter difficulties in inference. We present a
novel nonparametric Bayesian kernel-based method to cluster data points
without the need to prespecify the number of clusters or to model complicated
densities from which data points are assumed to be generated. The key
insight is to use determinants of submatrices of a kernel matrix as a measure
of how close together a set of points are. We explore some theoretical
properties of the model and derive a natural Gibbs based algorithm with MCMC
hyperparameter learning. The model is implemented on a variety of synthetic and
real world data sets.
|
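The key insight above, determinants of kernel submatrices as a closeness measure, can be shown directly: with an RBF kernel, a tight cluster gives a nearly singular Gram submatrix (determinant near 0), while well-separated points give a determinant near 1. The kernel choice and length scale here are illustrative.

```python
import numpy as np

def rbf_kernel(X, length_scale=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * length_scale^2))."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale**2))

def cluster_determinant(K, idx):
    """Determinant of the kernel submatrix for the points in idx:
    near 0 for tightly clustered points, near 1 for well-separated ones."""
    return float(np.linalg.det(K[np.ix_(idx, idx)]))
```

This is the quantity a determinantal clustering model can score candidate partitions with, without ever modeling the density the points were drawn from.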
1309.6863 | Sparse Nested Markov models with Log-linear Parameters | cs.LG cs.AI stat.ML | Hidden variables are ubiquitous in practical data analysis, and therefore
modeling marginal densities and doing inference with the resulting models is an
important problem in statistics, machine learning, and causal inference.
Recently, a new type of graphical model, called the nested Markov model, was
developed which captures equality constraints found in marginals of directed
acyclic graph (DAG) models. Some of these constraints, such as the so called
'Verma constraint', strictly generalize conditional independence. To make
modeling and inference with nested Markov models practical, it is necessary to
limit the number of parameters in the model, while still correctly capturing
the constraints in the marginal of a DAG model. Placing such limits is similar
in spirit to sparsity methods for undirected graphical models, and regression
models. In this paper, we give a log-linear parameterization which allows
sparse modeling with nested Markov models. We illustrate the advantages of this
parameterization with a simulation study.
|
1309.6864 | Preference Elicitation For General Random Utility Models | cs.AI | This paper discusses General Random Utility Models (GRUMs). These are a
class of parametric models that generate partial ranks over alternatives given
attributes of agents and alternatives. We propose two preference elicitation
schemes for GRUMs developed from principles in Bayesian experimental design, one
for social choice and the other for personalized choice. We couple this with a
general Monte-Carlo-Expectation-Maximization (MC-EM) based algorithm for MAP
inference under GRUMs. We also prove uni-modality of the likelihood functions
for a class of GRUMs. We examine the performance of various criteria by
experimental studies, which show that the proposed elicitation scheme increases
the precision of estimation.
|
1309.6865 | Modeling Documents with Deep Boltzmann Machines | cs.LG cs.IR stat.ML | We introduce a Deep Boltzmann Machine model suitable for modeling and
extracting latent semantic representations from a large unstructured collection
of documents. We overcome the apparent difficulty of training a DBM with
judicious parameter tying. This parameter tying enables an efficient
pretraining algorithm and a state initialization scheme that aids inference.
The model can be trained just as efficiently as a standard Restricted Boltzmann
Machine. Our experiments show that the model assigns better log probability to
unseen data than the Replicated Softmax model. Features extracted from our
model outperform LDA, Replicated Softmax, and DocNADE models on document
retrieval and document classification tasks.
|
1309.6867 | Speedy Model Selection (SMS) for Copula Models | cs.LG stat.ME | We tackle the challenge of efficiently learning the structure of expressive
multivariate real-valued densities of copula graphical models. We start by
theoretically substantiating the conjecture that for many copula families the
magnitude of Spearman's rank correlation coefficient is monotone in the
expected contribution of an edge in the network, namely the negative copula
entropy. We then build on this theory and suggest a novel Bayesian approach
that makes use of a prior over values of Spearman's rho for learning
copula-based models that involve a mix of copula families. We demonstrate the
generalization effectiveness of our highly efficient approach on sizable and
varied real-life datasets.
|
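The quantity at the heart of this approach, Spearman's rho as a cheap proxy for an edge's expected contribution, is simple to compute: it is the Pearson correlation of the ranks. A sketch (helper names are illustrative, and the edge scoring is a simplification of the paper's Bayesian prior over rho):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assuming no ties): Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def rank_edges(X):
    """Rank candidate edges of a copula model by |rho|, the monotone proxy
    for the (negative) copula entropy contribution of each edge."""
    n = X.shape[1]
    scored = [((i, j), abs(spearman_rho(X[:, i], X[:, j])))
              for i in range(n) for j in range(i + 1, n)]
    return sorted(scored, key=lambda t: -t[1])
```

Because rho depends only on ranks, it is invariant to monotone marginal transformations, which is exactly why it pairs naturally with copula models.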
1309.6868 | Approximate Kalman Filter Q-Learning for Continuous State-Space MDPs | cs.LG stat.ML | We seek to learn an effective policy for a Markov Decision Process (MDP) with
continuous states via Q-Learning. Given a set of basis functions over state
action pairs we search for a corresponding set of linear weights that minimizes
the mean Bellman residual. Our algorithm uses a Kalman filter model to estimate
those weights and we have developed a simpler approximate Kalman filter model
that outperforms the current state-of-the-art projected TD-Learning methods on
several standard benchmark problems.
|
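The Kalman-filter weight estimation described above can be sketched as a single measurement update: treat the Bellman target r + gamma * max_a' Q(s', a') as a noisy scalar observation of phi(s, a)^T w. This is a generic sketch, not the paper's approximate filter; the names phi_sa and obs_noise are illustrative.

```python
import numpy as np

def kalman_q_update(w, P, phi_sa, target, obs_noise=1.0):
    """One Kalman measurement update for linear Q-function weights.

    Treats the Bellman target as a noisy observation of phi(s, a)^T w with
    observation variance obs_noise; P is the weight covariance estimate.
    """
    H = phi_sa                                  # observation (row) vector
    innovation = target - H @ w                 # Bellman residual for this sample
    S = H @ P @ H + obs_noise                   # innovation variance (scalar)
    K = P @ H / S                               # Kalman gain
    w_new = w + K * innovation
    P_new = P - np.outer(K, H) @ P              # (I - K H^T) P
    return w_new, P_new
```

Unlike a fixed-step TD update, the gain K adapts per-feature as the covariance P shrinks, which is the mechanism the abstract's approach exploits.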
1309.6869 | Finite-Time Analysis of Kernelised Contextual Bandits | cs.LG stat.ML | We tackle the problem of online reward maximisation over a large finite set
of actions described by their contexts. We focus on the case when the number of
actions is too big to sample all of them even once. However we assume that we
have access to the similarities between actions' contexts and that the expected
reward is an arbitrary linear function of the contexts' images in the related
reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB
algorithm, and give a cumulative regret bound through a frequentist analysis.
For contextual bandits, the related algorithm GP-UCB turns out to be a special
case of our algorithm, and our finite-time analysis improves the regret bound
of GP-UCB for the agnostic case, both in terms of the kernel-dependent
quantity and the RKHS norm of the reward function. Moreover, for the linear
kernel, our regret bound matches the lower bound for contextual linear bandits.
|
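One round of a kernelised UCB rule can be sketched as kernel ridge regression plus an exploration width. This is a toy sketch: the RBF kernel, the fixed exploration coefficient beta, and the matrix inverse (rather than incremental updates) are illustrative simplifications of KernelUCB's actual confidence width and bookkeeping.

```python
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * ls**2))

def kernel_ucb_choose(contexts, X_hist, y_hist, lam=1.0, beta=2.0):
    """Pick the action maximising (kernel ridge mean) + beta * (posterior width)."""
    if len(X_hist) == 0:
        return 0                                  # no data yet: arbitrary choice
    K = np.array([[rbf(a, b) for b in X_hist] for a in X_hist])
    A_inv = np.linalg.inv(K + lam * np.eye(len(X_hist)))
    best, best_val = 0, -np.inf
    for i, x in enumerate(contexts):
        k_x = np.array([rbf(x, b) for b in X_hist])
        mean = k_x @ A_inv @ np.asarray(y_hist)
        width = np.sqrt(max(rbf(x, x) - k_x @ A_inv @ k_x, 0.0))
        ucb = mean + beta * width
        if ucb > best_val:
            best, best_val = i, ucb
    return best
```

With a linear kernel this reduces to the familiar linear-UCB form, matching the abstract's remark about the linear-kernel regret bound.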
1309.6870 | Dynamic Blocking and Collapsing for Gibbs Sampling | cs.AI | In this paper, we investigate combining blocking and collapsing -- two widely
used strategies for improving the accuracy of Gibbs sampling -- in the context
of probabilistic graphical models (PGMs). We show that combining them is not
straightforward because collapsing (or eliminating variables) introduces new
dependencies in the PGM and in computation-limited settings, this may adversely
affect blocking. We therefore propose a principled approach for tackling this
problem. Specifically, we develop two scoring functions, one each for blocking
and collapsing, and formulate the problem of partitioning the variables in the
PGM into blocked and collapsed subsets as simultaneously maximizing both
scoring functions (i.e., a multi-objective optimization problem). We propose a
dynamic, greedy algorithm for approximately solving this intractable
optimization problem. Our dynamic algorithm periodically updates the
partitioning into blocked and collapsed variables by leveraging correlation
statistics gathered from the generated samples and enables rapid mixing by
blocking together and collapsing highly correlated variables. We demonstrate
experimentally the clear benefit of our dynamic approach: as more samples are
drawn, our dynamic approach significantly outperforms static graph-based
approaches by an order of magnitude in terms of accuracy.
|
1309.6871 | Bounded Approximate Symbolic Dynamic Programming for Hybrid MDPs | cs.AI | Recent advances in symbolic dynamic programming (SDP) combined with the
extended algebraic decision diagram (XADD) data structure have provided exact
solutions for mixed discrete and continuous (hybrid) MDPs with piecewise linear
dynamics and continuous actions. Since XADD-based exact solutions may grow
intractably large for many problems, we propose a bounded error compression
technique for XADDs that involves the solution of a constrained bilinear saddle
point problem. Fortuitously, we show that given the special structure of this
problem, it can be expressed as a bilevel linear programming problem and solved
to optimality in finite time via constraint generation, despite having an
infinite set of constraints. This solution permits the use of efficient linear
program solvers for XADD compression and enables a novel class of bounded
approximate SDP algorithms for hybrid MDPs that empirically offers
order-of-magnitude speedups over the exact solution in exchange for a small
approximation error.
|
1309.6872 | On MAP Inference by MWSS on Perfect Graphs | cs.AI cs.DS | Finding the most likely (MAP) configuration of a Markov random field (MRF) is
NP-hard in general. A promising, recent technique is to reduce the problem to
finding a maximum weight stable set (MWSS) on a derived weighted graph, which
if perfect, allows inference in polynomial time. We derive new results for this
approach, including a general decomposition theorem for MRFs of any order and
number of labels, extensions of results for binary pairwise models with
submodular cost functions to higher order, and an exact characterization of
which binary pairwise MRFs can be efficiently solved with this method. This
defines the power of the approach on this class of models, improves our toolbox
and expands the range of tractable models.
|
1309.6874 | Integrating Document Clustering and Topic Modeling | cs.LG cs.CL cs.IR stat.ML | Document clustering and topic modeling are two closely related tasks which
can mutually benefit each other. Topic modeling can project documents into a
topic space which facilitates effective document clustering. Cluster labels
discovered by document clustering can be incorporated into topic models to
extract local topics specific to each cluster and global topics shared by all
clusters. In this paper, we propose a multi-grain clustering topic model
(MGCTM) which integrates document clustering and topic modeling into a unified
framework and jointly performs the two tasks to achieve the overall best
performance. Our model tightly couples two components: a mixture component used
for discovering latent groups in document collection and a topic model
component used for mining multi-grain topics including local topics specific to
each cluster and global topics shared across clusters. We employ variational
inference to approximate the posterior of hidden variables and learn model
parameters. Experiments on two datasets demonstrate the effectiveness of our
model.
|
1309.6875 | Active Learning with Expert Advice | cs.LG stat.ML | Conventional learning with expert advice methods assume a learner always
receives the outcome (e.g., class labels) of every incoming training instance
at the end of each trial. In real applications, acquiring the outcome from the
oracle can be costly or time consuming. In this paper, we address a new problem
of active learning with expert advice, where the outcome of an instance is
disclosed only when it is requested by the online learner. Our goal is to learn
an accurate prediction model while asking the oracle as few questions as
possible. To address this challenge, we propose a framework of active
forecasters for online active learning with expert advice, which attempts to
extend two regular forecasters, i.e., Exponentially Weighted Average Forecaster
and Greedy Forecaster, to tackle the task of active learning with expert
advice. We prove that the proposed algorithms satisfy Hannan consistency
under appropriate assumptions, and validate the efficacy of our technique by an
extensive set of experiments.
|
1309.6876 | Bennett-type Generalization Bounds: Large-deviation Case and Faster Rate
of Convergence | stat.ML cs.LG | In this paper, we present Bennett-type generalization bounds for the
learning process with i.i.d. samples, and then show that these generalization
bounds have a faster rate of convergence than the traditional results. In
particular, we first develop two types of Bennett-type deviation inequalities for
the i.i.d. learning process: one provides the generalization bounds based on
the uniform entropy number; the other leads to the bounds based on the
Rademacher complexity. We then adopt a new method to obtain the alternative
expressions of the Bennett-type generalization bounds, which imply that the
bounds have a faster rate o(N^{-1/2}) of convergence than the traditional
results O(N^{-1/2}). Additionally, we find that the rate of the bounds becomes
faster in the large-deviation case, which refers to a situation where
the empirical risk is far away from (at least not close to) the expected risk.
Finally, we analyze the asymptotic convergence of the learning process and
compare our analysis with the existing results.
|
1309.6883 | Predicate Logic as a Modeling Language: Modeling and Solving some
Machine Learning and Data Mining Problems with IDP3 | cs.LO cs.AI | This paper provides a gentle introduction to problem solving with the IDP3
system. The core of IDP3 is a finite model generator that supports first order
logic enriched with types, inductive definitions, aggregates and partial
functions. It offers its users a modeling language that is a slight extension
of predicate logic and allows them to solve a wide range of search problems.
Apart from a small introductory example, applications are selected from
problems that arose within machine learning and data mining research. These
research areas have recently shown a strong interest in declarative modeling
and constraint solving as opposed to algorithmic approaches. The paper
illustrates that the IDP3 system can be a valuable tool for researchers with
such an interest.
The first problem is in the domain of stemmatology, a branch of philology
concerned with the relationships between surviving variant versions of a text. The
second problem is about a somewhat related problem within biology where
phylogenetic trees are used to represent the evolution of species. The third
and final problem concerns the classical problem of learning a minimal
automaton consistent with a given set of strings. For this last problem, we
show that the performance of our solution comes very close to that of a
state-of-the-art solution. For each of these applications, we analyze the
problem, illustrate the development of a logic-based model and explore how
alternatives can affect the performance.
|
1309.6908 | A Collaborative Filtering Based Approach for Recommending Elective
Courses | cs.IR | In management education programmes today, students have a difficult time
choosing electives because so many electives are available. As the range
and diversity of elective courses available for selection have
increased, course recommendation systems that help students make choices
about courses have become more relevant. In this paper we extend the
collaborative filtering approach to develop a course recommendation system. The
proposed approach provides students with an accurate prediction of the grade they may
get if they choose a particular course, which is helpful when they decide
on elective courses, as the expected grade is an important parameter for a student
choosing an elective. We experimentally evaluate the
collaborative filtering approach on a real life data set and show that the
proposed system is effective in terms of accuracy.
|
1309.6919 | Accurate Profiling of Microbial Communities from Massively Parallel
Sequencing using Convex Optimization | cs.CE q-bio.GN q-bio.QM stat.AP stat.CO | We describe the Microbial Community Reconstruction ({\bf MCR}) Problem, which
is fundamental for microbiome analysis. In this problem, the goal is to
reconstruct the identity and frequency of species comprising a microbial
community, using short sequence reads from Massively Parallel Sequencing (MPS)
data obtained for specified genomic regions. We formulate the problem
mathematically as a convex optimization problem and provide sufficient
conditions for identifiability, namely the ability to reconstruct species
identity and frequency correctly when the data size (number of reads) grows to
infinity. We discuss different metrics for assessing the quality of the
reconstructed solution, including a novel phylogenetically-aware metric based
on the Mahalanobis distance, and give upper-bounds on the reconstruction error
for a finite number of reads under different metrics. We propose a scalable
divide-and-conquer algorithm for the problem using convex optimization, which
enables us to handle large problems (with $\sim10^6$ species). We show using
numerical simulations that for realistic scenarios, where the microbial
communities are sparse, our algorithm gives solutions with high accuracy, both
in terms of obtaining accurate frequency, and in terms of species phylogenetic
resolution.
|
1309.6925 | Typical behavior of the linear programming method for combinatorial
optimization problems: From a statistical-mechanical perspective | cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT | Typical behavior of the linear programming problem (LP) is studied as a
relaxation of the minimum vertex cover problem, a type of integer
programming problem (IP). To treat the LP and IP using statistical mechanics,
a lattice-gas model on the Erd\"os-R\'enyi random graphs is analyzed by a
replica method. It is found that the LP optimal solution is typically equal to
that of the IP below the critical average degree c*=e in the thermodynamic
limit. The critical threshold for LP=IP lies beyond the known mathematical result, c=1,
and coincides with the replica-symmetry-breaking threshold of the IP.
|
1309.6928 | Structure and dynamics of core-periphery networks | physics.soc-ph cs.NI cs.SI nlin.AO q-bio.MN | Recent studies uncovered important core/periphery network structures
characterizing complex sets of cooperative and competitive interactions between
network nodes, be they proteins, cells, species or humans. Better
characterization of the structure, dynamics and function of core/periphery
networks is a key step in understanding cellular functions, species
adaptation, and social and market changes. Here we summarize the current knowledge
of the structure and dynamics of "traditional" core/periphery networks,
rich-clubs, nested, bow-tie and onion networks. Comparing core/periphery
structures with network modules, we discriminate between global and local
cores. The core/periphery network organization lies in the middle of several
extreme properties, such as random/condensed structures, clique/star
configurations, network symmetry/asymmetry, network
assortativity/disassortativity, as well as network hierarchy/anti-hierarchy.
These properties of high complexity together with the large degeneracy of core
pathways ensuring cooperation and providing multiple options of network flow
re-channelling greatly contribute to the high robustness of complex systems.
Core processes enable a coordinated response to various stimuli, decrease
noise, and evolve slowly. The integrative function of network cores is an
important step in the development of a large variety of complex organisms and
organizations. Despite these important features and several decades of
research interest, the study of core/periphery networks still has a number of
unexplored areas.
|
1309.6933 | Estimating Undirected Graphs Under Weak Assumptions | math.ST cs.LG stat.ML stat.TH | We consider the problem of providing nonparametric confidence guarantees for
undirected graphs under weak assumptions. In particular, we do not assume
sparsity, incoherence or Normality. We allow the dimension $D$ to increase with
the sample size $n$. First, we prove lower bounds that show that if we want
accurate inferences with low assumptions then there are limitations on the
dimension as a function of sample size. When the dimension increases slowly
with sample size, we show that methods based on Normal approximations and on
the bootstrap lead to valid inferences and we provide Berry-Esseen bounds on
the accuracy of the Normal approximation. When the dimension is large relative
to sample size, accurate inferences for graphs under low assumptions are not
possible. Instead we propose to estimate something less demanding than the
entire partial correlation graph. In particular, we consider: cluster graphs,
restricted partial correlation graphs and correlation graphs.
|
1309.6964 | Online Algorithms for Factorization-Based Structure from Motion | cs.CV | We present a family of online algorithms for real-time factorization-based
structure from motion, leveraging a relationship between incremental singular
value decomposition and recently proposed methods for online matrix completion.
Our methods are orders of magnitude faster than previous state of the art, can
handle missing data and a variable number of feature points, and are robust to
noise and sparse outliers. We demonstrate our methods on both real and
synthetic sequences and show that they perform well in both online and batch
settings. We also provide an implementation which is able to produce 3D models
in real time using a laptop with a webcam.
|
1309.6989 | Linear combination of one-step predictive information with an external
reward in an episodic policy gradient setting: a critical analysis | cs.AI | One of the main challenges in the field of embodied artificial intelligence
is the open-ended autonomous learning of complex behaviours. Our approach is to
use task-independent, information-driven intrinsic motivation(s) to support
task-dependent learning. The work presented here is a preliminary step in which
we investigate the predictive information (the mutual information of the past
and future of the sensor stream) as an intrinsic drive, ideally supporting any
kind of task acquisition. Previous experiments have shown that the predictive
information (PI) is a good candidate to support autonomous, open-ended learning
of complex behaviours, because a maximisation of the PI corresponds to an
exploration of morphology- and environment-dependent behavioural regularities.
The idea is that these regularities can then be exploited in order to solve any
given task. Three different experiments are presented and their results lead to
the conclusion that the linear combination of the one-step PI with an external
reward function is not generally recommended in an episodic policy gradient
setting. Only for hard tasks can a large speed-up be achieved, at the cost of a
loss in asymptotic performance.
|
1309.7004 | Calculation of Entailed Rank Constraints in Partially Non-Linear and
Cyclic Models | cs.AI stat.ML | The Trek Separation Theorem (Sullivant et al. 2010) states necessary and
sufficient conditions for a linear directed acyclic graphical model to entail
for all possible values of its linear coefficients that the rank of various
sub-matrices of the covariance matrix is less than or equal to n, for any given
n. In this paper, I extend the Trek Separation Theorem in two ways: I prove
that the same necessary and sufficient conditions apply even when the
generating model is partially non-linear and contains some cycles. This
justifies application of constraint-based causal search algorithms such as the
BuildPureClusters algorithm (Silva et al. 2006) for discovering the causal
structure of latent variable models to data generated by a wider class of
causal models that may contain non-linear and cyclic relations among the latent
variables.
|
1309.7009 | Analyzing the Reduced Required BS Density due to CoMP in Cellular
Networks | cs.IT math.IT | In this paper we investigate the benefit of base station (BS) cooperation in
the uplink of coordinated multi-point (CoMP) networks. Our figure of merit is
the BS density required to meet a chosen rate coverage. Our model
assumes a 2-D network of BSs on a regular hexagonal lattice in which path loss,
lognormal shadowing and Rayleigh fading affect the signal received from users.
Accurate closed-form expressions are first presented for the sum-rate coverage
probability and ergodic sum-rate at each point of the cooperation region. Then,
for a chosen quality of user rate, the required density of BS is derived based
on the minimum value of rate coverage probability in the cooperation region.
The approach guarantees that the achievable rate in the entire coverage region
is above a target rate with chosen probability. The formulation allows
comparison between different orders of BS cooperation, quantifying the reduced
required BS density from higher orders of cooperation.
|
1309.7031 | Controlling Contagion Processes in Time-Varying Networks | physics.soc-ph cs.SI q-bio.PE | The vast majority of strategies aimed at controlling contagion processes on
networks considers the connectivity pattern of the system as either quenched or
annealed. However, in the real world many networks are highly dynamical and
evolve in time concurrently to the contagion process. Here, we derive an
analytical framework for the study of control strategies specifically devised
for time-varying networks. We consider the removal/immunization of individual
nodes according to their activity in the network and develop a block variable
mean-field approach that allows the derivation of the equations describing the
evolution of the contagion process concurrently with the network dynamics. We
derive the critical immunization threshold and assess the effectiveness of the
control strategies. Finally, we validate the theoretical picture by simulating
numerically the information spreading process and control strategies in both
synthetic networks and a large-scale, real-world mobile telephone call dataset.
|
1309.7068 | Investigation of commuting Hamiltonian in quantum Markov network | cs.AI quant-ph | Graphical models have various applications in science and engineering,
including physics, bioinformatics, and telecommunications. Using graphical
models requires complex computations to evaluate marginal
functions, for which there are powerful methods, including mean-field
approximation and the belief propagation algorithm. Quantum graphical models
have recently been developed in the context of quantum information and computation
and quantum statistical physics, which is made possible by generalizing
classical probability theory to quantum theory. The main goal of this paper is
to present a preliminary generalization of Markov networks, a type of graphical
model, to the quantum case and to apply it in quantum statistical physics. We
investigate Markov networks and the role of commuting Hamiltonian terms in
conditional independence, with simple examples from quantum statistical physics.
|
1309.7102 | Finite Length Analysis of LDPC Codes | cs.IT math.IT | In this paper, we study the performance of finite-length LDPC codes in the
waterfall region. We propose an algorithm to predict the error performance of
finite-length LDPC codes over various binary memoryless channels. Through
numerical results, we find that our technique gives better performance
prediction compared to existing techniques.
|
1309.7109 | Total Jensen divergences: Definition, Properties and k-Means++
Clustering | cs.IT math.IT | We present a novel class of divergences induced by a smooth convex function
called total Jensen divergences. Those total Jensen divergences are invariant
by construction to rotations, a feature yielding regularization of ordinary
Jensen divergences by a conformal factor. We analyze the relationships between
this novel class of total Jensen divergences and the recently introduced total
Bregman divergences. We then proceed by defining the total Jensen centroids as
average distortion minimizers, and study their robustness performance to
outliers. Finally, we prove that the k-means++ initialization that bypasses
explicit centroid computations is good enough in practice to guarantee
probabilistically a constant approximation factor to the optimal k-means
clustering.
|
1309.7119 | Stock price direction prediction by directly using prices data: an
empirical study on the KOSPI and HSI | cs.CE cs.LG q-fin.ST | The prediction of a stock market direction may serve as an early
recommendation system for short-term investors and as an early financial
distress warning system for long-term shareholders. Many stock prediction
studies focus on using macroeconomic indicators, such as CPI and GDP, to train
the prediction model. However, daily data of the macroeconomic indicators are
almost impossible to obtain. Thus, those methods are difficult to employ
in practice. In this paper, we propose a method that directly uses price data
to predict market index direction and stock price direction. An extensive
empirical study of the proposed method is presented on the Korean Composite
Stock Price Index (KOSPI) and Hang Seng Index (HSI), as well as the individual
constituents included in the indices. The experimental results show notably
high hit ratios in predicting the movements of the individual constituents in
the KOSPI and HSI.
|
1309.7122 | Proceedings Wivace 2013 - Italian Workshop on Artificial Life and
Evolutionary Computation | cs.CE cs.NE | The Wivace 2013 Electronic Proceedings in Theoretical Computer Science
(EPTCS) contain some selected long and short articles accepted for the
presentation at Wivace 2013 - Italian Workshop on Artificial Life and
Evolutionary Computation, which was held at the University of Milan-Bicocca,
Milan, on the 1st and 2nd of July, 2013.
|
1309.7145 | Propagating Regular Counting Constraints | cs.AI cs.FL | Constraints over finite sequences of variables are ubiquitous in sequencing
and timetabling. Moreover, the wide variety of such constraints in practical
applications led to general modelling techniques and generic propagation
algorithms, often based on deterministic finite automata (DFA) and their
extensions. We consider counter-DFAs (cDFA), which provide concise models for
regular counting constraints, that is, constraints on the number of times a
regular-language pattern occurs in a sequence. We show how to enforce domain
consistency in polynomial time for atmost and atleast regular counting
constraints based on the frequent case of a cDFA with only accepting states and
a single counter that can be incremented by transitions. We also prove that the
satisfaction of exact regular counting constraints is NP-hard and indicate that
an incomplete algorithm for exact regular counting constraints is faster and
provides more pruning than the existing propagator from [3]. Regular counting
constraints are closely related to the CostRegular constraint but contribute
both a natural abstraction and some computational advantages.
|
1309.7170 | An Efficient Index for Visual Search in Appearance-based SLAM | cs.CV cs.RO | Vector-quantization can be a computationally expensive step in visual
bag-of-words (BoW) search when the vocabulary is large. A BoW-based appearance
SLAM needs to tackle this problem for an efficient real-time operation. We
propose an effective method to speed up the vector-quantization process in
BoW-based visual SLAM. We employ a graph-based nearest neighbor search (GNNS)
algorithm for this purpose, and experimentally show that it can outperform the
state-of-the-art. The graph-based search structure used in GNNS can efficiently
be integrated into the BoW model and the SLAM framework. The graph-based index,
which is a k-NN graph, is built over the vocabulary words and can be extracted
from the BoW's vocabulary construction procedure by adding one iteration to
the k-means clustering, at small extra cost. Moreover, exploiting the
fact that images acquired for appearance-based SLAM are sequential, the GNNS
search can be initiated judiciously, which considerably increases the speedup
of the quantization process.
|
1309.7173 | Analysis of Optimization Techniques to Improve User Response Time of Web
Applications and Their Implementation for MOODLE | cs.AI cs.PF | Analysis of seven optimization techniques grouped under three categories
(hardware, back-end, and front-end) is done to study the reduction in average
user response time for Modular Object Oriented Dynamic Learning Environment
(Moodle), a Learning Management System which is scripted in PHP5, runs on
Apache web server and utilizes MySQL database software. Before the
implementation of these techniques, performance analysis of Moodle is performed
for varying number of concurrent users. The results obtained for each
optimization technique are then reported in a tabular format. The maximum
reduction in end-user response time was achieved with hardware optimization,
which requires the Moodle server and database to be installed on a solid-state disk.
|
1309.7187 | Analyse des r\^oles dans les communaut\'es virtuelles : d\'efinitions et
premi\`eres exp\'erimentations sur IMDb | cs.SI | Role analysis in online communities allows us to understand and predict users'
behavior. Though several approaches have been followed, their methods and
results still lack generalization. In this paper, we discuss
the underlying theory of roles and search for a consistent and computable
definition that allows the automatic detection of roles played by users in
forum threads on the internet. We analyze the web site IMDb to illustrate the
discussion.
|
1309.7233 | Multilayer Networks | physics.soc-ph cs.SI | In most natural and engineered systems, a set of entities interact with each
other in complicated patterns that can encompass multiple types of
relationships, change in time, and include other types of complications. Such
systems include multiple subsystems and layers of connectivity, and it is
important to take such "multilayer" features into account to try to improve our
understanding of complex systems. Consequently, it is necessary to generalize
"traditional" network theory by developing (and validating) a framework and
associated tools to study multilayer systems in a comprehensive fashion. The
origins of such efforts date back several decades and arose in multiple
disciplines, and now the study of multilayer networks has become one of the
most important directions in network science. In this paper, we discuss the
history of multilayer networks (and related concepts) and review the exploding
body of work on such networks. To unify the disparate terminology in the large
body of recent work, we discuss a general framework for multilayer networks,
construct a dictionary of terminology to relate the numerous existing concepts
to each other, and provide a thorough discussion that compares, contrasts, and
translates between related notions such as multilayer networks, multiplex
networks, interdependent networks, networks of networks, and many others. We
also survey and discuss existing data sets that can be represented as
multilayer networks. We review attempts to generalize single-layer-network
diagnostics to multilayer networks. We also discuss the rapidly expanding
research on multilayer-network models and notions like community structure,
connected components, tensor decompositions, and various types of dynamical
processes on multilayer networks. We conclude with a summary and an outlook.
|
1309.7261 | Detecting Fake Escrow Websites using Rich Fraud Cues and Kernel Based
Methods | cs.CY cs.LG | The ability to automatically detect fraudulent escrow websites is important
in order to alleviate online auction fraud. Despite research on related topics,
fake escrow website categorization has received little attention. In this study
we evaluated the effectiveness of various features and techniques for detecting
fake escrow websites. Our analysis included a rich set of features extracted
from web page text, image, and link information. We also proposed a composite
kernel tailored to represent the properties of fake websites, including content
duplication and structural attributes. Experiments were conducted to assess the
proposed features, techniques, and kernels on a test bed encompassing nearly
90,000 web pages derived from 410 legitimate and fake escrow sites. The
combination of an extended feature set and the composite kernel attained over
98% accuracy when differentiating fake sites from real ones, using the support
vector machines algorithm. The results suggest that automated web-based
information systems for detecting fake escrow sites could be feasible and may
be utilized as authentication mechanisms.
|
1309.7266 | Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites | cs.CY cs.LG | Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated.
|
1309.7270 | Evaluating the Usefulness of Sentiment Information for Focused Crawlers | cs.IR cs.CL | Despite the prevalence of sentiment-related content on the Web, there has
been limited work on focused crawlers capable of effectively collecting such
content. In this study, we evaluated the efficacy of using sentiment-related
information for enhanced focused crawling of opinion-rich web content regarding
a particular topic. We also assessed the impact of using sentiment-labeled web
graphs to further improve collection accuracy. Experimental results on a large
test bed encompassing over half a million web pages revealed that focused
crawlers utilizing sentiment information as well as sentiment-labeled web
graphs are capable of gathering more holistic collections of opinion-related
content regarding a particular topic. The results have important implications
for business and marketing intelligence gathering efforts in the Web 2.0 era.
|
1309.7276 | Adopting level set theory based algorithms to segment human ear | cs.CV | Human identification has always been a topic that interested researchers
around the world. Biometric methods are found to be more effective and much
easier for the users than the traditional identification methods like keys,
smart cards and passwords. Unlike with the traditional methods, with biometric
methods the data acquisition is most of the times passive, which means the
users do not take active part in data acquisition. Data acquisition can be
performed using cameras, scanners or sensors. Human physiological biometrics
such as face, eye and ear are good candidates for uniquely identifying an
individual. However, the human ear scores over the face and eye because of
certain advantages it offers. The most challenging phase in human identification
based on ear biometric is the segmentation of the ear image from the captured
image which may contain many unwanted details. In this work, PDE based image
processing techniques are used to segment out the ear image. Level Set Theory
based image processing is employed to obtain the contour of the ear image. A
few level set algorithms are compared for their efficiency in segmenting test
ear images.
|
1309.7289 | A General Stochastic Information Diffusion Model in Social Networks
based on Epidemic Diseases | cs.SI physics.soc-ph | Social networks are an important infrastructure for the propagation of
information, viruses and innovations. Since users' behavior is influenced by
other users' activity, groups of people form according to the similarity of
users' interests. Many real-world phenomena can likewise be modeled on social
networks; the spread of disease is one instance. People's behavior and
infection severity are the most important parameters in the dissemination of
diseases; together they determine whether the diffusion leads to an epidemic
or not. SIRS is a hybrid of the SIR and SIS disease models for the spread of
contamination: a person in this model can return to the susceptible state
after being removed. Based on the communities established in the social
network, we use the compartmental type of SIRS model. In this paper, a
general compartmental information diffusion model is proposed, and some
useful parameters are extracted to analyze the model. To adapt our
model to realistic behavior, we use a Markovian model, which helps give
the proposed model a stochastic character. In the stochastic model,
we can calculate the probabilities of transitions between states and predict
the value of each state. The comparison between the two modes of the model
shows that the predicted population is verified in each state.
|
1309.7298 | A Greedy Algorithm for the Analysis Transform Domain | math.NA cs.IT math.IT | Many image processing applications have benefited remarkably from the theory of
sparsity. One model of sparsity is the cosparse analysis model. It was shown that
using l_1-minimization one might stably recover a cosparse signal from a small
set of random linear measurements if the operator is a frame. Another effort
has provided guarantees for dictionaries that have a near optimal projection
procedure using greedy-like algorithms. However, no claims have been given for
frames. A common drawback of all these existing techniques is their high
computational cost for large dimensional problems.
In this work we propose a new greedy-like technique with theoretical recovery
guarantees for frames as the analysis operator, closing the gap between greedy
and relaxation techniques. Our results cover both the case of bounded
adversarial noise, where we show that the algorithm provides us with a stable
reconstruction, and the one of random Gaussian noise, for which we prove that
it has a denoising effect, closing another gap in the analysis framework. Our
proposed program, unlike the previous greedy-like ones that solely act in the
signal domain, operates mainly in the analysis operator's transform domain.
Besides the theoretical benefit, the main advantage of this strategy is its
computational efficiency that makes it easily applicable to visually big data.
We demonstrate its performance on several high dimensional images.
|
1309.7311 | Bayesian Inference in Sparse Gaussian Graphical Models | stat.ML cs.LG | One of the fundamental tasks of science is to find explainable relationships
between observed phenomena. One approach to this task that has received
attention in recent years is based on probabilistic graphical modelling with
sparsity constraints on model structures. In this paper, we describe two new
approaches to Bayesian inference of sparse structures of Gaussian graphical
models (GGMs). One is based on a simple modification of the cutting-edge block
Gibbs sampler for sparse GGMs, which results in significant computational gains
in high dimensions. The other method is based on a specific construction of the
Hamiltonian Monte Carlo sampler, which results in further significant
improvements. We compare our fully Bayesian approaches with the popular
regularisation-based graphical LASSO, and demonstrate significant advantages of
the Bayesian treatment under the same computing costs. We apply the methods to
a broad range of simulated data sets, and a real-life financial data set.
|
1309.7312 | Development and Transcription of Assamese Speech Corpus | cs.CL | A balanced speech corpus is the basic need for any speech processing task. In
this report we describe our effort on the development of an Assamese speech
corpus. We mainly focus on the issues and challenges faced during development
of the corpus. As Assamese is a computationally under-resourced language, this
is the first effort to develop a speech corpus for it. As corpus development is
an ongoing process, in this report we describe only the initial task.
|
1309.7313 | Timeline Generation: Tracking individuals on Twitter | cs.SI cs.IR | In this paper, we propose an unsupervised framework to reconstruct a person's
life history by creating a chronological list for {\it personal important
events} (PIE) of individuals based on the tweets they published. By analyzing
individual tweet collections, we find that the tweets suitable for inclusion
in a personal timeline are those about personal (as opposed to public) and
time-specific (as opposed to time-general) topics. To further
extract these types of topics, we introduce a non-parametric multi-level
Dirichlet Process model to recognize four types of tweets: personal
time-specific (PersonTS), personal time-general (PersonTG), public
time-specific (PublicTS) and public time-general (PublicTG) topics, which, in
turn, are used for further personal event extraction and timeline generation.
To the best of our knowledge, this is the first work focused on generating
timelines for individuals from Twitter data. For evaluation, we have built a
new gold-standard timeline dataset based on Twitter and Wikipedia that contains
PIE-related events from 20 {\it ordinary Twitter users} and 20 {\it
celebrities}.
Experiments on real Twitter data quantitatively demonstrate the effectiveness
of our approach.
|
1309.7315 | Nonlinear Compressive Particle Filtering | cs.SY | Many systems for which compressive sensing is used today are dynamical. The
common approach is to neglect the dynamics and see the problem as a sequence of
independent problems. This approach has two disadvantages. Firstly, the
temporal dependency in the state could be used to improve the accuracy of the
state estimates. Secondly, having an estimate for the state and its support
could be used to reduce the computational load of the subsequent step. In the
linear Gaussian setting, compressive sensing was recently combined with the
Kalman filter to mitigate the above disadvantages. In the nonlinear dynamical
case, compressive sensing cannot be used and, if the state dimension is high,
the particle filter would perform poorly. In this paper we combine one of the
most recent developments in compressive sensing, nonlinear compressive
sensing, with
the particle filter. We show that the marriage of the two is essential and that
neither the particle filter nor nonlinear compressive sensing alone gives a
satisfying solution.
|
1309.7340 | Early Stage Influenza Detection from Twitter | cs.SI cs.CL | Influenza is an acute respiratory illness that occurs virtually every year
and results in substantial disease, death and expense. Detection of Influenza
in its earliest stage would facilitate timely action that could reduce the
spread of the illness. Existing systems such as CDC and EISS, which try to
collect diagnosis data, are almost entirely manual, resulting in about two-week
delays for clinical data acquisition. Twitter, a popular microblogging service,
provides us with a perfect source for early-stage flu detection due to its
real-time nature. For example, when flu breaks out, people who catch it may
post related tweets, which enables prompt detection of the flu outbreak. In
this paper, we investigate the real-time flu detection problem on
Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal
unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to
identify the flu breakout at the earliest stage. We test our model on real
Twitter datasets from the United States along with baselines in multiple
applications, such as real-time flu breakout detection, future epidemic phase
prediction, or Influenza-like illness (ILI) physician visits. Experimental
results show the robustness and effectiveness of our approach. We build a
real-time flu reporting system based on the proposed approach, and we are
hopeful that it will help governments and health organizations identify flu
outbreaks and take timely actions to decrease unnecessary mortality.
|
1309.7367 | Stochastic Online Shortest Path Routing: The Value of Feedback | cs.NI cs.LG math.OC | This paper studies online shortest path routing over multi-hop networks. Link
costs or delays are time-varying and modeled by independent and identically
distributed random processes, whose parameters are initially unknown. The
parameters, and hence the optimal path, can only be estimated by routing
packets through the network and observing the realized delays. Our aim is to
find a routing policy that minimizes the regret (the cumulative difference of
expected delay) between the path chosen by the policy and the unknown optimal
path. We formulate the problem as a combinatorial bandit optimization problem
and consider several scenarios that differ in where routing decisions are made
and in the information available when making the decisions. For each scenario,
we derive a tight asymptotic lower bound on the regret that has to be satisfied
by any online routing policy. These bounds help us to understand the
performance improvements we can expect when (i) taking routing decisions at
each hop rather than at the source only, and (ii) observing per-link delays
rather than end-to-end path delays. In particular, we show that (i) is of no
use while (ii) can have a spectacular impact. Three algorithms, with a
trade-off between computational complexity and performance, are proposed. The
regret upper bounds of these algorithms improve over those of the existing
algorithms, and they significantly outperform state-of-the-art algorithms in
numerical experiments.
|
1309.7393 | HeteSim: A General Framework for Relevance Measure in Heterogeneous
Networks | cs.IR cs.AI | Similarity search is an important function in many applications, which
usually focuses on measuring the similarity between objects with the same type.
However, in many scenarios, we need to measure the relatedness between objects
with different types. With the surge of study on heterogeneous networks, the
relevance measure on objects with different types becomes increasingly
important. In this paper, we study the relevance search problem in
heterogeneous networks, where the task is to measure the relatedness of
heterogeneous objects (including objects with the same type or different
types). A novel measure HeteSim is proposed, which has the following
attributes: (1) a uniform measure: it can measure the relatedness of objects
with the same or different types in a uniform framework; (2) a path-constrained
measure: the relatedness of an object pair is defined based on the search path
that connects the two objects through a sequence of node types; (3) a
semi-metric measure: HeteSim has some good properties (e.g., self-maximum and
symmetric), that are crucial to many data mining tasks. Moreover, we analyze
the computation characteristics of HeteSim and propose the corresponding quick
computation strategies. Empirical studies show that HeteSim can effectively and
efficiently evaluate the relatedness of heterogeneous objects.
|
1309.7405 | A Model of the Mechanisms Underlying Exploratory Behaviour | q-bio.PE cs.AI | A model of the mechanisms underlying exploratory behaviour, based on
empirical research and refined using a computer simulation, is presented. The
behaviour of killifish from two lakes, one with killifish predators and one
without, was compared in the laboratory. Plotting average activity in a novel
environment versus time resulted in an inverted-U-shaped curve for both groups;
however, the curve for killifish from the lake without predators was (1)
steeper, (2) reached a peak value earlier, (3) reached a higher peak value, and
(4) subsumed less area than the curve for killifish from the lake with
predators. We hypothesize that the shape of the exploration curve reflects a
competition between motivational subsystems that excite and inhibit exploratory
behaviour in a way that is tuned to match the affordance probabilities of the
animal's environment. A computer implementation of this model produced curves
which differed along the same four dimensions as differentiate the two
killifish curves. All four differences were reproduced in the model by tuning a
single parameter: the time-dependent component of the decay-rate of the
exploration-inhibiting subsystem.
|
1309.7407 | Concept Combination and the Origins of Complex Cognition | q-bio.NC cs.AI | At the core of our uniquely human cognitive abilities is the capacity to see
things from different perspectives, or to place them in a new context. We
propose that this was made possible by two cognitive transitions. First, the
large brain of Homo erectus facilitated the onset of recursive recall: the
ability to string thoughts together into a stream of potentially abstract or
imaginative thought. This hypothesis is supported by a set of computational
models where an artificial society of agents evolved to generate more diverse
and valuable cultural outputs under conditions of recursive recall. We propose
that the capacity to see things in context arose much later, following the
appearance of anatomically modern humans. This second transition was brought on
by the onset of contextual focus: the capacity to shift between a minimally
contextual analytic mode of thought, and a highly contextual associative mode
of thought, conducive to combining concepts in new ways and 'breaking out of a
rut'. When contextual focus is implemented in an art-generating computer
program, the resulting artworks are seen as more creative and appealing. We
summarize how both transitions can be modeled using a theory of concepts which
highlights the manner in which different contexts can lead to modern humans
attributing very different meanings to the interpretation of one concept.
|
1309.7423 | More Constructions of Differentially 4-uniform Permutations on
$\gf_{2^{2k}}$ | cs.IT math.IT | Differentially 4-uniform permutations on $\gf_{2^{2k}}$ with high
nonlinearity are often chosen as Substitution boxes in both block and stream
ciphers. Recently, Qu et al. introduced a class of functions, called preferred
functions, to construct many infinite families of such permutations
\cite{QTTL}. In this paper, we propose a particular type of
Boolean functions to characterize the preferred functions. On the one hand,
such Boolean functions can be determined by solving linear equations, and they
give rise to a huge number of differentially 4-uniform permutations over
$\gf_{2^{2k}}$. Hence they may provide more choices for the design of
Substitution boxes. On the other hand, by investigating the number of these
Boolean functions, we show that the number of CCZ-inequivalent differentially
4-uniform permutations over $\gf_{2^{2k}}$ grows exponentially when $k$
increases, which gives a positive answer to an open problem proposed in
\cite{QTTL}.
|
1309.7429 | Quorum Sensing for Regenerating Codes in Distributed Storage | cs.DC cs.IT math.IT | Distributed storage systems with replication are well known for storing large
amounts of data. A high degree of replication is used in order to provide
reliability, which makes the system expensive. Various methods have been
proposed over time to reduce the degree of replication while providing the same
level of reliability. One recently suggested scheme is that of regenerating
codes, where a file is divided into parts which are then processed by a coding
mechanism and network coding to produce a larger number of parts. These are
stored at various nodes, with more than one part at each node. These codes can
regenerate the whole file and can repair a failed node by contacting a subset
of the existing nodes. This property ensures reliability in case of node
failure, uses replication judiciously, and also optimizes bandwidth usage. In a
practical scenario, the original file will be read and updated many times. With
every update, the data stored at many nodes must be updated. Handling multiple
requests at the same time introduces considerable complexity, and simultaneous
reads and writes, or multiple concurrent writes, on the same data must be
prevented. In this paper, we propose an algorithm that manages and executes all
user requests while reducing the update complexity. We also try to maintain an
adequate level of availability. We use a voting-based mechanism and form read,
write, and repair quorums. We also provide a probabilistic analysis of
regenerating codes.
|
1309.7430 | Pilot Beam Pattern Design for Channel Estimation in Massive MIMO Systems | cs.IT math.IT | In this paper, the problem of pilot beam pattern design for channel
estimation in massive multiple-input multiple-output systems with a large
number of transmit antennas at the base station is considered, and a new
algorithm for pilot beam pattern design for optimal channel estimation is
proposed under the assumption that the channel is a stationary Gauss-Markov
random process. The proposed algorithm designs the pilot beam pattern
sequentially by exploiting the properties of Kalman filtering and the
associated prediction error covariance matrices and also the channel statistics
such as spatial and temporal channel correlation. The resulting design
generates a sequentially-optimal sequence of pilot beam patterns with low
complexity for a given set of system parameters. Numerical results show the
effectiveness of the proposed algorithm.
|
1309.7434 | Face Verification Using Boosted Cross-Image Features | cs.CV | This paper proposes a new approach for face verification, where a pair of
images needs to be classified as belonging to the same person or not. This
problem is relatively new and not well-explored in the literature. Current
methods mostly adopt techniques borrowed from face recognition, and process
each of the images in the pair independently, which is counterintuitive. In
contrast, we propose to extract cross-image features, i.e. features across the
pair of images, which, as we demonstrate, is more discriminative to the
similarity and the dissimilarity of faces. Our features are derived from the
popular Haar-like features, however, extended to handle the face verification
problem instead of face detection. We collect a large bank of cross-image
features using filters of different sizes, locations, and orientations.
Consequently, we use AdaBoost to select and weight the most discriminative
features. We carried out extensive experiments on the proposed ideas using
three standard face verification datasets, and obtained promising results
outperforming the state of the art.
|
1309.7437 | A Note on Broadcast Channels with Stale State Information at the
Transmitter | cs.IT math.IT | This paper shows that the Maddah-Ali--Tse (MAT) scheme which establishes the
symmetric capacity of two example broadcast channels with strictly causal state
information at the transmitter is a simple special case of the
Shayevitz--Wigger scheme for the broadcast channel with generalized feedback,
which involves block Markov coding, compression, superposition coding, Marton
coding, and coded time sharing. Focusing on the class of symmetric broadcast
channels with state, we derive an expression for the maximum achievable
symmetric rate using the Shayevitz--Wigger scheme. We show that the MAT results
can be recovered by evaluating this expression for the special case in which
superposition coding and Marton coding are not used. We then introduce a new
broadcast channel example that shares many features of the MAT examples. We
show that another special case of our maximum symmetric rate expression in
which superposition coding is also used attains a higher symmetric rate than
the MAT scheme. The symmetric capacity of this example is not known, however.
|
1309.7439 | Optimal Hybrid Channel Allocation:Based On Machine Learning Algorithms | cs.NI cs.LG | Recent advances in cellular communication systems resulted in a huge increase
in spectrum demand. To meet the requirements of the ever-growing need for
spectrum, efficient utilization of the existing resources is of utmost
importance. Channel allocation has thus become an inevitable research topic in
wireless communications. In this paper, we propose an optimal channel
allocation scheme, Optimal Hybrid Channel Allocation (OHCA) for an effective
allocation of channels. We improve upon the existing Fixed Channel Allocation
(FCA) technique by imparting intelligence to the system through the multilayer
perceptron technique.
|
1309.7440 | Decompositions of Triangle-Dense Graphs | cs.DS cs.SI math.CO | High triangle density -- the graph property stating that a constant fraction
of two-hop paths belong to a triangle -- is a common signature of social
networks. This paper studies triangle-dense graphs from a structural
perspective. We prove constructively that significant portions of a
triangle-dense graph are contained in a disjoint union of dense, radius 2
subgraphs. This result quantifies the extent to which triangle-dense graphs
resemble unions of cliques. We also show that our algorithm recovers planted
clusterings in approximation-stable k-median instances.
|
1309.7451 | Multiuser Diversity for Secrecy Communications Using Opportunistic
Jammer Selection -- Secure DoF and Jammer Scaling Law | cs.IT math.IT | In this paper, we propose opportunistic jammer selection in a wireless
security system for increasing the secure degrees of freedom (DoF) between a
transmitter and a legitimate receiver (say, Alice and Bob). There is a jammer
group consisting of $S$ jammers among which Bob selects $K$ jammers. The
selected jammers transmit independent and identically distributed Gaussian
signals to hinder the eavesdropper (Eve). Since the channels of Bob and Eve are
independent, we can select the jammers whose jamming channels are aligned at
Bob, but not at Eve. As a result, Eve cannot obtain any DoF unless it has more
than $KN_j$ receive antennas, where $N_j$ is the number of transmit antennas
at each jammer, and hence $KN_j$ can be regarded as defensible dimensions
against Eve. For the jamming signal alignment at Bob, we propose two
opportunistic
jammer selection schemes and find the scaling law of the required number of
jammers for a target secure DoF by a geometrical interpretation of the received
signals.
|
1309.7455 | From sparse to dense and from assortative to disassortative in online
social networks | physics.soc-ph cond-mat.stat-mech cs.SI | Inspired by the analysis of several empirical online social networks, we
propose a simple reaction-diffusion-like coevolving model, in which individuals
are activated to create links based on their states, influenced by local
dynamics and their own intention. It is shown that the model can reproduce the
remarkable properties observed in empirical online social networks; in
particular, the assortative coefficients are neutral or negative, and the power
law exponents are smaller than 2. Moreover, we demonstrate that, under
appropriate conditions, the model network naturally makes transitions from
assortative to disassortative, and from sparse to dense, in its
characteristics. The model is useful in understanding the formation and
evolution of online social networks.
|
1309.7463 | Characterizing and Modeling the Dynamics of Activity and Popularity | physics.soc-ph cond-mat.stat-mech cs.SI | Social media, regarded as two-layer networks consisting of users and items,
turn out to be the most important channels for access to massive information in
the era of Web 2.0. The dynamics of human activity and item popularity is a
crucial issue in social media networks. In this paper, by analyzing the growth
of user activity and item popularity in four empirical social media networks,
i.e., Amazon, Flickr, Delicious and Wikipedia, it is found that cross links
between users and items are more likely to be created by active users and to be
acquired by popular items, where user activity and item popularity are measured
by the number of cross links associated with users and items. This indicates
that users generally trace popular items. However, it is found that inactive
users trace popular items more strongly than active users do.
Inspired by empirical analysis, we propose an evolving model for such networks,
in which the evolution is driven only by two-step random walk. Numerical
experiments verified that the model can qualitatively reproduce the
distributions of user activity and item popularity observed in empirical
networks. These results might shed light on the understanding of the micro
dynamics of activity and popularity in social media networks.
|
1309.7478 | The achievable performance of convex demixing | cs.IT math.IT math.OC | Demixing is the problem of identifying multiple structured signals from a
superimposed, undersampled, and noisy observation. This work analyzes a general
framework, based on convex optimization, for solving demixing problems. When
the constituent signals follow a generic incoherence model, this analysis leads
to precise recovery guarantees. These results admit an attractive
interpretation: each signal possesses an intrinsic degrees-of-freedom
parameter, and demixing can succeed if and only if the dimension of the
observation exceeds the total degrees of freedom present in the observation.
|
1309.7484 | CSIFT Based Locality-constrained Linear Coding for Image Classification | cs.CV | In the past decade, the SIFT descriptor has been recognized as one of the most
robust local invariant feature descriptors and has been widely used in various
vision tasks. Most traditional image classification systems depend on the
luminance-based SIFT descriptors, which only analyze the gray level variations
of the images. Misclassification may happen since their color contents are
ignored. In this article, we concentrate on improving the performance of
existing image classification algorithms by adding color information. To
achieve this purpose, different kinds of colored SIFT descriptors are
introduced and implemented. Locality-constrained Linear Coding (LLC), a
state-of-the-art sparse coding technology, is employed to construct the image
classification system for the evaluation. The real experiments are carried out
on several benchmarks. With the enhancements of color SIFT, the proposed image
classification system obtains an approximately 3% improvement in classification
accuracy on the Caltech-101 dataset and an approximately 4% improvement in
classification accuracy on the Caltech-256 dataset.
|
1309.7498 | Most probable failure scenario in a model power grid with random power
demand | math.OC cs.SY | We consider a simple system with a local synchronous generator and a load
whose power consumption is a random process. The most probable scenario of
system failure (synchronization loss) is considered, and it is argued that its
knowledge is virtually enough to estimate the probability of failure per unit
time. We discuss two numerical methods to obtain the "optimal" evolution
leading to failure.
|
1309.7512 | Structured learning of sum-of-submodular higher order energy functions | cs.CV cs.LG stat.ML | Submodular functions can be exactly minimized in polynomial time, and the
special case that graph cuts solve with max flow \cite{KZ:PAMI04} has had
significant impact in computer vision
\cite{BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04}. In this paper we address
the important class of sum-of-submodular (SoS) functions
\cite{Arora:ECCV12,Kolmogorov:DAM12}, which can be efficiently minimized via a
variant of max flow called submodular flow \cite{Edmonds:ADM77}. SoS functions
can naturally express higher order priors involving, e.g., local image patches;
however, it is difficult to fully exploit their expressive power because they
have so many parameters. Rather than trying to formulate existing higher order
priors as an SoS function, we take a discriminative learning approach,
effectively searching the space of SoS functions for a higher order prior that
performs well on our training set. We adopt a structural SVM approach
\cite{Joachims/etal/09a,Tsochantaridis/etal/04} and formulate the training
problem in terms of quadratic programming; as a result we can efficiently
search the space of SoS priors via an extended cutting-plane algorithm. We also
show how the state-of-the-art max flow method for vision problems
\cite{Goldberg:ESA11} can be modified to efficiently solve the submodular flow
problem. Experimental comparisons are made against the OpenCV implementation of
the GrabCut interactive segmentation technique \cite{Rother:GrabCut04}, which
uses hand-tuned parameters instead of machine learning. On a standard dataset
\cite{Gulshan:CVPR10} our method learns higher order priors with hundreds of
parameter values, and produces significantly better segmentations. While our
focus is on binary labeling problems, we show that our techniques can be
naturally generalized to handle more than two labels.
|
1309.7517 | Improving tag recommendation by folding in more consistency | cs.IR | Tag recommendation is a major aspect of collaborative tagging systems. It
aims to recommend tags to a user for tagging an item. In this paper we present
a part of our work in progress which is a novel improvement of recommendations
by re-ranking the output of a tag recommender. We mine association rules
between candidates tags in order to determine a more consistent list of tags to
recommend.
Our method is an add-on one which leads to better recommendations as we show
in this paper. It is easily parallelizable and morever it may be applied to a
lot of tag recommenders. The experiments we did on five datasets with two kinds
of tag recommender demonstrated the efficiency of our method.
|
1309.7518 | Iterative Detection and Decoding for the Four-Rectangular-Grain TDMR
Model | cs.IT math.IT | This paper considers detection and error control coding for the
two-dimensional magnetic recording (TDMR) channel modeled by the
two-dimensional (2D) four-rectangular-grain model proposed by Kavcic, Huang et
al. in 2010. This simple model captures the effects of different 2D grain sizes
and shapes, as well as the TDMR grain overwrite effect: grains large enough to
be written by successive bits retain the polarity of only the last bit written.
We construct a row-by-row BCJR detection algorithm that considers outputs from
two rows at a time over two adjacent columns, thereby enabling consideration of
more grain and data states than previously proposed algorithms that scan only
one row at a time. The proposed algorithm employs soft-decision feedback of
grain states from previous rows to aid the estimation of current data bits and
grain states. Simulation results using the same average coded bit density and
serially concatenated convolutional code (SCCC) as a previous paper by Pan,
Ryan et al. show gains in user bits/grain of up to 6.7% over the previous
work when no iteration is performed between the TDMR BCJR and the SCCC, and
gains of up to 13.4% when the detector and the decoder iteratively exchange
soft information.
|
1309.7522 | An Application of Backpropagation Artificial Neural Network Method for
Measuring The Severity of Osteoarthritis | cs.NE cs.CE cs.CV | The examination of osteoarthritis through X-rays in rheumatology can
be classified into four grades of severity. This paper discusses the
application of the backpropagation artificial neural network method for
measuring the severity of the disease, where the observed X-rays range from
the wrist to the fingers. The main procedure of the system is divided into
three stages: image processing, feature extraction, and the artificial neural
network process. First, a digital X-ray image (200x150 pixels, greyscale) is
thresholded; features are then extracted based on the probabilistic values of
the color intensity from the seven-bit quantization result and on statistical
textures. These feature values are then normalized to the interval [0.1, 0.9]
and processed by the backpropagation artificial neural network as input to
determine the severity of the disease from the input X-ray. In tests with a
learning rate of 0.3, momentum of 0.4, five hidden units, and about 132
feature vectors, the system achieved an accuracy of 100% on learning data, 80%
on combined learning and non-learning data, and 66.6% on non-learning data.
|
1309.7524 | Meme and Variations: A Computer Model of Cultural Evolution | cs.MA cs.NE | Holland's (1975) genetic algorithm is a minimal computer model of natural
selection that made it possible to investigate the effect of manipulating
specific parameters on the evolutionary process. If culture is, like biology, a
form of evolution, it should be possible to similarly abstract the underlying
skeleton of the process and develop a minimal model of it. Meme and Variations,
or MAV, is a computational model, inspired by the genetic algorithm, of how
ideas evolve in a society of interacting individuals (Gabora 1995). The name is
a pun on the classical music form 'theme and variations', because it is based
on the premise that novel ideas are variations of old ones; they result from
tweaking or combining existing ideas in new ways (Holland et al. 1981). MAV
explores the impact of biological phenomena such as over-dominance and
epistasis as well as cognitive and social phenomena such as the ability to
learn generalizations or imitate others on the fitness and diversity of
culturally transmissible actions.
|
1309.7528 | Finite-Length Analyses for Source and Channel Coding on Markov Chains | cs.IT math.IT | We study finite-length bounds for source coding with side information for
Markov sources and channel coding for channels with conditional Markovian
additive noise. For this purpose, we propose two criteria for finite-length
bounds. One is the asymptotic optimality and the other is the efficient
computability of the bound. Then, we derive finite-length upper and lower
bounds on the coding length in both settings such that they can be computed
efficiently. To address the first criterion, we derive large deviation bounds,
moderate deviation bounds, and second-order bounds for these two topics, and
show that these finite-length bounds achieve asymptotic optimality in these
senses. For this discussion, we introduce several kinds of information measures
for transition matrices.
|
1309.7540 | Joint Power and Antenna Selection Optimization in Large Cloud Radio
Access Networks | cs.IT math.IT | Large multiple-input multiple-output (MIMO) networks promise high energy
efficiency, i.e., much less power is required to achieve the same capacity
compared to the conventional MIMO networks if perfect channel state information
(CSI) is available at the transmitter. However, in such networks, huge overhead
is required to obtain full CSI especially for Frequency-Division Duplex (FDD)
systems. To reduce overhead, we propose a downlink antenna selection scheme,
which selects S antennas from M>S transmit antennas based on the large scale
fading to serve K\leq S users in large distributed MIMO networks employing
regularized zero-forcing (RZF) precoding. In particular, we study the joint
optimization of antenna selection, regularization factor, and power allocation
to maximize the average weighted sum-rate. This is a mixed combinatorial and
non-convex problem whose objective and constraints have no closed-form
expressions. We apply random matrix theory to derive asymptotically accurate
expressions for the objective and constraints. As such, the joint optimization
problem is decomposed into subproblems, each of which is solved by an efficient
algorithm. In addition, we derive structural solutions for some special cases
and show that the capacity of very large distributed MIMO networks scales as
O\left(K\log M\right) when M\rightarrow\infty with K,S fixed.
Simulations show that the proposed scheme achieves significant performance gain
over various baselines.
|
1309.7543 | Threshold Saturation for Spatially-Coupled LDPC and LDGM Codes on BMS
Channels | cs.IT math.IT | Spatially-coupled low-density parity-check (LDPC) codes, which were first
introduced as LDPC convolutional codes, have been shown to exhibit excellent
performance under low-complexity belief-propagation decoding. This phenomenon
is now termed threshold saturation via spatial coupling. Spatially-coupled
codes have been successfully applied in numerous areas. In particular, it was
proven that spatially-coupled regular LDPC codes universally achieve capacity
over the class of binary memoryless symmetric (BMS) channels under
belief-propagation decoding.
Recently, potential functions have been used to simplify threshold saturation
proofs for scalar and vector recursions. In this paper, potential functions are
used to prove threshold saturation for irregular LDPC and low-density
generator-matrix (LDGM) codes on BMS channels, extending the simplified proof
technique to BMS channels. The corresponding potential functions are closely
related to the average Bethe free entropy of the ensembles in the large-system
limit. These functions also appear in statistical physics when the replica
method is used to analyze optimal decoding.
|
1309.7564 | Channel Estimation, Carrier Recovery, and Data Detection in the Presence
of Phase Noise in OFDM Relay Systems | cs.IT math.IT | Due to its time-varying nature, oscillator phase noise can significantly
degrade the performance of channel estimation, carrier recovery, and data
detection blocks in high-speed wireless communication systems. In this paper,
we analyze joint channel, \emph{carrier frequency offset (CFO)}, and phase
noise estimation plus data detection in \emph{orthogonal frequency division
multiplexing (OFDM)} relay systems. To achieve this goal, a detailed
transmission framework involving both training and data symbols is presented.
In the data transmission phase, a comb-type OFDM symbol consisting of both
pilots and data symbols is proposed to track phase noise over an OFDM frame.
Next, a novel algorithm that applies the training symbols to jointly estimate
the channel responses, CFO, and phase noise based on the maximum a posteriori
criterion is proposed. Additionally, a new \emph{hybrid Cram\'{e}r-Rao lower
bound} for evaluating the performance of channel estimation and carrier
recovery algorithms in OFDM relay networks is derived. Finally, an iterative
receiver for joint phase noise estimation and data detection at the destination
node is derived. Extensive simulations demonstrate that the application of the
proposed estimation and receiver blocks significantly improves the performance
of OFDM relay networks in the presence of phase noise.
|
1309.7572 | Optimal Transmit Power Allocation for MIMO Two-Way Cognitive Relay
Networks with Multiple Relays | cs.IT cs.NI math.IT | In this letter, we consider a multiple-input multiple-output two-way
cognitive radio system under a spectrum sharing scenario, where primary and
secondary users operate on the same frequency band. The secondary terminals
aim to exchange different messages with each other using multiple relays,
where each relay employs an amplify-and-forward strategy. The main objective
of our work is to maximize the sum rate of the secondary users, which are
allowed to share the spectrum with the primary users subject to the
interference threshold tolerated by the primary user. In this context, we
derive a closed-form expression for the optimal power allocated to each
antenna of the terminals. We then discuss the impact of some system
parameters on the performance in the numerical results section.
|
1309.7583 | Optimized Bit Mappings for Spatially Coupled LDPC Codes over Parallel
Binary Erasure Channels | cs.IT math.IT | In many practical communication systems, one binary encoder/decoder pair is
used to communicate over a set of parallel channels. Examples of this setup
include multi-carrier transmission, rate-compatible puncturing of turbo-like
codes, and bit-interleaved coded modulation (BICM). A bit mapper is commonly
employed to determine how the coded bits are allocated to the channels. In this
paper, we study spatially coupled low-density parity check codes over parallel
channels and optimize the bit mapper using BICM as the driving example. For
simplicity, the parallel bit channels that arise in BICM are replaced by
independent binary erasure channels (BECs). For two parallel BECs modeled
according to a 4-PAM constellation labeled by the binary reflected Gray code,
the optimization results show that the decoding threshold can be improved over
a uniform random bit mapper, or, alternatively, the spatial chain length of the
code can be reduced for a given gap to capacity. It is also shown that for
rate-loss free, circular (tail-biting) ensembles, a decoding wave effect can be
initiated using only an optimized bit mapper.
|
1309.7598 | On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori
Perturbations | cs.LG | In this paper we describe how MAP inference can be used to sample efficiently
from Gibbs distributions. Specifically, we provide means for drawing either
approximate or unbiased samples from Gibbs distributions by introducing low
dimensional perturbations and solving the corresponding MAP assignments. Our
approach also leads to new ways to derive lower bounds on partition functions.
We demonstrate empirically that our method excels in the typical "high signal -
high coupling" regime. The setting results in ragged energy landscapes that are
challenging for alternative approaches to sampling and/or lower bounds.
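The abstract above builds on the Gumbel-max identity behind perturb-and-MAP sampling. As a toy illustration (not the paper's algorithm, which uses low-dimensional perturbations on structured models), adding i.i.d. Gumbel noise to the log-potential of every configuration and taking the argmax yields an exact sample from the Gibbs distribution; the configuration values below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.3, 2.0])   # log-potentials of 4 toy configurations

def gumbel_max_sample(theta, rng):
    # Perturb every configuration with i.i.d. Gumbel(0, 1) noise,
    # then solve the (trivial, here) MAP problem on the perturbed model.
    gumbel = rng.gumbel(size=theta.shape)
    return int(np.argmax(theta + gumbel))

# Empirical sample frequencies match the Gibbs probabilities
# p(x) = exp(theta_x) / sum_y exp(theta_y).
n = 50_000
counts = np.bincount([gumbel_max_sample(theta, rng) for _ in range(n)],
                     minlength=4) / n
gibbs = np.exp(theta) / np.exp(theta).sum()
```

In structured models the argmax over all configurations is intractable, which is why the paper replaces the full perturbation with low-dimensional ones solved by a MAP solver.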
|
1309.7609 | Identificaci\'on y Registro Catastral de Cuerpos de Agua mediante
T\'ecnicas de Procesamiento Digital de Imagenes | cs.CV | The effects of global climate change on Peruvian glaciers have brought about
several processes of deglaciation during the last few years. The immediate
effect is the change of size of lakes and rivers. Public institutions that
monitor water resources currently have only recent studies which make up less
than 10% of the total. The effects of climate change and the lack of updated
information intensify social-economic problems related to water resources in
Peru. The objective of this research is to develop a software application to
automate the Cadastral Registry of Water Bodies in Peru, using techniques of
digital image processing, which would provide tools for detection, record,
temporal analysis and visualization of water bodies. The images used are from
the Landsat 5 satellite, and they undergo a pre-processing stage of satellite
calibration and correction. Detection results are archived into a file that
contains location vectors and images of the segmented bodies of water.
|
1309.7611 | Context-aware recommendations from implicit data via scalable tensor
factorization | cs.LG cs.IR | Although the implicit-feedback-based recommendation problem - when only the
user history is available and there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit-feedback case. State-of-the-art algorithms that are efficient in the
explicit case cannot be transferred automatically to the implicit case if
scalability is to be maintained. There are few implicit feedback benchmark
data sets, so new ideas are usually tested on explicit benchmarks.
In this paper, we propose a generic context-aware implicit feedback recommender
algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization
learning method that scales linearly with the number of non-zero elements in
the tensor. We also present two approximate and faster variants of iTALS using
coordinate descent and conjugate gradient methods at learning. The method also
allows us to incorporate various contextual information into the model while
maintaining its computational efficiency. We present two context-aware variants
of iTALS incorporating seasonality and item purchase sequentiality into the
model to distinguish user behavior at different time intervals, and product
types with different repetitiveness. Experiments run on six data sets show
that iTALS clearly outperforms context-unaware models and context-aware
baselines, while it is on par with factorization machines (winning in 7 of 12
cases) in terms of both recall and MAP.
|
1309.7615 | Correcting Multi-focus Images via Simple Standard Deviation for Image
Fusion | cs.CV | Image fusion is one of the recent trends in image registration, an essential
field of image processing. The basic principle of this paper is to fuse
multi-focus images using the simple statistical standard deviation. First,
the standard deviation of each k-by-k window inside each of the multi-focus
images is computed. The contribution of this paper comes from the idea that
the focused part of an image contains more detail than the unfocused part;
hence, the dispersion between pixels inside the focused part is higher than
inside the unfocused part. Second, the standard deviations of corresponding
k-by-k windows in the multi-focus images are compared, and the window with
the highest standard deviation among the multi-focus images is treated as the
optimal one to be placed in the fused image. The experimental visual results
show that the proposed method produces very satisfactory results in spite of
its simplicity.
|
1309.7643 | Rotationally Invariant Image Representation for Viewing Direction
Classification in Cryo-EM | q-bio.BM cs.CV | We introduce a new rotationally invariant viewing angle classification method
for identifying, among a large number of Cryo-EM projection images, similar
views without prior knowledge of the molecule. Our rotationally invariant
features are based on the bispectrum. Each image is denoised and compressed
using steerable principal component analysis (PCA) such that rotating an image
is equivalent to phase shifting the expansion coefficients. Thus we are able to
extend the theory of bispectrum of 1D periodic signals to 2D images. The
randomized PCA algorithm is then used to efficiently reduce the dimensionality
of the bispectrum coefficients, enabling fast computation of the similarity
between any pair of images. The nearest neighbors provide an initial
classification of similar viewing angles. In this way, rotational alignment is
only performed for images with their nearest neighbors. The initial nearest
neighbor classification and alignment are further improved by a new
classification method called vector diffusion maps. Our pipeline for viewing
angle classification and alignment is experimentally shown to be faster and
more accurate than reference-free alignment with rotationally invariant K-means
clustering, MSA/MRA 2D classification, and their modern approximations.
|
1309.7665 | Group-theoretic structure of linear phase multirate filter banks | cs.IT math.IT | Unique lifting factorization results for group lifting structures are used to
characterize the group-theoretic structure of two-channel linear phase FIR
perfect reconstruction filter bank groups. For D-invariant, order-increasing
group lifting structures, it is shown that the associated lifting cascade group
C is isomorphic to the free product of the upper and lower triangular lifting
matrix groups. Under the same hypotheses, the associated scaled lifting group S
is the semidirect product of C by the diagonal gain scaling matrix group D.
These results apply to the group lifting structures for the two principal
classes of linear phase perfect reconstruction filter banks, the whole- and
half-sample symmetric classes. Since the unimodular whole-sample symmetric
class forms a group, W, that is in fact equal to its own scaled lifting group,
W=S_W, the results of this paper characterize the group-theoretic structure of
W up to isomorphism. Although the half-sample symmetric class H does not form a
group, it can be partitioned into cosets of its lifting cascade group, C_H, or,
alternatively, into cosets of its scaled lifting group, S_H. Homomorphic
comparisons reveal that scaled lifting groups covered by the results in this
paper have a structure analogous to a "noncommutative vector space."
|
1309.7666 | Dynamic Sliding Mode Control based on Fractional calculus subject to
uncertain delay based chaotic pneumatic robot | cs.RO cs.SY | This paper considers the chattering problem of sliding mode control when
delay in the robot manipulator causes chaos in such electromechanical
systems. Fractional calculus, as a powerful theory, is used to produce a
novel sliding mode with a dynamic essence for chattering elimination. To
realize the control of a class of chaotic systems in a master-slave
configuration, this novel fractional dynamic sliding mode control scheme is
presented and examined on a delay-based chaotic robot in joint and work
space. The stability of the closed-loop system is also guaranteed by Lyapunov
stability theory. In addition, delayed robot motions are analyzed for both
qualitative and quantitative study. Finally, a numerical simulation example
illustrates the feasibility of the proposed control method.
|