| id | title | categories | abstract |
|---|---|---|---|
1205.2467
|
Linking Social Networking Sites to Scholarly Information Portals by
ScholarLib
|
cs.DL cs.IR cs.SI
|
Online Social Networks usually provide no, or only a limited, way to access
scholarly information held by Digital Libraries (DLs) in order to share and
discuss scholarly content with other online community members. This paper
addresses the potential of Social Networking Sites (SNSs) for science and
proposes initial use cases as well as a basic bi-directional model, called
ScholarLib, for linking SNSs to scholarly DLs. The major aim of ScholarLib is to
make scholarly information provided by DLs accessible from SNSs and, vice versa,
to enhance retrieval quality on the DL side with social information provided by
SNSs.
|
1205.2541
|
An improved approach to attribute reduction with covering rough sets
|
cs.AI
|
Attribute reduction is an important preprocessing step for pattern recognition
and data mining, and most research has focused on attribute reduction using
rough sets. Recently, Tsang et al. discussed attribute reduction with covering
rough sets [E. C. C. Tsang, D. Chen, Daniel S. Yeung, Approximations and reducts
with covering generalized rough sets, Computers and Mathematics with
Applications 56 (2008) 279-289], where an approach based on a discernibility
matrix was presented to compute all attribute reducts. In this paper, we provide
an improved approach that constructs a simpler discernibility matrix with
covering rough sets, and then improve some characterizations of attribute
reduction given by Tsang et al. We prove that the improved discernibility matrix
is equivalent to the old one, while the computational complexity of constructing
it is greatly reduced.
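The classical discernibility-matrix construction that this line of work builds on can be sketched in a few lines. This is a generic rough-set version for a flat decision table, not the covering-rough-set variant the paper actually simplifies; the table and labels below are invented for illustration:

```python
def discernibility_matrix(objects, decisions):
    # For every pair of objects with different decision values, record the
    # indices of the condition attributes on which the two objects differ;
    # a reduct must intersect every non-empty entry of this matrix.
    n = len(objects)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                matrix[(i, j)] = {a for a, (x, y)
                                  in enumerate(zip(objects[i], objects[j]))
                                  if x != y}
    return matrix

# a tiny decision table: rows are condition-attribute tuples,
# decision values kept in a parallel list
table = [(1, 0, 1), (1, 1, 0), (0, 1, 1)]
labels = [0, 1, 1]
M = discernibility_matrix(table, labels)
```

Objects 1 and 2 share a decision value, so no entry is generated for that pair; reducing the matrix then amounts to finding minimal hitting sets of the remaining entries.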
|
1205.2583
|
Emergence of scale-free close-knit friendship structure in online social
networks
|
physics.soc-ph cs.SI
|
Although the structural properties of online social networks have attracted
much attention, the properties of close-knit friendship structures remain an
important open question. Here, we mainly focus on how these mesoscale structures
are affected by the local and global structural properties. Analyzing the data
of four large-scale online social networks reveals several common structural
properties. It is found that not only do the local structures given by the
indegree, outdegree, and reciprocal degree distributions follow a similar
scaling behavior, but the mesoscale structures represented by the distributions
of close-knit friendship structures also exhibit a similar scaling law. The degree
correlation is very weak over a wide range of the degrees. We propose a simple
directed network model that captures the observed properties. The model
incorporates two mechanisms: reciprocation and preferential attachment. Through
rate equation analysis of our model, the local-scale and mesoscale structural
properties are derived. In the local-scale, the same scaling behavior of
indegree and outdegree distributions stems from indegree and outdegree of nodes
both growing as the same function of the introduction time, and the reciprocal
degree distribution also shows the same power-law due to the linear
relationship between the reciprocal degree and in/outdegree of nodes. In the
mesoscale, the distributions of four closed triples representing close-knit
friendship structures are found to exhibit identical power-laws, a behavior
attributed to the negligible degree correlations. Intriguingly, all the
power-law exponents of the distributions in the local-scale and mesoscale
depend only on one global parameter -- the mean in/outdegree, while both the
mean in/outdegree and the reciprocity together determine the ratio of the
reciprocal degree of a node to its in/outdegree.
|
1205.2584
|
Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC
|
cs.NA cs.LG math.OC
|
The damped Gauss-Newton (dGN) algorithm for CANDECOMP/PARAFAC (CP)
decomposition can handle the challenges of collinearity of factors and
different magnitudes of factors; nevertheless, for factorization of an $N$-D
tensor of size $I_1\times I_2\times\cdots\times I_N$ with rank $R$, the
algorithm is computationally demanding due to the construction of a large
approximate Hessian of size $RT \times RT$ and its inversion, where $T = \sum_n
I_n$. In this paper, we propose a fast implementation of the dGN algorithm based
on novel expressions of the inverse approximate Hessian in block form. The new
implementation has lower computational complexity: besides computation of the
gradient (a part common to both methods), it requires only the inversion of a
matrix of size $NR^2\times NR^2$, which is much smaller than the whole
approximate Hessian when $T \gg NR$. In addition, the implementation has lower
memory requirements, because neither the Hessian nor its inverse ever needs to
be stored in its entirety. A variant of the algorithm working with
complex-valued data is proposed as well. The complexity and performance of the
proposed algorithm are compared with those of dGN and ALS with line search on
examples of difficult benchmark tensors.
|
1205.2590
|
On the Minimum/Stopping Distance of Array Low-Density Parity-Check Codes
|
cs.IT math.IT
|
In this work, we study the minimum/stopping distance of array low-density
parity-check (LDPC) codes. An array LDPC code is a quasi-cyclic LDPC code
specified by two integers q and m, where q is an odd prime and m <= q. In the
literature, the minimum/stopping distance of these codes (denoted by d(q,m) and
h(q,m), respectively) has been thoroughly studied for m <= 5. Both exact
results, for small values of q and m, and general (i.e., independent of q)
bounds have been established. For m=6, the best known minimum distance upper
bound, derived by Mittelholzer (IEEE Int. Symp. Inf. Theory, Jun./Jul. 2002),
is d(q,6) <= 32. In this work, we derive an improved upper bound of d(q,6) <=
20 and a new upper bound d(q,7) <= 24 by using the concept of a template
support matrix of a codeword/stopping set. The bounds are tight with high
probability in the sense that we have not been able to find codewords of
strictly lower weight for several values of q using a minimum distance
probabilistic algorithm. Finally, we provide new specific minimum/stopping
distance results for m <= 7 and low-to-moderate values of q <= 79.
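The parity-check matrix of an array LDPC code is built from circulant permutation matrices. A small sketch of the construction (the shift convention i*j mod q is one common choice, and the values q = 5, m = 3 are illustrative):

```python
def circulant(q, shift):
    # q x q permutation matrix: row r has its single 1 in column (r + shift) mod q
    return [[1 if (r + shift) % q == c else 0 for c in range(q)]
            for r in range(q)]

def array_ldpc_H(q, m):
    # H(q, m): an m x q grid of q x q circulants, with block (i, j) shifted
    # by i*j mod q, giving an (m*q) x (q*q) parity-check matrix with
    # column weight m and row weight q
    rows = []
    for i in range(m):
        blocks = [circulant(q, (i * j) % q) for j in range(q)]
        for r in range(q):
            rows.append([b[r][c] for b in blocks for c in range(q)])
    return rows

H = array_ldpc_H(5, 3)  # q = 5 (odd prime), m = 3 <= q
```

The regular column weight m and row weight q follow directly from each circulant having exactly one 1 per row and per column.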
|
1205.2596
|
Proceedings of the Twenty-Seventh Conference on Uncertainty in
Artificial Intelligence (2011)
|
cs.AI
|
This is the Proceedings of the Twenty-Seventh Conference on Uncertainty in
Artificial Intelligence, which was held in Barcelona, Spain, July 14-17, 2011.
|
1205.2597
|
Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial
Intelligence (2010)
|
cs.AI
|
This is the Proceedings of the Twenty-Sixth Conference on Uncertainty in
Artificial Intelligence, which was held on Catalina Island, CA, July 8-11,
2010.
|
1205.2599
|
On the Identifiability of the Post-Nonlinear Causal Model
|
stat.ML cs.LG
|
By taking into account the nonlinear effect of the cause, the inner noise
effect, and the measurement distortion effect in the observed variables, the
post-nonlinear (PNL) causal model has demonstrated excellent performance in
distinguishing the cause from the effect. However, its identifiability has not been
properly addressed, and how to apply it in the case of more than two variables
is also a problem. In this paper, we conduct a systematic investigation on its
identifiability in the two-variable case. We show that this model is
identifiable in most cases; by enumerating all possible situations in which the
model is not identifiable, we provide sufficient conditions for its
identifiability. Simulations are given to support the theoretical results.
Moreover, in the case of more than two variables, we show that the whole causal
structure can be found by applying the PNL causal model to each structure in
the Markov equivalent class and testing if the disturbance is independent of
the direct causes for each variable. In this way the exhaustive search over all
possible causal structures is avoided.
|
1205.2600
|
A Uniqueness Theorem for Clustering
|
cs.LG
|
Despite the widespread use of clustering, there is distressingly little
general theory of clustering available. Questions like "What distinguishes a
clustering of data from other data partitionings?", "Are there any principles
governing all clustering paradigms?", and "How should a user choose an
appropriate clustering algorithm for a particular task?" are almost completely
unanswered by the existing body of clustering literature. We consider an
axiomatic approach to the theory of clustering, adopting the framework of
Kleinberg [Kle03]. By relaxing one of Kleinberg's clustering axioms, we
sidestep his impossibility result and arrive at a consistent set of axioms. We
propose extending these axioms, aiming to provide an axiomatic taxonomy of
clustering paradigms. Such a taxonomy should give users some guidance
concerning the choice of the appropriate clustering paradigm for a given task.
The main result of this paper is a set of abstract properties that characterize
the Single-Linkage clustering function. This characterization result provides
new insight into the properties of desired data groupings that make
Single-Linkage the appropriate choice. We conclude by considering a taxonomy of
clustering functions based on abstract properties that each satisfies.
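The single-linkage function that the characterization singles out is easy to state operationally: repeatedly merge the two clusters whose closest pair of points is nearest. A naive O(n^3) sketch (the sample points are invented):

```python
def single_linkage(points, k):
    # agglomerative clustering with single-linkage (minimum pairwise)
    # inter-cluster distance, stopping once k clusters remain
    clusters = [[p] for p in points]

    def link(a, b):  # smallest pairwise Euclidean distance between clusters
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

groups = single_linkage([(0, 0), (0, 1), (1, 0), (9, 9), (9, 10)], k=2)
```

On this toy input the two well-separated groups are recovered regardless of merge-tie order, which reflects the path-based character of single linkage.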
|
1205.2601
|
Most Relevant Explanation: Properties, Algorithms, and Evaluations
|
cs.AI
|
Most Relevant Explanation (MRE) is a method for finding multivariate
explanations for given evidence in Bayesian networks [12]. This paper studies
the theoretical properties of MRE and develops an algorithm for finding
multiple top MRE solutions. Our study shows that MRE relies on an implicit soft
relevance measure in automatically identifying the most relevant target
variables and pruning less relevant variables from an explanation. The soft
measure also enables MRE to capture the intuitive phenomenon of explaining away
encoded in Bayesian networks. Furthermore, our study shows that the solution
space of MRE has a special lattice structure which yields interesting dominance
relations among the solutions. A K-MRE algorithm based on these dominance
relations is developed for generating a set of top solutions that are more
representative. Our empirical results show that MRE methods are promising
approaches for explanation in Bayesian networks.
|
1205.2602
|
The Entire Quantile Path of a Risk-Agnostic SVM Classifier
|
cs.LG
|
A quantile binary classifier uses the rule: classify x as +1 if P(Y = 1|X =
x) >= t, and as -1 otherwise, for a fixed quantile parameter t in [0, 1]. It
has been shown that Support Vector Machines (SVMs) in the limit are quantile
classifiers with t = 1/2. In this paper, we show that by using asymmetric costs
of misclassification, SVMs can be appropriately extended to recover, in the
limit, the quantile binary classifier for any t. We then present a principled
algorithm to solve the extended SVM classifier for all values of t
simultaneously. This has two implications: first, one can recover the entire
conditional distribution P(Y = 1|X = x) = t for t in [0, 1]; second, one can
build a risk-agnostic SVM classifier where the cost of misclassification need
not be known a priori. Preliminary numerical experiments show the effectiveness
of the proposed algorithm.
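The asymmetric-cost idea can be sketched with a linear SVM trained by subgradient descent on a weighted hinge loss. This is an illustrative stand-in, not the paper's path-following algorithm, and the data and hyperparameters are invented:

```python
import numpy as np

def weighted_hinge_svm(X, y, c_pos, c_neg, lr=0.05, epochs=300, lam=0.01):
    # linear SVM with asymmetric misclassification costs: hinge losses on
    # positive examples are weighted by c_pos, on negatives by c_neg.
    # Varying the ratio between the two costs shifts the learned decision
    # threshold, mimicking the quantile parameter t.
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        cost = np.where(y > 0, c_pos, c_neg)
        active = (margins < 1).astype(float) * cost  # weighted subgradient mask
        w -= lr * (lam * w - (active * y) @ X / n)
        b -= lr * (-(active * y).mean())
    return w, b

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = weighted_hinge_svm(X, y, c_pos=1.0, c_neg=1.0)
```

With equal costs this recovers the usual symmetric classifier; raising c_pos relative to c_neg pushes the intercept so that more points are labeled +1, which is the mechanism the paper exploits to sweep over t.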
|
1205.2603
|
A Bayesian Framework for Community Detection Integrating Content and
Link
|
cs.SI cs.AI physics.soc-ph
|
This paper addresses the problem of community detection in networked data
that combines link and content analysis. Most existing work combines link and
content information by a generative model. There are two major shortcomings
with the existing approaches. First, they assume that the probability of
creating a link between two nodes is determined only by the community
memberships of the nodes; however other factors (e.g. popularity) could also
affect the link pattern. Second, they use generative models to model the
content of individual nodes, whereas these generative models are vulnerable to
the content attributes that are irrelevant to communities. We propose a
Bayesian framework for combining link and content information for community
detection that explicitly addresses these shortcomings. A new link model is
presented that introduces a random variable to capture the node popularity when
deciding the link between two nodes; a discriminative model is used to
determine the community membership of a node by its content. An approximate
inference algorithm is presented for efficient Bayesian inference. Our
empirical study shows that the proposed framework outperforms several
state-of-the-art approaches in combining link and content information for
community detection.
|
1205.2604
|
The Infinite Latent Events Model
|
stat.ML cs.LG
|
We present the Infinite Latent Events Model, a nonparametric hierarchical
Bayesian distribution over infinite dimensional Dynamic Bayesian Networks with
binary state representations and noisy-OR-like transitions. The distribution
can be used to learn structure in discrete timeseries data by simultaneously
inferring a set of latent events, which events fired at each timestep, and how
those events are causally linked. We illustrate the model on a sound
factorization task, a network topology identification task, and a video game
task.
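The noisy-OR transition the model builds on has a simple closed form: a child fires unless every active parent independently fails to trigger it. A minimal sketch (the leak term is a standard extension of noisy-OR, included here for illustration):

```python
def noisy_or(parent_states, activation_probs, leak=0.0):
    # P(child = 1 | parents): each active parent i triggers the child with
    # probability activation_probs[i], independently; `leak` models causes
    # outside the set of represented parents
    fail = 1.0 - leak
    for active, p in zip(parent_states, activation_probs):
        if active:
            fail *= 1.0 - p
    return 1.0 - fail

p = noisy_or([1, 1, 0], [0.5, 0.5, 0.9])  # two active parents at 0.5 each
```

With two active parents of strength 0.5, both must fail (probability 0.25), so the child fires with probability 0.75; inactive parents contribute nothing.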
|
1205.2605
|
Herding Dynamic Weights for Partially Observed Random Field Models
|
cs.LG stat.ML
|
Learning the parameters of a (potentially partially observable) random field
model is intractable in general. Instead of focusing on a single optimal
parameter value, we propose to treat parameters as dynamical quantities. We
introduce an algorithm to generate complex dynamics for parameters and (both
visible and hidden) state vectors. We show that under certain conditions
averages computed over trajectories of the proposed dynamical system converge
to averages computed over the data. Our "herding dynamics" does not require
expensive operations such as exponentiation and is fully deterministic.
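For a single binary variable the herding update is only a few lines: deterministically pick the state that maximizes the current weight, then move the weight by the moment gap. A toy sketch (the target mean 0.3 is invented):

```python
def herd_binary(mu, steps):
    # herding for one binary variable with feature phi(s) = s and target
    # moment mu: choose s_t = argmax_s w * s, then update w += mu - s_t.
    # The weight stays bounded, so the empirical mean of the generated
    # samples converges to mu at rate O(1/T) -- with no randomness anywhere.
    w, samples = 0.0, []
    for _ in range(steps):
        s = 1 if w > 0 else 0
        samples.append(s)
        w += mu - s
    return samples

samples = herd_binary(0.3, 1000)
mean = sum(samples) / len(samples)
```

The full algorithm in the paper applies the same argmax-then-update pattern jointly to parameters and (visible and hidden) state vectors of a random field.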
|
1205.2606
|
Exploring compact reinforcement-learning representations with linear
regression
|
cs.LG cs.AI
|
This paper presents a new algorithm for online linear regression whose
efficiency guarantees satisfy the requirements of the KWIK (Knows What It
Knows) framework. The algorithm improves on the complexity bounds of the
current state-of-the-art procedure in this setting. We explore several
applications of this algorithm for learning compact reinforcement-learning
representations. We show that KWIK linear regression can be used to learn the
reward function of a factored MDP and the probabilities of action outcomes in
Stochastic STRIPS and Object Oriented MDPs, none of which have been proven to
be efficiently learnable in the RL setting before. We also combine KWIK linear
regression with other KWIK learners to learn larger portions of these models,
including experiments on learning factored MDP transition and reward functions
together.
|
1205.2608
|
Temporal-Difference Networks for Dynamical Systems with Continuous
Observations and Actions
|
cs.LG stat.ML
|
Temporal-difference (TD) networks are a class of predictive state
representations that use well-established TD methods to learn models of
partially observable dynamical systems. Previous research with TD networks has
dealt only with dynamical systems with finite sets of observations and actions.
We present an algorithm for learning TD network representations of dynamical
systems with continuous observations and actions. Our results show that the
algorithm is capable of learning accurate and robust models of several noisy
continuous dynamical systems. The algorithm presented here is the first fully
incremental method for learning a predictive representation of a continuous
dynamical system.
|
1205.2609
|
Which Spatial Partition Trees are Adaptive to Intrinsic Dimension?
|
stat.ML cs.LG
|
Recent theory work has found that a special type of spatial partition tree -
called a random projection tree - is adaptive to the intrinsic dimension of the
data from which it is built. Here we examine this same question, with a
combination of theory and experiments, for a broader class of trees that
includes k-d trees, dyadic trees, and PCA trees. Our motivation is to get a
feel for (i) the kind of intrinsic low dimensional structure that can be
empirically verified, (ii) the extent to which a spatial partition can exploit
such structure, and (iii) the implications for standard statistical tasks such
as regression, vector quantization, and nearest neighbor search.
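One node of a random projection tree (the tree type the earlier theory analyzes) can be sketched in a few lines; the jittered split point of the original RP-tree rule is replaced by a plain median here for brevity, and the sample data is invented:

```python
import random

def rp_split(points):
    # project every point onto one random Gaussian direction and split at
    # the median projection; recursing on the two halves yields an RP tree
    dim = len(points[0])
    direction = [random.gauss(0, 1) for _ in range(dim)]
    proj = [sum(x * u for x, u in zip(p, direction)) for p in points]
    median = sorted(proj)[len(proj) // 2]
    left = [p for p, v in zip(points, proj) if v < median]
    right = [p for p, v in zip(points, proj) if v >= median]
    return left, right

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
left, right = rp_split(pts)
```

A k-d tree node differs only in the choice of direction (a coordinate axis instead of a random vector), and a PCA tree uses the top principal component; that shared template is what makes the comparison across tree families natural.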
|
1205.2610
|
Probabilistic Structured Predictors
|
cs.LG
|
We consider MAP estimators for structured prediction with exponential family
models. In particular, we concentrate on the case that efficient algorithms for
uniform sampling from the output space exist. We show that under this
assumption (i) exact computation of the partition function remains a hard
problem, and (ii) the partition function and the gradient of the log partition
function can be approximated efficiently. Our main result is an approximation
scheme for the partition function based on Markov Chain Monte Carlo theory. We
also show that the efficient uniform sampling assumption holds in several
application settings that are of importance in machine learning.
|
1205.2611
|
Ordinal Boltzmann Machines for Collaborative Filtering
|
cs.IR cs.LG
|
Collaborative filtering is an effective recommendation technique wherein the
preference of an individual can potentially be predicted based on preferences
of other members. Early algorithms often relied on the strong locality in the
preference data, that is, it is enough to predict preference of a user on a
particular item based on a small subset of other users with similar tastes or
of other items with similar properties. More recently, dimensionality reduction
techniques have proved to be equally competitive, and these are based on the
co-occurrence patterns rather than locality. This paper explores and extends a
probabilistic model known as Boltzmann Machine for collaborative filtering
tasks. It seamlessly integrates both the similarity and co-occurrence in a
principled manner. In particular, we study parameterisation options to deal
with the ordinal nature of the preferences, and propose a joint modelling of
both the user-based and item-based processes. Experiments on moderate and
large-scale movie recommendation show that our framework rivals existing
well-known methods.
|
1205.2612
|
Computing Posterior Probabilities of Structural Features in Bayesian
Networks
|
cs.LG stat.ML
|
We study the problem of learning Bayesian network structures from data.
Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can
compute the exact marginal posterior probability of a subnetwork, e.g., a
single edge, in O(n 2^n) time and the posterior probabilities for all n(n-1)
potential edges in O(n 2^n) total time, assuming that the number of parents per
node (the indegree) is bounded by a constant. One main drawback of their
algorithms is the requirement of a special structure prior that is non-uniform
and does not respect Markov equivalence. In this paper, we develop an algorithm
that can compute the exact posterior probability of a subnetwork in O(3^n) time
and the posterior probabilities for all n(n-1) potential edges in O(n 3^n)
total time. Our algorithm also assumes a bounded indegree but allows general
structure priors. We demonstrate the applicability of the algorithm on several
data sets with up to 20 variables.
|
1205.2613
|
Measuring Inconsistency in Probabilistic Knowledge Bases
|
cs.AI
|
This paper develops an inconsistency measure on conditional probabilistic
knowledge bases. The measure is based on fundamental principles for
inconsistency measures and thus provides a solid theoretical framework for the
treatment of inconsistencies in probabilistic expert systems. We illustrate its
usefulness and immediate application on several examples and present some
formal results. Building on this measure, we use the Shapley value (a
well-known solution concept for coalition games) to define a sophisticated
indicator that is able not only to measure inconsistencies but also to reveal
their causes in the knowledge base. Altogether, these tools guide the knowledge
engineer in restoring consistency and therefore enable the construction of a
consistent and usable knowledge base that can be employed in probabilistic
expert systems.
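The Shapley value used here has a direct (exponential-time) definition: average each player's marginal contribution over all orderings. A minimal sketch with an invented additive game, in which each player's value should come out equal to its own weight:

```python
from itertools import permutations
from math import factorial

def shapley(players, worth):
    # exact Shapley value: phi_p is the average, over all orderings of the
    # players, of the marginal contribution worth(S + p) - worth(S) that p
    # makes to the coalition S of its predecessors in the ordering
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += worth(coalition | {p}) - worth(coalition)
            coalition |= {p}
    n_fact = factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}

weights = {"a": 1.0, "b": 2.0, "c": 3.0}
values = shapley(list(weights), lambda s: sum(weights[p] for p in s))
```

In the inconsistency setting, the "worth" of a coalition of probabilistic statements would be an inconsistency measure evaluated on that subset, so a statement's Shapley value quantifies its share of the blame.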
|
1205.2614
|
Products of Hidden Markov Models: It Takes N>1 to Tango
|
cs.LG stat.ML
|
Products of Hidden Markov Models (PoHMMs) are an interesting class of
generative models which have received little attention since their
introduction. This may be in part due to their more computationally expensive
gradient-based learning algorithm, and the intractability of computing the log
likelihood of sequences under the model. In this paper, we demonstrate how the
partition function can be estimated reliably via Annealed Importance Sampling.
We perform experiments using contrastive divergence learning on rainfall data
and data captured from pairs of people dancing. Our results suggest that
advances in learning and evaluation for undirected graphical models and recent
increases in available computing power make PoHMMs worth considering for
complex time-series modeling tasks.
|
1205.2615
|
Effects of Treatment on the Treated: Identification and Generalization
|
stat.ME cs.AI
|
Many applications of causal analysis call for assessing, retrospectively, the
effect of withholding an action that has in fact been implemented. This
counterfactual quantity, sometimes called the "effect of treatment on the
treated" (ETT), has been used to evaluate educational programs, critique public
policies, and justify individual decision making. In this paper we explore the
conditions under which ETT can be estimated from (i.e., identified in)
experimental and/or observational studies. We show that, when the action
invokes a singleton variable, the conditions for ETT identification have simple
characterizations in terms of causal diagrams. We further give a graphical
characterization of the conditions under which the effects of multiple
treatments on the treated can be identified, as well as ways in which the ETT
estimand can be constructed from both interventional and observational
distributions.
|
1205.2616
|
Bisimulation-based Approximate Lifted Inference
|
cs.AI
|
There has been a great deal of recent interest in methods for performing
lifted inference; however, most of this work assumes that the first-order model
is given as input to the system. Here, we describe lifted inference algorithms
that determine symmetries and automatically lift the probabilistic model to
speed up inference. In particular, we describe approximate lifted inference
techniques that allow the user to trade off inference accuracy for
computational efficiency by using a handful of tunable parameters, while
keeping the error bounded. Our algorithms are closely related to the
graph-theoretic concept of bisimulation. We report experiments on both
synthetic and real data to show that in the presence of symmetries, run-times
for inference can be improved significantly, with approximate lifted inference
providing orders of magnitude speedup over ground inference.
|
1205.2617
|
Modeling Discrete Interventional Data using Directed Cyclic Graphical
Models
|
stat.ML cs.LG stat.ME
|
We outline a representation for discrete multivariate distributions in terms
of interventional potential functions that are globally normalized. This
representation can be used to model the effects of interventions, and the
independence properties encoded in this model can be represented as a directed
graph that allows cycles. In addition to discussing inference and sampling with
this representation, we give an exponential family parametrization that allows
parameter estimation to be stated as a convex optimization problem; we also
give a convex relaxation of the task of simultaneous parameter and structure
learning using group l1-regularization. The model is evaluated on simulated
data and intracellular flow cytometry data.
|
1205.2618
|
BPR: Bayesian Personalized Ranking from Implicit Feedback
|
cs.IR cs.LG stat.ML
|
Item recommendation is the task of predicting a personalized ranking on a set
of items (e.g. websites, movies, products). In this paper, we investigate the
most common scenario with implicit feedback (e.g. clicks, purchases). There are
many methods for item recommendation from implicit feedback like matrix
factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these
methods are designed for the item prediction task of personalized ranking, none
of them is directly optimized for ranking. In this paper we present a generic
optimization criterion BPR-Opt for personalized ranking that is the maximum
posterior estimator derived from a Bayesian analysis of the problem. We also
provide a generic learning algorithm for optimizing models with respect to
BPR-Opt. The learning method is based on stochastic gradient descent with
bootstrap sampling. We show how to apply our method to two state-of-the-art
recommender models: matrix factorization and adaptive kNN. Our experiments
indicate that for the task of personalized ranking our optimization method
outperforms the standard learning techniques for MF and kNN. The results show
the importance of optimizing models for the right criterion.
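One BPR-Opt stochastic gradient step on a sampled (user, observed item, unobserved item) triple is compact. This numpy sketch uses a matrix-factorization model; the dimensions, learning rate, and the fixed training triple are invented for illustration:

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
    # one SGD step maximizing ln sigmoid(x_uij) - reg * ||params||^2, where
    # x_uij = x_ui - x_uj is the score gap between item i (observed for
    # user u) and item j (not observed); in full BPR the triple (u, i, j)
    # is drawn by bootstrap sampling from the implicit feedback
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))   # d ln sigmoid(x) / dx = sigmoid(-x)
    du = U[u].copy()                  # keep the pre-update user vector
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * du - reg * V[i])
    V[j] += lr * (-g * du - reg * V[j])

rng = np.random.default_rng(0)
U = rng.normal(0, 0.1, (3, 4))   # 3 users, rank-4 factors
V = rng.normal(0, 0.1, (5, 4))   # 5 items
for _ in range(200):
    bpr_step(U, V, u=0, i=1, j=2)  # repeatedly reinforce "user 0 prefers item 1"
gap = U[0] @ (V[1] - V[2])
```

After training on the single triple, the score gap is positive, i.e. the model ranks item 1 above item 2 for user 0, which is exactly the pairwise criterion BPR-Opt optimizes.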
|
1205.2619
|
Regret-based Reward Elicitation for Markov Decision Processes
|
cs.AI
|
The specification of a Markov decision process (MDP) can be difficult. Reward
function specification is especially problematic; in practice, it is often
cognitively complex and time-consuming for users to precisely specify rewards.
This work casts the problem of specifying rewards as one of preference
elicitation and aims to minimize the degree of precision with which a reward
function must be specified while still allowing optimal or near-optimal
policies to be produced. We first discuss how robust policies can be computed
for MDPs given only partial reward information using the minimax regret
criterion. We then demonstrate how regret can be reduced by efficiently
eliciting reward information using bound queries, using regret-reduction as a
means for choosing suitable queries. Empirical results demonstrate that
regret-based reward elicitation offers an effective way to produce near-optimal
policies without resorting to the precise specification of the entire reward
function.
|
1205.2620
|
Exact Structure Discovery in Bayesian Networks with Less Space
|
cs.AI cs.DS
|
The fastest known exact algorithms for score-based structure discovery in
Bayesian networks on n nodes run in time and space 2^n n^{O(1)}. The usage of
these algorithms is limited to networks on at most around 25 nodes, mainly due
to the space requirement. Here, we study space-time tradeoffs for finding an
optimal network structure. When little space is available, we apply the
Gurevich-Shelah recurrence (originally proposed for the Hamiltonian path
problem) and obtain time 2^{2n-s} n^{O(1)} in space 2^s n^{O(1)} for any s =
n/2, n/4, n/8, ...; we assume the indegree of each node is bounded by a
constant. For the more practical setting with moderate amounts of space, we
present a novel scheme. It yields running time 2^n (3/2)^p n^{O(1)} in space
2^n (3/4)^p n^{O(1)} for any p = 0, 1, ..., n/2; these bounds hold as long as
the indegrees are at most 0.238n. Furthermore, the latter scheme allows easy
and efficient parallelization beyond previous algorithms. We also explore
empirically the potential of the presented techniques.
|
1205.2621
|
Logical Inference Algorithms and Matrix Representations for
Probabilistic Conditional Independence
|
cs.AI
|
Logical inference algorithms for conditional independence (CI) statements
have important applications, from testing consistency during knowledge
elicitation to constraint-based structure learning of graphical models. We prove
that the implication problem for CI statements is decidable, given that the
size of the domains of the random variables is known and fixed. We will present
an approximate logical inference algorithm which combines a falsification and a
novel validation algorithm. The validation algorithm represents each set of CI
statements as a sparse 0-1 matrix A and validates instances of the implication
problem by solving specific linear programs with constraint matrix A. We will
show experimentally that the algorithm is both effective and efficient in
validating and falsifying instances of the probabilistic CI implication
problem.
|
1205.2622
|
Using the Gene Ontology Hierarchy when Predicting Gene Function
|
cs.LG cs.CE stat.ML
|
The problem of multilabel classification when the labels are related through
a hierarchical categorization scheme occurs in many application domains such as
computational biology. For example, this problem arises naturally when trying
to automatically assign gene function using controlled vocabularies like the
Gene Ontology. However, most existing approaches for predicting gene functions solve
independent classification problems to predict genes that are involved in a
given function category, independently of the rest. Here, we propose two simple
methods for incorporating information about the hierarchical nature of the
categorization scheme. In the first method, we use information about a gene's
previous annotation to set an initial prior on its label. In a second approach,
we extend a graph-based semi-supervised learning algorithm for predicting gene
function in a hierarchy. We show that we can efficiently solve this problem by
solving a linear system of equations. We compare these approaches with a
previous label reconciliation-based approach. Results show that using the
hierarchy information directly, compared to using reconciliation methods,
improves gene function prediction.
|
1205.2623
|
Virtual Vector Machine for Bayesian Online Classification
|
cs.LG stat.ML
|
In a typical online learning scenario, a learner is required to process a
large data stream using a small memory buffer. Such a requirement usually
conflicts with a learner's primary pursuit of prediction accuracy. To address
this dilemma, we introduce a novel Bayesian online classification algorithm,
called the Virtual Vector Machine, which allows one to smoothly trade off
prediction accuracy against memory size. The virtual vector machine summarizes
the information contained in the preceding data stream by a Gaussian
distribution over the classification weights plus a constant number of virtual
data points. The virtual data points are designed to add extra non-Gaussian
information about the classification weights. To maintain the constant number
of virtual points, the virtual vector machine adds the current real data point
into the virtual point set, then either merges the two most similar virtual
points into a new virtual point or deletes a virtual point that is far from the
decision boundary. The information lost in this process is absorbed into the
Gaussian distribution. The extra information provided by the virtual points
leads to improved predictive accuracy over previous online classification
algorithms.
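The buffer-maintenance step can be sketched generically: insert the new point, and if capacity is exceeded, merge the two closest virtual points into their midpoint. This toy version omits the Gaussian absorption and the far-from-boundary deletion rule, and the capacity and points are invented:

```python
def add_virtual_point(buffer, point, capacity):
    # keep at most `capacity` virtual points: after inserting the new point,
    # merge the two closest points into their midpoint (a crude stand-in
    # for the paper's merge step)
    buffer.append(list(point))
    while len(buffer) > capacity:
        i, j = min(((i, j) for i in range(len(buffer))
                    for j in range(i + 1, len(buffer))),
                   key=lambda ij: sum((a - b) ** 2
                                      for a, b in zip(buffer[ij[0]],
                                                      buffer[ij[1]])))
        merged = [(a + b) / 2 for a, b in zip(buffer[i], buffer[j])]
        del buffer[j], buffer[i]   # delete the larger index first
        buffer.append(merged)
    return buffer

buf = []
for p in [(0.0, 0.0), (4.0, 4.0), (0.1, 0.0), (4.1, 4.0), (2.0, 2.0)]:
    add_virtual_point(buf, p, capacity=3)
```

Nearby points collapse into representatives while isolated ones survive, which is the qualitative behavior the summary buffer needs.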
|
1205.2624
|
Convexifying the Bethe Free Energy
|
cs.AI cs.LG
|
The introduction of loopy belief propagation (LBP) revitalized the
application of graphical models in many domains. Many recent works present
improvements on the basic LBP algorithm in an attempt to overcome convergence
and local optima problems. Notable among these are convexified free energy
approximations that lead to inference procedures with provable convergence and
quality properties. However, empirically LBP still outperforms most of its
convex variants in a variety of settings, as we also demonstrate here.
Motivated by this fact we seek convexified free energies that directly
approximate the Bethe free energy. We show that the proposed approximations
compare favorably with state-of-the-art convex free energy approximations.
|
1205.2625
|
Convergent message passing algorithms - a unifying view
|
cs.AI cs.LG
|
Message-passing algorithms have emerged as powerful techniques for
approximate inference in graphical models. When these algorithms converge, they
can be shown to find local (or sometimes even global) optima of variational
formulations to the inference problem. But many of the most popular algorithms
are not guaranteed to converge. This has led to recent interest in convergent
message-passing algorithms. In this paper, we present a unified view of
convergent message-passing algorithms. We present a simple derivation of an
abstract algorithm, tree-consistency bound optimization (TCBO) that is provably
convergent in both its sum and max product forms. We then show that many of the
existing convergent algorithms are instances of our TCBO algorithm, and obtain
novel convergent algorithms "for free" by exchanging maximizations and
summations in existing algorithms. In particular, we show that Wainwright's
non-convergent sum-product algorithm for tree-based variational bounds is
actually convergent with the right update order for the case where trees are
monotonic chains.
|
1205.2626
|
Group Sparse Priors for Covariance Estimation
|
stat.ML cs.LG
|
Recently it has become popular to learn sparse Gaussian graphical models
(GGMs) by imposing l1 or group l1,2 penalties on the elements of the precision
matrix. This penalized likelihood approach results in a tractable convex
optimization problem. In this paper, we reinterpret these results as performing
MAP estimation under a novel prior which we call the group l1 and l1,2
positive-definite matrix distributions. This enables us to build a hierarchical
model in which the l1 regularization terms vary depending on which group the
entries are assigned to, which in turn allows us to learn block structured
sparse GGMs with unknown group assignments. Exact inference in this
hierarchical model is intractable, due to the need to compute the normalization
constant of these matrix distributions. However, we derive upper bounds on the
partition functions, which lets us use fast variational inference (optimizing a
lower bound on the joint posterior). We show that on two real world data sets
(motion capture and financial data), our method which infers the block
structure outperforms a method that uses a fixed block structure, which in turn
outperforms baseline methods that ignore block structure.
|
1205.2627
|
Domain Knowledge Uncertainty and Probabilistic Parameter Constraints
|
cs.LG stat.ML
|
Incorporating domain knowledge into the modeling process is an effective way
to improve learning accuracy. However, as it is provided by humans, domain
knowledge can only be specified with some degree of uncertainty. We propose to
explicitly model such uncertainty through probabilistic constraints over the
parameter space. In contrast to hard parameter constraints, our approach is
effective also when the domain knowledge is inaccurate and generally results in
superior modeling accuracy. We focus on generative and conditional modeling
where the parameters are assigned a Dirichlet or Gaussian prior and demonstrate
the framework with experiments on both synthetic and real-world data.
|
1205.2628
|
Multiple Source Adaptation and the Renyi Divergence
|
cs.LG stat.ML
|
This paper presents a novel theoretical study of the general problem of
multiple source adaptation using the notion of Renyi divergence. Our results
build on our previous work [12], but significantly broaden the scope of that
work in several directions. We extend previous multiple source loss guarantees
based on distribution weighted combinations to arbitrary target distributions
P, not necessarily mixtures of the source distributions, analyze both known and
unknown target distribution cases, and prove a lower bound. We further extend
our bounds to deal with the case where the learner receives an approximate
distribution for each source instead of the exact one, and show that similar
loss guarantees can be achieved depending on the divergence between the
approximate and true distributions. We also analyze the case where the labeling
functions of the source domains are somewhat different. Finally, we report the
results of experiments with both an artificial data set and a sentiment
analysis task, showing the performance benefits of the distribution weighted
combinations and the quality of our bounds based on the Renyi divergence.
|
1205.2629
|
Interpretation and Generalization of Score Matching
|
cs.LG stat.ML
|
Score matching is a recently developed parameter learning method that is
particularly effective to complicated high dimensional density models with
intractable partition functions. In this paper, we study two issues that have
not been completely resolved for score matching. First, we provide a formal
link between maximum likelihood and score matching. Our analysis shows that
score matching finds model parameters that are more robust with noisy training
data. Second, we develop a generalization of score matching. Based on this
generalization, we further demonstrate an extension of score matching to models
of discrete data.
|
1205.2631
|
Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization
|
cs.LG cs.CV stat.ML
|
The problem of joint feature selection across a group of related tasks has
applications in many areas including biomedical informatics and computer
vision. We consider the l2,1-norm regularized regression model for joint
feature selection from multiple tasks, which can be derived in the
probabilistic framework by assuming a suitable prior from the exponential
family. One appealing feature of the l2,1-norm regularization is that it
encourages multiple predictors to share similar sparsity patterns. However, the
resulting optimization problem is challenging to solve due to the
non-smoothness of the l2,1-norm regularization. In this paper, we propose to
accelerate the computation by reformulating it as two equivalent smooth convex
optimization problems which are then solved via Nesterov's method, an
optimal first-order black-box method for smooth convex optimization. A key
building block in solving the reformulations is the Euclidean projection. We
show that the Euclidean projection for the first reformulation can be
analytically computed, while the Euclidean projection for the second one can be
computed in linear time. Empirical evaluations on several data sets verify the
efficiency of the proposed algorithms.
|
1205.2632
|
Improving Compressed Counting
|
cs.DS cs.LG stat.ML
|
Compressed Counting (CC) [22] was recently proposed for estimating the a-th
frequency moments of data streams, where 0 < a <= 2. CC can be used for
estimating Shannon entropy, which can be approximated by certain functions of
the a-th frequency moments as a -> 1. Monitoring Shannon entropy for anomaly
detection (e.g., DDoS attacks) in large networks is an important task. This
paper presents a new algorithm for improving CC. The improvement is most
substantial when a -> 1-. For example, when a = 0.99, the new algorithm
reduces the estimation variance roughly by 100-fold. This new algorithm would
make CC considerably more practical for estimating Shannon entropy.
Furthermore, the new algorithm is statistically optimal when a = 0.5.
|
1205.2633
|
MAP Estimation of Semi-Metric MRFs via Hierarchical Graph Cuts
|
cs.AI cs.DS
|
We consider the task of obtaining the maximum a posteriori estimate of
discrete pairwise random fields with arbitrary unary potentials and semi-metric
pairwise potentials. For this problem, we propose an accurate hierarchical move
making strategy where each move is computed efficiently by solving an st-MINCUT
problem. Unlike previous move making approaches, e.g. the widely used
alpha-expansion algorithm, our method obtains the guarantees of the standard linear
programming (LP) relaxation for the important special case of metric labeling.
Unlike the existing LP relaxation solvers, e.g. interior-point algorithms or
tree-reweighted message passing, our method is significantly faster as it uses
only the efficient st-MINCUT algorithm in its design. Using both synthetic and
real data experiments, we show that our technique outperforms several commonly
used algorithms.
|
1205.2634
|
The Temporal Logic of Causal Structures
|
cs.AI
|
Computational analysis of time-course data with an underlying causal
structure is needed in a variety of domains, including neural spike trains,
stock price movements, and gene expression levels. However, it can be
challenging to determine from just the numerical time course data alone what is
coordinating the visible processes, to separate the underlying prima facie
causes into genuine and spurious causes and to do so with a feasible
computational complexity. For this purpose, we have been developing a novel
algorithm based on a framework that combines notions of causality in philosophy
with algorithmic approaches built on model checking and statistical techniques
for multiple hypotheses testing. The causal relationships are described in
terms of temporal logic formulae, reframing the inference problem in terms of
model checking. The logic used, PCTL, allows description of both the time
between cause and effect and the probability of this relationship being
observed. We show that equipped with these causal formulae with their
associated probabilities we may compute the average impact a cause makes to its
effect and then discover statistically significant causes through the concepts
of multiple hypothesis testing (treating each causal relationship as a
hypothesis), and false discovery control. By exploring a well-chosen family of
potentially all significant hypotheses with reasonably minimal description
length, it is possible to tame the algorithm's computational complexity while
exploring the nearly complete search-space of all prima facie causes. We have
tested these ideas in a number of domains and illustrate them here with two
examples.
|
1205.2635
|
Constraint Processing in Lifted Probabilistic Inference
|
cs.AI
|
First-order probabilistic models combine representational power of
first-order logic with graphical models. There is an ongoing effort to design
lifted inference algorithms for first-order probabilistic models. We analyze
lifted inference from the perspective of constraint processing and, through
this viewpoint, we analyze and compare existing approaches and expose their
advantages and limitations. Our theoretical results show that the wrong choice
of constraint processing method can lead to exponential increase in
computational complexity. Our empirical tests confirm the importance of
constraint processing in lifted inference. This is the first theoretical and
empirical study of constraint processing in lifted inference.
|
1205.2636
|
Monolingual Probabilistic Programming Using Generalized Coroutines
|
cs.PL cs.AI
|
Probabilistic programming languages and modeling toolkits are two modular
ways to build and reuse stochastic models and inference procedures. Combining
strengths of both, we express models and inference as generalized coroutines in
the same general-purpose language. We use existing facilities of the language,
such as rich libraries, optimizing compilers, and types, to develop concise,
declarative, and realistic models with competitive performance on exact and
approximate inference. In particular, a wide range of models can be expressed
using memoization. Because deterministic parts of models run at full speed,
custom inference procedures are trivial to incorporate, and inference
procedures can reason about themselves without interpretive overhead. Within
this framework, we introduce a new, general algorithm for importance sampling
with look-ahead.
|
1205.2637
|
Counting Belief Propagation
|
cs.AI
|
A major benefit of graphical models is that most knowledge is captured in the
model structure. Many models, however, produce inference problems with a lot of
symmetries not reflected in the graphical structure and hence not exploitable
by efficient inference techniques such as belief propagation (BP). In this
paper, we present a new and simple BP algorithm, called counting BP, that
exploits such additional symmetries. Starting from a given factor graph,
counting BP first constructs a compressed factor graph of cluster nodes and
cluster factors, corresponding to sets of nodes and factors that are
indistinguishable given the evidence. Then it runs a modified BP algorithm on
the compressed graph that is equivalent to running BP on the original factor
graph. Our experiments show that counting BP is applicable to a variety of
important AI tasks such as (dynamic) relational models and boolean model
counting, and that significant efficiency gains are obtainable, often by orders
of magnitude.
|
1205.2638
|
Temporal Action-Graph Games: A New Representation for Dynamic Games
|
cs.GT cs.AI
|
In this paper we introduce temporal action graph games (TAGGs), a novel
graphical representation of imperfect-information extensive form games. We show
that when a game involves anonymity or context-specific utility independencies,
its encoding as a TAGG can be much more compact than its direct encoding as a
multiagent influence diagram (MAID). We also show that TAGGs can be understood
as indirect MAID encodings in which many deterministic chance nodes are
introduced. We provide an algorithm for computing with TAGGs, and show both
theoretically and empirically that our approach improves significantly on the
previous state of the art.
|
1205.2639
|
MAP Estimation, Message Passing, and Perfect Graphs
|
cs.AI cs.DM cs.DS
|
Efficiently finding the maximum a posteriori (MAP) configuration of a
graphical model is an important problem which is often implemented using
message passing algorithms. The optimality of such algorithms is only well
established for singly-connected graphs and other limited settings. This
article extends the set of graphs where MAP estimation is in P and where
message passing recovers the exact solution to so-called perfect graphs. This
result leverages recent progress in defining perfect graphs (the strong perfect
graph theorem), linear programming relaxations of MAP estimation and recent
convergent message passing schemes. The article converts graphical models into
nand Markov random fields which are straightforward to relax into linear
programs. Therein, integrality can be established in general by testing for
graph perfection. This perfection test is performed efficiently using a
polynomial time algorithm. Alternatively, known decomposition tools from
perfect graph theory may be used to prove perfection for certain families of
graphs. Thus, a general graph framework is provided for determining when MAP
estimation in any graphical model is in P, has an integral linear programming
relaxation and is exactly recoverable by message passing.
|
1205.2640
|
Identifying confounders using additive noise models
|
stat.ML cs.LG
|
We propose a method for inferring the existence of a latent common cause
('confounder') of two observed random variables. The method assumes that the
two effects of the confounder are (possibly nonlinear) functions of the
confounder plus independent, additive noise. We discuss under which conditions
the model is identifiable (up to an arbitrary reparameterization of the
confounder) from the joint distribution of the effects. We state and prove a
theoretical result that provides evidence for the conjecture that the model is
generically identifiable under suitable technical conditions. In addition, we
propose a practical method to estimate the confounder from a finite i.i.d.
sample of the effects and illustrate that the method works well on both
simulated and real-world data.
|
1205.2641
|
Bayesian Discovery of Linear Acyclic Causal Models
|
stat.ML cs.LG stat.ME
|
Methods for automated discovery of causal relationships from
non-interventional data have received much attention recently. A widely used
and well understood model family is given by linear acyclic causal models
(recursive structural equation models). For Gaussian data both constraint-based
methods (Spirtes et al., 1993; Pearl, 2000) (which output a single equivalence
class) and Bayesian score-based methods (Geiger and Heckerman, 1994) (which
assign relative scores to the equivalence classes) are available. In
contrast, all current methods able to utilize non-Gaussianity in the data
(Shimizu et al., 2006; Hoyer et al., 2008) always return only a single graph or
a single equivalence class, and so are fundamentally unable to express the
degree of certainty attached to that output. In this paper we develop a
Bayesian score-based approach able to take advantage of non-Gaussianity when
estimating linear acyclic causal models, and we empirically demonstrate that,
at least on very modest size networks, its accuracy is as good as or better
than existing methods. We provide a complete code package (in R) which
implements all algorithms and performs all of the analysis provided in the
paper, and hope that this will further the application of these methods to
solving causal inference problems.
|
1205.2642
|
Improved Mean and Variance Approximations for Belief Net Responses via
Network Doubling
|
cs.AI
|
A Bayesian belief network models a joint distribution with a directed
acyclic graph representing dependencies among variables and network parameters
characterizing conditional distributions. The parameters are viewed as random
variables to quantify uncertainty about their values. Belief nets are used to
compute responses to queries; i.e., conditional probabilities of interest. A
query is a function of the parameters, hence a random variable. Van Allen et
al. (2001, 2008) showed how to quantify uncertainty about a query via a delta
method approximation of its variance. We develop more accurate approximations
for both query mean and variance. The key idea is to extend the query mean
approximation to a "doubled network" involving two independent replicates. Our
method assumes complete data and can be applied to discrete, continuous, and
hybrid networks (provided discrete variables have only discrete parents). We
analyze several improvements, and provide empirical studies to demonstrate
their effectiveness.
|
1205.2643
|
New inference strategies for solving Markov Decision Processes using
reversible jump MCMC
|
cs.LG cs.SY math.OC stat.CO stat.ML
|
In this paper we build on previous work which uses inference techniques, in
particular Markov Chain Monte Carlo (MCMC) methods, to solve parameterized
control problems. We propose a number of modifications in order to make this
approach more practical in general, higher-dimensional spaces. We first
introduce a new target distribution which is able to incorporate more reward
information from sampled trajectories. We also show how to break strong
correlations between the policy parameters and sampled trajectories in order to
sample more freely. Finally, we show how to incorporate these techniques in a
principled manner to obtain estimates of the optimal policy.
|
1205.2644
|
First-Order Mixed Integer Linear Programming
|
cs.LO cs.AI
|
Mixed integer linear programming (MILP) is a powerful representation often
used to formulate decision-making problems under uncertainty. However, it lacks
a natural mechanism to reason about objects, classes of objects, and relations.
First-order logic (FOL), on the other hand, excels at reasoning about classes
of objects, but lacks a rich representation of uncertainty. While representing
propositional logic in MILP has been extensively explored, no theory exists yet
for fully combining FOL with MILP. We propose a new representation, called
first-order programming or FOP, which subsumes both FOL and MILP. We establish
formal methods for reasoning about first order programs, including a sound and
complete lifted inference procedure for integer first order programs. Since FOP
can offer exponential savings in representation and proof size compared to FOL,
and since representations and proofs are never significantly longer in FOP than
in FOL, we anticipate that inference in FOP will be more tractable than
inference in FOL for corresponding problems.
|
1205.2645
|
Distributed Parallel Inference on Large Factor Graphs
|
cs.AI cs.DC
|
As computer clusters become more common and the size of the problems
encountered in the field of AI grows, there is an increasing demand for
efficient parallel inference algorithms. We consider the problem of parallel
inference on large factor graphs in the distributed memory setting of computer
clusters. We develop a new efficient parallel inference algorithm, DBRSplash,
which incorporates over-segmented graph partitioning, belief residual
scheduling, and uniform work Splash operations. We empirically evaluate the
DBRSplash algorithm on a 120 processor cluster and demonstrate linear to
super-linear performance gains on large factor graph models.
|
1205.2646
|
Censored Exploration and the Dark Pool Problem
|
cs.LG cs.GT
|
We introduce and analyze a natural algorithm for multi-venue exploration from
censored data, which is motivated by the Dark Pool Problem of modern
quantitative finance. We prove that our algorithm converges in polynomial time
to a near-optimal allocation policy; prior results for similar problems in
stochastic inventory control guaranteed only asymptotic convergence and
examined variants in which each venue could be treated independently. Our
analysis bears a strong resemblance to that of efficient
exploration/exploitation schemes in the reinforcement learning literature. We
describe an
extensive experimental evaluation of our algorithm on the Dark Pool Problem
using real trading data.
|
1205.2647
|
Generating Optimal Plans in Highly-Dynamic Domains
|
cs.AI
|
Generating optimal plans in highly dynamic environments is challenging. Plans
are predicated on an assumed initial state, but this state can change
unexpectedly during plan generation, potentially invalidating the planning
effort. In this paper we make three contributions: (1) We propose a novel
algorithm for generating optimal plans in settings where frequent, unexpected
events interfere with planning. It is able to quickly distinguish relevant from
irrelevant state changes, and to update the existing planning search tree if
necessary. (2) We argue for a new criterion for evaluating plan adaptation
techniques: the relative running time compared to the "size" of changes. This
is significant since during recovery more changes may occur that need to be
recovered from subsequently, and in order for this process of repeated recovery
to terminate, recovery time has to converge. (3) We show empirically that our
approach can converge and find optimal plans in environments that would
ordinarily defy planning due to their high dynamics.
|
1205.2648
|
Learning Continuous-Time Social Network Dynamics
|
cs.SI cs.LG physics.soc-ph stat.ML
|
We demonstrate that a number of sociology models for social network dynamics
can be viewed as continuous time Bayesian networks (CTBNs). A sampling-based
approximate inference method for CTBNs can be used as the basis of an
expectation-maximization procedure that achieves better accuracy in estimating
the parameters of the model than the standard method of moments
algorithm from the sociology literature. We extend the existing social network
models to allow for indirect and asynchronous observations of the links. A
Markov chain Monte Carlo sampling algorithm for this new model permits
estimation and inference. We provide results on both a synthetic network (for
verification) and real social network data.
|
1205.2650
|
Correlated Non-Parametric Latent Feature Models
|
cs.LG stat.ML
|
We are often interested in explaining data through a set of hidden factors or
features. When the number of hidden features is unknown, the Indian Buffet
Process (IBP) is a nonparametric latent feature model that does not bound the
number of active features in a dataset. However, the IBP assumes that all latent
features are uncorrelated, making it inadequate for many real-world problems. We
introduce a framework for correlated nonparametric feature models, generalising
the IBP. We use this framework to generate several specific models and
demonstrate applications on real-world datasets.
|
1205.2651
|
Seeing the Forest Despite the Trees: Large Scale Spatial-Temporal
Decision Making
|
cs.AI
|
We introduce a challenging real-world planning problem where actions must be
taken at each location in a spatial area at each point in time. We use forestry
planning as the motivating application. In Large Scale Spatial-Temporal (LSST)
planning problems, the state and action spaces are defined as the
cross-products of many local state and action spaces spread over a large
spatial area such as a city or forest. These problems possess state
uncertainty, have complex utility functions involving spatial constraints and
we generally must rely on simulations rather than an explicit transition model.
We define LSST problems as reinforcement learning problems and present a
solution using policy gradients. We compare two different policy formulations:
an explicit policy that identifies each location in space and the action to
take there; and an abstract policy that defines the proportion of actions to
take across all locations in space. We show that the abstract policy is more
robust and achieves higher rewards with far fewer parameters than the
explicit policy. This abstract policy is also a better fit to the properties
that practitioners in LSST problem domains require for such methods to be
widely useful.
|
1205.2652
|
Complexity Analysis and Variational Inference for Interpretation-based
Probabilistic Description Logic
|
cs.AI
|
This paper presents complexity analysis and variational methods for inference
in probabilistic description logics featuring Boolean operators,
quantification, qualified number restrictions, nominals, inverse roles and role
hierarchies. Inference is shown to be PEXP-complete, and variational methods
are designed so as to exploit logical inference whenever possible.
|
1205.2653
|
L2 Regularization for Learning Kernels
|
cs.LG stat.ML
|
The choice of the kernel is critical to the success of many learning
algorithms but it is typically left to the user. Instead, the training data can
be used to learn the kernel by selecting it out of a given family, such as that
of non-negative linear combinations of p base kernels, constrained by a trace
or L1 regularization. This paper studies the problem of learning kernels with
the same family of kernels but with an L2 regularization instead, and for
regression problems. We analyze the problem of learning kernels with ridge
regression. We derive the form of the solution of the optimization problem and
give an efficient iterative algorithm for computing that solution. We present a
novel theoretical analysis of the problem based on stability and give learning
bounds for orthogonal kernels that contain only an additive term O(sqrt(p)/m) when
compared to the standard kernel ridge regression stability bound. We also
report the results of experiments indicating that L1 regularization can lead to
modest improvements for a small number of kernels, but to performance
degradations in larger-scale cases. In contrast, L2 regularization never
degrades performance and in fact achieves significant improvements with a large
number of kernels.
|
1205.2655
|
Mean Field Variational Approximation for Continuous-Time Bayesian
Networks
|
cs.AI
|
Continuous-time Bayesian networks are a natural structured representation
language for multicomponent stochastic processes that evolve continuously over
time. Despite the compact representation, inference in such models is
intractable even in relatively simple structured networks. Here we introduce a
mean field variational approximation in which we use a product of inhomogeneous
Markov processes to approximate a distribution over trajectories. This
variational approach leads to a globally consistent distribution, which can be
efficiently queried. Additionally, it provides a lower bound on the probability
of observations, thus making it attractive for learning tasks. We provide the
theoretical foundations for the approximation, an efficient implementation that
exploits the wide range of highly optimized ordinary differential equations
(ODE) solvers, experimentally explore characterizations of processes for which
this approximation is suitable, and show applications to a large-scale
real-world inference problem.
|
1205.2656
|
Convex Coding
|
cs.LG cs.IT math.IT stat.ML
|
Inspired by recent work on convex formulations of clustering (Lashkari &
Golland, 2008; Nowozin & Bakir, 2008) we investigate a new formulation of the
Sparse Coding Problem (Olshausen & Field, 1997). In sparse coding we attempt to
simultaneously represent a sequence of data-vectors sparsely (i.e. sparse
approximation (Tropp et al., 2006)) in terms of a 'code' defined by a set of
basis elements, while also finding a code that enables such an approximation.
As existing alternating optimization procedures for sparse coding are
theoretically prone to severe local minima problems, we propose a convex
relaxation of the sparse coding problem and derive a boosting-style algorithm
that, following (Nowozin & Bakir, 2008), serves as a convex 'master problem'
which calls a (potentially non-convex) sub-problem to identify the next code
element to add.
Finally, we demonstrate the properties of our boosted coding algorithm on an
image denoising task.
|
1205.2657
|
Multilingual Topic Models for Unaligned Text
|
cs.CL cs.IR cs.LG stat.ML
|
We develop the multilingual topic model for unaligned text (MuTo), a
probabilistic model of text that is designed to analyze corpora composed of
documents in two languages. From these documents, MuTo uses stochastic EM to
simultaneously discover both a matching between the languages and multilingual
latent topics. We demonstrate that MuTo is able to find shared topics on
real-world multilingual corpora, successfully pairing related documents across
languages. MuTo provides a new framework for creating multilingual topic models
without needing carefully curated parallel corpora and allows applications
built using the topic model formalism to be applied to a much wider class of
corpora.
|
1205.2658
|
Optimization of Structured Mean Field Objectives
|
stat.ML cs.LG
|
In intractable, undirected graphical models, an intuitive way of creating
structured mean field approximations is to select an acyclic tractable
subgraph. We show that the hardness of computing the objective function and
gradient of the mean field objective qualitatively depends on a simple graph
property. If the tractable subgraph has this property (we call such subgraphs
v-acyclic), a very fast block coordinate ascent algorithm is possible. If not,
optimization is harder, but we show a new algorithm based on the construction
of an auxiliary exponential family that can be used to make inference possible
in this case as well. We discuss the advantages and disadvantages of each
regime and compare the algorithms empirically.
|
1205.2659
|
Deterministic POMDPs Revisited
|
cs.AI
|
We study a subclass of POMDPs, called Deterministic POMDPs, that is
characterized by deterministic actions and observations. These models do not
provide the same generality of POMDPs yet they capture a number of interesting
and challenging problems, and permit more efficient algorithms. Indeed, some of
the recent work in planning is built around such assumptions mainly by the
quest of amenable models more expressive than the classical deterministic
models. We provide results about the fundamental properties of Deterministic
POMDPs, their relation with AND/OR search problems and algorithms, and their
computational complexity.
|
1205.2660
|
Alternating Projections for Learning with Expectation Constraints
|
cs.LG stat.ML
|
We present an objective function for learning with unlabeled data that
utilizes auxiliary expectation constraints. We optimize this objective function
using a procedure that alternates between information and moment projections.
Our method provides an alternate interpretation of the posterior regularization
framework (Graca et al., 2008), maintains uncertainty during optimization
unlike constraint-driven learning (Chang et al., 2007), and is more efficient
than generalized expectation criteria (Mann & McCallum, 2008). Applications of
this framework include minimally supervised learning, semi-supervised learning,
and learning with constraints that are more expressive than the underlying
model. In experiments, we demonstrate comparable accuracy to generalized
expectation criteria for minimally supervised learning, and use expressive
structural constraints to guide semi-supervised learning, providing a 3%-6%
improvement over state-of-the-art constraint-driven learning.
|
1205.2661
|
REGAL: A Regularization based Algorithm for Reinforcement Learning in
Weakly Communicating MDPs
|
cs.LG
|
We provide an algorithm that achieves the optimal regret rate in an unknown
weakly communicating Markov Decision Process (MDP). The algorithm proceeds in
episodes where, in each episode, it picks a policy using regularization based
on the span of the optimal bias vector. For an MDP with S states and A actions
whose optimal bias vector has span bounded by H, we show a regret bound of
$\tilde{O}(HS\sqrt{AT})$. We also relate the span to various diameter-like
quantities
associated with the MDP, demonstrating how our results improve on previous
regret bounds.
|
1205.2662
|
On Smoothing and Inference for Topic Models
|
cs.LG stat.ML
|
Latent Dirichlet allocation, or topic modeling, is a flexible latent variable
framework for modeling high-dimensional sparse count data. Various learning
algorithms have been developed in recent years, including collapsed Gibbs
sampling, variational inference, and maximum a posteriori estimation, and this
variety motivates the need for careful empirical comparisons. In this paper, we
highlight the close connections between these approaches. We find that the main
differences are attributable to the amount of smoothing applied to the counts.
When the hyperparameters are optimized, the differences in performance among
the algorithms diminish significantly. The ability of these algorithms to
achieve solutions of comparable accuracy gives us the freedom to select
computationally efficient approaches. Using the insights gained from this
comparative study, we show how accurate topic models can be learned in several
seconds on text corpora with thousands of documents.
|
1205.2663
|
Are visual dictionaries generalizable?
|
cs.CV
|
Mid-level features based on visual dictionaries are today a cornerstone of
systems for classification and retrieval of images. Those state-of-the-art
representations depend crucially on the choice of a codebook (visual
dictionary), which is usually derived from the dataset. In general-purpose,
dynamic image collections (e.g., the Web), one cannot have the entire
collection in order to extract a representative dictionary. However, based on
the hypothesis that the dictionary reflects only the diversity of low-level
appearances and does not capture semantics, we argue that a dictionary based on
a small subset of the data, or even on an entirely different dataset, is able
to produce a good representation, provided that the chosen images span a
diverse enough portion of the low-level feature space. Our experiments confirm
that hypothesis, opening the opportunity to greatly alleviate the burden in
generating the codebook, and confirming the feasibility of employing visual
dictionaries in large-scale dynamic environments.
|
1205.2664
|
A Bayesian Sampling Approach to Exploration in Reinforcement Learning
|
cs.LG
|
We present a modular approach to reinforcement learning that uses a Bayesian
representation of the uncertainty over models. The approach, BOSS (Best of
Sampled Set), drives exploration by sampling multiple models from the posterior
and selecting actions optimistically. It extends previous work by providing a
rule for deciding when to resample and how to combine the models. We show that
our algorithm achieves near-optimal reward with high probability with a sample
complexity that is low relative to the speed at which the posterior
distribution converges during learning. We demonstrate that BOSS performs quite
favorably compared to state-of-the-art reinforcement-learning approaches and
illustrate its flexibility by pairing it with a non-parametric model that
generalizes across states.
|
1205.2665
|
Lower Bound Bayesian Networks - An Efficient Inference of Lower Bounds
on Probability Distributions in Bayesian Networks
|
cs.AI
|
We present a new method to propagate lower bounds on conditional probability
distributions in conventional Bayesian networks. Our method guarantees to
provide outer approximations of the exact lower bounds. A key advantage is that
we can use any available algorithms and tools for Bayesian networks in order to
represent and infer lower bounds. This new method yields results that are
provably exact for trees with binary variables, and results which are
competitive to existing approximations in credal networks for all other network
structures. Our method is not limited to a specific kind of network structure.
Basically, it is also not restricted to a specific kind of inference, but we
restrict our analysis to prognostic inference in this article. The
computational complexity is superior to that of other existing approaches.
|
1205.2681
|
Detectability of Symbol Manipulation by an Amplify-and-Forward Relay
|
cs.IT math.IT
|
This paper studies the problem of detecting a potential malicious relay node
by a source node that relies on the relay to forward information to other
nodes. The channel model of two source nodes simultaneously sending symbols to
a relay is considered. The relay is contracted to forward the symbols that it
receives back to the sources in an amplify-and-forward manner. However, there
is a chance that the relay may send altered symbols back to the sources. Each
source attempts to individually detect such malicious acts of the relay by
comparing the empirical distribution of the symbols that it receives from the
relay conditioned on its own transmitted symbols with known stochastic
characteristics of the channel. It is shown that maliciousness of the relay can
be asymptotically detected with sufficient channel observations if and only if
the channel satisfies a non-manipulable condition, which can be easily checked.
As a result, the non-manipulable condition provides a clear-cut criterion to
determine the detectability of the aforementioned class of symbol manipulation
attacks potentially conducted by the relay.
|
1205.2691
|
Improving Schema Matching with Linked Data
|
cs.DB
|
With today's public data sets containing billions of data items, more and
more companies are looking to integrate external data with their traditional
enterprise data to improve business intelligence analysis. These distributed
data sources however exhibit heterogeneous data formats and terminologies and
may contain noisy data. In this paper, we present a novel framework that
enables business users to semi-automatically perform data integration on
potentially noisy tabular data. This framework offers an extension to Google
Refine with novel schema matching algorithms leveraging Freebase rich types.
First experiments show that using Linked Data to map cell values with instances
and column headers with types significantly improves the quality of the
matching results and therefore should lead to more informed decisions.
|
1205.2726
|
Non-Interactive Differential Privacy: a Survey
|
cs.DB
|
The OpenData movement around the globe is demanding more access to information
that lies locked in public or private servers. As recently reported in a
McKinsey publication, this data has significant economic value, yet its
release has the potential to blatantly conflict with people's privacy. Recent
UK government inquiries have shown concern from various parties about the
publication of anonymized databases, as there is a concrete possibility of
user identification by means of linkage attacks. Differential privacy stands
out as a model that provides strong formal guarantees about the anonymity of
the participants in a sanitized database. Only recently, however, have results
demonstrated its applicability to real-life datasets. This paper covers such
breakthrough discoveries, by
reviewing applications of differential privacy for non-interactive publication
of anonymized real-life datasets. Theory, utility and a data-aware comparison
are discussed on a variety of principles and concrete applications.
|
1205.2736
|
How Visibility and Divided Attention Constrain Social Contagion
|
physics.soc-ph cs.CY cs.SI
|
How far and how fast does information spread in social media? Researchers
have recently examined a number of factors that affect information diffusion in
online social networks, including: the novelty of information, users' activity
levels, who they pay attention to, and how they respond to friends'
recommendations. Using URLs as markers of information, we carry out a detailed
study of retweeting, the primary mechanism by which information spreads on the
Twitter follower graph. Our empirical study examines how users respond to an
incoming stimulus, i.e., a tweet (message) from a friend, and reveals that
the "principle of least effort" combined with limited attention plays a
dominant role in retweeting behavior. Specifically, we observe that users
retweet information when it is most visible, such as when it is near the top
of their Twitter stream.
Moreover, our measurements quantify how a user's limited attention is divided
among incoming tweets, providing novel evidence that highly connected
individuals are less likely to propagate an arbitrary tweet. Our study
indicates that the finite ability to process incoming information constrains
social contagion, and we conclude that rapid decay of visibility is the primary
barrier to information propagation online.
|
1205.2797
|
Forecasting of Indian Rupee (INR) / US Dollar (USD) Currency Exchange
Rate Using Artificial Neural Network
|
cs.NE
|
A large and ever-growing part of the global workforce is originally from
India. With the second largest population in the world, India has a lot to
offer in terms of jobs. The sheer number of IT workers makes them a formidable
travelling force as well, easily picking up employment in English-speaking
countries. Since the beginning of the economic crisis in September 2008, many
Indians have returned to their homeland, and this has had a substantial impact
on the Indian Rupee (INR) relative to the US Dollar (USD). Computational
knowledge-based techniques for forecasting have proved highly successful in
recent times. The purpose of this paper is to examine the effects of several
important neural network factors on model fitting and forecasting behaviour.
In this paper, an Artificial Neural Network has successfully been
used for exchange rate forecasting. This paper examines the effects of the
number of inputs and hidden nodes and the size of the training sample on the
in-sample and out-of-sample performance. The Indian Rupee (INR) / US Dollar
(USD) is used for detailed examinations. The number of input nodes has a
greater impact on performance than the number of hidden nodes, while a large
number of observations do reduce forecast errors.
|
1205.2821
|
Texture Analysis And Characterization Using Probability Fractal
Descriptors
|
physics.data-an cs.CV
|
A set of gray-level image texture descriptors based on fractal dimension
estimation is proposed in this work. The proposed method estimates the fractal
dimension using the probability (Voss) method.
multiscale transform to the fractal dimension curves of the texture image. The
proposed texture descriptor method is evaluated in a classification task of
well-known benchmark texture datasets. The results show the strong performance
of the proposed method as a tool for texture image analysis and
characterization.
|
1205.2822
|
Promotional effect on cold start problem and diversity in a data
characteristic based recommendation method
|
cs.IR physics.soc-ph
|
Pure methods generally perform excellently in either recommendation accuracy
or diversity, whereas hybrid methods generally outperform pure cases in both
recommendation accuracy and diversity, but encounter the dilemma of optimal
hybridization parameter selection for different recommendation focuses. In this
article, based on a user-item bipartite network, we propose a data
characteristic based algorithm, by relating the hybridization parameter to the
data characteristic. Different from previous hybrid methods, the present
algorithm adaptively assigns the optimal parameter to each individual item
according to the correlation between the algorithm and the item degrees.
Compared with a highly accurate pure method, and with a hybrid method that is
outstanding in both recommendation accuracy and diversity, our method shows a
remarkable improvement on the long-standing challenge of the cold start
problem, as well as on recommendation diversity, while simultaneously keeping
a high overall recommendation accuracy. Even compared with
an improved hybrid method which is highly efficient on the cold start problem,
the proposed method not only further improves the recommendation accuracy of
the cold items, but also enhances the recommendation diversity. Our work might
provide a promising way to better solve personalized recommendation from the
perspective of relating algorithms to dataset properties.
|
1205.2825
|
Ingroup favoritism and intergroup cooperation under indirect reciprocity
based on group reputation
|
physics.soc-ph cs.SI q-bio.PE
|
Indirect reciprocity in which players cooperate with unacquainted other
players having good reputations is a mechanism for cooperation in relatively
large populations subjected to social dilemma situations. When the population
has group structure, as is often found in social networks, players in
experiments are considered to show behavior that deviates from existing
theoretical models of indirect reciprocity. First, players often show ingroup
favoritism (i.e., cooperation only within the group) rather than full
cooperation (i.e., cooperation within and across groups), even though the
latter is Pareto efficient. Second, in general, humans approximate outgroup
members' personal characteristics, presumably including the reputation used for
indirect reciprocity, by a single value attached to the group. Humans use such
a stereotypic approximation, a phenomenon known as outgroup homogeneity in
social psychology. I propose a model of indirect reciprocity in populations
with group structure to examine the possibility of ingroup favoritism and full
cooperation. In accordance with outgroup homogeneity, I assume that players
approximate outgroup members' personal reputations by a single reputation value
attached to the group. I show that ingroup favoritism and full cooperation are
stable under different social norms (i.e., rules for assigning reputations)
such that they do not coexist in a single model. If players are forced to
consistently use the same social norm for assessing different types of
interactions (i.e., ingroup versus outgroup interactions), only full
cooperation survives. The discovered mechanism is distinct from any form of
group selection. The results also suggest potential methods for reducing
ingroup bias to shift the equilibrium from ingroup favoritism to full
cooperation.
|
1205.2828
|
Cellular Multi-User Two-Way MIMO AF Relaying via Signal Space Alignment:
Minimum Weighted SINR Maximization
|
cs.IT math.IT
|
In this paper, we consider linear MIMO transceiver design for a cellular
two-way amplify-and-forward relaying system consisting of a single
multi-antenna base station, a single multi-antenna relay station, and multiple
multi-antenna mobile stations (MSs). Due to the two-way transmission, the MSs
could suffer from tremendous multi-user interference. We apply an interference
management model exploiting signal space alignment and propose a transceiver
design algorithm, which allows for alleviating the loss in spectral efficiency
due to half-duplex operation and providing flexible performance optimization
accounting for each user's quality of service priorities. Numerical comparisons
to conventional two-way relaying schemes based on bidirectional channel
inversion and spatial division multiple access-only processing show that the
proposed scheme achieves superior error rate and average data rate performance.
|
1205.2833
|
User Association for Load Balancing in Heterogeneous Cellular Networks
|
cs.IT math.IT
|
For small cell technology to significantly increase the capacity of
tower-based cellular networks, mobile users will need to be actively pushed
onto the more lightly loaded tiers (corresponding to, e.g., pico and
femtocells), even if they offer a lower instantaneous SINR than the macrocell
base station (BS). Optimizing a function of the long-term rates for each user
requires (in general) a massive utility maximization problem over all the SINRs
and BS loads. On the other hand, an actual implementation will likely resort to
a simple biasing approach where a BS in tier j is treated as having its SINR
multiplied by a factor A_j>=1, which makes it appear more attractive than the
heavily-loaded macrocell. This paper bridges the gap between these approaches
through several physical relaxations of the network-wide optimal association
problem, whose solution is NP-hard. We provide a low-complexity distributed
algorithm that converges to a near-optimal solution with a theoretical
performance guarantee, and we observe that simple per-tier biasing loses
surprisingly little, if the bias values A_j are chosen carefully. Numerical
results show a large (3.5x) throughput gain for cell-edge users and a 2x rate
gain for median users relative to a max received power association.
|
1205.2850
|
Spectral Efficiency of Multiple Access Fading Channels with Adaptive
Interference Cancellation
|
cs.IT math.IT
|
Reliable estimation of users' channels and data in rapidly time varying
fading environments is a very challenging task of multiuser detection (MUD)
techniques that promise impressive capacity gains for interference limited
systems such as non-orthogonal CDMA and spatial multiplexing MIMO based LTE.
This paper analyzes relative channel estimation error performances of
conventional single user and multiuser receivers for an uplink of DS-CDMA and
shows their impact on output signal to interference and noise ratio (SINR)
performances. Mean squared error (MSE) of channel estimation and achievable
spectral efficiencies of these receivers obtained from the output SINR
calculations are then compared with that achieved with new adaptive
interference canceling receivers. It is shown that the adaptive receivers using
successive (SIC) and parallel interference cancellation (PIC) methods offer
much improved channel estimation and SINR performances, and hence significant
increase in achievable sum data rates.
|
1205.2857
|
Operations on soft sets revisited
|
cs.AI
|
Soft sets, as a mathematical tool for dealing with uncertainty, have recently
gained considerable attention, including some successful applications in
information processing, decision, demand analysis, and forecasting. To
construct new soft sets from given soft sets, some operations on soft sets have
been proposed. Unfortunately, such operations cannot keep all classical
set-theoretic laws true for soft sets. In this paper, we redefine the
intersection, complement, and difference of soft sets and investigate the
algebraic properties of these operations along with a known union operation. We
find that the new operation system on soft sets inherits all basic properties
of operations on classical sets, which justifies our definitions.
|
1205.2874
|
Decoupling Exploration and Exploitation in Multi-Armed Bandits
|
cs.LG
|
We consider a multi-armed bandit problem where the decision maker can explore
and exploit different arms at every round. The exploited arm adds to the
decision maker's cumulative reward (without necessarily observing the reward)
while the explored arm reveals its value. We devise algorithms for this setup
and show that the dependence on the number of arms, k, can be much better than
the standard square root of k dependence, depending on the behavior of the
arms' reward sequences. For the important case of piecewise stationary
stochastic bandits, we show a significant improvement over existing algorithms.
Our algorithms are based on a non-uniform sampling policy, which we show is
essential to the success of any algorithm in the adversarial setup. Finally, we
show some simulation results on an ultra-wide band channel selection inspired
setting indicating the applicability of our algorithms.
|
1205.2876
|
Universal Bounds on the Scaling Behavior of Polar Codes
|
cs.IT math.IT
|
We consider the problem of determining the trade-off between the rate and the
block-length of polar codes for a given block error probability when we use the
successive cancellation decoder. We take the sum of the Bhattacharyya
parameters as a proxy for the block error probability, and show that there
exists a universal parameter $\mu$ such that for any binary memoryless
symmetric channel $W$ with capacity $I(W)$, reliable communication requires
rates that satisfy $R< I(W)-\alpha N^{-\frac{1}{\mu}}$, where $\alpha$ is a
positive constant and $N$ is the block-length. We provide lower bounds on
$\mu$, namely $\mu \geq 3.553$, and we conjecture that indeed $\mu=3.627$, the
parameter for the binary erasure channel.
|
1205.2877
|
Clustering of random scale-free networks
|
cond-mat.dis-nn cs.SI physics.soc-ph
|
We derive the finite size dependence of the clustering coefficient of
scale-free random graphs generated by the configuration model with degree
distribution exponent $2<\gamma<3$. Degree heterogeneity increases the presence
of triangles in the network up to levels that compare to those found in many
real networks even for extremely large nets. We also find that for values of
$\gamma \approx 2$, clustering is virtually size independent and, at the same
time, becomes a {\it de facto} non self-averaging topological property. This
implies that a single instance network is not representative of the ensemble
even for very large network sizes.
|
1205.2880
|
Efficient Spatial Keyword Search in Trajectory Databases
|
cs.DB
|
An increasing amount of trajectory data is being annotated with text
descriptions to better capture the semantics associated with locations. The
fusion of spatial locations and text descriptions in trajectories engenders a
new type of top-$k$ queries that take into account both aspects. Each
trajectory in consideration consists of a sequence of geo-spatial locations
associated with text descriptions. Given a user location $\lambda$ and a
keyword set $\psi$, a top-$k$ query returns $k$ trajectories whose text
descriptions cover the keywords $\psi$ and that have the shortest match
distance. To the best of our knowledge, previous research on querying
trajectory databases has focused on trajectory data without any text
description, and no existing work has studied this kind of top-$k$ query on
trajectories. This paper proposes a novel method for efficiently computing
top-$k$ trajectories. The method is developed based on a new hybrid index,
cell-keyword conscious B$^+$-tree, denoted by \cellbtree, which enables us to
exploit both text relevance and location proximity to facilitate efficient and
effective query processing. The results of our extensive empirical studies with
an implementation of the proposed algorithms on BerkeleyDB demonstrate that our
proposed methods are capable of achieving excellent performance and good
scalability.
|
1205.2889
|
A Comparative Study on the Performance of the Top DBMS Systems
|
cs.DB cs.PF
|
Database management systems are today's most reliable means of organizing data
into collections that can be searched and updated. However, many DBMS systems
are available on the market, each having its pros and cons in terms of
reliability, usability, security, and performance. This paper presents a
comparative study on the performance of the top DBMS systems, namely MS SQL
Server 2008, Oracle 11g, IBM DB2, MySQL 5.5, and MS Access 2010. The testing
is aimed at executing different SQL queries with different levels of
complexity over the five DBMSs under test. This would pave the way
to build a head-to-head comparative evaluation that shows the average execution
time, memory usage, and CPU utilization of each DBMS after completion of the
test.
|
1205.2891
|
Effective performance of information retrieval on web by using web
crawling
|
cs.IR
|
The World Wide Web consists of more than 50 billion pages online. It is highly
dynamic, i.e., the web continuously introduces new capabilities and attracts
many people. Due to this explosion in size, an effective information retrieval
system or search engine is required to access the information. In this paper we
have proposed the EPOW (Effective Performance of WebCrawler) architecture. It
is a software agent whose main objective is to minimize the overload of a user
locating needed information. We have designed the web crawler by considering
the parallelization policy. Since our EPOW crawler has a highly optimized
system it can download a large number of pages per second while being robust
against crashes. We have also proposed using data structure concepts for the
implementation of the scheduler and circular queue to improve the performance
of our web crawler.
|
1205.2909
|
Evolution of robust network topologies: Emergence of central backbones
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.comp-ph
|
We model the robustness against random failure or intentional attack of
networks with arbitrary large-scale structure. We construct a block-based model
which incorporates --- in a general fashion --- both connectivity and
interdependence links, as well as arbitrary degree distributions and block
correlations. By optimizing the percolation properties of this general class of
networks, we identify a simple core-periphery structure as the topology most
robust against random failure. In such networks, a distinct and small "core" of
nodes with higher degree is responsible for most of the connectivity,
functioning as a central "backbone" of the system. This centralized topology
remains the optimal structure when other constraints are imposed, such as a
given fraction of interdependence links and fixed degree distributions. This
distinguishes simple centralized topologies as the most likely to emerge, when
robustness against failure is the dominant evolutionary force.
|
1205.2930
|
Density Sensitive Hashing
|
cs.IR cs.LG
|
Nearest neighbors search is a fundamental problem in various research fields
like machine learning, data mining and pattern recognition. Recently,
hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), have proved
effective for scalable high-dimensional nearest neighbors search. Many hashing
algorithms find their theoretical roots in random projection. Since these
algorithms generate the hash tables (projections) randomly, a large number of
hash tables (i.e., long codewords) are required in order to achieve both high
precision and recall. To address this limitation, we propose a novel hashing
algorithm called {\em Density Sensitive Hashing} (DSH) in this paper. DSH can
be regarded as an extension of LSH. By exploring the geometric structure of the
data, DSH avoids the purely random projections selection and uses those
projective functions which best agree with the distribution of the data.
Extensive experimental results on real-world data sets have shown that the
proposed method achieves better performance compared to the state-of-the-art
hashing approaches.
|
1205.2952
|
Synchronization and quorum sensing in a swarm of humanoid robots
|
nlin.AO cs.RO
|
With the advent of inexpensive simple humanoid robots, new classes of robotic
questions can be considered experimentally. One of these is collective behavior
of groups of humanoid robots, and in particular robot synchronization and
swarming. The goal of this work is to robustly synchronize a group of humanoid
robots, and to demonstrate the approach experimentally on a choreography of 8
robots. We aim to be robust to network latencies, and to allow robots to join
or leave the group at any time (for example a fallen robot should be able to
stand up to rejoin the choreography). Contraction theory is used to allow each
robot in the group to synchronize to a common virtual oscillator, and quorum
sensing strategies are exploited to fit within the available bandwidth. The
humanoids used are Naos, developed by Aldebaran Robotics.
|
1205.2958
|
b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning
and Using GPUs for Fast Preprocessing with Simple Hash Functions
|
cs.IR cs.DB cs.LG
|
In this paper, we study several critical issues which must be tackled before
one can apply b-bit minwise hashing to the volumes of data often used in
industrial applications, especially in the context of search.
1. (b-bit) Minwise hashing requires an expensive preprocessing step that
computes k (e.g., 500) minimal values after applying the corresponding
permutations for each data vector. We developed a parallelization scheme using
GPUs and observed that the preprocessing time can be reduced by a factor of
20-80 and becomes substantially smaller than the data loading time.
2. One major advantage of b-bit minwise hashing is that it can substantially
reduce the amount of memory required for batch learning. However, as online
algorithms become increasingly popular for large-scale learning in the context
of search, it is not clear if b-bit minwise hashing yields significant
improvements for
them. This paper demonstrates that $b$-bit minwise hashing provides an
effective data size/dimension reduction scheme and hence it can dramatically
reduce the data loading time for each epoch of the online training process.
This is significant because online learning often requires many (e.g., 10 to
100) epochs to reach a sufficient accuracy.
3. Another critical issue is that for very large data sets it becomes
impossible to store a (fully) random permutation matrix, due to its space
requirements. Our paper is the first study to demonstrate that $b$-bit minwise
hashing implemented using simple hash functions, e.g., the 2-universal (2U) and
4-universal (4U) hash families, can produce very similar learning results as
using fully random permutations. Experiments on datasets of up to 200GB are
presented.
|
1205.2996
|
Predictive Complexity and Generalized Entropy Rate of Stationary Ergodic
Processes
|
cs.IT math.IT
|
In the online prediction framework, we use generalized entropy to study
the loss rate of predictors when outcomes are drawn according to stationary
ergodic distributions over the binary alphabet. We show that the notion of
generalized entropy of a regular game \cite{KVV04} is well-defined for
stationary ergodic distributions. In proving this, we obtain new game-theoretic
proofs of some classical information theoretic inequalities. Using Birkhoff's
ergodic theorem and convergence properties of conditional distributions, we
prove that a classical Shannon-McMillan-Breiman theorem holds for a restricted
class of regular games, when no computational constraints are imposed on the
prediction strategies.
If a game is mixable, then there is an optimal aggregating strategy which
loses at most an additive constant when compared to any other lower
semicomputable strategy. The loss incurred by this algorithm on an infinite
sequence of outcomes is called its predictive complexity. We use our version of
the Shannon-McMillan-Breiman theorem to prove that when a restricted regular game
has a predictive complexity, the predictive complexity converges to the
generalized entropy of the game almost everywhere with respect to the
stationary ergodic distribution.
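The generalized entropy the abstract refers to can be stated compactly (notation illustrative: $\Gamma$ is the game's prediction space and $\lambda$ its loss function):

```latex
H(p) \;=\; \inf_{\gamma \in \Gamma}\; \mathbb{E}_{\omega \sim p}\,\lambda(\omega, \gamma),
\qquad \text{e.g.}\quad
\lambda_{\log}(\omega,\gamma) = -\log \gamma(\omega)
\;\Longrightarrow\;
H(p) = -\sum_{\omega} p(\omega)\log p(\omega).
```

For the log-loss game this recovers Shannon entropy, so the convergence result above takes a Shannon-McMillan-Breiman form: predictive complexity per symbol tends to the generalized entropy rate almost everywhere.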
|
1205.3020
|
Bayesian Hypothesis Test for Sparse Support Recovery using Belief
Propagation
|
cs.IT math.IT
|
In this paper, we introduce a new support recovery algorithm from noisy
measurements called Bayesian hypothesis test via belief propagation (BHT-BP).
BHT-BP focuses on sparse support recovery rather than sparse signal estimation.
The key idea behind BHT-BP is to detect the support set of a sparse vector
using hypothesis test where the posterior densities used in the test are
obtained with the aid of belief propagation (BP). Since BP provides precise posterior
information using the noise statistic, BHT-BP can recover the support with
robustness against the measurement noise. In addition, BHT-BP has low
computational cost compared to the other algorithms by the use of BP. We show
the support recovery performance of BHT-BP as a function of the parameters (N, M, K, SNR) and
compare the performance of BHT-BP to OMP and Lasso via numerical results.
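The per-index hypothesis test at the heart of this approach can be illustrated on a toy model: compare the posterior probability that each entry is active against a threshold. Here the marginal posteriors are computed exactly by enumerating all supports of a tiny Bernoulli-amplitude sparse model (in BHT-BP these marginals would come from belief propagation instead; all numbers below are illustrative):

```python
import itertools
import math

def support_posteriors(y, A, sigma2, p_active, x_amp=1.0):
    """Exact marginal posteriors P(x_j != 0 | y) for a tiny sparse model
    with +-x_amp amplitudes, Gaussian noise, and i.i.d. Bernoulli support,
    computed by brute-force enumeration over all 2^n supports."""
    m, n = len(y), len(A[0])
    total = 0.0
    marg = [0.0] * n
    for supp in itertools.product([0, 1], repeat=n):
        x = [x_amp * s for s in supp]
        # Gaussian log-likelihood of the residual y - A x
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        loglik = -sum(ri * ri for ri in r) / (2 * sigma2)
        prior = math.prod(p_active if s else 1 - p_active for s in supp)
        w = math.exp(loglik) * prior
        total += w
        for j in range(n):
            if supp[j]:
                marg[j] += w
    return [mj / total for mj in marg]

# 3-dim signal with true support {0}, observed via 2 noisy measurements.
A = [[1.0, 0.2, 0.1],
     [0.1, 1.0, 0.2]]
y = [1.02, 0.11]                       # approx A @ [1, 0, 0] plus small noise
post = support_posteriors(y, A, sigma2=0.01, p_active=0.2)
support = [j for j, pj in enumerate(post) if pj > 0.5]   # hypothesis test
```

The point of BP in the actual algorithm is to obtain these marginals in low-order polynomial time rather than by the exponential enumeration used in this sketch.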
|
1205.3031
|
The model of information retrieval based on the theory of hypercomplex
numerical systems
|
cs.IR
|
The paper provides a description of a new model of information retrieval,
which extends the vector-space model and is based on the principles of the
theory of hypercomplex numerical systems. The model realizes, to some extent,
the idea of fuzzy search and makes it possible to apply practical developments
in the field of hypercomplex numerical systems to information retrieval.
|
1205.3054
|
Approximate Modified Policy Iteration
|
cs.AI
|
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that
contains the two celebrated policy and value iteration methods. Despite its
generality, MPI has not been thoroughly studied, especially its approximation
form which is used when the state and/or action spaces are large or infinite.
In this paper, we propose three implementations of approximate MPI (AMPI) that
are extensions of well-known approximate DP algorithms: fitted-value iteration,
fitted-Q iteration, and classification-based policy iteration. We provide error
propagation analyses that unify those for approximate policy and value
iteration. For the last, classification-based implementation, we develop a
finite-sample analysis showing that MPI's main parameter allows one to control
the balance between the estimation error of the classifier and the overall
value function approximation.
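The exact (non-approximate) form of MPI that AMPI extends is simple to state: alternate a greedy policy improvement with m applications of the Bellman operator for the current policy, so that m = 1 recovers value iteration and large m approaches policy iteration. A minimal sketch on a two-state toy MDP (the MDP and parameter values are illustrative):

```python
def modified_policy_iteration(P, R, gamma, m, iters=100):
    """MPI on a finite MDP: greedy improvement followed by m steps of
    partial policy evaluation. m=1 is value iteration; m -> infinity
    approaches policy iteration (MPI's 'main parameter' in the abstract)."""
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    for _ in range(iters):
        # greedy policy with respect to the current value estimate
        def q(s, a):
            return R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(nS))
        pi = [max(range(nA), key=lambda a, s=s: q(s, a)) for s in range(nS)]
        # m applications of the Bellman operator for the fixed policy pi
        for _ in range(m):
            V = [R[s][pi[s]] + gamma * sum(P[s][pi[s]][t] * V[t]
                                           for t in range(nS))
                 for s in range(nS)]
    return V, pi

# Two states: in state 0, action 0 stays (reward 1), action 1 moves to
# state 1 (reward 0); state 1 is absorbing with reward 2 per step.
P = [[[1, 0], [0, 1]],
     [[0, 1], [0, 1]]]
R = [[1.0, 0.0], [2.0, 2.0]]
V, pi = modified_policy_iteration(P, R, gamma=0.9, m=5)
# optimal: V(1) = 2/(1-0.9) = 20, V(0) = 0.9 * 20 = 18, pi(0) = move
```

AMPI replaces the exact greedy step and the exact Bellman applications with fitted regressors or classifiers, which is where the error-propagation analysis comes in.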
|
1205.3058
|
A Tight Lower Bound on the Controllability of Networks with Multiple
Leaders
|
cs.SY math.OC
|
In this paper we study the controllability of networked systems with static
network topologies using tools from algebraic graph theory. Each agent in the
network acts in a decentralized fashion by updating its state in accordance
with a nearest-neighbor averaging rule, known as the consensus dynamics. In
order to control the system, external control inputs are injected into the so
called leader nodes, and the influence is propagated throughout the network.
Our main result is a tight topological lower bound on the rank of the
controllability matrix for such systems with arbitrary network topologies and
possibly multiple leaders.
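The object the bound concerns is the rank of the controllability matrix [B, AB, ..., A^(n-1)B] for the consensus dynamics x' = -Lx + Bu, where L is the graph Laplacian and B selects the leader nodes. A small self-contained check (the graph and leader choice are illustrative; a 4-node path controlled from an end node is a classically controllable case):

```python
def controllability_rank(A, B, tol=1e-9):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B],
    computed by tolerance-based Gaussian elimination."""
    n = len(A)
    M = []
    for col in zip(*B):                 # iterate over the columns of B
        v = list(col)
        for _ in range(n):
            M.append(v)                 # each row of M is one column A^i b
            v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    rank = 0
    for c in range(n):
        piv = next((r for r in range(rank, len(M)) if abs(M[r][c]) > tol), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and abs(M[r][c]) > tol:
                f = M[r][c] / M[rank][c]
                M[r] = [M[r][j] - f * M[rank][j] for j in range(n)]
        rank += 1
    return rank

# Consensus dynamics x' = -L x + B u on a 4-node path, leader = node 0.
L = [[ 1, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  1]]
A = [[-L[i][j] for j in range(4)] for i in range(4)]
B = [[1.0], [0.0], [0.0], [0.0]]    # external input injected at the leader
rank = controllability_rank(A, B)   # full rank: the path is controllable
```

The paper's contribution is a purely topological lower bound on this rank, computable from the network structure without forming the matrix at all.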
|
1205.3062
|
Malware Detection Module using Machine Learning Algorithms to Assist in
Centralized Security in Enterprise Networks
|
cs.CR cs.LG
|
Malicious software is abundant in a world of innumerable computer users, who
are constantly faced with these threats from various sources like the internet,
local networks and portable drives. Malware is potentially low to high risk and
can cause systems to function incorrectly, steal data and even crash. Malware
may be executable or system library files in the form of viruses, worms,
Trojans, all aimed at breaching the security of the system and compromising
user privacy. Typically, anti-virus software is based on a signature definition
system which keeps updating from the internet and thus keeping track of known
viruses. While this may be sufficient for home-users, a security risk from a
new virus could threaten an entire enterprise network. This paper proposes a
new and more sophisticated antivirus engine that can not only scan files, but
also build knowledge and detect files as potential viruses. This is done by
extracting system API calls made by various normal and harmful executables, and
using machine learning algorithms to classify and hence, rank files on a scale
of security risk. While such a system is processor heavy, it is very effective
when used centrally to protect an enterprise network, which may be more prone to
such threats.
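The classify-and-rank step described above can be sketched with a multinomial Naive Bayes classifier over API-call counts. This is one plausible choice of learning algorithm, not necessarily the paper's; the traces and API names below are purely illustrative toy data:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes over API-call traces with Laplace
    smoothing. The log-posterior score can also serve as a coarse
    risk ranking for unseen executables."""
    vocab = sorted({call for d in docs for call in d})
    classes = sorted(set(labels))
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d)
    loglik = {}
    for c in classes:
        total = sum(counts[c].values()) + len(vocab)   # Laplace smoothing
        loglik[c] = {w: math.log((counts[c][w] + 1) / total) for w in vocab}

    def score(doc, c):
        # calls outside the training vocabulary are ignored in this sketch
        return prior[c] + sum(loglik[c].get(w, 0.0) for w in doc)

    def classify(doc):
        return max(classes, key=lambda c: score(doc, c))

    return classify, score

docs = [["CreateFile", "ReadFile", "CloseHandle"],
        ["CreateFile", "WriteFile", "CloseHandle"],
        ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread"],
        ["SetWindowsHookEx", "WriteProcessMemory", "OpenProcess"]]
labels = ["benign", "benign", "malware", "malware"]
classify, score = train_nb(docs, labels)
verdict = classify(["OpenProcess", "CreateRemoteThread"])
```

Running the scorer over all classes, rather than only taking the argmax, gives the graded "scale of security risk" the abstract mentions.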
|
1205.3068
|
Bridge the Gap: Measuring and Analyzing Technical Data for Social Trust
between Smartphones
|
cs.NI cs.HC cs.SI
|
Mobiles are nowadays the most relevant communication devices in terms of
quantity and flexibility. As in most MANETs, ad-hoc communication between two
mobile phones requires mutual trust between the devices. A new way of
establishing this trust conducts social trust from technically measurable data
(e.g., interaction logs). To explore the relation between social and technical
trust, we conduct a large-scale survey with more than 217 Android users and
analyze their anonymized call and message logs. We show that a reliable a
priori trust value for a mobile system can be derived from common social
communication metrics.
|
1205.3109
|
Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based
Search
|
cs.LG cs.AI stat.ML
|
Bayesian model-based reinforcement learning is a formally elegant approach to
learning optimal behaviour under model uncertainty, trading off exploration and
exploitation in an ideal way. Unfortunately, finding the resulting
Bayes-optimal policies is notoriously taxing, since the search space becomes
enormous. In this paper we introduce a tractable, sample-based method for
approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our
approach outperformed prior Bayesian model-based RL algorithms by a significant
margin on several well-known benchmark problems -- because it avoids expensive
applications of Bayes rule within the search tree by lazily sampling models
from the current beliefs. We illustrate the advantages of our approach by
showing it working in an infinite state space domain which is qualitatively out
of reach of almost all previous work in Bayesian exploration.
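The "lazily sampling models from the current beliefs" idea can be conveyed by a root-sampling sketch: each simulation first draws a model from the posterior and then evaluates actions under that sampled model. The example below is a deliberately degenerate one-step case on a two-armed Bernoulli bandit with Beta posteriors (in a BAMCP-style planner the evaluation step is a full Monte-Carlo tree search, not the single greedy decision used here; all numbers are illustrative):

```python
import random

def root_sampling_plan(counts, n_sims=2000, seed=0):
    """Bayes-adaptive planning by root sampling: per simulation, draw
    arm means from Beta posteriors (the 'current beliefs'), act under
    the sampled model, and simulate the reward it generates. The most
    visited action is recommended."""
    rng = random.Random(seed)
    n_arms = len(counts)
    value = [0.0] * n_arms
    visits = [0] * n_arms
    for _ in range(n_sims):
        # lazily sample one model from the posterior (no Bayes rule in-tree)
        theta = [rng.betavariate(s + 1, f + 1) for (s, f) in counts]
        a = max(range(n_arms), key=lambda i: theta[i])  # act under the sample
        r = 1.0 if rng.random() < theta[a] else 0.0     # simulate that model
        visits[a] += 1
        value[a] += (r - value[a]) / visits[a]          # running mean
    return max(range(n_arms), key=lambda a: visits[a])

# Beliefs after (successes, failures) per arm: arm 1 looks clearly better.
best = root_sampling_plan([(2, 8), (8, 2)])
```

The exploration/exploitation trade-off emerges because uncertain arms are sometimes sampled optimistically, so they still receive simulations without any explicit in-tree posterior update.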
|