| id | title | categories | abstract |
|---|---|---|---|
| 1206.6465 | Bayesian Efficient Multiple Kernel Learning | cs.LG stat.ML | Multiple kernel learning algorithms are proposed to combine kernels in order to obtain a better similarity measure or to integrate feature representations coming from different data sources. Most of the previous research on such methods has focused on computational efficiency. However, it is still not feasible to combine many kernels using existing Bayesian approaches due to their high time complexity. We propose a fully conjugate Bayesian formulation and derive a deterministic variational approximation, which allows us to combine hundreds or thousands of kernels very efficiently. We briefly explain how the proposed method can be extended for multiclass learning and semi-supervised learning. Experiments with large numbers of kernels on benchmark data sets show that our inference method is quite fast, requiring less than a minute. On one bioinformatics and three image recognition data sets, our method outperforms previously reported results with better generalization performance. |
| 1206.6466 | Utilizing Static Analysis and Code Generation to Accelerate Neural Networks | cs.NE cs.MS cs.PL | As datasets continue to grow, neural network (NN) applications are becoming increasingly limited by both the amount of available computational power and the ease of developing high-performance applications. Researchers often must have expert systems knowledge to make their algorithms run efficiently. Although available computing power increases rapidly each year, algorithm efficiency is not able to keep pace due to the use of general purpose compilers, which are not able to fully optimize specialized application domains. Within the domain of NNs, we have the added knowledge that network architecture remains constant during training, meaning the architecture's data structure can be statically optimized by a compiler. In this paper, we present SONNC, a compiler for NNs that utilizes static analysis to generate optimized parallel code. We show that SONNC's use of static optimizations makes it able to outperform hand-optimized C++ code by up to 7.8X, and MATLAB code by up to 24X. Additionally, we show that using SONNC significantly reduces code complexity when using structurally sparse networks. |
| 1206.6467 | Semi-Supervised Collective Classification via Hybrid Label Regularization | cs.LG stat.ML | Many classification problems involve data instances that are interlinked with each other, such as webpages connected by hyperlinks. Techniques for "collective classification" (CC) often increase accuracy for such data graphs, but usually require a fully-labeled training graph. In contrast, we examine how to improve the semi-supervised learning of CC models when given only a sparsely-labeled graph, a common situation. We first describe how to use novel combinations of classifiers to exploit the different characteristics of the relational features vs. the non-relational features. We also extend the ideas of "label regularization" to such hybrid classifiers, enabling them to leverage the unlabeled data to bias the learning process. We find that these techniques, which are efficient and easy to implement, significantly increase accuracy on three real datasets. In addition, our results explain conflicting findings from prior related studies. |
| 1206.6468 | Variational Inference in Non-negative Factorial Hidden Markov Models for Efficient Audio Source Separation | cs.LG cs.SD stat.ML | The past decade has seen substantial work on the use of non-negative matrix factorization and its probabilistic counterparts for audio source separation. Although able to capture audio spectral structure well, these models neglect the non-stationarity and temporal dynamics that are important properties of audio. The recently proposed non-negative factorial hidden Markov model (N-FHMM) introduces a temporal dimension and improves source separation performance. However, the factorial nature of this model makes the complexity of inference exponential in the number of sound sources. Here, we present a Bayesian variant of the N-FHMM suited to an efficient variational inference algorithm, whose complexity is linear in the number of sound sources. Our algorithm performs comparably to exact inference in the original N-FHMM but is significantly faster. In typical configurations of the N-FHMM, our method achieves around a 30x increase in speed. |
| 1206.6469 | Inferring Latent Structure From Mixed Real and Categorical Relational Data | cs.LG stat.ML | We consider analysis of relational data (a matrix), in which the rows correspond to subjects (e.g., people) and the columns correspond to attributes. The elements of the matrix may be a mix of real and categorical values. Each subject and attribute is characterized by a latent binary feature vector, and an inferred matrix maps each row-column pair of binary feature vectors to an observed matrix element. The latent binary features of the rows are modeled via a multivariate Gaussian distribution with low-rank covariance matrix, and the Gaussian random variables are mapped to latent binary features via a probit link. The same type of construction is applied jointly to the columns. The model infers latent, low-dimensional binary features associated with each row and each column, as well as the correlation structure between all rows and between all columns. |
| 1206.6470 | A Combinatorial Algebraic Approach for the Identifiability of Low-Rank Matrix Completion | cs.LG cs.DM cs.NA stat.ML | In this paper, we review the problem of matrix completion and expose its intimate relations with algebraic geometry, combinatorics and graph theory. We present the first necessary and sufficient combinatorial conditions for matrices of arbitrary rank to be identifiable from a set of matrix entries, yielding theoretical constraints and new algorithms for the problem of matrix completion. We conclude by algorithmically evaluating the tightness of the given conditions and algorithms for practically relevant matrix sizes, showing that the algebraic-combinatorial approach can lead to improvements over state-of-the-art matrix completion methods. |
| 1206.6471 | On Causal and Anticausal Learning | cs.LG stat.ML | We consider the problem of function estimation in the case where an underlying causal model can be inferred. This has implications for popular scenarios such as covariate shift, concept drift, transfer learning and semi-supervised learning. We argue that causal knowledge may facilitate some approaches for a given problem, and rule out others. In particular, we formulate a hypothesis for when semi-supervised learning can help, and corroborate it with empirical results. |
| 1206.6472 | An Efficient Approach to Sparse Linear Discriminant Analysis | cs.LG stat.ML | We present a novel approach to the formulation and the resolution of sparse Linear Discriminant Analysis (LDA). Our proposal is based on penalized Optimal Scoring. It has an exact equivalence with penalized LDA, contrary to the multi-class approaches based on the regression of class indicators that have been proposed so far. Sparsity is obtained thanks to a group-Lasso penalty that selects the same features in all discriminant directions. Our experiments demonstrate that this approach generates extremely parsimonious models without compromising prediction performance. Besides prediction, the resulting sparse discriminant directions are also amenable to low-dimensional representations of data. Our algorithm is highly efficient for medium to large numbers of variables, and is thus particularly well suited to the analysis of gene expression data. |
| 1206.6473 | Compositional Planning Using Optimal Option Models | cs.AI cs.LG | In this paper we introduce a framework for option model composition. Option models are temporal abstractions that, like macro-operators in classical planning, jump directly from a start state to an end state. Prior work has focused on constructing option models from primitive actions, by intra-option model learning; or on using option models to construct a value function, by inter-option planning. We present a unified view of intra- and inter-option model learning, based on a major generalisation of the Bellman equation. Our fundamental operation is the recursive composition of option models into other option models. This key idea enables compositional planning over many levels of abstraction. We illustrate our framework using a dynamic programming algorithm that simultaneously constructs optimal option models for multiple subgoals, and also searches over those option models to provide rapid progress towards other subgoals. |
| 1206.6474 | Estimation of Simultaneously Sparse and Low Rank Matrices | cs.DS cs.LG cs.NA stat.ML | The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves the $\ell_1$-norm and the trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound the generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets. |
| 1206.6475 | A Split-Merge Framework for Comparing Clusterings | cs.LG stat.ML | Clustering evaluation measures are frequently used to evaluate the performance of algorithms. However, most measures are not properly normalized and ignore some information in the inherent structure of clusterings. We model the relation between two clusterings as a bipartite graph and propose a general component-based decomposition formula based on the components of the graph. Most existing measures are examples of this formula. In order to satisfy consistency in the component, we further propose a split-merge framework for comparing clusterings of different data sets. Our framework gives measures that are conditionally normalized, and it can make use of data point information, such as feature vectors and pairwise distances. We use an entropy-based instance of the framework and a coreference resolution data set to demonstrate empirically the utility of our framework over other measures. |
| 1206.6476 | Similarity Learning for Provably Accurate Sparse Linear Classification | cs.LG cs.AI stat.ML | In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions. Most of the state of the art focuses on learning Mahalanobis distances (which must satisfy a constraint of positive semi-definiteness) for use in a local k-NN algorithm. However, no theoretical link is established between the learned metrics and their performance in classification. In this paper, we make use of the formal framework of good similarities introduced by Balcan et al. to design an algorithm for learning a non-PSD linear similarity optimized in a nonlinear feature space, which is then used to build a global linear classifier. We show that our approach has uniform stability and derive a generalization bound on the classification error. Experiments performed on various datasets confirm the effectiveness of our approach compared to state-of-the-art methods and provide evidence that it (i) is fast, (ii) is robust to overfitting and (iii) produces very sparse classifiers. |
| 1206.6477 | Discovering Support and Affiliated Features from Very High Dimensions | cs.LG stat.ML | In this paper, a novel learning paradigm is presented to automatically identify groups of informative and correlated features from very high dimensions. Specifically, we explicitly incorporate correlation measures as constraints and then propose an efficient embedded feature selection method using a recently developed cutting plane strategy. The benefits of the proposed algorithm are twofold. First, it can identify the feature subset that is optimally discriminative with respect to the output labels and internally uncorrelated, denoted here as Support Features, which brings about significant improvements in prediction performance over other state-of-the-art feature selection methods considered in the paper. Second, during the learning process, the underlying group structures of correlated features associated with each support feature, denoted as Affiliated Features, can also be discovered without any additional cost. These affiliated features serve to improve the interpretability of the learning tasks. Extensive empirical studies on both synthetic and very high dimensional real-world datasets verify the validity and efficiency of the proposed method. |
| 1206.6478 | Maximum Margin Output Coding | cs.LG stat.ML | In this paper we study output coding for multi-label prediction. For a multi-label output coding to be discriminative, it is important that codewords for different label vectors are significantly different from each other. In the meantime, unlike in traditional coding theory, codewords in output coding are to be predicted from the input, so it is also critical to have a predictable label encoding. To find output codes that are both discriminative and predictable, we first propose a max-margin formulation that naturally captures these two properties. We then convert it to a metric learning formulation, but with an exponentially large number of constraints as commonly encountered in structured prediction problems. Without a label structure for tractable inference, we use overgenerating (i.e., relaxation) techniques combined with the cutting plane method for optimization. In our empirical study, the proposed output coding scheme outperforms a variety of existing multi-label prediction methods for image, text and music classification. |
| 1206.6479 | The Landmark Selection Method for Multiple Output Prediction | cs.LG stat.ML | Conditional modeling x \to y is a central problem in machine learning. A substantial research effort is devoted to such modeling when x is high dimensional. We consider, instead, the case of a high dimensional y, where x is either low dimensional or high dimensional. Our approach is based on selecting a small subset y_L of the dimensions of y, and proceeding by modeling (i) x \to y_L and (ii) y_L \to y. Composing these two models, we obtain a conditional model x \to y that possesses convenient statistical properties. Multi-label classification and multivariate regression experiments on several datasets show that this model outperforms the one-vs-all approach as well as several sophisticated multiple output prediction methods. |
| 1206.6480 | A Dantzig Selector Approach to Temporal Difference Learning | cs.LG stat.ML | LSTD is a popular algorithm for value function approximation. Whenever the number of features is larger than the number of samples, it must be paired with some form of regularization. In particular, L1-regularization methods tend to perform feature selection by promoting sparsity, and thus, are well-suited for high-dimensional problems. However, since LSTD is not a simple regression algorithm but solves a fixed-point problem, its integration with L1-regularization is not straightforward and might come with some drawbacks (e.g., the P-matrix assumption for LASSO-TD). In this paper, we introduce a novel algorithm obtained by integrating LSTD with the Dantzig Selector. We investigate the performance of the proposed algorithm and its relationship with the existing regularized approaches, and show how it addresses some of their drawbacks. |
| 1206.6481 | Cross Language Text Classification via Subspace Co-Regularized Multi-View Learning | cs.CL cs.IR cs.LG | In many multilingual text classification problems, the documents in different languages often share the same set of categories. To reduce the labeling cost of training a classification model for each individual language, it is important to transfer the label knowledge gained from one language to another language by conducting cross language classification. In this paper we develop a novel subspace co-regularized multi-view learning method for cross language text classification. This method is built on parallel corpora produced by machine translation. It jointly minimizes the training error of each classifier in each language while penalizing the distance between the subspace representations of parallel documents. Our empirical study on a large set of cross language text classification tasks shows that the proposed method consistently outperforms a number of inductive methods, domain adaptation methods, and multi-view learning methods. |
| 1206.6482 | Modeling Images using Transformed Indian Buffet Processes | cs.CV cs.LG stat.ML | Latent feature models are attractive for image modeling, since images generally contain multiple objects. However, many latent feature models ignore that objects can appear at different locations or require pre-segmentation of images. While the transformed Indian buffet process (tIBP) provides a method for modeling transformation-invariant features in unsegmented binary images, its current form is inappropriate for real images because of its computational cost and modeling assumptions. We combine the tIBP with likelihoods appropriate for real images and develop an efficient inference algorithm, using the cross-correlation between images and features, that is theoretically and empirically faster than existing inference techniques. Our method discovers reasonable components and achieves effective image reconstruction in natural images. |
| 1206.6483 | Subgraph Matching Kernels for Attributed Graphs | cs.LG stat.ML | We propose graph kernels based on subgraph matchings, i.e. structure-preserving bijections between subgraphs. While recently proposed kernels based on common subgraphs (Wale et al., 2008; Shervashidze et al., 2009) in general cannot be applied to attributed graphs, our approach allows mappings of subgraphs to be rated by a flexible scoring scheme that compares vertex and edge attributes by kernels. We show that subgraph matching kernels generalize several known kernels. To compute the kernel we propose a graph-theoretical algorithm inspired by a classical relation between common subgraphs of two graphs and cliques in their product graph observed by Levi (1973). Encouraging experimental results on a classification task of real-world graphs are presented. |
| 1206.6484 | Apprenticeship Learning for Model Parameters of Partially Observable Environments | cs.LG cs.AI stat.ML | We consider apprenticeship learning, i.e., having an agent learn a task by observing an expert demonstrating the task in a partially observable environment when the model of the environment is uncertain. This setting is useful in applications where the explicit modeling of the environment is difficult, such as a dialogue system. We show that we can extract information about the environment model by inferring the action selection process behind the demonstration, under the assumption that the expert is choosing optimal actions based on knowledge of the true model of the target environment. The proposed algorithms can achieve more accurate estimates of POMDP parameters and better policies from a short demonstration, compared to methods that learn only from the environment's reactions. |
| 1206.6485 | Greedy Algorithms for Sparse Reinforcement Learning | cs.LG stat.ML | Feature selection and regularization are becoming increasingly prominent tools in the efforts of the reinforcement learning (RL) community to expand the reach and applicability of RL. One approach to the problem of feature selection is to impose a sparsity-inducing form of regularization on the learning method. Recent work on $L_1$ regularization has adapted techniques from the supervised learning literature for use with RL. Another approach that has received renewed attention in the supervised learning community is that of using a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the $L_1$ regularization methods, while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. This paper considers variants of orthogonal matching pursuit (OMP) applied to reinforcement learning. The resulting algorithms are analyzed and compared experimentally with existing $L_1$ regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant, OMP-BRM, provides promising theoretical guarantees under certain assumptions on the feature dictionary. Another variant, OMP-TD, empirically outperforms prior methods both in approximation accuracy and efficiency on several benchmark problems. |
| 1206.6486 | Flexible Modeling of Latent Task Structures in Multitask Learning | cs.LG stat.ML | Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the "right" latent task structure should be learned in a data-driven manner. We present a flexible, nonparametric Bayesian model that posits a mixture of factor analyzers structure on the tasks. The nonparametric aspect makes the model expressive enough to subsume many existing models of latent task structures (e.g., mean-regularized tasks, clustered tasks, low-rank or linear/non-linear subspace assumptions on tasks, etc.). Moreover, it can also learn more general task structures, addressing the shortcomings of such models. We present a variational inference algorithm for our model. Experimental results on synthetic and real-world datasets, on both regression and classification problems, demonstrate the effectiveness of the proposed method. |
| 1206.6487 | An Adaptive Algorithm for Finite Stochastic Partial Monitoring | cs.LG cs.GT stat.ML | We present a new anytime algorithm that achieves near-optimal regret for any instance of finite stochastic partial monitoring. In particular, the new algorithm achieves the minimax regret, within logarithmic factors, for both "easy" and "hard" problems. For easy problems, it additionally achieves logarithmic individual regret. Most importantly, the algorithm is adaptive in the sense that if the opponent strategy is in an "easy region" of the strategy space then the regret grows as if the problem was easy. As an implication, we show that under some reasonable additional assumptions, the algorithm enjoys an O(\sqrt{T}) regret in Dynamic Pricing, proven to be hard by Bartok et al. (2011). |
| 1206.6488 | The Nonparanormal SKEPTIC | stat.ME cs.LG stat.ML | We propose a semiparametric approach, named nonparanormal skeptic, for estimating high dimensional undirected graphical models. In terms of modeling, we consider the nonparanormal family proposed by Liu et al. (2009). In terms of estimation, we exploit nonparametric rank-based correlation coefficient estimators, including Spearman's rho and Kendall's tau. In high dimensional settings, we prove that the nonparanormal skeptic achieves the optimal parametric rate of convergence in both graph and parameter estimation. This result suggests that the nonparanormal graphical models are a safe replacement for the Gaussian graphical models, even when the data are Gaussian. |
| 1206.6514 | Investigation of Color Constancy for Ubiquitous Wireless LAN/Camera Positioning: An Initial Outcome | cs.CV cs.HC | This paper presents our investigation of color constancy for the hybridization of Wireless LAN and camera positioning on mobile phones. Five typical color constancy schemes are analyzed in different location environments. The results can be combined with RF signals from Wireless LAN positioning by using a model fitting approach in order to establish absolute positioning output. No conventional searching algorithm is required, which is expected to reduce the complexity of computation. Finally, we present our preliminary results to illustrate the performance evaluation of the indoor positioning algorithm for an indoor environment set-up. |
| 1206.6544 | Minimum KL-divergence on complements of $L_1$ balls | cs.IT math.IT | Pinsker's widely used inequality upper-bounds the total variation distance $\|P-Q\|_1$ in terms of the Kullback-Leibler divergence $D(P\|Q)$. Although in general a bound in the reverse direction is impossible, in many applications the quantity of interest is actually $D^*(P,\epsilon)$, defined, for an arbitrary fixed $P$, as the infimum of $D(P\|Q)$ over all distributions $Q$ that are $\epsilon$-far away from $P$ in total variation. We show that $D^*(P,\epsilon)\le C\epsilon^2 + O(\epsilon^3)$, where $C=C(P)=1/2$ for "balanced" distributions, thereby providing a kind of reverse Pinsker inequality. An application to large deviations is given, and some of the structural results may be of independent interest. Keywords: Pinsker inequality, Sanov's theorem, large deviations |
| 1206.6557 | Mining Event Logs to Support Workflow Resource Allocation | cs.SE cs.DB | Workflow technology is widely used to facilitate business processes in enterprise information systems (EIS), and it has the potential to reduce design time, enhance product quality and decrease product cost. However, significant limitations still exist: as an important task in the context of workflow, many resource allocation operations are still performed manually, which is time-consuming. This paper presents a data mining approach to address the resource allocation problem (RAP) and improve the productivity of workflow resource management. Specifically, an Apriori-like algorithm is used to find the frequent patterns from the event log, and association rules are generated according to predefined resource allocation constraints. Subsequently, a correlation measure named lift is utilized to annotate the negatively correlated resource allocation rules for resource reservation. Finally, the rules are ranked using the confidence measures as resource allocation rules. Comparative experiments are performed using C4.5, SVM, ID3, Naïve Bayes and the presented approach, and the results show that the presented approach is effective in terms of both accuracy and candidate resource recommendations. |
| 1206.6561 | Regenerative and Adaptive Schemes Based on Network Coding for Wireless Relay Networks | cs.IT math.IT | Recent technological advances in wireless communications offer new opportunities and challenges for relay networks. To enhance system performance, the Demodulate-Network Coding (Dm-NC) scheme has been examined at the relay node; it directly de-maps the received signals and then forwards the mixture to the destination. Simulation analysis has shown that the performance of Dm-NC is superior to that of analog-NC. In addition, the Quantize-Decode-NC (QDF-NC) scheme has been introduced. The presented simulation results clearly show that QDF-NC performs better than analog-NC. Toggling between analog-NC and QDF-NC is simulated in order to investigate delay and power consumption reduction at the relay node. |
| 1206.6570 | Extension of the Three-Variable Counterfactual Causal Graphic Model: from Two-Value to Three-Value Random Variables | stat.ME cs.AI | The extension of the counterfactual causal graphic model with a three-variable vertex set in a directed acyclic graph (DAG) is discussed in this paper, by extending the distributions of the variables involved in the DAG from two-value to three-value. Using conditional independence as ancillary information, 6 kinds of extended counterfactual causal graphic models, in which some variables are extended from two-value to three-value distributions, are presented, and the sufficient conditions for identifiability are derived. |
| 1206.6584 | An Approximate Coding-Rate Versus Minimum Distance Formula for Binary Codes | cs.IT math.IT | We devise an analytically simple as well as invertible approximate expression, which describes the relation between the minimum distance of a binary code and the corresponding maximum attainable code-rate. For example, for a rate-(1/4), length-256 binary code the best known bounds limit the attainable minimum distance to 65<d(n=256,k=64)<90, while our solution yields d(n=256,k=64)=74.4. The proposed formula attains the approximation accuracy within the rounding error for ~97% of (n,k) scenarios, where the exact value of the minimum distance is known. The results provided may be utilized for the analysis and design of efficient communication systems. |
| 1206.6588 | How the online social networks are used: Dialogs-based structure of MySpace | physics.soc-ph cs.SI | Quantitative study of collective dynamics in online social networks is a new challenge based on the abundance of empirical data. Conclusions, however, may depend on factors such as users' psychological profiles and their reasons for using online contacts. In this paper we have compiled and analyzed two datasets from \texttt{MySpace}. The data contain networked dialogs occurring within a specified time depth, with high temporal resolution, and the texts of messages, in which the emotion valence is assessed by using the SentiStrength classifier. Performing a comprehensive analysis we obtain three groups of results: The dynamic topology of the dialogs-based networks has a characteristic structure with a Zipf's distribution of communities, low link reciprocity, and disassortative correlations. Overlaps supporting the "weak-ties" hypothesis are found to follow the laws recently conjectured for online games. Long-range temporal correlations and persistent fluctuations occur in the time series of messages carrying positive (negative) emotion. Patterns of user communications show a dominant positive emotion (attractiveness), a strong impact of circadian cycles, and interactivity times longer than one day. Taken together, these results give new insight into the functioning of online social networks and unveil the importance of the amount of information and emotion that is communicated along the social links. (All data used in this study are fully anonymized.) |
| 1206.6646 | Aggregate Skyline Join Queries: Skylines with Aggregate Operations over Multiple Relations | cs.DB | Multi-criteria decision making, which is possible with the advent of skyline queries, has been applied in many areas. Though most of the existing research is concerned with only a single relation, several real world applications require finding the skyline set of records over multiple relations. Consequently, the join operation over skylines, where the preferences are local to each relation, has been proposed. In many of those cases, however, the join often involves performing aggregate operations among some of the attributes from the different relations. In this paper, we introduce such queries as "aggregate skyline join queries". Since the naive algorithm is impractical, we propose three algorithms to efficiently process such queries. The algorithms utilize certain properties of skyline sets, and process the skylines as much as possible locally before computing the join. Experiments with real and synthetic datasets exhibit the practicality and scalability of the algorithms with respect to the cardinality and dimensionality of the relations. |
| 1206.6648 | Asynchronous Decentralized Event-triggered Control | math.OC cs.SY | In this paper we propose an approach to the implementation of controllers with decentralized strategies triggering controller updates. We consider set-ups with a central node in charge of the computation of the control commands, and a set of not co-located sensors providing measurements to the controller node. The solution we propose does not require measurements from the sensors to be synchronized in time. The sensors in our proposal provide measurements in an aperiodic way triggered by local conditions. Furthermore, in the proposed implementation (most of) the communication between nodes requires only the exchange of one bit of information (per controller update), which could aid in reducing transmission delays and as a secondary effect result in fewer transmissions being triggered. |
1206.6651
|
Geometric properties of graph layouts optimized for greedy navigation
|
physics.soc-ph cs.SI
|
The graph layouts used for complex network studies have mainly been developed
to improve visualization. If we interpret the layouts in metric spaces such as
Euclidean ones, however, the embedded spatial information can be a valuable cue
for various purposes. In this work, we focus on the navigational properties of
spatial graphs. We use a recently proposed user-centric navigation protocol to
explore spatial layouts of complex networks that are optimal for navigation.
These layouts are generated with a simple simulated annealing optimization
technique. We compare these layouts to others targeted at better
visualization, and discuss the spatial statistical properties of the layouts
optimized for navigability and their implications.
|
1206.6679
|
Fixed-Form Variational Posterior Approximation through Stochastic Linear
Regression
|
stat.CO cs.CV stat.ML
|
We propose a general algorithm for approximating nonstandard Bayesian
posterior distributions. The algorithm minimizes the Kullback-Leibler
divergence of an approximating distribution to the intractable posterior
distribution. Our method can be used to approximate any posterior distribution,
provided that it is given in closed form up to the proportionality constant.
The approximation can be any distribution in the exponential family or any
mixture of such distributions, which means that it can be made arbitrarily
precise. Several examples illustrate the speed and accuracy of our
approximation method in practice.
|
1206.6682
|
Pricing-based Distributed Downlink Beamforming in Multi-Cell OFDMA
Networks
|
cs.IT cs.NI math.IT
|
We address the problem of downlink beamforming for mitigating the co-channel
interference in multi-cell OFDMA networks. Based on the network utility
maximization framework, we formulate the problem as a non-convex optimization
problem subject to the per-cell power constraints, in which a general utility
function of SINR is used to characterize the network performance. Some
classical utility functions, such as the proportional fairness utility, the
weighted sum-rate utility and the $\alpha$-fairness utility, are subsumed as
special cases of our formulation. To solve the problem in a distributed
fashion, we devise an algorithm based on the non-cooperative game with pricing
mechanism. We give a sufficient condition for the convergence of the algorithm
to the Nash equilibrium (NE), and analyze the information exchange overhead
among the base stations. Moreover, to speed up the optimization of the
beam-vectors at each cell, we derive an efficient algorithm to solve for the
KKT conditions at each cell. We provide extensive simulation results to
demonstrate that the proposed distributed multi-cell beamforming algorithm
converges to an NE point in just a few iterations with low information exchange
overhead. Moreover, it provides significant performance gains, especially under
the strong interference scenario, in comparison with several existing
multi-cell interference mitigation schemes, such as the distributed
interference alignment method.
|
1206.6722
|
Piecewise Linear Topology, Evolutionary Algorithms, and Optimization
Problems
|
cs.NE math.GN math.OC
|
Schemata theory, Markov chains, and statistical mechanics have been used to
explain how evolutionary algorithms (EAs) work. Incremental success has been
achieved with all of these methods, but each has been stymied by limitations
related to its less-than-global view. We show that moving the investigation
into topological space improves our understanding of why EAs work.
|
1206.6728
|
Epidemic thresholds of the Susceptible-Infected-Susceptible model on
networks: A comparison of numerical and theoretical results
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Recent work has shown that different theoretical approaches to the dynamics
of the Susceptible-Infected-Susceptible (SIS) model for epidemics lead to
qualitatively different estimates for the position of the epidemic threshold in
networks. Here we present large-scale numerical simulations of the SIS dynamics
on various types of networks, allowing the precise determination of the
effective threshold for systems of finite size N. We compare quantitatively the
numerical thresholds with theoretical predictions of the heterogeneous
mean-field theory and of the quenched mean-field theory. We show that the
latter is in general more accurate, scaling with N with the correct exponent,
but often failing to capture the correct prefactor.
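A minimal discrete-time SIS simulation conveys the dynamics being studied; this synchronous-update sketch is a simplification (the paper's large-scale simulations use more careful dynamics), and the adjacency-dict representation and parameter names are illustrative assumptions:

```python
import random

def sis_step(adj, infected, beta, mu, rng):
    """One synchronous SIS update: each infected node recovers with prob mu;
    each susceptible node with k infected neighbours is infected with
    prob 1 - (1 - beta)**k."""
    new_infected = set()
    for node in adj:
        if node in infected:
            if rng.random() >= mu:  # fails to recover, stays infected
                new_infected.add(node)
        else:
            k = sum(1 for nb in adj[node] if nb in infected)
            if k and rng.random() < 1 - (1 - beta) ** k:
                new_infected.add(node)
    return new_infected

def prevalence(adj, beta, mu, steps=200, seed=0):
    """Fraction of infected nodes after a fixed number of steps,
    starting from a fully infected network."""
    rng = random.Random(seed)
    infected = set(adj)
    for _ in range(steps):
        infected = sis_step(adj, infected, beta, mu, rng)
        if not infected:
            break
    return len(infected) / len(adj)
```

Sweeping `beta` at fixed `mu` and locating where the long-run prevalence departs from zero is the basic numerical procedure behind an effective-threshold estimate for finite N.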
|
1206.6735
|
Elimination of Spurious Ambiguity in Transition-Based Dependency Parsing
|
cs.CL cs.AI
|
We present a novel technique to remove spurious ambiguity from transition
systems for dependency parsing. Our technique chooses a canonical sequence of
transition operations (computation) for a given dependency tree. Our technique
can be applied to a large class of bottom-up transition systems, including for
instance Nivre (2004) and Attardi (2006).
|
1206.6741
|
Categorization of interestingness measures for knowledge extraction
|
cs.IT math.IT
|
Finding interesting association rules is an important and active research
field in data mining. The algorithms of the Apriori family are based on two
rule extraction measures, support and confidence. Although these two measures
have the virtue of being algorithmically fast, they generate a prohibitive
number of rules most of which are redundant and irrelevant. It is therefore
necessary to use further measures which filter uninteresting rules. Many
synthesis studies were then realized on the interestingness measures according
to several points of view. Different reported studies have been carried out to
identify "good" properties of rule extraction measures and these properties
have been assessed on 61 measures. The purpose of this paper is twofold. First
to extend the number of the measures and properties to be studied, in addition
to the formalization of the properties proposed in the literature. Second, in
the light of this formal study, to categorize the studied measures. This paper
leads then to identify categories of measures in order to help the users to
efficiently select an appropriate measure by choosing one or more measure(s)
during the knowledge extraction process. The properties evaluation on the 61
measures has enabled us to identify 7 classes of measures, classes that we
obtained using two different clustering techniques.
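The two Apriori measures mentioned above, plus lift as one representative additional interestingness measure, can be computed directly from transaction data. This is a generic sketch (names and the set-based transaction encoding are illustrative), not one of the 61 measures' formalizations from the paper:

```python
def measures(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent,
    where transactions is a list of item sets."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0  # >1 suggests positive dependence
    return support, confidence, lift
```

Support and confidence alone accept many redundant rules; a measure like lift already filters rules whose confidence merely reflects the consequent's base rate, which is the kind of property the paper's categorization formalizes.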
|
1206.6808
|
A Multi-State Power Model for Adequacy Assessment of Distributed
Generation via Universal Generating Function
|
cs.OH cs.PF cs.SY
|
The current and future developments of electric power systems are pushing the
boundaries of reliability assessment to consider distribution networks with
renewable generators. Given the stochastic features of these elements, most
modeling approaches rely on Monte Carlo simulation. The computational costs
associated with the simulation approach restrict the analysis to mostly
small-sized systems, i.e. with a limited number of lumped components of a given
renewable technology (e.g. wind or solar) whose behavior is described by a
binary state, working or failed. In this paper, we propose an analytical
multi-state modeling approach for the reliability assessment of distributed
generation (DG). The approach can account for a number of diverse energy
generation technologies distributed on the system. Multiple states are used to describe
the randomness in the generation units, due to the stochastic nature of the
generation sources and of the mechanical degradation/failure behavior of the
generation systems. The universal generating function (UGF) technique is used
for the individual component multi-state modeling. A multiplication-type
composition operator is introduced to combine the UGFs for the mechanical
degradation and renewable generation source states into the UGF of the
renewable generator power output. The overall multi-state DG system UGF is then
constructed and classical reliability indices (e.g. loss of load expectation
(LOLE), expected energy not supplied (EENS)) are computed from the DG system
generation and load UGFs. An application of the model is shown on a DG system
adapted from the IEEE 34 nodes distribution test feeder.
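A UGF can be represented as a map from a state value to its probability, and the multiplication-type composition operator described above then combines the mechanical-degradation and source UGFs into the power-output UGF. The numeric states below are illustrative assumptions, not the paper's case-study data:

```python
from collections import defaultdict

def ugf_compose(u1, u2, op):
    """Combine two UGFs {state_value: probability} with composition
    operator op, summing probabilities of coinciding resulting states."""
    out = defaultdict(float)
    for v1, p1 in u1.items():
        for v2, p2 in u2.items():
            out[op(v1, v2)] += p1 * p2
    return dict(out)

# Illustrative states: mechanical condition as a fraction of rated output,
# renewable source as available power in kW.
mech = {1.0: 0.9, 0.5: 0.08, 0.0: 0.02}
source = {10.0: 0.3, 4.0: 0.5, 0.0: 0.2}
power = ugf_compose(mech, source, lambda a, b: a * b)
```

From such a power-output UGF for each generator, the system UGF and indices like LOLE/EENS follow by further compositions against the load distribution.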
|
1206.6811
|
An Information-Theoretic Perspective of the Poisson Approximation via
the Chen-Stein Method
|
cs.IT math.IT math.PR
|
The first part of this work considers the entropy of the sum of (possibly
dependent and non-identically distributed) Bernoulli random variables. Upper
bounds on the error that follows from an approximation of this entropy by the
entropy of a Poisson random variable with the same mean are derived via the
Chen-Stein method. The second part of this work derives new lower bounds on the
total variation (TV) distance and relative entropy between the distribution of
the sum of independent Bernoulli random variables and the Poisson distribution.
The starting point of the derivation of the new bounds in the second part of
this work is an introduction of a new lower bound on the total variation
distance, whose derivation generalizes and refines the analysis by Barbour and
Hall (1984), based on the Chen-Stein method for the Poisson approximation. A
new lower bound on the relative entropy between these two distributions is
introduced, and this lower bound is compared to a previously reported upper
bound on the relative entropy by Kontoyiannis et al. (2005). The derivation of
the new lower bound on the relative entropy follows from the new lower bound on
the total variation distance, combined with a distribution-dependent refinement
of Pinsker's inequality by Ordentlich and Weinberger (2005). Upper and lower
bounds on the Bhattacharyya parameter, Chernoff information and Hellinger
distance between the distribution of the sum of independent Bernoulli random
variables and the Poisson distribution with the same mean are derived as well
via some relations between these quantities with the total variation distance
and the relative entropy. The analysis in this work combines elements of
information theory with the Chen-Stein method for the Poisson approximation.
The resulting bounds are easy to compute, and their applicability is
exemplified.
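The quantities involved are easy to compute exactly for small examples. The sketch below evaluates the total variation distance between a sum of independent Bernoulli variables and the matching Poisson law and checks it against Le Cam's classical upper bound (the sum of the p_i^2); the bounds derived in the paper itself are sharper and are not reproduced here:

```python
from math import exp, factorial

def bernoulli_sum_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i), by convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

def tv_to_poisson(ps, tail=60):
    """Total variation distance to Poisson(sum p_i); the truncation at
    `tail` is adequate for small means only."""
    lam = sum(ps)
    pmf = bernoulli_sum_pmf(ps)
    poi = [exp(-lam) * lam ** k / factorial(k) for k in range(tail)]
    pmf = pmf + [0.0] * (tail - len(pmf))
    return 0.5 * sum(abs(a - b) for a, b in zip(pmf, poi))
```

For n = 50 variables with p = 0.02, the computed distance is strictly positive yet below Le Cam's bound of 0.02, illustrating the regime the lower bounds in the paper aim to pin down from the other side.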
|
1206.6813
|
A concentration theorem for projections
|
cs.LG stat.ML
|
Suppose X in R^D has mean zero and finite second moments. We show that there
is a precise sense in which almost all linear projections of X into R^d (for d
< D) look like a scale-mixture of spherical Gaussians -- specifically, a
mixture of distributions N(0, sigma^2 I_d) where the weight of the particular
sigma component is P(|X|^2 = sigma^2 D). The extent of this effect depends
upon the ratio of d to D, and upon a particular coefficient of eccentricity of
X's distribution. We explore this result in a variety of experiments.
|
1206.6814
|
An Empirical Comparison of Algorithms for Aggregating Expert Predictions
|
cs.AI cs.LG
|
Predicting the outcomes of future events is a challenging problem for which a
variety of solution methods have been explored and attempted. We present an
empirical comparison of a variety of online and offline adaptive algorithms for
aggregating experts' predictions of the outcomes of five years of US National
Football League games (1319 games) using expert probability elicitations
obtained from an Internet contest called ProbabilitySports. We find that it is
difficult to improve over simple averaging of the predictions in terms of
prediction accuracy, but that there is room for improvement in quadratic loss.
Somewhat surprisingly, a Bayesian estimation algorithm which estimates the
variance of each expert's prediction exhibits the most consistent superior
performance over simple averaging among our collection of algorithms.
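Two of the baselines compared above are easy to state in code. The inverse-variance weighting below is a hedged stand-in for the Bayesian variance-estimating aggregator, not the paper's actual algorithm; all names and numbers are illustrative:

```python
def simple_average(preds):
    """Unweighted mean of the experts' probability forecasts."""
    return sum(preds) / len(preds)

def precision_weighted(preds, variances):
    """Weight each expert by the inverse of its estimated variance
    (illustrative stand-in for a Bayesian variance-estimation aggregator)."""
    w = [1.0 / v for v in variances]
    return sum(wi * p for wi, p in zip(w, preds)) / sum(w)

def quadratic_loss(prob, outcome):
    """Brier-style quadratic loss for a binary-outcome forecast."""
    return (prob - outcome) ** 2
```

When one expert is reliably sharper (lower variance), weighting pulls the aggregate toward it, which is the mechanism behind the improvement in quadratic loss reported above.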
|
1206.6815
|
Discriminative Learning via Semidefinite Probabilistic Models
|
cs.LG stat.ML
|
Discriminative linear models are a popular tool in machine learning. These
can be generally divided into two types: The first is linear classifiers, such
as support vector machines, which are well studied and provide state-of-the-art
results. One shortcoming of these models is that their output (known as the
'margin') is not calibrated, and cannot be translated naturally into a
distribution over the labels. Thus, it is difficult to incorporate such models
as components of larger systems, unlike probabilistic based approaches. The
second type of approach constructs class conditional distributions using a
nonlinearity (e.g. log-linear models), but is occasionally worse in terms of
classification error. We propose a supervised learning method which combines
the best of both approaches. Specifically, our method provides a distribution
over the labels, which is a linear function of the model parameters. As a
consequence, differences between probabilities are linear functions, a property
which most probabilistic models (e.g. log-linear) do not have.
Our model assumes that classes correspond to linear subspaces (rather than to
half spaces). Using a relaxed projection operator, we construct a measure which
evaluates the degree to which a given vector 'belongs' to a subspace, resulting
in a distribution over labels. Interestingly, this view is closely related to
similar concepts in quantum detection theory. The resulting models can be
trained either to maximize the margin or to optimize average likelihood
measures. The corresponding optimization problems are semidefinite programs
which can be solved efficiently. We illustrate the performance of our algorithm
on real world datasets, and show that it outperforms 2nd order kernel methods.
|
1206.6816
|
MAIES: A Tool for DNA Mixture Analysis
|
cs.AI cs.CE stat.AP
|
We describe an expert system, MAIES, developed for analysing forensic
identification problems involving DNA mixture traces using quantitative peak
area information. Peak area information is represented by conditional Gaussian
distributions, and inference based on exact junction tree propagation
ascertains whether individuals, whose profiles have been measured, have
contributed to the mixture. The system can also be used to predict DNA profiles
of unknown contributors by separating the mixture into its individual
components. The use of the system is illustrated with an application to a real
world example. The system implements a novel MAP (maximum a posteriori) search
algorithm that is described in an appendix.
|
1206.6817
|
A Variational Approach for Approximating Bayesian Networks by Edge
Deletion
|
cs.AI
|
We consider in this paper the formulation of approximate inference in
Bayesian networks as a problem of exact inference on an approximate network
that results from deleting edges (to reduce treewidth). We have shown in
earlier work that deleting edges calls for introducing auxiliary network
parameters to compensate for lost dependencies, and proposed intuitive
conditions for determining these parameters. We have also shown that our method
corresponds to IBP when enough edges are deleted to yield a polytree, and
corresponds to some generalizations of IBP when fewer edges are deleted. In
this paper, we propose a different criteria for determining auxiliary
parameters based on optimizing the KL-divergence between the original and
approximate networks. We discuss the relationship between the two methods for
selecting parameters, shedding new light on IBP and its generalizations. We
also discuss the application of our new method to approximating inference
problems which are exponential in constrained treewidth, including MAP and
nonmyopic value of information.
|
1206.6818
|
Sensitivity Analysis for Threshold Decision Making with Dynamic Networks
|
cs.AI cs.CE
|
The effect of inaccuracies in the parameters of a dynamic Bayesian network
can be investigated by subjecting the network to a sensitivity analysis. Having
detailed the resulting sensitivity functions in our previous work, we now study
the effect of parameter inaccuracies on a recommended decision in view of a
threshold decision-making model. We detail the effect of varying a single and
multiple parameters from a conditional probability table and present a
computational procedure for establishing bounds between which assessments for
these parameters can be varied without inducing a change in the recommended
decision. We illustrate the various concepts involved by means of a real-life
dynamic network in the field of infectious disease.
|
1206.6819
|
On the Robustness of Most Probable Explanations
|
cs.AI
|
In Bayesian networks, a Most Probable Explanation (MPE) is a complete
variable instantiation with a highest probability given the current evidence.
In this paper, we discuss the problem of finding robustness conditions of the
MPE under single parameter changes. Specifically, we ask the question: How much
change in a single network parameter can we afford to apply while keeping the
MPE unchanged? We will describe a procedure, which is the first of its kind,
that computes this answer for each parameter in the Bayesian network
in time O(n exp(w)), where n is the number of network variables and w is its
treewidth.
|
1206.6820
|
Optimal Coordinated Planning Amongst Self-Interested Agents with Private
State
|
cs.GT cs.AI
|
Consider a multi-agent system in a dynamic and uncertain environment. Each
agent's local decision problem is modeled as a Markov decision process (MDP)
and agents must coordinate on a joint action in each period, which provides a
reward to each agent and causes local state transitions. A social planner knows
the model of every agent's MDP and wants to implement the optimal joint policy,
but agents are self-interested and have private local state. We provide an
incentive-compatible mechanism for eliciting state information that achieves
the optimal joint plan in a Markov perfect equilibrium of the induced
stochastic game. In the special case in which local problems are Markov chains
and agents compete to take a single action in each period, we leverage Gittins
allocation indices to provide an efficient factored algorithm and distribute
computation of the optimal policy among the agents. Distributed, optimal
coordinated learning in a multi-agent variant of the multi-armed bandit problem
is obtained as a special case.
|
1206.6821
|
Graphical Condition for Identification in recursive SEM
|
cs.AI stat.ME
|
The paper concerns the problem of predicting the effect of actions or
interventions on a system from a combination of (i) statistical data on a set
of observed variables, and (ii) qualitative causal knowledge encoded in the
form of a directed acyclic graph (DAG). The DAG represents a set of linear
equations called a Structural Equation Model (SEM), whose coefficients are
parameters representing direct causal effects. Reliable quantitative
conclusions can only be obtained from the model if the causal effects are
uniquely determined by the data. That is, if there exists a unique
parametrization for the model that makes it compatible with the data. If this
is the case, the model is called identified. The main result of the paper is a
general sufficient condition for identification of recursive SEM models.
|
1206.6822
|
Cutset Sampling with Likelihood Weighting
|
cs.AI
|
The paper analyzes theoretically and empirically the performance of
likelihood weighting (LW) on a subset of nodes in Bayesian networks. The
proposed scheme requires fewer samples to converge due to reduction in sampling
variance. The method exploits the structure of the network to bound the
complexity of exact inference used to compute sampling distributions, similar
to Gibbs cutset sampling. Yet, the extension of the previously proposed cutset
sampling principles to likelihood weighting is non-trivial due to differences
in the sampling processes of Gibbs sampler and LW. We demonstrate empirically
that likelihood weighting on a cutset (LWLC) is effective time-wise and has a
lower rejection rate than LW when applied to networks with many deterministic
probabilities. Finally, we show that the performance of likelihood weighting on
a cutset can be improved further by caching computed sampling distributions
and, consequently, learning 'zeros' of the target distribution.
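Plain likelihood weighting (without the cutset refinement studied in the paper) can be illustrated on a two-node network A -> B: sample A from its prior and weight the sample by the likelihood of the evidence B = 1. The CPT numbers below are illustrative assumptions:

```python
import random

def lw_posterior(n_samples=200000, seed=1):
    """Likelihood weighting estimate of P(A=1 | B=1) on a two-node
    network A -> B with P(A=1)=0.3, P(B=1|A=1)=0.8, P(B=1|A=0)=0.1."""
    rng = random.Random(seed)
    p_a = 0.3
    p_b_given_a = {1: 0.8, 0: 0.1}
    num = den = 0.0
    for _ in range(n_samples):
        a = 1 if rng.random() < p_a else 0  # sample the non-evidence node
        w = p_b_given_a[a]                  # weight by evidence likelihood
        num += w * a
        den += w
    return num / den
```

The exact posterior here is 0.24/0.31, roughly 0.774; the variance of these weights is what cutset-based schemes such as LWLC reduce by sampling only a subset of nodes and handling the rest exactly.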
|
1206.6823
|
An Efficient Triplet-based Algorithm for Evidential Reasoning
|
cs.AI
|
Linear-time computational techniques have been developed for combining
evidence which is available on a number of contending hypotheses. They offer a
means of making the computation-intensive calculations involved more efficient
in certain circumstances. Unfortunately, they restrict the orthogonal sum of
evidential functions to a dichotomous structure that applies only to elements
and their complements. In this paper, we present a novel evidence structure in
terms of a triplet and a set of algorithms for evidential reasoning. The merit
of this structure is that it divides a set of evidence into three subsets,
distinguishing trivial evidential elements from important ones that focus on
particular elements. It avoids the deficits of the dichotomous structure in
representing the preference of evidence and estimating the basic probability
assignment of evidence. We have established a formalism for this structure and
the general formulae for combining pieces of evidence in the form of the
triplet, which have been theoretically justified.
|
1206.6824
|
Gene Expression Time Course Clustering with Countably Infinite Hidden
Markov Models
|
cs.LG cs.CE stat.ML
|
Most existing approaches to clustering gene expression time course data treat
the different time points as independent dimensions and are invariant to
permutations, such as reversal, of the experimental time course. Approaches
utilizing HMMs have been shown to be helpful in this regard, but are hampered
by having to choose model architectures with appropriate complexities. Here we
propose for a clustering application an HMM with a countably infinite state
space; inference in this model is possible by recasting it in the hierarchical
Dirichlet process (HDP) framework (Teh et al. 2006), and hence we call it the
HDP-HMM. We show that the infinite model outperforms model selection methods
over finite models, and traditional time-independent methods, as measured by a
variety of external and internal indices for clustering on two large publicly
available data sets. Moreover, we show that the infinite models utilize more
hidden states and employ richer architectures (e.g. state-to-state transitions)
without the damaging effects of overfitting.
|
1206.6825
|
Non-Minimal Triangulations for Mixed Stochastic/Deterministic Graphical
Models
|
cs.AI cs.DS
|
We observe that certain large-clique graph triangulations can be useful to
reduce both computational and space requirements when making queries on mixed
stochastic/deterministic graphical models. We demonstrate that many of these
large-clique triangulations are non-minimal and are thus unattainable via the
variable elimination algorithm. We introduce ancestral pairs as the basis for
novel triangulation heuristics and prove that no more than the addition of
edges between ancestral pairs need be considered when searching for state space
optimal triangulations in such graphs. Empirical results on random and real
world graphs show that the resulting triangulations that yield significant
speedups are almost always non-minimal. We also give an algorithm and
correctness proof for determining if a triangulation can be obtained via
elimination, and we show that the decision problem associated with finding
optimal state space triangulations in this mixed stochastic/deterministic
setting is NP-complete.
|
1206.6827
|
Linear Algebra Approach to Separable Bayesian Networks
|
cs.AI
|
Separable Bayesian Networks, or the Influence Model, are dynamic Bayesian
Networks in which the conditional probability distribution can be separated
into a function of only the marginal distribution of a node's neighbors,
instead of the joint distributions. In terms of modeling, separable networks
have rendered possible a significant reduction in complexity, as the state
space is only linear in the number of variables on the network, in contrast to
a typical state space which is exponential. In this work, we describe the
connection between an arbitrary Conditional Probability Table (CPT) and
separable systems using linear algebra. We give an alternate proof on the
equivalence of sufficiency and separability. We present a computational method
for testing whether a given CPT is separable.
|
1206.6828
|
Advances in exact Bayesian structure discovery in Bayesian networks
|
cs.LG cs.AI stat.ML
|
We consider a Bayesian method for learning the Bayesian network structure
from complete data. Recently, Koivisto and Sood (2004) presented an algorithm
that for any single edge computes its marginal posterior probability in O(n
2^n) time, where n is the number of attributes; the number of parents per
attribute is bounded by a constant. In this paper we show that the posterior
probabilities for all the n (n - 1) potential edges can be computed in O(n 2^n)
total time. This result is achieved by a forward-backward technique and fast
Moebius transform algorithms, which are of independent interest. The resulting
speedup by a factor of about n^2 allows us to experimentally study the
statistical power of learning moderate-size networks. We report results from a
simulation study that covers data sets with 20 to 10,000 records over 5 to 25
discrete attributes.
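The fast Moebius transform machinery mentioned above rests on the subset-sum (zeta) transform, which accumulates a function over all subsets of each set in O(n 2^n) rather than the naive O(4^n). A minimal sketch over bitmask-indexed arrays (a generic illustration, not the paper's forward-backward algorithm):

```python
def zeta_transform(f, n):
    """Fast subset-sum (zeta) transform over the lattice of subsets of an
    n-element ground set: returns g with g[S] = sum over T subseteq S of f[T].
    Runs in O(n 2^n) by sweeping one bit position at a time."""
    g = list(f)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                g[S] += g[S ^ (1 << i)]
    return g
```

Inverting the same sweep with subtraction gives the Moebius transform; it is this pair of O(n 2^n) passes that makes computing all n(n-1) edge posteriors no more expensive asymptotically than computing one.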
|
1206.6829
|
Inequality Constraints in Causal Models with Hidden Variables
|
cs.AI stat.ME
|
We present a class of inequality constraints on the set of distributions
induced by local interventions on variables governed by a causal Bayesian
network, in which some of the variables remain unmeasured. We derive bounds on
causal effects that are not directly measured in randomized experiments. We
derive instrumental inequality type of constraints on nonexperimental
distributions. The results have applications in testing causal models with
observational or experimental data.
|
1206.6830
|
The AI&M Procedure for Learning from Incomplete Data
|
stat.ME cs.AI cs.LG
|
We investigate methods for parameter learning from incomplete data that is
not missing at random. Likelihood-based methods then require the optimization
of a profile likelihood that takes all possible missingness mechanisms into
account. Optimizing this profile likelihood poses two main difficulties:
multiple (local) maxima, and its very high-dimensional parameter space. In this
paper a new method is presented for optimizing the profile likelihood that
addresses the second difficulty: in the proposed AI&M (adjusting imputation and
maximization) procedure the optimization is performed by operations in the
space of data completions, rather than directly in the parameter space of the
profile likelihood. We apply the AI&M method to learning parameters for
Bayesian networks. The method is compared against conservative inference, which
takes into account each possible data completion, and against EM. The results
indicate that likelihood-based inference is still feasible in the case of
unknown missingness mechanisms, and that conservative inference is
unnecessarily weak. On the other hand, our results also provide evidence that
the EM algorithm is still quite effective when the data is not missing at
random.
|
1206.6831
|
Pearl's Calculus of Intervention Is Complete
|
cs.AI
|
This paper is concerned with graphical criteria that can be used to solve the
problem of identifying causal effects from nonexperimental data in a causal
Bayesian network structure, i.e., a directed acyclic graph that represents
causal relationships. We first review Pearl's work on this topic [Pearl, 1995],
in which several useful graphical criteria are presented. Then we present a
complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem.
By exploiting the completeness of this algorithm, we prove that the three basic
do-calculus rules that Pearl presents are complete, in the sense that, if a
causal effect is identifiable, there exists a sequence of applications of the
rules of the do-calculus that transforms the causal effect formula into a
formula that only includes observational quantities.
|
1206.6832
|
Convex Structure Learning for Bayesian Networks: Polynomial Feature
Selection and Approximate Ordering
|
cs.LG stat.ML
|
We present a new approach to learning the structure and parameters of a
Bayesian network based on regularized estimation in an exponential family
representation. Here we show that, given a fixed variable order, the optimal
structure and parameters can be learned efficiently, even without restricting
the size of the parent sets. We then consider the problem of optimizing the
variable order for a given set of features. This is still a computationally
hard problem, but we present a convex relaxation that yields an optimal 'soft'
ordering in polynomial time. One novel aspect of the approach is that we do not
perform a discrete search over DAG structures, nor over variable orders, but
instead solve a continuous relaxation that can then be rounded to obtain a
valid network structure. We conduct an experimental comparison against standard
structure search procedures over standard objectives, which cope with local
minima, and evaluate the advantages of using convex relaxations that reduce the
effects of local minima.
|
1206.6833
|
Matrix Tile Analysis
|
cs.LG cs.CE cs.NA stat.ML
|
Many tasks require finding groups of elements in a matrix of numbers, symbols
or class likelihoods. One approach is to use efficient bi- or tri-linear
factorization techniques including PCA, ICA, sparse matrix factorization and
plaid analysis. These techniques are not appropriate when addition and
multiplication of matrix elements are not sensibly defined. More directly,
methods like bi-clustering can be used to classify matrix elements, but these
methods make the overly-restrictive assumption that the class of each element
is a function of a row class and a column class. We introduce a general
computational problem, `matrix tile analysis' (MTA), which consists of
decomposing a matrix into a set of non-overlapping tiles, each of which is
defined by a subset of usually nonadjacent rows and columns. MTA does not
require an algebra for combining tiles, but must search over discrete
combinations of tile assignments. Exact MTA is a computationally intractable
integer programming problem, but we describe an approximate iterative technique
and a computationally efficient sum-product relaxation of the integer program.
We compare the effectiveness of these methods to PCA and plaid on hundreds of
randomly generated tasks. Using double-gene-knockout data, we show that MTA
finds groups of interacting yeast genes that have biologically-related
functions.
|
1206.6834
|
A new axiomatization for likelihood gambles
|
cs.AI
|
This paper studies a new and more general axiomatization than one presented
previously for preference on likelihood gambles. Likelihood gambles describe
actions in a situation where a decision maker knows multiple probabilistic
models and a random sample generated from one of those models but does not know
prior probability of models. This new axiom system is inspired by Jensen's
axiomatization of probabilistic gambles. Our approach provides a new
perspective on the role of data in decision making under ambiguity. It avoids
one of the most controversial issues of Bayesian methodology, namely the
assumption of a prior probability.
|
1206.6835
|
Dimension Reduction in Singularly Perturbed Continuous-Time Bayesian
Networks
|
cs.AI
|
Continuous-time Bayesian networks (CTBNs) are graphical representations of
multi-component continuous-time Markov processes as directed graphs. The edges
in the network represent direct influences among components. The joint rate
matrix of the multi-component process is specified by means of conditional rate
matrices for each component separately. This paper addresses the situation
where some of the components evolve on a time scale that is much shorter than
the time scale of the other components. In this paper, we prove
that in the limit where the separation of scales is infinite, the Markov
process converges (in distribution, or weakly) to a reduced, or effective
Markov process that only involves the slow components. We also demonstrate that
for reasonable separation of scale (an order of magnitude) the reduced process
is a good approximation of the marginal process over the slow components. We
provide a simple procedure for building a reduced CTBN for this effective
process, with conditional rate matrices that can be directly calculated from
the original CTBN, and discuss the implications for approximate reasoning in
large systems.
|
1206.6836
|
Methods for computing state similarity in Markov Decision Processes
|
cs.AI
|
A popular approach to solving large probabilistic systems relies on
aggregating states based on a measure of similarity. Many approaches in the
literature are heuristic. A number of recent methods rely instead on metrics
based on the notion of bisimulation, or behavioral equivalence between states
(Givan et al, 2001, 2003; Ferns et al, 2004). An integral component of such
metrics is the Kantorovich metric between probability distributions. However,
while this metric enables many satisfying theoretical properties, it is costly
to compute in practice. In this paper, we use techniques from network
optimization and statistical sampling to overcome this problem. We obtain in
this manner a variety of distance functions for MDP state aggregation, which
differ in the tradeoff between time and space complexity, as well as the
quality of the aggregation. We provide an empirical evaluation of these
trade-offs.
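The Kantorovich metric at the heart of these bisimulation metrics is, in general, an optimal-transport linear program, which is what makes it costly. For the special case of a totally ordered state space with ground distance |i - j|, it collapses to the L1 distance between CDFs. A minimal sketch of that special case (function and variable names are illustrative, not from the paper):

```python
def kantorovich_1d(p, q):
    """Kantorovich (1-Wasserstein) distance between two discrete
    distributions over ordered states 0..n-1 with ground distance
    |i - j|. For this totally ordered case the transport LP reduces
    to the L1 distance between the two CDFs."""
    assert len(p) == len(q)
    dist, fp, fq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        fp += pi   # running CDF of p
        fq += qi   # running CDF of q
        dist += abs(fp - fq)
    return dist
```

General ground metrics, as arise from bisimulation, need the full network-flow formulation; the sampling and network-optimization techniques in the paper address that harder setting.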
|
1206.6837
|
Residual Belief Propagation: Informed Scheduling for Asynchronous
Message Passing
|
cs.AI
|
Inference for probabilistic graphical models is still very much a practical
challenge in large domains. The commonly used and effective belief propagation
(BP) algorithm and its generalizations often do not converge when applied to
hard, real-life inference tasks. While it is widely recognized that the
scheduling of messages in these algorithms may have significant consequences,
this issue remains largely unexplored. In this work, we address the question of
how to schedule messages for asynchronous propagation so that a fixed point is
reached faster and more often. We first show that any reasonable asynchronous
BP converges to a unique fixed point under conditions similar to those that
guarantee convergence of synchronous BP. In addition, we show that the
convergence rate of a simple round-robin schedule is at least as good as that
of synchronous propagation. We then propose residual belief propagation (RBP),
a novel, easy-to-implement, asynchronous propagation algorithm that schedules
messages in an informed way, that pushes down a bound on the distance from the
fixed point. Finally, we demonstrate the superiority of RBP over
state-of-the-art methods for a variety of challenging synthetic and real-life
problems: RBP converges significantly more often than other methods; and it
significantly reduces running time until convergence, even when other methods
converge.
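The scheduling idea behind RBP, always update the quantity with the largest residual, can be illustrated on a plain fixed-point iteration rather than on BP messages. A simplified sketch (names are illustrative; real RBP maintains per-message residuals, typically in a priority queue):

```python
def residual_schedule(f, x0, tol=1e-10, max_updates=10000):
    """Asynchronous fixed-point iteration for x = f(x), updating one
    coordinate at a time. The coordinate with the largest residual
    |f(x)_i - x_i| is updated first -- the scheduling principle that
    residual belief propagation applies to BP messages."""
    x = list(x0)
    n = len(x)
    for _ in range(max_updates):
        fx = f(x)
        res = [abs(fx[i] - x[i]) for i in range(n)]
        i = max(range(n), key=lambda j: res[j])
        if res[i] < tol:      # largest residual tiny => fixed point
            return x
        x[i] = fx[i]          # apply only the most "informative" update
    return x
```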
|
1206.6838
|
Continuous Time Markov Networks
|
cs.AI cs.LG
|
A central task in many applications is reasoning about processes that change
in a continuous time. The mathematical framework of Continuous Time Markov
Processes provides the basic foundations for modeling such systems. Recently,
Nodelman et al introduced continuous time Bayesian networks (CTBNs), which
allow a compact representation of continuous-time processes over a factored
state space. In this paper, we introduce continuous time Markov networks
(CTMNs), an alternative representation language that represents a different
type of continuous-time dynamics. In many real life processes, such as
biological and chemical systems, the dynamics of the process can be naturally
described as an interplay between two forces - the tendency of each entity to
change its state, and the overall fitness or energy function of the entire
system. In our model, the first force is described by a continuous-time
proposal process that suggests possible local changes to the state of the
system at different rates. The second force is represented by a Markov network
that encodes the fitness, or desirability, of different states; a proposed
local change is then accepted with a probability that is a function of the
change in the fitness distribution. We show that the fitness distribution is
also the stationary distribution of the Markov process, so that this
representation provides a characterization of a temporal process whose
stationary distribution has a compact graphical representation. This allows us
to naturally capture a different type of structure in complex dynamical
processes, such as evolving biological sequences. We describe the semantics of
the representation, its basic properties, and how it compares to CTBNs. We also
provide algorithms for learning such models from data, and discuss its
applicability to biological sequence evolution.
|
1206.6841
|
Asymmetric separation for local independence graphs
|
cs.AI
|
Directed possibly cyclic graphs have been proposed by Didelez (2000) and
Nodelman et al. (2002) in order to represent the dynamic dependencies among
stochastic processes. These dependencies are based on a generalization of
Granger-causality to continuous time, first developed by Schweder (1970) for
Markov processes, who called them local dependencies. They deserve special
attention as they are asymmetric unlike stochastic (in)dependence. In this
paper we focus on their graphical representation and develop a suitable, i.e.
asymmetric notion of separation, called delta-separation. The properties of
this graph separation as well as of local independence are investigated in
detail within a framework of asymmetric (semi)graphoids allowing a deeper
insight into what information can be read off these graphs.
|
1206.6842
|
Chi-square Tests Driven Method for Learning the Structure of Factored
MDPs
|
cs.LG cs.AI stat.ML
|
SDYNA is a general framework designed to address large stochastic
reinforcement learning problems. Unlike previous model based methods in FMDPs,
it incrementally learns the structure and the parameters of a RL problem using
supervised learning techniques. Then, it integrates decision-theoretic planning
algorithms based on FMDPs to compute its policy. SPITI is an instantiation of
SDYNA that exploits ITI, an incremental decision tree algorithm, to learn the
reward function and the Dynamic Bayesian Networks with local structures
representing the transition function of the problem. These representations are
used by an incremental version of the Structured Value Iteration algorithm. In
order to learn the structure, SPITI uses Chi-Square tests to detect the
independence between two probability distributions. Thus, we study the relation
between the threshold used in the Chi-Square test, the size of the model built
and the relative error of the value function of the induced policy with respect
to the optimal value. We show that, on stochastic problems, one can tune the
threshold so as to generate both a compact model and an efficient policy. Then,
we show that SPITI, while keeping its model compact, uses the generalization
property of its learning method to perform better than a stochastic classical
tabular algorithm in large RL problems with an unknown structure. We also
introduce a new measure based on Chi-Square to qualify the accuracy of the
model learned by SPITI. We qualitatively show that the generalization property
in SPITI within the FMDP framework may prevent an exponential growth of the
time required to learn the structure of large stochastic RL problems.
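The chi-square independence test that drives these structure decisions can be sketched in a few lines; here the df = 1, alpha = 0.05 critical value 3.841 plays the role of the tunable threshold discussed above (a generic illustration, not the paper's code):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table given as
    a list of rows. Large values indicate dependence between the row
    and column variables."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

CRITICAL_05_DF1 = 3.841  # chi-square critical value, df = 1, alpha = 0.05

def independent(table, critical=CRITICAL_05_DF1):
    """Declare independence when the statistic falls below the chosen
    threshold; raising the threshold yields a more compact model, which
    is the trade-off the paper studies."""
    return chi_square_stat(table) < critical
```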
|
1206.6843
|
Adjacency-Faithfulness and Conservative Causal Inference
|
cs.AI stat.ME
|
Most causal inference algorithms in the literature (e.g., Pearl (2000),
Spirtes et al. (2000), Heckerman et al. (1999)) exploit an assumption usually
referred to as the causal Faithfulness or Stability condition. In this paper,
we highlight two components of the condition used in constraint-based
algorithms, which we call "Adjacency-Faithfulness" and
"Orientation-Faithfulness". We point out that assuming Adjacency-Faithfulness
is true, it is in principle possible to test the validity of
Orientation-Faithfulness. Based on this observation, we explore the consequence
of making only the Adjacency-Faithfulness assumption. We show that the familiar
PC algorithm has to be modified to be (asymptotically) correct under the
weaker, Adjacency-Faithfulness assumption. Roughly the modified algorithm,
called Conservative PC (CPC), checks whether Orientation-Faithfulness holds in
the orientation phase, and if not, avoids drawing certain causal conclusions
the PC algorithm would draw. However, if the stronger, standard causal
Faithfulness condition actually obtains, the CPC algorithm is shown to output
the same pattern as the PC algorithm does in the large sample limit. We also
present a simulation study showing that the CPC algorithm runs almost as fast
as the PC algorithm, and outputs significantly fewer false causal arrowheads
than the PC algorithm does on realistic sample sizes. We end our paper by
discussing how score-based algorithms such as GES perform when the
Adjacency-Faithfulness but not the standard causal Faithfulness condition
holds, and how to extend our work to the FCI algorithm, which allows for the
possibility of latent variables.
|
1206.6844
|
From influence diagrams to multi-operator cluster DAGs
|
cs.AI
|
There exist several architectures to solve influence diagrams using local
computations, such as the Shenoy-Shafer, the HUGIN, or the Lazy Propagation
architectures. They all extend usual variable elimination algorithms thanks to
the use of so-called 'potentials'. In this paper, we introduce a new
architecture, called the Multi-operator Cluster DAG architecture, which can
produce decompositions with an improved constrained induced-width, and
therefore induce potentially exponential gains. Its principle is to benefit
from the composite nature of influence diagrams, instead of using uniform
potentials, in order to better analyze the problem structure.
|
1206.6845
|
Gibbs Sampling for (Coupled) Infinite Mixture Models in the Stick
Breaking Representation
|
stat.ME cs.LG stat.ML
|
Nonparametric Bayesian approaches to clustering, information retrieval,
language modeling and object recognition have recently shown great promise as a
new paradigm for unsupervised data analysis. Most contributions have focused on
the Dirichlet process mixture models or extensions thereof for which efficient
Gibbs samplers exist. In this paper we explore Gibbs samplers for infinite
complexity mixture models in the stick breaking representation. The advantage
of this representation is improved modeling flexibility. For instance, one can
design the prior distribution over cluster sizes or couple multiple infinite
mixture models (e.g. over time) at the level of their parameters (i.e. the
dependent Dirichlet process model). However, Gibbs samplers for infinite
mixture models (as recently introduced in the statistics literature) seem to
mix poorly over cluster labels. Among other issues, this can have the adverse
effect that labels for the same cluster in coupled mixture models are mixed up.
We introduce additional moves in these samplers to improve mixing over cluster
labels and to bring clusters into correspondence. An application to modeling of
storm trajectories is used to illustrate these ideas.
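A truncated version of the stick-breaking construction underlying these samplers can be sketched as follows (truncation and seeding are illustrative conveniences, not part of the model):

```python
import random

def stick_breaking_weights(alpha, truncation, seed=0):
    """Draw mixture weights from a (truncated) stick-breaking
    construction: beta_k ~ Beta(1, alpha), and the k-th weight is the
    beta_k fraction of the stick remaining after the first k-1 breaks.
    Smaller alpha concentrates mass on the early clusters."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(truncation):
        b = rng.betavariate(1.0, alpha)
        weights.append(b * remaining)
        remaining *= 1.0 - b     # what is left of the stick
    return weights
```

Designing the prior over cluster sizes, as mentioned above, amounts to changing the Beta parameters of the breaks.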
|
1206.6846
|
Approximate Separability for Weak Interaction in Dynamic Systems
|
cs.LG cs.AI stat.ML
|
One approach to monitoring a dynamic system relies on decomposition of the
system into weakly interacting subsystems. An earlier paper introduced a notion
of weak interaction called separability, and showed that it leads to exact
propagation of marginals for prediction. This paper addresses two questions
left open by the earlier paper: can we define a notion of approximate
separability that occurs naturally in practice, and do separability and
approximate separability lead to accurate monitoring? The answer to both
questions is affirmative. The paper also analyzes the structure of approximately
separable decompositions, and provides some explanation as to why these models
perform well.
|
1206.6847
|
Identifying the Relevant Nodes Without Learning the Model
|
cs.LG cs.AI stat.ML
|
We propose a method to identify all the nodes that are relevant to compute
all the conditional probability distributions for a given set of nodes. Our
method is simple, efficient, consistent, and does not require learning a
Bayesian network first. Therefore, our method can be applied to
high-dimensional databases, e.g. gene expression databases.
|
1206.6849
|
General-Purpose MCMC Inference over Relational Structures
|
cs.AI
|
Tasks such as record linkage and multi-target tracking, which involve
reconstructing the set of objects that underlie some observed data, are
particularly challenging for probabilistic inference. Recent work has achieved
efficient and accurate inference on such problems using Markov chain Monte
Carlo (MCMC) techniques with customized proposal distributions. Currently,
implementing such a system requires coding MCMC state representations and
acceptance probability calculations that are specific to a particular
application. An alternative approach, which we pursue in this paper, is to use
a general-purpose probabilistic modeling language (such as BLOG) and a generic
Metropolis-Hastings MCMC algorithm that supports user-supplied proposal
distributions. Our algorithm gains flexibility by using MCMC states that are
only partial descriptions of possible worlds; we provide conditions under which
MCMC over partial worlds yields correct answers to queries. We also show how to
use a context-specific Bayes net to identify the factors in the acceptance
probability that need to be computed for a given proposed move. Experimental
results on a citation matching task show that our general-purpose MCMC engine
compares favorably with an application-specific system.
|
1206.6850
|
Visualization of Collaborative Data
|
cs.GR cs.AI cs.HC
|
Collaborative data consist of ratings relating two distinct sets of objects:
users and items. Much of the work with such data focuses on filtering:
predicting unknown ratings for pairs of users and items. In this paper we focus
on the problem of visualizing the information. Given all of the ratings, our
task is to embed all of the users and items as points in the same Euclidean
space. We would like to place users near items that they have rated (or would
rate) high, and far away from those they would give a low rating. We pose this
problem as a real-valued non-linear Bayesian network and employ Markov chain
Monte Carlo and expectation maximization to find an embedding. We present a
metric by which to judge the quality of a visualization and compare our results
to local linear embedding and Eigentaste on three real-world datasets.
|
1206.6851
|
A compact, hierarchical Q-function decomposition
|
cs.LG cs.AI stat.ML
|
Previous work in hierarchical reinforcement learning has faced a dilemma:
either ignore the values of different possible exit states from a subroutine,
thereby risking suboptimal behavior, or represent those values explicitly
thereby incurring a possibly large representation cost because exit values
refer to nonlocal aspects of the world (i.e., all subsequent rewards). This
paper shows that, in many cases, one can avoid both of these problems. The
solution is based on recursively decomposing the exit value function in terms
of Q-functions at higher levels of the hierarchy. This leads to an intuitively
appealing runtime architecture in which a parent subroutine passes to its child
a value function on the exit states and the child reasons about how its choices
affect the exit value. We also identify structural conditions on the value
function and transition distributions that allow much more concise
representations of exit state distributions, leading to further state
abstraction. In essence, the only variables whose exit values need be
considered are those that the parent cares about and the child affects. We
demonstrate the utility of our algorithms on a series of increasingly complex
environments.
|
1206.6852
|
Structured Priors for Structure Learning
|
cs.LG cs.AI stat.ML
|
Traditional approaches to Bayes net structure learning typically assume
little regularity in graph structure other than sparseness. However, in many
cases, we expect more systematicity: variables in real-world systems often
group into classes that predict the kinds of probabilistic dependencies they
participate in. Here we capture this form of prior knowledge in a hierarchical
Bayesian framework, and exploit it to enable structure learning and type
discovery from small datasets. Specifically, we present a nonparametric
generative model for directed acyclic graphs as a prior for Bayes net structure
learning. Our model assumes that variables come in one or more classes and that
the prior probability of an edge existing between two variables is a function
only of their classes. We derive an MCMC algorithm for simultaneous inference
of the number of classes, the class assignments of variables, and the Bayes net
structure over variables. For several realistic, sparse datasets, we show that
the bias towards systematicity of connections provided by our model yields more
accurate learned networks than a traditional, uniform prior approach, and that
the classes found by our model are appropriate.
|
1206.6853
|
A theoretical study of Y structures for causal discovery
|
cs.AI stat.ME
|
There are several existing algorithms that under appropriate assumptions can
reliably identify a subset of the underlying causal relationships from
observational data. This paper introduces the first computationally feasible
score-based algorithm that can reliably identify causal relationships in the
large sample limit for discrete models, while allowing for the possibility that
there are unobserved common causes. In doing so, the algorithm does not ever
need to assign scores to causal structures with unobserved common causes. The
algorithm is based on the identification of so called Y substructures within
Bayesian network structures that can be learned from observational data. An
example of a Y substructure is A -> C, B -> C, C -> D. After providing
background on causal discovery, the paper proves the conditions under which the
algorithm is reliable in the large sample limit.
|
1206.6854
|
Belief Update in CLG Bayesian Networks With Lazy Propagation
|
cs.AI
|
In recent years Bayesian networks (BNs) with a mixture of continuous and
discrete variables have received an increasing level of attention. We present
an architecture for exact belief update in Conditional Linear Gaussian BNs (CLG
BNs). The architecture is an extension of lazy propagation using operations of
Lauritzen & Jensen [6] and Cowell [2]. By decomposing clique and separator
potentials into sets of factors, the proposed architecture takes advantage of
independence and irrelevance properties induced by the structure of the graph
and the evidence. The resulting benefits are illustrated by examples. Results
of a preliminary empirical performance evaluation indicate a significant
potential of the proposed architecture.
|
1206.6856
|
Reasoning about Uncertainty in Metric Spaces
|
cs.AI
|
We set up a model for reasoning about metric spaces with belief theoretic
measures. The uncertainty in these spaces stems from both probability and
metric. To represent both aspect of uncertainty, we choose an expected distance
function as a measure of uncertainty. A formal logical system is constructed
for the reasoning about expected distance. Soundness and completeness are shown
for this logic. For reasoning on product metric space with uncertainty, a new
metric is defined and shown to have good properties.
|
1206.6857
|
Faster Gaussian Summation: Theory and Experiment
|
cs.LG cs.NA stat.ML
|
We provide faster algorithms for the problem of Gaussian summation, which
occurs in many machine learning methods. We develop two new extensions - an
O(Dp) Taylor expansion for the Gaussian kernel with rigorous error bounds and a
new error control scheme integrating any arbitrary approximation method -
within the best discrete algorithmic framework using adaptive hierarchical data
structures. We rigorously evaluate these techniques empirically in the context
of optimal bandwidth selection in kernel density estimation, revealing the
strengths and weaknesses of current state-of-the-art approaches for the first
time. Our results demonstrate that the new error control scheme yields improved
performance, whereas the series expansion approach is only effective in low
dimensions (five or less).
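The Gaussian summation problem itself, and a crude form of error control by truncation, can be sketched as follows (1-D, naive O(N) per query; the paper's contribution is rigorous bounds inside hierarchical data structures, which this sketch does not attempt):

```python
import math

def gaussian_sum_naive(query, points, weights, h):
    """Direct evaluation of G(q) = sum_i w_i exp(-(q - x_i)^2 / (2 h^2))
    at one 1-D query point -- the baseline the paper's algorithms
    accelerate."""
    return sum(w * math.exp(-(query - x) ** 2 / (2.0 * h * h))
               for x, w in zip(points, weights))

def gaussian_sum_truncated(query, points, weights, h, radius):
    """Crude error control: skip points farther than `radius`, incurring
    at most (sum of skipped w_i) * exp(-radius^2 / (2 h^2)) absolute
    error. A toy stand-in for the paper's rigorous error bounds."""
    return sum(w * math.exp(-(query - x) ** 2 / (2.0 * h * h))
               for x, w in zip(points, weights)
               if abs(query - x) <= radius)
```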
|
1206.6858
|
Sequential Document Representations and Simplicial Curves
|
cs.IR cs.LG
|
The popular bag of words assumption represents a document as a histogram of
word occurrences. While computationally efficient, such a representation is
unable to maintain any sequential information. We present a continuous and
differentiable sequential document representation that goes beyond the bag of
words assumption, and yet is efficient and effective. This representation
employs smooth curves in the multinomial simplex to account for sequential
information. We discuss the representation and its geometric properties and
demonstrate its applicability for the task of text classification.
|
1206.6859
|
Propagation of Delays in the National Airspace System
|
cs.AI
|
The National Airspace System (NAS) is a large and complex system with
thousands of interrelated components: administration, control centers,
airports, airlines, aircraft, passengers, etc. The complexity of the NAS
creates many difficulties in management and control. One of the most pressing
problems is flight delay. Delay creates high cost to airlines, complaints from
passengers, and difficulties for airport operations. As demand on the system
increases, the delay problem becomes more and more prominent. For this reason,
it is essential for the Federal Aviation Administration to understand the
causes of delay and to find ways to reduce delay. Major contributing factors to
delay are congestion at the origin airport, weather, increasing demand, and air
traffic management (ATM) decisions such as the Ground Delay Programs (GDP).
Delay is an inherently stochastic phenomenon. Even if all known causal factors
could be accounted for, macro-level NAS delays could
not be predicted with certainty from micro-level aircraft information. This
paper presents a stochastic model that uses Bayesian Networks (BNs) to model
the relationships among different components of aircraft delay and the causal
factors that affect delays. A case study on delays of departure flights from
Chicago O'Hare international airport (ORD) to Hartsfield-Jackson Atlanta
International Airport (ATL) reveals how local and system level environmental
and human-caused factors combine to affect components of delay, and how these
components contribute to the final arrival delay at the destination airport.
|
1206.6860
|
Predicting Conditional Quantiles via Reduction to Classification
|
cs.LG stat.ML
|
We show how to reduce the process of predicting general order statistics (and
the median in particular) to solving classification. The accompanying
theoretical statement shows that the regret of the classifier bounds the regret
of the quantile regression under a quantile loss. We also test this reduction
empirically against existing quantile regression methods on large real-world
datasets and discover that it provides state-of-the-art performance.
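The reduction can be illustrated with empirical "classifiers" standing in for learned conditional ones (a toy sketch; in the actual reduction each threshold classifier is trained on the features x):

```python
def quantile_via_classifiers(ys, tau, thresholds):
    """Reduce quantile estimation to classification: for each threshold
    t, a 'classifier' estimates P(y <= t) -- here simply the empirical
    fraction, standing in for a learned conditional classifier. The
    tau-quantile is the smallest threshold whose classifier output
    reaches tau."""
    n = len(ys)
    for t in sorted(thresholds):
        if sum(1 for y in ys if y <= t) / n >= tau:
            return t
    return max(thresholds)
```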
|
1206.6861
|
Stratified Analysis of `Probabilities of Causation'
|
stat.ME cs.AI
|
This paper proposes new formulas for the probabilities of causation defined
by Pearl (2000). Tian and Pearl (2000a, 2000b) showed how to bound the
quantities of the probabilities of causation from experimental and
observational data, under the minimal assumptions about the data-generating
process. We derive narrower bounds than Tian-Pearl bounds by making use of the
covariate information measured in experimental and observational studies. In
addition, we provide an identifiable case under the no-prevention assumption and
discuss the covariate selection problem from the viewpoint of estimation
accuracy. These results are helpful in providing more evidence for public
policy assessment and decision-making problems.
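For context, the experimental-data-only Tian-Pearl bounds on one such quantity, the probability of necessity and sufficiency (PNS) for binary treatment and outcome, can be sketched as below; treat the exact form as an assumption of this sketch, since the paper's contribution is narrowing such bounds using covariates:

```python
def pns_bounds_experimental(p_y_do_x, p_y_do_not_x):
    """Tian-Pearl-style bounds on the probability of necessity and
    sufficiency (PNS) from experimental data alone, for binary
    treatment and outcome: p_y_do_x = P(y | do(x)), p_y_do_not_x =
    P(y | do(x')). Covariate information narrows these bounds."""
    lower = max(0.0, p_y_do_x - p_y_do_not_x)
    upper = min(p_y_do_x, 1.0 - p_y_do_not_x)
    return lower, upper
```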
|
1206.6862
|
On the Number of Samples Needed to Learn the Correct Structure of a
Bayesian Network
|
cs.LG cs.AI stat.ML
|
Bayesian Networks (BNs) are useful tools giving a natural and compact
representation of joint probability distributions. In many applications one
needs to learn a Bayesian Network (BN) from data. In this context, it is
important to understand the number of samples needed in order to guarantee a
successful learning. Previous work has studied BN sample complexity, yet it
mainly focused on the requirement that the learned distribution will be close
to the original distribution which generated the data. In this work, we study a
different aspect of the learning, namely the number of samples needed in order
to learn the correct structure of the network. We give both asymptotic results,
valid in the large sample limit, and experimental results, demonstrating the
learning behavior for feasible sample sizes. We show that structure learning is
a more difficult task, compared to approximating the correct distribution, in
the sense that it requires a much larger number of samples, regardless of the
computational power available for the learner.
|
1206.6863
|
Bayesian Multicategory Support Vector Machines
|
cs.LG stat.ML
|
We show that the multi-class support vector machine (MSVM) proposed by Lee
et al. (2004) can be viewed as a MAP estimation procedure under an
appropriate probabilistic interpretation of the classifier. We also show that
this interpretation can be extended to a hierarchical Bayesian architecture and
to a fully-Bayesian inference procedure for multi-class classification based on
data augmentation. We present empirical results that show that the advantages
of the Bayesian formalism are obtained without a loss in classification
accuracy.
|
1206.6864
|
Infinite Hidden Relational Models
|
cs.AI cs.DB cs.LG
|
In many cases it makes sense to model a relationship symmetrically, not
implying any particular directionality. Consider the classical example of a
recommendation system where the rating of an item by a user should
symmetrically be dependent on the attributes of both the user and the item. The
attributes of the (known) relationships are also relevant for predicting
attributes of entities and for predicting attributes of new relations. In
recommendation systems, the exploitation of relational attributes is often
referred to as collaborative filtering. Again, in many applications one might
prefer to model the collaborative effect in a symmetrical way. In this paper we
present a relational model, which is completely symmetrical. The key innovation
is that we introduce for each entity (or object) an infinite-dimensional latent
variable as part of a Dirichlet process (DP) model. We discuss inference in the
model, which is based on a DP Gibbs sampler, i.e., the Chinese restaurant
process. We extend the Chinese restaurant process to be applicable to
relational modeling. Our approach is evaluated in three applications. One is a
recommendation system based on the MovieLens data set. The second application
concerns the prediction of the function of yeast genes/proteins on the data set
of KDD Cup 2001 using a multi-relational model. The third application involves
a relational medical domain. The experimental results show that our model gives
significantly improved estimates of attributes describing relationships or
entities in complex relational models.
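The Chinese restaurant process that the sampler builds on can be sketched generatively (cluster bookkeeping and seeding are illustrative):

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n entities from the Chinese restaurant
    process: entity i joins an existing cluster with probability
    proportional to its current size, or opens a new cluster with
    probability proportional to alpha."""
    rng = random.Random(seed)
    clusters = []      # current cluster sizes
    assignment = []    # cluster index of each entity
    for _ in range(n):
        weights = clusters + [alpha]       # last slot = "new cluster"
        r = rng.random() * sum(weights)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(clusters):
            clusters.append(1)             # open a new cluster
        else:
            clusters[k] += 1
        assignment.append(k)
    return assignment, clusters
```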
|
1206.6865
|
A Non-Parametric Bayesian Method for Inferring Hidden Causes
|
cs.LG cs.AI stat.ML
|
We present a non-parametric Bayesian approach to structure learning with
hidden causes. Previous Bayesian treatments of this problem define a prior over
the number of hidden causes and use algorithms such as reversible jump Markov
chain Monte Carlo to move between solutions. In contrast, we assume that the
number of hidden causes is unbounded, but only a finite number influence
observable variables. This makes it possible to use a Gibbs sampler to
approximate the distribution over causal structures. We evaluate the
performance of both approaches in discovering hidden causes in simulated data,
and use our non-parametric approach to discover hidden causes in a real medical
dataset.
|
1206.6866
|
Stochastic Optimal Control in Continuous Space-Time Multi-Agent Systems
|
cs.MA cs.SY math.OC
|
Recently, a theory for stochastic optimal control in non-linear dynamical
systems in continuous space-time has been developed (Kappen, 2005). We apply
this theory to collaborative multi-agent systems. The agents evolve according
to a given non-linear dynamics with additive Wiener noise. Each agent can
control its own dynamics. The goal is to minimize the accumulated joint cost,
which consists of a state dependent term and a term that is quadratic in the
control. We focus on systems of non-interacting agents that have to distribute
themselves optimally over a number of targets, given a set of end-costs for the
different possible agent-target combinations. We show that optimal control is
the combinatorial sum of independent single-agent single-target optimal
controls weighted by a factor proportional to the end-costs of the different
combinations. Thus, multi-agent control is related to a standard graphical
model inference problem. The additional computational cost compared to
single-agent control is exponential in the tree-width of the graph specifying
the combinatorial sum times the number of targets. We illustrate the result by
simulations of systems with up to 42 agents.
|
1206.6867
|
Axiomatic Foundations for a Class of Generalized Expected Utility:
Algebraic Expected Utility
|
cs.AI cs.GT
|
In this paper, we provide two
axiomatizations of algebraic expected utility, which is a particular
generalized expected utility, in a von Neumann-Morgenstern setting, i.e.
uncertainty representation is supposed to be given and here to be described by
a plausibility measure valued on a semiring, which could be partially ordered.
We show that axioms identical to those for expected utility entail that
preferences are represented by an algebraic expected utility. This algebraic
approach allows many previous proposals (expected utility, binary
possibilistic utility, ...) to be unified in the same general framework and proves
that the obtained utility enjoys the same nice features as expected utility:
linearity, dynamic consistency, autoduality of the underlying uncertainty
measure, autoduality of the decision criterion, and the possibility of modeling
the decision maker's attitude toward uncertainty.
|
1206.6868
|
Bayesian Random Fields: The Bethe-Laplace Approximation
|
cs.LG stat.ML
|
While learning the maximum likelihood value of parameters of an undirected
graphical model is hard, modelling the posterior distribution over parameters
given data is harder. Yet, undirected models are ubiquitous in computer vision
and text modelling (e.g. conditional random fields). But whereas Bayesian
approaches for directed models have been very successful, a proper Bayesian
treatment of undirected models is still in its infancy. We propose a new
method for approximating the posterior of the parameters given data based on
the Laplace approximation. This approximation requires the computation of the
covariance matrix over features which we compute using the linear response
approximation based in turn on loopy belief propagation. We develop the theory
for conditional and 'unconditional' random fields with or without hidden
variables. In the conditional setting we introduce a new variant of bagging
suitable for structured domains. Here we run the loopy max-product algorithm on
a 'super-graph' composed of graphs for individual models sampled from the
posterior and connected by constraints. Experiments on real world data validate
the proposed methods.
|
1206.6869
|
Recognizing Activities and Spatial Context Using Wearable Sensors
|
cs.AI
|
We introduce a new dynamic model capable of recognizing both the activities
that an individual is performing and where that individual is located. Our
model is novel in that it utilizes a dynamic graphical model to
jointly estimate both activity and spatial context over time based on the
simultaneous use of asynchronous observations consisting of GPS measurements,
and measurements from a small mountable sensor board. Joint inference is quite
desirable as it has the ability to improve accuracy of the model. A key goal,
however, in designing our overall system is to be able to perform accurate
inference decisions while minimizing the amount of hardware an individual must
wear. This minimization leads to greater comfort and flexibility, decreased
power requirements and therefore increased battery life, and reduced cost. We
show results indicating that our joint measurement model outperforms
measurements from either the sensor board or GPS alone, using two types of
probabilistic inference procedures, namely particle filtering and pruned exact
inference.
|
1206.6870
|
Incremental Model-based Learners With Formal Learning-Time Guarantees
|
cs.LG cs.AI stat.ML
|
Model-based learning algorithms have been shown to use experience efficiently
when learning to solve Markov Decision Processes (MDPs) with finite state and
action spaces. However, their high computational cost due to repeatedly solving
an internal model inhibits their use in large-scale problems. We propose a
method based on real-time dynamic programming (RTDP) to speed up two
model-based algorithms, RMAX and MBIE (model-based interval estimation),
resulting in computationally much faster algorithms with little loss compared
to existing bounds. Specifically, our two new learning algorithms, RTDP-RMAX
and RTDP-IE, have considerably smaller computational demands than RMAX and
MBIE. We develop a general theoretical framework that allows us to prove that
both are efficient learners in a PAC (probably approximately correct) sense. We
also present an experimental evaluation of these new algorithms that helps
quantify the tradeoff between computational and experience demands.
|
1206.6871
|
Ranking by Dependence - A Fair Criteria
|
cs.LG stat.ML
|
Estimating the dependences between random variables, and ranking them
accordingly, is a prevalent problem in machine learning. Pursuing frequentist
and information-theoretic approaches, we first show that the p-value and the
mutual information can fail even in simplistic situations. We then propose two
conditions for regularizing an estimator of dependence, which leads to a simple
yet effective new measure. We discuss its advantages and compare it to
well-established model-selection criteria. Apart from that, we derive a simple
constraint for regularizing parameter estimates in a graphical model. This
results in an analytical approximation for the optimal value of the equivalent
sample size, which agrees very well with the more involved Bayesian approach in
our experiments.
|
1206.6872
|
A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous
Driving
|
cs.CV cs.LG cs.RO
|
We present a machine learning approach for estimating the second derivative
of a drivable surface, its roughness. Robot perception generally focuses on the
first derivative, obstacle detection. However, the second derivative is also
important due to its direct relation (with speed) to the shock the vehicle
experiences. Knowing the second derivative allows a vehicle to slow down in
advance of rough terrain. Estimating the second derivative is challenging due
to uncertainty. For example, at range, laser readings may be so sparse that
significant information about the surface is missing. Also, a high degree of
precision is required in projecting laser readings. This precision may be
unavailable due to latency or error in the pose estimation. We model these
sources of error as a multivariate polynomial. Its coefficients are learned
using the shock data as ground truth -- the accelerometers are used to train
the lasers. The resulting classifier operates on individual laser readings from
a road surface described by a 3D point cloud. The classifier identifies
sections of road where the second derivative is likely to be large. Thus, the
vehicle can slow down in advance, reducing the shock it experiences. The
algorithm is an evolution of one we used in the 2005 DARPA Grand Challenge. We
analyze it using data from that route.
|
1206.6873
|
Variable noise and dimensionality reduction for sparse Gaussian
processes
|
cs.LG stat.ML
|
The sparse pseudo-input Gaussian process (SPGP) is a new approximation method
for speeding up GP regression in the case of a large number of data points N.
The approximation is controlled by the gradient optimization of a small set of
M `pseudo-inputs', thereby reducing complexity from N^3 to NM^2. One limitation
of the SPGP is that this optimization space becomes impractically big for high
dimensional data sets. This paper addresses this limitation by performing
automatic dimensionality reduction. A projection of the input space to a low
dimensional space is learned in a supervised manner, alongside the
pseudo-inputs, which now live in this reduced space. The paper also
investigates the suitability of the SPGP for modeling data with input-dependent
noise. A further extension of the model is made to make it even more powerful
in this regard - we learn an uncertainty parameter for each pseudo-input. The
combination of sparsity, reduced dimension, and input-dependent noise makes it
possible to apply GPs to much larger and more complex data sets than was
previously practical. We demonstrate the benefits of these methods on several
synthetic and real world problems.
|
1206.6874
|
Bayesian Inference for Gaussian Mixed Graph Models
|
stat.ME cs.AI
|
We introduce priors and algorithms to perform Bayesian inference in Gaussian
models defined by acyclic directed mixed graphs. Such a class of graphs,
composed of directed and bi-directed edges, is a representation of conditional
independencies that is closed under marginalization and arises naturally from
causal models which allow for unmeasured confounding. Monte Carlo methods and a
variational approximation for such models are presented. Our algorithms for
Bayesian inference allow the evaluation of posterior distributions for several
quantities of interest, including causal effects that are not identifiable from
data alone but could otherwise be inferred where informative prior knowledge
about confounding is available.
|
1206.6875
|
A simple approach for finding the globally optimal Bayesian network
structure
|
cs.AI
|
We study the problem of learning the best Bayesian network structure with
respect to a decomposable score such as BDe, BIC or AIC. This problem is known
to be NP-hard, which means that solving it becomes quickly infeasible as the
number of variables increases. Nevertheless, in this paper we show that it is
possible to learn the best Bayesian network structure with over 30 variables,
which covers many practically interesting cases. Our algorithm is less
complicated and more efficient than the techniques presented earlier. It can be
easily parallelized, and offers a possibility for efficient exploration of the
best networks consistent with different variable orderings. In the experimental
part of the paper we compare the performance of the algorithm to the previous
state-of-the-art algorithm. Free source-code and an online-demo can be found at
http://b-course.hiit.fi/bene.
|