| id | title | categories | abstract |
|---|---|---|---|
1301.3535 | Airport Gate Scheduling for Passengers, Aircraft, and Operation | cs.SY cs.AI | Passengers' experience is becoming a key metric to evaluate the air
transportation system's performance. Efficient and robust tools to handle
airport operations are needed along with a better understanding of passengers'
interests and concerns. Among various airport operations, this paper studies
airport gate scheduling for improved passengers' experience. Three objectives
accounting for passengers, aircraft, and operation are presented. Trade-offs
between these objectives are analyzed, and a balancing objective function is
proposed. The results show that the balanced objective can improve the
efficiency of traffic flow in passenger terminals and on ramps, as well as the
robustness of gate operations.
|
1301.3537 | Learning Stable Group Invariant Representations with Convolutional
Networks | cs.AI math.NA | Transformation groups, such as translations or rotations, effectively express
part of the variability observed in many recognition problems. The group
structure enables the construction of invariant signal representations with
appealing mathematical properties, where convolutions, together with pooling
operators, bring stability to additive and geometric perturbations of the
input. Whereas physical transformation groups are ubiquitous in image and audio
applications, they do not account for all the variability of complex signal
classes.
We show that the invariance properties built by deep convolutional networks
can be cast as a form of stable group invariance. The network wiring
architecture determines the invariance group, while the trainable filter
coefficients characterize the group action. We give explanatory examples which
illustrate how the network architecture controls the resulting invariance
group. We also explore the principle by which additional convolutional layers
induce a group factorization enabling more abstract, powerful invariant
representations.
|
1301.3539 | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | cs.LG | We propose a graphical model for multi-view feature extraction that
automatically adapts its structure to achieve a better representation of the
data distribution. The proposed model, the structure-adapting multi-view
harmonium (SA-MVH), has switch parameters that control the connections between
hidden nodes and input views, and it learns these switch parameters during
training. Numerical experiments on synthetic and real-world datasets
demonstrate the useful behavior of the SA-MVH compared to existing multi-view
feature extraction methods.
|
1301.3541 | Deep Predictive Coding Networks | cs.LG cs.CV stat.ML | The quality of data representation in deep learning methods is directly
related to the prior model imposed on the representations; however, generally
used fixed priors are not capable of adjusting to the context in the data. To
address this issue, we propose deep predictive coding networks, a hierarchical
generative model that empirically alters priors on the latent representations
in a dynamic and context-sensitive manner. This model captures the temporal
dependencies in time-varying signals and uses top-down information to modulate
the representation in lower layers. The centerpiece of our model is a novel
procedure to infer sparse states of a dynamic model which is used for feature
extraction. We also extend this feature extraction block to introduce a pooling
function that captures locally invariant representations. When applied to
natural video data, we show that our method is able to learn high-level visual
features. We also demonstrate the role of the top-down connections by showing
the robustness of the proposed model to structured noise.
|
1301.3545 | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | cs.LG cs.NE stat.ML | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for
training Boltzmann Machines. Similar in spirit to the Hessian-Free method of
Martens [8], our algorithm belongs to the family of truncated Newton methods
and exploits an efficient matrix-vector product to avoid explicitly storing
the natural gradient metric $L$. This metric is shown to be the expected second
derivative of the log-partition function (under the model distribution), or
equivalently, the variance of the vector of partial derivatives of the energy
function. We evaluate our method on the task of joint-training a 3-layer Deep
Boltzmann Machine and show that MFNG does indeed have faster per-epoch
convergence compared to Stochastic Maximum Likelihood with centering, though
wall-clock performance is currently not competitive.
|
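The abstract's central trick, accessing the metric L only through matrix-vector products, can be sketched in a few lines. Below is a minimal illustration in which synthetic per-sample energy gradients stand in for those of a real Boltzmann machine; the function names and the damping constant are assumptions of this sketch, not MFNG's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-sample energy gradients dE/dtheta under the model
# distribution (random data here; a real BM would produce these by sampling).
G = rng.normal(size=(512, 20))          # (n_samples, n_params)

def metric_vector_product(G, v):
    """Compute L v where L = Cov[dE/dtheta], without ever forming L."""
    Gc = G - G.mean(axis=0)             # center the gradient samples
    return Gc.T @ (Gc @ v) / G.shape[0] # one pass, O(n_samples * n_params)

def conjugate_gradient(mvp, b, iters=50, tol=1e-10):
    """Truncated-Newton style CG solve of L x = b using only L-vector products."""
    x = np.zeros_like(b)
    r = b - mvp(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = mvp(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

grad = rng.normal(size=20)
# Small damping term (an assumption of this sketch) keeps the solve well-posed.
step = conjugate_gradient(lambda v: metric_vector_product(G, v) + 1e-3 * v, grad)
```

The natural-gradient step is thus obtained at the cost of a handful of matrix-vector products, mirroring how Hessian-free methods avoid materializing the curvature matrix.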
1301.3547 | A Rhetorical Analysis Approach to Natural Language Processing | cs.CL stat.ML | The goal of this research was to find a way to extend the capabilities of
computers through the processing of language in a more human way, and present
applications which demonstrate the power of this method. This research presents
a novel approach, Rhetorical Analysis, to solving problems in Natural Language
Processing (NLP). The main benefit of Rhetorical Analysis, as opposed to
previous approaches, is that it does not require the accumulation of large sets
of training data, but can be used to solve a multitude of problems within the
field of NLP. The NLP problems investigated with Rhetorical Analysis were the
Author Identification problem - predicting the author of a piece of text based
on its rhetorical strategies, Election Prediction - predicting the winner of a
presidential candidate's re-election campaign based on rhetorical strategies
within that president's inaugural address, Natural Language Generation - having
a computer produce text containing rhetorical strategies, and Document
Summarization. The results of this research indicate that an Author
Identification system based on Rhetorical Analysis could predict the correct
author 100% of the time, that a re-election predictor based on Rhetorical
Analysis could predict the correct winner of a re-election campaign 55% of the
time, that a Natural Language Generation system based on Rhetorical Analysis
could output text with up to 87.3% similarity to Shakespeare in style, and that
a Document Summarization system based on Rhetorical Analysis could extract
highly relevant sentences. Overall, this study demonstrated that Rhetorical
Analysis could be a useful approach to solving problems in NLP.
|
1301.3551 | Information Theoretic Learning with Infinitely Divisible Kernels | cs.LG cs.CV | In this paper, we develop a framework for information theoretic learning
based on infinitely divisible matrices. We formulate an entropy-like functional
on positive definite matrices based on Renyi's axiomatic definition of entropy
and examine some key properties of this functional that lead to the concept of
infinite divisibility. The proposed formulation avoids the plug-in estimation
of density and brings along the representation power of reproducing kernel
Hilbert spaces. As an application example, we derive a supervised metric
learning algorithm using a matrix based analogue to conditional entropy
achieving results comparable with the state of the art.
|
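The entropy-like functional on positive definite matrices described above can be illustrated with a small sketch. We assume, as one common formulation, a Renyi-style entropy of a unit-trace Gram matrix built from a Gaussian kernel (which is infinitely divisible); the exact functional and normalization in the paper may differ.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """Gaussian (infinitely divisible) kernel matrix, normalized to unit trace."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return K / np.trace(K)

def matrix_renyi_entropy(A, alpha=2.0):
    """Renyi-style entropy of a unit-trace PSD matrix via its eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-10]                  # drop numerical zeros
    return np.log2((lam ** alpha).sum()) / (1 - alpha)
```

No density estimate is ever formed: the entropy is read off the spectrum of the kernel matrix, which is exactly the "representation power of reproducing kernel Hilbert spaces" the abstract refers to.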
1301.3552 | Negative Imaginary Systems Theory in the Robust Control of Highly
Resonant Flexible Structures | cs.SY math.OC | This paper covers recent developments in the theory of negative imaginary
systems and their application to the control of highly resonant flexible
structures. The theory of negative imaginary systems arose out of a desire to
unify a number of classical methods for the control of lightly damped
structures with collocated force actuators and position sensors including
positive position feedback and integral force feedback. The key result is a
stability result which shows why these methods are guaranteed to yield robust
closed loop stability in the face of unmodelled spillover dynamics. Related
results to be presented connect the theory of negative imaginary systems to
positive real systems theory and a negative imaginary lemma has been
established which is analogous to the positive real lemma. The paper also
presents recent controller synthesis results based on the theory of negative
imaginary systems.
|
1301.3557 | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | cs.LG cs.NE stat.ML | We introduce a simple and effective method for regularizing large
convolutional neural networks. We replace the conventional deterministic
pooling operations with a stochastic procedure, randomly picking the activation
within each pooling region according to a multinomial distribution, given by
the activities within the pooling region. The approach is hyper-parameter free
and can be combined with other regularization approaches, such as dropout and
data augmentation. We achieve state-of-the-art performance on four image
datasets, relative to other approaches that do not utilize data augmentation.
|
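The stochastic pooling procedure described above is easy to state concretely: sample one activation per pooling region with probability proportional to its value. A minimal sketch, assuming non-overlapping regions and non-negative activations (helper names are ours):

```python
import numpy as np

def stochastic_pool(region, rng):
    """Pick one activation from a pooling region with probability
    proportional to its (non-negative) value; all-zero regions pool to 0."""
    a = region.ravel()
    total = a.sum()
    if total == 0:
        return 0.0
    return rng.choice(a, p=a / total)

def pool_map(fmap, size, rng):
    """Apply stochastic pooling over non-overlapping size x size regions."""
    H, W = fmap.shape
    out = np.empty((H // size, W // size))
    for i in range(0, H - size + 1, size):
        for j in range(0, W - size + 1, size):
            out[i // size, j // size] = stochastic_pool(fmap[i:i+size, j:j+size], rng)
    return out
```

At test time the authors use a probability-weighted average of the region's activations rather than a sample; the sketch covers only the training-time sampling step that acts as the regularizer.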
1301.3560 | Complexity of Representation and Inference in Compositional Models with
Part Sharing | cs.CV | This paper describes serial and parallel compositional models of multiple
objects with part sharing. Objects are built by part-subpart compositions and
expressed in terms of a hierarchical dictionary of object parts. These parts
are represented on lattices of decreasing sizes which yield an executive
summary description. We describe inference and learning algorithms for these
models. We analyze the complexity of this model in terms of computation time
(for serial computers) and numbers of nodes (e.g., "neurons") for parallel
computers. In particular, we compute the complexity gains by part sharing and
its dependence on how the dictionary scales with the level of the hierarchy. We
explore three regimes of scaling behavior where the dictionary size (i)
increases exponentially with the level, (ii) is determined by an unsupervised
compositional learning algorithm applied to real data, (iii) decreases
exponentially with scale. This analysis shows that in some regimes the use of
shared parts enables algorithms which can perform inference in time linear in
the number of levels for an exponential number of objects. In other regimes
part sharing has little advantage for serial computers but can give linear
processing on parallel computers.
|
1301.3568 | Joint Training Deep Boltzmann Machines for Classification | stat.ML cs.LG | We introduce a new method for training deep Boltzmann machines jointly. Prior
methods of training DBMs require an initial learning pass that trains the model
greedily, one layer at a time, or do not perform well on classification tasks.
In our approach, we train all layers of the DBM simultaneously, using a novel
training procedure called multi-prediction training. The resulting model can
either be interpreted as a single generative model trained to maximize a
variational approximation to the generalized pseudolikelihood, or as a family
of recurrent networks that share parameters and may be approximately averaged
together using a novel technique we call the multi-inference trick. We show
that our approach performs competitively for classification and outperforms
previous methods in terms of accuracy of approximate inference and
classification with missing inputs.
|
1301.3572 | Indoor Semantic Segmentation using depth information | cs.CV | This work addresses multi-class segmentation of indoor scenes with RGB-D
inputs. While this area of research has gained much attention recently, most
works still rely on hand-crafted features. In contrast, we apply a multiscale
convolutional network to learn features directly from the images and the depth
information. We obtain state-of-the-art results on the NYU-v2 depth dataset
with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video
sequences that could be processed in real time using appropriate hardware such
as an FPGA.
|
1301.3575 | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative
Clustering | cs.LG cs.CV stat.ML | Large scale agglomerative clustering is hindered by computational burdens. We
propose a novel scheme where exact inter-instance distance calculation is
replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing
(KLSH) hashed values. This results in a method that drastically decreases
computation time. Additionally, we take advantage of certain labeled data
points via distance metric learning to achieve precision and recall competitive
with K-Means, but in much less computation time.
|
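The core speed trick above, replacing exact inter-instance distances with Hamming distances between hash codes, can be sketched with plain random-projection hashing standing in for the kernelized variant. KLSH proper hashes in the kernel-induced feature space; this linear stand-in and all names are illustrative.

```python
import numpy as np

def hash_codes(X, n_bits=64, rng=None):
    """Sign-of-random-projection hashing (a linear-kernel stand-in for KLSH):
    nearby points agree on most bits with high probability."""
    rng = rng or np.random.default_rng(0)
    H = rng.normal(size=(X.shape[1], n_bits))
    return (X @ H > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes: the cheap surrogate for
    exact inter-instance distance during agglomerative merging."""
    return int(np.count_nonzero(a != b))
```

Bit comparisons are drastically cheaper than kernel evaluations, which is where the claimed reduction in computation time comes from.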
1301.3577 | Saturating Auto-Encoders | cs.LG | We introduce a simple new regularizer for auto-encoders whose hidden-unit
activation functions contain at least one zero-gradient (saturated) region.
This regularizer explicitly encourages activations in the saturated region(s)
of the corresponding activation function. We call these Saturating
Auto-Encoders (SATAE). We show that the saturation regularizer explicitly
limits the SATAE's ability to reconstruct inputs which are not near the data
manifold. Furthermore, we show that a wide variety of features can be learned
when different activation functions are used. Finally, connections are
established with the Contractive and Sparse Auto-Encoders.
|
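As a toy illustration of the saturation regularizer: penalize each hidden unit by its distance to the nearest zero-gradient region of its activation function. Assuming ReLU units (saturated for non-positive pre-activations) and a tied-weight linear decoder, the penalty reduces to an L1-style term on the active units, consistent with the connection to sparse auto-encoders noted above; all names here are ours.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def saturation_penalty(z):
    """Distance of each pre-activation from the nearest zero-gradient
    (saturated) region. For ReLU that region is z <= 0, so the penalty
    is simply the total mass of the active units."""
    return relu(z).sum()

def satae_loss(x, W, b, lam=0.1):
    """Tied-weight autoencoder loss with the saturation regularizer added."""
    z = x @ W + b              # pre-activations
    h = relu(z)                # hidden code
    x_hat = h @ W.T            # linear decoder with tied weights
    recon = ((x - x_hat) ** 2).sum()
    return recon + lam * saturation_penalty(z)
```

With a different activation (e.g. a saturating-linear unit with two flat regions), the same penalty pushes units toward whichever saturated region is closest, which is what changes the learned features.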
1301.3578 | Cramer-Rao Lower Bound and Information Geometry | cs.IT math.IT | This article focuses on an important piece of work of the world renowned
Indian statistician, Calyampudi Radhakrishna Rao. In 1945, C. R. Rao (25 years
old then) published a pathbreaking paper, which had a profound impact on
subsequent statistical research.
|
1301.3583 | Big Neural Networks Waste Capacity | cs.LG cs.CV | This article exposes the failure of some big neural networks to leverage
added capacity to reduce underfitting. Past research suggests diminishing
returns when increasing the size of neural networks. Our experiments on
ImageNet LSVRC-2010 show that this may be due to the fact that there are highly
diminishing returns for capacity in terms of training error, leading to
underfitting. This suggests that the optimization method - first-order gradient
descent - fails in this regime. Directly attacking this problem, either through
the optimization method or the choice of parametrization, may make it possible
to improve the generalization error on large datasets, for which a large
capacity is required.
|
1301.3584 | Revisiting Natural Gradient for Deep Networks | cs.LG cs.NA | We evaluate natural gradient, an algorithm originally proposed in Amari
(1997), for learning deep models. The contributions of this paper are as
follows. We show the connection between natural gradient and three other
recently proposed methods for training deep models: Hessian-Free (Martens,
2010), Krylov Subspace Descent (Vinyals and Povey, 2012) and TONGA (Le Roux et
al., 2008). We describe how one can use unlabeled data to improve the
generalization error obtained by natural gradient and empirically evaluate the
robustness of the algorithm to the ordering of the training set compared to
stochastic gradient descent. Finally we extend natural gradient to incorporate
second order information alongside the manifold information and provide a
benchmark of the new algorithm using a truncated Newton approach for inverting
the metric matrix instead of using a diagonal approximation of it.
|
1301.3590 | Tree structured sparse coding on cubes | cs.IT cs.CV math.IT | A brief description of tree structured sparse coding on the binary cube.
|
1301.3592 | Deep Learning for Detecting Robotic Grasps | cs.LG cs.CV cs.RO | We consider the problem of detecting robotic grasps in an RGB-D view of a
scene containing objects. In this work, we apply a deep learning approach to
solve this problem, which avoids time-consuming hand-design of features. This
presents two main challenges. First, we need to evaluate a huge number of
candidate grasps. In order to make detection fast, as well as robust, we
present a two-step cascaded structure with two deep networks, where the top
detections from the first are re-evaluated by the second. The first network has
fewer features, is faster to run, and can effectively prune out unlikely
candidate grasps. The second, with more features, is slower but has to run only
on the top few detections. Second, we need to handle multimodal inputs well,
for which we present a method to apply structured regularization on the weights
based on multimodal group regularization. We demonstrate that our method
outperforms the previous state-of-the-art methods in robotic grasp detection,
and can be used to successfully execute grasps on two different robotic
platforms.
|
1301.3598 | Low-Complexity Scheduling Policies for Achieving Throughput and
Asymptotic Delay Optimality in Multi-Channel Wireless Networks | cs.NI cs.IT math.IT | In this paper, we study the scheduling problem for downlink transmission in a
multi-channel (e.g., OFDM-based) wireless network. We focus on a single cell,
with the aim of developing a unifying framework for designing low-complexity
scheduling policies that can provide optimal performance in terms of both
throughput and delay. We develop new easy-to-verify sufficient conditions for
rate-function delay optimality (in the many-channel many-user asymptotic
regime) and throughput optimality (in general non-asymptotic setting),
respectively. The sufficient conditions allow us to prove rate-function delay
optimality for a class of Oldest Packets First (OPF) policies and throughput
optimality for a large class of Maximum Weight in the Fluid limit (MWF)
policies, respectively. By exploiting the special features of our carefully
chosen sufficient conditions and intelligently combining policies from the
classes of OPF and MWF policies, we design hybrid policies that are both
rate-function delay-optimal and throughput-optimal with a complexity of
$O(n^{2.5} \log n)$, where $n$ is the number of channels or users. Our
sufficient condition is also used to show that a previously proposed policy
called Delay Weighted Matching (DWM) is rate-function delay-optimal. However,
DWM incurs a high complexity of $O(n^5)$. Thus, our approach yields
significantly lower complexity than the only previously designed delay and
throughput optimal scheduling policy. We also conduct numerical experiments to
validate our theoretical results.
|
1301.3601 | Statistical Analysis of Self-Organizing Networks with Biased Cell
Association and Interference Avoidance | cs.NI cs.IT math.IT stat.AP | In this work, we assess the viability of heterogeneous networks composed of
legacy macrocells which are underlaid with self-organizing picocells. Aiming to
improve coverage, cell-edge throughput and overall system capacity,
self-organizing solutions, such as range expansion bias, almost blank subframe
and distributed antenna systems are considered. Herein, stochastic geometry is
used to model network deployments, while higher-order statistics, via the
concept of cumulants, are used to characterize the probability distribution of
the received power and aggregate interference at the user of interest. A
comprehensive analytical framework is introduced to evaluate the performance
of such self-organizing networks in terms of outage probability and average
channel capacity with respect to the tagged receiver. To conduct our studies,
we consider a shadowed fading channel model incorporating log-normal shadowing
and Nakagami-m fading. Results show that the analytical framework matches well
with numerical results obtained from Monte Carlo simulations. We also observe
that by simply using almost blank subframes the aggregate interference at the
tagged receiver is reduced by about 12 dB. However, more elaborate interference
control techniques, such as downlink bitmaps and distributed antenna systems,
become necessary when the density of picocells in the underlaid tier gets high.
|
1301.3605 | Feature Learning in Deep Neural Networks - Studies on Speech Recognition
Tasks | cs.LG cs.CL cs.NE eess.AS | Recent studies have shown that deep neural networks (DNNs) perform
significantly better than shallow networks and Gaussian mixture models (GMMs)
on large vocabulary speech recognition tasks. In this paper, we argue that the
improved accuracy achieved by the DNNs is the result of their ability to
extract discriminative internal representations that are robust to the many
sources of variability in speech signals. We show that these representations
become increasingly insensitive to small perturbations in the input with
increasing network depth, which leads to better speech recognition performance
with deeper networks. We also show that DNNs cannot extrapolate to test samples
that are substantially different from the training examples. If the training
data are sufficiently representative, however, internal features learned by the
DNN are relatively stable with respect to speaker differences, bandwidth
differences, and environment distortion. This enables DNN-based recognizers to
perform as well or better than state-of-the-art systems based on GMMs or
shallow networks without the need for explicit model adaptation or feature
normalization.
|
1301.3614 | Joint Space Neural Probabilistic Language Model for Statistical Machine
Translation | cs.CL | A neural probabilistic language model (NPLM) offers a way to achieve
better perplexity than an n-gram language model and its smoothed variants. This
paper investigates its application in bilingual NLP, specifically Statistical
Machine Translation (SMT). We focus on the potential of NPLM to complement
`resource-constrained' bilingual resources with potentially `huge' monolingual
resources. We introduce an ngram-HMM language model as an NPLM using a
non-parametric Bayesian construction. In order to facilitate the application to
various tasks, we propose a joint space model of the ngram-HMM language model.
We show an experiment on system combination in the area of SMT. One discovery
was that our treatment of noise improved the results by 0.20 BLEU points when
the NPLM is trained on a relatively small corpus, in our case 500,000 sentence
pairs, which is often the case due to the long training time of NPLM.
|
1301.3618 | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | cs.CL cs.LG | Knowledge bases provide applications with the benefit of easily accessible,
systematic relational knowledge but often suffer in practice from their
incompleteness and lack of knowledge of new entities and relations. Much work
has focused on building or extending them by finding patterns in large
unannotated text corpora. In contrast, here we mainly aim to complete a
knowledge base by predicting additional true relationships between entities,
based on generalizations that can be discerned in the given knowledge base. We
introduce a neural tensor network (NTN) model which predicts new relationship
entries that can be added to the database. This model can be improved by
initializing entity representations with word vectors learned in an
unsupervised fashion from text, and when doing this, existing relations can
even be queried for entities that were not present in the database. Our model
generalizes and outperforms existing models for this problem, and can classify
unseen relationships in WordNet with an accuracy of 75.8%.
|
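The scoring function of a neural tensor network, a bilinear tensor term plus a standard linear layer, can be sketched as follows. Shapes and names are illustrative assumptions; this is the general NTN form, not the trained model from the paper.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural tensor network score for a candidate (entity1, relation, entity2)
    triple: each tensor slice W[i] relates the two entity vectors bilinearly,
    a linear layer V acts on their concatenation, and the squashed sum is
    projected to a scalar plausibility score."""
    k = W.shape[0]                                  # number of tensor slices
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

Initializing `e1` and `e2` with unsupervised word vectors, as the abstract describes, simply means these input vectors come pre-trained rather than random, so even entities unseen in the database get meaningful scores.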
1301.3627 | Two SVDs produce more focal deep learning representations | cs.CL cs.LG | A key characteristic of work on deep learning and neural networks in general
is that it relies on representations of the input that support generalization,
robust inference, domain adaptation and other desirable functionalities. Much
recent progress in the field has focused on efficient and effective methods for
computing representations. In this paper, we propose an alternative method that
is more efficient than prior work and produces representations that have a
property we call focality -- a property we hypothesize to be important for
neural network representations. The method consists of a simple application of
two consecutive SVDs and is inspired by Anandkumar (2012).
|
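One plausible reading of the two-SVD recipe can be sketched as follows. This is our own illustrative assumption, with a length-normalization step inserted between the two factorizations so that the second SVD is non-trivial; the paper's exact pipeline may differ.

```python
import numpy as np

def double_svd(X, k):
    """Two consecutive truncated SVDs (an illustrative sketch, not the
    paper's exact recipe): reduce X to k dimensions, length-normalize the
    rows, then re-factorize to redistribute variance across directions."""
    U1, s1, _ = np.linalg.svd(X, full_matrices=False)
    Y = U1[:, :k] * s1[:k]                            # first-pass embedding
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)  # normalize each row
    U2, s2, _ = np.linalg.svd(Y, full_matrices=False)
    return U2 * s2                                    # second-pass embedding
```

Without the intermediate normalization the second SVD would reproduce the first factorization up to signs, so whatever nonlinearity sits between the two passes is what gives the method its effect.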
1301.3630 | Behavior Pattern Recognition using A New Representation Model | cs.LG | We study the use of inverse reinforcement learning (IRL) as a tool for the
recognition of agents' behavior on the basis of observation of their sequential
decision behavior interacting with the environment. We model the problem faced
by the agents as a Markov decision process (MDP) and model the observed
behavior of the agents in terms of forward planning for the MDP. We use IRL to
learn reward functions and then use these reward functions as the basis for
clustering or classification models. Experimental studies with GridWorld, a
navigation problem, and the secretary problem, an optimal stopping problem,
suggest reward vectors found from IRL can be a good basis for behavior pattern
recognition problems. Empirical comparisons of our method with several existing
IRL algorithms and with direct methods that use feature statistics observed in
state-action space suggest it may be superior for recognition problems.
|
1301.3641 | Training Neural Networks with Stochastic Hessian-Free Optimization | cs.LG cs.NE stat.ML | Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed on the same order of time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization gives an
intermediary between SGD and HF that achieves competitive performance on both
classification and deep autoencoder experiments.
|
1301.3644 | Regularized Discriminant Embedding for Visual Descriptor Learning | cs.CV cs.LG | Images can vary according to changes in viewpoint, resolution, noise, and
illumination. In this paper, we aim to learn representations for an image,
which are robust to wide changes in such environmental conditions, using
training pairs of matching and non-matching local image patches that are
collected under various environmental conditions. We present a regularized
discriminant analysis that emphasizes two challenging categories among the
given training pairs: (1) matching, but far apart pairs and (2) non-matching,
but close pairs in the original feature space (e.g., SIFT feature space).
Compared to existing work on metric learning and discriminant analysis, our
method can better distinguish relevant images from irrelevant, but look-alike
images.
|
1301.3662 | Composable security of delegated quantum computation | quant-ph cs.CR cs.IT math.IT | Delegating difficult computations to remote large computation facilities,
with appropriate security guarantees, is a possible solution for the
ever-growing needs of personal computing power. For delegated computation
protocols to be usable in a larger context---or simply to securely run two
protocols in parallel---the security definitions need to be composable. Here,
we define composable security for delegated quantum computation. We distinguish
between protocols which provide only blindness---the computation is hidden from
the server---and those that are also verifiable---the client can check that it
has received the correct result. We show that the composable security
definition capturing both these notions can be reduced to a combination of
several distinct "trace-distance-type" criteria---which are, individually,
non-composable security definitions.
Additionally, we study the security of some known delegated quantum
computation protocols, including Broadbent, Fitzsimons and Kashefi's Universal
Blind Quantum Computation protocol. Even though these protocols were originally
proposed with insufficient security criteria, they turn out to still be secure
given the stronger composable definitions.
|
1301.3666 | Zero-Shot Learning Through Cross-Modal Transfer | cs.CV cs.LG | This work introduces a model that can recognize objects in images even if no
training data is available for the objects. The only necessary knowledge about
the unseen categories comes from unsupervised large text corpora. In our
zero-shot framework distributional information in language can be seen as
spanning a semantic basis for understanding what objects look like. Most
previous zero-shot learning models can only differentiate between unseen
classes. In contrast, our model can both obtain state of the art performance on
classes that have thousands of training images and obtain reasonable
performance on unseen classes. This is achieved by first using outlier
detection in the semantic space and then two separate recognition models.
Furthermore, our model does not require any manually defined semantic features
for either words or images.
|
1301.3676 | Duality and Network Theory in Passivity-based Cooperative Control | math.OC cs.SY | This paper presents a class of passivity-based cooperative control problems
that have an explicit connection to convex network optimization problems. The
new notion of maximal equilibrium independent passivity is introduced and it is
shown that networks of systems possessing this property asymptotically approach
the solutions of a dual pair of network optimization problems, namely an
optimal potential and an optimal flow problem. This connection leads to an
interpretation of the dynamic variables, such as system inputs and outputs, to
variables in a network optimization framework, such as divergences and
potentials, and reveals that several duality relations known in convex network
optimization theory translate directly to passivity-based cooperative control
problems. The presented results establish a strong and explicit connection
between passivity-based cooperative control theory on the one side and network
optimization theory on the other, and they provide a unifying framework for
network analysis and optimal design. The results are illustrated on a nonlinear
traffic dynamics model that is shown to be asymptotically clustering.
|
1301.3683 | Convex Variational Image Restoration with Histogram Priors | math.OC cs.CV | We present a novel variational approach to image restoration (e.g.,
denoising, inpainting, labeling) that makes it possible to complement established
variational approaches with a histogram-based prior enforcing closeness of the
solution to some given empirical measure. By minimizing a single objective
function, the approach utilizes simultaneously two quite different sources of
information for restoration: spatial context in terms of some smoothness prior
and non-spatial statistics in terms of the novel prior utilizing the
Wasserstein distance between probability measures. We study the combination of
the functional lifting technique with two different relaxations of the
histogram prior and derive a jointly convex variational approach. Mathematical
equivalence of both relaxations is established and cases where optimality holds
are discussed. Additionally, we present an efficient algorithmic scheme for the
numerical treatment of the presented model. Experiments using the basic
total-variation based denoising approach as a case study demonstrate our novel
regularization approach.
|
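The non-spatial half of the objective needs a Wasserstein distance between the solution's histogram and a given empirical measure. In one dimension this distance has a closed form as the area between the two CDFs, which a few lines make concrete (uniform-mass histograms on shared bins assumed; this is the bare distance, not the paper's convex relaxation).

```python
import numpy as np

def wasserstein1_hist(p, q, bin_centers):
    """W1 distance between two 1-D histograms of equal total mass:
    integrate |CDF_p - CDF_q| over the bin axis."""
    p = p / p.sum()
    q = q / q.sum()
    cdf_diff = np.cumsum(p - q)                       # CDF difference per bin
    widths = np.diff(bin_centers, append=bin_centers[-1])
    return float(np.sum(np.abs(cdf_diff) * widths))
```

Unlike a bin-wise L1 or chi-squared comparison, this distance accounts for how far mass must travel between bins, which is why it makes a sensible prior for intensity histograms.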
1301.3698 | Modeling human dynamics of face-to-face interaction networks | physics.soc-ph cond-mat.stat-mech cs.SI | Face-to-face interaction networks describe social interactions in human
gatherings, and are the substrate for processes such as epidemic spreading and
gossip propagation. The bursty nature of human behavior characterizes many
aspects of empirical data, such as the distribution of conversation lengths, of
conversations per person, or of inter-conversation times. Despite several
recent attempts, a general theoretical understanding of the global picture
emerging from data is still lacking. Here we present a simple model that
reproduces quantitatively most of the relevant features of empirical
face-to-face interaction networks. The model describes agents which perform a
random walk in a two dimensional space and are characterized by an
attractiveness whose effect is to slow down the motion of people around them.
The proposed framework sheds light on the dynamics of human interactions and
can improve the modeling of dynamical processes taking place on the ensuing
dynamical social networks.
|
1301.3708 | Training Sequence Design for MIMO Channels: An Application-Oriented
Approach | cs.IT math.IT | In this paper, the problem of training optimization for estimating a
multiple-input multiple-output (MIMO) flat fading channel in the presence of
spatially and temporally correlated Gaussian noise is studied in an
application-oriented setup. So far, the problem of MIMO channel estimation has
mostly been treated within the context of minimizing the mean square error
(MSE) of the channel estimate subject to various constraints, such as an upper
bound on the available training energy. We introduce a more general framework
for the task of training sequence design in MIMO systems, which can treat not
only the minimization of the channel estimator's MSE, but also the optimization of
a final performance metric of interest related to the use of the channel
estimate in the communication system. First, we show that the proposed
framework can be used to minimize the training energy budget subject to a
quality constraint on the MSE of the channel estimator. A deterministic version
of the "dual" problem is also provided. We then focus on four specific
applications, where the training sequence can be optimized with respect to the
classical channel estimation MSE, a weighted channel estimation MSE and the MSE
of the equalization error due to the use of an equalizer at the receiver or an
appropriate linear precoder at the transmitter. In this way, the intended use
of the channel estimate is explicitly accounted for. The superiority of the
proposed designs over existing methods is demonstrated via numerical
simulations.
|
1301.3720 | The IBMAP approach for Markov networks structure learning | cs.AI cs.LG | In this work we consider the problem of learning the structure of Markov
networks from data. We present an approach for tackling this problem called
IBMAP, together with an efficient instantiation of the approach: the IBMAP-HC
algorithm, designed to avoid important limitations of existing
independence-based algorithms. These algorithms proceed by performing
statistical independence tests on data, trusting completely the outcome of each
test. In practice tests may be incorrect, resulting in potential cascading
errors and the consequent reduction in the quality of the structures learned.
IBMAP contemplates this uncertainty in the outcome of the tests through a
probabilistic maximum-a-posteriori approach. The approach is instantiated in
the IBMAP-HC algorithm, a structure selection strategy that performs a
polynomial heuristic local search in the space of possible structures. We
present an extensive empirical evaluation on synthetic and real data, showing
that our algorithm significantly outperforms the current independence-based
algorithms, in terms of data efficiency and quality of learned structures, with
equivalent computational complexities. We also show the performance of IBMAP-HC
in a real-world application of knowledge discovery: EDAs, which are
evolutionary algorithms that use structure learning on each generation for
modeling the distribution of populations. The experiments show that when
IBMAP-HC is used to learn the structure, EDAs improve the convergence to the
optimum.
|
1301.3753 | Switched linear encoding with rectified linear autoencoders | cs.LG | Several recent results in machine learning have established formal
connections between autoencoders---artificial neural network models that
attempt to reproduce their inputs---and other coding models like sparse coding
and K-means. This paper explores in depth an autoencoder model that is
constructed using rectified linear activations on its hidden units. Our
analysis builds on recent results to further unify the world of sparse linear
coding models. We provide an intuitive interpretation of the behavior of these
coding models and demonstrate this intuition using small, artificial datasets
with known distributions.
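As a minimal sketch of the model class discussed above (an untrained toy with random placeholder weights, not the authors' implementation), a rectified linear autoencoder behaves as a "switched" linear code: the set of active hidden units selects a fixed linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, untrained rectified linear autoencoder: 4-dim input, 8 hidden units.
W = rng.normal(size=(8, 4))        # encoder weights (placeholder values)
b = 0.1 * rng.normal(size=8)       # encoder biases
V = rng.normal(size=(4, 8))        # linear decoder weights

def encode(x):
    return np.maximum(0.0, W @ x + b)   # rectified linear activations

def decode(h):
    return V @ h                        # linear reconstruction

x = rng.normal(size=4)
h = encode(x)
pattern = h > 0                         # which hidden units are "switched on"

# For all inputs sharing this activation pattern, the code is the fixed
# linear map diag(pattern) @ (W . + b) -- the "switched linear" view.
assert np.allclose(h, np.where(pattern, W @ x + b, 0.0))
print(pattern.astype(int))
```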
|
1301.3755 | Gradient Driven Learning for Pooling in Visual Pipeline Feature
Extraction Models | cs.CV | Hyper-parameter selection remains a daunting task when building a pattern
recognition architecture which performs well, particularly in recently
constructed visual pipeline models for feature extraction. We re-formulate
pooling in an existing pipeline as a function of adjustable pooling map weight
parameters and propose the use of supervised error signals from gradient
descent to tune the established maps within the model. This technique allows us
to learn what would otherwise be a design choice within the model and
specialize the maps to aggregate areas of invariance for the task presented.
Preliminary results show moderate potential gains in classification accuracy
and highlight areas of importance within the intermediate feature
representation space.
|
1301.3758 | Mutual Localization: Two Camera Relative 6-DOF Pose Estimation from
Reciprocal Fiducial Observation | cs.RO | Concurrently estimating the 6-DOF pose of multiple cameras or
robots---cooperative localization---is a core problem in contemporary robotics.
Current works focus on a set of mutually observable world landmarks and often
require inbuilt egomotion estimates; situations in which both assumptions are
violated often arise, for example with robots that have erroneous, low-quality odometry
and IMU exploring an unknown environment. In contrast to these existing works
in cooperative localization, we propose a cooperative localization method,
which we call mutual localization, that uses reciprocal observations of
camera-fiducials to obviate the need for egomotion estimates and mutually
observable world landmarks. We derive and solve an algebraic formulation for
the pose of the two-camera mutual localization setup under these assumptions.
Our experiments demonstrate the capabilities of our proposed egomotion-free
cooperative localization method: for example, the method achieves 2cm range and
0.7 degree accuracy at 2m sensing for 6-DOF pose. To demonstrate the
applicability of the proposed work, we deploy our method on Turtlebots and we
compare our results with ARToolKit and Bundler, over which our method achieves
a 10 fold improvement in translation estimation accuracy.
|
1301.3764 | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | cs.LG cs.AI stat.ML | Recent work has established an empirically successful framework for adapting
learning rates for stochastic gradient descent (SGD). This effectively removes
any need for tuning, while automatically reducing learning rates over time on
stationary problems, and permitting learning rates to grow appropriately in
non-stationary tasks. Here, we extend the idea in three directions, addressing
proper minibatch parallelization, including reweighted updates for sparse or
orthogonal gradients, and improving robustness on non-smooth loss functions, in
the process replacing the diagonal Hessian estimation procedure, which may not
always be available, with a robust finite-difference approximation. The final algorithm
integrates all these components, has linear complexity and is hyper-parameter
free.
|
1301.3775 | Discriminative Recurrent Sparse Auto-Encoders | cs.LG cs.CV | We present the discriminative recurrent sparse auto-encoder model, comprising
a recurrent encoder of rectified linear units, unrolled for a fixed number of
iterations, and connected to two linear decoders that reconstruct the input and
predict its supervised classification. Training via
backpropagation-through-time initially minimizes an unsupervised sparse
reconstruction error; the loss function is then augmented with a discriminative
term on the supervised classification. The depth implicit in the
temporally-unrolled form allows the system to exhibit all the power of deep
networks, while substantially reducing the number of trainable parameters.
From an initially unstructured network the hidden units differentiate into
categorical-units, each of which represents an input prototype with a
well-defined class; and part-units representing deformations of these
prototypes. The learned organization of the recurrent encoder is hierarchical:
part-units are driven directly by the input, whereas the activity of
categorical-units builds up over time through interactions with the part-units.
Even using a small number of hidden units per layer, discriminative recurrent
sparse auto-encoders achieve excellent performance on MNIST.
|
1301.3781 | Efficient Estimation of Word Representations in Vector Space | cs.CL | We propose two novel model architectures for computing continuous vector
representations of words from very large data sets. The quality of these
representations is measured in a word similarity task, and the results are
compared to the previously best performing techniques based on different types
of neural networks. We observe large improvements in accuracy at much lower
computational cost, i.e. it takes less than a day to learn high quality word
vectors from a 1.6 billion words data set. Furthermore, we show that these
vectors provide state-of-the-art performance on our test set for measuring
syntactic and semantic word similarities.
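The word-similarity arithmetic such vectors support can be illustrated with tiny hand-crafted vectors standing in for learned ones (the words and values below are invented for the example; real vectors are learned from billions of words):

```python
import numpy as np

# Hand-crafted 3-dim vectors purely for illustration.
vecs = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "prince": np.array([0.8, 0.7, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy query: king - man + woman ~= ?
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vecs[w], target))
print(best)   # → queen
```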
|
1301.3784 | Asymptotic Consensus Without Self-Confidence | math.DS cs.SY | This paper studies asymptotic consensus in systems in which agents do not
necessarily have self-confidence, i.e., may disregard their own value during
execution of the update rule. We show that the prevalent hypothesis of
self-confidence in many convergence results can be replaced by the existence of
aperiodic cores. These are stable aperiodic subgraphs, which make it possible to
virtually store information about an agent's value in a distributed fashion in the
network. Our
results are applicable to systems with message delays and memory loss.
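A minimal numerical illustration of consensus without self-confidence (the update matrix below is invented for the example; agent 0 puts zero weight on its own value, yet the system still agrees):

```python
import numpy as np

# Row-stochastic update matrix; agent 0 has no self-confidence
# (zero diagonal weight), agents 1 and 2 do.
W = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.4, 0.3],
              [0.3, 0.3, 0.4]])

x = np.array([1.0, 5.0, 9.0])      # initial agent values
for _ in range(200):
    x = W @ x                      # synchronous consensus update

print(np.round(x, 6))              # all three agents agree on a common value
```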
|
1301.3791 | XORing Elephants: Novel Erasure Codes for Big Data | cs.IT cs.DC cs.NI math.IT | Distributed storage systems for large clusters typically use replication to
provide reliability. Recently, erasure codes have been used to reduce the large
storage overhead of three-replicated systems. Reed-Solomon codes are the
standard design choice and their high repair cost is often considered an
unavoidable price to pay for high storage efficiency and high reliability.
This paper shows how to overcome this limitation. We present a novel family
of erasure codes that are efficiently repairable and offer higher reliability
compared to Reed-Solomon codes. We show analytically that our codes are optimal
on a recently identified tradeoff between locality and minimum distance.
We implement our new codes in Hadoop HDFS and compare to a currently deployed
HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation
shows a reduction of approximately 2x in repair disk I/O and repair network
traffic. The disadvantage of the new coding scheme is that it requires 14% more
storage compared to Reed-Solomon codes, an overhead shown to be information
theoretically optimal to obtain locality. Because the new codes repair failures
faster, this provides higher reliability, which is orders of magnitude higher
compared to replication.
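The locality idea can be sketched with a toy XOR parity group (an LRC-style illustration, not the paper's actual code construction): a lost block is rebuilt from its small local group instead of a wide Reed-Solomon stripe.

```python
import os
from functools import reduce

def xor_blocks(blocks):
    """Bitwise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A local group of 4 data blocks plus one XOR parity over the group.
data = [os.urandom(16) for _ in range(4)]
local_parity = xor_blocks(data)

# Repair a single lost block by XORing the surviving members and the parity,
# touching only the local group (low repair I/O).
lost = 2
survivors = [blk for i, blk in enumerate(data) if i != lost] + [local_parity]
repaired = xor_blocks(survivors)
assert repaired == data[lost]
print("repaired block", lost)
```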
|
1301.3816 | Learning Output Kernels for Multi-Task Problems | cs.LG | Simultaneously solving multiple related learning tasks is beneficial under a
variety of circumstances, but the prior knowledge necessary to correctly model
task relationships is rarely available in practice. In this paper, we develop a
novel kernel-based multi-task learning technique that automatically reveals
structural inter-task relationships. Building over the framework of output
kernel learning (OKL), we introduce a method that jointly learns multiple
functions and a low-rank multi-task kernel by solving a non-convex
regularization problem. Optimization is carried out via a block coordinate
descent strategy, where each subproblem is solved using suitable conjugate
gradient (CG) type iterative methods for linear operator equations. The
effectiveness of the proposed approach is demonstrated on pharmacological and
collaborative filtering data.
|
1301.3832 | A Complete Calculus for Possibilistic Logic Programming with Fuzzy
Propositional Variables | cs.AI | In this paper we present a propositional logic programming language for
reasoning under possibilistic uncertainty and representing vague knowledge.
Formulas are represented by pairs (A, c), where A is a many-valued proposition
and c is a value in the unit interval [0,1] which denotes a lower bound on the
belief on A in terms of necessity measures. Belief states are modeled by
possibility distributions on the set of all many-valued interpretations. In
this framework, (i) we define a syntax and a semantics of the general
underlying uncertainty logic; (ii) we provide a modus ponens-style calculus for
a sublanguage of Horn-rules and we prove that it is complete for determining
the maximum degree of possibilistic belief with which a fuzzy propositional
variable can be entailed from a set of formulas; and finally, (iii) we show how
the computation of a partial matching between fuzzy propositional variables, in
terms of necessity measures for fuzzy sets, can be included in our logic
programming system.
|
1301.3833 | Reversible Jump MCMC Simulated Annealing for Neural Networks | cs.LG cs.NE stat.ML | We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated
annealing algorithm to optimize radial basis function (RBF) networks. This
algorithm enables us to maximize the joint posterior distribution of the
network parameters and the number of basis functions. It performs a global
search in the joint space of the parameters and number of parameters, thereby
surmounting the problem of local minima. We also show that by calibrating a
Bayesian model, we can obtain the classical AIC, BIC and MDL model selection
criteria within a penalized likelihood framework. Finally, we show
theoretically and empirically that the algorithm converges to the modes of the
full posterior distribution in an efficient way.
|
1301.3834 | Perfect Tree-Like Markovian Distributions | cs.AI | We show that if a strictly positive joint probability distribution for a set
of binary random variables factors according to a tree, then vertex separation
represents all and only the independence relations encoded in the
distribution. The same result is shown to hold also for multivariate strictly
positive normal distributions. Our proof uses a new property of conditional
independence that holds for these two classes of probability distributions.
|
1301.3835 | A Principled Analysis of Merging Operations in Possibilistic Logic | cs.AI | Possibilistic logic offers a qualitative framework for representing pieces of
information associated with levels of uncertainty or priority. The fusion of
information from multiple sources is discussed in this setting. Different classes of
merging operators are considered including conjunctive, disjunctive,
reinforcement, adaptive and averaging operators. Then we propose to analyse
these classes in terms of postulates. This is done by first extending the
postulates for merging classical bases to the case where priorities are available.
|
1301.3836 | The Complexity of Decentralized Control of Markov Decision Processes | cs.AI | Planning for distributed agents with partial state information is considered
from a decision-theoretic perspective. We describe generalizations of both the
MDP and POMDP models that allow for decentralized control. For even a small
number of agents, the finite-horizon problems corresponding to both of our
models are complete for nondeterministic exponential time. These complexity
results illustrate a fundamental difference between centralized and
decentralized control of Markov processes. In contrast to the MDP and POMDP
problems, the problems we consider provably do not admit polynomial-time
algorithms and most likely require doubly exponential time to solve in the
worst case. We have thus provided mathematical evidence corresponding to the
intuition that decentralized planning problems cannot easily be reduced to
centralized problems and solved exactly using established techniques.
|
1301.3837 | Dynamic Bayesian Multinets | cs.LG cs.AI stat.ML | In this work, dynamic Bayesian multinets are introduced where a Markov chain
state at time t determines conditional independence patterns between random
variables lying within a local time window surrounding t. It is shown how
information-theoretic criterion functions can be used to induce sparse,
discriminative, and class-conditional network structures that yield an optimal
approximation to the class posterior probability, and therefore are useful for
the classification task. Using a new structure learning heuristic, the
resulting models are tested on a medium-vocabulary isolated-word speech
recognition task. It is demonstrated that these discriminatively structured
dynamic Bayesian multinets, when trained in a maximum likelihood setting using
EM, can outperform both HMMs and other dynamic Bayesian networks with a similar
number of parameters.
|
1301.3838 | Variational Relevance Vector Machines | cs.LG stat.ML | The Support Vector Machine (SVM) of Vapnik (1998) has become widely
established as one of the leading approaches to pattern recognition and machine
learning. It expresses predictions in terms of a linear combination of kernel
functions centred on a subset of the training data, known as support vectors.
Despite its widespread success, the SVM suffers from some important
limitations, one of the most significant being that it makes point predictions
rather than generating predictive distributions. Recently Tipping (1999) has
formulated the Relevance Vector Machine (RVM), a probabilistic model whose
functional form is equivalent to the SVM. It achieves comparable recognition
accuracy to the SVM, yet provides a full predictive distribution, and also
requires substantially fewer kernel functions.
The original treatment of the RVM relied on the use of type II maximum
likelihood (the `evidence framework') to provide point estimates of the
hyperparameters which govern model sparsity. In this paper we show how the RVM
can be formulated and solved within a completely Bayesian paradigm through the
use of variational inference, thereby giving a posterior distribution over both
parameters and hyperparameters. We demonstrate the practicality and performance
of the variational RVM using both synthetic and real world examples.
|
1301.3839 | Approximately Optimal Monitoring of Plan Preconditions | cs.AI | Monitoring plan preconditions can allow for replanning when a precondition
fails, generally far in advance of the point in the plan where the precondition
is relevant. However, monitoring is generally costly, and some precondition
failures have a very small impact on plan quality. We formulate a model for
optimal precondition monitoring, using partially observable Markov decision
processes, and describe methods for solving this model effectively, though
approximately. Specifically, we show that the single-precondition monitoring
problem is generally tractable, and that multiple-precondition monitoring
policies can be effectively approximated using single-precondition solutions.
|
1301.3840 | Utilities as Random Variables: Density Estimation and Structure
Discovery | cs.AI cs.LG | Decision theory does not traditionally include uncertainty over utility
functions. We argue that a person's utility value for a given outcome can
be treated as we treat other domain attributes: as a random variable with a
density function over its possible values. We show that we can apply
statistical density estimation techniques to learn such a density function from
a database of partially elicited utility functions. In particular, we define a
Bayesian learning framework for this problem, assuming the distribution over
utilities is a mixture of Gaussians, where the mixture components represent
statistically coherent subpopulations. We can also extend our techniques to the
problem of discovering generalized additivity structure in the utility
functions in the population. We define a Bayesian model selection criterion for
utility function structure and a search procedure over structures. The
factorization of the utilities in the learned model, and the generalization
obtained from density estimation, allows us to provide robust estimates of
utilities using a significantly smaller number of utility elicitation
questions. We experiment with our technique on synthetic utility data and on a
real database of utility functions in the domain of prenatal diagnosis.
|
1301.3841 | Computational Investigation of Low-Discrepancy Sequences in Simulation
Algorithms for Bayesian Networks | cs.AI | Monte Carlo sampling has become a major vehicle for approximate inference in
Bayesian networks. In this paper, we investigate a family of related simulation
approaches, known collectively as quasi-Monte Carlo methods based on
deterministic low-discrepancy sequences. We first outline several theoretical
aspects of deterministic low-discrepancy sequences, show three examples of such
sequences, and then discuss practical issues related to applying them to belief
updating in Bayesian networks. We propose an algorithm for selecting direction
numbers for the Sobol sequence. Our experimental results show that low-discrepancy
sequences (especially the Sobol sequence) significantly improve the performance of
simulation algorithms in Bayesian networks compared to Monte Carlo sampling.
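To illustrate why low-discrepancy sequences help (using the simplest such sequence, Van der Corput, rather than the Sobol sequence studied in the paper), compare quasi-Monte Carlo and plain Monte Carlo estimates of a one-dimensional integral:

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-b Van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

N = 4096
f = lambda x: x * x        # integral of x^2 over [0, 1] is 1/3

qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N

random.seed(0)
mc = sum(f(random.random()) for _ in range(N)) / N

# The QMC error is typically much smaller than the MC error at the same N.
print(abs(qmc - 1/3), abs(mc - 1/3))
```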
|
1301.3842 | A Decision Theoretic Approach to Targeted Advertising | cs.AI | A simple advertising strategy that can be used to help increase sales of a
product is to mail out special offers to selected potential customers. Because
there is a cost associated with sending each offer, the optimal mailing
strategy depends on both the benefit obtained from a purchase and how the offer
affects the buying behavior of the customers. In this paper, we describe two
methods for partitioning the potential customers into groups, and show how to
perform a simple cost-benefit analysis to decide which, if any, of the groups
should be targeted. In particular, we consider two decision-tree learning
algorithms. The first is an "off the shelf" algorithm used to model the
probability that groups of customers will buy the product. The second is a new
algorithm that is similar to the first, except that for each group, it
explicitly models the probability of purchase under the two mailing scenarios:
(1) the mail is sent to members of that group and (2) the mail is not sent to
members of that group. Using data from a real-world advertising experiment, we
compare the algorithms to each other and to a naive mail-to-all strategy.
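The cost-benefit rule behind the second algorithm can be sketched as follows (group names, probabilities, and dollar amounts are hypothetical): mail a group only if the expected lift in purchase probability, times the profit per sale, exceeds the cost of an offer.

```python
def should_mail(p_buy_if_mailed, p_buy_if_not, profit_per_sale, cost_per_offer):
    """Mail a group iff the expected profit *lift* covers the offer cost."""
    lift = p_buy_if_mailed - p_buy_if_not
    return lift * profit_per_sale > cost_per_offer

# Hypothetical groups: (purchase prob. with mail, purchase prob. without).
groups = {"A": (0.20, 0.05), "B": (0.10, 0.09), "C": (0.02, 0.00)}
targeted = [g for g, (pm, pn) in sorted(groups.items())
            if should_mail(pm, pn, profit_per_sale=40.0, cost_per_offer=1.0)]
print(targeted)   # → ['A']: B buys almost as often unmailed, C's lift is too small
```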
|
1301.3843 | Bayesian Classification and Feature Selection from Finite Data Sets | cs.LG stat.ML | Feature selection aims to select the smallest subset of features for a
specified level of performance. The optimal achievable classification
performance on a feature subset is summarized by its Receiver Operating Curve
(ROC). When infinite data is available, the Neyman- Pearson (NP) design
procedure provides the most efficient way of obtaining this curve. In practice
the design procedure is applied to density estimates from finite data sets. We
perform a detailed statistical analysis of the resulting error propagation on
finite alphabets. We show that the estimated performance curve (EPC) produced
by the design procedure is arbitrarily accurate given sufficient data,
independent of the size of the feature set. However, the underlying likelihood
ranking procedure is highly sensitive to errors that reduces the probability
that the EPC is in fact the ROC. In the worst case, guaranteeing that the EPC
is equal to the ROC may require data sizes exponential in the size of the
feature set. These results imply that in theory the NP design approach may only
be valid for characterizing relatively small feature subsets, even when the
performance of any given classifier can be estimated very accurately. We
discuss the practical limitations of on-line methods that ensure that the NP
procedure operates in a statistically valid region.
|
1301.3844 | A Bayesian Method for Causal Modeling and Discovery Under Selection | cs.AI | This paper describes a Bayesian method for learning causal networks using
samples that were selected in a non-random manner from a population of
interest. Examples of data obtained by non-random sampling include convenience
samples and case-control data in which a fixed number of samples with and
without some condition is collected; such data are not uncommon. The paper
describes a method for combining data under selection with prior beliefs in
order to derive a posterior probability for a model of the causal processes
that are generating the data in the population of interest. The priors include
beliefs about the nature of the non-random sampling procedure. Although exact
application of the method would be computationally intractable for most
realistic datasets, efficient special-case and approximation methods are
discussed. Finally, the paper describes how to combine learning under selection
with previous methods for learning from observational and experimental data
that are obtained on random samples of the population of interest. The net
result is a Bayesian methodology that supports causal modeling and discovery
from a rich mixture of different types of data.
|
1301.3845 | Separation Properties of Sets of Probability Measures | cs.AI | This paper analyzes independence concepts for sets of probability measures
associated with directed acyclic graphs. The paper shows that epistemic
independence and the standard Markov condition violate desirable separation
properties. The adoption of a contraction condition leads to d-separation but
still fails to guarantee a belief separation property. To overcome this
unsatisfactory situation, a strong Markov condition is proposed, based on
epistemic independence. The main result is that the strong Markov condition
leads to strong independence and does enforce separation properties; this
result implies that (1) separation properties of Bayesian networks do extend to
epistemic independence and sets of probability measures, and (2) strong
independence has a clear justification based on epistemic independence and the
strong Markov condition.
|
1301.3846 | Stochastic Logic Programs: Sampling, Inference and Applications | cs.AI | Algorithms for exact and approximate inference in stochastic logic programs
(SLPs) are presented, based, respectively, on variable elimination and
importance sampling. We then show how SLPs can be used to represent prior
distributions for machine learning, using (i) logic programs and (ii) Bayes net
structures as examples. Drawing on existing work in statistics, we apply the
Metropolis-Hastings algorithm to construct a Markov chain which samples from the
posterior distribution. A Prolog implementation for this is described. We also
discuss the possibility of constructing explicit representations of the
posterior.
|
1301.3847 | A Differential Approach to Inference in Bayesian Networks | cs.AI | We present a new approach for inference in Bayesian networks, which is mainly
based on partial differentiation. According to this approach, one compiles a
Bayesian network into a multivariate polynomial and then computes the partial
derivatives of this polynomial with respect to each variable. We show that once
such derivatives are made available, one can compute in constant-time answers
to a large class of probabilistic queries, which are central to classical
inference, parameter estimation, model validation and sensitivity analysis. We
present a number of complexity results relating to the compilation of such
polynomials and to the computation of their partial derivatives. We argue that
the combined simplicity, comprehensiveness and computational complexity of the
presented framework is unique among existing frameworks for inference in
Bayesian networks.
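A minimal numeric sketch of the idea for a two-node network A → B with hypothetical parameters (the network polynomial is multilinear in the evidence indicators, so a partial derivative reduces to a difference of two evaluations):

```python
# Hypothetical CPTs for a two-node binary network A -> B.
ta = [0.6, 0.4]                   # P(A)
tb = [[0.7, 0.3], [0.2, 0.8]]     # P(B | A)

def f(la, lb):
    """Network polynomial: sum over worlds of indicator * parameter products."""
    return sum(la[a] * lb[b] * ta[a] * tb[a][b]
               for a in range(2) for b in range(2))

# With every evidence indicator set to 1, f sums to total probability 1.
assert abs(f([1, 1], [1, 1]) - 1.0) < 1e-9

# f is linear in each indicator, so df/d(la[0]) is just a coefficient:
# evaluated under evidence B = 1, it yields P(A = 0, B = 1).
d_la0 = f([1, 1], [0, 1]) - f([0, 1], [0, 1])
print(round(d_la0, 4))            # → 0.18, i.e. 0.6 * 0.3
```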
|
1301.3848 | Any-Space Probabilistic Inference | cs.AI | We have recently introduced an any-space algorithm for exact inference in
Bayesian networks, called Recursive Conditioning, RC, which allows one to trade
space with time at increments of X-bytes, where X is the number of bytes needed
to cache a floating point number. In this paper, we present three key
extensions of RC. First, we modify the algorithm so it applies to more general
factorizations of probability distributions, including (but not limited to)
Bayesian network factorizations. Second, we present a forgetting mechanism
which reduces the space requirements of RC considerably, and then compare such
requirements with those of variable elimination on a number of realistic
networks, showing orders of magnitude improvements in certain cases. Third, we
present a version of RC for computing maximum a posteriori hypotheses (MAP),
which turns out to be the first MAP algorithm allowing a smooth time-space
tradeoff. A key advantage of the presented MAP algorithm is that it does not have
to start from scratch each time a new query is presented, but can reuse some of
its computations across multiple queries, leading to significant savings in
certain cases.
|
1301.3849 | Experiments with Random Projection | cs.LG stat.ML | Recent theoretical work has identified random projection as a promising
dimensionality reduction technique for learning mixtures of Gaussians. Here we
summarize these results and illustrate them by a wide variety of experiments on
synthetic and real data.
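A quick numerical illustration of the underlying Johnson-Lindenstrauss effect (the dimensions and seed below are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 1000, 200                    # 50 points in 1000-D, project to 200-D

X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)   # Gaussian random projection
Y = X @ R

# Pairwise distances are approximately preserved after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(round(proj / orig, 2))               # close to 1.0
```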
|
1301.3850 | A Two-round Variant of EM for Gaussian Mixtures | cs.LG stat.ML | Given a set of possible models (e.g., Bayesian network structures) and a data
sample, in the unsupervised model selection problem the task is to choose the
most accurate model with respect to the domain joint probability distribution.
In contrast to this, in supervised model selection it is a priori known that
the chosen model will be used in the future for prediction tasks involving more
``focused'' predictive distributions. Although focused predictive distributions
can be produced from the joint probability distribution by marginalization, in
practice the best model in the unsupervised sense does not necessarily perform
well in supervised domains. In particular, the standard marginal likelihood
score is a criterion for the unsupervised task, and, although frequently used
for supervised model selection also, does not perform well in such tasks. In
this paper we study the performance of the marginal likelihood score
empirically in supervised Bayesian network selection tasks by using a large
number of publicly available classification data sets, and compare the results
to those obtained by alternative model selection criteria, including empirical
cross-validation methods, an approximation of a supervised marginal likelihood
measure, and a supervised version of Dawid's prequential (predictive sequential)
principle. The results demonstrate that the marginal likelihood score does not
perform well for supervised model selection, while the best results are
obtained by using Dawid's prequential approach.
|
1301.3851 | Minimum Message Length Clustering Using Gibbs Sampling | cs.LG stat.ML | The K-Mean and EM algorithms are popular in clustering and mixture modeling,
due to their simplicity and ease of implementation. However, they have several
significant limitations. Both converge to a local optimum of their respective
objective functions (ignoring the uncertainty in the model space), require the
a priori specification of the number of classes/clusters, and are inconsistent.
In this work we overcome these limitations by using the Minimum Message Length
(MML) principle and a variation to the K-Means/EM observation assignment and
parameter calculation scheme. We maintain the simplicity of these approaches
while constructing a Bayesian mixture modeling tool that samples/searches the
model space using a Markov Chain Monte Carlo (MCMC) sampler known as a Gibbs
sampler. Gibbs sampling allows us to visit each model according to its
posterior probability. Therefore, if the model space is multi-modal we will
visit all models and not get stuck in local optima. We call our approach
multiple chains at equilibrium (MCE) MML sampling.
|
1301.3852 | Mix-nets: Factored Mixtures of Gaussians in Bayesian Networks With Mixed
Continuous And Discrete Variables | cs.LG cs.AI stat.ML | Recently developed techniques have made it possible to quickly learn accurate
probability density functions from data in low-dimensional continuous space. In
particular, mixtures of Gaussians can be fitted to data very quickly using an
accelerated EM algorithm that employs multiresolution kd-trees (Moore, 1999).
In this paper, we propose a kind of Bayesian networks in which low-dimensional
mixtures of Gaussians over different subsets of the domain's variables are
combined into a coherent joint probability model over the entire domain. The
network is also capable of modeling complex dependencies between discrete
variables and continuous variables without requiring discretization of the
continuous variables. We present efficient heuristic algorithms for
automatically learning these networks from data, and perform comparative
experiments illustrating how well these networks model real scientific data and
synthetic data. We also briefly discuss some possible improvements to the
networks, as well as possible applications.
|
1301.3853 | Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks | cs.LG cs.AI stat.CO | Particle filters (PFs) are powerful sampling-based inference/learning
algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a
principled way, any type of probability distribution, nonlinearity and
non-stationarity. They have appeared in several fields under such names as
"condensation", "sequential Monte Carlo" and "survival of the fittest". In this
paper, we show how we can exploit the structure of the DBN to increase the
efficiency of particle filtering, using a technique known as
Rao-Blackwellisation. Essentially, this samples some of the variables, and
marginalizes out the rest exactly, using the Kalman filter, HMM filter,
junction tree algorithm, or any other finite dimensional optimal filter. We
show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate
estimates than standard PFs. We demonstrate RBPFs on two problems, namely
non-stationary online regression with radial basis function networks and robot
localization and map building. We also discuss other potential application
areas and provide references to some finite dimensional optimal filters.
|
1301.3854 | Learning Graphical Models of Images, Videos and Their Spatial
Transformations | cs.CV cs.LG stat.ML | Mixtures of Gaussians, factor analyzers (probabilistic PCA) and hidden Markov
models are staples of static and dynamic data modeling and image and video
modeling in particular. We show how topographic transformations in the input,
such as translation and shearing in images, can be accounted for in these
models by including a discrete transformation variable. The resulting models
perform clustering, dimensionality reduction and time-series analysis in a way
that is invariant to transformations in the input. Using the EM algorithm,
these transformation-invariant models can be fit to static data and time
series. We give results on filtering microscopy images, face and facial pose
clustering, handwritten digit modeling and recognition, video clustering,
object tracking, and removal of distractions from video sequences.
|
1301.3855 | Likelihood Computations Using Value Abstractions | cs.AI | In this paper, we use evidence-specific value abstraction for speeding up
Bayesian network inference. This is done by grouping variable values and
treating the combined values as a single entity. As we show, such abstractions
can exploit regularities in conditional probability distributions and also the
specific values of observed variables. To formally justify value abstraction,
we define the notion of safe value abstraction and devise inference algorithms
that use it to reduce the cost of inference. Our procedure is particularly
useful for learning complex networks with many hidden variables. In such cases,
repeated likelihood computations are required for EM or other parameter
optimization techniques. Since these computations are repeated with respect to
the same evidence set, our methods can provide significant speedup to the
learning procedure. We demonstrate the algorithm on genetic linkage problems
where the use of value abstraction sometimes differentiates between a feasible
and non-feasible solution.
|
1301.3856 | Being Bayesian about Network Structure | cs.LG cs.AI stat.ML | In many domains, we are interested in analyzing the structure of the
underlying distribution, e.g., whether one variable is a direct parent of the
other. Bayesian model-selection attempts to find the MAP model and use its
structure to answer these questions. However, when the amount of available data
is modest, there might be many models that have non-negligible posterior. Thus,
we want to compute the Bayesian posterior of a feature, i.e., the total posterior
probability of all models that contain it. In this paper, we propose a new
approach for this task. We first show how to efficiently compute a sum over the
exponential number of networks that are consistent with a fixed ordering over
network variables. This allows us to compute, for a given ordering, both the
marginal probability of the data and the posterior of a feature. We then use
this result as the basis for an algorithm that approximates the Bayesian
posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC)
method, but over orderings rather than over network structures. The space of
orderings is much smaller and more regular than the space of structures, and
has a smoother posterior `landscape'. We present empirical results on synthetic
and real-life datasets that compare our approach to full model averaging (when
possible), to MCMC over network structures, and to a non-Bayesian bootstrap
approach.
|
1301.3857 | Gaussian Process Networks | cs.AI cs.LG stat.ML | In this paper we address the problem of learning the structure of a Bayesian
network in domains with continuous variables. This task requires a procedure
for comparing different candidate structures. In the Bayesian framework, this
is done by evaluating the marginal likelihood of the data given a
candidate structure. This term can be computed in closed-form for standard
parametric families (e.g., Gaussians), and can be approximated, at some
computational cost, for some semi-parametric families (e.g., mixtures of
Gaussians).
We present a new family of continuous variable probabilistic networks that
are based on Gaussian Process priors. These priors are semi-parametric in
nature and can learn almost arbitrary noisy functional relations. Using these
priors, we can directly compute marginal likelihoods for structure learning.
The resulting method can discover a wide range of functional dependencies in
multivariate data. We develop the Bayesian score of Gaussian Process Networks
and describe how to learn them from data. We present empirical results on
artificial data as well as on real-life domains with non-linear dependencies.
|
1301.3858 | A Qualitative Linear Utility Theory for Spohn's Theory of Epistemic
Beliefs | cs.AI | In this paper, we formulate a qualitative "linear" utility theory for
lotteries in which uncertainty is expressed qualitatively using a Spohnian
disbelief function. We argue that a rational decision maker facing an uncertain
decision problem in which the uncertainty is expressed qualitatively should
behave so as to maximize "qualitative expected utility." Our axiomatization of
the qualitative utility is similar to the axiomatization developed by von
Neumann and Morgenstern for probabilistic lotteries. We compare our results
with other recent results in qualitative decision making.
|
1301.3859 | Building a Stochastic Dynamic Model of Application Use | cs.AI | Many intelligent user interfaces employ application and user models to
determine the user's preferences, goals and likely future actions. Such models
require application analysis, adaptation and expansion. Building and
maintaining such models adds a substantial amount of time and labour to the
application development cycle. We present a system that observes the interface
of an unmodified application and records users' interactions with the
application. From a history of such observations we build a coarse state space
of observed interface states and actions between them. To refine the space, we
hypothesize sub-states based upon the histories that led users to a given
state. We evaluate the information gain of possible state splits, varying the
length of the histories considered in such splits. In this way, we
automatically produce a stochastic dynamic model of the application and of how
it is used. To evaluate our approach, we present models derived from real-world
application usage data.
|
1301.3860 | Maximum Entropy and the Glasses You Are Looking Through | cs.AI | We give an interpretation of the Maximum Entropy (MaxEnt) Principle in
game-theoretic terms. Based on this interpretation, we make a formal
distinction between different ways of applying Maximum Entropy
distributions. MaxEnt has frequently been criticized on the grounds that it
leads to highly representation dependent results. Our distinction allows us to
avoid this problem in many cases.
|
1301.3861 | Inference for Belief Networks Using Coupling From the Past | cs.AI cs.LG | Inference for belief networks using Gibbs sampling produces a distribution
for unobserved variables that differs from the correct distribution by a
(usually) unknown error, since convergence to the right distribution occurs
only asymptotically. The method of "coupling from the past" samples from
exactly the correct distribution by (conceptually) running dependent Gibbs
sampling simulations from every possible starting state from a time far enough
in the past that all runs reach the same state at time t=0. Explicitly
considering every possible state is intractable for large networks, however. We
propose a method for layered noisy-or networks that uses a compact, but often
imprecise, summary of a set of states. This method samples from exactly the
correct distribution, and requires only about twice the time per step as
ordinary Gibbs sampling, but it may require more simulation steps than would be
needed if chains were tracked exactly.
|
1301.3862 | Dependency Networks for Collaborative Filtering and Data Visualization | cs.AI cs.IR cs.LG | We describe a graphical model for probabilistic relationships---an
alternative to the Bayesian network---called a dependency network. The graph of
a dependency network, unlike a Bayesian network, is potentially cyclic. The
probability component of a dependency network, like a Bayesian network, is a
set of conditional distributions, one for each node given its parents. We
identify several basic properties of this representation and describe a
computationally efficient procedure for learning the graph and probability
components from data. We describe the application of this representation to
probabilistic inference, collaborative filtering (the task of predicting
preferences), and the visualization of acausal predictive relationships.
|
1301.3863 | YGGDRASIL - A Statistical Package for Learning Split Models | cs.AI cs.MS stat.ME | There are two main objectives of this paper. The first is to present a
statistical framework for models with context specific independence structures,
i.e., conditional independences holding only for specific values of the
conditioning variables. This framework is constituted by the class of split
models. Split models are an extension of graphical models for contingency
tables and allow for more sophisticated modelling than graphical models. The
treatment of split models includes estimation, representation and a Markov
property for reading off those independencies holding in a specific context.
The second objective is to present a software package named YGGDRASIL which is
designed for statistical inference in split models, i.e., for learning such
models on the basis of data.
|
1301.3864 | Probabilistic Arc Consistency: A Connection between Constraint Reasoning
and Probabilistic Reasoning | cs.AI | We document a connection between constraint reasoning and probabilistic
reasoning. We present an algorithm, called {em probabilistic arc consistency},
which is both a generalization of a well known algorithm for arc consistency
used in constraint reasoning, and a specialization of the belief updating
algorithm for singly-connected networks. Our algorithm is exact for
singly-connected constraint problems, but can work well as an approximation for
arbitrary problems. We briefly discuss some empirical results, and related
methods.
|
1301.3865 | Feature Selection and Dualities in Maximum Entropy Discrimination | cs.LG stat.ML | Incorporating feature selection into a classification or regression method
often carries a number of advantages. In this paper we formalize feature
selection specifically from a discriminative perspective of improving
classification/regression accuracy. The feature selection method is developed
as an extension to the recently proposed maximum entropy discrimination (MED)
framework. We describe MED as a flexible (Bayesian) regularization approach
that subsumes, e.g., support vector classification, regression and exponential
family models. For brevity, we restrict ourselves primarily to feature
selection in the context of linear classification/regression methods and
demonstrate that the proposed approach indeed carries substantial improvements
in practice. Moreover, we discuss and develop various extensions of feature
selection, including the problem of dealing with example specific but
unobserved degrees of freedom -- alignments or invariants.
|
1301.3866 | Marginalization in Composed Probabilistic Models | cs.AI | Composition of low-dimensional distributions, whose foundations were laid in
the paper published in the Proceedings of UAI'97 (Jirousek 1997), appeared to
be an alternative apparatus to describe multidimensional probabilistic models.
In contrast to Graphical Markov Models, which define multidimensional
distributions in a declarative way, this approach is rather procedural.
Ordering of low-dimensional distributions into a proper sequence fully defines
the respective computational procedure; therefore, a study of different types
of generating sequences is one of the central problems in this field. Thus, it
appears that an important role is played by special sequences that are called
perfect. Their main characterization theorems are presented in this paper.
However, the main result of this paper is a solution to the problem of
marginalization for general sequences. The main theorem describes a way to
obtain a generating sequence that defines the model corresponding to the
marginal of the distribution defined by an arbitrary generating sequence. From
this theorem the reader can see to what extent these computations are local;
i.e., the sequence consists of marginal distributions whose computation must be
made by summing up over the values of the variable eliminated (the paper deals
with finite models).
|
1301.3867 | Fast Planning in Stochastic Games | cs.GT cs.AI | Stochastic games generalize Markov decision processes (MDPs) to a multiagent
setting by allowing the state transitions to depend jointly on all player
actions, and having rewards determined by multiplayer matrix games at each
state. We consider the problem of computing Nash equilibria in stochastic
games, the analogue of planning in MDPs. We begin by providing a generalization
of finite-horizon value iteration that computes a Nash strategy for each player
in general-sum stochastic games. The algorithm takes an arbitrary Nash selection
function as input, which allows the translation of local choices between
multiple Nash equilibria into the selection of a single global Nash
equilibrium.
Our main technical result is an algorithm for computing near-Nash equilibria
in large or infinite state spaces. This algorithm builds on our finite-horizon
value iteration algorithm, and adapts the sparse sampling methods of Kearns,
Mansour and Ng (1999) to stochastic games. We conclude by describing a
counterexample showing that infinite-horizon discounted value iteration, which
was shown by Shapley to converge in the zero-sum case (a result we extend
slightly here), does not converge in the general-sum case.
|
1301.3868 | Making Sensitivity Analysis Computationally Efficient | cs.AI | To investigate the robustness of the output probabilities of a Bayesian
network, a sensitivity analysis can be performed. A one-way sensitivity
analysis establishes, for each of the probability parameters of a network, a
function expressing a posterior marginal probability of interest in terms of
the parameter. Current methods for computing the coefficients in such a
function rely on a large number of network evaluations. In this paper, we
present a method that requires just a single outward propagation in a junction
tree for establishing the coefficients in the functions for all possible
parameters; in addition, an inward propagation is required for processing
evidence. Conversely, the method requires a single outward propagation for
computing the coefficients in the functions expressing all possible posterior
marginals in terms of a single parameter. We extend these results to an n-way
sensitivity analysis in which sets of parameters are studied.
|
1301.3869 | Policy Iteration for Factored MDPs | cs.AI | Many large MDPs can be represented compactly using a dynamic Bayesian
network. Although the structure of the value function does not retain the
structure of the process, recent work has shown that value functions in
factored MDPs can often be approximated well using a decomposed value function:
a linear combination of restricted basis functions, each of which refers
only to a small subset of variables. An approximate value function for a
particular policy can be computed using approximate dynamic programming, but
this approach (and others) can only produce an approximation relative to a
distance metric which is weighted by the stationary distribution of the current
policy. This type of weighted projection is ill-suited to policy improvement.
We present a new approach to value determination that uses a simple
closed-form computation to directly compute a least-squares decomposed
approximation to the value function for any weights. We then use this
value determination algorithm as a subroutine in a policy iteration process. We
show that, under reasonable restrictions, the policies induced by a factored
value function are compactly represented, and can be manipulated efficiently in
a policy iteration process. We also present a method for computing error bounds
for decomposed value functions using a variable-elimination algorithm for
function optimization. The complexity of all of our algorithms depends on the
factorization of system dynamics and of the approximate value function.
|
1301.3870 | Game Networks | cs.GT cs.AI | We introduce Game networks (G nets), a novel representation for multi-agent
decision problems. Compared to other game-theoretic representations, such as
strategic or extensive forms, G nets are more structured and more compact; more
fundamentally, G nets constitute a computationally advantageous framework for
strategic inference, as both probability and utility independencies are
captured in the structure of the network and can be exploited in order to
simplify the inference process. An important aspect of multi-agent reasoning is
the identification of some or all of the strategic equilibria in a game; we
present original convergence methods for strategic equilibrium which can take
advantage of strategic separabilities in the G net structure in order to
simplify the computations. Specifically, we describe a method which identifies
a unique equilibrium as a function of the game payoffs, and one which
identifies all equilibria.
|
1301.3871 | Combinatorial Optimization by Learning and Simulation of Bayesian
Networks | cs.AI cs.DS | This paper shows how the Bayesian network paradigm can be used in order to
solve combinatorial optimization problems. To do it some methods of structure
learning from data and simulation of Bayesian networks are inserted inside
Estimation of Distribution Algorithms (EDA). EDA are a new tool for
evolutionary computation in which populations of individuals are created by
estimation and simulation of the joint probability distribution of the selected
individuals. We propose new approaches to EDA for combinatorial optimization
based on the theory of probabilistic graphical models. Experimental results are
also presented.
|
1301.3872 | Causal Mechanism-based Model Construction | cs.AI | We propose a framework for building graphical causal models that is based on
the concept of causal mechanisms. Causal models are intuitive for human users
and, more importantly, support the prediction of the effect of manipulation. We
describe an implementation of the proposed framework as an interactive model
construction module, ImaGeNIe, in SMILE (Structural Modeling, Inference, and
Learning Engine) and in GeNIe (SMILE's Windows user interface).
|
1301.3873 | Credal Networks under Maximum Entropy | cs.AI | We apply the principle of maximum entropy to select a unique joint
probability distribution from the set of all joint probability distributions
specified by a credal network. In detail, we start by showing that the unique
joint distribution of a Bayesian tree coincides with the maximum entropy model
of its conditional distributions. This result, however, does not hold anymore
for general Bayesian networks. We thus present a new kind of maximum entropy
models, which are computed sequentially. We then show that for all general
Bayesian networks, the sequential maximum entropy model coincides with the
unique joint distribution. Moreover, we apply the new principle of sequential
maximum entropy to interval Bayesian networks and more generally to credal
networks. We especially show that this application is equivalent to a number of
small local entropy maximizations.
|
1301.3874 | Risk Agoras: Dialectical Argumentation for Scientific Reasoning | cs.AI | We propose a formal framework for intelligent systems which can reason about
scientific domains, in particular about the carcinogenicity of chemicals, and
we study its properties. Our framework is grounded in a philosophy of
scientific enquiry and discourse, and uses a model of dialectical
argumentation. The formalism enables representation of scientific uncertainty
and conflict in a manner suitable for qualitative reasoning about the domain.
|
1301.3875 | Tractable Bayesian Learning of Tree Belief Networks | cs.LG cs.AI stat.ML | In this paper we present decomposable priors, a family of priors over
structure and parameters of tree belief nets for which Bayesian learning with
complete observations is tractable, in the sense that the posterior is also
decomposable and can be completely determined analytically in polynomial time.
This follows from two main results: First, we show that factored distributions
over spanning trees in a graph can be integrated in closed form. Second, we
examine priors over tree parameters and show that a set of assumptions similar
to (Heckerman et al. 1995) constrain the tree parameter priors to be a
compactly parameterized product of Dirichlet distributions. Beside allowing for
exact Bayesian learning, these results permit us to formulate a new class of
tractable latent variable models in which the likelihood of a data point is
computed through an ensemble average over tree structures.
|
1301.3876 | Probabilistic Models for Agents' Beliefs and Decisions | cs.AI | Many applications of intelligent systems require reasoning about the mental
states of agents in the domain. We may want to reason about an agent's beliefs,
including beliefs about other agents; we may also want to reason about an
agent's preferences, and how his beliefs and preferences relate to his
behavior. We define a probabilistic epistemic logic (PEL) in which belief
statements are given a formal semantics, and provide an algorithm for asserting
and querying PEL formulas in Bayesian networks. We then show how to reason
about an agent's behavior by modeling his decision process as an influence
diagram and assuming that he behaves rationally. PEL can then be used for
reasoning from an agent's observed actions to conclusions about other aspects
of the domain, including unobserved domain variables and the agent's mental
states.
|
1301.3877 | The Anchors Hierarchy: Using the triangle inequality to survive high
dimensional data | cs.LG cs.DS stat.ML | This paper is about metric data structures in high-dimensional or
non-Euclidean space that permit cached sufficient statistics accelerations of
learning algorithms.
It has recently been shown that for less than about 10 dimensions, decorating
kd-trees with additional "cached sufficient statistics" such as first and
second moments and contingency tables can provide satisfying acceleration for a
very wide range of statistical learning tasks such as kernel regression,
locally weighted regression, k-means clustering, mixture modeling and Bayes Net
learning.
In this paper, we begin by defining the anchors hierarchy - a fast data
structure and algorithm for localizing data based only on a
triangle-inequality-obeying distance metric. We show how this, in its own
right, gives a fast and effective clustering of data. But more importantly we
show how it can produce a well-balanced structure similar to a Ball-Tree
(Omohundro, 1991) or a kind of metric tree (Uhlmann, 1991; Ciaccia, Patella, &
Zezula, 1997) in a way that is neither "top-down" nor "bottom-up" but instead
"middle-out". We then show how this structure, decorated with cached sufficient
statistics, allows a wide variety of statistical learning algorithms to be
accelerated even in thousands of dimensions.
|
1301.3878 | PEGASUS: A Policy Search Method for Large MDPs and POMDPs | cs.AI cs.LG | We propose a new approach to the problem of searching a space of policies for
a Markov decision process (MDP) or a partially observable Markov decision
process (POMDP), given a model. Our approach is based on the following
observation: Any (PO)MDP can be transformed into an "equivalent" POMDP in which
all state transitions (given the current state and action) are deterministic.
This reduces the general problem of policy search to one in which we need only
consider POMDPs with deterministic transitions. We give a natural way of
estimating the value of all policies in these transformed POMDPs. Policy search
is then simply performed by searching for a policy with high estimated value.
We also establish conditions under which our value estimates will be good,
recovering theoretical results similar to those of Kearns, Mansour and Ng
(1999), but with "sample complexity" bounds that have only a polynomial rather
than exponential dependence on the horizon time. Our method applies to
arbitrary POMDPs, including ones with infinite state and action spaces. We also
present empirical results for our approach on a small discrete problem, and on
a complex continuous state/continuous action problem involving learning to ride
a bicycle.
|
1301.3879 | Representing and Solving Asymmetric Bayesian Decision Problems | cs.AI | This paper deals with the representation and solution of asymmetric Bayesian
decision problems. We present a formal framework, termed asymmetric influence
diagrams, that is based on the influence diagram and allows an efficient
representation of asymmetric decision problems. As opposed to existing
frameworks, the asymmetric influence diagram primarily encodes asymmetry at the
qualitative level and it can therefore be read directly from the model. We give
an algorithm for solving asymmetric influence diagrams. The algorithm initially
decomposes the asymmetric decision problem into a structure of symmetric
subproblems organized as a tree. A solution to the decision problem can then be
found by propagating from the leaves toward the root using existing evaluation
methods to solve the sub-problems.
|
1301.3880 | Using ROBDDs for Inference in Bayesian Networks with Troubleshooting as
an Example | cs.AI | When using Bayesian networks for modelling the behavior of man-made
machinery, it usually happens that a large part of the model is deterministic.
For such Bayesian networks the deterministic part of the model can be represented
as a Boolean function, and a central part of belief updating reduces to the
task of calculating the number of satisfying configurations in a Boolean
function. In this paper we explore how advances in the calculation of Boolean
functions can be adopted for belief updating, in particular within the context
of troubleshooting. We present experimental results indicating a substantial
speed-up compared to traditional junction tree propagation.
|
1301.3881 | Evaluating Influence Diagrams using LIMIDs | cs.AI | We present a new approach to the solution of decision problems formulated as
influence diagrams. The approach converts the influence diagram into a simpler
structure, the LImited Memory Influence Diagram (LIMID), where only the
requisite information for the computation of optimal policies is depicted.
Because the requisite information is explicitly represented in the diagram, the
evaluation procedure can take advantage of it. In this paper we show how to
convert an influence diagram to a LIMID and describe the procedure for finding
an optimal strategy. Our approach can yield significant savings of memory and
computational time when compared to traditional methods.
|
1301.3882 | Adaptive Importance Sampling for Estimation in Structured Domains | cs.AI cs.LG stat.ML | Sampling is an important tool for estimating large, complex sums and
integrals over high dimensional spaces. For instance, importance sampling has
been used as an alternative to exact methods for inference in belief networks.
Ideally, we want to have a sampling distribution that provides optimal-variance
estimators. In this paper, we present methods that improve the sampling
distribution by systematically adapting it as we obtain information from the
samples. We present a stochastic-gradient-descent method for sequentially
updating the sampling distribution based on the direct minimization of the
variance. We also present other stochastic-gradient-descent methods based on
the minimization of typical notions of distance between the current sampling
distribution and approximations of the target, optimal distribution. We finally
validate and compare the different methods empirically by applying them to the
problem of action evaluation in influence diagrams.
|
1301.3883 | Conversation as Action Under Uncertainty | cs.AI | Conversations abound with uncertainties of various kinds. Treating
conversation as inference and decision making under uncertainty, we propose a
task independent, multimodal architecture for supporting robust continuous
spoken dialog called Quartet. We introduce four interdependent levels of
analysis, and describe representations, inference procedures, and decision
strategies for managing uncertainties within and between the levels. We
highlight the approach by reviewing interactions between a user and two spoken
dialog systems developed using the Quartet architecture: Presenter, a prototype
system for navigating Microsoft PowerPoint presentations, and the Bayesian
Receptionist, a prototype system for dealing with tasks typically handled by
front desk receptionists at the Microsoft corporate campus.
|
1301.3884 | Probabilistic Models for Query Approximation with Large Sparse Binary
Datasets | cs.AI cs.DB | Large sparse sets of binary transaction data with millions of records and
thousands of attributes occur in various domains: customers purchasing
products, users visiting web pages, and documents containing words are just
three typical examples. Real-time query selectivity estimation (the problem of
estimating the number of rows in the data satisfying a given predicate) is an
important practical problem for such databases.
We investigate the application of probabilistic models to this problem. In
particular, we study a Markov random field (MRF) approach based on frequent
sets and maximum entropy, and compare it to the independence model and the
Chow-Liu tree model. We find that the MRF model provides substantially more
accurate probability estimates than the other methods but is more expensive
from a computational and memory viewpoint. To alleviate the computational
requirements we show how one can apply bucket elimination and clique tree
approaches to take advantage of structure in the models and in the queries. We
provide experimental results on two large real-world transaction datasets.
|
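The independence model used as a baseline in the abstract above is simple enough to sketch directly: estimate the selectivity of a conjunctive query as a product of per-attribute marginals and scale by the row count. The toy transaction table and query below are made up for illustration.

```python
# Toy binary transaction data: rows = transactions, columns = items.
data = [
    (1, 1, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0),
    (1, 1, 0), (0, 0, 1), (1, 0, 0), (1, 1, 1),
]
n, d = len(data), len(data[0])

# Independence model: per-attribute marginal frequencies.
marg = [sum(row[j] for row in data) / n for j in range(d)]

def estimate_count(query):
    """Estimated number of rows matching {column: value} under independence."""
    p = 1.0
    for j, val in query.items():
        p *= marg[j] if val == 1 else 1.0 - marg[j]
    return n * p

def exact_count(query):
    return sum(all(row[j] == v for j, v in query.items()) for row in data)

q = {0: 1, 1: 1}   # rows containing item 0 AND item 1
print(exact_count(q), round(estimate_count(q), 2))
```

The estimate is cheap (one multiply per query predicate) but ignores correlations between items, which is exactly the gap the MRF model in the abstract is meant to close.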
1301.3885 | Collaborative Filtering by Personality Diagnosis: A Hybrid Memory- and
Model-Based Approach | cs.IR | The growth of Internet commerce has stimulated the use of collaborative
filtering (CF) algorithms as recommender systems. Such systems leverage
knowledge about the known preferences of multiple users to recommend items of
interest to other users. CF methods have been harnessed to make recommendations
about such items as web pages, movies, books, and toys. Researchers have
proposed and evaluated many approaches for generating recommendations. We
describe and evaluate a new method called \emph{personality diagnosis (PD)}.
Given a user's preferences for some items, we compute the probability that he
or she is of the same "personality type" as other users, and, in turn, the
probability that he or she will like new items. PD retains some of the
advantages of traditional similarity-weighting techniques in that all data is
brought to bear on each prediction and new data can be added easily and
incrementally. Additionally, PD has a meaningful probabilistic interpretation,
which may be leveraged to justify, explain, and augment results. We report
empirical results on the EachMovie database of movie ratings, and on user
profile data collected from the CiteSeer digital library of Computer Science
research papers. The probabilistic framework naturally supports a variety of
descriptive measurements - in particular, we consider the applicability of a
value of information (VOI) computation.
|
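The personality diagnosis scheme in the abstract above can be sketched compactly. The rating matrix, the Gaussian rating-noise kernel, and the SIGMA parameter are illustrative assumptions: the likelihood that the active user shares another user's "personality type" is taken as a product of Gaussian kernels over co-rated items, and the predicted rating is the resulting mixture.

```python
import math

# Toy ratings (users x items); SIGMA is a made-up noise parameter.
ratings = {
    "ann": {"m1": 5, "m2": 1, "m3": 4},
    "bob": {"m1": 4, "m2": 2, "m3": 5, "m4": 5},
    "cat": {"m1": 1, "m2": 5, "m4": 1},
}
SIGMA = 1.0

def kernel(a, b):
    return math.exp(-((a - b) ** 2) / (2 * SIGMA ** 2))

def predict(active, item, scale=(1, 2, 3, 4, 5)):
    """Distribution over `active`'s rating of `item` under PD."""
    probs = {r: 0.0 for r in scale}
    for other, rs in ratings.items():
        if other == active or item not in rs:
            continue
        # Likelihood that `active` has `other`'s personality type:
        # product of Gaussian kernels over co-rated items.
        like = 1.0
        for it, r in ratings[active].items():
            if it in rs:
                like *= kernel(r, rs[it])
        # Mix in `other`'s rating of the target item, again with noise.
        for r in scale:
            probs[r] += like * kernel(r, rs[item])
    z = sum(probs.values())
    return {r: p / z for r, p in probs.items()}

dist = predict("ann", "m4")
print(max(dist, key=dist.get))   # most probable rating for ann on m4
```

Because ann's ratings closely track bob's (and disagree sharply with cat's), bob's high rating of m4 dominates the mixture, which is the "all data brought to bear on each prediction" behavior the abstract describes.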
1301.3887 | Value-Directed Belief State Approximation for POMDPs | cs.AI | We consider the problem of belief-state monitoring for the purposes of
implementing a policy for a partially-observable Markov decision process
(POMDP), specifically how one might approximate the belief state. Other schemes
for belief-state approximation (e.g., based on minimizing a measure such as
KL-divergence between the true and estimated state) are not necessarily
appropriate for POMDPs. Instead we propose a framework for analyzing
value-directed approximation schemes, where approximation quality is determined
by the expected error in utility rather than by the error in the belief state
itself. We propose heuristic methods for finding good projection schemes for
belief state estimation - exhibiting anytime characteristics - given a POMDP
value function. We also describe several algorithms for constructing bounds on
the error in decision quality (expected utility) associated with acting in
accordance with a given belief state approximation.
|
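The exact belief-state monitoring step that the abstract above proposes to approximate is the standard Bayesian update b'(s') ∝ O(s', a, o) Σ_s T(s, a, s') b(s). A minimal sketch on a made-up two-state POMDP (the transition and observation numbers are invented for illustration, and no value-directed projection is performed):

```python
# T[a][s][s']: transition probabilities; O[a][s'][o]: observation
# probabilities after taking action a and landing in s'.
T = {"go": [[0.7, 0.3], [0.2, 0.8]]}
O = {"go": [[0.9, 0.1], [0.3, 0.7]]}

def update(b, a, o):
    """Exact belief update: predict under T, weight by O, renormalize."""
    nb = [O[a][s2][o] * sum(T[a][s][s2] * b[s] for s in range(2))
          for s2 in range(2)]
    z = sum(nb)          # probability of observing o; normalizer
    return [x / z for x in nb]

b = update([0.5, 0.5], "go", 0)
print([round(x, 3) for x in b])
```

The paper's value-directed schemes would replace the exactly monitored b with a cheaper projection chosen so that the induced loss in expected utility, rather than the distance between beliefs, stays small.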
1301.3888 | Probabilistic State-Dependent Grammars for Plan Recognition | cs.AI | Techniques for plan recognition under uncertainty require a stochastic model
of the plan-generation process. We introduce Probabilistic State-Dependent
Grammars (PSDGs) to represent an agent's plan-generation process. The PSDG
language model extends probabilistic context-free grammars (PCFGs) by allowing
production probabilities to depend on an explicit model of the planning agent's
internal and external state. Given a PSDG description of the plan-generation
process, we can then use inference algorithms that exploit the particular
independence properties of the PSDG language to efficiently answer
plan-recognition queries. The combination of the PSDG language model and
inference algorithms extends the range of plan-recognition domains for which
practical probabilistic inference is possible, as illustrated by applications
in traffic monitoring and air combat.
|