| id | title | categories | abstract |
|---|---|---|---|
1203.3321
|
The Automorphism Group of an Extremal [72,36,16] Code does not contain
elements of order 6
|
cs.IT math.IT
|
The existence of an extremal code of length 72 is a long-standing open
problem. Let C be a putative extremal code of length 72 and suppose that C has
an automorphism g of order 6. We show that C, as an F_2<g>-module, is the
direct sum of two modules: one is easily determined, while the other has a
very restrictive structure. We use this fact to perform an exhaustive search,
which finds no such code. This proves that the automorphism group of an
extremal code of length 72 contains no element of order 6.
|
1203.3322
|
A note on Shannon entropy
|
cs.IT math.IT
|
We present a somewhat different way of looking at Shannon entropy. This leads
to an axiomatisation of Shannon entropy that is essentially equivalent to that
of Fadeev. In particular, we give a new proof of Fadeev's theorem.
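For reference, the Fadeev axiomatisation referred to above is commonly stated
along the following lines (a standard summary, not the paper's own formulation):

```latex
% Fadeev's axioms for a sequence of functions H_n(p_1,\dots,p_n):
% normalisation, continuity, symmetry, and grouping.
\begin{align*}
&H_2\big(\tfrac{1}{2},\tfrac{1}{2}\big) = 1, \qquad
 p \mapsto H_2(p,\,1-p) \ \text{continuous on } [0,1],\\
&H_n(p_{\sigma(1)},\dots,p_{\sigma(n)}) = H_n(p_1,\dots,p_n)
 \quad \text{for every permutation } \sigma,\\
&H_n(p_1,\dots,p_n) = H_{n-1}(p_1+p_2,\,p_3,\dots,p_n)
 + (p_1+p_2)\,H_2\Big(\tfrac{p_1}{p_1+p_2},\,\tfrac{p_2}{p_1+p_2}\Big).
\end{align*}
% The unique solution is the Shannon entropy
% H_n(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i.
```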
|
1203.3324
|
A Detailed Survey on Various Aspects of SQL Injection in Web
Applications: Vulnerabilities, Innovative Attacks, and Remedies
|
cs.CR cs.DB cs.NI
|
In today's world, Web applications play a very important role in individual
life as well as in any country's development. Web applications have gone
through very rapid growth in recent years, and their adoption is moving faster
than was expected a few years ago. Nowadays, billions of transactions are
performed online with the aid of different Web applications. Although these
applications are used by hundreds of people, in many cases their security
level is weak, which leaves them vulnerable to compromise. In most scenarios,
a user has to be identified before any communication with the backend database
is established. An arbitrary user should not be allowed access to the system
without proof of valid credentials. However, a crafted injection gives access
to unauthorized users. This is mostly accomplished via SQL Injection input. In
spite of the development of different approaches to prevent SQL injection, it
still remains an alarming threat to Web applications. In this paper, we
present a detailed survey on various types of SQL Injection vulnerabilities,
attacks, and their prevention techniques. Alongside presenting our findings
from the study, we also note future expectations and possible developments of
countermeasures against SQL Injection attacks.
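As a minimal illustration of the attack class and its standard remedy (our
sketch, not taken from the survey), compare a string-concatenated SQL query
with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic tautology-based injection input

# Vulnerable: the input is spliced into the SQL text, making the WHERE
# clause a tautology, so the query returns every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()
print(rows)  # [('alice', 's3cret')] -- data leaked

# Remedy: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(rows)  # [] -- no user with that literal name
```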
|
1203.3329
|
On quantum information
|
cs.IT math-ph math.IT math.MP
|
We investigate the following generalisation of the entropy of quantum
measurement. Let H be an infinite-dimensional separable Hilbert space with a
'density' operator \rho, tr \rho = 1. Let I(P) \in R be defined for any
partition P = (P_1,...,P_m), P_1 + ... + P_m = 1_H, P_i \in proj H, and let
I(P_i Q_j : i \leq m, j \leq n) = I(P) + I(Q) for Q = (Q_1,...,Q_n),
\sum Q_j = 1_H, with P_i Q_j = Q_j P_i and tr(\rho P_i Q_j) =
tr(\rho P_i) tr(\rho Q_j) (i.e. P and Q are physically independent). Assuming
some continuity properties, we give a general form of the generalised
information I.
|
1203.3341
|
A Comparison of the Embedding Method to Multi-Parametric Programming,
Mixed-Integer Programming, Gradient-Descent, and Hybrid Minimum Principle
Based Methods
|
math.OC cs.SY
|
In recent years, the embedding approach for solving switched optimal control
problems has been developed in a series of papers. However, the embedding
approach, which advantageously converts the hybrid optimal control problem to a
classical nonlinear optimization, has not been extensively compared to
alternative solution approaches. The goal of this paper is thus to compare the
embedding approach to multi-parametric programming, mixed-integer programming
(e.g., CPLEX), and gradient-descent based methods in the context of five
recently published examples: a spring-mass system, moving-target tracking for a
mobile robot, two-tank filling, DC-DC boost converter, and skid-steered
vehicle. A sixth example, an autonomous switched 11-region linear system, is
used to compare a hybrid minimum principle method and traditional numerical
programming. For a given performance index for each case, cost and solution
times are presented. It is shown that there are numerical advantages of the
embedding approach: lower performance index cost (except in some instances when
autonomous switches are present), generally faster solution time, and
convergence to a solution when other methods may fail. In addition, the
embedding method requires no ad hoc assumptions (e.g., predetermined mode
sequences) or specialized control models. Theoretical advantages of the
embedding approach over the other methods are also described: guaranteed
existence of a solution under mild conditions, convexity of the embedded hybrid
optimization problem (under the customary conditions on the performance index),
solvability with traditional techniques (e.g., sequential quadratic
programming) avoiding the combinatorial complexity in the number of
modes/discrete variables of mixed-integer programming, applicability to affine
nonlinear systems, and no need to explicitly assign discrete/mode variables to
autonomous switches.
|
1203.3376
|
Learning, Social Intelligence and the Turing Test - why an
"out-of-the-box" Turing Machine will not pass the Turing Test
|
cs.AI cs.LG nlin.AO
|
The Turing Test (TT) checks for human intelligence, rather than any putative
general intelligence. It involves repeated interaction requiring learning in
the form of adaption to the human conversation partner. It is a macro-level
post-hoc test in contrast to the definition of a Turing Machine (TM), which is
a prior micro-level definition. This raises the question of whether learning is
just another computational process, i.e. can be implemented as a TM. Here we
argue that learning or adaption is fundamentally different from computation,
though it does involve processes that can be seen as computations. To
illustrate this difference we compare (a) designing a TM and (b) learning a TM,
defining them for the purpose of the argument. We show that there is a
well-defined sequence of problems which are not effectively designable but are
learnable, in the form of the bounded halting problem. Some characteristics of
human intelligence are reviewed, including its interactive nature, learning
abilities, imitative tendencies, linguistic ability and context-dependency. A
story that explains some of these is the Social Intelligence Hypothesis. If
this is broadly correct, this points to the necessity of a considerable period
of acculturation (social learning in context) if an artificial intelligence is
to pass the TT. Whilst it is always possible to 'compile' the results of
learning into a TM, this would not be a designed TM and would not be able to
continually adapt (pass future TTs). We conclude three things, namely that: a
purely "designed" TM will never pass the TT; that there is no such thing as a
general intelligence since it necessarily involves learning; and that
learning/adaption and computation should be clearly distinguished.
|
1203.3442
|
A low multiplicative complexity fast recursive DCT-2 algorithm
|
cs.IT math.IT
|
A fast Discrete Cosine Transform (DCT) algorithm is introduced that can be of
particular interest in image processing. The main features of the algorithm are
regularity of the graph and very low arithmetic complexity. The 16-point
version of the algorithm requires only 32 multiplications and 81 additions. The
computational core of the algorithm consists of only 17 nontrivial
multiplications; the remaining 15 are scaling factors that can be compensated
for in post-processing. The derivation of the algorithm is based on algebraic
signal processing theory (ASP). A MATLAB implementation of the algorithm can be
found in the public repository https://github.com/Mak-Sim/Fast_recursive_DCT.
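As a point of reference (our sketch, not the authors' MATLAB code from the
repository above), a direct implementation of the DCT-2 uses N^2 = 256
multiplications for N = 16, against the 32 reported for the fast algorithm; a
common unnormalized convention is assumed:

```python
import numpy as np
from scipy.fft import dct

def dct2_naive(x):
    """Direct DCT-2: X[k] = sum_n x[n] * cos(pi * (2n+1) * k / (2N))."""
    N = len(x)
    n = np.arange(N)
    return np.array([(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N))).sum()
                     for k in range(N)])

x = np.random.randn(16)
# scipy's unnormalized DCT-2 equals twice this textbook convention.
assert np.allclose(dct(x, type=2), 2 * dct2_naive(x))
```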
|
1203.3445
|
Coded Cooperative Data Exchange in Multihop Networks
|
cs.IT math.IT
|
Consider a connected network of n nodes that all wish to recover k desired
packets. Each node begins with a subset of the desired packets and exchanges
coded packets with its neighbors. This paper provides necessary and sufficient
conditions which characterize the set of all transmission schemes that permit
every node to ultimately learn (recover) all k packets. When the network
satisfies certain regularity conditions and packets are randomly distributed,
this paper provides tight concentration results on the number of transmissions
required to achieve universal recovery. For the case of a fully connected
network, a polynomial-time algorithm for computing an optimal transmission
scheme is derived. An application to secrecy generation is discussed.
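A toy instance of the coded-exchange idea (our illustration, not taken from
the paper): with n = 3 fully connected nodes and k = 3 packets, each node
missing a single packet, two XOR broadcasts already achieve universal recovery
where uncoded exchange would need three transmissions:

```python
import os

p = [os.urandom(4) for _ in range(3)]          # the k = 3 desired packets
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

# Node i starts with every packet except p[i].
b0 = xor(p[1], p[2])   # coded broadcast by node 0
b1 = xor(p[0], p[2])   # coded broadcast by node 1

assert xor(b1, p[2]) == p[0]   # node 0 recovers p[0] using its copy of p[2]
assert xor(b0, p[2]) == p[1]   # node 1 recovers p[1] using its copy of p[2]
assert xor(b0, p[1]) == p[2]   # node 2 recovers p[2] using its copy of p[1]
print("universal recovery after 2 coded transmissions")
```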
|
1203.3453
|
Calibrating Data to Sensitivity in Private Data Analysis
|
cs.CR cs.SI
|
We present an approach to differentially private computation in which one
does not scale up the magnitude of noise for challenging queries, but rather
scales down the contributions of challenging records. While scaling down all
records uniformly is equivalent to scaling up the noise magnitude, we show that
scaling records non-uniformly can result in substantially higher accuracy by
bypassing the worst-case requirements of differential privacy for the noise
magnitudes. This paper details the data analysis platform wPINQ, which
generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a
few simple operators (including a non-uniformly scaling Join operator), wPINQ
can reproduce (and improve) several recent results on graph analysis and
introduce new generalizations (e.g., counting triangles with given degrees). We
also show how to integrate probabilistic inference techniques to synthesize
datasets respecting more complicated (and less easily interpreted)
measurements.
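The core contrast can be sketched on a simple counting query (a hedged toy
sketch of the idea, not wPINQ itself; truncation at a cap stands in here for
wPINQ's smoother non-uniform scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
contributions = rng.zipf(2.0, size=1000)  # heavy-tailed per-record counts
epsilon = 0.1

# Scale the noise up: sensitivity is the worst-case contribution.
noisy_a = contributions.sum() + rng.laplace(scale=contributions.max() / epsilon)

# Scale the records down: cap each contribution, so sensitivity (and
# therefore the noise) stays small regardless of outliers.
cap = 10
noisy_b = np.minimum(contributions, cap).sum() + rng.laplace(scale=cap / epsilon)
```

The capped query is biased on heavy records but adds far less noise, which is
the kind of accuracy trade-off discussed above.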
|
1203.3461
|
Robust Metric Learning by Smooth Optimization
|
cs.LG stat.ML
|
Most existing distance metric learning methods assume perfect side
information that is usually given in pairwise or triplet constraints. Instead,
in many real-world applications, the constraints are derived from side
information, such as users' implicit feedback and citations among articles. As
a result, these constraints are usually noisy and contain many mistakes. In
this work, we aim to learn a distance metric from noisy constraints by robust
optimization in a worst-case scenario, which we refer to as robust metric
learning. We formulate the learning task initially as a combinatorial
optimization problem, and show that it can be elegantly transformed to a convex
programming problem. We present an efficient learning algorithm based on smooth
optimization [7]. It has a worst-case convergence rate of
O(1/\sqrt{\varepsilon}) for smooth optimization problems, where \varepsilon
is the desired error of the approximate solution. Finally, our empirical study
with UCI data sets demonstrates the effectiveness of the proposed method in
comparison to state-of-the-art methods.
|
1203.3462
|
Gaussian Process Topic Models
|
cs.LG stat.ML
|
We introduce Gaussian Process Topic Models (GPTMs), a new family of topic
models which can leverage a kernel among documents while extracting correlated
topics. GPTMs can be considered a systematic generalization of the Correlated
Topic Models (CTMs) using ideas from Gaussian Process (GP) based embedding.
Since GPTMs work with both a topic covariance matrix and a document kernel
matrix, learning GPTMs involves a novel component: solving a suitable Sylvester
equation capturing both topic and document dependencies. The efficacy of GPTMs
is demonstrated with experiments evaluating the quality of both topic modeling
and embedding.
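Sylvester equations of the form AX + XB = Q have standard numerical support,
e.g. in SciPy (our illustration of the equation type only, not the GPTM
learning code):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # e.g. a term from the topic covariance
B = rng.standard_normal((6, 6))   # e.g. a term from the document kernel
Q = rng.standard_normal((4, 6))

X = solve_sylvester(A, B, Q)      # solves A X + X B = Q
assert np.allclose(A @ X + X @ B, Q)
```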
|
1203.3463
|
Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering
Birth/Death and Evolution of Topics in Text Stream
|
cs.IR cs.LG stat.ML
|
Topic models have proven to be a useful tool for discovering latent
structures in document collections. However, most document collections often
come as temporal streams and thus several aspects of the latent structure such
as the number of topics, the topics' distribution and popularity are
time-evolving. Several models exist that model the evolution of some but not
all of the above aspects. In this paper we introduce infinite dynamic topic
models, iDTM, that can accommodate the evolution of all the aforementioned
aspects. Our model assumes that documents are organized into epochs, where the
documents within each epoch are exchangeable but the order between the
documents is maintained across epochs. iDTM allows for an unbounded number of
topics: topics can die or be born at any epoch, and the representation of each
topic can evolve according to a Markovian dynamics. We use iDTM to analyze the
birth and evolution of topics in the NIPS community, and we evaluate the
efficacy of our model on both simulated and real datasets with favorable
outcomes.
|
1203.3464
|
Gibbs Sampling in Open-Universe Stochastic Languages
|
cs.AI
|
Languages for open-universe probabilistic models (OUPMs) can represent
situations with an unknown number of objects and identity uncertainty. While
such cases arise in a wide range of important real-world applications,
existing general-purpose inference methods for OUPMs are far less efficient
than those available for more restricted languages and model classes. This
paper goes some way to remedying this deficit by introducing, and proving
correct, a generalization of Gibbs sampling to partial worlds with possibly
varying model structure. Our approach draws on and extends previous generic
OUPM inference methods, as well as auxiliary variable samplers for
nonparametric mixture models. It has been implemented for BLOG, a well-known
OUPM language. Combined with compile-time optimizations, the resulting
algorithm yields very substantial speedups over existing methods on several
test cases, and substantially improves the practicality of OUPM languages
generally.
|
1203.3465
|
Compiling Possibilistic Networks: Alternative Approaches to
Possibilistic Inference
|
cs.AI
|
Qualitative possibilistic networks, also known as min-based possibilistic
networks, are important tools for handling uncertain information in the
possibility theory framework. Despite their importance, only the junction
tree adaptation has been proposed for exact reasoning with such networks. This
paper explores alternative algorithms using compilation techniques. We first
propose possibilistic adaptations of standard compilation-based probabilistic
methods. Then, we develop a new, purely possibilistic, method based on the
transformation of the initial network into a possibilistic base. A comparative
study shows that the latter performs better than the possibilistic
adaptations of probabilistic methods. This result is also confirmed by
experimental results.
|
1203.3466
|
Possibilistic Answer Set Programming Revisited
|
cs.AI
|
Possibilistic answer set programming (PASP) extends answer set programming
(ASP) by attaching to each rule a degree of certainty. While such an extension
is important from an application point of view, existing semantics are not
well-motivated, and do not always yield intuitive results. To develop a more
suitable semantics, we first introduce a characterization of answer sets of
classical ASP programs in terms of possibilistic logic where an ASP program
specifies a set of constraints on possibility distributions. This
characterization is then naturally generalized to define answer sets of PASP
programs. We furthermore provide a syntactic counterpart, leading to a
possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we
show how our framework can readily be implemented using standard ASP solvers.
|
1203.3467
|
Three new sensitivity analysis methods for influence diagrams
|
cs.AI
|
Performing sensitivity analysis for influence diagrams using the decision
circuit framework is particularly convenient, since the partial derivatives
with respect to every parameter are readily available [Bhattacharjya and
Shachter, 2007; 2008]. In this paper we present three non-linear sensitivity
analysis methods that utilize this partial derivative information and therefore
do not require re-evaluating the decision situation multiple times.
Specifically, we show how to efficiently compare strategies in decision
situations, perform sensitivity analysis to risk aversion, and compute the value of
perfect hedging [Seyller, 2008].
|
1203.3468
|
Bayesian Rose Trees
|
cs.LG stat.ML
|
Hierarchical structure is ubiquitous in data across many domains. There are
many hierarchical clustering methods, frequently used by domain experts, which
strive to discover this structure. However, most of these methods limit
discoverable hierarchies to those with binary branching structure. This
limitation, while computationally convenient, is often undesirable. In this
paper we explore a Bayesian hierarchical clustering algorithm that can produce
trees with arbitrary branching structure at each node, known as rose trees. We
interpret these trees as mixtures over partitions of a data set, and use a
computationally efficient, greedy agglomerative algorithm to find the rose
trees which have high marginal likelihood given the data. Lastly, we perform
experiments which demonstrate that rose trees are better models of data than
the typical binary trees returned by other hierarchical clustering algorithms.
|
1203.3469
|
Probabilistic Similarity Logic
|
cs.AI
|
Many machine learning applications require the ability to learn from and
reason about noisy multi-relational data. To address this, several effective
representations have been developed that provide both a language for expressing
the structural regularities of a domain, and principled support for
probabilistic inference. In addition to these two aspects, however, many
applications also involve a third aspect, the need to reason about
similarities, which has not been directly supported in existing frameworks. This
paper introduces probabilistic similarity logic (PSL), a general-purpose
framework for joint reasoning about similarity in relational domains that
incorporates probabilistic reasoning about similarities and relational
structure in a principled way. PSL can integrate any existing domain-specific
similarity measures and also supports reasoning about similarities between sets
of entities. We provide efficient inference and learning techniques for PSL and
demonstrate its effectiveness both in common relational tasks and in settings
that require reasoning about similarity.
|
1203.3470
|
ALARMS: Alerting and Reasoning Management System for Next Generation
Aircraft Hazards
|
cs.AI
|
The Next Generation Air Transportation System will introduce new, advanced
sensor technologies into the cockpit. With the introduction of such systems,
the responsibilities of the pilot are expected to dramatically increase. In the
ALARMS (ALerting And Reasoning Management System) project for NASA, we focus on
a key challenge of this environment, the quick and efficient handling of
aircraft sensor alerts. It is infeasible to alert the pilot on the state of all
subsystems at all times. Furthermore, there is uncertainty as to the true
hazard state despite the evidence of the alerts, and there is uncertainty as to
the effect and duration of actions taken to address these alerts. This paper
reports on the first steps in the construction of an application designed to
handle Next Generation alerts. In ALARMS, we have identified 60 different
aircraft subsystems and 20 different underlying hazards. In this paper, we show
how a Bayesian network can be used to derive the state of the underlying
hazards, based on the sensor input. Then, we propose a framework whereby an
automated system can plan to address these hazards in cooperation with the
pilot, using a Time-Dependent Markov Decision Process (TMDP). Different hazards and
pilot states will call for different alerting automation plans. We demonstrate
this emerging application of Bayesian networks and TMDPs to cockpit automation,
for a use case where a small number of hazards are present, and analyze the
resulting alerting automation policies.
|
1203.3471
|
An Online Learning-based Framework for Tracking
|
cs.LG cs.AI stat.ML
|
We study the tracking problem, namely, estimating the hidden state of an
object over time, from unreliable and noisy measurements. The standard
framework for the tracking problem is the generative framework, which is the
basis of solutions such as the Bayesian algorithm and its approximation, the
particle filter. However, these solutions can be very sensitive to model
mismatches. In this paper, motivated by online learning, we introduce a new
framework for tracking. We provide an efficient tracking algorithm for this
framework. We provide experimental results comparing our algorithm to the
Bayesian algorithm on simulated data. Our experiments show that when there are
slight model mismatches, our algorithm outperforms the Bayesian algorithm.
|
1203.3472
|
Super-Samples from Kernel Herding
|
cs.LG stat.ML
|
We extend the herding algorithm to continuous spaces by using the kernel
trick. The resulting "kernel herding" algorithm is an infinite memory
deterministic process that learns to approximate a PDF with a collection of
samples. We show that kernel herding decreases the error of expectations of
functions in the Hilbert space at a rate O(1/T) which is much faster than the
usual O(1/\sqrt{T}) for iid random samples. We illustrate kernel herding by
approximating Bayesian predictive distributions.
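A minimal sketch of the greedy selection rule over a finite candidate pool,
assuming an RBF kernel (our reconstruction of the procedure, not the authors'
code):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
pool = rng.standard_normal((1000, 2))   # candidates drawn from the target p

# Greedy herding: maximize the empirical mean embedding mu(x) minus the
# kernel mass already placed on the herded "super-samples".
mu = rbf(pool, pool).mean(axis=1)
herd = []
for t in range(50):
    penalty = (rbf(pool, pool[herd]).sum(axis=1) / (t + 1)) if herd else 0.0
    herd.append(int(np.argmax(mu - penalty)))
super_samples = pool[herd]
```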
|
1203.3473
|
Lifted Inference for Relational Continuous Models
|
cs.AI
|
Relational Continuous Models (RCMs) represent joint probability densities
over attributes of objects, when the attributes have continuous domains. With
relational representations, they can model joint probability distributions over
large numbers of variables compactly in a natural way. This paper presents a
new exact lifted inference algorithm for RCMs, which thus scales up to large
models of real-world applications. The algorithm applies to Relational Pairwise
Models which are (relational) products of potentials of arity 2. Our algorithm
is unique in two ways. First, it substantially improves the efficiency of
lifted inference with variables of continuous domains. When a relational model
has Gaussian potentials, it takes only linear time, compared to the cubic time of
previous methods. Second, it is the first exact inference algorithm which
handles RCMs in a lifted way. The algorithm is illustrated over an example from
econometrics. Experimental results show that our algorithm outperforms both a
ground-level inference algorithm and an algorithm built with previously known
lifted methods.
|
1203.3474
|
Distribution over Beliefs for Memory Bounded Dec-POMDP Planning
|
cs.AI
|
We propose a new point-based method for approximate planning in Dec-POMDPs
that outperforms state-of-the-art approaches in terms of solution quality.
It uses a heuristic estimation of the prior probability of beliefs to choose a
bounded number of policy trees: this choice is formulated as a combinatorial
optimisation problem minimising the error induced by pruning.
|
1203.3475
|
Inferring deterministic causal relations
|
cs.LG stat.ML
|
We consider two variables that are related to each other by an invertible
function. While it has previously been shown that the dependence structure of
the noise can provide hints to determine which of the two variables is the
cause, we presently show that even in the deterministic (noise-free) case,
there are asymmetries that can be exploited for causal inference. Our method is
based on the idea that if the function and the probability density of the cause
are chosen independently, then the distribution of the effect will, in a
certain sense, depend on the function. We provide a theoretical analysis of
this method, showing that it also works in the low noise regime, and link it to
information geometry. We report strong empirical results on various real-world
data sets from different domains.
|
1203.3476
|
Inference-less Density Estimation using Copula Bayesian Networks
|
cs.LG stat.ML
|
We consider learning continuous probabilistic graphical models in the face of
missing data. For non-Gaussian models, learning the parameters and structure of
such models depends on our ability to perform efficient inference, and can be
prohibitive even for relatively modest domains. Recently, we introduced the
Copula Bayesian Network (CBN) density model - a flexible framework that
captures complex high-dimensional dependency structures while offering direct
control over the univariate marginals, leading to improved generalization. In
this work we show that the CBN model also offers significant computational
advantages when training data is partially observed. Concretely, we leverage
the specialized form of the model to derive a computationally amenable learning
objective that is a lower bound on the log-likelihood function. Importantly,
our energy-like bound circumvents the need for costly inference of an auxiliary
distribution, thus facilitating practical learning of high-dimensional
densities. We demonstrate the effectiveness of our approach for learning the
structure and parameters of a CBN model for two real-life continuous domains.
|
1203.3477
|
A Scalable Method for Solving High-Dimensional Continuous POMDPs Using
Local Approximation
|
cs.AI
|
Partially-Observable Markov Decision Processes (POMDPs) are typically solved
by finding an approximate global solution to a corresponding belief-MDP. In
this paper, we offer a new planning algorithm for POMDPs with continuous state,
action and observation spaces. Since such domains have an inherent notion of
locality, we can find an approximate solution using local optimization methods.
We parameterize the belief distribution as a Gaussian mixture, and use the
Extended Kalman Filter (EKF) to approximate the belief update. Since the EKF is
a first-order filter, we can marginalize over the observations analytically. By
using feedback control and state estimation during policy execution, we recover
a behavior that is effectively conditioned on incoming observations despite the
unconditioned planning. Local optimization provides no guarantees of global
optimality, but it allows us to tackle domains that are at least an order of
magnitude larger than the current state-of-the-art. We demonstrate the
scalability of our algorithm by considering a simulated hand-eye coordination
domain with 16 continuous state dimensions and 6 continuous action dimensions.
|
1203.3478
|
Playing games against nature: optimal policies for renewable resource
allocation
|
cs.AI cs.GT
|
In this paper we introduce a class of Markov decision processes that arise as
a natural model for many renewable resource allocation problems. Upon extending
results from the inventory control literature, we prove that they admit a
closed form solution and we show how to exploit this structure to speed up its
computation. We consider the application of the proposed framework to several
problems arising in very different domains, and as part of the ongoing effort
in the emerging field of Computational Sustainability, we discuss in detail its
application to the Northern Pacific Halibut marine fishery. Our approach is
applied to a model based on real world data, obtaining a policy with a
guaranteed lower bound on the utility function that is structurally very
different from the one currently employed.
|
1203.3479
|
Maximum likelihood fitting of acyclic directed mixed graphs to binary
data
|
stat.ME cs.AI
|
Acyclic directed mixed graphs, also known as semi-Markov models, represent the
conditional independence structure induced on an observed margin by a DAG model
with latent variables. In this paper we present the first method for fitting
these models to binary data using maximum likelihood estimation.
|
1203.3480
|
Learning Game Representations from Data Using Rationality Constraints
|
cs.GT cs.AI
|
While game theory is widely used to model strategic interactions, a natural
question is: where do the game representations come from? One answer is to learn
the representations from data. If one wants to learn both the payoffs and the
players' strategies, a naive approach is to learn them both directly from the
data. This approach ignores the fact that the players might be playing reasonably
good strategies, so there is a connection between the strategies and the data.
The main contribution of this paper is to make this connection while learning.
We formulate the learning problem as a weighted constraint satisfaction
problem, including constraints both for the fit of the payoffs and strategies
to the data and the fit of the strategies to the payoffs. We use quantal
response equilibrium as our notion of rationality for quantifying the latter
fit. Our results show that incorporating rationality constraints can improve
learning when the amount of data is limited.
|
1203.3481
|
Real-Time Scheduling via Reinforcement Learning
|
cs.LG cs.AI stat.ML
|
Cyber-physical systems, such as mobile robots, must respond adaptively to
dynamic operating conditions. Effective operation of these systems requires
that sensing and actuation tasks are performed in a timely manner.
Additionally, execution of mission-specific tasks such as imaging a room must
be balanced against the need to perform more general tasks such as obstacle
avoidance. This problem has been addressed by maintaining relative utilization
of shared resources among tasks near a user-specified target level. Producing
optimal scheduling strategies requires complete prior knowledge of task
behavior, which is unlikely to be available in practice. Instead, suitable
scheduling strategies must be learned online through interaction with the
system. We consider the sample complexity of reinforcement learning in this
domain, and demonstrate that while the problem state space is countably
infinite, we may leverage the problem's structure to guarantee efficient
learning.
|
1203.3482
|
Formula-Based Probabilistic Inference
|
cs.AI
|
Computing the probability of a formula given the probabilities or weights
associated with other formulas is a natural extension of logical inference to
the probabilistic setting. Surprisingly, this problem has received little
attention in the literature to date, particularly considering that it includes
many standard inference problems as special cases. In this paper, we propose
two algorithms for this problem: formula decomposition and conditioning, which
is an exact method, and formula importance sampling, which is an approximate
method. The latter is, to our knowledge, the first application of model
counting to approximate probabilistic inference. Unlike conventional
variable-based algorithms, our algorithms work in the dual realm of logical
formulas. Theoretically, we show that our algorithms can greatly improve
efficiency by exploiting the structural information in the formulas.
Empirically, we show that they are indeed quite powerful, often achieving
substantial performance gains over state-of-the-art schemes.
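The problem statement can be made concrete with an exponential brute-force
baseline (ours, for illustration; the algorithms above are designed precisely
to avoid this enumeration). Given independent marginals, the probability of a
formula is the total weight of its satisfying assignments:

```python
from itertools import product

def formula_probability(formula, marginals):
    """P(formula) under independent variable marginals, by enumeration."""
    total = 0.0
    for bits in product([False, True], repeat=len(marginals)):
        weight = 1.0
        for b, p in zip(bits, marginals):
            weight *= p if b else 1.0 - p
        if formula(*bits):
            total += weight
    return total

# P((a or b) and not c) with P(a)=0.5, P(b)=0.2, P(c)=0.1:
# (1 - 0.5*0.8) * 0.9 = 0.54
print(formula_probability(lambda a, b, c: (a or b) and not c, [0.5, 0.2, 0.1]))
```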
|
1203.3483
|
Regularized Maximum Likelihood for Intrinsic Dimension Estimation
|
cs.LG stat.ML
|
We propose a new method for estimating the intrinsic dimension of a dataset
by applying the principle of regularized maximum likelihood to the distances
between close neighbors. We propose a regularization scheme which is motivated
by divergence minimization principles. We derive the estimator by a Poisson
process approximation, argue about its convergence properties and apply it to a
number of simulated and real datasets. We also show it has the best overall
performance compared with two other intrinsic dimension estimators.
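For context, the unregularized maximum-likelihood estimator (in the style of
Levina and Bickel) that serves as the starting point can be sketched as
follows (our baseline sketch, not the regularized estimator proposed above):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dim_mle(X, k=10):
    """Unregularized MLE of intrinsic dimension from neighbor distances."""
    # T[:, j] = distance to the (j+1)-th nearest neighbor; column 0
    # (distance of each point to itself) is dropped.
    T = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[0][:, 1:]
    # Per-point inverse dimension from log distance ratios, then averaged.
    inv_dim = np.log(T[:, -1:] / T[:, :-1]).mean(axis=1)
    return 1.0 / inv_dim.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 20))
print(intrinsic_dim_mle(X))  # close to 5 for this 5-d subspace of R^20
```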
|
1203.3484
|
Intracluster Moves for Constrained Discrete-Space MCMC
|
stat.CO cs.AI
|
This paper addresses the problem of sampling from binary distributions with
constraints. In particular, it proposes an MCMC method to draw samples from a
distribution of the set of all states at a specified distance from some
reference state. For example, when the reference state is the vector of zeros,
the algorithm can draw samples from a binary distribution with a constraint on
the number of active variables, say the number of 1's. We motivate the need for
this algorithm with examples from statistical physics and probabilistic
inference. Unlike previous algorithms proposed to sample from binary
distributions with these constraints, the new algorithm allows for large moves
in state space and tends to propose them such that they are energetically
favourable. The algorithm is demonstrated on three Boltzmann machines of
varying difficulty: A ferromagnetic Ising model (with positive potentials), a
restricted Boltzmann machine with learned Gabor-like filters as potentials, and
a challenging three-dimensional spin-glass (with positive and negative
potentials).
|
1203.3485
|
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
|
cs.LG stat.ML
|
There is much interest in the Hierarchical Dirichlet Process Hidden Markov
Model (HDP-HMM) as a natural Bayesian nonparametric extension of the
traditional HMM. However, in many settings the HDP-HMM's strict Markovian
constraints are undesirable, particularly if we wish to learn or encode
non-geometric state durations. We can extend the HDP-HMM to capture such
structure by drawing upon explicit-duration semi-Markovianity, which has been
developed in the parametric setting to allow construction of highly
interpretable models that admit natural prior information on state durations.
In this paper we introduce the explicit-duration HDP-HSMM and develop posterior
sampling algorithms for efficient inference in both the direct-assignment and
weak-limit approximation settings. We demonstrate the utility of the model and
our inference methods on synthetic data as well as experiments on a speaker
diarization problem and an example of learning the patterns in Morse code.
|
1203.3486
|
Combining Spatial and Telemetric Features for Learning Animal Movement
Models
|
cs.LG stat.ML
|
We introduce a new graphical model for tracking radio-tagged animals and
learning their movement patterns. The model provides a principled way to
combine radio telemetry data with an arbitrary set of user-defined spatial
features. We describe an efficient stochastic gradient algorithm for fitting
model parameters to data and demonstrate its effectiveness via asymptotic
analysis and synthetic experiments. We also apply our model to real datasets,
and show that it outperforms the most popular radio telemetry software package
used in ecology. We conclude that integration of different data sources under a
single statistical framework, coupled with appropriate parameter and state
estimation procedures, produces both accurate location estimates and an
interpretable statistical model of animal movement.
|
1203.3487
|
BEEM : Bucket Elimination with External Memory
|
cs.AI cs.DS
|
A major limitation of exact inference algorithms for probabilistic graphical
models is their extensive memory usage, which often puts real-world problems
out of their reach. In this paper we show how we can extend inference
algorithms, particularly Bucket Elimination, a special case of cluster (join)
tree decomposition, to utilize disk memory. We provide the underlying ideas and
show promising empirical results of exactly solving large problems not solvable
before.
|
1203.3488
|
Causal Conclusions that Flip Repeatedly and Their Justification
|
cs.LG cs.AI stat.ML
|
Over the past two decades, several consistent procedures have been designed
to infer causal conclusions from observational data. We prove that if the true
causal network might be an arbitrary, linear Gaussian network or a discrete
Bayes network, then every unambiguous causal conclusion produced by a
consistent method from non-experimental data is subject to reversal as the
sample size increases any finite number of times. That result, called the
causal flipping theorem, extends prior results to the effect that causal
discovery cannot be reliable on a given sample size. We argue that since
repeated flipping of causal conclusions is unavoidable in principle for
consistent methods, the best possible discovery methods are consistent methods
that retract their earlier conclusions no more than necessary. A series of
simulations of various methods across a wide range of sample sizes illustrates
concretely both the theorem and the principle of comparing methods in terms of
retractions.
|
1203.3489
|
Bayesian exponential family projections for coupled data sources
|
cs.LG stat.ML
|
Exponential family extensions of principal component analysis (EPCA) have
received a considerable amount of attention in recent years, demonstrating the
growing need for basic modeling tools that do not assume the squared loss or
Gaussian distribution. We extend the EPCA model toolbox by presenting the first
exponential family versions of the multi-view learning methods partial least
squares and canonical correlation analysis, based on a unified representation
of EPCA as matrix factorization of the natural parameters of the exponential
family. The
models are based on a new family of priors that are generally usable for all
such factorizations. We also introduce new inference strategies, and
demonstrate how the methods outperform earlier ones when the Gaussianity
assumption does not hold.
|
1203.3490
|
Anytime Planning for Decentralized POMDPs using Expectation Maximization
|
cs.AI
|
Decentralized POMDPs provide an expressive framework for multi-agent
sequential decision making. While finite-horizon DEC-POMDPs have enjoyed
significant success, progress remains slow for the infinite-horizon case mainly
due to the inherent complexity of optimizing stochastic controllers
representing agent policies. We present a promising new class of algorithms for
the infinite-horizon case, which recasts the optimization problem as inference
in a mixture of DBNs. An attractive feature of this approach is the
straightforward adoption of existing inference techniques in DBNs for solving
DEC-POMDPs and supporting richer representations such as factored or continuous
states and actions. We also derive the Expectation Maximization (EM) algorithm
to optimize the joint policy represented as DBNs. Experiments on benchmark
domains show that EM compares favorably against the state-of-the-art solvers.
|
1203.3491
|
Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost
|
cs.LG stat.ML
|
Logitboost is an influential boosting algorithm for classification. In this
paper, we develop robust logitboost to provide an explicit formulation of
tree-split criterion for building weak learners (regression trees) for
logitboost. This formulation leads to a numerically stable implementation of
logitboost. We then propose abc-logitboost for multi-class classification, by
combining robust logitboost with the prior work of abc-boost. Previously,
abc-boost was implemented as abc-mart using the mart algorithm. Our extensive
experiments on multi-class classification compare four algorithms: mart,
abc-mart, (robust) logitboost, and abc-logitboost, and demonstrate the
superiority of abc-logitboost. Comparisons with other learning methods
including SVM and deep learning are also available through prior publications.
|
1203.3492
|
Approximating Higher-Order Distances Using Random Projections
|
cs.LG stat.ML
|
We provide a simple method and relevant theoretical analysis for efficiently
estimating higher-order lp distances. While the analysis mainly focuses on l4,
our methodology extends naturally to p = 6,8,10..., (i.e., when p is even).
Distance-based methods are popular in machine learning. In large-scale
applications, storing, computing, and retrieving the distances can be both
space and time prohibitive. Efficient algorithms exist for estimating lp
distances if 0 < p <= 2. The task for p > 2 is known to be difficult. Our work
partially fills this gap.
|
1203.3493
|
Solving Hybrid Influence Diagrams with Deterministic Variables
|
cs.AI
|
We describe a framework and an algorithm for solving hybrid influence
diagrams with discrete, continuous, and deterministic chance variables, and
discrete and continuous decision variables. A continuous chance variable in an
influence diagram is said to be deterministic if its conditional distributions
have zero variances. The solution algorithm is an extension of Shenoy's fusion
algorithm for discrete influence diagrams. We describe an extended
Shenoy-Shafer architecture for propagation of discrete, continuous, and utility
potentials in hybrid influence diagrams that include deterministic chance
variables. The algorithm and framework are illustrated by solving two small
examples.
|
1203.3494
|
Negative Tree Reweighted Belief Propagation
|
cs.LG stat.ML
|
We introduce a new class of lower bounds on the log partition function of a
Markov random field which makes use of a reversed Jensen's inequality. In
particular, our method approximates the intractable distribution using a linear
combination of spanning trees with negative weights. This technique is a
lower-bound counterpart to the tree-reweighted belief propagation algorithm,
which uses a convex combination of spanning trees with positive weights to
provide corresponding upper bounds. We develop algorithms to optimize and
tighten the lower bounds over the non-convex set of valid parameter values. Our
algorithm generalizes mean field approaches (including naive and structured
mean field approximations), which it includes as a limiting case.
|
1203.3495
|
Parameter-Free Spectral Kernel Learning
|
cs.LG stat.ML
|
Due to the growing ubiquity of unlabeled data, learning with unlabeled data
is attracting increasing attention in machine learning. In this paper, we
propose a novel semi-supervised kernel learning method which can seamlessly
combine manifold structure of unlabeled data and Regularized Least-Squares
(RLS) to learn a new kernel. Interestingly, the new kernel matrix can be
obtained analytically via the spectral decomposition of the graph Laplacian
matrix. Hence, the proposed algorithm does not require any numerical
optimization solvers. Moreover, by maximizing kernel target alignment on
labeled data, we can also learn model parameters automatically with a
closed-form solution. For a given graph Laplacian matrix, our proposed method
does not need to tune any model parameter including the tradeoff parameter in
RLS and the balance parameter for unlabeled data. Extensive experiments on ten
benchmark datasets show that our proposed two-stage parameter-free spectral
kernel learning algorithm can obtain comparable performance with fine-tuned
manifold regularization methods in the transductive setting, and outperform
multiple kernel learning in the supervised setting.
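The kernel target alignment objective used above is the standard normalized
Frobenius inner product between the learned kernel and the label kernel yy^T;
a minimal sketch (ours):

```python
import numpy as np

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), labels in {-1,+1}."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1, 1, -1, -1])
print(kernel_target_alignment(np.outer(y, y).astype(float), y))  # 1.0
print(kernel_target_alignment(np.eye(4), y))                     # 0.5
```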
|
1203.3496
|
Dirichlet Process Mixtures of Generalized Mallows Models
|
cs.LG stat.ML
|
We present a Dirichlet process mixture model over discrete incomplete
rankings and study two Gibbs sampling inference techniques for estimating
posterior clusterings. The first approach uses a slice sampling subcomponent
for estimating cluster parameters. The second approach marginalizes out several
cluster parameters by taking advantage of approximations to the conditional
posteriors. We empirically demonstrate (1) the effectiveness of this
approximation for improving convergence, (2) the benefits of the Dirichlet
process model over alternative clustering techniques for ranked data, and (3)
the applicability of the approach to exploring large real-world ranking
datasets.
|
1203.3497
|
Parametric Return Density Estimation for Reinforcement Learning
|
cs.LG stat.ML
|
Most conventional Reinforcement Learning (RL) algorithms aim to optimize
decision-making rules in terms of the expected returns. However, especially for
risk management purposes, other risk-sensitive criteria such as the
value-at-risk or the expected shortfall are sometimes preferred in real
applications. Here, we describe a parametric method for estimating the density of
the returns, which allows us to handle various criteria in a unified manner. We
first extend the Bellman equation for the conditional expected return to cover
a conditional probability density of the returns. Then we derive an extension
of the TD-learning algorithm for estimating the return densities in an unknown
environment. As test instances, several parametric density estimation
algorithms are presented for the Gaussian, Laplace, and skewed Laplace
distributions. We show that these algorithms lead to risk-sensitive as well as
robust RL paradigms through numerical experiments.
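For the Gaussian case, the extended Bellman equation Z(s) = R + gamma * Z(s')
induces TD-style updates for the first two moments of the return; a hedged
sketch of such moment recursions (our reading, not the authors' code):

```python
import numpy as np

def td_gaussian_step(m, M2, s, r, s_next, alpha=0.05, gamma=0.9):
    """One TD step for a Gaussian return density N(m[s], M2[s] - m[s]**2).

    Sample targets follow from Z(s) = r + gamma * Z(s'):
      E[Z(s)]    -> r + gamma * m[s']
      E[Z(s)**2] -> r**2 + 2*gamma*r*m[s'] + gamma**2 * M2[s']
    """
    m_target = r + gamma * m[s_next]
    M2_target = r**2 + 2 * gamma * r * m[s_next] + gamma**2 * M2[s_next]
    m[s] += alpha * (m_target - m[s])
    M2[s] += alpha * (M2_target - M2[s])

m, M2 = np.zeros(3), np.ones(3)
td_gaussian_step(m, M2, s=0, r=1.0, s_next=1)
var = M2[0] - m[0]**2   # risk criteria (e.g. value-at-risk) are then
print(m[0], var)        # read off the fitted return density
```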
|
1203.3498
|
Automated Planning in Repeated Adversarial Games
|
cs.GT cs.AI
|
Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlights the necessary
features that enable an automated planning agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show that our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
|
1203.3499
|
A Delayed Column Generation Strategy for Exact k-Bounded MAP Inference
in Markov Logic Networks
|
cs.AI
|
The paper introduces k-bounded MAP inference, a parameterization of MAP
inference in Markov logic networks. k-Bounded MAP states are MAP states with at
most k active ground atoms of hidden (non-evidence) predicates. We present a
novel delayed column generation algorithm and provide empirical evidence that
the algorithm efficiently computes k-bounded MAP states for meaningful
real-world graph matching problems. The underlying idea is that, instead of
solving one large optimization problem, it is often more efficient to tackle
several small ones.
|
1203.3500
|
Comparative Analysis of Probabilistic Models for Activity Recognition
with an Instrumented Walker
|
cs.AI
|
Rollating walkers are popular mobility aids used by older adults to improve
balance control. There is a need to automatically recognize the activities
performed by walker users to better understand activity patterns, mobility
issues and the context in which falls are more likely to happen. We design and
compare several techniques to recognize walker-related activities. A
comprehensive evaluation with control subjects and walker users from a
retirement community is presented.
|
1203.3501
|
Algorithms and Complexity Results for Exact Bayesian Structure Learning
|
cs.LG cs.DS stat.ML
|
Bayesian structure learning is the NP-hard problem of discovering a Bayesian
network that optimally represents a given set of training data. In this paper
we study the computational worst-case complexity of exact Bayesian structure
learning under graph theoretic restrictions on the super-structure. The
super-structure (a concept introduced by Perrier, Imoto, and Miyano, JMLR 2008)
is an undirected graph that contains as subgraphs the skeletons of solution
networks. Our results apply to several variants of score-based Bayesian
structure learning where the score of a network decomposes into local scores of
its nodes. Results: We show that exact Bayesian structure learning can be
carried out in non-uniform polynomial time if the super-structure has bounded
treewidth and in linear time if in addition the super-structure has bounded
maximum degree. We complement this with a number of hardness results. We show
that both restrictions (treewidth and degree) are essential and cannot be
dropped without losing uniform polynomial time tractability (subject to a
complexity-theoretic assumption). Furthermore, we show that the restrictions
remain essential if we do not search for a globally optimal network but we aim
to improve a given network by means of at most k arc additions, arc deletions,
or arc reversals (k-neighborhood local search).
|
1203.3502
|
The Cost of Troubleshooting Cost Clusters with Inside Information
|
cs.AI cs.DS
|
Decision theoretical troubleshooting is about minimizing the expected cost of
solving a certain problem like repairing a complicated man-made device. In this
paper we consider situations where you have to take apart some of the device to
get access to certain clusters and actions. Specifically, we investigate
troubleshooting with independent actions in a tree of clusters where actions
inside a cluster cannot be performed before the cluster is opened. The problem
is non-trivial because there is a cost associated with opening and closing a
cluster. Troubleshooting with independent actions and no clusters can be solved
in O(n lg n) time (n being the number of actions) by the well-known "P-over-C"
algorithm due to Kadane and Simon, but an efficient and optimal algorithm for a
tree cluster model has not yet been found. In this paper we describe a
"bottom-up P-over-C" O(n lg n) time algorithm and show that it is optimal when
the clusters do not need to be closed to test whether the actions solved the
problem.
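For reference, the flat (cluster-free) P-over-C rule mentioned above is easy
to state: try independent actions in decreasing order of their
success-probability-to-cost ratio. A small sketch (ours, illustrating only the
cluster-free case, not the bottom-up tree algorithm):

```python
def expected_cost(seq):
    """Expected cost of trying (p, c) actions in order until one succeeds."""
    total, p_unsolved = 0.0, 1.0
    for p, c in seq:
        total += p_unsolved * c
        p_unsolved *= 1.0 - p
    return total

actions = [(0.2, 1.0), (0.6, 5.0), (0.5, 2.0)]  # (success prob, cost)
p_over_c = sorted(actions, key=lambda a: a[0] / a[1], reverse=True)
print(expected_cost(p_over_c))  # 4.5, optimal for independent flat actions
print(expected_cost(actions))   # 5.64 in the original order
```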
|
1203.3503
|
On a Class of Bias-Amplifying Variables that Endanger Effect Estimates
|
stat.ME cs.AI
|
This note deals with a class of variables that, if conditioned on, tends to
amplify confounding bias in the analysis of causal effects. This class,
independently discovered by Bhattacharya and Vogt (2007) and Wooldridge (2009),
includes instrumental variables and variables that have greater influence on
treatment selection than on the outcome. We offer a simple derivation and an
intuitive explanation of this phenomenon and then extend the analysis to
non-linear models. We show that: 1. the bias-amplifying potential of
instrumental variables carries over to non-linear models, though not as
sweepingly as in
linear models; 2. in non-linear models, conditioning on instrumental variables
may introduce new bias where none existed before; 3. in both linear and
non-linear models, instrumental variables have no effect on selection-induced
bias.
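The linear-model case is easy to reproduce numerically; a hedged simulation
sketch (ours, with arbitrary coefficients) in which adjusting for an
instrument Z worsens the confounded estimate of the effect of X on Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Z = rng.standard_normal(n)                # instrument: affects treatment only
U = rng.standard_normal(n)                # unobserved confounder
X = Z + U + rng.standard_normal(n)        # treatment
Y = 0.5 * X + U + rng.standard_normal(n)  # true effect of X on Y is 0.5

def coef_on_x(*regressors):
    A = np.column_stack(regressors)
    return np.linalg.lstsq(A, Y, rcond=None)[0][0]  # coefficient on X

print(coef_on_x(X))     # ~0.83: confounding bias from U
print(coef_on_x(X, Z))  # ~1.00: conditioning on Z amplifies the bias
```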
|
1203.3504
|
On Measurement Bias in Causal Inference
|
stat.ME cs.AI
|
This paper addresses the problem of measurement errors in causal inference
and highlights several algebraic and graphical methods for eliminating
systematic bias induced by such errors. In particular, the paper discusses the
control of partially observable confounders in parametric and non-parametric
models and the computational problem of obtaining bias-free effect estimates in
such models.
|
1203.3505
|
Confounding Equivalence in Causal Inference
|
stat.ME cs.AI
|
The paper provides a simple test for deciding, from a given causal diagram,
whether two sets of variables have the same bias-reducing potential under
adjustment. The test requires that one of the following two conditions holds:
either (1) both sets are admissible (i.e., satisfy the back-door criterion) or
(2) the Markov boundaries surrounding the manipulated variable(s) are identical
in both sets. Applications to covariate selection and model testing are
discussed.
|
1203.3506
|
A Family of Computationally Efficient and Simple Estimators for
Unnormalized Statistical Models
|
cs.LG stat.ML
|
We introduce a new family of estimators for unnormalized statistical models.
Our family of estimators is parameterized by two nonlinear functions and uses a
single sample from an auxiliary distribution, generalizing Maximum Likelihood
Monte Carlo estimation of Geyer and Thompson (1992). The family is such that we
can estimate the partition function like any other parameter in the model. The
estimation is done by optimizing an algebraically simple, well-defined
objective function, which allows for the use of dedicated optimization methods.
We establish consistency of the estimator family and give an expression for the
asymptotic covariance matrix, which enables us to further analyze the influence
of the nonlinearities and the auxiliary density on estimation performance. Some
estimators in our family are particularly stable for a wide range of auxiliary
densities. Interestingly, a specific choice of the nonlinearity establishes a
connection between density estimation and classification by nonlinear logistic
regression. Finally, the optimal amount of auxiliary samples relative to the
given amount of the data is considered from the perspective of computational
efficiency.
|
1203.3507
|
Sparse-posterior Gaussian Processes for general likelihoods
|
cs.LG stat.ML
|
Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. Among them, two state-of-the-art
methods are sparse pseudo-input Gaussian process (SPGP) (Snelson and
Ghahramani, 2006) and variable-sigma GP (VSGP) (Walder et al., 2008), which
generalizes SPGP and allows each basis point to have its own length scale.
However, VSGP was only derived for regression. In this paper, we propose a new
sparse GP framework that uses expectation propagation to directly approximate
general GP likelihoods using a sparse and smooth basis. It includes both SPGP
and VSGP for regression as special cases. Moreover, as an EP algorithm, it inherits
the ability to process data online. As a particular choice of approximating
family, we blur each basis point with a Gaussian distribution that has a full
covariance matrix representing the data distribution around that basis point;
as a result, we can summarize local data manifold information with a small set
of basis points. Our experiments demonstrate that this framework outperforms
previous GP classification methods on benchmark datasets in terms of minimizing
divergence to the non-sparse GP solution as well as lower misclassification
rate.
|
1203.3508
|
Merging Knowledge Bases in Possibilistic Logic by Lexicographic
Aggregation
|
cs.AI
|
Belief merging is an important but difficult problem in Artificial
Intelligence, especially when sources of information are pervaded with
uncertainty. Many merging operators have been proposed to deal with this
problem in possibilistic logic, a weighted logic which is powerful for handling
inconsistency and dealing with uncertainty. They often result in a
possibilistic knowledge base, which is a set of weighted formulas. Although
possibilistic logic is inconsistency tolerant, it suffers from the well-known
"drowning effect". Therefore, we may still want to obtain a consistent
possibilistic knowledge base as the result of merging. In such a case, we argue
that it is not always necessary to keep weighted information after merging.
In this paper, we define a merging operator that maps a set of possibilistic
knowledge bases and a formula representing the integrity constraints to a
classical knowledge base by using lexicographic ordering. We show that it
satisfies nine postulates that generalize basic postulates for propositional
merging given in [11]. These postulates capture the principle of minimal change
in some sense. We then provide an algorithm for generating the resulting
knowledge base of our merging operator. Finally, we discuss the compatibility
of our merging operator with propositional merging and establish the advantage
of our merging operator over existing semantic merging operators in the
propositional case.
|
1203.3509
|
Characterizing the Set of Coherent Lower Previsions with a Finite Number
of Constraints or Vertices
|
cs.AI
|
The standard coherence criterion for lower previsions is expressed using an
infinite number of linear constraints. For lower previsions that are
essentially defined on some finite set of gambles on a finite possibility
space, we present a reformulation of this criterion that only uses a finite
number of constraints. Any such lower prevision is coherent if it lies within
the convex polytope defined by these constraints. The vertices of this polytope
are the extreme coherent lower previsions for the given set of gambles. Our
reformulation makes it possible to compute them. We show how this is done and
illustrate the procedure and its results.
|
1203.3510
|
Irregular-Time Bayesian Networks
|
cs.AI cs.LG stat.ML
|
In many fields observations are performed irregularly along time, due to
either measurement limitations or lack of a constant immanent rate. While
discrete-time Markov models (such as Dynamic Bayesian Networks) introduce
either inefficient computation or information loss when reasoning about such
processes, continuous-time Markov models assume either a discrete state space
(as in Continuous-Time Bayesian Networks) or a flat continuous state space (as
with stochastic differential equations). To address these problems, we present a new
modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing
Dynamic Bayesian Networks, allowing substantially more compact representations,
and increasing the expressivity of the temporal dynamics. In addition, a
globally optimal solution is guaranteed when learning temporal systems,
provided that they are fully observed at the same irregularly spaced
time-points, and a semiparametric subclass of ITBNs is introduced to allow
further adaptation to the irregular nature of the available data.
|
1203.3511
|
Inference by Minimizing Size, Divergence, or their Sum
|
cs.LG cs.CL stat.ML
|
We speed up marginal inference by ignoring factors that do not significantly
contribute to overall accuracy. In order to pick a suitable subset of factors
to ignore, we propose three schemes: minimizing the number of model factors
under a bound on the KL divergence between pruned and full models; minimizing
the KL divergence under a bound on factor count; and minimizing the weighted
sum of KL divergence and factor count. All three problems are solved using an
approximation of the KL divergence that can be calculated in terms of marginals
computed on a simple seed graph. Applied to synthetic image denoising and to
three different types of NLP parsing models, this technique performs marginal
inference up to 11 times faster than loopy BP, with graph sizes reduced by up
to 98%, at comparable error in marginals and parsing accuracy. We also show that
minimizing the weighted sum of divergence and size is substantially faster than
minimizing either of the other objectives based on the approximation to
divergence presented here.
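  A minimal sketch of the third objective, under the assumption that the
seed-graph approximation makes per-factor divergence costs additive; `kl_cost`
maps hypothetical factor names to estimated KL contributions and `lam` is the
price of keeping a factor:

```python
def prune_factors(kl_cost, lam):
    """Minimize sum(dropped KL costs) + lam * (number of kept factors).

    Under an additive approximation the decision decouples per factor:
    keep f exactly when dropping it would cost more than keeping it.
    """
    kept = [f for f, c in kl_cost.items() if c > lam]
    dropped = [f for f, c in kl_cost.items() if c <= lam]
    return kept, dropped

# Usage with hypothetical per-factor divergence estimates.
kept, dropped = prune_factors({"f1": 0.50, "f2": 0.01, "f3": 0.20}, lam=0.05)
```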
|
1203.3512
|
Exact and Approximate Inference in Associative Hierarchical Networks
using Graph Cuts
|
cs.AI cs.CV
|
Markov Networks are widely used throughout computer vision and machine
learning. An important subclass is the Associative Markov Networks, which are
used in a wide variety of applications. For these networks a good approximate
minimum cost solution can be found efficiently using graph cut based move
making algorithms such as alpha-expansion. Recently a related model has been
proposed, the associative hierarchical network, which provides a natural
generalisation of the Associative Markov Network for higher order cliques (i.e.
clique size greater than two). This method provides a good model for the object
class segmentation problem in computer vision. Within this paper we briefly
describe the associative hierarchical network and provide a computationally
efficient method for approximate inference based on graph cuts. Our method
performs well for networks containing hundreds of thousands of variables, with
higher-order potentials defined over cliques containing tens of thousands of
variables. Due to the size of these problems, standard linear programming
techniques are inapplicable. We show that our method achieves an approximation
bound of 4 for the solution of general associative hierarchical networks with
arbitrary clique size, noting that few results on bounds exist for the
labelling of Markov Networks with higher-order cliques.
|
1203.3513
|
Dynamic programming in influence diagrams with decision circuits
|
cs.AI
|
Decision circuits perform efficient evaluation of influence diagrams,
building on the advances in arithmetic circuits for belief network inference
[Darwiche, 2003; Bhattacharjya and Shachter, 2007]. We show how even more
compact decision circuits can be constructed for dynamic programming in
influence diagrams with separable value functions and conditionally independent
subproblems. Once a decision circuit has been constructed based on the
diagram's "global" graphical structure, it can be compiled to exploit "local"
structure for efficient evaluation and sensitivity analysis.
|
1203.3514
|
Maximizing the Spread of Cascades Using Network Design
|
cs.SI physics.soc-ph
|
We introduce a new optimization framework to maximize the expected spread of
cascades in networks. Our model allows a rich set of actions that directly
manipulate cascade dynamics by adding nodes or edges to the network. Our
motivating application is one in spatial conservation planning, where a cascade
models the dispersal of wild animals through a fragmented landscape. We propose
a mixed integer programming (MIP) formulation that combines elements from
network design and stochastic optimization. Our approach results in solutions
with stochastic optimality guarantees and points to conservation strategies
that are fundamentally different from naive approaches.
|
1203.3515
|
On the Validity of Covariate Adjustment for Estimating Causal Effects
|
stat.ME cs.AI
|
Identifying effects of actions (treatments) on outcome variables from
observational data and causal assumptions is a fundamental problem in causal
inference. This identification is made difficult by the presence of confounders
which can be related to both treatment and outcome variables. Confounders are
often handled, both in theory and in practice, by adjusting for covariates, in
other words, considering outcomes conditioned on treatment and covariate
values, weighted by the probability of observing those covariate values. In this paper, we
give a complete graphical criterion for covariate adjustment, which we term the
adjustment criterion, and derive some interesting corollaries of the
completeness of this criterion.
|
1203.3516
|
Modeling Events with Cascades of Poisson Processes
|
cs.LG cs.AI stat.ML
|
We present a probabilistic model of events in continuous time in which each
event triggers a Poisson process of successor events. The ensemble of observed
events is thereby modeled as a superposition of Poisson processes. Efficient
inference is feasible under this model with an EM algorithm. Moreover, the EM
algorithm can be implemented as a distributed algorithm, permitting the model
to be applied to very large datasets. We apply these techniques to the modeling
of Twitter messages and the revision history of Wikipedia.
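  A minimal EM sketch under stated assumptions: a constant background rate `mu`
and an exponential triggering kernel `alpha * beta * exp(-beta * dt)` (the
paper's kernels and its distributed implementation may differ); `t` is a sorted
array of event times on `[0, T]`:

```python
import numpy as np

def em_step(t, T, mu, alpha, beta):
    n = len(t)
    p0 = np.zeros(n)          # responsibility of the background process
    expected_children = 0.0   # total responsibility assigned to triggering
    weighted_dt = 0.0         # responsibility-weighted parent-child delays
    for j in range(n):
        # E-step: posterior probability that event j was triggered by event i < j.
        rates = alpha * beta * np.exp(-beta * (t[j] - t[:j]))
        total = mu + rates.sum()
        p0[j] = mu / total
        pj = rates / total
        expected_children += pj.sum()
        weighted_dt += (pj * (t[j] - t[:j])).sum()
    # M-step: closed-form updates for this kernel (edge effects ignored).
    mu_new = p0.sum() / T
    alpha_new = expected_children / n
    beta_new = expected_children / max(weighted_dt, 1e-12)
    return mu_new, alpha_new, beta_new
```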
|
1203.3517
|
A Bayesian Matrix Factorization Model for Relational Data
|
cs.LG stat.ML
|
Relational learning can be used to augment one data source with other
correlated sources of information, to improve predictive accuracy. We frame a
large class of relational learning problems as matrix factorization problems,
and propose a hierarchical Bayesian model. Training our Bayesian model using
random-walk Metropolis-Hastings is impractically slow, and so we develop a
block Metropolis-Hastings sampler which uses the gradient and Hessian of the
likelihood to dynamically tune the proposal. We demonstrate that a predictive
model of brain response to stimuli can be improved by augmenting it with side
information about the stimuli.
|
1203.3518
|
Variance-Based Rewards for Approximate Bayesian Reinforcement Learning
|
cs.LG cs.AI stat.ML
|
The explore-exploit dilemma is one of the central challenges in Reinforcement
Learning (RL). Bayesian RL solves the dilemma by providing the agent with
information in the form of a prior distribution over environments; however,
full Bayesian planning is intractable. Planning with the mean MDP is a common
myopic approximation of Bayesian planning. We derive a novel reward bonus that
is a function of the posterior distribution over environments, which, when
added to the reward in planning with the mean MDP, results in an agent which
explores efficiently and effectively. Although our method is similar to
existing methods when given an uninformative or unstructured prior, unlike
existing methods, our method can exploit structured priors. We prove that our
method results in a polynomial sample complexity and empirically demonstrate
its advantages in a structured exploration task.
|
1203.3519
|
Bayesian Inference in Monte-Carlo Tree Search
|
cs.LG cs.AI stat.ML
|
Monte-Carlo Tree Search (MCTS) methods are drawing great interest after
yielding breakthrough results in computer Go. This paper proposes a Bayesian
approach to MCTS that is inspired by distribution-free approaches such as UCT
[13], yet significantly differs in important respects. The Bayesian framework
allows potentially much more accurate (Bayes-optimal) estimation of node values
and node uncertainties from a limited number of simulation trials. We further
propose propagating inference in the tree via fast analytic Gaussian
approximation methods: this can make the overhead of Bayesian inference
manageable in domains such as Go, while preserving high accuracy of
expected-value estimates. We find that the Bayesian approach substantially
outperforms UCT empirically in an idealized bandit-tree test environment, where
we can obtain valuable insights by comparing with known ground truth. Additionally, we rigorously prove
on-policy and off-policy convergence of the proposed methods.
|
1203.3520
|
Bayesian Model Averaging Using the k-best Bayesian Network Structures
|
cs.LG cs.AI stat.ML
|
We study the problem of learning Bayesian network structures from data. We
develop an algorithm for finding the k-best Bayesian network structures. We
propose to compute the posterior probabilities of hypotheses of interest by
Bayesian model averaging over the k-best Bayesian networks. We present
empirical results on structural discovery over several real and synthetic data
sets, and show that the method outperforms the model selection method and
state-of-the-art MCMC methods.
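  A minimal sketch of the averaging step, assuming the k-best search has
already returned unnormalized log posterior scores per structure together with
an indicator of whether each structure contains the hypothesis of interest
(e.g. a particular arc):

```python
import math

def averaged_posterior(log_scores, contains_hypothesis):
    """P(hypothesis | data) ~= sum over the k best structures G of
    w_G * 1[G contains hypothesis], with w_G proportional to exp(score)."""
    m = max(log_scores.values())                  # log-sum-exp stabilization
    w = {G: math.exp(s - m) for G, s in log_scores.items()}
    z = sum(w.values())
    return sum(w[G] for G in w if contains_hypothesis[G]) / z

# Usage with three hypothetical structures.
p = averaged_posterior({"G1": -100.0, "G2": -101.2, "G3": -104.0},
                       {"G1": True, "G2": True, "G3": False})
```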
|
1203.3521
|
Learning networks determined by the ratio of prior and data
|
cs.LG stat.ML
|
Recent reports have described that the equivalent sample size (ESS) in a
Dirichlet prior plays an important role in learning Bayesian networks. This
paper provides an asymptotic analysis of the marginal likelihood score for a
Bayesian network. Results show that the ratio of the ESS to the sample size
determines the penalty for adding arcs in learning Bayesian networks. The number
of arcs increases monotonically as the ESS increases; the number of arcs
monotonically decreases as the ESS decreases. Furthermore, the marginal
likelihood score provides a unified expression of various score metrics by
changing prior knowledge.
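  A minimal sketch of the BDeu-style local marginal-likelihood term through
which the ESS enters, shown only to make the role of the ESS concrete;
`counts[j][k]` is the number of cases with parent configuration j (of q) and
child state k (of r), and the Dirichlet hyperparameters are `ess/q` and
`ess/(q*r)`:

```python
from math import lgamma

def bdeu_local_score(counts, ess):
    q, r = len(counts), len(counts[0])
    a_j, a_jk = ess / q, ess / (q * r)
    score = 0.0
    for row in counts:
        n_j = sum(row)
        score += lgamma(a_j) - lgamma(a_j + n_j)
        score += sum(lgamma(a_jk + n) - lgamma(a_jk) for n in row)
    return score
```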
|
1203.3522
|
Online Semi-Supervised Learning on Quantized Graphs
|
cs.LG stat.ML
|
In this paper, we tackle the problem of online semi-supervised learning
(SSL). When data arrive in a stream, the dual problems of computation and data
storage arise for any SSL method. We propose a fast approximate online SSL
algorithm that solves for the harmonic solution on an approximate graph. We
show, both empirically and theoretically, that good behavior can be achieved by
collapsing nearby points into a set of local "representative points" that
minimize distortion. Moreover, we regularize the harmonic solution to achieve
better stability properties. We apply our algorithm to face recognition and
optical character recognition applications to show that we can take advantage
of the manifold structure to outperform the previous methods. Unlike previous
heuristic approaches, we show that our method yields provable performance
bounds.
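  For reference, a minimal dense sketch of the exact harmonic solution that the
algorithm approximates online (the paper's contribution is solving this on a
quantized graph with provable bounds); `W` is a symmetric weight matrix,
`labeled` the indices of the labeled points, and `y_l` their labels:

```python
import numpy as np

def harmonic_solution(W, labeled, y_l):
    """Solve L_uu f_u = W_ul y_l, where L = D - W is the graph Laplacian."""
    n = W.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[labeled] = True
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(L[~mask][:, ~mask], W[~mask][:, mask] @ y_l)
```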
|
1203.3523
|
Risk Sensitive Path Integral Control
|
cs.SY math.OC
|
Recently path integral methods have been developed for stochastic optimal
control for a wide class of models with non-linear dynamics in continuous
space-time. Path integral methods find the control that minimizes the expected
cost-to-go. In this paper we show that under the same assumptions, path
integral methods generalize directly to risk sensitive stochastic optimal
control. Here the method minimizes the expectation of an exponentially weighted
cost-to-go. Depending on the exponential weight, risk-seeking or risk-averse
behaviour is obtained. We demonstrate the approach on risk sensitive stochastic
optimal control problems beyond the linear-quadratic case, showing the
intricate interaction of multi-modal control with risk sensitivity.
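  A minimal Monte-Carlo sketch of the exponentially weighted objective,
J(theta) = (1/theta) log E[exp(theta C)]: theta > 0 gives risk-averse and
theta < 0 risk-seeking behaviour, and theta -> 0 recovers the ordinary expected
cost-to-go (the path-integral weighting of the sampled costs is as in the paper
and not shown here):

```python
import numpy as np

def risk_sensitive_cost(costs, theta):
    costs = np.asarray(costs, dtype=float)
    if abs(theta) < 1e-12:
        return costs.mean()                  # risk-neutral limit
    z = theta * costs
    m = z.max()                              # log-sum-exp for stability
    return (m + np.log(np.mean(np.exp(z - m)))) / theta
```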
|
1203.3524
|
Speeding up the binary Gaussian process classification
|
stat.ML cs.LG
|
Gaussian processes (GP) are attractive building blocks for many probabilistic
models. Their drawbacks, however, are the rapidly increasing inference time and
memory requirements as the data grow. The problem can be alleviated
with compactly supported (CS) covariance functions, which produce sparse
covariance matrices that are fast in computations and cheap to store. CS
functions have previously been used in GP regression, but here the focus is on a
classification problem. This brings new challenges, since the posterior
inference has to be done approximately. We utilize the expectation propagation
algorithm and show how its standard implementation has to be modified to obtain
computational benefits from the sparse covariance matrices. We study four CS
covariance functions and show that they can yield a substantial speed-up in
inference time compared to globally supported functions.
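  A minimal sketch of the computational point, assuming one common CS choice,
the Wendland function k(r) = (1 - r/l)^4 (4r/l + 1) for r <= l and zero beyond,
so the Gram matrix can be assembled and stored as a sparse matrix (the four CS
functions studied in the paper may differ):

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def wendland_cov(X, length):
    tree = cKDTree(X)
    d = tree.sparse_distance_matrix(tree, max_distance=length).tocoo()
    r = d.data / length
    k = (1.0 - r) ** 4 * (4.0 * r + 1.0)
    K = sparse.coo_matrix((k, (d.row, d.col)), shape=(len(X),) * 2).tolil()
    K.setdiag(1.0)  # k(0) = 1; set explicitly in case zero distances were dropped
    return K.tocsr()

X = np.random.rand(2000, 2)
K = wendland_cov(X, length=0.1)
print(f"stored entries: {K.nnz} of {K.shape[0] ** 2}")  # typically a few percent
```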
|
1203.3525
|
Learning Why Things Change: The Difference-Based Causality Learner
|
cs.AI
|
In this paper, we present the Difference-Based Causality Learner (DBCL), an
algorithm for learning a class of discrete-time dynamic models that represents
all causation across time by means of difference equations driving change in a
system. We motivate this representation with real-world mechanical systems and
prove DBCL's correctness for learning structure from time series data, an
endeavour that is complicated by the existence of latent derivatives that have
to be detected. We also prove that, under common assumptions for causal
discovery, DBCL will identify the presence or absence of feedback loops, making
the model more useful for predicting the effects of manipulating variables when
the system is in equilibrium. We argue analytically and show empirically the
advantages of DBCL over vector autoregression (VAR) and Granger causality
models, as well as modified forms of Bayesian and constraint-based structure
discovery algorithms. Finally, we show that our algorithm can discover causal
directions of alpha rhythms in human brains from EEG data.
|
1203.3526
|
Primal View on Belief Propagation
|
cs.LG cs.AI stat.ML
|
It is known that fixed points of loopy belief propagation (BP) correspond to
stationary points of the Bethe variational problem, where we minimize the Bethe
free energy subject to normalization and marginalization constraints.
Unfortunately, this does not entirely explain BP because BP is a dual rather
than primal algorithm to solve the Bethe variational problem -- beliefs are
infeasible before convergence. Thus, we have no better understanding of BP than
as an algorithm seeking a common zero of a system of non-linear functions that
are not explicitly related to each other. In this theoretical paper, we show
that these functions are in fact explicitly related -- they are the partial
derivatives of a single function of reparameterizations. This means that BP
seeks a stationary point of a single function, without any constraints. This
function has a very natural form: it is a linear combination of local
log-partition functions, exactly as the Bethe entropy is the same linear
combination of local entropies.
|
1203.3527
|
Truthful Feedback for Sanctioning Reputation Mechanisms
|
cs.GT cs.AI
|
For product rating environments similar to that of Amazon Reviews, it has
been shown that truthful elicitation of feedback is possible through
mechanisms that pay for buyer reports contingent on the reports of other buyers.
We study whether similar mechanisms can be designed for reputation mechanisms
at online auction sites where the buyers' experiences are partially determined
by a strategic seller. We show that this is impossible for the basic setting.
However, introducing a small prior belief that the seller is a cooperative
commitment player leads to a payment scheme with a truthful perfect Bayesian
equilibrium.
|
1203.3528
|
Rollout Sampling Policy Iteration for Decentralized POMDPs
|
cs.AI
|
We present decentralized rollout sampling policy iteration (DecRSPI) - a new
algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI
is designed to improve scalability and tackle problems that lack an explicit
model. The algorithm uses Monte-Carlo methods to generate a sample of
reachable belief states. Then it computes a joint policy for each belief state
based on the rollout estimations. A new policy representation allows us to
represent solutions compactly. The key benefits of the algorithm are its linear
time complexity over the number of agents, its bounded memory usage and good
solution quality. It can solve larger problems that are intractable for
existing planning algorithms. Experimental results confirm the effectiveness
and scalability of the approach.
|
1203.3529
|
Modeling Multiple Annotator Expertise in the Semi-Supervised Learning
Scenario
|
cs.LG cs.AI stat.ML
|
Learning algorithms normally assume that there is at most one annotation or
label per data point. However, in some scenarios, such as medical diagnosis and
on-line collaboration, multiple annotations may be available. In either case,
obtaining labels for data points can be expensive and time-consuming (in some
circumstances ground-truth may not exist). Semi-supervised learning approaches
have shown that utilizing the unlabeled data is often beneficial in these
cases. This paper presents a probabilistic semi-supervised model and algorithm
that allows for learning from both unlabeled and labeled data in the presence
of multiple annotators. We assume that it is known which annotator labeled which
data points. The proposed approach produces annotator models that allow us to
provide (1) estimates of the true label and (2) estimates of each annotator's
variable expertise, for both labeled and unlabeled data. We provide numerical comparisons under
various scenarios and with respect to standard semi-supervised learning.
Experiments showed that the presented approach provides clear advantages over
multi-annotator methods that do not use the unlabeled data and over methods
that do not use multi-labeler information.
|
1203.3530
|
Hybrid Generative/Discriminative Learning for Automatic Image Annotation
|
cs.LG cs.CV stat.ML
|
Automatic image annotation (AIA) raises tremendous challenges to machine
learning as it requires modeling of data that are both ambiguous in input and
output, e.g., images containing multiple objects and labeled with multiple
semantic tags. Even more challenging is that the number of candidate tags is
usually huge (as large as the vocabulary size) yet each image is only related
to a few of them. This paper presents a hybrid generative-discriminative
classifier to simultaneously address the extreme data-ambiguity and
overfitting-vulnerability issues in tasks such as AIA. Particularly: (1) an
Exponential-Multinomial Mixture (EMM) model is established to capture both the
input and output ambiguity while encouraging prediction
sparsity; and (2) the prediction ability of the EMM model is explicitly
maximized through discriminative learning that integrates variational inference
of graphical models and the pairwise formulation of ordinal regression.
Experiments show that our approach achieves both superior annotation
performance and better tag scalability.
|
1203.3531
|
Solving Multistage Influence Diagrams using Branch-and-Bound Search
|
cs.AI
|
A branch-and-bound approach to solving influence diagrams has been
previously proposed in the literature, but appears to have never been
implemented and evaluated, apparently due to the difficulties of computing
effective bounds for the branch-and-bound search. In this paper, we describe
how to efficiently compute effective bounds, and we develop a practical
implementation of depth-first branch-and-bound search for influence diagram
evaluation that outperforms existing methods for solving influence diagrams
with multiple stages.
|
1203.3532
|
Learning Structural Changes of Gaussian Graphical Models in Controlled
Experiments
|
cs.LG stat.ML
|
Graphical models are widely used in scientific and engineering research to
represent conditional independence structures between random variables. In many
controlled experiments, environmental changes or external stimuli can often
alter the conditional dependence between the random variables, and potentially
produce significant structural changes in the corresponding graphical models.
Therefore, it is of great importance to be able to detect such structural
changes from data, so as to gain novel insights into where and how the
structural changes take place and help the system adapt to the new environment.
Here we report an effective learning strategy to extract structural changes in
Gaussian graphical models using l1-regularization-based convex optimization. We
discuss the properties of the problem formulation and introduce an efficient
implementation by the block coordinate descent algorithm. We demonstrate the
principle of the approach on a numerical simulation experiment, and we then
apply the algorithm to the modeling of gene regulatory networks under different
conditions, obtaining promising and biologically plausible results.
|
1203.3533
|
Source Separation and Higher-Order Causal Analysis of MEG and EEG
|
cs.LG stat.ML
|
Separation of sources and analysis of their connectivity have been important
topics in EEG/MEG analysis. To solve this problem in an automatic
manner, we propose a two-layer model, in which the sources are conditionally
uncorrelated from each other, but not independent; the dependence is caused by
the causality in their time-varying variances (envelopes). The model is
identified in two steps. We first propose a new source separation technique
which takes into account the autocorrelations (which may be time-varying) and
time-varying variances of the sources. The causality in the envelopes is then
discovered by exploiting a special kind of multivariate GARCH (generalized
autoregressive conditional heteroscedasticity) model. The resulting causal
diagram gives the effective connectivity between the separated sources; in our
experimental results on MEG data, sources with similar functions are grouped
together, with negative influences between groups, and the groups are connected
via some interesting sources.
|
1203.3534
|
Invariant Gaussian Process Latent Variable Models and Application in
Causal Discovery
|
cs.LG stat.ML
|
In nonlinear latent variable models or dynamic models, if we consider the
latent variables as confounders (common causes), the noise dependencies imply
further relations between the observed variables. Such models are then closely
related to causal discovery in the presence of nonlinear confounders, which is
a challenging problem. However, generally in such models the observation noise
is assumed to be independent across data dimensions, and consequently the noise
dependencies are ignored. In this paper we focus on the Gaussian process latent
variable model (GPLVM), from which we develop an extended model called
invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With
the Gaussian process prior put on a particular transformation of the latent
nonlinear functions, instead of the original ones, the algorithm for IGPLVM
involves almost the same computational loads as that for the original GPLVM.
Besides its potential application in causal discovery, IGPLVM has the advantage
that its estimated latent nonlinear manifold is invariant to any nonsingular
linear transformation of the data. Experimental results on both synthetic and
real-world data show its encouraging performance in nonlinear manifold learning
and causal discovery.
|
1203.3535
|
Multi-Domain Collaborative Filtering
|
cs.IR cs.AI
|
Collaborative filtering is an effective recommendation approach in which the
preference of a user on an item is predicted based on the preferences of other
users with similar interests. A big challenge in using collaborative filtering
methods is the data sparsity problem which often arises because each user
typically only rates very few items and hence the rating matrix is extremely
sparse. In this paper, we address this problem by considering multiple
collaborative filtering tasks in different domains simultaneously and
exploiting the relationships between domains. We refer to it as a multi-domain
collaborative filtering (MCF) problem. To solve the MCF problem, we propose a
probabilistic framework which uses probabilistic matrix factorization to model
the rating problem in each domain and allows the knowledge to be adaptively
transferred across different domains by automatically learning the correlation
between domains. We also introduce the link function for different domains to
correct their biases. Experiments conducted on several real-world applications
demonstrate the effectiveness of our methods when compared with some
representative methods.
|
1203.3536
|
A Convex Formulation for Learning Task Relationships in Multi-Task
Learning
|
cs.LG cs.AI stat.ML
|
Multi-task learning is a learning paradigm which seeks to improve the
generalization performance of a learning task with the help of some other
related tasks. In this paper, we propose a regularization formulation for
learning the relationships between tasks in multi-task learning. This
formulation can be viewed as a novel generalization of the regularization
framework for single-task learning. Besides modeling positive task correlation,
our method, called multi-task relationship learning (MTRL), can also describe
negative task correlation and identify outlier tasks based on the same
underlying principle. Under this regularization framework, the objective
function of MTRL is convex. For efficiency, we use an alternating method to
learn the optimal model parameters for each task as well as the relationships
between tasks. We study MTRL in the symmetric multi-task learning setting and
then generalize it to the asymmetric setting as well. We also study the
relationships between MTRL and some existing multi-task learning methods.
Experiments conducted on a toy problem as well as several benchmark data sets
demonstrate the effectiveness of MTRL.
|
1203.3537
|
Automatic Tuning of Interactive Perception Applications
|
cs.LG cs.CV stat.ML
|
Interactive applications incorporating high-data rate sensing and computer
vision are becoming possible due to novel runtime systems and the use of
parallel computation resources. To allow interactive use, such applications
require careful tuning of multiple application parameters to meet required
fidelity and latency bounds. This is a nontrivial task, often requiring expert
knowledge, which becomes intractable as resources and application load
characteristics change. This paper describes a method for automatic performance
tuning that learns application characteristics and effects of tunable
parameters online, and constructs models that are used to maximize fidelity for
a given latency constraint. The paper shows that accurate latency models can be
learned online, knowledge of application structure can be used to reduce the
complexity of the learning task, and operating points can be found that achieve
90% of the optimal fidelity by exploring the parameter space only 3% of the
time.
|
1203.3538
|
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains
|
cs.AI
|
Despite the intractability of generic optimal partially observable Markov
decision process planning, there exist important problems that have highly
structured models. Previous researchers have used this insight to construct
more efficient algorithms for factored domains, and for domains with
topological structure in the flat state dynamics model. In our work, motivated
by findings from the education community relevant to automated tutoring, we
consider problems that exhibit a form of topological structure in the factored
dynamics model. Our Reachable Anytime Planner for Imprecisely-sensed Domains
(RAPID) leverages this structure to efficiently compute a good initial envelope
of reachable states under the optimal MDP policy in time linear in the number
of state variables. RAPID performs partially-observable planning over the
limited envelope of states, and slowly expands the state space considered as
time allows. RAPID performs well on a large tutoring-inspired problem
simulation with 122 state variables, corresponding to a flat state space of
over 10^30 states.
|
1203.3584
|
An Accurate Arabic Root-Based Lemmatizer for Information Retrieval
Purposes
|
cs.CL
|
Despite its robust syntax, semantic cohesion, and lower ambiguity,
lemma-level analysis and generation has not yet been a focus of the Arabic NLP
literature. In the current research, we propose the first non-statistical
accurate Arabic lemmatizer algorithm that is suitable for information retrieval
(IR) systems. The proposed lemmatizer makes use of different Arabic language
knowledge resources to generate the accurate lemma form and its relevant
features that support IR purposes. As a POS tagger, the proposed algorithm
achieves a maximum accuracy of 94.8% in our experiments. For first-seen
documents, an accuracy of 89.15% is achieved, compared to 76.7% for the
up-to-date accurate Stanford Arabic model on the same dataset.
|
1203.3586
|
Automated Text Summarization Based on Lexical Chains and Graphs Using the
WordNet and Wikipedia Knowledge Bases
|
cs.IR cs.CL
|
The technology of automatic document summarization is maturing and may
provide a solution to the information overload problem. Nowadays, document
summarization plays an important role in information retrieval. With a large
volume of documents, presenting the user with a summary of each document
greatly facilitates the task of finding the desired documents. Document
summarization is a process of automatically creating a compressed version of a
given document that provides useful information to users, and multi-document
summarization is to produce a summary delivering the majority of information
content from a set of documents about an explicit or implicit main topic. The
lexical cohesion structure of the text can be exploited to determine the
importance of a sentence or phrase. Lexical chains are useful tools for
analyzing the lexical cohesion structure in a text. In this paper we consider
the effect of using lexical cohesion features in summarization, and present an
algorithm based on a knowledge base. Our algorithm first finds the correct
sense of each word, then constructs the lexical chains, removes lexical chains
that score lower than the others, roughly detects topics from the lexical
chains, segments the text with respect to the topics, and selects the most
important sentences. Experimental results on the open benchmark datasets DUC01
and DUC02 show that our proposed approach can improve the performance compared
to state-of-the-art summarization approaches.
|
1203.3589
|
Building MultiView Analyst Profile From Multidimensional Query Logs:
From Consensual to Conflicting Preferences
|
cs.DB cs.IR
|
In order to provide suitable results to the analyst needs, user preferences
summarization is widely used in several domains. In this paper, we introduce a
new approach for user profile construction from OLAP query logs. The key idea
is to learn the user's preferences by drawing the evidence from OLAP logs. In
fact, the analyst preferences are clustered into three main pools : (i)
consensual or non conflicting preferences referring to same preferences for all
analysts; (ii) semi-conflicting preferences corresponding to similar
preferences for some analysts; (iii) conflicting preferences related to
disjoint preferences for all analysts. To build generic and global model
accurately describing the analyst, we enrich the obtained characteristics
through including several views, namely the personal view, the professional
view and the behavioral view. After that, the multiview profile extracted from
multidimensional database can be annotated.
|
1203.3621
|
Robustness of correlated networks against propagating attacks
|
physics.soc-ph cs.SI
|
We investigate robustness of correlated networks against propagating attacks
modeled by a susceptible-infected-removed model. By Monte-Carlo simulations, we
numerically determine the first critical infection rate, above which a global
outbreak of disease occurs, and the second critical infection rate, above which
disease disintegrates the network. Our result shows that correlated networks
are robust compared to the uncorrelated ones, regardless of whether they are
assortative or disassortative, when a fraction of infected nodes in an initial
state is not too large. For large initial fraction, disassortative network
becomes fragile while assortative network holds robustness. This behavior is
related to the layered network structure inevitably generated by a rewiring
procedure we adopt to realize correlated networks.
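  A minimal Monte-Carlo sketch of the propagating attack, assuming a
discrete-time SIR process in which each infected node infects each susceptible
neighbour with probability `beta` per step and is then removed (the paper's
simulation details may differ); `adj` is an adjacency list:

```python
import numpy as np

def sir_outbreak(adj, beta, seeds, rng):
    """Run one SIR epidemic; return the final fraction of removed nodes."""
    state = np.zeros(len(adj), dtype=int)    # 0 = S, 1 = I, 2 = R
    state[seeds] = 1
    while (state == 1).any():
        newly = []
        for i in np.flatnonzero(state == 1):
            for j in adj[i]:
                if state[j] == 0 and rng.random() < beta:
                    newly.append(j)
            state[i] = 2                     # infected nodes recover (removed)
        state[newly] = 1
    return (state == 2).mean()

adj = [[1], [0, 2], [1]]                     # a toy path of three nodes
frac = sir_outbreak(adj, beta=0.5, seeds=[0], rng=np.random.default_rng(0))
```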
|
1203.3659
|
Cognitive Wyner Networks with Clustered Decoding
|
cs.IT math.IT
|
We study an interference network where equally-numbered transmitters and
receivers lie on two parallel lines, each transmitter opposite its intended
receiver. We consider two short-range interference models: the "asymmetric
network," where the signal sent by each transmitter is interfered only by the
signal sent by its left neighbor (if present), and a "symmetric network," where
it is interfered by both its left and its right neighbors. Each transmitter is
cognizant of its own message, the messages of the $t_\ell$ transmitters to its
left, and the messages of the $t_r$ transmitters to its right. Each receiver
decodes its message based on the signals received at its own antenna, at the
$r_\ell$ receive antennas to its left, and the $r_r$ receive antennas to its
right. For such networks we provide upper and lower bounds on the multiplexing
gain, i.e., on the high-SNR asymptotic logarithmic growth of the sum-rate
capacity. In some cases our bounds meet, e.g., for the asymmetric network. Our
results exhibit an equivalence between the transmitter side-information
parameters $t_\ell, t_r$ and the receiver side-information parameters $r_\ell,
r_r$ in the sense that increasing/decreasing $t_\ell$ or $t_r$ by a positive
integer $\delta$ has the same effect on the multiplexing gain as
increasing/decreasing $r_\ell$ or $r_r$ by $\delta$. Moreover---even in
asymmetric networks---there is an equivalence between the left side-information
parameters $t_\ell, r_\ell$ and the right side-information parameters $t_r,
r_r$.
|
1203.3725
|
Bayesian Parameter Estimation for Latent Markov Random Fields and Social
Networks
|
stat.CO cond-mat.stat-mech cs.AI cs.SI physics.data-an
|
Undirected graphical models are widely used in statistics, physics and
machine vision. However Bayesian parameter estimation for undirected models is
extremely challenging, since evaluation of the posterior typically involves the
calculation of an intractable normalising constant. This problem has received
much attention, but very little of this has focussed on the important practical
case where the data consists of noisy or incomplete observations of the
underlying hidden structure. This paper specifically addresses this problem,
comparing two alternative methodologies. In the first of these approaches
particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently
explore the parameter space, combined with the exchange algorithm (Murray et
al., 2006) for avoiding the calculation of the intractable normalising constant
(a proof showing that this combination targets the correct distribution is
found in a supplementary appendix online). This approach is compared with
approximate Bayesian computation (Pritchard et al., 1999). Applications to
estimating the parameters of Ising models and exponential random graphs from
noisy data are presented. Each algorithm used in the paper targets an
approximation to the true posterior due to the use of MCMC to simulate from the
latent graphical model, in lieu of being able to do this exactly in general.
The supplementary appendix also describes the nature of the resulting
approximation.
|
1203.3764
|
The Abzooba Smart Health Informatics Platform (SHIP) TM - From Patient
Experiences to Big Data to Insights
|
cs.IR cs.AI
|
This paper describes a technology to connect patients to information in the
experiences of other patients by using the power of structured big data. The
approach, implemented in the Abzooba Smart Health Informatics Platform
(SHIP), is to distill concepts of facts and expressions from conversations and
discussions in health social media forums, and use those distilled concepts in
connecting patients to experiences and insights that are highly relevant to
them in particular. We envision that our work in progress will provide new and
effective tools to exploit the richness of content in health social media for
outcomes research.
|
1203.3783
|
Learning Feature Hierarchies with Centered Deep Boltzmann Machines
|
stat.ML cs.AI cs.LG
|
Deep Boltzmann machines are in principle powerful models for extracting the
hierarchical structure of data. Unfortunately, attempts to train layers jointly
(without greedy layer-wise pretraining) have been largely unsuccessful. We
propose a modification of the learning algorithm that initially recenters the
output of the activation functions to zero. This modification leads to a better
conditioned Hessian and thus makes learning easier. We test the algorithm on
real data and demonstrate that our suggestion, the centered deep Boltzmann
machine, learns a hierarchy of increasingly abstract representations and a
better generative model of data.
|
1203.3815
|
Theory and Applications of Compressed Sensing
|
cs.IT math.IT math.NA
|
Compressed sensing is a novel research area, which was introduced in 2006,
and since then has already become a key concept in various areas of applied
mathematics, computer science, and electrical engineering. It surprisingly
predicts that high-dimensional signals, which allow a sparse representation by
a suitable basis or, more generally, a frame, can be recovered from what was
previously considered highly incomplete linear measurements by using efficient
algorithms. This article shall serve as an introduction to and a survey about
compressed sensing.
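  A minimal sketch of one such efficient algorithm, iterative soft-thresholding
(ISTA) for the l1-regularized least-squares formulation of the recovery
problem; the survey covers this family of methods and many alternatives:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Solve min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 for underdetermined A."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)  # random Gaussian measurements
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0   # 5-sparse signal
x_hat = ista(A, A @ x_true, lam=0.01)
```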
|
1203.3832
|
Data Mining: A Prediction for Performance Improvement of Engineering
Students using Classification
|
cs.LG
|
Nowadays the amount of data stored in educational databases is increasing
rapidly. These databases contain hidden information that can be used to improve
students' performance. Educational data mining is used to study the data
available in the educational field and bring out the hidden knowledge from it.
Classification methods such as decision trees and Bayesian networks can be
applied to educational data to predict students' performance in examinations.
This prediction helps to identify weak students and help them score better
marks. The C4.5, ID3, and CART decision tree algorithms are applied to
engineering students' data to predict their performance in the final exam. The
outcome of the decision tree predicts the number of students who are likely to
pass, fail, or be promoted to the next year. The results provide steps to
improve the performance of the students who were predicted to fail or be
promoted. After the declaration of the final examination results, the marks
obtained by the students are fed into the system and the results are analyzed
for the next session. A comparative analysis of the results shows that the
prediction helped the weaker students to improve, leading to better results.
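  A minimal sketch of the classification step with scikit-learn's CART
implementation (the paper also applies C4.5 and ID3); the synthetic marks and
pass/fail/promoted labels below are hypothetical stand-ins for the students'
records:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(300, 5))    # e.g. marks in five subjects
avg = X.mean(axis=1)
y = np.where(avg > 50, "pass", np.where(avg > 35, "promoted", "fail"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(criterion="gini", max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```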
|
1203.3838
|
A Study on the Behavior of a Neural Network for Grouping the Data
|
cs.NE cs.RO
|
One of the frequently stated advantages of neural networks is that they can
work effectively with non-normally distributed data, but optimal results are
possible with normalized data. In this paper, we present how normality of the
input affects the behaviour of a K-means fast learning artificial neural
network (KFLANN) for grouping the data. Basically, the grouping of
high-dimensional input data is controlled by additional network input
parameters, namely vigilance and tolerance. Neural networks learn faster and
give better performance if the input variables are pre-processed before being
fed to the input units of the network. A common way of dealing with data that
is not normally distributed is to perform some mathematical transformation on
the data that shifts it towards a normal distribution. In a neural network,
data preprocessing transforms the data into a format that will be more easily
and effectively processed for the purpose of the user. Among various methods,
normalization is one which organizes data for more efficient access.
Experimental results on several artificial and synthetic data sets indicate
that the groups formed in the data vary between non-normally distributed and
normalized data, and also depend on the normalization method used.
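  A minimal sketch of two common normalization choices whose effect on the
resulting groups is the kind of question studied here; both are shown on
deliberately non-normal (log-normal) input:

```python
import numpy as np

def z_score(X):
    """Shift each column toward zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def min_max(X):
    """Rescale each column to the [0, 1] range."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

X = np.random.lognormal(size=(100, 3))    # deliberately non-normal input
print(z_score(X).std(axis=0), min_max(X).max(axis=0))
```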
|
1203.3847
|
Handwritten digit Recognition using Support Vector Machine
|
cs.NE
|
Handwritten numeral recognition plays a vital role in postal automation
services, especially in countries like India where multiple languages and
scripts are used. Discrete Hidden Markov Models (HMMs) and hybrids of Neural
Networks (NNs) and HMMs are popular methods in handwritten word recognition
systems. The hybrid system gives better recognition results due to the better
discrimination capability of the NN. A major problem in handwriting recognition
is the huge variability and distortion of patterns. Elastic models based on
local observations and dynamic programming, such as HMMs, are not efficient at
absorbing this variability: their view is local, they cannot cope with length
variability, and they are very sensitive to distortions. The SVM is therefore
used to estimate global correlations and classify the pattern. The Support
Vector Machine (SVM) is an alternative to the NN and gives better recognition
results in handwritten recognition. The aim of this paper is to develop an
approach which improves the efficiency of handwritten recognition using an
artificial neural network.
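  A minimal sketch of an SVM digit classifier in scikit-learn, using its
bundled 8x8 digit images as a small stand-in for the postal data discussed in
the paper:

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = svm.SVC(kernel="rbf", gamma=0.001, C=10.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```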
|
1203.3864
|
Matrix ALPS: Accelerated Low Rank and Sparse Matrix Reconstruction
|
cs.IT math.IT
|
We propose Matrix ALPS for recovering a sparse plus low-rank decomposition of
a matrix given its corrupted and incomplete linear measurements. Our approach
is a first-order projected gradient method over non-convex sets, and it
exploits a well-known memory-based acceleration technique. We theoretically
characterize the convergence properties of Matrix ALPS using the stable
embedding properties of the linear measurement operator. We then numerically
illustrate that our algorithm outperforms the existing convex as well as
non-convex state-of-the-art algorithms in computational efficiency without
sacrificing stability.
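  A minimal sketch of the two non-convex projections that methods of this kind
alternate, shown for the fully observed case M ~ L + S (Matrix ALPS itself
handles general linear measurements and adds the memory-based acceleration):

```python
import numpy as np

def rank_project(A, r):
    """Best rank-r approximation via a truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def sparse_project(A, k):
    """Keep only the k largest-magnitude entries of A."""
    B = np.zeros_like(A)
    idx = np.unravel_index(np.argsort(np.abs(A), axis=None)[-k:], A.shape)
    B[idx] = A[idx]
    return B

def low_rank_plus_sparse(M, r, k, n_iter=50):
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = rank_project(M - S, r)    # fit the low-rank part to the residual
        S = sparse_project(M - L, k)  # fit the sparse part to the residual
    return L, S
```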
|