| id | title | categories | abstract |
|---|---|---|---|
1207.0854
|
CROSS-MBCR: Exact Minimum Bandwidth Coordinated Regenerating Codes
|
cs.IT math.IT
|
We study the exact and optimal repair of multiple failures in codes for
distributed storage. More specifically, we provide an explicit construction of
exact minimum bandwidth coordinated regenerating codes (MBCR) for parameters
n = d + t, k, with d >= k and t >= 1. Our construction differs from existing constructions by allowing both
t>1 (i.e., repair of multiple failures) and d>k (i.e., contacting more than k
devices during repair).
|
1207.0865
|
Asymptotic normality of maximum likelihood and its variational
approximation for stochastic blockmodels
|
math.ST cs.SI stat.TH
|
Variational methods for parameter estimation are an active research area,
potentially offering computationally tractable heuristics with theoretical
performance bounds. We build on recent work that applies such methods to
network data, and establish asymptotic normality rates for parameter estimates
of stochastic blockmodel data, by either maximum likelihood or variational
estimation. The result also applies to various sub-models of the stochastic
blockmodel found in the literature.
|
1207.0869
|
Theory and Techniques for Synthesizing a Family of Graph Algorithms
|
cs.SE cs.AI cs.DS cs.PL
|
Although Breadth-First Search (BFS) has several advantages over Depth-First
Search (DFS), its prohibitive space requirements have meant that algorithm
designers often pass it over in favor of DFS. To address this shortcoming, we
introduce a theory of Efficient BFS (EBFS) along with a simple recursive
program schema for carrying out the search. The theory is based on dominance
relations, a long-standing technique from the field of search algorithms. We
show how the theory can be used to systematically derive solutions to two graph
algorithms, namely the Single Source Shortest Path problem and the Minimum
Spanning Tree problem. The solutions are found by making small systematic
changes to the derivation, revealing the connections between the two problems
which are often obscured in textbook presentations of them.
|
1207.0872
|
Differential Privacy for Relational Algebra: Improving the Sensitivity
Bounds via Constraint Systems
|
cs.CR cs.DB
|
Differential privacy is a modern approach in privacy-preserving data analysis
to control the amount of information that can be inferred about an individual
by querying a database. The most common techniques are based on the
introduction of probabilistic noise, often defined as Laplacian noise
parametrized by the sensitivity of the query. In order to maximize the utility of the query, it
is crucial to estimate the sensitivity as precisely as possible.
In this paper we consider relational algebra, the classical language for
queries in relational databases, and we propose a method for computing a bound
on the sensitivity of queries in an intuitive and compositional way. We use
constraint-based techniques to accumulate the information on the possible
values for attributes provided by the various components of the query, thus
making it possible to compute tight bounds on the sensitivity.
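The role the sensitivity bound plays can be seen in a minimal sketch of the Laplace mechanism (the function names and the counting-query example are illustrative assumptions, not taken from the paper):

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_release(true_answer, sensitivity, epsilon):
    # Classic Laplace mechanism: the noise scale is sensitivity / epsilon,
    # so a tighter sensitivity bound directly yields less noise and higher
    # utility for the same privacy budget epsilon.
    return true_answer + laplace_noise(sensitivity / epsilon)

# A counting query has sensitivity 1.
noisy_count = private_release(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

A tighter bound on the sensitivity shrinks the noise scale proportionally, which is exactly why the constraint-based estimation matters for utility.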
|
1207.0873
|
Hybrid performance modelling of opportunistic networks
|
cs.SY cs.LO cs.NI cs.PF
|
We demonstrate the modelling of opportunistic networks using the process
algebra stochastic HYPE. Network traffic is modelled as continuous flows,
contact between nodes in the network is modelled stochastically, and
instantaneous decisions are modelled as discrete events. Our model describes a
network of stationary video sensors with a mobile ferry which collects data
from the sensors and delivers it to the base station. We consider different
mobility models and different buffer sizes for the ferries. This case study
illustrates the flexibility and expressive power of stochastic HYPE. We also
discuss the software that enables us to describe stochastic HYPE models and
simulate them.
|
1207.0877
|
Exchanging Third-Party Information with Minimum Transmission Cost
|
cs.IT cs.NI math.IT
|
In this paper, we consider the problem of minimizing the total transmission
cost for exchanging channel state information. We propose a network-coded
cooperative data exchange scheme such that the total transmission cost is
minimized while each client can decode all the channel information held by all
other clients. We first derive a necessary and sufficient
condition for a feasible transmission. Based on the derived condition, there
exists a feasible code design to guarantee that each client can decode the
complete information. We further formulate the problem of minimizing the total
transmission cost as an integer linear program. Finally, we discuss the
probability that each client can decode the complete information with
distributed random linear network coding.
|
1207.0879
|
Exact Cooperative Regenerating Codes with Minimum-Repair-Bandwidth for
Distributed Storage
|
cs.IT math.IT
|
We give an explicit construction of exact cooperative regenerating codes at
the MBCR (minimum bandwidth cooperative regeneration) point. Prior to this
paper, the only known explicit MBCR code was given for parameters $n=d+r$ and $d=k$,
while our construction applies to all possible values of $n,k,d,r$. The code
has a brief expression in the polynomial form and the data reconstruction is
accomplished by bivariate polynomial interpolation. It is a scalar code and
operates over a finite field of size $q\geq n$. In addition, we establish several
subspace properties for linear exact MBCR codes. Based on these properties we
prove that linear exact MBCR codes cannot achieve repair-by-transfer.
|
1207.0893
|
Majority Dynamics and Aggregation of Information in Social Networks
|
math.ST cs.SI physics.soc-ph stat.TH
|
Consider n individuals who, by popular vote, choose among q >= 2
alternatives, one of which is "better" than the others. Assume that each
individual votes independently at random, and that the probability of voting
for the better alternative is larger than the probability of voting for any
other. It follows from the law of large numbers that a plurality vote among the
n individuals would result in the correct outcome, with probability approaching
one exponentially quickly as n tends to infinity. Our interest in this paper is
in a variant of the process above where, after forming their initial opinions,
the voters update their decisions based on some interaction with their
neighbors in a social network. Our main example is "majority dynamics", in
which each voter adopts the most popular opinion among its friends. The
interaction repeats for some number of rounds and is then followed by a
population-wide plurality vote.
The question we tackle is that of "efficient aggregation of information": in
which cases is the better alternative chosen with probability approaching one
as n tends to infinity? Conversely, for which sequences of growing graphs does
aggregation fail, so that the wrong alternative gets chosen with probability
bounded away from zero? We construct a family of examples in which interaction
prevents efficient aggregation of information, and give a condition on the
social network which ensures that aggregation occurs. For the case of majority
dynamics we also investigate the question of unanimity in the limit. In
particular, if the voters' social network is an expander graph, we show that if
the initial population is sufficiently biased towards a particular alternative
then that alternative will eventually become the unanimous preference of the
entire population.
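One round of the "majority dynamics" update can be sketched on an adjacency-list graph (the keep-current-opinion tie-breaking rule is an assumption made for illustration; the paper may use a different convention):

```python
from collections import Counter

def majority_round(graph, opinions):
    # Each voter adopts the most popular opinion among its neighbours.
    # Ties are broken by keeping the current opinion (an assumption).
    new_opinions = {}
    for node, neighbours in graph.items():
        counts = Counter(opinions[v] for v in neighbours)
        if not counts:
            new_opinions[node] = opinions[node]
            continue
        top, top_count = counts.most_common(1)[0]
        if counts[opinions[node]] == top_count:
            new_opinions[node] = opinions[node]  # keep opinion on a tie
        else:
            new_opinions[node] = top
    return new_opinions

graph = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
opinions = {1: "a", 2: "a", 3: "b"}
new = majority_round(graph, opinions)  # node 3 flips to the majority "a"
```

Iterating this map for several rounds and then taking a plurality vote reproduces the process studied in the abstract.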
|
1207.0913
|
Estimating Node Influenceability in Social Networks
|
cs.SI cs.DB physics.soc-ph
|
Influence analysis is a fundamental problem in social network analysis and
mining. Important applications of influence analysis in social networks
include influence maximization for viral marketing, finding the most
influential nodes, online advertising, etc. For many of these applications, it
is crucial to evaluate the influenceability of a node. In this paper, we study
the problem of evaluating the influenceability of nodes in social networks based on
the widely used influence spread model, namely, the independent cascade model.
Since this problem is #P-complete, most existing work is based on Naive
Monte-Carlo (\nmc) sampling. However, the \nmc estimator typically results in a
large variance, which significantly reduces its effectiveness. To overcome this
problem, we propose two families of new estimators based on the idea of
stratified sampling. We first present two basic stratified sampling (\bss)
estimators, namely \bssi estimator and \bssii estimator, which partition the
entire population into $2^r$ and $r+1$ strata by choosing $r$ edges
respectively. Second, to further reduce the variance, we find that both \bssi
and \bssii estimators can be recursively performed on each stratum, thus we
propose two recursive stratified sampling (\rss) estimators, namely \rssi
estimator and \rssii estimator. Theoretically, all of our estimators are shown
to be unbiased and their variances are significantly smaller than the variance
of the \nmc estimator. Finally, our extensive experimental results on both
synthetic and real datasets demonstrate the efficiency and accuracy of our new
estimators.
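The Naive Monte-Carlo (NMC) baseline that the stratified estimators improve on can be sketched as repeated live-edge simulations of the independent cascade model (the edge-list format and a uniform propagation probability p are illustrative assumptions):

```python
import random
from collections import deque

def cascade_size(edges, seed, p):
    # One independent-cascade realization: each directed edge (u, v)
    # transmits independently with probability p; return the number of
    # nodes reachable from the seed through live edges.
    live = {}
    for u, v in edges:
        live.setdefault(u, [])
        if random.random() < p:
            live[u].append(v)
    active, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in live.get(u, []):
            if v not in active:
                active.add(v)
                queue.append(v)
    return len(active)

def nmc_influence(edges, seed, p, runs=10000):
    # Naive Monte-Carlo estimator: average cascade size over many runs.
    # Its large variance is what the stratified estimators reduce.
    return sum(cascade_size(edges, seed, p) for _ in range(runs)) / runs
```

The stratified estimators in the paper condition on the states of a few chosen edges before sampling, which cannot be reproduced faithfully here; the sketch only fixes the baseline being improved on.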
|
1207.0922
|
MDM: A Mode Diagram Modeling Framework for Periodic Control Systems
|
cs.SY cs.SE
|
Periodic control systems used in spacecrafts and automotives are usually
period-driven and can be decomposed into different modes with each mode
representing a system state observed from outside. Such systems may also
involve intensive computing in their modes. Despite the fact that such control
systems are widely used in the above-mentioned safety-critical embedded
domains, there is a lack of domain-specific formal modelling languages for such
systems in the relevant industry. To address this problem, we propose a formal
visual modeling framework called MDM as a concise and precise way to specify
and analyze such systems. To capture the temporal properties of periodic
control systems, we provide, along with MDM, a property specification language
based on interval logic for the description of concrete temporal requirements
the engineers are concerned with. The statistical model checking technique can
then be used to verify the MDM models against desired properties. To
demonstrate the viability of our approach, we have applied our modelling
framework to some real life case studies from industry and helped detect two
design defects for some spacecraft control systems.
|
1207.0931
|
Effects of Weak Ties on Epidemic Predictability in Community Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Weak ties play a significant role in the structures and the dynamics of
community networks. Based on the susceptible-infected model in contact process,
we study numerically how weak ties influence the predictability of epidemic
dynamics. We first investigate the effects of different kinds of weak ties on
the variabilities of both the arrival time and the prevalence of disease, and
find that the bridgeness with small degree can enhance the predictability of
epidemic spreading. Once weak ties are settled, compared with the variability
of arrival time, the variability of prevalence displays a diametrically opposed
changing trend with both the distance of the initial seed to the bridgeness and
the degree of the initial seed. More specifically, a larger distance and a
larger degree of the initial seed induce better predictability of arrival time
but worse predictability of prevalence. Moreover, we discuss
the effects of weak tie number on the epidemic variability. As community
strength becomes very strong, which is caused by the decrease of weak tie
number, the epidemic variability will change dramatically. Compared with the
case of hub seed and random seed, the bridgeness seed can result in the worst
predictability of arrival time and the best predictability of prevalence. These
results show that the variability of arrival time always marks a complete
reversal trend of that of prevalence, which implies it is impossible to predict
epidemic spreading in the early stage of outbreaks accurately.
|
1207.0938
|
Symbol Error Rate of Space-Time Network Coding in Nakagami-m Fading
|
cs.IT math.IT
|
In this paper, we analyze the symbol error rate (SER) of space-time network
coding (STNC) in a distributed cooperative network over independent but not
necessarily identically distributed (i.n.i.d.) Nakagami-$m$ fading channels. In
this network, multiple sources communicate with a single destination with the
assistance of multiple decode-and-forward (DF) relays. We first derive new
exact closed-form expressions for the SER with $M$-ary phase shift-keying
modulation ($M$-PSK) and $M$-ary quadrature amplitude modulation ($M$-QAM). We
then derive new compact expressions for the asymptotic SER to offer valuable
insights into the network behavior in the high signal-to-noise ratio (SNR)
regime. Importantly, we demonstrate that STNC guarantees full diversity order,
which is determined by the Nakagami-$m$ fading parameters of all the channels
but independent of the number of sources. Based on the new expressions, we
examine the impact of the number of relays, relay location, Nakagami-$m$ fading
parameters, power allocation, and nonorthogonal codes on the SER.
|
1207.1016
|
Map-aided Fusion Using Evidential Grids for Mobile Perception in Urban
Environment
|
cs.RO cs.AI
|
Evidential grids have been recently used for mobile object perception. The
novelty of this article is to propose a perception scheme using prior map
knowledge. A geographic map is considered an additional source of information
fused with a grid representing sensor data. Yager's rule is adapted to exploit
the Dempster-Shafer conflict information at large. In order to distinguish
stationary and mobile objects, a counter is introduced and used as a factor for
mass function specialisation. Contextual discounting is used, since we assume
that different pieces of information become obsolete at different rates. Tests
on real-world data are also presented.
|
1207.1019
|
PAC-Bayesian Majority Vote for Late Classifier Fusion
|
stat.ML cs.CV cs.LG cs.MM
|
A lot of attention has been devoted to multimedia indexing over the past few
years. In the literature, we often consider two kinds of fusion schemes: The
early fusion and the late fusion. In this paper we focus on late classifier
fusion, where one combines the scores of each modality at the decision level.
To tackle this problem, we investigate a recent and elegant well-founded
quadratic program named MinCq coming from the Machine Learning PAC-Bayes
theory. MinCq looks for the weighted combination, over a set of real-valued
functions seen as voters, leading to the lowest misclassification rate, while
making use of the voters' diversity. We provide evidence that this method is
naturally adapted to the late fusion procedure. We propose an extension of MinCq by
adding an order-preserving pairwise loss for ranking, helping to improve the Mean
Averaged Precision measure. We confirm the good behavior of the MinCq-based
fusion approaches with experiments on a real image benchmark.
|
1207.1061
|
Global Exponential Sampled-Data Observers for Nonlinear Systems with
Delayed Measurements
|
math.OC cs.SY
|
This paper presents new results concerning the observer design for wide
classes of nonlinear systems with both sampled and delayed measurements. By
using a small gain approach we provide sufficient conditions, which involve
both the delay and the sampling period, ensuring exponential convergence of the
observer system error. The proposed observer is robust with respect to
measurement errors and perturbations of the sampling schedule. Moreover, new
results on the robust global exponential state predictor design problem are
provided, for wide classes of nonlinear systems.
|
1207.1067
|
Bounding differences in Jager Pairs
|
math.NT cs.IT math.DS math.IT
|
Symmetrical subdivisions in the space of Jager pairs for continued-fraction-like
expansions provide us with bounds on their difference. The results
also apply to the classical regular and backwards continued fraction
expansions, which are realized as special cases.
|
1207.1114
|
A Fast Projected Fixed-Point Algorithm for Large Graph Matching
|
cs.CV
|
We propose a fast approximate algorithm for large graph matching. A new
projected fixed-point method is defined and a new doubly stochastic projection
is adopted to derive the algorithm. Previous graph matching algorithms suffer
from high computational complexity and therefore do not have good scalability
with respect to graph size. For matching two weighted graphs of $n$ nodes, our
algorithm has time complexity only $O(n^3)$ per iteration and space complexity
$O(n^2)$. In addition to its scalability, our algorithm is easy to implement,
robust, and able to match undirected weighted attributed graphs of different
sizes. While the convergence rate of previous iterative graph matching
algorithms is unknown, our algorithm is theoretically guaranteed to converge at
a linear rate. Extensive experiments on large synthetic and real graphs (more
than 1,000 nodes) were conducted to evaluate the performance of various
algorithms. Results show that in most cases our proposed algorithm achieves
better performance than previous state-of-the-art algorithms in terms of both
speed and accuracy in large graph matching. In particular, with high accuracy,
our algorithm takes only a few seconds (in a PC) to match two graphs of 1,000
nodes.
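The doubly stochastic projection step can be approximated with standard Sinkhorn normalization, sketched below; this is a generic construction given for intuition, not necessarily the exact projection the paper's algorithm adopts:

```python
def sinkhorn(matrix, iters=100):
    # Alternately normalize rows and columns of a positive matrix; the
    # iterates converge to a doubly stochastic matrix (Sinkhorn's theorem).
    m = [row[:] for row in matrix]
    n = len(m)
    for _ in range(iters):
        for i in range(n):                       # row normalization
            s = sum(m[i])
            m[i] = [x / s for x in m[i]]
        for j in range(n):                       # column normalization
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m

ds = sinkhorn([[1.0, 2.0], [3.0, 4.0]])  # rows and columns each sum to ~1
```

Each sweep costs O(n^2), consistent with the O(n^2) space and per-iteration budgets quoted in the abstract.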
|
1207.1115
|
Inferring land use from mobile phone activity
|
stat.ML cs.LG physics.data-an physics.soc-ph
|
Understanding the spatiotemporal distribution of people within a city is
crucial to many planning applications. Obtaining the data needed to create such
knowledge currently involves costly survey methods. At the same time,
ubiquitous mobile sensors, from personal GPS devices to mobile phones, are
collecting massive amounts of data on urban systems. The locations,
communications, and activities of millions of people are recorded and stored by
new information technologies. This work utilizes novel dynamic data, generated
by mobile phone users, to measure spatiotemporal changes in population. In the
process, we identify the relationship between land use and dynamic population
over the course of a typical week. A machine learning classification algorithm
is used to identify clusters of locations with similar zoned uses and mobile
phone activity patterns. It is shown that the mobile phone data is capable of
delivering useful information on actual land use that supplements zoning
regulations.
|
1207.1119
|
On unified view of nullspace-type conditions for recoveries associated
with general sparsity structures
|
math.OC cs.IT math.IT stat.ML
|
We discuss a general notion of "sparsity structure" and associated recoveries
of a sparse signal from its linear image of reduced dimension possibly
corrupted with noise. Our approach allows for unified treatment of (a) the
"usual sparsity" and "usual $\ell_1$ recovery," (b) block-sparsity with
possibly overlapping blocks and associated block-$\ell_1$ recovery, and (c)
low-rank-oriented recovery by nuclear norm minimization. The proposed recovery
routines are natural extensions of the usual $\ell_1$ minimization used in
Compressed Sensing. Specifically we present nullspace-type sufficient
conditions for the recovery to be precise on sparse signals in the noiseless
case. Then we derive error bounds for imperfect (nearly sparse signal, presence
of observation noise, etc.) recovery under these conditions. In all of these
cases, we present efficiently verifiable sufficient conditions for the validity
of the associated nullspace properties.
|
1207.1134
|
Reconstruction of Signals from Magnitudes of Redundant Representations
|
math.FA cs.IT math.IT stat.AP
|
This paper is concerned with the question of reconstructing a vector in a
finite-dimensional real or complex Hilbert space when only the magnitudes of
the coefficients of the vector under a redundant linear map are known. We
present new invertibility results as well as an iterative algorithm that finds the
least-square solution and is robust in the presence of noise. We analyze its
numerical performance by comparing it to two versions of the Cramer-Rao lower
bound.
|
1207.1138
|
Parsing a sequence of qubits
|
quant-ph cs.IT math.CO math.IT
|
We develop a theoretical framework for frame synchronization, also known as
block synchronization, in the quantum domain which makes it possible to attach
classical and quantum metadata to quantum information over a noisy channel even
when the information source and sink are frame-wise asynchronous. This
eliminates the need of frame synchronization at the hardware level and allows
for parsing qubit sequences during quantum information processing. Our
framework exploits binary constant-weight codes that are self-synchronizing.
Possible applications may include asynchronous quantum communication such as a
self-synchronizing quantum network where one can hop into the channel at any
time, catch the next coming quantum information with a label indicating the
sender, and reply by routing her quantum information with control qubits for
quantum switches all without assuming prior frame synchronization between
users.
|
1207.1140
|
Restricted Isometry of Fourier Matrices and List Decodability of Random
Linear Codes
|
cs.IT math.CO math.IT math.PR
|
We prove that a random linear code over F_q, with probability arbitrarily
close to 1, is list decodable at radius (1-1/q-\epsilon) with list size
L=O(1/\epsilon^2) and rate R=\Omega_q(\epsilon^2/(log^3(1/\epsilon))). Up to
the polylogarithmic factor in (1/\epsilon) and constant factors depending on q,
this matches the lower bound L=\Omega_q(1/\epsilon^2) for the list size and
upper bound R=O_q(\epsilon^2) for the rate. Previously only existence (and not
abundance) of such codes was known for the special case q=2 (Guruswami,
H{\aa}stad, Sudan and Zuckerman, 2002).
In order to obtain our result, we employ a relaxed version of the well known
Johnson bound on list decoding that translates the average Hamming distance
between codewords to list decoding guarantees. We furthermore prove that the
desired average-distance guarantees hold for a code provided that a natural
complex matrix encoding the codewords satisfies the Restricted Isometry
Property with respect to the Euclidean norm (RIP-2). For the case of random
binary linear codes, this matrix coincides with a random submatrix of the
Hadamard-Walsh transform matrix that is well studied in the compressed sensing
literature.
Finally, we improve the analysis of Rudelson and Vershynin (2008) on the
number of random frequency samples required for exact reconstruction of
k-sparse signals of length N. Specifically, we improve the number of samples
from O(k log(N) log^2(k) (log k + loglog N)) to O(k log(N) log^3(k)). The proof
involves bounding the expected supremum of a related Gaussian process by using
an improved analysis of the metric defined by the process. This improvement is
crucial for our application in list decoding.
|
1207.1157
|
A New Efficient Asymmetric Cryptosystem Based on the Square Root Problem
|
cs.IT cs.CR math.IT
|
The square root modulo problem is a known primitive in designing an
asymmetric cryptosystem. It was first attempted by Rabin. Decryption failure of
the Rabin cryptosystem caused by the 4-to-1 decryption output is overcome
efficiently in this work. The proposed scheme (known as the
AA_\beta-cryptosystem) has an encryption speed with a complexity order faster than
the Diffie-Hellman key exchange, ElGamal, RSA and ECC. It can also transmit a
larger data set securely when compared to existing asymmetric schemes. It has a
simple mathematical structure. Thus, it would have low computational
requirements and would enable communication devices with low computing power to
deploy secure communication procedures efficiently.
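The 4-to-1 decryption ambiguity of the Rabin cryptosystem that the scheme sets out to eliminate can be seen in a toy sketch (the toy primes and helper names are illustrative; real deployments use large primes):

```python
def sqrt_mod(c, p):
    # For a prime p with p % 4 == 3, a square root of a quadratic
    # residue c is c^((p+1)/4) mod p.
    return pow(c, (p + 1) // 4, p)

def crt(a, p, b, q):
    # Chinese remainder theorem for coprime moduli p and q.
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % (p * q)

def rabin_decrypt(c, p, q):
    # Rabin decryption yields four candidate plaintexts: this is the
    # 4-to-1 decryption output the abstract refers to.
    rp, rq = sqrt_mod(c, p), sqrt_mod(c, q)
    return sorted({crt(r1, p, r2, q)
                   for r1 in (rp, p - rp) for r2 in (rq, q - rq)})

p, q = 7, 11               # toy primes, both congruent to 3 mod 4
m = 5
c = m * m % (p * q)        # Rabin encryption: c = m^2 mod n
candidates = rabin_decrypt(c, p, q)  # the true m is one of four candidates
```

The receiver must disambiguate among the four roots, e.g. with redundancy in the plaintext; the proposed scheme avoids this step altogether.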
|
1207.1166
|
On the Fundamental Relationship Determining the Capacity of Static and
Mobile Wireless Networks
|
cs.NI cs.IT math.IT
|
Studying the capacity of wireless multi-hop networks is an important problem
and extensive research has been done in the area. In this letter, we sift
through various capacity-impacting parameters and show that the capacity of
both static and mobile networks is fundamentally determined by the average
number of simultaneous transmissions, the link capacity and the average number
of transmissions required to deliver a packet to its destination. We then use
this result to explain and help to better understand existing results on the
capacities of static networks, mobile networks and hybrid networks and the
multicast capacity.
|
1207.1206
|
Threshold model of cascades in temporal networks
|
physics.soc-ph cs.SI
|
Threshold models try to explain the consequences of social influence like the
spread of fads and opinions. Along with models of epidemics, they constitute a
major theoretical framework of social spreading processes. In threshold models
on static networks, an individual changes her state if a certain fraction of
her neighbors has done the same. When there are strong correlations in the
temporal aspects of contact patterns, it is useful to represent the system as a
temporal network. In such a system, not only contacts but also the time of the
contacts are represented explicitly. There is a consensus that bursty temporal
patterns slow down disease spreading. However, as we will see, this is not a
universal truth for threshold models. In this work, we propose an extension of
Watts' classic threshold model to temporal networks. We do this by assuming
that an agent is influenced by contacts which lie a certain time into the past;
i.e., the individuals are affected by contacts within a time window. In
addition to thresholds as the fraction of contacts, we also investigate the
number of contacts within the time window as a basis for influence. To
elucidate the model's behavior, we run the model on real and randomized
empirical contact datasets.
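The time-window influence rule can be sketched for a single node (the event format and the fraction-within-the-window reading are my assumptions about the model):

```python
def adopts(events, t, window, threshold):
    # events: list of (time, neighbour_has_adopted) contacts of one node.
    # Only contacts inside the window (t - window, t] count; the node
    # adopts if the adopted fraction among them reaches the threshold.
    # The count-based variant mentioned in the abstract would instead
    # compare the number of adopted contacts to an absolute threshold.
    recent = [adopted for (time, adopted) in events if t - window < time <= t]
    if not recent:
        return False
    return sum(recent) / len(recent) >= threshold

events = [(1, True), (2, False), (5, True), (6, True)]
# At t = 6 with window 3, only the contacts at times 5 and 6 are counted.
decided = adopts(events, t=6, window=3, threshold=0.5)
```

Sliding this rule over a stream of time-stamped contacts is what distinguishes the temporal extension from Watts' static model, where all neighbours count at once.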
|
1207.1230
|
Higher-Order Partial Least Squares (HOPLS): A Generalized Multi-Linear
Regression Method
|
cs.AI
|
A new generalized multilinear regression model, termed the Higher-Order
Partial Least Squares (HOPLS), is introduced with the aim of predicting a tensor
(multiway array) $\tensor{Y}$ from a tensor $\tensor{X}$ through projecting the
data onto the latent space and performing regression on the corresponding
latent variables. HOPLS differs substantially from other regression models in
that it explains the data by a sum of orthogonal Tucker tensors, while the
number of orthogonal loadings serves as a parameter to control model complexity
and prevent overfitting. The low dimensional latent space is optimized
sequentially via a deflation operation, yielding the best joint subspace
approximation for both $\tensor{X}$ and $\tensor{Y}$. Instead of decomposing
$\tensor{X}$ and $\tensor{Y}$ individually, higher order singular value
decomposition on a newly defined generalized cross-covariance tensor is
employed to optimize the orthogonal loadings. A systematic comparison on both
synthetic data and real-world decoding of 3D movement trajectories from
electrocorticogram (ECoG) signals demonstrates the advantages of HOPLS over the
existing methods in terms of better predictive ability, suitability to handle
small sample sizes, and robustness to noise.
|
1207.1238
|
On the Hardness of Entropy Minimization and Related Problems
|
cs.IT cs.CC math.IT
|
We investigate certain optimization problems for Shannon information
measures, namely, minimization of joint and conditional entropies $H(X,Y)$,
$H(X|Y)$, $H(Y|X)$, and maximization of mutual information $I(X;Y)$, over
convex regions. When restricted to the so-called transportation polytopes (sets
of distributions with fixed marginals), very simple proofs of NP-hardness are
obtained for these problems because in that case they are all equivalent, and
their connection to the well-known \textsc{Subset sum} and \textsc{Partition}
problems is revealed. The computational intractability of the more general
problems over arbitrary polytopes is then a simple consequence. Further, a
simple class of polytopes is shown over which the above problems are not
equivalent and their complexity differs sharply, namely, minimization of
$H(X,Y)$ and $H(Y|X)$ is trivial, while minimization of $H(X|Y)$ and
maximization of $I(X;Y)$ are strongly NP-hard problems. Finally, two new
(pseudo)metrics on the space of discrete probability distributions are
introduced, based on the so-called variation of information quantity, and
NP-hardness of their computation is shown.
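For reference, the variation of information quantity mentioned above can be written in the paper's entropy notation as:

```latex
\mathrm{VI}(X;Y) \;=\; H(X|Y) + H(Y|X) \;=\; H(X,Y) - I(X;Y)
```

This is the standard definition; the identity follows from $H(X,Y) = H(X|Y) + H(Y|X) + I(X;Y)$.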
|
1207.1253
|
Interpolating between Random Walks and Shortest Paths: a Path Functional
Approach
|
cs.SI physics.soc-ph
|
General models of network navigation must contain a deterministic or drift
component, encouraging the agent to follow routes of least cost, as well as a
random or diffusive component, enabling free wandering. This paper proposes a
thermodynamic formalism involving two path functionals, namely an energy
functional governing the drift and an entropy functional governing the
diffusion. A freely adjustable parameter, the temperature, arbitrates between
the conflicting objectives of minimising travel costs and maximising spatial
exploration. The theory is illustrated on various graphs and various
temperatures. The resulting optimal paths, together with presumably new
associated edges and nodes centrality indices, are analytically and numerically
investigated.
|
1207.1257
|
Generalizing Redundancy in Propositional Logic: Foundations and Hitting
Sets Duality
|
cs.LO cs.AI
|
Detection and elimination of redundant clauses from propositional formulas in
Conjunctive Normal Form (CNF) is a fundamental problem with numerous
application domains, including AI, and has been the subject of extensive
research. Moreover, a number of recent applications motivated various
extensions of this problem. For example, unsatisfiable formulas partitioned
into disjoint subsets of clauses (so-called groups) often need to be simplified
by removing redundant groups, or may contain redundant variables, rather than
clauses. In this report we present a generalized theoretical framework of
labelled CNF formulas that unifies various extensions of the redundancy
detection and removal problem and allows us to derive a number of results that
subsume and extend previous work. The follow-up reports contain a number of
additional theoretical results and algorithms for various computational
problems in the context of the proposed framework.
|
1207.1271
|
Automated Verification of Quantum Protocols using MCMAS
|
cs.LO cs.CR cs.MA quant-ph
|
We present a methodology for the automated verification of quantum protocols
using MCMAS, a symbolic model checker for multi-agent systems. The method
based on the logical framework developed by D'Hondt and Panangaden for
investigating epistemic and temporal properties, built on the model for
Distributed Measurement-based Quantum Computation (DMC), an extension of the
Measurement Calculus to distributed quantum systems. We describe the
translation map from DMC to interpreted systems, the typical formalism for
reasoning about time and knowledge in multi-agent systems. Then, we introduce
dmc2ispl, a compiler into the input language of the MCMAS model checker. We
demonstrate the technique by verifying the Quantum Teleportation Protocol, and
discuss the performance of the tool.
|
1207.1276
|
Controllers with Minimal Observation Power (Application to Timed
Systems)
|
cs.SY cs.GT cs.LO
|
We consider the problem of controller synthesis under imperfect information
in a setting where there is a set of available observable predicates equipped
with a cost function. The problem that we address is the computation of a
subset of predicates sufficient for control and whose cost is minimal. Our
solution avoids a full exploration of all possible subsets of predicates and
reuses some information between different iterations. We apply our approach to
timed systems. We have developed a tool prototype and analyze the performance
of our optimization algorithm on two case studies.
|
1207.1280
|
Probabilistically Safe Control of Noisy Dubins Vehicles
|
cs.RO cs.SY
|
We address the problem of controlling a stochastic version of a Dubins
vehicle such that the probability of satisfying a temporal logic specification
over a set of properties at the regions in a partitioned environment is
maximized. We assume that the vehicle can determine its precise initial
position in a known map of the environment. However, inspired by practical
limitations, we assume that the vehicle is equipped with noisy actuators and,
during its motion in the environment, it can only measure its angular velocity
using a limited accuracy gyroscope. Through quantization and discretization, we
construct a finite approximation for the motion of the vehicle in the form of a
Markov Decision Process (MDP). We allow for task specifications given as
temporal logic statements over the environmental properties, and use tools in
Probabilistic Computation Tree Logic (PCTL) to generate an MDP control policy
that maximizes the probability of satisfaction. We translate this policy to a
vehicle feedback control strategy and show that the probability that the
vehicle satisfies the specification in the original environment is bounded from
below by the maximum probability of satisfying the specification on the MDP.
|
1207.1291
|
Generating Robust and Efficient Networks Under Targeted Attacks
|
physics.soc-ph cs.SI
|
Much of our commerce and traveling depends on the efficient operation of
large-scale networks. Some of these, such as electric power grids, transportation
systems, communication networks, and others, must maintain their efficiency
even after several failures, or malicious attacks. We outline a procedure that
modifies any given network to enhance its robustness, defined as the size of
its largest connected component after a succession of attacks, whilst keeping a
high efficiency, described in terms of the shortest paths among nodes. We also
show that this generated set of networks is very similar to networks optimized
for robustness in several aspects such as high assortativity and the presence
of an onion-like structure.
|
1207.1315
|
An experimental study of exhaustive solutions for the Mastermind puzzle
|
cs.NE math.OC
|
Mastermind is in essence a search problem in which a string of symbols that
is kept secret must be found by sequentially playing strings that use the same
alphabet, and using as hints the responses that indicate how close those other
strings are to the secret one. Although it is commercialized as a game, it
is a combinatorial problem of high complexity, with applications in fields that
range from computer security to genomics. For problems of this kind there are
no exact solutions; even exhaustive search methods rely on heuristics to
choose, at every step, the strings that yield the best possible hint. These methods
mostly try to play the move that offers the best reduction in search space size
in the next step; this move is chosen according to an empirical score. However,
in this paper we examine several state-of-the-art exhaustive search
methods and show that another factor, the presence of the actual solution among
the candidate moves, or, in other words, the fact that the actual solution has
the highest score, also plays a very important role. Using this observation, we
propose new exhaustive search approaches that obtain results which are
comparable to the classic ones, and besides, are better suited as a basis for
non-exhaustive search strategies such as evolutionary algorithms, since their
behavior on a series of key indicators is better than that of the classical algorithms.
|
1207.1345
|
Distributed Structure: Joint Expurgation for the Multiple-Access Channel
|
cs.IT math.IT
|
In this work we show how an improved lower bound to the error exponent of the
memoryless multiple-access (MAC) channel is attained via the use of linear
codes, thus demonstrating that structure can be beneficial even in cases where
there is no capacity gain. We show that if the MAC channel is modulo-additive,
then any error probability, and hence any error exponent, achievable by a
linear code for the corresponding single-user channel, is also achievable for
the MAC channel. Specifically, for an alphabet of prime cardinality, where
linear codes achieve the best known exponents in the single-user setting and
the optimal exponent above the critical rate, this performance carries over to
the MAC setting. At least at low rates, where expurgation is needed, our
approach strictly improves performance over previous results, where expurgation
was used at most for one of the users. Even when the MAC channel is not
additive, it may be transformed into such a channel. While the transformation
is lossy, we show that the distributed structure gain in some "nearly additive"
cases outweighs the loss, and thus the error exponent can improve upon the best
known error exponent for these cases as well. Finally we apply a similar
approach to the Gaussian MAC channel. We obtain an improvement over the best
known achievable exponent, given by Gallager, for certain rate pairs, using
lattice codes which satisfy a nesting condition.
|
1207.1350
|
Cost Sensitive Reachability Heuristics for Handling State Uncertainty
|
cs.AI
|
While POMDPs provide a general platform for non-deterministic conditional
planning under a variety of quality metrics, they have limited scalability. On
the other hand, non-deterministic conditional planners scale very well, but
many lack the ability to optimize plan quality metrics. We present a novel
generalization of planning graph based heuristics that helps conditional
planners both scale and generate high quality plans when using actions with
nonuniform costs. We make empirical comparisons with two state-of-the-art
planners to show the benefit of our techniques.
|
1207.1351
|
Stable Independence in Perfect Maps
|
cs.AI
|
With the aid of the concept of stable independence we can construct, in an
efficient way, a compact representation of a semi-graphoid independence
relation. We show that this representation provides a new necessary condition
for the existence of a directed perfect map for the relation. The test for this
condition is based to a large extent on the transitivity property of a special
form of d-separation. The complexity of the test is linear in the size of the
representation. The test, moreover, brings the additional benefit that it can
be used to guide the early stages of network construction.
|
1207.1352
|
Prediction, Expectation, and Surprise: Methods, Designs, and Study of a
Deployed Traffic Forecasting Service
|
cs.AI physics.soc-ph
|
We present research on developing models that forecast traffic flow and
congestion in the Greater Seattle area. The research has led to the deployment
of a service named JamBayes, that is being actively used by over 2,500 users
via smartphones and desktop versions of the system. We review the modeling
effort and describe experiments probing the predictive accuracy of the models.
Finally, we present research on building models that can identify current and
future surprises, via efforts on modeling and forecasting unexpected
situations.
|
1207.1353
|
'Say EM' for Selecting Probabilistic Models for Logical Sequences
|
cs.AI
|
Many real world sequences such as protein secondary structures or shell logs
exhibit rich internal structure. Traditional probabilistic models of
sequences, however, consider sequences of flat symbols only. Logical hidden
Markov models have been proposed as one solution. They deal with logical
sequences, i.e., sequences over an alphabet of logical atoms. This comes at the
expense of a more complex model selection problem. Indeed, different
abstraction levels have to be explored. In this paper, we propose a novel
method for selecting logical hidden Markov models from data called SAGEM. SAGEM
combines generalized expectation maximization, which optimizes parameters, with
structure search for model selection using inductive logic programming
refinement operators. We provide convergence and experimental results that show
SAGEM's effectiveness.
|
1207.1354
|
Of Starships and Klingons: Bayesian Logic for the 23rd Century
|
cs.AI
|
Intelligent systems in an open world must reason about many interacting
entities related to each other in diverse ways and having uncertain features
and relationships. Traditional probabilistic languages lack the expressive
power to handle relational domains. Classical first-order logic is sufficiently
expressive, but lacks a coherent plausible reasoning capability. Recent years
have seen the emergence of a variety of approaches to integrating first-order
logic, probability, and machine learning. This paper presents Multi-entity
Bayesian networks (MEBN), a formal system that integrates First Order Logic
(FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks
to allow representation of graphical models with repeated sub-structures, and
can express a probability distribution over models of any consistent, finitely
axiomatizable first-order theory. We present the logic using an example
inspired by the Paramount series Star Trek.
|
1207.1355
|
A Differential Semantics of Lazy AR Propagation
|
cs.AI
|
In this paper we present a differential semantics of Lazy AR Propagation
(LARP) in discrete Bayesian networks. We describe how both single and multi
dimensional partial derivatives of the evidence may easily be calculated from a
junction tree in LARP equilibrium. We show that the simplicity of the
calculations stems from the nature of LARP. Based on the differential semantics
we describe how variable propagation in the LARP architecture may give access
to additional partial derivatives. The cautious LARP (cLARP) scheme is derived
to produce a flexible cLARP equilibrium that offers additional opportunities
for calculating single and multidimensional partial derivatives of the evidence
and subsets of the evidence from a single propagation. The results of an
empirical evaluation illustrate how access to a greatly increased number
of partial derivatives comes at a low computational cost.
|
1207.1356
|
Modifying Bayesian Networks by Probability Constraints
|
cs.AI
|
This paper deals with the following problem: modify a Bayesian network to
satisfy a given set of probability constraints by changing only its conditional
probability tables, such that the probability distribution of the resulting
network is as close as possible to that of the original network. We propose to
solve this problem by extending IPFP (iterative proportional fitting procedure)
to probability distributions represented by Bayesian networks. The resulting
algorithm E-IPFP is further developed to D-IPFP, which reduces the
computational cost by decomposing a global E-IPFP into a set of smaller local
E-IPFP problems. Limited analysis is provided, including the convergence proofs
of the two algorithms. Computer experiments were conducted to validate the
algorithms. The results are consistent with the theoretical analysis.
|
1207.1357
|
Exploiting Evidence-dependent Sensitivity Bounds
|
cs.AI
|
Studying the effects of one-way variation of any number of parameters on any
number of output probabilities quickly becomes infeasible in practice,
especially if various evidence profiles are to be taken into consideration. To
provide for identifying the parameters that have a potentially large effect
prior to actually performing the analysis, we need properties of sensitivity
functions that are independent of the network under study, of the available
evidence, or of both. In this paper, we study properties that depend upon just
the probability of the entered evidence. We demonstrate that these properties
provide for establishing an upper bound on the sensitivity value for a
parameter; they further provide for establishing the region in which the vertex
of the sensitivity function resides, thereby serving to identify parameters
with a low sensitivity value that may still have a large impact on the
probability of interest for relatively small parameter variations.
|
1207.1358
|
Unsupervised spectral learning
|
cs.LG stat.ML
|
In spectral clustering and spectral image segmentation, the data is partitioned
starting from a given matrix of pairwise similarities S. The matrix S is either
constructed by hand, or learned on a separate training set. In this paper we
show how to achieve spectral clustering in unsupervised mode. Our algorithm
starts with a set of observed pairwise features, which are possible components
of an unknown, parametric similarity function. This function is learned
iteratively, at the same time as the clustering of the data. The algorithm
shows promising results on synthetic and real data.
|
1207.1359
|
MAA*: A Heuristic Search Algorithm for Solving Decentralized POMDPs
|
cs.AI
|
We present multi-agent A* (MAA*), the first complete and optimal heuristic
search algorithm for solving decentralized partially-observable Markov decision
problems (DEC-POMDPs) with finite horizon. The algorithm is suitable for
computing optimal plans for a cooperative group of agents that operate in a
stochastic environment such as multirobot coordination, network traffic
control, or distributed resource allocation. Solving such problems effectively
is a major challenge in the area of planning under uncertainty. Our solution is
based on a synthesis of classical heuristic search and decentralized control
theory. Experimental results show that MAA* has significant advantages. We
introduce an anytime variant of MAA* and conclude with a discussion of
promising extensions such as an approach to solving infinite horizon problems.
|
1207.1361
|
Local Utility Elicitation in GAI Models
|
cs.GT cs.AI
|
Structured utility models are essential for the effective representation and
elicitation of complex multiattribute utility functions. Generalized additive
independence (GAI) models provide an attractive structural model of user
preferences, offering a balanced tradeoff between simplicity and applicability.
While representation and inference with such models is reasonably well
understood, elicitation of the parameters of such models has been studied less
from a practical perspective. We propose a procedure to elicit GAI model
parameters using only "local" utility queries rather than "global" queries over
full outcomes. Our local queries take full advantage of GAI structure and
provide a sound framework for extending the elicitation procedure to settings
where the uncertainty over utility parameters is represented probabilistically.
We describe experiments using a myopic value-of-information approach to
elicitation in a large GAI model.
|
1207.1363
|
A unified setting for inference and decision: An argumentation-based
approach
|
cs.AI
|
Inferring from inconsistency and making decisions are two problems which have
always been treated separately by researchers in Artificial Intelligence.
Consequently, different models have been proposed for each category. Different
argumentation systems [2, 7, 10, 11] have been developed for handling
inconsistency in knowledge bases. Recently, other argumentation systems [3, 4,
8] have been defined for making decisions under uncertainty. The aim of this
paper is to present a general argumentation framework in which both inferring
from inconsistency and decision making are captured. The proposed framework can
be used for decision under uncertainty, multiple criteria decision, rule-based
decision and finally case-based decision. Moreover, works on classical decision
making assume that the information about the environment is coherent; this is
no longer required by the general framework proposed here.
|
1207.1364
|
Learning from Sparse Data by Exploiting Monotonicity Constraints
|
cs.LG stat.ML
|
When training data is sparse, more domain knowledge must be incorporated into
the learning algorithm in order to reduce the effective size of the hypothesis
space. This paper builds on previous work in which knowledge about qualitative
monotonicities was formally represented and incorporated into learning
algorithms (e.g., Clark & Matwin's work with the CN2 rule learning algorithm).
We show how to interpret knowledge of qualitative influences, and in particular
of monotonicities, as constraints on probability distributions, and to
incorporate this knowledge into Bayesian network learning algorithms. We show
that this yields improved accuracy, particularly with very small training sets
(e.g. less than 10 examples).
|
1207.1365
|
Towards Characterizing Markov Equivalence Classes for Directed Acyclic
Graphs with Latent Variables
|
stat.ME cs.AI
|
It is well known that there may be many causal explanations that are
consistent with a given set of data. Recent work has been done to represent the
common aspects of these explanations into one representation. In this paper, we
address what is less well known: how do the relationships common to every
causal explanation among the observed variables of some DAG process change in
the presence of latent variables? Ancestral graphs provide a class of graphs
that can encode conditional independence relations that arise in DAG models
with latent and selection variables. In this paper we present a set of
orientation rules that construct the Markov equivalence class representative
for ancestral graphs, given a member of the equivalence class. These rules are
sound and complete. We also show that when the equivalence class includes a
DAG, the equivalence class representative is the essential graph for the said
DAG.
|
1207.1366
|
Learning Factor Graphs in Polynomial Time & Sample Complexity
|
cs.LG stat.ML
|
We study computational and sample complexity of parameter and structure
learning in graphical models. Our main result shows that the class of factor
graphs with bounded factor size and bounded connectivity can be learned in
polynomial time and polynomial number of samples, assuming that the data is
generated by a network in this class. This result covers both parameter
estimation for a known network structure and structure learning. It implies as
a corollary that we can learn factor graphs for both Bayesian networks and
Markov networks of bounded degree, in polynomial time and sample complexity.
Unlike maximum likelihood estimation, our method does not require inference in
the underlying network, and so applies to networks where inference is
intractable. We also show that the error of our learned model degrades
gracefully when the generating distribution is not a member of the target class
of networks.
|
1207.1367
|
Belief Updating and Learning in Semi-Qualitative Probabilistic Networks
|
cs.AI stat.ML
|
This paper explores semi-qualitative probabilistic networks (SQPNs) that
combine numeric and qualitative information. We first show that exact
inferences with SQPNs are NP^PP-complete. We then show that existing qualitative
relations in SQPNs (plus probabilistic logic and imprecise assessments) can be
dealt with effectively through multilinear programming. We then discuss learning: we
consider a maximum likelihood method that generates point estimates given a
SQPN and empirical data, and we describe a Bayesian-minded method that employs
the Imprecise Dirichlet Model to generate set-valued estimates.
|
1207.1368
|
Common Voting Rules as Maximum Likelihood Estimators
|
cs.GT cs.AI
|
Voting is a very general method of preference aggregation. A voting rule
takes as input every voter's vote (typically, a ranking of the alternatives),
and produces as output either just the winning alternative or a ranking of the
alternatives. One potential view of voting is the following. There exists a
'correct' outcome (winner/ranking), and each voter's vote corresponds to a
noisy perception of this correct outcome. If we are given the noise model, then
for any vector of votes, we can
|
1207.1369
|
Hybrid Bayesian Networks with Linear Deterministic Variables
|
cs.AI
|
When a hybrid Bayesian network has conditionally deterministic variables with
continuous parents, the joint density function for the continuous variables
does not exist. Conditional linear Gaussian distributions can handle such cases
when the continuous variables have a multi-variate normal distribution and the
discrete variables do not have continuous parents. In this paper, operations
required for performing inference with conditionally deterministic variables in
hybrid Bayesian networks are developed. These methods allow inference in
networks with deterministic variables where continuous variables may be
non-Gaussian, and their density functions can be approximated by mixtures of
truncated exponentials. There are no constraints on the placement of continuous
and discrete nodes in the network.
|
1207.1370
|
On Bayesian Network Approximation by Edge Deletion
|
cs.AI
|
We consider the problem of deleting edges from a Bayesian network for the
purpose of simplifying models in probabilistic inference. In particular, we
propose a new method for deleting network edges, which is based on the evidence
at hand. We provide some interesting bounds on the KL-divergence between
original and approximate networks, which highlight the impact of given evidence
on the quality of approximation and shed some light on good and bad candidates
for edge deletion. We finally demonstrate empirically the promise of the
proposed edge deletion technique as a basis for approximate inference.
|
1207.1372
|
Exploiting Evidence in Probabilistic Inference
|
cs.AI
|
We define the notion of compiling a Bayesian network with evidence and
provide a specific approach for evidence-based compilation, which makes use of
logical processing. The approach is practical and advantageous in a number of
application areas-including maximum likelihood estimation, sensitivity
analysis, and MAP computations-and we provide specific empirical results in the
domain of genetic linkage analysis. We also show that the approach is
applicable for networks that do not contain determinism, and show that it
empirically subsumes the performance of the quickscore algorithm when applied
to noisy-or networks.
|
1207.1373
|
Counterexample-guided Planning
|
cs.AI cs.GT
|
Planning in adversarial and uncertain environments can be modeled as the
problem of devising strategies in stochastic perfect information games. These
games are generalizations of Markov decision processes (MDPs): there are two
(adversarial) players, and a source of randomness. The main practical obstacle
to computing winning strategies in such games is the size of the state space.
In practice therefore, one typically works with abstractions of the model. The
difficulty is to come up with an abstraction that is neither so coarse that it
removes all winning strategies (plans), nor so fine that it is intractable. In
verification, the paradigm of counterexample-guided abstraction refinement has
been successful to construct useful but parsimonious abstractions
automatically. We extend this paradigm to probabilistic models (namely, perfect
information games and, as a special case, MDPs). This allows us to apply the
counterexample-guided abstraction paradigm to the AI planning problem. As
special cases, we get planning algorithms for MDPs and deterministic systems
that automatically construct system abstractions.
|
1207.1374
|
Use of Dempster-Shafer Conflict Metric to Detect Interpretation
Inconsistency
|
cs.AI
|
A model of the world built from sensor data may be incorrect even if the
sensors are functioning correctly. Possible causes include the use of
inappropriate sensors (e.g. a laser looking through glass walls), accumulated
sensor inaccuracies (e.g. localization errors), incorrect a priori models,
or an internal representation that does not match the world (e.g. a static
occupancy grid used with dynamically moving objects). We are interested in the
case where the constructed model of the world is flawed, but there is no access
to the ground truth that would allow the system to see the discrepancy, such as
a robot entering an unknown environment. This paper considers the problem of
determining when something is wrong using only the sensor data used to
construct the world model. It proposes 11 interpretation inconsistency
indicators based on the Dempster-Shafer conflict metric, Con, and evaluates
these indicators according to three criteria: ability to distinguish true
inconsistency from sensor noise (classification), estimate the magnitude of
discrepancies (estimation), and determine the source(s) (if any) of sensing
problems in the environment (isolation). The evaluation is conducted using data
from a mobile robot with sonar and laser range sensors navigating indoor
environments under controlled conditions. The evaluation shows that the Gambino
indicator performed best in terms of estimation (at best 0.77 correlation),
isolation, and classification of the sensing situation as degraded (7% false
negative rate) or normal (0% false positive rate).
|
1207.1375
|
Nonparametric Bayesian Logic
|
cs.AI
|
The Bayesian Logic (BLOG) language was recently developed for defining
first-order probability models over worlds with unknown numbers of objects. It
handles important problems in AI, including data association and population
estimation. This paper extends BLOG by adopting generative processes over
function spaces - known as nonparametrics in the Bayesian literature. We
introduce syntax for reasoning about arbitrary collections of objects, and
their properties, in an intuitive manner. By exploiting exchangeability,
distributions over unknown objects and their attributes are cast as Dirichlet
processes, which resolve difficulties in model selection and inference caused
by varying numbers of objects. We demonstrate these concepts with application
to citation matching.
|
1207.1376
|
Counterfactual Reasoning in Linear Structural Equation Models
|
cs.AI stat.ME
|
Consider the case where causal relations among variables can be described as
a Gaussian linear structural equation model. This paper deals with the problem
of clarifying how the variance of a response variable would have changed if a
treatment variable were assigned to some value (counterfactually), given that a
set of variables is observed (actually). In order to achieve this aim, we
reformulate the formulas of the counterfactual distribution proposed by Balke
and Pearl (1995) through both the total effects and a covariance matrix of
observed variables. We further extend the framework of Balke and Pearl (1995)
from point observations to interval observations, and from an unconditional
plan to a conditional plan. The results of this paper enable us to clarify the
properties of counterfactual distribution and establish an optimal plan.
|
1207.1377
|
Efficient algorithm for estimation of qualitative expected utility in
possibilistic case-based reasoning
|
cs.AI
|
We propose an efficient algorithm for estimation of possibility based
qualitative expected utility. It is useful for decision making mechanisms where
each possible decision is assigned a multi-attribute possibility distribution.
The computational complexity of ordinary methods that calculate the expected
utility based on discretization grows exponentially with the number of
attributes, and may become infeasible when that number is high.
We present a series of theorems and lemmas proving the correctness of our
algorithm, which exhibits linear computational complexity. Our algorithm has
been applied in the context of selecting the most prospective partners in
multi-party multi-attribute negotiation, and can also be used in making
decisions about potential offers during the negotiation, as well as in other
similar problems.
|
1207.1378
|
Local Markov Property for Models Satisfying Composition Axiom
|
cs.AI
|
The local Markov condition for a DAG to be an independence map of a
probability distribution is well known. For DAGs with latent variables,
represented as bi-directed edges in the graph, the local Markov property may
invoke an exponential number of conditional independencies. This paper shows that
the number of conditional independence relations required may be reduced if the
probability distributions satisfy the composition axiom. In certain types of
graphs, only a linear number of conditional independencies are required. The
result has applications in testing linear structural equation models with
correlated errors.
|
1207.1379
|
On the Detection of Concept Changes in Time-Varying Data Stream by
Testing Exchangeability
|
cs.LG stat.ML
|
A martingale framework for concept change detection based on testing data
exchangeability was recently proposed (Ho, 2005). In this paper, we describe
the proposed change-detection test based on Doob's Maximal Inequality and
show that it is an approximation of the sequential probability ratio test
(SPRT). The relationship between the threshold value used in the proposed test
and its size and power is deduced from the approximation. The mean delay time
before a change is detected is estimated using the average sample number of an
SPRT. The performance of the test using various threshold values is examined on
five different data stream scenarios simulated using two synthetic data sets.
Finally, experimental results show that the test is effective in detecting
changes in time-varying data streams simulated using three benchmark data sets.
|
1207.1380
|
Bayes Blocks: An Implementation of the Variational Bayesian Building
Blocks Framework
|
cs.MS cs.LG stat.ML
|
A software library for constructing and learning probabilistic models is
presented. The library offers a set of building blocks from which a large
variety of static and dynamic models can be built. These include hierarchical
models for variances of other variables and many nonlinear models. The
underlying variational Bayesian machinery, providing for fast and robust
estimation but being mathematically rather involved, is almost completely
hidden from the user, thus making the library very easy to use. The building
blocks include Gaussian, rectified Gaussian and mixture-of-Gaussians variables
and computational nodes which can be combined rather freely.
|
1207.1381
|
Unsupervised Activity Discovery and Characterization From Event-Streams
|
cs.AI
|
We present a framework to discover and characterize different classes of
everyday activities from event-streams. We begin by representing activities as
bags of event n-grams. This allows us to analyze the global structural
information of activities, using their local event statistics. We demonstrate
how maximal cliques in an undirected edge-weighted graph of activities can be
used for activity-class discovery in an unsupervised manner. We show how
modeling an activity as a variable-length Markov process can be used to
discover recurrent event-motifs to characterize the discovered
activity-classes. We present results over extensive data-sets, collected from
multiple active environments, to show the competence and generalizability of
our proposed framework.
|
1207.1382
|
Maximum Margin Bayesian Networks
|
cs.LG stat.ML
|
We consider the problem of learning Bayesian network classifiers that
maximize the margin over a set of classification variables. We find that this
problem is harder for Bayesian networks than for undirected graphical models
like maximum margin Markov networks. The main difficulty is that the parameters
in a Bayesian network must satisfy additional normalization constraints that an
undirected graphical model need not respect. These additional constraints
complicate the optimization task. Nevertheless, we derive an effective training
algorithm that solves the maximum margin training problem for a range of
Bayesian network topologies, and converges to an approximate solution for
arbitrary network topologies. Experimental results show that the method can
demonstrate improved generalization performance over Markov networks when the
directed graphical structure encodes relevant knowledge. In practice, the
training technique allows one to combine prior knowledge expressed as a
directed (causal) model with state of the art discriminative learning methods.
|
1207.1384
|
Modeling Transportation Routines using Hybrid Dynamic Mixed Networks
|
cs.AI
|
This paper describes a general framework called Hybrid Dynamic Mixed Networks
(HDMNs) which are Hybrid Dynamic Bayesian Networks that allow representation of
discrete deterministic information in the form of constraints. We propose
approximate inference algorithms that integrate and adjust well known
algorithmic principles such as Generalized Belief Propagation,
Rao-Blackwellised Particle Filtering and Constraint Propagation to address the
complexity of modeling and reasoning in HDMNs. We use this framework to model a
person's travel activity over time and to predict destination and routes given
the current location. We present a preliminary empirical evaluation
demonstrating the effectiveness of our modeling framework and algorithms using
several variants of the activity model.
|
1207.1385
|
Approximate Inference Algorithms for Hybrid Bayesian Networks with
Discrete Constraints
|
cs.AI
|
In this paper, we consider Hybrid Mixed Networks (HMN) which are Hybrid
Bayesian Networks that allow discrete deterministic information to be modeled
explicitly in the form of constraints. We present two approximate inference
algorithms for HMNs that integrate and adjust well known algorithmic principles
such as Generalized Belief Propagation, Rao-Blackwellised Importance Sampling
and Constraint Propagation to address the complexity of modeling and reasoning
in HMNs. We demonstrate the performance of our approximate inference algorithms
on randomly generated HMNs.
|
1207.1386
|
Metrics for Markov Decision Processes with Infinite State Spaces
|
cs.AI
|
We present metrics for measuring state similarity in Markov decision
processes (MDPs) with infinitely many states, including MDPs with continuous
state spaces. Such metrics provide a stable quantitative analogue of the notion
of bisimulation for MDPs, and are suitable for use in MDP approximation. We
show that the optimal value function associated with a discounted infinite
horizon planning task varies continuously with respect to our metric distances.
|
1207.1387
|
Learning Bayesian Network Parameters with Prior Knowledge about
Context-Specific Qualitative Influences
|
cs.AI cs.LG stat.ML
|
We present a method for learning the parameters of a Bayesian network with
prior knowledge about the signs of influences between variables. Our method
accommodates not just the standard signs, but provides for context-specific
signs as well. We show how the various signs translate into order constraints
on the network parameters and how isotonic regression can be used to compute
order-constrained estimates from the available data. Our experimental results
show that taking prior knowledge about the signs of influences into account
leads to an improved fit of the true distribution, especially when only a small
sample of data is available. Moreover, the computed estimates are guaranteed to
be consistent with the specified signs, thereby resulting in a network that is
more likely to be accepted by experts in its domain of application.
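The order-constrained estimation step relies on isotonic regression; below is a minimal pool-adjacent-violators (PAV) sketch for a single chain of non-decreasing constraints, as a generic illustration rather than the paper's context-specific procedure:

```python
def pav(y):
    """Pool Adjacent Violators: least-squares projection of y onto the
    set of non-decreasing sequences. A generic isotonic-regression
    sketch, not the paper's exact estimator."""
    out = []  # list of (block mean, block size)
    for v in y:
        out.append((float(v), 1))
        # merge blocks while the order constraint is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            (m1, w1), (m2, w2) = out[-2], out[-1]
            out[-2:] = [((m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2)]
    # expand the blocks back into one fitted value per input point
    fit = []
    for m, w in out:
        fit.extend([m] * w)
    return fit

print(pav([3, 1, 2, 4]))  # [2.0, 2.0, 2.0, 4.0]
```

Violating prefixes are pooled into blocks whose fitted value is the block mean, which is exactly the closest monotone fit under squared loss.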
|
1207.1388
|
Planning in POMDPs Using Multiplicity Automata
|
cs.AI cs.FL
|
Planning and learning in Partially Observable MDPs (POMDPs) are among the
most challenging tasks in both the AI and Operations Research communities.
Although solutions to these problems are intractable in general, there might be
special cases, such as structured POMDPs, which can be solved efficiently. A
natural and possibly efficient way to represent a POMDP is through the
predictive state representation (PSR), a representation which has recently
been receiving increasing attention. In this work, we relate POMDPs to
multiplicity automata, showing that POMDPs can be represented by multiplicity
automata with no increase in the representation size. Furthermore, we show that
the size of the multiplicity automaton is equal to the rank of the predictive
state representation. Therefore, we relate both the predictive state
representation and POMDPs to the well-founded multiplicity automata literature.
Based on the multiplicity automata representation, we provide a planning
algorithm which is exponential only in the multiplicity automata rank rather
than the number of states of the POMDP. As a result, whenever the predictive
state representation is logarithmic in the standard POMDP representation, our
planning algorithm is efficient.
|
1207.1389
|
On the Number of Experiments Sufficient and in the Worst Case Necessary
to Identify All Causal Relations Among N Variables
|
cs.AI stat.ME
|
We show that if any number of variables are allowed to be simultaneously and
independently randomized in any one experiment, log2(N) + 1 experiments are
sufficient and in the worst case necessary to determine the causal relations
among N >= 2 variables when no latent variables, no sample selection bias and
no feedback cycles are present. For all K, 0 < K < N/2, we provide an upper
bound on the number of experiments required to determine causal structure when
each experiment simultaneously randomizes K variables. For large N, these
bounds are significantly lower than the N - 1 bound required when each
experiment randomizes at most one variable. For kmax < N/2, we show that
(N/kmax - 1) + N/(2 kmax) log2(kmax) experiments are sufficient and in the
worst case necessary. We offer a conjecture as to the minimal number of
experiments that are in the worst case sufficient to identify all causal
relations among N
observed variables that are a subset of the vertices of a DAG.
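As a quick numeric check of the stated bounds (helper names are illustrative, not from the paper), for N = 8 variables:

```python
import math

def single_bound(n):
    """Experiments sufficient when any number of variables may be
    randomized at once: the abstract's log2(N) + 1 bound."""
    return math.log2(n) + 1

def k_limited_bound(n, kmax):
    """The stated (N/kmax - 1) + N/(2*kmax) * log2(kmax) bound,
    valid for kmax < N/2."""
    assert 0 < kmax < n / 2
    return (n / kmax - 1) + n / (2 * kmax) * math.log2(kmax)

print(single_bound(8))        # 4.0
print(k_limited_bound(8, 2))  # 5.0
print(8 - 1)                  # the N - 1 single-variable bound, for comparison
```

Even at this small N, both bounds undercut the N - 1 experiments needed when only one variable may be randomized per experiment.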
|
1207.1390
|
Unstructuring User Preferences: Efficient Non-Parametric Utility
Revelation
|
cs.AI cs.GT
|
Tackling the problem of ordinal preference revelation and reasoning, we
propose a novel methodology for generating an ordinal utility function from a
set of qualitative preference statements. To the best of our knowledge, our
proposal constitutes the first nonparametric solution for this problem that is
both efficient and semantically sound. Our initial experiments provide strong
evidence for practical effectiveness of our approach.
|
1207.1391
|
Existence and Finiteness Conditions for Risk-Sensitive Planning: Results
and Conjectures
|
cs.AI
|
Decision-theoretic planning with risk-sensitive planning objectives is
important for building autonomous agents or decision-support systems for
real-world applications. However, this line of research has been largely
ignored in the artificial intelligence and operations research communities
since planning with risk-sensitive planning objectives is more complicated than
planning with risk-neutral planning objectives. To remedy this situation, we
derive conditions that guarantee that the optimal expected utilities of the
total plan-execution reward exist and are finite for fully observable Markov
decision process models with non-linear utility functions. In case of Markov
decision process models with both positive and negative rewards, most of our
results hold for stationary policies only, but we conjecture that they can be
generalized to non-stationary policies.
|
1207.1392
|
The Graphical Identification for Total Effects by using Surrogate
Variables
|
stat.ME cs.AI
|
Consider the case where cause-effect relationships between variables can be
described as a directed acyclic graph and the corresponding linear structural
equation model. This paper provides graphical identifiability criteria for
total effects by using surrogate variables in the case where it is difficult to
observe a treatment/response variable. The results enable us to judge from
graph structure whether a total effect can be identified through the
observation of surrogate variables.
|
1207.1393
|
Learning about individuals from group statistics
|
cs.LG stat.ML
|
We propose a new problem formulation which is similar to, but more
informative than, the binary multiple-instance learning problem. In this
setting, we are given groups of instances (described by feature vectors) along
with estimates of the fraction of positively-labeled instances per group. The
task is to learn an instance level classifier from this information. That is,
we are trying to estimate the unknown binary labels of individuals from
knowledge of group statistics. We propose a principled probabilistic model to
solve this problem that accounts for uncertainty in the parameters and in the
unknown individual labels. This model is trained with an efficient MCMC
algorithm. Its performance is demonstrated on both synthetic and real-world
data arising in general object recognition.
|
1207.1394
|
Near-optimal Nonmyopic Value of Information in Graphical Models
|
cs.AI
|
A fundamental issue in real-world systems, such as sensor networks, is the
selection of observations which most effectively reduce uncertainty. More
specifically, we address the long standing problem of nonmyopically selecting
the most informative subset of variables in a graphical model. We present the
first efficient randomized algorithm providing a constant factor
(1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high
confidence. The algorithm leverages the theory of submodular functions, in
combination with a polynomial bound on sample complexity. We furthermore prove
that no polynomial time algorithm can provide a constant factor approximation
better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of
the effectiveness of our method on two complex real-world datasets.
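The (1 - 1/e) factor comes from the classic greedy rule for maximizing a monotone submodular set function under a cardinality constraint. The sketch below shows that generic greedy rule on a toy coverage function (all names hypothetical); the paper's randomized algorithm additionally approximates the objective by sampling:

```python
def greedy_subset(ground, k, f):
    """Classic greedy for a monotone submodular set function f under a
    cardinality constraint; it achieves the (1 - 1/e) approximation
    factor. Shown for intuition only, not the paper's algorithm."""
    S = set()
    for _ in range(k):
        # add the element with the largest marginal gain f(S + x) - f(S)
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S))
        S.add(best)
    return S

# toy sensor model: each candidate "covers" some targets, and
# f(S) counts the distinct targets covered (a submodular function)
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {4, 5}}

def cover(S):
    return len(set().union(*(sets[x] for x in S))) if S else 0

print(sorted(greedy_subset(set(sets), 2, cover)))  # ['a', 'd']
```

Note how the second pick is driven by marginal gain: "b" overlaps with "a" on target 3, so the disjoint "d" wins.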
|
1207.1395
|
On the optimality of tree-reweighted max-product message-passing
|
cs.AI cs.DS
|
Tree-reweighted max-product (TRW) message passing is a modified form of the
ordinary max-product algorithm for attempting to find minimal energy
configurations in Markov random field with cycles. For a TRW fixed point
satisfying the strong tree agreement condition, the algorithm outputs a
configuration that is provably optimal. In this paper, we focus on the case of
binary variables with pairwise couplings, and establish stronger properties of
TRW fixed points that satisfy only the milder condition of weak tree agreement
(WTA). First, we demonstrate how it is possible to identify part of the optimal
solution (i.e., a provably optimal solution for a subset of nodes) without
knowing a complete solution. Second, we show that for submodular functions, a
WTA fixed point always yields a globally optimal solution. We establish that
for binary variables, any WTA fixed point always achieves the global maximum of
the linear programming relaxation underlying the TRW method.
|
1207.1396
|
Toward Practical N^2 Monte Carlo: the Marginal Particle Filter
|
stat.CO cs.LG stat.ML
|
Sequential Monte Carlo techniques are useful for state estimation in
non-linear, non-Gaussian dynamic models. These methods allow us to approximate
the joint posterior distribution using sequential importance sampling. In this
framework, the dimension of the target distribution grows with each time step,
thus it is necessary to introduce some resampling steps to ensure that the
estimates provided by the algorithm have a reasonable variance. In many
applications, we are only interested in the marginal filtering distribution
which is defined on a space of fixed dimension. We present a Sequential Monte
Carlo algorithm called the Marginal Particle Filter which operates directly on
the marginal distribution, hence avoiding having to perform importance sampling
on a space of growing dimension. Using this idea, we also derive an improved
version of the auxiliary particle filter. We show theoretic and empirical
results which demonstrate a reduction in variance over conventional particle
filtering, and present techniques for reducing the cost of the marginal
particle filter with N particles from O(N^2) to O(N log N).
|
1207.1397
|
A Revision-Based Approach to Resolving Conflicting Information
|
cs.AI
|
In this paper, we propose a revision-based approach for conflict resolution
by generalizing the Disjunctive Maxi-Adjustment (DMA) approach (Benferhat et
al. 2004). Revision operators can be classified into two different families:
the model-based ones and the formula-based ones. So the revision-based approach
has two different versions according to which family of revision operators is
chosen. Two particular revision operators are considered: one is Dalal's
revision operator, which is a model-based revision operator, and the other is
the cardinality-maximal based revision operator, which is a formula-based
revision operator. When Dalal's revision operator is chosen, the
revision-based approach is independent of the syntactic form in each stratum
and it captures some notion of minimal change. When the cardinality-maximal
based revision operator is chosen, the revision-based approach is equivalent to
the DMA approach. We also show that both approaches are computationally easier
than the DMA approach.
|
1207.1398
|
Asynchronous Dynamic Bayesian Networks
|
cs.AI
|
Systems such as sensor networks and teams of autonomous robots consist of
multiple autonomous entities that interact with each other in a distributed,
asynchronous manner. These entities need to keep track of the state of the
system as it evolves. Asynchronous systems lead to special challenges for
monitoring, as nodes must update their beliefs independently of each other and
no central coordination is possible. Furthermore, the state of the system
continues to change as beliefs are being updated. Previous approaches to
developing distributed asynchronous probabilistic reasoning systems have used
static models. We present an approach using dynamic models that take into
account the way the system changes state over time. Our approach, which is
based on belief propagation, is fully distributed and asynchronous, and allows
the world to keep on changing as messages are being sent around. Experimental
results show that our approach compares favorably to the factored frontier
algorithm.
|
1207.1399
|
Robotic Mapping with Polygonal Random Fields
|
cs.RO cs.AI
|
Two types of probabilistic maps are popular in the mobile robotics
literature: occupancy grids and geometric maps. Occupancy grids have the
advantages of simplicity and speed, but they represent only a restricted class
of maps and they make incorrect independence assumptions. On the other hand,
current geometric approaches, which characterize the environment by features
such as line segments, can represent complex environments compactly. However,
they do not reason explicitly about occupancy, a necessity for motion planning,
and they lack a complete probability model over environmental structures. In
this paper we present a probabilistic mapping technique based on polygonal
random fields (PRF), which combines the advantages of both approaches. Our
approach explicitly represents occupancy using a geometric representation, and
it is based upon a consistent probability distribution over environments which
avoids the incorrect independence assumptions made by occupancy grids. We show
how sampling techniques for PRFs can be applied to localized laser and sonar
data, and we demonstrate significant improvements in mapping performance over
occupancy grids.
|
1207.1401
|
Expectation Propagation for Continuous Time Bayesian Networks
|
cs.AI
|
Continuous time Bayesian networks (CTBNs) describe structured stochastic
processes with finitely many states that evolve over continuous time. A CTBN is
a directed (possibly cyclic) dependency graph over a set of variables, each of
which represents a finite state continuous time Markov process whose transition
model is a function of its parents. As shown previously, exact inference in
CTBNs is intractable. We address the problem of approximate inference, allowing
for general queries conditioned on evidence over continuous time intervals and
at discrete time points. We show how CTBNs can be parameterized within the
exponential family, and use that insight to develop a message passing scheme on
cluster graphs that allows us to apply expectation propagation to CTBNs. The
clusters in our cluster graph do not contain distributions over the cluster
variables at individual time points, but distributions over trajectories of the
variables throughout a duration. Thus, unlike discrete time temporal models
such as dynamic Bayesian networks, we can adapt the time granularity at which
we reason for different variables and in different conditions.
|
1207.1402
|
Expectation Maximization and Complex Duration Distributions for
Continuous Time Bayesian Networks
|
cs.AI
|
Continuous time Bayesian networks (CTBNs) describe structured stochastic
processes with finitely many states that evolve over continuous time. A CTBN is
a directed (possibly cyclic) dependency graph over a set of variables, each of
which represents a finite state continuous time Markov process whose transition
model is a function of its parents. We address the problem of learning the
parameters and structure of a CTBN from partially observed data. We show how to
apply expectation maximization (EM) and structural expectation maximization
(SEM) to CTBNs. The availability of the EM algorithm allows us to extend the
representation of CTBNs to allow a much richer class of transition duration
distributions, known as phase distributions. This class is a highly expressive
semi-parametric representation, which can approximate any duration distribution
arbitrarily closely. This extension to the CTBN framework addresses one of the
main limitations of both CTBNs and DBNs: the restriction to exponentially or
geometrically distributed durations. We present experimental results on a real
data set of people's life spans, showing that our algorithm learns reasonable
models - structure and parameters - from partially observed data, and, with the
use of phase distributions, achieves better performance than DBNs.
|
1207.1403
|
Obtaining Calibrated Probabilities from Boosting
|
cs.LG stat.ML
|
Boosted decision trees typically yield good accuracy, precision, and ROC
area. However, because the outputs from boosting are not well calibrated
posterior probabilities, boosting yields poor squared error and cross-entropy.
We empirically demonstrate why AdaBoost predicts distorted probabilities and
examine three calibration methods for correcting this distortion: Platt
Scaling, Isotonic Regression, and Logistic Correction. We also experiment with
boosting using log-loss instead of the usual exponential loss. Experiments show
that Logistic Correction and boosting with log-loss work well when boosting
weak models such as decision stumps, but yield poor performance when boosting
more complex models such as full decision trees. Platt Scaling and Isotonic
Regression, however, significantly improve the probabilities predicted by
boosting.
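Platt Scaling maps raw classifier scores to probabilities by fitting a sigmoid. Below is a bare-bones sketch fitted by gradient descent on log-loss; Platt's original recipe also uses regularized target labels, omitted here for brevity:

```python
import math

def platt_scale(scores, labels, lr=0.5, steps=2000):
    """Fit P(y=1|s) = sigmoid(a*s + b) by gradient descent on log-loss.
    A minimal sketch of Platt Scaling, not a production calibrator."""
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s  # d(log-loss)/da
            gb += (p - y)      # d(log-loss)/db
        a -= lr * ga / n
        b -= lr * gb / n
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))

# map raw boosting-style scores to calibrated probabilities
cal = platt_scale([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
print(round(cal(2.0), 3), round(cal(-2.0), 3))
```

The learned sigmoid compresses the distorted score scale back toward honest posterior probabilities, which is why it repairs squared error and cross-entropy without changing the ranking (or ROC area).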
|
1207.1404
|
A submodular-supermodular procedure with applications to discriminative
structure learning
|
cs.LG cs.DS stat.ML
|
In this paper, we present an algorithm for minimizing the difference between
two submodular functions using a variational framework which is based on (an
extension of) the concave-convex procedure [17]. Because several commonly used
metrics in machine learning, like mutual information and conditional mutual
information, are submodular, the problem of minimizing the difference of two
submodular functions arises naturally in many machine learning applications. Two
such applications are learning discriminatively structured graphical models and
feature selection under computational complexity constraints. A commonly used
metric for measuring discriminative capacity is the EAR measure, which is the
difference between two conditional mutual information terms. Feature selection
that takes complexity considerations into account also falls into this framework
because both the information that a set of features provides and the cost of
computing and using the features can be modeled as submodular functions. This
problem is NP-hard, and we give a polynomial time heuristic for it. We also
present results on synthetic data to show that classifiers based on
discriminative graphical models using this algorithm can significantly
outperform classifiers based on generative graphical models.
|
1207.1405
|
Sufficient conditions for convergence of Loopy Belief Propagation
|
cs.AI
|
We derive novel sufficient conditions for convergence of Loopy Belief
Propagation (also known as the Sum-Product algorithm) to a unique fixed point.
Our results improve upon previously known conditions. For binary variables with
(anti-)ferromagnetic interactions, our conditions seem to be sharp.
|
1207.1406
|
A Conditional Random Field for Discriminatively-trained Finite-state
String Edit Distance
|
cs.LG cs.AI
|
The need to measure sequence similarity arises in information extraction,
object identity, data mining, biological sequence analysis, and other domains.
This paper presents discriminative string-edit CRFs, a finite-state conditional
random field model for edit sequences between strings. Conditional random
fields have advantages over generative approaches to this problem, such as pair
HMMs or the work of Ristad and Yianilos, because as conditionally-trained
methods, they enable the use of complex, arbitrary actions and features of the
input strings. As in generative models, the training data does not have to
specify the edit sequences between the given string pairs. Unlike generative
models, however, our model is trained on both positive and negative instances
of string pairs. We present positive experimental results on several data sets.
|
1207.1407
|
The Relationship Between AND/OR Search and Variable Elimination
|
cs.AI
|
In this paper we compare search and inference in graphical models through the
new framework of AND/OR search. Specifically, we compare Variable Elimination
(VE) and memory-intensive AND/OR Search (AO), and place algorithms such as
graph-based backjumping and no-good and good learning, as well as Recursive
Conditioning [7] and Value Elimination [2] within the AND/OR search framework.
|
1207.1408
|
Representation Policy Iteration
|
cs.AI
|
This paper addresses a fundamental issue central to approximation methods for
solving large Markov decision processes (MDPs): how to automatically learn the
underlying representation for value function approximation? A novel
theoretically rigorous framework is proposed that automatically generates
geometrically customized orthonormal sets of basis functions, which can be used
with any approximate MDP solver like least squares policy iteration (LSPI). The
key innovation is a coordinate-free representation of value functions, using
the theory of smooth functions on a Riemannian manifold. Hodge theory yields a
constructive method for generating basis functions for approximating value
functions based on the eigenfunctions of the self-adjoint (Laplace-Beltrami)
operator on manifolds. In effect, this approach performs a global Fourier
analysis on the state space graph to approximate value functions, where the
basis functions reflect the large-scale topology of the underlying state space.
A new class of algorithms called Representation Policy Iteration (RPI) is
presented that automatically learns both basis functions and approximately
optimal policies. Illustrative experiments compare the performance of RPI with
that of LSPI using two hand-coded basis functions (RBF and polynomial state
encodings).
|
1207.1409
|
Piecewise Training for Undirected Models
|
cs.LG stat.ML
|
For many large undirected models that arise in real-world applications, exact
maximum-likelihood training is intractable, because it requires computing
marginal distributions of the model. Conditional training is even more
difficult, because the partition function depends not only on the parameters,
but also on the observed input, requiring repeated inference over each training
example. An appealing idea for such models is to independently train a local
undirected classifier over each clique, afterwards combining the learned
weights into a single global model. In this paper, we show that this piecewise
method can be justified as minimizing a new family of upper bounds on the log
partition function. On three natural-language data sets, piecewise training is
more accurate than pseudolikelihood, and often performs comparably to global
training using belief propagation.
|
1207.1410
|
Description Logics with Fuzzy Concrete Domains
|
cs.AI cs.LO
|
We present a fuzzy version of description logics with concrete domains. Main
features are: (i) concept constructors are based on t-norm, t-conorm, negation
and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers
are allowed; and (iv) the reasoning algorithm is based on a mixture of
completion rules and bounded mixed integer programming.
|
1207.1411
|
Bayes' Bluff: Opponent Modelling in Poker
|
cs.GT cs.AI
|
Poker is a challenging problem for artificial intelligence, with
non-deterministic dynamics, partial observability, and the added difficulty of
unknown adversaries. Modelling all of the uncertainties in this domain is not
an easy task. In this paper we present a Bayesian probabilistic model for a
broad class of poker games, separating the uncertainty in the game dynamics
from the uncertainty of the opponent's strategy. We then describe approaches to
two key subproblems: (i) inferring a posterior over opponent strategies given a
prior distribution and observations of their play, and (ii) playing an
appropriate response to that distribution. We demonstrate the overall approach
on a reduced version of poker using Dirichlet priors and then on the full game
of Texas hold'em using a more informed prior. We demonstrate methods for
playing effective responses to the opponent, based on the posterior.
|
1207.1412
|
Point-Based POMDP Algorithms: Improved Analysis and Implementation
|
cs.AI
|
Existing complexity bounds for point-based POMDP value iteration algorithms
focus either on the curse of dimensionality or the curse of history. We derive
a new bound that relies on both and uses the concept of discounted
reachability; our conclusions may help guide future algorithm design. We also
discuss recent improvements to our (point-based) heuristic search value
iteration algorithm. Our new implementation calculates tighter initial bounds,
avoids solving linear programs, and makes more effective use of sparsity.
|
1207.1413
|
Discovery of non-gaussian linear causal models using ICA
|
cs.LG cs.MS stat.ML
|
In recent years, several methods have been proposed for the discovery of
causal structure from non-experimental data (Spirtes et al. 2000; Pearl 2000).
Such methods make various assumptions on the data generating process to
facilitate its identification from purely observational data. Continuing this
line of research, we show how to discover the complete causal structure of
continuous-valued data, under the assumptions that (a) the data generating
process is linear, (b) there are no unobserved confounders, and (c) disturbance
variables have non-gaussian distributions of non-zero variances. The solution
relies on the use of the statistical method known as independent component
analysis (ICA), and does not require any pre-specified time-ordering of the
variables. We provide a complete Matlab package for performing this LiNGAM
analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the
effectiveness of the method using artificially generated data.
|
1207.1414
|
Two-Way Latent Grouping Model for User Preference Prediction
|
cs.IR cs.LG stat.ML
|
We introduce a novel latent grouping model for predicting the relevance of a
new document to a user. The model assumes a latent group structure for both
users and documents. We compared the model against a state-of-the-art method,
the User Rating Profile model, where only users have a latent group structure.
We estimate both models by Gibbs sampling. The new method predicts relevance
more accurately for new documents that have few known ratings. The reason is
that generalization over documents then becomes necessary and hence the two-way
grouping is profitable.
|
1207.1415
|
Approximate Linear Programming for First-order MDPs
|
cs.AI
|
We introduce a new approximate solution technique for first-order Markov
decision processes (FOMDPs). Representing the value function linearly w.r.t. a
set of first-order basis functions, we compute suitable weights by casting the
corresponding optimization as a first-order linear program and show how
off-the-shelf theorem provers and LP software can be used effectively. This
technique allows one to solve FOMDPs independent of a specific domain
instantiation; furthermore, it allows one to determine bounds on approximation
error that apply equally to all domain instantiations. We apply this solution
technique to the task of elevator scheduling with a rich feature space and
multi-criteria additive reward, and demonstrate that it outperforms a number of
intuitive, heuristically guided policies.
|
1207.1416
|
Predictive Linear-Gaussian Models of Stochastic Dynamical Systems
|
cs.AI
|
Models of dynamical systems based on predictive state representations (PSRs)
are defined strictly in terms of observable quantities, in contrast with
traditional models (such as Hidden Markov Models) that use latent variables or
state-space representations. In addition, PSRs have an effectively infinite
memory, allowing them to model some systems that finite memory-based models
cannot. Thus far, PSR models have primarily been developed for domains with
discrete observations. Here, we develop the Predictive Linear-Gaussian (PLG)
model, a class of PSR models for domains with continuous observations. We show
that PLG models subsume Linear Dynamical System models (also called Kalman
filter models or state-space models) while using fewer parameters. We also
introduce an algorithm to estimate PLG parameters from data, and contrast it
with standard Expectation Maximization (EM) algorithms used to estimate Kalman
filter parameters. We show that our algorithm is a consistent estimation
procedure and present preliminary empirical results suggesting that our
algorithm outperforms EM, particularly as the model dimension increases.
|
1207.1417
|
The DLR Hierarchy of Approximate Inference
|
cs.LG stat.ML
|
We propose a hierarchy for approximate inference based on the Dobrushin,
Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms,
such as belief propagation, and also motivates novel algorithms such as
factorized neighbors (FN) algorithms and variants of mean field (MF)
algorithms. In particular, we show that extrema of the Bethe free energy
correspond to approximate solutions of the DLR equations. In addition, we
demonstrate a close connection between these approximate algorithms and Gibbs
sampling. Finally, we compare and contrast various algorithms in the DLR
hierarchy on spin-glass problems. The experiments show that algorithms higher
up in the hierarchy give more accurate results when they converge but tend to
be less stable.
|
1207.1418
|
Efficient Test Selection in Active Diagnosis via Entropy Approximation
|
cs.AI
|
We consider the problem of diagnosing faults in a system represented by a
Bayesian network, where diagnosis corresponds to recovering the most likely
state of unobserved nodes given the outcomes of tests (observed nodes). Finding
an optimal subset of tests in this setting is intractable in general. We show
that it is difficult even to compute the next most-informative test using
greedy test selection, as it involves several entropy terms whose exact
computation is intractable. We propose an approximate approach that utilizes
the loopy belief propagation infrastructure to simultaneously compute
approximations of marginal and conditional entropies on multiple subsets of
nodes. We apply our method to fault diagnosis in computer networks, and show
the algorithm to be very effective on realistic Internet-like topologies. We
also provide theoretical justification for the greedy test selection approach,
along with some performance guarantees.
|
1207.1419
|
A Transformational Characterization of Markov Equivalence for Directed
Acyclic Graphs with Latent Variables
|
cs.AI stat.ME
|
Different directed acyclic graphs (DAGs) may be Markov equivalent in the
sense that they entail the same conditional independence relations among the
observed variables. Chickering (1995) provided a transformational
characterization of Markov equivalence for DAGs (with no latent variables),
which is useful in deriving properties shared by Markov equivalent DAGs, and,
with certain generalization, is needed to prove the asymptotic correctness of a
search procedure over Markov equivalence classes, known as the GES algorithm.
For DAG models with latent variables, maximal ancestral graphs (MAGs) provide a
neat representation that facilitates model search. However, no transformational
characterization -- analogous to Chickering's -- of Markov equivalent MAGs is
yet available. This paper establishes such a characterization for directed
MAGs, which we expect will have similar uses as it does for DAGs.
|