id | title | categories | abstract |
|---|---|---|---|
1302.3283 | StructBoost: Boosting Methods for Predicting Structured Output Variables | cs.LG | Boosting is a method for learning a single accurate predictor by linearly
combining a set of less accurate weak learners. Recently, structured learning
has found many applications in computer vision. Inspired by structured support
vector machines (SSVM), here we propose a new boosting algorithm for structured
output prediction, which we refer to as StructBoost. StructBoost supports
nonlinear structured learning by combining a set of weak structured learners.
As SSVM generalizes SVM, our StructBoost generalizes standard boosting
approaches such as AdaBoost and LPBoost to structured learning. The resulting
optimization problem of StructBoost is more challenging than SSVM in the sense
that it may involve exponentially many variables and constraints. In contrast,
for SSVM one usually has an exponential number of constraints and a
cutting-plane method is used. In order to efficiently solve StructBoost, we
derive an equivalent $1$-slack formulation and solve it using a
combination of cutting planes and column generation. We show the versatility
and usefulness of StructBoost on a range of problems such as optimizing the
tree loss for hierarchical multi-class classification, optimizing the Pascal
overlap criterion for robust visual tracking and learning conditional random
field parameters for image segmentation.
|
1302.3292 | On Consistency of Operational Transformation Approach | cs.DC cs.SY | The Operational Transformation (OT) approach, used in many collaborative
editors, allows a group of users to concurrently update replicas of a shared
object and exchange their updates in any order. The basic idea of this approach
is to transform any received update operation before its execution on a replica
of the object. This transformation aims to ensure the convergence of the
different replicas of the object, even though the operations are executed in
different orders. However, designing transformation functions for achieving
convergence is a critical and challenging issue. Indeed, the transformation
functions proposed in the literature have all been shown to be incorrect.
In this paper, we investigate the existence of transformation functions for a
shared string altered by insert and delete operations. From the theoretical
point of view, two properties - named TP1 and TP2 - are necessary and
sufficient to ensure convergence. Using a controller synthesis technique, we
show
that there are some transformation functions which satisfy only TP1 for the
basic signatures of insert and delete operations. As a matter of fact, it is
impossible to meet both properties TP1 and TP2 with these simple signatures.
|
1302.3299 | The Geography of Happiness: Connecting Twitter sentiment and expression,
demographics, and objective characteristics of place | physics.soc-ph cs.SI | We conduct a detailed investigation of correlations between real-time
expressions of individuals made across the United States and a wide range of
emotional, geographic, demographic, and health characteristics. We do so by
combining (1) a massive, geo-tagged data set comprising over 80 million words
generated over the course of several recent years on the social network service
Twitter and (2) annually-surveyed characteristics of all 50 states and close to
400 urban populations. Among many results, we generate taxonomies of states and
cities based on their similarities in word use; estimate the happiness levels
of states and cities; correlate highly-resolved demographic characteristics
with happiness levels; and connect word choice and message length with urban
characteristics such as education levels and obesity rates. Our results show
how social media may potentially be used to estimate real-time levels and
changes in population-level measures such as obesity rates.
|
1302.3365 | Under-approximating Cut Sets for Reachability in Large Scale Automata
Networks | cs.SY | In the scope of discrete finite-state models of interacting components, we
present a novel algorithm for identifying sets of local states of components
whose activity is necessary for the reachability of a given local state. If all
the local states from such a set are disabled in the model, the concerned
reachability is impossible. Those sets are referred to as cut sets and are
computed from a particular abstract causality structure, the so-called Graph of
Local Causality, inspired by previous work and generalised here to finite
automata networks. The extracted sets of local states form an
under-approximation of the complete minimal cut sets of the dynamics: there may
exist smaller or additional cut sets for the given reachability. Applied to
qualitative models of biological systems, such cut sets provide potential
therapeutic targets that are proven to prevent molecules of interest from
becoming active, up to the correctness of the model. Our new method makes
tractable the
formal analysis of very large scale networks, as illustrated by the computation
of cut sets within a Boolean model of biological pathways interactions
gathering more than 9000 components.
|
1302.3407 | A consistent clustering-based approach to estimating the number of
change-points in highly dependent time-series | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | The problem of change-point estimation is considered under a general
framework where the data are generated by unknown stationary ergodic process
distributions. In this context, the consistent estimation of the number of
change-points is provably impossible. However, it is shown that a consistent
clustering method may be used to estimate the number of change-points, under
the additional constraint that the correct number of process distributions that
generate the data is provided. This additional parameter has a natural
interpretation in many real-world applications. An algorithm is proposed that
estimates the number of change-points and locates the changes. The proposed
algorithm is shown to be asymptotically consistent; its empirical evaluations
are provided.
|
1302.3412 | Secrecy capacities of compound quantum wiretap channels and applications | cs.IT math.IT quant-ph | We determine the secrecy capacity of the compound channel with quantum
wiretapper and channel state information at the transmitter. Moreover, we
derive a lower bound on the secrecy capacity of this channel without channel
state information and determine the secrecy capacity of the compound
classical-quantum wiretap channel with channel state information at the
transmitter. We use this result to derive a new proof for a lower bound on the
entanglement generating capacity of a compound quantum channel. We also derive
a new proof for the entanglement generating capacity of a compound quantum
channel with channel state information at the encoder.
|
1302.3416 | Centralized Versus Decentralized Team Games of Distributed Stochastic
Differential Decision Systems with Noiseless Information Structures-Part II:
Applications | math.OC cs.IT cs.SY math.IT | In this second part of our two-part paper, we invoke the stochastic maximum
principle, conditional Hamiltonian and the coupled backward-forward stochastic
differential equations of the first part [1] to derive team optimal
decentralized strategies for distributed stochastic differential systems with
noiseless information structures. We present examples of such team games of
nonlinear as well as linear quadratic forms. In some cases we obtain closed
form expressions of the optimal decentralized strategies.
Through the examples, we illustrate the effect of information signaling among
the decision makers in reducing the computational complexity of optimal
decentralized decision strategies.
|
1302.3446 | Adaptive Temporal Compressive Sensing for Video | stat.AP cs.CV cs.MM | This paper introduces the concept of adaptive temporal compressive sensing
(CS) for video. We propose a CS algorithm to adapt the compression ratio based
on the scene's temporal complexity, computed from the compressed data, without
compromising the quality of the reconstructed video. The temporal adaptivity is
manifested by manipulating the integration time of the camera, opening the
possibility to real-time implementation. The proposed algorithm is a
generalized temporal CS approach that can be incorporated with a diverse set of
existing hardware systems.
|
1302.3447 | Exact Methods for Multistage Estimation of a Binomial Proportion | math.ST cs.LG cs.NA math.PR stat.TH | We first review existing sequential methods for estimating a binomial
proportion. Afterward, we propose a new family of group sequential sampling
schemes for estimating a binomial proportion with prescribed margin of error
and confidence level. In particular, we establish the uniform controllability
of coverage probability and the asymptotic optimality for such a family of
sampling schemes. Our theoretical results show that the parameters of this
family of sampling schemes can be determined so that the prescribed level of
confidence is guaranteed with little waste of samples.
Analytic bounds for the cumulative distribution functions and expectations of
sample numbers are derived. Moreover, we discuss the inherent connection of
various sampling schemes. Numerical issues are addressed for improving the
accuracy and efficiency of computation. Computational experiments are conducted
for comparing sampling schemes. Illustrative examples are given for
applications in clinical trials.
|
1302.3452 | Centralized Versus Decentralized Team Games of Distributed Stochastic
Differential Decision Systems with Noiseless Information Structures-Part I:
General Theory | math.OC cs.IT math.IT math.ST stat.TH | Decentralized optimization of distributed stochastic differential systems has
been an active area of research for over half a century. Its formulation
utilizing static team and person-by-person optimality criteria is well
investigated. However, the results have not been generalized to nonlinear
distributed stochastic differential systems possibly due to technical
difficulties inherent with decentralized decision strategies.
In this first part of the two-part paper, we derive team optimality and
person-by-person optimality conditions for distributed stochastic differential
systems with different information structures. The optimality conditions are
given in terms of a Hamiltonian system of equations described by a system of
coupled backward and forward stochastic differential equations and a
conditional Hamiltonian, under both regular and relaxed strategies. Our
methodology is based on the semimartingale representation theorem and
variational methods. Throughout the presentation we discuss similarities to
optimality conditions of centralized decision making.
|
1302.3492 | Outer Bounds for Multiterminal Source Coding via a Strong Data
Processing Inequality | cs.IT math.IT | An intuitive outer bound for the multiterminal source coding problem is
given. The proposed bound explicitly couples the rate distortion functions for
each source and correlation measures which derive from a "strong" data
processing inequality. Unlike many standard outer bounds, the proposed bound is
not parameterized by a continuous family of auxiliary random variables, but
instead only requires maximizing two ratios of divergences which do not depend
on the distortion functions under consideration.
|
1302.3518 | Analysis of the Min-Sum Algorithm for Packing and Covering Problems via
Linear Programming | cs.IT cs.DM cs.DS math.IT | Message-passing algorithms based on belief-propagation (BP) are successfully
used in many applications including decoding error correcting codes and solving
constraint satisfaction and inference problems. BP-based algorithms operate
over graph representations, called factor graphs, that are used to model the
input. Although in many cases BP-based algorithms exhibit impressive empirical
results, not much has been proved when the factor graphs have cycles.
This work deals with packing and covering integer programs in which the
constraint matrix is zero-one, the constraint vector is integral, and the
variables are subject to box constraints. We study the performance of the
min-sum algorithm when applied to the corresponding factor graph models of
packing and covering LPs.
We compare the solutions computed by the min-sum algorithm for packing and
covering problems to the optimal solutions of the corresponding linear
programming (LP) relaxations. In particular, we prove that if the LP has an
optimal fractional solution, then for each fractional component, the min-sum
algorithm either computes multiple solutions or the solution oscillates below
and above the fraction. This implies that the min-sum algorithm computes the
optimal integral solution only if the LP has a unique optimal solution that is
integral.
The converse is not true in general. For a special case of packing and
covering problems, we prove that if the LP has a unique optimal solution that
is integral and on the boundary of the box constraints, then the min-sum
algorithm computes the optimal solution in pseudo-polynomial time.
Our results unify and extend recent results for the maximum weight matching
problem by [Sanghavi et al., 2011] and [Bayati et al., 2011] and for the
maximum weight independent set problem by [Sanghavi et al., 2009].
|
1302.3530 | Duality between equilibrium and growing networks | cond-mat.stat-mech cs.SI physics.soc-ph | In statistical physics any given system can be either at an equilibrium or
away from it. Networks are no exception. Most network models can be
classified as either equilibrium or growing. Here we show that under certain
conditions there exists an equilibrium formulation for any growing network
model, and vice versa. The equivalence between the equilibrium and
nonequilibrium formulations is exact not only asymptotically, but even for any
finite system size. The required conditions are satisfied in random geometric
graphs in general and causal sets in particular, and to a large extent in some
real networks.
|
1302.3541 | An analysis of NK and generalized NK landscapes | cs.NE | Simulated landscapes have been used for decades to evaluate search strategies
whose goal is to find the landscape location with maximum fitness. Applications
include modeling the capacity of enzymes to catalyze reactions and the clinical
effectiveness of medical treatments. Understanding properties of landscapes is
important for understanding search difficulty. This paper presents a novel and
transparent characterization of NK landscapes.
We prove that NK landscapes can be represented by parametric linear
interaction models where model coefficients have meaningful interpretations. We
derive the statistical properties of the model coefficients, providing insight
into how the NK algorithm apportions importance to main effects and
interactions.
An important insight derived from the linear model representation is that the
rank of the linear model defined by the NK algorithm is correlated with the
number of local optima, a strong determinant of landscape complexity and search
difficulty. We show that the maximal rank for an NK landscape is achieved
through epistatic interactions that form partially balanced incomplete block
designs. Finally, an analytic expression representing the expected number of
local optima on the landscape is derived, providing a way to quickly compute
the expected number of local optima for very large landscapes.
|
1302.3548 | On Realizations of a Joint Degree Matrix | math.CO cs.DM cs.SI | The joint degree matrix of a graph gives the number of edges between vertices
of degree i and degree j for every pair (i,j). One can perform restricted swap
operations to transform a graph into another with the same joint degree matrix.
We prove that the space of all realizations of a given joint degree matrix over
a fixed vertex set is connected via these restricted swap operations. This was
claimed before, but there is an error in the previous proof, which we
illustrate by example. We also give a simplified proof of the necessary and
sufficient conditions for a matrix to be a joint degree matrix. Finally, we
address some of the issues concerning the mixing time of the corresponding MCMC
method to sample uniformly from these realizations.
|
1302.3549 | An Algorithm for Finding Minimum d-Separating Sets in Belief Networks | cs.AI | The criterion commonly used in directed acyclic graphs (dags) for testing
graphical independence is the well-known d-separation criterion. It allows us
to build graphical representations of dependency models (usually probabilistic
dependency models) in the form of belief networks, which make it easy to
interpret and manage independence relationships, without
reference to numerical parameters (conditional probabilities). In this paper,
we study the following combinatorial problem: finding the minimum d-separating
set for two nodes in a dag. This set would represent the minimum information
(in the sense of minimum number of variables) necessary to prevent these two
nodes from influencing each other. The solution to this basic problem and some
of its extensions can be useful in several ways, as we shall see later. Our
solution is based on a two-step process: first, we reduce the original problem
to the simpler one of finding a minimum separating set in an undirected graph,
and second, we develop an algorithm for solving it.
|
1302.3550 | Constraining Influence Diagram Structure by Generative Planning: An
Application to the Optimization of Oil Spill Response | cs.AI | This paper works through the optimization of a real world planning problem,
with a combination of a generative planning tool and an influence diagram
solver. The problem is taken from an existing application in the domain of oil
spill emergency response. The planning agent manages constraints that order
sets of feasible equipment employment actions. This is mapped at an
intermediate level of abstraction onto an influence diagram. In addition, the
planner can apply a surveillance operator that determines observability of the
state---the unknown trajectory of the oil. The uncertain world state and the
objective function properties are part of the influence diagram structure, but
not represented in the planning agent domain. By exploiting this structure
under the constraints generated by the planning agent, the influence diagram
solution complexity simplifies considerably, and an optimum solution to the
employment problem based on the objective function is found. Finding this
optimum is equivalent to the simultaneous evaluation of a range of plans. This
result is an example of bounded optimality, within the limitations of this
hybrid generative planner and influence diagram architecture.
|
1302.3551 | Inference Using Message Propagation and Topology Transformation in
Vector Gaussian Continuous Networks | cs.AI | We extend Gaussian networks - directed acyclic graphs that encode
probabilistic relationships between variables - to their vector form. Vector
Gaussian continuous networks consist of composite nodes representing
multivariates that take continuous values. These vector or composite nodes can
represent correlations between parents, as opposed to conventional univariate
nodes. We derive rules for inference in these networks based on two methods:
message propagation and topology transformation. These two approaches lead to
algorithms that can be implemented in either a centralized or a decentralized
manner. The domain of application of these networks is monitoring and
estimation problems. This new representation along with the
rules for inference developed here can be used to derive current Bayesian
algorithms such as the Kalman filter, and provide a rich foundation to develop
new algorithms. We illustrate this process by deriving the decentralized form
of the Kalman filter. This work unifies concepts from artificial intelligence
and modern control theory.
|
1302.3552 | A Structurally and Temporally Extended Bayesian Belief Network Model:
Definitions, Properties, and Modeling Techniques | cs.AI | We developed the language of Modifiable Temporal Belief Networks (MTBNs) as a
structural and temporal extension of Bayesian Belief Networks (BNs) to
facilitate normative temporal and causal modeling under uncertainty. In this
paper we present definitions of the model, its components, and its fundamental
properties. We also discuss how to represent various types of temporal
knowledge, with an emphasis on hybrid temporal-explicit time modeling, dynamic
structures, avoiding causal temporal inconsistencies, and dealing with models
that simultaneously involve actions (decisions) and causal and non-causal
associations. We examine the relationships among BNs, Modifiable Belief
Networks, and MTBNs with a single temporal granularity, and suggest areas of
application suitable to each one of them.
|
1302.3553 | An Alternative Markov Property for Chain Graphs | cs.AI | Graphical Markov models use graphs, either undirected, directed, or mixed, to
represent possible dependences among statistical variables. Applications of
undirected graphs (UDGs) include models for spatial dependence and image
analysis, while acyclic directed graphs (ADGs), which are especially convenient
for statistical analysis, arise in such fields as genetics and psychometrics
and as models for expert systems and Bayesian belief networks. Lauritzen,
Wermuth and Frydenberg (LWF) introduced a Markov property for chain graphs,
which are mixed graphs that can be used to represent simultaneously both causal
and associative dependencies and which include both UDGs and ADGs as special
cases. In this paper an alternative Markov property (AMP) for chain graphs is
introduced, which in some ways is a more direct extension of the ADG Markov
property than is the LWF property for chain graphs.
|
1302.3554 | Plan Development using Local Probabilistic Models | cs.AI | Approximate models of world state transitions are necessary when building
plans for complex systems operating in dynamic environments. External event
probabilities can depend on state feature values as well as time spent in that
particular state. We assign temporally-dependent probability functions to
state transitions. These functions are used to locally compute state
probabilities, which are then used to select highly probable goal paths and
eliminate improbable states. This probabilistic model has been implemented in
the Cooperative Intelligent Real-time Control Architecture (CIRCA), which
combines an AI planner with a separate real-time system such that plans are
developed, scheduled, and executed with real-time guarantees. We present flight
simulation tests that demonstrate how our probabilistic model may improve CIRCA
performance.
|
1302.3555 | Entailment in Probability of Thresholded Generalizations | cs.AI | A nonmonotonic logic of thresholded generalizations is presented. Given
propositions A and B from a language L and a positive integer k, the
thresholded generalization A=>B{k} means that the conditional probability
P(B|A) falls short of one by no more than c*d^k. A two-level probability
structure is defined. At the lower level, a model is defined to be a
probability function on L. At the upper level, there is a probability
distribution over models. A definition is given of what it means for a
collection of thresholded generalizations to entail another thresholded
generalization. This nonmonotonic entailment relation, called "entailment in
probability", has the feature that its conclusions are "probabilistically
trustworthy" meaning that, given true premises, it is improbable that an
entailed conclusion would be false. A procedure is presented for ascertaining
whether any given collection of premises entails any given conclusion. It is
shown that entailment in probability is closely related to Goldszmidt and
Pearl's System-Z^+, thereby demonstrating that the conclusions of System-Z^+
are probabilistically trustworthy.
|
1302.3556 | Object Recognition with Imperfect Perception and Redundant Description | cs.CV cs.AI | This paper deals with a scene recognition system in a robotics context. The
general problem is to match images with a priori descriptions. A typical
mission would consist in identifying an object in an installation with a vision
system situated at the end of a manipulator, using a description provided by a
human operator, formulated in a pseudo-natural language and possibly redundant.
The originality of this work comes from the nature of the description, from the
special attention given to the management of imprecision and uncertainty in the
interpretation process and from the way to assess the description redundancy so
as to reinforce the overall matching likelihood.
|
1302.3557 | Approximations for Decision Making in the Dempster-Shafer Theory of
Evidence | cs.AI | The computational complexity of reasoning within the Dempster-Shafer theory
of evidence is one of the main points of criticism this formalism has to face.
To overcome this difficulty various approximation algorithms have been
suggested that aim at reducing the number of focal elements in the belief
functions involved. Besides introducing a new algorithm using this method, this
paper describes an empirical study that examines the appropriateness of these
approximation procedures in decision making situations. It presents the
empirical findings and discusses the various tradeoffs that have to be taken
into account when actually applying one of these methods.
|
1302.3558 | A Sufficiently Fast Algorithm for Finding Close to Optimal Junction
Trees | cs.DS cs.AI | An algorithm is developed for finding a close to optimal junction tree of a
given graph G. The algorithm has a worst case complexity O(c^k n^a) where a and
c are constants, n is the number of vertices, and k is the size of the largest
clique in a junction tree of G in which this size is minimized. The algorithm
guarantees that the logarithm of the size of the state space of the heaviest
clique in the junction tree produced is less than a constant factor off the
optimal value. When k = O(log n), our algorithm yields a polynomial inference
algorithm for Bayesian networks.
|
1302.3559 | Coping with the Limitations of Rational Inference in the Framework of
Possibility Theory | cs.AI | Possibility theory offers a framework where both Lehmann's "preferential
inference" and the more productive (but less cautious) "rational closure
inference" can be represented. However, there are situations where the second
inference does not provide expected results either because it cannot produce
them, or even provide counter-intuitive conclusions. This state of facts is not
due to the principle of selecting a unique ordering of interpretations (which
can be encoded by one possibility distribution), but rather to the absence of
constraints expressing pieces of knowledge we have implicitly in mind. It is
advocated in this paper that constraints induced by independence information
can help find the right ordering of interpretations. In particular,
independence constraints can be systematically assumed with respect to formulas
composed of literals which do not appear in the conditional knowledge base, or
for default rules with respect to situations which are "normal" according to
the other default rules in the base. The notion of independence which is used
can be easily expressed in the qualitative setting of possibility theory.
Moreover, when a counter-intuitive plausible conclusion of a set of defaults
is in its rational closure, but not in its preferential closure, it is always
possible to repair the set of defaults so as to produce the desired conclusion.
|
1302.3560 | Arguing for Decisions: A Qualitative Model of Decision Making | cs.AI | We develop a qualitative model of decision making with two aims: to describe
how people make simple decisions and to enable computer programs to do the
same. Current approaches based on Planning or Decision Theory either ignore
uncertainty and tradeoffs, or provide languages and algorithms that are too
complex for this task. The proposed model provides a language based on rules, a
semantics based on high probabilities and lexicographical preferences, and a
transparent decision procedure where reasons for and against decisions
interact. The model is no substitute for Decision Theory, yet for decisions
that people find easy to explain it may provide an appealing alternative.
|
1302.3561 | Learning Conventions in Multiagent Stochastic Domains using Likelihood
Estimates | cs.GT cs.MA | Fully cooperative multiagent systems - those in which agents share a joint
utility model - are of special interest in AI. A key problem is that of ensuring
that the actions of individual agents are coordinated, especially in settings
where the agents are autonomous decision makers. We investigate approaches to
learning coordinated strategies in stochastic domains where an agent's actions
are not directly observable by others. Much recent work in game theory has
adopted a Bayesian learning perspective to the more general problem of
equilibrium selection, but tends to assume that actions can be observed. We
discuss the special problems that arise when actions are not observable,
including effects on rates of convergence, and the effect of action failure
probabilities and asymmetries. We also use likelihood estimates as a means of
generalizing fictitious play learning models in our setting. Finally, we
propose the use of maximum likelihood as a means of removing strategies from
consideration, with the aim of convergence to a conventional equilibrium, at
which point learning and deliberation can cease.
|
1302.3562 | Context-Specific Independence in Bayesian Networks | cs.AI | Bayesian networks provide a language for qualitatively representing the
conditional independence properties of a distribution. This allows a natural
and compact representation of the distribution, eases knowledge acquisition,
and supports effective inference algorithms. It is well-known, however, that
there are certain independencies that we cannot capture qualitatively within
the Bayesian network structure: independencies that hold only in certain
contexts, i.e., given a specific assignment of values to certain variables. In
this paper, we propose a formal notion of context-specific independence (CSI),
based on regularities in the conditional probability tables (CPTs) at a node.
We present a technique, analogous to (and based on) d-separation, for
determining when such independence holds in a given network. We then focus on a
particular qualitative representation scheme - tree-structured CPTs - for
capturing CSI. We suggest ways in which this representation can be used to
support effective inference algorithms. In particular, we present a structural
decomposition of the resulting network which can improve the performance of
clustering algorithms, and an alternative algorithm based on cutset
conditioning.
|
1302.3563 | Decision-Theoretic Troubleshooting: A Framework for Repair and
Experiment | cs.AI | We develop and extend existing decision-theoretic methods for troubleshooting
a nonfunctioning device. Traditionally, diagnosis with Bayesian networks has
focused on belief updating---determining the probabilities of various faults
given current observations. In this paper, we extend this paradigm to include
taking actions. In particular, we consider three classes of actions: (1) we can
make observations regarding the behavior of a device and infer likely faults as
in traditional diagnosis, (2) we can repair a component and then observe the
behavior of the device to infer likely faults, and (3) we can change the
configuration of the device, observe its new behavior, and infer the likelihood
of faults. Analysis of the latter two classes of troubleshooting actions requires
incorporating notions of persistence into the belief-network formalism used for
probabilistic inference.
|
1302.3564 | Tail Sensitivity Analysis in Bayesian Networks | cs.AI stat.AP | The paper presents an efficient method for simulating the tails of a target
variable Z=h(X) which depends on a set of basic variables X=(X_1, ..., X_n). To
this aim, variables X_i, i=1, ..., n are sequentially simulated in such a
manner that Z=h(x_1, ..., x_i-1, X_i, ..., X_n) is guaranteed to be in the tail
of Z. When this method is difficult to apply, an alternative method is
proposed, which leads to a low rejection proportion of sample values, when
compared with the Monte Carlo method. Both methods are shown to be very useful
to perform a sensitivity analysis of Bayesian networks, when very large
confidence intervals for the marginal/conditional probabilities are required,
as in reliability or risk analysis. The methods are shown to behave best when
all scores coincide. The required modifications for this to occur are
discussed. The methods are illustrated with several examples and one example of
application to a real case is used to illustrate the whole process.
|
1302.3565 | Decision-Analytic Approaches to Operational Decision Making: Application
and Observation | cs.AI cs.CY | Decision analysis (DA) and the rich set of tools developed by researchers in
decision making under uncertainty show great potential to penetrate the
technological content of the products and services delivered by firms in a
variety of industries as well as the business processes used to deliver those
products and services to market. In this paper I describe work in progress at
Sun Microsystems in the application of decision-analytic methods to Operational
Decision Making (ODM) in its World-Wide Operations (WWOPS) Business Management
Group. Working with members of product engineering, marketing, and sales,
operations planners from WWOPS have begun to use a decision-analytic framework
called SCRAM (Supply Communication/Risk Assessment and Management) to structure
and solve problems in product planning, tracking, and transition. Concepts such
as information value provide a powerful method of managing huge information
sets and thereby enable managers to focus attention on factors that matter most
for their business. Finally, our process-oriented introduction of
decision-analytic methods to Sun managers has led to a focused effort to
develop decision support software based on methods from decision making under
uncertainty.
|
1302.3566 | Learning Equivalence Classes of Bayesian Network Structures | cs.AI cs.LG stat.ML | Approaches to learning Bayesian networks from data typically combine a
scoring function with a heuristic search procedure. Given a Bayesian network
structure, many of the scoring functions derived in the literature return a
score for the entire equivalence class to which the structure belongs. When
using such a scoring function, it is appropriate for the heuristic search
algorithm to search over equivalence classes of Bayesian networks as opposed to
individual structures. We present the general formulation of a search space for
which the states of the search correspond to equivalence classes of structures.
Using this space, any one of a number of heuristic search algorithms can easily
be applied. We compare greedy search performance in the proposed search space
to greedy search performance in a search space for which the states correspond
to individual Bayesian network structures.
|
1302.3567 | Efficient Approximations for the Marginal Likelihood of Incomplete Data
Given a Bayesian Network | cs.LG cs.AI stat.ML | We discuss Bayesian methods for learning Bayesian networks when data sets are
incomplete. In particular, we examine asymptotic approximations for the
marginal likelihood of incomplete data given a Bayesian network. We consider
the Laplace approximation and the less accurate but more efficient BIC/MDL
approximation. We also consider approximations proposed by Draper (1993) and
Cheeseman and Stutz (1995). These approximations are as efficient as BIC/MDL,
but their accuracy has not been studied in any depth. We compare the accuracy
of these approximations under the assumption that the Laplace approximation is
the most accurate. In experiments using synthetic data generated from discrete
naive-Bayes models having a hidden root node, we find that the CS measure is
the most accurate.
|
1302.3568 | Independence with Lower and Upper Probabilities | cs.AI | It is shown that the ability of the interval probability representation to
capture epistemological independence is severely limited. Two events are
epistemologically independent if knowledge of the first event does not alter
belief (i.e., probability bounds) about the second. However, independence in
this form can only exist in a 2-monotone probability function in degenerate
cases, i.e., if the prior bounds are either point probabilities or entirely
vacuous. Additional limitations are characterized for other classes of lower
probabilities as well. It is argued that these phenomena are simply a matter of
interpretation. They appear to be limitations when one interprets probability
bounds as a measure of epistemological indeterminacy (i.e., uncertainty arising
from a lack of knowledge), but are exactly as one would expect when probability
intervals are interpreted as representations of ontological indeterminacy
(indeterminacy introduced by structural approximations). The ontological
interpretation is introduced and discussed.
|
1302.3569 | Propagation of 2-Monotone Lower Probabilities on an Undirected Graph | cs.AI | Lower and upper probabilities, also known as Choquet capacities, are widely
used as a convenient representation for sets of probability distributions. This
paper presents a graphical decomposition and exact propagation algorithm for
computing marginal posteriors of 2-monotone lower probabilities (equivalently,
2-alternating upper probabilities).
|
1302.3570 | Quasi-Bayesian Strategies for Efficient Plan Generation: Application to
the Planning to Observe Problem | cs.AI | Quasi-Bayesian theory uses convex sets of probability distributions and
expected loss to represent preferences about plans. The theory focuses on
decision robustness, i.e., the extent to which plans are affected by deviations
in subjective assessments of probability. The present work presents solutions
for plan generation when robustness of probability assessments must be
included: plans contain information about the robustness of certain actions.
The surprising result is that some problems can be solved faster in the
Quasi-Bayesian framework than within usual Bayesian theory. We investigate this
on the planning to observe problem, i.e., an agent must decide whether to take
new observations or not. The fundamental question is: How, and how much, to
search for a "best" plan, based on the robustness of probability assessments?
Plan generation algorithms are derived in the context of material
classification with an acoustic robotic probe. A package that constructs
Quasi-Bayesian plans is available through anonymous ftp.
|
1302.3571 | Some Experiments with Real-Time Decision Algorithms | cs.AI | Real-time Decision algorithms are a class of incremental resource-bounded
[Horvitz, 89] or anytime [Dean, 93] algorithms for evaluating influence
diagrams. We present a test domain for real-time decision algorithms, and the
results of experiments with several Real-time Decision Algorithms in this
domain. The results demonstrate high performance for two algorithms, a
decision-evaluation variant of Incremental Probabilistic Inference [D'Ambrosio
93] and a variant of an algorithm suggested by Goldszmidt, [Goldszmidt, 95],
PK-reduced. We discuss the implications of these experimental results and
explore the broader applicability of these algorithms.
|
1302.3572 | Bucket Elimination: A Unifying Framework for Several Probabilistic
Inference Algorithms | cs.AI | Probabilistic inference algorithms for finding the most probable explanation,
the maximum aposteriori hypothesis, and the maximum expected utility and for
updating belief are reformulated as an elimination-type algorithm called
bucket elimination. This emphasizes the principle common to many of the
algorithms appearing in the literature and clarifies their relationship to
nonserial dynamic programming algorithms. We also present a general way of
combining conditioning and elimination within this framework. Bounds on
complexity are given for all the algorithms as a function of the problem's
structure.
|
1302.3573 | Topological Parameters for Time-Space Tradeoff | cs.AI | In this paper we propose a family of algorithms combining tree-clustering
with conditioning that trade space for time. Such algorithms are useful for
reasoning in probabilistic and deterministic networks as well as for
accomplishing optimization tasks. By analyzing the problem structure it will be
possible to select from a spectrum the algorithm that best meets a given
time-space specification.
|
1302.3574 | Sound Abstraction of Probabilistic Actions in The Constraint Mass
Assignment Framework | cs.AI | This paper provides a formal and practical framework for sound abstraction of
probabilistic actions. We start by precisely defining the concept of sound
abstraction within the context of finite-horizon planning (where each plan is a
finite sequence of actions). Next we show that such abstraction cannot be
performed within the traditional probabilistic action representation, which
models a world with a single probability distribution over the state space. We
then present the constraint mass assignment representation, which models the
world with a set of probability distributions and is a generalization of mass
assignment representations. Within this framework, we present sound abstraction
procedures for three types of action abstraction. We end the paper with
discussions and related work on sound and approximate abstraction. We give
pointers to papers in which we discuss other sound abstraction-related issues,
including applications, estimating loss due to abstraction, and automatically
generating abstraction hierarchies.
|
1302.3575 | Belief Revision with Uncertain Inputs in the Possibilistic Setting | cs.AI | This paper discusses belief revision under uncertain inputs in the framework
of possibility theory. Revision can be based on two possible definitions of the
conditioning operation, one based on min operator which requires a purely
ordinal scale only, and another based on product, for which a richer structure
is needed, and which is a particular case of Dempster's rule of conditioning.
Besides, revision under uncertain inputs can be understood in two different
ways depending on whether the input is viewed, or not, as a constraint to
enforce. Moreover, it is shown that M.A. Williams' transmutations, originally
defined in the setting of Spohn's functions, can be captured in this framework,
as well as Boutilier's natural revision.
|
1302.3576 | An Evaluation of Structural Parameters for Probabilistic Reasoning:
Results on Benchmark Circuits | cs.AI | Many algorithms for processing probabilistic networks are dependent on the
topological properties of the problem's structure. Such algorithms (e.g.,
clustering, conditioning) are effective only if the problem has a sparse graph
captured by parameters such as tree width and cycle-cut set size. In this paper
we initiate a study to determine the potential of structure-based algorithms in
real-life applications. We analyze empirically the structural properties of
problems coming from the circuit diagnosis domain. Specifically, we locate
those properties that capture the effectiveness of clustering and conditioning
as well as of a family of conditioning+clustering algorithms designed to
gradually trade space for time. We perform our analysis on 11 benchmark
circuits widely used in the testing community. We also report on the effect of
ordering heuristics on tree-clustering and show that, on our benchmarks, the
well-known max-cardinality ordering is substantially inferior to an ordering
called min-degree.
|
1302.3577 | Learning Bayesian Networks with Local Structure | cs.AI cs.LG stat.ML | In this paper we examine a novel addition to the known methods for learning
Bayesian networks from data that improves the quality of the learned networks.
Our approach explicitly represents and learns the local structure in the
conditional probability tables (CPTs), that quantify these networks. This
increases the space of possible models, enabling the representation of CPTs
with a variable number of parameters that depends on the learned local
structures. The resulting learning procedure is capable of inducing models that
better emulate the real complexity of the interactions present in the data. We
describe the theoretical foundations and practical aspects of learning local
structures, as well as an empirical evaluation of the proposed method. This
evaluation indicates that learning curves characterizing the procedure that
exploits the local structure converge faster than those of the standard
procedure. Our results also show that networks learned with local structure
tend to be more complex (in terms of arcs), yet require fewer parameters.
|
1302.3578 | A Qualitative Markov Assumption and its Implications for Belief Change | cs.AI | The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. Roughly, revision treats a surprising
observation as a sign that previous beliefs were wrong, while update treats a
surprising observation as an indication that the world has changed. In general,
we would expect that an agent making an observation may both want to revise
some earlier beliefs and assume that some change has occurred in the world. We
define a novel approach to belief change that allows us to do this, by applying
ideas from probability theory in a qualitative setting. The key idea is to use
a qualitative Markov assumption, which says that state transitions are
independent. We show that a recent approach to modeling qualitative uncertainty
using plausibility measures allows us to make such a qualitative Markov
assumption in a relatively straightforward way, and show how the Markov
assumption can be used to provide an attractive belief-change model.
|
1302.3579 | On the Sample Complexity of Learning Bayesian Networks | cs.LG stat.ML | In recent years there has been an increasing interest in learning Bayesian
networks from data. One of the most effective methods for learning such
networks is based on the minimum description length (MDL) principle. Previous
work has shown that this learning procedure is asymptotically successful: with
probability one, it will converge to the target distribution, given a
sufficient number of samples. However, the rate of this convergence has been
hitherto unknown. In this work we examine the sample complexity of MDL based
learning procedures for Bayesian networks. We show that the number of samples
needed to learn an epsilon-close approximation (in terms of entropy distance)
with confidence delta is O((1/epsilon)^(4/3) log(1/epsilon) log(1/delta)
loglog(1/delta)). This means that the sample complexity is a low-order polynomial in
the error threshold and sub-linear in the confidence bound. We also discuss how
the constants in this term depend on the complexity of the target distribution.
Finally, we address questions of asymptotic minimality and propose a method for
using the sample complexity results to speed up the learning process.
|
1302.3580 | Asymptotic Model Selection for Directed Networks with Hidden Variables | cs.LG cs.AI stat.ML | We extend the Bayesian Information Criterion (BIC), an asymptotic
approximation for the marginal likelihood, to Bayesian networks with hidden
variables. This approximation can be used to select models given large samples
of data. The standard BIC as well as our extension punishes the complexity of a
model according to the dimension of its parameters. We argue that the dimension
of a Bayesian network with hidden variables is the rank of the Jacobian matrix
of the transformation between the parameters of the network and the parameters
of the observable variables. We compute the dimensions of several networks
including the naive Bayes model with a hidden root node.
|
1302.3581 | Theoretical Foundations for Abstraction-Based Probabilistic Planning | cs.AI | Modeling worlds and actions under uncertainty is one of the central problems
in the framework of decision-theoretic planning. The representation must be
general enough to capture real-world problems but at the same time it must
provide a basis upon which theoretical results can be derived. The central
notion in the framework we propose here is that of the affine-operator, which
serves as a tool for constructing (convex) sets of probability distributions,
and which can be considered as a generalization of belief functions and
interval mass assignments. Uncertainty in the state of the worlds is modeled
with sets of probability distributions, represented by affine-trees while
actions are defined as tree-manipulators. A small set of key properties of the
affine-operator is presented, forming the basis for most existing
operator-based definitions of probabilistic action projection and action
abstraction. We derive and prove correct three projection rules, which vividly
illustrate the precision-complexity tradeoff in plan projection. Finally, we
show how the three types of action abstraction identified by Haddawy and Doan
are manifested in the present framework.
|
1302.3582 | Why Is Diagnosis Using Belief Networks Insensitive to Imprecision In
Probabilities? | cs.AI | Recent research has found that diagnostic performance with Bayesian belief
networks is often surprisingly insensitive to imprecision in the numerical
probabilities. For example, the authors have recently completed an extensive
study in which they applied random noise to the numerical probabilities in a
set of belief networks for medical diagnosis, subsets of the CPCS network, a
subset of the QMR (Quick Medical Reference) focused on liver and bile diseases.
The diagnostic performance in terms of the average probabilities assigned to
the actual diseases showed small sensitivity even to large amounts of noise. In
this paper, we summarize the findings of this study and discuss possible
explanations of this low sensitivity. One reason is that the criterion for
performance is average probability of the true hypotheses, rather than average
error in probability, which is insensitive to symmetric noise distributions.
But we show that even asymmetric, log-odds normal noise has modest effects. A
second reason is that the gold-standard posterior probabilities are often near
zero or one, and are little disturbed by noise.
|
1302.3583 | Flexible Policy Construction by Information Refinement | cs.AI | We report on work towards flexible algorithms for solving decision problems
represented as influence diagrams. An algorithm is given to construct a tree
structure for each decision node in an influence diagram. Each tree represents
a decision function and is constructed incrementally. The improvements to the
tree converge to the optimal decision function (neglecting computational costs)
and the asymptotic behaviour is only a constant factor worse than dynamic
programming techniques, counting the number of Bayesian network queries.
Empirical results show how expected utility increases with the size of the tree
and the number of Bayesian net calculations.
|
1302.3584 | Efficient Search-Based Inference for Noisy-OR Belief Networks:
TopEpsilon | cs.AI | Inference algorithms for arbitrary belief networks are impractical for large,
complex belief networks. Inference algorithms for specialized classes of belief
networks have been shown to be more efficient. In this paper, we present a
search-based algorithm for approximate inference on arbitrary, noisy-OR belief
networks, generalizing earlier work on search-based inference for two-level,
noisy-OR belief networks. Initial experimental results appear promising.
|
1302.3585 | A Probabilistic Model For Sensor Validation | cs.AI | The validation of data from sensors has become an important issue in the
operation and control of modern industrial plants. One approach is to use
knowledge based techniques to detect inconsistencies in measured data. This
article presents a probabilistic model for the detection of such
inconsistencies. Based on probability propagation, this method is able to find
the existence of a possible fault among the set of sensors. That is, if an
error exists, many sensors present an apparent fault due to the propagation
from the sensor(s) with a real fault. So the fault detection mechanism can only
tell if a sensor has a potential fault, but it can not tell if the fault is
real or apparent. So the central problem is to develop a theory, and then an
algorithm, for distinguishing real and apparent faults, given that one or more
sensors can fail at the same time. This article then, presents an approach
based on two levels: (i) probabilistic reasoning, to detect a potential fault,
and (ii) constraint management, to distinguish the real fault from the apparent
ones. The proposed approach is exemplified by applying it to a power plant
model.
|
1302.3586 | Computing Upper and Lower Bounds on Likelihoods in Intractable Networks | cs.AI | We present deterministic techniques for computing upper and lower bounds on
marginal probabilities in sigmoid and noisy-OR networks. These techniques
become useful when the size of the network (or clique size) precludes exact
computations. We illustrate the tightness of the bounds by numerical
experiments.
|
1302.3587 | MIDAS - An Influence Diagram for Management of Mildew in Winter Wheat | cs.AI | We present a prototype of a decision support system for management of the
fungal disease mildew in winter wheat. The prototype is based on an influence
diagram which is used to determine the optimal time and dose of mildew
treatments. This involves multiple decision opportunities over time,
stochasticity, inaccurate information and incomplete knowledge. The paper
describes the practical and theoretical problems encountered during the
construction of the influence diagram, and also the experience with the
prototype.
|
1302.3588 | Computational Complexity Reduction for BN2O Networks Using Similarity of
States | cs.AI | Although probabilistic inference in a general Bayesian belief network is an
NP-hard problem, computation time for inference can be reduced in most
practical cases by exploiting domain knowledge and by making approximations in
the knowledge representation. In this paper we introduce the property of
similarity of states and a new method for approximate knowledge representation
and inference which is based on this property. We define two or more states of
a node to be similar when the ratio of their probabilities, the likelihood
ratio, does not depend on the instantiations of the other nodes in the network.
We show that the similarity of states exposes redundancies in the joint
probability distribution which can be exploited to reduce the computation time
of probabilistic inference in networks with multiple similar states, and that
the computational complexity in the networks with exponentially many similar
states might be polynomial. We demonstrate our ideas on the example of a BN2O
network -- a two layer network often used in diagnostic problems -- by reducing
it to a very close network with multiple similar states. We show that the
answers to practical queries converge very fast to the answers obtained with
the original network. The maximum error is as low as 5% for models that require
only 10% of the computation time needed by the original BN2O model.
|
1302.3589 | Uncertain Inferences and Uncertain Conclusions | cs.AI | Uncertainty may be taken to characterize inferences, their conclusions, their
premises, or all three. Under some treatments of uncertainty, the inference
itself is never characterized by uncertainty. We explore both the significance
of uncertainty in the premises and in the conclusion of an argument that
involves uncertainty. We argue that for uncertainty to characterize the
conclusion of an inference is natural, but that there is an interplay between
uncertainty in the premises and uncertainty in the procedure of argument
itself. We show that it is possible in principle to incorporate all uncertainty
in the premises, rendering uncertainty arguments deductively valid. But we then
argue (1) that this does not reflect human argument, (2) that it is
computationally costly, and (3) that the gain in simplicity obtained by
allowing uncertain inference can sometimes outweigh the loss of flexibility
it entails.
|
1302.3590 | Bayesian Learning of Loglinear Models for Neural Connectivity | cs.LG q-bio.NC stat.AP stat.ML | This paper presents a Bayesian approach to learning the connectivity
structure of a group of neurons from data on configuration frequencies. A major
objective of the research is to provide statistical tools for detecting changes
in firing patterns with changing stimuli. Our framework is not restricted to
the well-understood case of pair interactions, but generalizes the Boltzmann
machine model to allow for higher order interactions. The paper applies a
Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over
connectivity structures and uses Laplace's method to approximate posterior
probabilities of structures. Performance of the methods was tested on synthetic
data. The models were also applied to data obtained by Vaadia on multi-unit
recordings of several neurons in the visual cortex of a rhesus monkey in two
different attentional states. Results confirmed the experimenters' conjecture
that different attentional states were associated with different interaction
structures.
|
1302.3591 | Network Engineering for Complex Belief Networks | cs.AI | Like any large system development effort, the construction of a complex
belief network model requires systems engineering to manage the design and
construction process. We propose a rapid prototyping approach to network
engineering. We describe criteria for identifying network modules and the use
of "stubs" to represent not-yet-constructed modules. We propose an object
oriented representation for belief networks which captures the semantics of the
problem in addition to conditional independencies and probabilities. Methods
for evaluating complex belief network models are discussed. The ideas are
illustrated with examples from a large belief network construction problem in
the military intelligence domain.
|
1302.3592 | Probabilistic Disjunctive Logic Programming | cs.AI | In this paper we propose a framework for combining Disjunctive Logic
Programming and Poole's Probabilistic Horn Abduction. We use the concept of
hypothesis to specify the probability structure. We consider the case in which
probabilistic information is not available. Instead of using probability
intervals, we allow for the specification of the probabilities of disjunctions.
Because minimal models are used as characteristic models in disjunctive logic
programming, we apply the principle of indifference on the set of minimal
models to derive default probability values. We define the concepts of
explanation and partial explanation of a formula, and use them to determine the
default probability distribution(s) induced by a program. An algorithm for
calculating the default probability of a goal is presented.
|
1302.3593 | Toward a Market Model for Bayesian Inference | cs.GT cs.AI | We present a methodology for representing probabilistic relationships in a
general-equilibrium economic model. Specifically, we define a precise mapping
from a Bayesian network with binary nodes to a market price system where
consumers and producers trade in uncertain propositions. We demonstrate the
correspondence between the equilibrium prices of goods in this economy and the
probabilities represented by the Bayesian network. A computational market model
such as this may provide a useful framework for investigations of belief
aggregation, distributed probabilistic inference, resource allocation under
uncertainty, and other problems of decentralized uncertainty.
|
1302.3594 | Geometric Implications of the Naive Bayes Assumption | cs.AI | A naive (or Idiot) Bayes network is a network with a single hypothesis node
and several observations that are conditionally independent given the
hypothesis. We recently surveyed a number of members of the UAI community and
discovered a general lack of understanding of the implications of the Naive
Bayes assumption on the kinds of problems that can be solved by these networks.
It has long been recognized [Minsky 61] that if observations are binary, the
decision surfaces in these networks are hyperplanes. We extend this result
(hyperplane separability) to Naive Bayes networks with m-ary observations. In
addition, we illustrate the effect of observation-observation dependencies on
decision surfaces. Finally, we discuss the implications of these results on
knowledge acquisition and research in learning.
|
1302.3595 | Identifying Independencies in Causal Graphs with Feedback | cs.AI | We show that the d-separation criterion constitutes a valid test for
conditional independence relationships that are induced by feedback systems
involving discrete variables.
|
1302.3596 | A Graph-Theoretic Analysis of Information Value | cs.AI | We derive qualitative relationships about the informational relevance of
variables in graphical decision models based on a consideration of the topology
of the models. Specifically, we identify dominance relations for the expected
value of information on chance variables in terms of their position and
relationships in influence diagrams. The qualitative relationships can be
harnessed to generate nonnumerical procedures for ordering uncertain variables
in a decision model by their informational relevance.
|
1302.3597 | A Framework for Decision-Theoretic Planning I: Combining the Situation
Calculus, Conditional Plans, Probability and Utility | cs.AI | This paper shows how we can combine logical representations of actions and
decision theory in such a manner that seems natural for both. In particular we
assume an axiomatization of the domain in terms of situation calculus, using
what is essentially Reiter's solution to the frame problem, in terms of the
completion of the axioms defining the state change. Uncertainty is handled in
terms of the independent choice logic, which allows for independent choices and
a logic program that gives the consequences of the choices. As part of the
consequences are a specification of the utility of (final) states. The robot
adopts robot plans, similar to the GOLOG programming language. Within this
logic, we can define the expected utility of a conditional plan, based on the
axiomatization of the actions, the uncertainty and the utility. The 'planning'
problem is to find the plan with the highest expected utility. This is related
to recent structured representations for POMDPs; here we use stochastic
situation calculus rules to specify the state transition function and the
reward/value function. Finally we show that with stochastic frame axioms,
action representations in probabilistic STRIPS are exponentially larger than
using the representation proposed here.
|
1302.3598 | Optimal Monte Carlo Estimation of Belief Network Inference | cs.AI | We present two Monte Carlo sampling algorithms for probabilistic inference
that guarantee polynomial-time convergence for a larger class of networks than
current sampling algorithms provide. These new methods are variants of the
known likelihood weighting algorithm. We use recent advances in the theory
of optimal stopping rules for Monte Carlo simulation to obtain an inference
approximation with relative error epsilon and a small failure probability
delta. We present an empirical evaluation of the algorithms which demonstrates
their improved performance.
|
1302.3599 | A Discovery Algorithm for Directed Cyclic Graphs | cs.AI | Directed acyclic graphs have been used fruitfully to represent causal
structures (Pearl 1988). However, in the social sciences and elsewhere models
are often used which correspond both causally and statistically to directed
graphs with directed cycles (Spirtes 1995). Pearl (1993) discussed predicting
the effects of intervention in models of this kind, so-called linear
non-recursive structural equation models. This raises the question of whether
it is possible to make inferences about causal structure with cycles, from
sample data. In particular, do there exist general, informative, feasible and
reliable procedures for inferring causal structure from conditional
independence relations among variables in a sample generated by an unknown
causal structure? In this paper I present a discovery algorithm that is correct
in the large sample limit, given commonly (but often implicitly) made plausible
assumptions, and which provides information about the existence or
non-existence of causal pathways from one variable to another. The algorithm is
polynomial on sparse graphs.
|
1302.3600 | A Polynomial-Time Algorithm for Deciding Markov Equivalence of Directed
Cyclic Graphical Models | cs.AI | Although the concept of d-separation was originally defined for directed
acyclic graphs (see Pearl 1988), there is a natural extension of the concept to
directed cyclic graphs. When exactly the same set of d-separation relations
hold in two directed graphs, no matter whether respectively cyclic or acyclic,
we say that they are Markov equivalent. In other words, when two directed
cyclic graphs are Markov equivalent, the set of distributions that satisfy a
natural extension of the Global Directed Markov condition (Lauritzen et al.
1990) is exactly the same for each graph. There is an obvious exponential (in
the number of vertices) time algorithm for deciding Markov equivalence of two
directed cyclic graphs; simply check all of the d-separation relations in each
graph. In this paper I state a theorem that gives necessary and sufficient
conditions for the Markov equivalence of two directed cyclic graphs, where each
of the conditions can be checked in polynomial time. Hence, the theorem can be
easily adapted into a polynomial time algorithm for deciding the Markov
equivalence of two directed cyclic graphs. Although space prohibits inclusion
of correctness proofs, they are fully described in Richardson (1994b).
|
1302.3601 | Coherent Knowledge Processing at Maximum Entropy by SPIRIT | cs.AI | SPIRIT is an expert system shell for probabilistic knowledge bases. Knowledge
acquisition is performed by processing facts and rules on discrete variables in
a rich syntax. The shell generates a probability distribution which respects
all acquired facts and rules and which maximizes entropy. The user-friendly
devices of SPIRIT to define variables, formulate rules and create the knowledge
base are revealed in detail. Inductive learning is possible. Medium sized
applications show the power of the system.
|
1302.3602 | Sample-and-Accumulate Algorithms for Belief Updating in Bayes Networks | cs.AI | Belief updating in Bayes nets, a well known computationally hard problem, has
recently been approximated by several deterministic algorithms, and by various
randomized approximation algorithms. Deterministic algorithms usually provide
probability bounds, but have an exponential runtime. Some randomized schemes
have a polynomial runtime, but provide only probability estimates. We present
randomized algorithms that enumerate high-probability partial instantiations,
resulting in probability bounds. Some of these algorithms are also sampling
algorithms. Specifically, we introduce and evaluate a variant of backward
sampling, both as a sampling algorithm and as a randomized enumeration
algorithm. We also relax the implicit assumption used by both sampling and
accumulation algorithms, that query nodes must be instantiated in all the
samples.
|
1302.3603 | A Measure of Decision Flexibility | cs.AI | We propose a decision-analytical approach to comparing the flexibility of
decision situations from the perspective of a decision-maker who exhibits
constant risk-aversion over a monetary value model. Our approach is simple yet
seems to be consistent with a variety of flexibility concepts, including robust
and adaptive alternatives. We try to compensate within the model for
uncertainty that was not anticipated or not modeled. This approach not only
allows one to compare the flexibility of plans, but also guides the search for
new, more flexible alternatives.
|
1302.3604 | Binary Join Trees | cs.AI | The main goal of this paper is to describe a data structure called the binary
join tree that is useful in computing multiple marginals efficiently using
the Shenoy-Shafer architecture. We define binary join trees, describe their
utility, and sketch a procedure for constructing them.
|
1302.3605 | Efficient Enumeration of Instantiations in Bayesian Networks | cs.AI | Over the past several years Bayesian networks have been applied to a wide
variety of problems. A central problem in applying Bayesian networks is that of
finding one or more of the most probable instantiations of a network. In this
paper we develop an efficient algorithm that incrementally enumerates the
instantiations of a Bayesian network in decreasing order of probability. Such
enumeration algorithms are applicable in a variety of applications ranging from
medical expert systems to model-based diagnosis. Fundamentally, our algorithm
is simply performing a lazy enumeration of the sorted list of all
instantiations of the network. This insight leads to a very concise algorithm
statement which is both easily understood and implemented. We show that for
singly connected networks, our algorithm generates the next instantiation in
time polynomial in the size of the network. The algorithm extends to arbitrary
Bayesian networks using standard conditioning techniques. We empirically
evaluate the enumeration algorithm and demonstrate its practicality.
|
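The lazy, best-first enumeration described in the abstract above can be sketched for the simplest special case, a network of fully independent binary variables, as a heap-based search (an illustrative simplification; the function name is mine, and the paper's algorithm handles singly connected and, via conditioning, arbitrary networks):

```python
import heapq

def enumerate_by_probability(marginals):
    """Lazily yield instantiations of independent binary variables in
    decreasing order of joint probability. marginals[i] is P(X_i = 1).
    Full independence is a simplifying assumption for illustration.
    """
    def prob(assign):
        p = 1.0
        for pi, v in zip(marginals, assign):
            p *= pi if v == 1 else 1.0 - pi
        return p

    # Most probable assignment: the likelier value of each variable.
    best = tuple(1 if p >= 0.5 else 0 for p in marginals)
    heap = [(-prob(best), best)]
    seen = {best}
    while heap:
        negp, assign = heapq.heappop(heap)
        yield assign, -negp
        # Successors: flip one variable away from its best value.
        for i in range(len(marginals)):
            if assign[i] == best[i]:
                succ = assign[:i] + (1 - assign[i],) + assign[i + 1:]
                if succ not in seen:
                    seen.add(succ)
                    heapq.heappush(heap, (-prob(succ), succ))
```

The popped order is correct because every successor's probability is at most its parent's, so the heap frontier always contains the next-most-probable unvisited assignment.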
1302.3606 | On Separation Criterion and Recovery Algorithm for Chain Graphs | cs.AI | Chain graphs give a natural unifying point of view on Markov and Bayesian
networks and enlarge the potential of graphical models for description of
conditional independence structures. In the paper a direct graphical separation
criterion for chain graphs, called c-separation, which generalizes the
d-separation criterion for Bayesian networks is introduced (recalled). It is
equivalent to the classic moralization criterion for chain graphs and complete
in the sense that for every chain graph there exists a probability distribution
satisfying exactly conditional independencies derivable from the chain graph by
the c-separation criterion. Every class of Markov equivalent chain graphs can
be uniquely described by a natural representative, called the largest chain
graph. A recovery algorithm, which on the basis of the (conditional) dependency
model induced by an unknown chain graph finds the corresponding largest chain
graph, is presented.
|
1302.3607 | Possible World Partition Sequences: A Unifying Framework for Uncertain
Reasoning | cs.AI | When we work with information from multiple sources, the formalism each
employs to handle uncertainty may not be uniform. In order to be able to
combine these knowledge bases of different formats, we need to first establish
a common basis for characterizing and evaluating the different formalisms, and
provide a semantics for the combined mechanism. A common framework can provide
an infrastructure for building an integrated system, and is essential if we are
to understand its behavior. We present a unifying framework based on an ordered
partition of possible worlds called partition sequences, which corresponds to
our intuitive notion of biasing towards certain possible scenarios when we are
uncertain of the actual situation. We show that some of the existing
formalisms, namely, default logic, autoepistemic logic, probabilistic
conditioning and thresholding (generalized conditioning), and possibility
theory can be incorporated into this general framework.
|
1302.3608 | Supply Restoration in Power Distribution Systems - A Case Study in
Integrating Model-Based Diagnosis and Repair Planning | cs.AI | Integrating diagnosis and repair is particularly crucial when gaining
sufficient information to discriminate between several candidate diagnoses
requires carrying out some repair actions. A typical case is supply restoration
in a faulty power distribution system. This problem, which is a major concern
for electricity distributors, features partial observability, and stochastic
repair actions which are more elaborate than simple replacement of components.
This paper analyses the difficulties in applying existing work on integrating
model-based diagnosis and repair and on planning in partially observable
stochastic domains to this real-world problem, and describes the pragmatic
approach we have retained so far.
|
1302.3609 | Real Time Estimation of Bayesian Networks | cs.AI | For real time evaluation of a Bayesian network when there is not sufficient
time to obtain an exact solution, an approximate solution with a guaranteed
response time is required. It is shown that nontraditional methods utilizing
estimators based on an archive of trial solutions and genetic search can
provide an approximate solution that is considerably superior to the
traditional Monte Carlo simulation methods.
|
1302.3610 | Testing Implication of Probabilistic Dependencies | cs.AI | Axiomatization has been widely used for testing logical implications. This
paper suggests a non-axiomatic method, the chase, to test if a new dependency
follows from a given set of probabilistic dependencies. Although the chase
computation may require exponential time in some cases, this technique is a
powerful tool for establishing nontrivial theoretical results. More
importantly, this approach provides valuable insight into the intriguing
connection between relational databases and probabilistic reasoning systems.
|
1302.3611 | Optimal Factory Scheduling using Stochastic Dominance A* | cs.AI | We examine a standard factory scheduling problem with stochastic processing
and setup times, minimizing the expectation of the weighted number of tardy
jobs. Because the costs of operators in the schedule are stochastic and
sequence dependent, standard dynamic programming algorithms such as A* may fail
to find the optimal schedule. The SDA* (Stochastic Dominance A*) algorithm
remedies this difficulty by relaxing the pruning condition. We present an
improved state-space search formulation for these problems and discuss the
conditions under which stochastic scheduling problems can be solved optimally
using SDA*. In empirical testing on randomly generated problems, we found that
in 70% of cases, the expected cost of the optimal stochastic solution is lower than that
of the solution derived using a deterministic approximation, with comparable
search effort.
|
1302.3612 | Critical Remarks on Single Link Search in Learning Belief Networks | cs.AI | In learning belief networks, the single link lookahead search is widely
adopted to reduce the search space. We show that there exists a class of
probabilistic domain models which displays a special pattern of dependency. We
analyze the behavior of several learning algorithms using different scoring
metrics such as the entropy, conditional independence, minimal description
length and Bayesian metrics. We demonstrate that single link lookahead search
procedures (employed in these algorithms) cannot learn these models correctly.
Thus, when the underlying domain model actually belongs to this class, the use
of a single link search procedure will result in learning of an incorrect
model. This may lead to inference errors when the model is used. Our analysis
suggests that if the prior knowledge about a domain does not rule out the
possible existence of these models, a multi-link lookahead search or other
heuristics should be used for the learning process.
|
1302.3639 | A Latent Source Model for Nonparametric Time Series Classification | stat.ML cs.LG cs.SI | For classifying time series, a nearest-neighbor approach is widely used in
practice with performance often competitive with or better than more elaborate
methods such as neural networks, decision trees, and support vector machines.
We develop theoretical justification for the effectiveness of
nearest-neighbor-like classification of time series. Our guiding hypothesis is
that in many applications, such as forecasting which topics will become trends
on Twitter, there aren't actually that many prototypical time series to begin
with, relative to the number of time series we have access to, e.g., topics
become trends on Twitter only in a few distinct manners whereas we can collect
massive amounts of Twitter data. To operationalize this hypothesis, we propose
a latent source model for time series, which naturally leads to a "weighted
majority voting" classification rule that can be approximated by a
nearest-neighbor classifier. We establish nonasymptotic performance guarantees
of both weighted majority voting and nearest-neighbor classification under our
model accounting for how much of the time series we observe and the model
complexity. Experimental results on synthetic data show weighted majority
voting achieving the same misclassification rate as nearest-neighbor
classification while observing less of the time series. We then use weighted
majority to forecast which news topics on Twitter become trends, where we are
able to detect such "trending topics" in advance of Twitter 79% of the time,
with a mean early advantage of 1 hour and 26 minutes, a true positive rate of
95%, and a false positive rate of 4%.
|
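The "weighted majority voting" rule discussed in the abstract above can be sketched as follows (an illustrative sketch: the exponential-kernel weighting and the `gamma` parameter follow common practice for such rules, not necessarily the paper's exact formulation):

```python
import math

def weighted_majority_vote(train, query, gamma=1.0):
    """Classify a partially observed time series: each labeled
    training series votes for its label with weight exp(-gamma * d^2),
    where d is the Euclidean distance over the observed prefix.
    As gamma grows, this approaches 1-nearest-neighbor classification.
    """
    scores = {}
    T = len(query)
    for series, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(series[:T], query))
        scores[label] = scores.get(label, 0.0) + math.exp(-gamma * d2)
    return max(scores, key=scores.get)
```

Note that only the first `T` observations of each training series are compared, matching the setting where a decision must be made after observing part of the time series.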
1302.3660 | On Zero Delay Source-Channel Coding | cs.IT math.IT | In this paper, we study the zero-delay source-channel coding problem, and
specifically the problem of obtaining the vector transformations that optimally
map between the m-dimensional source space and the k-dimensional channel space,
under a given transmission power constraint and for the mean square error
distortion. We first study the functional properties of this problem and show
that the objective is concave in the source and noise densities and convex in
the density of the input to the channel. We then derive the necessary
conditions for optimality of the encoder and decoder mappings. A well known
result in information theory pertains to the linearity of optimal encoding and
decoding mappings in the scalar Gaussian source and channel setting, at all
channel signal-to-noise ratios (CSNRs). In this paper, we study this result
more generally, beyond the Gaussian source and channel, and derive the
necessary and sufficient condition for linearity of optimal mappings, given a
noise (or source) distribution, and a specified power constraint. We also prove
that the Gaussian source-channel pair is unique in the sense that it is the
only source-channel pair for which the optimal mappings are linear at more than
one CSNR value. Moreover, we show the asymptotic linearity of optimal mappings
for low CSNR if the channel is Gaussian regardless of the source and, at the
other extreme, for high CSNR if the source is Gaussian, regardless of the
channel. Our numerical results show strict improvement over prior methods. The
numerical approach is extended to the scenario of source-channel coding with
decoder side information. The resulting encoding mappings are shown to be
continuous relatives of, and in fact subsume as a special case, the Wyner-Ziv
mappings encountered in digital distributed source coding systems.
|
1302.3663 | Spatially Heterogeneous Biofilm Simulations using an Immersed Boundary
Method with Lagrangian Nodes Defined by Bacterial Locations | math.NA cs.CE physics.flu-dyn | In this work we consider how surface-adherent bacterial biofilm communities
respond in flowing systems. We simulate the fluid-structure interaction and
separation process using the immersed boundary method. In these simulations we
model and simulate density and viscosity values of the biofilm that differ from
those of the surrounding fluid. The simulation also includes breakable springs
connecting the bacteria in the biofilm. This allows the inclusion of erosion
and detachment into the simulation. We use the incompressible Navier-Stokes
(N-S) equations to describe the motion of the flowing fluid. We discretize the
fluid equations using finite differences and use a geometric multigrid method
to solve the resulting equations at each time step. The use of multigrid is
necessary because of the dramatically different densities and viscosities
between the biofilm and the surrounding fluid. We investigate and simulate the
model in both two and three dimensions.
Our method differs from previous attempts at using the IBM for modeling
biofilm/flow interactions in the following ways: the density and viscosity of
the biofilm can differ from the surrounding fluid, and the Lagrangian node
locations correspond to experimentally measured bacterial cell locations from
3D images taken of Staphylococcus epidermidis in a biofilm.
|
1302.3668 | Bio-inspired data mining: Treating malware signatures as biosequences | cs.LG q-bio.QM stat.ML | The application of machine learning to bioinformatics problems is well
established. Less well understood is the application of bioinformatics
techniques to machine learning and, in particular, the representation of
non-biological data as biosequences. The aim of this paper is to explore the
effects of giving amino acid representation to problematic machine learning
data and to evaluate the benefits of supplementing traditional machine learning
with bioinformatics tools and techniques. The signatures of 60 computer viruses
and 60 computer worms were converted into amino acid representations and first
multiply aligned separately to identify conserved regions across different
families within each class (virus and worm). This was followed by a second
alignment of all 120 aligned signatures together so that non-conserved regions
were identified prior to input to a number of machine learning techniques.
Differences in length between virus and worm signatures after the first
alignment were resolved by the second alignment. Our first set of experiments
indicates that representing computer malware signatures as amino acid sequences
followed by alignment leads to greater classification and prediction accuracy.
Our second set of experiments indicates that checking the results of data
mining from artificial virus and worm data against known proteins can lead to
generalizations being made from the domain of naturally occurring proteins to
malware signatures. However, further work is needed to determine the advantages
and disadvantages of different representations and sequence alignment methods
for handling problematic machine learning data.
|
1302.3681 | On Weak Dress Codes for Cloud Storage | cs.IT math.IT | In a distributed storage network, reliability and bandwidth optimization can
be provided by regenerating codes. Recently, table-based regenerating codes,
viz. DRESS (Distributed Replication-based Exact Simple Storage) codes, have been
proposed, which also optimize the disk I/O. DRESS codes consist of an outer
MDS code and an inner fractional repetition (FR) code with replication degree
$\rho$. Several constructions of FR codes based on regular graphs, resolvable
designs and bipartite graphs are known. This paper presents a simple modular
construction of FR codes. We also generalize the concept of FR codes to weak
fractional repetition (WFR) codes, where each node has a different number of
packets. We present a construction of WFR codes based on partial regular graphs.
Finally, we present a simple generalized ring construction of both strong and
weak fractional repetition codes.
|
1302.3700 | Density Ratio Hidden Markov Models | stat.ML cs.LG | Hidden Markov models and their variants are the predominant sequential
classification method in such domains as speech recognition, bioinformatics and
natural language processing. Because they are generative rather than
discriminative models, however, their classification performance is a drawback. In this paper
we apply ideas from the field of density ratio estimation to bypass the
difficult step of learning likelihood functions in HMMs. By reformulating
inference and model fitting in terms of density ratios and applying a fast
kernel-based estimation method, we show that it is possible to obtain a
striking increase in discriminative performance while retaining the
probabilistic qualities of the HMM. We demonstrate experimentally that this
formulation makes more efficient use of training data than alternative
approaches.
|
1302.3702 | A Fresnelet-Based Encryption of Medical Images using Arnold Transform | cs.CR cs.CV | Medical images are commonly stored in digital media and transmitted via
Internet for certain uses. If a medical image is altered, this can lead
to a wrong diagnosis which may create a serious health problem. Moreover,
medical images in digital form can easily be modified by wiping off or adding
small pieces of information intentionally for certain illegal purposes. Hence,
the reliability of medical images is an important criterion in a hospital
information system. In this paper, Fresnelet transform is employed along with
appropriate handling of the Arnold transform and the discrete cosine transform
to provide secure distribution of medical images. This method presents a new
data hiding system in which steganography and cryptography are used to prevent
unauthorized data access. The experimental results exhibit high
imperceptibility for embedded images and significant encryption of information
images.
|
1302.3705 | Partial Third-Party Information Exchange with Network Coding | cs.IT math.IT | In this paper, we consider the problem of exchanging channel state
information in a wireless network such that a subset of the clients can obtain
the complete channel state information of all the links in the network. We
first derive the minimum number of required transmissions for such partial
third-party information exchange problem. We then design an optimal
transmission scheme by determining the number of packets that each client
should send, and designing a deterministic encoding strategy such that the
subset of clients can acquire complete channel state information of the network
with a minimal number of transmissions. Numerical results show that network
coding can efficiently reduce the number of transmissions, even with only
pairwise encoding.
|
1302.3721 | Thompson Sampling in Switching Environments with Bayesian Online Change
Point Detection | cs.LG | Thompson Sampling has recently been shown to be optimal in the Bernoulli
Multi-Armed Bandit setting [Kaufmann et al., 2012]. This bandit problem assumes
stationary distributions for the rewards. It is often unrealistic to model the
real world as a stationary distribution. In this paper we derive and evaluate
algorithms using Thompson Sampling for a Switching Multi-Armed Bandit Problem.
We propose a Thompson Sampling strategy equipped with a Bayesian change point
mechanism to tackle this problem. We develop algorithms for a variety of cases
with a constant switching rate: when a switch occurs, either all arms change
(Global Switching) or arms switch independently (Per-Arm Switching); and when
the switching rate is known versus when it must be inferred from data. This
leads to a family of algorithms we collectively term Change-Point Thompson
Sampling (CTS). We show empirical results of the algorithms in 4 artificial
environments and 2 derived from real-world data: news click-through [Yahoo!,
2011] and foreign exchange data [Dukascopy, 2012], comparing them to some other
bandit algorithms. On the real-world data, CTS is the most effective.
|
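The Global Switching setting described in the abstract above can be sketched with standard Bernoulli Thompson sampling plus a crude hazard-based global reset (a much simplified stand-in for the paper's Bayesian change-point machinery, which tracks a run-length distribution; the function name and reset rule are illustrative, not the CTS algorithm itself):

```python
import random

def thompson_switching(pull, n_arms, horizon, hazard=0.0, rng=None):
    """Bernoulli Thompson sampling with a global-switching heuristic:
    with probability `hazard` per round, all Beta posteriors are reset
    to the uniform prior. `pull(arm)` returns a 0/1 reward; the total
    reward over `horizon` rounds is returned.
    """
    rng = rng or random.Random()
    a = [1.0] * n_arms  # Beta alpha (successes + 1) per arm
    b = [1.0] * n_arms  # Beta beta (failures + 1) per arm
    total = 0
    for _ in range(horizon):
        if hazard and rng.random() < hazard:
            a = [1.0] * n_arms  # forget: a global switch may have occurred
            b = [1.0] * n_arms
        samples = [rng.betavariate(a[i], b[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        r = pull(arm)
        total += r
        if r:
            a[arm] += 1.0
        else:
            b[arm] += 1.0
    return total
```

With `hazard=0` this reduces to plain Thompson sampling for the stationary Bernoulli bandit.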
1302.3723 | Computing preimages of Boolean Networks | cs.IT math.IT | In this paper we present an algorithm to address the predecessor problem of
feed-forward Boolean networks. We propose a probabilistic algorithm, which
solves this problem in linear time with respect to the number of nodes in the
network. Finally, we evaluate our algorithm for random Boolean networks and the
regulatory network of Escherichia coli.
|
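The predecessor problem stated in the abstract above can be fixed precisely with a brute-force sketch (exponential in the number of nodes, unlike the paper's linear-time algorithm for feed-forward networks; shown only to pin down the problem statement):

```python
import itertools

def preimages(update_fns, state):
    """Return all predecessor states x of a Boolean network with
    F(x) == state, where F applies each node's update function to the
    full state vector. update_fns[i] maps a state tuple to node i's
    next value (0 or 1).
    """
    n = len(update_fns)
    target = tuple(state)
    return [x for x in itertools.product((0, 1), repeat=n)
            if tuple(f(x) for f in update_fns) == target]
```

A state with no preimage is a "garden of Eden" state; a state with several preimages shows why the inverse dynamics are not a function.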
1302.3747 | Construction of minimal non-abelian left group codes | math.RT cs.IT math.GR math.IT math.RA | Algorithms to construct minimal left group codes are provided. These are
based on results describing a complete set of orthogonal primitive idempotents
in each Wedderburn component of a semisimple finite group algebra FG for a
large class of groups G.
As an illustration of our methods, alternative constructions to some best
linear codes over F_2 and F_3 are given.
|
1302.3777 | Capacity of the State-Dependent Half-Duplex Relay Channel Without
Source-Destination Link | cs.IT math.IT | We derive the capacity of the state-dependent half-duplex relay channel
without source-destination link. The output of the state-dependent half-duplex
relay channel depends on the randomly varying channel states of the
source-relay and relay-destination links, which are known causally at all three
nodes. For this channel, we prove a converse and show the achievability of the
capacity based on a buffer-aided relaying protocol with adaptive link
selection. This protocol chooses in each time slot one codeword to be
transmitted over either the source-relay or the relay-destination channel
depending on the channel states. Our proof of the converse reveals that
state-dependent half-duplex relay networks offer one additional degree of
freedom which has been previously overlooked. Namely, the freedom of the
half-duplex relay to choose when to receive and when to transmit.
|
1302.3785 | Analysis of Descent-Based Image Registration | cs.CV | We present a performance analysis for image registration with gradient
descent methods. We consider a typical multiscale registration setting where
the global 2-D translation between a pair of images is estimated by smoothing
the images and minimizing the distance between them with gradient descent. Our
study particularly concentrates on the effect of noise and low-pass filtering
on the alignment accuracy. We adopt an analytic representation for images and
analyze the well-behavedness of the image distance function by estimating the
neighborhood of translations for which it is free of undesired local minima.
This corresponds to the neighborhood of translation vectors that are correctly
computable with a simple gradient descent minimization. We show that the area
of this neighborhood increases at least quadratically with the smoothing filter
size, which justifies the use of a smoothing step in image registration with
local optimizers such as gradient descent. We then examine the effect of noise
on the alignment accuracy and derive an upper bound for the alignment error in
terms of the noise properties and filter size. Our main finding is that the
error increases at a rate that is at least linear with respect to the filter
size. Therefore, smoothing improves the well-behavedness of the distance
function; however, this comes at the cost of amplifying the alignment error in
noisy settings. Our results provide a mathematical insight about why
hierarchical techniques are effective in image registration, suggesting that
the multiscale coarse-to-fine alignment strategy of these techniques is very
suitable from the perspective of the trade-off between the well-behavedness of
the objective function and the registration accuracy. To the best of our
knowledge, this is the first such study for descent-based image registration.
|
1302.3800 | An Enhanced Spectral Efficiency Chaos-Based Symbolic Dynamics
Transceiver Design | cs.IT math.IT | Chaotic synchronization performs poorly in noisy environments, with the main
drawback being that the coherent receiver cannot be implemented in realistic
communication channels. In this paper, we focus our study on a promising
communication system based on chaotic symbolic dynamics. Such modulation shows
a high synchronization quality, without the need for a complex chaotic
synchronization mechanism. Our study mainly concerns an improvement of the
bandwidth efficiency of the chaotic modulator. A new chaotic map is proposed to
achieve this goal, and a receiver based on the maximum likelihood algorithm is
designed to estimate the transmitted symbols. The performance of the proposed
system is analyzed and discussed.
|
1302.3826 | Quickest Search Over Multiple Sequences with Mixed Observations | cs.IT math.IT | The problem of sequentially finding an independent and identically
distributed (i.i.d.) sequence that is drawn from a probability distribution
$F_1$ by searching over multiple sequences, some of which are drawn from $F_1$
and the others of which are drawn from a different distribution $F_0$, is
considered. The sensor is allowed to take one observation at a time. It has
been shown in a recent work that if each observation comes from one sequence,
Cumulative Sum (CUSUM) test is optimal. In this paper, we propose a new
approach in which each observation can be a linear combination of samples from
multiple sequences. The test has two stages. In the first stage, namely
scanning stage, one takes a linear combination of a pair of sequences with the
hope of scanning through sequences that are unlikely to be generated from $F_1$
and quickly identifying a pair of sequences such that at least one of them is
highly likely to be generated by $F_1$. In the second stage, namely refinement
stage, one examines the pair identified from the first stage more closely and
picks one sequence to be the final sequence. The problem under this setup
belongs to a class of multiple stopping time problems. In particular, it is an
ordered two concatenated Markov stopping time problem. We obtain the optimal
solution using the tools from the multiple stopping time theory. Numerical
simulation results show that this search strategy can significantly reduce the
searching time, especially when $F_{1}$ is rare.
|
1302.3828 | Rumor Spreading in Random Evolving Graphs | cs.DM cs.DC cs.SI math.PR | Randomized gossip is one of the most popular ways of disseminating information
in large scale networks. This method is appreciated for its simplicity,
robustness, and efficiency. In the "push" protocol, every informed node
selects, at every time step (a.k.a. round), one of its neighboring nodes
uniformly at random and forwards the information to this node. This protocol is
known to complete information spreading in $O(\log n)$ time steps with high
probability (w.h.p.) in several families of $n$-node "static" networks. The
Push protocol has also been empirically shown to perform well in practice, and,
specifically, to be robust against dynamic topological changes.
In this paper, we aim at analyzing the Push protocol in "dynamic" networks.
We consider the "edge-Markovian" evolving graph model which captures natural
temporal dependencies between the structure of the network at time $t$, and the
one at time $t+1$. Precisely, a non-edge appears with probability $p$, while an
existing edge dies with probability $q$. In order to fit with real-world
traces, we mostly concentrate our study on the case where $p=\Omega(1/n)$ and
$q$ is constant. We prove that, in this realistic scenario, the Push protocol
does perform well, completing information spreading in $O(\log n)$ time steps
w.h.p. Note that this performance holds even when the network is, w.h.p.,
disconnected at every time step (e.g., when $p \ll (\log n) / n$). Our result
provides the first formal argument demonstrating the robustness of the Push
protocol against network changes. We also address other ranges of parameters
$p$ and $q$ (e.g., $p+q=1$ with arbitrary $p$ and $q$, and $p=1/n$ with
arbitrary $q$). Although they do not precisely fit with the measures performed
on real-world traces, they can be of independent interest for other settings.
The results in these cases confirm the positive impact of dynamism.
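The two model ingredients above — edge-Markovian graph evolution (non-edges born with probability $p$, edges dying with probability $q$) and the push rule (every informed node forwards to one uniformly random neighbor per round) — can be sketched in a small simulation. This is an illustrative Monte Carlo toy, not the paper's analysis; all parameter choices are assumptions.

```python
import random

def edge_markovian_step(n, edges, p, q):
    """One step of the edge-Markovian model: each non-edge appears with
    probability p, each existing edge dies with probability q."""
    new_edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if (u, v) in edges:
                if random.random() > q:   # edge survives with prob 1-q
                    new_edges.add((u, v))
            elif random.random() < p:     # non-edge born with prob p
                new_edges.add((u, v))
    return new_edges

def push_rounds(n, p, q, seed=0):
    """Number of rounds until the push protocol informs all n nodes."""
    random.seed(seed)
    edges = set()
    informed = {0}
    rounds = 0
    while len(informed) < n:
        edges = edge_markovian_step(n, edges, p, q)
        neighbors = {u: [] for u in range(n)}
        for u, v in edges:
            neighbors[u].append(v)
            neighbors[v].append(u)
        newly = set()
        for u in informed:               # each informed node pushes once
            if neighbors[u]:
                newly.add(random.choice(neighbors[u]))
        informed |= newly
        rounds += 1
    return rounds
```

Since the informed set can at most double per round, any run needs at least $\lceil \log_2 n \rceil$ rounds, which makes the $O(\log n)$ bound the best order achievable.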
|
1302.3831 | Quantum Entanglement in Concept Combinations | cs.AI cs.CL quant-ph | Research in the application of quantum structures to cognitive science
confirms that these structures quite systematically appear in the dynamics of
concepts and their combinations, and that quantum-based models faithfully
represent experimental data in situations where classical approaches are
problematic.
In this paper, we analyze the data we collected in an experiment on a specific
conceptual combination, showing that Bell's inequalities are violated in the
experiment. We present a new refined entanglement scheme to model these data
within standard quantum theory rules, where 'entangled measurements and
entangled evolutions' occur, in addition to the expected 'entangled states',
and present a full quantum representation in complex Hilbert space of the data.
This stronger form of entanglement in measurements and evolutions might have
relevant applications in the foundations of quantum theory, as well as in the
interpretation of nonlocality tests. It could indeed explain some
non-negligible 'anomalies' identified in EPR-Bell experiments.
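A Bell-inequality violation of the kind reported above can be illustrated numerically with the standard CHSH combination. The sketch below is a generic textbook computation (singlet-state correlations at the Tsirelson measurement angles), not the conceptual-combination data from the experiment.

```python
import math

def chsh(E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E[(0, 0)] - E[(0, 1)] + E[(1, 0)] + E[(1, 1)]

# Singlet-state correlations E(a,b) = -cos(a - b) at the standard
# CHSH settings a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a = [0.0, math.pi / 2]
b = [math.pi / 4, 3 * math.pi / 4]
E = {(i, j): -math.cos(a[i] - b[j]) for i in range(2) for j in range(2)}

S = chsh(E)
# Any local (classical) model satisfies |S| <= 2, while these quantum
# correlations reach |S| = 2*sqrt(2), the Tsirelson bound.
```

The same combination applied to the judgment frequencies collected in a concept-combination experiment is how a violation of the classical bound $|S| \le 2$ is diagnosed.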
|
1302.3834 | Non-Bayesian Quickest Detection with Stochastic Sample Right Constraints | cs.IT math.IT | In this paper, we study the design and analysis of an optimal detection scheme
for sensors that are deployed to monitor the change in the environment and are
powered by the energy harvested from the environment. In this type of
application, detection delay is of paramount importance. We model this problem
as a quickest change detection problem with a stochastic energy constraint. In
particular, a wireless sensor powered by renewable energy takes observations
from a random sequence, whose distribution will change at a certain unknown
time. Such a change implies events of interest. The energy in the sensor is
consumed by taking observations and is replenished randomly. The sensor cannot
take observations if there is no energy left in the battery. Our goal is to
design a power allocation scheme and a detection strategy to minimize the worst
case detection delay, which is the difference between the time when an alarm is
raised and the time when the change occurs. Two types of average run length
(ARL) constraints, namely an algorithm-level ARL constraint and a system-level
ARL constraint, are considered. We propose a low-complexity scheme in which the
energy allocation rule is to spend energy to take observations as long as the
battery is not empty and the detection scheme is the Cumulative Sum test. We
show that this scheme is optimal for the formulation with the algorithm-level
ARL constraint and is asymptotically optimal for the formulation with the
system-level ARL constraint.
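The proposed rule — spend energy to sample whenever the battery is non-empty, and run the Cumulative Sum (CUSUM) test on the samples — can be sketched as follows. The `harvest` interface and the unit sampling `cost` are illustrative assumptions; the paper's formal model may differ.

```python
def cusum_with_battery(obs, llr, threshold, battery0, cost=1, harvest=None):
    """CUSUM (Page's test) that samples only while the battery is
    non-empty. llr(x) = log(f1(x)/f0(x)); harvest[t] is the energy
    replenished at step t (hypothetical interface)."""
    W = 0.0
    battery = battery0
    for t, x in enumerate(obs):
        if harvest is not None:
            battery += harvest[t]        # random replenishment
        if battery >= cost:              # sample as long as energy remains
            battery -= cost
            W = max(0.0, W + llr(x))     # CUSUM recursion
        if W >= threshold:
            return t                     # raise the alarm
    return None
```

For a pre-change $\mathcal{N}(0,1)$ and post-change $\mathcal{N}(1,1)$ stream, `llr` is `x - 0.5`, and the statistic stays near zero before the change and climbs linearly after it.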
|
1302.3857 | Technical Report: Cooperative Multi-Target Localization With Noisy
Sensors | cs.RO cs.MA | This technical report is an extended version of the paper 'Cooperative
Multi-Target Localization With Noisy Sensors' accepted to the 2013 IEEE
International Conference on Robotics and Automation (ICRA).
This paper addresses the task of searching for an unknown number of static
targets within a known obstacle map using a team of mobile robots equipped with
noisy, limited field-of-view sensors. Such sensors may fail to detect a subset
of the visible targets or return false positive detections. These measurement
sets are used to localize the targets using the Probability Hypothesis Density,
or PHD, filter. Robots communicate with each other on a local peer-to-peer
basis and with a server or the cloud via access points, exchanging measurements
and poses to update their belief about the targets and plan future actions. The
server provides a mechanism to collect and synthesize information from all
robots and to share the global, albeit time-delayed, belief state to robots
near access points. We design a decentralized control scheme that exploits this
communication architecture and the PHD representation of the belief state.
Specifically, robots move to maximize mutual information between the target set
and measurements, both self-collected and those available by accessing the
server, balancing local exploration with sharing knowledge across the team.
Furthermore, robots coordinate their actions with other robots exploring the
same local region of the environment.
|
1302.3860 | ScalienDB: Designing and Implementing a Distributed Database using Paxos | cs.DB cs.DC | ScalienDB is a scalable, replicated database built on top of the Paxos
algorithm. It was developed from 2010 to 2012, when the startup backing it
failed. This paper discusses the design decisions of the distributed database,
describes interesting parts of the C++ codebase and enumerates lessons learned
putting ScalienDB into production at a handful of clients. The source code is
available on GitHub under the AGPL license, but it is no longer developed or
maintained.
|
1302.3868 | Symbolic control of stochastic systems via approximately bisimilar
finite abstractions | math.OC cs.SY | Symbolic approaches to control design for complex systems employ
construction of finite-state models that are related to the original control
systems, then use techniques from finite-state synthesis to compute controllers
satisfying specifications given in a temporal logic, and finally translate the
synthesized schemes back as controllers for the concrete complex systems. Such
approaches have been successfully developed and implemented for the synthesis
of controllers over non-probabilistic control systems. In this paper, we extend
the technique to probabilistic control systems modeled by controlled stochastic
differential equations. We show that for every stochastic control system
satisfying a probabilistic variant of incremental input-to-state stability, and
for every given precision $\varepsilon>0$, a finite-state transition system can
be constructed, which is $\varepsilon$-approximately bisimilar (in the sense of
moments) to the original stochastic control system. Moreover, we provide
results relating stochastic control systems to their corresponding finite-state
transition systems in terms of probabilistic bisimulation relations known in
the literature. We demonstrate the effectiveness of the construction by
synthesizing controllers for stochastic control systems over rich
specifications expressed in linear temporal logic. The discussed technique
enables a new, automated, correct-by-construction controller synthesis approach
for stochastic control systems, which are common mathematical models employed
in many safety critical systems subject to structured uncertainty and are thus
relevant for cyber-physical applications.
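The first step of such symbolic approaches — constructing a finite-state model related to the concrete system — can be sketched for a one-dimensional sampled system. The sketch below grids the state space and connects each cell to the cell containing its nominal (noise-free Euler-step) successor; it is a simplified stand-in for the paper's moment-based approximate-bisimulation construction, and all names and parameters are illustrative.

```python
import numpy as np

def build_abstraction(f, x_lo, x_hi, eta, inputs, tau):
    """Finite abstraction of a 1-D control system x' = f(x, u):
    grid [x_lo, x_hi) with resolution eta, and map each cell center,
    under each input, to the cell of its sampled-time successor."""
    centers = np.arange(x_lo + eta / 2, x_hi, eta)
    trans = {}  # (cell index, input) -> successor cell index
    for i, x in enumerate(centers):
        for u in inputs:
            x_next = x + tau * f(x, u)            # Euler step over period tau
            x_next = min(max(x_next, x_lo), x_hi - 1e-12)
            trans[(i, u)] = int((x_next - x_lo) // eta)
    return centers, trans
```

The finite transition system `trans` can then be handed to an off-the-shelf synthesis tool for temporal-logic specifications, and the resulting strategy refined back to the concrete system; in the stochastic setting the successor cell would instead be chosen by closeness of moments.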
|