id | title | categories | abstract |
|---|---|---|---|
1302.6667 | Crowdsourcing for Bioinformatics | q-bio.QM cs.CY cs.SI physics.soc-ph | Motivation: Bioinformatics is faced with a variety of problems that require
human involvement. Tasks like genome annotation, image analysis, knowledge-base
construction and protein structure determination all benefit from human input.
In some cases people are needed in vast quantities while in others we need just
a few with very rare abilities. Crowdsourcing encompasses an emerging
collection of approaches for harnessing such distributed human intelligence.
Recently, the bioinformatics community has begun to apply crowdsourcing in a
variety of contexts, yet few resources are available that describe how these
human-powered systems work and how to use them effectively in scientific
domains. Results: Here, we provide a framework for understanding and applying
several different types of crowdsourcing. The framework considers two broad
classes: systems for solving large-volume 'microtasks' and systems for solving
high-difficulty 'megatasks'. Within these classes, we discuss system types
including: volunteer labor, games with a purpose, microtask markets and open
innovation contests. We illustrate each system type with successful examples in
bioinformatics and conclude with a guide for matching problems to crowdsourcing
solutions.
|
1302.6668 | Finite-time consensus using stochastic matrices with positive diagonals | cs.MA cs.SY | We discuss the possibility of reaching consensus in finite time using only
linear iterations, with the additional restrictions that the update matrices
must be stochastic with positive diagonals and consistent with a given graph
structure. We show that finite-time average consensus can always be achieved
for connected undirected graphs. For directed graphs, we show some necessary
conditions for finite-time consensus, including strong connectivity and the
presence of a simple cycle of even length.
|
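The linear update model in the abstract above can be sketched as follows. The matrix `W` is a hypothetical doubly stochastic choice for a 3-node path graph, with positive diagonal and zeros respecting the graph; note a fixed `W` only reaches the average asymptotically, which is precisely the gap the paper's finite-time constructions (sequences of update matrices) are meant to close.

```python
# Linear consensus iteration x(t+1) = W x(t) on a 3-node path graph.
# W is a hypothetical row- and column-stochastic matrix with positive
# diagonal that is consistent with the graph (nonzeros only on edges
# and the diagonal).
import numpy as np

W = np.array([
    [2/3, 1/3, 0.0],
    [1/3, 1/3, 1/3],
    [0.0, 1/3, 2/3],
])

x = np.array([0.0, 3.0, 6.0])  # initial node values, average 3.0
for _ in range(100):
    x = W @ x                  # one round of local averaging

print(x)  # every entry approaches the average 3.0
```

Because `W` is doubly stochastic, the iterates converge to the average of the initial values, but only in the limit; a single fixed matrix generally cannot reach exact consensus in finitely many steps.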
1302.6677 | Taming the Curse of Dimensionality: Discrete Integration by Hashing and
Optimization | cs.LG cs.AI stat.ML | Integration is affected by the curse of dimensionality and quickly becomes
intractable as the dimensionality of the problem grows. We propose a randomized
algorithm that, with high probability, gives a constant-factor approximation of
a general discrete integral defined over an exponentially large set. This
algorithm relies on solving only a small number of instances of a discrete
combinatorial optimization problem subject to randomly generated parity
constraints used as a hash function. As an application, we demonstrate that
with a small number of MAP queries we can efficiently approximate the partition
function of discrete graphical models, which can in turn be used, for instance,
for marginal computation or model selection.
|
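The hashing idea in the abstract above can be illustrated on the simpler problem of set counting: each random parity (XOR) constraint roughly halves the feasible set, so the number of constraints at which solutions vanish estimates log2 of the set size. This toy sketch (all names and parameters illustrative) brute-forces membership over a small set instead of calling a MAP/optimization oracle as the paper does.

```python
# Toy sketch: estimate log2(|S|) by adding random parity constraints
# a.x = b (mod 2) until no element of S satisfies them all.
import random

random.seed(3)
n = 10
S = set(random.sample(range(2 ** n), 64))  # hidden set of 64 bitstrings

def survives(S, m):
    # Draw m random parity constraints and test whether any element
    # of S satisfies all of them (brute force stands in for the
    # optimization oracle used in the paper).
    cons = [(random.getrandbits(n), random.getrandbits(1)) for _ in range(m)]
    return any(
        all(bin(a & x).count("1") % 2 == b for a, b in cons) for x in S
    )

def estimate_log2(S, reps=31):
    # Median over repetitions of the first constraint count with no survivor.
    results = []
    for _ in range(reps):
        m = 0
        while survives(S, m + 1):
            m += 1
        results.append(m)
    results.sort()
    return results[len(results) // 2]

est = estimate_log2(S)
print(est)  # concentrates near log2(64) = 6
```

The estimate is a constant-factor approximation of the count with high probability, mirroring the guarantee the abstract states for general discrete integrals.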
1302.6683 | Decentralized set-valued state estimation based on non-deterministic
chains | cs.SY | A general decentralized computational framework for set-valued state
estimation and prediction for the class of systems that accept a hybrid state
machine representation is considered in this article. The decentralized scheme
consists of a conjunction of distributed state machines that are specified by a
decomposition of the external signal space. While this is shown to produce, in
general, outer approximations of the outcomes of the original monolithic state
machine, here, specific rules for the signal space decomposition are devised by
utilizing structural properties of the underlying transition relation, leading
to a recovery of the exact state set results. By applying a suitable
approximation algorithm, we show that the computational complexity of the
decentralized scheme can thereby be substantially reduced compared to the
centralized estimation scheme.
|
1302.6703 | Compressive Sensing for Spread Spectrum Receivers | cs.IT math.IT | With the advent of ubiquitous computing there are two design parameters of
wireless communication devices that become very important: power efficiency and
production cost. Compressive sensing enables the receiver in such devices to
sample below the Shannon-Nyquist sampling rate, which may lead to a decrease in
the two design parameters. This paper investigates the use of Compressive
Sensing (CS) in a general Code Division Multiple Access (CDMA) receiver. We
show that when using spread spectrum codes in the signal domain, the CS
measurement matrix may be simplified. This measurement scheme, named
Compressive Spread Spectrum (CSS), allows for a simple, effective receiver
design. Furthermore, we numerically evaluate the proposed receiver in terms of
bit error rate under different signal to noise ratio conditions and compare it
with other receiver structures. These numerical experiments show that though
the bit error rate performance is degraded by the subsampling in the CS-enabled
receivers, this may be remedied by including quantization in the receiver
model. We also study the computational complexity of the proposed receiver
design under different sparsity and measurement ratios. Our work shows that it
is possible to subsample a CDMA signal using CSS and that in one example the
CSS receiver outperforms the classical receiver.
|
1302.6704 | Decentralized set-valued state estimation and prediction for hybrid
systems: A symbolic approach | cs.SY | A symbolic approach to decentralized set-valued state estimation and
prediction for systems that admit a hybrid state machine representation is
proposed. The decentralized computational scheme represents a conjunction of a
finite number of distributed state machines, which are specified by an
appropriate decomposition of the external signal space. It aims at a
distribution of computational tasks into smaller ones, allocated to individual
distributed state machines, leading to a potentially significant reduction in
the overall space/time computational complexity. We show that, in general, such
a scheme outer-approximates the state set estimates and predictions of the
original monolithic state machine. By utilizing structural properties of the
transition relation of the latter, in a next step, we propose constructive
decomposition algorithms for a recovery of the exact state set outcomes.
|
1302.6738 | Finding overlapping communities in networks using evolutionary method | cs.SI physics.soc-ph | Community structure is a typical property of many real-world networks, and
has become a key to understand the dynamics of the networked systems. In these
networks most nodes lie in a single community, while a few nodes often
straddle several communities. An ideal community detection algorithm should
therefore be able to identify the overlapping communities in such
networks. To represent an overlapping division we develop an encoding schema
composed of two segments: the first represents a disjoint partition, and the
second represents an extension of the partition that allows multiple
memberships. We give a measure for the informativeness of a node, and present
an evolutionary method for detecting the overlapping communities in a network.
|
1302.6764 | Categorizing Bugs with Social Networks: A Case Study on Four Open Source
Software Communities | cs.SE cs.LG cs.SI nlin.AO physics.soc-ph | Efficient bug triaging procedures are an important precondition for
successful collaborative software engineering projects. Triaging bugs can
become a laborious task particularly in open source software (OSS) projects
with a large base of comparatively inexperienced part-time contributors. In this
paper, we propose an efficient and practical method to identify valid bug
reports which a) refer to an actual software bug, b) are not duplicates and c)
contain enough information to be processed right away. Our classification is
based on nine measures to quantify the social embeddedness of bug reporters in
the collaboration network. We demonstrate its applicability in a case study,
using a comprehensive data set of more than 700,000 bug reports obtained from
the Bugzilla installation of four major OSS communities, for a period of more
than ten years. For those projects that exhibit the lowest fraction of valid
bug reports, we find that the bug reporters' position in the collaboration
network is a strong indicator for the quality of bug reports. Based on this
finding, we develop an automated classification scheme that can easily be
integrated into bug tracking platforms and analyze its performance in the
considered OSS communities. A support vector machine (SVM) to identify valid
bug reports based on the nine measures yields a precision of up to 90.3% with
an associated recall of 38.9%. With this, we significantly improve the results
obtained in previous case studies for an automated early identification of bugs
that are eventually fixed. Furthermore, our study highlights the potential of
using quantitative measures of social organization in collaborative software
engineering. It also opens a broad perspective for the integration of social
awareness in the design of support infrastructures.
|
1302.6768 | Missing Entries Matrix Approximation and Completion | math.NA cs.LG stat.ML | We describe several algorithms for matrix completion and matrix approximation
when only some of its entries are known. The approximation constraint can be
any constraint whose solution is known for the full matrix. For low-rank
approximations, similar algorithms have appeared recently in the literature
under different names. In this work, we introduce new theorems for matrix
approximation and show that these algorithms can be extended to handle
constraints beyond low-rank approximation, such as the nuclear norm, spectral
norm, and orthogonality constraints. As the
algorithms can be viewed from an optimization point of view, we discuss their
convergence to the global solution in the convex case. We also discuss the optimal
step size and show that it is fixed in each iteration. In addition, the derived
matrix completion flow is robust and does not require any parameters. This
matrix completion flow is applicable to different spectral minimizations and
can be applied to physics, mathematics and electrical engineering problems such
as data reconstruction of images and data coming from PDEs such as Helmholtz
equation used for electromagnetic waves.
|
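The projection-style completion flow described above can be sketched for the low-rank case: alternate between enforcing the constraint on the full matrix (where its solution, truncated SVD, is known) and re-imposing the observed entries. This is a minimal illustration of the pattern the abstract generalizes, not the paper's exact algorithm; the data and sampling rate are hypothetical.

```python
# Alternating projections for low-rank matrix completion: project onto
# rank-2 matrices via truncated SVD, then restore the known entries.
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((8, 2))
M = U @ U.T                        # rank-2 ground-truth matrix
mask = rng.random(M.shape) < 0.7   # ~70% of entries observed

X = np.where(mask, M, 0.0)         # unknown entries initialized to zero
for _ in range(200):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    s[2:] = 0.0                    # enforce the rank-2 constraint
    X = (u * s) @ vt
    X[mask] = M[mask]              # re-impose the observed entries

err = np.abs(X - M).max()
print(err)  # reconstruction error on the full matrix
```

Swapping the SVD truncation for a different full-matrix projection (e.g. singular-value soft-thresholding for the nuclear norm) gives the other constraint variants the abstract mentions.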
1302.6770 | Total communicability as a centrality measure | cs.SI math.NA physics.soc-ph | We examine a node centrality measure based on the notion of total
communicability, defined in terms of the row sums of the exponential of the
adjacency matrix of the network. We argue that this is a natural metric for
ranking nodes in a network, and we point out that it can be computed very
rapidly even in the case of large networks. Furthermore, we propose the total
sum of node communicabilities as a useful measure of network connectivity.
Extensive numerical studies are conducted in order to compare this centrality
measure with the closely related ones of subgraph centrality [E. Estrada and J.
A. Rodriguez-Velazquez, Phys. Rev. E, 71 (2005), 056103] and Katz centrality
[L. Katz, Psychometrika, 18 (1953), pp. 39-43]. Both synthetic and real-world
networks are used in the computations.
|
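Total communicability as defined above (row sums of the exponential of the adjacency matrix) is a one-liner with SciPy's matrix exponential. A minimal sketch on a 4-node star graph, where the hub should rank highest:

```python
# Total communicability of node i = i-th row sum of exp(A).
import numpy as np
from scipy.linalg import expm

A = np.zeros((4, 4))
for leaf in (1, 2, 3):
    A[0, leaf] = A[leaf, 0] = 1.0  # undirected edges hub--leaf

TC = expm(A).sum(axis=1)  # total communicability of each node
total = TC.sum()          # network-level connectivity measure from the abstract
print(TC)                 # the hub (node 0) ranks above the leaves
```

For large networks one would not form `expm(A)` explicitly; the fast computation the abstract alludes to applies Krylov-type methods to the matrix-vector product `exp(A) 1` instead.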
1302.6777 | Ending-based Strategies for Part-of-speech Tagging | cs.CL | Probabilistic approaches to part-of-speech tagging rely primarily on
whole-word statistics about word/tag combinations as well as contextual
information. But experience shows about 4 per cent of tokens encountered in
test sets are unknown even when the training set is as large as a million
words. Unseen words are tagged using secondary strategies that exploit word
features such as endings, capitalizations and punctuation marks. In this work,
word-ending statistics are primary and whole-word statistics are secondary.
First, a tagger was trained and tested on word endings only. Subsequent
experiments added back whole-word statistics for the words occurring most
frequently in the training set. As this set of words grew larger, performance
was expected to improve, in the limit matching word-based taggers. Surprisingly,
the ending-based tagger initially performed nearly as well as the word-based
tagger; in the best case, its performance significantly exceeded that of the
word-based tagger. Lastly, and unexpectedly, an effect of negative returns was
observed: as the set grew larger, performance generally improved and then declined. By
varying factors such as ending length and tag-list strategy, we achieved a
success rate of 97.5 percent.
|
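The inverted strategy above (ending statistics primary, whole-word statistics layered on top for frequent words) can be sketched with counters; the toy training pairs, tag set, and ending length below are illustrative, not the paper's data.

```python
# Ending-based tagger sketch: tag counts keyed on word endings, with
# whole-word statistics used only for the most frequent words.
from collections import Counter, defaultdict

training = [("running", "VBG"), ("jumping", "VBG"), ("quickly", "RB"),
            ("happily", "RB"), ("cats", "NNS"), ("dogs", "NNS"),
            ("the", "DT"), ("the", "DT")]

ENDING_LEN = 2
ending_tags = defaultdict(Counter)
word_tags = defaultdict(Counter)
word_freq = Counter(w for w, _ in training)

for word, tag_ in training:
    ending_tags[word[-ENDING_LEN:]][tag_] += 1
    word_tags[word][tag_] += 1

# whole-word statistics only for the single most frequent word (toy cutoff)
TOP_WORDS = {w for w, _ in word_freq.most_common(1)}

def tag(word):
    if word in TOP_WORDS:  # secondary whole-word statistics
        return word_tags[word].most_common(1)[0][0]
    counts = ending_tags.get(word[-ENDING_LEN:])  # primary ending statistics
    return counts.most_common(1)[0][0] if counts else "NN"  # default tag

print(tag("walking"), tag("sadly"), tag("the"))  # VBG RB DT
```

Growing `TOP_WORDS` corresponds to the experiments in the abstract that add back whole-word statistics for increasingly many frequent words.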
1302.6779 | An Evaluation of an Algorithm for Inductive Learning of Bayesian Belief
Networks Usin | cs.AI | Bayesian learning of belief networks (BLN) is a method for automatically
constructing belief networks (BNs) from data using search and Bayesian scoring
techniques. K2 is a particular instantiation of the method that implements a
greedy search strategy. To evaluate the accuracy of K2, we randomly generated a
number of BNs and for each of those we simulated data sets. K2 was then used to
induce the generating BNs from the simulated data. We examine the performance
of the program, and the factors that influence it. We also present a simple BN
model, developed from our results, which predicts the accuracy of K2, when
given various characteristics of the data set.
|
1302.6780 | Probabilistic Constraint Satisfaction with Non-Gaussian Noise | cs.AI | We have previously reported a Bayesian algorithm for determining the
coordinates of points in three-dimensional space from uncertain constraints.
This method is useful in the determination of biological molecular structure.
It is limited, however, by the requirement that the uncertainty in the
constraints be normally distributed. In this paper, we present an extension of
the original algorithm that allows constraint uncertainty to be represented as
a mixture of Gaussians, and thereby allows arbitrary constraint distributions.
We illustrate the performance of this algorithm on a problem drawn from the
domain of molecular structure determination, in which a multicomponent
constraint representation produces a much more accurate solution than the old
single component mechanism. The new mechanism uses mixture distributions to
decompose the problem into a set of independent problems with unimodal
constraint uncertainty. The results of the unimodal subproblems are
periodically recombined using Bayes' law, to avoid combinatorial explosion. The
new algorithm is particularly suited for parallel implementation.
|
1302.6781 | A Bayesian Method Reexamined | cs.AI | This paper examines the "K2" network scoring metric of Cooper and Herskovits.
It shows counterintuitive results from applying this metric to simple networks.
One family of noninformative priors is suggested for assigning equal scores to
equivalent networks.
|
1302.6782 | Laplace's Method Approximations for Probabilistic Inference in Belief
Networks with Continuous Variables | cs.AI | Laplace's method, a family of asymptotic methods used to approximate
integrals, is presented as a potential candidate for the tool box of techniques
used for knowledge acquisition and probabilistic inference in belief networks
with continuous variables. This technique approximates posterior moments and
marginal posterior distributions with reasonable accuracy [errors are O(n^-2)
for posterior means] in many interesting cases. The method also seems promising
for computing approximations for Bayes factors for use in the context of model
selection, model uncertainty and mixtures of pdfs. The limitations, regularity
conditions and computational difficulties for the implementation of Laplace's
method are comparable to those associated with the methods of maximum
likelihood and posterior mode analysis.
|
1302.6783 | Generating New Beliefs From Old | cs.AI | In previous work [BGHK92, BGHK93], we have studied the random-worlds approach
-- a particular (and quite powerful) method for generating degrees of belief
(i.e., subjective probabilities) from a knowledge base consisting of objective
(first-order, statistical, and default) information. But allowing a knowledge
base to contain only objective information is sometimes limiting. We
occasionally wish to include information about degrees of belief in the
knowledge base as well, because there are contexts in which old beliefs
represent important information that should influence new beliefs. In this
paper, we describe three quite general techniques for extending a method that
generates degrees of belief from objective information to one that can make use
of degrees of belief as well. All of our techniques are based on well-known
approaches, such as cross-entropy. We discuss general connections between the
techniques and in particular show that, although conceptually and technically
quite different, all of the techniques give the same answer when applied to the
random-worlds method.
|
1302.6784 | Counterfactual Probabilities: Computational Methods, Bounds and
Applications | cs.AI | Evaluation of counterfactual queries (e.g., "If A were true, would C have
been true?") is important to fault diagnosis, planning, and determination of
liability. In this paper we present methods for computing the probabilities of
such queries using the formulation proposed in [Balke and Pearl, 1994], where
the antecedent of the query is interpreted as an external action that forces
the proposition A to be true. When a prior probability is available on the
causal mechanisms governing the domain, counterfactual probabilities can be
evaluated precisely. However, when causal knowledge is specified as conditional
probabilities on the observables, only bounds can be computed. This paper develops
techniques for evaluating these bounds, and demonstrates their use in two
applications: (1) the determination of treatment efficacy from studies in which
subjects may choose their own treatment, and (2) the determination of liability
in product-safety litigation.
|
1302.6786 | Modus Ponens Generating Function in the Class of A-valuations of
Plausibility | cs.AI | We discuss the problem of constructing inference procedures that can
manipulate uncertainties measured on ordinal scales and satisfy the
property of strict monotonicity of conclusions. The class of A-valuations of
plausibility is considered, in which only operations based on the
linear ordering of plausibility values are used. In this class, a modus ponens
generating function satisfying the property of strict monotonicity of
conclusions is introduced.
|
1302.6787 | Approximation Algorithms for the Loop Cutset Problem | cs.AI cs.DS | We show how to find a small loop cutset in a Bayesian network. Finding such a
loop cutset is the first step in the method of conditioning for inference. Our
algorithm for finding a loop cutset, called MGA, finds a loop cutset which is
guaranteed in the worst case to contain less than twice the number of variables
contained in a minimum loop cutset. We test MGA on randomly generated graphs
and find that the average ratio between the number of instances associated with
the algorithms' output and the number of instances associated with a minimum
solution is 1.22.
|
1302.6788 | Possibility and Necessity Functions over Non-classical Logics | cs.AI | We propose an integration of possibility theory into non-classical logics. We
obtain many formal results that generalize the case where possibility and
necessity functions are based on classical logic. We show how useful such an
approach is by applying it to reasoning under uncertain and inconsistent
information.
|
1302.6789 | Exploratory Model Building | cs.AI | Some instances of creative thinking require an agent to build and test
hypothetical theories. Such a reasoner needs to explore the space of not only
those situations that have occurred in the past, but also those that are
rationally conceivable. In this paper we present a formalism for exploring the
space of conceivable situation-models for those domains in which the knowledge
is primarily probabilistic in nature. The formalism seeks to construct
consistent, minimal, and desirable situation-descriptions by selecting suitable
domain-attributes and dependency relationships from the available domain
knowledge.
|
1302.6791 | Planning with External Events | cs.AI | I describe a planning methodology for domains with uncertainty in the form of
external events that are not completely predictable. The events are represented
by enabling conditions and probabilities of occurrence. The planner is
goal-directed and backward chaining, but the subgoals are suggested by
analyzing the probability of success of the partial plan rather than being
simply the open conditions of the operators in the plan. The partial plan is
represented as a Bayesian belief net to compute its probability of success.
Since calculating the probability of success of a plan can be very expensive I
introduce two other techniques for computing it, one that uses Monte Carlo
simulation to estimate it and one based on a Markov chain representation that
uses knowledge about the dependencies between the predicates describing the
domain.
|
1302.6792 | Properties of Bayesian Belief Network Learning Algorithms | cs.AI | Bayesian belief network learning algorithms have three basic components: a
measure of a network structure and a database, a search heuristic that chooses
network structures to be considered, and a method of estimating the probability
tables from the database. This paper contributes to all these three topics. The
behavior of the Bayesian measure of Cooper and Herskovits and a minimum
description length (MDL) measure are compared with respect to their properties
for both limiting size and finite size databases. It is shown that the MDL
measure has more desirable properties than the Bayesian measure when a
distribution is to be learned. It is shown that selecting belief networks with
certain minimality properties is NP-hard. This result justifies the use of
search heuristics instead of exact algorithms for choosing network structures
to be considered. In some cases, a collection of belief networks can be
represented by a single belief network which leads to a new kind of probability
table estimation called smoothing. We argue that smoothing can be efficiently
implemented by incorporating it in the search heuristic. Experimental results
suggest that for learning probabilities of belief networks smoothing is
helpful.
|
1302.6793 | A Stratified Simulation Scheme for Inference in Bayesian Belief Networks | cs.AI | Simulation schemes for probabilistic inference in Bayesian belief networks
offer many advantages over exact algorithms; for example, these schemes have a
linear and thus predictable runtime while exact algorithms have exponential
runtime. Experiments have shown that likelihood weighting is one of the most
promising simulation schemes. In this paper, we present a new simulation scheme
that generates samples more evenly spread in the sample space than the
likelihood weighting scheme. We show both theoretically and experimentally that
the stratified scheme outperforms likelihood weighting in average runtime and
error in estimates of beliefs.
|
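Likelihood weighting, the baseline simulation scheme the abstract compares against, samples only the non-evidence nodes and weights each sample by the likelihood of the evidence. A minimal sketch on a hypothetical two-node network Rain → WetGrass, estimating P(Rain = 1 | WetGrass = 1):

```python
# Likelihood weighting on a two-node belief network (toy parameters).
import random

random.seed(0)
P_RAIN = 0.2
P_WET = {1: 0.9, 0: 0.1}  # P(WetGrass=1 | Rain)

def likelihood_weighting(n_samples):
    num = den = 0.0
    for _ in range(n_samples):
        rain = 1 if random.random() < P_RAIN else 0  # sample non-evidence node
        w = P_WET[rain]         # weight = likelihood of the clamped evidence
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(200_000)
print(est)  # exact posterior is 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.6923...
```

The stratified scheme of the paper improves on this by spreading the sample points more evenly over the sample space, reducing the variance of such estimates.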
1302.6794 | Efficient Estimation of the Value of Information in Monte Carlo Models | cs.AI | The expected value of information (EVI) is the most powerful measure of
sensitivity to uncertainty in a decision model: it measures the potential of
information to improve the decision, and hence measures the expected value of
outcome. Standard methods for computing EVI use discrete variables and are
computationally intractable for models that contain more than a few variables.
Monte Carlo simulation provides the basis for more tractable evaluation of
large predictive models with continuous and discrete variables, but so far
computation of EVI in a Monte Carlo setting also has appeared impractical. We
introduce an approximate approach based on pre-posterior analysis for
estimating EVI in Monte Carlo models. Our method uses a linear approximation to
the value function and multiple linear regression to estimate the linear model
from the samples. The approach is efficient and practical for extremely large
models. It allows easy estimation of EVI for perfect or partial information on
individual variables or on combinations of variables. We illustrate its
implementation within Demos (a decision modeling system), and its application
to a large model for crisis transportation planning.
|
1302.6795 | Symbolic Probabilistic Inference in Large BN2O Networks | cs.AI | A BN2O network is a two-level belief net in which the parent interactions are
modeled using the noisy-or interaction model. In this paper we discuss
application of the SPI local expression language to efficient inference in
large BN2O networks. In particular, we show that there is significant
structure, which can be exploited to improve over the Quickscore result. We
further describe how symbolic techniques can provide information which can
significantly reduce the computation required for computing all cause posterior
marginals. Finally, we present a novel approximation technique with preliminary
experimental results.
|
1302.6796 | Action Networks: A Framework for Reasoning about Actions and Change
under Uncertainty | cs.AI | This work proposes action networks as a semantically well-founded framework
for reasoning about actions and change under uncertainty. Action networks add
two primitives to probabilistic causal networks: controllable variables and
persistent variables. Controllable variables allow the representation of
actions as directly setting the value of specific events in the domain, subject
to preconditions. Persistent variables provide a canonical model of persistence
according to which both the state of a variable and the causal mechanism
dictating its value persist over time unless intervened upon by an action (or
its consequences). Action networks also allow different methods for quantifying
the uncertainty in causal relationships, which go beyond traditional
probabilistic quantification. This paper describes both recent results and work
in progress.
|
1302.6797 | On the Relation between Kappa Calculus and Probabilistic Reasoning | cs.AI | We study the connection between kappa calculus and probabilistic reasoning in
diagnosis applications. Specifically, we abstract a probabilistic belief
network for diagnosing faults into a kappa network and compare the ordering of
faults computed using both methods. We show that, at least for the example
examined, the orderings of faults coincide as long as all the causal relations
in the original probabilistic network are taken into account. We also provide a
formal analysis of some network structures where the two methods will differ.
Both kappa rankings and infinitesimal probabilities have been used extensively
to study default reasoning and belief revision. But little has been done on
utilizing their connection as outlined above. This is partly because the
relation between kappa and probability calculi assumes that probabilities are
arbitrarily close to one (or zero). The experiments in this paper investigate
this relation when this assumption is not satisfied. The reported results have
important implications on the use of kappa rankings to enhance the knowledge
engineering of uncertainty models.
|
1302.6798 | A Structured, Probabilistic Representation of Action | cs.AI | When agents devise plans for execution in the real world, they face two
important forms of uncertainty: they can never have complete knowledge about
the state of the world, and they do not have complete control, as the effects
of their actions are uncertain. While most classical planning methods avoid
explicit uncertainty reasoning, we believe that uncertainty should be
explicitly represented and reasoned about. We develop a probabilistic
representation for states and actions, based on belief networks. We define
conditional belief nets (CBNs) to capture the probabilistic dependency of the
effects of an action upon the state of the world. We also use a CBN to
represent the intrinsic relationships among entities in the environment, which
persist from state to state. We present a simple projection algorithm to
construct the belief network of the state succeeding an action, using the
environment CBN model to infer indirect effects. We discuss how the qualitative
aspects of belief networks and CBNs make them appropriate for the various
stages of the problem solving process, from model construction to the design of
planning algorithms.
|
1302.6799 | Integrating Planning and Execution in Stochastic Domains | cs.AI | We investigate planning in time-critical domains represented as Markov
Decision Processes, showing that search based techniques can be a very powerful
method for finding close to optimal plans. To reduce the computational cost of
planning in these domains, we execute actions as we construct the plan, and
sacrifice optimality by searching to a fixed depth and using a heuristic
function to estimate the value of states. Although this paper concentrates on
the search algorithm, we also discuss ways of constructing heuristic functions
suitable for this approach. Our results show that by interleaving search and
execution, close to optimal policies can be found without the computational
requirements of other approaches.
|
1302.6800 | Localized Partial Evaluation of Belief Networks | cs.AI | Most algorithms for propagating evidence through belief networks have been
exact and exhaustive: they produce an exact (point-valued) marginal probability
for every node in the network. Often, however, an application will not need
information about every node in the network, nor will it need exact
probabilities. We present the localized partial evaluation (LPE) propagation
algorithm, which computes interval bounds on the marginal probability of a
specified query node by examining a subset of the nodes in the entire network.
Conceptually, LPE ignores parts of the network that are "too far away" from the
queried node to have much impact on its value. LPE has the "anytime" property
of being able to produce better solutions (tighter intervals) given more time
to consider more of the network.
|
1302.6801 | A Probabilistic Model of Action for Least-Commitment Planning with
Information Gather | cs.AI | AI planning algorithms have addressed the problem of generating sequences of
operators that achieve some input goal, usually assuming that the planning
agent has perfect control over and information about the world. Relaxing these
assumptions requires an extension to the action representation that allows
reasoning both about the changes an action makes and the information it
provides. This paper presents an action representation that extends the
deterministic STRIPS model, allowing actions to have both causal and
informational effects, both of which can be context dependent and noisy. We
also demonstrate how a standard least-commitment planning algorithm can be
extended to include informational actions and contingent execution.
|
1302.6802 | Some Properties of Joint Probability Distributions | cs.AI | Several Artificial Intelligence schemes for reasoning under uncertainty
explore either explicitly or implicitly asymmetries among probabilities of
various states of their uncertain domain models. Even though the correct
working of these schemes is practically contingent upon the existence of a
small number of probable states, no formal justification has been proposed of
why this should be the case. This paper attempts to fill this apparent gap by
studying asymmetries among probabilities of various states of uncertain models.
By rewriting the joint probability distribution over a model's variables into a
product of individual variables' prior and conditional probability
distributions, and applying the central limit theorem to this product, we can
demonstrate that the probabilities of individual states of the model can be
expected to be drawn from highly skewed, log-normal distributions. With
sufficient asymmetry in individual prior and conditional probability
distributions, a small fraction of states can be expected to cover a large
portion of the total probability space with the remaining states having
practically negligible probability. Theoretical discussion is supplemented by
simulation results and an illustrative real-world example.
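As a toy illustration of the skew this abstract describes (a sketch under arbitrary assumptions, not the paper's own simulation): factor a joint distribution over a chain of binary variables into skewed conditionals, enumerate all states, and measure how much probability mass the top few carry.

```python
import itertools, random

# Chain-structured model over n binary variables with skewed conditionals.
# The skew ranges and chain structure are illustrative choices only.
random.seed(0)
n = 10

# (P(X_i=1 | X_{i-1}=0), P(X_i=1 | X_{i-1}=1)); for X_0 the "parent" is
# taken to be 0, so cpts[0][0] acts as its prior.
cpts = [(random.uniform(0.02, 0.2), random.uniform(0.8, 0.98)) for _ in range(n)]

def joint(state):
    p, prev = 1.0, 0
    for i, x in enumerate(state):
        p1 = cpts[i][prev]
        p *= p1 if x else 1.0 - p1
        prev = x
    return p

probs = sorted((joint(s) for s in itertools.product((0, 1), repeat=n)), reverse=True)
coverage = sum(probs[: 2 ** n // 20])  # mass carried by the top 5% of states
print(round(coverage, 3))              # a small fraction of states dominates
```

With sufficiently skewed conditionals, the top 5% of the 1024 states covers most of the probability space, matching the abstract's claim.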
|
1302.6803 | An Ordinal View of Independence with Application to Plausible Reasoning | cs.AI | An ordinal view of independence is studied in the framework of possibility
theory. We investigate three possible definitions of dependence, of increasing
strength. One of them is the counterpart to the multiplication law in
probability theory, and the two others are based on the notion of conditional
possibility. These two have enough expressive power to support the whole
possibility theory, and a complete axiomatization is provided for the strongest
one. Moreover we show that weak independence is well-suited to the problems of
belief change and plausible reasoning, especially to address the problem of
blocking of property inheritance in exception-tolerant taxonomic reasoning.
|
1302.6804 | Penalty logic and its Link with Dempster-Shafer Theory | cs.AI | Penalty logic, introduced by Pinkas, associates with each formula of a
knowledge base the price to pay if this formula is violated. Penalties may be
used as a criterion for selecting preferred consistent subsets in an
inconsistent knowledge base, thus inducing a non-monotonic inference relation.
A precise formalization and the main properties of penalty logic and of its
associated non-monotonic inference relation are given in the first part. We
also show that penalty logic and Dempster-Shafer theory are related, especially
in the infinitesimal case.
|
1302.6805 | Value of Evidence on Influence Diagrams | cs.AI | In this paper, we introduce evidence propagation operations on influence
diagrams and a concept of value of evidence, which measures the value of
experimentation. Evidence propagation operations are critical for the
computation of the value of evidence, general update and inference operations
in normative expert systems which are based on the influence diagram
(generalized Bayesian network) paradigm. The value of evidence allows us to
compute directly an outcome sensitivity, a value of perfect information and a
value of control which are used in decision analysis (the science of decision
making under uncertainty). More specifically, the outcome sensitivity is the
maximum difference among the values of evidence, the value of perfect
information is the expected value of the values of evidence, and the value of
control is the optimal value of the values of evidence. We also discuss
implementation and relative computational-efficiency issues related to the
value of evidence and the value of perfect information.
|
1302.6806 | Conditional Independence in Possibility Theory | cs.AI | Possibilistic conditional independence is investigated: we propose a
definition of this notion similar to the one used in probability theory. The
links between independence and non-interactivity are investigated, and
properties of these relations are given. The influence of the conjunction used
to define a conditional measure of possibility is also highlighted: we examine
three types of conjunctions: Lukasiewicz-like T-norms, product-like T-norms
and the minimum operator.
|
1302.6807 | Backward Simulation in Bayesian Networks | cs.AI | Backward simulation is an approximate inference technique for Bayesian belief
networks. It differs from existing simulation methods in that it starts
simulation from the known evidence and works backward (i.e., contrary to the
direction of the arcs). The technique's focus on the evidence leads to improved
convergence in situations where the posterior beliefs are dominated by the
evidence rather than by the prior probabilities. Since this class of situations
is large, the technique may make practical the application of approximate
inference in Bayesian belief networks to many real-world problems.
|
1302.6808 | Learning Gaussian Networks | cs.AI cs.LG stat.ML | We describe algorithms for learning Bayesian networks from a combination of
user knowledge and statistical data. The algorithms have two components: a
scoring metric and a search procedure. The scoring metric takes a network
structure, statistical data, and a user's prior knowledge, and returns a score
proportional to the posterior probability of the network structure given the
data. The search procedure generates networks for evaluation by the scoring
metric. Previous work has concentrated on metrics for domains containing only
discrete variables, under the assumption that data represents a multinomial
sample. In this paper, we extend this work, developing scoring metrics for
domains containing all continuous variables or a mixture of discrete and
continuous variables, under the assumption that continuous data is sampled from
a multivariate normal distribution. Our work extends traditional statistical
approaches for identifying vanishing regression coefficients in that we
identify two important assumptions, called event equivalence and parameter
modularity, that when combined allow the construction of prior distributions
for multivariate normal parameters from a single prior Bayesian network
specified by a user.
|
1302.6809 | On Testing Whether an Embedded Bayesian Network Represents a Probability
Model | cs.AI | Testing the validity of probabilistic models containing unmeasured (hidden)
variables is shown to be a hard task. We show that the task of testing whether
models are structurally incompatible with the data at hand requires an
exponential number of independence evaluations, each of the form: "X is
conditionally independent of Y, given Z." In contrast, a linear number of such
evaluations is required to test a standard Bayesian network (one per vertex).
On the positive side, we show that if a network with hidden variables G has a
tree skeleton, checking whether G represents a given probability model P
requires a polynomial number of such independence evaluations. Moreover, we
provide an algorithm that efficiently constructs a tree-structured Bayesian
network (with hidden variables) that represents P if such a network exists, and
further recognizes when such a network does not exist.
|
1302.6810 | Epsilon-Safe Planning | cs.AI | We introduce an approach to high-level conditional planning we call
epsilon-safe planning. This probabilistic approach commits us to planning to
meet some specified goal with a probability of success of at least 1-epsilon
for some user-supplied epsilon. We describe several algorithms for epsilon-safe
planning based on conditional planners. The two conditional planners we discuss
are Peot and Smith's nonlinear conditional planner, CNLP, and our own linear
conditional planner, PLINTH. We present a straightforward extension to
conditional planners for which computing the necessary probabilities is simple,
employing a commonly made but perhaps overly strong independence assumption. We
also discuss a second approach to epsilon-safe planning which relaxes this
independence assumption, involving the incremental construction of a
probability dependence model in conjunction with the construction of the plan
graph.
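Under the strong independence assumption the abstract flags, computing a linear plan's success probability is just a product over its steps; a minimal sketch (function name and numbers are illustrative, not from the planners cited):

```python
# A plan is epsilon-safe when its success probability is at least
# 1 - epsilon. Assuming independent step outcomes (the strong assumption
# the abstract notes), the plan succeeds iff every step does.
def epsilon_safe(step_success_probs, epsilon):
    p = 1.0
    for q in step_success_probs:
        p *= q                      # independent step outcomes
    return p >= 1.0 - epsilon

plan = [0.99, 0.97, 0.98]           # hypothetical per-step success probabilities
print(epsilon_safe(plan, 0.10))     # 0.941... >= 0.90 -> True
print(epsilon_safe(plan, 0.05))     # 0.941... <  0.95 -> False
```

The paper's second approach drops this independence assumption, so the product above would be replaced by inference in an incrementally built dependence model.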
|
1302.6811 | Generating Bayesian Networks from Probability Logic Knowledge Bases | cs.AI | We present a method for dynamically generating Bayesian networks from
knowledge bases consisting of first-order probability logic sentences. We
present a subset of probability logic sufficient for representing the class of
Bayesian networks with discrete-valued nodes. We impose constraints on the form
of the sentences that guarantee that the knowledge base contains all the
probabilistic information necessary to generate a network. We define the
concept of d-separation for knowledge bases and prove that a knowledge base
with independence conditions defined by d-separation is a complete
specification of a probability distribution. We present a network generation
algorithm that, given an inference problem in the form of a query Q and a set
of evidence E, generates a network to compute P(Q|E). We prove the algorithm to
be correct.
|
1302.6812 | Abstracting Probabilistic Actions | cs.AI | This paper discusses the problem of abstracting conditional probabilistic
actions. We identify two distinct types of abstraction: intra-action
abstraction and inter-action abstraction. We define what it means for the
abstraction of an action to be correct and then derive two methods of
intra-action abstraction and two methods of inter-action abstraction which are
correct according to this criterion. We illustrate the developed techniques by
applying them to actions described with the temporal action representation used
in the DRIPS decision-theoretic planner and we describe how the planner uses
abstraction to reduce the complexity of planning.
|
1302.6813 | On Modal Logics for Qualitative Possibility in a Fuzzy Setting | cs.LO cs.AI | Within the possibilistic approach to uncertainty modeling, the paper presents
a modal logical system to reason about qualitative (comparative) statements of
the possibility (and necessity) of fuzzy propositions. We relate this
qualitative modal logic to the many-valued analogues MVS5 and MVKD45 of the
well known modal logics of knowledge and belief S5 and KD45 respectively.
Completeness results are obtained for these logics, thereby extending
previous results for qualitative possibilistic logics in the classical
non-fuzzy setting.
|
1302.6814 | A New Look at Causal Independence | cs.AI | Heckerman (1993) defined causal independence in terms of a set of temporal
conditional independence statements. These statements formalized certain types
of causal interaction where (1) the effect is independent of the order that
causes are introduced and (2) the impact of a single cause on the effect does
not depend on what other causes have previously been applied. In this paper, we
introduce an equivalent atemporal characterization of causal independence
based on a functional representation of the relationship between causes and the
effect. In this representation, the interaction between causes and effect can
be written as a nested decomposition of functions. Causal independence can be
exploited by representing this decomposition in the belief network, resulting
in representations that are more efficient for inference than general causal
models. We present empirical results showing the benefits of a
causal-independence representation for belief-network inference.
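Noisy-OR is the textbook instance of causal independence: each active cause independently fails to inhibit the effect, and the combination decomposes into a chain of pairwise steps, which is what inference can exploit. A minimal sketch (a standard model, not necessarily the paper's exact functional representation):

```python
# Noisy-OR: active cause i independently fails to produce the effect with
# inhibition probability q_i, so
#   P(effect | causes) = 1 - (1 - leak) * prod_i q_i over active causes.
# The product decomposes pairwise, satisfying both conditions in the
# abstract: order of causes is irrelevant, and each cause's impact does
# not depend on which causes were applied before it.
def noisy_or(causes, leak=0.0):
    """causes: iterable of (present: bool, q_inhibit: float)."""
    p_fail = 1.0 - leak
    for present, q in causes:
        if present:
            p_fail *= q
    return 1.0 - p_fail

print(noisy_or([(True, 0.2), (True, 0.5), (False, 0.1)]))  # 1 - 0.2*0.5 = 0.9
```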
|
1302.6815 | Learning Bayesian Networks: The Combination of Knowledge and Statistical
Data | cs.AI | We describe algorithms for learning Bayesian networks from a combination of
user knowledge and statistical data. The algorithms have two components: a
scoring metric and a search procedure. The scoring metric takes a network
structure, statistical data, and a user's prior knowledge, and returns a score
proportional to the posterior probability of the network structure given the
data. The search procedure generates networks for evaluation by the scoring
metric. Our contributions are threefold. First, we identify two important
properties of metrics, which we call event equivalence and parameter
modularity. These properties have been mostly ignored, but when combined,
greatly simplify the encoding of a user's prior knowledge. In particular, a
user can express her knowledge, for the most part, as a single prior Bayesian
network for the domain. Second, we describe local search and annealing
algorithms to be used in conjunction with scoring metrics. In the special case
where each node has at most one parent, we show that heuristic search can be
replaced with a polynomial algorithm to identify the networks with the highest
score. Third, we describe a methodology for evaluating Bayesian-network
learning algorithms. We apply this approach to a comparison of metrics and
search procedures.
|
1302.6816 | A Decision-Based View of Causality | cs.AI | Most traditional models of uncertainty have focused on the associational
relationship among variables as captured by conditional dependence. In order to
successfully manage intelligent systems for decision making, however, we must
be able to predict the effects of actions. In this paper, we attempt to unite
two branches of research that address such predictions: causal modeling and
decision analysis. First, we provide a definition of causal dependence in
decision-analytic terms, which we derive from consequences of causal dependence
cited in the literature. Using this definition, we show how causal dependence
can be represented within an influence diagram. In particular, we identify two
inadequacies of an ordinary influence diagram as a representation for cause. We
introduce a special class of influence diagrams, called causal influence
diagrams, which corrects one of these problems, and identify situations where
the other inadequacy can be eliminated. In addition, we describe the
relationships between Howard Canonical Form and existing graphical
representations of cause.
|
1302.6817 | Probabilistic Description Logics | cs.AI | On the one hand, classical terminological knowledge representation excludes
the possibility of handling uncertain concept descriptions involving, e.g.,
"usually true" concept properties, generalized quantifiers, or exceptions. On
the other hand, purely numerical approaches for handling uncertainty in general
are unable to consider terminological knowledge. This paper presents the
language ALUP, which is a probabilistic extension of terminological logics and
aims at closing the gap between the two areas of research. We present the
formal semantics underlying the language ALUP and introduce the probabilistic
formalism that is based on classes of probabilities and is realized by means of
probabilistic constraints. Besides inferring implicitly existent probabilistic
relationships, the constraints guarantee terminological and probabilistic
consistency. Altogether, the new language ALUP applies to domains where both
term descriptions and uncertainty have to be handled.
|
1302.6818 | An Experimental Comparison of Numerical and Qualitative Probabilistic
Reasoning | cs.AI | Qualitative and infinitesimal probability schemes are consistent with the
axioms of probability theory, but avoid the need for precise numerical
probabilities. Using qualitative probabilities could substantially reduce the
effort for knowledge engineering and improve the robustness of results. We
examine experimentally how well infinitesimal probabilities (the kappa-calculus
of Goldszmidt and Pearl) perform a diagnostic task, troubleshooting a car that
will not start, in comparison with a conventional numerical belief network. We
found the infinitesimal scheme to be as good as the numerical scheme in
identifying the true fault. The performance of the infinitesimal scheme worsens
significantly for prior fault probabilities greater than 0.03. These results
suggest that infinitesimal probability methods may be of substantial practical
value for machine diagnosis with small prior fault probabilities.
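In the kappa-calculus, a state's rank is roughly the exponent of epsilon in its probability, so ranks of independent components add and diagnosis picks the hypothesis with minimal total kappa. A toy sketch with invented fault names and ranks (not the paper's car model):

```python
# Toy kappa-calculus diagnosis. kappa(h) is an integer order of surprise
# (probability ~ epsilon^kappa); kappas of independent pieces add, and
# the most plausible hypothesis minimizes total kappa. All names and
# numbers below are illustrative.
prior_kappa = {"dead_battery": 1, "no_fuel": 1, "bad_starter": 2, "ok": 0}
obs_kappa = {"dead_battery": 0, "no_fuel": 0, "bad_starter": 0, "ok": 3}
# obs_kappa: cost of observing "car will not start" under each hypothesis

posterior = {h: prior_kappa[h] + obs_kappa[h] for h in prior_kappa}
best = min(posterior, key=posterior.get)
print(best, posterior[best])  # a kappa-1 fault beats the kappa-3 "ok" story
```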
|
1302.6819 | An Alternative Proof Method for Possibilistic Logic and its Application
to Terminological Logics | cs.AI | Possibilistic logic, an extension of first-order logic, deals with
uncertainty that can be estimated in terms of possibility and necessity
measures. Syntactically, this means that a first-order formula is equipped with
a possibility degree or a necessity degree that expresses to what extent the
formula is possibly or necessarily true. Possibilistic resolution yields a
calculus for possibilistic logic which respects the semantics developed for
possibilistic logic. A drawback, which possibilistic resolution inherits from
classical resolution, is that it may not terminate if applied to formulas
belonging to decidable fragments of first-order logic. Therefore we propose an
alternative proof method for possibilistic logic. The main feature of this
method is that it completely abstracts from a concrete calculus but uses as
basic operation a test for classical entailment. We then instantiate
possibilistic logic with a terminological logic, which is a decidable subclass
of first-order logic but nevertheless much more expressive than propositional
logic. This yields an extension of terminological logics towards the
representation of uncertain knowledge which is satisfactory from a semantic as
well as algorithmic point of view.
|
1302.6820 | Possibilistic Conditioning and Propagation | cs.AI | We give an axiomatization of confidence transfer - a known conditioning
scheme - from the perspective of expectation-based inference in the sense of
Gardenfors and Makinson. Then, we use the notion of belief independence to
"filter out" different proposal s of possibilistic conditioning rules, all are
variations of confidence transfer. Among the three rules that we consider, only
Dempster's rule of conditioning passes the test of supporting the notion of
belief independence. With the use of this conditioning rule, we then show that
we can use local computation for computing desired conditional marginal
possibilities of the joint possibility satisfying the given constraints. It
turns out that our local computation scheme is already proposed by Shenoy.
However, our intuitions are completely different from that of Shenoy. While
Shenoy just defines a local computation scheme that fits his framework of
valuation-based systems, we derive that local computation scheme from
Π(β) = Π(β | α) * Π(α) and appropriate independence assumptions, just as the
Bayesians derive their local computation scheme.
|
1302.6821 | The Automated Mapping of Plans for Plan Recognition | cs.AI | To coordinate with other agents in its environment, an agent needs models of
what the other agents are trying to do. When communication is impossible or
expensive, this information must be acquired indirectly via plan recognition.
Typical approaches to plan recognition start with a specification of the
possible plans the other agents may be following, and develop special
techniques for discriminating among the possibilities. Perhaps more desirable
would be a uniform procedure for mapping plans to general structures supporting
inference based on uncertain and incomplete observations. In this paper, we
describe a set of methods for converting plans represented in a flexible
procedural language to observation models represented as probabilistic belief
networks.
|
1302.6822 | A Logic for Default Reasoning About Probabilities | cs.AI | A logic is defined that allows one to express information about statistical
probabilities and about degrees of belief in specific propositions. By
interpreting the two types of probabilities in one common probability space,
the semantics given are well suited to model the influence of statistical
information on the formation of subjective beliefs. Cross entropy minimization
is a key element in these semantics, the use of which is justified by showing
that the resulting logic exhibits some very reasonable properties.
|
1302.6823 | Optimal Junction Trees | cs.AI | The paper deals with optimality issues in connection with updating beliefs in
networks. We address two processes: triangulation and construction of junction
trees. In the first part, we give a simple algorithm for constructing an
optimal junction tree from a triangulated network. In the second part, we argue
that any exact method based on local calculations must either be less efficient
than the junction tree method, or it has an optimality problem equivalent to
that of triangulation.
|
1302.6824 | From Influence Diagrams to Junction Trees | cs.AI | We present an approach to the solution of decision problems formulated as
influence diagrams. This approach involves a special triangulation of the
underlying graph, the construction of a junction tree with special properties,
and a message passing algorithm operating on the junction tree for computation
of expected utilities and optimal decision policies.
|
1302.6825 | Reduction of Computational Complexity in Bayesian Networks through
Removal of Weak Dependencies | cs.AI | The paper presents a method for reducing the computational complexity of
Bayesian networks through identification and removal of weak dependencies
(removal of links from the (moralized) independence graph). The removal of a
small number of links may reduce the computational complexity dramatically,
since several fill-ins and moral links may be rendered superfluous by the
removal. The method is described in terms of impact on the independence graph,
the junction tree, and the potential functions associated with these. An
empirical evaluation of the method using large real-world networks demonstrates
the applicability of the method. Further, the method, which has been
implemented in Hugin, complements the approximation method suggested by Jensen
& Andersen (1990).
|
1302.6826 | Using New Data to Refine a Bayesian Network | cs.AI | We explore the issue of refining an existent Bayesian network structure using
new data which might mention only a subset of the variables. Most previous
works have only considered the refinement of the network's conditional
probability parameters, and have not addressed the issue of refining the
network's structure. We develop a new approach for refining the network's
structure. Our approach is based on the Minimal Description Length (MDL)
principle, and it employs an adapted version of a Bayesian network learning
algorithm developed in our previous work. One of the adaptations required is to
modify the previous algorithm to account for the structure of the existent
network. The learning algorithm generates a partial network structure which can
then be used to improve the existent network. We also present experimental
evidence demonstrating the effectiveness of our approach.
|
1302.6827 | Syntax-based Default Reasoning as Probabilistic Model-based Diagnosis | cs.AI | We view the syntax-based approaches to default reasoning as a model-based
diagnosis problem, where each source giving a piece of information is
considered as a component. It is formalized in the ATMS framework (each source
corresponds to an assumption). We assume then that all sources are independent
and "fail" with a very small probability. This leads to a probability
assignment on the set of candidates, or equivalently on the set of consistent
environments. This probability assignment induces a Dempster-Shafer belief
function which measures the probability that a proposition can be deduced from
the evidence. This belief function can be used in several different ways to
define a non-monotonic consequence relation. We study and compare these
consequence relations. The case of prioritized knowledge bases is briefly
considered.
|
1302.6828 | Induction of Selective Bayesian Classifiers | cs.LG stat.ML | In this paper, we examine previous work on the naive Bayesian classifier and
review its limitations, which include a sensitivity to correlated features. We
respond to this problem by embedding the naive Bayesian induction scheme within
an algorithm that carries out a greedy search through the space of features.
We hypothesize that this approach will improve asymptotic accuracy in domains
that involve correlated features without reducing the rate of learning in ones
that do not. We report experimental results on six natural domains, including
comparisons with decision-tree induction, that support these hypotheses. In
closing, we discuss other approaches to extending naive Bayesian classifiers
and outline some directions for future research.
|
1302.6829 | Fuzzy Geometric Relations to Represent Hierarchical Spatial Information | cs.AI | A model to represent spatial information is presented in this paper. It is
based on fuzzy constraints represented as fuzzy geometric relations that can be
hierarchically structured. The concept of spatial template is introduced to
capture the idea of interrelated objects in two-dimensional space. The
representation model is used to specify imprecise or vague information
consisting of relative locations and orientations of template objects. It is
shown in this paper how a template represented by this model can be matched
against a crisp situation to recognize a particular instance of this template.
Furthermore, the proximity measure (fuzzy measure) between the instance and the
template is worked out - this measure can be interpreted as a degree of
similarity. In this context, template recognition can be viewed as a case of
fuzzy pattern recognition. The results of this work have been implemented and
applied to a complex military problem from which this work originated.
|
1302.6830 | Constructing Belief Networks to Evaluate Plans | cs.AI | This paper examines the problem of constructing belief networks to evaluate
plans produced by a knowledge-based planner. Techniques are presented for
handling various types of complicating plan features. These include plans with
context-dependent consequences, indirect consequences, actions with
preconditions that must be true during the execution of an action,
contingencies, multiple levels of abstraction, multiple execution agents with
partially-ordered and temporally overlapping actions, and plans which reference
specific times and time durations.
|
1302.6831 | Operator Selection While Planning Under Uncertainty | cs.AI | This paper describes the best first search strategy used by U-Plan (Mansell
1993a), a planning system that constructs quantitatively ranked plans given an
incomplete description of an uncertain environment. U-Plan uses uncertain and
incomplete evidence describing the environment, characterizes it using a
Dempster-Shafer interval, and generates a set of possible world states. Plan
construction takes place in an abstraction hierarchy where strategic decisions
are made before tactical decisions. Search through this abstraction hierarchy
is guided by a quantitative measure (expected fulfillment) based on decision
theory. The search strategy is best first with the provision to update expected
fulfillment and review previous decisions in the light of planning
developments. U-Plan generates multiple plans for multiple possible worlds, and
attempts to use existing plans for new world situations. A super-plan is then
constructed, based on merging the set of plans and appropriately timed
knowledge acquisition operators, which are used to decide between plan
alternatives during plan execution.
|
1302.6832 | Model-Based Diagnosis with Qualitative Temporal Uncertainty | cs.AI | In this paper we describe a framework for model-based diagnosis of dynamic
systems, which extends previous work in this field by using and expressing
temporal uncertainty in the form of qualitative interval relations a la Allen.
Based on a logical framework extended by qualitative and quantitative temporal
constraints we show how to describe behavioral models (both consistency- and
abductive-based), discuss how to use abstract observations and show how
abstract temporal diagnoses are computed. This yields an expressive framework,
which allows the representation of complex temporal behavior allowing us to
represent temporal uncertainty. Due to its abstraction capabilities computation
is made independent of the number of observations and time points in a temporal
setting. An example of hepatitis diagnosis is used throughout the paper.
|
1302.6833 | Incremental Dynamic Construction of Layered Polytree Networks | cs.AI | Certain classes of problems, including perceptual data understanding,
robotics, discovery, and learning, can be represented as incremental,
dynamically constructed belief networks. These automatically constructed
networks can be dynamically extended and modified as evidence of new
individuals becomes available. The main result of this paper is the incremental
extension of the singly connected polytree network in such a way that the
network retains its singly connected polytree structure after the changes. The
algorithm is deterministic and is guaranteed to have a complexity of single
node addition that is at most of order proportional to the number of nodes (or
size) of the network. Additional speed-up can be achieved by maintaining the
path information. Despite its incremental and dynamic nature, the algorithm can
also be used for probabilistic inference in belief networks in a fashion
similar to other exact inference algorithms.
|
1302.6834 | Models of Consensus for Multiple Agent Systems | cs.MA cs.SY | Models of consensus are used to manage multiple agent systems in order to
choose between different recommendations provided by the system. It is assumed
that there is a central agent that solicits recommendations or plans from other
agents. That agent then determines the consensus of the other agents, and
chooses the resultant consensus recommendation or plan. Voting schemes such as
this have been used in a variety of domains, including air traffic control.
This paper uses an analytic model to study the use of consensus in multiple
agent systems. The binomial model is used to study the probability that the
consensus judgment is correct or incorrect. That basic model is extended to
account for both different levels of agent competence and unequal prior odds.
The analysis of that model is critical in the investigation of multiple agent
systems, since the model leads us to conclude that in some cases consensus
judgment is not appropriate. In addition, the results allow us to determine how
many agents should be used to develop consensus decisions, which agents should
be used to develop consensus decisions and under which conditions the consensus
model should be used.
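The binomial model here is Condorcet-style: with n independent agents each correct with probability p, the chance that a majority vote is correct is a binomial tail. A sketch of the basic model only (the abstract's extensions to unequal competence and prior odds are not shown):

```python
from math import comb

# P(majority of n independent agents is correct), each correct with
# probability p; odd n avoids ties. Equal competence and equal priors
# only -- the simplest version of the abstract's binomial model.
def majority_correct(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(majority_correct(5, 0.7), 4))  # consensus beats one agent when p > 0.5
print(round(majority_correct(5, 0.4), 4))  # ...and hurts when p < 0.5
```

This is exactly why the abstract concludes consensus judgment is sometimes inappropriate: below p = 0.5 the majority is more often wrong than any single agent.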
|
1302.6835 | A Probabilistic Calculus of Actions | cs.AI | We present a symbolic machinery that admits both probabilistic and causal
information about a given domain and produces probabilistic statements about
the effect of actions and the impact of observations. The calculus admits two
types of conditioning operators: ordinary Bayes conditioning, P(y|X = x), which
represents the observation X = x, and causal conditioning, P(y|do(X = x)), read as
the probability of Y = y conditioned on holding X constant (at x) by deliberate
action. Given a mixture of such observational and causal sentences, together
with the topology of the causal graph, the calculus derives new conditional
probabilities of both types, thus enabling one to quantify the effects of
actions (and policies) from partially specified knowledge bases, such as
Bayesian networks in which some conditional probabilities may not be available.
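The gap between the two conditioning operators already shows up in a three-node confounded model Z -> X, Z -> Y, X -> Y, where P(y|do(x)) is given by the adjustment formula. A sketch with made-up numbers (not from the paper):

```python
# Adjustment-formula sketch for a confounded model Z -> X, Z -> Y, X -> Y.
# All probabilities are invented for illustration.
P_z = {0: 0.5, 1: 0.5}                    # P(Z = z)
P_x1_given_z = {0: 0.2, 1: 0.8}           # P(X = 1 | Z = z)
P_y1 = {(0, 0): 0.1, (0, 1): 0.5,         # P(Y = 1 | X = x, Z = z), keyed (x, z)
        (1, 0): 0.4, (1, 1): 0.8}

# Ordinary Bayes conditioning: P(Y=1 | X=1) = sum_z P(z | X=1) P(Y=1 | 1, z)
p_x1 = sum(P_z[z] * P_x1_given_z[z] for z in (0, 1))
p_obs = sum(P_z[z] * P_x1_given_z[z] / p_x1 * P_y1[(1, z)] for z in (0, 1))

# Causal conditioning: P(Y=1 | do(X=1)) = sum_z P(z) P(Y=1 | 1, z),
# since do(X=1) cuts the arc Z -> X.
p_do = sum(P_z[z] * P_y1[(1, z)] for z in (0, 1))

print(round(p_obs, 4), round(p_do, 4))  # the two differ under confounding
```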
|
1302.6836 | Robust Planning in Uncertain Environments | cs.AI | This paper describes a novel approach to planning which takes advantage of
decision theory to greatly improve robustness in an uncertain environment. We
present an algorithm which computes conditional plans of maximum expected
utility. This algorithm relies on a representation of the search space as an
AND/OR tree and employs a depth-limit to control computation costs. A numeric
robustness factor, which parameterizes the utility function, allows the user to
modulate the degree of risk-aversion employed by the planner. Via a look-ahead
search, the planning algorithm seeks to find an optimal plan using expected
utility as its optimization criterion. We present experimental results obtained
by applying our algorithm to a non-deterministic extension of the blocks world
domain. Our results demonstrate that the robustness factor governs the degree
of risk embodied in the conditional plans computed by our algorithm.
|
1302.6837 | Anytime Decision Making with Imprecise Probabilities | cs.AI | This paper examines methods of decision making that are able to accommodate
limitations on both the form in which uncertainty pertaining to a decision
problem can be realistically represented and the amount of computing time
available before a decision must be made. The methods are anytime algorithms in
the sense of Boddy and Dean 1991. Techniques are presented for use with Frisch
and Haddawy's [1992] anytime deduction system, with an anytime adaptation of
Nilsson's [1986] probabilistic logic, and with a probabilistic database model.
|
1302.6838 | Three Approaches to Probability Model Selection | stat.ME cs.AI | This paper compares three approaches to the problem of selecting among
probability models to fit data: (1) use of statistical criteria such as Akaike's
information criterion and Schwarz's "Bayesian information criterion," (2)
maximization of the posterior probability of the model, and (3) maximization of
an effectiveness ratio, trading off accuracy and computational cost. The
unifying characteristic of the approaches is that all can be viewed as
maximizing a penalized likelihood function. The second approach with suitable
prior distributions has been shown to reduce to the first. This paper shows
that the third approach reduces to the second for a particular form of the
effectiveness ratio, and illustrates all three approaches with the problem of
selecting the number of components in a mixture of Gaussian distributions.
Unlike the first two approaches, the third can be used even when the candidate
models are chosen for computational efficiency, without regard to physical
interpretation, so that the likelihood and the prior distribution over models
cannot be interpreted literally. As the most general and computationally
oriented of the approaches, it is especially useful for artificial intelligence
applications.
|
1302.6839 | Knowledge Engineering for Large Belief Networks | cs.AI | We present several techniques for knowledge engineering of large belief
networks (BNs) based on our experiences with a network derived from a large
medical knowledge base. The noisyMAX, a generalization of the noisy-OR gate, is
used to model causal independence in a BN with multi-valued variables. We
describe the use of leak probabilities to enforce the closed-world assumption
in our model. We present Netview, a visualization tool based on causal
independence and the use of leak probabilities. The Netview software allows
knowledge engineers to dynamically view sub-networks for knowledge engineering,
and it provides version control for editing a BN. Netview generates
sub-networks in which leak probabilities are dynamically updated to reflect the
missing portions of the network.
|
1302.6840 | Solving Asymmetric Decision Problems with Influence Diagrams | cs.AI | While influence diagrams have many advantages as a representation framework
for Bayesian decision problems, they have a serious drawback in handling
asymmetric decision problems. To be represented in an influence diagram, an
asymmetric decision problem must be symmetrized. A considerable amount of
unnecessary computation may be involved when a symmetrized influence diagram is
evaluated by conventional algorithms. In this paper we present an approach for
avoiding such unnecessary computation in influence diagram evaluation.
|
1302.6841 | Belief Maintenance in Bayesian Networks | cs.AI | Bayesian Belief Networks (BBNs) are a powerful formalism for reasoning under
uncertainty but bear some severe limitations: they require a large amount of
information before any reasoning process can start, they have limited
contradiction handling capabilities, and their ability to provide explanations
for their conclusion is still controversial. There exists a class of reasoning
systems, called Truth Maintenance Systems (TMSs), which are able to deal with
partially specified knowledge, to provide well-founded explanation for their
conclusions, and to detect and handle contradictions. TMSs incorporating
measure of uncertainty are called Belief Maintenance Systems (BMSs). This paper
describes how a BMS based on probabilistic logic can be applied to BBNs, thus
introducing a new class of BBNs, called Ignorant Belief Networks, able to
incrementally deal with partially specified conditional dependencies, to
provide explanations, and to detect and handle contradictions.
|
1302.6842 | Belief Updating by Enumerating High-Probability Independence-Based
Assignments | cs.AI | Independence-based (IB) assignments to Bayesian belief networks were
originally proposed as abductive explanations. IB assignments assign fewer
variables in abductive explanations than do schemes assigning values to all
evidentially supported variables. We use IB assignments to approximate marginal
probabilities in Bayesian belief networks. Recent work in belief updating for
Bayes networks attempts to approximate posterior probabilities by finding a
small number of the highest probability complete (or perhaps evidentially
supported) assignments. Under certain assumptions, the probability mass in the
union of these assignments is sufficient to obtain a good approximation. Such
methods are especially useful for highly-connected networks, where the maximum
clique size or the cutset size make the standard algorithms intractable. Since
IB assignments contain fewer assigned variables, the probability mass in each
assignment is greater than in the respective complete assignment. Thus, fewer
IB assignments are sufficient, and a good approximation can be obtained more
efficiently. IB assignments can be used for efficiently approximating posterior
node probabilities even in cases which do not obey the rather strict skewness
assumptions used in previous research. Two algorithms for finding the high
probability IB assignments are suggested: one by doing a best-first heuristic
search, and another by special-purpose integer linear programming. Experimental
results show that this approach is feasible for highly connected belief
networks.
|
1302.6843 | Global Conditioning for Probabilistic Inference in Belief Networks | cs.AI | In this paper we propose a new approach to probabilistic inference on belief
networks, global conditioning, which is a simple generalization of Pearl's
(1986b) method of loop-cutset conditioning. We show that global conditioning, as
well as loop-cutset conditioning, can be thought of as a special case of the
method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a;
1990b). Nonetheless, this approach provides new opportunities for parallel
processing and, in the case of sequential processing, a tradeoff of time for
memory. We also show how a hybrid method (Suermondt and others 1990) combining
loop-cutset conditioning with Jensen's method can be viewed within our
framework. By exploring the relationships between these methods, we develop a
unifying framework in which the advantages of each approach can be combined
successfully.
|
1302.6844 | Belief Induced by the Partial Knowledge of the Probabilities | cs.AI | We construct the belief function that quantifies the agent's beliefs about
which event of Q will occur when he knows that the event is selected by a
chance set-up and that the probability function associated with the chance
set-up is only partially known.
|
1302.6845 | Ignorance and the Expressiveness of Single- and Set-Valued Probability
Models of Belief | cs.AI | Over time, there have been refinements in the way that probability
distributions are used for representing beliefs. Models which rely on single
probability distributions depict a complete ordering among the propositions of
interest, yet human beliefs are sometimes not completely ordered. Non-singleton
sets of probability distributions can represent partially ordered beliefs.
Convex sets are particularly convenient and expressive, but it is known that
there are reasonable patterns of belief whose faithful representation requires
less restrictive sets. The present paper shows that prior ignorance about three
or more exclusive alternatives and the emergence of partially ordered beliefs
when evidence is obtained defy representation by any single set of
distributions, but yield to a representation based on several sets. The partial
order is shown to be a partial qualitative probability which shares some
intuitively appealing attributes with probability distributions.
|
1302.6846 | A Probabilistic Approach to Hierarchical Model-based Diagnosis | cs.AI | Model-based diagnosis reasons backwards from a functional schematic of a
system to isolate faults given observations of anomalous behavior. We develop a
fully probabilistic approach to model-based diagnosis and extend it to support
hierarchical models. Our scheme translates the functional schematic into a
Bayesian network and diagnostic inference takes place in the Bayesian network.
A Bayesian network diagnostic inference algorithm is modified to take advantage
of the hierarchy to give computational gains.
|
1302.6847 | Semigraphoids Are Two-Antecedental Approximations of Stochastic
Conditional Independence Models | cs.AI | The semigraphoid closure of every couple of CI-statements (CI = conditional
independence) is a stochastic CI-model. As a consequence of this result it is
shown that every probabilistically sound inference rule for CI-models, having at
most two antecedents, is derivable from the semigraphoid inference rules. This
justifies the use of semigraphoids as approximations of stochastic CI-models in
probabilistic reasoning. The list of all 19 potential dominant elements of the
mentioned semigraphoid closure is given as a byproduct.
|
1302.6848 | Exceptional Subclasses in Qualitative Probability | cs.AI | System Z+ [Goldszmidt and Pearl, 1991, Goldszmidt, 1992] is a formalism for
reasoning with normality defaults of the form "typically if phi then psi (with
strength delta)" where delta is a positive integer. The system has a critical
shortcoming in that it does not sanction inheritance across exceptional
subclasses. In this paper we propose an extension to System Z+ that rectifies
this shortcoming by extracting additional conditions between worlds from the
defaults database. We show that the additional constraints do not change the
notion of the consistency of a database. We also make comparisons with
competing default reasoning systems.
|
1302.6849 | A Defect in Dempster-Shafer Theory | cs.AI | By analyzing the relationships among chance, weight of evidence and degree of
belief, we show that the assertion "probability functions are special cases of
belief functions" and the assertion "Dempster's rule can be used to combine
belief functions based on distinct bodies of evidence" together lead to an
inconsistency in Dempster-Shafer theory. To solve this problem, we must reject
some fundamental postulates of the theory. We introduce a new approach for
uncertainty management that shares many intuitive ideas with D-S theory, while
avoiding this problem.
|
1302.6850 | State-space Abstraction for Anytime Evaluation of Probabilistic Networks | cs.AI | One important factor determining the computational complexity of evaluating a
probabilistic network is the cardinality of the state spaces of the nodes. By
varying the granularity of the state spaces, one can trade off accuracy in the
result for computational efficiency. We present an anytime procedure for
approximate evaluation of probabilistic networks based on this idea. On
application to some simple networks, the procedure exhibits a smooth
improvement in approximation quality as computation time increases. This
suggests that state-space abstraction is one more useful control parameter for
designing real-time probabilistic reasoners.
|
1302.6851 | General Belief Measures | cs.AI | Probability measures by themselves are known to be inappropriate for
modeling the dynamics of plain belief and their excessively strong
measurability constraints make them unsuitable for some representational tasks,
e.g. in the context of first-order knowledge. In this paper, we are therefore
going to look for possible alternatives and extensions. We begin by delimiting
the general area of interest, proposing a minimal list of assumptions to be
satisfied by any reasonable quasi-probabilistic valuation concept. Within this
framework, we investigate two particularly interesting kinds of quasi-measures
which are not or much less affected by the traditional problems. * Ranking
measures, which generalize Spohn-type and possibility measures. * Cumulative
measures, which combine the probabilistic and the ranking philosophy, allowing
thereby a fine-grained account of static and dynamic belief.
|
1302.6852 | Generating Graphoids from Generalised Conditional Probability | cs.AI | We take a general approach to uncertainty on product spaces, and give
sufficient conditions for the independence structures of uncertainty measures
to satisfy graphoid properties. Since these conditions are arguably more
intuitive than some of the graphoid properties, they can be viewed as
explanations why probability and certain other formalisms generate graphoids.
The conditions include a sufficient condition for the Intersection property
which can still apply even if there is a strong logical relationship between
the variables. We indicate how these results can be used to produce theories of
qualitative conditional probability which are semi-graphoids and graphoids.
|
1302.6853 | On Axiomatization of Probabilistic Conditional Independencies | cs.AI | This paper studies the connection between probabilistic conditional
independence in uncertain reasoning and data dependency in relational
databases. As a demonstration of the usefulness of this preliminary
investigation, an alternate proof is presented for refuting the conjecture
suggested by Pearl and Paz that probabilistic conditional independencies have a
complete axiomatization.
|
1302.6854 | Evidential Reasoning with Conditional Belief Functions | cs.AI | In the existing evidential networks with belief functions, the relations
among the variables are always represented by joint belief functions on the
product space of the involved variables. In this paper, we use conditional
belief functions to represent such relations in the network and show some
relations of these two kinds of representations. We also present a propagation
algorithm for such networks. By analyzing the properties of some special
evidential networks with conditional belief functions, we show that the
reasoning process can be simplified in such kinds of networks.
|
1302.6855 | Inter-causal Independence and Heterogeneous Factorization | cs.AI | It is well known that conditional independence can be used to factorize a
joint probability into a multiplication of conditional probabilities. This
paper proposes a constructive definition of inter-causal independence, which
can be used to further factorize a conditional probability. An inference
algorithm is developed, which makes use of both conditional independence and
inter-causal independence to reduce inference complexity in Bayesian networks.
|
1302.6866 | Vandermonde-subspace Frequency Division Multiplexing for Two-Tiered
Cognitive Radio Networks | cs.IT math.IT | Vandermonde-subspace frequency division multiplexing (VFDM) is an overlay
spectrum sharing technique for cognitive radio. VFDM makes use of a precoder
based on a Vandermonde structure to transmit information over a secondary
system, while keeping an orthogonal frequency division multiplexing
(OFDM)-based primary system interference-free. To do so, VFDM exploits
frequency selectivity and the use of cyclic prefixes by the primary system.
Herein, a global view of VFDM is presented, including also practical aspects
such as linear receivers and the impact of channel estimation. We show that
VFDM provides a spectral efficiency increase of up to 1 bps/Hz over cognitive
radio systems based on unused band detection. We also present some key design
parameters for its future implementation and a feasible channel estimation
protocol. Finally we show that, even when some of the theoretical assumptions
are relaxed, VFDM provides non-negligible rates while protecting the primary
system.
|
1302.6906 | Tradition and Innovation in Scientists' Research Strategies | physics.soc-ph cs.DL cs.SI stat.AP | What factors affect a scientist's choice of research problem? Qualitative
research in the history, philosophy, and sociology of science suggests that
this choice is shaped by an "essential tension" between the professional demand
for productivity and a conflicting drive toward risky innovation. We examine
this tension empirically in the context of biomedical chemistry. We use complex
networks to represent the evolving state of scientific knowledge, as expressed
in publications. We then define research strategies relative to these networks.
Scientists can introduce novel chemicals or chemical relationships--or delve
deeper into known ones. They can consolidate existing knowledge clusters, or
bridge distant ones. Analyzing such choices in aggregate, we find that the
distribution of strategies remains remarkably stable, even as chemical
knowledge grows dramatically. High-risk strategies, which explore new chemical
relationships, are less prevalent in the literature, reflecting a growing focus
on established knowledge at the expense of new opportunities. Research
following a risky strategy is more likely to be ignored but also more likely to
achieve high impact and recognition. While the outcome of a risky strategy has
a higher expected reward than the outcome of a conservative strategy, the
additional reward is insufficient to compensate for the additional risk. By
studying the winners of 137 different prizes in biomedicine and chemistry, we
show that the occasional "gamble" for extraordinary impact is the most
plausible explanation for observed levels of risk-taking. Our empirical
demonstration and unpacking of the "essential tension" suggests policy
interventions that may foster more innovative research.
|
1302.6927 | Online Learning for Time Series Prediction | cs.LG | In this paper we address the problem of predicting a time series using the
ARMA (autoregressive moving average) model, under minimal assumptions on the
noise terms. Using regret minimization techniques, we develop effective online
learning algorithms for the prediction problem, without assuming that the noise
terms are Gaussian, identically distributed or even independent. Furthermore,
we show that our algorithm's performance asymptotically approaches the
performance of the best ARMA model in hindsight.
|
1302.6932 | Describing the complexity of systems: multi-variable "set complexity"
and the information basis of systems biology | cs.IT math.IT q-bio.QM | Context dependence is central to the description of complexity. Keying on the
pairwise definition of "set complexity" we use an information theory approach
to formulate general measures of systems complexity. We examine the properties
of multi-variable dependency starting with the concept of interaction
information. We then present a new measure for unbiased detection of
multi-variable dependency, "differential interaction information." This
quantity for two variables reduces to the pairwise "set complexity" previously
proposed as a context-dependent measure of information in biological systems.
We generalize it here to an arbitrary number of variables. Critical limiting
properties of the "differential interaction information" are key to the
generalization. This measure extends previous ideas about biological
information and provides a more sophisticated basis for study of complexity.
The properties of "differential interaction information" also suggest new
approaches to data analysis. Given a data set of system measurements
differential interaction information can provide a measure of collective
dependence, which can be represented in hypergraphs describing complex system
interaction patterns. We investigate this kind of analysis using simulated data
sets. The conjoining of a generalized set complexity measure, multi-variable
dependency analysis, and hypergraphs is our central result. While our focus is
on complex biological systems, our results are applicable to any complex
system.
|
1302.6934 | Optimum Header Positioning in Successive Interference Cancellation (SIC)
based Aloha | cs.IT math.IT | Random Access MAC protocols are simple and effective when the nature of the
traffic is unpredictable and sporadic. This paper presents investigations of
the new Enhanced Contention Resolution ALOHA (ECRA), examining some new aspects
of the protocol. A mathematical derivation and numerical evaluation of the
symbol interference probability after SIC are provided. Results on the optimum
header positioning, which is found to be at the beginning and at the end of
the packets, are exploited for the evaluation of
ECRA throughput and Packet Error Rate (PER) under imperfect knowledge of
packets positions. Remarkable gains in the maximum throughput are observed for
ECRA w.r.t. Contention Resolution ALOHA (CRA) under this assumption.
|
1302.6937 | Online Convex Optimization Against Adversaries with Memory and
Application to Statistical Arbitrage | cs.LG | The framework of online learning with memory naturally captures learning
problems with temporal constraints, and was previously studied for the experts
setting. In this work we extend the notion of learning with memory to the
general Online Convex Optimization (OCO) framework, and present two algorithms
that attain low regret. The first algorithm applies to Lipschitz continuous
loss functions, obtaining optimal regret bounds for both convex and strongly
convex losses. The second algorithm attains the optimal regret bounds and
applies more broadly to convex losses without requiring Lipschitz continuity,
yet is more complicated to implement. We complement our theoretic results with
an application to statistical arbitrage in finance: we devise algorithms for
constructing mean-reverting portfolios.
|
1302.6957 | Ensemble Sparse Models for Image Analysis | cs.CV | Sparse representations with learned dictionaries have been successful in
several image analysis applications. In this paper, we propose and analyze the
framework of ensemble sparse models, and demonstrate their utility in image
restoration and unsupervised clustering. The proposed ensemble model
approximates the data as a linear combination of approximations from multiple
\textit{weak} sparse models. Theoretical analysis of the ensemble model reveals
that even in the worst-case, the ensemble can perform better than any of its
constituent individual models. The dictionaries corresponding to the individual
sparse models are obtained using either random example selection or boosted
approaches. Boosted approaches learn one dictionary per round such that the
dictionary learned in a particular round is optimized for the training examples
having high reconstruction error in the previous round. Results with compressed
recovery show that the ensemble representations lead to a better performance
compared to using a single dictionary obtained with the conventional
alternating minimization approach. The proposed ensemble models are also used
for single image superresolution, and we show that they perform comparably to
the recent approaches. In unsupervised clustering, experiments show that the
proposed model performs better than baseline approaches in several standard
datasets.
|
1302.6974 | Spectrum Bandit Optimization | cs.LG cs.NI math.OC | We consider the problem of allocating radio channels to links in a wireless
network. Links interact through interference, modelled as a conflict graph
(i.e., two interfering links cannot be simultaneously active on the same
channel). We aim at identifying the channel allocation maximizing the total
network throughput over a finite time horizon. Should we know the average radio
conditions on each channel and on each link, an optimal allocation would be
obtained by solving an Integer Linear Program (ILP). When radio conditions are
unknown a priori, we look for a sequential channel allocation policy that
converges to the optimal allocation while minimizing on the way the throughput
loss or {\it regret} due to the need for exploring sub-optimal allocations. We
formulate this problem as a generic linear bandit problem, and analyze it first
in a stochastic setting where radio conditions are driven by a stationary
stochastic process, and then in an adversarial setting where radio conditions
can evolve arbitrarily. We provide new algorithms in both settings and derive
upper bounds on their regrets.
|
1302.6990 | Stabilizer information inequalities from phase space distributions | quant-ph cs.IT math-ph math.IT math.MP | The Shannon entropy of a collection of random variables is subject to a
number of constraints, the best-known examples being monotonicity and strong
subadditivity. It remains an open question to decide which of these "laws of
information theory" are also respected by the von Neumann entropy of many-body
quantum states. In this article, we consider a toy version of this difficult
problem by analyzing the von Neumann entropy of stabilizer states. We find that
the von Neumann entropy of stabilizer states satisfies all balanced information
inequalities that hold in the classical case. Our argument is built on the fact
that stabilizer states have a classical model, provided by the discrete Wigner
function: The phase-space entropy of the Wigner function corresponds directly
to the von Neumann entropy of the state, which allows us to reduce to the
classical case. Our result has a natural counterpart for multi-mode Gaussian
states, which sheds some light on the general properties of the construction.
We also discuss the relation of our results to recent work by Linden, Ruskai,
and Winter.
|
1302.7025 | Maximizing Acceptance Probability for Active Friending in On-Line Social
Networks | cs.SI cs.CY physics.soc-ph | Friending recommendation has successfully contributed to the explosive growth
of on-line social networks. Most friending recommendation services today aim to
support passive friending, where a user passively selects friending targets
from the recommended candidates. In this paper, we advocate recommendation
support for active friending, where a user actively specifies a friending
target. To the best of our knowledge, a recommendation designed to provide
guidance for a user to systematically approach his friending target, has not
been explored in existing on-line social networking services. To maximize the
probability that the friending target would accept an invitation from the user,
we formulate a new optimization problem, namely, \emph{Acceptance Probability
Maximization (APM)}, and develop a polynomial time algorithm, called
\emph{Selective Invitation with Tree and In-Node Aggregation (SITINA)}, to find
the optimal solution. We implement an active friending service with SITINA in
Facebook to validate our idea. Our user study and experimental results show
that SITINA efficiently outperforms manual selection and the baseline approach
in solution quality.
|
1302.7039 | Content Based Image Retrieval System Using NOHIS-tree | cs.IR cs.CV cs.DB | Content-based image retrieval (CBIR) has been one of the most important
research areas in computer vision. It is a widely used method for searching
images in huge databases. In this paper we present a CBIR system called
NOHIS-Search. The system is based on the indexing technique NOHIS-tree. The two
phases of the system are described and the performance of the system is
illustrated with the image database ImagEval. The NOHIS-Search system was
compared to two other CBIR systems: the first using the PDDP indexing
algorithm and the second using sequential search. Results show that the
NOHIS-Search system outperforms the two other systems.
|
1302.7043 | Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization | stat.ML cs.LG | How can we correlate neural activity in the human brain as it responds to
words, with behavioral data expressed as answers to questions about these same
words? In short, we want to find latent variables, that explain both the brain
activity, as well as the behavioral responses. We show that this is an instance
of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose
Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem
and produces a sparse latent low-rank subspace of the data. In our experiments,
we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm
for CMTF, along with a 5 fold increase in sparsity. Moreover, we extend
Scoup-SMT to handle missing data without degradation of performance. We apply
Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human
subjects) tensor and a (nouns, properties) matrix, with coupling along the
nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well
as to predict brain activity with competitive accuracy. Finally, we demonstrate
the generality of Scoup-SMT, by applying it on a Facebook dataset (users,
friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
|
1302.7051 | Polyploidy and Discontinuous Heredity Effect on Evolutionary
Multi-Objective Optimization | cs.NE | This paper examines the effect of mimicking discontinuous heredity caused by
carrying more than one chromosome in some living organisms' cells in
Evolutionary Multi-Objective Optimization algorithms. In this representation,
the phenotype may not fully reflect the genotype. By doing so we mimic living
organisms' inheritance mechanisms, where traits may be silently carried for
many generations to reappear later. Representations with different numbers of
chromosomes in each solution vector are tested on different benchmark problems
with high numbers of decision variables and objectives. A comparison
with Non-Dominated Sorting Genetic Algorithm-II is done on all problems.
|
1302.7056 | KSU KDD: Word Sense Induction by Clustering in Topic Space | cs.CL cs.AI stat.AP stat.ML | We describe our language-independent unsupervised word sense induction
system. This system only uses topic features to cluster different word senses
in their global context topic space. Using unlabeled data, this system trains a
latent Dirichlet allocation (LDA) topic model, then uses it to infer the topic
distributions of the test instances. By clustering these topic distributions in
their topic space we cluster them into different senses. Our hypothesis is that
closeness in topic space reflects similarity between different word senses.
This system participated in SemEval-2 word sense induction and disambiguation
task and achieved the second highest V-measure score among all other systems.
|