| id | title | categories | abstract |
|---|---|---|---|
1303.5412 | Bayesian Meta-Reasoning: Determining Model Adequacy from Within a Small
World | cs.AI | This paper presents a Bayesian framework for assessing the adequacy of a
model without the necessity of explicitly enumerating a specific alternate
model. A test statistic is developed for tracking the performance of the model
across repeated problem instances. Asymptotic methods are used to derive an
approximate distribution for the test statistic. When the model is rejected,
the individual components of the test statistic can be used to guide search for
an alternate model.
|
1303.5413 | The Bounded Bayesian | cs.AI | The ideal Bayesian agent reasons from a global probability model, but real
agents are restricted to simplified models which they know to be adequate only
in restricted circumstances. Very little formal theory has been developed to
help fallibly rational agents manage the process of constructing and revising
small world models. The goal of this paper is to present a theoretical
framework for analyzing model management approaches. For a probability
forecasting problem, a search process over small world models is analyzed as an
approximation to a larger-world model which the agent cannot explicitly
enumerate or compute. Conditions are given under which the sequence of
small-world models converges to the larger-world probabilities.
|
1303.5414 | Representing Context-Sensitive Knowledge in a Network Formalism: A
Preliminary Report | cs.AI | Automated decision making is often complicated by the complexity of the
knowledge involved. Much of this complexity arises from the context sensitive
variations of the underlying phenomena. We propose a framework for representing
descriptive, context-sensitive knowledge. Our approach attempts to integrate
categorical and uncertain knowledge in a network formalism. This paper outlines
the basic representation constructs, examines their expressiveness and
efficiency, and discusses the potential applications of the framework.
|
1303.5415 | A Probabilistic Network of Predicates | cs.AI | Bayesian networks are directed acyclic graphs representing independence
relationships among a set of random variables. A random variable can be
regarded as a set of exhaustive and mutually exclusive propositions. We argue
that there are several drawbacks resulting from the propositional nature and
acyclic structure of Bayesian networks. To remedy these shortcomings, we
propose a probabilistic network where nodes represent unary predicates and
which may contain directed cycles. The proposed representation allows us to
represent domain knowledge in a single static network even though we cannot
determine the instantiations of the predicates beforehand. The ability to deal
with cycles also enables us to handle cyclic causal tendencies and to recognize
recursive plans.
|
1303.5416 | Representing Heuristic Knowledge in D-S Theory | cs.AI | The Dempster-Shafer theory of evidence has been used intensively to deal with
uncertainty in knowledge-based systems. However, the representation of uncertain
relationships between evidence and hypothesis groups (heuristic knowledge) is
still a major research problem. This paper presents an approach to representing
such heuristic knowledge by evidential mappings which are defined on the basis
of mass functions. The relationships between evidential mappings and
multi-valued mappings, as well as between evidential mappings and Bayesian
multi-valued causal-link models in Bayesian theory, are discussed. Following
this, the
detailed procedures for constructing evidential mappings for any set of
heuristic rules are introduced. Several situations of belief propagation are
discussed.
|
1303.5417 | The Topological Fusion of Bayes Nets | cs.AI | Bayes nets are relatively recent innovations. As a result, most of their
theoretical development has focused on the simplest class of single-author
models. The introduction of more sophisticated multiple-author settings raises
a variety of interesting questions. One such question involves the nature of
compromise and consensus. Posterior compromises let each model process all data
to arrive at an independent response, and then split the difference. Prior
compromises, on the other hand, force compromise to be reached on all points
before data is observed. This paper introduces prior compromises in a Bayes net
setting. It outlines the problem and develops an efficient algorithm for fusing
two directed acyclic graphs into a single, consensus structure, which may then
be used as the basis of a prior compromise.
|
1303.5418 | Calculating Uncertainty Intervals From Conditional Convex Sets of
Probabilities | cs.AI | In Moral and Campos (1991) and Cano, Moral, and Verdegay-Lopez (1991), a new
method of conditioning convex sets of probabilities was proposed. Its result is
a convex set of not-necessarily-normalized probability distributions. The
normalizing factor of each probability distribution is interpreted as the
possibility assigned to it by the conditioning information. From this, it is
deduced that the natural value for the conditional probability of an event is a
possibility distribution. The aim of this paper is to study methods of
transforming this possibility distribution into a probability (or uncertainty)
interval. These methods will be based on the use of Sugeno and Choquet
integrals. Their behaviour will be compared on the basis of some selected examples.
|
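The Sugeno and Choquet integrals named in the abstract above are standard constructions; as a rough illustration (not the paper's own transformation method), a discrete Choquet integral can be sketched as follows, with the capacity `mu` and function `f` purely hypothetical examples:

```python
# Discrete Choquet integral of a function f with respect to a capacity
# (monotone set function) mu on a finite set. A minimal sketch only;
# mu is given explicitly as a dict mapping frozensets to values.
def choquet(f, mu):
    """f: dict element -> value; mu: dict frozenset -> capacity value."""
    items = sorted(f, key=f.get)          # elements, ascending by f-value
    total, prev = 0.0, 0.0
    for k, x in enumerate(items):
        upper = frozenset(items[k:])      # elements with f >= f(x)
        total += (f[x] - prev) * mu[upper]
        prev = f[x]
    return total

# Hypothetical capacity on {a, b, c} and function values.
f = {"a": 0.2, "b": 0.5, "c": 1.0}
mu = {frozenset("abc"): 1.0, frozenset("bc"): 0.7,
      frozenset("c"): 0.4, frozenset(): 0.0}
print(round(choquet(f, mu), 6))  # 0.61
```

With an additive `mu`, this reduces to an ordinary weighted average, which is one way to sanity-check an implementation.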
1303.5419 | Sensor Validation Using Dynamic Belief Networks | cs.AI | The trajectory of a robot is monitored in a restricted dynamic environment
using light beam sensor data. We have a Dynamic Belief Network (DBN), based on
a discrete model of the domain, which provides discrete monitoring analogous to
conventional quantitative filter techniques. Sensor observations are added to
the basic DBN in the form of specific evidence. However, sensor data is often
partially or totally incorrect. We show how the basic DBN, which can only
infer that a combination of evidence is impossible, may be modified to handle
specific types of incorrect data which may occur in the domain. We then present
an extension to
the DBN, the addition of an invalidating node, which models the status of the
sensor as working or defective. This node provides a qualitative explanation of
inconsistent data: it is caused by a defective sensor. The connection of
successive instances of the invalidating node models the status of a sensor
over time, allowing the DBN to handle both persistent and intermittent faults.
|
1303.5420 | Empirical Probabilities in Monadic Deductive Databases | cs.AI cs.DB | We address the problem of supporting empirical probabilities in monadic logic
databases. Though the semantics of multivalued logic programs has been studied
extensively, the treatment of probabilities as results of statistical findings
has not been studied in logic programming/deductive databases. We develop a
model-theoretic characterization of logic databases that facilitates such a
treatment. We present an algorithm for checking consistency of such databases
and prove its total correctness. We develop a sound and complete query
processing procedure for handling queries to such databases.
|
1303.5421 | aHUGIN: A System Creating Adaptive Causal Probabilistic Networks | cs.AI | The paper describes aHUGIN, a tool for creating adaptive systems. aHUGIN is
an extension of the HUGIN shell, and is based on the methods reported by
Spiegelhalter and Lauritzen (1990a). The adaptive systems resulting from aHUGIN
are able to adjust the conditional probabilities in the model. A short
analysis of the adaptation task is given and the features of aHUGIN are
described. Finally, a session with experiments is reported and the results are
discussed.
|
1303.5422 | MESA: Maximum Entropy by Simulated Annealing | cs.AI | Probabilistic reasoning systems combine different probabilistic rules and
probabilistic facts to arrive at the desired probability values of
consequences. In this paper we describe the MESA-algorithm (Maximum Entropy by
Simulated Annealing) that derives a joint distribution of variables or
propositions. It takes into account the reliability of probability values and
can resolve conflicts between contradictory statements. The joint distribution
is represented in terms of marginal distributions and therefore makes it
possible to process large inference networks and to determine desired
probability values
with high precision. The procedure derives a maximum entropy distribution
subject to the given constraints. It can be applied to inference networks of
arbitrary topology and may be extended into a number of directions.
|
1303.5423 | Decision Methods for Adaptive Task-Sharing in Associate Systems | cs.AI | This paper describes some results of research on associate systems:
knowledge-based systems that flexibly and adaptively support their human users
in carrying out complex, time-dependent problem-solving tasks under
uncertainty. Based on principles derived from decision theory and decision
analysis, a problem-solving approach is presented which can overcome many of
the limitations of traditional expert systems. This approach implements an
explicit model of the human user's problem-solving capabilities as an integral
element in the overall problem solving architecture. This integrated model,
represented as an influence diagram, is the basis for achieving adaptive task
sharing behavior between the associate system and the human user. This
associate system model has been applied toward ongoing research on a Mars Rover
Manager's Associate (MRMA). MRMA's role would be to manage a small fleet of
robotic rovers on the Martian surface. The paper describes results for a
specific scenario where MRMA examines the benefits and costs of consulting
human experts on Earth to assist a Mars rover with a complex resource
management decision.
|
1303.5424 | Modeling Uncertain Temporal Evolutions in Model-Based Diagnosis | cs.AI | Although the notion of diagnostic problem has been extensively investigated
in the context of static systems, in most practical applications the behavior
of the modeled system varies significantly over time. The goal of the
paper is to propose a novel approach to the modeling of uncertainty about
temporal evolutions of time-varying systems and a characterization of
model-based temporal diagnosis. Since in most real world cases knowledge about
the temporal evolution of the system to be diagnosed is uncertain, we consider
the case when probabilistic temporal knowledge is available for each component
of the system and we choose to model it by means of Markov chains. In fact, we
aim at exploiting the statistical assumptions underlying reliability theory in
the context of the diagnosis of time-varying systems. We finally show how to
exploit Markov chain theory in order to discard, in the diagnostic process,
very unlikely diagnoses.
|
1303.5425 | Guess-And-Verify Heuristics for Reducing Uncertainties in Expert
Classification Systems | cs.AI | An expert classification system having statistical information about the
prior probabilities of the different classes should be able to use this
knowledge to reduce the amount of additional information that it must collect,
e.g., through questions, in order to make a correct classification. This paper
examines how best to use such prior information and additional
information-collection opportunities to reduce uncertainty about the class to
which a case belongs, thus minimizing the average cost or effort required to
correctly classify new cases.
|
1303.5426 | R&D Analyst: An Interactive Approach to Normative Decision System Model
Construction | cs.AI | This paper describes the architecture of R&D Analyst, a commercial
intelligent decision system for evaluating corporate research and development
projects and portfolios. In analyzing projects, R&D Analyst interactively
guides a user in constructing an influence diagram model for an individual
research project. The system's interactive approach can be clearly explained
from a blackboard system perspective. The opportunistic reasoning emphasis of
blackboard systems satisfies the flexibility requirements of model
construction, thereby suggesting that a similar architecture would be valuable
for developing normative decision systems in other domains. Current research is
aimed at extending the system architecture to explicitly consider sequential
decisions involving limited temporal, financial, and physical resources.
|
1303.5427 | Possibilistic Constraint Satisfaction Problems or "How to handle soft
constraints?" | cs.AI | Many AI synthesis problems such as planning or scheduling may be modeled as
constraint satisfaction problems (CSP). A CSP is typically defined as the
problem of finding any consistent labeling for a fixed set of variables
satisfying all given constraints between these variables. However, for many
real tasks such as job-shop scheduling, time-table scheduling, or design, not
all of these constraints have the same significance, nor must they all
necessarily be satisfied. A first distinction can be made between hard
constraints, which every solution should satisfy, and soft constraints, whose
satisfaction need not be certain. In this paper, we formalize the notion of
possibilistic
constraint satisfaction problems that allows the modeling of uncertainly
satisfied constraints. We use a possibility distribution over labelings to
represent respective possibilities of each labeling. Necessity-valued
constraints allow a simple expression of the respective certainty degrees of
each constraint. The main advantage of our approach is its integration in the
CSP technical framework. Most classical techniques, such as Backtracking (BT),
arc-consistency enforcing (AC), and Forward Checking have been extended to
handle possibilistic CSPs and are effectively implemented. The utility of our
approach
is demonstrated on a simple design problem.
|
1303.5428 | Decision Making Using Probabilistic Inference Methods | cs.AI | The analysis of decision making under uncertainty is closely related to the
analysis of probabilistic inference. Indeed, much of the research into
efficient methods for probabilistic inference in expert systems has been
motivated by the fundamental normative arguments of decision theory. In this
paper we show how the developments underlying those efficient methods can be
applied immediately to decision problems. In addition to general approaches
which need know nothing about the actual probabilistic inference method, we
suggest some simple modifications to the clustering family of algorithms in
order to efficiently incorporate decision making capabilities.
|
1303.5429 | Conditional Independence in Uncertainty Theories | cs.AI | This paper introduces the notions of independence and conditional
independence in valuation-based systems (VBS). VBS is an axiomatic framework
capable of representing many different uncertainty calculi. We define
independence and conditional independence in terms of factorization of the
joint valuation. The definitions of independence and conditional independence
in VBS generalize the corresponding definitions in probability theory. Our
definitions apply not only to probability theory, but also to Dempster-Shafer's
belief-function theory, Spohn's epistemic-belief theory, and Zadeh's
possibility theory. In fact, they apply to any uncertainty calculi that fit in
the framework of valuation-based systems.
|
1303.5430 | The Nature of the Unnormalized Beliefs Encountered in the Transferable
Belief Model | cs.AI | Within the transferable belief model, positive basic belief masses can be
allocated to the empty set, leading to unnormalized belief functions. The
nature of these unnormalized beliefs is analyzed.
|
1303.5431 | Intuitions about Ordered Beliefs Leading to Probabilistic Models | cs.AI | The general use of subjective probabilities to model belief has been
justified using many axiomatic schemes. For example, 'consistent betting
behavior' arguments are well-known. To those not already convinced of the
unique fitness and generality of probability models, such justifications are
often unconvincing. The present paper explores another rationale for
probability models. 'Qualitative probability,' which is known to provide
stringent constraints on belief representation schemes, is derived from five
simple assumptions about relationships among beliefs. While counterparts of
familiar rationality concepts such as transitivity, dominance, and consistency
are used, the betting context is avoided. The gap between qualitative
probability and probability proper can be bridged by any of several additional
assumptions. The discussion here relies on results common in the recent AI
literature, introducing a sixth simple assumption. The narrative emphasizes
models based on unique complete orderings, but the rationale extends easily to
motivate set-valued representations of partial orderings as well.
|
1303.5432 | Expressing Relational and Temporal Knowledge in Visual Probabilistic
Networks | cs.AI | Bayesian networks have been used extensively in diagnostic tasks such as
medicine, where they represent the dependency relations between a set of
symptoms and a set of diseases. A criticism of this type of knowledge
representation is that it is restricted to this kind of task, and that it
cannot cope with the knowledge required in other artificial intelligence
applications. For example, in computer vision, we require the ability to model
complex knowledge, including temporal and relational factors. In this paper we
extend Bayesian networks to model relational and temporal knowledge for
high-level vision. These extended networks have a simple structure which
permits us to propagate probability efficiently. We have applied them to the
domain of endoscopy, illustrating how the general modelling principles can be
used in specific cases.
|
1303.5433 | A Fuzzy Logic Approach to Target Tracking | cs.AI | This paper discusses a target tracking problem in which no dynamic
mathematical model is explicitly assumed. A nonlinear filter based on fuzzy
if-then rules is developed. A comparison with a Kalman filter is made, and
empirical results show that the performance of the fuzzy filter is better.
Intensive simulations suggest that theoretical justification of the empirical
results is possible.
|
1303.5434 | Towards Precision of Probabilistic Bounds Propagation | cs.AI | The DUCK-calculus presented here is a recent approach to cope with
probabilistic uncertainty in a sound and efficient way. Uncertain rules with
bounds for probabilities and explicit conditional independences can be
maintained incrementally. The basic inference mechanism relies on local bounds
propagation, implementable by deductive databases with a bottom-up fixpoint
evaluation. In situations where no precise bounds are deducible, it can be
combined with simple operations research techniques on a local scope. In
particular, we provide new precise analytical bounds for probabilistic
entailment.
|
1303.5435 | An Algorithm for Deciding if a Set of Observed Independencies Has a
Causal Explanation | cs.AI | In a previous paper [Pearl and Verma, 1991] we presented an algorithm for
extracting causal influences from independence information, where a causal
influence was defined as the existence of a directed arc in all minimal causal
models consistent with the data. In this paper we address the question of
deciding whether there exists a causal model that explains ALL the observed
dependencies and independencies. Formally, given a list M of conditional
independence statements, it is required to decide whether there exists a
directed acyclic graph (dag) D that is perfectly consistent with M, namely,
every statement in M, and no other, is reflected via d-separation in D. We
present and analyze an effective algorithm that tests for the existence of such
a dag, and produces one, if it exists.
|
1303.5436 | Generalizing Jeffrey Conditionalization | cs.AI | Jeffrey's rule has been generalized by Wagner to the case in which new
evidence bounds the possible revisions of a prior probability below by a
Dempsterian lower probability. Classical probability kinematics arises within
this generalization as the special case in which the evidentiary focal elements
of the bounding lower probability are pairwise disjoint. We discuss a twofold
extension of this generalization, first allowing the lower bound to be any
two-monotone capacity and then allowing the prior to be a lower envelope.
|
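Classical probability kinematics, the special case the abstract above generalizes, is simple enough to sketch directly from Jeffrey's rule; the prior, partition, and new weights below are hypothetical illustrations, and Wagner's lower-probability generalization itself is not shown:

```python
# Classical Jeffrey conditionalization: revise a prior P over a finite
# outcome space when uncertain evidence reassigns probabilities q_i to the
# cells E_i of a partition. Within each cell, outcomes keep their relative
# prior odds; the cell totals are rescaled to the new weights.
def jeffrey_update(prior, partition, new_weights):
    """prior: dict outcome -> probability;
    partition: list of pairwise-disjoint, exhaustive sets of outcomes;
    new_weights: new probability assigned to each partition cell."""
    posterior = {}
    for cell, q in zip(partition, new_weights):
        mass = sum(prior[x] for x in cell)   # prior probability of the cell
        for x in cell:
            posterior[x] = q * prior[x] / mass
    return posterior

# Hypothetical example: evidence shifts P(E_1) from 0.8 to 0.6.
prior = {"a": 0.5, "b": 0.3, "c": 0.2}
partition = [{"a", "b"}, {"c"}]              # E_1 = {a, b}, E_2 = {c}
posterior = jeffrey_update(prior, partition, [0.6, 0.4])
```

When the new weights equal the prior cell masses, the update leaves the prior unchanged, which is the expected degenerate case.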
1303.5437 | Interval Structure: A Framework for Representing Uncertain Information | cs.AI | In this paper, a unified framework for representing uncertain information
based on the notion of an interval structure is proposed. It is shown that the
lower and upper approximations of the rough-set model, the lower and upper
bounds of incidence calculus, and the belief and plausibility functions all
obey the axioms of an interval structure. An interval structure can be used to
synthesize the decision rules provided by the experts. An efficient algorithm
to find the desirable set of rules is developed from a set of sound and
complete inference axioms.
|
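The rough-set lower and upper approximations cited in the abstract above can be sketched from their standard definitions; this illustrates the interval pair only, not the paper's interval-structure axioms, and the partition and target set are hypothetical:

```python
# Rough-set lower and upper approximations of a target set under an
# equivalence relation, given as a partition of the universe into blocks.
def approximations(partition, target):
    """Return (lower, upper) approximations of `target`."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:    # block lies entirely inside the target
            lower |= block
        if block & target:     # block intersects the target
            upper |= block
    return lower, upper

# Hypothetical universe {1..5} partitioned into three equivalence classes.
partition = [{1, 2}, {3, 4}, {5}]
target = {1, 2, 3}
lower, upper = approximations(partition, target)
print(sorted(lower), sorted(upper))  # [1, 2] [1, 2, 3, 4]
```

The pair (lower, upper) brackets the target set, which is the interval behaviour the abstract says these approximations share with incidence-calculus bounds and belief/plausibility functions.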
1303.5438 | Exploring Localization in Bayesian Networks for Large Expert Systems | cs.AI | Current Bayesian net representations do not consider structure in the domain
and include all variables in a homogeneous network. At any time, a human
reasoner in a large domain may direct his attention to only one of a number of
natural subdomains, i.e., there is 'localization' of queries and evidence. In
such a case, propagating evidence through a homogeneous network is inefficient
since the entire network has to be updated each time. This paper presents
multiply sectioned Bayesian networks that enable a (localization preserving)
representation of natural subdomains by separate Bayesian subnets. The subnets
are transformed into a set of permanent junction trees such that evidential
reasoning takes place at only one of them at a time. Probabilities obtained are
identical to those that would be obtained from the homogeneous network. We
discuss attention shift to a different junction tree and propagation of
previously acquired evidence. Although the overall system can be large,
computational requirements are governed by the size of only one junction tree.
|
1303.5439 | A Decision Calculus for Belief Functions in Valuation-Based Systems | cs.AI | Valuation-based system (VBS) provides a general framework for representing
knowledge and drawing inferences under uncertainty. Recent studies have shown
that the semantics of VBS can represent and solve Bayesian decision problems
(Shenoy, 1991a). The purpose of this paper is to propose a decision calculus
for Dempster-Shafer (D-S) theory in the framework of VBS. The proposed calculus
uses a weighting factor whose role is similar to the probabilistic
interpretation of an assumption that disambiguates decision problems
represented with belief functions (Strat 1990). It will be shown that with the
presented calculus, if the decision problems are represented in the valuation
network properly, we can solve the problems by using the fusion algorithm
(Shenoy, 1991a). It will also be shown that the presented calculus reduces to
the calculus for Bayesian probability theory when probabilities, instead of
belief functions, are given.
|
1303.5440 | Sidestepping the Triangulation Problem in Bayesian Net Computations | cs.AI | This paper presents a new approach for computing posterior probabilities in
Bayesian nets, which sidesteps the triangulation problem. The current state of
the art is the clique tree propagation approach. When the underlying graph of a
Bayesian net is triangulated, this approach arranges its cliques into a tree
and computes posterior probabilities by appropriately passing around messages
in that tree. The computation in each clique is simply direct marginalization.
When the underlying graph is not triangulated, one has to first triangulate it
by adding edges. Referred to as the triangulation problem, the problem of
finding an optimal or even a 'good' triangulation proves to be difficult. In
this paper, we propose to first decompose a Bayesian net into smaller
components by making use of Tarjan's algorithm for decomposing an undirected
graph at all its minimal complete separators. Then, the components are arranged
into a tree and posterior probabilities are computed by appropriately passing
around messages in that tree. The computation in each component is carried out
by repeating the whole procedure from the beginning. Thus the triangulation
problem is sidestepped.
|
1303.5441 | Generalized Measures for the Evaluation of Community Detection Methods | cs.SI math.ST physics.soc-ph stat.TH | Community detection can be considered as a variant of cluster analysis
applied to complex networks. For this reason, all existing studies have been
using tools derived from this field when evaluating community detection
algorithms. However, those are not completely relevant in the context of
network analysis, because they ignore an essential part of the available
information: the network structure. Therefore, they can lead to incorrect
interpretations. In this article, we review these measures, and illustrate this
limitation. We propose a modification to solve this problem, and apply it to
the three most widespread measures: purity, Rand index and normalized mutual
information (NMI). We then perform an experimental evaluation on artificially
generated networks with realistic community structure. We assess the relevance
of the modified measures by comparison with their traditional counterparts, and
also relatively to the topological properties of the community structures. On
these data, the modified NMI turns out to provide the most relevant results.
|
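The traditional (structure-agnostic) NMI that the abstract above proposes to modify can be sketched as follows; the geometric-mean normalization is one common convention, not necessarily the one used in the paper, and the network-aware variants are not shown:

```python
# Normalized mutual information between two partitions of the same node set,
# each given as a list of cluster labels. This baseline ignores the network
# structure entirely, which is exactly the limitation the paper addresses.
from collections import Counter
from math import log, sqrt

def nmi(labels_a, labels_b):
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = sum(c / n * log(n * c / (ca[i] * cb[j]))
             for (i, j), c in joint.items())
    h = lambda counts: -sum(c / n * log(c / n) for c in counts.values())
    ha, hb = h(ca), h(cb)
    if ha == 0 or hb == 0:           # degenerate single-cluster partition
        return 1.0 if labels_a == labels_b else 0.0
    return mi / sqrt(ha * hb)        # geometric-mean normalization

# Identical up to relabeling, so the score should be (near) 1.
print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))
```

A score of 1 for partitions that agree up to relabeling, and 0 against an uninformative partition, are the usual sanity checks for an NMI implementation.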
1303.5442 | Fractional Order Hybrid Systems and Their Stability | cs.SY nlin.AO | This paper deals with hybrid systems (HS) with fractional order dynamics and
their stability. The stability of two particular types of fractional order
hybrid systems (FOHS), i.e., switching and reset control systems, is studied.
The common Lyapunov method, as well as its frequency-domain equivalent, is
generalized for the former systems; for the latter, the H$_{\beta}$-condition
is used -- the frequency-domain equivalent of a Lyapunov-like method for reset
control systems. The applicability and efficiency of the proposed methods are
shown by some illustrative examples.
|
1303.5452 | Fast Computation of the Series Impedance of Power Cables with Inclusion
of Skin and Proximity Effects | cs.CE | We present an efficient numerical technique for calculating the series
impedance matrix of systems with round conductors. The method is based on a
surface admittance operator in combination with the method of moments and it
accurately predicts both skin and proximity effects. Application to a
three-phase armored cable with wire screens demonstrates a speed-up by a factor
of about 100 compared to a finite elements computation. The inclusion of
proximity effect in combination with the high efficiency makes the new method
very attractive for cable modeling within EMTP-type simulation tools.
Currently, these tools can only take skin effect into account.
|
1303.5457 | Explicit solution of a tropical optimization problem with application to
project scheduling | math.OC cs.SY | A new multidimensional optimization problem is considered in the tropical
mathematics setting. The problem is to minimize a nonlinear function defined on
a finite-dimensional semimodule over an idempotent semifield and given by a
conjugate transposition operator. A special case of the problem, which arises
in just-in-time scheduling, serves as a motivation for the study. To solve the
general problem, we derive a sharp lower bound for the objective function and
then find vectors that yield the bound. Under general conditions, an explicit
solution is obtained in a compact vector form. This result is applied to
provide new solutions for scheduling problems under consideration. To
illustrate, numerical examples are also presented.
|
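The idempotent-semifield setting of the abstract above can be illustrated with the max-plus algebra, where addition is `max` and multiplication is `+`; the tiny matrix below is a hypothetical example, not taken from the paper:

```python
# Max-plus (tropical) matrix-vector product: y_i = max_j (A[i][j] + x[j]).
# This is the basic operation behind tropical optimization problems such as
# the just-in-time scheduling application mentioned in the abstract.
NEG_INF = float("-inf")  # the tropical zero (identity element for max)

def tropical_matvec(A, x):
    """Tropical analogue of y = A x over the max-plus semifield."""
    return [max(a + v for a, v in zip(row, x)) for row in A]

# Scheduling reading (hypothetical): A[i][j] is a minimum delay from the
# start of task j to the start of task i, x[j] is the start time of task j,
# and y[i] is the earliest induced start time of task i.
A = [[0, 3],
     [2, 0]]
x = [1, 4]
print(tropical_matvec(A, x))  # [7, 4]: max(0+1, 3+4), max(2+1, 0+4)
```

Because `max` is idempotent, repeated application converges in ways ordinary linear algebra does not, which is what makes explicit closed-form solutions like the paper's possible.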
1303.5464 | Connections between the Generalized Marcum Q-Function and a class of
Hypergeometric Functions | cs.IT math.IT | This paper presents a new connection between the generalized Marcum-Q
function and the confluent hypergeometric function of two variables, phi3. This
result is then applied to the closed-form characterization of the bivariate
Nakagami-m distribution and of the distribution of the minimum eigenvalue of
correlated non-central Wishart matrices, both important in communication
theory. New expressions for the corresponding cumulative distributions are
obtained and a number of communication-theoretic problems involving them are
pointed out.
|
1303.5492 | Sample Distortion for Compressed Imaging | cs.CV cs.IT math.IT | We propose the notion of a sample distortion (SD) function for independent
and identically distributed (i.i.d.) compressive distributions to fundamentally
quantify the achievable reconstruction performance of compressed sensing for
certain encoder-decoder pairs at a given sampling ratio. Two lower bounds on
the achievable performance and the intrinsic convexity property are derived. A
zeroing procedure is then introduced to improve non-convex SD functions. The SD
framework is then applied to analyse compressed imaging with a multi-resolution
statistical image model using both the generalized Gaussian distribution and
the two-state Gaussian mixture distribution. We subsequently focus on the
Gaussian encoder-Bayesian optimal approximate message passing (AMP) decoder
pair, whose theoretical SD function is provided by the rigorous analysis of the
AMP algorithm. Given the image statistics, analytic bandwise sample allocation
for the bandwise-independent model is derived as a reverse water-filling scheme.
Som and Schniter's turbo message passing approach is further deployed to
integrate the bandwise sampling with the exploitation of the hidden Markov tree
structure of wavelet coefficients. Natural image simulations confirm that with
oracle image statistics, the SD function associated with the optimized sample
allocation can accurately predict the possible compressed sensing gains.
Finally, a general sample allocation profile based on average image statistics
not only illustrates preferable performance but also makes the scheme
practical.
|
1303.5508 | Sparse Projections of Medical Images onto Manifolds | cs.CV cs.LG stat.ML | Manifold learning has been successfully applied to a variety of medical
imaging problems. Its use in real-time applications requires fast projection
onto the low-dimensional space. To this end, out-of-sample extensions are
applied by constructing an interpolation function that maps from the input
space to the low-dimensional manifold. Commonly used approaches such as the
Nystr\"{o}m extension and kernel ridge regression require using all training
points. We propose an interpolation function that only depends on a small
subset of the input training data. Consequently, in the testing phase each new
point only needs to be compared against a small number of input training data
in order to project the point onto the low-dimensional space. We interpret our
method as an out-of-sample extension that approximates kernel ridge regression.
Our method involves solving a simple convex optimization problem and has the
attractive property of guaranteeing an upper bound on the approximation error,
which is crucial for medical applications. Tuning this error bound controls the
sparsity of the resulting interpolation function. We illustrate our method in
two clinical applications that require fast mapping of input images onto a
low-dimensional space.
|
1303.5513 | Parameters Optimization for Improving ASR Performance in Adverse Real
World Noisy Environmental Conditions | cs.CL cs.SD | Existing research shows that many techniques and methodologies are available
for every step of an Automatic Speech Recognition (ASR) system, but
performance (minimization of the Word Error Rate (WER) and maximization of the
Word Accuracy Rate (WAR)) does not depend on the applied technique alone. The
research indicates that performance depends mainly on the category of noise,
the noise level, and the sizes of the window, frame, and frame overlap
considered in existing methods. The main aim of the work presented in this
paper is to vary parameters such as window size, frame size, and frame overlap
percentage in order to observe the performance of algorithms for various noise
categories at different levels, and to train the system over all parameter
sizes and categories of real-world noise to improve the performance of the
speech recognition system. This paper presents the results of Signal-to-Noise
Ratio (SNR) and accuracy tests under varying parameter sizes. We observe that
it is very hard to evaluate the test results and to choose parameter sizes
that optimize ASR performance. Hence, this study further suggests feasible and
optimal parameter sizes, obtained with a Fuzzy Inference System (FIS), for
enhancing accuracy in adverse real-world noisy environmental conditions. This
work will be helpful for discriminative training of ubiquitous ASR systems for
better Human-Computer Interaction (HCI).
|
1303.5515 | Adverse Conditions and ASR Techniques for Robust Speech User Interface | cs.CL cs.SD | The main motivation for Automatic Speech Recognition (ASR) is efficient
interfaces to computers, and for the interfaces to be natural and truly useful,
it should provide coverage for a large group of users. The purpose of these
tasks is to further improve man-machine communication. ASR systems exhibit
unacceptable degradations in performance when the acoustical environments used
for training and testing the system are not the same. The goal of this research
is to increase the robustness of the speech recognition systems with respect to
changes in the environment. A system can be labeled as environment-independent
if the recognition accuracy for a new environment is the same or higher than
that obtained when the system is retrained for that environment. Attaining such
performance is a long-standing goal of researchers. This paper elaborates some
of the difficulties with Automatic Speech Recognition (ASR). These difficulties
are classified into speaker characteristics and environmental conditions, and
we suggest techniques to compensate for variations in the speech signal. This
paper focuses on robustness with respect to speaker variations and changes in
the acoustical environment. We discuss several external factors that change the
environment, as well as physiological differences, that affect the performance
of a speech recognition system, and then describe techniques that help in
designing a robust ASR system.
|
1303.5526 | On active information storage in input-driven systems | cs.IT math.IT | Information theory and the framework of information dynamics have been used
to provide tools to characterise complex systems. In particular, we are
interested in quantifying information storage, information modification and
information transfer as characteristic elements of computation. Although these
quantities are defined for autonomous dynamical systems, information dynamics
can also help to get a "holistic" understanding of input-driven systems such
as neural networks. In this case, we do not distinguish between the system
itself and the effects the input has on the system. This may be desired in
some cases, but it will change the questions we are able to answer, and is
consequently an important consideration, for example, for biological systems
which perform non-trivial computations and also retain a short-term memory of
past inputs. Many other real world systems like cortical networks are also
heavily input-driven, and application of tools designed for autonomous dynamic
systems may not necessarily lead to intuitively interpretable results.
The aim of our work is to extend the measurements used in the information
dynamics framework for input-driven systems. Using the proposed input-corrected
information storage we hope to better quantify system behaviour, which will be
important for heavily input-driven systems like artificial neural networks to
abstract from specific benchmarks, or for brain networks, where intervention is
difficult, individual components cannot be tested in isolation or with
arbitrary input data.
|
1303.5596 | Do scientists trace hot topics? | physics.soc-ph cs.DL cs.SI | Do scientists follow hot topics in their scientific investigations? In this
paper, by analyzing papers published in the American Physical
Society (APS) Physical Review journals, it is found that papers are more likely
to be attracted by hot fields, where the hotness of a field is measured by the
number of papers belonging to the field. This indicates that scientists
generally do follow hot topics. However, there are qualitative differences
among scientists from various countries, and among research works with
different numbers of authors, affiliations, and references. These observations
could be valuable for policy makers
when deciding research funding and also for individual researchers when
searching for scientific projects.
|
1303.5613 | Network Detection Theory and Performance | cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | Network detection is an important capability in many areas of applied
research in which data can be represented as a graph of entities and
relationships. Oftentimes the object of interest is a relatively small subgraph
in an enormous, potentially uninteresting background. This aspect characterizes
network detection as a "big data" problem. Graph partitioning and network
discovery have been major research areas over the last ten years, driven by
interest in internet search, cyber security, social networks, and criminal or
terrorist activities. The specific problem of network discovery is addressed as
a special case of graph partitioning in which membership in a small subgraph of
interest must be determined. Algebraic graph theory is used as the basis to
analyze and compare different network detection methods. A new Bayesian network
detection framework is introduced that partitions the graph based on prior
information and direct observations. The new approach, called space-time threat
propagation, is proved to maximize the probability of detection and is
therefore optimum in the Neyman-Pearson sense. This optimality criterion is
compared to spectral community detection approaches which divide the global
graph into subsets or communities with optimal connectivity properties. We also
explore a new generative stochastic model for covert networks and use receiver
operating characteristics to analyze the detection performance of both classes
of optimal detection techniques.
|
1303.5636 | Codes and caps from orthogonal Grassmannians | math.AG cs.IT math.CO math.IT | In this paper we investigate linear error correcting codes and projective
caps related to the Grassmann embedding $\varepsilon_k^{gr}$ of an orthogonal
Grassmannian $\Delta_k$. In particular, we determine some of the parameters of
the codes arising from the projective system determined by
$\varepsilon_k^{gr}(\Delta_k)$. We also study special sets of points of
$\Delta_k$ which are met by any line of $\Delta_k$ in at most 2 points and we
show that their image under the Grassmann embedding $\varepsilon_k^{gr}$ is a
projective cap.
|
1303.5655 | Can we allow linear dependencies in the dictionary in the sparse
synthesis framework? | cs.IT math.IT | Signal recovery from a given set of linear measurements using a sparsity
prior has been a major subject of research in recent years. In this model, the
signal is assumed to have a sparse representation under a given dictionary.
Most of the work dealing with this subject has focused on the reconstruction of
the signal's representation as the means for recovering the signal itself. This
approach forced the dictionary to be of low coherence and with no linear
dependencies between its columns. Recently, a series of contributions that
focus on signal recovery using the analysis model find that linear dependencies
in the analysis dictionary are in fact permitted and beneficial. In this paper
we show theoretically that the same holds also for signal recovery in the
synthesis case for the l0-synthesis minimization problem. In addition, we
demonstrate empirically the relevance of our conclusions for recovering the
signal using an l1-relaxation.
|
1303.5659 | Viterbi training in PRISM | cs.AI | VT (Viterbi training), or hard EM, is an efficient way of parameter learning
for probabilistic models with hidden variables. Given an observation $y$, it
searches for a state of hidden variables $x$ that maximizes $p(x,y \mid
\theta)$ by coordinate ascent on parameters $\theta$ and $x$. In this paper we
introduce VT to PRISM, a logic-based probabilistic modeling system for
generative models. VT improves PRISM in three ways. First, VT in PRISM
converges faster than EM in PRISM due to VT's termination condition. Second,
parameters learned by VT often show good prediction performance compared to
those learned by EM. We conducted two parsing experiments with probabilistic
grammars while learning parameters by a variety of inference methods, i.e.\ VT,
EM, MAP and VB. The result is that VT achieved the best parsing accuracy in
both experiments. We also conducted a similar experiment on classification
tasks where, unlike probabilistic grammars, a hidden variable is not a
prediction target. We found that in such a case VT does not necessarily yield
superior performance. Third, since VT always deals with a single probability of
a single explanation, the Viterbi explanation, the exclusiveness condition
imposed on PRISM programs is no longer required if we learn parameters by VT.
Last but not least, as VT in PRISM is general and applicable to any PRISM
program, it largely reduces the need for the user to develop a specific VT
algorithm for a specific model. Furthermore, since VT in PRISM can be used just
by setting a PRISM flag appropriately, it makes VT easily accessible to
(probabilistic) logic programmers. To appear in Theory and Practice of Logic
Programming (TPLP).
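To make the hard-EM idea concrete outside PRISM, here is a minimal sketch of Viterbi training for a hypothetical two-coin mixture; the E-step commits to the single most likely hidden state (the "Viterbi explanation") instead of averaging over all of them. All names and data here are illustrative, not PRISM code:

```python
import numpy as np

def viterbi_train(flips, n_iters=50):
    """Hard EM (Viterbi training) for a two-coin mixture: each row of `flips`
    is a sequence of 0/1 tosses from one of two coins with unknown biases.
    E-step picks the single most likely coin per row; M-step re-estimates
    the biases from that hard assignment."""
    X = np.asarray(flips, dtype=float)
    theta = np.array([0.3, 0.7])              # initial coin biases (guess)
    for _ in range(n_iters):
        heads = X.sum(axis=1)
        n = X.shape[1]
        # log-likelihood of each row under each coin
        ll = (heads[:, None] * np.log(theta)
              + (n - heads)[:, None] * np.log(1 - theta))
        z = ll.argmax(axis=1)                 # hard assignment (E-step)
        for k in (0, 1):                      # M-step on the assigned rows
            if np.any(z == k):
                theta[k] = X[z == k].mean()
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return theta
```

Because the assignments are discrete, the procedure reaches a fixed point quickly, which mirrors the fast-termination behaviour of VT noted in the abstract.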
|
1303.5673 | Genetic Algorithm with Ensemble Learning for Detecting Community
Structure in Complex Networks | cs.SI physics.soc-ph | Community detection in complex networks is a topic of considerable recent
interest within the scientific community. To address the difficulty of
applying genetic algorithms to community detection, we propose a genetic
algorithm with ensemble learning (GAEL) for detecting community structure in
complex networks. GAEL replaces the traditional crossover operator with a
multi-individual crossover operator based on ensemble learning. GAEL thereby
avoids the shortcomings of the traditional crossover operator, which can only
mix string blocks of different individuals but cannot recombine their
clustering contexts into new, better ones. In addition, the mutation operator
uses a local search strategy that places a mutated node into the community
containing most of its neighbors. Finally, a Markov random walk based method is
used to initialize the population, providing a population of accurate and
diverse clustering solutions. These diverse and accurate individuals are well
suited to the ensemble learning based multi-individual crossover operator. The
proposed GAEL is tested on both computer-generated and
real-world networks, and compared with current representative algorithms for
community detection in complex networks. Experimental results demonstrate that
GAEL is highly effective at discovering community structure.
|
1303.5675 | Markov random walk under constraint for discovering overlapping
communities in complex networks | cs.SI cond-mat.stat-mech physics.soc-ph | Detection of overlapping communities in complex networks has motivated recent
research in the relevant fields. Addressing this problem, we propose a Markov
dynamics based algorithm, called UEOC, which means, 'unfold and extract
overlapping communities'. In UEOC, when identifying each natural community that
overlaps, a Markov random walk method combined with a constraint strategy,
which is based on the corresponding annealed network (degree conserving random
network), is performed to unfold the community. Then, a cutoff criterion with
the aid of a local community function, called conductance, which can be thought
of as the ratio between the number of edges inside the community and those
leaving it, is presented to extract this emerged community from the entire
network. The UEOC algorithm depends on only one parameter whose value can be
easily set, and it requires no prior knowledge on the hidden community
structures. The proposed UEOC has been evaluated both on synthetic benchmarks
and on some real-world networks, and was compared with a set of competing
algorithms. Experimental results show that UEOC is highly effective and
efficient at discovering overlapping communities.
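A minimal version of the conductance score used as the cutoff criterion might look like the following (one common formulation, cut size over community volume; the abstract's description is informal, and UEOC's exact variant may differ):

```python
def conductance(adj, community):
    """Conductance of a candidate community in an undirected graph given as
    an adjacency dict: edges leaving the community divided by its volume
    (volume = twice the internal edges plus the leaving edges)."""
    comm = set(community)
    inside2 = leaving = 0          # inside2 counts each internal edge twice
    for u in comm:
        for v in adj.get(u, ()):
            if v in comm:
                inside2 += 1
            else:
                leaving += 1
    vol = inside2 + leaving
    return leaving / vol if vol else 0.0
```

A well-separated community has few leaving edges relative to its volume, so lower scores indicate better candidates for extraction.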
|
1303.5678 | Interference alignment for the MIMO interference channel | cs.IT math.IT | We study vector space interference alignment for the MIMO interference
channel with no time or frequency diversity, and no symbol extensions. We prove
both necessary and sufficient conditions for alignment. In particular, we
characterize the feasibility of alignment for the symmetric three-user channel
where all users transmit along d dimensions, all transmitters have M antennas
and all receivers have N antennas, as well as feasibility of alignment for the
fully symmetric (M=N) channel with an arbitrary number of users.
An implication of our results is that the total degrees of freedom available
in a K-user interference channel, using only spatial diversity from the
multiple antennas, is at most 2. This is in sharp contrast to the K/2 degrees
of freedom shown to be possible by Cadambe and Jafar with arbitrarily large
time or frequency diversity.
Moving beyond the question of feasibility, we additionally discuss
computation of the number of solutions using Schubert calculus in cases where
there are a finite number of solutions.
|
1303.5685 | Sparse Factor Analysis for Learning and Content Analytics | stat.ML cs.LG math.OC stat.AP | We develop a new model and algorithms for machine learning-based learning
analytics, which estimate a learner's knowledge of the concepts underlying a
domain, and content analytics, which estimate the relationships among a
collection of questions and those concepts. Our model represents the
probability that a learner provides the correct response to a question in terms
of three factors: their understanding of a set of underlying concepts, the
concepts involved in each question, and each question's intrinsic difficulty.
We estimate these factors given the graded responses to a collection of
questions. The underlying estimation problem is ill-posed in general,
especially when only a subset of the questions are answered. The key
observation that enables a well-posed solution is the fact that typical
educational domains of interest involve only a small number of key concepts.
Leveraging this observation, we develop both a bi-convex maximum-likelihood and
a Bayesian solution to the resulting SPARse Factor Analysis (SPARFA) problem.
We also incorporate user-defined tags on questions to facilitate the
interpretability of the estimated factors. Experiments with synthetic and
real-world data demonstrate the efficacy of our approach. Finally, we make a
connection between SPARFA and noisy, binary-valued (1-bit) dictionary learning
that is of independent interest.
|
1303.5691 | Cortical Surface Co-Registration based on MRI Images and Photos | cs.CV | Brain shift, i.e. the change in configuration of the brain after opening the
dura mater, is a key problem in neuronavigation. We present an approach to
co-register intra-operative microscope images with pre-operative MRI to adapt
and optimize intra-operative neuronavigation. The tools are a robust
classification of sulci on MRI-extracted cortical surfaces, guided user marking
of the most prominent sulci on a microscope image, and the actual variational
registration method with a fidelity energy for 3D deformations of the cortical
surface combined with a higher order, linear elastica type prior energy.
Furthermore, the actual registration is validated on an artificial testbed with
known ground truth deformation and on real data of a neuro clinical patient.
|
1303.5694 | Singular value correlation functions for products of Wishart random
matrices | math-ph cond-mat.stat-mech cs.IT math.IT math.MP | Consider the product of $M$ quadratic random matrices with complex elements
and no further symmetry, where all matrix elements of each factor have a
Gaussian distribution. This generalises the classical Wishart-Laguerre Gaussian
Unitary Ensemble with M=1. In this paper we first compute the joint probability
distribution for the singular values of the product matrix when the matrix size
$N$ and the number $M$ are fixed but arbitrary. This leads to a determinantal
point process which can be realised in two different ways. First, it can be
written as a one-matrix singular value model with a non-standard Jacobian, or
second, for $M\geq2$, as a two-matrix singular value model with a set of
auxiliary singular values and a weight proportional to the Meijer $G$-function.
For both formulations we determine all singular value correlation functions in
terms of the kernels of biorthogonal polynomials which we explicitly construct.
They are given in terms of hypergeometric and Meijer $G$-functions,
generalising the Laguerre polynomials. Our investigation was motivated by
applications in telecommunications to multi-layered scattering MIMO channels. We
present the ergodic mutual information for finite-$N$ for such a channel model
with $M-1$ layers of scatterers as an example.
|
1303.5698 | When Cellular Meets WiFi in Wireless Small Cell Networks | cs.NI cs.IT math.IT | The deployment of small cell base stations (SCBSs) overlaid on existing
macro-cellular systems is seen as a key solution for offloading traffic,
optimizing coverage, and boosting the capacity of future cellular wireless
systems. The next-generation of SCBSs is envisioned to be multi-mode, i.e.,
capable of transmitting simultaneously on both licensed and unlicensed bands.
This constitutes a cost-effective integration of both WiFi and cellular radio
access technologies (RATs) that can efficiently cope with peak wireless data
traffic and heterogeneous quality-of-service requirements. To leverage the
advantage of such multi-mode SCBSs, we discuss the novel proposed paradigm of
cross-system learning by means of which SCBSs self-organize and autonomously
steer their traffic flows across different RATs. Cross-system learning allows
the SCBSs to leverage the advantage of both the WiFi and cellular worlds. For
example, the SCBSs can offload delay-tolerant data traffic to WiFi, while
simultaneously learning the probability distribution function of their
transmission strategy over the licensed cellular band. This article will first
introduce the basic building blocks of cross-system learning and then provide
preliminary performance evaluation in a Long-Term Evolution (LTE) simulator
overlaid with WiFi hotspots. Remarkably, it is shown that the proposed
cross-system learning approach significantly outperforms a number of benchmark
traffic steering policies.
|
1303.5703 | ARCO1: An Application of Belief Networks to the Oil Market | cs.AI q-fin.GN | Belief networks are a new, potentially important, class of knowledge-based
models. ARCO1, currently under development at the Atlantic Richfield Company
(ARCO) and the University of Southern California (USC), is the most advanced
reported implementation of these models in a financial forecasting setting.
ARCO1's underlying belief network models the variables believed to have an
impact on the crude oil market. A pictorial market model, developed on a Mac
II, facilitates consensus among the members of the forecasting team. The system
forecasts crude oil prices via Monte Carlo analyses of the network. Several
different models of the oil market have been developed; the system's ability to
be updated quickly highlights its flexibility.
|
1303.5704 | "Conditional Inter-Causally Independent" Node Distributions, a Property
of "Noisy-Or" Models | cs.AI | This paper examines the interdependence generated between two parent nodes
with a common instantiated child node, such as two hypotheses sharing common
evidence. The relation so generated has been termed "intercausal." It is shown
by construction that inter-causal independence is possible for binary
distributions at one state of evidence. For such "CICI" distributions, the two
measures of inter-causal effect, "multiplicative synergy" and "additive
synergy" are equal. The well-known "noisy-or" model is an example of such a
distribution. This introduces novel semantics for the noisy-or, as a model of
the degree of conflict among competing hypotheses of a common observation.
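The standard noisy-or combination behind this semantics is simple to state in code (a textbook sketch of the general model, not taken from the paper):

```python
def noisy_or(cause_probs, leak=0.0):
    """P(effect | active causes) under the noisy-or model: the effect fails
    only if every active cause independently fails to produce it (and no
    leak/background cause fires). cause_probs[i] is the probability that
    cause i alone produces the effect."""
    p_fail = 1.0 - leak
    for p in cause_probs:
        p_fail *= 1.0 - p
    return 1.0 - p_fail
```

For example, two independent causes that each produce the effect with probability 0.5 jointly yield probability 0.75, strictly less than the sum of their individual contributions.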
|
1303.5705 | Combining Multiple-Valued Logics in Modular Expert Systems | cs.AI | The way experts manage uncertainty usually changes depending on the task they
are performing. This fact has led us to consider the problem of communicating
modules (task implementations) in a large and structured knowledge based system
when modules have different uncertainty calculi. In this paper, the analysis of
the communication problem is made assuming that (i) each uncertainty calculus
is an inference mechanism defining an entailment relation, and therefore the
communication is considered to be inference-preserving, and (ii) we restrict
ourselves to the case in which the different uncertainty calculi are given by a
class of truth functional Multiple-valued Logics.
|
1303.5706 | Constraint Propagation with Imprecise Conditional Probabilities | cs.AI | An approach to reasoning with default rules where the proportion of
exceptions, or more generally the probability of encountering an exception, can
be at least roughly assessed is presented. It is based on local uncertainty
propagation rules which provide the best bracketing of a conditional
probability of interest from the knowledge of the bracketing of some other
conditional probabilities. A procedure that uses two such propagation rules
repeatedly is proposed in order to estimate any simple conditional probability
of interest from the available knowledge. The iterative procedure, that does
not require independence assumptions, looks promising with respect to the
linear programming method. Improved bounds for conditional probabilities are
given when independence assumptions hold.
|
1303.5707 | Bayesian Networks Applied to Therapy Monitoring | cs.AI stat.AP | We propose a general Bayesian network model for application in a wide class
of problems of therapy monitoring. We discuss the use of stochastic simulation
as a computational approach to inference on the proposed class of models. As an
illustration we present an application to the monitoring of cytotoxic
chemotherapy in breast cancer.
|
1303.5708 | Some Properties of Plausible Reasoning | cs.AI | This paper presents a plausible reasoning system to illustrate some broad
issues in knowledge representation: dualities between different reasoning
forms, the difficulty of unifying complementary reasoning styles, and the
approximate nature of plausible reasoning. These issues have a common
underlying theme: there should be an underlying belief calculus of which the
many different reasoning forms are special cases, sometimes approximate. The
system presented allows reasoning about defaults, likelihood, necessity and
possibility in a manner similar to the earlier work of Adams. The system is
based on the belief calculus of subjective Bayesian probability which itself is
based on a few simple assumptions about how belief should be manipulated.
Approximations, semantics, consistency and consequence results are presented
for the system. While this puts these often discussed plausible reasoning forms
on a probabilistic footing, useful application to practical problems remains an
issue.
|
1303.5709 | Theory Refinement on Bayesian Networks | cs.AI | Theory refinement is the task of updating a domain theory in the light of new
cases, to be done automatically or with some expert assistance. The problem of
theory refinement under uncertainty is reviewed here in the context of Bayesian
statistics, a theory of belief revision. The problem is reduced to an
incremental learning task as follows: the learning system is initially primed
with a partial theory supplied by a domain expert, and thereafter maintains its
own internal representation of alternative theories, which can be interrogated
by the domain expert and incrementally refined from data. Algorithms for
refinement of Bayesian networks are presented to
illustrate what is meant by "partial theory", "alternative theory
representation", etc. The algorithms are an incremental variant of batch
learning algorithms from the literature, so they work well in both batch and
incremental mode.
|
1303.5710 | Combination of Upper and Lower Probabilities | cs.AI | In this paper, we consider several types of information and methods of
combination associated with incomplete probabilistic systems. We discriminate
between 'a priori' and evidential information. The former is a description of
the whole population; the latter is a restriction based on observations for a
particular case. Then, we propose different combination methods for each one
of them. We also consider conditioning as the heterogeneous combination of 'a
priori' and evidential information. The evidential information is represented
as a convex set of likelihood functions. These will have an associated
possibility distribution with behavior according to classical Possibility
Theory.
|
1303.5711 | A Probabilistic Analysis of Marker-Passing Techniques for
Plan-Recognition | cs.AI | Useless paths are a chronic problem for marker-passing techniques. We use a
probabilistic analysis to justify a method for quickly identifying and
rejecting useless paths. Using the same analysis, we identify key conditions
and assumptions necessary for marker-passing to perform well.
|
1303.5712 | Symbolic Probabilistic Inference with Continuous Variables | cs.AI | Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an
algorithm for resolving general queries in Bayesian networks. SPI applies the
concept of dependency directed backward search to probabilistic inference, and
is incremental with respect to both queries and observations. Unlike
traditional Bayesian network inference algorithms, the SPI algorithm is
goal-directed, performing only those calculations that are required to respond to
queries. Research to date on SPI applies to Bayesian networks with
discrete-valued variables and does not address variables with continuous
values. In this paper, we extend the SPI algorithm to handle Bayesian networks
made up of continuous variables where the relationships between the variables
are restricted to be 'linear Gaussian'. We call this variation of the SPI
algorithm SPI Continuous (SPIC). SPIC modifies the three basic SPI operations:
multiplication, summation, and substitution. However, SPIC retains the
framework of the SPI algorithm, namely building the search tree and recursive
query mechanism and therefore retains the goal-directed and incrementality
features of SPI.
|
1303.5713 | Symbolic Probabilistic Inference with Evidence Potential | cs.AI | Recent research on the Symbolic Probabilistic Inference (SPI) algorithm[2]
has focused attention on the importance of resolving general queries in
Bayesian networks. SPI applies the concept of dependency-directed backward
search to probabilistic inference, and is incremental with respect to both
queries and observations. In response to this research we have extended the
evidence potential algorithm [3] with the same features. We call the extension
symbolic evidence potential inference (SEPI). SEPI like SPI can handle generic
queries and is incremental with respect to queries and observations. While in
SPI, operations are done on a search tree constructed from the nodes of the
original network, in SEPI, a clique-tree structure obtained from the evidence
potential algorithm [3] is the basic framework for recursive query processing.
In this paper, we describe the systematic query and caching procedure of SEPI.
SEPI begins by finding a clique tree from a Bayesian network, the standard
procedure of the evidence potential algorithm. With the clique tree, various
probability distributions are computed and stored in each clique. This is the
'pre-processing' step of SEPI. Once this step is done, the query can then be
computed. To process a query, a recursive process similar to the SPI algorithm
is used. The queries are directed to the root clique and decomposed into
queries for the clique's subtrees until a particular query can be answered at
the clique at which it is directed. The algorithm and the computation are
simple. The SEPI algorithm will be presented in this paper along with several
examples.
|
1303.5714 | A Bayesian Method for Constructing Bayesian Belief Networks from
Databases | cs.AI | This paper presents a Bayesian method for constructing Bayesian belief
networks from a database of cases. Potential applications include
computer-assisted hypothesis testing, automated scientific discovery, and
automated construction of probabilistic expert systems. Results are presented
of a preliminary evaluation of an algorithm for constructing a belief network
from a database of cases. We relate the methods in this paper to previous work,
and we discuss open problems.
|
1303.5715 | Local Expression Languages for Probabilistic Dependence: a Preliminary
Report | cs.AI | We present a generalization of the local expression language used in the
Symbolic Probabilistic Inference (SPI) approach to inference in belief nets
[1l, [8]. The local expression language in SPI is the language in which the
dependence of a node on its antecedents is described. The original language
represented the dependence as a single monolithic conditional probability
distribution. The extended language provides a set of operators (*, +, and -)
which can be used to specify methods for combining partial conditional
distributions. As one instance of the utility of this extension, we show how
this extended language can be used to capture the semantics, representational
advantages, and inferential complexity advantages of the "noisy or"
relationship.
|
1303.5716 | Symbolic Decision Theory and Autonomous Systems | cs.AI | The ability to reason under uncertainty and with incomplete information is a
fundamental requirement of decision support technology. In this paper we argue
that the concentration on theoretical techniques for the evaluation and
selection of decision options has distracted attention from many of the wider
issues in decision making. Although numerical methods of reasoning under
uncertainty have strong theoretical foundations, they are representationally
weak and only deal with a small part of the decision process. Knowledge based
systems, on the other hand, offer greater flexibility but have not been
accompanied by a clear decision theory. We describe here work which is under
way towards providing a theoretical framework for symbolic decision procedures.
A central proposal is an extended form of inference which we call
argumentation: reasoning for and against decision options from generalised
domain theories. The approach has been successfully used in several decision
support applications, but it is argued that a comprehensive decision theory
must cover autonomous decision making, where the agent can formulate questions
as well as take decisions. A major theoretical challenge for this theory is to
capture the idea of reflection to permit decision agents to reason about their
goals, what they believe and why, and what they need to know or do in order to
achieve their goals.
|
1303.5717 | A Reason Maintenance System Dealing with Vague Data | cs.AI | A reason maintenance system which extends an ATMS through Mukaidono's fuzzy
logic is described. It supports a problem solver in situations affected by
incomplete information and vague data, by allowing nonmonotonic inferences and
the revision of previous conclusions when contradictions are detected.
|
1303.5718 | Advances in Probabilistic Reasoning | cs.AI | This paper discusses multiple Bayesian network representation paradigms for
encoding asymmetric independence assertions. We offer three contributions: (1)
an inference mechanism that makes explicit use of asymmetric independence to
speed up computations, (2) a simplified definition of similarity networks and
extensions of their theory, and (3) a generalized representation scheme that
encodes more types of asymmetric independence assertions than do similarity
networks.
|
1303.5719 | Probability Estimation in Face of Irrelevant Information | cs.AI | In this paper, we consider one aspect of the problem of applying decision
theory to the design of agents that learn how to make decisions under
uncertainty. This aspect concerns how an agent can estimate probabilities for
the possible states of the world, given that it only makes limited observations
before committing to a decision. We show that the naive application of
statistical tools can be improved upon if the agent can determine which of his
observations are truly relevant to the estimation problem at hand. We give a
framework in which such determinations can be made, and define an estimation
procedure to use them. Our framework also suggests several extensions, which
show how additional knowledge can be used to improve the estimation procedure
still further.
|
1303.5720 | An Approximate Nonmyopic Computation for Value of Information | cs.AI | Value-of-information analyses provide a straightforward means for selecting
the best next observation to make, and for determining whether it is better to
gather additional information or to act immediately. Determining the next best
test to perform, given a state of uncertainty about the world, requires a
consideration of the value of making all possible sequences of observations. In
practice, decision analysts and expert-system designers have avoided the
intractability of exact computation of the value of information by relying on a
myopic approximation. Myopic analyses are based on the assumption that only one
additional test will be performed, even when there is an opportunity to make a
large number of observations. We present a nonmyopic approximation for value of
information that bypasses the traditional myopic analyses by exploiting the
statistical properties of large samples.
|
1303.5721 | Search-based Methods to Bound Diagnostic Probabilities in Very Large
Belief Nets | cs.AI | Since exact probabilistic inference is intractable in general for large
multiply connected belief nets, approximate methods are required. A promising
approach is to use heuristic search among hypotheses (instantiations of the
network) to find the most probable ones, as in the TopN algorithm. Search is
based on the relative probabilities of hypotheses which are efficient to
compute. Given upper and lower bounds on the relative probability of partial
hypotheses, it is possible to obtain bounds on the absolute probabilities of
hypotheses. Best-first search aimed at reducing the maximum error progressively
narrows the bounds as more hypotheses are examined. Here, qualitative
probabilistic analysis is employed to obtain bounds on the relative probability
of partial hypotheses for the BN20 class of networks and a
generalization replacing the noisy OR assumption by negative synergy. The
approach is illustrated by application to a very large belief network, QMR-BN,
which is a reformulation of the Internist-1 system for diagnosis in internal
medicine.
|
1303.5722 | Time-Dependent Utility and Action Under Uncertainty | cs.AI | We discuss representing and reasoning with knowledge about the time-dependent
utility of an agent's actions. Time-dependent utility plays a crucial role in
the interaction between computation and action under bounded resources. We
present a semantics for time-dependent utility and describe the use of
time-dependent information in decision contexts. We illustrate our discussion
with examples of time-pressured reasoning in Protos, a system constructed to
explore the ideal control of inference by reasoners with limited abilities.
|
1303.5723 | Non-monotonic Reasoning and the Reversibility of Belief Change | cs.AI | Traditional approaches to non-monotonic reasoning fail to satisfy a number of
plausible axioms for belief revision and suffer from conceptual difficulties as
well. Recent work on ranked preferential models (RPMs) promises to overcome
some of these difficulties. Here we show that RPMs are not adequate to handle
iterated belief change. Specifically, we show that RPMs do not always allow for
the reversibility of belief change. This result indicates the need for
numerical strengths of belief.
|
1303.5724 | Belief and Surprise - A Belief-Function Formulation | cs.AI | We motivate and describe a theory of belief in this paper. This theory is
developed with the following view of human belief in mind. Consider the belief
that an event E will occur (or has occurred or is occurring). An agent either
entertains this belief or does not entertain this belief (i.e., there is no
"grade" in entertaining the belief). If the agent chooses to exercise "the will
to believe" and entertain this belief, he/she/it is entitled to a degree of
confidence c (1 > c > 0) in doing so. Adopting this view of human belief, we
conjecture that whenever an agent entertains the belief that E will occur with
c degree of confidence, the agent will be surprised (to the extent c) upon
realizing that E did not occur.
|
1303.5725 | Evidential Reasoning in a Categorial Perspective: Conjunction and
Disjunction of Belief Functions | cs.AI | The categorial approach to evidential reasoning can be seen as a combination
of the probability kinematics approach of Richard Jeffrey (1965) and the
maximum (cross-) entropy inference approach of E. T. Jaynes (1957). As a
consequence of that viewpoint, it is well known that category theory provides
natural definitions for logical connectives. In particular, disjunction and
conjunction are modelled by general categorial constructions known as products
and coproducts. In this paper, I focus mainly on Dempster-Shafer theory of
belief functions for which I introduce a category I call Dempster's category. I
prove the existence of and give explicit formulas for conjunction and
disjunction in the subcategory of separable belief functions. In Dempster's
category, the newly defined conjunction can be seen as the most cautious
conjunction of beliefs, and thus no assumption about distinctness (of the
sources) of beliefs is needed, as opposed to Dempster's rule of combination,
which calls for distinctness (of the sources) of beliefs.
|
1303.5726 | Reasoning with Mass Distributions | cs.AI | The concept of movable evidence masses that flow from supersets to subsets as
specified by experts represents a suitable framework for reasoning under
uncertainty. The mass flow is controlled by specialization matrices. New
evidence is integrated into the frame of discernment by conditioning or
revision (Dempster's rule of conditioning), for which special specialization
matrices exist. Even some aspects of non-monotonic reasoning can be represented
by certain specialization matrices.
|
1303.5727 | A Logic of Graded Possibility and Certainty Coping with Partial
Inconsistency | cs.AI cs.LO | A semantics is given to possibilistic logic, a logic that handles weighted
classical logic formulae, and where weights are interpreted as lower bounds on
degrees of certainty or possibility, in the sense of Zadeh's possibility
theory. The proposed semantics is based on fuzzy sets of interpretations. It is
tolerant to partial inconsistency. Satisfiability is extended from
interpretations to fuzzy sets of interpretations, each fuzzy set representing a
possibility distribution describing what is known about the state of the world.
A possibilistic knowledge base is then viewed as a set of possibility
distributions that satisfy it. The refutation method of automated deduction in
possibilistic logic, based on previously introduced generalized resolution
principle is proved to be sound and complete with respect to the proposed
semantics, including the case of partial inconsistency.
|
1303.5728 | Conflict and Surprise: Heuristics for Model Revision | cs.AI | Any probabilistic model of a problem is based on assumptions which, if
violated, invalidate the model. Users of probability based decision aids need
to be alerted when cases arise that are not covered by the aid's model.
Diagnosis of model failure is also necessary to control dynamic model
construction and revision. This paper presents a set of decision theoretically
motivated heuristics for diagnosing situations in which a model is likely to
provide an inadequate representation of the process being modeled.
|
1303.5729 | Reasoning under Uncertainty: Some Monte Carlo Results | cs.AI | A series of Monte Carlo studies was performed to compare the behavior of
some alternative procedures for reasoning under uncertainty. The behavior of
several Bayesian, linear model and default reasoning procedures were examined
in the context of increasing levels of calibration error. The most interesting
result is that Bayesian procedures tended to output more extreme posterior
belief values (posterior beliefs near 0.0 or 1.0) than other techniques, but
the linear models were relatively less likely to output strong support for an
erroneous conclusion. Also, accounting for the probabilistic dependencies
between evidence items was important for both Bayesian and linear updating
procedures.
|
1303.5730 | Representation Requirements for Supporting Decision Model Formulation | cs.AI | This paper outlines a methodology for analyzing the representational support
for knowledge-based decision-modeling in a broad domain. A relevant set of
inference patterns and knowledge types are identified. By comparing the
analysis results to existing representations, some insights are gained into a
design approach for integrating categorical and uncertain knowledge in a
context sensitive manner.
|
1303.5731 | A Language for Planning with Statistics | cs.AI | When a planner must decide whether it has enough evidence to make a decision
based on probability, it faces the sample size problem. Current planners using
probabilities need not deal with this problem because they do not generate
their probabilities from observations. This paper presents an event based
language in which the planner's probabilities are calculated from the binomial
random variable generated by the observed ratio of one type of event to
another. Such probabilities are subject to error, so the planner must
introspect about their validity. Inferences about the probability of these
events can be made using statistics. Inferences about the validity of the
approximations can be made using interval estimation. Interval estimation
allows the planner to avoid making choices that are only weakly supported by
the planner's evidence.
|
1303.5732 | A Modification to Evidential Probability | cs.AI | Selecting the right reference class and the right interval when faced with
conflicting candidates and no possibility of establishing subset style
dominance has been a problem for Kyburg's Evidential Probability system.
Various methods have been proposed by Loui and Kyburg to solve this problem in
a way that is both intuitively appealing and justifiable within Kyburg's
framework. The scheme proposed in this paper leads to stronger statistical
assertions without sacrificing too much of the intuitive appeal of Kyburg's
latest proposal.
|
1303.5733 | Investigation of Variances in Belief Networks | cs.AI | The belief network is a well-known graphical structure for representing
independences in a joint probability distribution. Methods that perform
probabilistic inference in belief networks often treat the conditional
probabilities stored in the network as certain values. However, if
one takes either a subjectivistic or a limiting frequency approach to
probability, one can never be certain of probability values. An algorithm
should not only be capable of reporting the probabilities of the alternatives
of remaining nodes when other nodes are instantiated; it should also be capable
of reporting the uncertainty in these probabilities relative to the uncertainty
in the probabilities which are stored in the network. In this paper a method
for determining the variances in inferred probabilities is obtained under the
assumption that a posterior distribution on the uncertainty variables can be
approximated by the prior distribution. It is shown that this assumption is
plausible if there is a reasonable amount of confidence in the probabilities
which are stored in the network. Furthermore in this paper, a surprising upper
bound for the prior variances in the probabilities of the alternatives of all
nodes is obtained in the case where the probability distributions of the
probabilities of the alternatives are beta distributions. It is shown that the
prior variance in the probability at an alternative of a node is bounded above
by the largest variance in an element of the conditional probability
distribution for that node.
|
1303.5734 | A Sensitivity Analysis of Pathfinder: A Follow-up Study | cs.AI | At last year's Uncertainty in AI Conference, we reported the results of a
sensitivity analysis study of Pathfinder. Our findings were quite
unexpected: slight variations to Pathfinder's parameters appeared to lead to
substantial degradations in system performance. A careful look at our first
analysis, together with the valuable feedback provided by the participants of
last year's conference, led us to conduct a follow-up study. Our follow-up
differs from our initial study in two ways: (i) the probabilities 0.0 and 1.0
remained unchanged, and (ii) the variations to the probabilities that are close
to both ends (0.0 or 1.0) were less than the ones close to the middle (0.5).
The results of the follow-up study look more reasonable: slight variations to
Pathfinder's parameters now have little effect on its performance. Taken
together, these two sets of results suggest a viable extension of a common
decision analytic sensitivity analysis to the larger, more complex settings
generally encountered in artificial intelligence.
|
1303.5735 | Non-monotonic Negation in Probabilistic Deductive Databases | cs.AI | In this paper we study the uses and the semantics of non-monotonic negation
in probabilistic deductive databases. Based on the stable semantics for
classical logic programming, we introduce the notion of stable formula
functions. We show that stable formula functions are minimal fixpoints of
operators associated with probabilistic deductive databases with negation.
Furthermore, since a probabilistic deductive database may not necessarily have
a stable formula function, we provide a stable class semantics for such
databases. Finally, we demonstrate that the proposed semantics can handle
default reasoning naturally in the context of probabilistic deduction.
|
1303.5736 | Management of Uncertainty in the Multi-Level Monitoring and Diagnosis of
the Time of Flight Scintillation Array | cs.AI | We present a general architecture for the monitoring and diagnosis of large
scale sensor-based systems with real time diagnostic constraints. This
architecture is multileveled, combining a single monitoring level based on
statistical methods with two model based diagnostic levels. At each level,
sources of uncertainty are identified, and integrated methodologies for
uncertainty management are developed. The general architecture was applied to
the monitoring and diagnosis of a specific nuclear physics detector at Lawrence
Berkeley National Laboratory that contained approximately 5000 components and
produced over 500 channels of output data. The general architecture is
scalable, and work is ongoing to apply it to detector systems one and two
orders of magnitude more complex.
|
1303.5737 | Integrating Probabilistic Rules into Neural Networks: A Stochastic EM
Learning Algorithm | cs.AI | The EM-algorithm is a general procedure to get maximum likelihood estimates
if part of the observations on the variables of a network are missing. In this
paper a stochastic version of the algorithm is adapted to probabilistic neural
networks describing the associative dependency of variables. These networks
have a probability distribution, which is a special case of the distribution
generated by probabilistic inference networks. Hence both types of networks can
be combined, allowing probabilistic rules as well as unspecified
associations to be integrated in a sound way. The resulting network may have a number of
interesting features including cycles of probabilistic rules, hidden
'unobservable' variables, and uncertain and contradictory evidence.
|
1303.5738 | Representing Bayesian Networks within Probabilistic Horn Abduction | cs.AI | This paper presents a simple framework for Horn clause abduction, with
probabilities associated with hypotheses. It is shown how this representation
can represent any probabilistic knowledge representable in a Bayesian belief
network. The main contributions are in finding a relationship between logical
and probabilistic notions of evidential reasoning. This can be used as a basis
for a new way to implement Bayesian Networks that allows for approximations to
the value of the posterior probabilities, and also points to a way that
Bayesian networks can be extended beyond a propositional language.
|
1303.5739 | Dynamic Network Updating Techniques For Diagnostic Reasoning | cs.AI | A new probabilistic network construction system, DYNASTY, is proposed for
diagnostic reasoning given variables whose probabilities change over time.
Diagnostic reasoning is formulated as a sequential stochastic process, and is
modeled using influence diagrams. Given a set O of observations, DYNASTY
creates an influence diagram in order to devise the best action given O.
Sensitivity analyses are conducted to determine if the best network has been
created, given the uncertainty in network parameters and topology. DYNASTY uses
an equivalence class approach to provide decision thresholds for the
sensitivity analysis. This equivalence-class approach to diagnostic reasoning
differentiates diagnoses only if the required actions are different. A set of
network-topology updating algorithms are proposed for dynamically updating the
network when necessary.
|
1303.5740 | High Level Path Planning with Uncertainty | cs.AI | For high level path planning, environments are usually modeled as distance
graphs, and path planning problems are reduced to computing the shortest path
in distance graphs. One major drawback of this modeling is the inability to
model uncertainties, which are often encountered in practice. In this paper, a
new tool, called U-graph, is proposed for environment modeling. A U-graph is an
extension of distance graphs with the ability to handle a kind of uncertainty.
By modeling an uncertain environment as a U-graph, and a navigation problem as
a Markovian decision process, we can precisely define a new optimality
criterion for navigation plans, and more importantly, we can come up with a
general algorithm for computing optimal plans for navigation tasks.
|
1303.5741 | Formal Model of Uncertainty for Possibilistic Rules | cs.AI | Given a universe of discourse X-a domain of possible outcomes-an experiment
may consist of selecting one of its elements, subject to the operation of
chance, or of observing the elements, subject to imprecision. A priori
uncertainty about the actual result of the experiment may be quantified,
representing either the likelihood of the choice of x ∈ X or the degree to which
any such x would be suitable as a description of the outcome. The former case
corresponds to a probability distribution, while the latter gives a possibility
assignment on X. The study of such assignments and their properties falls
within the purview of possibility theory [DP88, Y80, Z78]. It, like probability
theory, assigns values between 0 and 1 to express likelihoods of outcomes.
Here, however, the similarity ends. Possibility theory uses the maximum and
minimum functions to combine uncertainties, whereas probability theory uses the
plus and times operations. This leads to very dissimilar theories in terms of
analytical framework, even though they share several semantic concepts. One of
the shared concepts consists of expressing quantitatively the uncertainty
associated with a given distribution. In probability theory its value
corresponds to the gain of information that would result from conducting an
experiment and ascertaining an actual result. This gain of information can
equally well be viewed as a decrease in uncertainty about the outcome of an
experiment. In this case the standard measure of information, and thus
uncertainty, is Shannon entropy [AD75, G77]. It enjoys several advantages: it is
characterized uniquely by a few very natural properties, and it can be
conveniently used in decision processes. This application is based on the
principle of maximum entropy; it has become a popular method of relating
decisions to uncertainty. This paper demonstrates that an equally integrated
theory can be built on the foundation of possibility theory. We first show how
to define measures of information and uncertainty for possibility assignments.
Next we construct an information-based metric on the space of all possibility
distributions defined on a given domain. It allows us to capture the notion of
proximity in information content among the distributions. Lastly, we show that
all the above constructions can be carried out for continuous
distributions, i.e., possibility assignments on arbitrary measurable domains. We
consider this step very significant: finite domains of discourse are but
approximations of the real-life infinite domains. If possibility theory is to
represent real world situations, it must handle continuous distributions both
directly and through finite approximations. In the last section we discuss a
principle of maximum uncertainty for possibility distributions. We show how
such a principle could be formalized as an inference rule. We also suggest it
could be derived as a consequence of simple assumptions about combining
information. We would like to mention that possibility assignments can be
viewed as fuzzy sets and that every fuzzy set gives rise to an assignment of
possibilities. This correspondence has far reaching consequences in logic and
in control theory. Our treatment here is independent of any special
interpretation; in particular we speak of possibility distributions and
possibility measures, defining them as measurable mappings into the interval
[0, 1]. Our presentation is intended as a self-contained, albeit terse summary.
Topics discussed were selected with care, to demonstrate both the completeness
and a certain elegance of the theory. Proofs are not included; we only offer
illustrative examples.
|
1303.5742 | Deliberation and its Role in the Formation of Intentions | cs.AI | Deliberation plays an important role in the design of rational agents
embedded in the real-world. In particular, deliberation leads to the formation
of intentions, i.e., plans of action that the agent is committed to achieving.
In this paper, we present a branching time possible-worlds model for
representing and reasoning about beliefs, goals, intentions, time, actions,
probabilities, and payoffs. We compare this possible-worlds approach with the
more traditional decision tree representation and provide a transformation from
decision trees to possible worlds. Finally, we illustrate how an agent can
perform deliberation using a decision-tree representation and then use a
possible-worlds model to form and reason about his intentions.
|
1303.5743 | Handling Uncertainty during Plan Recognition in Task-Oriented
Consultation Systems | cs.AI | During interactions with human consultants, people are used to providing
partial and/or inaccurate information, and still being understood and assisted. We
attempt to emulate this capability of human consultants in computer
consultation systems. In this paper, we present a mechanism for handling
uncertainty in plan recognition during task-oriented consultations. The
uncertainty arises while choosing an appropriate interpretation of a user's
statements among many possible interpretations. Our mechanism handles this
uncertainty by using probability theory to assess the probabilities of the
interpretations, and complements this assessment by taking into account the
information content of the interpretations. The information content of an
interpretation is a measure of how well defined an interpretation is in terms
of the actions to be performed on the basis of the interpretation. This measure
is used to guide the inference process towards interpretations with a higher
information content. The information content for an interpretation depends on
the specificity and the strength of the inferences in it, where the strength of
an inference depends on the reliability of the information on which the
inference is based. Our mechanism has been developed for use in task-oriented
consultation systems. The domain that we have chosen for exploration is that of
a travel agency.
|
1303.5744 | Truth as Utility: A Conceptual Synthesis | cs.AI | This paper introduces conceptual relations that synthesize utilitarian and
logical concepts, extending the logics of preference of Rescher. We define
first, in the context of a possible worlds model, constraint-dependent measures
that quantify the relative quality of alternative solutions of reasoning
problems or the relative desirability of various policies in control, decision,
and planning problems. We show that these measures may be interpreted as truth
values in a multivalued logic and propose mechanisms for the representation of
complex constraints as combinations of simpler restrictions. These extended
logical operations permit also the combination and aggregation of goal-specific
quality measures into global measures of utility. We identify also relations
that represent differential preferences between alternative solutions and
relate them to the previously defined desirability measures. Extending
conventional modal logic formulations, we introduce structures for the
representation of ignorance about the utility of alternative solutions.
Finally, we examine relations between these concepts and similarity based
semantic models of fuzzy logic.
|
1303.5745 | Pulcinella: A General Tool for Propagating Uncertainty in Valuation
Networks | cs.AI | We present Pulcinella and its use in comparing uncertainty theories.
PULCinella is a general tool for Propagating Uncertainty based on the Local
Computation technique of Shafer and Shenoy. It may be specialized to different
uncertainty theories: at the moment, Pulcinella can propagate probabilities,
belief functions, Boolean values, and possibilities. Moreover, Pulcinella
allows the user to easily define his own specializations. To illustrate
Pulcinella, we analyze two examples by using each of the four theories above.
In the first one, we mainly focus on intrinsic differences between theories. In
the second one, we take a knowledge engineer viewpoint, and check the adequacy
of each theory to a given problem.
|
1303.5746 | Structuring Bodies of Evidence | cs.AI | In this article we present two ways of structuring bodies of evidence, which
allow us to reduce the complexity of the operations usually performed in the
framework of evidence theory. The first structure just partitions the focal
elements in a body of evidence by their cardinality. With this structure we are
able to reduce the complexity on the calculation of the belief functions Bel,
Pl, and Q. The other structure proposed here, the Hierarchical Trees, permits
us to reduce the complexity of the calculation of Bel, Pl, and Q, as well as of
the Dempster's rule of combination in relation to the brute-force algorithm.
Both these structures do not require the generation of all the subsets of the
reference domain.
|
1303.5747 | On the Generation of Alternative Explanations with Implications for
Belief Revision | cs.AI | In general, the best explanation for a given observation makes no promises on
how good it is with respect to other alternative explanations. A major
deficiency of message-passing schemes for belief revision in Bayesian networks
is their inability to generate alternatives beyond the second best. In this
paper, we present a general approach based on linear constraint systems that
naturally generates alternative explanations in an orderly and highly efficient
manner. This approach is then applied to cost-based abduction problems as well
as belief revision in Bayesian networks.
|
1303.5748 | Completing Knowledge by Competing Hierarchies | cs.AI | A control strategy for expert systems is presented which is based on Shafer's
Belief theory and the combination rule of Dempster. In contrast to well-known
strategies, it is not sequential and hypothesis-driven, but parallel and
self-organizing, determined by the concept of information gain. The information
gain, calculated as the maximal difference between the actual evidence
distribution in the knowledge base and the potential evidence determines each
consultation step. Hierarchically structured knowledge is an important
representation form and experts even use several hierarchies in parallel for
constituting their knowledge. Hence the control strategy is applied to a
layered set of distinct hierarchies. Depending on the actual data one of these
hierarchies is chosen by the control strategy for the next step in the
reasoning process. Provided the actual data are well matched to the structure
of one hierarchy, this hierarchy remains selected for a longer consultation
time. If no good match can be achieved, a switch from the actual hierarchy to a
competing one will result, very similar to the phenomenon of restructuring in
problem-solving tasks. Up to now, the control strategy is restricted to
multi-hierarchical knowledge bases with disjoint hierarchies. It is implemented in
the expert system IBIG (inference by information gain), being presently applied
to acquired speech disorders (aphasia).
|
1303.5749 | A Graph-Based Inference Method for Conditional Independence | cs.AI | The graphoid axioms for conditional independence, originally described by
Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 1988]. Such
axioms provide a mechanism for manipulating conditional independence assertions
without resorting to their numerical definition. This paper explores a
representation for independence statements using multiple undirected graphs and
some simple graphical transformations. The independence statements derivable in
this system are equivalent to those obtainable by the graphoid axioms.
Therefore, this is a purely graphical proof technique for conditional
independence.
|
1303.5750 | A Fusion Algorithm for Solving Bayesian Decision Problems | cs.AI | This paper proposes a new method for solving Bayesian decision problems. The
method consists of representing a Bayesian decision problem as a
valuation-based system and applying a fusion algorithm for solving it. The
fusion algorithm is a hybrid of local computational methods for computation of
marginals of joint probability distributions and the local computational
methods for discrete optimization problems.
|
1303.5751 | Algorithms for Irrelevance-Based Partial MAPs | cs.AI | Irrelevance-based partial MAPs are useful constructs for domain-independent
explanation using belief networks. We look at two definitions for such partial
MAPs, and prove important properties that are useful in designing algorithms
for computing them effectively. We make use of these properties in modifying
our standard MAP best-first algorithm, so as to handle irrelevance-based
partial MAPs.
|