| id | title | categories | abstract |
|---|---|---|---|
1304.0608 | Optimal Feedback Rate Sharing Strategy in Zero-Forcing MIMO Broadcast
Channels | cs.IT math.IT | In this paper, we consider a multiple-input multiple-output broadcast channel
with limited feedback where all users share the feedback rates. Firstly, we
find the optimal feedback rate sharing strategy using zero-forcing transmission
scheme at the transmitter and random vector quantization at each user. We
mathematically prove that equal sharing of sum feedback size among all users is
the optimal strategy in the low signal-to-noise ratio (SNR) region, while
allocating whole feedback size to a single user is the optimal strategy in the
high SNR region. For the mid-SNR region, we propose a simple numerical method,
based on our analysis, to find the optimal feedback rate sharing strategy, and
show that equal allocation of the sum feedback rate to a subset of the users is
the optimal strategy. The proposed numerical method also applies when the users
have different path losses. We
show that our proposed feedback rate sharing scheme can be extended to the
system with stream control and is still useful for the systems with other
techniques such as regularized zero-forcing and spherical cap codebook.
|
1304.0620 | Disjunctive Logic Programs versus Normal Logic Programs | cs.AI | This paper focuses on the expressive power of disjunctive and normal logic
programs under the stable model semantics over finite, infinite, or arbitrary
structures. A translation from disjunctive logic programs into normal logic
programs is proposed and then proved to be sound over infinite structures. The
equivalence of the expressive power of the two kinds of logic programs over
arbitrary structures is shown to coincide with that over finite structures, and
to hold if and only if NP is closed under complement. Over finite structures, the
intranslatability from disjunctive logic programs to normal logic programs is
also proved if arities of auxiliary predicates and functions are bounded in a
certain way.
|
1304.0640 | Event management for large scale event-driven digital hardware spiking
neural networks | cs.NE cs.AI cs.DC | The interest in brain-like computation has led to the design of a plethora of
innovative neuromorphic systems. Individually, spiking neural networks (SNNs),
event-driven simulation and digital hardware neuromorphic systems get a lot of
attention. Despite the popularity of event-driven SNNs in software, very few
digital hardware architectures are found. This is because existing hardware
solutions for event management scale badly with the number of events. This
paper introduces the structured heap queue, a pipelined digital hardware data
structure, and demonstrates its suitability for event management. The
structured heap queue scales gracefully with the number of events, allowing the
efficient implementation of large scale digital hardware event-driven SNNs. The
scaling is linear for memory, logarithmic for logic resources and constant for
processing time. The use of the structured heap queue is demonstrated on
field-programmable gate array (FPGA) with an image segmentation experiment and
an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at a
rate of one every 7 clock cycles, and a 406$\times$158 pixel image is segmented in
200 ms.
|
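The structured heap queue above is a hardware data structure, but its ordering contract can be sketched in software with Python's `heapq`: spike events are (timestamp, neuron) pairs popped in time order. This is a minimal illustrative analogue, not the paper's pipelined design, and the names are ours.

```python
import heapq

# Software analogue of an event queue for an event-driven SNN:
# events are (time, neuron_id) pairs processed in timestamp order.
# The hardware structured heap queue provides the same ordering with
# pipelined logic; this sketch only illustrates the contract.

class EventQueue:
    def __init__(self):
        self._heap = []

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, neuron_id))

    def pop(self):
        return heapq.heappop(self._heap)  # earliest event first

q = EventQueue()
for t, n in [(5, 2), (1, 0), (3, 1)]:
    q.push(t, n)
order = [q.pop() for _ in range(3)]
print(order)  # [(1, 0), (3, 1), (5, 2)]
```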
1304.0678 | Randomized Methods for Design of Uncertain Systems: Sample Complexity
and Sequential Algorithms | cs.SY math.OC | In this paper, we study randomized methods for feedback design of uncertain
systems. The first contribution is to derive the sample complexity of various
constrained control problems. In particular, we show the key role played by the
binomial distribution and related tail inequalities, and compute the sample
complexity. This contribution significantly improves the existing results by
reducing the number of required samples in the randomized algorithm. These
results are then applied to the analysis of worst-case performance and design
with robust optimization. The second contribution of the paper is to introduce
a general class of sequential algorithms, denoted as Sequential Probabilistic
Validation (SPV). In these sequential algorithms, at each iteration, a
candidate solution is probabilistically validated, and corrected if necessary,
to meet the required specifications. The results we derive provide the sample
complexity which guarantees that the solutions obtained with SPV algorithms
meet some pre-specified probabilistic accuracy and confidence. The performance
of these algorithms is illustrated and compared with other existing methods
using a numerical example dealing with robust system identification.
|
1304.0682 | Sparse Signal Processing with Linear and Nonlinear Observations: A
Unified Shannon-Theoretic Approach | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We derive fundamental sample complexity bounds for recovering sparse and
structured signals for linear and nonlinear observation models including sparse
regression, group testing, multivariate regression and problems with missing
features. In general, sparse signal processing problems can be characterized in
terms of the following Markovian property. We are given a set of $N$ variables
$X_1,X_2,\ldots,X_N$, and there is an unknown subset of variables $S \subset
\{1,\ldots,N\}$ that are relevant for predicting outcomes $Y$. More
specifically, when $Y$ is conditioned on $\{X_n\}_{n\in S}$ it is conditionally
independent of the other variables, $\{X_n\}_{n \not \in S}$. Our goal is to
identify the set $S$ from samples of the variables $X$ and the associated
outcomes $Y$. We characterize this problem as a version of the noisy channel
coding problem. Using asymptotic information theoretic analyses, we establish
mutual information formulas that provide necessary and sufficient conditions on
the number of samples required to successfully recover the salient variables.
These mutual information expressions unify conditions for both linear and
nonlinear observations. We then compute sample complexity bounds for the
aforementioned models, based on the mutual information expressions in order to
demonstrate the applicability and flexibility of our results in general sparse
signal processing models.
|
1304.0715 | A cookbook of translating English to Xapi | cs.AI cs.CL | The Xapagy cognitive architecture was designed to perform narrative
reasoning: to model and mimic the activities performed by humans when
witnessing, reading, recalling, narrating and talking about stories. Xapagy
communicates with the outside world using Xapi, a simplified, "pidgin" language
which is strongly tied to the internal representation model (instances, scenes
and verb instances) and reasoning techniques (shadows and headless shadows).
While not fully a semantic equivalent of natural language, Xapi can represent a
wide range of complex stories. We illustrate the representation technique used
in Xapi through examples taken from folk physics, folk psychology as well as
some more unusual literary examples. We argue that while the Xapi model
represents a conceptual shift from the English representation, the mapping is
logical and consistent, and a trained knowledge engineer can translate between
English and Xapi at near-native speed.
|
1304.0725 | Improved Performance of Unsupervised Method by Renovated K-Means | cs.LG cs.CV stat.ML | Clustering is the separation of data into groups of similar objects. Each
group, called a cluster, consists of objects that are similar to one another and
dissimilar to objects of other groups. In this paper, the K-Means algorithm is
implemented with three distance functions in order to identify the optimal
distance function for clustering. The proposed K-Means algorithm is compared
with standard K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted
K-Means (DWK-Means) using the Davies-Bouldin index, execution time and
iteration count. Experimental results show that the proposed K-Means algorithm
performs better on the Iris and Wine datasets than the three other clustering
methods.
|
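The comparison of distance functions in the abstract above can be sketched as K-Means with a pluggable metric. The three metrics below are common candidates chosen for illustration, not necessarily the three used in the paper, and the toy data stands in for Iris/Wine.

```python
import numpy as np

# K-Means with a pluggable distance function: assignment uses `dist`,
# centers are recomputed as cluster means. Illustrative sketch only.

def kmeans(X, k, dist, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center under `dist`
        labels = np.array([np.argmin([dist(x, c) for c in centers]) for x in X])
        # recompute each center as the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

euclidean = lambda a, b: np.linalg.norm(a - b)
manhattan = lambda a, b: np.abs(a - b).sum()
chebyshev = lambda a, b: np.abs(a - b).max()

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
for name, d in [("euclidean", euclidean), ("manhattan", manhattan),
                ("chebyshev", chebyshev)]:
    labels, _ = kmeans(X, 2, d)
    print(name, labels)  # the two point pairs end up in separate clusters
```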
1304.0730 | Representation, Approximation and Learning of Submodular Functions Using
Low-rank Decision Trees | cs.LG cs.CC cs.DS | We study the complexity of approximate representation and learning of
submodular functions over the uniform distribution on the Boolean hypercube
$\{0,1\}^n$. Our main result is the following structural theorem: any
submodular function is $\epsilon$-close in $\ell_2$ to a real-valued decision
tree (DT) of depth $O(1/\epsilon^2)$. This immediately implies that any
submodular function is $\epsilon$-close to a function of at most
$2^{O(1/\epsilon^2)}$ variables and has a spectral $\ell_1$ norm of
$2^{O(1/\epsilon^2)}$. It also implies the closest previous result that states
that submodular functions can be approximated by polynomials of degree
$O(1/\epsilon^2)$ (Cheraghchi et al., 2012). Our result is proved by
constructing an approximation of a submodular function by a DT of rank
$4/\epsilon^2$ and a proof that any rank-$r$ DT can be $\epsilon$-approximated
by a DT of depth $\frac{5}{2}(r+\log(1/\epsilon))$.
We show that these structural results can be exploited to give an
attribute-efficient PAC learning algorithm for submodular functions running in
time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$. The best previous algorithm
for the problem requires $n^{O(1/\epsilon^{2})}$ time and examples (Cheraghchi
et al., 2012) but works also in the agnostic setting. In addition, we give
improved learning algorithms for a number of related settings.
We also prove that our PAC and agnostic learning algorithms are essentially
optimal via two lower bounds: (1) an information-theoretic lower bound of
$2^{\Omega(1/\epsilon^{2/3})}$ on the complexity of learning monotone
submodular functions in any reasonable model; (2) a computational lower bound of
$n^{\Omega(1/\epsilon^{2/3})}$ based on a reduction to learning of sparse
parities with noise, a problem widely believed to be intractable. These are the first
lower bounds for learning of submodular functions over the uniform
distribution.
|
1304.0740 | O(log T) Projections for Stochastic Optimization of Smooth and Strongly
Convex Functions | cs.LG | Traditional algorithms for stochastic optimization require projecting the
solution at each iteration into a given domain to ensure its feasibility. When
facing complex domains, such as positive semi-definite cones, the projection
operation can be expensive, leading to a high computational cost per iteration.
In this paper, we present a novel algorithm that aims to reduce the number of
projections for stochastic optimization. The proposed algorithm combines the
strength of several recent developments in stochastic optimization, including
mini-batch, extra-gradient, and epoch gradient descent, in order to effectively
exploit smoothness and strong convexity. We show, both in expectation and
with a high probability, that when the objective function is both smooth and
strongly convex, the proposed algorithm achieves the optimal $O(1/T)$ rate of
convergence with only $O(\log T)$ projections. Our empirical study verifies the
theoretical result.
|
1304.0751 | A Cumulative Multi-Niching Genetic Algorithm for Multimodal Function
Optimization | cs.NE | This paper presents a cumulative multi-niching genetic algorithm (CMN GA),
designed to expedite optimization problems that have computationally-expensive
multimodal objective functions. By never discarding individuals from the
population, the CMN GA makes use of the information from every objective
function evaluation as it explores the design space. A fitness-related
population density control over the design space reduces unnecessary objective
function evaluations. The algorithm's novel arrangement of genetic operations
provides fast and robust convergence to multiple local optima. Benchmark tests
alongside three other multi-niching algorithms show that the CMN GA has a
greater convergence ability and provides an order-of-magnitude reduction in the
number of objective function evaluations required to achieve a given level of
convergence.
|
1304.0806 | IFP-Intuitionistic fuzzy soft set theory and its applications | cs.AI | In this work, we present the definition of the intuitionistic fuzzy
parameterized (IFP) intuitionistic fuzzy soft set and its operations. We then
define an IFP-aggregation operator to form an IFP-intuitionistic fuzzy soft
decision-making method which allows more efficient decision processes to be
constructed.
|
1304.0823 | Lie Algebrized Gaussians for Image Representation | cs.CV | We present an image representation method which is derived from analyzing
Gaussian probability density function (\emph{pdf}) space using Lie group
theory. In our proposed method, images are modeled by Gaussian mixture models
(GMMs) which are adapted from a globally trained GMM called universal
background model (UBM). Then we vectorize the GMMs based on two facts: (1)
components of image-specific GMMs are closely grouped together around their
corresponding component of the UBM due to the characteristic of the UBM
adaptation procedure; (2) Gaussian \emph{pdf}s form a Lie group, which is a
differentiable manifold rather than a vector space. We map each Gaussian
component to the tangent vector space (named Lie algebra) of Lie group at the
manifold position of the UBM. The final feature vector, named Lie algebrized
Gaussians (LAG), is then constructed by combining the Lie algebrized Gaussian
components with their mixture weights. We apply LAG features to the scene
category recognition problem and observe state-of-the-art performance on the
15Scenes benchmark.
|
1304.0825 | Synthesizing Switching Controllers for Hybrid Systems by Continuous
Invariant Generation | cs.SY cs.NA cs.SC | We extend a template-based approach for synthesizing switching controllers
for semi-algebraic hybrid systems, in which all expressions are polynomials.
This is achieved by combining a QE (quantifier elimination)-based method for
generating continuous invariants with a qualitative approach for predefining
templates. Our synthesis method is relatively complete with regard to a given
family of predefined templates. Using qualitative analysis, we discuss
heuristics to reduce the number of parameters appearing in the templates. To
avoid too much human interaction in choosing templates as well as the high
computational complexity caused by QE, we further investigate applications of
the SOS (sum-of-squares) relaxation approach and the template polyhedra
approach in continuous invariant generation, which are both well supported by
efficient numerical solvers.
|
1304.0839 | Multiscale Hybrid Non-local Means Filtering Using Modified Similarity
Measure | cs.CV | A new multiscale implementation of non-local means filtering for image
denoising is proposed. The proposed algorithm also introduces a modification of
similarity measure for patch comparison. The standard Euclidean norm is
replaced by a weighted Euclidean norm for patch-based comparison. Treating each
patch as an oriented surface, a normal vector patch is associated with it. The
inner product of these normal vector patches is then used as the weight factor
in the weighted Euclidean distance between photometric patches. The
algorithm involves two steps: The first step is multiscale implementation of an
accelerated non-local means filtering in the stationary wavelet domain to
obtain a refined version of the noisy patches for later comparison. This step
is inspired by a preselection phase of finding similar patches in various
non-local means approaches. The next step is to apply the modified non-local
means filtering to the noisy image using the reference patches obtained in the
first step. These refined patches contain less noise, and consequently the
computation of normal vectors and partial derivatives is more accurate.
Experimental results indicate equivalent or better performance of the proposed
algorithm compared to various state-of-the-art algorithms.
|
1304.0840 | A Fast Semidefinite Approach to Solving Binary Quadratic Problems | cs.CV cs.LG | Many computer vision problems can be formulated as binary quadratic programs
(BQPs). Two classic relaxation methods are widely used for solving BQPs,
namely, spectral methods and semidefinite programming (SDP), each with its
own advantages and disadvantages. Spectral relaxation is simple and easy to
implement, but its bound is loose. Semidefinite relaxation has a tighter bound,
but its computational complexity is high for large scale problems. We present a
new SDP formulation for BQPs, with two desirable properties. First, it has a
similar relaxation bound to conventional SDP formulations. Second, compared
with conventional SDP methods, the new SDP formulation leads to a significantly
more efficient and scalable dual optimization approach, which has the same
degree of complexity as spectral methods. Extensive experiments on various
applications including clustering, image segmentation, co-segmentation and
registration demonstrate the usefulness of our SDP formulation for solving
large-scale BQPs.
|
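The spectral relaxation that the abstract uses as its baseline can be shown on a tiny instance: maximize $x^T A x$ over $x \in \{-1,+1\}^n$, relax to the sphere $\|x\|_2^2 = n$, solve via the leading eigenvector, and round by taking signs. A sketch under our own toy matrix, not the paper's SDP formulation.

```python
import numpy as np

# Spectral relaxation of a tiny binary quadratic program:
# maximize x^T A x over x in {-1, +1}^n. The relaxed problem is solved
# exactly by the leading eigenvector; signs give a feasible binary point.

A = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0],
              [-1.0, -1.0, 0.0]])  # symmetric coupling matrix

w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
v = V[:, -1]                      # leading eigenvector
x = np.sign(v)
x[x == 0] = 1.0                   # break ties deterministically
print(x, x @ A @ x)               # optimal value 6 for this instance
```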
1304.0844 | Coalitional Manipulation for Schulze's Rule | cs.AI cs.GT | Schulze's rule is used in the elections of a large number of organizations
including Wikimedia and Debian. Part of the reason for its popularity is the
large number of axiomatic properties, like monotonicity and Condorcet
consistency, which it satisfies. We identify a potential shortcoming of
Schulze's rule: it is computationally vulnerable to manipulation. In
particular, we prove that computing an unweighted coalitional manipulation
(UCM) is polynomial for any number of manipulators. This result holds for both
the unique winner and the co-winner versions of UCM. This resolves an open
question stated by Parkes and Xia (2012). We also prove that computing a
weighted coalitional manipulation (WCM) is polynomial for a bounded number of
candidates. Finally, we discuss the relation between the unique winner UCM
problem and the co-winner UCM problem and argue that they have substantially
different necessary and sufficient conditions for the existence of a successful
manipulation.
|
1304.0848 | Phase-Aligned Space-Time Coding for a Single Stream MIMO system | cs.IT math.IT | We present a phase-aligned space-time coding scheme that expands the original
Alamouti codeword to three or four transmit antennas ($N_t = 3$ or 4) with
phase alignment. With $1 \sim 2$ bits of feedback for the phase information, the
fundamental performance penalty of $10\log_{10}(N_t)$ dB of orthogonal
space-time coding compared to the optimum beamforming is reduced by 1 dB (for
$N_t=3$) or 2 dB (for $N_t = 4$) on average. With the proposed scheme, the full
diversity order of $N_t$ is achievable, whereas the receiver architecture
remains the same as the legacy Alamouti decoding with codeword size of two,
since the spatial expansion is transparent to the receiver. Our results show
that the proposed scheme outperforms open-loop space-time coding for three or four
transmit antennas by more than 3 dB.
|
1304.0857 | Coexistence of Near-Field and Far-Field Sources: the Angular Resolution
Limit | cs.IT math.IT | Passive source localization is a well-known inverse problem in which we
convert the observed measurements into information about the directions of
arrival. In this paper we focus on the optimal resolution of this problem.
More precisely, we propose in this contribution to derive and analyze the
Angular Resolution Limit (ARL) for the scenario of mixed Near-Field (NF) and
Far-Field (FF) Sources. This scenario is relevant to some realistic situations.
We base our analysis on Smith's equation, which involves the Cram\'er-Rao
Bound (CRB). This equation provides the theoretical ARL which is independent of
a specific estimator. Our methodology is the following: first, we derive a
closed-form expression of the CRB for the considered problem. Using these
expressions, we can rewrite Smith's equation as a 4th-order polynomial by
assuming a small separation of the sources. Finally, we derive the analytic ARL
in closed form, both with and without the assumption of low noise variance. The
obtained expression is compact and provides useful qualitative information
on the behavior of the ARL.
|
1304.0859 | Angular resolution limit for deterministic correlated sources | cs.IT math.IT | This paper is devoted to the analysis of the angular resolution limit (ARL),
an important performance measure in the directions-of-arrival estimation
theory. The main fruit of our endeavor takes the form of an explicit,
analytical expression of this resolution limit, w.r.t. the angular parameters
of interest between two closely spaced point sources in the far-field region.
As by-products, closed-form expressions of the Cram\'er-Rao bound have been
derived. Finally, with the aid of numerical tools, we confirm the validity of
our derivation and provide a detailed discussion on several enlightening
properties of the ARL revealed by our expression, with an emphasis on the
impact of the signal correlation.
|
1304.0869 | Patch-based Probabilistic Image Quality Assessment for Face Selection
and Improved Video-based Face Recognition | cs.CV stat.AP | In video based face recognition, face images are typically captured over
multiple frames in uncontrolled conditions, where head pose, illumination,
shadowing, motion blur and focus change over the sequence. Additionally,
inaccuracies in face localisation can also introduce scale and alignment
variations. Using all face images, including images of poor quality, can
actually degrade face recognition performance. While one solution is to use
only the "best" subset of images, current face selection techniques are
incapable of simultaneously handling all of the abovementioned issues. We
propose an efficient patch-based face image quality assessment algorithm which
quantifies the similarity of a face image to a probabilistic face model,
representing an "ideal" face. Image characteristics that affect recognition are
taken into account, including variations in geometric alignment (shift,
rotation and scale), sharpness, head pose and cast shadows. Experiments on
FERET and PIE datasets show that the proposed algorithm is able to identify
images which are simultaneously the most frontal, aligned, sharp and well
illuminated. Further experiments on a new video surveillance dataset (termed
ChokePoint) show that the proposed method provides better face subsets than
existing face selection techniques, leading to significant improvements in
recognition accuracy.
|
1304.0878 | C Language Extensions for Hybrid CPU/GPU Programming with StarPU | cs.MS cs.CE cs.DC | Modern platforms used for high-performance computing (HPC) include machines
with both general-purpose CPUs, and "accelerators", often in the form of
graphical processing units (GPUs). StarPU is a C library to exploit such
platforms. It provides users with ways to define "tasks" to be executed on CPUs
or GPUs, along with the dependencies among them, and it automatically
schedules them over all the available processing units. In doing so, it also
relieves programmers from the need to know the underlying architecture details:
it adapts to the available CPUs and GPUs, and automatically transfers data
between main memory and GPUs as needed. While StarPU's approach is successful
at addressing run-time scheduling issues, being a C library makes for a poor
and error-prone programming interface. This paper presents an effort started in
2011 to promote some of the concepts exported by the library as C language
constructs, by means of an extension of the GCC compiler suite. Our main
contribution is the design and implementation of language extensions that map
to StarPU's task programming paradigm. We argue that the proposed extensions
make it easier to get started with StarPU, eliminate errors that can occur when
using the C library, and help diagnose possible mistakes. We conclude with
directions for future work.
|
1304.0886 | Improved Anomaly Detection in Crowded Scenes via Cell-based Analysis of
Foreground Speed, Size and Texture | cs.CV | A robust and efficient anomaly detection technique is proposed, capable of
dealing with crowded scenes where traditional tracking based approaches tend to
fail. Initial foreground segmentation of the input frames confines the analysis
to foreground objects and effectively ignores irrelevant background dynamics.
Input frames are split into non-overlapping cells, followed by extracting
features based on motion, size and texture from each cell. Each feature type is
independently analysed for the presence of an anomaly. Unlike most methods, our
approach achieves a refined estimate of object motion by computing the optical
flow of only the foreground pixels. The motion and size features are modelled by an
approximated version of kernel density estimation, which is computationally
efficient even for large training datasets. Texture features are modelled by an
adaptively grown codebook, with the number of entries in the codebook selected
in an online fashion. Experiments on the recently published UCSD Anomaly
Detection dataset show that the proposed method obtains considerably better
results than three recent approaches: MPPCA, social force, and mixture of
dynamic textures (MDT). The proposed method is also several orders of magnitude
faster than MDT, the next best performing method.
|
1304.0897 | Duality in STRIPS planning | cs.AI | We describe a duality mapping between STRIPS planning tasks. By exchanging
the initial and goal conditions, taking their respective complements, and
swapping for every action its precondition and delete list, one obtains for
every STRIPS task its dual version, which has a solution if and only if the
original does. This is proved by showing that the described transformation
essentially turns progression (forward search) into regression (backward
search) and vice versa.
The duality sheds new light on STRIPS planning by allowing a transfer of
ideas from one search approach to the other. It can be used to construct new
algorithms from old ones, or (equivalently) to obtain new benchmarks from
existing ones. Experiments show that the dual versions of IPC benchmarks are in
general quite difficult for modern planners. This may be seen as a new
challenge. On the other hand, the cases where the dual versions are easier to
solve demonstrate that the duality can also be made useful in practice.
|
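The dual-task construction described above (exchange and complement the initial state and goal, swap each action's precondition with its delete list) can be sketched over a fixed fact set. The data layout and names here are illustrative, not from the paper.

```python
from collections import namedtuple

# STRIPS duality sketch: the dual of the dual recovers the original task.
Action = namedtuple("Action", "pre add delete")
Task = namedtuple("Task", "facts init goal actions")

def dual(task):
    return Task(
        task.facts,
        task.facts - task.goal,    # new init: complement of the old goal
        task.facts - task.init,    # new goal: complement of the old init
        [Action(a.delete, a.add, a.pre) for a in task.actions],
    )

t = Task(frozenset("pq"), frozenset("p"), frozenset("pq"),
         [Action(frozenset("p"), frozenset("q"), frozenset("p"))])
d = dual(t)
print(d.init, d.goal)   # frozenset() frozenset({'q'})
assert dual(d) == t     # the mapping is an involution
```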
1304.0913 | Predicting Network Attacks Using Ontology-Driven Inference | cs.AI cs.CR cs.NI | Graph knowledge models and ontologies are very powerful modeling and
reasoning tools. We propose an effective approach to modeling network attacks and
attack prediction, which play important roles in security management. The goals
of this study are twofold: first, we model network attacks, their prerequisites
and consequences using knowledge representation methods in order to provide
description logic reasoning and inference over attack domain concepts; second,
we propose an ontology-based system which predicts potential attacks using
inference over information provided by sensory inputs. We
generate our ontology and evaluate corresponding methods using CAPEC, CWE, and
CVE hierarchical datasets. Experimental results show significant capability
improvements compared to traditional hierarchical and relational models. The
proposed method also reduces false alarms and improves intrusion detection
effectiveness.
|
1304.0920 | Information-Preserving Markov Aggregation | cs.IT math.IT | We present a sufficient condition for a non-injective function of a Markov
chain to be a second-order Markov chain with the same entropy rate as the
original chain. This permits an information-preserving state space reduction by
merging states or, equivalently, lossless compression of a Markov source on a
sample-by-sample basis. The cardinality of the reduced state space is bounded
from below by the node degrees of the transition graph associated with the
original Markov chain.
We also present an algorithm listing all possible information-preserving
state space reductions, for a given transition graph. We illustrate our results
by applying the algorithm to a bi-gram letter model of an English text.
|
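State-space reduction by merging states, as in the abstract above, is closely related to the classic notion of *strong lumpability*: a partition of the states is lumpable when rows inside each block have identical aggregated transition probabilities. This is a textbook condition related in spirit, not the paper's information-preserving criterion; the example matrix is ours.

```python
import numpy as np

# Strong-lumpability check for a transition matrix P and a state partition:
# within each block, every row must aggregate identically over all blocks.

def is_lumpable(P, blocks):
    for block in blocks:
        agg = [[P[i, b].sum() for b in blocks] for i in block]
        if not np.allclose(agg, agg[0]):
            return False
    return True

P = np.array([[0.0, 0.5, 0.5],
              [0.4, 0.3, 0.3],
              [0.4, 0.3, 0.3]])
print(is_lumpable(P, [[0], [1, 2]]))  # True: rows 1 and 2 agree blockwise
print(is_lumpable(P, [[0, 1], [2]]))  # False: rows 0 and 1 disagree
```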
1304.0941 | Recovery of Sparse Signals via Generalized Orthogonal Matching Pursuit:
A New Analysis | cs.IT math.IT | As an extension of orthogonal matching pursuit (OMP) improving the recovery
performance of sparse signals, generalized OMP (gOMP) has recently been studied
in the literature. In this paper, we present a new analysis of the gOMP
algorithm using restricted isometry property (RIP). We show that if the
measurement matrix $\mathbf{\Phi} \in \mathcal{R}^{m \times n}$ satisfies the
RIP with $$\delta_{\max \left\{9, S + 1 \right\}K} \leq \frac{1}{8},$$ then
gOMP performs stable reconstruction of all $K$-sparse signals $\mathbf{x} \in
\mathcal{R}^n$ from the noisy measurements $\mathbf{y} = \mathbf{\Phi x} +
\mathbf{v}$ within $\max \left\{K, \left\lfloor \frac{8K}{S} \right\rfloor
\right\}$ iterations where $\mathbf{v}$ is the noise vector and $S$ is the
number of indices chosen in each iteration of the gOMP algorithm. For Gaussian
random measurements, our results indicate that the number of required
measurements is essentially $m = \mathcal{O}(K \log \frac{n}{K})$, which is a
significant improvement over the existing result $m = \mathcal{O}(K^2 \log
\frac{n}{K})$, especially for large $K$.
|
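The gOMP iteration described above (select the $S$ columns most correlated with the residual, then re-estimate by least squares on the enlarged support) can be sketched directly. For a deterministic noiseless check we use an orthonormal $\mathbf{\Phi}$; in compressed sensing $\mathbf{\Phi}$ would be a wide measurement matrix, and all parameter choices here are illustrative.

```python
import numpy as np

# Sketch of generalized OMP: S atoms per iteration, least-squares refit.
def gomp(Phi, y, S, iters):
    support, r = [], y.copy()
    for _ in range(iters):
        corr = np.abs(Phi.T @ r)
        corr[support] = 0.0                       # skip chosen indices
        support += list(np.argsort(corr)[-S:])    # S largest correlations
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s             # update residual
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # orthonormal columns
x_true = np.zeros(20)
x_true[[3, 11, 17]] = [1.0, -2.0, 1.5]
x_hat = gomp(Phi, Phi @ x_true, S=2, iters=2)
print(np.linalg.norm(x_hat - x_true))  # ~0: exact recovery in this setting
```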
1304.0954 | Labeling and Retrieval of Emotionally-Annotated Images using WordNet | cs.IR cs.HC | Repositories of images with semantic and emotion content descriptions are
valuable tools in many areas such as Affective Computing and Human-Computer
Interaction, but they are also important in the development of multimodal
searchable online databases. Ever growing number of image documents available
on the Internet continuously motivates research of better annotation models and
more efficient retrieval methods which use mash-up of available data on
semantics, scenes, objects, events, context and emotion. Formal knowledge
representation of such high-level semantics requires rich, explicit, human but
also machine-processable information. To achieve these goals we present an
online ontology-based image annotation tool WNtags and demonstrate its
usefulness in knowledge representation and image retrieval using the
International Affective Picture System database. WNtags uses WordNet as its
image tagging glossary but considers the Suggested Upper Merged Ontology as the
preferred upper labeling formalism. The retrieval is performed using node
distance metrics to establish semantic relatedness between a query and the
collaboratively weighted tags describing high-level image semantics, after
which the result is ranked according to the derived importance. We also
outline plans to improve WNtags into a collaborative Web-based
multimedia repository for research in human emotion and attention.
|
1304.0959 | Conditional Tables in practice | cs.DB | Due to the ever increasing importance of the internet, interoperability of
heterogeneous data sources is likewise of ever increasing importance.
Interoperability can be achieved e.g. through data integration and data
exchange. Common to both approaches is the need for the DBMS to be able to
store and query incomplete databases. In this report we present PossDB, a DBMS
capable of storing and querying incomplete databases. The system is a wrapper
over PostgreSQL, and the query language is an extension of a subset of standard
SQL. Our experimental results show that our system scales well, actually better
than comparable systems.
|
1304.1014 | A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale
SVM Training | cs.CV cs.AI cs.LG math.OC stat.ML | Recently, there has been a renewed interest in the machine learning community
for variants of a sparse greedy approximation procedure for concave
optimization known as the Frank-Wolfe (FW) method. In particular, this
procedure has been successfully applied to train large-scale instances of
non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has
yielded efficient algorithms as well as important theoretical results,
including convergence analysis of training algorithms and new characterizations
of model sparsity.
In this paper, we present and analyze a novel variant of the FW method based
on a new way to perform away steps, a classic strategy used to accelerate the
convergence of the basic FW procedure. Our formulation and analysis are focused
on a general concave maximization problem on the simplex. However, the
specialization of our algorithm to quadratic forms is strongly related to some
classic methods in computational geometry, namely the Gilbert and MDM
algorithms.
On the theoretical side, we demonstrate that the method matches the
guarantees in terms of convergence rate and number of iterations obtained by
using classic away steps. In particular, the method enjoys a linear rate of
convergence, a result that has been recently proved for MDM on quadratic forms.
On the practical side, we provide experiments on several classification
datasets, and evaluate the results using statistical tests. Experiments show
that our method is faster than the FW method with classic away steps, and works
well even in the cases in which classic away steps slow down the algorithm.
Furthermore, these improvements are obtained without sacrificing the predictive
accuracy of the obtained SVM model.
|
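As an aside to the abstract above: the basic Frank-Wolfe step it builds on (without the away steps that are the paper's contribution) can be sketched in a few lines. This is a generic textbook illustration, not the paper's algorithm; the toy objective and the diminishing step-size rule are standard choices, not taken from the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    """Plain Frank-Wolfe for maximizing a concave function over the
    probability simplex. The linear subproblem max_s <grad(x), s> over
    the simplex is solved by a single vertex: the argmax coordinate."""
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmax(g)] = 1.0        # best vertex of the simplex
        gamma = 2.0 / (t + 2.0)      # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy objective f(x) = -0.5 * ||x - c||^2; its maximizer over the simplex
# is x = c, since c already lies in the simplex.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: c - x, np.array([1.0, 0.0, 0.0]))
```

Every iterate stays a convex combination of simplex vertices, which is why feasibility never needs to be enforced explicitly; the away steps studied in the paper modify only the choice of direction, not this invariant.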
1304.1018 | Estimating Phoneme Class Conditional Probabilities from Raw Speech
Signal using Convolutional Neural Networks | cs.LG cs.CL cs.NE | In hybrid hidden Markov model/artificial neural networks (HMM/ANN) automatic
speech recognition (ASR) system, the phoneme class conditional probabilities
are estimated by first extracting acoustic features from the speech signal
based on prior knowledge such as speech perception and/or speech production
knowledge, and then modeling the acoustic features with an ANN. Recent
advances in machine learning techniques, more specifically in the fields of
image processing and text processing, have shown that such a divide-and-conquer
strategy (i.e., separating the feature extraction and modeling steps) may not be
necessary. Motivated by these studies, in the framework of convolutional
neural networks (CNNs), this paper investigates a novel approach, where the
input to the ANN is raw speech signal and the output is phoneme class
conditional probability estimates. On TIMIT phoneme recognition task, we study
different ANN architectures to show the benefit of CNNs and compare the
proposed approach against the conventional approach, where spectral-based MFCC
features are extracted and modeled by a multilayer perceptron. Our studies show that
the proposed approach can yield comparable or better phoneme recognition
performance when compared to the conventional approach. It indicates that CNNs
can learn features relevant for phoneme classification automatically from the
raw speech signal.
|
1304.1020 | Spatial Resources Optimization in Distributed MIMO Networks with Limited
Data Sharing | cs.IT math.IT | Wireless access through a large distributed network of low-complexity
infrastructure nodes empowered with cooperation and coordination capabilities
is an emerging radio architecture, a candidate to deal with the mobile data
capacity crunch. In the 3GPP evolutionary path, this is known as the Cloud-RAN
paradigm for future radio. In such a complex network, distributed MIMO
resources optimization is of paramount importance, in order to achieve capacity
scaling. In this paper, we investigate efficient strategies towards optimizing
the pairing of access nodes with users as well as linear precoding designs for
providing fair QoS experience across the whole network, when data sharing is
limited due to complexity and overhead constraints. We propose a method for
obtaining the exact optimal spatial resource allocation solution, which can be
applied in networks of limited scale, as well as an approximation algorithm
with bounded polynomial complexity which can be used in larger networks. The
particular algorithm outperforms existing user-oriented clustering techniques
and achieves quite high quality-of-service levels with reasonable complexity.
|
1304.1022 | A software for aging faces applied to ancient marble busts | cs.CV | The study and development of software able to show the effect of aging of
faces is one of the tasks of face recognition technologies. Some software
solutions are used for investigations, others to show the effects of drugs on
healthy appearance; still other applications can be proposed for the analysis
of visual arts. Here we use a freely available software package, which provides
interesting results, for the comparison of ancient marble busts. An analysis of
Augustus busts is proposed.
|
1304.1039 | Environmental structure and competitive scoring advantages in team
competitions | physics.soc-ph cs.SI physics.data-an stat.AP | In most professional sports, the structure of the environment is kept neutral
so that scoring imbalances may be attributed to differences in team skill. It
thus remains unknown what impact structural heterogeneities can have on scoring
dynamics and producing competitive advantages. Applying a generative model of
scoring dynamics to roughly 10 million team competitions drawn from an online
game, we quantify the relationship between a competition's structure and its
scoring dynamics. Despite wide structural variations, we find the same
three-phase pattern in the tempo of events observed in many sports. Tempo and
balance are highly predictable from a competition's structural features alone
and teams exploit environmental heterogeneities for sustained competitive
advantage. The most balanced competitions are associated with specific
environmental heterogeneities, not with equally skilled teams. These results
shed new light on the principles of balanced competition, and illustrate the
potential of online game data for investigating social dynamics and
competition.
|
1304.1066 | An Improved LR-aided K-Best Algorithm for MIMO Detection | cs.IT cs.DS math.IT math.OC | Recently, the lattice reduction (LR) technique has attracted great attention for
multiple-input multiple-output (MIMO) receivers because of its low complexity
and high performance. However, when the number of antennas is large, LR-aided
linear detectors and successive interference cancellation (SIC) detectors still
exhibit a considerable performance gap to the optimal maximum likelihood detector
(MLD). To enhance the performance of the LR-aided detectors, the LR-aided
K-best algorithm was developed at the cost of the extra complexity on the order
$\mathcal{O}(N_t^2 K + N_t K^2)$, where $N_t$ is the number of transmit
antennas and $K$ is the number of candidates. In this paper, we develop an
LR-aided K-best algorithm with lower complexity by exploiting a priority queue.
With the aid of the priority queue, our analysis shows that the complexity of
the LR-aided K-best algorithm can be further reduced to $\mathcal{O}(N_t^2 K +
N_t K \log_2 K)$. The low complexity of the proposed LR-aided K-best
algorithm allows us to perform the algorithm for large MIMO systems (e.g.,
50x50 MIMO systems) with large candidate sizes. Simulations show that as the
number of antennas increases, the error performance approaches that of the AWGN
channel.
|
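The complexity saving claimed in the abstract above comes from replacing a full generate-and-sort pass over candidate children with a bounded priority queue. The sketch below shows only that generic bounded-heap idea; the function names and the expansion model are illustrative, not the paper's exact enumeration scheme.

```python
import heapq

def k_best(parents, expand, k):
    """Keep only the k lowest-metric children seen so far in a size-k
    max-heap, avoiding the cost of generating and sorting all
    parent-child candidates (roughly O(K^2) -> O(K log K) per layer)."""
    heap = []  # stores (-metric, child): a max-heap on the metric
    for p in parents:
        for metric, child in expand(p):
            if len(heap) < k:
                heapq.heappush(heap, (-metric, child))
            elif metric < -heap[0][0]:          # better than current worst
                heapq.heapreplace(heap, (-metric, child))
    return sorted((-m, c) for m, c in heap)     # ascending by metric

# Illustrative expansion: parent p spawns children with metrics p + j/10.
survivors = k_best(range(3), lambda p: [(p + j / 10, (p, j)) for j in range(4)], 3)
```

Because the heap never holds more than k entries, each candidate costs at most O(log k) to consider, which is where the quoted `K log2(K)` term comes from.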
1304.1081 | Exploiting Functional Dependencies in Qualitative Probabilistic
Reasoning | cs.AI | Functional dependencies restrict the potential interactions among variables
connected in a probabilistic network. This restriction can be exploited in
qualitative probabilistic reasoning by introducing deterministic variables and
modifying the inference rules to produce stronger conclusions in the presence
of functional relations. I describe how to accomplish these modifications in
qualitative probabilistic networks by exhibiting the update procedures for
graphical transformations involving probabilistic and deterministic variables
and combinations. A simple example demonstrates that the augmented scheme can
reduce qualitative ambiguity that would arise without the special treatment of
functional dependency. Analysis of qualitative synergy reveals that new
higher-order relations are required to reason effectively about synergistic
interactions among deterministic variables.
|
1304.1082 | Qualitative Propagation and Scenario-based Explanation of Probabilistic
Reasoning | cs.AI | Comprehensible explanations of probabilistic reasoning are a prerequisite for
wider acceptance of Bayesian methods in expert systems and decision support
systems. A study of human reasoning under uncertainty suggests two different
strategies for explaining probabilistic reasoning: The first, qualitative
belief propagation, traces the qualitative effect of evidence through a belief
network from one variable to the next. This propagation algorithm is an
alternative to the graph reduction algorithms of Wellman (1988) for inference
in qualitative probabilistic networks. It is based on a qualitative analysis of
intercausal reasoning, which is a generalization of Pearl's "explaining away",
and an alternative to Wellman's definition of qualitative synergy. The second,
scenario-based reasoning, involves the generation of alternative causal
"stories" accounting for the evidence. Comparing a few of the most probable
scenarios provides an approximate way to explain the results of probabilistic
reasoning. Both schemes employ causal as well as probabilistic knowledge.
Probabilities may be presented as phrases and/or numbers. Users can control the
style, abstraction and completeness of explanations.
|
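Qualitative belief propagation of the kind described above works over the sign algebra {+, -, 0, ?}. Below is a minimal sketch of its two combination operators: the sign product for chaining an influence along an edge, and the sign sum for merging influences arriving over parallel paths. The string encoding is an illustrative assumption, not notation from the paper.

```python
def sign_mult(a, b):
    """Qualitative sign product: the net sign of an influence chained
    through an edge (e.g. '+' through '-' yields '-')."""
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

def sign_add(a, b):
    """Qualitative sign sum: combining influences from parallel paths;
    conflicting signs yield the ambiguous sign '?'."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'
```

The '?' outcome of `sign_add` is exactly the qualitative ambiguity that the functional-dependency machinery of the companion paper is designed to reduce.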
1304.1083 | Managing Uncertainty in Rule Based Cognitive Models | cs.AI | An experiment replicated and extended recent findings on psychologically
realistic ways of modeling propagation of uncertainty in rule based reasoning.
Within a single production rule, the antecedent evidence can be summarized by
taking the maximum of disjunctively connected antecedents and the minimum of
conjunctively connected antecedents. The maximum certainty factor attached to
each of the rule's conclusions can be scaled down by multiplication with this
summarized antecedent certainty. Heckerman's modified certainty factor
technique can be used to combine certainties for common conclusions across
production rules.
|
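The antecedent-summarization rule stated in the abstract above (maximum over disjuncts, minimum over conjuncts, then scale the rule's certainty factor) is easy to make concrete. The sketch below is a hypothetical illustration: it uses the classic MYCIN-style parallel combination for two positive certainty factors rather than Heckerman's modified technique, which is more involved.

```python
def antecedent_certainty(conjuncts=(), disjuncts=()):
    """Summarize antecedent evidence: the minimum over conjunctively
    connected antecedents and the maximum over disjunctively connected
    ones, themselves combined conjunctively."""
    parts = []
    if conjuncts:
        parts.append(min(conjuncts))
    if disjuncts:
        parts.append(max(disjuncts))
    return min(parts)

def conclusion_cf(rule_cf, antecedent_cert):
    # The rule's maximum certainty factor is scaled down by the
    # summarized antecedent certainty.
    return rule_cf * antecedent_cert

def combine_positive(cf1, cf2):
    # Classic MYCIN parallel combination of two positive CFs that
    # support the same conclusion via different rules.
    return cf1 + cf2 * (1 - cf1)
```

For example, a rule with CF 0.8 whose summarized antecedent certainty is 0.6 contributes 0.48 to its conclusion, and two independent rules each contributing 0.5 combine to 0.75.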
1304.1084 | Context-Dependent Similarity | cs.AI | Attribute weighting and differential weighting, two major mechanisms for
computing context-dependent similarity or dissimilarity measures are studied
and compared. A dissimilarity measure based on subset size in the context is
proposed and its metrization and application are given. It is also shown that
while all attribute weighting dissimilarity measures are metrics, differential
weighting dissimilarity measures are usually non-metric.
|
1304.1085 | Similarity Networks for the Construction of Multiple-Faults Belief
Networks | cs.AI | A similarity network is a tool for constructing belief networks for the
diagnosis of a single fault. In this paper, we examine modifications to the
similarity-network representation that facilitate the construction of belief
networks for the diagnosis of multiple coexisting faults.
|
1304.1086 | Integrating Probabilistic, Taxonomic and Causal Knowledge in Abductive
Diagnosis | cs.AI | We propose an abductive diagnosis theory that integrates probabilistic,
causal and taxonomic knowledge. Probabilistic knowledge allows us to select the
most likely explanation; causal knowledge allows us to make reasonable
independence assumptions; taxonomic knowledge allows causation to be modeled at
different levels of detail, and allows observations to be described at
different levels of precision. Unlike most other approaches, where a causal explanation is
a hypothesis that one or more causative events occurred, we define an
explanation of a set of observations to be an occurrence of a chain of
causation events. These causation events constitute a scenario where all the
observations are true. We show that the probabilities of the scenarios can be
computed from the conditional probabilities of the causation events. Abductive
reasoning is inherently complex even if only modest expressive power is
allowed. However, our abduction algorithm is exponential only in the number of
observations to be explained, and is polynomial in the size of the knowledge
base. This contrasts with many other abduction procedures that are exponential
in the size of the knowledge base.
|
1304.1087 | What is an Optimal Diagnosis? | cs.AI | Within diagnostic reasoning there have been a number of proposed definitions
of a diagnosis, and thus of the most likely diagnosis, including most probable
posterior hypothesis, most probable interpretation, most probable covering
hypothesis, etc. Most of these approaches assume that the most likely diagnosis
must be computed, and that a definition of what should be computed can be made
a priori, independent of what the diagnosis is used for. We argue that the
diagnostic problem, as currently posed, is incomplete: it does not consider how
the diagnosis is to be used, or the utility associated with the treatment of
the abnormalities. In this paper we analyze several well-known definitions of
diagnosis, showing that the different definitions of the most likely diagnosis
have different qualitative meanings, even given the same input data. We argue
that the most appropriate definition of (optimal) diagnosis needs to take into
account the utility of outcomes and what the diagnosis is used for.
|
1304.1088 | Kutato: An Entropy-Driven System for Construction of Probabilistic
Expert Systems from Databases | cs.AI | Kutato is a system that takes as input a database of cases and produces a
belief network that captures many of the dependence relations represented by
those data. This system incorporates a module for determining the entropy of a
belief network and a module for constructing belief networks based on entropy
calculations. Kutato constructs an initial belief network in which all
variables in the database are assumed to be marginally independent. The entropy
of this belief network is calculated, and the arc that minimizes the entropy
of the resulting belief network is added. Conditional probabilities for an arc
are obtained directly from the database. This process continues until an
entropy-based threshold is reached. We have tested the system by generating
databases from networks using the probabilistic logic-sampling method, and then
using those databases as input to Kutato. The system consistently reproduces
the original belief networks with high fidelity.
|
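The greedy loop described in the Kutato abstract above — score the network by entropy, add the single arc that lowers it most, repeat until a threshold — can be sketched from empirical conditional entropies. This is a toy reconstruction under stated assumptions: it only guards against two-node cycles, uses a fixed gain threshold, and is not Kutato's actual search or stopping criterion.

```python
import math
from collections import Counter
from itertools import product

def cond_entropy(data, child, parents):
    """Empirical conditional entropy H(child | parents), in bits, from a
    list of dicts mapping variable name -> value."""
    n = len(data)
    joint = Counter(tuple(row[v] for v in parents + [child]) for row in data)
    par = Counter(tuple(row[v] for v in parents) for row in data)
    return -sum((c / n) * math.log2(c / par[combo[:-1]])
                for combo, c in joint.items())

def greedy_arcs(data, variables, threshold=0.01):
    """Start from the fully marginally independent network and repeatedly
    add the arc that most lowers the network entropy sum_v H(v | pa(v)),
    stopping when no arc helps by more than the threshold."""
    parents = {v: [] for v in variables}
    arcs = []
    while True:
        best = None
        for u, v in product(variables, repeat=2):
            if u == v or u in parents[v] or v in parents[u]:
                continue
            gain = (cond_entropy(data, v, parents[v])
                    - cond_entropy(data, v, parents[v] + [u]))
            if gain > threshold and (best is None or gain > best[0]):
                best = (gain, u, v)
        if best is None:
            return arcs
        _, u, v = best
        parents[v].append(u)
        arcs.append((u, v))

# Toy data: B is an exact copy of A, so the single arc A -> B removes all
# of B's one bit of entropy.
data = [{'A': 0, 'B': 0}] * 50 + [{'A': 1, 'B': 1}] * 50
learned = greedy_arcs(data, ['A', 'B'])
```

As in the abstract, the conditional probabilities needed to score an arc come directly from counts in the database.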
1304.1089 | Ideal Reformulation of Belief Networks | cs.AI | The intelligent reformulation or restructuring of a belief network can
greatly increase the efficiency of inference. However, time expended for
reformulation is not available for performing inference. Thus, under time
pressure, there is a tradeoff between the time dedicated to reformulating the
network and the time applied to the implementation of a solution. We
investigate this partition of resources into time applied to reformulation and
time used for inference. We shall describe first general principles for
computing the ideal partition of resources under uncertainty. These principles
have applicability to a wide variety of problems that can be divided into
interdependent phases of problem solving. Afterward, we shall present results of
our empirical study of the problem of determining the ideal amount of time to
devote to searching for clusters in belief networks. In this work, we acquired
and made use of probability distributions that characterize (1) the performance
of alternative heuristic search methods for reformulating a network instance
into a set of cliques, and (2) the time for executing inference procedures on
various belief networks. Given a preference model describing the value of a
solution as a function of the delay required for its computation, the system
selects an ideal time to devote to reformulation.
|
1304.1090 | Computationally-Optimal Real-Resource Strategies | cs.AI | This paper focuses on managing the cost of deliberation before action. In
many problems, the overall quality of the solution reflects costs incurred and
resources consumed in deliberation as well as the cost and benefit of
execution, when both the resource consumption in deliberation phase, and the
costs in deliberation and execution are uncertain and may be described by
probability distribution functions. A feasible (in terms of resource
consumption) strategy that minimizes the expected total cost is termed
computationally-optimal. For a situation with several independent,
uninterruptible methods to solve the problem, we develop a
pseudopolynomial-time algorithm to construct a generate-and-test
computationally-optimal strategy. We show this strategy-construction problem to be NP-complete,
and apply Bellman's Optimality Principle to solve it efficiently.
|
1304.1091 | Problem Formulation as the Reduction of a Decision Model | cs.AI | In this paper, we extend the QMR-DT probabilistic model for the domain of
internal medicine to include decisions about treatments. In addition, we
describe how we can use the comprehensive decision model to construct a simpler
decision model for a specific patient. In so doing, we transform the task of
problem formulation to that of narrowing of a larger problem.
|
1304.1092 | Dynamic Construction of Belief Networks | cs.AI | We describe a method for incrementally constructing belief networks. We have
developed a network-construction language similar to a forward-chaining
language using data dependencies, but with additional features for specifying
distributions. Using this language, we can define parameterized classes of
probabilistic models. These parameterized models make it possible to apply
probabilistic reasoning to problems for which it is impractical to have a
single large static model.
|
1304.1093 | A New Algorithm for Finding MAP Assignments to Belief Networks | cs.AI | We present a new algorithm for finding maximum a-posteriori (MAP) assignments
of values to belief networks. The belief network is compiled into a network
consisting only of nodes with boolean (i.e. only 0 or 1) conditional
probabilities. The MAP assignment is then found using a best-first search on
the resulting network. We argue that, as one would anticipate, the algorithm is
exponential for the general case, but only linear in the size of the network
for polytrees.
|
1304.1094 | Reducing Uncertainty in Navigation and Exploration | cs.AI | A significant problem in designing mobile robot control systems involves
coping with the uncertainty that arises in moving about in an unknown or
partially unknown environment and relying on noisy or ambiguous sensor data to
acquire knowledge about that environment. We describe a control system that
chooses what activity to engage in next on the basis of expectations about how
the information returned as a result of a given activity will improve its
knowledge about the spatial layout of its environment. Certain of the
higher-level components of the control system are specified in terms of
probabilistic decision models whose output is used to mediate the behavior of
lower-level control components responsible for movement and sensing.
|
1304.1095 | Ergo: A Graphical Environment for Constructing Bayesian | cs.AI | We describe an environment that considerably simplifies the process of
generating Bayesian belief networks. The system has been implemented on readily
available, inexpensive hardware, and provides clarity and high performance. We
present an introduction to Bayesian belief networks, discuss algorithms for
inference with these networks, and delineate the classes of problems that can
be solved with this paradigm. We then describe the hardware and software that
constitute the system, and illustrate Ergo's use with several examples.
|
1304.1096 | Decision Making with Interval Influence Diagrams | cs.AI | In previous work (Fertig and Breese, 1989; Fertig and Breese, 1990) we
defined a mechanism for performing probabilistic reasoning in influence
diagrams using interval rather than point-valued probabilities. In this paper
we extend these procedures to incorporate decision nodes and interval-valued
value functions in the diagram. We derive the procedures for chance node
removal (calculating expected value) and decision node removal (optimization)
in influence diagrams where lower bounds on probabilities are stored at each
chance node and interval bounds are stored on the value function associated
with the diagram's value node. The output of the algorithm is a set of
admissible alternatives for each decision variable and a set of bounds on
expected value based on the imprecision in the input. The procedure can be
viewed as an approximation to a full n-dimensional sensitivity analysis, where
n is the number of imprecise probability distributions in the input. We show the
transformations are optimal and sound. The performance of the algorithm on an
influence diagram is investigated and compared to an exact algorithm.
|
1304.1097 | A Randomized Approximation Algorithm of Logic Sampling | cs.AI | In recent years, researchers in decision analysis and artificial intelligence
(AI) have used Bayesian belief networks to build models of expert opinion.
Using standard methods drawn from the theory of computational complexity,
workers in the field have shown that the problem of exact probabilistic
inference on belief networks almost certainly requires exponential computation
in the worst case [3]. We have previously described a randomized approximation
scheme, called BN-RAS, for computation on belief networks [1, 2, 4]. We gave
precise analytic bounds on the convergence of BN-RAS and showed how to trade
running time for accuracy in the evaluation of posterior marginal
probabilities. We now extend our previous results and demonstrate the
generality of our framework by applying similar mathematical techniques to the
analysis of convergence for logic sampling [7], an alternative simulation
algorithm for probabilistic inference.
|
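Logic sampling, the algorithm analyzed in the abstract above, forward-samples the network in topological order and rejects every sample that contradicts the evidence. A minimal sketch on a hypothetical two-node network A -> B (the probabilities are illustrative, not from the paper):

```python
import random

def logic_sampling(n, seed=0):
    """Estimate P(A=true | B=true) on the net A -> B with P(A)=0.3,
    P(B|A)=0.9, P(B|~A)=0.2, by forward sampling and rejecting every
    sample whose B disagrees with the evidence."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        a = rng.random() < 0.3
        b = rng.random() < (0.9 if a else 0.2)
        if b:               # keep only samples consistent with evidence B=true
            kept += 1
            hits += a
    return hits / kept

# Exact posterior by Bayes: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 27/41
estimate = logic_sampling(200_000)
```

The rejection step is the scheme's weakness: when the evidence is unlikely, most samples are discarded, which is why convergence analyses of the kind in the paper matter.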
1304.1098 | Occupancy Grids: A Stochastic Spatial Representation for Active Robot
Perception | cs.RO cs.AI | In this paper we provide an overview of a new framework for robot perception,
real-world modelling, and navigation that uses a stochastic tessellated
representation of spatial information called the Occupancy Grid. The Occupancy
Grid is a multi-dimensional random field model that maintains probabilistic
estimates of the occupancy state of each cell in a spatial lattice. Bayesian
estimation mechanisms employing stochastic sensor models allow incremental
updating of the Occupancy Grid using multi-view, multi-sensor data, composition
of multiple maps, decision-making, and incorporation of robot and sensor
position uncertainty. We present the underlying stochastic formulation of the
Occupancy Grid framework, and discuss its application to a variety of robotic
tasks. These include range-based mapping, multi-sensor integration,
path-planning and obstacle avoidance, handling of robot position uncertainty,
incorporation of pre-compiled maps, recovery of geometric representations, and
other related problems. The experimental results show that the Occupancy Grid
approach generates dense world models, is robust under sensor uncertainty and
errors, and allows explicit handling of uncertainty. It supports the
development of robust and agile sensor interpretation methods, incremental
discovery procedures, and composition of information from multiple sources.
Furthermore, the results illustrate that robotic tasks can be addressed through
operations performed directly on the Occupancy Grid, and that these
operations have strong parallels to operations performed in the image
processing domain.
|
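The incremental Bayesian cell update at the core of the Occupancy Grid framework is commonly written in log-odds form, which turns the fusion of independent sensor readings into a simple addition. A minimal sketch under the usual uniform-prior and sensor-independence assumptions (the 0.8 reading is illustrative):

```python
import math

def update(log_odds, p_occ_given_reading):
    """Fuse one sensor reading into a cell's occupancy estimate.
    With a uniform 0.5 prior, the Bayesian update is an addition
    in log-odds space."""
    return log_odds + math.log(p_occ_given_reading / (1 - p_occ_given_reading))

def occupancy_prob(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1 - 1 / (1 + math.exp(log_odds))

# Two independent readings, each suggesting occupancy with probability 0.8,
# push the cell from the 0.5 prior to 16/17.
cell = update(update(0.0, 0.8), 0.8)
```

The additive form is what makes the multi-view, multi-sensor composition described in the abstract cheap: each cell accumulates evidence independently of the rest of the lattice.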
1304.1099 | Time, Chance, and Action | cs.AI | To operate intelligently in the world, an agent must reason about its
actions. The consequences of an action are a function of both the state of the
world and the action itself. Many aspects of the world are inherently
stochastic, so a representation for reasoning about actions must be able to
express chances of world states as well as indeterminacy in the effects of
actions and other events. This paper presents a propositional temporal
probability logic for representing and reasoning about actions. The logic can
represent the probability that facts hold and events occur at various times. It
can represent the probability that actions and other events affect the future.
It can represent concurrent actions and conditions that hold or change during
execution of an action. The model of probability relates probabilities over
time. The logical language integrates both modal and probabilistic constructs
and can thus represent and distinguish between possibility, probability, and
truth. Several examples illustrating the use of the logic are given.
|
1304.1100 | A Dynamic Approach to Probabilistic Inference | cs.AI | In this paper we present a framework for dynamically constructing Bayesian
networks. We introduce the notion of a background knowledge base of schemata,
which is a collection of parameterized conditional probability statements.
These schemata explicitly separate the general knowledge of properties an
individual may have from the specific knowledge of particular individuals that
may have these properties. Knowledge of individuals can be combined with this
background knowledge to create Bayesian networks, which can then be used in any
propagation scheme. We discuss the theory and assumptions necessary for the
implementation of dynamic Bayesian networks, and indicate where our approach
may be useful.
|
1304.1101 | Approximations in Bayesian Belief Universe for Knowledge Based Systems | cs.AI | When expert systems based on causal probabilistic networks (CPNs) reach a
certain size and complexity, the "combinatorial explosion monster" tends to be
present. We propose an approximation scheme that identifies rarely occurring
cases and excludes these from being processed as ordinary cases in a CPN-based
expert system. Depending on the topology and the probability distributions of
the CPN, the numbers (representing probabilities of state combinations) in the
underlying numerical representation can become very small. Annihilating these
numbers and utilizing the resulting sparseness through data structuring
techniques often results in several orders of magnitude of improvement in the
consumption of computer resources. Bounds on the errors introduced into a
CPN-based expert system through approximations are established. Finally,
reports on empirical studies of applying the approximation scheme to a
real-world CPN are given.
|
1304.1102 | Robust Inference Policies | cs.AI | A series of Monte Carlo studies was performed to assess the extent to which
different inference procedures robustly output reasonable belief values in the
context of increasing levels of judgmental imprecision. It was found that, when
compared to an equal-weights linear model, the Bayesian procedures are more
likely to deduce strong support for a hypothesis. But, the Bayesian procedures
are also more likely to strongly support the wrong hypothesis. Bayesian
techniques are more powerful, but are also more error prone.
|
1304.1103 | Minimum Error Tree Decomposition | cs.AI | This paper describes a generalization of previous methods for constructing
tree-structured belief networks with hidden variables. The major new feature of
the described method is the ability to produce a tree decomposition even when
there are errors in the correlation data among the input variables. This is an
important extension of existing methods since the correlation coefficients
usually cannot be measured with precision. The technique involves using a
greedy search algorithm that locally minimizes an error function.
|
1304.1104 | A Polynomial Time Algorithm for Finding Bayesian Probabilities from
Marginal Constraints | cs.AI | A method of calculating probability values from a system of marginal
constraints is presented. Previous systems for finding the probability of a
single attribute have either made an independence assumption concerning the
evidence or have required, in the worst case, time exponential in the number of
attributes of the system. In this paper a closed form solution to the
probability of an attribute given the evidence is found. The closed form
solution, however does not enforce the (non-linear) constraint that all terms
in the underlying distribution be positive. The equation requires O(r^3) steps
to evaluate, where r is the number of independent marginal constraints
describing the system at the time of evaluation. Furthermore, a marginal
constraint may be exchanged with a new constraint, and a new solution
calculated in O(r^2) steps. This method is appropriate for calculating
probabilities in a real-time expert system.
|
1304.1105 | Computation of Variances in Causal Networks | cs.AI | The causal (belief) network is a well-known graphical structure for
representing independencies in a joint probability distribution. The exact
methods and the approximation methods, which perform probabilistic inference in
causal networks, often treat the conditional probabilities which are stored in
the network as certain values. However, if one takes either a subjectivistic or
a limiting frequency approach to probability, one can never be certain of
probability values. An algorithm for probabilistic inference should not only be
capable of reporting the inferred probabilities; it should also be capable of
reporting the uncertainty in these probabilities relative to the uncertainty in
the probabilities which are stored in the network. In section 2 of this paper a
method is given for determining the prior variances of the probabilities of all
the nodes. Section 3 contains an approximation method for determining the
variances in inferred probabilities.
|
1304.1106 | A Sensitivity Analysis of Pathfinder | cs.AI | Knowledge elicitation is one of the major bottlenecks in expert system
design. Systems based on Bayes nets require two types of information--network
structure and parameters (or probabilities). Both must be elicited from the
domain expert. In general, parameters have greater opacity than structure, and
more time is spent in their refinement than in any other phase of elicitation.
Thus, it is important to determine the point of diminishing returns, beyond
which further refinements will promise little (if any) improvement. Sensitivity
analyses address precisely this issue--the sensitivity of a model to the
precision of its parameters. In this paper, we report the results of a
sensitivity analysis of Pathfinder, a Bayes net based system for diagnosing
pathologies of the lymph system. This analysis is intended to shed some light
on the relative importance of structure and parameters to system performance,
as well as the sensitivity of a system based on a Bayes net to noise in its
assessed parameters.
|
1304.1107 | IDEAL: A Software Package for Analysis of Influence Diagrams | cs.AI | IDEAL (Influence Diagram Evaluation and Analysis in Lisp) is a software
environment for creation and evaluation of belief networks and influence
diagrams. IDEAL is primarily a research tool and provides an implementation of
many of the latest developments in belief network and influence diagram
evaluation in a unified framework. This paper describes IDEAL and some lessons
learned during its development.
|
1304.1108 | On the Equivalence of Causal Models | cs.AI | Scientists often use directed acyclic graphs (dags) to model the qualitative
structure of causal theories, allowing the parameters to be estimated from
observational data. Two causal models are equivalent if there is no experiment
which could distinguish one from the other. A canonical representation for
causal models is presented which yields an efficient graphical criterion for
deciding equivalence, and provides a theoretical basis for extracting causal
structures from empirical data. This representation is then extended to the
more general case of an embedded causal model, that is, a dag in which only a
subset of the variables are observable. The canonical representation presented
here yields an efficient algorithm for determining when two embedded causal
models reflect the same dependency information. This algorithm leads to a model
theoretic definition of causation in terms of statistical dependencies.
|
1304.1109 | Application of Confidence Intervals to the Autonomous Acquisition of
High-level Spatial Knowledge | cs.AI | Objects in the world usually appear in context, participating in spatial
relationships and interactions that are predictable and expected. Knowledge of
these contexts can be used in the task of using a mobile camera to search for a
specified object in a room. We call this the object search task. This paper is
concerned with representing this knowledge in a manner facilitating its
application to object search while at the same time lending itself to
autonomous learning by a robot. The ability for the robot to learn such
knowledge without supervision is crucial due to the vast number of possible
relationships that can exist for any given set of objects. Moreover, since a
robot will not have an infinite amount of time to learn, it must be able to
determine an order in which to look for possible relationships so as to
maximize the rate at which new knowledge is gained. In effect, there must be a
"focus of interest" operator that allows the robot to choose which examples are
likely to convey the most new information and should be examined first. This
paper demonstrates how a representation based on statistical confidence
intervals allows the construction of a system that achieves the above goals. An
algorithm, based on the Highest Impact First heuristic, is presented as a means
for providing a "focus of interest" with which to control the learning process,
and examples are given.
|
1304.1110 | Directed Reduction Algorithms and Decomposable Graphs | cs.AI | In recent years, there have been intense research efforts to develop
efficient methods for probabilistic inference in probabilistic influence
diagrams or belief networks. Many people have concluded that the best methods
are those based on undirected graph structures, and that those methods are
inherently superior to those based on node reduction operations on the
influence diagram. We show here that these two approaches are essentially the
same, since they are explicitly or implicitly building and operating on the same
underlying graphical structures. In this paper we examine those graphical
structures and show how this insight can lead to an improved class of directed
reduction methods.
|
1304.1111 | Optimal Decomposition of Belief Networks | cs.AI | In this paper, optimum decomposition of belief networks is discussed. Some
methods of decomposition are examined and a new method - the method of Minimum
Total Number of States (MTNS) - is proposed. The problem of optimum belief
network decomposition under our framework, as under all the other frameworks,
is shown to be NP-hard. According to the computational complexity analysis, an
algorithm of belief network decomposition is proposed in (Wee, 1990a) based on
simulated annealing.
|
1304.1112 | Pruning Bayesian Networks for Efficient Computation | cs.AI | This paper analyzes the circumstances under which Bayesian networks can be
pruned in order to reduce computational complexity without altering the
computation for variables of interest. Given a problem instance which consists
of a query and evidence for a set of nodes in the network, it is possible to
delete portions of the network which do not participate in the computation for
the query. Savings in computational complexity can be large when the original
network is not singly connected. Results analogous to those described in this
paper have been derived before [Geiger, Verma, and Pearl 89, Shachter 88] but
the implications for reducing complexity of the computations in Bayesian
networks have not been stated explicitly. We show how a preprocessing step can
be used to prune a Bayesian network prior to using standard algorithms to solve
a given problem instance. We also show how our results can be used in a
parallel distributed implementation in order to achieve greater savings. We
define a computationally equivalent subgraph of a Bayesian network. The
algorithm developed in [Geiger, Verma, and Pearl 89] is modified to construct
the subgraphs described in this paper with O(e) complexity, where e is the
number of edges in the Bayesian network. Finally, we define a minimal
computationally equivalent subgraph and prove that the subgraphs described are
minimal.
|
1304.1113 | On Heuristics for Finding Loop Cutsets in Multiply-Connected Belief
Networks | cs.AI | We introduce a new heuristic algorithm for the problem of finding minimum
size loop cutsets in multiply connected belief networks. We compare this
algorithm to that proposed in [Suermondt and Cooper, 1988]. We provide lower
bounds on the performance of these algorithms with respect to one another and
with respect to optimal. We demonstrate that no heuristic algorithm for this
problem can be guaranteed to produce loop cutsets within a constant difference
from optimal. We discuss experimental results based on randomly generated
networks, and discuss future work and open questions.
|
1304.1114 | A Combination of Cutset Conditioning with Clique-Tree Propagation in the
Pathfinder System | cs.AI | Cutset conditioning and clique-tree propagation are two popular methods for
performing exact probabilistic inference in Bayesian belief networks. Cutset
conditioning is based on decomposition of a subset of network nodes, whereas
clique-tree propagation depends on aggregation of nodes. We describe a means to
combine cutset conditioning and clique-tree propagation in an approach called
aggregation after decomposition (AD). We discuss the application of the AD
method in the Pathfinder system, a medical expert system that offers assistance
with diagnosis in hematopathology.
|
1304.1115 | Possibility as Similarity: the Semantics of Fuzzy Logic | cs.AI | This paper addresses fundamental issues on the nature of the concepts and
structures of fuzzy logic, focusing, in particular, on the conceptual and
functional differences that exist between probabilistic and possibilistic
approaches. A semantic model provides the basic framework to define
possibilistic structures and concepts by means of a function that quantifies
proximity, closeness, or resemblance between pairs of possible worlds. The
resulting model is a natural extension, based on multiple conceivability
relations, of the modal logic concepts of necessity and possibility. By
contrast, chance-oriented probabilistic concepts and structures rely on
measures of set extension that quantify the proportion of possible worlds where
a proposition is true. Resemblance between possible worlds is quantified by a
generalized similarity relation: a function that assigns a number between 0 and
1 to every pair of possible worlds. Using this similarity relation, which is a
form of numerical complement of a classic metric or distance, it is possible to
define and interpret the major constructs and methods of fuzzy logic:
conditional and unconditioned possibility and necessity distributions and the
generalized modus ponens of Zadeh.
|
1304.1116 | Integrating Case-Based and Rule-Based Reasoning: the Possibilistic
Connection | cs.AI | Rule based reasoning (RBR) and case based reasoning (CBR) have emerged as two
important and complementary reasoning methodologies in artificial intelligence
(AI). For problem solving in complex, real world situations, it is useful to
integrate RBR and CBR. This paper presents an approach to achieve a compact and
seamless integration of RBR and CBR within the base architecture of rules. The
paper focuses on the possibilistic nature of the approximate reasoning
methodology common to both CBR and RBR. In CBR, the concept of similarity is
cast as the complement of the distance between cases. In RBR the transitivity
of similarity is the basis for the approximate deductions based on the
generalized modus ponens. It is shown that the integration of CBR and RBR is
possible without altering the inference engine of RBR. This integration is
illustrated in the financial domain of mergers and acquisitions. These ideas
have been implemented in a prototype system called MARS.
|
1304.1117 | Credibility Discounting in the Theory of Approximate Reasoning | cs.AI | We are concerned with the problem of introducing credibility type information
into reasoning systems. The concept of credibility allows us to discount
information provided by agents. An important characteristic of this kind of
procedure is that a complete lack of credibility, rather than resulting in the
negation of the information provided, results in its nullification. We suggest a
representational scheme for credibility
qualification in the theory of approximate reasoning. We discuss the concept of
relative credibility. By this idea we mean to indicate situations in which the
credibility of a piece of evidence is determined by its compatibility with
higher priority evidence. This situation leads to structures very much in the
spirit of nonmonotonic reasoning.
|
1304.1118 | Updating with Belief Functions, Ordinal Conditioning Functions and
Possibility Measures | cs.AI | This paper discusses how a measure of uncertainty representing a state of
knowledge can be updated when new information, which may be pervaded with
uncertainty, becomes available. This problem is considered in various
frameworks, namely: Shafer's evidence theory, Zadeh's possibility theory, and
Spohn's theory of epistemic states. In the first two cases, analogues of
Jeffrey's rule of conditioning are introduced and discussed. The relations
between Spohn's model and possibility theory are emphasized and Spohn's
updating rule is contrasted with the Jeffrey-like rule of conditioning in
possibility theory. Recent results by Shenoy on the combination of ordinal
conditional functions are reinterpreted in the language of possibility theory.
It is shown that Shenoy's combination rule has a well-known possibilistic
counterpart.
|
1304.1119 | A New Approach to Updating Beliefs | cs.AI cs.LO | We define a new notion of conditional belief, which plays the same role for
Dempster-Shafer belief functions as conditional probability does for
probability functions. Our definition is different from the standard definition
given by Dempster, and avoids many of the well-known problems of that
definition. Just as the conditional probability Pr(·|B) is a probability
function which is the result of conditioning on B being true, so too our
conditional belief function Bel(·|B) is a belief function which is the result
of conditioning on B being true. We define the conditional belief as the lower
envelope (that is, the inf) of a family of conditional probability functions,
and provide a closed form expression for it. An alternate way of understanding
our definition of conditional belief is provided by considering ideas from an
earlier paper [Fagin and Halpern, 1989], where we connect belief functions with
inner measures. In particular, we show here how to extend the definition of
conditional probability to non-measurable sets, in order to get notions of
inner and outer conditional probabilities, which can be viewed as best
approximations to the true conditional probability, given our lack of
information. Our definition of conditional belief turns out to be an exact
analogue of our definition of inner conditional probability.
|
1304.1120 | The Transferable Belief Model and Other Interpretations of
Dempster-Shafer's Model | cs.AI | Dempster-Shafer's model aims at quantifying degrees of belief. But there are
so many interpretations of Dempster-Shafer's theory in the literature that it
seems useful to present the various contenders in order to clarify their
respective positions. We shall successively consider the classical probability
model, the upper and lower probabilities model, Dempster's model, the
transferable belief model, the evidentiary value model, the provability or
necessity model. None of these models has received the qualification of
Dempster-Shafer. In fact the transferable belief model is our interpretation
not of Dempster's work but of Shafer's work as presented in his book (Shafer
1976, Smets 1988). It is a 'purified' form of Dempster-Shafer's model in which
any connection with probability concept has been deleted. Any model for belief
has at least two components: one static that describes our state of belief, the
other dynamic that explains how to update our belief given new pieces of
information. We insist on the fact that both components must be considered in
order to study these models. Too many authors restrict themselves to the static
component and conclude that Dempster-Shafer theory is the same as some other
theory. But once the dynamic component is considered, these conclusions break
down. Any comparison based only on the static component is too restricted. The
dynamic component must also be considered as the originality of the models
based on belief functions lies in its dynamic component.
|
1304.1121 | Valuation-Based Systems for Discrete Optimization | cs.AI | This paper describes valuation-based systems for representing and solving
discrete optimization problems. In valuation-based systems, we represent
information in an optimization problem using variables, sample spaces of
variables, a set of values, and functions that map sample spaces of sets of
variables to the set of values. The functions, called valuations, represent the
factors of an objective function. Solving the optimization problem involves
using two operations called combination and marginalization. Combination tells
us how to combine the factors of the joint objective function. Marginalization
is either maximization or minimization. Solving an optimization problem can be
simply described as finding the marginal of the joint objective function for
the empty set. We state some simple axioms that combination and marginalization
need to satisfy to enable us to solve an optimization problem using local
computation. For optimization problems, the solution method of valuation-based
systems reduces to non-serial dynamic programming. Thus our solution method for
VBS can be regarded as an abstract description of dynamic programming. And our
axioms can be viewed as conditions that permit the use of dynamic programming.
|
1304.1122 | Computational Aspects of the Mobius Transform | cs.AI | In this paper we associate with every (directed) graph G a transformation
called the Mobius transformation of the graph G. The Mobius transformation of
the graph G is of major significance for Dempster-Shafer theory of evidence.
However, because it is computationally very heavy, the Mobius transformation
together with Dempster's rule of combination is a major obstacle to the use of
Dempster-Shafer theory for handling uncertainty in expert systems. The major
contribution of this paper is the discovery of the 'fast Mobius
transformations' of G. These 'fast Mobius transformations' are the fastest
algorithms for computing the Mobius transformation of G. As an easy but
useful application, we provide, via the commonality function, an algorithm for
computing Dempster's rule of combination which is much faster than the usual
one.
|
1304.1123 | Using Dempster-Shafer Theory in Knowledge Representation | cs.AI | In this paper, we suggest marrying Dempster-Shafer (DS) theory with Knowledge
Representation (KR). Born out of this marriage is the definition of
"Dempster-Shafer Belief Bases", abstract data types representing uncertain
knowledge that use DS theory for representing strength of belief about our
knowledge, and the linguistic structures of an arbitrary KR system for
representing the knowledge itself. A formal result guarantees that both the
properties of the given KR system and of DS theory are preserved. The general
model is exemplified by defining DS Belief Bases where First Order Logic and
(an extension of) KRYPTON are used as KR systems. The implementation problem is
also touched upon.
|
1304.1124 | A Hierarchical Approach to Designing Approximate Reasoning-Based
Controllers for Dynamic Physical Systems | cs.AI | This paper presents a new technique for the design of approximate reasoning
based controllers for dynamic physical systems with interacting goals. In this
approach, goals are achieved based on a hierarchy defined by a control
knowledge base and remain highly interactive during the execution of the
control task. The approach has been implemented in a rule-based computer
program which is used in conjunction with a prototype hardware system to solve
the cart-pole balancing problem in real-time. It provides a complementary
approach to the conventional analytical control methodology, and is of
substantial use where a precise mathematical model of the process being
controlled is not available.
|
1304.1125 | Evidence Combination and Reasoning and Its Application to Real-World
Problem-Solving | cs.AI | In this paper a new mathematical procedure is presented for combining
different pieces of evidence which are represented in the interval form to
reflect our knowledge about the truth of a hypothesis. Pieces of evidence may be
correlated with each other (dependent evidence) or conflicting in their support
(conflicting evidence). First, assuming independent evidence, we propose a
methodology to construct combination rules which obey a set of essential
properties. The method is based on a geometric model. We compare results
obtained from Dempster-Shafer's rule and the proposed combination rules with
both conflicting and non-conflicting data and show that the values generated by
proposed combining rules are in tune with our intuition in both cases.
Secondly, in the case that evidences are known to be dependent, we consider
extensions of the rules derived for handling conflicting evidence. The
performance of proposed rules are shown by different examples. The results show
that the proposed rules make reasonable decisions under dependent evidence.
|
1304.1126 | On Some Equivalence Relations between Incidence Calculus and
Dempster-Shafer Theory of Evidence | cs.AI | Incidence Calculus and Dempster-Shafer Theory of Evidence are both theories
to describe agents' degrees of belief in propositions, thus being appropriate
to represent uncertainty in reasoning systems. This paper presents a
straightforward equivalence proof between some special cases of these theories.
|
1304.1127 | Using Belief Functions for Uncertainty Management and Knowledge
Acquisition: An Expert Application | cs.AI | This paper describes recent work on an ongoing project in medical diagnosis
at the University of Guelph. A domain on which experts are not very good at
pinpointing a single disease outcome is explored. On-line medical data is
available over a relatively short period of time. Belief Functions
(Dempster-Shafer theory) are first extracted from data and then modified with
expert opinions. Several methods for doing this are compared and results show
that one formulation statistically outperforms the others, including a method
suggested by Shafer. Expert opinions and statistically derived information
about dependencies among symptoms are also compared. The benefits of using
uncertainty management techniques as methods for knowledge acquisition from
data are discussed.
|
1304.1128 | An Architecture for Probabilistic Concept-Based Information Retrieval | cs.AI | While concept-based methods for information retrieval can provide improved
performance over more conventional techniques, they require large amounts of
effort to acquire the concepts and their qualitative and quantitative
relationships. This paper discusses an architecture for probabilistic
concept-based information retrieval which addresses the knowledge acquisition
problem. The architecture makes use of the probabilistic networks technology
for representing and reasoning about concepts and includes a knowledge
acquisition component which partially automates the construction of concept
knowledge bases from data. We describe two experiments that apply the
architecture to the task of retrieving documents about terrorism from a set of
documents from the Reuters news service. The experiments provide positive
evidence that the architecture design is feasible and that there are advantages
to concept-based methods.
|
1304.1129 | Amplitude-Based Approach to Evidence Accumulation | cs.AI | We point out the need to use probability amplitudes rather than probabilities
to model evidence accumulation in decision processes involving real physical
sensors. Optical information processing systems are given as typical examples
of systems that naturally gather evidence in this manner. We derive a new,
amplitude-based generalization of the Hough transform technique used for object
recognition in machine vision. We argue that one should use complex Hough
accumulators and square their magnitudes to get a proper probabilistic
interpretation of the likelihood that an object is present. Finally, we suggest
that probability amplitudes may have natural applications in connectionist
models, as well as in formulating knowledge-based reasoning problems.
|
1304.1130 | A Probabilistic Reasoning Environment | cs.AI | A framework is presented for a computational theory of probabilistic
argument. The Probabilistic Reasoning Environment encodes knowledge at three
levels. At the deepest level are a set of schemata encoding the system's domain
knowledge. This knowledge is used to build a set of second-level arguments,
which are structured for efficient recapture of the knowledge used to construct
them. Finally, at the top level is a Bayesian network constructed from the
arguments. The system is designed to facilitate not just propagation of beliefs
and assimilation of evidence, but also the dynamic process of constructing a
belief network, evaluating its adequacy, and revising it when necessary.
|
1304.1131 | On Non-monotonic Conditional Reasoning | cs.AI | This note is concerned with a formal analysis of the problem of non-monotonic
reasoning in intelligent systems, especially when the uncertainty is taken into
account in a quantitative way. A firm connection between logic and probability
is established by introducing conditioning notions by means of formal
structures that do not rely on quantitative measures. The associated
conditional logic, compatible with conditional probability evaluations, is
non-monotonic relative to additional evidence. Computational aspects of
conditional probability logic are mentioned. The importance of this development
lies on its role to provide a conceptual basis for various forms of evidence
combination and on its significance to unify multi-valued and non-monotonic
logics.
|
1304.1132 | Decisions with Limited Observations over a Finite Product Space: the
Klir Effect | cs.AI | Probability estimation by maximum entropy reconstruction of an initial
relative frequency estimate from its projection onto a hypergraph model of the
approximate conditional independence relations exhibited by it is investigated.
The results of this study suggest that use of this estimation technique may
improve the quality of decisions that must be made on the basis of limited
observations over a decomposable finite product space.
|
1304.1133 | Fine-Grained Decision-Theoretic Search Control | cs.AI | Decision-theoretic control of search has previously used as its basic unit.
of computation the generation and evaluation of a complete set of successors.
Although this simplifies analysis, it results in some lost opportunities for
pruning and satisficing. This paper therefore extends the analysis of the value
of computation to cover individual successor evaluations. The analytic
techniques used may prove useful for control of reasoning in more general
settings. A formula is developed for the expected value of a node, k of whose n
successors have been evaluated. This formula is used to estimate the value of
expanding further successors, using a general formula for the value of a
computation in game-playing developed in earlier work. We exhibit an improved
version of the MGSS* algorithm, giving empirical results for the game of
Othello.
|
1304.1134 | Rules, Belief Functions and Default Logic | cs.AI | This paper describes a natural framework for rules, based on belief
functions, which includes a representation of numerical rules, default rules,
and rules that do and do not allow contraposition. In particular it
justifies the use of the Dempster-Shafer Theory for representing a particular
class of rules, the belief calculated being a lower probability given certain
independence assumptions on an underlying space. It shows how a belief function
framework can be generalised to other logics, including a general Monte-Carlo
algorithm for calculating belief, and how a version of Reiter's Default Logic
can be seen as a limiting case of a belief function formalism.
|
1304.1135 | Combination of Evidence Using the Principle of Minimum Information Gain | cs.AI | One of the most important aspects in any treatment of uncertain information
is the rule of combination for updating the degrees of uncertainty. The theory
of belief functions uses the Dempster rule to combine two belief functions
defined by independent bodies of evidence. However, with limited dependency
information about the accumulated belief the Dempster rule may lead to
unsatisfactory results. The present study suggests a method to determine the
accumulated belief based on the premise that the information gain from the
combination process should be minimum. This method provides a mechanism that is
equivalent to the Bayes rule when all the conditional probabilities are
available and to the Dempster rule when the normalization constant is equal to
one. The proposed principle of minimum information gain is shown to be
equivalent to the maximum entropy formalism, a special case of the principle of
minimum cross-entropy. The application of this principle results in a monotonic
increase in belief with accumulation of consistent evidence. The suggested
approach may provide a more reasonable criterion for identifying conflicts
among various bodies of evidence.
|
1304.1136 | Probabilistic Evaluation of Candidates and Symptom Clustering for
Multidisorder Diagnosis | cs.AI | This paper derives a formula for computing the conditional probability of a
set of candidates, where a candidate is a set of disorders that explain a given
set of positive findings. Such candidate sets are produced by a recent method
for multidisorder diagnosis called symptom clustering. A symptom clustering
represents a set of candidates compactly as a cartesian product of differential
diagnoses. By evaluating the probability of a candidate set, then, a large set
of candidates can be validated or pruned simultaneously. The probability of a
candidate set is then specialized to obtain the probability of a single
candidate. Unlike earlier results, the equation derived here allows the
specification of positive, negative, and unknown symptoms and does not make
assumptions about disorders not in the candidate.
|
1304.1137 | Extending Term Subsumption systems for Uncertainty Management | cs.AI | A major difficulty in developing and maintaining very large knowledge bases
originates from the variety of forms in which knowledge is made available to
the KB builder. The objective of this research is to bring together two
complementary knowledge representation schemes: term subsumption languages,
which represent and reason about defining characteristics of concepts, and
proximate reasoning models, which deal with uncertain knowledge and data in
expert systems. Previous works in this area have primarily focused on
probabilistic inheritance. In this paper, we address two other important issues
regarding the integration of term subsumption-based systems and approximate
reasoning models. First, we outline a general architecture that specifies the
interactions between the deductive reasoner of a term subsumption system and an
approximate reasoner. Second, we generalize the semantics of terminological
language so that terminological knowledge can be used to make plausible
inferences. The architecture, combined with the generalized semantics, forms
the foundation of a synergistic tight integration of term subsumption systems
and approximate reasoning models.
|
1304.1138 | Refinement and Coarsening of Bayesian Networks | cs.AI | In almost all situation assessment problems, it is useful to dynamically
contract and expand the states under consideration as assessment proceeds.
Contraction is most often used to combine similar events or low probability
events together in order to reduce computation. Expansion is most often used to
make distinctions of interest which have significant probability in order to
improve the quality of the assessment. Although other uncertainty calculi,
notably Dempster-Shafer [Shafer, 1976], have addressed these operations, there
has not yet been any approach of refining and coarsening state spaces for the
Bayesian Network technology. This paper presents two operations for refining
and coarsening the state space in Bayesian Networks. We also discuss their
practical implications for knowledge acquisition.
|
1304.1139 | Second Order Probabilities for Uncertain and Conflicting Evidence | cs.AI | In this paper the elicitation of probabilities from human experts is
considered as a measurement process, which may be disturbed by random
'measurement noise'. Using Bayesian concepts a second order probability
distribution is derived reflecting the uncertainty of the input probabilities.
The algorithm is based on an approximate sample representation of the basic
probabilities. This sample is continuously modified by a stochastic simulation
procedure, the Metropolis algorithm, such that the sequence of successive
samples corresponds to the desired posterior distribution. The procedure is
able to combine inconsistent probabilities according to their reliability and
is applicable to general inference networks with arbitrary structure.
Dempster-Shafer probability mass functions may be included using specific
measurement distributions. The properties of the approach are demonstrated by
numerical experiments.
|
1304.1140 | Computing Probability Intervals Under Independency Constraints | cs.AI | Many AI researchers argue that probability theory is only capable of dealing
with uncertainty in situations where a full specification of a joint
probability distribution is available, and conclude that it is not suitable for
application in knowledge-based systems. Probability intervals, however,
constitute a means for expressing incompleteness of information. We present a
method for computing such probability intervals for probabilities of interest
from a partial specification of a joint probability distribution. Our method
improves on earlier approaches by allowing for independency relationships
between statistical variables to be exploited.
|
1304.1141 | An Empirical Analysis of Likelihood-Weighting Simulation on a Large,
Multiply-Connected Belief Network | cs.AI | We analyzed the convergence properties of likelihood-weighting algorithms on
a two-level, multiply connected, belief-network representation of the QMR
knowledge base of internal medicine. Specifically, on two difficult diagnostic
cases, we examined the effects of Markov blanket scoring and importance
sampling, demonstrating that Markov blanket scoring and self-importance
sampling significantly improve the convergence of the simulation on our model.
|
1304.1142 | Towards a Normative Theory of Scientific Evidence | cs.AI | A scientific reasoning system makes decisions using objective evidence in the
form of independent experimental trials, propositional axioms, and constraints
on the probabilities of events. As a first step towards this goal, we propose a
system that derives probability intervals from objective evidence in those
forms. Our reasoning system can manage uncertainty about data and rules in a
rule based expert system. We expect that our system will be particularly
applicable to diagnosis and analysis in domains with a wealth of experimental
evidence such as medicine. We discuss limitations of this solution and propose
future directions for this research. This work can be considered a
generalization of Nilsson's "probabilistic logic" [Nil86] to intervals and
experimental observations.
|
1304.1143 | A Model for Non-Monotonic Reasoning Using Dempster's Rule | cs.AI | Considerable attention has been given to the problem of non-monotonic
reasoning in a belief function framework. Earlier work (M. Ginsberg) proposed
solutions introducing meta-rules which recognized conditional independencies in
a probabilistic sense. More recently, an ε-calculus formulation of default
reasoning (J. Pearl) shows that the application of Dempster's rule to a
non-monotonic situation produces erroneous results. This paper presents a new
belief function interpretation of the problem which combines the rules in a way
which is more compatible with probabilistic results and respects conditions of
independence necessary for the application of Dempster's combination rule. A
new general framework for combining conflicting evidence is also proposed in
which the normalization factor becomes modified. This produces more intuitively
acceptable results.
|
1304.1144 | Default Reasoning and the Transferable Belief Model | cs.AI | Inappropriate use of Dempster's rule of combination has led some authors to
reject the Dempster-Shafer model, arguing that it leads to supposedly
unacceptable conclusions when defaults are involved. A most classic example is
about the penguin Tweety. This paper will successively present: the origin of
the mismanagement of the Tweety example; two types of default; and the correct
solution for both types based on the transferable belief model (our
interpretation of the Dempster-Shafer model (Shafer 1976, Smets 1988)). Except
when explicitly stated, all belief functions used in this paper are simple
support functions, i.e. belief functions for which only one proposition (the
focus) of the frame of discernment receives a positive basic belief mass with
the remaining mass being given to the tautology. Each belief function will be
described by its focus and the weight of the focus (e.g. m(A)=.9). Computation
of the basic belief masses is always performed by vacuously extending each
belief function to the product space built from all variables involved,
combining them on that space by Dempster's rule of combination, and projecting
the result to the space corresponding to each individual variable.
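The combination scheme just described can be sketched as follows (a minimal illustration with hypothetical focal elements and masses, not code from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule over mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty intersection
    # Normalization: rescale the surviving masses by 1 - conflict.
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two simple support functions on the frame {fly, not_fly}: each puts
# mass 0.9 on its focus and the remaining 0.1 on the tautology.
frame = frozenset({"fly", "not_fly"})
m_bird = {frozenset({"fly"}): 0.9, frame: 0.1}
m_penguin = {frozenset({"not_fly"}): 0.9, frame: 0.1}

m = dempster_combine(m_bird, m_penguin)
# m[{fly}] = m[{not_fly}] = 0.09/0.19, m[frame] = 0.01/0.19
```

Note the large conflict here (0.81) is exactly the kind of situation where naive normalization is questioned in the non-monotonic-reasoning literature.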
|
1304.1145 | Separable and transitive graphoids | cs.AI | We examine three probabilistic formulations of the sentence "a and b are
totally unrelated with respect to a given set of variables U". First, two
variables a and b are totally independent if they are independent given any
value of any subset of the variables in U. Second, two variables are totally
uncoupled if U can be partitioned into two marginally independent sets
containing a and b respectively. Third, two variables are totally disconnected
if the corresponding nodes are disconnected in every belief network
representation. We explore the relationship between these three formulations of
unrelatedness and explain their relevance to the process of acquiring
probabilistic knowledge from human experts.
|
1304.1146 | Analysis in HUGIN of Data Conflict | cs.AI | After a brief introduction to causal probabilistic networks and the HUGIN
approach, the problem of conflicting data is discussed. A measure of conflict
is defined, and it is used in the medical diagnostic system MUNIN. Finally, it
is discussed how to distinguish between conflicting data and a rare case.
|