id | title | categories | abstract |
|---|---|---|---|
1304.2490 | Kernel Reconstruction ICA for Sparse Representation | cs.CV cs.LG | Independent Component Analysis (ICA) is an effective unsupervised tool for
learning statistically independent representations. However, ICA is not only
sensitive to whitening but also has difficulty learning an over-complete
basis. Consequently, ICA with a soft reconstruction cost (RICA) was proposed
to learn sparse representations with an over-complete basis, even on
unwhitened data. However, RICA cannot represent data with nonlinear structure
because of its intrinsic linearity. In addition, RICA is essentially an
unsupervised method and cannot exploit class information. In this paper, we
propose a kernel ICA model with a reconstruction constraint (kRICA) to capture
nonlinear features. To bring in class information, we further extend the
unsupervised kRICA to a supervised variant, d-kRICA, by introducing a
discrimination constraint. This constraint yields a structured basis whose
vectors are drawn from subsets corresponding to different class labels, so
that each subset sparsely represents its own class well but not the others.
Furthermore, samples belonging to the same class obtain similar
representations, so the learned sparse representations carry more
discriminative power. Experimental results validate the effectiveness of
kRICA and d-kRICA for image classification.
|
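Since the abstract above centers on the reconstruction-cost idea, here is a minimal numpy sketch of the soft-reconstruction ICA objective that RICA, and hence kRICA, builds on. The smooth-L1 penalty, the dimensions, and the random data are illustrative assumptions; the kernelization and the discrimination constraint of d-kRICA are not reproduced here.

```python
import numpy as np

def rica_cost(W, X, lam=0.1, eps=1e-6):
    """Soft-reconstruction ICA cost: sparsity of the codes WX plus a
    reconstruction penalty ||W^T W X - X||_F^2, which replaces the hard
    orthonormality constraint of plain ICA and so permits an over-complete
    basis on unwhitened data."""
    codes = W @ X                                # (k, n) filter responses
    sparsity = np.sum(np.sqrt(codes**2 + eps))   # smooth L1 surrogate
    recon = np.sum((W.T @ codes - X) ** 2)       # reconstruction error
    return lam * sparsity + recon

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 200))   # 200 unwhitened 16-d samples
W = rng.standard_normal((32, 16))    # over-complete basis: 32 filters > 16 dims
print(rica_cost(W, X))
```

kRICA would replace the linear responses WX with feature-space responses through a kernel; that step is omitted in this sketch.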
1304.2503 | Simulating the Smart Grid | cs.SY | Major challenges in the transition of power systems involve not only power
electronics but also communication technology, power market economics, and
user acceptance. Simulation is an important research method in this area, as
it helps to avoid costly failures. A common smart grid simulation platform is
still missing. We introduce a conceptual model of agents in multiple flow
networks. Flow networks extend the depth of established power flow analysis
with networks of information flow and financial transactions. We use this
model as a basis for comparing different power system simulators. Furthermore,
a quantitative comparison of simulators is performed to facilitate the choice
of a suitable tool for comprehensive smart grid simulation.
|
1304.2504 | A New Access Control Scheme for Facebook-style Social Networks | cs.CR cs.SI | The popularity of online social networks (OSNs) makes the protection of
users' private information an important but scientifically challenging problem.
In the literature, relationship-based access control schemes have been proposed
to address this problem. However, with the dynamic developments of OSNs, we
identify new access control requirements which cannot be fully captured by the
current schemes. In this paper, we focus on public information in OSNs and
treat it as a new dimension which users can use to regulate access to their
resources. We define a new OSN model containing users and their relationships
as well as public information. Based on this model, we introduce a variant of
hybrid logic for formulating access control policies. We exploit a type of
category information and relationship hierarchy to further extend our logic for
its usage in practice. In the end, we propose a few solutions to address the
problem of information reliability in OSNs, and formally model collaborative
access control in our access control scheme.
|
1304.2514 | Automatic Structuring of Semantic Web Services: An Approach | cs.IR | Ontologies have become an effective modeling tool for various applications,
most significantly the semantic web. The difficulty of extracting information
from the web, which was created mainly for visualising information, has driven
the birth of the semantic web, which will contain many more resources than the
web and will attach machine-readable semantic information to these resources.
Ontological bootstrapping on a set of predefined sources, such as web services,
must address the problem of multiple, largely unrelated concepts. Web services
consist of two basic components: Web Services Description Language (WSDL)
descriptors and free-text descriptors. The WSDL descriptor is evaluated using
two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web
context generation. The proposed bootstrapping ontological process integrates
TF/IDF and web context generation and applies validation using the free-text
descriptor service, so that it offers a more accurate definition of
ontologies. This paper uses a ranking adaptation model that predicts the rank
of a collection of web service documents, which leads to the automatic
construction, enrichment and adaptation of ontologies.
|
1304.2523 | Communication over Finite-Chain-Ring Matrix Channels | cs.IT math.IT | Though network coding is traditionally performed over finite fields, recent
work on nested-lattice-based network coding suggests that, by allowing network
coding over certain finite rings, more efficient physical-layer network coding
schemes can be constructed. This paper considers the problem of communication
over a finite-ring matrix channel $Y = AX + BE$, where $X$ is the channel
input, $Y$ is the channel output, $E$ is random error, and $A$ and $B$ are
random transfer matrices. Tight capacity results are obtained and simple
polynomial-complexity capacity-achieving coding schemes are provided under the
assumption that $A$ is uniform over all full-rank matrices and $BE$ is uniform
over all rank-$t$ matrices, extending the work of Silva, Kschischang and
K\"{o}tter (2010), who handled the case of finite fields. This extension is
based on several new results, which may be of independent interest, that
generalize concepts and methods from matrices over finite fields to matrices
over finite chain rings.
|
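The channel model above is easy to simulate. The sketch below draws one use of the channel $Y = AX + BE$ over the finite chain ring Z_4, rejection-sampling $A$ until it is invertible; the dimensions, the square shape of $A$, and the choice of Z_4 are illustrative assumptions, and the paper's capacity-achieving coding schemes are not reproduced.

```python
import numpy as np

q, n, m, t = 4, 4, 4, 1          # ring Z_q, channel dimensions, error rank
rng = np.random.default_rng(1)

def random_unit_matrix(size):
    """Rejection-sample a matrix invertible over Z_4. A square matrix over
    Z_4 is invertible iff its reduction mod 2 is invertible over GF(2),
    i.e. iff its integer determinant is odd."""
    while True:
        A = rng.integers(0, q, (size, size))
        if int(round(np.linalg.det(A))) % 2 == 1:
            return A

X = rng.integers(0, q, (n, m))   # channel input
A = random_unit_matrix(n)        # random full-rank transfer matrix
B = rng.integers(0, q, (n, t))   # random error transfer matrix
E = rng.integers(0, q, (t, m))   # error of rank at most t
Y = (A @ X + B @ E) % q          # Y = AX + BE over Z_4
print(Y)
```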
1304.2528 | Characterization of delay propagation in the US air transportation
network | physics.soc-ph cs.SI | Complex networks provide a suitable framework to characterize air traffic.
Previous works described the world air transport network as a graph where
direct flights are edges and commercial airports are vertices. In this work, we
focus instead on the properties of flight delays in the US air transportation
network. We analyze flight performance data in 2010 and study the topological
structure of the network as well as the aircraft rotation. The properties of
flight delays, including the distribution of total delays, the dependence on
the day of the week and the hour-by-hour evolution within each day, are
characterized paying special attention to flights accumulating delays longer
than 12 hours. We find that the distributions are robust to changes in takeoff
or landing operations, to different times of the year, and even across
airports in the contiguous United States. However, airports in remote areas
(Hawaii, Alaska, Puerto Rico) can show peculiar distributions biased toward
long delays.
Additionally, we show that long delayed flights have an important dependence on
the destination airport.
|
1304.2538 | On Appropriate Selection of Fuzzy Aggregation Operators in Medical
Decision Support System | cs.AI cs.IR | A rule in a Decision Support System (DSS) often contains more than one
antecedent, and the degrees of strength of the antecedents must be combined to
determine the overall strength of the rule consequent. The membership values
of the fuzzy linguistic variables therefore have to be combined using an
aggregation operator. However, it is not feasible to predefine the form of the
aggregation operator in decision making. Instead, each operator should be
chosen based on the judgment of the experts and on their actual decision
pattern over a set of typical examples. This work illustrates how aggregation
operators intended to mimic human decision making can be selected and adjusted
to fit empirical data, here a series of test cases. Both parametrized and
non-parametrized aggregation operators are adapted to fit the empirical data.
Operators with compensatory properties, moreover, appear to produce a better
decision support system. To classify the cases, a threshold on the output of
each aggregation operator is chosen as the separation point between the two
classes, and the operator achieving the best accuracy is selected as the
appropriate one. The resulting medical decisions are very close to a
practitioner's guideline.
|
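As a toy version of the selection procedure described above, the sketch below combines two antecedent membership degrees with several candidate aggregation operators, sweeps a threshold on each operator's output as the separation point between two classes, and reports each operator's best accuracy; the operator with the highest accuracy would be selected. The synthetic data, the operator list, and the gamma parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy data: membership degrees of two antecedents plus a binary label.
mu = rng.random((200, 2))
labels = (0.6 * mu[:, 0] + 0.4 * mu[:, 1] + 0.1 * rng.standard_normal(200)) > 0.5

operators = {
    "min":      lambda a, b: np.minimum(a, b),   # t-norm
    "max":      lambda a, b: np.maximum(a, b),   # t-conorm
    "product":  lambda a, b: a * b,
    # Zimmermann-style compensatory operator with parameter gamma = 0.5.
    "gamma0.5": lambda a, b: (a * b) ** 0.5 * (1 - (1 - a) * (1 - b)) ** 0.5,
}

def best_threshold_accuracy(scores, labels):
    """Sweep candidate thresholds; return the best classification accuracy."""
    return max(np.mean((scores > th) == labels) for th in np.linspace(0, 1, 101))

for name, op in operators.items():
    acc = best_threshold_accuracy(op(mu[:, 0], mu[:, 1]), labels)
    print(f"{name:8s} accuracy = {acc:.3f}")
```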
1304.2543 | A New Distributed Evolutionary Computation Technique for Multi-Objective
Optimization | cs.NE | Finding solutions to Multi-Objective Optimization
Problems (MOPs) is increasingly important. Evolutionary strategies help to
solve such real-world problems efficiently and quickly, but sequential
Evolutionary Algorithms (EAs) require enormous computational power and much
time for large problems. To enhance performance on this type of problem, this
paper presents a new Distributed Novel Evolutionary Strategy Algorithm (DNESA)
for multi-objective optimization. The proposed DNESA applies the
divide-and-conquer approach to decompose the population into smaller
sub-populations and involves multiple solutions in the form of cooperative
sub-populations. In DNESA, a server distributes the total computational load
to all associated clients, and simulation results show that the time for
solving large problems is much less than for sequential EAs. DNESA also shows
better performance in a convergence test than three other well-known EAs.
|
1304.2545 | For Solving Linear Equations Recombination is a Needless Operation in
Time-Variant Adaptive Hybrid Algorithms | cs.NE cs.NA | Recently, hybrid evolutionary computation (EC) techniques have been
successfully applied to solving large sets of linear equations. All the
recently developed hybrid evolutionary algorithms for solving linear equations
contain both recombination and mutation operations. In this paper, two
modified hybrid evolutionary algorithms incorporating a time-variant adaptive
evolutionary technique are proposed for solving linear equations, in which the
recombination operation is absent. The effectiveness of the recombination
operator is studied for time-variant adaptive hybrid algorithms on large sets
of linear equations. Several experiments have been carried out using both the
proposed modified hybrid evolutionary algorithms (without recombination) and
the corresponding existing hybrid algorithms (with recombination). It is found
that the numbers of generations required by the existing hybrid algorithms
(the Gauss-Seidel-SR based time-variant adaptive (GSBTVA) hybrid algorithm and
the Jacobi-SR based time-variant adaptive (JBTVA) hybrid algorithm) and by the
modified hybrid algorithms (the modified Gauss-Seidel-SR based time-variant
adaptive (MGSBTVA) hybrid algorithm and the modified Jacobi-SR based
time-variant adaptive (MJBTVA) hybrid algorithm) are comparable, while the
proposed modified algorithms require less computational time than the
corresponding existing hybrid algorithms. Because the modified hybrid
algorithms omit the recombination operation, they require less computational
effort and are more efficient, effective and easier to implement.
|
1304.2574 | An Analysis on the Inter-Cell Station Dependency Probability in
IEEE 802.11 Infrastructure WLANs | cs.NI cs.IT math.IT | In this document, we are primarily interested in computing the probabilities
of various types of dependencies that can occur in a multi-cell infrastructure
network.
|
1304.2576 | Shortest Path and Distance Queries on Road Networks: Towards Bridging
Theory and Practice | cs.DS cs.DB | Given two locations $s$ and $t$ in a road network, a distance query returns
the minimum network distance from $s$ to $t$, while a shortest path query
computes the actual route that achieves the minimum distance. These two types
of queries find important applications in practice, and a plethora of solutions
have been proposed in the past few decades. The existing solutions, however, are
optimized for either practical or asymptotic performance, but not both. In
particular, the techniques with enhanced practical efficiency are mostly
heuristic-based, and they offer unattractive worst-case guarantees in terms of
space and time. On the other hand, the methods that are worst-case efficient
often entail prohibitive preprocessing or space overheads, which render them
inapplicable for the large road networks (with millions of nodes) commonly used
in modern map applications.
This paper presents {\em Arterial Hierarchy (AH)}, an index structure that
narrows the gap between theory and practice in answering shortest path and
distance queries on road networks. On the theoretical side, we show that, under
a realistic assumption, AH answers any distance query in $\tilde{O}(\log
\rho)$ time, where $\rho = d_{max}/d_{min}$, and $d_{max}$ (resp.\ $d_{min}$)
is the largest (resp.\ smallest) $L_\infty$ distance between any two nodes in
the road network. In addition, any shortest path query can be answered in
$\tilde{O}(k + \log \rho)$ time, where $k$ is the number of nodes on the
shortest path. On the
practical side, we experimentally evaluate AH on a large set of real road
networks with up to twenty million nodes, and we demonstrate that (i) AH
outperforms the state of the art in terms of query time, and (ii) its space and
pre-computation overheads are moderate.
|
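For reference, the sketch below answers both query types with the textbook Dijkstra algorithm, the baseline that index structures such as AH are built to outperform after preprocessing; the toy graph is an illustrative assumption.

```python
import heapq

def dijkstra(graph, s, t):
    """Return (distance, path) from s to t; graph maps node -> [(nbr, w)]."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, u = [t], t                      # walk predecessors back to s
    while u != s:
        u = prev[u]
        path.append(u)
    return dist[t], path[::-1]

road = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)],
        "c": [("d", 1)], "d": []}
print(dijkstra(road, "a", "d"))           # (4.0, ['a', 'b', 'c', 'd'])
```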
1304.2581 | Stability and performance of stochastic predictive control | cs.SY math.OC | This article is concerned with stability and performance of controlled
stochastic processes under receding horizon policies. We carry out a systematic
study of methods to guarantee stability under receding horizon policies via
appropriate selections of cost functions in the underlying finite-horizon
optimal control problem. We also obtain quantitative bounds on the performance
of the system under receding horizon policies as measured by the long-run
expected average cost. The results are illustrated with the help of several
simple examples.
|
1304.2618 | Lexicographic identifying codes | math.CO cs.DM cs.IT math.IT | An identifying code in a graph is a set of vertices which intersects all the
symmetric differences between pairs of neighbourhoods of vertices. Not all
graphs have identifying codes; those that do are referred to as twin-free. In
this paper, we design an algorithm that finds an identifying code in a
twin-free graph on n vertices in O(n^3) binary operations, and returns a
failure if the graph is not twin-free. We also give an alternative algorithm
for sparse graphs with a running time of O(n^2 d log n) binary operations,
where d is the maximum degree. We also prove that these algorithms can return any
identifying code with minimum cardinality, provided the vertices are correctly
sorted.
|
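A brute-force check of the defining property (not the paper's algorithms): a vertex set C is an identifying code exactly when every closed neighbourhood meets C in a nonempty set that is distinct across vertices, which is equivalent to C intersecting all pairwise symmetric differences while dominating the graph. The path graph and the candidate code are illustrative assumptions.

```python
def closed_neighbourhood(adj, v):
    return {v} | set(adj[v])

def is_identifying_code(adj, code):
    """code identifies the graph iff the sets N[v] & code are nonempty and
    pairwise distinct -- equivalently, code hits every symmetric difference
    N[u] ^ N[v] and dominates every vertex."""
    signatures = {}
    for v in adj:
        sig = frozenset(closed_neighbourhood(adj, v) & code)
        if not sig or sig in signatures.values():
            return False                  # undominated vertex or twin signature
        signatures[v] = sig
    return True

# Path graph 0-1-2-3 (twin-free); {0, 1, 2} is a minimum identifying code.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_identifying_code(adj, {0, 1, 2}))   # True
print(is_identifying_code(adj, {0, 2}))      # False: N[2] and N[3] collide
```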
1304.2637 | Containment of Nested Regular Expressions | cs.DB | Nested regular expressions (NREs) have been proposed as a powerful formalism
for querying RDFS graphs, but research in a more general graph database context
has been scarce, and static analysis results are currently lacking. In this
paper we investigate the problem of containment of NREs, and show that it can
be solved in PSPACE, i.e., the same complexity as the problem of containment of
regular expressions or regular path queries (RPQs).
|
1304.2664 | Localized nonlinear functional equations and two sampling problems in
signal processing | math.FA cs.IT math.IT math.NA math.OA | Let $1\le p\le \infty$. In this paper, we consider solving a nonlinear
functional equation $$f(x)=y,$$ where $x, y$ belong to $\ell^p$ and $f$ has
continuous bounded gradient in an inverse-closed subalgebra of ${\mathcal
B}(\ell^2)$, the Banach algebra of all bounded linear operators on the Hilbert
space $\ell^2$. We introduce strict monotonicity property for functions $f$ on
Banach spaces $\ell^p$ so that the above nonlinear functional equation is
solvable and the solution $x$ depends continuously on the given data $y$ in
$\ell^p$. We show that the Van Cittert iteration converges in $\ell^p$ at an
exponential rate, and hence it can be used to locate the true solution of the
above nonlinear functional equation. We apply the above theory to two
problems in signal processing: nonlinear sampling with instantaneous
companding followed by average sampling; and local identification of
innovation positions and quantification of amplitudes for signals with finite
rate of innovation.
|
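A minimal sketch of the Van Cittert iteration mentioned above, x_{k+1} = x_k + lambda*(y - f(x_k)), on a toy companding-style nonlinearity whose gradient is bounded in [1, 1.3], so the iteration contracts; the choice of f, the relaxation parameter, and the finite dimension are illustrative assumptions.

```python
import numpy as np

def f(x):
    """Toy instantaneous companding map: strictly monotone, gradient in
    [1, 1.3], so the relaxed iteration below is a contraction."""
    return x + 0.3 * np.tanh(x)

rng = np.random.default_rng(3)
x_true = rng.standard_normal(64)
y = f(x_true)                            # observed data

lam, x = 0.8, np.zeros_like(y)           # relaxation parameter, initial guess
for k in range(30):
    x = x + lam * (y - f(x))             # Van Cittert update
print(np.linalg.norm(x - x_true))        # error shrinks geometrically
```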
1304.2681 | Maps of Computer Science | cs.IR cs.DL physics.soc-ph | We describe a practical approach for visual exploration of research papers.
Specifically, we use the titles of papers from the DBLP database to create what
we call maps of computer science (MoCS). Words and phrases from the paper
titles are the cities in the map, and countries are created based on word and
phrase similarity, calculated using co-occurrence. With the help of heatmaps,
we can visualize the profile of a particular conference or journal over the
base map. Similarly, heatmap profiles can be made of individual researchers or
groups such as a department. The visualization system also makes it possible to
change the data used to generate the base map. For example, a specific journal
or conference can be used to generate the base map and then the heatmap
overlays can be used to show the evolution of research topics in the field over
the years. As before, the profiles of individual researchers or research
groups can be visualized using heatmap overlays, this time over the journal or
conference base map. Finally, research papers or abstracts can be turned into
visual abstracts that represent the distribution of topics in the
paper. We outline a modular and extensible system for term extraction using
natural language processing techniques, and show the applicability of methods
of information retrieval to calculation of term similarity and creation of a
topic map. The system is available at mocs.cs.arizona.edu.
|
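A tiny sketch of the co-occurrence similarity underlying the base map: terms that appear together in titles receive high similarity. The sample titles, whitespace tokenization, and cosine-style normalization are illustrative assumptions; the system described above adds NLP-based term extraction on the full DBLP data.

```python
from collections import Counter
from itertools import combinations
import math

titles = [
    "graph drawing of large networks",
    "force directed graph drawing",
    "information retrieval for large text corpora",
    "text retrieval with topic maps",
]

cooc, freq = Counter(), Counter()
for title in titles:
    words = set(title.split())           # naive whitespace tokenization
    freq.update(words)
    cooc.update(frozenset(p) for p in combinations(sorted(words), 2))

def similarity(a, b):
    """Cosine-style co-occurrence similarity between two terms."""
    return cooc[frozenset((a, b))] / math.sqrt(freq[a] * freq[b])

print(similarity("graph", "drawing"))     # co-occurring terms: 1.0
print(similarity("graph", "retrieval"))   # never co-occur: 0.0
```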
1304.2683 | Image Classification by Feature Dimension Reduction and Graph based
Ranking | cs.CV | Dimensionality reduction (DR) of image features plays an important role in
image retrieval and classification tasks. Recently, two types of methods have
been proposed to improve both the accuracy and the efficiency of
dimensionality reduction. One uses Non-negative Matrix Factorization (NMF) to
describe the image distribution in the space of the basis matrix. The other
trains a subspace projection matrix to project the original data space into a
low-dimensional subspace with a deep architecture, so that low-dimensional
codes can be learned. At the same time, graph-based similarity learning
algorithms, which exploit contextual information to improve the effectiveness
of image rankings, have been proposed for image classification and retrieval
problems. In this paper, after the two methods mentioned above are used to
reduce the high-dimensional image features, we learn a graph-based similarity
for the image classification problem. The paper compares the proposed approach
with other approaches on an image database.
|
1304.2688 | Efficient Wireless Security Through Jamming, Coding and Routing | cs.NI cs.CR cs.IT math.IT | There is a rich recent literature on how to assist secure communication
between a single transmitter and receiver at the physical layer of wireless
networks through techniques such as cooperative jamming. In this paper, we
consider how these single-hop physical layer security techniques can be
extended to multi-hop wireless networks and show how to augment physical layer
security techniques with higher layer network mechanisms such as coding and
routing. Specifically, we consider the secure minimum energy routing problem,
in which the objective is to compute a minimum energy path between two network
nodes subject to constraints on the end-to-end communication secrecy and
goodput over the path. This problem is formulated as a constrained optimization
of transmission power and link selection, which is proved to be NP-hard.
Nevertheless, we show that efficient algorithms exist to compute both exact and
approximate solutions for the problem. In particular, we develop an exact
solution of pseudo-polynomial complexity, as well as an epsilon-optimal
approximation of polynomial complexity. Simulation results are also provided to
show the utility of our algorithms and quantify their energy savings compared
to a combination of (standard) security-agnostic minimum energy routing and
physical layer security. In the simulated scenarios, we observe that, by
jointly optimizing link selection at the network layer and cooperative jamming
at the physical layer, our algorithms reduce the network energy consumption by
half.
|
1304.2694 | Symmetry-Aware Marginal Density Estimation | cs.AI | The Rao-Blackwell theorem is utilized to analyze and improve the scalability
of inference in large probabilistic models that exhibit symmetries. A novel
marginal density estimator is introduced and shown both analytically and
empirically to outperform standard estimators by several orders of magnitude.
The developed theory and algorithms apply to a broad class of probabilistic
models including statistical relational models considered not susceptible to
lifted probabilistic inference.
|
1304.2707 | Tracking the Tracker from its Passive Sonar ML-PDA Estimates | cs.IT math.IT | Target motion analysis with wideband passive sonar has received much
attention. Maximum likelihood probabilistic data-association (ML-PDA)
represents an asymptotically efficient estimator for deterministic target
motion, and is especially well-suited for low-observable targets; the results
presented here apply to situations with higher signal to noise ratio as well,
including of course the situation of a deterministic target observed via clean
measurements without false alarms or missed detections. Here we study the
inverse problem, namely, how to identify the observing platform (following a
two-leg motion model) from the results of the target estimation process, i.e.
the estimated target state and the Fisher information matrix, quantities we
assume an eavesdropper might intercept. We tackle this problem and present
observability properties, with supporting simulation results.
|
1304.2711 | Is Shafer General Bayes? | cs.AI | This paper examines the relationship between Shafer's belief functions and
convex sets of probability distributions. Kyburg's (1986) result showed that
belief function models form a subset of the class of closed convex probability
distributions. This paper emphasizes the importance of Kyburg's result by
looking at simple examples involving Bernoulli trials. Furthermore, it is shown
that many convex sets of probability distributions generate the same belief
function in the sense that they support the same lower and upper values. This
has implications for a decision theoretic extension. Dempster's rule of
combination is also compared with Bayes' rule of conditioning.
|
1304.2712 | Modifiable Combining Functions | cs.AI | Modifiable combining functions are a synthesis of two common approaches to
combining evidence. They offer many of the advantages of these approaches and
avoid some disadvantages. Because they facilitate the acquisition,
representation, explanation, and modification of knowledge about combinations
of evidence, they are proposed as a tool for knowledge engineers who build
systems that reason under uncertainty, not as a normative theory of evidence.
|
1304.2713 | Dempster-Shafer vs. Probabilistic Logic | cs.AI | The combination of evidence in Dempster-Shafer theory is compared with the
combination of evidence in probabilistic logic. Sufficient conditions are
stated for these two methods to agree. It is then shown that these conditions
are minimal in the sense that disagreement can occur when any one of them is
removed. An example is given in which the traditional assumption of conditional
independence of evidence on hypotheses holds and a uniform prior is assumed,
but probabilistic logic and Dempster's rule give radically different results
for the combination of two evidence events.
|
1304.2714 | Higher Order Probabilities | cs.AI | A number of writers have supposed that for the full specification of belief,
higher order probabilities are required. Some have even supposed that there may
be an unending sequence of higher order probabilities of probabilities of
probabilities.... In the present paper we show that higher order probabilities
can always be replaced by the marginal distributions of joint probability
distributions. We consider both the case in which higher order probabilities
are of the same sort as lower order probabilities and that in which higher
order probabilities are distinct in character, as when lower order
probabilities are construed as frequencies and higher order probabilities are
construed as subjective degrees of belief. In neither case do higher order
probabilities appear to offer any advantages, either conceptually or
computationally.
|
1304.2715 | Belief in Belief Functions: An Examination of Shafer's Canonical
Examples | cs.AI | In the canonical examples underlying Shafer-Dempster theory, beliefs over the
hypotheses of interest are derived from a probability model for a set of
auxiliary hypotheses. Beliefs are derived via a compatibility relation
connecting the auxiliary hypotheses to subsets of the primary hypotheses. A
belief function differs from a Bayesian probability model in that one does not
condition on those parts of the evidence for which no probabilities are
specified. The significance of this difference in conditioning assumptions is
illustrated with two examples giving rise to identical belief functions but
different Bayesian probability distributions.
|
1304.2716 | Do We Need Higher-Order Probabilities and, If So, What Do They Mean? | cs.AI | The apparent failure of individual probabilistic expressions to distinguish
uncertainty about truths from uncertainty about probabilistic assessments has
prompted researchers to seek formalisms where the two types of uncertainties
are given notational distinction. This paper demonstrates that the desired
distinction is already a built-in feature of classical probabilistic models,
thus, specialized notations are unnecessary.
|
1304.2717 | Bayesian Prediction for Artificial Intelligence | cs.AI | This paper shows that the common method used for making predictions under
uncertainty in AI and science is in error. This method is to use currently
available data to select the best model from a given class of models (a
process called abduction) and then to use this model to make predictions about
future data. The correct method requires averaging over all the models to
make a prediction (we call this method transduction). Using transduction, an
AI system will not give misleading results when basing predictions on small
amounts of data, when no model is clearly best. For common classes of models we
show that the optimal solution can be given in closed form.
|
1304.2718 | Can Evidence Be Combined in the Dempster-Shafer Theory? | cs.AI | Dempster's rule of combination has been the most controversial part of the
Dempster-Shafer (D-S) theory. In particular, Zadeh has reached a conjecture on
the noncombinability of evidence from a relational model of the D-S theory. In
this paper, we will describe another relational model where D-S masses are
represented as conditional granular distributions. By comparing it with Zadeh's
relational model, we will show that Zadeh's conjecture on combinability does
not affect the applicability of Dempster's rule in our model.
|
1304.2719 | An Interesting Uncertainty-Based Combinatoric Problem in Spare Parts
Forecasting: The FRED System | cs.AI | The domain of spare parts forecasting is examined, and is found to present
unique uncertainty based problems in the architectural design of a
knowledge-based system. A mixture of different uncertainty paradigms is
required for the solution, with an intriguing combinatoric problem arising from
an uncertain choice of inference engines. Thus, uncertainty in the system is
manifested in two different meta-levels. The different uncertainty paradigms
and meta-levels must be integrated into a functioning whole. FRED is an example
of a difficult real-world domain to which no existing uncertainty approach is
completely appropriate. This paper discusses the architecture of FRED,
highlighting: the points of uncertainty and other interesting features of the
domain, the specific implications of those features on the system design
(including the combinatoric explosions), their current implementation and
future plans, and other problems and issues with the architecture.
|
1304.2720 | Bayesian Inference in Model-Based Machine Vision | cs.AI | This is a preliminary version of visual interpretation integrating multiple
sensors in SUCCESSOR, an intelligent, model-based vision system. We pursue a
thorough integration of hierarchical Bayesian inference with comprehensive
physical representation of objects and their relations in a system for
reasoning with geometry, surface materials and sensor models in machine vision.
Bayesian inference provides a framework for accruing probabilities to
rank-order hypotheses.
|
1304.2721 | Using the Dempster-Shafer Scheme in a Diagnostic Expert System Shell | cs.AI | This paper discusses an expert system shell that integrates rule-based
reasoning and the Dempster-Shafer evidence combination scheme. Domain knowledge
is stored as rules with associated belief functions. The reasoning component
uses a combination of forward and backward inferencing mechanisms to allow
interaction with users in a mixed-initiative format.
|
1304.2722 | Stochastic Simulation of Bayesian Belief Networks | cs.AI | This paper examines Bayesian belief network inference using simulation as a
method for computing the posterior probabilities of network variables.
Specifically, it examines the use of a method described by Henrion, called
logic sampling, and a method described by Pearl, called stochastic simulation.
We first review the conditions under which logic sampling is computationally
infeasible. Such cases motivated the development of Pearl's stochastic
simulation algorithm. We have found that this stochastic simulation algorithm,
when applied to certain networks, leads to much slower than expected
convergence to the true posterior probabilities. This behavior is a result of
the tendency for local areas in the network to become fixed through many
simulation cycles. The time required to obtain significant convergence can be
made arbitrarily long by strengthening the probabilistic dependency between
nodes. We propose the use of several forms of graph modification, such as graph
pruning, arc reversal, and node reduction, in order to convert some networks
into formats that are computationally more efficient for simulation.
|
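A minimal sketch of the logic-sampling estimator examined above, on a two-node network Rain -> WetGrass: sample forward from the priors, discard samples inconsistent with the evidence, and average the rest. The CPT numbers are illustrative assumptions; as the abstract reviews, this scheme becomes infeasible precisely when the evidence is improbable, which motivated Pearl's stochastic simulation.

```python
import random

random.seed(4)
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}     # P(WetGrass=T | Rain)

def logic_sample():
    """One forward sample through the network, parents before children."""
    rain = random.random() < P_RAIN
    wet = random.random() < P_WET_GIVEN[rain]
    return rain, wet

accepted = kept_rain = 0
for _ in range(100_000):
    rain, wet = logic_sample()
    if wet:                               # keep only samples matching evidence
        accepted += 1
        kept_rain += rain
print(kept_rain / accepted)               # ~ P(Rain=T | Wet=T) = 0.18/0.26 = 0.692
```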
1304.2723 | Temporal Reasoning About Uncertain Worlds | cs.AI | We present a program that manages a database of temporally scoped beliefs.
The basic functionality of the system includes maintaining a network of
constraints among time points, supporting a variety of fetches, mediating the
application of causal rules, monitoring intervals of time for the addition of
new facts, and managing data dependencies that keep the database consistent. At
this level the system operates independently of any measure of belief or belief
calculus. We provide an example of how an application program might use this
functionality to implement a belief calculus.
|
1304.2724 | A Perspective on Confidence and Its Use in Focusing Attention During
Knowledge Acquisition | cs.AI | We present a representation of partial confidence in belief and preference
that is consistent with the tenets of decision theory. The fundamental insight
underlying the representation is that if a person is not completely confident
in a probability or utility assessment, additional modeling of the assessment
may improve decisions to which it is relevant. We show how a traditional
decision-analytic approach can be used to balance the benefits of additional
modeling with associated costs. The approach can be used during knowledge
acquisition to focus the attention of a knowledge engineer or expert on parts
of a decision model that deserve additional refinement.
|
1304.2725 | Practical Issues in Constructing a Bayes' Belief Network | cs.AI | Bayes belief networks and influence diagrams are tools for constructing
coherent probabilistic representations of uncertain knowledge. The process of
constructing such a network to represent an expert's knowledge is used to
illustrate a variety of techniques which can facilitate the process of
structuring and quantifying uncertain relationships. These include some
generalizations of the "noisy OR gate" concept. Sensitivity analysis of generic
elements of Bayes' networks provides insight into when rough probability
assessments are sufficient and when greater precision may be important.
|
1304.2726 | NAIVE: A Method for Representing Uncertainty and Temporal Relationships
in an Automated Reasoner | cs.AI | This paper describes NAIVE, a low-level knowledge representation language and
inferencing process. NAIVE has been designed for reasoning about
nondeterministic dynamic systems like those found in medicine. Knowledge is
represented in a graph structure consisting of nodes, which correspond to the
variables describing the system of interest, and arcs, which correspond to the
procedures used to infer the value of a variable from the values of other
variables. The value of a variable can be determined at an instant in time,
over a time interval or for a series of times. Information about the value of a
variable is expressed as a probability density function which quantifies the
likelihood of each possible value. The inferencing process uses these
probability density functions to propagate uncertainty. NAIVE has been used to
develop medical knowledge bases including over 100 variables.
|
1304.2727 | Objective Probability | cs.AI | A distinction is sometimes made between "statistical" and "subjective"
probabilities. This is based on a distinction between "unique" events and
"repeatable" events. We argue that this distinction is untenable, since all
events are "unique" and all events belong to "kinds", and offer a conception of
probability for A1 in which (1) all probabilities are based on -- possibly
vague -- statistical knowledge, and (2) every statement in the language has a
probability. This conception of probability can be applied to very rich
languages.
|
1304.2728 | Coefficients of Relations for Probabilistic Reasoning | cs.AI | Definitions and notations with historical references are given for some
numerical coefficients commonly used to quantify relations among collections of
objects for the purpose of expressing approximate knowledge and probabilistic
reasoning.
|
1304.2729 | Satisfaction of Assumptions is a Weak Predictor of Performance | cs.AI | This paper demonstrates a methodology for examining the accuracy of uncertain
inference systems (UIS), after their parameters have been optimized, and does
so for several common UIS's. This methodology may be used to test the accuracy
when either the prior assumptions or updating formulae are not exactly
satisfied. Surprisingly, these UIS's were revealed to be no more accurate on
the average than a simple linear regression. Moreover, even on prior
distributions which were deliberately biased so as to give very good accuracy,
they were less accurate than the simple probabilistic model which assumes
marginal independence between inputs. This demonstrates that the importance of
updating formulae can outweigh that of prior assumptions. Thus, when UIS's are
judged by their final accuracy after optimization, we get completely different
results than when they are judged by whether or not their prior assumptions are
perfectly satisfied.
|
1304.2730 | Structuring Causal Tree Models with Continuous Variables | cs.AI | This paper considers the problem of invoking auxiliary, unobservable
variables to facilitate the structuring of causal tree models for a given set
of continuous variables. Paralleling the treatment of bi-valued variables in
[Pearl 1986], we show that if a collection of coupled variables are governed by
a joint normal distribution and a tree-structured representation exists, then
both the topology and all internal relationships of the tree can be uncovered
by observing pairwise dependencies among the observed variables (i.e., the
leaves of the tree). Furthermore, the conditions for normally distributed
variables are less restrictive than those governing bi-valued variables. The
result extends the applications of causal tree models which were found useful
in evidential reasoning tasks.
|
1304.2731 | Implementing Evidential Reasoning in Expert Systems | cs.AI | The Dempster-Shafer theory has been extended recently for its application to
expert systems. However, implementing the extended D-S reasoning model in
rule-based systems greatly complicates the task of generating informative
explanations. By implementing GERTIS, a prototype system for diagnosing
rheumatoid arthritis, we show that two kinds of knowledge are essential for
explanation generation: (1) taxonomic class relationships between hypotheses
and (2) pointers to the rules that significantly contribute to belief in the
hypothesis. As a result, the knowledge represented in GERTIS is richer and more
complex than that of conventional rule-based systems. GERTIS not only
demonstrates the feasibility of rule-based evidential-reasoning systems, but
also suggests ways to generate better explanations, and to explicitly represent
various useful relationships among hypotheses and rules.
|
1304.2732 | Decision Tree Induction Systems: A Bayesian Analysis | cs.AI | Decision tree induction systems are being used for knowledge acquisition in
noisy domains. This paper develops a subjective Bayesian interpretation of the
task tackled by these systems and the heuristic methods they use. It is argued
that decision tree systems implicitly incorporate a prior belief that the
simpler (in terms of decision tree complexity) of two hypotheses be preferred,
all else being equal, and that they perform a greedy search of the space of
decision rules to find one in which there is strong posterior belief. A number
of improvements to these systems are then suggested.
|
1304.2733 | The Automatic Training of Rule Bases that Use Numerical Uncertainty
Representations | cs.AI | The use of numerical uncertainty representations allows better modeling of
some aspects of human evidential reasoning. It also makes knowledge acquisition
and system development, test, and modification more difficult. We propose that
where possible, the assignment and/or refinement of rule weights should be
performed automatically. We present one approach to performing this training,
numerical optimization, and report on the results of some preliminary tests in
training rule bases. We also show that truth maintenance can be used to make
training more efficient and ask some epistemological questions raised by
training rule weights.
|
1304.2734 | The Inductive Logic of Information Systems | cs.AI | An inductive logic can be formulated in which the elements are not
propositions or probability distributions, but information systems. The logic
is complete for information systems with binary hypotheses, i.e., it applies to
all such systems. It is not complete for information systems with more than two
hypotheses, but applies to a subset of such systems. The logic is inductive in
that conclusions are more informative than premises. Inferences using the
formalism have a strong justification in terms of the expected value of the
derived information system.
|
1304.2735 | Automated Generation of Connectionist Expert Systems for Problems
Involving Noise and Redundancy | cs.AI | When creating an expert system, the most difficult and expensive task is
constructing a knowledge base. This is particularly true if the problem
involves noisy data and redundant measurements. This paper shows how to modify
the MACIE process for generating connectionist expert systems from training
examples so that it can accommodate noisy and redundant data. The basic idea is
to dynamically generate appropriate training examples by constructing both a
'deep' model and a noise model for the underlying problem. The use of
winner-take-all groups of variables is also discussed. These techniques are
illustrated with a small example that would be very difficult for standard
expert system approaches.
|
1304.2736 | The Recovery of Causal Poly-Trees from Statistical Data | cs.AI | Poly-trees are singly connected causal networks in which variables may arise
from multiple causes. This paper develops a method of recovering poly-trees
from empirically measured probability distributions of pairs of variables. The
method guarantees that, if the measured distributions are generated by a causal
process structured as a poly-tree, then the topological structure of the tree
can be recovered precisely and, in addition, the causal directionality of the
branches can be determined up to the maximum extent possible. The method also
pinpoints the minimum (if any) external semantics required to determine the
causal relationships among the variables considered.
|
1304.2737 | A Heuristic Bayesian Approach to Knowledge Acquisition: Application to
Analysis of Tissue-Type Plasminogen Activator | cs.AI | This paper describes a heuristic Bayesian method for computing probability
distributions from experimental data, based upon the multivariate normal form
of the influence diagram. An example illustrates its use in medical technology
assessment. This approach facilitates the integration of results from different
studies, and permits a medical expert to make proper assessments without
considerable statistical training.
|
1304.2738 | Theory-Based Inductive Learning: An Integration of Symbolic and
Quantitative Methods | cs.AI | The objective of this paper is to propose a method that will generate a
causal explanation of observed events in an uncertain world and then make
decisions based on that explanation. Feedback can cause the explanation and
decisions to be modified. I call the method Theory-Based Inductive Learning
(T-BIL). T-BIL integrates deductive learning, based on a technique called
Explanation-Based Generalization (EBG) from the field of machine learning, with
inductive learning methods from Bayesian decision theory. T-BIL takes as inputs
(1) a decision problem involving a sequence of related decisions over time, (2)
a training example of a solution to the decision problem in one period, and (3)
the domain theory relevant to the decision problem. T-BIL uses these inputs to
construct a probabilistic explanation of why the training example is an
instance of a solution to one stage of the sequential decision problem. This
explanation is then generalized to cover a more general class of instances and
is used as the basis for making the next-stage decisions. As the outcomes of
each decision are observed, the explanation is revised, which in turn affects
the subsequent decisions. A detailed example is presented that uses T-BIL to
solve a very general stochastic adaptive control problem for an autonomous
mobile robot.
|
1304.2739 | Using T-Norm Based Uncertainty Calculi in a Naval Situation Assessment
Application | cs.AI | RUM (Reasoning with Uncertainty Module) is an integrated software tool based
on KEE, a frame system implemented in an object-oriented language. RUM's
architecture is composed of three layers: representation, inference, and
control. The representation layer is based on frame-like data structures that
capture the uncertainty information used in the inference layer and the
uncertainty meta-information used in the control layer. The inference layer
provides a selection of five T-norm based uncertainty calculi with which to
perform the intersection, detachment, union, and pooling of information. The
control layer uses the meta-information to select the appropriate calculus for
each context and to resolve eventual ignorance or conflict in the information.
This layer also provides a context mechanism that allows the system to focus on
the relevant portion of the knowledge base, and an uncertain-belief revision
system that incrementally updates the certainty values of well-formed formulae
(wffs) in an acyclic directed deduction graph. RUM has been tested and
validated in a sequence of experiments in both naval and aerial situation
assessment (SA), consisting of correlating reports and tracks, locating and
classifying platforms, and identifying intents and threats. An example of naval
situation assessment is illustrated. The testbed environment for developing
these experiments has been provided by LOTTA, a symbolic simulator implemented
in Flavors. This simulator maintains time-varying situations in a multi-player
antagonistic game where players must make decisions in light of uncertain and
incomplete data. RUM has been used to assist one of the LOTTA players to
perform the SA task.
|
1304.2740 | A Study of Associative Evidential Reasoning | cs.AI | Evidential reasoning is cast as the problem of simplifying the
evidence-hypothesis relation and constructing combination formulas that possess
certain testable properties. Important classes of evidence, such as
identifiers, annihilators, and idempotents, and their roles in determining
binary operations on intervals of reals are discussed. Appropriate ways of
constructing formulas for combining evidence, and their limitations (for
instance, in robustness), are presented.
|
1304.2741 | A Measure-Free Approach to Conditioning | cs.AI | In an earlier paper, a new theory of measure-free "conditional" objects was
presented. In this paper, emphasis is placed upon the motivation of the theory.
The central part of this motivation is established through an example involving
a knowledge-based system. In order to evaluate combination of evidence for this
system, using observed data, auxiliary attribute and diagnosis variables, and
inference rules connecting them, one must first choose an appropriate algebraic
logic description pair (ALDP): a formal language or syntax followed by a
compatible logic or semantic evaluation (or model). Three common choices for
this highly non-unique pairing are briefly discussed, the logics being
Classical Logic, Fuzzy Logic, and Probability Logic. In all three, the key
operator representing implication in the inference rules is interpreted as the
often-used disjunction of a negation, (b => a) = (b' v a), for any events a, b.
However, another reasonable interpretation of the implication operator is
through the familiar form of probabilistic conditioning. But, it can be shown -
quite surprisingly - that the ALDP corresponding to Probability Logic cannot be
used as a rigorous basis for this interpretation! To fill this gap, a new ALDP
is constructed consisting of "conditional objects", extending ordinary
Probability Logic, and compatible with the desired conditional probability
interpretation of inference rules. It is shown also that this choice of ALDP
leads to feasible computations for the combination of evidence evaluation in
the example. In addition, a number of basic properties of conditional objects
and the resulting Conditional Probability Logic are given, including a
characterization property and a developed calculus of relations.
|
1304.2742 | Convergent Deduction for Probabilistic Logic | cs.AI | This paper discusses the semantics and proof theory of Nilsson's
probabilistic logic, outlining both the benefits of its well-defined model
theory and the drawbacks of its proof theory. Within Nilsson's semantic
framework, we derive a set of inference rules which are provably sound. The
resulting proof system, in contrast to Nilsson's approach, has the important
feature of convergence - that is, the inference process proceeds by computing
increasingly narrow probability intervals which converge from above and below
on the smallest entailed probability interval. Thus the procedure can be
stopped at any time to yield partial information concerning the smallest
entailed interval.
|
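As a concrete anchor for the semantics, Nilsson's smallest entailed probability interval is the solution of a linear program over possible worlds; the sketch below computes it exactly with scipy for two premises over atoms a and b. The premise probabilities are illustrative assumptions, and the paper's contribution, a sound rule-based procedure whose bounds converge to this LP solution, is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Possible worlds over atoms (a, b): TT, TF, FT, FF.
worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]
a_row = [float(w[0]) for w in worlds]                  # indicator of a
implies = [float((not w[0]) or w[1]) for w in worlds]  # indicator of a -> b
b_row = np.array([float(w[1]) for w in worlds])        # objective: P(b)

# Constraints: world probabilities sum to 1, P(a) = 0.7, P(a -> b) = 0.9.
A_eq = [[1.0] * 4, a_row, implies]
b_eq = [1.0, 0.7, 0.9]

lo = linprog(b_row, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
hi = linprog(-b_row, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print(f"P(b) in [{lo.fun:.2f}, {-hi.fun:.2f}]")        # [0.60, 0.90]
```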
1304.2743 | Comparisons of Reasoning Mechanisms for Computer Vision | cs.CV cs.AI | An evidential reasoning mechanism based on the Dempster-Shafer theory of
evidence is introduced. Its performance in real-world image analysis is
compared with other mechanisms based on the Bayesian formalism and a simple
weight combination method.
|
1304.2744 | A Knowledge Engineer's Comparison of Three Evidence Aggregation Methods | cs.AI | The comparisons of uncertainty calculi from the last two Uncertainty
Workshops have all used theoretical probabilistic accuracy as the sole metric.
While mathematical correctness is important, there are other factors which
should be considered when developing reasoning systems. These other factors
include, among other things, the error in uncertainty measures obtainable for
the problem and the effect of this error on the performance of the resulting
system.
|
1304.2745 | Towards Solving the Multiple Extension Problem: Combining Defaults and
Probabilities | cs.AI | The multiple extension problem arises frequently in diagnostic and default
inference. That is, we can often use any of a number of sets of defaults or
possible hypotheses to explain observations or make predictions. In default
inference, some extensions seem to be simply wrong and we use qualitative
techniques to weed out the unwanted ones. In the area of diagnosis, however,
the multiple explanations may all seem reasonable, however improbable. Choosing
among them is a matter of quantitative preference. Quantitative preference
works well in diagnosis when knowledge is modelled causally. Here we suggest a
framework that combines probabilities and defaults in a single unified
framework that retains the semantics of diagnosis as construction of
explanations from a fixed set of possible hypotheses. We can then compute
probabilities incrementally as we construct explanations. Here we describe a
branch and bound algorithm that maintains a set of all partial explanations
while exploring a most promising one first. A most probable explanation is
found first if explanations are partially ordered.
|
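A compact sketch of the best-first branch-and-bound idea described above: partial explanations sit in a priority queue keyed by an optimistic bound (probability so far times the best achievable completion), so the first complete explanation popped is a most probable one. The stage structure and probabilities are illustrative assumptions, not the paper's diagnosis model.

```python
import heapq

# Each stage offers a fixed set of hypotheses with probabilities (toy numbers).
stages = [
    {"flu": 0.6, "cold": 0.3, "allergy": 0.1},
    {"fever": 0.7, "no_fever": 0.3},
    {"cough": 0.5, "no_cough": 0.5},
]

def most_probable_explanation(stages):
    """Best-first search; bound = prob so far * product of max remaining."""
    best_rest = [1.0] * (len(stages) + 1)
    for i in range(len(stages) - 1, -1, -1):
        best_rest[i] = max(stages[i].values()) * best_rest[i + 1]
    # Heap entries: (-optimistic bound, depth, partial explanation, prob so far).
    heap = [(-best_rest[0], 0, (), 1.0)]
    while heap:
        bound, depth, partial, prob = heapq.heappop(heap)
        if depth == len(stages):
            return partial, prob          # complete with the best bound: optimal
        for hyp, p in stages[depth].items():
            new_prob = prob * p
            heapq.heappush(heap, (-new_prob * best_rest[depth + 1],
                                  depth + 1, partial + (hyp,), new_prob))

print(most_probable_explanation(stages))
# (('flu', 'fever', 'cough'), ~0.21); ties broken by tuple comparison
```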
1304.2746 | Problem Structure and Evidential Reasoning | cs.AI | In our previous series of studies to investigate the role of evidential
reasoning in the RUBRIC system for full-text document retrieval (Tong et al.,
1985; Tong and Shapiro, 1985; Tong and Appelbaum, 1987), we identified the
important role that problem structure plays in the overall performance of the
system. In this paper, we focus on these structural elements (which we now call
"semantic structure") and show how explicit consideration of their properties
reduces what previously were seen as difficult evidential reasoning problems to
more tractable questions.
|
1304.2747 | The Role of Calculi in Uncertain Inference Systems | cs.AI | Much of the controversy about methods for automated decision making has
focused on specific calculi for combining beliefs or propagating uncertainty.
We broaden the debate by (1) exploring the constellation of secondary tasks
surrounding any primary decision problem, and (2) identifying knowledge
engineering concerns that present additional representational tradeoffs. We
argue on pragmatic grounds that the attempt to support all of these tasks
within a single calculus is misguided. In the process, we note several
uncertain reasoning objectives that conflict with the Bayesian ideal of
complete specification of probabilities and utilities. In response, we advocate
treating the uncertainty calculus as an object language for reasoning
mechanisms that support the secondary tasks. Arguments against Bayesian
decision theory are weakened when the calculus is relegated to this role.
Architectures for uncertainty handling that take statements in the calculus as
objects to be reasoned about offer the prospect of retaining normative status
with respect to decision making while supporting the other tasks in uncertain
reasoning.
|
1304.2748 | The Role of Tuning Uncertain Inference Systems | cs.AI | This study examined the effects of "tuning" the parameters of the incremental
function of MYCIN, the independent function of PROSPECTOR, a probability model
that assumes independence, and a simple additive linear equation. The parameters
of each of these models were optimized to provide solutions which most nearly
approximated those from a full probability model for a large set of simple
networks. Surprisingly, MYCIN, PROSPECTOR, and the linear equation performed
equivalently; the independence model was clearly more accurate on the networks
studied.
|
1304.2749 | Evidential Reasoning in Image Understanding | cs.CV cs.AI | In this paper, we present some results of evidential reasoning in
understanding multispectral images of remote sensing systems. The
Dempster-Shafer approach of combination of evidences is pursued to yield
contextual classification results, which are compared with previous results of
the Bayesian context free classification, contextual classifications of dynamic
programming and stochastic relaxation approaches.
|
1304.2750 | Implementing a Bayesian Scheme for Revising Belief Commitments | cs.AI | Our previous work on classifying complex ship images [1,2] has evolved into
an effort to develop software tools for building and solving generic
classification problems. Managing the uncertainty associated with feature data
and other evidence is an important issue in this endeavor. Bayesian techniques
for managing uncertainty [7,12,13] have proven to be useful for managing
several of the belief maintenance requirements of classification problem
solving. One such requirement is the need to give qualitative explanations of
what is believed. Pearl [11] addresses this need by computing what he calls a
belief commitment: the most probable instantiation of all hypothesis variables
given the evidence available. Before belief commitments can be computed, the
straightforward implementation of Pearl's procedure involves finding an
analytical solution to some often difficult optimization problems. We describe
an efficient implementation of this procedure using tensor products that solves
these problems enumeratively and avoids the need for case by case analysis. The
procedure is thereby made more practical to use in the general case.
|
1304.2751 | Integrating Logical and Probabilistic Reasoning for Decision Making | cs.AI | We describe a representation and a set of inference methods that combine
logic programming techniques with probabilistic network representations for
uncertainty (influence diagrams). The techniques emphasize the dynamic
construction and solution of probabilistic and decision-theoretic models for
complex and uncertain domains. Given a query, a logical proof is produced if
possible; if not, an influence diagram based on the query and the knowledge of
the decision domain is produced and subsequently solved. A uniform declarative,
first-order, knowledge representation is combined with a set of integrated
inference procedures for logical, probabilistic, and decision-theoretic
reasoning.
|
1304.2752 | Compiling Fuzzy Logic Control Rules to Hardware Implementations | cs.AI | A major aspect of human reasoning involves the use of approximations.
Particularly in situations where the decision-making process is under stringent
time constraints, decisions are based largely on approximate, qualitative
assessments of the situations. Our work is concerned with the application of
approximate reasoning to real-time control. Because of the stringent processing
speed requirements in such applications, hardware implementations of fuzzy
logic inferencing are being pursued. We describe a programming environment for
translating fuzzy control rules into hardware realizations. Two methods of
hardware realization are possible. The first is based on a special-purpose
chip for fuzzy inferencing. The second is based on a simple memory chip. The
ability to directly translate a set of decision rules into hardware
implementations is expected to make fuzzy control an increasingly practical
approach to the control of complex systems.
|
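A small sketch of the second realization path, compiling a one-input fuzzy controller into a memory look-up table: the rules are evaluated offline for every quantized input code, and the run-time "hardware" only indexes an array. The membership functions, rule base, and 8-bit quantization are illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def controller(err):
    """Rules: if err is negative -> output 1; zero -> 0.5; positive -> 0.
    Weighted-average (centroid-style) defuzzification."""
    w = np.array([tri(err, -1.5, -1, 0), tri(err, -1, 0, 1), tri(err, 0, 1, 1.5)])
    outs = np.array([1.0, 0.5, 0.0])
    return float(np.dot(w, outs) / w.sum())

# "Compile": evaluate the controller offline for all 256 quantized input codes.
codes = np.linspace(-1.0, 1.0, 256)
LUT = np.array([controller(x) for x in codes])   # this table goes in the ROM

def run_time(err):                               # hardware side: lookup only
    idx = int(round((err + 1.0) / 2.0 * 255))
    return LUT[min(max(idx, 0), 255)]

print(run_time(-0.5), run_time(0.0), run_time(0.5))   # 0.75 0.5 0.25
```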
1304.2753 | Steps Towards Programs that Manage Uncertainty | cs.AI | Reasoning under uncertainty in AI has come to mean assessing the credibility
of hypotheses inferred from evidence. But techniques for assessing credibility
do not tell a problem solver what to do when it is uncertain. This is the focus
of our current research. We have developed a medical expert system called MUM,
for Managing Uncertainty in Medicine, that plans diagnostic sequences of
questions, tests, and treatments. This paper describes the kinds of problems
that MUM was designed to solve and gives a brief description of its
architecture. More recently, we have built an empty version of MUM called MU,
and used it to reimplement MUM and a small diagnostic system for plant
pathology. The latter part of the paper describes the features of MU that make
it appropriate for building expert systems that manage uncertainty.
|
1304.2754 | An Algorithm for Computing Probabilistic Propositions | cs.AI | A method for computing probabilistic propositions is presented. It assumes
the availability of a single external routine for computing the probability of
one instantiated variable, given a conjunction of other instantiated variables.
In particular, the method allows belief network algorithms to calculate general
probabilistic propositions over nodes in the network. Although in the worst
case the time complexity of the method is exponential in the size of a query,
it is polynomial in the size of a number of common types of queries.
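A rough sketch of the idea (not the paper's algorithm itself): given the single external routine P(variable = value | conjunction of instantiated variables), the probability of an arbitrary proposition is obtained by chaining the routine along an ordering of the variables and summing the joint probabilities of the satisfying assignments. The toy conditional routine below is an assumption, and the enumeration is exponential in the number of variables the query mentions, matching the worst case noted above.

from itertools import product

def prob_of_proposition(variables, proposition, cond_prob):
    """P(proposition) as the sum of joint probabilities of satisfying
    assignments; cond_prob(var, value, evidence) is the external routine."""
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not proposition(assignment):
            continue
        joint, evidence = 1.0, {}
        for var in variables:           # chain rule over the ordering
            joint *= cond_prob(var, assignment[var], dict(evidence))
            evidence[var] = assignment[var]
        total += joint
    return total

def cond_prob(var, value, evidence):    # toy stand-in for a belief network
    p = 0.3 if var == "rain" else (0.9 if evidence.get("rain") else 0.1)
    return p if value else 1.0 - p

print(prob_of_proposition(["rain", "wet"],
                          lambda a: a["rain"] or a["wet"], cond_prob))  # 0.37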
|
1304.2755 | Combining Symbolic and Numeric Approaches to Uncertainty Management | cs.AI | A complete approach to reasoning under uncertainty requires support for
incremental and interactive formulation and revision of, as well as reasoning
with, models of the problem domain capable of representing our uncertainty. We
present a hybrid reasoning scheme which combines symbolic and numeric methods
for uncertainty management to provide efficient and effective support for each
of these tasks. The hybrid is based on symbolic techniques adapted from
Assumption-based Truth Maintenance systems (ATMS), combined with numeric
methods adapted from the Dempster/Shafer theory of evidence, as extended in
Baldwin's Support Logic Programming system. The hybridization is achieved by
viewing an ATMS as a symbolic algebra system for uncertainty calculations. In
addition to the ability to dynamically determine hypothesis spaces, this
technique has several major advantages over conventional methods for
performing inference with numeric certainty estimates: improved management of
dependent and partially independent evidence, faster run-time evaluation of
propositional certainties, the ability to query the certainty value of a
proposition from multiple perspectives, and the ability to incrementally
extend or revise domain models.
|
1304.2756 | Explanation of Probabilistic Inference for Decision Support Systems | cs.AI | An automated explanation facility for Bayesian conditioning aimed at
improving user acceptance of probability-based decision support systems has
been developed. The domain-independent facility is based on an information
processing perspective on reasoning about conditional evidence that accounts
for both biased and normative inferences. Experimental results indicate that
the facility is both acceptable to naive users and effective in improving
understanding.
|
1304.2757 | Estimation Procedures for Robust Sensor Control | cs.SY cs.AI | Many robotic sensor estimation problems can be characterized in terms of
nonlinear measurement systems. These systems are contaminated with noise and
may be underdetermined from a single observation. In order to get reliable
estimation results, the system must choose views which result in an
overdetermined system. This is the sensor control problem. Accurate and
reliable sensor control requires an estimation procedure which yields both
estimates and measures of its own performance. In the case of nonlinear
measurement systems, computationally simple closed-form estimation solutions
may not exist. However, approximation techniques provide viable alternatives.
In this paper, we evaluate three estimation techniques: the extended Kalman
filter, a discrete Bayes approximation, and an iterative Bayes approximation.
We present mathematical results and simulation statistics illustrating
operating conditions where the extended Kalman filter is inappropriate for
sensor control, and discuss issues in the use of the discrete Bayes
approximation.
|
1304.2758 | Efficient Inference on Generalized Fault Diagrams | cs.AI | The generalized fault diagram, a data structure for failure analysis based on
the influence diagram, is defined. Unlike the fault tree, this structure allows
for dependence among the basic events and replicated logical elements. A
heuristic procedure is developed for efficient processing of these structures.
|
1304.2759 | Reasoning About Beliefs and Actions Under Computational Resource
Constraints | cs.AI | Although many investigators affirm a desire to build reasoning systems that
behave consistently with the axiomatic basis defined by probability theory and
utility theory, limited resources for engineering and computation can make a
complete normative analysis impossible. We attempt to move discussion beyond
the debate over the scope of problems that can be handled effectively to cases
where it is clear that there are insufficient computational resources to
perform an analysis deemed as complete. Under these conditions, we stress the
importance of considering the expected costs and benefits of applying
alternative approximation procedures and heuristics for computation and
knowledge acquisition. We discuss how knowledge about the structure of user
utility can be used to control value tradeoffs for tailoring inference to
alternative contexts. We address the notion of real-time rationality, focusing
on the application of knowledge about the expected timewise-refinement
abilities of reasoning strategies to balance the benefits of additional
computation with the costs of acting with a partial result. We discuss the
benefits of applying decision theory to control the solution of difficult
problems given limitations and uncertainty in reasoning resources.
|
1304.2760 | Advantages and a Limitation of Using LEG Nets in a Real-Time Problem | cs.AI | After experimenting with a number of non-probabilistic methods for dealing
with uncertainty many researchers reaffirm a preference for probability methods
[1] [2], although this remains controversial. The importance of being able to
form decisions from incomplete data in diagnostic problems has highlighted
probabilistic methods [5] which compute posterior probabilities from prior
distributions in a way similar to Bayes Rule, and thus are called Bayesian
methods. This paper documents the use of a Bayesian method in a real time
problem which is similar to medical diagnosis in that there is a need to form
decisions and take some action without complete knowledge of conditions in the
problem domain. This particular method has a limitation which is discussed.
|
1304.2797 | Logical Fuzzy Preferences | cs.AI | We present a unified logical framework for representing and reasoning about
both quantitative and qualitative preferences in fuzzy answer set programming,
called fuzzy answer set optimization programs. The proposed framework is vital
to allow defining quantitative preferences over the possible outcomes of
qualitative preferences. We show the application of fuzzy answer set
optimization programs to the course scheduling with fuzzy preferences problem.
To the best of our knowledge, this development is the first to consider a
logical framework for reasoning about quantitative preferences, in general, and
reasoning about both quantitative and qualitative preferences in particular.
|
1304.2798 | Optimal DNA shotgun sequencing: Noisy reads are as good as noiseless
reads | cs.IT math.IT q-bio.GN | We establish the fundamental limits of DNA shotgun sequencing under noisy
reads. We show a surprising result: for the i.i.d. DNA model, noisy reads are
as good as noiseless reads, provided that the noise level is below a certain
threshold which can be surprisingly high. As an example, for a uniformly
distributed DNA sequence and a symmetric substitution noisy read channel, the
threshold is as high as 19%.
|
1304.2799 | Nested Aggregates in Answer Sets: An Application to a Priori
Optimization | cs.AI | We allow representing and reasoning in the presence of nested multiple
aggregates over multiple variables and nested multiple aggregates over
functions involving multiple variables in answer sets, precisely, in answer set
optimization programming and in answer set programming. We show the
applicability of the answer set optimization programming with nested multiple
aggregates and the answer set programming with nested multiple aggregates to
the Probabilistic Traveling Salesman Problem, a fundamental a priori
optimization problem in Operations Research.
|
1304.2809 | On partial sparse recovery | cs.IT math.IT math.OC | We consider the problem of recovering a partially sparse solution of an
underdetermined system of linear equations by minimizing the $\ell_1$-norm of
the part of the solution vector which is known to be sparse. Such a problem is
closely related to a classical problem in Compressed Sensing where the
$\ell_1$-norm of the whole solution vector is minimized. We introduce analogues
of restricted isometry and null space properties for the recovery of partially
sparse vectors and show that these new properties are implied by their original
counterparts. We show also how to extend recovery under noisy measurements to
the partially sparse case.
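For concreteness, the problem min ||x_B||_1 subject to Ax = b, where only the block x_B is known to be sparse, admits the usual linear-programming lift; the sketch below (using scipy) is a generic reformulation for illustration, not the paper's code.

import numpy as np
from scipy.optimize import linprog

def partial_l1_recovery(A, b, sparse_idx):
    """min sum(t) over (x, t) with -t <= x[sparse_idx] <= t and A x = b."""
    m, n = A.shape
    k = len(sparse_idx)
    c = np.concatenate([np.zeros(n), np.ones(k)])
    A_ub = np.zeros((2 * k, n + k))
    for r, j in enumerate(sparse_idx):
        A_ub[r, j], A_ub[r, n + r] = 1.0, -1.0           #  x_j - t_r <= 0
        A_ub[k + r, j], A_ub[k + r, n + r] = -1.0, -1.0  # -x_j - t_r <= 0
    A_eq = np.hstack([A, np.zeros((m, k))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * k), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * k)
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[:5] = rng.standard_normal(5)   # dense, unpenalized part
x_true[7] = 2.0                       # sparse part: a single nonzero
x_hat = partial_l1_recovery(A, A @ x_true, sparse_idx=list(range(5, 30)))
print("recovery error:", np.linalg.norm(x_hat - x_true))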
|
1304.2850 | Entropy landscape of solutions in the binary perceptron problem | cond-mat.dis-nn cond-mat.stat-mech cs.LG | The statistical picture of the solution space for a binary perceptron is
studied. The binary perceptron learns a random classification of input random
patterns by a set of binary synaptic weights. The learning of this network is
difficult especially when the pattern (constraint) density is close to the
capacity, which is supposed to be intimately related to the structure of the
solution space. The geometrical organization is elucidated by the entropy
landscape, computed both from a reference configuration and over
solution-pairs separated by a given Hamming distance in the solution space. We
evaluate the entropy at the annealed level as well as at the replica symmetric
level, and the mean-field result is confirmed by numerical simulations on
single instances using the proposed message passing algorithms. From the first
landscape (a random configuration as
a reference), we see clearly how the solution space shrinks as more constraints
are added. From the second landscape of solution-pairs, we deduce the
coexistence of clustering and freezing in the solution space.
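For intuition at toy sizes (nothing like the paper's message-passing machinery), one can enumerate the solution space of a small binary perceptron directly and watch the entropy shrink as constraints are added; the sizes and seed below are arbitrary.

import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 12                                          # binary synaptic weights
patterns = rng.choice([-1, 1], size=(10, N))    # random input patterns
labels = rng.choice([-1, 1], size=10)           # random classification

weights = np.array(list(product([-1, 1], repeat=N)))   # all 2^N candidates
for P in range(1, len(patterns) + 1):
    X, y = patterns[:P], labels[:P]
    # a weight vector is a solution if it classifies all P patterns correctly
    ok = np.all((weights @ X.T) * y > 0, axis=1)
    n_sol = int(ok.sum())
    entropy = np.log(n_sol) / N if n_sol else float("-inf")
    print(f"P={P:2d}  solutions={n_sol:5d}  entropy per weight={entropy:.3f}")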
|
1304.2865 | The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New
DCF | stat.AP cs.LG stat.ML | The change of two orders of magnitude in the 'new DCF' of NIST's SRE'10,
relative to the 'old DCF' evaluation criterion, posed a difficult challenge for
participants and evaluator alike. Initially, participants were at a loss as to
how to calibrate their systems, while the evaluator underestimated the required
number of evaluation trials. After the fact, it is now obvious that both
calibration and evaluation require very large sets of trials. This poses the
challenges of (i) how to decide what number of trials is enough, and (ii) how
to process such large data sets with reasonable memory and CPU requirements.
After SRE'10, at the BOSARIS Workshop, we built solutions to these problems
into the freely available BOSARIS Toolkit. This paper explains the principles
and algorithms behind this toolkit. The main contributions of the toolkit are:
1. The Normalized Bayes Error-Rate Plot, which analyses likelihood-ratio
calibration over a wide range of DCF operating points. These plots also help in
judging the adequacy of the sizes of calibration and evaluation databases. 2.
Efficient algorithms to compute DCF and minDCF for large score files, over the
range of operating points required by these plots. 3. A new score file format,
which facilitates working with very large trial lists. 4. A faster logistic
regression optimizer for fusion and calibration. 5. A principled way to define
EER (equal error rate), which is of practical interest when the absolute error
count is small.
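As a flavor of point 2, a minimal (unoptimized) minDCF computation over raw scores might look as follows; this brute-force threshold sweep is a sketch for illustration, not the toolkit's algorithm.

import numpy as np

def min_dcf(tar, non, p_tar, c_miss=1.0, c_fa=1.0):
    """Minimum normalized detection cost over all score thresholds."""
    scores = np.concatenate([tar, non])
    labels = np.concatenate([np.ones(len(tar)), np.zeros(len(non))])
    labels = labels[np.argsort(scores)]
    # sweep the threshold upward: misses accumulate, false alarms drop
    p_miss = np.concatenate([[0.0], np.cumsum(labels) / len(tar)])
    p_fa = np.concatenate([[1.0], 1.0 - np.cumsum(1 - labels) / len(non)])
    dcf = c_miss * p_tar * p_miss + c_fa * (1 - p_tar) * p_fa
    return dcf.min() / min(c_miss * p_tar, c_fa * (1 - p_tar))

rng = np.random.default_rng(0)
tar = rng.normal(2.0, 1.0, 1000)     # synthetic target scores
non = rng.normal(0.0, 1.0, 10000)    # synthetic non-target scores
print(min_dcf(tar, non, p_tar=0.001))   # a 'new DCF'-style operating point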
|
1304.2867 | Guidelines to the Problem of Location Management and Database
Architecture for the Next Generation Mobile Networks | cs.NI cs.DB | In the near future, the anticipated large number of mobile users may lead to
very large centralized databases and increased end-to-end delays in location
registration and call delivery, making the HLR-VLR database architecture
infeasible. After observing several problems we propose some guidelines. A
multitree distributed database, a high-throughput index structure, and a
memory-oriented database organization are used. Location management guidelines
for moving users in overlapping networks, a neighbor discovery protocol (NDP),
and a global roaming rule are adopted. An analytic model and examples are
presented to evaluate the efficiency of the proposed guidelines.
|
1304.2888 | Roborobo! a Fast Robot Simulator for Swarm and Collective Robotics | cs.RO cs.AI cs.NE | Roborobo! is a multi-platform, highly portable, robot simulator for
large-scale collective robotics experiments. Roborobo! is coded in C++, and
follows the KISS guideline ("Keep it simple"). Therefore, its external
dependency is solely limited to the widely available SDL library for fast 2D
Graphics. Roborobo! is based on a Khepera/ePuck model. It is targeted for fast
single- and multi-robot simulation, and has already been used in more than a
dozen published studies, mainly concerned with evolutionary swarm robotics,
including environment-driven self-adaptation and distributed evolutionary
optimization, as well as online onboard embodied evolution and embodied
morphogenesis.
|
1304.2917 | MODULAR: Software for the Autonomous Computation of Modularity in Large
Network Sets | q-bio.QM cs.SI physics.soc-ph | Ecological systems can be seen as networks of interactions between
individuals, species, or habitat patches. A key feature of many ecological
networks is their organization into modules, which are subsets of elements that
are more connected to each other than to the other elements in the network. We
introduce MODULAR to perform rapid and autonomous calculation of modularity in
sets of networks. MODULAR reads a set of files with matrices or edge lists that
represent unipartite or bipartite networks, and identifies modules using two
different modularity metrics that have been previously used in studies of
ecological networks. To find the network partition that maximizes modularity,
the software offers five optimization methods to the user. We also included two
of the most common null models that are used in studies of ecological networks
to verify how the modularity found by the maximization of each metric differs
from a theoretical benchmark.
|
1304.2924 | Motifs in Triadic Random Graphs based on Steiner Triple Systems | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | Conventionally, pairwise relationships between nodes are considered to be the
fundamental building blocks of complex networks. However, over the last decade
the overabundance of certain sub-network patterns, so-called motifs, has
attracted considerable attention. It has been hypothesized that these motifs,
rather than links, serve as the building blocks of network structures.
Although the relation between a network's topology and the general properties
of the system, such as its function, its robustness against perturbations, or
its efficiency in spreading information is the central theme of network
science, there is still a lack of sound generative models needed for testing
the functional role of subgraph motifs. Our work aims to overcome this
limitation.
We employ the framework of exponential random graphs (ERGMs) to define novel
models based on triadic substructures. The fact that only a small portion of
triads can actually be set independently poses a challenge for the formulation
of such models. To overcome this obstacle we use Steiner Triple Systems (STS).
These are partitions of sets of nodes into pair-disjoint triads, which thus can
be specified independently. Combining the concepts of ERGMs and STS, we suggest
novel generative models capable of generating ensembles of networks with
non-trivial triadic Z-score profiles. Further, we discover inevitable
correlations between the abundance of triad patterns, which occur solely for
statistical reasons and need to be taken into account when discussing the
functional implications of motif statistics. Moreover, we calculate the degree
distributions of our triadic random graphs analytically.
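A quick way to see why Steiner Triple Systems supply independently specifiable triads: in an STS every pair of points lies in exactly one triple, so no two triples share a pair. The sketch below uses the classical Bose construction for v = 6k+3 points, which is our illustrative choice and not necessarily the construction used in the paper, and verifies the pair property.

from itertools import combinations

def bose_sts(k):
    """Bose construction of a Steiner triple system on v = 6k+3 points;
    points are (i, r) with i in Z_{2k+1} and r in {0, 1, 2}."""
    n = 2 * k + 1
    op = lambda i, j: ((k + 1) * (i + j)) % n   # quasigroup: (i+j)/2 mod n
    triples = [{(i, 0), (i, 1), (i, 2)} for i in range(n)]
    for i, j in combinations(range(n), 2):
        for r in range(3):
            triples.append({(i, r), (j, r), (op(i, j), (r + 1) % 3)})
    return triples

sts = bose_sts(2)                                # v = 15 points
pairs = [p for t in sts for p in combinations(sorted(t), 2)]
assert len(pairs) == len(set(pairs)) == 15 * 14 // 2   # each pair once
print(len(sts), "pair-disjoint triples on 15 points")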
|
1304.2948 | A Boolean Model for the Enumeration of Minimal Siphons and Traps in Petri Nets | cs.CE cs.LO | Petri-nets are a simple formalism for modeling concurrent computation.
Recently, they have emerged as a powerful tool for the modeling and analysis of
biochemical reaction networks, bridging the gap between purely qualitative and
quantitative models. These networks can be large and complex, which makes their
study difficult and computationally challenging. In this paper, we focus on two
structural properties of Petri-nets, siphons and traps, that bring us
information about the persistence of some molecular species. We present two
methods for enumerating all minimal siphons and traps of a Petri-net by
iterating the resolution of a boolean model interpreted as either a SAT or a
CLP(B) program. We compare the performance of these methods with a
state-of-the-art dedicated algorithm of the Petri-net community. We show that
the SAT and CLP(B) programs are both faster. We analyze why these programs
perform so well on the models of the repository of biological models
biomodels.net, and propose some hard instances for the problem of minimal
siphons enumeration.
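The structural definitions are easy to state: a set of places S is a siphon iff every transition with an output place in S also has an input place in S (for traps, swap inputs and outputs). The brute-force enumeration below is exponential, unlike the paper's SAT and CLP(B) methods, and is meant only to make the definition concrete on a toy net.

from itertools import combinations

def is_siphon(S, pre, post):
    """Every transition putting tokens into S also takes tokens from S."""
    return all(set(pre[t]) & S for t in post if set(post[t]) & S)

def minimal_siphons(places, pre, post):
    found = []
    for r in range(1, len(places) + 1):          # by increasing size
        for cand in map(set, combinations(places, r)):
            if is_siphon(cand, pre, post) and \
               not any(s <= cand for s in found):
                found.append(cand)
    return found

# toy net: pre[t] = input places of t, post[t] = output places of t
pre = {"t1": ["p1"], "t2": ["p2"], "t3": ["p2", "p3"]}
post = {"t1": ["p2"], "t2": ["p1", "p3"], "t3": ["p1"]}
print(minimal_siphons(["p1", "p2", "p3"], pre, post))   # [{'p1', 'p2'}]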
|
1304.2974 | Are Friends Overrated? A Study for the Social News Aggregator Digg.com | cs.SI physics.soc-ph | The key feature of online social networks (OSN) is the ability of users to
become active, make friends and interact via comments, videos or messages with
those around them. This social interaction is typically perceived as critical
to the proper functioning of these platforms; therefore, a significant share of
OSN research in the recent past has investigated the characteristics and
importance of these social links, studying the networks' friendship relations
through their topological properties, the structure of the resulting
communities and identifying the role and importance of individual members
within these networks.
In this paper, we present results from a multi-year study of the online
social network Digg.com, indicating that the importance of friends and the
friend network in the propagation of information is less than originally
perceived. While we do note that users form and maintain a social structure
along which information is exchanged, the importance of these links and their
contribution is very low: Users with even a nearly identical overlap in
interests react on average only with a probability of 2% to information
propagated and received from friends. Furthermore, in only about 50% of the
stories that became popular out of the entire body of 10 million news items do
we find evidence that the social ties among users were a critical ingredient
to the successful spread. Our findings indicate the presence of previously
unconsidered factors (the temporal alignment between user activities and the
existence of additional logical relationships beyond the topology of the
social graph) that are able to drive and steer the dynamics of such OSNs.
|
1304.2994 | A Generalized Online Mirror Descent with Applications to Classification
and Regression | cs.LG | Online learning algorithms are fast, memory-efficient, easy to implement, and
applicable to many prediction problems, including classification, regression,
and ranking. Several online algorithms were proposed in the past few decades,
some based on additive updates, like the Perceptron, and some on multiplicative
updates, like Winnow. A unifying perspective on the design and the analysis of
online algorithms is provided by online mirror descent, a general prediction
strategy from which most first-order algorithms can be obtained as special
cases. We generalize online mirror descent to time-varying regularizers with
generic updates. Unlike standard mirror descent, our more general formulation
also captures second order algorithms, algorithms for composite losses and
algorithms for adaptive filtering. Moreover, we recover, and sometimes improve,
known regret bounds as special cases of our analysis using specific
regularizers. Finally, we show the power of our approach by deriving a new
second order algorithm with a regret bound invariant with respect to arbitrary
rescalings of individual features.
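To fix ideas: with the squared Euclidean regularizer, online mirror descent reduces to online gradient descent, and a time-varying regularizer shows up as a time-varying step size. The sketch below runs this special case on hinge-loss classification; the data model and step schedule are illustrative assumptions.

import numpy as np

def online_mirror_descent(stream, dim, eta0=0.5):
    """OMD with r_t(w) = ||w||^2 / (2 eta_t): the mirror step is then
    plain gradient descent with step size eta_t."""
    w, mistakes = np.zeros(dim), 0
    for t, (x, y) in enumerate(stream, start=1):
        if y * (w @ x) <= 0:
            mistakes += 1
        g = -y * x if y * (w @ x) < 1 else np.zeros(dim)  # hinge subgradient
        w = w - (eta0 / np.sqrt(t)) * g    # time-varying regularization
    return w, mistakes

rng = np.random.default_rng(2)
w_star = rng.standard_normal(20)
stream = [(x, np.sign(w_star @ x))
          for x in rng.standard_normal((2000, 20))]
w, mistakes = online_mirror_descent(stream, dim=20)
print("online mistakes:", mistakes)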
|
1304.2998 | Detecting Directionality in Random Fields Using the Monogenic Signal | cs.IT cs.CV math.IT | Detecting and analyzing directional structures in images is important in many
applications since one-dimensional patterns often correspond to important
features such as object contours or trajectories. Classifying a structure as
directional or non-directional requires a measure to quantify the degree of
directionality and a threshold, which needs to be chosen based on the
statistics of the image. In order to do this, we model the image as a random
field. So far, little research has been performed on analyzing directionality
in random fields. In this paper, we propose a measure to quantify the degree of
directionality based on the random monogenic signal, which enables a unique
decomposition of a 2D signal into local amplitude, local orientation, and local
phase. We investigate the second-order statistical properties of the monogenic
signal for isotropic, anisotropic, and unidirectional random fields. We analyze
our measure of directionality for finite-size sample images, and determine a
threshold to distinguish between unidirectional and non-unidirectional random
fields, which allows the automatic classification of images.
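A minimal numerical sketch of the ingredients: the Riesz transform computed in the Fourier domain, plus a crude amplitude-weighted coherence of the doubled local orientation as a stand-in directionality score. The score and the test patterns are our illustrative choices, not the paper's measure or threshold.

import numpy as np

def riesz(image):
    """Riesz transform pair of a 2D signal, computed in the Fourier domain."""
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    norm = np.sqrt(u**2 + v**2)
    norm[0, 0] = 1.0                        # avoid division by zero at DC
    F = np.fft.fft2(image - image.mean())
    r1 = np.real(np.fft.ifft2(F * (-1j * u / norm)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / norm)))
    return r1, r2

def directionality(image):
    """Score in [0, 1]: amplitude-weighted coherence of the doubled local
    orientation; close to 1 for unidirectional fields."""
    r1, r2 = riesz(image)
    amp = np.hypot(r1, r2) + 1e-12
    theta = np.arctan2(r2, r1)
    return np.abs(np.sum(amp * np.exp(2j * theta))) / np.sum(amp)

rng = np.random.default_rng(0)
stripes = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
print("stripes:", directionality(stripes))                        # near 1
print("noise:  ", directionality(rng.standard_normal((64, 64))))  # near 0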
|
1304.2999 | A New Approach To Two-View Motion Segmentation Using Global Dimension
Minimization | cs.CV | We present a new approach to rigid-body motion segmentation from two views.
We use a previously developed nonlinear embedding of two-view point
correspondences into a 9-dimensional space and identify the different motions
by segmenting lower-dimensional subspaces. In order to overcome nonuniform
distributions along the subspaces, whose dimensions are unknown, we suggest the
novel concept of global dimension and its minimization for clustering subspaces
with some theoretical motivation. We propose a fast projected gradient
algorithm for minimizing global dimension and thus segmenting motions from
2-views. We develop an outlier detection framework around the proposed method,
and we present state-of-the-art results on outlier-free and outlier-corrupted
two-view data for segmenting motion.
|
1304.3010 | Characterizing the Life Cycle of Online News Stories Using Social Media
Reactions | cs.SI cs.CY physics.soc-ph | This paper presents a study of the life cycle of news articles posted online.
We describe the interplay between website visitation patterns and social media
reactions to news content. We show that we can use this hybrid observation
method to characterize distinct classes of articles. We also find that social
media reactions can help predict future visitation patterns early and
accurately.
We validate our methods using qualitative analysis as well as quantitative
analysis on data from a large international news network, for a set of articles
generating more than 3,000,000 visits and 200,000 social media reactions. We
show that it is possible to model accurately the overall traffic articles will
ultimately receive by observing the first ten to twenty minutes of social media
reactions. Achieving the same prediction accuracy with visits alone would
require waiting for three hours of data. We also describe significant
improvements on the accuracy of the early prediction of shelf-life for news
stories.
|
1304.3013 | The influence of repressive legislation on the structure of a social
media network | physics.soc-ph cs.SI | Social media have been widely used to organize citizen movements. In 2012,
75% of university and college students in Quebec, Canada, participated in mass
protests against an increase in tuition fees, mainly organized using social
media. To reduce public disruption, the government introduced special
legislation designed to impede protest organization. Here, we show that the
legislation changed the behaviour of social media users but not the overall
structure of their social network on Twitter. Thus, users were still able to
spread information to efficiently organize demonstrations using their social
network. This natural experiment shows the power of social media in political
mobilization, as well as behavioural flexibility in information flow over a
large number of individuals.
|
1304.3016 | Autonomous Algorithms for Centralized and Distributed Interference
Coordination: A Virtual Layer Based Approach | cs.IT math.IT | Interference mitigation techniques are essential for improving the
performance of interference limited wireless networks. In this paper, we
introduce novel interference mitigation schemes for wireless cellular networks
with space division multiple access (SDMA). The schemes are based on a virtual
layer that captures and simplifies the complicated interference situation in
the network and that is used for power control. We show how optimization in
this virtual layer generates gradually adapting power control settings that
lead to autonomous interference minimization. The granularity of control
ranges from controlling frequency sub-band power, through controlling power on
a per-beam basis, to merely enforcing average power constraints per beam. In
conjunction with suitable short-term scheduling, our
algorithms gradually steer the network towards a higher utility. We use
extensive system-level simulations to compare three distributed algorithms and
evaluate their applicability for different user mobility assumptions. In
particular, it turns out that larger gains can be achieved by imposing average
power constraints and allowing opportunistic scheduling instantaneously, rather
than controlling the power in a strict way. Furthermore, we introduce a
centralized algorithm, which directly solves the underlying optimization and
shows fast convergence, as a performance benchmark for the distributed
solutions. Moreover, we investigate the deviation from global optimality by
comparing to a branch-and-bound-based solution.
|
1304.3056 | Anticipatory Buffer Control and Resource Allocation for Wireless Video
Streaming | cs.MM cs.NI cs.SY | This paper describes a new approach for allocating resources to video
streaming traffic. Assuming that the future channel state can be predicted for
a certain time, we minimize the fraction of the bandwidth consumed for smooth
streaming by jointly allocating wireless channel resources and play-out buffer
size. To formalize this idea, we introduce a new model to capture the dynamic
of a video streaming buffer and the allocated spectrum in an optimization
problem. The result is a Linear Program that allows trading off buffer size
against allocated bandwidth. Based on this tractable model, our simulation
results show that anticipating poor channel states and pre-loading the buffer
accordingly makes it possible to serve more users at perfect video quality.
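A toy version of the idea can be written as an LP directly: given predicted per-slot channel rates, minimize the total allocated bandwidth fraction subject to the play-out buffer never running dry. The buffer model and numbers below are simplifications assumed for illustration.

import numpy as np
from scipy.optimize import linprog

rates = np.array([8.0, 6.0, 1.0, 1.0, 7.0, 9.0, 2.0, 8.0])  # predicted Mbit/s
playout = 2.0                 # constant video play-out rate (Mbit/s)
T = len(rates)

# variable a_t in [0, 1]: fraction of slot t's bandwidth allocated;
# feasibility: cumulative intake sum_{u<=t} a_u*rates_u >= playout*t
c = np.ones(T)                                  # minimize total allocation
A_ub = -np.tril(np.ones((T, T))) * rates        # -(cumulative intake)
b_ub = -playout * np.arange(1, T + 1)           # <= -(cumulative demand)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * T)
print("allocation per slot:", np.round(res.x, 2))  # pre-loads before bad slots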
|
1304.3059 | Simple and Generic Simulator Algorithm for Inhomogeneous Random Spatial
Deployment | cs.NI cs.IT math.IT | We conceptualized a straightforward and flexible approach for random spatial
inhomogeneity by proposing the area-specific deployment (ASD) algorithm, which
takes into account the clustering tendency of users. In fact, the ASD method
has the advantage of achieving a more realistic heterogeneous deployment based
on limited planning inputs, while still preserving the stochastic character of
user positions. We then applied this technique to different circumstances, and
developed spatial-level network algorithms for controlled and uncontrolled
cellular network deployments. Overall, the derived simulator tools will be
readily useful for designers and deployment planners modeling a host of
multi-coverage and multi-scale wireless network situations.
|
1304.3071 | Minimal Controllability Problems | math.OC cs.SY | Given a linear system, we consider the problem of finding a small set of
variables to affect with an input so that the resulting system is controllable.
We show that this problem is NP-hard; indeed, we show that even approximating
the minimum number of variables that need to be affected within a
multiplicative factor of $c \log n$ is NP-hard for some positive $c$. On the
positive side, we show it is possible to find sets of variables matching this
inapproximability barrier in polynomial time. This can be done by a simple
greedy heuristic which sequentially picks variables to maximize the rank
increase of the controllability matrix. Experiments on Erdos-Renyi random
graphs demonstrate this heuristic almost always succeeds at finding the
minimum number of variables.
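The greedy heuristic itself fits in a few lines: repeatedly actuate the variable whose addition most increases the rank of the controllability matrix [B, AB, ..., A^{n-1}B]. The sketch below is a direct, unoptimized transcription run on an arbitrary random system.

import numpy as np

def ctrb_rank(A, cols):
    """Rank of [B, AB, ..., A^{n-1}B], where B selects variables in cols."""
    n = A.shape[0]
    M = np.eye(n)[:, cols]
    blocks = [M]
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

def greedy_minimal_actuators(A):
    n = A.shape[0]
    chosen, rank = [], 0
    while rank < n:
        # pick the variable giving the largest rank increase
        rank, best = max((ctrb_rank(A, chosen + [j]), j)
                         for j in range(n) if j not in chosen)
        chosen.append(best)
    return chosen

rng = np.random.default_rng(3)
A = (rng.random((8, 8)) < 0.25).astype(float)   # sparse random dynamics
print("actuated variables:", greedy_minimal_actuators(A))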
|
1304.3075 | Application of Evidential Reasoning to Helicopter Flight Path Control | cs.AI cs.RO cs.SY | This paper presents a methodology for research and development of the
inferencing and knowledge representation aspects of an Expert System approach
for performing reasoning under uncertainty in support of a real time vehicle
guidance and navigation system. Such a system could be of major benefit for
non-terrain following low altitude flight systems operating in foreign hostile
environments such as might be experienced by NOE helicopters or similar mission
craft. An innovative extension of the evidential reasoning methodology, termed
the Sum-and-Lattice-Points Method, has been developed. The research and
development effort presented in this paper consists of a formal mathematical
development of the Sum-and-Lattice-Points Method, its formulation and
representation in a parallel environment, prototype software development of the
method within an expert system, and initial testing of the system within the
confines of the vehicle guidance system.
|
1304.3076 | Knowledge Engineering Within A Generalized Bayesian Framework | cs.AI | During the ongoing debate over the representation of uncertainty in
Artificial Intelligence, Cheeseman, Lemmer, Pearl, and others have argued that
probability theory, and in particular the Bayesian theory, should be used as
the basis for the inference mechanisms of Expert Systems dealing with
uncertainty. In order to pursue the issue in a practical setting, sophisticated
tools for knowledge engineering are needed that allow flexible and
understandable interaction with the underlying knowledge representation
schemes. This paper describes a Generalized Bayesian framework for building
expert systems which function in uncertain domains, using algorithms proposed
by Lemmer. It is neither rule-based nor frame-based, and requires a new system
of knowledge engineering tools. The framework we describe provides a
knowledge-based system architecture with an inference engine, explanation
capability, and a unique aid for building consistent knowledge bases.
|
1304.3077 | Taxonomy, Structure, and Implementation of Evidential Reasoning | cs.AI | The fundamental elements of evidential reasoning problems are described,
followed by a discussion of the structure of various types of problems.
Bayesian inference networks and state space formalism are used as the tool for
problem representation.
A human-oriented decision making cycle for solving evidential reasoning
problems is described and illustrated for a military situation assessment
problem. The implementation of this cycle may serve as the basis for an expert
system shell for evidential reasoning; i.e. a situation assessment processor.
|
1304.3078 | Probabilistic Reasoning About Ship Images | cs.AI | One of the most important aspects of current expert systems technology is the
ability to make causal inferences about the impact of new evidence. When the
domain knowledge and problem knowledge are uncertain and incomplete Bayesian
reasoning has proven to be an effective way of forming such inferences [3,4,8].
While several reasoning schemes have been developed based on Bayes Rule, there
has been very little work examining the comparative effectiveness of these
schemes in a real application. This paper describes a knowledge based system
for ship classification [1], originally developed using the PROSPECTOR updating
method [2], that has been reimplemented to use the inference procedure
developed by Pearl and Kim [4,5]. We discuss our reasons for making this
change, the implementation of the new inference engine, and the comparative
performance of the two versions of the system.
|
1304.3079 | Towards The Inductive Acquisition of Temporal Knowledge | cs.AI | The ability to predict the future in a given domain can be acquired by
discovering empirically from experience certain temporal patterns that tend to
repeat unerringly. Previous works in time series analysis allow one to make
quantitative predictions on the likely values of certain linear variables.
Since certain types of knowledge are better expressed in symbolic forms, making
qualitative predictions based on symbolic representations requires a different
approach. A domain independent methodology called TIM (Time based Inductive
Machine) for discovering potentially uncertain temporal patterns from real time
observations using the technique of inductive inference is described here.
|
1304.3080 | Some Extensions of Probabilistic Logic | cs.AI | In [12], Nilsson proposed the probabilistic logic in which the truth values
of logical propositions are probability values between 0 and 1. It is
applicable to any logical system for which the consistency of a finite set of
propositions can be established. The probabilistic inference scheme reduces to
ordinary logical inference when the probabilities of all propositions are
either 0 or 1. This logic has the same limitations as other probabilistic
reasoning systems of the Bayesian approach. For common sense reasoning,
consistency is not a very natural assumption. We have some well known examples:
{Dick is a Quaker, Quakers are pacifists, Republicans are not pacifists, Dick
is a Republican} and {Tweety is a bird, birds can fly, Tweety is a penguin}. In
this paper, we shall propose some extensions of the probabilistic logic. In the
second section, we shall consider the space of all interpretations, consistent
or not. In terms of frames of discernment, the basic probability assignment
(bpa) and belief function can be defined. Dempster's combination rule is
applicable. This extension of probabilistic logic is called the evidential
logic in [1]. For each proposition s, its belief function is represented by an
interval [Spt(s), Pls(s)]. When all such intervals collapse to single points,
the evidential logic reduces to probabilistic logic (in the generalized version
of not necessarily consistent interpretations). Certainly, we get Nilsson's
probabilistic logic by further restricting to consistent interpretations. In
the third section, we shall give a probabilistic interpretation of
probabilistic logic in terms of multi-dimensional random variables. This
interpretation brings the probabilistic logic into the framework of probability
theory. Let us consider a finite set S = {s_1, s_2, ..., s_n} of logical
propositions. Each proposition may take the value true or false, and may be
considered as a random variable. We have a probability distribution for each
proposition. The n-dimensional random variable (s_1, ..., s_n) may take values
in the space of all interpretations, i.e. the 2^n binary vectors. We may
compute absolute (marginal), conditional and joint probability distributions.
It turns out that the permissible probabilistic interpretation vector of
Nilsson [12] consists of the joint probabilities of S. Inconsistent
interpretations will not appear, by setting their joint probabilities to be
zeros. By summing appropriate joint probabilities, we get probabilities of
individual propositions or subsets of propositions. Since the Bayes formula and
other techniques are valid for n-dimensional random variables, the
probabilistic logic is actually very close to the Bayesian inference schemes.
In the last section, we shall consider a relaxation scheme for probabilistic
logic. In this system, not only will new evidence update the belief measures of
a collection of propositions, but constraint satisfaction among these
propositions in the relational network will also revise these measures. This
mechanism is similar to human reasoning, which is an evaluative process
converging to the most satisfactory result. The main idea arises from the
consistent labeling problem in computer vision. This method was originally
applied to scene analysis of line drawings. Later, it was applied to matching,
constraint satisfaction and multi-sensor fusion by several authors [8], [16]
(and see references cited there). Recently, this method has been used in
knowledge aggregation by Landy and Hummel [9].
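The third section's construction is easy to replay numerically: place a joint distribution on the 2^n interpretations (inconsistent ones receive probability zero) and read off sentence probabilities as sums of joint probabilities. The numbers below are arbitrary.

from itertools import product

props = ["p", "q"]
worlds = list(product([False, True], repeat=len(props)))

# joint probabilities over the 2^n interpretations (sum to 1); an
# inconsistent interpretation would simply be assigned probability 0
joint = {(False, False): 0.1, (False, True): 0.2,
         (True, False): 0.3, (True, True): 0.4}

def prob(sentence):
    """P(sentence) = sum of joint probabilities of satisfying worlds."""
    return sum(joint[w] for w in worlds if sentence(dict(zip(props, w))))

print(prob(lambda v: v["p"]))                   # marginal of p: 0.7
print(prob(lambda v: v["p"] or v["q"]))         # 0.9
print(prob(lambda v: not v["p"] or v["q"]))     # p implies q: 0.7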
|
1304.3081 | Predicting The Performance of Minimax and Product in Game-Tree | cs.AI | The discovery that the minimax decision rule performs poorly in some games
has sparked interest in possible alternatives to minimax. Until recently, the
only games in which minimax was known to perform poorly were games which were
mainly of theoretical interest. However, this paper reports results showing
poor performance of minimax in a more common game called kalah. For the kalah
games tested, a non-minimax decision rule called the product rule performs
significantly better than minimax.
This paper also discusses a possible way to predict whether or not minimax
will perform well in a game when compared to product. A parameter called the
rate of heuristic flaw (rhf) has been found to correlate positively with the
performance of product against minimax. Both analytical and experimental
results are given that appear to support the predictive power of rhf.
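The two backup rules differ only in how leaf values are propagated: minimax takes the max or min over children, while the product rule treats child values as independent win probabilities. The comparison below on a random uniform tree is purely illustrative; the leaf-value model is an assumption.

import random

def backup(values, depth, branching, rule, max_node=True):
    """Back up leaf win-probability estimates through a uniform game tree."""
    if depth == 0:
        return values.pop()
    kids = [backup(values, depth - 1, branching, rule, not max_node)
            for _ in range(branching)]
    if rule == "minimax":
        return max(kids) if max_node else min(kids)
    if max_node:                 # product rule: win if any child wins
        p = 1.0
        for k in kids:
            p *= 1.0 - k
        return 1.0 - p
    p = 1.0                      # min node: win only if every child wins
    for k in kids:
        p *= k
    return p

random.seed(0)
depth, branching = 6, 2
leaves = [random.random() for _ in range(branching ** depth)]
for rule in ("minimax", "product"):
    print(rule, round(backup(leaves.copy(), depth, branching, rule), 4))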
|
1304.3082 | Reasoning With Uncertain Knowledge | cs.AI | A model of knowledge representation is described in which propositional facts
and the relationships among them can be supported by other facts. The set of
knowledge which can be supported is called the set of cognitive units, each
having associated descriptions of their explicit and implicit support
structures, summarizing belief and reliability of belief. This summary is
precise enough to be useful in a computational model while remaining
descriptive of the underlying symbolic support structure. When a fact supports
another supportive relationship between facts, we call this meta-support. This
facilitates reasoning about both the propositional knowledge and the support
structures underlying it.
|
1304.3083 | Models vs. Inductive Inference for Dealing With Probabilistic Knowledge | cs.AI | Two different approaches to dealing with probabilistic knowledge are examined:
models and inductive inference. Examples of the first are: influence diagrams
[1], Bayesian networks [2], and log-linear models [3, 4]. Examples of the
second are: games against nature [5, 6], varieties of maximum-entropy methods
[7, 8, 9], and the author's min-score induction [10]. In the modeling approach, the
basic issue is manageability, with respect to data elicitation and computation.
Thus, it is assumed that the pertinent set of users in some sense knows the
relevant probabilities, and the problem is to format that knowledge in a way
that is convenient to input and store and that allows computation of the
answers to current questions in an expeditious fashion. The basic issue for the
inductive approach appears at first sight to be very different. In this
approach it is presumed that the relevant probabilities are only partially
known, and the problem is to extend that incomplete information in a reasonable
way to answer current questions. Clearly, this approach requires that some form
of induction be invoked. Of course, manageability is an important additional
concern. Despite their seeming differences, the two approaches have a fair
amount in common, especially with respect to the structural framework they
employ. Roughly speaking, this framework involves identifying clusters of
variables which strongly interact, establishing marginal probability
distributions on the clusters, and extending the subdistributions to a more
complete distribution, usually via a product formalism. The product extension
is justified in the modeling approach in terms of assumed conditional
independence; in the inductive approach the product form arises from an
inductive rule.
|