id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1304.3084 | Towards a General-Purpose Belief Maintenance System | cs.AI | There currently exists a gap between the theories proposed by probability
and uncertainty researchers and the needs of Artificial Intelligence research. These
theories primarily address the needs of expert systems, using knowledge
structures which must be pre-compiled and remain static in structure during
runtime. Many AI systems require the ability to dynamically add and remove
parts of the current knowledge structure (e.g., in order to examine what the
world would be like for different causal theories). This requires more
flexibility than existing uncertainty systems display. In addition, many AI
researchers are only interested in using "probabilities" as a means of
obtaining an ordering, rather than attempting to derive an accurate
probabilistic account of a situation. This indicates the need for systems which
stress ease of use and don't require extensive probability information when one
cannot (or doesn't wish to) provide such information. This paper attempts to
help reconcile the gap between approaches to uncertainty and the needs of many
AI systems by examining the control issues which arise, independent of a
particular uncertainty calculus, when one tries to satisfy these needs. Truth
Maintenance Systems have been used extensively in problem solving tasks to help
organize a set of facts and detect inconsistencies in the believed state of the
world. These systems maintain a set of true/false propositions and their
associated dependencies. However, situations often arise in which we are unsure
of certain facts or in which the conclusions we can draw from available
information are somewhat uncertain. The non-monotonic TMS [2] was an attempt at
reasoning when all the facts are not known, but it fails to take into account
degrees of belief and how available evidence can combine to strengthen a
particular belief. This paper addresses the problem of probabilistic reasoning
as it applies to Truth Maintenance Systems. It describes a Belief Maintenance
System that manages a current set of beliefs in much the same way that a TMS
manages a set of true/false propositions. If the system knows that belief in
fact1 is dependent in some way upon belief in fact2, then it automatically
modifies its belief in fact1 when new information causes a change in belief of
fact2. It models the behavior of a TMS, replacing its 3-valued logic (true,
false, unknown) with an infinite valued logic, in such a way as to reduce to a
standard TMS if all statements are given in absolute true/false terms. Belief
Maintenance Systems can, therefore, be thought of as a generalization of Truth
Maintenance Systems, whose possible reasoning tasks are a superset of those for
a TMS.
|
1304.3085 | Planning, Scheduling, and Uncertainty in the Sequence of Future Events | cs.AI | Scheduling in the factory setting is compounded by computational complexity
and temporal uncertainty. Together, these two factors guarantee that the
process of constructing an optimal schedule will be costly and the chances of
executing that schedule will be slight. Temporal uncertainty in the task
execution time can be offset by several methods: eliminate uncertainty by
careful engineering, restore certainty whenever it is lost, reduce the
uncertainty by using more accurate sensors, and quantify and circumscribe the
remaining uncertainty. Unfortunately, these methods focus exclusively on the
sources of uncertainty and fail to apply knowledge of the tasks which are to be
scheduled. A complete solution must adapt the schedule of activities to be
performed according to the evolving state of the production world. The example
of vision-directed assembly is presented to illustrate that the principle of
least commitment, in the creation of a plan, in the representation of a
schedule, and in the execution of a schedule, enables a robot to operate
intelligently and efficiently, even in the presence of considerable uncertainty
in the sequence of future events.
|
1304.3086 | Deriving And Combining Continuous Possibility Functions in the Framework
of Evidential Reasoning | cs.AI | To develop an approach to utilizing continuous statistical information within
the Dempster-Shafer framework, we combine methods proposed by Strat and by
Shafer. We first derive continuous possibility and mass functions from
probability-density functions. Then we propose a rule for combining such
evidence that is simpler and more efficiently computed than Dempster's rule. We
discuss the relationship between Dempster's rule and our proposed rule for
combining evidence over continuous frames.
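For discrete frames, the standard Dempster rule that the authors' simpler combination rule is measured against can be sketched as follows (a minimal illustration, not the paper's continuous-frame proposal; the encoding of mass functions as dicts of frozensets is our own):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: multiply masses of focal elements, assign the
    product to their intersection, and renormalize away the conflict K."""
    combined = {}
    conflict = 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

The quadratic pairing of focal elements is what makes the rule expensive on large frames, which is the efficiency concern the abstract raises.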
|
1304.3087 | Non-Monotonicity in Probabilistic Reasoning | cs.AI | We start by defining an approach to non-monotonic probabilistic reasoning in
terms of non-monotonic categorical (true-false) reasoning. We identify a type
of non-monotonic probabilistic reasoning, akin to default inheritance, that is
commonly found in practice, especially in "evidential" and "Bayesian"
reasoning. We formulate this in terms of the Maximization of Conditional
Independence (MCI), and identify a variety of applications for this sort of
default. We propose a formalization using Pointwise Circumscription. We compare
MCI to Maximum Entropy, another kind of non-monotonic principle, and conclude
by raising a number of open questions.
|
1304.3088 | Information and Multi-Sensor Coordination | cs.SY cs.AI cs.MA | The control and integration of distributed, multi-sensor perceptual systems
is a complex and challenging problem. The observations or opinions of different
sensors are often disparate, incomparable, and are usually only partial views.
Sensor information is inherently uncertain and in addition the individual
sensors may themselves be in error with respect to the system as a whole. The
successful operation of a multi-sensor system must account for this uncertainty
and provide for the aggregation of disparate information in an intelligent and
robust manner. We consider the sensors of a multi-sensor system to be members
or agents of a team, able to offer opinions and bargain in group decisions. We
will analyze the coordination and control of this structure using a theory of
team decision-making. We present some new analytic results on multi-sensor
aggregation and detail a simulation which we use to investigate our ideas. This
simulation provides a basis for the analysis of complex agent structures
cooperating in the presence of uncertainty. The results of this study are
discussed with reference to multi-sensor robot systems, distributed AI and
decision making under uncertainty.
|
1304.3089 | Flexible Interpretations: A Computational Model for Dynamic Uncertainty
Assessment | cs.AI | The investigations reported in this paper center on the process of dynamic
uncertainty assessment during interpretation tasks in real domains. In
particular, we are interested here in the nature of the control structure of
computer programs that can support multiple interpretations and smooth
transitions between them, in real time. Each step of the processing involves
the interpretation of one input item and the appropriate re-establishment of
the system's confidence in the correctness of its interpretation(s).
|
1304.3090 | The Myth of Modularity in Rule-Based Systems | cs.AI | In this paper, we examine the concept of modularity, an often cited advantage
of the rule-based representation methodology. We argue that the notion of
modularity consists of two distinct concepts which we call syntactic modularity
and semantic modularity. We argue that when reasoning under certainty, it is
reasonable to regard the rule-based approach as both syntactically and
semantically modular. However, we argue that in the case of plausible
reasoning, rules are syntactically modular but are rarely semantically modular.
To illustrate this point, we examine a particular approach for managing
uncertainty in rule-based systems called the MYCIN certainty factor model. We
formally define the concept of semantic modularity with respect to the
certainty factor model and discuss logical consequences of the definition. We
show that the assumption of semantic modularity imposes strong restrictions on
rules in a knowledge base. We argue that such restrictions are rarely valid in
practical applications. Finally, we suggest how the concept of semantic
modularity can be relaxed in a manner that makes it appropriate for plausible
reasoning.
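As a concrete reference point for the discussion above, MYCIN's parallel-combination function, which merges two certainty factors in [-1, 1] bearing on the same hypothesis, can be sketched as follows (a standard textbook form of the revised CF model, not code from the paper):

```python
def cf_combine(x, y):
    """MYCIN parallel combination of two certainty factors.
    Both confirming: diminishing-returns accumulation toward +1.
    Both disconfirming: mirror image toward -1.
    Mixed signs: damped difference."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)
    if x <= 0 and y <= 0:
        return x + y * (1 + x)
    return (x + y) / (1 - min(abs(x), abs(y)))
```

That the result depends only on the two CF values, and not on what other rules or evidence exist, is precisely the syntactic modularity the paper contrasts with semantic modularity.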
|
1304.3091 | An Axiomatic Framework for Belief Updates | cs.AI | In the 1940's, a physicist named Cox provided the first formal justification
for the axioms of probability based on the subjective or Bayesian
interpretation. He showed that if a measure of belief satisfies several
fundamental properties, then the measure must be some monotonic transformation
of a probability. In this paper, measures of change in belief or belief updates
are examined. In the spirit of Cox, properties for a measure of change in
belief are enumerated. It is shown that if a measure satisfies these
properties, it must satisfy other restrictive conditions. For example, it is
shown that belief updates in a probabilistic context must be equal to some
monotonic transformation of a likelihood ratio. It is hoped that this formal
explication of the belief update paradigm will facilitate critical discussion
and useful extensions of the approach.
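The probabilistic special case mentioned above can be made concrete: in log-odds form, a Bayesian belief update is purely additive in the log of the likelihood ratio, so the update depends on the evidence only through that ratio. A minimal sketch (our illustration, not the paper's derivation):

```python
import math

def update_belief(prior_prob, likelihood_ratio):
    """Bayesian belief update in log-odds form. The evidence enters
    only through the likelihood ratio P(e|H) / P(e|not-H)."""
    prior_log_odds = math.log(prior_prob / (1.0 - prior_prob))
    post_log_odds = prior_log_odds + math.log(likelihood_ratio)
    return 1.0 / (1.0 + math.exp(-post_log_odds))
```

A likelihood ratio of 1 (uninformative evidence) leaves the belief unchanged, regardless of the prior.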
|
1304.3092 | Imprecise Meanings as a Cause of Uncertainty in Medical Knowledge-Based
Systems | cs.AI cs.CL | There has been a considerable amount of work on uncertainty in
knowledge-based systems. This work has generally been concerned with
uncertainty arising from the strength of inferences and the weight of evidence.
In this paper we discuss another type of uncertainty: that which is due to
imprecision in the underlying primitives used to represent the knowledge of the
system. In particular, a given word may denote many similar but not identical
entities. Such words are said to be lexically imprecise. Lexical imprecision
has caused widespread problems in many areas. Unless this phenomenon is
recognized and appropriately handled, it can degrade the performance of
knowledge-based systems. In particular, it can lead to difficulties with the
user interface, and with the inferencing processes of these systems. Some
techniques are suggested for coping with this phenomenon.
|
1304.3093 | Evidence as Opinions of Experts | cs.AI | We describe a viewpoint on the Dempster/Shafer 'Theory of Evidence', and
provide an interpretation which regards the combination formulas as statistics
of the opinions of "experts". This is done by introducing spaces with binary
operations that are simpler to interpret or simpler to implement than the
standard combination formula, and showing that these spaces can be mapped
homomorphically onto the Dempster/Shafer theory of evidence space. The experts
in the space of "opinions of experts" combine information in a Bayesian
fashion. We present alternative spaces for the combination of evidence
suggested by this viewpoint.
|
1304.3094 | Decision Under Uncertainty in Diagnosis | cs.AI | This paper describes the incorporation of uncertainty in diagnostic reasoning
based on the set covering model of Reggia et al., extended to what, in the
Artificial Intelligence dichotomy between deep and compiled (shallow, surface)
knowledge-based diagnosis, may be viewed as the generic form at the compiled end
of the spectrum. A major undercurrent in this is advocating the need for a
strong underlying model and an integrated set of support tools for carrying
such a model in order to deal with uncertainty.
|
1304.3095 | Knowledge and Uncertainty | cs.AI | One purpose -- quite a few thinkers would say the main purpose -- of seeking
knowledge about the world is to enhance our ability to make good decisions. An
item of knowledge that can make no conceivable difference with regard to
anything we might do would strike many as frivolous. Whether or not we want to
be philosophical pragmatists in this strong sense with regard to everything we
might want to enquire about, it seems a perfectly appropriate attitude to adopt
toward artificial knowledge systems. If it is granted that we are ultimately
concerned with decisions, then some constraints are imposed on our measures of
uncertainty at the level of decision making. If our measure of uncertainty is
real-valued, then it isn't hard to show that it must satisfy the classical
probability axioms. For example, if an act has a real-valued utility U(E) if
the event E obtains, and the same real-valued utility if the denial of E
obtains, so that U(E) = U(-E), then the expected utility of that act must be
U(E), and that must be the same as the uncertainty-weighted average of the
returns of the act, p·U(E) + q·U(-E), where p and q represent the uncertainty
of E and -E respectively. But then we must have p + q = 1.
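The closing arithmetic can be checked directly (a trivial numerical restatement of the argument, with made-up utility values):

```python
def expected_utility(p, q, u_e, u_not_e):
    """Uncertainty-weighted average return of an act: p*U(E) + q*U(-E)."""
    return p * u_e + q * u_not_e

# If U(E) == U(-E) == u, the act is worth u no matter what happens,
# so the weighted average must also equal u -- forcing p + q == 1.
u = 10.0
assert abs(expected_utility(0.3, 0.7, u, u) - u) < 1e-9   # weights sum to 1
assert abs(expected_utility(0.3, 0.6, u, u) - u) > 0.5    # weights sum to 0.9: incoherent
```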
|
1304.3096 | An Application of Non-Monotonic Probabilistic Reasoning to Air Force
Threat Correlation | cs.AI | Current approaches to expert systems' reasoning under uncertainty fail to
capture the iterative revision process characteristic of intelligent human
reasoning. This paper reports on a system, called the Non-monotonic
Probabilist, or NMP (Cohen et al., 1985). When its inferences result in
substantial conflict, NMP examines and revises the assumptions underlying the
inferences until conflict is reduced to acceptable levels. NMP has been
implemented in a demonstration computer-based system, described below, which
supports threat correlation and in-flight route replanning by Air Force pilots.
|
1304.3097 | Bayesian Inference for Radar Imagery Based Surveillance | cs.AI | We are interested in creating an automated or semi-automated system with the
capability of taking a set of radar imagery, collection parameters and a priori
map and other tactical data, and producing likely interpretations of the
possible military situations given the available evidence. This paper is
concerned with the problem of the interpretation and computation of certainty
or belief in the conclusions reached by such a system.
|
1304.3098 | Evidential Reasoning in Parallel Hierarchical Vision Programs | cs.AI cs.CV | This paper presents an efficient adaptation and application of the
Dempster-Shafer theory of evidence, one that can be used effectively in a
massively parallel hierarchical system for visual pattern perception. It
describes the techniques used, and shows in an extended example how they serve
to improve the system's performance as it applies a multiple-level set of
processes.
|
1304.3099 | Computing Reference Classes | cs.AI | For any system with limited statistical knowledge, the combination of
evidence and the interpretation of sampling information require the
determination of the right reference class (or of an adequate one). The present
note (1) discusses the use of reference classes in evidential reasoning, and
(2) describes implementations of Kyburg's rules for reference classes. This
paper contributes the first frank discussion of how much of Kyburg's system is
needed to be powerful, how much can be computed effectively, and how much is
philosophical fat.
|
1304.3100 | An Uncertainty Management Calculus for Ordering Searches in Distributed
Dynamic Databases | cs.AI | MINDS is a distributed system of cooperating query engines that customize
document retrieval for each user in a dynamic environment. It improves its
performance and adapts to changing patterns of document distribution by
observing system-user interactions and modifying the appropriate certainty
factors, which act as search control parameters. It is argued here that the
uncertainty management calculus must account for temporal precedence,
reliability of evidence, degree of support for a proposition, and saturation
effects. The calculus presented here possesses these features. Some results
obtained with this scheme are discussed.
|
1304.3101 | An Explanation Mechanism for Bayesian Inferencing Systems | cs.AI | Explanation facilities are a particularly important feature of expert system
frameworks. It is an area in which traditional rule-based expert system
frameworks have had mixed results. While explanations about control are well
handled, facilities are needed for generating better explanations concerning
knowledge base content. This paper approaches the explanation problem by
examining the effect an event has on a variable of interest within a symmetric
Bayesian inferencing system. We argue that any effect measure operating in this
context must satisfy certain properties. Such a measure is proposed. It forms
the basis for an explanation facility which allows the user of the Generalized
Bayesian Inferencing System to question the meaning of the knowledge base. That
facility is described in detail.
|
1304.3102 | Distributed Revision of Belief Commitment in Multi-Hypothesis
Interpretations | cs.AI | This paper extends the applications of belief-networks to include the
revision of belief commitments, i.e., the categorical acceptance of a subset of
hypotheses which, together, constitute the most satisfactory explanation of the
evidence at hand. A coherent model of non-monotonic reasoning is established
and distributed algorithms for belief revision are presented. We show that, in
singly connected networks, the most satisfactory explanation can be found in
linear time by a message-passing algorithm similar to the one used in belief
updating. In multiply-connected networks, the problem may be exponentially hard
but, if the network is sparse, topological considerations can be used to render
the interpretation task tractable. In general, finding the most probable
combination of hypotheses is no more complex than computing the degree of
belief for any individual hypothesis. Applications to medical diagnosis are
illustrated.
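For intuition about the linear-time claim, a max-product (Viterbi-style) message pass over the simplest singly connected network, a chain of binary variables, finds the most probable joint assignment in one forward sweep plus backtracking (our toy illustration, not the paper's distributed algorithm):

```python
def mpe_chain(prior, trans, evidence_lik):
    """Most probable explanation on a chain of binary variables.
    prior[s]: P(X0 = s); trans[p][s]: P(X_t = s | X_{t-1} = p);
    evidence_lik[t][s]: likelihood of the observation at step t given s.
    One forward max-product pass, then backtracking: linear in length."""
    n = len(evidence_lik)
    msg = [prior[s] * evidence_lik[0][s] for s in (0, 1)]
    back = []
    for t in range(1, n):
        new, ptr = [0.0, 0.0], [0, 0]
        for s in (0, 1):
            scores = [msg[p] * trans[p][s] for p in (0, 1)]
            ptr[s] = scores.index(max(scores))       # best predecessor
            new[s] = max(scores) * evidence_lik[t][s]
        msg, back = new, back + [ptr]
    state = msg.index(max(msg))
    path = [state]
    for ptr in reversed(back):                        # follow back-pointers
        state = ptr[state]
        path.append(state)
    return list(reversed(path))
```

The same forward pass with sum in place of max computes marginal beliefs, which is the structural point behind the abstract's claim that finding the best joint explanation is no harder than belief updating.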
|
1304.3103 | Learning Link-Probabilities in Causal Trees | cs.AI | A learning algorithm is presented which given the structure of a causal tree,
will estimate its link probabilities by sequential measurements on the leaves
only. Internal nodes of the tree represent conceptual (hidden) variables
inaccessible to observation. The method described is incremental, local,
efficient, and remains robust to measurement imprecisions.
|
1304.3104 | Approximate Deduction in Single Evidential Bodies | cs.AI | Results on approximate deduction in the context of the calculus of evidence
of Dempster-Shafer and the theory of interval probabilities are reported.
Approximate conditional knowledge about the truth of conditional propositions
was assumed available and expressed as sets of possible values (actually
numeric intervals) of conditional probabilities. Under different
interpretations of this conditional knowledge, several formulas were produced
to integrate unconditioned estimates (assumed given as sets of possible values
of unconditioned probabilities) with conditional estimates. These formulas are
discussed together with the computational characteristics of the methods
derived from them. Of particular importance is one such evidence integration
formulation, produced under a belief oriented interpretation, which
incorporates both modus ponens and modus tollens inferential mechanisms, allows
integration of conditioned and unconditioned knowledge without resorting to
iterative or sequential approximations, and produces elementary mass
distributions as outputs using similar distributions as inputs.
|
1304.3105 | The Rational and Computational Scope of Probabilistic Rule-Based Expert
Systems | cs.AI | Belief updating schemes in artificial intelligence may be viewed as three
dimensional languages, consisting of a syntax (e.g. probabilities or certainty
factors), a calculus (e.g. Bayesian or CF combination rules), and a semantics
(i.e. cognitive interpretations of competing formalisms). This paper studies
the rational scope of those languages on the syntax and calculus grounds. In
particular, the paper presents an endomorphism theorem which highlights the
limitations imposed by the conditional independence assumptions implicit in the
CF calculus. Implications of the theorem to the relationship between the CF and
the Bayesian languages and the Dempster-Shafer theory of evidence are
presented. The paper concludes with a discussion of some implications on
rule-based knowledge engineering in uncertain domains.
|
1304.3106 | A Causal Bayesian Model for the Diagnosis of Appendicitis | cs.AI | The causal Bayesian approach is based on the assumption that effects (e.g.,
symptoms) that are not conditionally independent with respect to some causal
agent (e.g., a disease) are conditionally independent with respect to some
intermediate state caused by the agent, (e.g., a pathological condition). This
paper describes the development of a causal Bayesian model for the diagnosis of
appendicitis. The paper begins with a description of the standard Bayesian
approach to reasoning about uncertainty and the major critiques it faces. The
paper then lays the theoretical groundwork for the causal extension of the
Bayesian approach, and details specific improvements we have developed. The
paper then goes on to describe our knowledge engineering and implementation and
the results of a test of the system. The paper concludes with a discussion of
how the causal Bayesian approach deals with the criticisms of the standard
Bayesian model and why it is superior to alternative approaches to reasoning
about uncertainty popular in the AI community.
|
1304.3107 | A Backwards View for Assessment | cs.AI | Much artificial intelligence research focuses on the problem of deducing the
validity of unobservable propositions or hypotheses from observable evidence.
Many of the knowledge representation techniques designed for this problem
encode the relationship between evidence and hypothesis in a directed manner.
Moreover, the direction in which evidence is stored is typically from evidence
to hypothesis.
|
1304.3108 | DAVID: Influence Diagram Processing System for the Macintosh | cs.AI | Influence diagrams are a directed graph representation for uncertainties as
probabilities. The graph distinguishes between those variables which are under
the control of a decision maker (decisions, shown as rectangles) and those
which are not (chances, shown as ovals), as well as explicitly denoting a goal
for solution (value, shown as a rounded rectangle).
|
1304.3109 | Propagation of Belief Functions: A Distributed Approach | cs.AI | In this paper, we describe a scheme for propagating belief functions in
certain kinds of trees using only local computations. This scheme generalizes
the computational scheme proposed by Shafer and Logan for diagnostic trees of
the type studied by Gordon and Shortliffe, and the slightly more general scheme
given by Shafer for hierarchical evidence. It also generalizes the scheme
proposed by Pearl for Bayesian causal trees (see Shenoy and Shafer). Pearl's
causal trees and Gordon and Shortliffe's diagnostic trees are both ways of
breaking the evidence that bears on a large problem down into smaller items of
evidence that bear on smaller parts of the problem so that these smaller
problems can be dealt with one at a time. This localization of effort is often
essential in order to make the process of probability judgment feasible, both
for the person who is making probability judgments and for the machine that is
combining them. The basic structure for our scheme is a type of tree that
generalizes both Pearl's and Gordon and Shortliffe's trees. Trees of this
general type permit localized computation in Pearl's sense. They are based on
qualitative judgments of conditional independence. We believe that the scheme
we describe here will prove useful in expert systems. It is now clear that the
successful propagation of probabilities or certainty factors in expert systems
requires much more structure than can be provided in a pure production-system
framework. Bayesian schemes, on the other hand, often make unrealistic demands
for structure. The propagation of belief functions in trees and more general
networks stands on a middle ground where some sensible and useful things can be
done. We would like to emphasize that the basic idea of local computation for
propagating probabilities is due to Judea Pearl. It is a very innovative idea;
we do not believe that it can be found in the Bayesian literature prior to
Pearl's work. We see our contribution as extending the usefulness of Pearl's
idea by generalizing it from Bayesian probabilities to belief functions. In the
next section, we give a brief introduction to belief functions. The notions of
qualitative independence for partitions and a qualitative Markov tree are
introduced in Section III. Finally, in Section IV, we describe a scheme for
propagating belief functions in qualitative Markov trees.
|
1304.3110 | Appropriate and Inappropriate Estimation Techniques | cs.AI | Mode (also called MAP) estimation, mean estimation and median estimation are
examined here to determine when they can be safely used to derive (posterior)
cost minimizing estimates. (These are all Bayes procedures, using the mode,
mean, or median of the posterior distribution.) It is found that modal
estimation only returns cost minimizing estimates when the cost function is
0-1. If the cost function is a function of distance, then mean estimation only
returns cost minimizing estimates when the cost function is squared distance
from the true value, and median estimation only returns cost minimizing
estimates when the cost function is the distance from the true value. Results
are presented on the goodness of modal estimation with non-0-1 cost functions.
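The mean and median claims are easy to verify numerically: over a sample standing in for the posterior, a grid search finds that expected squared loss is minimized at the mean and expected absolute loss at the median (a self-contained check, not the paper's proofs; the sample values are invented):

```python
def expected_loss(estimate, samples, loss):
    """Average loss of a point estimate against posterior samples."""
    return sum(loss(estimate, s) for s in samples) / len(samples)

samples = [1.0, 2.0, 2.0, 3.0, 10.0]        # mean 3.6, median 2.0
grid = [x / 100 for x in range(0, 1201)]    # candidate estimates 0.00 .. 12.00

best_sq = min(grid, key=lambda e: expected_loss(e, samples, lambda a, b: (a - b) ** 2))
best_abs = min(grid, key=lambda e: expected_loss(e, samples, lambda a, b: abs(a - b)))
# best_sq lands on the mean (3.6); best_abs on the median (2.0)
```

Note how the outlier at 10.0 drags the squared-loss optimum toward it while leaving the absolute-loss optimum at the median, which is the practical content of the distinction.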
|
1304.3111 | Estimating Uncertain Spatial Relationships in Robotics | cs.AI | In this paper, we describe a representation for spatial information, called
the stochastic map, and associated procedures for building it, reading
information from it, and revising it incrementally as new information is
obtained. The map contains the estimates of relationships among objects in the
map, and their uncertainties, given all the available information. The
procedures provide a general solution to the problem of estimating uncertain
relative spatial relationships. The estimates are probabilistic in nature, an
advance over the previous, very conservative, worst-case approaches to the
problem. Finally, the procedures are developed in the context of
state-estimation and filtering theory, which provides a solid basis for
numerous extensions.
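The filtering-theory machinery the abstract refers to can be illustrated in one dimension: fusing an uncertain prior estimate of a spatial quantity with an uncertain measurement, weighting each by its confidence (a toy scalar sketch, not the stochastic-map procedures themselves):

```python
def fuse_estimate(mean, var, meas, meas_var):
    """One scalar measurement update (Kalman-filter style): blend the
    prior estimate and the observation in proportion to their precisions."""
    k = var / (var + meas_var)          # gain: trust in the measurement
    new_mean = mean + k * (meas - mean)
    new_var = (1 - k) * var             # uncertainty always shrinks
    return new_mean, new_var
```

Because the posterior variance is strictly smaller than the prior variance for any finite measurement noise, incremental revision can only sharpen the map, unlike worst-case bounds, which cannot exploit redundant observations.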
|
1304.3112 | A VLSI Design and Implementation for a Real-Time Approximate Reasoning | cs.AI | The role of inferencing with uncertainty is becoming more important in
rule-based expert systems (ES), since knowledge given by a human expert is
often uncertain or imprecise. We have succeeded in designing a VLSI chip which
can perform an entire inference process based on fuzzy logic. The design of the
VLSI fuzzy inference engine emphasizes simplicity, extensibility, and
efficiency (operational speed and layout area). It is fabricated in 2.5 um CMOS
technology. The inference engine consists of three major components: a rule set
memory, an inference processor, and a controller. In this implementation, a
rule set memory is realized by a read only memory (ROM). The controller
consists of two counters. In the inference processor, one data path is laid out
for each rule. The number of inference rules can be increased by adding more
data paths to the inference processor. All rules are executed in parallel, but
each rule is processed serially. The logical structure of fuzzy inference
proposed in the current paper maps nicely onto the VLSI structure. A two-phase
nonoverlapping clocking scheme is used. Timing tests indicate that the
inference engine can operate at approximately 20.8 MHz. This translates to an
execution speed of approximately 80,000 Fuzzy Logical Inferences Per Second
(FLIPS), and indicates that the inference engine is suitable for a demanding
real-time application. The potential applications include decision-making in
the area of command and control for intelligent robot systems, process control,
missile and aircraft guidance, and other high performance machines.
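The max-min inference scheme such fuzzy engines implement, each rule firing at the minimum of its antecedent memberships and outputs for the same consequent aggregated by maximum, can be sketched in software (a functional sketch of the logic only; the membership functions and rule names are invented, not taken from the chip's rule set):

```python
def fuzzy_infer(rules, inputs):
    """Max-min fuzzy inference. Each rule is (antecedents, label) where
    antecedents maps an input name to a membership function. Rules run
    'in parallel' (max over rules per label), each evaluated 'serially'
    (min over its antecedents), mirroring the hardware organization."""
    out = {}
    for antecedents, label in rules:
        strength = min(mf(inputs[name]) for name, mf in antecedents.items())
        out[label] = max(out.get(label, 0.0), strength)
    return out

def ramp_up(lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    return lambda x: max(0.0, min(1.0, (x - lo) / (hi - lo)))

hot = ramp_up(20.0, 30.0)
cold = lambda t: 1.0 - hot(t)
rules = [({'temp': hot}, 'fan_on'), ({'temp': cold}, 'fan_off')]
```

Calling `fuzzy_infer(rules, {'temp': 27.0})` yields a 0.7 degree of support for `fan_on` and 0.3 for `fan_off`.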
|
1304.3113 | A General Purpose Inference Engine for Evidential Reasoning Research | cs.AI | The purpose of this paper is to report on the most recent developments in our
ongoing investigation of the representation and manipulation of uncertainty in
automated reasoning systems. In our earlier studies (Tong and Shapiro, 1985) we
described a series of experiments with RUBRIC (Tong et al., 1985), a system for
full-text document retrieval, that generated some interesting insights into the
effects of choosing among a class of scalar-valued uncertainty calculi. In
order to extend these results we have begun a new series of experiments with a
larger class of representations and calculi, and to help perform these
experiments we have developed a general purpose inference engine.
|
1304.3114 | Generalizing Fuzzy Logic Probabilistic Inferences | cs.AI | Linear representations for a subclass of boolean symmetric functions selected
by a parity condition are shown to constitute a generalization of the linear
constraints on probabilities introduced by Boole. These linear constraints are
necessary to compute probabilities of events with relations between them
arbitrarily specified with propositional calculus boolean formulas.
|
1304.3115 | Qualitative Probabilistic Networks for Planning Under Uncertainty | cs.AI | Bayesian networks provide a probabilistic semantics for qualitative
assertions about likelihood. A qualitative reasoner based on an algebra over
these assertions can derive further conclusions about the influence of actions.
While the conclusions are much weaker than those computed from complete
probability distributions, they are still valuable for suggesting potential
actions, eliminating obviously inferior plans, identifying important tradeoffs,
and explaining probabilistic models.
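The algebra over qualitative assertions can be made concrete with the usual sign operations: chaining influences along a path multiplies signs, and parallel influences on the same variable add them, with '?' recording ambiguity (a common textbook rendering of QPN sign algebra; the string encoding is ours):

```python
def sign_product(a, b):
    """Chain two qualitative influences along a path."""
    if a == '0' or b == '0':
        return '0'          # a null influence annihilates the chain
    if a == '?' or b == '?':
        return '?'          # ambiguity propagates
    return '+' if a == b else '-'

def sign_sum(a, b):
    """Combine parallel influences converging on one variable."""
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a
    return '?'              # opposing influences: an unresolved tradeoff
```

The `'?'` produced by `sign_sum('+', '-')` is exactly the "important tradeoff" case the abstract mentions: the qualitative reasoner flags it rather than resolving it, since resolution needs numeric magnitudes.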
|
1304.3116 | Experimentally Comparing Uncertain Inference Systems to Probability | cs.AI | This paper examines the biases and performance of several uncertain inference
systems: Mycin, a variant of Mycin, and a simplified version of probability
using conditional independence assumptions. We present axiomatic arguments for
using Minimum Cross Entropy inference as the best way to do uncertain
inference. For Mycin and its variant we found special situations where its
performance was very good, but also situations where performance was worse than
random guessing, or where data was interpreted as having the opposite of its
true import. We have found that all three of these systems usually gave accurate
results, and that the conditional independence assumptions gave the most robust
results. We illustrate how the importance of biases may be quantitatively
assessed and ranked. Considerations of robustness might be a critical factor in
selecting UISs for a given application.
|
1304.3117 | Evaluation of Uncertain Inference Models I: PROSPECTOR | cs.AI | This paper examines the accuracy of the PROSPECTOR model for uncertain
reasoning. PROSPECTOR's solutions for a large number of computer-generated
inference networks were compared to those obtained from probability theory and
minimum cross-entropy calculations. PROSPECTOR's answers were generally
accurate for a restricted subset of problems that are consistent with its
assumptions. However, even within this subset, we identified conditions under
which PROSPECTOR's performance deteriorates.
|
1304.3118 | On Implementing Usual Values | cs.AI | In many cases commonsense knowledge consists of knowledge of what is usual.
In this paper we develop a system for reasoning with usual information. This
system is based upon the fact that these pieces of commonsense information
involve both a probabilistic aspect and a granular aspect. We implement this
system with the aid of possibility-probability granules.
|
1304.3119 | On the Combinability of Evidence in the Dempster-Shafer Theory | cs.AI | In the current versions of the Dempster-Shafer theory, the only essential
restriction on the validity of the rule of combination is that the sources of
evidence must be statistically independent. Under this assumption, it is
permissible to apply the Dempster-Shafer rule to two or more distinct
probability distributions.
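As an illustrative sketch only (not code from the paper), Dempster's rule of combination for two mass functions from independent sources can be written directly from its definition, with the conflict mass renormalized away; the frame elements and masses below are hypothetical:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)
    with Dempster's rule, assuming the sources are statistically
    independent, as the theory requires."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that lands on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict  # normalization constant
    return {s: w / k for s, w in combined.items()}

# Zadeh-style example over the frame {A, B, C}: two highly conflicting
# sources push all combined mass onto the barely supported hypothesis C.
m1 = {frozenset("A"): 0.99, frozenset("C"): 0.01}
m2 = {frozenset("B"): 0.99, frozenset("C"): 0.01}
print(dempster_combine(m1, m2))
```

The example also illustrates why the independence restriction matters: under heavy conflict the renormalization can produce counterintuitive results.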
|
1304.3120 | GUI Database for the Equipment Store of the Department of Geomatic
Engineering, KNUST | cs.DB | The geospatial analyst is required to apply art, science, and technology to
measure relative positions of natural and man-made features above or beneath
the earth's surface, and to present this information either graphically or
numerically. The reference positions for these measurements need to be well
archived and managed to effectively sustain the activities in the spatial
industry. The research herein described highlights the need for an information
system for the Land Surveyors Equipment Store. Such a system is a database
management system with a user-friendly graphical interface. This paper
describes one such system that has been developed for the Equipment Store of
the Department of Geomatic Engineering, Kwame Nkrumah University of Science and
Technology, Ghana. The system facilitates efficient management and location of
instruments, as well as easy location of beacons together with their attribute
information. It also provides multimedia information about instruments in an
Equipment Store. A digital camera was used to capture the pictorial descriptions of
the beacons. Geographic Information System software was employed to visualize
the spatial location of beacons and to publish the various layers for the
Graphical User Interface. The aesthetics of the interface were developed with
user interface design tools and implemented in code. The developed Suite,
powered by a reliable and fully scalable database, provides an efficient way of
booking and analyzing transactions in an Equipment Store.
|
1304.3138 | Sustainable Cooperative Coevolution with a Multi-Armed Bandit | cs.NE | This paper proposes a self-adaptation mechanism to manage the resources
allocated to the different species comprising a cooperative coevolutionary
algorithm. The proposed approach relies on a dynamic extension to the
well-known multi-armed bandit framework. At each iteration, the dynamic
multi-armed bandit makes a decision on which species to evolve for a
generation, using the history of progress made by the different species to
guide the decisions. We show experimentally, on a benchmark and a real-world
problem, that evolving the different populations at different paces not only
allows solutions to be identified more rapidly, but also improves the capacity of
cooperative coevolution to solve more complex problems.
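A minimal sketch of the idea of a bandit allocating the evolution budget across species, using a plain UCB1-style rule rather than the paper's dynamic extension; the class name, reward definition, and per-species progress rates are hypothetical:

```python
import math, random

class SpeciesBandit:
    """UCB1-style bandit that picks which species to evolve next.
    Illustrative only: the paper uses a dynamic multi-armed bandit
    with its own reward based on the history of progress."""
    def __init__(self, n_species, c=1.4):
        self.counts = [0] * n_species
        self.rewards = [0.0] * n_species
        self.c = c

    def select(self):
        t = sum(self.counts) + 1
        best, best_score = 0, float("-inf")
        for i, (n, r) in enumerate(zip(self.counts, self.rewards)):
            # unplayed arms get infinite score so each is tried once
            score = float("inf") if n == 0 else \
                r / n + self.c * math.sqrt(math.log(t) / n)
            if score > best_score:
                best, best_score = i, score
        return best

    def update(self, i, progress):
        self.counts[i] += 1
        self.rewards[i] += progress

random.seed(0)
bandit = SpeciesBandit(3)
true_gain = [0.2, 0.8, 0.4]  # hypothetical per-species progress rates
for _ in range(300):
    i = bandit.select()
    bandit.update(i, random.random() < true_gain[i])
print(bandit.counts)  # the most productive species gets most of the budget
```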
|
1304.3144 | Logical Probability Preferences | cs.AI | We present a unified logical framework for representing and reasoning about
both quantitative and qualitative probability preferences in probability answer
set programming, called probability answer set optimization programs. The
proposed framework is vital to allow defining probability quantitative
preferences over the possible outcomes of qualitative preferences. We show the
application of probability answer set optimization programs to a variant of the
well-known nurse rostering problem, called the nurse rostering with probability
preferences problem. To the best of our knowledge, this development is the
first to consider a logical framework for reasoning about quantitative
probability preferences in general, and about both quantitative and qualitative
probability preferences in particular.
|
1304.3156 | Data Secrecy in Distributed Storage Systems under Exact Repair | cs.IT math.IT | The problem of securing data against eavesdropping in distributed storage
systems is studied. The focus is on systems that use linear codes and implement
exact repair to recover from node failures. The maximum file size that can be
stored securely is determined for systems in which all the available nodes help
in repair (i.e., repair degree $d=n-1$, where $n$ is the total number of nodes)
and for any number of compromised nodes. Similar results in the literature are
restricted to the case of at most two compromised nodes. Moreover, new explicit
upper bounds are given on the maximum secure file size for systems with
$d<n-1$. The key ingredients for the contribution of this paper are new results
on subspace intersection for the data downloaded during repair. The new bounds
imply the interesting fact that the maximum data that can be stored securely
decreases exponentially with the number of compromised nodes.
|
1304.3179 | Joint Precoding and Multivariate Backhaul Compression for the Downlink
of Cloud Radio Access Networks | cs.IT math.IT | This work studies the joint design of precoding and backhaul compression
strategies for the downlink of cloud radio access networks. In these systems, a
central encoder is connected to multiple multi-antenna base stations (BSs) via
finite-capacity backhaul links. At the central encoder, precoding is followed
by compression in order to produce the rate-limited bit streams delivered to
each BS over the corresponding backhaul link. In current state-of-the-art
approaches, the signals intended for different BSs are compressed
independently. In contrast, this work proposes to leverage joint compression,
also referred to as multivariate compression, of the signals of different BSs
in order to better control the effect of the additive quantization noises at
the mobile stations (MSs). The problem of maximizing the weighted sum-rate with
respect to both the precoding matrix and the joint correlation matrix of the
quantization noises is formulated subject to power and backhaul capacity
constraints. An iterative algorithm is proposed that achieves a stationary
point of the problem. Moreover, in order to enable the practical implementation
of multivariate compression across BSs, a novel architecture is proposed based
on successive steps of minimum mean-squared error (MMSE) estimation and per-BS
compression. Robust design with respect to imperfect channel state information
is also discussed. From numerical results, it is confirmed that the proposed
joint precoding and compression strategy outperforms conventional approaches
based on the separate design of precoding and compression or independent
compression across the BSs.
|
1304.3192 | Rotational Projection Statistics for 3D Local Surface Description and
Object Recognition | cs.CV | Recognizing 3D objects in the presence of noise, varying mesh resolution,
occlusion and clutter is a very challenging task. This paper presents a novel
method named Rotational Projection Statistics (RoPS). It has three major
modules: Local Reference Frame (LRF) definition, RoPS feature description and
3D object recognition. We propose a novel technique to define the LRF by
calculating the scatter matrix of all points lying on the local surface. RoPS
feature descriptors are obtained by rotationally projecting the neighboring
points of a feature point onto 2D planes and calculating a set of statistics
(including low-order central moments and entropy) of the distribution of these
projected points. Using the proposed LRF and RoPS descriptor, we present a
hierarchical 3D object recognition algorithm. The performance of the proposed
LRF, RoPS descriptor and object recognition algorithm was rigorously tested on
a number of popular and publicly available datasets. Our proposed techniques
exhibited superior performance compared to existing techniques. We also showed
that our method is robust with respect to noise and varying mesh resolution.
Our RoPS based algorithm achieved recognition rates of 100%, 98.9%, 95.4% and
96.0% respectively when tested on the Bologna, UWA, Queen's and Ca' Foscari
Venezia Datasets.
|
1304.3200 | An Approach to Solve Linear Equations Using a Time-Variant Adaptation
Based Hybrid Evolutionary Algorithm | cs.NE cs.NA | For small number of equations, systems of linear (and sometimes nonlinear)
equations can be solved by simple classical techniques. However, for large
systems of linear (or nonlinear) equations, solutions using classical methods
become arduous. On the other hand, evolutionary algorithms have mostly
been used to solve various optimization and learning problems. Recently,
hybridization of evolutionary algorithm with classical Gauss-Seidel based
Successive Over Relaxation (SOR) method has successfully been used to solve a
large number of linear equations, where a uniform adaptation (UA) technique of
relaxation factor is used. In this paper, a new hybrid algorithm is proposed in
which a time-variant adaptation (TVA) technique of relaxation factor is used
instead of the uniform adaptation technique to solve a large number of linear
equations. The convergence theorems of the proposed algorithms are proved
theoretically, and the performance of the proposed TVA-based algorithm is
compared with the UA-based hybrid algorithm in the experimental domain. The
proposed algorithm outperforms the hybrid one in terms of efficiency.
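For reference, the classical Gauss-Seidel-based SOR iteration that the hybrid algorithms build on can be sketched as follows; this is the textbook method with a fixed relaxation factor, not the paper's evolutionary adaptation of it, and the test system is hypothetical:

```python
def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive Over-Relaxation for Ax = b (plain Python lists).
    Converges e.g. for symmetric positive definite or diagonally
    dominant systems with 0 < omega < 2; the paper adapts omega
    over time instead of fixing it."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x

# small diagonally dominant example system
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [9.0, 10.0, 14.0]
print(sor_solve(A, b))
```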
|
1304.3208 | From Constraints to Resolution Rules, Part I: Conceptual Framework | cs.AI | Many real world problems naturally appear as constraint satisfaction
problems (CSP), for which very efficient algorithms are known. Most of these
involve the combination of two techniques: some direct propagation of
constraints between variables (with the goal of reducing their sets of possible
values) and some kind of structured search (depth-first, breadth-first,...).
But when such blind search is not possible or not allowed or when one wants a
'constructive' or a 'pattern-based' solution, one must devise more complex
propagation rules instead. In this case, one can introduce the notion of a
candidate (a 'still possible' value for a variable). Here, we give this
intuitive notion a well defined logical status, from which we can define the
concepts of a resolution rule and a resolution theory. In order to keep our
analysis as concrete as possible, we illustrate each definition with the well
known Sudoku example. Part I proposes a general conceptual framework based on
first order logic; with the introduction of chains and braids, Part II will
give much deeper results.
|
1304.3209 | Improvement studies on neutron-gamma separation in HPGe detectors by
using neural networks | physics.ins-det cs.NE nucl-ex | The neutrons emitted in heavy-ion fusion-evaporation (HIFE) reactions
together with the gamma-rays cause unwanted backgrounds in gamma-ray spectra.
Especially in nuclear reactions, where relativistic ion beams (RIBs) are
used, these neutrons are a serious problem. They have to be rejected in order to
obtain clearer gamma-ray peaks. In this study, the radiation energy and three
criteria which were previously determined for separation between neutrons and
gamma-rays in the HPGe detectors have been used in an artificial neural network
(ANN) to improve the separation power. According to the preliminary
results obtained from ANN method, the ratio of neutron rejection has been
improved by a factor of 1.27, and the fraction of gamma-rays lost has been
decreased by a factor of 0.50.
|
1304.3210 | From Constraints to Resolution Rules, Part II: chains, braids,
confluence and T&E | cs.AI | In this Part II, we apply the general theory developed in Part I to a
detailed analysis of the Constraint Satisfaction Problem (CSP). We show how
specific types of resolution rules can be defined. In particular, we introduce
the general notions of a chain and a braid. As in Part I, these notions are
illustrated in detail with the Sudoku example - a problem known to be
NP-complete and which is therefore typical of a broad class of hard problems.
For Sudoku, we also show how far one can go in 'approximating' a CSP with a
resolution theory and we give an empirical statistical analysis of how the
various puzzles, corresponding to different sets of entries, can be classified
along a natural scale of complexity. For any CSP, we also prove the confluence
property of some Resolution Theories based on braids and we show how it can be
used to define different resolution strategies. Finally, we prove that, in any
CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no
guessing, and we comment on this result in the Sudoku case.
|
1304.3265 | Extension of hidden markov model for recognizing large vocabulary of
sign language | cs.CL | Computers still have a long way to go before they can interact with users in
a truly natural fashion. From a user's perspective, the most natural way to
interact with a computer would be through a speech and gesture interface.
Although speech recognition has made significant advances in the past ten
years, gesture recognition has been lagging behind. Sign Languages (SL) are the
most accomplished forms of gestural communication. Their automatic
analysis is therefore a real challenge, one closely tied to their lexical and
syntactic levels of organization. Sign language statements attract
significant interest in the Automatic Natural Language Processing (ANLP)
domain. In this work, we are dealing with sign language recognition, in
particular of French Sign Language (FSL). FSL has its own specificities, such
as the simultaneity of several parameters, the important role of the facial
expression or movement and the use of space for the proper utterance
organization. Unlike speech, French Sign Language (FSL) events
occur both sequentially and simultaneously. Thus, the computational processing
of FSL is more complex than that of spoken languages. We present a novel approach
based on HMM to reduce the recognition complexity.
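As background for the HMM-based approach, the standard forward algorithm scores an observation sequence against a model; the sketch below uses toy two-state parameters that are purely hypothetical and bear no relation to the paper's FSL models:

```python
def forward(obs, start, trans, emit):
    """Standard HMM forward algorithm: probability of an observation
    sequence under a model. Toy parameters only; recognition would
    pick the sign model with the highest sequence probability."""
    states = range(len(start))
    # initialize with the start distribution and first emission
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        # propagate forward probabilities through the transition matrix
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return sum(alpha)

# two hidden states, two observation symbols (hypothetical model)
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(forward([0, 1, 0], start, trans, emit))
```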
|
1304.3268 | Web Services Discovery and Recommendation Based on Information
Extraction and Symbolic Reputation | cs.IR | This paper shows that the problem of web services representation is crucial
and analyzes the various factors that influence it. It presents the
traditional representation of web services considering traditional textual
descriptions based on the information contained in WSDL files. Unfortunately,
textual web services descriptions are dirty and need significant cleaning to
keep only useful information. To deal with this problem, we introduce a
rule-based text tagging method, which filters web service descriptions to
keep only significant information. A new representation based on such filtered
data is then introduced. Many web services have empty descriptions. Also, we
consider web services representations based on the WSDL file structure (types,
attributes, etc.). Alternatively, we introduce a new representation called
symbolic reputation, which is computed from relationships between web services.
The impact of the use of these representations on web service discovery and
recommendation is studied and discussed in the experimentation using real world
web services.
|
1304.3280 | Channel Coding and Source Coding with Increased Partial Side Information | cs.IT math.IT | Let (S1,i, S2,i), i = 1, 2, . . ., distributed i.i.d. according to p(s1, s2),
be a memoryless, correlated partial side information sequence. In this work we
study channel coding and source coding problems where the partial side
information (S1, S2) is available at the encoder and the decoder, respectively,
and, additionally, either the encoder's or the decoder's side information is
increased by a limited-rate description of the other's partial side
information. We derive six special cases of channel coding and source coding
problems and we characterize the capacity and the rate-distortion functions for
the different cases. We present a duality between the channel capacity and the
rate-distortion cases we study. In order to find numerical solutions for our
channel capacity and rate-distortion problems, we use the Blahut-Arimoto
algorithm and convex optimization tools. As a byproduct of our work, we found a
tight lower bound on the Wyner-Ziv solution by formulating its Lagrange dual as
a geometric program. Previous results in the literature provide a geometric
programming formulation that is only a lower bound, but not necessarily tight.
Finally, we provide several examples corresponding to the channel capacity and
the rate-distortion cases we presented.
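The Blahut-Arimoto algorithm mentioned above can be sketched for the simplest case, the capacity of a discrete memoryless channel; this is the textbook iteration on a binary symmetric channel, not the paper's side-information setting:

```python
import math

def blahut_arimoto(p_y_x, iters=500):
    """Blahut-Arimoto iteration for the capacity (in bits) of a
    discrete memoryless channel with transition matrix p_y_x[x][y]."""
    nx, ny = len(p_y_x), len(p_y_x[0])
    r = [1.0 / nx] * nx  # input distribution, initialized uniform
    for _ in range(iters):
        # output marginal q(y) under the current input distribution
        q_y = [sum(r[x] * p_y_x[x][y] for x in range(nx)) for y in range(ny)]
        # r_new(x) proportional to exp( sum_y p(y|x) log q(x|y) )
        new_r = []
        for x in range(nx):
            e = sum(p_y_x[x][y] * math.log(r[x] * p_y_x[x][y] / q_y[y])
                    for y in range(ny) if p_y_x[x][y] > 0)
            new_r.append(math.exp(e))
        z = sum(new_r)
        r = [v / z for v in new_r]
    # capacity = mutual information at the optimizing input distribution
    q_y = [sum(r[x] * p_y_x[x][y] for x in range(nx)) for y in range(ny)]
    cap = 0.0
    for x in range(nx):
        for y in range(ny):
            if p_y_x[x][y] > 0:
                cap += r[x] * p_y_x[x][y] * math.log2(p_y_x[x][y] / q_y[y])
    return cap

# binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1)
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(blahut_arimoto(bsc))
```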
|
1304.3285 | Scaling the Indian Buffet Process via Submodular Maximization | stat.ML cs.LG | Inference for latent feature models is inherently difficult as the inference
space grows exponentially with the size of the input data and number of latent
features. In this work, we use Kurihara & Welling (2008)'s
maximization-expectation framework to perform approximate MAP inference for
linear-Gaussian latent feature models with an Indian Buffet Process (IBP)
prior. This formulation yields a submodular function of the features that
corresponds to a lower bound on the model evidence. By adding a constant to
this function, we obtain a nonnegative submodular function that can be
maximized via a greedy algorithm that obtains at least a one-third
approximation to the optimal solution. Our inference method scales linearly
with the size of the input data, and we show the efficacy of our method on the
largest datasets currently analyzed using an IBP model.
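A greedy algorithm with a one-third guarantee for unconstrained nonnegative submodular maximization is the deterministic "double greedy" of Buchbinder et al.; the sketch below runs it on a toy graph-cut objective, which is submodular and nonnegative but has nothing to do with the paper's IBP evidence bound:

```python
def double_greedy(ground, f):
    """Deterministic double-greedy for unconstrained nonnegative
    submodular maximization (1/3-approximation). f maps a frozenset
    to a value; here a toy cut function, not the paper's objective."""
    X, Y = frozenset(), frozenset(ground)
    for e in ground:
        a = f(X | {e}) - f(X)  # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)  # marginal gain of removing e from Y
        if a >= b:
            X = X | {e}
        else:
            Y = Y - {e}
    return X  # X == Y after the loop

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
def cut(s):
    # number of edges crossing the cut: submodular and nonnegative
    return sum(1 for u, v in edges if (u in s) != (v in s))

print(sorted(double_greedy(range(4), cut)))
```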
|
1304.3345 | Probabilistic Classification using Fuzzy Support Vector Machines | cs.LG math.ST stat.TH | In medical applications such as recognizing the type of a tumor as Malignant
or Benign, a wrong diagnosis can be devastating. Methods like Fuzzy Support
Vector Machines (FSVM) try to reduce the effect of misplaced training points by
assigning a lower weight to the outliers. However, there are still uncertain
points which are similar to both classes and assigning a class by the given
information will cause errors. In this paper, we propose a two-phase
classification method which probabilistically assigns the uncertain points to
each of the classes. The proposed method is applied to the Breast Cancer
Wisconsin (Diagnostic) Dataset which consists of 569 instances in 2 classes of
Malignant and Benign. This method assigns certain instances to their
appropriate classes with probability of one, and the uncertain instances to
each of the classes with associated probabilities. Therefore, based on the
degree of uncertainty, doctors can suggest further examinations before making
the final diagnosis.
|
1304.3362 | Evolution of Swarm Robotics Systems with Novelty Search | cs.NE | Novelty search is a recent artificial evolution technique that challenges
traditional evolutionary approaches. In novelty search, solutions are rewarded
based on their novelty, rather than their quality with respect to a predefined
objective. The lack of a predefined objective precludes premature convergence
caused by a deceptive fitness function. In this paper, we apply novelty search
combined with NEAT to the evolution of neural controllers for homogeneous
swarms of robots. Our empirical study is conducted in simulation, and we use a
common swarm robotics task - aggregation, and a more challenging task - sharing
of an energy recharging station. Our results show that novelty search is
unaffected by deception, is notably effective in bootstrapping the evolution,
can find solutions with lower complexity than fitness-based evolution, and can
find a broad diversity of solutions for the same task. Even in non-deceptive
setups, novelty search achieves solution qualities similar to those obtained in
traditional fitness-based evolution. Our study also encompasses variants of
novelty search that work in concert with fitness-based evolution to combine the
exploratory character of novelty search with the exploitative character of
objective-based evolution. We show that these variants can further improve the
performance of novelty search. Overall, our study shows that novelty search is
a promising alternative for the evolution of controllers for robotic swarms.
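The core of novelty search is the novelty score itself, commonly defined as the mean distance from a behaviour characterisation to its k nearest neighbours in an archive; a generic sketch with hypothetical 2-D behaviour vectors (the paper's characterisations and parameters are not reproduced here):

```python
import math

def novelty(behaviour, archive, k=3):
    """Novelty score: mean Euclidean distance from a behaviour
    characterisation to its k nearest neighbours in the archive.
    Generic sketch; in novelty search this score replaces the
    objective-based fitness as the selection signal."""
    dists = sorted(math.dist(behaviour, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

# hypothetical archive of 2-D behaviour characterisations
archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0)]
print(novelty((0.9, 0.9), archive))    # in a sparse region: high novelty
print(novelty((0.05, 0.05), archive))  # in a crowded region: low novelty
```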
|
1304.3367 | Analysis of a rate-adaptive reconciliation protocol and the effect of
the leakage on the secret key rate | quant-ph cs.IT math.IT | Quantum key distribution performs the trick of growing a secret key in two
distant places connected by a quantum channel. The main reason is that the
legitimate users can bound the information gathered by the eavesdropper. In
practical systems, whether because of finite resources or external conditions,
the quantum channel is subject to fluctuations. A rate adaptive information
reconciliation protocol, that adapts to the changes in the communication
channel, is then required to minimize the leakage of information in the
classical postprocessing.
We consider here the leakage of a rate-adaptive information reconciliation
protocol. The length of the exchanged messages is larger than that of an
optimal protocol; however, we prove that the min-entropy reduction is limited.
The simulation results, both on the asymptotic and in the finite-length regime,
show that this protocol makes it possible to increase the amount of distillable secret
key.
|
1304.3375 | Degree distribution and scaling in the Connecting Nearest Neighbors
model | physics.soc-ph cs.SI physics.data-an | We present a detailed analysis of the Connecting Nearest Neighbors (CNN)
model by V\'azquez. We show that the degree distribution follows a power law,
but the scaling exponent can vary with the parameter setting. Moreover, the
correspondence of the growing version of the Connecting Nearest Neighbors
(GCNN) model to the particular random walk model (PRW model) and recursive
search model (RS model) is established.
|
1304.3393 | Generic Behaviour Similarity Measures for Evolutionary Swarm Robotics | cs.NE | Novelty search has been shown to be a promising approach for the evolution of
controllers for swarm robotics. In existing studies, however, the experimenter
had to craft a domain dependent behaviour similarity measure to use novelty
search in swarm robotics applications. The reliance on hand-crafted similarity
measures places an additional burden on the experimenter and introduces a bias
in the evolutionary process. In this paper, we propose and compare two
task-independent, generic behaviour similarity measures: combined state count
and sampled average state. The proposed measures use the values of sensors and
effectors recorded for each individual robot of the swarm. The characterisation
of the group-level behaviour is then obtained by combining the sensor-effector
values from all the robots. We evaluate the proposed measures in an aggregation
task and in a resource sharing task. We show that the generic measures match
the performance of domain dependent measures in terms of solution quality. Our
results indicate that the proposed generic measures operate as effective
behaviour similarity measures, and that it is possible to leverage the benefits
of novelty search without having to craft domain specific similarity measures.
|
1304.3405 | Do Social Explanations Work? Studying and Modeling the Effects of Social
Explanations in Recommender Systems | cs.SI cs.IR physics.soc-ph | Recommender systems associated with social networks often use social
explanations (e.g. "X, Y and 2 friends like this") to support the
recommendations. We present a study of the effects of these social explanations
in a music recommendation context. We start with an experiment with 237 users,
in which we show explanations with varying levels of social information and
analyze their effect on users' decisions. We distinguish between two key
decisions: the likelihood of checking out the recommended artist, and the
actual rating of the artist based on listening to several songs. We find that
while the explanations do have some influence on the likelihood, there is
little correlation between the likelihood and actual (listening) rating for the
same artist. Based on these insights, we present a generative probabilistic
model that explains the interplay between explanations and background
information on music preferences, and how that leads to a final likelihood
rating for an artist. Acknowledging the impact of explanations, we discuss a
general recommendation framework that models external informational elements in
the recommendation interface, in addition to inherent preferences of users.
|
1304.3406 | Merging Satellite Measurements of Rainfall Using Multi-scale Imagery
Technique | cs.CV cs.IR | Several passive microwave satellites orbit the Earth and measure rainfall.
These measurements have the advantage of almost full global coverage when
compared to surface rain gauges. However, these satellites have low temporal
revisit and missing data over some regions. Image fusion is a useful technique
to fill in the gaps of one image (one satellite measurement) using another one.
The proposed algorithm uses an iterative fusion scheme to integrate information
from two satellite measurements. The algorithm is implemented on two datasets
for 7 years of half-hourly data. The results show significant improvements in
rain detection and rain intensity in the merged measurements.
|
1304.3418 | An Inequality Paradigm for Probabilistic Knowledge | cs.AI | We propose an inequality paradigm for probabilistic reasoning based on a
logic of upper and lower bounds on conditional probabilities. We investigate a
family of probabilistic logics, generalizing the work of Nilsson [14]. We
develop a variety of logical notions for probabilistic reasoning, including
soundness, completeness, justification, and convergence (reduction of a theory
to a simpler logical class). We argue that a bound view is especially useful for
describing the semantics of probabilistic knowledge representation and for
describing intermediate states of probabilistic inference and updating. We show
that the Dempster-Shafer theory of evidence is formally identical to a special
case of our generalized probabilistic logic. Our paradigm thus incorporates
both Bayesian "rule-based" approaches and avowedly non-Bayesian "evidential"
approaches such as MYCIN and Dempster-Shafer. We suggest how to integrate the
two "schools", and explore some possibilities for novel synthesis of a variety
of ideas in probabilistic reasoning.
|
1304.3419 | Probabilistic Interpretations for MYCIN's Certainty Factors | cs.AI | This paper examines the quantities used by MYCIN to reason with uncertainty,
called certainty factors. It is shown that the original definition of certainty
factors is inconsistent with the functions used in MYCIN to combine the
quantities. This inconsistency is used to argue for a redefinition of certainty
factors in terms of the intuitively appealing desiderata associated with the
combining functions. It is shown that this redefinition accommodates an
unlimited number of probabilistic interpretations. These interpretations are
shown to be monotonic transformations of the likelihood ratio p(E|H)/p(E|¬H).
The construction of these interpretations provides insight into the assumptions
implicit in the certainty factor model. In particular, it is shown that if
uncertainty is to be propagated through an inference network in accordance with
the desiderata, evidence must be conditionally independent given the hypothesis
and its negation and the inference network must have a tree structure. It is
emphasized that assumptions implicit in the model are rarely true in practical
applications. Methods for relaxing the assumptions are suggested.
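The parallel combining function of the certainty factor model discussed here can be sketched directly; this is the standard revised combination rule from the CF literature, shown for illustration rather than as this paper's contribution:

```python
def cf_combine(x, y):
    """Parallel combination of two certainty factors in [-1, 1]
    (standard revised MYCIN rule). Undefined when x = -y = +/-1."""
    if x >= 0 and y >= 0:
        return x + y - x * y          # confirming evidence reinforces
    if x <= 0 and y <= 0:
        return x + y + x * y          # disconfirming evidence reinforces
    return (x + y) / (1 - min(abs(x), abs(y)))  # mixed evidence cancels

# two positive pieces of evidence reinforce each other
print(cf_combine(0.6, 0.4))   # 0.76
# conflicting evidence partially cancels
print(cf_combine(0.6, -0.4))  # 0.2 / 0.6
```

Note the order-independence of the rule (it is commutative and associative), one of the desiderata the paper uses to reconstruct probabilistic interpretations.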
|
1304.3420 | Uncertain Reasoning Using Maximum Entropy Inference | cs.AI | The use of maximum entropy inference in reasoning with uncertain information
is commonly justified by an information-theoretic argument. This paper
discusses a possible objection to this information-theoretic justification and
shows how it can be met. I then compare maximum entropy inference with certain
other currently popular methods for uncertain reasoning. In making such a
comparison, one must distinguish between static and dynamic theories of degrees
of belief: a static theory concerns the consistency conditions for degrees of
belief at a given time; whereas a dynamic theory concerns how one's degrees of
belief should change in the light of new information. It is argued that maximum
entropy is a dynamic theory and that a complete theory of uncertain reasoning
can be obtained by combining maximum entropy inference with probability theory,
which is a static theory. This total theory, I argue, is much better grounded
than are other theories of uncertain reasoning.
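Maximum entropy inference can be made concrete with the classic Brandeis dice example (not taken from this paper): among all distributions on a die with a given mean, the maximum-entropy one is exponential in the face value, with the multiplier found numerically:

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=200):
    """Maximum-entropy distribution over die faces {1..6} with a fixed
    mean: p_i proportional to exp(lam * i), with lam found by bisection
    (the mean is monotonically increasing in lam)."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

# a die whose observed mean is 4.5 rather than the fair value 3.5
p = maxent_die(4.5)
print([round(v, 3) for v in p])  # weights increase toward face 6
```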
|
1304.3421 | Independence and Bayesian Updating Methods | cs.AI | Duda, Hart, and Nilsson have set forth a method for rule-based inference
systems to use in updating the probabilities of hypotheses on the basis of
multiple items of new evidence. Pednault, Zucker, and Muresan claimed to give
conditions under which independence assumptions made by Duda et al. preclude
updating-that is, prevent the evidence from altering the probabilities of the
hypotheses. Glymour refutes Pednault et al.'s claim with a counterexample of a
rather special form (one item of evidence is incompatible with all but one of
the hypotheses); he raises, but leaves open, the question whether their result
would be true with an added assumption to rule out such special cases. We show
that their result does not hold even with the added assumption, but that it can
nevertheless be largely salvaged. Namely, under the conditions assumed by
Pednault et al., at most one of the items of evidence can alter the probability
of any given hypothesis; thus, although updating is possible, multiple updating
for any of the hypotheses is precluded.
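The updating scheme at issue can be sketched in its odds-likelihood form: under conditional independence of the evidence given the hypothesis and its negation, each item of evidence multiplies the odds by its likelihood ratio. The numbers below are hypothetical:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Odds-likelihood updating in the style of Duda, Hart and Nilsson:
    given conditional independence of the evidence given H and given
    not-H, each item multiplies the odds by P(E|H)/P(E|~H)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior = 0.25 / 0.75  # P(H) = 0.25, so prior odds = 1/3
posterior = odds_to_prob(update_odds(prior, [3.0, 2.0]))
print(round(posterior, 4))  # posterior odds 2.0, probability 2/3
```

The salvaged result described above says that, under Pednault et al.'s conditions, at most one of these multiplicative factors can differ from 1 for any given hypothesis.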
|
1304.3422 | A Constraint Propagation Approach to Probabilistic Reasoning | cs.AI | The paper demonstrates that strict adherence to probability theory does not
preclude the use of concurrent, self-activated constraint-propagation
mechanisms for managing uncertainty. Maintaining local records of
sources-of-belief allows both predictive and diagnostic inferences to be
activated simultaneously and propagate harmoniously towards a stable
equilibrium.
|
1304.3423 | Relative Entropy, Probabilistic Inference and AI | cs.AI | Various properties of relative entropy have led to its widespread use in
information theory. These properties suggest that relative entropy has a role
to play in systems that attempt to perform inference in terms of probability
distributions. In this paper, I will review some basic properties of relative
entropy as well as its role in probabilistic inference. I will also mention
briefly a few existing and potential applications of relative entropy to
so-called artificial intelligence (AI).
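Two of the basic properties reviewed here, nonnegativity and asymmetry of relative entropy, are easy to see from its definition; a minimal sketch with hypothetical distributions:

```python
import math

def relative_entropy(p, q):
    """D(p || q) in bits: nonnegative, zero iff p == q, and not
    symmetric, which is why it acts only as a directed 'distance'
    between probability distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(relative_entropy(p, q))  # positive
print(relative_entropy(q, p))  # a different positive value: asymmetric
print(relative_entropy(p, p))  # 0.0
```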
|
1304.3424 | Foundations of Probability Theory for AI - The Application of
Algorithmic Probability to Problems in Artificial Intelligence | cs.AI | This paper covers two topics: first an introduction to Algorithmic Complexity
Theory: how it defines probability, some of its characteristic properties and
past successful applications. Second, we apply it to problems in A.I. - where
it promises to give near optimum search procedures for two very broad classes
of problems.
|
1304.3425 | Selecting Uncertainty Calculi and Granularity: An Experiment in
Trading-Off Precision and Complexity | cs.AI | The management of uncertainty in expert systems has usually been left to ad
hoc representations and rules of combinations lacking either a sound theory or
clear semantics. The objective of this paper is to establish a theoretical
basis for defining the syntax and semantics of a small subset of calculi of
uncertainty operating on a given term set of linguistic statements of
likelihood. Each calculus is defined by specifying a negation, a conjunction
and a disjunction operator. Families of Triangular norms and conorms constitute
the most general representations of conjunction and disjunction operators.
These families provide us with a formalism for defining an infinite number of
different calculi of uncertainty. The term set will define the uncertainty
granularity, i.e. the finest level of distinction among different
quantifications of uncertainty. This granularity will limit the ability to
differentiate between two similar operators. Therefore, only a small finite
subset of the infinite number of calculi will produce notably different
results. This result is illustrated by two experiments where nine and eleven
different calculi of uncertainty are used with three term sets containing five,
nine, and thirteen elements, respectively. Finally, the use of context
dependent rule set is proposed to select the most appropriate calculus for any
given situation. Such a rule set will be relatively small since it must only
describe the selection policies for a small number of calculi (resulting from
the analyzed trade-off between complexity and precision).
|
1304.3426 | A Framework for Non-Monotonic Reasoning About Probabilistic Assumptions | cs.AI | Attempts to replicate probabilistic reasoning in expert systems have
typically overlooked a critical ingredient of that process. Probabilistic
analysis typically requires extensive judgments regarding interdependencies
among hypotheses and data, and regarding the appropriateness of various
alternative models. The application of such models is often an iterative
process, in which the plausibility of the results confirms or disconfirms the
validity of assumptions made in building the model. In current expert systems,
by contrast, probabilistic information is encapsulated within modular rules
(involving, for example, "certainty factors"), and there is no mechanism for
reviewing the overall form of the probability argument or the validity of the
judgments entering into it.
|
1304.3427 | Metaprobability and Dempster-Shafer in Evidential Reasoning | cs.AI | Evidential reasoning in expert systems has often used ad-hoc uncertainty
calculi. Although it is generally accepted that probability theory provides a
firm theoretical foundation, researchers have found some problems with its use
as a workable uncertainty calculus. Among these problems are representation of
ignorance, consistency of probabilistic judgements, and adjustment of a priori
judgements with experience. The application of metaprobability theory to
evidential reasoning is a new approach to solving these problems.
Metaprobability theory can be viewed as a way to provide soft or hard
constraints on beliefs in much the same manner as the Dempster-Shafer theory
provides constraints on probability masses on subsets of the state space. Thus,
we use the Dempster-Shafer theory, an alternative theory of evidential
reasoning to illuminate metaprobability theory as a theory of evidential
reasoning. The goal of this paper is to compare how metaprobability theory and
Dempster-Shafer theory handle the adjustment of beliefs with evidence with
respect to a particular thought experiment. Sections 2 and 3 give brief
descriptions of the metaprobability and Dempster-Shafer theories.
Metaprobability theory deals with higher order probabilities applied to
evidential reasoning. Dempster-Shafer theory is a generalization of probability
theory which has evolved from a theory of upper and lower probabilities.
Section 4 describes a thought experiment and the metaprobability and
Dempster-Shafer analysis of the experiment. The thought experiment focuses on
forming beliefs about a population with 6 types of members {1, 2, 3, 4, 5, 6}.
A type is uniquely defined by the values of three features: A, B, C. That is,
if the three features of one member of the population were known then its type
could be ascertained. Each of the three features has two possible values (e.g.,
A can be either "a0" or "a1"). Beliefs are formed from evidence accrued from
two sensors: sensor A and sensor B. Each sensor senses the corresponding
defining feature. Sensor A reports that half of its observations are "a0" and
half are "a1". Sensor B reports that half of its observations
are "b0" and half are "b1". Based on these two pieces of evidence, what
should be the beliefs on the distribution of types in the population? Note that
the third feature is not observed by any sensor.
|
1304.3428 | Implementing Probabilistic Reasoning | cs.AI | General problems in analyzing information in a probabilistic database are
considered. The practical difficulties (and occasional advantages) of storing
uncertain data, of using it in conventional forward- or backward-chaining
inference engines, and of working with a probabilistic version of resolution
are discussed. The background for this paper is the incorporation of uncertain
reasoning facilities in MRS, a general-purpose expert system building tool.
|
1304.3429 | Probability Judgement in Artificial Intelligence | cs.AI | This paper is concerned with two theories of probability judgment: the
Bayesian theory and the theory of belief functions. It illustrates these
theories with some simple examples and discusses some of the issues that arise
when we try to implement them in expert systems. The Bayesian theory is well
known; its main ideas go back to the work of Thomas Bayes (1702-1761). The
theory of belief functions, often called the Dempster-Shafer theory in the
artificial intelligence community, is less well known, but it has even older
antecedents; belief-function arguments appear in the work of George Hooper
(1640-1723) and James Bernoulli (1654-1705). For elementary expositions of the
theory of belief functions, see Shafer (1976, 1985).
|
1304.3430 | A Framework for Comparing Uncertain Inference Systems to Probability | cs.AI | Several different uncertain inference systems (UISs) have been developed for
representing uncertainty in rule-based expert systems. Some of these, such as
Mycin's Certainty Factors, Prospector, and Bayes' Networks were designed as
approximations to probability, and others, such as Fuzzy Set Theory and
Dempster-Shafer Belief Functions were not. How different are these UISs in
practice, and does it matter which you use? When combining and propagating
uncertain information, each UIS must, at least by implication, make certain
assumptions about correlations not explicitly specified. The maximum entropy
principle with minimum cross-entropy updating, provides a way of making
assumptions about the missing specification that minimizes the additional
information assumed, and thus offers a standard against which the other UISs
can be compared. We describe a framework for the experimental comparison of the
performance of different UISs, and provide some illustrative results.
|
1304.3431 | Inductive Inference and the Representation of Uncertainty | cs.AI | The form and justification of inductive inference rules depend strongly on
the representation of uncertainty. This paper examines one generic
representation, namely, incomplete information. The notion can be formalized by
presuming that the relevant probabilities in a decision problem are known only
to the extent that they belong to a class K of probability distributions. The
concept is a generalization of a frequent suggestion that uncertainty be
represented by intervals or ranges on probabilities. To make the representation
useful for decision making, an inductive rule can be formulated which
determines, in a well-defined manner, a best approximation to the unknown
probability, given the set K. In addition, the knowledge set notion entails a
natural procedure for updating -- modifying the set K given new evidence.
Several non-intuitive consequences of updating emphasize the differences
between inference with complete and inference with incomplete information.
|
1304.3432 | Machine Learning, Clustering, and Polymorphy | cs.AI cs.CL cs.LG | This paper describes a machine induction program (WITT) that attempts to
model human categorization. Properties of categories to which human subjects
are sensitive includes best or prototypical members, relative contrasts between
putative categories, and polymorphy (neither necessary or sufficient features).
This approach represents an alternative to usual Artificial Intelligence
approaches to generalization and conceptual clustering which tend to focus on
necessary and sufficient feature rules, equivalence classes, and simple search
and match schemes. WITT is shown to be more consistent with human
categorization while potentially including results produced by more traditional
clustering schemes. Applications of this approach in the domains of expert
systems and information retrieval are also discussed.
|
1304.3433 | Induction, of and by Probability | cs.AI | This paper examines some methods and ideas underlying the author's successful
probabilistic learning systems (PLS), which have proven uniquely effective and
efficient in generalization learning or induction. While the emerging
principles are generally applicable, this paper illustrates them in heuristic
search, which demands noise management and incremental learning. In our
approach, both task performance and learning are guided by probability.
Probabilities are incrementally normalized and revised, and their errors are
located and corrected.
|
1304.3434 | An Odds Ratio Based Inference Engine | cs.AI | Expert systems applications that involve uncertain inference can be
represented by a multidimensional contingency table. These tables offer a
general approach to inferring with uncertain evidence, because they can embody
any form of association between any number of pieces of evidence and
conclusions. (Simpler models may be required, however, if the number of pieces
of evidence bearing on a conclusion is large.) This paper presents a method of
using these tables to make uncertain inferences without assumptions of
conditional independence among pieces of evidence or heuristic combining rules.
As evidence is accumulated, new joint probabilities are calculated so as to
maintain any dependencies among the pieces of evidence that are found in the
contingency table. The new conditional probability of the conclusion is then
calculated directly from these new joint probabilities and the conditional
probabilities in the contingency table.
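A small sketch of the idea (illustrative numbers only, not from the paper): store the full joint table over the evidence variables and the conclusion, condition on each observation by restricting and renormalizing, and read off the conclusion's probability with no independence assumptions:

```python
# Joint contingency table over two pieces of evidence (E1, E2) and a
# conclusion H; entries are hypothetical joint probabilities summing to 1.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10,
    (1, 1, 0): 0.05, (1, 1, 1): 0.20,
}

def condition(table, index, value):
    """Keep only cells consistent with the observed evidence and
    renormalize, preserving any dependencies among evidence items."""
    kept = {k: v for k, v in table.items() if k[index] == value}
    z = sum(kept.values())
    return {k: v / z for k, v in kept.items()}

def prob_h(table):
    return sum(v for (e1, e2, h), v in table.items() if h == 1)

t = condition(joint, 0, 1)   # observe E1 = 1
t = condition(t, 1, 1)       # then observe E2 = 1
print(round(prob_h(t), 2))   # P(H=1 | E1=1, E2=1) = 0.20/0.25 = 0.8
```

Because conditioning operates on the whole joint table, any correlation between E1 and E2 encoded in the table is carried through automatically.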
|
1304.3435 | A Framework for Control Strategies in Uncertain Inference Networks | cs.AI | Control Strategies for hierarchical tree-like probabilistic inference
networks are formulated and investigated. Strategies that utilize staged
look-ahead and temporary focus on subgoals are formalized and refined using the
Depth Vector concept that serves as a tool for defining the 'virtual tree'
regarded by the control strategy. The concept is illustrated by four types of
control strategies for three-level trees that are characterized according to
their Depth Vector, and according to the way they consider intermediate nodes
and the role that they let these nodes play. INFERENTI is a computerized
inference system written in Prolog, which provides tools for exercising a
variety of control strategies. The system also provides tools for simulating
test data and for comparing the relative average performance under different
strategies.
|
1304.3436 | Combining Uncertain Estimates | cs.AI | In a real expert system, one may have unreliable, unconfident, conflicting
estimates of the value for a particular parameter. It is important for decision
making that the information present in this aggregate somehow find its way into
use. We cast the problem of representing and combining uncertain estimates as
selection of two kinds of functions, one to determine an estimate, the other
its uncertainty. The paper includes a long list of properties that such
functions should satisfy, and it presents one method that satisfies them.
|
1304.3437 | Confidence Factors, Empiricism and the Dempster-Shafer Theory of
Evidence | cs.AI | The issue of confidence factors in Knowledge Based Systems has become
increasingly important and Dempster-Shafer (DS) theory has become increasingly
popular as a basis for these factors. This paper discusses the need for an
empirical interpretation of any theory of confidence factors applied to
Knowledge Based Systems and describes an empirical interpretation of DS theory
suggesting that the theory has been extensively misinterpreted. For the
essentially syntactic DS theory, a model is developed based on sample spaces,
the traditional semantic model of probability theory. This model is used to
show that, if belief functions are based on reasonably accurate sampling or
observation of a sample space, then the beliefs and upper probabilities as
computed according to DS theory cannot be interpreted as frequency ratios.
Since many proposed applications of DS theory use belief functions in
situations with statistically derived evidence (Wesley [1]) and seem to appeal
to statistical intuition to provide an interpretation of the results, as has
Garvey [2], it may be argued that DS theory has often been misapplied.
|
1304.3438 | Incidence Calculus: A Mechanism for Probabilistic Reasoning | cs.AI | Mechanisms for the automation of uncertainty are required for expert systems.
Sometimes these mechanisms need to obey the properties of probabilistic
reasoning. A purely numeric mechanism, like those proposed so far, cannot
provide a probabilistic logic with truth functional connectives. We propose an
alternative mechanism, Incidence Calculus, which is based on a representation
of uncertainty using sets of points, which might represent situations, models
or possible worlds. Incidence Calculus does provide a probabilistic logic with
truth functional connectives.
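A toy sketch of the set-based representation (my own illustration, with made-up propositions, not an excerpt of the paper's formalism): each proposition is assigned an incidence, the set of possible worlds in which it holds, and connectives act on those sets rather than on numbers:

```python
# Ten equally likely possible worlds, numbered 0..9.
worlds = set(range(10))
incidence = {
    "rain":   {0, 1, 2, 3, 4},
    "cloudy": {0, 1, 2, 3, 4, 5, 6},
}

def prob(s):
    """Probability of a proposition = fraction of worlds in its incidence."""
    return len(s) / len(worlds)

# Truth-functional connectives on incidences, not on point probabilities:
conj = incidence["rain"] & incidence["cloudy"]   # conjunction = intersection
disj = incidence["rain"] | incidence["cloudy"]   # disjunction = union
neg = worlds - incidence["rain"]                 # negation = complement
print(prob(conj), prob(disj), prob(neg))         # 0.5 0.7 0.5
```

Here P(rain and cloudy) = 0.5, not the 0.35 that an independence assumption on the two numbers alone would give, which is exactly what a purely numeric mechanism cannot capture truth-functionally.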
|
1304.3439 | Evidential Confirmation as Transformed Probability | cs.AI | A considerable body of work in AI has been concerned with aggregating
measures of confirmatory and disconfirmatory evidence for a common set of
propositions. Claiming classical probability to be inadequate or inappropriate,
several researchers have gone so far as to invent new formalisms and methods.
We show how to represent two major such alternative approaches to evidential
confirmation not only in terms of transformed (Bayesian) probability, but also
in terms of each other. This unifies two of the leading approaches to
confirmation theory, by showing that a revised MYCIN Certainty Factor method
[12] is equivalent to a special case of Dempster-Shafer theory. It yields a
well-understood axiomatic basis, i.e. conditional independence, to interpret
previous work on quantitative confirmation theory. It substantially resolves
the "take-them-or-leave-them" problem of priors: MYCIN had to leave them out,
while PROSPECTOR had to have them in. It recasts some of confirmation theory's
advantages in terms of the psychological accessibility of probabilistic
information in different (transformed) formats. Finally, it helps to unify the
representation of uncertain reasoning (see also [11]).
|
1304.3440 | Interval-Based Decisions for Reasoning Systems | cs.AI | This essay looks at decision-making with interval-valued probability
measures. Existing decision methods have either supplemented expected utility
methods with additional criteria of optimality, or have attempted to supplement
the interval-valued measures. We advocate a new approach, which makes the
following questions moot: 1. which additional criteria to use, and 2. how wide
intervals should be. In order to implement the approach, we need more
epistemological information. Such information can be generated by a rule of
acceptance with a parameter that allows various attitudes toward error, or can
simply be declared. In sketch, the argument is: 1. probability intervals are
useful and natural in AI systems; 2. wide intervals avoid error, but are
useless in some risk sensitive decision-making; 3. one may obtain narrower
intervals if one is less cautious; 4. if bodies of knowledge can be ordered by
their caution, one should perform the decision analysis with the acceptable
body of knowledge that is the most cautious, of those that are useful. The
resulting behavior differs from that of a behavioral probabilist (a Bayesian)
because in the proposal, 5. intervals based on successive bodies of knowledge
are not always nested; 6. if the agent uses a probability for a particular
decision, she need not commit to that probability for credence or future
decision; and 7. there may be no acceptable body of knowledge that is useful;
hence, sometimes no decision is mandated.
|
1304.3441 | Machine Generalization and Human Categorization: An
Information-Theoretic View | cs.AI | In designing an intelligent system that must be able to explain its reasoning
to a human user, or to provide generalizations that the human user finds
reasonable, it may be useful to take into consideration psychological data on
what types of concepts and categories people naturally use. The psychological
literature on concept learning and categorization provides strong evidence that
certain categories are more easily learned, recalled, and recognized than
others. We show here how a measure of the informational value of a category
predicts the results of several important categorization experiments better
than standard alternative explanations. This suggests that information-based
approaches to machine generalization may prove particularly useful and natural
for human users of the systems.
|
1304.3442 | Exact Reasoning Under Uncertainty | cs.AI | This paper focuses on designing expert systems to support decision making in
complex, uncertain environments. In this context, our research indicates that
strictly probabilistic representations, which enable the use of
decision-theoretic reasoning, are highly preferable to recently proposed
alternatives (e.g., fuzzy set theory and Dempster-Shafer theory). Furthermore,
we discuss the language of influence diagrams and a corresponding methodology
-- decision analysis -- that allows decision theory to be used effectively and
efficiently as a decision-making aid. Finally, we use RACHEL, a system that
helps infertile couples select medical treatments, to illustrate the
methodology of decision analysis as basis for expert decision systems.
|
1304.3443 | The Estimation of Subjective Probabilities via Categorical Judgments of
Uncertainty | cs.AI | Theoretically as well as experimentally it is investigated how people
represent their knowledge in order to make decisions or to share their
knowledge with others. Experiment 1 probes into the ways people gather
information about the frequencies of events and how the requested response
mode, that is, numerical vs. verbal estimates interferes with this knowledge.
The least interference occurs if the subjects are allowed to give verbal
responses. From this it is concluded that processing knowledge about
uncertainty categorically, that is, by means of verbal expressions, imposes
less mental workload on the decision maker than numerical processing.
Possibility theory is used as a framework for modeling the individual usage of
verbal categories for grades of uncertainty. The 'elastic' constraints on the
verbal expressions for every single subject are determined in Experiment 2 by
means of sequential calibration. In further experiments it is shown that the
superiority of the verbal processing of knowledge about uncertainty quite
generally reduces persistent biases reported in the literature: conservatism
(Experiment 3) and negligence of regression (Experiment 4). The reanalysis of
Hormann's data reveals that in verbal judgments people exhibit sensitivity for
base rates and are not prone to the conjunction fallacy. In a final experiment
(5) about predictions in a real-life situation it turns out that in a numerical
forecasting task subjects restricted themselves to those parts of their
knowledge which are numerical. On the other hand subjects in a verbal
forecasting task accessed verbally as well as numerically stated knowledge.
Forecasting is structurally related to the estimation of probabilities for rare
events insofar as supporting and contradicting arguments have to be evaluated
and the choice of the final judgment has to be justified according to the
evidence brought forward. In order to assist people in such choice situations a
formal model for the interactive checking of arguments has been developed. The
model transforms the normal-language quantifiers used in the arguments into
fuzzy numbers and evaluates the given train of arguments by means of fuzzy
numerical operations. Ambiguities in the meanings of quantifiers are resolved
interactively.
|
1304.3444 | A Cure for Pathological Behavior in Games that Use Minimax | cs.AI | The traditional approach to choosing moves in game-playing programs is the
minimax procedure. The general belief underlying its use is that increasing
search depth improves play. Recent research has shown that given certain
simplifying assumptions about a game tree's structure, this belief is
erroneous: searching deeper decreases the probability of making a correct move.
This phenomenon is called game tree pathology. Among these simplifying
assumptions is uniform depth of win/loss (terminal) nodes, a condition which is
not true for most real games. Analytic studies in [10] have shown that if every
node in a pathological game tree is made terminal with probability exceeding a
certain threshold, the resulting tree is nonpathological. This paper considers
a new evaluation function which recognizes increasing densities of forced wins
at deeper levels in the tree. This paper makes two points that strengthen
the hypothesis that uniform win depth causes pathology. First, it proves
mathematically that as search deepens, an evaluation function that does not
explicitly check for certain forced win patterns becomes decreasingly likely to
force wins. This failing predicts the pathological behavior of the original
evaluation function. Second, it shows empirically that despite recognizing
fewer mid-game wins than the theoretically predicted minimum, the new function
is nonpathological.
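For reference, the minimax procedure under discussion can be sketched in a few lines (a generic textbook version on a toy tree, not the paper's game model or evaluation function):

```python
def minimax(node, maximizing=True):
    """Classic minimax over a tree given as nested lists; leaves are
    static evaluation values (higher is better for the maximizer)."""
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the maximizer picks the branch whose minimizer reply
# is least damaging; branch minima are 3, 2, 2, so the value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # 3
```

Pathology concerns how the reliability of these backed-up values changes as such trees are searched deeper with an imperfect leaf evaluator.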
|
1304.3445 | An Evaluation of Two Alternatives to Minimax | cs.AI | In the field of Artificial Intelligence, traditional approaches to choosing
moves in games involve the use of the minimax algorithm. However, recent
research results indicate that minimaxing may not always be the best approach.
In this paper we summarize the results of some measurements on several model
games with several different evaluation functions. These measurements, which
are presented in detail in [NPT], show that there are some new algorithms that
can make significantly better use of evaluation function values than the
minimax algorithm does.
|
1304.3446 | Intelligent Probabilistic Inference | cs.AI | The analysis of practical probabilistic models on the computer demands a
convenient representation for the available knowledge and an efficient
algorithm to perform inference. An appealing representation is the influence
diagram, a network that makes explicit the random variables in a model and
their probabilistic dependencies. Recent advances have developed solution
procedures based on the influence diagram. In this paper, we examine the
fundamental properties that underlie those techniques, and the information
about the probabilistic structure that is available in the influence diagram
representation. The influence diagram is a convenient representation for
computer processing while also being clear and non-mathematical. It displays
probabilistic dependence precisely, in a way that is intuitive for decision
makers and experts to understand and communicate. As a result, the same
influence diagram can be used to build, assess and analyze a model,
facilitating changes in the formulation and feedback from sensitivity analysis.
The goal in this paper is to determine arbitrary conditional probability
distributions from a given probabilistic model. Given qualitative information
about the dependence of the random variables in the model we can, for a
specific conditional expression, specify precisely what quantitative
information we need to be able to determine the desired conditional probability
distribution. It is also shown how we can find that probability distribution by
performing operations locally, that is, over subspaces of the joint
distribution. In this way, we can exploit the conditional independence present
in the model to avoid having to construct or manipulate the full joint
distribution. These results are extended to include maximal processing when the
information available is incomplete, and optimal decision making in an
uncertain environment. Influence diagrams as a computer-aided modeling tool
were developed by Miller, Merkofer, and Howard [5] and extended by Howard and
Matheson [2]. Good descriptions of how to use them in modeling are in Owen [7]
and Howard and Matheson [2]. The notion of solving a decision problem through
influence diagrams was examined by Olmsted [6] and such an algorithm was
developed by Shachter [8]. The latter paper also shows how influence diagrams
can be used to perform a variety of sensitivity analyses. This paper extends
those results by developing a theory of the properties of the diagram that are
used by the algorithm, and the information needed to solve arbitrary
probability inference problems. Section 2 develops the notation and the
framework for the paper and the relationship between influence diagrams and
joint probability distributions. The general probabilistic inference problem is
posed in Section 3. In Section 4 the transformations on the diagram are
developed and then put together into a solution procedure in Section 5. In
Section 6, this procedure is used to calculate the information requirement to
solve an inference problem and the maximal processing that can be performed
with incomplete information. Section 7 contains a summary of results.
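The kind of local processing the abstract describes can be illustrated on the simplest case (a hypothetical three-node chain with invented numbers, far simpler than the paper's general diagrams): with C conditionally independent of A given B, P(C | A) follows from the local conditional tables alone, without ever constructing the full joint distribution:

```python
# Chain A -> B -> C, binary variables; local conditional tables only.
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # P(B=b | A=a)
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}  # P(C=c | B=b)

def p_c_given_a(c, a):
    """Sum out B locally: P(C=c | A=a) = sum_b P(B=b | A=a) P(C=c | B=b)."""
    return sum(p_b_given_a[a][b] * p_c_given_b[b][c] for b in (0, 1))

print(round(p_c_given_a(1, 0), 2))  # 0.9*0.3 + 0.1*0.6 = 0.33
```

Only the tables touching B are involved; in larger diagrams the same principle lets inference work over subspaces of the joint distribution.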
|
1304.3447 | Developing and Analyzing Boundary Detection Operators Using
Probabilistic Models | cs.CV | Most feature detectors such as edge detectors or circle finders are
statistical, in the sense that they decide at each point in an image about the
presence of a feature. This paper describes the use of Bayesian feature
detectors.
|
1304.3448 | Strong & Weak Methods: A Logical View of Uncertainty | cs.AI | The last few years has seen a growing debate about techniques for managing
uncertainty in AI systems. Unfortunately this debate has been cast as a rivalry
between AI methods and classical probability based ones. Three arguments for
extending the probability framework of uncertainty are presented, none of which
imply a challenge to classical methods. These are (1) explicit representation
of several types of uncertainty, specifically possibility and plausibility, as
well as probability, (2) the use of weak methods for uncertainty management in
problems which are poorly defined, and (3) symbolic representation of different
uncertainty calculi and methods for choosing between them.
|
1304.3449 | Statistical Mechanics Algorithm for Response to Targets (SMART) | cs.CE cs.AI | It is proposed to apply modern methods of nonlinear nonequilibrium
statistical mechanics to develop software algorithms that will optimally
respond to targets within short response times with minimal computer resources.
This Statistical Mechanics Algorithm for Response to Targets (SMART) can be
developed with a view towards its future implementation into a hardwired
Statistical Algorithm Multiprocessor (SAM) to enhance the efficiency and speed
of response to targets (SMART_SAM).
|
1304.3450 | Probabilistic Conflict Resolution in Hierarchical Hypothesis Spaces | cs.AI | Artificial intelligence applications such as industrial robotics, military
surveillance, and hazardous environment clean-up, require situation
understanding based on partial, uncertain, and ambiguous or erroneous evidence.
It is necessary to evaluate the relative likelihood of multiple possible
hypotheses of the (current) situation faced by the decision making program.
Often, the evidence and hypotheses are hierarchical in nature. In image
understanding tasks, for example, evidence begins with raw imagery, from which
ambiguous features are extracted which have multiple possible aggregations
providing evidential support for the presence of multiple hypothesis of objects
and terrain, which in turn aggregate in multiple ways to provide partial
evidence for different interpretations of the ambient scene. Information fusion
for military situation understanding has a similar evidence/hypothesis
hierarchy from multiple sensor through message level interpretations, and also
provides evidence at multiple levels of the doctrinal hierarchy of military
forces.
|
1304.3451 | Knowledge Structures and Evidential Reasoning in Decision Analysis | cs.AI | The roles played by decision factors in making complex subjective decisions
are characterized by how these factors affect the overall decision. Evidence
that partially matches a factor is evaluated, and then effective computational
rules are applied to these roles to form an appropriate aggregation of the
evidence. The use of this technique supports the expression of deeper levels of
causality, and may also preserve the cognitive structure of the decision maker
better than the usual weighting methods, certainty-factor or other
probabilistic models can.
|
1304.3477 | Concurrent learning-based approximate optimal regulation | cs.SY math.OC | In deterministic systems, reinforcement learning-based online approximate
optimal control methods typically require a restrictive persistence of
excitation (PE) condition for convergence. This paper presents a concurrent
learning-based solution to the online approximate optimal regulation problem
that eliminates the need for PE. The development is based on the observation
that given a model of the system, the Bellman error, which quantifies the
deviation of the system Hamiltonian from the optimal Hamiltonian, can be
evaluated at any point in the state space. Further, a concurrent learning-based
parameter identifier is developed to compensate for parametric uncertainty in
the plant dynamics. Uniformly ultimately bounded (UUB) convergence of the
system states to the origin, and UUB convergence of the developed policy to the
optimal policy are established using a Lyapunov-based analysis, and simulations
are performed to demonstrate the performance of the developed controller.
|
1304.3478 | Sparse Stable Matrices | math.OC cs.SY | In the design of decentralized networked systems, it is useful to know
whether a given network topology can sustain stable dynamics. We consider a
basic version of this problem here: given a vector space of sparse real
matrices, does it contain a stable (Hurwitz) matrix? Said differently, is a
feedback channel (corresponding to a non-zero entry) necessary for
stabilization or can it be done without. We provide in this paper a set of
necessary and a set of sufficient conditions for the existence of stable
matrices in a vector space of sparse matrices. We further prove some properties
of the set of sparse matrix spaces that contain Hurwitz matrices. The
conditions we exhibit are most easily stated in the language of graph theory,
which we thus adopt in this paper.
|
1304.3479 | Approximate optimal cooperative decentralized control for consensus in a
topological network of agents with uncertain nonlinear dynamics | cs.SY math.OC | Efforts in this paper seek to combine graph theory with adaptive dynamic
programming (ADP) as a reinforcement learning (RL) framework to determine
forward-in-time, real-time, approximate optimal controllers for distributed
multi-agent systems with uncertain nonlinear dynamics. A decentralized
continuous time-varying control strategy is proposed, using only local
communication feedback from two-hop neighbors on a communication topology that
has a spanning tree. An actor-critic-identifier architecture is proposed that
employs a nonlinear state derivative estimator to estimate the unknown dynamics
online and uses the estimate thus obtained for value function approximation.
|
1304.3480 | Friendship Paradox Redux: Your Friends Are More Interesting Than You | cs.SI cs.CY nlin.AO physics.soc-ph stat.AP | Feld's friendship paradox states that "your friends have more friends than
you, on average." This paradox arises because extremely popular people, despite
being rare, are overrepresented when averaging over friends. Using a sample of
the Twitter firehose, we confirm that the friendship paradox holds for >98% of
Twitter users. Because of the directed nature of the follower graph on Twitter,
we are further able to confirm more detailed forms of the friendship paradox:
everyone you follow or who follows you has more friends and followers than you.
This is likely caused by a correlation we demonstrate between Twitter activity,
number of friends, and number of followers. In addition, we discover two new
paradoxes: the virality paradox that states "your friends receive more viral
content than you, on average," and the activity paradox, which states "your
friends are more active than you, on average." The latter paradox is important
in regulating online communication. It may result in users having difficulty
maintaining optimal incoming information rates, because following additional
users causes the volume of incoming tweets to increase super-linearly. While
users may compensate for increased information flow by increasing their own
activity, users become information overloaded when they receive more
information than they are able or willing to process. We compare the average
size of cascades sent and received by overloaded and underloaded users, and
show that overloaded users post and receive larger cascades and are poor
detectors of small cascades.
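The averaging mechanism behind the paradox is easy to reproduce on a toy graph: a popular node is counted once per friendship when averaging over friends, so the friends-of-friends mean exceeds the per-person mean. A hypothetical star-shaped example:

```python
# Hypothetical undirected friendship graph as an adjacency list.
# One very popular node (0) inflates the friends-of-friends average.
graph = {
    0: [1, 2, 3, 4, 5],
    1: [0],
    2: [0],
    3: [0],
    4: [0],
    5: [0],
}

degree = {v: len(nbrs) for v, nbrs in graph.items()}

# Average number of friends, taken over people.
mean_friends = sum(degree.values()) / len(graph)

# Average number of friends, taken over *friends*: popular nodes are
# counted once per friendship, so they are overrepresented.
mean_friends_of_friends = sum(
    degree[f] for nbrs in graph.values() for f in nbrs
) / sum(degree.values())

print(mean_friends)             # 10/6 ≈ 1.67
print(mean_friends_of_friends)  # (5*1 + 5*5)/10 = 3.0
```

On this star graph the average person has about 1.67 friends, but the average friend has 3: Feld's paradox in miniature.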
|
1304.3489 | Logical Stochastic Optimization | cs.AI | We present a logical framework to represent and reason about stochastic
optimization problems based on probability answer set programming. This is
established by adding probability optimization aggregates, e.g., minimum and
maximum, to the language of probability answer set programming, allowing
minimization or maximization of some desired criteria in probabilistic
environments. We show the application of the proposed logical stochastic
optimization framework to two-stage stochastic optimization problems with
recourse.
|
1304.3513 | Eat the Cake and Have It Too: Privacy Preserving Location Aggregates in
Geosocial Networks | cs.CR cs.SI | Geosocial networks are online social networks centered on the locations of
subscribers and businesses. Profiling social network users, e.g., to provide
input for targeted advertising, has become an important source of revenue. Its natural
reliance on personal information introduces a trade-off between user privacy
and incentives of participation for businesses and geosocial network providers.
In this paper we introduce location centric profiles (LCPs), aggregates built
over the profiles of users present at a given location. We introduce PROFILR, a
suite of mechanisms that construct LCPs in a private and correct manner. We
introduce iSafe, a novel, context-aware public safety application built on
PROFILR. Our Android and browser plugin implementations show that PROFILR is
efficient: the end-to-end overhead is small even under strong correctness
assurances.
|
1304.3518 | Trust in the CODA model: Opinion Dynamics and the reliability of other
agents | physics.soc-ph cs.MA cs.SI | A model for the joint evolution of opinions and how much the agents trust
each other is presented. The model is built using the framework of the
Continuous Opinions and Discrete Actions (CODA) model. Instead of a fixed
probability that the other agents will decide in favor of the best choice,
each agent considers that other agents might be one of two types:
trustworthy or useless. Trustworthy agents are considered more likely to be
right than wrong, while the opposite holds for useless ones. Together with its
opinion about the discussed issue, each agent also updates, for each agent it
interacts with, the probability that that agent is of one type or the
other. The dynamics of opinions and the
evolution of the trust between the agents are studied. Clear evidence is found
of two phases: one where strong polarization is observed, and another where a
clear division is permanent and reinforced. The transition shows signs of being
first-order, with a location dependent on both the parameters of the model and
the initial conditions. This
happens despite the fact that the trust network evolves much slower than the
opinion on the central issue.
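The per-neighbor trust update described above can be sketched as a Bayesian update on the probability that a neighbor is trustworthy; the likelihood parameters below are illustrative assumptions, not the paper's values:

```python
# Minimal sketch of a CODA-style trust update (illustrative parameters,
# not the paper's exact model). An agent holds p = P(other is trustworthy).
# Trustworthy agents agree with the correct choice with prob a > 1/2;
# useless agents with prob b < 1/2.

def update_trust(p, agreed, a=0.7, b=0.3):
    """Bayes update of P(trustworthy) after observing (dis)agreement."""
    like_trust = a if agreed else 1 - a
    like_useless = b if agreed else 1 - b
    return (p * like_trust) / (p * like_trust + (1 - p) * like_useless)

p = 0.5
for agreed in [True, True, False, True]:
    p = update_trust(p, agreed)
print(round(p, 3))  # → 0.845
```

With a = 1 - b, each disagreement exactly cancels one agreement in odds space, so trust drifts up or down with the running balance of (dis)agreements.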
|
1304.3531 | Compressed Sensing and Affine Rank Minimization under Restricted
Isometry | cs.IT math.IT math.ST stat.TH | This paper establishes new restricted isometry conditions for compressed
sensing and affine rank minimization. It is shown for compressed sensing that
$\delta_{k}^A+\theta_{k,k}^A < 1$ guarantees the exact recovery of all $k$
sparse signals in the noiseless case through the constrained $\ell_1$
minimization. Furthermore, the upper bound 1 is sharp in the sense that for any
$\epsilon > 0$, the condition $\delta_k^A + \theta_{k, k}^A < 1+\epsilon$ is
not sufficient to guarantee such exact recovery using any recovery method.
Similarly, for affine rank minimization, if
$\delta_{r}^\mathcal{M}+\theta_{r,r}^\mathcal{M}< 1$ then all matrices with
rank at most $r$ can be reconstructed exactly in the noiseless case via the
constrained nuclear norm minimization; and for any $\epsilon > 0$,
$\delta_r^\mathcal{M} +\theta_{r,r}^\mathcal{M} < 1+\epsilon$ does not ensure
such exact recovery using any method. Moreover, in the noisy case the
conditions $\delta_{k}^A+\theta_{k,k}^A < 1$ and
$\delta_{r}^\mathcal{M}+\theta_{r,r}^\mathcal{M}< 1$ are also sufficient for
the stable recovery of sparse signals and low-rank matrices respectively.
Applications and extensions are also discussed.
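The constrained $\ell_1$ minimization used here (basis pursuit: minimize $\|x\|_1$ subject to $Ax = y$) can be solved as a linear program by splitting $x$ into positive and negative parts; a minimal sketch with hypothetical problem sizes:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical sizes: a 2-sparse signal in R^20, 10 random measurements.
n, m, k = 20, 10, 2
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -2.0]
y = A @ x_true

# min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0:
# minimize sum(u) + sum(v) subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.sum(np.abs(x_hat)), np.sum(np.abs(x_true)))
```

Since $x_{\text{true}}$ is feasible, the LP optimum never exceeds $\|x_{\text{true}}\|_1$; exact recovery of the support is what the restricted isometry conditions above guarantee when they hold for $A$.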
|
1304.3548 | Crowdsourcing Dilemma | cs.SI cs.GT physics.soc-ph | Crowdsourcing offers unprecedented potential for solving tasks efficiently by
tapping into the skills of large groups of people. A salient feature of
crowdsourcing---its openness of entry---makes it vulnerable to malicious
behavior. Such behavior took place in a number of recent popular crowdsourcing
competitions. We provide a game-theoretic analysis of a fundamental tradeoff
between the potential for increased productivity and the possibility of being
set back by malicious behavior. Our results show that in crowdsourcing
competitions malicious behavior is the norm, not the anomaly---a result
contrary to the conventional wisdom in the area. Counterintuitively, making the
attacks more costly does not deter them but leads to a less desirable outcome.
These findings have cautionary implications for the design of crowdsourcing
competitions.
|