id | title | categories | abstract |
|---|---|---|---|
1304.1876 | Proceedings of the 37th Annual Workshop of the Austrian Association for
Pattern Recognition (ÖAGM/AAPR), 2013 | cs.CV | This volume represents the proceedings of the 37th Annual Workshop of the
Austrian Association for Pattern Recognition (ÖAGM/AAPR), held May 23-24,
2013, in Innsbruck, Austria.
|
1304.1877 | Privacy-preserving Data Mining, Sharing and Publishing | cs.DB cs.CR | The goal of the paper is to present different approaches to
privacy-preserving data sharing and publishing in the context of e-health care
systems. In particular, the paper presents a literature review of technical
issues in privacy assurance and a current real-life, high-complexity
implementation of a medical system that assumes proper data sharing
mechanisms.
|
1304.1898 | Socio-inspired ICT - Towards a socially grounded society-ICT symbiosis | physics.soc-ph cs.SI physics.comp-ph | Modern ICT (Information and Communication Technology) has developed a vision
where the "computer" is no longer associated with the concept of a single
device or a network of devices, but rather the entirety of situated services
originating in a digital world, which are perceived through the physical world.
It is observed that services with explicit user input and output are being
replaced by a computing landscape sensing the physical world via a huge
variety of sensors, and controlling it via a plethora of actuators. The nature
and appearance of computing devices is changing to be hidden in the fabric of
everyday life, invisibly networked, and omnipresent, with applications greatly
being based on the notions of context and knowledge. Interaction with such
globe-spanning, modern ICT systems will presumably be more implicit, at the
periphery of human attention, rather than explicit, i.e. at the focus of human
attention. Socio-inspired ICT assumes that future, globe-scale ICT systems
should be viewed as social systems. Such a view challenges research to identify
and formalize the principles of interaction and adaptation in social systems,
so as to be able to ground future ICT systems on those principles. This
position paper therefore is concerned with the intersection of social behaviour
and modern ICT, creating or recreating social conventions and social contexts
through the use of pervasive, globe-spanning, omnipresent and participative
ICT.
|
1304.1903 | Towards a living earth simulator | physics.comp-ph cs.SI physics.soc-ph | The Living Earth Simulator (LES) is one of the core components of the
FuturICT architecture. It will work as a federation of methods, tools,
techniques and facilities supporting all of the FuturICT simulation-related
activities to allow and encourage interactive exploration and understanding of
societal issues. Society-relevant problems will be targeted by leaning on
approaches based on complex systems theories and data science in tight
interaction with the other components of FuturICT. The LES will evaluate and
provide answers to real-world questions by taking into account multiple
scenarios. It will build on present approaches such as agent-based simulation
and modeling, multiscale modelling, statistical inference, and data mining,
moving beyond disciplinary borders to achieve a new perspective on complex
social systems.
|
1304.1914 | Causal-Path Local Time-Stepping in the Discontinuous Galerkin Method
for Maxwell's equations | physics.comp-ph cs.CE math-ph math.MP | We introduce a novel local time-stepping technique for marching-in-time
algorithms. The technique is denoted as Causal-Path Local Time-Stepping (CPLTS)
and it is applied to two time integration techniques: fourth-order
low-storage explicit Runge-Kutta (LSERK4) and second-order leapfrog (LF2).
The CPLTS method is applied to evolve Maxwell's curl equations using a
Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical
results for LF2 and LSERK4 are compared with analytical solutions and with
Montseny's LF2 technique. The results show that the CPLTS technique improves
the dispersive and dissipative properties of the LF2-LTS scheme.
|
1304.1924 | Automatic Detection of Search Tactic in Individual Information Seeking:
A Hidden Markov Model Approach | cs.IR | The information seeking process is an important topic in information seeking
behavior research. Both qualitative and empirical methods have been adopted to
analyze information seeking processes, with a major focus on uncovering the
latent search tactics behind user behaviors. Most existing works require
defining search tactics in advance and coding data manually. The few works
that recognize search tactics automatically fail to make sense of those
tactics. In this paper, we propose using an automatic technique, the Hidden
Markov Model (HMM), to explicitly model search tactics. The HMM results show
that the identified search tactics of individual information seeking
behaviors are consistent with Marchionini's information seeking process
model. Given its ability to show the connections between search tactics and
search actions and the transitions among search tactics, we argue that HMM is
a useful tool for investigating the information seeking process, or at least
that it provides a feasible way to analyze large-scale datasets.
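As a concrete illustration of the modeling idea (not the paper's trained model), the sketch below scores a search-action sequence with the forward algorithm of a small hand-specified HMM whose hidden states stand in for latent search tactics; the state labels, action labels, and all probabilities are hypothetical.

```python
import numpy as np

# Hidden "tactics" and observable search actions (hypothetical labels).
states = ["explore", "exploit"]
actions = ["query", "click", "refine"]

pi = np.array([0.6, 0.4])           # initial tactic distribution
A = np.array([[0.7, 0.3],           # tactic transition probabilities
              [0.2, 0.8]])
B = np.array([[0.5, 0.2, 0.3],      # P(action | tactic)
              [0.1, 0.7, 0.2]])

def forward_loglik(obs):
    """Log-likelihood of an action sequence via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    logp = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()                 # rescale to avoid underflow
        logp += np.log(s)
        alpha = alpha / s
    return logp

seq = [0, 1, 1, 2, 0]                   # query, click, click, refine, query
print(forward_loglik(seq))
```

In practice the transition and emission matrices would be fitted (e.g., by Baum-Welch) to logged search sessions, and the decoded hidden states inspected for their tactic semantics.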
|
1304.1926 | Distributed Space-Time Coding Based on Adjustable Code Matrices for
Cooperative MIMO Relaying Systems | cs.IT math.IT | An adaptive distributed space-time coding (DSTC) scheme is proposed for
two-hop cooperative MIMO networks. Linear minimum mean square error (MMSE)
receive filters and adjustable code matrices are considered subject to a power
constraint with an amplify-and-forward (AF) cooperation strategy. In the
proposed adaptive DSTC scheme, an adjustable code matrix obtained by a feedback
channel is employed to transform the space-time coded matrix at the relay node.
The effects of the limited feedback and the feedback errors are assessed.
Linear MMSE expressions are devised to compute the parameters of the adjustable
code matrix and the linear receive filters. Stochastic gradient (SG) and
least-squares (LS) algorithms are also developed with reduced computational
complexity. An upper bound on the pairwise error probability analysis is
derived and indicates the advantage of employing the adjustable code matrices
at the relay nodes. An alternative optimization algorithm for the adaptive DSTC
scheme is also derived in order to eliminate the need for the feedback. The
algorithm provides a fully distributed scheme for the adaptive DSTC at the
relay node based on the minimization of the error probability. Simulation
results show that the proposed algorithms obtain significant performance gains
as compared to existing DSTC schemes.
|
1304.1930 | Client-Driven Content Extraction Associated with Table | cs.CV cs.IR | The goal of the project is to extract content within tables in document
images based on learnt patterns. Real-world users, i.e., clients, first
provide a set of key fields within the table which they think are important.
These are used to build a graph whose nodes are labelled with semantics and
other features and whose edges are attributed with relations. This attributed
relational graph (ARG) is then employed to mine similar graphs from a
document image. Each mined graph represents an item within the table, and
hence a set of such graphs composes a table. We have validated the concept on
a real-world industrial problem.
|
1304.1932 | Generalized Reduced-Rank Decompositions Using Switching and Adaptive
Algorithms for Space-Time Adaptive Processing | cs.IT math.IT | This work presents generalized low-rank signal decompositions with the aid of
switching techniques and adaptive algorithms, which do not require
eigen-decompositions, for space-time adaptive processing. A generalized scheme
is proposed to compute low-rank signal decompositions by imposing suitable
constraints on the filtering and by performing iterations between the computed
subspace and the low-rank filter. An alternating optimization strategy based on
recursive least squares algorithms is presented along with switching and
iterations to cost-effectively compute the bases of the decomposition and the
low-rank filter. An application to space-time interference suppression in
DS-CDMA systems is considered. Simulations show that the proposed scheme and
algorithms obtain significant gains in performance over previously reported
low-rank schemes.
|
1304.1935 | Interference Suppression and Group-Based Power Adjustment via
Alternating Optimization for DS-CDMA Networks with Multihop Relaying | cs.IT math.IT | This work presents joint interference suppression and power allocation
algorithms for DS-CDMA networks with multiple hops and decode-and-forward (DF)
protocols. A scheme for joint allocation of power levels across the relays
subject to group-based power constraints and the design of linear receivers for
interference suppression is proposed. A constrained minimum mean-squared error
(MMSE) design for the receive filters and the power allocation vectors is
devised along with an MMSE channel estimator. In order to solve the proposed
optimization efficiently, a method to form an effective group of users and an
alternating optimization strategy are devised with recursive alternating least
squares (RALS) algorithms for estimating the parameters of the receiver, the
power allocation and the channels. Simulations show that the proposed
algorithms obtain significant gains in capacity and performance over existing
schemes.
|
1304.1962 | Low Complexity MIMO Detection based on Belief Propagation over Pair-wise
Graphs | cs.IT math.IT | This paper considers belief propagation algorithm over pair-wise graphical
models to develop low complexity, iterative multiple-input multiple-output
(MIMO) detectors. The pair-wise graphical model is a bipartite graph where a
pair of variable nodes are related by an observation node represented by the
bivariate Gaussian function obtained by marginalizing the posterior joint
probability density under the Gaussian input assumption. Specifically, we
consider two types of pair-wise models, the fully connected and ring-type. The
pair-wise graphs are sparse, compared to the conventional graphical model in
[18], insofar as the number of edges connected to an observation node (edge
degree) is only two. Consequently the computations are much easier than those
of maximum likelihood (ML) detection, which are similar to the belief
propagation (BP) that is run over the fully connected bipartite graph. The link
level performance for non-Gaussian input is evaluated via simulations, and the
results show the validity of the proposed algorithms. We also customize the
algorithm with Gaussian input assumption to obtain the Gaussian BP run over the
two pair-wise graphical models and, for the ring-type, we prove its convergence
in mean to the linear minimum mean square error (MMSE) estimates. Since the
maximum a posteriori (MAP) estimator for Gaussian input is equivalent to the
linear MMSE estimator, this shows the optimality, in mean, of the scheme for
Gaussian input.
|
1304.1969 | One-Bit Quantization Design and Adaptive Methods for Compressed Sensing | cs.IT math.IT | There have been a number of studies on sparse signal recovery from one-bit
quantized measurements. Nevertheless, little attention has been paid to the
choice of the quantization thresholds and its impact on the signal recovery
performance. This paper examines the problem of one-bit quantizer design for
sparse signal recovery. Our analysis shows that the magnitude ambiguity that
plagues conventional one-bit compressed sensing methods can be resolved, and
an arbitrarily small reconstruction error can be achieved by setting the
quantization thresholds close enough to the original, unquantized data
samples. Since unquantized data samples are inaccessible in practice, we
propose an adaptive quantization method that adjusts the quantization
thresholds so that they converge to the optimal thresholds. Numerical results
are presented to corroborate our theoretical results and demonstrate the
effectiveness of the proposed algorithm.
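A minimal numerical illustration of the threshold-proximity claim above (this is not the paper's adaptive algorithm; the problem sizes and the least-squares surrogate are assumptions): when each threshold lies within delta of the corresponding unquantized sample, the thresholds themselves act as accurate surrogate measurements, and the recovery error shrinks with delta.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.normal(size=(m, n))
x = rng.normal(size=n)
samples = A @ x                                  # unquantized samples a_i^T x

for delta in [1.0, 0.1, 0.01]:
    tau = samples + delta * rng.normal(size=m)   # thresholds near the samples
    y = np.sign(samples - tau)                   # the actual one-bit data
    # Surrogate recovery: treat tau_i as if it were a_i^T x (the signs y,
    # which drive the adaptation in the real method, are unused here).
    x_hat = np.linalg.lstsq(A, tau, rcond=None)[0]
    print(delta, np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```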
|
1304.1972 | Facial transformations of ancient portraits: the face of Caesar | cs.CV | Software tools for facial transformation can help investigate the artistic
metamorphosis across ancient portraits of the same person. An analysis of
portraitures of Julius Caesar with freely available software is proposed,
showing several of his "morphs". The software helps enhance the mood the
artist added to a portrait.
|
1304.1978 | Constructing Low Star Discrepancy Point Sets with Genetic Algorithms | cs.NE cs.NA | Geometric discrepancies are standard measures to quantify the irregularity of
distributions. They are an important notion in numerical integration. One of
the most important discrepancy notions is the so-called \emph{star
discrepancy}. Roughly speaking, a point set of low star discrepancy value
allows for a small approximation error in quasi-Monte Carlo integration. It is
thus the most studied discrepancy notion.
In this work we present a new algorithm to compute point sets of low star
discrepancy. The two components of the algorithm (for the optimization and the
evaluation, respectively) are based on evolutionary principles. Our algorithm
clearly outperforms existing approaches. To the best of our knowledge, it is
also the first algorithm which can be adapted easily to optimize inverse star
discrepancies.
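A toy sketch in the spirit of this approach, under stated assumptions: the star discrepancy is estimated by Monte Carlo sampling of anchored boxes (a noisy lower estimate, not the exact evaluation component of the paper), and the search is a simple mutate-and-select loop rather than the paper's genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def star_discrepancy_est(P, n_test=1000):
    """Monte Carlo lower estimate of the star discrepancy of P in [0,1]^d."""
    n, d = P.shape
    Y = rng.random((n_test, d))                       # random anchor boxes [0, y)
    counts = (P[None, :, :] < Y[:, None, :]).all(axis=2).sum(axis=1)
    return np.abs(counts / n - Y.prod(axis=1)).max()  # |empirical mass - volume|

def evolve(n=16, d=2, children=10, gens=100, sigma=0.05):
    best = rng.random((n, d))
    best_f = star_discrepancy_est(best)
    for _ in range(gens):
        for _ in range(children):                     # mutate, keep improvements
            cand = np.clip(best + sigma * rng.normal(size=(n, d)), 0.0, 1.0)
            f = star_discrepancy_est(cand)
            if f < best_f:
                best, best_f = cand, f
    return best, best_f

P, f = evolve()
print(f"estimated star discrepancy: {f:.3f}")
```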
|
1304.1979 | Limited communication capacity unveils strategies for human interaction | physics.soc-ph cs.SI | Social connectivity is the key process that characterizes the structural
properties of social networks and in turn processes such as navigation,
influence or information diffusion. Since time, attention and cognition are
inelastic resources, humans should have a predefined strategy to manage their
social interactions over time. However, the limited observational length of
existing human interaction datasets, together with the bursty nature of dyadic
communications have hampered the observation of tie dynamics in social
networks. Here we develop a method for the detection of tie
activation/deactivation, and apply it to a large longitudinal, cross-sectional
communication dataset ($\approx$ 19 months, $\approx$ 20 million people).
Contrary to the perception of ever-growing connectivity, we observe that
individuals exhibit a finite communication capacity, which limits the number of
ties they can maintain active. In particular we find that men have an overall
higher communication capacity than women and that this capacity decreases
gradually for both sexes over the lifespan of individuals (16-70 years). We are
then able to separate communication capacity from communication activity,
revealing a diverse range of tie activation patterns, from stable to
exploratory. We find that, in simulation, individuals exhibiting exploratory
strategies take longer to receive information spreading in the network than
individuals with stable strategies. Our principled method to determine
the communication capacity of an individual allows us to quantify how
strategies for human interaction shape the dynamical evolution of social
networks.
|
1304.1984 | A Multi-Dimensional Block-Circulant Perfect Array Construction | cs.IT math.IT | We present an $N$-dimensional generalization of the two-dimensional
block-circulant perfect array construction by \cite{Blake2013}. As in
\cite{Blake2013}, the families of $N$-dimensional arrays possess pairwise
\textit{good} zero correlation zone (ZCZ) cross-correlation. Both constructions
use a perfect autocorrelation sequence with the array orthogonality property
(AOP).
|
1304.1995 | Image Retrieval using Histogram Factorization and Contextual Similarity
Learning | cs.CV cs.DB cs.LG | Image retrieval has long been a central topic in both computer vision and
machine learning. Content-based image retrieval, which tries to retrieve
images from a database that are visually similar to a query image, has
attracted much attention. The two most important issues in image retrieval
are the representation and the ranking of images. Recently, bag-of-words
based methods have shown their power as a representation method. Moreover,
nonnegative matrix factorization is also a popular way to represent data
samples. In addition, contextual similarity learning has been studied and
proven to be an effective method for the ranking problem. However, these
techniques have never been used together. In this paper, we develop an
effective image retrieval system by representing each image as a bag-of-words
histogram, then applying nonnegative matrix factorization to factorize the
histograms, and finally learning the ranking score using contextual
similarity learning. The proposed system is evaluated on a large-scale image
database and its effectiveness is shown.
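A hedged sketch of the middle stages of this pipeline on synthetic data: bag-of-words histograms are factorized with scikit-learn's NMF, and images are ranked by cosine similarity in the factor space; the cosine ranking is only a stand-in for the paper's contextual similarity learning step, and the data shapes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 500)).astype(float)  # 100 images x 500 visual words

nmf = NMF(n_components=20, init="nndsvda", max_iter=400, random_state=0)
W = nmf.fit_transform(X)        # per-image codes in the factor space
H = nmf.components_             # nonnegative basis over visual words

q = W[0]                        # use image 0 as the query
sims = (W @ q) / (np.linalg.norm(W, axis=1) * np.linalg.norm(q) + 1e-12)
print(np.argsort(-sims)[:5])    # top-5 ranked images (cosine similarity)
```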
|
1304.1998 | Convex conditions for robust stability analysis and stabilization of
linear aperiodic impulsive and sampled-data systems under dwell-time
constraints | math.OC cs.SY math.CA math.DS | Stability analysis and control of linear impulsive systems is addressed in a
hybrid framework, through the use of continuous-time time-varying discontinuous
Lyapunov functions. Necessary and sufficient conditions for stability of
impulsive systems with periodic impulses are first provided in order to set up
the main ideas. Extensions to stability of aperiodic systems under minimum,
maximum and ranged dwell-times are then derived. By exploiting further the
particular structure of the stability conditions, the results are
non-conservatively extended to quadratic stability analysis of linear uncertain
impulsive systems. These stability criteria are, in turn, losslessly extended
to stabilization using a particular, yet broad enough, class of state-feedback
controllers, providing then a convex solution to the open problem of robust
dwell-time stabilization of impulsive systems using hybrid stability criteria.
Relying finally on the representability of sampled-data systems as impulsive
systems, the problems of robust stability analysis and robust stabilization of
periodic and aperiodic uncertain sampled-data systems are straightforwardly
solved using the same ideas. Several examples are discussed in order to show
the effectiveness and reduced complexity of the proposed approach.
|
1304.2014 | Image Compression predicated on Recurrent Iterated Function Systems | math.DS cs.CV math.GT | Recurrent iterated function systems (RIFSs) are improvements of iterated
function systems (IFSs) using elements of the theory of Markovian stochastic
processes which can produce more natural looking images. We construct new RIFSs
consisting substantially of a vertical contraction factor function and
nonlinear transformations. These RIFSs are applied to image compression.
|
1304.2024 | A General Framework for Interacting Bayes-Optimally with Self-Interested
Agents using Arbitrary Parametric Model and Model Prior | cs.LG cs.AI cs.MA stat.ML | Recent advances in Bayesian reinforcement learning (BRL) have shown that
Bayes-optimality is theoretically achievable by modeling the environment's
latent dynamics using a Flat-Dirichlet-Multinomial (FDM) prior. In
self-interested multi-agent environments, the transition dynamics are mainly
controlled by the other agent's stochastic behavior for which FDM's
independence and modeling assumptions do not hold. As a result, FDM does not
allow the other agent's behavior to be generalized across different states nor
specified using prior domain knowledge. To overcome these practical limitations
of FDM, we propose a generalization of BRL to integrate the general class of
parametric models and model priors, thus allowing practitioners' domain
knowledge to be exploited to produce a fine-grained and compact representation
of the other agent's behavior. Empirical evaluation shows that our approach
outperforms existing multi-agent reinforcement learning algorithms.
|
1304.2031 | Temporal Analysis of Activity Patterns of Editors in Collaborative
Mapping Project of OpenStreetMap | cs.CY cs.HC cs.SI physics.data-an physics.soc-ph | In recent years wikis have become an attractive platform for social studies
of human behaviour. Containing millions of records of edits across the globe,
collaborative systems such as Wikipedia have allowed researchers to gain a
better understanding of editors' participation and their activity patterns.
However, contributions made to geo-wikis (wiki-based collaborative mapping
projects) differ from systems such as Wikipedia in a fundamental way, due to
the spatial dimension of the content, which limits the contributors to those
who possess local knowledge about a specific area; therefore cross-platform
studies and comparisons are required to build a comprehensive image of online
open collaboration phenomena. In this work, we study the temporal behavioural
patterns of OpenStreetMap editors, a successful example of a geo-wiki, for
two European capital cities. We categorise different types of temporal
patterns and report on the historical trend within the 7-year period of the
project's age. We also draw a comparison with the previously observed editing
activity patterns of Wikipedia.
|
1304.2058 | Stochastic Recovery Of Sparse Signals From Random Measurements | physics.data-an cs.IT math.IT | Sparse signal recovery from a small number of random measurements is a
well-known combinatorial optimization problem that is NP-hard to solve, with important
applications in signal and image processing. The standard approach to the
sparse signal recovery problem is based on the basis pursuit method. This
approach requires the solution of a large convex optimization problem, and
therefore suffers from high computational complexity. Here, we discuss a
stochastic optimization method, as a low-complexity alternative to the basis
pursuit approach.
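For reference, a minimal sketch of the convex baseline mentioned above: ISTA iterations for the $\ell_1$-regularized least-squares relaxation underlying basis pursuit (the paper's stochastic alternative is not reproduced here, and all problem sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 50, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)             # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                                       # noiseless measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L                  # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true))                    # small recovery error
```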
|
1304.2079 | Learning Coverage Functions and Private Release of Marginals | cs.LG cs.CC cs.DS | We study the problem of approximating and learning coverage functions. A
function $c: 2^{[n]} \rightarrow \mathbf{R}^{+}$ is a coverage function, if
there exists a universe $U$ with non-negative weights $w(u)$ for each $u \in U$
and subsets $A_1, A_2, \ldots, A_n$ of $U$ such that $c(S) = \sum_{u \in
\cup_{i \in S} A_i} w(u)$. Alternatively, coverage functions can be described
as non-negative linear combinations of monotone disjunctions. They are a
natural subclass of submodular functions and arise in a number of applications.
We give an algorithm that for any $\gamma,\delta>0$, given random and uniform
examples of an unknown coverage function $c$, finds a function $h$ that
approximates $c$ within factor $1+\gamma$ on all but $\delta$-fraction of the
points in time $poly(n,1/\gamma,1/\delta)$. This is the first fully-polynomial
algorithm for learning an interesting class of functions in the demanding PMAC
model of Balcan and Harvey (2011). Our algorithms are based on several new
structural properties of coverage functions. Using the results in (Feldman and
Kothari, 2014), we also show that coverage functions are learnable agnostically
with excess $\ell_1$-error $\epsilon$ over all product and symmetric
distributions in time $n^{\log(1/\epsilon)}$. In contrast, we show that,
without assumptions on the distribution, learning coverage functions is at
least as hard as learning polynomial-size disjoint DNF formulas, a class of
functions for which the best known algorithm runs in time
$2^{\tilde{O}(n^{1/3})}$ (Klivans and Servedio, 2004).
As an application of our learning results, we give simple
differentially-private algorithms for releasing monotone conjunction counting
queries with low average error. In particular, for any $k \leq n$, we obtain
private release of $k$-way marginals with average error $\bar{\alpha}$ in time
$n^{O(\log(1/\bar{\alpha}))}$.
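For concreteness, a direct instantiation of the definition above, on a toy universe with illustrative weights:

```python
def coverage(S, A, w):
    """c(S) = total weight of universe elements covered by the sets A_i, i in S."""
    covered = set().union(*(A[i] for i in S)) if S else set()
    return sum(w[u] for u in covered)

A = [{0, 1}, {1, 2}, {3}]               # subsets A_1..A_3 of universe {0,1,2,3}
w = {0: 1.0, 1: 2.0, 2: 0.5, 3: 4.0}    # non-negative weights
print(coverage({0, 1}, A, w))           # covers {0, 1, 2} -> 3.5
print(coverage({2}, A, w))              # covers {3}       -> 4.0
```

Monotonicity and submodularity follow directly: adding a set to $S$ can only cover more elements, with diminishing additional coverage.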
|
1304.2097 | Solving Linear Equations by Classical Jacobi-SR Based Hybrid
Evolutionary Algorithm with Uniform Adaptation Technique | cs.NE cs.NA | Solving a set of simultaneous linear equations is probably the most important
topic in numerical methods. For solving linear equations, iterative methods are
preferred over the direct methods especially when the coefficient matrix is
sparse. The rate of convergence of an iterative method can be increased by
using the Successive Relaxation (SR) technique, but the SR technique is very
sensitive to the relaxation factor $\omega$. Recently, hybridization of the
classical Gauss-Seidel based successive relaxation technique with
evolutionary computation techniques has successfully been used to solve large
sets of linear equations in which relaxation factors are self-adapted. In
this paper, a new hybrid algorithm is proposed in which uniform adaptive
evolutionary computation techniques and the classical Jacobi based SR
technique are used instead of the classical Gauss-Seidel based SR technique.
The proposed Jacobi-SR based uniform adaptive hybrid algorithm can inherently
be implemented efficiently in a parallel processing environment, whereas
Gauss-Seidel-SR based hybrid algorithms cannot. The convergence theorem and
adaptation theorem of the proposed algorithm are proved theoretically, and
the performance of the proposed Jacobi-SR based uniform adaptive hybrid
evolutionary algorithm is compared with the Gauss-Seidel-SR based uniform
adaptive hybrid evolutionary algorithm, as well as with both the classical
Jacobi-SR and Gauss-Seidel-SR methods, in the experimental domain. The
proposed Jacobi-SR based hybrid algorithm outperforms the Gauss-Seidel-SR
based hybrid algorithm and both classical methods in terms of convergence
speed and effectiveness.
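A minimal sketch of the classical Jacobi iteration with a fixed relaxation factor $\omega$ (the paper's contribution, self-adapting $\omega$ inside a hybrid evolutionary algorithm, is not reproduced here). Note that each component of the update uses only the previous iterate, which is what makes Jacobi-SR naturally parallel, unlike Gauss-Seidel:

```python
import numpy as np

def jacobi_sr(A, b, omega=0.9, iters=200):
    d = np.diag(A)                  # diagonal of A as a vector
    R = A - np.diag(d)              # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_jac = (b - R @ x) / d                  # plain Jacobi update
        x = (1.0 - omega) * x + omega * x_jac    # relaxed (SR) update
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])     # diagonally dominant, so iteration converges
b = np.array([1.0, 2.0, 3.0])
print(jacobi_sr(A, b))
print(np.linalg.solve(A, b))        # reference solution
```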
|
1304.2103 | High-Throughput Cooperative Communication with Interference Cancellation
for Two-Path Relay in Multi-source System | cs.IT math.IT | Relay-based cooperative communication has become a research focus in recent
years because it can achieve diversity gain in wireless networks. In existing
works, network coding and two-path relay are adopted to deal with the increase
of network size and the half-duplex nature of relay, respectively. To further
improve bandwidth efficiency, we propose a novel cooperative transmission
scheme which combines network coding and two-path relay together in a
multi-source system. Due to the utilization of two-path relay, our proposed
scheme achieves full-rate transmission. Adopting complex field network coding
(CFNC) at both sources and relays ensures that symbols from different sources
are allowed to be broadcast in the same time slot. We also adopt physical-layer
network coding (PNC) at relay nodes to deal with the inter-relay interference
caused by the two-path relay. With careful process design, the ideal throughput
of our scheme achieves 1 symbol per source per time slot (sym/S/TS).
Furthermore, the theoretical analysis provides a method to estimate the symbol
error probability (SEP) and throughput in additive complex white Gaussian noise
(AWGN) and Rayleigh fading channels. The simulation results verify the
improvement achieved by the proposed scheme.
|
1304.2109 | Automatic Fingerprint Recognition Using Minutiae Matching Technique for
the Large Fingerprint Database | cs.CV | Extracting minutiae from fingerprint images is one of the most important
steps in an automatic fingerprint identification system. Minutiae matching is
certainly the most well-known and widely used method for fingerprint
matching; minutiae are local discontinuities in the fingerprint pattern. In
this paper a fingerprint matching algorithm is proposed that uses specific
features of the minutiae points; the acquired fingerprint image is also
reduced in size by generating a corresponding fingerprint template for a
large fingerprint database. The results achieved are compared with those
obtained through other methods and show some improvement in the minutiae
detection process in terms of memory and time required.
|
1304.2126 | Braess like Paradox in a Small World Network | physics.soc-ph cs.SI | Braess \cite{1} studied traffic flow on a diamond-type network and found
that introducing new edges to a network does not always improve efficiency.
Some researchers have studied the Braess paradox in similar networks by
introducing various types of cost functions, but whether such a paradox
occurs in complex networks has scarcely been studied. In this article, I
analytically and numerically study whether a Braess-like paradox occurs on
the Dorogovtsev-Mendes network \cite{2}, which is a sort of small world
network. The cost of traversing an edge is postulated to equal the length
between its two nodes, independently of the amount of traffic on the edge. It
is also assumed that it takes a certain cost $c$ to pass through the center
node in the Dorogovtsev-Mendes network. If $c$ is small, then bypasses
function as short cuts. As a result of numerical and theoretical analyses, I
find that no Braess-like paradox occurs when the network size becomes
infinite, but I show that a paradoxical phenomenon appears at finite network
sizes.
|
1304.2132 | The Deformed Consensus Protocol: Extended Version | cs.SY | This paper studies a generalization of the standard continuous-time consensus
protocol, obtained by replacing the Laplacian matrix of the communication graph
with the so-called deformed Laplacian. The deformed Laplacian is a
second-degree matrix polynomial in the real variable 's' which reduces to the
standard Laplacian for 's' equal to unity. The stability properties of the
ensuing deformed consensus protocol are studied in terms of parameter 's' for
some special families of undirected and directed graphs, and for arbitrary
graph topologies by leveraging the spectral theory of quadratic eigenvalue
problems. Examples and simulation results are provided to illustrate our
theoretical findings.
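A sketch of the protocol under an explicit assumption: the abstract states only that the deformed Laplacian is a quadratic matrix polynomial in 's' that reduces to the standard Laplacian at s = 1, so the particular form L(s) = I - sA + s^2(D - I) used below is one polynomial with that property and may differ from the paper's.

```python
import numpy as np

def deformed_laplacian(Adj, s):
    """Quadratic matrix polynomial reducing to L = D - A at s = 1 (assumed form)."""
    D = np.diag(Adj.sum(axis=1))
    I = np.eye(len(Adj))
    return I - s * Adj + s**2 * (D - I)

# Path graph on 4 nodes; Euler discretization of dx/dt = -L(s) x.
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 3.0, 4.0])
Ls = deformed_laplacian(Adj, s=1.0)   # s = 1 recovers the standard protocol
for _ in range(2000):
    x = x - 0.01 * (Ls @ x)
print(x)                              # states approach the average 2.5
```

For other values of s, stability hinges on the spectrum of L(s), which is what the paper characterizes via the theory of quadratic eigenvalue problems.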
|
1304.2133 | Dynamic Amelioration of Resolution Mismatches for Local Feature Based
Identity Inference | cs.CV cs.IR | While existing face recognition systems based on local features are robust to
issues such as misalignment, they can exhibit accuracy degradation when
comparing images of differing resolutions. This is common in surveillance
environments where a gallery of high resolution mugshots is compared to low
resolution CCTV probe images, or where the size of a given image is not a
reliable indicator of the underlying resolution (e.g., poor optics). To alleviate
this degradation, we propose a compensation framework which dynamically chooses
the most appropriate face recognition system for a given pair of image
resolutions. This framework applies a novel resolution detection method which
does not rely on the size of the input images, but instead exploits the
sensitivity of local features to resolution using a probabilistic multi-region
histogram approach. Experiments on a resolution-modified version of the
"Labeled Faces in the Wild" dataset show that the proposed resolution detector
frontend obtains a 99% average accuracy in selecting the most appropriate face
recognition system, resulting in higher overall face discrimination accuracy
(across several resolutions) compared to the individual baseline face
recognition systems.
|
1304.2170 | On sampling SCJ rearrangement scenarios | cs.CE | The Single Cut or Join (SCJ) operation on genomes, generalizing chromosome
evolution by fusions and fissions, is the computationally simplest known model
of genome rearrangement. While most genome rearrangement problems are already
hard when comparing three genomes, it is possible to compute in polynomial time
a most parsimonious SCJ scenario for an arbitrary number of genomes related by
a binary phylogenetic tree.
Here we consider the problems of sampling and counting the most parsimonious
SCJ scenarios. We show that both the sampling and counting problems are easy
for two genomes, and we relate SCJ scenarios to alternating permutations.
However, for an arbitrary number of genomes related by a binary phylogenetic
tree, the counting and sampling problems become hard. We prove that if a Fully
Polynomial Randomized Approximation Scheme or a Fully Polynomial Almost Uniform
Sampler exists for the most parsimonious SCJ scenario, then RP = NP.
The proof has a wider scope than genome rearrangements: the same result holds
for parsimonious evolutionary scenarios on any set of discrete characters.
|
1304.2172 | Binary Hypothesis Testing Game with Training Data | cs.IT cs.GT math.IT | We introduce a game-theoretic framework to study the hypothesis testing
problem, in the presence of an adversary aiming at preventing a correct
decision. Specifically, the paper considers a scenario in which an analyst has
to decide whether a test sequence has been drawn according to a probability
mass function (pmf) P_X or not. In turn, the goal of the adversary is to take a
sequence generated according to a different pmf and modify it in such a way as
to induce a decision error. P_X is known only through one or more training
sequences. We derive the asymptotic equilibrium of the game under the
assumption that the analyst relies only on first order statistics of the test
sequence, and compute the asymptotic payoff of the game when the length of the
test sequence tends to infinity. We introduce the concept of
indistinguishability region, as the set of pmf's that can not be distinguished
reliably from P_X in the presence of attacks. Two different scenarios are
considered: in the first one the analyst and the adversary share the same
training sequence, in the second scenario, they rely on independent sequences.
The obtained results are compared to a version of the game in which the pmf P_X
is perfectly known to the analyst and the adversary.
|
1304.2184 | Object-Oriented Translation for Programmable Relational System (DRAFT) | cs.DB | The paper introduces the principles of object-oriented translation for a
target machine which executes sequences of elementary operations on
persistent data presented as a set of relations (a programmable relational
system). The language of this target machine is based on the formal
operations of the relational data model. An approach is given to convert both
the description of complex object-oriented data structures and the operations
on these data into a description of relational structures and operations on
them. The proposed approach makes it possible to extend the target relational
language with commands allowing data to be described as a set of complex
persistent objects of different classes. Object views are introduced which
allow relational operations to be applied to the data of complex objects. It
is shown that any operation and method can be executed on any group of
objects without explicit or implicit iterators. The binding of both
attributes and methods to their polymorphic implementations is discussed.
Classes can be co-used with relations as scalar domains, in referential
integrity constraints, and in data query operations.
|
1304.2222 | Sequential Randomized Algorithms for Convex Optimization in the Presence
of Uncertainty | cs.SY math.OC | In this paper, we propose new sequential randomized algorithms for convex
optimization problems in the presence of uncertainty. A rigorous analysis of
the theoretical properties of the solutions obtained by these algorithms, for
full constraint satisfaction and partial constraint satisfaction, respectively,
is given. The proposed methods enlarge the applicability of existing
randomized methods to real-world applications involving a large number of
design variables. Since the proposed approach does not provide a priori
bounds on the sample complexity, extensive numerical simulations, dealing
with an application to hard-disk drive servo design, are provided. These
simulations testify to the effectiveness of the proposed solution.
|
1304.2233 | Sensors and Navigation Algorithms for Flight Control of Tethered Kites | cs.SY | We present the sensor setup and the basic navigation algorithm used for the
flight control of the SkySails towing kite system. Starting with brief
summaries on system setup and equations of motion of the tethered kite system,
we subsequently give an overview of the sensor setup, present the navigation
task and discuss challenges which have to be mastered. In the second part we
introduce in detail the inertial navigation algorithm which has been used for
operational flights for years. The functional capability of this algorithm is
illustrated by experimental flight data. Finally, we suggest a modification of
the algorithm as a further development step in order to overcome certain
limitations.
|
1304.2234 | Large deviations of the interference in the Ginibre network model | cs.IT cs.NI math.IT | Under different assumptions on the distribution of the fading random
variables, we derive large deviation estimates for the tail of the interference
in a wireless network model whose nodes are placed, over a bounded region of
the plane, according to the $\beta$-Ginibre process, $0<\beta\leq 1$. The
family of $\beta$-Ginibre processes is formed by determinantal point processes,
with different degree of repulsiveness, which converge in law to a homogeneous
Poisson process, as $\beta \to 0$. In this sense the Poisson network model may
be considered as the limiting uncorrelated case of the $\beta$-Ginibre network
model. Our results indicate the existence of two different regimes. When the
fading random variables are bounded or Weibull superexponential, large values
of the interference are typically originated by the sum of several equivalent
interfering contributions due to nodes in the vicinity of the receiver.
In this case, the tail of the interference has, on the log-scale, the same
asymptotic behavior for any value of $0<\beta\le 1$, but it differs (again on a
log-scale) from the asymptotic behavior of the tail of the interference in the
Poisson network model.
When the fading random variables are exponential or subexponential, instead,
large values of the interference are typically originated by a single
dominating interferer node and, on the log-scale, the asymptotic behavior of
the tail of the interference is essentially insensitive to the distribution of
the nodes. As a consequence, on the log-scale, the asymptotic behavior of the
tail of the interference in any $\beta$-Ginibre network model, $0<\beta\le 1$,
is the same as in the Poisson network model.
|
1304.2263 | Linear codes on posets with extension property | cs.IT math.IT | We investigate linear and additive codes in partially ordered Hamming-like
spaces that satisfy the extension property, meaning that automorphisms of
ideals extend to automorphisms of the poset. The codes are naturally described
in terms of translation association schemes that originate from the groups of
linear isometries of the space. We address questions of duality and invariants
of codes, establishing a connection between the dual association scheme and the
scheme defined on the dual poset (they are isomorphic if and only if the poset
is self-dual). We further discuss invariants that play the role of weight
enumerators of codes in the poset case. In the case of regular rooted trees
such invariants are linked to the classical problem of tree isomorphism. We
also study the question of whether these invariants are preserved under
standard operations on posets such as the ordinal sum and the like.
|
1304.2266 | Synaptic Scaling Balances Learning in a Spiking Model of Neocortex | q-bio.NC cs.NE | Learning in the brain requires complementary mechanisms: potentiation and
activity-dependent homeostatic scaling. We introduce synaptic scaling to a
biologically-realistic spiking model of neocortex which can learn changes in
oscillatory rhythms using STDP, and show that scaling is necessary to balance
both positive and negative changes in input from potentiation and atrophy. We
discuss some of the issues that arise when considering synaptic scaling in such
a model, and show that scaling regulates activity whilst allowing learning to
remain unaltered.
|
1304.2268 | Gossips and Prejudices: Ergodic Randomized Dynamics in Social Networks | cs.SY cs.SI math.OC | In this paper we study a novel model of opinion dynamics in social networks,
which has two main features. First, agents asynchronously interact in pairs,
and these pairs are chosen according to a random process. We refer to this
communication model as "gossiping". Second, agents are not completely
open-minded, but instead take into account their initial opinions, which may be
thought of as their "prejudices". In the literature, such agents are often
called "stubborn". We show that the opinions of the agents fail to converge,
but persistently undergo ergodic oscillations, which asymptotically concentrate
around a mean distribution of opinions. This mean value is exactly the limit of
the synchronous dynamics of the expected opinions.
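A toy simulation in the spirit of this model, with illustrative mixing weights (not the paper's exact update rule): agents interact in random pairs, and each update is pulled partly back toward the agent's initial opinion, its "prejudice".

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 50, 0.3                       # h = weight kept on the prejudice
u = rng.random(n)                    # initial opinions (prejudices)
x = u.copy()
trace = []
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)   # random gossip pair
    avg = 0.5 * (x[i] + x[j])
    x[i] = (1 - h) * avg + h * u[i]
    x[j] = (1 - h) * avg + h * u[j]
    trace.append(x.mean())

# Individual opinions keep oscillating, but time averages settle, in line
# with the ergodic behavior described above.
print(np.mean(trace[-5000:]), np.std(trace[-5000:]))
```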
|
1304.2269 | On Number of Almost Blank Subframes in Heterogeneous Cellular Networks | cs.IT math.IT | In heterogeneous cellular scenarios with macrocells, femtocells or picocells
users may suffer from significant co-channel cross-tier interference. To manage
this interference 3GPP introduced almost blank subframe (ABSF), a subframe in
which the interferer tier is not allowed to transmit data. Vulnerable users
thus get a chance to be scheduled in ABSFs with reduced cross-tier
interference. We analyze downlink scenarios using stochastic geometry and
formulate a condition for the required number of ABSFs based on base station
placement statistics and user throughput requirement. The result is a
semi-analytical formula that serves as a good initial estimate and offers an
easy way to analyze the impact of network parameters. We show that while in
the macro/femto scenario the residual ABSF interference can be well managed,
in the macro/pico scenario it strongly affects the number of required ABSFs.
The effect of ABSFs is subsequently demonstrated via user throughput
simulations. Especially in the macro/pico scenario, we find that using ABSFs
is advantageous for the system, since victim users no longer suffer from poor
performance, at the price of a relatively small drop in higher throughput
percentiles.
|
1304.2272 | Algorithms for Large-scale Whole Genome Association Analysis | cs.CE cs.MS q-bio.GN | In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms come at the cost of hundreds
of Gigabytes of genotype data, which can only be kept in secondary storage; 2)
the relatedness of the test population is represented by a covariance matrix,
which, for large populations, can only fit in the combined main memory of a
distributed architecture. In this paper, we present solutions for both
challenges: The genotype data is streamed from and to secondary storage using a
double buffering technique, while the covariance matrix is kept across the main
memory of a distributed memory system. We show that these methods sustain
high performance and allow the analysis of enormous datasets.
|
1304.2300 | Incremental Computation of Pseudo-Inverse of Laplacian: Theory and
Applications | cs.DM cs.SI | A divide-and-conquer based approach for computing the Moore-Penrose
pseudo-inverse of the combinatorial Laplacian matrix ($\mathbf{L}^+$) of a
simple, undirected graph is proposed. The nature of the underlying
sub-problems is studied in detail by means of an elegant interplay between
$\mathbf{L}^+$ and the effective resistance distance ($\Omega$). Closed forms
are provided for a novel {\em two-stage} process that helps compute the
pseudo-inverse incrementally.
Analogous scalar forms are obtained for the converse case, that of structural
regress, which entails the breaking up of a graph into disjoint components
through successive edge deletions. The scalar forms in both cases, show
absolute element-wise independence at all stages, thus suggesting potential
parallelizability. Analytical and experimental results are presented for
dynamic (time-evolving) graphs as well as large graphs in general (representing
real-world networks). An order of magnitude reduction in computational time is
achieved for dynamic graphs; while in the general case, our approach performs
better in practice than the standard methods, even though the worst case
theoretical complexities may remain the same: an important contribution with
consequences to the study of online social networks.
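The interplay between $\mathbf{L}^+$ and $\Omega$ mentioned above rests on the standard identity $\Omega(i,j) = \mathbf{L}^+_{ii} + \mathbf{L}^+_{jj} - 2\mathbf{L}^+_{ij}$. A dense baseline, the full recomputation the incremental two-stage process is designed to avoid repeating, looks like this:

```python
import numpy as np

Adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj   # combinatorial Laplacian
Lp = np.linalg.pinv(L)               # Moore-Penrose pseudo-inverse (dense)

def eff_resistance(Lp, i, j):
    """Effective resistance distance computed from the pseudo-inverse."""
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

print(eff_resistance(Lp, 0, 3))      # resistance across the graph
print(eff_resistance(Lp, 2, 3))      # bridge edge: exactly 1.0
```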
|
1304.2302 | ClusterCluster: Parallel Markov Chain Monte Carlo for Dirichlet Process
Mixtures | stat.ML cs.DC cs.LG | The Dirichlet process (DP) is a fundamental mathematical tool for Bayesian
nonparametric modeling, and is widely used in tasks such as density estimation,
natural language processing, and time series modeling. Although MCMC inference
methods for the DP often provide a gold standard in terms of asymptotic accuracy,
they can be computationally expensive and are not obviously parallelizable. We
propose a reparameterization of the Dirichlet process that induces conditional
independencies between the atoms that form the random measure. This conditional
independence enables many of the Markov chain transition operators for DP
inference to be simulated in parallel across multiple cores. Applied to mixture
modeling, our approach enables the Dirichlet process to simultaneously learn
clusters that describe the data and superclusters that define the granularity
of parallelization. Unlike previous approaches, our technique does not require
alteration of the model and leaves the true posterior distribution invariant.
It also naturally lends itself to a distributed software implementation in
terms of Map-Reduce, which we test in cluster configurations of over 50
machines and 100 cores. We present experiments exploring the parallel
efficiency and convergence properties of our approach on both synthetic and
real-world data, including runs on 1MM data vectors in 256 dimensions.
|
1304.2310 | Embedding of Blink Frequency in Electrooculography Signal using
Difference Expansion based Reversible Watermarking Technique | cs.CV cs.IR | In the past few years, like other fields, rapid expansion of digitization and
globalization has influenced the medical field as well. To improve diagnostic
results, most reputed hospitals and diagnostic centres around the world have
started exchanging medical information. In this proposed
method, the calculated diagnostic parametric values of the original
Electrooculography (EOG) signal are embedded as a watermark by using Difference
Expansion (DE) algorithm based reversible watermarking technique. The extracted
watermark provides the required parametric values at the recipient end without
any post computation of the recovered EOG signal. By computing the parametric
values from the recovered signal, the integrity of the extracted watermark can
be validated. The time domain features of EOG signal are calculated for the
generation of watermark. In the current work, various features are studied and
two major features related to blink frequency are used to generate the
watermark. The high Signal to Noise Ratio (SNR) and Bit Error Rate (BER)
results support the robustness of the proposed method.
|
1304.2313 | On Differentially Private Filtering for Event Streams | cs.DB cs.CR cs.SY | Rigorous privacy mechanisms that can cope with dynamic data are required to
encourage a wider adoption of large-scale monitoring and decision systems
relying on end-user information. A promising approach to develop these
mechanisms is to specify quantitative privacy requirements at design time
rather than as an afterthought, and to rely on signal processing techniques to
achieve satisfying trade-offs between privacy and performance specifications.
This paper discusses, from the signal processing point of view, an event stream
analysis problem introduced in the database and cryptography literature. A
discrete-valued input signal describes the occurrence of events contributed by
end-users, and a system is supposed to provide some output signal based on this
information, while preserving the privacy of the participants. The notion of
privacy adopted here is that of event-level differential privacy, which
provides strong privacy guarantees and has important operational advantages.
Several mechanisms are described to provide differentially private output
signals while minimizing the impact on performance. These mechanisms
demonstrate the benefits of leveraging system theoretic techniques to provide
privacy guarantees for dynamic systems.
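A classic baseline for this setting (not the paper's optimized mechanisms): an event-level differentially private running counter that perturbs each 0/1 event with Laplace(1/eps) noise before accumulation. Changing one event moves a single increment by at most 1, so the released cumulative stream is eps-DP at the event level; the price is error growing like the square root of the stream length, which is what more refined mechanisms improve on.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
events = rng.integers(0, 2, size=1000)              # x_t in {0, 1}
noisy = events + rng.laplace(scale=1.0 / eps, size=events.size)
private_counts = np.cumsum(noisy)                   # released output signal

print(private_counts[-1], events.sum())             # noisy vs. true total
```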
|
1304.2331 | The PAV algorithm optimizes binary proper scoring rules | stat.AP cs.LG stat.ML | There has been much recent interest in application of the
pool-adjacent-violators (PAV) algorithm for the purpose of calibrating the
probabilistic outputs of automatic pattern recognition and machine learning
algorithms. Special cost functions, known as proper scoring rules, form natural
objective functions to judge the goodness of such calibration. We show that for
binary pattern classifiers, the non-parametric optimization of calibration,
subject to a monotonicity constraint, can be solved by PAV and that this
solution is optimal for all regular binary proper scoring rules. This extends
previous results which were limited to convex binary proper scoring rules. We
further show that this result holds not only for calibration of probabilities,
but also for calibration of log-likelihood-ratios, in which case optimality
holds independently of the prior probabilities of the pattern classes.
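A minimal pool-adjacent-violators sketch, the monotone least-squares fit at the heart of this result:

```python
import numpy as np

def pav(y):
    """Isotonic (non-decreasing) least-squares fit via pool-adjacent-violators."""
    blocks = []                                 # stack of (block mean, block size)
    for v in np.asarray(y, dtype=float):
        m, w = v, 1.0
        while blocks and blocks[-1][0] > m:     # merge adjacent violators
            pm, pw = blocks.pop()
            m = (pm * pw + m * w) / (pw + w)
            w += pw
        blocks.append((m, w))
    return np.concatenate([np.full(int(w), m) for m, w in blocks])

# For calibration, y would be the 0/1 labels ordered by classifier score.
print(pav([0.1, 0.4, 0.3, 0.2, 0.8, 0.6]))      # -> [0.1 0.3 0.3 0.3 0.7 0.7]
```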
|
1304.2333 | A primer on information theory, with applications to neuroscience | cs.IT math.IT q-bio.NC | Given the constant rise in quantity and quality of data obtained from neural
systems on many scales, ranging from the molecular to the systems level,
information-theoretic analyses have become increasingly necessary during the past
few decades in the neurosciences. Such analyses can provide deep insights into
the functionality of such systems, as well as a rigorous mathematical theory and
quantitative measures of information processing in both healthy and diseased
states of neural systems. This chapter will present a short introduction to the
fundamentals of information theory, especially suited for people having a less
firm background in mathematics and probability theory. To begin, the
fundamentals of probability theory such as the notion of probability,
probability distributions, and random variables will be reviewed. Then, the
concepts of information and entropy (in the sense of Shannon), mutual
information, and transfer entropy (sometimes also referred to as conditional
mutual information) will be outlined. As these quantities cannot be computed
exactly from measured data in practice, estimation techniques for
information-theoretic quantities will be presented. The chapter will conclude
with the applications of information theory in the field of neuroscience,
including questions of possible medical applications and a short review of
software packages that can be used for information-theoretic analyses of neural
data.
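As a small worked example of the quantities introduced (using naive plug-in estimates from empirical counts, which in real analyses need the bias corrections the chapter discusses):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=10000)
y = (x + rng.integers(0, 2, size=x.size)) % 4   # noisy copy of x

joint = np.zeros((4, 4))
np.add.at(joint, (x, y), 1)                     # empirical joint counts
joint /= joint.sum()
px, py = joint.sum(axis=1), joint.sum(axis=0)   # marginals

H_x, H_y, H_xy = entropy(px), entropy(py), entropy(joint.ravel())
print("H(X) =", H_x, " I(X;Y) =", H_x + H_y - H_xy)  # I = H(X)+H(Y)-H(X,Y)
```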
|
1304.2336 | One-shot lossy quantum data compression | quant-ph cs.IT math.IT | We provide a framework for one-shot quantum rate distortion coding, in which
the goal is to determine the minimum number of qubits required to compress
quantum information as a function of the probability that the distortion
incurred upon decompression exceeds some specified level. We obtain a one-shot
characterization of the minimum qubit compression size for an
entanglement-assisted quantum rate-distortion code in terms of the smooth
max-information, a quantity previously employed in the one-shot quantum reverse
Shannon theorem. Next, we show how this characterization converges to the known
expression for the entanglement-assisted quantum rate distortion function for
asymptotically many copies of a memoryless quantum information source. Finally,
we give a tight, finite blocklength characterization for the
entanglement-assisted minimum qubit compression size of a memoryless isotropic
qubit source subject to an average symbol-wise distortion constraint.
|
1304.2339 | The structure of Bayes nets for vision recognition | cs.AI | This paper is part of a study whose goal is to show the efficiency of using
Bayes networks to carry out model based vision calculations. [Binford et al.
1987] Recognition proceeds by drawing up a network model from the object's
geometric and functional description that predicts the appearance of an object.
Then this network is used to find the object within a photographic image. Many
existing and proposed techniques for vision recognition resemble the
uncertainty calculations of a Bayes net. In contrast, though, they lack a
derivation from first principles, and tend to rely on arbitrary parameters that
we hope to avoid by a network model. The connectedness of the network depends
on what independence considerations can be identified in the vision problem.
Greater independence leads to easier calculations, at the expense of the net's
expressiveness. Once this trade-off is made and the structure of the network is
determined, it should be possible to tailor a solution technique for it. This
paper explores the use of a network with multiply connected paths, drawing on
both techniques of belief networks [Pearl 86] and influence diagrams. We then
demonstrate how one formulation of a multiply connected network can be solved.
|
1304.2340 | Summary of A New Normative Theory of Probabilistic Logic | cs.AI | By probabilistic logic I mean a normative theory of belief that explains how
a body of evidence affects one's degree of belief in a possible hypothesis. A
new axiomatization of such a theory is presented which avoids a finite
additivity axiom, yet which retains many useful inference rules. Many of the
example models of this theory do not use numerical probabilities. Put another
way, this article gives sharper answers to two questions: 1. What kinds of
sets can be used as the range of a probability function? 2. Under what
conditions is the range set of a probability function isomorphic to the set
of real numbers in the interval $[0,1]$ with the usual arithmetical
operations?
|
1304.2341 | Probability Distributions Over Possible Worlds | cs.AI | In Probabilistic Logic Nilsson uses the device of a probability distribution
over a set of possible worlds to assign probabilities to the sentences of a
logical language. In his paper Nilsson concentrated on inference and associated
computational issues. This paper, on the other hand, examines the probabilistic
semantics in more detail, particularly for the case of first-order languages,
and attempts to explain some of the features and limitations of this form of
probability logic. It is pointed out that the device of assigning probabilities
to logical sentences has certain expressive limitations. In particular,
statistical assertions are not easily expressed by such a device. This leads to
certain difficulties with attempts to give probabilistic semantics to default
reasoning using probabilities assigned to logical sentences.
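A toy instance of Nilsson's device, for concreteness: a distribution over possible worlds (truth assignments to atoms) induces probabilities on sentences. Note that a statistical assertion such as "90% of birds fly" has no direct counterpart in this encoding, which is the expressive limitation discussed above.

```python
from itertools import product

atoms = ["rain", "wet"]
worlds = list(product([False, True], repeat=len(atoms)))

# An arbitrary probability distribution over the four possible worlds.
P = dict(zip(worlds, [0.3, 0.1, 0.1, 0.5]))

def prob(sentence):
    """P(sentence) = total mass of worlds in which the sentence holds."""
    return sum(p for w, p in P.items() if sentence(dict(zip(atoms, w))))

print(prob(lambda v: v["rain"]))                    # P(rain)        = 0.6
print(prob(lambda v: (not v["rain"]) or v["wet"]))  # P(rain -> wet) = 0.9
```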
|
1304.2342 | Hierarchical Evidence and Belief Functions | cs.AI | Dempster/Shafer (D/S) theory has been advocated as a way of representing
incompleteness of evidence in a system's knowledge base. Methods now exist for
propagating beliefs through chains of inference. This paper discusses how rules
with attached beliefs, a common representation for knowledge in automated
reasoning systems, can be transformed into the joint belief functions required
by propagation algorithms. A rule is taken as defining a conditional belief
function on the consequent given the antecedents. It is demonstrated by example
that different joint belief functions may be consistent with a given set of
rules. Moreover, different representations of the same rules may yield
different beliefs on the consequent hypotheses.
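A minimal sketch of Dempster's rule of combination, the operation underlying the propagation discussed above, on a two-element frame of discernment:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    out, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            out[inter] = out.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2                 # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in out.items()}

A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, AB: 0.4}    # evidence mostly supporting 'a', rest uncommitted
m2 = {B: 0.3, AB: 0.7}    # weaker evidence for 'b'
print(combine(m1, m2))    # {a}: ~0.512, {b}: ~0.146, {a,b}: ~0.341
```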
|
1304.2343 | Decision-Theoretic Control of Problem Solving: Principles and
Architecture | cs.AI | This paper presents an approach to the design of autonomous, real-time
systems operating in uncertain environments. We address issues of problem
solving and reflective control of reasoning under uncertainty in terms of two
fundamental elements: 1) a set of decision-theoretic models for selecting among
alternative problem-solving methods and 2) a general computational architecture
for resource-bounded problem solving. The decision-theoretic models provide a
set of principles for choosing among alternative problem-solving methods based
on their relative costs and benefits, where benefits are characterized in terms
of the value of information provided by the output of a reasoning activity. The
output may be an estimate of some uncertain quantity or a recommendation for
action. The computational architecture, called Schemer-II, provides for
interleaving of and communication among various problem-solving subsystems.
These subsystems provide alternative approaches to information gathering,
belief refinement, solution construction, and solution execution. In
particular, the architecture provides a mechanism for interrupting the
subsystems in response to critical events. We provide a decision theoretic
account for scheduling problem-solving elements and for critical-event-driven
interruption of activities in an architecture such as Schemer-II.
|
1304.2344 | Induction and Uncertainty Management Techniques Applied to Veterinary
Medical Diagnosis | cs.AI | This paper discusses a project undertaken between the Departments of
Computing Science, Statistics, and the College of Veterinary Medicine to design
a medical diagnostic system. On-line medical data has been collected in the
hospital database system for several years. A number of induction methods are
being used to extract knowledge from the data in an attempt to improve upon
simple diagnostic charts used by the clinicians. They also enhance the results
of classical statistical methods - finding many more significant variables. The
second part of the paper describes an essentially Bayesian method of evidence
combination using fuzzy events at an initial step. Results are presented and
comparisons are made with other methods.
|
1304.2345 | KNET: Integrating Hypermedia and Bayesian Modeling | cs.AI | KNET is a general-purpose shell for constructing expert systems based on
belief networks and decision networks. Such networks serve as graphical
representations for decision models, in which the knowledge engineer must
define clearly the alternatives, states, preferences, and relationships that
constitute a decision basis. KNET contains a knowledge-engineering core written
in Object Pascal and an interface that tightly integrates HyperCard, a
hypertext authoring tool for the Apple Macintosh computer, into a novel
expert-system architecture. Hypertext and hypermedia have become increasingly
important in the storage, management, and retrieval of information. In broad
terms, hypermedia deliver heterogeneous bits of information in dynamic,
extensively cross-referenced packages. The resulting KNET system features a
coherent probabilistic scheme for managing uncertainty, an object-oriented
graphics editor for drawing and manipulating decision networks, and HyperCard's
potential for quickly constructing flexible and friendly user interfaces. We
envision KNET as a useful prototyping tool for our ongoing research on a
variety of Bayesian reasoning problems, including tractable representation,
inference, and explanation.
|
1304.2346 | A Method for Using Belief Networks as Influence Diagrams | cs.AI | This paper demonstrates a method for using belief-network algorithms to solve
influence diagram problems. In particular, both exact and approximation
belief-network algorithms may be applied to solve influence-diagram problems.
More generally, knowing the relationship between belief-network and
influence-diagram problems may be useful in the design and development of more
efficient influence diagram algorithms.
|
1304.2347 | Process, Structure, and Modularity in Reasoning with Uncertainty | cs.AI | Computational mechanisms for uncertainty management must support interactive
and incremental problem formulation, inference, hypothesis testing, and
decision making. However, most current uncertainty inference systems
concentrate primarily on inference, and provide no support for the larger
issues. We present a computational approach to uncertainty management which
provides direct support for the dynamic, incremental aspect of this task, while
at the same time permitting direct representation of the structure of
evidential relationships. We also show that this approach responds
to the modularity concerns of Heckerman and Horvitz [Heck87]. This paper
emphasizes examples of the capabilities of this approach. Another paper
[D'Am89] details the representations and algorithms involved.
|
1304.2348 | Probabilistic Causal Reasoning | cs.AI | Predicting the future is an important component of decision making. In most
situations, however, there is not enough information to make accurate
predictions. In this paper, we develop a theory of causal reasoning for
predictive inference under uncertainty. We emphasize a common type of
prediction that involves reasoning about persistence: whether or not a
proposition once made true remains true at some later time. We provide a
decision procedure with a polynomial-time algorithm for determining the
probability of the possible consequences of a set of events and initial
conditions. The integration of simple probability theory with temporal
projection enables us to circumvent problems that nonmonotonic temporal
reasoning schemes have in dealing with persistence. The ideas in this paper
have been implemented in a prototype system that refines a database of causal
rules in the course of applying those rules to construct and carry out plans in
a manufacturing domain.
|
1304.2349 | Modeling uncertain and vague knowledge in possibility and evidence
theories | cs.AI | This paper advocates the usefulness of new theories of uncertainty for the
purpose of modeling some facets of uncertain knowledge, especially vagueness,
in AI. It can be viewed as a partial reply to Cheeseman's (among others)
defense of probability.
|
1304.2350 | A Temporal Logic for Uncertain Events and An Outline of A Possible
Implementation in An Extension of PROLOG | cs.AI | There is uncertainty associated with the occurrence of many events in real
life. In this paper we develop a temporal logic to deal with such uncertain
events and outline a possible implementation in an extension of PROLOG. Events
are represented as fuzzy sets with the membership function giving the
possibility of occurrence of the event in a given interval of time. The
developed temporal logic is simple but powerful. It can determine effectively
the various temporal relations between uncertain events or their combinations.
PROLOG provides a uniform substrate on which to effectively implement such a
temporal logic for uncertain events.
|
1304.2351 | Uncertainty Management for Fuzzy Decision Support Systems | cs.AI | A new approach for uncertainty management for fuzzy, rule based decision
support systems is proposed: The domain expert's knowledge is expressed by a
set of rules that frequently refer to vague and uncertain propositions. The
certainty of propositions is represented using intervals [a, b] expressing that
the proposition's probability is at least a and at most b. Methods and
techniques for computing the overall certainty of fuzzy compound propositions
that have been defined by using logical connectives 'and', 'or' and 'not' are
introduced. Different inference schemas for applying fuzzy rules by using modus
ponens are discussed. Different algorithms for combining evidence that has been
received from different rules for the same proposition are provided. The
relationship of the approach to other approaches is analyzed and its problems
of knowledge acquisition and knowledge representation are discussed in some
detail. The basic concepts of a rule-based programming language called PICASSO,
for which the approach is a theoretical foundation, are outlined.
|
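Since 1304.2351 above does not spell out its combination methods in the
abstract, the following minimal sketch shows one standard way to propagate
interval-valued certainties through 'and', 'or' and 'not': the classical
Frechet bounds, which assume nothing about dependence between propositions.
The propositions and numbers are invented for illustration.

```python
# Sketch of interval-valued certainty for compound propositions, in the
# spirit of 1304.2351 above. The paper's exact combination methods are not
# stated in the abstract; these are the classical Frechet bounds.

def p_not(iv):
    a, b = iv
    return (1.0 - b, 1.0 - a)                        # interval for 'not A'

def p_and(iv1, iv2):
    (a1, b1), (a2, b2) = iv1, iv2
    return (max(0.0, a1 + a2 - 1.0), min(b1, b2))    # bounds on P(A and B)

def p_or(iv1, iv2):
    (a1, b1), (a2, b2) = iv1, iv2
    return (max(a1, a2), min(1.0, b1 + b2))          # bounds on P(A or B)

# Example: P(hot) in [0.7, 1.0], P(oil_low) in [0.2, 0.5].
hot, oil_low = (0.7, 1.0), (0.2, 0.5)
print(p_and(hot, oil_low))   # (0.0, 0.5)
print(p_or(hot, oil_low))    # (0.7, 1.0)
print(p_not(hot))            # (0.0, ~0.3)
```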
1304.2352 | Probability as a Modal Operator | cs.AI | This paper argues for a modal view of probability. The syntax and semantics
of one particularly strong probability logic are discussed and some examples of
the use of the logic are provided. We show that it is both natural and useful
to think of probability as a modal operator. Contrary to popular belief in AI,
a probability ranging between 0 and 1 represents a continuum between
impossibility and necessity, not between simple falsity and truth. The present
work provides a clear semantics for quantification into the scope of the
probability operator and for higher-order probabilities. Probability logic is a
language for expressing both probabilistic and logical concepts.
|
1304.2353 | Truth Maintenance Under Uncertainty | cs.AI | This paper addresses the problem of resolving errors under uncertainty in a
rule-based system. A new approach has been developed that reformulates this
problem as a neural-network learning problem. The strength and the fundamental
limitations of this approach are explored and discussed. The main result is
that neural heuristics can be applied to solve some but not all problems in
rule-based systems.
|
1304.2354 | Bayesian Assessment of a Connectionist Model for Fault Detection | cs.AI | A previous paper [2] showed how to generate a linear discriminant network
(LDN) that computes likely faults for a noisy fault detection problem by using
a modification of the perceptron learning algorithm called the pocket
algorithm. Here we compare the performance of this connectionist model with
performance of the optimal Bayesian decision rule for the example that was
previously described. We find that for this particular problem the
connectionist model performs about 97% as well as the optimal Bayesian
procedure. We then define a more general class of noisy single-pattern boolean
(NSB) fault detection problems where each fault corresponds to a single
pattern of boolean instrument readings and instruments are independently
noisy. This is equivalent to specifying that instrument readings are
probabilistic but conditionally independent given any particular fault. We
prove:
1. The optimal Bayesian decision rule for every NSB fault detection problem is
representable by an LDN containing no intermediate nodes. (This slightly
extends a result first published by Minsky & Selfridge.)
2. Given an NSB fault detection problem, with arbitrarily high probability the
pocket algorithm will, after sufficient iterations, generate an LDN that
computes an optimal Bayesian decision rule for that problem.
In practice we find that a reasonable number of iterations of the pocket
algorithm produces a network with good, but not optimal, performance.
|
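The pocket algorithm mentioned in 1304.2354 above is small enough to sketch.
The version below is the common 'ratchet' variant (keep the weights with the
best training accuracy seen so far); the fault data are invented, and the
original paper's exact setup may differ.

```python
import random

def pocket(X, y, iters=2000, seed=0):
    """Sketch of the pocket algorithm referenced in 1304.2354 above: run
    perceptron updates, but keep in the 'pocket' the weight vector with the
    best training accuracy so far. X: lists of numbers; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * (len(X[0]) + 1)                      # bias plus weights

    def predict(w, x):
        s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        return 1 if s >= 0 else -1

    def correct(w):
        return sum(predict(w, x) == t for x, t in zip(X, y))

    best_w, best = list(w), correct(w)
    for _ in range(iters):
        x, t = rng.choice(list(zip(X, y)))
        if predict(w, x) != t:                       # classic perceptron step
            w[0] += t
            for i, xi in enumerate(x):
                w[i + 1] += t * xi
            if (c := correct(w)) > best:             # ratchet: better net wins
                best_w, best = list(w), c
    return best_w

# Toy single-pattern boolean instrument readings for two faults:
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 1, -1, -1]
print(pocket(X, y))
```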
1304.2355 | On the Logic of Causal Models | cs.AI | This paper explores the role of Directed Acyclic Graphs (DAGs) as a
representation of conditional independence relationships. We show that DAGs
offer polynomially sound and complete inference mechanisms for inferring
conditional independence relationships from a given causal set of such
relationships. As a consequence, d-separation, a graphical criterion for
identifying independencies in a DAG, is shown to uncover more valid
independencies than any other criterion. In addition, we employ the Armstrong
property of conditional independence to show that the dependence relationships
displayed by a DAG are inherently consistent, i.e. for every DAG D there exists
some probability distribution P that embodies all the conditional
independencies displayed in D and no others.
|
1304.2356 | The Optimality of Satisficing Solutions | cs.AI | This paper addresses a prevailing assumption in single-agent heuristic search
theory: that problem-solving algorithms should guarantee shortest-path
solutions, which are typically called optimal. Optimality implies a metric for
judging solution quality, where the optimal solution is the solution with the
highest quality. When path-length is the metric, we will distinguish such
solutions as p-optimal.
|
1304.2357 | An Empirical Comparison of Three Inference Methods | cs.AI | In this paper, an empirical evaluation of three inference methods for
uncertain reasoning is presented in the context of Pathfinder, a large expert
system for the diagnosis of lymph-node pathology. The inference procedures
evaluated are (1) Bayes' theorem, assuming evidence is conditionally
independent given each hypothesis; (2) odds-likelihood updating, assuming
evidence is conditionally independent given each hypothesis and given the
negation of each hypothesis; and (3) an inference method related to the
Dempster-Shafer theory of belief. Both expert-rating and decision-theoretic
metrics are used to compare the diagnostic accuracy of the inference methods.
|
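The first two inference schemes compared in 1304.2357 above can be made
concrete with a worked micro-example. The numbers below are made up
(Pathfinder's actual probabilities are not in the abstract):

```python
# Worked micro-example of inference methods (1) and (2) from 1304.2357 above.
# Two exhaustive hypotheses; two binary findings, both observed present.

priors = {"h1": 0.6, "h2": 0.4}
# P(finding_i present | h); findings assumed conditionally independent given h:
lik = {"h1": (0.8, 0.3), "h2": (0.2, 0.9)}

# (1) Simple Bayes: P(h | e1, e2) is proportional to P(h) * P(e1|h) * P(e2|h).
raw = {h: priors[h] * lik[h][0] * lik[h][1] for h in priors}
z = sum(raw.values())
bayes = {h: r / z for h, r in raw.items()}

# (2) Odds-likelihood updating for h1, which additionally assumes the findings
# are conditionally independent given NOT h1 (here, given h2):
odds = priors["h1"] / priors["h2"]
for e_h1, e_h2 in zip(lik["h1"], lik["h2"]):
    odds *= e_h1 / e_h2                        # multiply by the likelihood ratio
print(bayes["h1"], odds / (1 + odds))          # both 2/3: the schemes agree here
```

With exactly two exhaustive hypotheses the two schemes coincide, as the
printout shows; with more than two, assumption (2) is strictly stronger than
assumption (1) and the answers can diverge, which is part of what such an
evaluation probes.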
1304.2358 | Parallel Belief Revision | cs.AI | This paper describes a formal system of belief revision developed by Wolfgang
Spohn and shows that this system has a parallel implementation that can be
derived from an influence diagram in a manner similar to that in which Bayesian
networks are derived. The proof rests upon completeness results for an
axiomatization of the notion of conditional independence, with the Spohn system
being used as a semantics for the relation of conditional independence.
|
1304.2359 | Stochastic Sensitivity Analysis Using Fuzzy Influence Diagrams | cs.AI | The practice of stochastic sensitivity analysis described in the decision
analysis literature is a testimonial to the need for considering deviations
from precise point estimates of uncertainty. We propose the use of Bayesian
fuzzy probabilities within an influence diagram computational scheme for
performing sensitivity analysis during the solution of probabilistic inference
and decision problems. Unlike other parametric approaches, the proposed scheme
does not require re-solving the problem for the varying probability point
estimates. We claim that the solution to fuzzy influence diagrams provides as
much information as the classical point-estimate approach plus additional
interval information that is useful for stochastic sensitivity analysis. An
example based on diagnostic decision making in microcomputer assembly is used
to illustrate this idea.
|
1304.2360 | A Representation of Uncertainty to Aid Insight into Decision Models | cs.AI | Many real world models can be characterized as weak, meaning that there is
significant uncertainty in both the data input and inferences. This lack of
determinism makes it especially difficult for users of computer decision aids
to understand and have confidence in the models. This paper presents a
representation for uncertainty and utilities that serves as a framework for
graphical summary and computer-generated explanation of decision models. The
application described that tests the methodology is a computer decision aid
designed to enhance the clinician-patient consultation process for patients
with angina (chest pain due to lack of blood flow to the heart muscle). The
angina model is represented as a Bayesian decision network. Additionally, the
probabilities and utilities are treated as random variables with probability
distributions on their range of possible values. The initial distributions
represent information on all patients with anginal symptoms, and the approach
allows for rapid tailoring to more patient-specific distributions. This
framework provides a metric for judging the importance of each variable in the
model dynamically.
|
1304.2361 | Rational Nonmonotonic Reasoning | cs.AI | Nonmonotonic reasoning is a pattern of reasoning that allows an agent to make
and retract (tentative) conclusions from inconclusive evidence. This paper
gives a possible-worlds interpretation of the nonmonotonic reasoning problem
based on standard decision theory and the emerging probability logic. The
system's central principle is that a tentative conclusion is a decision to make
a bet, not an assertion of fact. The system is rational, and as sound as the
proof theory of its underlying probability logic.
|
1304.2362 | A Comparison of Decision Analysis and Expert Rules for Sequential
Diagnosis | cs.AI | There has long been debate about the relative merits of decision theoretic
methods and heuristic rule-based approaches for reasoning under uncertainty. We
report an experimental comparison of the performance of the two approaches to
troubleshooting, specifically to test selection for fault diagnosis. We use as
experimental testbed the problem of diagnosing motorcycle engines. The first
approach employs heuristic test selection rules obtained from expert mechanics.
We compare it with the optimal decision analytic algorithm for test selection
which employs estimated component failure probabilities and test costs. The
decision analytic algorithm was found to reduce the expected cost (i.e. time)
to arrive at a diagnosis by an average of 14% relative to the expert rules.
Sensitivity analysis shows the results are quite robust to inaccuracy in the
probability and cost estimates. This difference suggests some interesting
implications for knowledge acquisition.
|
1304.2363 | Multiple decision trees | cs.LG cs.AI stat.ML | This paper describes experiments, on two domains, to investigate the effect
of averaging over predictions of multiple decision trees, instead of using a
single tree. Other authors have pointed out theoretical and commonsense reasons
for preferring the multiple tree approach. Ideally, we would like to consider
predictions from all trees, weighted by their probability. However, there is a
vast number of different trees, and it is difficult to estimate the probability
of each tree. We sidestep the estimation problem by using a modified version of
the ID3 algorithm to build good trees, and average over only these trees. Our
results are encouraging. For each domain, we managed to produce a small number
of good trees. We find that it is best to average across sets of trees with
different structure; this usually gives better performance than any of the
constituent trees, including the ID3 tree.
|
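A minimal sketch of the averaging step described in 1304.2363 above. The paper
grows its candidate trees with a modified ID3; as a stand-in, the sketch below
obtains structurally different trees by randomizing scikit-learn's tree
builder, then averages their class-probability estimates:

```python
# Sketch of prediction averaging over multiple decision trees (1304.2363
# above). Not the paper's modified ID3: trees here differ because each split
# considers a random feature subset, which is an assumption for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
trees = [
    DecisionTreeClassifier(max_depth=3, max_features=2, random_state=s).fit(X, y)
    for s in range(7)                       # seven structurally different trees
]
# Unweighted average of the trees' class-probability estimates:
avg = np.mean([t.predict_proba(X) for t in trees], axis=0)
pred = avg.argmax(axis=1)                   # averaged-committee prediction
print((pred == y).mean())                   # training accuracy of the committee
```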
1304.2364 | Probabilistic Inference and Probabilistic Reasoning | cs.AI | Uncertainty enters into human reasoning and inference in at least two ways.
It is reasonable to suppose that there will be roles for these distinct uses of
uncertainty also in automated reasoning.
|
1304.2365 | Probabilistic and Non-Monotonic Inference | cs.AI | (1) I have enough evidence to render the sentence S probable. (1a) So,
relative to what I know, it is rational of me to believe S. (2) Now that I have
more evidence, S may no longer be probable. (2a) So now, relative to what I
know, it is not rational of me to believe S. These seem a perfectly ordinary,
common sense, pair of situations. Generally and vaguely, I take them to embody
what I shall call probabilistic inference. This form of inference is clearly
non-monotonic. Relatively few people have taken this form of inference, based
on high probability, to serve as a foundation for non-monotonic logic or for a
logic of defeasible inference. There are exceptions: Jane Nutter [16] thinks
that sometimes probability has something to do with non-monotonic reasoning.
Judea Pearl [17] has recently been exploring the possibility. There are any
number of people whom one might call probability enthusiasts who feel that
probability provides all the answers by itself, with no need of help from
logic. Cheeseman [1], Henrion [5] and others think it useful to look at a
distribution of probabilities over a whole algebra of statements, to update
that distribution in the light of new evidence, and to use the latest updated
distribution of probability over the algebra as a basis for planning and
decision making. A slightly weaker form of this approach is captured by Nilsson
[15], where one assumes certain probabilities for certain statements, and
infers the probabilities, or constraints on the probabilities of other
statements. None of this corresponds to what I call probabilistic inference. All
of the inference that is taking place, either in Bayesian updating, or in
probabilistic logic, is strictly deductive. Deductive inference, particularly
that concerned with the distribution of classical probabilities or chances, is
of great importance. But this is not to say that there is no important role for
what earlier logicians have called "ampliative" or "inductive" or "scientific"
inference, in which the conclusion goes beyond the premises, asserts more than
do the premises. This depends on what David Israel [6] has called "real rules
of inference". It is characteristic of any such logic or inference procedure
that it can go wrong: that statements accepted at one point may be rejected at
a later point. Research underlying the results reported here has been partially
supported by the Signals Warfare Center of the United States Army.
|
1304.2366 | Epistemological Relevance and Statistical Knowledge | cs.AI | For many years, at least since McCarthy and Hayes (1969), writers have
lamented, and attempted to compensate for, the alleged fact that we often do
not have adequate statistical knowledge for governing the uncertainty of
belief, for making uncertain inferences, and the like. It is hardly ever
spelled out what "adequate statistical knowledge" would be, if we had it, and
how adequate statistical knowledge could be used to control and regulate
epistemic uncertainty.
|
1304.2367 | Utility-Based Control for Computer Vision | cs.CV cs.AI cs.SY | Several key issues arise in implementing computer vision recognition of world
objects in terms of Bayesian networks. Computational efficiency is a driving
force. Perceptual networks are very deep, typically fifteen levels of
structure. Images are wide, e.g., an unspecified number of edges may appear
anywhere in an image 512 x 512 pixels or larger. For efficiency, we dynamically
instantiate hypotheses of observed objects. The network is not fixed, but is
created incrementally at runtime. Generation of hypotheses of world objects and
indexing of models for recognition are important, but they are not considered
here [4,11]. This work is aimed at near-term implementation with parallel
computation in a radar surveillance system, ADRIES [5, 15], and a system for
industrial part recognition, SUCCESSOR [2]. For many applications, vision must
be faster to be practical and so efficiently controlling the machine vision
process is critical. Perceptual operators may scan megapixels and may require
minutes of computation time. It is necessary to avoid unnecessary sensor
actions and computation. Parallel computation is available at several levels of
processor capability. The potential for parallel, distributed computation for
high-level vision means distributing non-homogeneous computations. This paper
addresses the problem of task control in machine vision systems based on
Bayesian probability models. We separate control and inference to extend the
previous work [3] to maximize utility instead of probability. Maximizing
utility allows adopting perceptual strategies for efficient information
gathering with sensors and analysis of sensor data. Results of controlling
machine vision via utility to recognize military situations are presented in
this paper. Future work extends this to industrial part recognition for
SUCCESSOR.
|
1304.2368 | Evidential Reasoning in a Network Usage Prediction Testbed | cs.AI | This paper reports on empirical work aimed at comparing evidential reasoning
techniques. While there is prima facie evidence for some conclusions, this is
work in progress; the present focus is methodology, with the goal that
subsequent results be meaningful. The domain is a network of UNIX cycle
servers, and the task is to predict properties of the state of the network from
partial descriptions of the state. Actual data from the network are taken and
used for blindfold testing in a betting game that allows abstention. The focal
technique has been Kyburg's method for reasoning with data of varying relevance
to a particular query, though the aim is to be able eventually to compare
various uncertainty calculi. The conclusions are not novel, but are
instructive. 1. All of the calculi performed better than human subjects, so
unbiased access to sample experience is apparently of value. 2. Performance
depends on metric: (a) when trials are repeated, net = gains - losses favors
methods that place many bets, if the probability of placing a correct bet is
sufficiently high; that is, it favors point-valued formalisms; (b) yield =
gains/(gains + losses) favors methods that bet only when sure to bet correctly;
that is, it favors interval-valued formalisms. 3. Among the calculi, there were
no clear winners or losers. Methods are identified for eliminating the bias of
the net as a performance criterion and for separating the calculi effectively:
in both cases by posting odds for the betting game in the appropriate way.
|
1304.2369 | Justifying the Principle of Interval Constraints | cs.AI | When knowledge is obtained from a database, it is only possible to deduce
confidence intervals for probability values. With confidence intervals
replacing point values, the results in the set covering model include interval
constraints for the probabilities of mutually exclusive and exhaustive
explanations. The Principle of Interval Constraints ranks these explanations by
determining the expected values of the probabilities based on distributions
determined from the interval constraints. This principle was developed using
the Classical Approach to probability. This paper justifies the Principle of
Interval Constraints with a more rigorous statement of the Classical Approach
and by defending the concept of probabilities of probabilities.
|
1304.2370 | Probabilistic Semantics and Defaults | cs.AI | There is much interest in providing probabilistic semantics for defaults but
most approaches seem to suffer from one of two problems: either they require
numbers, a problem defaults were intended to avoid, or they generate peculiar
side effects. Rather than provide semantics for defaults, we address the
problem defaults were intended to solve: that of reasoning under uncertainty
where numeric probability distributions are not available. We describe a
non-numeric formalism called an inference graph based on standard probability
theory, conditional independence and sentences of favouring, where a favours b,
written favours(a, b), iff p(a|b) > p(a). The formalism seems to handle the examples from
the nonmonotonic literature. Most importantly, the sentences of our system can
be verified by performing an appropriate experiment in the semantic domain.
|
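The single primitive of 1304.2370 above, favours(a, b), is defined as
p(a|b) > p(a), and the abstract says such sentences can be verified by an
experiment in the semantic domain. A minimal sketch of that check against
sampled records (the bird data are invented):

```python
# Sketch of verifying a favouring sentence (1304.2370 above) by experiment:
# estimate P(a | b) and P(a) from a sample and compare them.
def favours(data, a, b):
    """True iff P(a | b) > P(a), estimated from a list of dict records."""
    n = len(data)
    nb = sum(1 for r in data if b(r))
    nab = sum(1 for r in data if a(r) and b(r))
    na = sum(1 for r in data if a(r))
    return nb > 0 and nab / nb > na / n

# Invented sample: 90 flying birds, 10 non-flying penguins.
birds = [{"bird": True, "flies": f, "penguin": p}
         for f, p in [(True, False)] * 90 + [(False, True)] * 10]
print(favours(birds, lambda r: r["flies"], lambda r: r["bird"]))        # False
print(favours(birds, lambda r: not r["flies"], lambda r: r["penguin"])) # True
```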
1304.2371 | Decision Making with Linear Constraints on Probabilities | cs.AI | Techniques for decision making with knowledge of linear constraints on
condition probabilities are examined. These constraints arise naturally in many
situations: upper and lower condition probabilities are known; an ordering
among the probabilities is determined; marginal probabilities or bounds on such
probabilities are known, e.g., data are available in the form of a
probabilistic database (Cavallo and Pittarelli, 1987a); etc. Standard
situations of decision making under risk and uncertainty may also be
characterized by linear constraints. Each of these types of information may be
represented by a convex polyhedron of numerically determinate condition
probabilities. A uniform approach to decision making under risk, uncertainty,
and partial uncertainty based on a generalized version of a criterion of
Hurwicz is proposed. Methods for processing marginal probabilities to improve
decision making using any of the criteria discussed are presented.
|
1304.2372 | Maintenance in Probabilistic Knowledge-Based Systems | cs.AI | Recent developments using directed acyclical graphs (i.e., influence diagrams
and Bayesian networks) for knowledge representation have lessened the problems
of using probability in knowledge-based systems (KBS). Most current research
involves the efficient propagation of new evidence, but little has been done
concerning the maintenance of domain-specific knowledge, which includes the
probabilistic information about the problem domain. By making use of
conditional independencies represented in the graphs, however, probability
assessments are required only for certain variables when the knowledge base is
updated. The purpose of this study was to investigate, for those variables
which require probability assessments, ways to reduce the amount of new
knowledge required from the expert when updating probabilistic information in a
probabilistic knowledge-based system. Three special cases (ignored outcome,
split outcome, and assumed constraint outcome) were identified under which many
of the original probabilities (those already in the knowledge-base) do not need
to be reassessed when maintenance is required.
|
1304.2373 | A Linear Approximation Method for Probabilistic Inference | cs.AI | An approximation method is presented for probabilistic inference with
continuous random variables. These problems can arise in many practical
problems, in particular where there are "second order" probabilities. The
approximation, based on the Gaussian influence diagram, iterates over linear
approximations to the inference problem.
|
1304.2374 | An Axiomatic Framework for Bayesian and Belief-function Propagation | cs.AI | In this paper, we describe an abstract framework and axioms under which exact
local computation of marginals is possible. The primitive objects of the
framework are variables and valuations. The primitive operators of the
framework are combination and marginalization. These operate on valuations. We
state three axioms for these operators and we derive the possibility of local
computation from the axioms. Next, we describe a propagation scheme for
computing marginals of a valuation when we have a factorization of the
valuation on a hypertree. Finally we show how the problem of computing
marginals of joint probability distributions and joint belief functions fits
the general framework.
|
1304.2375 | A General Non-Probabilistic Theory of Inductive Reasoning | cs.AI | Probability theory, epistemically interpreted, provides an excellent, if not
the best available account of inductive reasoning. This is so because there are
general and definite rules for the change of subjective probabilities through
information or experience; induction and belief change are one and same topic,
after all. The most basic of these rules is simply to conditionalize with
respect to the information received; and there are similar and more general
rules. Hence, a fundamental reason for the epistemological success of
probability theory is that there at all exists a well-behaved concept of
conditional probability. Still, people have, and have reasons for, various
concerns over probability theory. One of these is my starting point:
Intuitively, we have the notion of plain belief; we believe propositions to be
true (or to be false or neither). Probability theory, however, offers no formal
counterpart to this notion. Believing A is not the same as having probability 1
for A, because probability 1 is incorrigible; but plain belief is clearly
corrigible. And believing A is not the same as giving A a probability larger
than some 1 - c, because believing A and believing B is usually taken to be
equivalent to believing A & B. Thus, it seems that the formal representation
of plain belief has to take a non-probabilistic route. Indeed, representing
plain belief seems easy enough: simply represent an epistemic state by the set
of all propositions believed true in it or, since I make the common assumption
that plain belief is deductively closed, by the conjunction of all propositions
believed true in it. But this does not yet provide a theory of induction, i.e.
an answer to the question how epistemic states so represented are changed
through information or experience. There is a convincing partial answer: if the
new information is compatible with the old epistemic state, then the new
epistemic state is simply represented by the conjunction of the new information
and the old beliefs. This answer is partial because it does not cover the quite
common case where the new information is incompatible with the old beliefs. It
is, however, important to complete the answer and to cover this case, too;
otherwise, we would not represent plain belief as corrigible. The crucial
problem is that there is no good completion. When epistemic states are
represented simply by the conjunction of all propositions believed true in it,
the answer cannot be completed; and though there is a lot of fruitful work, no
other representation of epistemic states has been proposed, as far as I know,
which provides a complete solution to this problem. In this paper, I want to
suggest such a solution. In [4], I have more fully argued that this is the only
solution, if certain plausible desiderata are to be satisfied. Here, in section
2, I will be content with formally defining and intuitively explaining my
proposal. I will compare my proposal with probability theory in section 3. It
will turn out that the theory I am proposing is structurally homomorphic to
probability theory in important respects and that it is thus equally easily
implementable, but moreover computationally simpler. Section 4 contains a very
brief comparison with various kinds of logics, in particular conditional logic,
with Shackle's functions of potential surprise and related theories, and with
the Dempster-Shafer theory of belief functions.
|
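The abstract of 1304.2375 above leaves the formal proposal to its section 2;
the construction it argues for is what is now known as a ranking function
(ordinal conditional function), developed more fully in its reference [4].
The sketch below shows the two properties the abstract emphasizes, plain
belief and corrigibility under incompatible evidence, with an invented
three-world example; the encoding details are mine.

```python
# Sketch of a Spohn-style ranking function (cf. 1304.2375 above): grades of
# disbelief over worlds, with min playing the role probability gives to sums.

def rank(kappa, prop):
    """Disbelief grade of a proposition (set of worlds): min over its worlds."""
    return min(kappa[w] for w in prop)

def believed(kappa, prop):
    """A is plainly believed iff not-A is disbelieved to a positive degree."""
    return rank(kappa, set(kappa) - prop) > 0

def conditionalize(kappa, evidence, firmness=1):
    """Shift the evidence part to rank 0 and its complement to 'firmness':
    the ranking analogue of conditionalization. Defined even when the
    evidence contradicts current belief, so plain belief stays corrigible."""
    worlds = set(kappa)
    r_e = rank(kappa, evidence)
    r_ne = rank(kappa, worlds - evidence)
    return {w: kappa[w] - r_e if w in evidence
               else kappa[w] - r_ne + firmness
            for w in worlds}

kappa = {"w_normal": 0, "w_odd": 1, "w_weird": 3}
print(believed(kappa, {"w_normal"}))                   # True: plain belief
kappa2 = conditionalize(kappa, {"w_odd", "w_weird"}, firmness=2)
print(believed(kappa2, {"w_odd"}))                     # True: belief revised
```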
1304.2376 | Generating Decision Structures and Causal Explanations for Decision
Making | cs.AI | This paper examines two related problems that are central to developing an
autonomous decision-making agent, such as a robot. Both problems require
generating structured representations from a database of unstructured
declarative knowledge that includes many facts and rules that are irrelevant in
the problem context. The first problem is how to generate a well structured
decision problem from such a database. The second problem is how to generate,
from the same database, a well-structured explanation of why some possible
world occurred. In this paper it is shown that the problem of generating the
appropriate decision structure or explanation is intractable without
introducing further constraints on the knowledge in the database. The paper
proposes that the problem search space can be constrained by adding knowledge
to the database about causal relations between events. In order to determine
the causal knowledge that would be most useful, causal theories for
deterministic and indeterministic universes are proposed. A program that uses
some of these causal constraints has been used to generate explanations about
faulty plans. The program shows the expected increase in efficiency as the
causal constraints are introduced.
|
1304.2377 | Updating Probabilities in Multiply-Connected Belief Networks | cs.AI | This paper focuses on probability updates in multiply-connected belief
networks. Pearl has designed the method of conditioning, which enables us to
apply his algorithm for belief updates in singly-connected networks to
multiply-connected belief networks by selecting a loop-cutset for the network
and instantiating these loop-cutset nodes. We discuss conditions that need to
be satisfied by the selected nodes. We present a heuristic algorithm for
finding a loop-cutset that satisfies these conditions.
|
1304.2378 | Handling uncertainty in a system for text-symbol context analysis | cs.AI | In pattern analysis, information regarding an object can often be drawn from
its surroundings. This paper presents a method for handling uncertainty when
using context of symbols and texts for analyzing technical drawings. The method
is based on Dempster-Shafer theory and possibility theory.
|
1304.2379 | Causal Networks: Semantics and Expressiveness | cs.AI | Dependency knowledge of the form "x is independent of y once z is known"
invariably obeys the four graphoid axioms; examples include probabilistic and
database dependencies. Often, such knowledge can be represented efficiently
with graphical structures such as undirected graphs and directed acyclic graphs
(DAGs). In this paper we show that the graphical criterion called d-separation
is a sound rule for reading independencies from any DAG based on a causal input
list drawn from a graphoid. The rule may be extended to cover DAGs that
represent functional dependencies as well as conditional dependencies.
|
1304.2380 | MCE Reasoning in Recursive Causal Networks | cs.AI | A probabilistic method of reasoning under uncertainty is proposed based on
the principle of Minimum Cross Entropy (MCE) and concept of Recursive Causal
Model (RCM). The dependency and correlations among the variables are described
in a special language BNDL (Belief Networks Description Language). Beliefs are
propagated among the clauses of the BNDL programs representing the underlying
probabilistic distributions. BNDL interpreters in both Prolog and C have been
developed, and the performance of the method is compared with that of other
methods.
|
1304.2381 | Nonmonotonic Reasoning via Possibility Theory | cs.AI | We introduce the operation of possibility qualification and show how this
modal-like operator can be used to represent "typical" or default knowledge in
a theory of nonmonotonic reasoning. We investigate the representational power
of this approach by looking at a number of prototypical problems from the
nonmonotonic reasoning literature. In particular we look at the so-called Yale
shooting problem and its relation to priority in default reasoning.
|
1304.2382 | Predicting the Likely Behaviors of Continuous Nonlinear Systems in
Equilibrium | cs.SY cs.AI | This paper introduces a method for predicting the likely behaviors of
continuous nonlinear systems in equilibrium in which the input values can vary.
The method uses a parameterized equation model and a lower bound on the input
joint density to bound the likelihood that some behavior will occur, such as a
state variable being inside a given numeric range. Using a bound on the density
instead of the density itself is desirable because often the input density's
parameters and shape are not exactly known. The new method is called SAB after
its basic operations: split the input value space into smaller regions, and
then bound those regions' possible behaviors and the probability of being in
them. SAB finds rough bounds at first, and then refines them as more time is
given. In contrast to other researchers' methods, SAB (1) finds all the
possible system behaviors and indicates how likely they are, (2) does not
approximate the distribution of possible outcomes without some measure of the
error magnitude, (3) does not use discretized variable values, which limit the
events one can find probability bounds for, (4) can handle density bounds, and
(5) can handle such criteria as two state variables both being inside a numeric
range.
|
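The basic SAB loop described in 1304.2382 above can be sketched in a few
lines. The simplifications below are mine: one input on [0, 1], a monotone
equilibrium map, a constant lower bound on the input density, and no
refinement pass; the real method handles multivariate systems and tightens
its bounds as more time is given.

```python
# Sketch of SAB's basic operations (1304.2382 above): Split the input space
# into regions, And Bound each region's behavior and probability, using only
# a LOWER BOUND on the input density rather than the density itself.

def sab_lower_bound(f, target, density_lb, splits=100):
    """Lower-bound P(f(u) in target) for u on [0, 1], given density >= density_lb.
    Assumes (for brevity) that the equilibrium map f is monotone, so a
    region's behavior range is spanned by its endpoint values."""
    lo, hi = target
    prob = 0.0
    w = 1.0 / splits
    for i in range(splits):
        a, b = i * w, (i + 1) * w
        ya, yb = f(a), f(b)                     # region's behavior at endpoints
        ylo, yhi = min(ya, yb), max(ya, yb)
        if lo <= ylo and yhi <= hi:             # behavior certainly in target...
            prob += density_lb * w              # ...credit that region's mass bound
    return prob

# Equilibrium state x = u**2; how likely is x in [0.25, 1] if density >= 0.8?
print(sab_lower_bound(lambda u: u * u, (0.25, 1.0), 0.8))   # ~0.4
```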
1304.2383 | Generalizing the Dempster-Shafer Theory to Fuzzy Sets | cs.AI | With the desire to apply the Dempster-Shafer theory to complex real world
problems where the evidential strength is often imprecise and vague, several
attempts have been made to generalize the theory. However, the important
concept in the D-S theory that the belief and plausibility functions are lower
and upper probabilities is no longer preserved in these generalizations. In
this paper, we describe a generalized theory of evidence where the degree of
belief in a fuzzy set is obtained by minimizing the probability of the fuzzy
set under the constraints imposed by a basic probability assignment. To
formulate the probabilistic constraint of a fuzzy focal element, we decompose
it into a set of consonant non-fuzzy focal elements. By generalizing the
compatibility relation to a possibilistic one, we are able to justify our
generalization of Dempster's rule based on possibility distributions. Our
generalization not only extends the application of the D-S theory but also
illustrates a way that probability theory and fuzzy set theory can be combined
to deal with different kinds of uncertain information in AI systems.
|
1304.2384 | Logical Fuzzy Optimization | cs.AI | We present a logical framework to represent and reason about fuzzy
optimization problems based on fuzzy answer set optimization programming. This
is accomplished by allowing fuzzy optimization aggregates, e.g., minimum and
maximum in the language of fuzzy answer set optimization programming to allow
minimization or maximization of some desired criteria under fuzzy environments.
We show the application of the proposed logical fuzzy optimization framework
under the fuzzy answer set optimization programming to the fuzzy water
allocation optimization problem.
|
1304.2387 | Blind Interference Suppression and Power Adjustment with Alternating
Optimization for Cooperative DS-CDMA Networks | cs.IT math.IT | This work presents blind joint interference suppression and power allocation
algorithms for DS-CDMA networks with multiple relays and decode-and-forward
protocols. A scheme for joint allocation of power levels across the relays
subject to group-based power constraints and the design of linear receivers for
interference suppression is proposed. A code-constrained constant modulus (CCM)
design for the receive filters and the power allocation vectors is devised
along with a blind channel estimator. In order to solve the proposed
optimization efficiently, an alternating optimization strategy is presented
with recursive least squares (RLS)-type algorithms for estimating the
parameters of the receiver, the power allocation and the channels. Simulations
show that the proposed algorithms obtain significant gains in capacity and
performance over existing schemes.
|
1304.2388 | Joint Iterative Power Adjustment and Interference Suppression Algorithms
for Cooperative DS-CDMA Networks | cs.IT math.IT | This work presents joint iterative power allocation and interference
suppression algorithms for DS-CDMA networks which employ multiple relays and
the amplify-and-forward cooperation strategy. We propose a joint constrained
optimization framework that considers the allocation of power levels across the
relays subject to individual and global power constraints and the design of
linear receivers for interference suppression. We derive constrained minimum
mean-squared error (MMSE) expressions for the parameter vectors that determine
the optimal power levels across the relays and the parameters of the linear
receivers. In order to solve the proposed optimization problems efficiently, we
develop recursive least squares (RLS) algorithms for adaptive joint iterative
power allocation, and receiver and channel parameter estimation. Simulation
results show that the proposed algorithms obtain significant gains in
performance and capacity over existing schemes.
|
1304.2401 | RESLVE: Leveraging User Interest to Improve Entity Disambiguation on
Short Text | cs.IR cs.HC | We address the Named Entity Disambiguation (NED) problem for short,
user-generated texts on the social Web. In such settings, the lack of
linguistic features and sparse lexical context result in a high degree of
ambiguity and sharp performance drops of nearly 50% in the accuracy of
conventional NED systems. We handle these challenges by developing a model of
user-interest with respect to a personal knowledge context; and Wikipedia, a
particularly well-established and reliable knowledge base, is used to
instantiate the procedure. We conduct systematic evaluations using individuals'
posts from Twitter, YouTube, and Flickr and demonstrate that our novel
technique is able to achieve substantial performance gains beyond
state-of-the-art NED methods.
|
1304.2418 | Mod\`ele flou d'expression des pr\'ef\'erences bas\'e sur les CP-Nets | cs.AI | This article addresses the problem of expressing preferences in flexible
queries while basing on a combination of the fuzzy logic theory and Conditional
Preference Networks or CP-Nets.
|
1304.2444 | Common Information and Secret Key Capacity | cs.IT math.IT | We study the generation of a secret key of maximum rate by a pair of
terminals observing correlated sources and with the means to communicate over a
noiseless public communication channel. Our main result establishes a
structural equivalence between the generation of a maximum rate secret key and
the generation of a common randomness that renders the observations of the two
terminals conditionally independent. The minimum rate of such common
randomness, termed interactive common information, is related to Wyner's notion
of common information, and serves to characterize the minimum rate of
interactive public communication required to generate an optimum rate secret
key. This characterization yields a single-letter expression for the
aforementioned communication rate when the number of rounds of interaction is
bounded. An application of our results shows that interaction does not reduce
this rate for binary symmetric sources. Further, we provide an example for
which interaction does reduce the minimum rate of communication. Also, certain
invariance properties of common information quantities are established that may
be of independent interest.
|
1304.2467 | Evolutionary Design of Digital Circuits Using Genetic Programming | cs.NE | For simple digital circuits, conventional method of designing circuits can
easily be applied. But for complex digital circuits, the conventional method of
designing circuits is not fruitfully applicable because it is time-consuming.
On the contrary, Genetic Programming is used mostly for automatic program
generation. The modern approach for designing Arithmetic circuits, commonly
digital circuits, is based on Graphs. This graph-based evolutionary design of
arithmetic circuits is a method of optimized designing of arithmetic circuits.
In this paper, a new technique for evolutionary design of digital circuits is
proposed using Genetic Programming (GP) with Subtree Mutation in place of
Graph-based design. The results obtained using this technique demonstrates the
potential capability of genetic programming in digital circuit design with
limited computer algorithms. The proposed technique helps to simplify and
speed up the process of designing digital circuits, and it introduces a
variation in the field of digital circuit design in which optimized digital
circuits can be successfully and effectively designed.
|
1304.2476 | Corpus-based Web Document Summarization using Statistical and Linguistic
Approach | cs.IR cs.CL | Single document summarization generates summary by extracting the
representative sentences from the document. In this paper, we presented a novel
technique for summarization of domain-specific text from a single web document
that uses statistical and linguistic analysis on the text in a reference corpus
and the web document. The proposed summarizer uses the combinational function
of Sentence Weight (SW) and Subject Weight (SuW) to determine the rank of a
sentence, where SW is the function of number of terms (t_n) and number of words
(w_n) in a sentence, and term frequency (t_f) in the corpus and SuW is the
function of t_n and w_n in a subject, and t_f in the corpus. 30 percent of the
ranked sentences are considered to be the summary of the web document. We
generated three web document summaries using our technique and compared each of
them with the summaries developed manually from 16 different human subjects.
Results showed that 68 percent of the summaries produced by our approach
agree with the manual summaries.
|
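The ranking step of 1304.2476 above, sketched minimally. The abstract does not
give the functional forms of SW and SuW, so the weight below (corpus
term-frequency mass normalized by sentence length) is assumed purely for
illustration, as are the toy corpus and document:

```python
# Sketch of corpus-based sentence ranking in the spirit of 1304.2476 above.
# The assumed weight is NOT the paper's SW/SuW, only a stand-in.
from collections import Counter

corpus_tf = Counter("the cat sat on the mat the dog sat".split())   # toy corpus

def sentence_weight(sentence, tf=corpus_tf):
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    if not words:
        return 0.0
    # Assumed form: corpus term-frequency mass of known terms, divided by
    # the number of words in the sentence (w_n).
    return sum(tf[w] for w in words if w in tf) / len(words)

doc = ["The cat sat quietly.", "Quantum flux capacitors hum.", "The dog sat."]
ranked = sorted(doc, key=sentence_weight, reverse=True)
summary = ranked[: max(1, round(0.3 * len(doc)))]   # keep the top 30 percent
print(summary)
```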