| id | title | categories | abstract |
|---|---|---|---|
1301.7366 | Marginalizing in Undirected Graph and Hypergraph Models | cs.AI | Given an undirected graph G or hypergraph H model for a given set of
variables V, we introduce two marginalization operators for obtaining the
undirected graph G_A or hypergraph H_A associated with a given subset A ⊂ V such
that the marginal distribution of A factorizes according to G_A or H_A,
respectively. Finally, we illustrate the method by its application to some
practical examples. With them we show that hypergraph models allow defining a
finer factorization or performing a more precise conditional independence
analysis than undirected graph models.
|
1301.7367 | Utility Elicitation as a Classification Problem | cs.AI | We investigate the application of classification techniques to utility
elicitation. In a decision problem, two sets of parameters must generally be
elicited: the probabilities and the utilities. While the prior and conditional
probabilities in the model do not change from user to user, the utility models
do. Thus it is necessary to elicit a utility model separately for each new
user. Elicitation is long and tedious, particularly if the outcome space is
large and not decomposable. There are two common approaches to utility function
elicitation. The first is to base the determination of the user's utility
function solely on elicitation of qualitative preferences. The second makes
assumptions about the form and decomposability of the utility function. Here we
take a different approach: we attempt to identify the new user's utility
function based on classification relative to a database of previously collected
utility functions. We do this by identifying clusters of utility functions that
minimize an appropriate distance measure. Having identified the clusters, we
develop a classification scheme that requires many fewer and simpler
assessments than full utility elicitation and is more robust than utility
elicitation based solely on preferences. We have tested our algorithm on a
small database of utility functions in a prenatal diagnosis domain and the
results are quite promising.
|
1301.7368 | Irrelevance and Independence Relations in Quasi-Bayesian Networks | cs.AI | This paper analyzes irrelevance and independence relations in graphical
models associated with convex sets of probability distributions (called
Quasi-Bayesian networks). The basic question in Quasi-Bayesian networks is, How
can irrelevance/independence relations in Quasi-Bayesian networks be detected,
enforced and exploited? This paper addresses these questions through Walley's
definitions of irrelevance and independence. Novel algorithms and results are
presented for inferences with the so-called natural extensions using fractional
linear programming, and the properties of the so-called type-1 extensions are
clarified through a new generalization of d-separation.
|
1301.7369 | Dynamic Jointrees | cs.AI | It is well known that one can ignore parts of a belief network when computing
answers to certain probabilistic queries. It is also well known that the
ignorable parts (if any) depend on the specific query of interest and,
therefore, may change as the query changes. Algorithms based on jointrees,
however, do not seem to take computational advantage of these facts given that
they typically construct jointrees for worst-case queries; that is, queries for
which every part of the belief network is considered relevant. To address this
limitation, we propose in this paper a method for reconfiguring jointrees
dynamically as the query changes. The reconfiguration process aims at
maintaining a jointree which corresponds to the underlying belief network after
it has been pruned given the current query. Our reconfiguration method is
marked by three characteristics: (a) it is based on a non-classical definition
of jointrees; (b) it is relatively efficient; and (c) it can reuse some of the
computations performed before a jointree is reconfigured. We present
preliminary experimental results which demonstrate significant savings over
using static jointrees when query changes are considerable.
|
1301.7370 | On the Semi-Markov Equivalence of Causal Models | cs.AI | The variability of structure in a finite Markov equivalence class of causally
sufficient models represented by directed acyclic graphs has been fully
characterized. Without causal sufficiency, an infinite semi-Markov equivalence
class of models has only been characterized by the fact that each model in the
equivalence class entails the same marginal statistical dependencies. In this
paper, we study the variability of structure of causal models within a
semi-Markov equivalence class and propose a systematic approach to construct
models entailing any specific marginal statistical dependencies.
|
1301.7371 | Comparative Uncertainty, Belief Functions and Accepted Beliefs | cs.AI | This paper relates comparative belief structures and a general view of belief
management in the setting of deductively closed logical representations of
accepted beliefs. We show that the range of compatibility between the classical
deductive closure and uncertain reasoning covers precisely the nonmonotonic
'preferential' inference system of Kraus, Lehmann and Magidor and nothing else.
In terms of uncertain reasoning any possibility or necessity measure gives
birth to a structure of accepted beliefs. The classes of probability functions
and of Shafer's belief functions which yield belief sets prove to be very
special ones.
|
1301.7372 | Qualitative Decision Theory with Sugeno Integrals | cs.AI | This paper presents an axiomatic framework for qualitative decision under
uncertainty in a finite setting. The corresponding utility is expressed by a
sup-min expression, called Sugeno (or fuzzy) integral. Technically speaking,
Sugeno integral is a median, which is indeed a qualitative counterpart to the
averaging operation underlying expected utility. The axiomatic justification of
Sugeno integral-based utility is expressed in terms of preference between acts
as in Savage decision theory. Pessimistic and optimistic qualitative utilities,
based on necessity and possibility measures, previously introduced by two of
the authors, can be retrieved in this setting by adding appropriate axioms.
|
1301.7373 | The Bayesian Structural EM Algorithm | cs.LG cs.AI stat.ML | In recent years there has been a flurry of works on learning Bayesian
networks from data. One of the hard problems in this area is how to effectively
learn the structure of a belief network from incomplete data; that is, in the
presence of missing values or hidden variables. In a recent paper, I introduced
an algorithm called Structural EM that combines the standard Expectation
Maximization (EM) algorithm, which optimizes parameters, with structure search
for model selection. That algorithm learns networks based on penalized
likelihood scores, which include the BIC/MDL score and various approximations
to the Bayesian score. In this paper, I extend Structural EM to deal directly
with Bayesian model selection. I prove the convergence of the resulting
algorithm and show how to apply it for learning a large class of probabilistic
models, including Bayesian networks and some variants thereof.
|
1301.7374 | Learning the Structure of Dynamic Probabilistic Networks | cs.AI cs.LG | Dynamic probabilistic networks are a compact representation of complex
stochastic processes. In this paper we examine how to learn the structure of a
DPN from data. We extend structure scoring rules for standard probabilistic
networks to the dynamic case, and show how to search for structure when some of
the variables are hidden. Finally, we examine two applications where such a
technology might be useful: predicting and classifying dynamic behaviors, and
learning causal orderings in biological processes. We provide empirical results
that demonstrate the applicability of our methods in both domains.
|
1301.7375 | Learning by Transduction | cs.LG stat.ML | We describe a method for predicting a classification of an object given
classifications of the objects in the training set, assuming that the pairs
object/classification are generated by an i.i.d. process from a continuous
probability distribution. Our method is a modification of Vapnik's
support-vector machine; its main novelty is that it gives not only the
prediction itself but also a practicable measure of the evidence found in
support of that prediction. We also describe a procedure for assigning degrees
of confidence to predictions made by the support vector machine. Some
experimental results are presented, and possible extensions of the algorithms
are discussed.
|
1301.7376 | Graphical Models and Exponential Families | cs.LG stat.ML | We provide a classification of graphical models according to their
representation as subfamilies of exponential families. Undirected graphical
models with no hidden variables are linear exponential families (LEFs),
directed acyclic graphical models and chain graphs with no hidden variables,
including Bayesian networks with several families of local distributions, are
curved exponential families (CEFs), and graphical models with hidden variables
are stratified exponential families (SEFs). An SEF is a finite union of CEFs
satisfying a frontier condition. In addition, we illustrate how one can
automatically generate independence and non-independence constraints on the
distributions over the observable variables implied by a Bayesian network with
hidden variables. The relevance of these results for model selection is
examined.
|
1301.7377 | Psychological and Normative Theories of Causal Power and the
Probabilities of Causes | cs.AI stat.ME | This paper (1) shows that the best supported current psychological theory
(Cheng, 1997) of how human subjects judge the causal power or influence of
variations in presence or absence of one feature on another, given data on
their covariation, tacitly uses a Bayes network which is either a noisy-OR gate
(for causes that promote the effect) or a noisy-AND gate (for causes that
inhibit the effect); (2) generalizes Cheng's theory to arbitrary acyclic networks
of noisy-OR and noisy-AND gates; (3) gives various sufficient conditions for the
estimation of the parameters in such networks when there are independent,
unobserved causes; (4) distinguishes direct causal influence of one feature on
another (influence along a path with one edge) from total influence (influence
along all paths from one variable to another) and gives sufficient conditions
for estimating each when there are unobserved causes of the outcome variable;
(5) describes the relation between Cheng models and a simplified version of the
Rubin framework for representing causal relations.
|
1301.7378 | Minimum Encoding Approaches for Predictive Modeling | cs.LG stat.ML | We analyze differences between two information-theoretically motivated
approaches to statistical inference and model selection: the Minimum
Description Length (MDL) principle, and the Minimum Message Length (MML)
principle. Based on this analysis, we present two revised versions of MML: a
pointwise estimator which gives the MML-optimal single parameter model, and a
volumewise estimator which gives the MML-optimal region in the parameter space.
Our empirical results suggest that with small data sets, the MDL approach
yields more accurate predictions than the MML estimators. The empirical results
also demonstrate that the revised MML estimators introduced here perform better
than the original MML estimator suggested by Wallace and Freeman.
|
1301.7379 | Towards Case-Based Preference Elicitation: Similarity Measures on
Preference Structures | cs.AI | While decision theory provides an appealing normative framework for
representing rich preference structures, eliciting utility or value functions
typically incurs a large cost. For many applications involving interactive
systems this overhead precludes the use of formal decision-theoretic models of
preference. Instead of performing elicitation in a vacuum, it would be useful
if we could augment directly elicited preferences with some appropriate default
information. In this paper we propose a case-based approach to alleviating the
preference elicitation bottleneck. Assuming the existence of a population of
users from whom we have elicited complete or incomplete preference structures,
we propose eliciting the preferences of a new user interactively and
incrementally, using the closest existing preference structures as potential
defaults. Since a notion of closeness demands a measure of distance among
preference structures, this paper takes the first step of studying various
distance measures over fully and partially specified preference structures. We
explore the use of Euclidean distance, Spearman's footrule, and define a new
measure, the probabilistic distance. We provide computational techniques for
all three measures.
|
1301.7380 | Solving POMDPs by Searching in Policy Space | cs.AI | Most algorithms for solving POMDPs iteratively improve a value function that
implicitly represents a policy and are said to search in value function space.
This paper presents an approach to solving POMDPs that represents a policy
explicitly as a finite-state controller and iteratively improves the controller
by search in policy space. Two related algorithms illustrate this approach. The
first is a policy iteration algorithm that can outperform value iteration in
solving infinite-horizon POMDPs. It provides the foundation for a new heuristic
search algorithm that promises further speedup by focusing computational effort
on regions of the problem space that are reachable, or likely to be reached,
from a start state.
|
1301.7381 | Hierarchical Solution of Markov Decision Processes using Macro-actions | cs.AI | We investigate the use of temporally abstract actions, or macro-actions, in
the solution of Markov decision processes. Unlike current models that combine
both primitive actions and macro-actions and leave the state space unchanged,
we propose a hierarchical model (using an abstract MDP) that works with
macro-actions only, and that significantly reduces the size of the state space.
This is achieved by treating macro-actions as local policies that act in certain
regions of state space, and by restricting states in the abstract MDP to those
at the boundaries of regions. The abstract MDP approximates the original and
can be solved more efficiently. We discuss several ways in which macro-actions
can be generated to ensure good solution quality. Finally, we consider ways in
which macro-actions can be reused to solve multiple, related MDPs; and we show
that this can justify the computational overhead of macro-action generation.
|
1301.7382 | Inferring Informational Goals from Free-Text Queries: A Bayesian
Approach | cs.IR cs.AI cs.CL | People using consumer software applications typically do not use technical
jargon when querying an online database of help topics. Rather, they attempt to
communicate their goals with common words and phrases that describe software
functionality in terms of structure and objects they understand. We describe a
Bayesian approach to modeling the relationship between words in a user's query
for assistance and the informational goals of the user. After reviewing the
general method, we describe several extensions that center on integrating
additional distinctions and structure about language usage and user goals into
the Bayesian models.
|
1301.7383 | Evaluating Las Vegas Algorithms - Pitfalls and Remedies | cs.AI | Stochastic search algorithms are among the most successful approaches for
solving hard combinatorial problems. A large class of stochastic search
approaches can be cast into the framework of Las Vegas Algorithms (LVAs). As
the run-time behavior of LVAs is characterized by random variables, the
detailed knowledge of run-time distributions provides important information for
the analysis of these algorithms. In this paper we propose a novel methodology
for evaluating the performance of LVAs, based on the identification of
empirical run-time distributions. We exemplify our approach by applying it to
Stochastic Local Search (SLS) algorithms for the satisfiability problem (SAT)
in propositional logic. We point out pitfalls arising from the use of improper
empirical methods and discuss the benefits of the proposed methodology for
evaluating and comparing LVAs.
|
1301.7384 | An Anytime Algorithm for Decision Making under Uncertainty | cs.AI | We present an anytime algorithm which computes policies for decision problems
represented as multi-stage influence diagrams. Our algorithm constructs
policies incrementally, starting from a policy which makes no use of the
available information. The incremental process constructs policies which
include more of the information available to the decision maker at each step.
While the process converges to the optimal policy, our approach is designed for
situations in which computing the optimal policy is infeasible. We provide
examples of the process on several large decision problems, showing that, for
these examples, the process constructs valuable (but sub-optimal) policies
before the optimal policy would be available by traditional methods.
|
1301.7385 | The Lumiere Project: Bayesian User Modeling for Inferring the Goals and
Needs of Software Users | cs.AI cs.HC | The Lumiere Project centers on harnessing probability and utility to provide
assistance to computer software users. We review work on Bayesian user models
that can be employed to infer a user's needs by considering a user's background,
actions, and queries. Several problems were tackled in Lumiere research,
including (1) the construction of Bayesian models for reasoning about the
time-varying goals of computer users from their observed actions and queries,
(2) gaining access to a stream of events from software applications, (3)
developing a language for transforming system events into observational
variables represented in Bayesian user models, (4) developing persistent
profiles to capture changes in a user's expertise, and (5) the development of an
overall architecture for an intelligent user interface. Lumiere prototypes
served as the basis for the Office Assistant in the Microsoft Office '97 suite
of productivity applications.
|
1301.7386 | Any Time Probabilistic Reasoning for Sensor Validation | cs.AI | For many real time applications, it is important to validate the information
received from the sensors before entering higher levels of reasoning. This
paper presents an any time probabilistic algorithm for validating the
information provided by sensors. The system consists of two Bayesian network
models. The first one is a model of the dependencies between sensors and it is
used to validate each sensor. It provides a list of potentially faulty sensors.
To isolate the real faults, a second Bayesian network is used, which relates
the potential faults with the real faults. This second model is also used to
make the validation algorithm any time, by validating first the sensors that
provide more information. To select the next sensor to validate, and measure
the quality of the results at each stage, an entropy function is used. This
function captures in a single quantity both the certainty and specificity
measures of any time algorithms. Together, both models constitute a mechanism
for validating sensors in an any time fashion, providing at each step the
probability of correct/faulty for each sensor, and the total quality of the
results. The algorithm has been tested in the validation of temperature sensors
of a power plant.
|
1301.7387 | Measure Selection: Notions of Rationality and Representation
Independence | cs.AI | We take another look at the general problem of selecting a preferred
probability measure among those that comply with some given constraints. The
dominant role that entropy maximization has obtained in this context is
questioned by arguing that the minimum information principle on which it is
based could be supplanted by an at least as plausible "likelihood of evidence"
principle. We then review a method for turning given selection functions into
representation independent variants, and discuss the tradeoffs involved in this
transformation.
|
1301.7388 | Implementing Resolute Choice Under Uncertainty | cs.GT cs.AI | The adaptation to situations of sequential choice under uncertainty of
decision criteria which deviate from (subjective) expected utility raises the
problem of ensuring the selection of a nondominated strategy. In particular,
when following the suggestion of Machina and McClennen of giving up
separability (also known as consequentialism), which requires the choice of a
substrategy in a subtree to depend only on data relevant to that subtree, one
must renounce to the use of dynamic programming, since Bellman's principle is
no longer valid. An interpretation of McClennen's resolute choice, based on
cooperation between the successive Selves of the decision maker, is proposed.
Implementations of resolute choice which prevent money pumps, negative prices
of information, or, more generally, choices of dominated strategies, while
remaining computationally tractable, are proposed.
|
1301.7389 | Dealing with Uncertainty on the Initial State of a Petri Net | cs.AI cs.SY | This paper proposes a method to find the actual state of a complex dynamic
system from information coming from the sensors on the system itself, or on
its environment. The nominal evolution of the system is known a priori and can
be modeled (by an expert, for example) by different methods; in this paper,
Petri nets have been chosen. Contrary to the usual use of Petri nets, the
initial state of the system is unknown, so a degree of belief is bound to
each place, or set of places. The theory used to model this uncertainty is
Dempster-Shafer theory, which is well adapted to this type of problem. From the
given Petri net characterizing the nominal evolution of the dynamic system, and
from the observation inputs, the proposed method determines, according to the
reliability of the model and the inputs, the state of the system at any
time.
|
1301.7390 | Hierarchical Mixtures-of-Experts for Exponential Family Regression
Models with Generalized Linear Mean Functions: A Survey of Approximation and
Consistency Results | cs.LG stat.ML | We investigate a class of hierarchical mixtures-of-experts (HME) models where
exponential family regression models with generalized linear mean functions of
the form psi(alpha + x^T beta) are mixed. Here psi(.) is the inverse link function.
Suppose the true response y follows an exponential family regression model with
mean function belonging to a class of smooth functions of the form psi(h(x))
where h(.) is in W_2^infty (a Sobolev class over [0,1]^s). It is shown that the
HME probability density functions can approximate the true density, at a rate
of O(m^{-2/s}) in L_p norm, and at a rate of O(m^{-4/s}) in Kullback-Leibler
divergence. These rates can be achieved within the family of HME structures
with no more than s layers, where s is the dimension of the predictor x. It is
also shown that likelihood-based inference based on HME is consistent in
recovering the truth, in the sense that as the sample size n and the number of
experts m both increase, the mean square error of the predicted mean response
goes to zero. Conditions for such results to hold are stated and discussed.
|
1301.7391 | Exact Inference of Hidden Structure from Sample Data in Noisy-OR
Networks | cs.AI | In the literature on graphical models, there has been increased attention
paid to the problems of learning hidden structure (see Heckerman [H96] for
survey) and causal mechanisms from sample data [H96, P88, S93, P95, F98]. In
most settings we should expect the former to be difficult, and the latter
potentially impossible without experimental intervention. In this work, we
examine some restricted settings in which the hidden structure can be perfectly
reconstructed solely on the basis of observed sample data.
|
1301.7392 | Large Deviation Methods for Approximate Probabilistic Inference | cs.LG stat.ML | We study two-layer belief networks of binary random variables in which the
conditional probabilities Pr[child|parents] depend monotonically on weighted
sums of the parents. In large networks where exact probabilistic inference is
intractable, we show how to compute upper and lower bounds on many
probabilities of interest. In particular, using methods from large deviation
theory, we derive rigorous bounds on marginal probabilities such as
Pr[children] and prove rates of convergence for the accuracy of our bounds as a
function of network size. Our results apply to networks with generic transfer
function parameterizations of the conditional probability tables, such as
sigmoid and noisy-OR. They also explicitly illustrate the types of averaging
behavior that can simplify the problem of inference in large networks.
|
1301.7393 | Mixture Representations for Inference and Learning in Boltzmann Machines | cs.LG stat.ML | Boltzmann machines are undirected graphical models with two-state stochastic
variables, in which the logarithms of the clique potentials are quadratic
functions of the node states. They have been widely studied in the neural
computing literature, although their practical applicability has been limited
by the difficulty of finding an effective learning algorithm. One
well-established approach, known as mean field theory, represents the
stochastic distribution using a factorized approximation. However, the
corresponding learning algorithm often fails to find a good solution. We
conjecture that this is due to the implicit uni-modality of the mean field
approximation which is therefore unable to capture multi-modality in the true
distribution. In this paper we use variational methods to approximate the
stochastic distribution using multi-modal mixtures of factorized distributions.
We present results for both inference and learning to demonstrate the
effectiveness of this approach.
|
1301.7394 | A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer
Architectures for Computing Marginals of Probability Distributions | cs.AI | In the last decade, several architectures have been proposed for exact
computation of marginals using local computation. In this paper, we compare
three architectures - Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer - from
the perspective of graphical structure for message propagation, message-passing
scheme, computational efficiency, and storage efficiency.
|
1301.7395 | Incremental Tradeoff Resolution in Qualitative Probabilistic Networks | cs.AI | Qualitative probabilistic reasoning in a Bayesian network often reveals
tradeoffs: relationships that are ambiguous due to competing qualitative
influences. We present two techniques that combine qualitative and numeric
probabilistic reasoning to resolve such tradeoffs, inferring the qualitative
relationship between nodes in a Bayesian network. The first approach
incrementally marginalizes nodes that contribute to the ambiguous qualitative
relationships. The second approach evaluates approximate Bayesian networks for
bounds of probability distributions, and uses these bounds to determine the
qualitative relationships in question. This approach is also incremental in
that the algorithm refines the state spaces of random variables for tighter
bounds until the qualitative relationships are resolved. Both approaches
provide systematic methods for tradeoff resolution at potentially lower
computational cost than application of purely numeric methods.
|
1301.7396 | Using Qualitative Relationships for Bounding Probability Distributions | cs.AI | We exploit qualitative probabilistic relationships among variables for
computing bounds of conditional probability distributions of interest in
Bayesian networks. Using the signs of qualitative relationships, we can
implement abstraction operations that are guaranteed to bound the distributions
of interest in the desired direction. By evaluating incrementally improved
approximate networks, our algorithm obtains monotonically tightening bounds
that converge to exact distributions. For supermodular utility functions, the
tightening bounds monotonically reduce the set of admissible decision
alternatives as well.
|
1301.7397 | Magic Inference Rules for Probabilistic Deduction under Taxonomic
Knowledge | cs.AI | We present locally complete inference rules for probabilistic deduction from
taxonomic and probabilistic knowledge-bases over conjunctive events. Crucially,
in contrast to similar inference rules in the literature, our inference rules
are locally complete for conjunctive events and under additional taxonomic
knowledge. We discover that our inference rules are extremely complex and that
it is at first glance not clear at all where the deduced tightest bounds come
from. Moreover, analyzing the global completeness of our inference rules, we
find examples of globally very incomplete probabilistic deductions. More
generally, we even show that all systems of inference rules for taxonomic and
probabilistic knowledge-bases over conjunctive events are globally incomplete.
We conclude that probabilistic deduction by the iterative application of
inference rules on interval restrictions for conditional probabilities, even
though considered very promising in the literature so far, seems very limited
in its field of application.
|
1301.7398 | Lazy Propagation in Junction Trees | cs.AI | The efficiency of algorithms using secondary structures for probabilistic
inference in Bayesian networks can be improved by exploiting independence
relations induced by evidence and the direction of the links in the original
network. In this paper we present an algorithm that on-line exploits
independence relations induced by evidence and the direction of the links in
the original network to reduce both time and space costs. Instead of
multiplying the conditional probability distributions for the various cliques,
we determine on-line which potentials to multiply when a message is to be
produced. The performance improvement of the algorithm is emphasized through
empirical evaluations involving large real world Bayesian networks, and we
compare the method with the HUGIN and Shafer-Shenoy inference algorithms.
|
1301.7399 | Constructing Situation Specific Belief Networks | cs.AI | This paper describes a process for constructing situation-specific belief
networks from a knowledge base of network fragments. A situation-specific
network is a minimal query complete network constructed from a knowledge base
in response to a query for the probability distribution on a set of target
variables given evidence and context variables. We present definitions of query
completeness and situation-specific networks. We describe conditions on the
knowledge base that guarantee query completeness. The relationship of our work
to earlier work on KBMC is also discussed.
|
1301.7401 | An Experimental Comparison of Several Clustering and Initialization
Methods | cs.LG stat.ML | We examine methods for clustering in high dimensions. In the first part of
the paper, we perform an experimental comparison between three batch clustering
algorithms: the Expectation-Maximization (EM) algorithm, a winner take all
version of the EM algorithm reminiscent of the K-means algorithm, and
model-based hierarchical agglomerative clustering. We learn naive-Bayes models
with a hidden root node, using high-dimensional discrete-variable data sets
(both real and synthetic). We find that the EM algorithm significantly
outperforms the other methods, and proceed to investigate the effect of various
initialization schemes on the final solution produced by the EM algorithm. The
initializations that we consider are (1) parameters sampled from an
uninformative prior, (2) random perturbations of the marginal distribution of
the data, and (3) the output of hierarchical agglomerative clustering. Although
the methods are substantially different, they lead to learned models that are
strikingly similar in quality.
|
1301.7402 | From Likelihood to Plausibility | cs.AI | Several authors have explained that the likelihood ratio measures the
strength of the evidence represented by observations in statistical problems.
This idea works fine when the goal is to evaluate the strength of the available
evidence for a simple hypothesis versus another simple hypothesis. However, the
applicability of this idea is limited to simple hypotheses because the
likelihood function is primarily defined on points (simple hypotheses) of the
parameter space. In this paper we define a general weight of evidence that is
applicable to both simple and composite hypotheses. It is based on the
Dempster-Shafer concept of plausibility and is shown to be a generalization of
the likelihood ratio. Functional models are of fundamental importance for the
general weight of evidence proposed in this paper. The relevant concepts and
ideas are explained by means of a familiar urn problem and the general analysis
of a real-world medical problem is presented.
|
1301.7403 | A Multivariate Discretization Method for Learning Bayesian Networks from
Mixed Data | cs.AI cs.LG | In this paper we address the problem of discretization in the context of
learning Bayesian networks (BNs) from data containing both continuous and
discrete variables. We describe a new technique for multivariate
discretization, whereby each continuous variable is discretized while taking
into account its interaction with the other variables. The technique is based
on the use of a Bayesian scoring metric that scores the discretization policy
for a continuous variable given a BN structure and the observed data. Since the
metric is relative to the BN structure currently being evaluated, the
discretization of a variable needs to be dynamically adjusted as the BN
structure changes.
|
1301.7404 | Resolving Conflicting Arguments under Uncertainties | cs.AI | Distributed knowledge-based applications in open domains rely on common-sense
information which is bound to be uncertain and incomplete. To draw useful
conclusions from ambiguous data, one must address the incurred uncertainties
and conflicts in a holistic view. No integrated framework is viable without an
in-depth analysis of the conflicts incurred by uncertainties. In this paper, we
give such an analysis and, based on the result, propose an integrated framework.
Our framework extends definite argumentation theory to model uncertainty. It
supports three views over conflicting and uncertain knowledge. Thus, knowledge
engineers can draw different conclusions depending on the application context
(i.e. view). We also give an illustrative example on strategic decision
support to show the practical usefulness of our framework.
|
1301.7405 | Flexible Decomposition Algorithms for Weakly Coupled Markov Decision
Problems | cs.AI | This paper presents two new approaches to decomposing and solving large
Markov decision problems (MDPs), a partial decoupling method and a complete
decoupling method. In these approaches, a large, stochastic decision problem is
divided into smaller pieces. The first approach builds a cache of policies for
each part of the problem independently, and then combines the pieces in a
separate, light-weight step. A second approach also divides the problem into
smaller pieces, but information is communicated between the different problem
pieces, allowing intelligent decisions to be made about which piece requires
the most attention. Both approaches can be used to find optimal policies or
approximately optimal policies with provable bounds. These algorithms also
provide a framework for the efficient transfer of knowledge across problems
that share similar structure.
|
1301.7406 | Logarithmic Time Parallel Bayesian Inference | cs.AI | I present a parallel algorithm for exact probabilistic inference in Bayesian
networks. For polytree networks with n variables, the worst-case time
complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write
parallel random-access machine) with n processors, for any constant number of
evidence variables. For arbitrary networks, the time complexity is O(r^{3w}*log
n) for n processors, or O(w*log n) for r^{3w}*n processors, where r is the
maximum range of any variable, and w is the induced width (the maximum clique
size), after moralizing and triangulating the network.
|
1301.7407 | Learning From What You Don't Observe | cs.AI | The process of diagnosis involves learning about the state of a system from
various observations of symptoms or findings about the system. Sophisticated
Bayesian (and other) algorithms have been developed to revise and maintain
beliefs about the system as observations are made. Nonetheless, diagnostic
models have tended to ignore some common-sense reasoning exploited by human
diagnosticians; in particular, one can learn from which observations have not
been made, in the spirit of conversational implicature. There are two concepts
that we describe to extract information from the observations not made. First,
some symptoms, if present, are more likely to be reported before others.
Second, most human diagnosticians and expert systems are economical in their
data-gathering, searching first where they are more likely to find symptoms
present. Thus, there is a desirable bias toward reporting symptoms that are
present. We develop a simple model for these concepts that can significantly
improve diagnostic inference.
|
1301.7408 | Context-Specific Approximation in Probabilistic Inference | cs.AI | There is evidence that the numbers in probabilistic inference don't really
matter. This paper considers the idea that we can make a probabilistic model
simpler by making fewer distinctions. Unfortunately, the level of a Bayesian
network seems too coarse; it is unlikely that a parent will make little
difference for all values of the other parents. In this paper we consider an
approximation scheme where distinctions can be ignored in some contexts, but
not in other contexts. We elaborate on a notion of a parent context that allows
a structured context-specific decomposition of a probability distribution and
the associated probabilistic inference scheme called probabilistic partial
evaluation (Poole 1997). This paper shows a way to simplify a probabilistic
model by ignoring distinctions which have similar probabilities, a method to
exploit the simpler model, a bound on the resulting errors, and some
preliminary empirical results on simple networks.
|
1301.7409 | Empirical Evaluation of Approximation Algorithms for Probabilistic
Decoding | cs.AI | It was recently shown that the problem of decoding messages transmitted
through a noisy channel can be formulated as a belief updating task over a
probabilistic network [McEliece]. Moreover, it was observed that iterative
application of the (linear time) Pearl's belief propagation algorithm designed
for polytrees outperformed state-of-the-art decoding algorithms, even though
the corresponding networks may have many cycles. This paper demonstrates
empirically that an approximation algorithm approx-mpe for solving the most
probable explanation (MPE) problem, developed within the recently proposed
mini-bucket elimination framework [Dechter96], outperforms iterative belief
propagation on classes of coding networks that have bounded induced width. Our
experiments suggest that approximate MPE decoders can be good competitors to
the approximate belief updating decoders.
|
1301.7410 | Decision Theoretic Foundations of Graphical Model Selection | cs.AI | This paper describes a decision theoretic formulation of learning the
graphical structure of a Bayesian Belief Network from data. This framework
subsumes the standard Bayesian approach of choosing the model with the largest
posterior probability as the solution of a decision problem with a 0-1 loss
function and allows the use of more general loss functions able to trade off
the complexity of the selected model and the error of choosing an
oversimplified model. A new class of loss functions, called disintegrable, is
introduced, to allow the decision problem to match the decomposability of the
graphical model. With this class of loss functions, the optimal solution to the
decision problem can be found using an efficient bottom-up search strategy.
|
1301.7411 | On the Geometry of Bayesian Graphical Models with Hidden Variables | cs.LG stat.ML | In this paper we investigate the geometry of the likelihood of the unknown
parameters in a simple class of Bayesian directed graphs with hidden variables.
This enables us, before any numerical algorithms are employed, to obtain
certain insights in the nature of the unidentifiability inherent in such
models, the way posterior densities will be sensitive to prior densities and
the typical geometrical form these posterior densities might take. Many of
these insights carry over into more complicated Bayesian networks with
systematic missing data.
|
1301.7412 | Bayes-Ball: The Rational Pastime (for Determining Irrelevance and
Requisite Information in Belief Networks and Influence Diagrams) | cs.AI | One of the benefits of belief networks and influence diagrams is that so much
knowledge is captured in the graphical structure. In particular, statements of
conditional irrelevance (or independence) can be verified in time linear in the
size of the graph. To resolve a particular inference query or decision problem,
only some of the possible states and probability distributions must be
specified, the "requisite information."
This paper presents a new, simple, and efficient "Bayes-ball" algorithm which
is well-suited to both new students of belief networks and state-of-the-art
implementations. The Bayes-ball algorithm determines irrelevant sets and
requisite information more efficiently than existing methods, and is linear in
the size of the graph for belief networks and influence diagrams.
|
1301.7413 | Switching Portfolios | q-fin.PM cs.AI | A constant rebalanced portfolio is an asset allocation algorithm which keeps
the same distribution of wealth among a set of assets over a period of time.
Recently, there has been work on on-line portfolio selection algorithms which
are competitive with the best constant rebalanced portfolio determined in
hindsight. By their nature, these algorithms employ the assumption that high
returns can be achieved using a fixed asset allocation strategy. However, stock
markets are far from being stationary and in many cases the wealth achieved by
a constant rebalanced portfolio is much smaller than the wealth achieved by an
ad-hoc investment strategy that adapts to changes in the market. In this paper
we present an efficient Bayesian portfolio selection algorithm that is able to
track a changing market. We also describe a simple extension of the algorithm
for the case of a general transaction cost, including the transaction cost
models recently investigated by Blum and Kalai. We provide a simple analysis of
the competitiveness of the algorithm and check its performance on real stock
data from the New York Stock Exchange accumulated during a 22-year period.
|
1301.7414 | Bayesian Networks from the Point of View of Chain Graphs | cs.AI | The paper gives a few arguments in favour of the use of chain graphs for
description of probabilistic conditional independence structures. Every
Bayesian network model can be equivalently introduced by means of a
factorization formula with respect to a chain graph which is Markov equivalent
to the Bayesian network. A graphical characterization of such graphs is given.
The class of equivalent graphs can be represented by a distinguished graph
which is called the largest chain graph. The factorization formula with respect
to the largest chain graph is a basis of a proposal of how to represent the
corresponding (discrete) probability distribution in a computer (i.e.
parametrize it). This way does not depend on the choice of a particular
Bayesian network from the class of equivalent networks and seems to be the most
efficient way from the point of view of memory demands. A separation criterion
for reading independency statements from a chain graph is formulated in a
simpler way. It resembles the well-known d-separation criterion for Bayesian
networks and can be implemented locally.
|
1301.7415 | Learning Mixtures of DAG Models | cs.LG cs.AI stat.ML | We describe computationally efficient methods for learning mixtures in which
each component is a directed acyclic graphical model (mixtures of DAGs or
MDAGs). We argue that simple search-and-score algorithms are infeasible for a
variety of problems, and introduce a feasible approach in which parameter and
structure search is interleaved and expected data is treated as real data. Our
approach can be viewed as a combination of (1) the Cheeseman--Stutz asymptotic
approximation for model posterior probability and (2) the
Expectation--Maximization algorithm. We evaluate our procedure for selecting
among MDAGs on synthetic and real examples.
|
1301.7416 | Probabilistic Inference in Influence Diagrams | cs.AI | This paper is about reducing influence diagram (ID) evaluation into Bayesian
network (BN) inference problems. Such reduction is interesting because it
enables one to readily use one's favorite BN inference algorithm to efficiently
evaluate IDs. Two such reduction methods have been proposed previously (Cooper
1988, Shachter and Peot 1992). This paper proposes a new method. The BN
inference problems induced by the new method are much easier to solve than
those induced by the two previous methods.
|
1301.7417 | Planning with Partially Observable Markov Decision Processes: Advances
in Exact Solution Method | cs.AI | There is much interest in using partially observable Markov decision
processes (POMDPs) as a formal model for planning in stochastic domains. This
paper is concerned with finding optimal policies for POMDPs. We propose several
improvements to incremental pruning, presently the most efficient exact
algorithm for solving POMDPs.
|
1301.7418 | Flexible and Approximate Computation through State-Space Reduction | cs.AI | In the real world, insufficient information, limited computation resources,
and complex problem structures often force an autonomous agent to make a
decision in time less than that required to solve the problem at hand
completely. Flexible and approximate computations are two approaches to
decision making under limited computation resources. Flexible computation helps
an agent to flexibly allocate limited computation resources so that the overall
system utility is maximized. Approximate computation enables an agent to find
the best satisfactory solution within a deadline. In this paper, we present two
state-space reduction methods for flexible and approximate computation:
quantitative reduction to deal with inaccurate heuristic information, and
structural reduction to handle complex problem structures. These two methods
can be applied successively to continuously improve solution quality if more
computation is available. Our results show that these reduction methods are
effective and efficient, finding better solutions with less computation than
some existing well-known methods.
|
1301.7455 | Opinion Maximization in Social Networks | cs.SI physics.soc-ph | The process of opinion formation through synthesis and contrast of different
viewpoints has been the subject of many studies in economics and social
sciences. Today, this process manifests itself also in online social networks
and social media. The key characteristic of successful promotion campaigns is
that they take into consideration such opinion-formation dynamics in order to
create an overall favorable opinion about a specific information item, such as a
person, a product, or an idea.
In this paper, we adopt a well-established model for social-opinion dynamics
and formalize the campaign-design problem as the problem of identifying a set
of target individuals whose positive opinion about an information item will
maximize the overall positive opinion for the item in the social network. We
call this problem CAMPAIGN. We study the complexity of the CAMPAIGN problem,
and design algorithms for solving it. Our experiments on real data demonstrate
the efficiency and practical utility of our algorithms.
|
1301.7464 | Variable-Length Coding with Feedback: Finite-Length Codewords and
Periodic Decoding | cs.IT math.IT | Theoretical analysis has long indicated that feedback improves the error
exponent but not the capacity of single-user memoryless channels. Recently
Polyanskiy et al. studied the benefit of variable-length feedback with
termination (VLFT) codes in the non-asymptotic regime. In that work,
achievability is based on an infinite length random code and decoding is
attempted at every symbol. The coding rate backoff from capacity due to channel
dispersion is greatly reduced with feedback, allowing capacity to be approached
with surprisingly small expected latency. This paper is mainly concerned with
VLFT codes based on finite-length codes and decoding attempts only at certain
specified decoding times. The penalties of using a finite block-length $N$ and
a sequence of specified decoding times are studied. This paper shows that
properly scaling $N$ with the expected latency can achieve the same performance
up to constant terms as with $N = \infty$. The penalty introduced by periodic
decoding times is a linear term of the interval between decoding times and
hence the performance approaches capacity as the expected latency grows if the
interval between decoding times grows sub-linearly with the expected latency.
|
1301.7473 | Information driven self-organization of complex robotic behaviors | cs.RO cs.IT cs.LG math.IT | Information theory is a powerful tool to express principles to drive
autonomous systems because it is domain invariant and allows for an intuitive
interpretation. This paper studies the use of the predictive information (PI),
also called excess entropy or effective measure complexity, of the sensorimotor
process as a driving force to generate behavior. We study nonlinear and
nonstationary systems and introduce the time-local predictive information
(TiPI) which allows us to derive exact results together with explicit update
rules for the parameters of the controller in the dynamical systems framework.
In this way the information principle, formulated at the level of behavior, is
translated to the dynamics of the synapses. We underpin our results with a
number of case studies with high-dimensional robotic systems. We show the
spontaneous cooperativity in a complex physical system with decentralized
control. Moreover, a jointly controlled humanoid robot develops a high
behavioral variety depending on its physics and the environment it is
dynamically embedded into. The behavior can be decomposed into a succession of
low-dimensional modes that increasingly explore the behavior space. This is a
promising way to avoid the curse of dimensionality which hinders learning
systems from scaling well.
|
1301.7482 | Technical Report: A Receding Horizon Algorithm for Informative Path
Planning with Temporal Logic Constraints | cs.RO | This technical report is an extended version of the paper 'A Receding Horizon
Algorithm for Informative Path Planning with Temporal Logic Constraints'
accepted to the 2013 IEEE International Conference on Robotics and Automation
(ICRA). This paper considers the problem of finding the most informative path
for a sensing robot under temporal logic constraints, a richer set of
constraints than have previously been considered in information gathering. An
algorithm for informative path planning is presented that leverages tools from
information theory and formal control synthesis, and is proven to give a path
that satisfies the given temporal logic constraints. The algorithm uses a
receding horizon approach in order to provide a reactive, on-line solution
while mitigating computational complexity. Statistics compiled from multiple
simulation studies indicate that this algorithm performs better than a baseline
exhaustive search approach.
|
1301.7491 | On the Construction and Decoding of Concatenated Polar Codes | cs.IT math.IT | A scheme for concatenating the recently invented polar codes with interleaved
block codes is considered. By concatenating binary polar codes with interleaved
Reed-Solomon codes, we prove that the proposed concatenation scheme captures
the capacity-achieving property of polar codes, while having a significantly
better error-decay rate. We show that for any $\epsilon > 0$, and total frame
length $N$, the parameters of the scheme can be set such that the frame error
probability is less than $2^{-N^{1-\epsilon}}$, while the scheme is still
capacity achieving. This improves upon $2^{-N^{0.5-\epsilon}}$, the frame error
probability of Arikan's polar codes. We also propose decoding algorithms for
concatenated polar codes, which significantly improve the error-rate
performance at finite block lengths while preserving the low decoding
complexity.
|
1301.7503 | Finite Length Analysis on Listing Failure Probability of Invertible
Bloom Lookup Tables | cs.IT math.IT | The Invertible Bloom Lookup Table (IBLT) is a data structure which supports
insertion, deletion, retrieval and listing operations of the key-value pair.
The IBLT can be used to realize efficient set reconciliation for database
synchronization. The most notable feature of the IBLT is the complete listing
operation of the key-value pairs, based on an algorithm similar to the peeling
algorithm for low-density generator-matrix (LDGM) codes. In this paper, we
present a stopping set (SS) analysis for the IBLT which reveals finite-length
behaviors of the listing failure probability. The key to the analysis is
enumeration of the number of stopping matrices of a given size. We derive a
novel recursive formula useful for computationally efficient enumeration. An
upper bound on the listing failure probability based on the union bound
accurately captures the error floor behaviors. It will be shown that, in the
error floor region, the dominant SSs have size 2. We propose a simple
modification of the hash functions, called SS-avoiding hash functions,
that prevents occurrences of SSs of size 2.
|
1301.7504 | Improved Lower Bounds on the Total Variation Distance for the Poisson
Approximation | cs.IT math.IT | New lower bounds on the total variation distance between the distribution of
a sum of independent Bernoulli random variables and the Poisson random variable
(with the same mean) are derived via the Chen-Stein method. The new bounds rely
on a non-trivial modification of the analysis by Barbour and Hall (1984) which
surprisingly gives a significant improvement. An application of the new lower
bounds is also discussed.
|
1301.7506 | A Fully Distributed Opportunistic Network Coding Scheme for Cellular
Relay Networks | cs.IT math.IT | In this paper, we propose an opportunistic network coding (ONC) scheme in
cellular relay networks, which operates depending on whether the relay decodes
source messages successfully or not. A fully distributed method is presented to
implement the proposed opportunistic network coding scheme without the need of
any feedback between two network nodes. We consider the use of proposed ONC for
cellular downlink transmissions and derive its closed-form outage probability
expression considering cochannel interference in a Rayleigh fading environment.
Numerical results show that the proposed ONC scheme outperforms the traditional
non-cooperation in terms of outage probability. We also develop the
diversity-multiplexing tradeoff (DMT) of proposed ONC and show that the ONC
scheme obtains the full diversity and an increased multiplexing gain as
compared with the conventional cooperation protocols.
|
1301.7515 | Energy Efficiency of Network Cooperation for Cellular Uplink
Transmissions | cs.IT math.IT | There is a growing interest in energy efficient or so-called "green" wireless
communication to reduce the energy consumption in cellular networks. Since
today's wireless terminals are typically equipped with multiple network access
interfaces such as Bluetooth, Wi-Fi, and cellular networks, this paper
investigates user terminals cooperating with each other in transmitting their
data packets to a base station (BS) by exploiting the multiple network access
interfaces, referred to as inter-network cooperation, to improve the energy
efficiency in cellular uplink transmission. Given target outage probability and
data rate requirements, we develop a closed-form expression of energy
efficiency in Bits-per-Joule for the inter-network cooperation by taking into
account the path loss, fading, and thermal noise effects. Numerical results
show that when the cooperating users move towards each other, the proposed
inter-network cooperation significantly improves the energy efficiency as
compared with the traditional non-cooperation and intra-network cooperation.
This implies that given a certain amount of bits to be transmitted, the
inter-network cooperation requires less energy than the traditional
non-cooperation and intra-network cooperation, showing the energy saving
benefit of inter-network cooperation.
|
1301.7519 | Non-Adaptive Group Testing based on Sparse Pooling Graphs | cs.IT math.IT | In this paper, an information theoretic analysis on non-adaptive group
testing schemes based on sparse pooling graphs is presented. The binary status
of the objects to be tested are modeled by i.i.d. Bernoulli random variables
with probability p. An (l, r, n)-regular pooling graph is a bipartite graph
with left node degree l and right node degree r, where n is the number of left
nodes. Two scenarios are considered: a noiseless setting and a noisy one. The
main contributions of this paper are direct part theorems that give conditions
for the existence of an estimator achieving arbitrary small estimation error
probability. The direct part theorems are proved by averaging an upper bound on
estimation error probability of the typical set estimator over an (l, r,
n)-regular pooling graph ensemble. Numerical results indicate sharp threshold
behaviors in the asymptotic regime.
|
1301.7542 | An Analysis on Minimum s-t Cut Capacity of Random Graphs with Specified
Degree Distribution | cs.IT math.IT | The capacity (or maximum flow) of a unicast network is known to be equal to
the minimum s-t cut capacity due to the max-flow min-cut theorem. If the
topology of a network (or link capacities) is dynamically changing or unknown,
it is not so trivial to predict statistical properties on the maximum flow of
the network. In this paper, we present a probabilistic analysis for evaluating
the cumulative distribution of the minimum s-t cut capacity on random graphs.
The graph ensemble treated in this paper consists of weighted graphs with
an arbitrarily specified degree distribution. The main contribution of our work is a
lower bound on the cumulative distribution of the minimum s-t cut capacity.
From some computer experiments, it is observed that the lower bound derived
here reflects the actual statistical behavior of the minimum s-t cut capacity
of random graphs with specified degrees.
|
1301.7564 | Multiset Codes for Permutation Channels | cs.IT math.IT | This paper introduces the notion of multiset codes as relevant to the problem
of reliable information transmission over permutation channels. The motivation
for studying permutation channels comes from the effect of out-of-order
delivery of packets in some types of packet networks. The proposed codes are a
generalization of the so-called subset codes, recently proposed by the authors.
Some of the basic properties of multiset codes are established, among them
their equivalence to integer codes under the Manhattan metric. The presented
coding-theoretic framework follows closely the one proposed by Koetter and
Kschischang for the operator channels. The two mathematical models are similar
in many respects, and the basic idea is presented in a way which admits a
unified view on coding for these types of channels.
|
1301.7566 | On the Capacity of Special Classes of Gaussian Relay Networks with
Orthogonal Components and Noncausal State Information at Source | cs.IT math.IT | In this paper, we study relay networks with orthogonal components in presence
of noncausal channel state information (CSI) available at the source. We
propose an upper bound on the capacity of the discrete memoryless model (DM) for
the case in which just the source component intended for the destination is
encoded against the CSI known non-causally at the source. Also, we derive
capacity for two special classes of the Gaussian structure of the model. The
first class is the one for which we have obtained the upper bound and the
second class is the one in which all of the source components intended for the
relays and destination are encoded against the noncausal CSI; however, no
interference at the relays and destination exists in this case.
|
1301.7592 | Paradoxes in Social Networks with Multiple Products | cs.GT cs.SI | Recently, we introduced in arXiv:1105.2434 a model for product adoption in
social networks with multiple products, where the agents, influenced by their
neighbours, can adopt one out of several alternatives. We identify and analyze
here four types of paradoxes that can arise in these networks. To this end, we
use social network games that we recently introduced in arXiv:1202.2209. These
paradoxes shed light on possible inefficiencies arising when one modifies the
sets of products available to the agents forming a social network. One of the
paradoxes corresponds to the well-known Braess paradox in congestion games and
shows that by adding more choices to a node, the network may end up in a
situation that is worse for everybody. We exhibit a dual version of this, where
removing available choices from someone can eventually make everybody better
off. The other paradoxes that we identify show that adding or removing a
product from the choice set of some node may lead to permanent instability.
Finally, we also identify conditions under which some of these paradoxes cannot
arise.
|
1301.7619 | Rank regularization and Bayesian inference for tensor completion and
extrapolation | cs.IT cs.LG math.IT stat.ML | A novel regularizer of the PARAFAC decomposition factors capturing the
tensor's rank is proposed in this paper, as the key enabler for completion of
three-way data arrays with missing entries. Set in a Bayesian framework, the
tensor completion method incorporates prior information to enhance its
smoothing and prediction capabilities. This probabilistic approach can
naturally accommodate general models for the data distribution, lending itself
to various fitting criteria that yield optimum estimates in the
maximum-a-posteriori sense. In particular, two algorithms are devised for
Gaussian- and Poisson-distributed data, that minimize the rank-regularized
least-squares error and Kullback-Leibler divergence, respectively. The proposed
technique is able to recover the "ground-truth'' tensor rank when tested on
synthetic data, and to complete brain imaging and yeast gene expression
datasets with 50% and 15% of missing entries respectively, resulting in
recovery errors at -10dB and -15dB.
|
1301.7627 | Load curve data cleansing and imputation via sparsity and low rank | math.OC cs.IT cs.SY math.IT | The smart grid vision is to build an intelligent power network with an
unprecedented level of situational awareness and controllability over its
services and infrastructure. This paper advocates statistical inference methods
to robustify power monitoring tasks against the outlier effects owing to faulty
readings and malicious attacks, as well as against missing data due to privacy
concerns and communication errors. In this context, a novel load cleansing and
imputation scheme is developed leveraging the low intrinsic-dimensionality of
spatiotemporal load profiles and the sparse nature of "bad data." A robust
estimator based on principal components pursuit (PCP) is adopted, which effects
a twofold sparsity-promoting regularization through an $\ell_1$-norm of the
outliers, and the nuclear norm of the nominal load profiles. Upon recasting the
non-separable nuclear norm into a form amenable to decentralized optimization,
a distributed (D-) PCP algorithm is developed to carry out the imputation and
cleansing tasks using networked devices comprising the so-termed advanced
metering infrastructure. If D-PCP converges and a qualification inequality is
satisfied, the novel distributed estimator provably attains the performance of
its centralized PCP counterpart, which has access to all networkwide data.
Computer simulations and tests with real load curve data corroborate the
convergence and effectiveness of the novel D-PCP algorithm.
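For intuition, the centralized PCP estimator that the distributed algorithm mirrors can be sketched as a standard ADMM for robust principal component analysis; the parameter defaults below (lam, mu) are common illustrative choices, not the paper's:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft thresholding: proximal operator of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(X, lam=None, mu=None, n_iter=300):
    """Split X into low-rank L (nominal profiles) plus sparse S (bad data)
    by solving min ||L||_* + lam*||S||_1 s.t. L + S = X with a basic ADMM."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(X).sum() + 1e-12)
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)      # nuclear-norm step
        S = shrink(X - L + Y / mu, lam / mu)   # l1 step on the outliers
        Y = Y + mu * (X - L - S)               # dual update
    return L, S
```

The twofold regularization in the abstract corresponds exactly to the two proximal steps: the nuclear norm on the nominal component and the l1 norm on the outliers.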
|
1301.7630 | An Extended Fano's Inequality for the Finite Blocklength Coding | cs.IT math.IT | Fano's inequality reveals the relation between the conditional entropy and
the probability of error. It has been the key tool in proving the converse of
coding theorems in the past sixty years. In this paper, an extended Fano's
inequality is proposed, which is tighter and more applicable for coding in the
finite blocklength regime. Lower bounds on the mutual information and an upper
bound on the codebook size are also given, which are shown to be tighter than
the original Fano's inequality. In particular, the extended Fano's inequality is
tight for some symmetric channels such as the $q$-ary symmetric channels (QSC).
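As a reference point, the classical Fano inequality (not the extended version proposed here) is easy to check numerically; for a q-ary symmetric channel with uniform input it holds with equality, consistent with the tightness remark above. A minimal sketch:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def fano_bound(pe, q):
    """Classical Fano upper bound on H(X|Y): h(Pe) + Pe*log2(q-1)."""
    return h2(pe) + pe * np.log2(q - 1)

def qsc_cond_entropy(e, q):
    """H(X|Y) for a q-ary symmetric channel with uniform input, where the
    error probability e is spread evenly over the q-1 wrong symbols."""
    probs = [1 - e] + [e / (q - 1)] * (q - 1)
    return -sum(p * np.log2(p) for p in probs if p > 0)

q, e = 4, 0.1
print(qsc_cond_entropy(e, q), fano_bound(e, q))  # equal: Fano is tight for the QSC
```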
|
1301.7641 | Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree
Modelling | cs.CV | The bottom-up saliency, an early stage of humans' visual attention, can be
considered as a binary classification problem between centre and surround
classes. Discriminant power of features for the classification is measured as
mutual information between distributions of image features and corresponding
classes. As the estimated discrepancy strongly depends on the considered scale
level, multi-scale structure and discriminant power are integrated by employing
discrete wavelet features and Hidden Markov Tree (HMT). With wavelet
coefficients and Hidden Markov Tree parameters, quad-tree like label structures
are constructed and utilized in maximum a posteriori probability (MAP) estimation of hidden
class variables at corresponding dyadic sub-squares. Then, a saliency value for
each square block at each scale level is computed with discriminant power
principle. Finally, the final saliency map is integrated across multiple scales
by an information maximization rule. Both standard quantitative tools such as
NSS, LCC, AUC and qualitative assessments are used for evaluating the proposed
multi-scale discriminant saliency (MDIS) method against the well-known
information based approach AIM on its released image collection with
eye-tracking data. Simulation results are presented and analysed to verify the
validity of MDIS as well as point out its limitation for further research
direction.
|
1301.7657 | Energy-Efficient Power Allocation in OFDM Systems with Wireless
Information and Power Transfer | cs.IT math.IT | This paper considers an orthogonal frequency division multiplexing (OFDM)
downlink point-to-point system with simultaneous wireless information and power
transfer. It is assumed that the receiver is able to harvest energy from noise,
interference, and the desired signals.
We study the design of power allocation algorithms maximizing the energy
efficiency of data transmission (bit/Joule delivered to the receiver). In
particular, the algorithm design is formulated as a high-dimensional non-convex
optimization problem which takes into account the circuit power consumption,
the minimum required data rate, and a constraint on the minimum power delivered
to the receiver. Subsequently, by exploiting the properties of nonlinear
fractional programming, the considered non-convex optimization problem, whose
objective function is in fractional form, is transformed into an equivalent
optimization problem having an objective function in subtractive form, which
enables the derivation of an efficient iterative power allocation algorithm. In
each iteration, the optimal power allocation solution is derived based on dual
decomposition and a one-dimensional search. Simulation results illustrate that
the proposed iterative power allocation algorithm converges to the optimal
solution, and unveil the trade-off between energy efficiency, system capacity,
and wireless power transfer: (1) In the low transmit power regime, maximizing
the system capacity may maximize the energy efficiency. (2) Wireless power
transfer can enhance the energy efficiency, especially in the interference
limited regime.
|
1301.7661 | Fast non parametric entropy estimation for spatial-temporal saliency
method | cs.CV | This paper formulates bottom-up visual saliency as center surround
conditional entropy and presents a fast and efficient technique for the
computation of such a saliency map. It is shown that the new saliency
formulation is consistent with self-information based saliency,
decision-theoretic saliency and Bayesian definition of surprises but also faces
the same significant computational challenge of estimating probability density
in very high dimensional spaces with limited samples. We have developed a fast
and efficient nonparametric method to make the practical implementation of
these types of saliency maps possible. By aligning pixels from the center and
surround regions and treating their location coordinates as random variables,
we use a k-d partitioning method to efficiently estimate the center surround
conditional entropy. We present experimental results on two publicly available
eye tracking still image databases and show that the new technique is
competitive with state of the art bottom-up saliency computational methods. We
have also extended the technique to compute spatiotemporal visual saliency of
video and evaluate the bottom-up spatiotemporal saliency against eye tracking
data on a video taken onboard a moving vehicle with the driver's eye being
tracked by a head mounted eye-tracker.
|
1301.7664 | Approximate Optimal Trajectory Tracking for Continuous Time Nonlinear
Systems | cs.SY math.OC | Approximate dynamic programming has been investigated and used as a method to
approximately solve optimal regulation problems. However, the extension of this
technique to optimal tracking problems for continuous time nonlinear systems
has remained a non-trivial open problem. The control development in this paper
guarantees ultimately bounded tracking of a desired trajectory, while also
ensuring that the controller converges to an approximate optimal policy.
|
1301.7669 | Extending the logical update view with transaction support | cs.PL cs.DB | Since the database update view was standardised in the Prolog ISO standard,
the so-called logical update view is available in all actively maintained
Prolog systems. While this update view provides well-defined update semantics
and allows for efficient handling of dynamic code, it does not help in
maintaining consistency of the dynamic database. With the introduction of
multiple threads and deployment of Prolog in continuously running server
applications, consistency of the dynamic database becomes important.
In this article, we propose an extension to the generation-based
implementation of the logical update view that supports transactions.
Generation-based transactions have been implemented according to this
description in the SWI-Prolog RDF store. The aim of this paper is to motivate
transactions, outline an implementation and generate discussion on the
desirable semantics and interface prior to implementation.
|
1301.7673 | Toward a Dynamic Programming Solution for the 4-peg Tower of Hanoi
Problem with Configurations | cs.PL cs.AI | The Frame-Stewart algorithm for the 4-peg variant of the Tower of Hanoi,
introduced in 1941, partitions disks into intermediate towers before moving the
remaining disks to their destination. Algorithms that partition the disks have
not been proven to be optimal, although they have been verified for up to 30
disks. This paper presents a dynamic programming approach to this algorithm,
using tabling in B-Prolog. This study uses a variation of the problem,
involving configurations of disks, in order to contrast the tabling approach
with the approaches utilized by other solvers. A comparison of different
partitioning locations for the Frame-Stewart algorithm indicates that, although
certain partitions are optimal for the classic problem, they need to be
modified for certain configurations, and that random configurations might
require an entirely new algorithm.
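The Frame-Stewart recurrence itself is straightforward to table; the paper uses tabling in B-Prolog, but the same memoized dynamic program can be sketched in Python (for the classic problem, without configurations):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def hanoi3(n):
    """Optimal move count for the classic 3-peg Tower of Hanoi."""
    return 2 ** n - 1

@lru_cache(maxsize=None)
def frame_stewart4(n):
    """Frame-Stewart move count for 4 pegs: move the top k disks aside
    using all 4 pegs, move the remaining n-k disks with only 3 pegs,
    then move the k disks back on top; minimize over the partition k."""
    if n <= 1:
        return n
    return min(2 * frame_stewart4(k) + hanoi3(n - k) for k in range(1, n))

print([frame_stewart4(n) for n in range(1, 9)])
# -> [1, 3, 5, 9, 13, 17, 25, 33]
```

The `lru_cache` memoization plays the role that tabling plays in B-Prolog: each subproblem is solved once and reused across partition choices.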
|
1301.7676 | Efficient Partial Order CDCL Using Assertion Level Choice Heuristics | cs.AI cs.LO | We previously designed Partial Order Conflict Driven Clause Learning
(PO-CDCL), a variation of the satisfiability solving CDCL algorithm with a
partial order on decision levels, and showed that it can speed up the solving
on problems with a high independence between decision levels. In this paper, we
more thoroughly analyze the reasons of the efficiency of PO-CDCL. Of particular
importance is that the partial order introduces several candidates for the
assertion level. By evaluating different heuristics for this choice, we show
that the assertion level selection has an important impact on solving and that
a carefully designed heuristic can significantly improve performances on
relevant benchmarks.
|
1301.7693 | Optimal Locally Repairable Codes and Connections to Matroid Theory | cs.IT math.IT | Petabyte-scale distributed storage systems are currently transitioning to
erasure codes to achieve higher storage efficiency. Classical codes like
Reed-Solomon are highly sub-optimal for distributed environments due to their
high overhead in single-failure events. Locally Repairable Codes (LRCs) form a
new family of codes that are repair efficient. In particular, LRCs minimize the
number of nodes participating in single node repairs during which they generate
small network traffic. Two large-scale distributed storage systems have already
implemented different types of LRCs: Windows Azure Storage and the Hadoop
Distributed File System RAID used by Facebook. The fundamental bounds for LRCs,
namely the best possible distance for a given code locality, were recently
discovered, but few explicit constructions exist. In this work, we present
explicit and optimal LRCs that are simple to construct. Our construction is
based on grouping Reed-Solomon (RS) coded symbols to obtain RS coded symbols
over a larger finite field. We then partition these RS symbols in small groups,
and re-encode them using a simple local code that offers low repair locality.
For the analysis of the optimality of the code, we derive a new result on the
matroid represented by the code generator matrix.
|
1301.7724 | Axiomatic Construction of Hierarchical Clustering in Asymmetric Networks | cs.LG cs.SI stat.ML | This paper considers networks where relationships between nodes are
represented by directed dissimilarities. The goal is to study methods for the
determination of hierarchical clusters, i.e., a family of nested partitions
indexed by a connectivity parameter, induced by the given dissimilarity
structures. Our construction of hierarchical clustering methods is based on
defining admissible methods to be those methods that abide by the axioms of
value - nodes in a network with two nodes are clustered together at the maximum
of the two dissimilarities between them - and transformation - when
dissimilarities are reduced, the network may become more clustered but not
less. Several admissible methods are constructed and two particular methods,
termed reciprocal and nonreciprocal clustering, are shown to provide upper and
lower bounds in the space of admissible methods. Alternative clustering
methodologies and axioms are further considered. Allowing the outcome of
hierarchical clustering to be asymmetric, so that it matches the asymmetry of
the original data, leads to the inception of quasi-clustering methods. The
existence of a unique quasi-clustering method is shown. Allowing clustering in
a two-node network to proceed at the minimum of the two dissimilarities
generates an alternative axiomatic construction. There is a unique clustering
method in this case too. The paper also develops algorithms for the computation
of hierarchical clusters using matrix powers on a min-max dioid algebra and
studies the stability of the methods proposed. We prove that most of the
methods introduced in this paper are such that similar networks yield similar
hierarchical clustering results. Algorithms are exemplified through their
application to networks describing internal migration within states of the
United States (U.S.) and the interrelation between sectors of the U.S. economy.
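The min-max dioid matrix powers mentioned above can be sketched directly; the following illustrative Python (a sketch of the general idea, not the paper's exact algorithm) computes the reciprocal-clustering ultrametric by symmetrizing with a maximum and iterating the dioid product to its fixed point:

```python
import numpy as np

def minmax_power(A, B):
    """Dioid matrix product: (A*B)[i,j] = min_k max(A[i,k], B[k,j])."""
    return np.min(np.maximum(A[:, :, None], B[None, :, :]), axis=1)

def reciprocal_ultrametric(D):
    """Reciprocal clustering for a directed dissimilarity matrix D:
    symmetrize by the maximum, then take min-max path costs by
    iterating the dioid product (n-1 powers reach the fixed point)."""
    M = np.maximum(D, D.T).astype(float)
    np.fill_diagonal(M, 0.0)
    U = M.copy()
    for _ in range(len(D) - 1):
        U = minmax_power(U, M)
    return U

D = np.array([[0.0, 1, 5], [2, 0, 1], [5, 4, 0]])
print(reciprocal_ultrametric(D))
# nodes 0 and 2 merge at 4 via node 1, not at the direct cost 5
```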
|
1301.7738 | PyPLN: a Distributed Platform for Natural Language Processing | cs.CL cs.IR | This paper presents a distributed platform for Natural Language Processing
called PyPLN. PyPLN leverages a vast array of NLP and text processing open
source tools, managing the distribution of the workload on a variety of
configurations: from a single server to a cluster of Linux servers. PyPLN is
developed using Python 2.7.3 but makes it very easy to incorporate other
software for specific tasks as long as a Linux version is available. PyPLN
facilitates analyses both at document and corpus level, simplifying management
and publication of corpora and analytical results through an easy to use web
interface. In the current (beta) release, it supports English and Portuguese
languages with support to other languages planned for future releases. To
support the Portuguese language PyPLN uses the PALAVRAS parser (Bick, 2000).
Currently PyPLN offers the following features: Text extraction with encoding
normalization (to UTF-8), part-of-speech tagging, token frequency, semantic
annotation, n-gram extraction, word and sentence repertoire, and full-text
search across corpora. The platform is licensed as GPL-v3.
|
1302.0017 | Adaptive Control of Scalar Plants in the Presence of Unmodeled Dynamics | cs.SY math.OC | Robust adaptive control of scalar plants in the presence of unmodeled
dynamics is established in this paper. It is shown that implementation of a
projection algorithm with standard adaptive control of a scalar plant ensures
global boundedness of the overall adaptive system for a class of unmodeled
dynamics.
|
1302.0019 | Fixed-to-Variable Length Distribution Matching | cs.IT math.IT | Fixed-to-variable length (f2v) matchers are used to reversibly transform an
input sequence of independent and uniformly distributed bits into an output
sequence of bits that are (approximately) independent and distributed according
to a target distribution. The degree of approximation is measured by the
informational divergence between the output distribution and the target
distribution. An algorithm is developed that efficiently finds optimal f2v
codes. It is shown that by encoding the input bits blockwise, the informational
divergence per bit approaches zero as the block length approaches infinity. A
relation to data compression by Tunstall coding is established.
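The divergence criterion can be illustrated with a toy f2v dictionary: equiprobable k-bit input blocks are mapped to distinct prefix-free output words, and the informational divergence per output bit against an i.i.d. Bernoulli target is computed (an illustrative sketch, not the paper's optimal-code algorithm):

```python
from math import log2

def divergence_per_bit(words, q):
    """Informational divergence per output bit, D(P||Q)/E[len], for an f2v
    matcher mapping equiprobable input blocks to distinct prefix-free
    output words; Q is the i.i.d. Bernoulli(q) target on those words."""
    p = 1.0 / len(words)                       # uniform input blocks
    def qw(w):                                 # target probability of a word
        ones = w.count('1')
        return q ** ones * (1 - q) ** (len(w) - ones)
    D = sum(p * log2(p / qw(w)) for w in words)
    E = sum(p * len(w) for w in words)
    return D / E

# leaves of a full binary tree assigned to the four 2-bit inputs
print(divergence_per_bit(['0', '10', '110', '111'], 0.6))  # > 0: imperfect match
print(divergence_per_bit(['00', '01', '10', '11'], 0.5))   # 0: exact match
```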
|
1302.0033 | On extremal self-dual codes of length 120 | cs.IT math.IT | We prove that the only primes which may divide the order of the automorphism
group of a putative binary self-dual doubly-even [120, 60, 24] code are 2, 3,
5, 7, 19, 23 and 29. Furthermore we prove that automorphisms of prime order $p
\geq 5$ have a unique cycle structure.
|
1302.0050 | Universal Wyner-Ziv Coding for Distortion Constrained General
Side-Information | cs.IT math.IT | We investigate the Wyner-Ziv coding in which the statistics of the principal
source are known but the statistics of the channel generating the
side-information are unknown, except that the channel lies in a certain class. The class
consists of channels such that the distortion between the principal source and
the side-information is smaller than a threshold, but channels may be neither
stationary nor ergodic. In this situation, we define a new rate-distortion
function as the minimum rate such that there exists a Wyner-Ziv code that is
universal for every channel in the class. Then, we show an upper bound and a
lower bound on the rate-distortion function, and derive a matching condition
such that the upper and lower bounds coincide. The relation between the new
rate-distortion function and the rate-distortion function of the Heegard-Berger
problem is also discussed.
|
1302.0059 | A coding approach to guarantee information integrity against a Byzantine
relay | cs.IT math.IT | This paper presents a random coding scheme with which two nodes can exchange
information with guaranteed integrity over a two-way Byzantine relay. This
coding scheme is employed to obtain an inner bound on the capacity region with
guaranteed information integrity. No pre-shared secret or secret transmission
is needed for the proposed scheme. Hence the inner bound obtained is generally
larger than those achieved based on secret transmission schemes. This approach
advocates the separation of supporting information integrity and secrecy.
|
1302.0070 | Towards the full information chain theory: solution methods for optimal
information acquisition problem | physics.data-an cs.IT math.IT | When additional information sources are available in decision making problems
that allow stochastic optimization formulations, an important question is how
to optimally use the information the sources are capable of providing. A
framework that relates information accuracy determined by the source's
knowledge structure to its relevance determined by the problem being solved was
proposed in a companion paper. There, the problem of optimal information
acquisition was formulated as that of minimization of the expected loss of the
solution subject to constraints dictated by the information source knowledge
structure and depth. Approximate solution methods for this problem are
developed making use of the probability metrics method and its application to
scenario reduction in stochastic optimization.
|
1302.0077 | Sparse MRI for motion correction | cs.CV physics.bio-ph physics.med-ph | MR image sparsity/compressibility has been widely exploited for imaging
acceleration with the development of compressed sensing. A sparsity-based
approach to rigid-body motion correction is presented for the first time in
this paper. A motion is sought such that the compensated MR image is
maximally sparse/compressible among the infinite candidates. Iterative
algorithms are proposed that jointly estimate the motion and the image content.
The proposed method has many merits, such as requiring no additional data and
imposing only loose requirements on the sampling sequence. Promising results are presented to
demonstrate its performance.
|
1302.0081 | Robust Compressive Phase Retrieval via L1 Minimization With Application
to Image Reconstruction | physics.comp-ph cs.IT math.IT math.OC | Phase retrieval refers to a classical nonconvex problem of recovering a
signal from its Fourier magnitude measurements. Inspired by the compressed
sensing technique, signal sparsity is exploited in recent studies of phase
retrieval to reduce the required number of measurements, known as compressive
phase retrieval (CPR). In this paper, l1 minimization problems are formulated
for CPR to exploit the signal sparsity and alternating direction algorithms are
presented for problem solving. For real-valued, nonnegative image
reconstruction, the image of interest is shown to be an optimal solution of the
formulated l1 minimization in the noise free case. Numerical simulations
demonstrate that the proposed approach is fast, accurate and robust to
measurement noise.
|
1302.0082 | Distribution-Free Distribution Regression | stat.ML cs.LG math.ST stat.TH | `Distribution regression' refers to the situation where a response Y depends
on a covariate P where P is a probability distribution. The model is Y=f(P) +
mu where f is an unknown regression function and mu is a random error.
Typically, we do not observe P directly, but rather, we observe a sample from
P. In this paper we develop theory and methods for distribution-free versions
of distribution regression. This means that we do not make distributional
assumptions about the error term mu and covariate P. We prove that when the
effective dimension is small enough (as measured by the doubling dimension),
then the excess prediction risk converges to zero with a polynomial rate.
|
1302.0084 | Peak-to-average power ratio of good codes for Gaussian channel | cs.IT math.IT | Consider a problem of forward error-correction for the additive white
Gaussian noise (AWGN) channel. For finite blocklength codes the backoff from
the channel capacity is inversely proportional to the square root of the
blocklength. In this paper it is shown that codes achieving this tradeoff must
necessarily have peak-to-average power ratio (PAPR) proportional to logarithm
of the blocklength. This is extended to codes approaching capacity slower, and
to PAPR measured at the output of an OFDM modulator. As a by-product the
convergence of (Smith's) amplitude-constrained AWGN capacity to Shannon's
classical formula is characterized in the regime of large amplitudes. This
converse-type result builds upon recent contributions in the study of empirical
output distributions of good channel codes.
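The PAPR quantity under discussion is easy to measure empirically. A small sketch (illustrative, not the paper's construction) showing that a constant-modulus codeword has exactly 0 dB PAPR, while the same symbols after an OFDM IDFT exhibit a much larger peak:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
n = 256
# a random QPSK codeword (constant modulus): every sample has unit power
sym = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
print(papr_db(sym))                            # 0 dB before modulation
print(papr_db(np.fft.ifft(sym) * np.sqrt(n)))  # several dB after the OFDM IDFT
```

The `sqrt(n)` factor makes the IDFT unitary; PAPR is scale-invariant, so it only matters for interpreting absolute powers.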
|
1302.0103 | A Survey on Array Storage, Query Languages, and Systems | cs.DB | Since scientific investigation is one of the most important providers of
massive amounts of ordered data, there is a renewed interest in array data
processing in the context of Big Data. To the best of our knowledge, a unified
resource that summarizes and analyzes array processing research over its long
existence is currently missing. In this survey, we provide a guide for past,
present, and future research in array processing. The survey is organized along
three main topics. Array storage discusses all the aspects related to array
partitioning into chunks. The identification of a reduced set of array
operators to form the foundation for an array query language is analyzed across
multiple such proposals. Lastly, we survey real systems for array processing.
The result is a thorough survey on array data storage and processing that
should be consulted by anyone interested in this research topic, independent of
experience level. The survey is not complete though. We greatly appreciate
pointers towards any work we might have forgotten to mention.
|
1302.0126 | Proceedings of the 12th International Colloquium on Implementation of
Constraint and LOgic Programming Systems | cs.PL cs.AI | This volume contains the papers presented at CICLOPS'12: 12th International
Colloquium on Implementation of Constraint and LOgic Programming Systems held
on Tuesday, September 4th, 2012 in Budapest.
The program included 1 invited talk, 9 technical presentations and a panel
discussion on Prolog open standards (open.pl). Each programme paper was
reviewed by 3 reviewers.
CICLOPS'12 continues a tradition of successful workshops on Implementations
of Logic Programming Systems, previously held in Budapest (1993) and Ithaca
(1994), the Compulog Net workshops on Parallelism and Implementation
Technologies held in Madrid (1993 and 1994), Utrecht (1995) and Bonn (1996),
the Workshop on Parallelism and Implementation Technology for (Constraint)
Logic Programming Languages held in Port Jefferson (1997), Manchester (1998),
Las Cruces (1999), and London (2000), and more recently the Colloquium on
Implementation of Constraint and LOgic Programming Systems in Paphos (2001),
Copenhagen (2002), Mumbai (2003), Saint Malo (2004), Sitges (2005), Seattle
(2006), Porto (2007), Udine (2008), Pasadena (2009), Edinburgh (2010) -
together with WLPE, Lexington (2011).
We would like to thank all the authors, Tom Schrijvers for his invited talk,
the programme committee members, and the ICLP 2012 organisers. We would like to
also thank arXiv.org for providing permanent hosting.
|
1302.0164 | Aperiodic dynamics in a deterministic model of attitude formation in
social groups | physics.soc-ph cs.SI nlin.AO | Homophily and social influence are the fundamental mechanisms that drive the
evolution of attitudes, beliefs and behaviour within social groups. Homophily
relates the similarity between pairs of individuals' attitudinal states to
their frequency of interaction, and hence structural tie strength, while social
influence causes the convergence of individuals' states during interaction.
Building on these basic elements, we propose a new mathematical modelling
framework to describe the evolution of attitudes within a group of interacting
agents. Specifically, our model describes sub-conscious attitudes that have an
activator-inhibitor relationship. We consider a homogeneous population using a
deterministic, continuous-time dynamical system. Surprisingly, the combined
effects of homophily and social influence do not necessarily lead to group
consensus or global monoculture. We observe that sub-group formation and
polarisation-like effects may be transient, the long-time dynamics being
quasi-periodic with sensitive dependence to initial conditions. This is due to
the interplay between the evolving interaction network and Turing instability
associated with the attitudinal state dynamics.
|
1302.0189 | Non-adaptive pooling strategies for detection of rare faulty items | cs.IT cond-mat.stat-mech math.IT q-bio.GN q-bio.QM | We study non-adaptive pooling strategies for detection of rare faulty items.
Given a binary sparse N-dimensional signal x, how to construct a sparse binary
MxN pooling matrix F such that the signal can be reconstructed from the
smallest possible number M of measurements y=Fx? We show that a very low number
of measurements is possible for random spatially coupled design of pools F. Our
design might find application in genetic screening or compressed genotyping. We
show that our results are robust with respect to the uncertainty in the matrix
F when some elements are mistaken.
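A minimal sketch of the measurement model y = Fx with a plain (not spatially coupled) random pooling matrix, decoded with the simple COMP rule: any item that appears in a negative pool cannot be faulty. The sizes and density below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 40, 3                 # items, pools, faulty items
x = np.zeros(N, dtype=bool)
x[rng.choice(N, K, replace=False)] = True

F = rng.random((M, N)) < 0.05        # sparse random binary pooling matrix
y = F.astype(int) @ x.astype(int)    # noiseless pool counts, y = Fx

# COMP decoding: clear every item that appears in a negative pool
negative = (y == 0)
cleared = F[negative].any(axis=0)
candidates = ~cleared                # guaranteed superset of the faulty set
print(candidates.sum())
```

Spatially coupled designs, as in the abstract, reduce how large this candidate superset is for a given number of pools M.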
|
1302.0212 | PREMIER - PRobabilistic Error-correction using Markov Inference in
Errored Reads | cs.IT math.IT | In this work we present a flexible, probabilistic and reference-free method
of error correction for high throughput DNA sequencing data. The key is to
exploit the high coverage of sequencing data and model short sequence outputs
as independent realizations of a Hidden Markov Model (HMM). We pose the problem
of error correction of reads as one of maximum likelihood sequence detection
over this HMM. While time and memory considerations rule out an implementation
of the optimal Baum-Welch algorithm (for parameter estimation) and the optimal
Viterbi algorithm (for error correction), we propose low-complexity approximate
versions of both. Specifically, we propose an approximate Viterbi and a
sequential decoding based algorithm for the error correction. Our results show
that when compared with Reptile, a state-of-the-art error correction method,
our methods consistently achieve superior performances on both simulated and
real data sets.
|
1302.0215 | Informational Divergence Approximations to Product Distributions | cs.IT math.IT | The minimum rate needed to accurately approximate a product distribution
based on an unnormalized informational divergence is shown to be a mutual
information. This result subsumes results of Wyner on common information and
Han-Verd\'{u} on resolvability. The result also extends to cases where the
source distribution is unknown but the entropy is known.
|
1302.0216 | Comparison between the two definitions of AI | cs.AI | Two different definitions of the Artificial Intelligence concept have been
proposed in papers [1] and [2]. The first definition is informal. It says that
any program that is cleverer than a human being, is acknowledged as Artificial
Intelligence. The second definition is formal because it avoids reference to
the concept of human being. The readers of papers [1] and [2] might be left
with the impression that both definitions are equivalent and the definition in
[2] is simply a formal version of that in [1]. This paper will compare both
definitions of Artificial Intelligence and, hopefully, will bring a better
understanding of the concept.
|
1302.0226 | Plug-and-Play Decentralized Model Predictive Control | cs.SY | In this paper we consider a linear system structured into physically coupled
subsystems and propose a decentralized control scheme capable to guarantee
asymptotic stability and satisfaction of constraints on system inputs and
states. The design procedure is totally decentralized, since the synthesis of a
local controller uses only information on a subsystem and its neighbors, i.e.
subsystems coupled to it. We first derive tests for checking if a subsystem can
be plugged into (or unplugged from) an existing plant without spoiling overall
stability and constraint satisfaction. When this is possible, we show how to
automatize the design of local controllers so that it can be carried out in
parallel by smart actuators equipped with computational resources and capable
to exchange information with neighboring subsystems. In particular, local
controllers exploit tube-based Model Predictive Control (MPC) in order to
guarantee robustness with respect to physical coupling among subsystems.
Finally, an application of the proposed control design procedure to frequency
control in power networks is presented.
|
1302.0249 | Bayesian Quadratic Network Game Filters | cs.SY cs.IT cs.SI math.IT | A repeated network game where agents have quadratic utilities that depend on
information externalities -- an unknown underlying state -- as well as payoff
externalities -- the actions of all other agents in the network -- is
considered. Agents play Bayesian Nash Equilibrium strategies with respect to
their beliefs on the state of the world and the actions of all other nodes in
the network. These beliefs are refined over subsequent stages based on the
observed actions of neighboring peers. This paper introduces the Quadratic
Network Game (QNG) filter that agents can run locally to update their beliefs,
select corresponding optimal actions, and eventually learn a sufficient
statistic of the network's state. The QNG filter is demonstrated on a Cournot
market competition game and a coordination game to implement navigation of an
autonomous team.
|
1302.0250 | Elections, Protest, and Alternation of Power | physics.soc-ph cs.GT cs.SI math.PR | Despite many examples to the contrary, most models of elections assume that
rules determining the winner will be followed. We present a model where
elections are solely a public signal of the incumbent's popularity, and citizens
can protest against leaders that do not step down from power. In this minimal
setup, rule-based alternation of power as well as "semi-democratic" alternation
of power independent of electoral rules can both arise in equilibrium.
Compliance with electoral rules requires there to be multiple equilibria in the
protest game, where the electoral rule serves as a focal point spurring protest
against losers that do not step down voluntarily. Such multiplicity is possible
when elections are informative and citizens not too polarized. Extensions to
the model are consistent with the facts that protests often center around
accusations of electoral fraud and that in the democratic case turnover is
peaceful while semi-democratic turnover often requires citizens to actually
take to the streets.
|
1302.0265 | Compound Polar Codes | cs.IT math.IT | A capacity-achieving scheme based on polar codes is proposed for reliable
communication over multi-channels which can be directly applied to
bit-interleaved coded modulation schemes. We start by reviewing the
ground-breaking work of polar codes and then discuss our proposed scheme.
Instead of encoding separately across the individual underlying channels, which
requires multiple encoders and decoders, we take advantage of the recursive
structure of polar codes to construct a unified scheme with a single encoder
and decoder that can be used over the multi-channels. We prove that the scheme
achieves the capacity over this multi-channel. Numerical analysis and
simulation results for BICM channels at finite block lengths show a
considerable improvement in the probability of error compared to a
conventional separated scheme.
|