id
|
title
|
categories
|
abstract
|
|---|---|---|---|
1206.5263
|
Reading Dependencies from Polytree-Like Bayesian Networks
|
cs.AI cs.LG stat.ML
|
We present a graphical criterion for reading dependencies from the minimal
directed independence map G of a graphoid p when G is a polytree and p
satisfies composition and weak transitivity. We prove that the criterion is
sound and complete. We argue that assuming composition and weak transitivity is
not too restrictive.
|
1206.5264
|
Apprenticeship Learning using Inverse Reinforcement Learning and
Gradient Methods
|
cs.LG stat.ML
|
In this paper we propose a novel gradient algorithm to learn a policy from an
expert's observed behavior assuming that the expert behaves optimally with
respect to some unknown reward function of a Markovian Decision Problem. The
algorithm's aim is to find a reward function such that the resulting optimal
policy closely matches the expert's observed behavior. The main difficulty is that
the mapping from the parameters to policies is both nonsmooth and highly
redundant. Resorting to subdifferentials solves the first difficulty, while the
second is overcome by computing natural gradients. We tested the proposed
method in two artificial domains and found it to be more reliable and efficient
than some previous methods.
|
1206.5265
|
Consensus ranking under the exponential model
|
cs.LG cs.AI stat.ML
|
We analyze the generalized Mallows model, a popular exponential model over
rankings. Estimating the central (or consensus) ranking from data is NP-hard.
We obtain the following new results: (1) We show that search methods can
estimate both the central ranking pi0 and the model parameters theta exactly.
The search takes n! time in the worst case, but is tractable when the true
distribution is concentrated around its mode; (2) We show that the generalized
Mallows model is jointly exponential in (pi0, theta), and introduce the
conjugate prior for this model class; (3) The sufficient statistics are the
pairwise marginal
probabilities that item i is preferred to item j. Preliminary experiments
confirm the theoretical predictions and compare the new algorithm and existing
heuristics.
|
1206.5266
|
AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Weighted Graphical
Models
|
cs.AI
|
Compiling graphical models has recently been under intense investigation,
especially for probabilistic modeling and processing. We present here a novel
data structure for compiling weighted graphical models (in particular,
probabilistic models), called AND/OR Multi-Valued Decision Diagram (AOMDD).
This is a generalization of our previous work on constraint networks, to
weighted models. The AOMDD is based on the frameworks of AND/OR search spaces
for graphical models, and Ordered Binary Decision Diagrams (OBDD). The AOMDD is
a canonical representation of a graphical model, and its size and compilation
time are bounded exponentially by the treewidth of the graph, rather than
pathwidth as is known for OBDDs. We discuss a Variable Elimination schedule for
compilation, present the general APPLY algorithm that combines two weighted
AOMDDs, and also present a search-based method for compilation. The
preliminary experimental evaluation is quite encouraging, showing the potential
of the AOMDD data structure.
|
1206.5267
|
Collaborative Filtering and the Missing at Random Assumption
|
cs.LG cs.IR stat.ML
|
Rating prediction is an important application, and a popular research topic
in collaborative filtering. However, both the validity of learning algorithms,
and the validity of standard testing procedures rest on the assumption that
missing ratings are missing at random (MAR). In this paper we present the
results of a user study in which we collect a random sample of ratings from
current users of an online radio service. An analysis of the rating data
collected in the study shows that the sample of random ratings has markedly
different properties than ratings of user-selected songs. When asked to report
on their own rating behaviour, a large number of users indicate they believe
their opinion of a song does affect whether they choose to rate that song, a
violation of the MAR condition. Finally, we present experimental results
showing that incorporating an explicit model of the missing data mechanism can
lead to significant improvements in prediction performance on the random sample
of ratings.
|
1206.5268
|
Best-First AND/OR Search for Most Probable Explanations
|
cs.AI
|
The paper evaluates the power of best-first search over AND/OR search spaces
for solving the Most Probable Explanation (MPE) task in Bayesian networks. The
main virtue of the AND/OR representation of the search space is its sensitivity
to the structure of the problem, which can translate into significant time
savings. In recent years, depth-first AND/OR Branch-and-Bound algorithms were
shown to be very effective when exploring such search spaces, especially when
using caching. Since best-first strategies are known to be superior to
depth-first when memory is utilized, exploring the best-first control strategy
is called for. The main contribution of this paper is in showing that a recent
extension of AND/OR search algorithms from depth-first Branch-and-Bound to
best-first is indeed very effective for computing the MPE in Bayesian networks.
We demonstrate empirically the superiority of the best-first search approach on
various probabilistic networks.
|
1206.5269
|
Determining the Number of Non-Spurious Arcs in a Learned DAG Model:
Investigation of a Bayesian and a Frequentist Approach
|
stat.AP cs.CE
|
In many application domains, such as computational biology, the goal of
graphical model structure learning is to uncover discrete relationships between
entities. For example, in our problem of interest concerning HIV vaccine
design, we want to infer which HIV peptides interact with which immune system
molecules (HLA molecules). For problems of this nature, we are interested in
determining the number of non-spurious arcs in a learned graphical model. We
describe both a Bayesian and a frequentist approach to this problem. In the
Bayesian approach, we use the posterior distribution over model structures to
compute the expected number of true arcs in a learned model. In the frequentist
approach, we develop a method based on the concept of the False Discovery Rate.
On synthetic data sets generated from models similar to the ones learned, we
find that both the Bayesian and frequentist approaches yield accurate estimates
of the number of non-spurious arcs. In addition, we speculate that the
frequentist approach, which is non-parametric, may outperform the parametric
Bayesian approach in situations where the models learned are less
representative of the data. Finally, we apply the frequentist approach to our
problem of HIV vaccine design.
|
1206.5270
|
Nonparametric Bayes Pachinko Allocation
|
cs.IR cs.LG stat.ML
|
Recent advances in topic models have explored complicated structured
distributions to represent topic correlation. For example, the pachinko
allocation model (PAM) captures arbitrary, nested, and possibly sparse
correlations between topics using a directed acyclic graph (DAG). While PAM
provides more flexibility and greater expressive power than previous models
like latent Dirichlet allocation (LDA), it is also more difficult to determine
the appropriate topic structure for a specific dataset. In this paper, we
propose a nonparametric Bayesian prior for PAM based on a variant of the
hierarchical Dirichlet process (HDP). Although the HDP can capture topic
correlations defined by nested data structure, it does not automatically
discover such correlations from unstructured data. By assuming an HDP-based
prior for PAM, we are able to learn both the number of topics and how the
topics are correlated. We evaluate our model on synthetic and real-world text
datasets, and show that nonparametric PAM achieves performance matching the
best of PAM without manually tuning the number of topics.
|
1206.5271
|
Learning Bayesian Network Structure from Correlation-Immune Data
|
cs.AI
|
Searching the complete space of possible Bayesian networks is intractable for
problems of interesting size, so Bayesian network structure learning
algorithms, such as the commonly used Sparse Candidate algorithm, employ
heuristics. However, these heuristics also restrict the types of relationships
that can be learned exclusively from data. They are unable to learn
relationships that exhibit "correlation-immunity", such as parity. To learn
Bayesian networks in the presence of correlation-immune relationships, we
extend the Sparse Candidate algorithm with a technique called "skewing". This
technique uses the observation that relationships that are correlation-immune
under a specific input distribution may not be correlation-immune under
another, sufficiently different distribution. We show that by extending Sparse
Candidate with this technique we are able to discover relationships between
random variables that are approximately correlation-immune, with a
significantly lower computational cost than the alternative of considering
multiple parents of a node at a time.
|
1206.5272
|
Evaluation of the Causal Effect of Control Plans in Nonrecursive
Structural Equation Models
|
stat.ME cs.AI
|
When observational data are available from practical studies and a directed
cyclic graph describing how the variables affect each other is known from
substantive understanding of the process, we consider the problem of designing
a control plan for a treatment variable that brings a response variable close
to a target value while reducing its variation. We formulate an
optimal control plan concerning a certain treatment variable through path
coefficients in the framework of linear nonrecursive structural equation
models. Based on the formulation, we clarify the properties of causal effects
when conducting a control plan. The results enable us to evaluate the effect of
a control plan on the variance from observational data.
|
1206.5273
|
Survey Propagation Revisited
|
cs.AI
|
Survey propagation (SP) is an exciting new technique that has been remarkably
successful at solving very large hard combinatorial problems, such as
determining the satisfiability of Boolean formulas. In a promising attempt at
understanding the success of SP, it was recently shown that SP can be viewed as
a form of belief propagation, computing marginal probabilities over certain
objects called covers of a formula. This explanation was, however, soon
dismissed by experiments suggesting that non-trivial covers simply do not exist
for large formulas. In this paper, we show that these experiments were
misleading: not only do covers exist for large hard random formulas, SP is
surprisingly accurate at computing marginals over these covers despite the
existence of many cycles in the formulas. This re-opens a potentially simpler
line of reasoning for understanding SP, in contrast to some alternative lines
of explanation that have been proposed assuming covers do not exist.
|
1206.5274
|
On Discarding, Caching, and Recalling Samples in Active Learning
|
cs.LG stat.ML
|
We address challenges of active learning under scarce informational resources
in non-stationary environments. In real-world settings, data labeled and
integrated into a predictive model may become invalid over time. However, the
data can become informative again with switches in context and such changes may
indicate unmodeled cyclic or other temporal dynamics. We explore principles for
discarding, caching, and recalling labeled data points in active learning based
on computations of value of information. We review key concepts and study the
value of the methods via investigations of predictive performance and costs of
acquiring data for simulated and real-world data sets.
|
1206.5275
|
Polynomial Constraints in Causal Bayesian Networks
|
cs.AI stat.ME
|
We use the implicitization procedure to generate polynomial equality
constraints on the set of distributions induced by local interventions on
variables governed by a causal Bayesian network with hidden variables. We show
how we may reduce the complexity of the implicitization problem and make the
problem tractable in certain causal Bayesian networks. We also show some
preliminary results on the algebraic structure of polynomial constraints. The
results have applications in distinguishing between causal models and in
testing causal models with combined observational and experimental data.
|
1206.5276
|
Template Based Inference in Symmetric Relational Markov Random Fields
|
cs.AI
|
Relational Markov Random Fields are a general and flexible framework for
reasoning about the joint distribution over attributes of a large number of
interacting entities. The main computational difficulty in learning such models
is inference. Even when dealing with complete data, where one can summarize a
large domain by sufficient statistics, learning requires one to compute the
expectation of the sufficient statistics given different parameter choices. The
typical solution to this problem is to resort to approximate inference
procedures, such as loopy belief propagation. Although these procedures are
quite efficient, they still require computation that is on the order of the
number of interactions (or features) in the model. When learning a large
relational model over a complex domain, even such approximations require
unrealistic running time. In this paper we show that for a particular class of
relational MRFs, which have inherent symmetry, we can perform the inference
needed for learning procedures using a template-level belief propagation. This
procedure's running time is proportional to the size of the relational model
rather than the size of the domain. Moreover, we show that this computational
procedure is equivalent to synchronous loopy belief propagation. This enables a
dramatic speedup in inference and learning time. We use this procedure to learn
relational MRFs for capturing the joint distribution of large protein-protein
interaction networks.
|
1206.5277
|
Accuracy Bounds for Belief Propagation
|
cs.AI cs.LG stat.ML
|
The belief propagation (BP) algorithm is widely applied to perform
approximate inference on arbitrary graphical models, in part due to its
excellent empirical properties and performance. However, little is known
theoretically about when this algorithm will perform well. Using recent
analysis of convergence and stability properties in BP and new results on
approximations in binary systems, we derive a bound on the error in BP's
estimates for pairwise Markov random fields over discrete valued random
variables. Our bound is relatively simple to compute, and compares favorably
with a previous method of bounding the accuracy of BP.
|
1206.5278
|
Fast Nonparametric Conditional Density Estimation
|
stat.ME cs.LG stat.ML
|
Conditional density estimation generalizes regression by modeling a full
density f(y|x) rather than only the expected value E(y|x). This is important
for many tasks, including handling multi-modality and generating prediction
intervals. Though fundamental and widely applicable, nonparametric conditional
density estimators have received relatively little attention from statisticians
and little or none from the machine learning community. None of that work has
been applied to more than bivariate data, presumably due to the
computational difficulty of data-driven bandwidth selection. We describe the
double kernel conditional density estimator and derive fast dual-tree-based
algorithms for bandwidth selection using a maximum likelihood criterion. These
techniques give speedups of up to 3.8 million in our experiments, and enable
the first applications to previously intractable large multivariate datasets,
including a redshift prediction problem from the Sloan Digital Sky Survey.
|
1206.5279
|
Making life better one large system at a time: Challenges for UAI
research
|
cs.SE cs.AI
|
The rapid growth and diversity in service offerings and the ensuing
complexity of information technology ecosystems present numerous management
challenges (both operational and strategic). Instrumentation and measurement
technology is, by and large, keeping pace with this development and growth.
However, the algorithms, tools, and technology required to transform the data
into relevant information for decision making are not. The claim in this paper
(and the invited talk) is that the line of research conducted in Uncertainty in
Artificial Intelligence is very well suited to address the challenges and close
this gap. I will support this claim and discuss open problems using recent
examples in diagnosis, model discovery, and policy optimization on three real
life distributed systems.
|
1206.5280
|
Ranking Under Uncertainty
|
cs.AI stat.AP
|
Ranking objects is a simple and natural procedure for organizing data. It is
often performed by assigning a quality score to each object according to its
relevance to the problem at hand. Ranking is widely used for object selection,
when resources are limited and it is necessary to select a subset of most
relevant objects for further processing. In real-world situations, the objects'
scores are often calculated from noisy measurements, casting doubt on the
ranking reliability. We introduce an analytical method for assessing the
influence of noise levels on the ranking reliability. We use two similarity
measures for reliability evaluation, Top-K-List overlap and Kendall's tau
measure, and show that the former is much more sensitive to noise than the
latter. We apply our method to gene selection in a series of microarray
experiments of several cancer types. The results indicate that the reliability
of the lists obtained from these experiments is very poor, and that experiment
sizes which are necessary for attaining reasonably stable Top-K-Lists are much
larger than those currently available. Simulations support our analytical
results.
|
1206.5281
|
Learning Selectively Conditioned Forest Structures with Applications to
DBNs and Classification
|
cs.LG stat.ML
|
Dealing with uncertainty in Bayesian Network structures using maximum a
posteriori (MAP) estimation or Bayesian Model Averaging (BMA) is often
intractable due to the superexponential number of possible directed, acyclic
graphs. When the prior is decomposable, two classes of graphs where efficient
learning can take place are tree structures, and fixed-orderings with limited
in-degree. We show how MAP estimates and BMA for selectively conditioned
forests (SCF), a combination of these two classes, can be computed efficiently
for ordered sets of variables. We apply SCFs to temporal data to learn Dynamic
Bayesian Networks having an intra-timestep forest and inter-timestep limited
in-degree structure, improving model accuracy over DBNs without the combination
of structures. We also apply SCFs to Bayes Net classification to learn
selective forest augmented Naive Bayes classifiers. We argue that the built-in
feature selection of selective augmented Bayes classifiers makes them
preferable to similar non-selective classifiers based on empirical evidence.
|
1206.5282
|
A Characterization of Markov Equivalence Classes for Directed Acyclic
Graphs with Latent Variables
|
stat.ME cs.LG stat.ML
|
Different directed acyclic graphs (DAGs) may be Markov equivalent in the
sense that they entail the same conditional independence relations among the
observed variables. Meek (1995) characterizes Markov equivalence classes for
DAGs (with no latent variables) by presenting a set of orientation rules that
can correctly identify all arrow orientations shared by all DAGs in a Markov
equivalence class, given a member of that class. For DAG models with latent
variables, maximal ancestral graphs (MAGs) provide a neat representation that
facilitates model search. Earlier work (Ali et al. 2005) has identified a set
of orientation rules sufficient to construct all arrowheads common to a Markov
equivalence class of MAGs. In this paper, we provide extra rules sufficient to
construct all common tails as well. We end up with a set of orientation rules
sound and complete for identifying commonalities across a Markov equivalence
class of MAGs, which is particularly useful for causal inference.
|
1206.5283
|
Bayesian Active Distance Metric Learning
|
cs.LG stat.ML
|
Distance metric learning is an important component for many tasks, such as
statistical classification and content-based image retrieval. Existing
approaches for learning distance metrics from pairwise constraints typically
suffer from two major problems. First, most algorithms only offer point
estimation of the distance metric and can therefore be unreliable when the
number of training examples is small. Second, since these algorithms generally
select their training examples at random, they can be inefficient if labeling
effort is limited. This paper presents a Bayesian framework for distance metric
learning that estimates a posterior distribution for the distance metric from
labeled pairwise constraints. We describe an efficient algorithm based on the
variational method for the proposed Bayesian approach. Furthermore, we apply
the proposed Bayesian framework to active distance metric learning by selecting
those unlabeled example pairs with the greatest uncertainty in relative
distance. Experiments in classification demonstrate that the proposed framework
achieves higher classification accuracy and identifies more informative
training examples than the non-Bayesian approach and state-of-the-art distance
metric learning algorithms.
|
1206.5284
|
More-or-Less CP-Networks
|
cs.AI
|
Preferences play an important role in our everyday lives. CP-networks, or
CP-nets in short, are graphical models for representing conditional qualitative
preferences under ceteris paribus ("all else being equal") assumptions. Despite
their intuitive nature and rich representation, dominance testing with CP-nets
is computationally complex, even when the CP-nets are restricted to
binary-valued preferences. Tractable algorithms exist for binary CP-nets, but
these algorithms are incomplete for multi-valued CP-nets. In this paper, we
identify a class of multi-valued CP-nets, which we call more-or-less CP-nets,
that have the same computational complexity as binary CP-nets. More-or-less
CP-nets exploit the monotonicity of the attribute values and use intervals to
aggregate values that induce similar preferences. We then present a search
control rule for dominance testing that effectively prunes the search space
while preserving completeness.
|
1206.5285
|
Importance Sampling via Variational Optimization
|
stat.CO cs.AI
|
Computing the exact likelihood of data in large Bayesian networks consisting
of thousands of vertices is often a difficult task. When these models contain
many deterministic conditional probability tables and when the observed values
are extremely unlikely, even alternative algorithms such as variational methods
and stochastic sampling often perform poorly. We present a new importance
sampling algorithm for Bayesian networks which is based on variational
techniques. We use the updates of the importance function to predict whether
the stochastic sampling converged above or below the true likelihood, and
change the proposal distribution accordingly. The validity of the method and
its contribution to convergence is demonstrated on hard networks of large
genetic linkage analysis tasks.
|
1206.5286
|
MAP Estimation, Linear Programming and Belief Propagation with Convex
Free Energies
|
cs.AI cs.LG stat.ML
|
Finding the most probable assignment (MAP) in a general graphical model is
known to be NP-hard, but good approximations have been attained with max-product
belief propagation (BP) and its variants. In particular, it is known that using
BP on a single-cycle graph or tree reweighted BP on an arbitrary graph will
give the MAP solution if the beliefs have no ties. In this paper we extend the
setting under which BP can be used to provably extract the MAP. We define
Convex BP as BP algorithms based on a convex free energy approximation and show
that this class includes ordinary BP on single-cycle graphs, tree reweighted BP,
and many other BP variants. We show that when there are no ties, fixed points of
convex max-product BP will provably give the MAP solution. We also show that
convex sum-product BP at sufficiently small temperatures can be used to solve
linear programs that arise from relaxing the MAP problem. Finally, we derive a
novel condition that allows us to derive the MAP solution even if some of the
convex BP beliefs have ties. In experiments, we show that our theorems allow us
to find the MAP in many real-world instances of graphical models where exact
inference using junction-tree is impossible.
|
1206.5287
|
Policy Iteration for Relational MDPs
|
cs.AI
|
Relational Markov Decision Processes are a useful abstraction for complex
reinforcement learning problems and stochastic planning problems. Recent work
developed representation schemes and algorithms for planning in such problems
using the value iteration algorithm. However, exact versions of more complex
algorithms, including policy iteration, have not been developed or analyzed.
The paper investigates this potential and makes several contributions. First we
observe two anomalies for relational representations showing that the value of
some policies is not well defined or cannot be calculated for restricted
representation schemes used in the literature. On the other hand, we develop a
variant of policy iteration that can get around these anomalies. The algorithm
includes an aspect of policy improvement in the process of policy evaluation
and thus differs from the original algorithm. We show that despite this
difference the algorithm converges to the optimal policy.
|
1206.5288
|
Constrained Automated Mechanism Design for Infinite Games of Incomplete
Information
|
cs.GT cs.AI
|
We present a functional framework for automated mechanism design based on a
two-stage game model of strategic interaction between the designer and the
mechanism participants, and apply it to several classes of two-player infinite
games of incomplete information. At the core of our framework is a black-box
optimization algorithm which guides the selection process of candidate
mechanisms. Our approach yields optimal or nearly optimal mechanisms in several
application domains using various objective functions. By comparing our results
with known optimal mechanisms, and in some cases improving on the best known
mechanisms, we provide evidence that ours is a promising approach to parametric
design of indirect mechanisms.
|
1206.5289
|
A Criterion for Parameter Identification in Structural Equation Models
|
stat.ME cs.AI
|
This paper deals with the problem of identifying direct causal effects in
recursive linear structural equation models. The paper establishes a sufficient
criterion for identifying individual causal effects and provides a procedure
for computing identified causal effects in terms of the observed covariance
matrix.
|
1206.5290
|
Imitation Learning with a Value-Based Prior
|
cs.LG cs.AI stat.ML
|
The goal of imitation learning is for an apprentice to learn how to behave in
a stochastic environment by observing a mentor demonstrating the correct
behavior. Accurate prior knowledge about the correct behavior can reduce the
need for demonstrations from the mentor. We present a novel approach to
encoding prior knowledge about the correct behavior, where we assume that this
prior knowledge takes the form of a Markov Decision Process (MDP) that is used
by the apprentice as a rough and imperfect model of the mentor's behavior.
Specifically, taking a Bayesian approach, we treat the value of a policy in
this modeling MDP as the log prior probability of the policy. In other words,
we assume a priori that the mentor's behavior is likely to be a high value
policy in the modeling MDP, though quite possibly different from the optimal
policy. We describe an efficient algorithm that, given a modeling MDP and a set
of demonstrations by a mentor, provably converges to a stationary point of the
log posterior of the mentor's policy, where the posterior is computed with
respect to the "value based" prior. We also present empirical evidence that
this prior does in fact speed learning of the mentor's policy, and is an
improvement in our experiments over similar previous methods.
|
1206.5291
|
Improved Dynamic Schedules for Belief Propagation
|
cs.LG cs.AI stat.ML
|
Belief propagation and its variants are popular methods for approximate
inference, but their running time and even their convergence depend greatly on
the schedule used to send the messages. Recently, dynamic update schedules have
been shown to converge much faster on hard networks than static schedules,
notably the residual BP (RBP) schedule of Elidan et al. [2006]. But the RBP
algorithm
wastes message updates: many messages are computed solely to determine their
priority, and are never actually performed. In this paper, we show that
estimating the residual, rather than calculating it directly, leads to
significant decreases in the number of messages required for convergence, and
in the total running time. The residual is estimated using an upper bound based
on recent work on message errors in BP. On both synthetic and real-world
networks, this dramatically decreases the running time of BP, in some cases by
a factor of five, without affecting the quality of the solution.
|
1206.5292
|
Markov Logic in Infinite Domains
|
cs.AI
|
Combining first-order logic and probability has long been a goal of AI.
Markov logic (Richardson & Domingos, 2006) accomplishes this by attaching
weights to first-order formulas and viewing them as templates for features of
Markov networks. Unfortunately, it does not have the full power of first-order
logic, because it is only defined for finite domains. This paper extends Markov
logic to infinite domains, by casting it in the framework of Gibbs measures
(Georgii, 1988). We show that a Markov logic network (MLN) admits a Gibbs
measure as long as each ground atom has a finite number of neighbors. Many
interesting cases fall in this category. We also show that an MLN admits a
unique measure if the weights of its non-unit clauses are small enough. We then
examine the structure of the set of consistent measures in the non-unique case.
Many important phenomena, including systems with phase transitions, are
represented by MLNs with non-unique measures. We relate the problem of
satisfiability in first-order logic to the properties of MLN measures, and
discuss how Markov logic relates to previous infinite models.
|
1206.5293
|
On Sensitivity of the MAP Bayesian Network Structure to the Equivalent
Sample Size Parameter
|
cs.LG stat.ML
|
The BDeu marginal likelihood score is a popular model selection criterion for
selecting a Bayesian network structure based on sample data. This
non-informative scoring criterion assigns the same score to network structures
that encode the same independence statements. However, before applying the BDeu
score, one must determine a single parameter, the equivalent sample size alpha.
Unfortunately, no generally accepted rule for determining the alpha parameter
has been suggested. This is problematic, since in this paper we show through a
series of concrete experiments that the solution of the network structure
optimization problem is highly sensitive to the chosen alpha parameter value.
Based on these results, we are able to give explanations for how and why this
phenomenon happens, and discuss ideas for solving this problem.
|
1206.5294
|
What Counterfactuals Can Be Tested
|
cs.AI
|
Counterfactual statements, e.g., "my headache would be gone had I taken an
aspirin" are central to scientific discourse, and are formally interpreted as
statements derived from "alternative worlds". However, since they invoke
hypothetical states of affairs, often incompatible with what is actually known
or observed, testing counterfactuals is fraught with conceptual and practical
difficulties. In this paper, we provide a complete characterization of
"testable counterfactuals," namely, counterfactual statements whose
probabilities can be inferred from physical experiments. We provide complete
procedures for discerning whether a given counterfactual is testable and, if
so, expressing its probability in terms of experimental data.
|
1206.5295
|
Improved Memory-Bounded Dynamic Programming for Decentralized POMDPs
|
cs.AI
|
Memory-Bounded Dynamic Programming (MBDP) has proved extremely effective in
solving decentralized POMDPs with large horizons. We generalize the algorithm
and improve its scalability by reducing the complexity with respect to the
number of observations from exponential to polynomial. We derive error bounds
on solution quality with respect to this new approximation and analyze the
convergence behavior. To evaluate the effectiveness of the improvements, we
introduce a new, larger benchmark problem. Experimental results show that
despite the high complexity of decentralized POMDPs, scalable solution
techniques such as MBDP perform surprisingly well.
|
1206.5327
|
XACML 3.0 in Answer Set Programming
|
cs.IT math.IT
|
We present a systematic technique for transforming XACML 3.0 policies into
Answer Set Programming (ASP). We show that the resulting logic program has a
unique answer set that directly corresponds to our formalisation of the
standard semantics of XACML 3.0 from Ramli et al. We demonstrate how our
results make it possible to use off-the-shelf ASP solvers to formally verify
properties of access control policies represented in XACML, such as checking
the completeness of a set of access control policies and verifying policy
properties.
|
1206.5333
|
TempEval-3: Evaluating Events, Time Expressions, and Temporal Relations
|
cs.CL
|
We describe the TempEval-3 task which is currently in preparation for the
SemEval-2013 evaluation exercise. The aim of TempEval is to advance research on
temporal information processing. TempEval-3 follows on from previous TempEval
events, incorporating: a three-part task structure covering event, temporal
expression and temporal relation extraction; a larger dataset; and single
overall task quality scores.
|
1206.5345
|
Dynamic Pricing under Finite Space Demand Uncertainty: A Multi-Armed
Bandit with Dependent Arms
|
cs.LG
|
We consider a dynamic pricing problem under unknown demand models. In this
problem a seller offers prices to a stream of customers and observes either
success or failure in each sale attempt. The underlying demand model is unknown
to the seller and can take one of N possible forms. In this paper, we show that
this problem can be formulated as a multi-armed bandit with dependent arms. We
propose a dynamic pricing policy based on the likelihood ratio test. We show
that the proposed policy achieves complete learning, i.e., it offers a bounded
regret where regret is defined as the revenue loss with respect to the case
with a known demand model. This is in sharp contrast with the logarithmically
growing regret in multi-armed bandits with independent arms.
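The likelihood-ratio idea behind such a policy can be sketched as follows; the demand curves, candidate prices, and the simple "offer the revenue-maximizing price under the maximum-likelihood model" rule are illustrative stand-ins for the paper's exact construction:

```python
import math, random

def pick_price(models, log_lik, prices):
    """Offer the revenue-maximizing price under the most likely demand model."""
    m = max(range(len(models)), key=lambda i: log_lik[i])
    return max(prices, key=lambda p: p * models[m][p])

def update(models, log_lik, price, sale):
    """Log-likelihood update for each model after observing a sale/no-sale."""
    for i, model in enumerate(models):
        prob = model[price] if sale else 1.0 - model[price]
        log_lik[i] += math.log(max(prob, 1e-12))

# Two hypothetical demand curves (success probability per price):
prices = [1.0, 2.0, 3.0]
models = [{1.0: 0.9, 2.0: 0.5, 3.0: 0.1},    # price-sensitive customers
          {1.0: 0.95, 2.0: 0.9, 3.0: 0.8}]   # price-insensitive customers
log_lik = [0.0, 0.0]
random.seed(0)
true = models[1]
for _ in range(500):
    p = pick_price(models, log_lik, prices)
    sale = random.random() < true[p]
    update(models, log_lik, p, sale)
print("identified model:", max(range(2), key=lambda i: log_lik[i]))
```

Because the N demand models are shared hypotheses, every observed outcome updates all of them at once — this dependence between arms is what allows the bounded, rather than logarithmically growing, regret.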
|
1206.5349
|
Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian
Mixtures and Autoencoders
|
cs.LG cs.DS
|
We present a new algorithm for Independent Component Analysis (ICA) which has
provable performance guarantees. In particular, suppose we are given samples of
the form $y = Ax + \eta$ where $A$ is an unknown $n \times n$ matrix and $x$ is
a random variable whose components are independent and have a fourth moment
strictly less than that of a standard Gaussian random variable and $\eta$ is an
$n$-dimensional Gaussian random variable with unknown covariance $\Sigma$: We
give an algorithm that provably recovers $A$ and $\Sigma$ up to an additive
$\epsilon$ and whose running time and sample complexity are polynomial in $n$
and $1 / \epsilon$. To accomplish this, we introduce a novel "quasi-whitening"
step that may be useful in other contexts in which the covariance of Gaussian
noise is not known in advance. We also give a general framework for finding all
local optima of a function (given an oracle for approximately finding just one)
and this is a crucial step in our algorithm, one that has been overlooked in
previous attempts, and allows us to control the accumulation of error when we
find the columns of $A$ one by one via local search.
|
1206.5360
|
Analysis of a Nature Inspired Firefly Algorithm based Back-propagation
Neural Network Training
|
cs.AI cs.NE
|
Optimization algorithms are often built on meta-heuristic approaches, and in
recent years several hybrid optimization methods have been developed to find
better solutions. The proposed work applies a nature-inspired meta-heuristic
algorithm together with the back-propagation method to train a feed-forward
neural network. The firefly algorithm, a nature-inspired meta-heuristic, is
incorporated into the back-propagation algorithm to achieve fast and improved
convergence in training feed-forward neural networks. The proposed technique
is tested on standard data sets. It is found that the proposed method achieves
improved convergence within very few iterations. Its performance is also
analyzed and compared to genetic-algorithm-based back-propagation. It is
observed that the proposed method converges in less time and provides an
improved convergence rate with a minimal feed-forward neural network design.
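For readers unfamiliar with the metaheuristic, a minimal, generic sketch of the firefly algorithm on a toy quadratic cost is given below; in the paper's setting the cost function would be the feed-forward network's training error over its weight vector, and the population size, attraction constants, and decay schedule here are illustrative assumptions:

```python
import math, random

def firefly_minimize(f, dim, n=20, iters=80, beta0=1.0, gamma=0.5, alpha=0.2):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter ones,
    with attraction decaying in distance and a shrinking random step."""
    random.seed(1)
    X = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        bright = [-f(x) for x in X]              # brightness = negative cost
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:        # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    bright[i] = -f(X[i])
        alpha *= 0.97                            # cool the random walk
    return min(X, key=f)

# Toy cost: a quadratic bowl standing in for a network's training error.
best = firefly_minimize(lambda w: sum(v * v for v in w), dim=2)
```

Hybridizing with back-propagation, as the paper does, would interleave such swarm moves with gradient steps on the same weight vector.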
|
1206.5361
|
Regional System Identification and Computer Based Switchable Control of
a Nonlinear Hot Air Blower System
|
cs.SY
|
This paper describes the design and implementation of linear controllers with
a switching condition for a nonlinear hot air blower system (HABS), the
process trainer PT326. The system is interfaced with a computer through a
USB-based data acquisition module and interfacing circuitry. A calibration
equation is implemented on the computer to convert the amplified sensor output
to temperature. The overall plant is nonlinear; therefore, system
identification is performed in three different regions depending on the plant
input. For these three regions, three linear controllers are designed so that
the closed-loop system has small rise time, settling time, and overshoot.
Controller switching is based on the regions defined by the plant input. To
avoid discontinuities caused by switching between linear controllers, the
parameters of all linear controllers are kept close to each other. Finally,
the discretized controllers, together with the switching condition, are
implemented for the plant through the computer, and practical results are
demonstrated.
|
1206.5365
|
Batched Sparse Codes
|
cs.IT math.IT
|
Network coding can significantly improve the transmission rate of
communication networks with packet loss compared with routing. However, using
network coding usually incurs high computational and storage costs in the
network devices and terminals. For example, some network coding schemes require
the computational and/or storage capacities of an intermediate network node to
increase linearly with the number of packets for transmission, making such
schemes difficult to implement in a router-like device that has only
constant computational and storage capacities. In this paper, we introduce
BATched Sparse code (BATS code), which enables a digital fountain approach to
resolve the above issue. BATS code is a coding scheme that consists of an outer
code and an inner code. The outer code is a matrix generalization of a fountain
code. It works with the inner code that comprises random linear coding at the
intermediate network nodes. BATS codes preserve such desirable properties of
fountain codes as ratelessness and low encoding/decoding complexity. The
computational and storage capacities of the intermediate network nodes required
for applying BATS codes are independent of the number of packets for
transmission. Almost capacity-achieving BATS code schemes are devised for
unicast networks, two-way relay networks, tree networks, a class of three-layer
networks, and the butterfly network. For general networks, under different
optimization criteria, guaranteed decoding rates for the receiving nodes can be
obtained.
|
1206.5384
|
Keyphrase Based Arabic Summarizer (KPAS)
|
cs.CL cs.AI
|
This paper describes a computationally inexpensive and efficient generic
summarization algorithm for Arabic texts. The algorithm belongs to the
extractive summarization family, which reduces the problem to the
sub-problems of identifying and extracting representative sentences.
Important keyphrases of the document to be summarized are identified using
combinations of statistical and linguistic features. The sentence extraction
algorithm exploits keyphrases as the primary attributes for ranking a
sentence. The present experimental work demonstrates different techniques for
achieving various summarization goals, including informative richness,
coverage of both main and auxiliary topics, and keeping redundancy to a
minimum. A scoring scheme is then adopted that balances these summarization
goals. To evaluate the resulting Arabic summaries against well-established
systems, aligned English/Arabic texts are used throughout the experiments.
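The interplay of keyphrase-based ranking and redundancy control can be sketched greedily; the substring matching and the toy document below are illustrative simplifications of the paper's statistical and linguistic keyphrase features:

```python
def summarize(sentences, keyphrases, k=2):
    """Greedy keyphrase-coverage extractive summary (illustrative sketch).

    Each sentence is scored by the keyphrases it contains; once a sentence
    is selected, its keyphrases stop contributing, which keeps redundancy
    low and pushes coverage toward auxiliary topics.
    """
    remaining = set(keyphrases)
    chosen = []
    for _ in range(k):
        best = max(sentences,
                   key=lambda s: sum(kp in s.lower() for kp in remaining))
        if sum(kp in best.lower() for kp in remaining) == 0:
            break                      # nothing informative left to add
        chosen.append(best)
        remaining -= {kp for kp in remaining if kp in best.lower()}
    return chosen

docs = ["Arabic summarization reduces a text to key sentences.",
        "Keyphrases rank sentences for extraction.",
        "The weather was pleasant yesterday."]
print(summarize(docs, ["summarization", "keyphrases", "sentences"]))
```

The discounting of already-covered keyphrases is one simple instance of the balancing scoring scheme the abstract refers to.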
|
1206.5389
|
Information Networks with in-Block Memory
|
cs.IT math.IT
|
A class of channels is introduced for which there is memory inside blocks of
a specified length and no memory across the blocks. The multi-user model is
called an information network with in-block memory (NiBM). It is shown that
block-fading channels, channels with state known causally at the encoder, and
relay networks with delays are NiBMs. A cut-set bound is developed for NiBMs
that unifies, strengthens, and generalizes existing cut bounds for discrete
memoryless networks. The bound gives new finite-letter capacity expressions for
several classes of networks including point-to-point channels, and certain
multiaccess, broadcast, and relay channels. Cardinality bounds on the random
coding alphabets are developed that improve on existing bounds for channels
with action-dependent state available causally at the encoder and for relays
without delay. Finally, quantize-forward network coding is shown to achieve
rates within an additive gap of the new cut-set bound for linear, additive,
Gaussian noise channels, symmetric power constraints, and a multicast session.
|
1206.5396
|
Markov Chains on Orbits of Permutation Groups
|
cs.AI math.CO stat.CO
|
We present a novel approach to detecting and utilizing symmetries in
probabilistic graphical models with two main contributions. First, we present a
scalable approach to computing generating sets of permutation groups
representing the symmetries of graphical models. Second, we introduce orbital
Markov chains, a novel family of Markov chains leveraging model symmetries to
reduce mixing times. We establish an insightful connection between model
symmetries and rapid mixing of orbital Markov chains. Thus, we present the
first lifted MCMC algorithm for probabilistic graphical models. Both analytical
and empirical results demonstrate the effectiveness and efficiency of the
approach.
|
1206.5401
|
Dispersion of Infinite Constellations in Fast Fading Channels
|
cs.IT math.IT
|
In this work we extend the setting of communication without power constraint,
proposed by Poltyrev, to fast fading channels with channel state information
(CSI) at the receiver. The optimal codeword density, or rather the optimal
normalized log density (NLD), is considered. Poltyrev's capacity for this
channel is the highest achievable NLD, at possibly large block length, that
guarantees a vanishing error probability. For a given finite block length n and
a fixed error probability, there is a gap between the highest achievable NLD
and Poltyrev's capacity. As in other channels, this gap asymptotically vanishes
as the square root of the channel dispersion V over n, multiplied by the
inverse Q-function of the allowed error probability. This dispersion, derived
in the paper, equals the dispersion of the power constrained fast fading
channel at the high SNR regime. Connections to the error exponent of the peak
power constrained fading channel are also discussed.
|
1206.5421
|
Information Source Detection in the SIR Model: A Sample Path Based
Approach
|
cs.SI physics.soc-ph
|
This paper studies the problem of detecting the information source in a
network in which the spread of information follows the popular
Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network
are in the susceptible state initially except the information source which is
in the infected state. Susceptible nodes may then be infected by infected
nodes, and infected nodes may recover and will not be infected again after
recovery. Given a snapshot of the network, from which we know all infected
nodes but cannot distinguish susceptible nodes and recovered nodes, the problem
is to find the information source based on the snapshot and the network
topology. We develop a sample path based approach where the estimator of the
information source is chosen to be the root node associated with the sample
path that most likely leads to the observed snapshot. We prove that for
infinite trees, the estimator is a node that minimizes the maximum distance to
the infected nodes. A reverse-infection algorithm is proposed to find such an
estimator in general graphs. We prove that for $g$-regular trees such that
$gq>1,$ where $g$ is the node degree and $q$ is the infection probability, the
estimator is within a constant distance from the actual source with a high
probability, independent of the number of infected nodes and the time the
snapshot is taken. Our simulation results show that for tree networks, the
estimator produced by the reverse-infection algorithm is closer to the actual
source than the one identified by the closeness centrality heuristic. We then
further evaluate the performance of the reverse infection algorithm on several
real world networks.
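The tree-optimal estimator described above — the node minimizing the maximum distance to the infected nodes, i.e. the Jordan center of the infected set — can be sketched with plain BFS; the reverse-infection algorithm itself is a message-passing way of locating such a node, which this small centralized sketch only approximates:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def source_estimate(adj, infected):
    """Node minimizing the maximum distance to the infected set
    (the Jordan center, which is the tree-optimal estimator)."""
    def ecc(u):
        d = bfs_dist(adj, u)
        return max(d[v] for v in infected)
    return min(adj, key=ecc)

# Path graph 0-1-2-3-4 with nodes 0 and 4 observed infected:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(source_estimate(adj, [0, 4]))  # prints 2, the midpoint
```

On a tree this center is exactly the sample-path-optimal root; on general graphs it serves as the heuristic the reverse-infection algorithm targets.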
|
1206.5426
|
Imperfect Delayed CSIT can be as Useful as Perfect Delayed CSIT: DoF
Analysis and Constructions for the BC
|
cs.IT math.IT
|
In the setting of the two-user broadcast channel, where a two-antenna
transmitter communicates information to two single-antenna receivers, recent
work by Maddah-Ali and Tse has shown that perfect knowledge of delayed channel
state information at the transmitter (perfect delayed CSIT) can be useful, even
in the absence of any knowledge of current CSIT. Similar benefits of perfect
delayed CSIT were revealed in recent work by Kobayashi et al., Yang et al., and
Gou and Jafar, which extended the above to the case of perfect delayed CSIT and
imperfect current CSIT.
The work here considers the general problem of communicating, over the
aforementioned broadcast channel, with imperfect delayed and imperfect current
CSIT, and reveals that even substantially degraded and imperfect delayed-CSIT
is in fact sufficient to achieve the aforementioned gains previously
associated with perfect delayed CSIT. The work proposes novel multi-phase
broadcasting
schemes that properly utilize knowledge of imperfect delayed and imperfect
current CSIT, to match in many cases the optimal degrees-of-freedom (DoF)
region achieved with perfect delayed CSIT. In addition to the theoretical
limits and explicitly constructed precoders, the work applies towards gaining
practical insight as to when it is worth improving CSIT quality.
|
1206.5520
|
Semantic Networks of Interests in Online NSSI Communities
|
cs.SI
|
Persons who engage in non-suicidal self-injury (NSSI) often conceal their
practices, which limits the examination and understanding of those who engage
NSSI. The goal of this research is to utilize public online social networks
(namely, in LiveJournal, a major blogging network) to observe the NSSI
population's communication in a naturally occurring setting. Specifically,
LiveJournal users can publicly declare their interests. We collected the
self-declared interests of 22,000 users who are members of or participate in 43
NSSI-related communities. We extracted a bimodal socio-semantic network of
users and interests based on their similarity. The semantic subnetwork of
interests contains NSSI terms (such as "self-injury" and "razors"), references
to music performers (such as "Nine Inch Nails"), and general daily life and
creativity related terms (such as "poetry" and "boys"). Assuming users are
genuine in their declarations, the words reveal distinct patterns of interest
and may signal keys to NSSI.
|
1206.5525
|
Analysis of Coverage Region for MIMO Relay Channel
|
cs.IT math.IT
|
In this paper we investigate the optimal relay location, in the sense of
maximizing a suitably defined coverage region, for the MIMO relay channel. We consider
the general Rayleigh fading case and assume that the channel state information
is only available at the receivers (CSIR), which is an important practical case
in applications such as cooperative vehicular communications. In order to
overcome the mathematical difficulty regarding determination of the optimal
relay location, we provide two analytical solutions, and show that it is
possible to determine the optimal relay location (for a desired transmission
rate) at which the coverage region is maximum. Monte Carlo simulations confirm
the validity of the analytical results. Numerical results indicate that using
multiple antennas increases coverage region for a fixed transmission rate, and
also increases the transmission rate linearly for a fixed coverage.
|
1206.5533
|
Practical recommendations for gradient-based training of deep
architectures
|
cs.LG
|
Learning algorithms related to artificial neural networks and in particular
for Deep Learning may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when allowing one to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
|
1206.5538
|
Representation Learning: A Review and New Perspectives
|
cs.LG
|
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation and manifold learning.
|
1206.5559
|
Speeding up the construction of slow adaptive walks
|
cs.NE
|
An algorithm (bliss) is proposed to speed up the construction of slow
adaptive walks. Slow adaptive walks are adaptive walks biased towards closer
points or smaller move steps. They were previously introduced to explore a
search space, e.g. to detect potential local optima or to assess the ruggedness
of a fitness landscape. To avoid the quadratic cost of computing the Hamming
distance (HD) between all pairs of strings in a set in order to find each
string's set of closest strings, bliss sorts and clusters the strings so that
similar strings are more likely to get paired off for HD computation.
efficiently arrange the strings by similarity, bliss employs the idea of shared
non-overlapping position specific subsequences between strings which is
inspired by an alignment-free protein sequence comparison algorithm. Tests are
performed to evaluate the quality of b-walks, i.e. slow adaptive walks
constructed from the output of bliss, on enumerated search spaces. Finally,
b-walks are applied to explore larger search spaces with the help of
Wang-Landau sampling.
|
1206.5580
|
A Geometric Algorithm for Scalable Multiple Kernel Learning
|
cs.LG stat.ML
|
We present a geometric formulation of the Multiple Kernel Learning (MKL)
problem. To do so, we reinterpret the problem of learning kernel weights as
searching for a kernel that maximizes the minimum (kernel) distance between two
convex polytopes. This interpretation combined with novel structural insights
from our geometric formulation allows us to reduce the MKL problem to a simple
optimization routine that yields provable convergence as well as quality
guarantees. As a result our method scales efficiently to much larger data sets
than most prior methods can handle. Empirical evaluation on eleven datasets
shows that we are significantly faster and even compare favorably with a
uniform unweighted combination of kernels.
|
1206.5582
|
A Survey on Web Service Discovery Approaches
|
cs.IR
|
Web services play an important role in e-business and e-commerce
applications. Because web service applications are interoperable and can work
on any platform, large-scale distributed systems can be developed easily using
web services. Finding the most suitable web service from a vast collection of
web services is crucial for the successful execution of applications. The
traditional web service discovery approach is keyword-based search using UDDI.
Various other approaches for discovering web services are also available: some
are syntax-based while others are semantic-based. Whether a discovery system
can operate automatically is a further concern of these approaches. Since the
approaches differ, one solution may be better than another depending on the
requirements, and selecting a specific service discovery system is a hard
task. In this paper, we give an overview of the different approaches for web
service discovery described in the literature and present a survey of how
these approaches differ from each other.
|
1206.5584
|
Web-page Prediction for Domain Specific Web-search using Boolean Bit
Mask
|
cs.IR
|
A search engine is a Web-page retrieval tool, and an efficient search engine
saves Web searchers time. To improve search-engine performance, we introduce a
mechanism intended to give Web searchers more relevant results. In this paper,
we discuss a domain-specific Web search prototype that generates a predicted
Web-page list for a user-given search string using a Boolean bit mask.
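One plausible reading of the Boolean-bit-mask idea is sketched below with a hypothetical domain vocabulary; the paper's actual encoding and prediction rule may differ:

```python
def make_mask(words, vocab):
    """Encode a set of words as a Boolean bit mask over a fixed domain vocabulary."""
    return sum(1 << i for i, w in enumerate(vocab) if w in words)

def predict_pages(query, pages, vocab):
    """Rank pages by how many query bits they share (popcount of bitwise AND)."""
    q = make_mask(query.lower().split(), vocab)
    scored = [(bin(q & mask).count("1"), url) for url, mask in pages.items()]
    return [url for score, url in sorted(scored, reverse=True) if score > 0]

# Hypothetical domain vocabulary and page index:
vocab = ["bandit", "regret", "pricing", "kernel"]
pages = {
    "a.html": make_mask({"pricing", "regret"}, vocab),
    "b.html": make_mask({"kernel"}, vocab),
    "c.html": make_mask({"bandit", "regret", "pricing"}, vocab),
}
print(predict_pages("dynamic pricing regret", pages, vocab))
```

Representing pages and queries as fixed-width masks makes the match test a single AND per page, which is what makes this attractive for a domain-specific index.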
|
1206.5617
|
Robust Downlink Throughput Maximization in MIMO Cognitive Network with
more Realistic Conditions: Imperfect Channel Information & Presence of
Primary Transmitter
|
cs.IT math.IT
|
Designing an efficient scheme at the physical layer enables cognitive radio
(CR) users to efficiently utilize the resources dedicated to primary users
(PUs). In this paper, in order to maximize the secondary user's (SU's)
throughput, the SU's transceiver beamforming is designed through a new model
that considers the presence of the PU's transmitter. Since the presence of the
primary transmitter degrades the CR system's performance, the proposed
beamforming design accounts for the interference between the PU and SU
networks. An optimization problem is formulated that maximizes CR network
throughput subject to a constraint on the interference power from the SU
transmitter to the PU receiver. Due to the limited cooperation between the PU
and SU networks, the channel state information (CSI) between the two networks
is assumed to be only partially available; accordingly, the conventional
norm-bounded-error model of CSI uncertainty is employed. The resulting
optimization problem, which is difficult to solve directly, is converted into
a semidefinite program that can be solved efficiently with optimization
software such as CVX/Matlab. Furthermore, alternative time-efficient and
closed-form solutions are derived. The superiority of the proposed approach
over previous work is confirmed through simulation results.
|
1206.5637
|
What you can do with Coordinated Samples
|
cs.DB math.ST stat.TH
|
Sample coordination, where similar instances have similar samples, was
proposed by statisticians four decades ago as a way to maximize overlap in
repeated surveys. Coordinated sampling had been since used for summarizing
massive data sets.
The usefulness of a sampling scheme hinges on the scope and accuracy within
which queries posed over the original data can be answered from the sample. We
aim here to gain a fundamental understanding of the limits and potential of
coordination. Our main result is a precise characterization, in terms of simple
properties of the estimated function, of queries for which estimators with
desirable properties exist. We consider unbiasedness, nonnegativity, finite
variance, and bounded estimates.
Since generally a single estimator cannot be optimal (minimize variance
simultaneously) for all data, we propose {\em variance competitiveness}, which
means that the expectation of the square on any data is not too far from the
minimum one possible for the data. Surprisingly perhaps, we show how to
construct, for any function for which an unbiased nonnegative estimator exists,
a variance competitive estimator.
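A standard way to realize the sample coordination the abstract presupposes is bottom-k sampling with a shared hash: because every instance ranks keys by the same hash, similar instances yield overlapping samples. A minimal sketch, where the hash choice and sample size are illustrative:

```python
import hashlib

def rank(key):
    """Shared pseudo-random rank in [0, 1): the same key gets the same rank
    in every instance, which is what coordinates the samples."""
    h = hashlib.sha1(str(key).encode()).hexdigest()
    return int(h[:12], 16) / 16**12

def bottom_k(keys, k):
    """Bottom-k sample: the k keys with the smallest shared ranks."""
    return sorted(keys, key=rank)[:k]

day1 = set(range(100))
day2 = set(range(5, 105))          # similar instance: 95% overlap with day1
s1, s2 = bottom_k(day1, 10), bottom_k(day2, 10)
print(len(set(s1) & set(s2)))      # large overlap, unlike independent samples
```

Estimating a query over several such coordinated samples is exactly the setting in which the paper characterizes which functions admit unbiased, nonnegative, finite-variance estimators.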
|
1206.5651
|
Optimization of Real, Hermitian Quadratic Forms: Real, Complex
Hopfield-Amari Neural Network
|
cs.NE
|
In this research paper, the problem of optimization of quadratic forms
associated with the dynamics of Hopfield-Amari neural network is considered. An
elegant (and short) proof of the states at which local/global minima of
quadratic form are attained is provided. A theorem associated with local/global
minimization of quadratic energy function using the Hopfield-Amari neural
network is discussed. The results are generalized to a "Complex Hopfield neural
network" dynamics over the complex hypercube (using a "complex signum
function"). It is also reasoned through two theorems that there is no loss of
generality in assuming the threshold vector to be a zero vector in the case of
real as well as a "Complex Hopfield neural network". Some structured quadratic
forms like Toeplitz form and Complex Toeplitz form are discussed.
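The signum-update dynamics on the hypercube can be sketched for the real case with zero threshold; the Hebbian weight matrix below is an illustrative choice storing a single pattern, while the paper's results cover general symmetric quadratic forms, including the complex case, which this sketch omits:

```python
import random

def hopfield_minimize(W, steps=200, seed=0):
    """Asynchronous signum updates on {-1,+1}^n; with symmetric W, zero
    diagonal, and zero threshold, each update cannot increase the energy
    E(x) = -1/2 x^T W x, so the state settles in a local minimum."""
    rng = random.Random(seed)
    n = len(W)
    x = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        field = sum(W[i][j] * x[j] for j in range(n))
        if field != 0:
            x[i] = 1 if field > 0 else -1        # signum update
    return x

def energy(W, x):
    n = len(W)
    return -0.5 * sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Hebbian weights storing pattern p (zero diagonal): the minima of the
# quadratic form are the stored pattern and its negation.
p = [1, -1, 1, -1]
W = [[p[i] * p[j] if i != j else 0 for j in range(4)] for i in range(4)]
x = hopfield_minimize(W)
```

The zero threshold here is consistent with the paper's observation that no generality is lost by assuming a zero threshold vector.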
|
1206.5691
|
Superactivation of Quantum Channels is Limited by the Quantum Relative
Entropy Function
|
quant-ph cs.IT math.IT
|
In this work we prove that the possibility of superactivation of quantum
channel capacities is determined by the mathematical properties of the quantum
relative entropy function. Before our work, this fundamental and purely
mathematical connection between the quantum relative entropy function and the
superactivation effect had not been revealed. We demonstrate the results
for the quantum capacity; however the proposed theorems and connections hold
for all other channel capacities of quantum channels for which the
superactivation is possible.
|
1206.5693
|
Quasi-Superactivation of Classical Capacity of Zero-Capacity Quantum
Channels
|
quant-ph cs.IT math.IT
|
One of the most surprising recent results in quantum Shannon theory is the
superactivation of the quantum capacity of a quantum channel. This phenomenon
has its roots in the extreme violation of additivity of the channel capacity
and enables reliable transmission of quantum information over zero-capacity
channels. In this work we demonstrate a similar effect for the classical
capacity of a quantum channel which previously was thought to be impossible. We
show that a nonzero classical capacity can be achieved for all zero-capacity
quantum channels, and that it requires only the assistance of an elementary
photon-atom interaction process: stimulated emission.
|
1206.5698
|
Relational Approach to Knowledge Engineering for POMDP-based Assistance
Systems as a Translation of a Psychological Model
|
cs.AI
|
Assistive systems for persons with cognitive disabilities (e.g. dementia) are
difficult to build due to the wide range of different approaches people can
take to accomplishing the same task, and the significant uncertainties that
arise both from the unpredictability of clients' behaviours and from noise in
sensor readings. Partially observable Markov decision process (POMDP) models
have been used successfully as the reasoning engine behind such assistive
systems for small multi-step tasks such as hand washing. POMDP models are a
powerful, yet flexible framework for modelling assistance that can deal with
uncertainty and utility. Unfortunately, POMDPs usually require a very labour
intensive, manual procedure for their definition and construction. Our previous
work has described a knowledge driven method for automatically generating POMDP
activity recognition and context sensitive prompting systems for complex tasks.
We call the resulting POMDP a SNAP (SyNdetic Assistance Process). The
spreadsheet-like result of the analysis does not correspond to the POMDP model
directly and the translation to a formal POMDP representation is required. To
date, this translation had to be performed manually by a trained POMDP expert.
In this paper, we formalise and automate this translation process using a
probabilistic relational model (PRM) encoded in a relational database. We
demonstrate the method by eliciting three assistance tasks from non-experts. We
validate the resulting POMDP models using case-based simulations to show that
they are reasonable for the domains. We also show a complete case study of a
designer specifying one database, including an evaluation in a real-life
experiment with a human actor.
|
1206.5710
|
Complex networks embedded in space: Dimension and scaling relations
between mass, topological distance and Euclidean distance
|
physics.soc-ph cs.SI
|
Many real networks are embedded in space, and in some of them the link
lengths decay with distance as a power law with exponent $\delta$. Indications
that such systems can be characterized by the concept of dimension were found
recently.
Here, we present further support for this claim, based on extensive numerical
simulations for model networks embedded on lattices of dimensions $d_e=1$ and
$d_e=2$.
We evaluate the dimension $d$ from the power law scaling of (a) the mass of
the network with the Euclidean radius $r$ and (b) the probability of return to
the origin with the distance $r$ travelled by the random walker. Both
approaches yield the same dimension. For networks with $\delta < d_e$, $d$ is
infinity, while for $\delta > 2d_e$, $d$ obtains the value of the embedding
dimension $d_e$. In the intermediate regime of interest $d_e \leq \delta < 2
d_e$, our numerical results suggest that $d$ decreases continuously from $d =
\infty$ to $d_e$, with $d - d_e \sim (\delta - d_e)^{-1}$ for $\delta$ close to
$d_e$. Finally, we discuss the scaling of the mass $M$ and the Euclidean
distance $r$ with the topological distance $\ell$. Our results suggest that in
the intermediate regime $d_e \leq \delta < 2 d_e$, $M(\ell)$ and $r(\ell)$ do
not increase with $\ell$ as a power law but with a stretched exponential,
$M(\ell) \sim \exp [A \ell^{\delta' (2 - \delta')}]$ and $r(\ell) \sim \exp [B
\ell^{\delta' (2 - \delta')}]$, where $\delta' = \delta/d_e$. The parameters
$A$ and $B$ are related to $d$ by $d = A/B$, such that $M(\ell) \sim
r(\ell)^d$. For $\delta < d_e$, $M$ increases exponentially with $\ell$, as
known for $\delta=0$, while $r$ is constant and independent of $\ell$. For
$\delta \geq 2d_e$, we find power law scaling, $M(\ell) \sim \ell^{d_\ell}$ and
$r(\ell) \sim \ell^{1/d_{min}}$, with $d_\ell \cdot d_{min} = d$.
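As a toy illustration of approach (a) above (not the authors' embedded-network simulations), the dimension $d$ can be read off as the slope of $\log M(r)$ versus $\log r$; on a plain 2D lattice the estimate comes out close to $d_e = 2$:

```python
import numpy as np

# Mass-radius dimension estimate: M(r) ~ r^d, so d is the slope of
# log M(r) vs. log r. A plain 2D lattice ball should give d close to 2.
radii = np.arange(2, 30)
mass = [np.count_nonzero(np.add.outer(np.arange(-r, r + 1) ** 2,
                                      np.arange(-r, r + 1) ** 2) <= r * r)
        for r in radii]
d, _ = np.polyfit(np.log(radii), np.log(mass), 1)
print(d)  # roughly 2
```

For an embedded network one would replace the lattice ball by the set of nodes within Euclidean radius $r$ of a seed node, averaged over seeds.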
|
1206.5725
|
On Deterministic Sketching and Streaming for Sparse Recovery and Norm
Estimation
|
cs.DS cs.IT math.IT
|
We study classic streaming and sparse recovery problems using deterministic
linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the
latter also being known as l1-heavy hitters), norm estimation, and approximate
inner product. We focus on devising a fixed matrix A in R^{m x n} and a
deterministic recovery/estimation procedure which work for all possible input
vectors simultaneously. Our results improve upon existing work, the following
being our main contributions:
* A proof that linf/l1 sparse recovery and inner product estimation are
equivalent, and that incoherent matrices can be used to solve both problems.
Our upper bound for the number of measurements is m=O(eps^{-2}*min{log n, (log
n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms
by making use of the Fast Johnson-Lindenstrauss transform. Both our running
times and number of measurements improve upon previous work. We can also obtain
better error guarantees than previous work in terms of a smaller tail of the
input vector.
* A new lower bound for the number of linear measurements required to solve
l1/l1 sparse recovery. We show Omega(k/eps^2 + klog(n/k)/eps) measurements are
required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where
x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.
* A tight bound of m = Theta(eps^{-2}log(eps^2 n)) on the number of
measurements required to solve deterministic norm estimation, i.e., to recover
|x|_2 +/- eps|x|_1.
For all the problems we study, tight bounds are already known for the
randomized complexity from previous work, except in the case of l1/l1 sparse
recovery, where a nearly tight bound is known. Our work thus aims to study the
deterministic complexities of these problems.
|
1206.5726
|
L-RCM: a method to detect connected components in undirected graphs by
using the Laplacian matrix and the RCM algorithm
|
cs.DM cs.SI physics.soc-ph
|
In this paper we consider undirected graphs without loops or multiple edges,
consisting of k connected components. In these cases, it is well known that one
can find a numbering of the vertices such that the adjacency matrix A is block
diagonal with k blocks. This also holds for the (unnormalized) Laplacian matrix
L= D-A, with D a diagonal matrix with the degrees of the nodes. In this paper
we propose to use the Reverse Cuthill-McKee (RCM) algorithm to obtain a block
diagonal form of L that reveals the number of connected components of the
graph. We present some theoretical results about the irreducibility of the
Laplacian matrix ordered by the RCM algorithm. As a practical application we
present a very efficient method to detect connected components with a
computational cost of O(m+n), where m is the number of edges and n the number
of nodes. The RCM method is implemented in some commercial packages such as
MATLAB and Mathematica. We carry out the computations using the MATLAB function
symrcm. Some numerical results are shown.
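A minimal sketch of the idea (using SciPy's `reverse_cuthill_mckee` in place of MATLAB's `symrcm`; the function name and details here are illustrative, not the authors' implementation): after RCM reordering, each connected component occupies a contiguous block of indices, so counting the diagonal blocks counts the components.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def count_components_rcm(A):
    """Count connected components from the RCM-ordered Laplacian L = D - A."""
    A = sp.csr_matrix(A)
    degrees = np.asarray(A.sum(axis=1)).ravel()
    L = (sp.diags(degrees) - A).tocsr()
    perm = reverse_cuthill_mckee(L, symmetric_mode=True)
    Lp = L[perm][:, perm].tocoo()
    n = L.shape[0]
    # reach[i] = largest index directly coupled to index i after reordering
    reach = np.arange(n)
    for i, j in zip(Lp.row, Lp.col):
        lo, hi = min(i, j), max(i, j)
        reach[lo] = max(reach[lo], hi)
    # A new diagonal block starts whenever the scan passes the current frontier.
    components, frontier = 0, -1
    for i in range(n):
        if i > frontier:
            components += 1
        frontier = max(frontier, reach[i])
    return components
```

For a graph with k components the RCM-permuted Laplacian is block diagonal with k blocks, which the frontier scan detects in O(m+n).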
|
1206.5754
|
Bayesian Modeling with Gaussian Processes using the GPstuff Toolbox
|
stat.ML cs.AI cs.MS
|
Gaussian processes (GP) are powerful tools for probabilistic modeling
purposes. They can be used to define prior distributions over latent functions
in hierarchical Bayesian models. The prior over functions is defined implicitly
by the mean and covariance function, which determine the smoothness and
variability of the function. The inference can then be conducted directly in
the function space by evaluating or approximating the posterior process.
Despite their attractive theoretical properties, GPs pose practical
challenges in their implementation. GPstuff is a versatile collection of
computational tools for GP models, compatible with MATLAB and Octave on Linux
and Windows. It includes, among others, various inference methods, sparse
approximations and tools for model assessment. In this work, we review these
tools and demonstrate the use of GPstuff in several models.
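GPstuff itself runs in MATLAB/Octave; as a language-neutral illustration of the basic workflow it supports (a GP prior fixed by a covariance function, with posterior inference carried out directly in function space), here is a minimal NumPy sketch using a squared-exponential covariance. The toy data and hyperparameters are my own, not GPstuff code:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance; ell sets smoothness, sf the scale."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # training inputs
y = np.sin(X)                                # observations
Xs = np.linspace(-2.0, 2.0, 9)               # test inputs

K = rbf(X, X) + 1e-8 * np.eye(len(X))        # jitter for numerical stability
mu = rbf(Xs, X) @ np.linalg.solve(K, y)      # GP posterior mean at Xs
```

With negligible noise the posterior mean interpolates the training points; in GPstuff the same steps are handled by its model-construction and inference routines.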
|
1206.5762
|
Geometric WOM codes and coding strategies for multilevel flash memories
|
cs.IT math.CO math.IT
|
This paper investigates the design and application of write-once memory (WOM)
codes for flash memory storage. Using ideas from Merkx ('84), we present a
construction of WOM codes based on finite Euclidean geometries over
$\mathbb{F}_2$. This construction yields WOM codes with new parameters and
provides insight into the criterion that incidence structures should satisfy to
give rise to good codes. We also analyze methods of adapting binary WOM codes
for use on multilevel flash cells. In particular, we give two strategies based
on different rewrite objectives. A brief discussion of the average-write
performance of these strategies, as well as concatenation methods for WOM codes
is also provided.
|
1206.5766
|
Learning mixtures of spherical Gaussians: moment methods and spectral
decompositions
|
cs.LG stat.ML
|
This work provides a computationally efficient and statistically consistent
moment-based estimator for mixtures of spherical Gaussians. Under the condition
that component means are in general position, a simple spectral decomposition
technique yields consistent parameter estimates from low-order observable
moments, without additional minimum separation assumptions needed by previous
computationally efficient estimation procedures. Thus computational and
information-theoretic barriers to efficient estimation in mixture models are
precluded when the mixture components have means in general position and
spherical covariances. Some connections are made to estimation problems related
to independent component analysis.
|
1206.5771
|
The evolution of representation in simple cognitive networks
|
q-bio.NC cs.NE q-bio.PE
|
Representations are internal models of the environment that can provide
guidance to a behaving agent, even in the absence of sensory information. It is
not clear how representations are developed and whether or not they are
necessary or even essential for intelligent behavior. We argue here that the
ability to represent relevant features of the environment is the expected
consequence of an adaptive process, give a formal definition of representation
based on information theory, and quantify it with a measure R. To measure how R
changes over time, we evolve two types of networks---an artificial neural
network and a network of hidden Markov gates---to solve a categorization task
using a genetic algorithm. We find that the capacity to represent increases
during evolutionary adaptation, and that agents form representations of their
environment during their lifetime. This ability allows the agents to act on
sensory inputs in the context of their acquired representations and enables
complex and context-dependent behavior. We examine which concepts (features of
the environment) our networks are representing, how the representations are
logically encoded in the networks, and how they form as an agent behaves to
solve a task. We conclude that R should be able to quantify the representations
within any cognitive system, and should be predictive of an agent's long-term
adaptive success.
|
1206.5780
|
Black-box optimization benchmarking of IPOP-saACM-ES and BIPOP-saACM-ES
on the BBOB-2012 noiseless testbed
|
cs.NE
|
In this paper, we study the performance of IPOP-saACM-ES and BIPOP-saACM-ES,
recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation
Evolution Strategies. Both algorithms were tested using restarts until a total
number of function evaluations of $10^6D$ was reached, where $D$ is the
dimension of the function search space. We compared the surrogate-assisted
algorithms with their surrogate-less counterparts, two algorithms with some of
the best overall performance observed during
BBOB-2009 and BBOB-2010. The comparison shows that the surrogate-assisted
versions outperform the original CMA-ES algorithms by a factor from 2 to 4 on 8
out of 24 noiseless benchmark problems, showing the best results among all
algorithms of the BBOB-2009 and BBOB-2010 on Ellipsoid, Discus, Bent Cigar,
Sharp Ridge and Sum of different powers functions.
|
1206.5782
|
Spectrum Sharing with Distributed Relay Selection and Clustering
|
cs.IT math.IT
|
We consider a spectrum-sharing network where n secondary relays are used to
increase secondary rate and also mitigate interference on the primary by
reducing the required overall secondary emitted power. We propose a distributed
relay selection and clustering framework, obtain closed-form expressions for
the secondary rate, and show that secondary rate increases proportionally to
log n. Remarkably, this is on the same order as the growth rate obtained in the
absence of a primary system and its imposed constraints. Our results show that
to maximize the rate, the secondary relays must transmit with power
proportional to n^(-1) (thus the sum of relay powers is bounded) and also that
the secondary source may not operate at its maximum allowable power. The
tradeoff between the secondary rate and the interference on the primary is also
characterized, showing that the primary interference can be reduced
asymptotically to zero as n increases, while still maintaining a secondary rate
that grows proportionally to log n. Finally, to address the rate loss due to
half-duplex relaying in the secondary, we propose an alternating relay protocol
and investigate its performance.
|
1206.5790
|
Stabilization of 2D discrete switched systems with state delays under
asynchronous switching
|
math.DS cs.SY math.OC
|
This paper is concerned with the problem of robust stabilization for a class
of uncertain 2D discrete switched systems with state delays represented by a
model of Roesser type, where the switching instants of the controller
experience delays with respect to those of the system, and the parameter
uncertainties are assumed to be norm-bounded. A state feedback controller is
proposed to guarantee exponential stability for such 2D discrete switched
systems, and the dwell time approach is utilized for the stability analysis and
controller design. A numerical example is given to illustrate the effectiveness
of the proposed method.
|
1206.5833
|
Revision of Defeasible Logic Preferences
|
cs.AI
|
There are several contexts of non-monotonic reasoning where a priority
between rules is established whose purpose is preventing conflicts.
One formalism that has been widely employed for non-monotonic reasoning is
the sceptical one known as Defeasible Logic. In Defeasible Logic, the tool used
for conflict resolution is a preference relation between rules that
establishes the priority among them.
In this paper we investigate how to modify such a preference relation in a
defeasible logic theory in order to change the conclusions of the theory
itself. We argue that the approach we adopt is applicable to legal reasoning
where users, in general, cannot change facts or rules, but can propose their
preferences about the relative strength of the rules.
We provide a comprehensive study of the possible combinatorial cases and we
identify and analyse the cases where the revision process is successful.
After this analysis, we identify three revision/update operators and study
them against the AGM postulates for belief revision operators, finding that
only some of these postulates are satisfied by the three operators.
|
1206.5851
|
A meta-analysis of state-of-the-art electoral prediction from Twitter
data
|
cs.SI cs.CL cs.CY physics.soc-ph
|
Electoral prediction from Twitter data is an appealing research topic. It
seems relatively straightforward and the prevailing view is overly optimistic.
This is problematic because while simple approaches are assumed to be good
enough, core problems are not addressed. Thus, this paper aims to (1) provide a
balanced and critical review of the state of the art; (2) cast light on the
presumed predictive power of Twitter data; and (3) depict a roadmap to push
forward the field. Hence, a scheme to characterize Twitter prediction methods
is proposed. It covers every aspect from data collection to performance
evaluation, through data processing and vote inference. Using that scheme,
prior research is analyzed and organized to explain the main approaches taken
up to date but also their weaknesses. This is the first meta-analysis of the
whole body of research regarding electoral prediction from Twitter data. It
reveals that the presumed predictive power of Twitter data has been rather
exaggerated: although social media may provide a glimpse of electoral outcomes,
current research does not provide strong evidence that it can replace
traditional polls. Finally, future lines of research along with
a set of requirements they must fulfill are provided.
|
1206.5856
|
Crowd Disasters as Systemic Failures: Analysis of the Love Parade
Disaster
|
nlin.CD cs.SI physics.soc-ph
|
Each year, crowd disasters happen in different areas of the world. How and
why do such disasters happen? Are the fatalities caused by relentless behavior
of people or a psychological state of panic that makes the crowd 'go mad'? Or
are they a tragic consequence of a breakdown of coordination? These and other
questions are addressed, based on a qualitative analysis of publicly available
videos and materials, which document the planning and organization of the Love
Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our
analysis reveals a number of misunderstandings that have spread widely. We also
provide a new perspective on concepts such as 'intentional pushing', 'mass
panic', 'stampede', and 'crowd crushes'. The focus of our analysis is on the
contributing causal factors and their mutual interdependencies, not on legal
issues or the judgment of personal or institutional responsibilities. Video
recordings show that, in Duisburg, people stumbled and piled up due to a
'domino effect', resulting from a phenomenon called 'crowd turbulence' or
'crowd quake'. Crowd quakes are a typical reason for crowd disasters, to be
distinguished from crowd disasters resulting from 'panic stampedes' or 'crowd
crushes'. In Duisburg, crowd turbulence was the consequence of amplifying
feedback and cascading effects, which are typical for systemic instabilities.
Accordingly, things can go terribly wrong even though nobody has bad
intentions. Comparing the incident in Duisburg with others, we give recommendations
to help prevent future crowd disasters. In particular, we introduce a new scale
to assess the criticality of conditions in the crowd. This may allow
preventative measures to be taken earlier on. Furthermore, we discuss the
merits and limitations of citizen science for public investigation, considering
that today, almost every event is recorded and reflected in the World Wide Web.
|
1206.5863
|
Improved Constructions of Frameproof Codes
|
math.CO cs.IT math.IT
|
Frameproof codes are used to preserve the security in the context of
coalition when fingerprinting digital data. Let $M_{c,l}(q)$ be the largest
cardinality of a $q$-ary $c$-frameproof code of length $l$ and
$R_{c,l}=\lim_{q\rightarrow \infty}M_{c,l}(q)/q^{\lceil l/c\rceil}$. It has
been determined by Blackburn that $R_{c,l}=1$ when $l\equiv 1\ (\bmod\ c)$,
$R_{c,l}=2$ when $c=2$ and $l$ is even, and $R_{3,5}=5/3$. In this paper, we
give a recursive construction for $c$-frameproof codes of length $l$ with
respect to the alphabet size $q$. As applications of this construction, we
establish the existence results for $q$-ary $c$-frameproof codes of length
$c+2$ and size $\frac{c+2}{c}(q-1)^2+1$ for all odd $q$ when $c=2$ and for all
$q\equiv 4\pmod{6}$ when $c=3$. Furthermore, we show that $R_{c,c+2}=(c+2)/c$
meeting the upper bound given by Blackburn, for all integers $c$ such that
$c+1$ is a prime power.
|
1206.5865
|
Efficient Computing Budget Allocation for Simulation-based Optimization
with Stochastic Simulation Time
|
math.OC cs.SY
|
The dynamics of many systems nowadays follow not only physical laws but also
man-made rules. These systems are known as discrete event dynamic systems and
their performances can be accurately evaluated only through simulations.
Existing studies on simulation-based optimization (SBO) usually assume
deterministic simulation time for each replication. However, in many
applications such as evacuation, smoke detection, and territory exploration,
the simulation time is stochastic due to the randomness in the system behavior.
In this paper, we consider computing budget allocation for SBO with stochastic
simulation time, which, to the authors' best knowledge, has not been addressed
in the existing literature. We make the following major
contribution. The relationship between simulation time and performance
estimation accuracy is quantified. It is shown that when the asymptotic
performance is of interest, only the mean value of the individual simulation
time matters. Then, based on the existing optimal computing budget allocation
(OCBA) method for deterministic simulation time, we develop OCBA for stochastic
simulation time (OCBAS), and show that OCBAS is asymptotically optimal.
Numerical experiments are used to discuss the impact of the variance of
simulation time, the impact of correlated simulation time and performance
estimation, and to demonstrate the performance of OCBAS on a smoke detection
problem in a wireless sensor network. The numerical results also show that OCBA
for deterministic simulation time is robust even when the simulation time is
stochastic.
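For reference, the baseline OCBA allocation for deterministic simulation time (the method the paper extends) can be sketched as below. The ratios follow the classical OCBA rules of Chen et al. under a smaller-is-better objective with a unique best design; the function name is my own:

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Split `budget` replications across designs using the classical OCBA
    ratios (smaller-is-better, unique best design assumed)."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(np.argmin(means))            # current best design
    delta = means - means[b]             # optimality gaps
    others = [i for i in range(len(means)) if i != b]
    ref = others[0]
    ratio = np.ones_like(means)
    for i in others:
        # N_i / N_ref = ((sigma_i / delta_i) / (sigma_ref / delta_ref))^2
        ratio[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
    # N_b = sigma_b * sqrt(sum_i N_i^2 / sigma_i^2)
    ratio[b] = stds[b] * np.sqrt(np.sum(ratio[others] ** 2 / stds[others] ** 2))
    return budget * ratio / ratio.sum()
```

Designs whose sample means are close to the best, or whose estimates are noisy, receive more of the budget; OCBAS replaces the per-replication budget with expected simulation time.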
|
1206.5882
|
Exact Recovery of Sparsely-Used Dictionaries
|
cs.LG cs.IT math.IT
|
We consider the problem of learning sparsely used dictionaries with an
arbitrary square dictionary and a random, sparse coefficient matrix. We prove
that $O (n \log n)$ samples are sufficient to uniquely determine the
coefficient matrix. Based on this proof, we design a polynomial-time algorithm,
called Exact Recovery of Sparsely-Used Dictionaries (ER-SpUD), and prove that,
with high probability, it recovers the dictionary and coefficient matrix when
the coefficient matrix is sufficiently sparse. Simulation results show that
ER-SpUD reveals the true dictionary and the coefficients with higher
probability than many state-of-the-art algorithms.
|
1206.5884
|
MAINWAVE: Multi Agents and Issues Negotiation for Web using Alliance
Virtual Engine
|
cs.MA
|
This paper showcases an improved architecture for a complete negotiation
system that permits multi-party, multi-issue negotiation. The concepts of
multithreading and concurrency have been utilized to perform parallel
execution. A negotiation history has been implemented that stores all the
records of the messages exchanged for every successful and rejected negotiation
process, and the concepts of artificial intelligence are applied to determine
proper weights for a valid negotiation mechanism. The issues are arranged in a
hierarchical pattern so as to simplify the representation, and each issue is
assigned a priority that reflects its relative importance. Utilities are
refined by taking non-functional attributes into account. To avoid overloading
the system, only a maximum number of parties is allowed to participate in the
entire mechanism; if more parties arrive, they are put into a waiting queue
according to criteria such as first come first served or relative priorities.
This helps in fault tolerance. The system also supports the formation of
alliances among the various parties while carrying out a negotiation.
|
1206.5901
|
A nonlocal model for fluid-structure interaction with applications in
hydraulic fracturing
|
math.NA cs.CE physics.geo-ph
|
Modeling important engineering problems related to flow-induced damage (in
the context of hydraulic fracturing among others) depends critically on
characterizing the interaction of porous media and interstitial fluid flow.
This work presents a new formulation for incorporating the effects of pore
pressure in a nonlocal representation of solid mechanics. The result is a
framework for modeling fluid-structure interaction problems with the
discontinuity capturing advantages of an integral based formulation. A number
of numerical examples are used to show that the proposed formulation can be
applied to measure the effect of leak-off during hydraulic fracturing as well
as modeling consolidation of fluid saturated rock and surface subsidence caused
by fluid extraction from a geologic reservoir. The formulation incorporates the
effect of pore pressure in the constitutive description of the porous material
in a way that is appropriate for nonlinear materials, easily implemented in
existing codes, straightforward in its evaluation (no history dependence), and
justifiable from first principles. A mixture theory approach is used (deviating
only slightly where necessary) to motivate an alteration to the peridynamic
pressure term based on the fluid pore pressure. The resulting formulation has a
number of similarities to the effective stress principle developed by Terzaghi
and Biot and close correspondence is shown between the proposed method and the
classical effective stress principle.
|
1206.5915
|
Graph Based Classification Methods Using Inaccurate External Classifier
Information
|
cs.LG
|
In this paper we consider the problem of collectively classifying entities
where relational information is available across the entities. In practice, an
inaccurate class distribution for each entity is often available from another
(external) classifier. For example, this distribution could come from a
classifier built using content features or a simple dictionary. Given the
relational and inaccurate external classifier information, we consider two
graph based settings in which the problem of collective classification can be
solved. In the first setting the class distribution is used to fix labels to a
subset of nodes and the labels for the remaining nodes are obtained like in a
transductive setting. In the other setting the class distributions of all nodes
are used to define the fitting function part of a graph regularized objective
function. We define a generalized objective function that handles both the
settings. Methods like harmonic Gaussian field and local-global consistency
(LGC) reported in the literature can be seen as special cases. We extend the
LGC and weighted vote relational neighbor classification (WvRN) methods to
support usage of external classifier information. We also propose an efficient
least squares regularization (LSR) based method and relate it to information
regularization methods. All the methods are evaluated on several benchmark and
real world datasets. Considering together speed, robustness and accuracy,
experimental results indicate that the LSR and WvRN-extension methods perform
better than other methods.
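The local-global consistency (LGC) method mentioned above iterates $F \leftarrow \alpha S F + (1-\alpha) Y$ with the symmetrically normalized affinity $S = D^{-1/2} W D^{-1/2}$. A minimal sketch of plain LGC (Zhou et al.), without the external-classifier extension proposed in the paper:

```python
import numpy as np

def lgc(W, Y, alpha=0.9, iters=200):
    """Local-global consistency label propagation.
    W: symmetric affinity matrix (n x n), Y: one-hot seed labels (n x c)."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv @ W @ Dinv                  # normalized affinity
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1.0 - alpha) * Y   # propagate + clamp to seeds
    return F.argmax(axis=1)
```

In the paper's second setting, the hard one-hot seed matrix Y would be replaced by the (possibly inaccurate) class distributions from the external classifier.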
|
1206.5919
|
Performance Improvement of Iterative Multiuser Detection for Large
Sparsely-Spread CDMA Systems by Spatial Coupling
|
cs.IT math.IT
|
Kudekar et al. proved that the belief-propagation (BP) performance for
low-density parity check (LDPC) codes can be boosted up to the
maximum-a-posteriori (MAP) performance by spatial coupling. In this paper,
spatial coupling is applied to sparsely-spread code-division multiple-access
(CDMA) systems to improve the performance of iterative multiuser detection
based on BP. Two iterative receivers based on BP are considered: One receiver
is based on exact BP and the other on an approximate BP with Gaussian
approximation. The performance of the two BP receivers is evaluated via density
evolution (DE) in the dense limit after taking the large-system limit, in which
the number of users and the spreading factor tend to infinity while their ratio
is kept constant. The two BP receivers are shown to achieve the same
performance as each other in these limits. Furthermore, taking a continuum
limit for the obtained DE equations implies that the performance of the two BP
receivers can be improved up to the performance achieved by the symbol-wise MAP
detection, called individually-optimal detection, via spatial coupling.
Numerical simulations show that spatial coupling can provide a significant
improvement in bit error rate for finite-sized systems especially in the region
of high system loads.
|
1206.5928
|
CAPIR: Collaborative Action Planning with Intention Recognition
|
cs.AI
|
We apply decision theoretic techniques to construct non-player characters
that are able to assist a human player in collaborative games. The method is
based on solving Markov decision processes, which can be difficult when the
game state is described by many variables. To scale to more complex games, the
method allows decomposition of a game task into subtasks, each of which can be
modelled by a Markov decision process. Intention recognition is used to infer
the subtask that the human is currently performing, allowing the helper to
assist the human in performing the correct task. Experiments show that the
method can be effective, giving near-human level performance in helping a human
in a collaborative game.
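The subtask models used by the method are ordinary Markov decision processes; each can be solved by standard value iteration. A generic textbook sketch (not CAPIR's specific decomposition):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve an MDP by value iteration.
    P: (A, S, S) transition tensor, R: (S, A) rewards.
    Returns optimal values V and a greedy policy."""
    S = R.shape[0]
    V = np.zeros(S)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        Vn = Q.max(axis=1)
        if np.max(np.abs(Vn - V)) < tol:
            return Vn, Q.argmax(axis=1)
        V = Vn
```

In CAPIR, one such solution is computed per subtask, and intention recognition picks which subtask's policy to follow at run time.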
|
1206.5930
|
Linear spaces and transversal designs: k-anonymous combinatorial
configurations for anonymous database search
|
cs.CR cs.DB math.CO
|
Anonymous database search protocols allow users to query a database
anonymously. This can be achieved by letting the users form a peer-to-peer
community and post queries on behalf of each other. In this article we discuss
an application of combinatorial configurations (also known as regular and
uniform partial linear spaces) to a protocol for anonymous database search,
which defines the key distribution within the user community that implements
the protocol. The degree of anonymity that can be provided by the protocol is
determined by properties of the neighborhoods and the closed neighborhoods of
the points in the combinatorial configuration that is used. Combinatorial
configurations with unique neighborhoods or unique closed neighborhoods are
described and we show how to attack the protocol if such configurations are
used. We apply k-anonymity arguments and present the combinatorial
configurations with k-anonymous neighborhoods and with k-anonymous closed
neighborhoods. The transversal designs and the linear spaces are presented as
optimal configurations among the configurations with k-anonymous neighborhoods
and k-anonymous closed neighborhoods, respectively.
|
1206.5937
|
Freeway ramp metering control made easy and efficient
|
math.OC cs.SY
|
"Model-free" control and the related "intelligent" proportional-integral (PI)
controllers are successfully applied to freeway ramp metering control.
Implementing moreover the corresponding control strategy is straightforward.
Numerical simulations on the other hand need the identification of quite
complex quantities like the free flow sp\^eed and the critical density. This is
achieved thanks to new estimation techniques where the differentiation of noisy
signals plays a key r\^ole. Several excellent computer simulations are provided
and analyzed.
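A toy sketch of the "intelligent proportional" idea behind such model-free controllers, on a first-order plant with constants chosen purely for illustration (the paper's actual ramp-metering controller is more elaborate): the ultra-local model $\dot y = F + \alpha u$ is used to estimate the unknown lumped term $F$ from recent measurements, which the control law then cancels.

```python
# Intelligent-P (iP) model-free control sketch. The controller only assumes
# the ultra-local model y' = F + alpha*u and never sees the true plant.
dt, alpha, Kp, y_ref = 0.01, 2.0, 5.0, 1.0
y = y_prev = u_prev = 0.0
for _ in range(2000):
    F_hat = (y - y_prev) / dt - alpha * u_prev   # estimate of lumped dynamics
    u = -(F_hat + Kp * (y - y_ref)) / alpha      # iP control law
    y_prev, u_prev = y, u
    y += dt * (-y + 2.0 * u + 0.5)               # "unknown" plant
```

After the transient, y settles at the reference even though the controller never identifies the plant's true dynamics.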
|
1206.5940
|
Bootstrapping Monte Carlo Tree Search with an Imperfect Heuristic
|
cs.AI
|
We consider the problem of using a heuristic policy to improve the value
approximation by the Upper Confidence Bound applied in Trees (UCT) algorithm in
non-adversarial settings such as planning with large-state space Markov
Decision Processes. Current improvements to UCT focus on either changing the
action selection formula at the internal nodes or the rollout policy at the
leaf nodes of the search tree. In this work, we propose to add an auxiliary arm
to each of the internal nodes, and always use the heuristic policy to roll out
simulations at the auxiliary arms. The method aims to get fast convergence to
optimal values at states where the heuristic policy is optimal, while retaining
an approximation similar to the original UCT in other states. We show that
bootstrapping with the proposed method in the new algorithm, UCT-Aux, performs
better compared to the original UCT algorithm and its variants in two benchmark
experiment settings. We also examine conditions under which UCT-Aux works well.
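At internal nodes, UCT selects among child arms (including, in UCT-Aux, the auxiliary arm) via the UCB1 formula. A minimal version of that selection rule, with the auxiliary-arm bookkeeping omitted:

```python
import math

def ucb1_select(counts, values, c=math.sqrt(2)):
    """UCB1 rule used at UCT internal nodes: pick the arm maximizing
    mean value + c * sqrt(ln N / n_i); untried arms are chosen first."""
    total = sum(counts)
    best, best_score = None, -float("inf")
    for i, (n, v) in enumerate(zip(counts, values)):
        if n == 0:
            return i                     # explore untried arms immediately
        score = v / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

`counts` holds visit counts and `values` the summed returns per arm; the exploration term shrinks as an arm accumulates visits.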
|
1206.5980
|
Information Geometric Superactivation of Asymptotic Quantum Capacity and
Classical Zero-Error Capacity of Zero-Capacity Quantum Channels
|
quant-ph cs.IT math.IT
|
The superactivation of zero-capacity quantum channels makes it possible to
use two zero-capacity quantum channels with a positive joint capacity at the
output. Currently, we have no theoretical background for describing all
possible combinations of superactive zero-capacity channels; hence, there may
be many other possible combinations. In this PhD Thesis I provide an
algorithmic solution to the problem of superactivation and prove that the
superactivation effect is rooted in information geometric issues.
|
1206.5986
|
On the Theorem of Uniform Recovery of Random Sampling Matrices
|
cs.IT cs.NA math.IT
|
We consider two theorems from the theory of compressive sensing: mainly, a
theorem concerning uniform recovery of random sampling matrices, where the
number of samples needed in order to recover an $s$-sparse signal from linear
measurements (with high probability) is known to be $m\gtrsim s(\ln s)^3\ln N$.
We present new and improved constants together with what we consider to be a
more explicit proof, one that also allows for a slightly larger class of
$m\times N$-matrices, by considering what we call \emph{low entropy}. We also
present an improved condition on the so-called restricted isometry constants,
$\delta_s$, ensuring sparse recovery via $\ell^1$-minimization. We show that
$\delta_{2s}<4/\sqrt{41}$ is sufficient and that this can be improved further
to almost allow for a sufficient condition of the type $\delta_{2s}<2/3$.
|
1206.5996
|
Quantum-assisted and Quantum-based Solutions in Wireless Systems
|
quant-ph cs.IT math.IT
|
In wireless systems there is always a trade-off between reducing the transmit
power and mitigating the resultant signal-degradation imposed by the
transmit-power reduction with the aid of sophisticated receiver algorithms,
when considering the total energy consumption. Quantum-assisted wireless
communications exploits the extra computing power offered by quantum mechanics
based architectures. This paper summarizes some recent results in quantum
computing and the corresponding application areas in wireless communications.
|
1206.6003
|
Stabilizing Nonuniformly Quantized Compressed Sensing with Scalar
Companders
|
cs.IT math.IT
|
This paper studies the problem of reconstructing sparse or compressible
signals from compressed sensing measurements that have undergone nonuniform
quantization. Previous approaches to this Quantized Compressed Sensing (QCS)
problem based on Gaussian models (bounded l2-norm) for the quantization
distortion yield results that, while often acceptable, may not be fully
consistent: re-measurement and quantization of the reconstructed signal do not
necessarily match the initial observations. Quantization distortion instead
more closely resembles heteroscedastic uniform noise, with variance depending
on the observed quantization bin. Generalizing our previous work on uniform
quantization, we show that for nonuniform quantizers described by the
"compander" formalism, quantization distortion may be better characterized as
having bounded weighted lp-norm (p >= 2), for a particular weighting. We
develop a new reconstruction approach, termed Generalized Basis Pursuit DeNoise
(GBPDN), which minimizes the sparsity of the reconstructed signal under this
weighted lp-norm fidelity constraint. We prove that for B bits per measurement
and under the oversampled QCS scenario (when the number of measurements is
large compared to the signal sparsity) if the sensing matrix satisfies a
proposed generalized Restricted Isometry Property, then, GBPDN provides a
reconstruction error of sparse signals which decreases like
O(2^{-B}/\sqrt{p+1}): a reduction by a factor \sqrt{p+1} relative to that
produced by using the l2-norm. Besides the QCS scenario, we also show that
GBPDN applies straightforwardly to the related case of CS measurements
corrupted by heteroscedastic Generalized Gaussian noise with provable
reconstruction error reduction. Finally, we describe an efficient numerical
procedure for computing GBPDN via a primal-dual convex optimization scheme, and
demonstrate its effectiveness through simulations.
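As background for the compander formalism, the following sketch (a generic mu-law compander with illustrative parameters, not taken from the paper) shows why the quantization distortion is heteroscedastic: uniform quantization in the companded domain produces bins whose width, and hence error magnitude, depends on the observed bin:

```python
import math

MU = 255.0  # standard mu-law compander parameter

def compress(x):
    # mu-law compander: maps [-1, 1] onto [-1, 1], finer resolution near 0
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    # inverse compander
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

def quantize(x, bits):
    """Nonuniform quantizer: uniform mid-rise quantization in the companded
    domain (no overflow handling; inputs are assumed well inside [-1, 1])."""
    levels = 2 ** bits
    step = 2.0 / levels
    q = (math.floor(compress(x) / step) + 0.5) * step
    return expand(q)

# The distortion is heteroscedastic: bins near 0 are much narrower than bins
# near +-1, so the quantization error depends strongly on the observed bin.
err_small = abs(quantize(0.01, 4) - 0.01)
err_large = abs(quantize(0.9, 4) - 0.9)
print(err_small, err_large)
```

The weighted lp-norm fidelity constraint in GBPDN is precisely a way of accounting for this bin-dependent error instead of treating it as homoscedastic Gaussian noise.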
|
1206.6006
|
Some bounds on the size of codes
|
cs.IT cs.DM math.CO math.IT
|
We present some upper bounds on the size of non-linear codes and their
restriction to systematic codes and linear codes. These bounds are independent
of other known theoretical bounds, e.g. the Griesmer bound, the Johnson bound
or the Plotkin bound, and one of these is actually an improvement of a bound by
Litsyn and Laihonen. Our experiments show that in some cases (the majority of
cases for some q) our bounds provide the best value, compared to all other
theoretical bounds.
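For comparison, here is a minimal sketch of the Griesmer bound cited above (the paper's own bounds are not reproduced in the abstract):

```python
import math

def griesmer_bound(k, d, q=2):
    """Griesmer lower bound on the length n of a linear [n, k, d]_q code:
    n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(math.ceil(d / q ** i) for i in range(k))

# The [7, 4, 3]_2 Hamming code meets the Griesmer bound with equality:
print(griesmer_bound(4, 3))  # 3 + 2 + 1 + 1 = 7
```

Upper bounds on code size, such as those in the paper, are complementary: together with length lower bounds like Griesmer's they bracket what parameter combinations are achievable.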
|
1206.6015
|
Transductive Classification Methods for Mixed Graphs
|
cs.LG stat.ML
|
In this paper we provide a principled approach to solve a transductive
classification problem involving a similarity graph (edges tend to connect
nodes with the same label) and a dissimilarity graph (edges tend to connect
nodes with opposing labels). Most of the existing methods, e.g., Information
Regularization (IR) and the Weighted vote Relational Neighbor classifier
(WvRN), assume that the given graph is only a similarity graph. We extend the IR and WvRN
methods to deal with mixed graphs. We evaluate the proposed extensions on
several benchmark datasets as well as two real world datasets and demonstrate
the usefulness of our ideas.
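The IR and WvRN extensions themselves are not spelled out in the abstract; the toy sketch below only illustrates the underlying idea of propagation on a mixed graph, with similarity edges pulling neighbors toward the same score and dissimilarity edges pushing them apart (the graph and labels are hypothetical):

```python
def propagate(n, sim_edges, dis_edges, seeds, iters=50):
    """Toy transductive propagation on a mixed graph. Labels are +1/-1 scores;
    `seeds` maps node -> fixed label. Similarity edges average a neighbor's
    score in, dissimilarity edges flip the neighbor's vote."""
    f = [seeds.get(v, 0.0) for v in range(n)]
    for _ in range(iters):
        g = list(f)
        for v in range(n):
            if v in seeds:
                continue
            total, deg = 0.0, 0
            for (a, b) in sim_edges:
                if v in (a, b):
                    total += f[a + b - v]  # score of the other endpoint
                    deg += 1
            for (a, b) in dis_edges:
                if v in (a, b):
                    total -= f[a + b - v]  # flipped vote across dissimilar edge
                    deg += 1
            if deg:
                g[v] = total / deg
        f = g
    return f

# 4 nodes: node 0 labeled +1; 1 similar to 0; 2 dissimilar to 1; 3 similar to 2.
scores = propagate(4, sim_edges=[(0, 1), (2, 3)], dis_edges=[(1, 2)],
                   seeds={0: 1.0})
print(scores)
```

The propagation converges to scores near +1 for node 1 and near -1 for nodes 2 and 3, as the edge semantics dictate.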
|
1206.6030
|
An Additive Model View to Sparse Gaussian Process Classifier Design
|
cs.LG stat.ML
|
We consider the problem of designing a sparse Gaussian process classifier
(SGPC) that generalizes well. Viewing SGPC design as constructing an additive
model like in boosting, we present an efficient and effective SGPC design
method to perform a stage-wise optimization of a predictive loss function. We
introduce new methods for the two key components of any SGPC design, viz.,
site parameter estimation and basis vector selection. The proposed adaptive sampling
based basis vector selection method aids in achieving improved generalization
performance at a reduced computational cost. This method can also be used in
conjunction with any other site parameter estimation methods. It has similar
computational and storage complexities as the well-known information vector
machine and is suitable for large datasets. The hyperparameters can be
determined by optimizing a predictive loss function. The experimental results
show better generalization performance of the proposed basis vector selection
method on several benchmark datasets, particularly for relatively smaller basis
vector set sizes or on difficult datasets.
|
1206.6036
|
Temporal Heterogeneities Increase the Prevalence of Epidemics on
Evolving Networks
|
physics.soc-ph cs.SI physics.med-ph q-bio.PE
|
Empirical studies suggest that contact patterns follow heterogeneous
inter-event times, meaning that intervals of high activity are followed by
periods of inactivity. Combined with birth and death of individuals, these
temporal constraints affect the spread of infections in a non-trivial way and
are dependent on the particular contact dynamics. We propose a stochastic model
to generate temporal networks where vertices make instantaneous contacts
following heterogeneous inter-event times, and leave and enter the system at
fixed rates. We study how these temporal properties affect the prevalence of an
infection and estimate R0, the number of secondary infections, by modeling
simulated infections (SIR, SI and SIS) co-evolving with the network structure.
We find that heterogeneous contact patterns cause earlier and larger epidemics
on the SIR model in comparison to homogeneous scenarios. For SI and SIS,
heterogeneous patterns make the epidemic faster in the early stages (up to 90%
prevalence), followed by a slowdown in the asymptotic limit. In the presence of
birth and death, heterogeneous patterns always cause higher prevalence in
comparison to homogeneous scenarios with the same average inter-event
times. Our results suggest that R0 may be underestimated if temporal
heterogeneities are not taken into account in the modeling of epidemics.
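A minimal event-driven sketch of this setting can be written as follows. The parameters and the exponential versus Pareto inter-event distributions are illustrative stand-ins, not the paper's calibrated model; both distributions are matched to the same mean, so only the burstiness differs:

```python
import heapq
import random

def sir_temporal(n, t_max, beta, recovery, draw_gap, seed=1):
    """Event-driven SIR co-evolving with a temporal network: each vertex makes
    instantaneous contacts with random partners, with inter-event times drawn
    by draw_gap(rng). Returns the final fraction of ever-infected vertices."""
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                      # single seed infection at t = 0
    rec_time = {0: recovery}
    events = [(draw_gap(rng), v) for v in range(n)]
    heapq.heapify(events)
    while events:
        t, v = heapq.heappop(events)
        if t > t_max:
            break
        if state[v] == "I" and t >= rec_time[v]:   # lazy recovery of v
            state[v] = "R"
        u = rng.randrange(n - 1)
        u += u >= v                     # uniform random partner != v
        if state[u] == "I" and t >= rec_time[u]:   # lazy recovery of u
            state[u] = "R"
        if {state[v], state[u]} == {"S", "I"} and rng.random() < beta:
            w = v if state[v] == "S" else u
            state[w] = "I"
            rec_time[w] = t + recovery
        heapq.heappush(events, (t + draw_gap(rng), v))
    return sum(s != "S" for s in state) / n

def exp_gap(rng):                       # homogeneous inter-event times
    return rng.expovariate(1.0)

def pareto_gap(rng):                    # heavy-tailed, same mean (= 1)
    return rng.paretovariate(2.0) - 1.0

p_hom = sir_temporal(200, 50.0, 0.5, 2.0, exp_gap)
p_het = sir_temporal(200, 50.0, 0.5, 2.0, pareto_gap)
print(p_hom, p_het)
```

Birth and death of vertices, as in the paper's model, would add entry/exit events at fixed rates on top of this contact process.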
|
1206.6038
|
Predictive Approaches For Gaussian Process Classifier Model Selection
|
cs.LG stat.ML
|
In this paper we consider the problem of Gaussian process classifier (GPC)
model selection with different Leave-One-Out (LOO) Cross Validation (CV) based
optimization criteria and provide a practical algorithm using LOO predictive
distributions with such criteria to select hyperparameters. Apart from the
standard average negative logarithm of predictive probability (NLP), we also
consider smoothed versions of criteria such as F-measure and Weighted Error
Rate (WER), which are useful for handling imbalanced data. Unlike the
regression case, LOO predictive distributions for the classifier case are
intractable. We use approximate LOO predictive distributions derived from the
Expectation Propagation (EP) approximation. We conduct experiments on several
real world benchmark datasets. When the NLP criterion is used for optimizing
the hyperparameters, the predictive approaches show better or comparable NLP
generalization performance with existing GPC approaches. On the other hand,
when the F-measure criterion is used, the F-measure generalization performance
improves significantly on several datasets. Overall, the EP-based predictive
algorithm comes out as an excellent choice for GP classifier model selection
with different optimization criteria.
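The abstract does not give the paper's exact smoothing, but a common way to smooth the F-measure using LOO predictive probabilities is to replace the hard counts with expected counts, which yields a criterion that varies smoothly with the hyperparameters:

```python
def soft_f_measure(p_loo, y):
    """Smoothed F-measure from LOO predictive probabilities p_loo (probability
    of the positive class) and labels y in {0, 1}: expected true positives,
    false positives and false negatives replace the hard counts."""
    tp = sum(p for p, t in zip(p_loo, y) if t == 1)
    fp = sum(p for p, t in zip(p_loo, y) if t == 0)
    fn = sum(1 - p for p, t in zip(p_loo, y) if t == 1)
    return 2 * tp / (2 * tp + fp + fn)

# A confident, correct classifier scores near 1; a coin-flip scores lower.
good = soft_f_measure([0.95, 0.9, 0.1], [1, 1, 0])
flip = soft_f_measure([0.5, 0.5, 0.5], [1, 1, 0])
print(good, flip)
```

Because expected counts weight every example by its predictive probability, this criterion remains informative on imbalanced data where average predictive likelihood can be dominated by the majority class.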
|
1206.6080
|
Predicting the behavior of interacting humans by fusing data from
multiple sources
|
cs.AI cs.GT
|
Multi-fidelity methods combine inexpensive low-fidelity simulations with
costly but high-fidelity simulations to produce an accurate model of a system
of interest at minimal cost. They have proven useful in modeling physical
systems and have been applied to engineering problems such as wing-design
optimization. During human-in-the-loop experimentation, it has become
increasingly common to use online platforms, like Mechanical Turk, to run
low-fidelity experiments to gather human performance data in an efficient
manner. One concern with these experiments is that the results obtained from
the online environment generalize poorly to the actual domain of interest. To
address this limitation, we extend traditional multi-fidelity approaches to
allow us to combine fewer data points from high-fidelity human-in-the-loop
experiments with plentiful but less accurate data from low-fidelity experiments
to produce accurate models of how humans interact. We present both model-based
and model-free methods, and summarize the predictive performance of each method
under different conditions.
|
1206.6098
|
GUBS, a Behavior-based Language for Open System Dedicated to Synthetic
Biology
|
cs.PL cs.CE
|
In this article, we propose a domain specific language, GUBS (Genomic Unified
Behavior Specification), dedicated to the behavioral specification of synthetic
biological devices, viewed as discrete open dynamical systems. GUBS is a
rule-based declarative language. By contrast to a closed system, a program is
always a partial description of the behavior of the system. The semantics of
the language accounts for the existence of some hidden non-specified actions
possibly altering the behavior of the programmed device. The compilation
framework follows a scheme similar to automatic theorem proving, aiming at
improving synthetic biological design safety.
|
1206.6141
|
Directed Time Series Regression for Control
|
cs.LG cs.SY stat.ML
|
We propose directed time series regression, a new approach to estimating
parameters of time-series models for use in certainty equivalent model
predictive control. The approach combines merits of least squares regression
and empirical optimization. Through a computational study involving a
stochastic version of a well known inverted pendulum balancing problem, we
demonstrate that directed time series regression can generate significant
improvements in controller performance over either of the aforementioned
alternatives.
|
1206.6145
|
Two-way Networks: when Adaptation is Useless
|
cs.IT math.IT
|
In two-way networks, nodes act as both sources and destinations of messages.
This allows for "adaptation" at or "interaction" between the nodes - a node's
channel inputs may be functions of its message(s) and previously received
signals. How to best adapt is key to two-way communication, rendering it
challenging. However, examples exist of point-to-point channels where
adaptation is not beneficial from a capacity perspective. We ask whether
analogous examples exist for multi-user two-way networks.
We first consider deterministic two-way channel models: the binary modulo-2
addition channel and a generalization thereof, and the linear deterministic
channel. For these deterministic models we obtain the capacity region for the
two-way multiple access/broadcast channel, the two-way Z channel and the
two-way interference channel (IC). In all cases we permit all nodes to adapt
channel inputs to past outputs (except for portions of the linear deterministic
two-way IC where we only permit 2 of the 4 nodes to fully adapt). However, we
show that this adaptation is useless from a capacity region perspective and
capacity is achieved by strategies where the channel inputs at each use do not
adapt to previous inputs. Finally, we consider the Gaussian two-way IC, and
show that partial adaptation is useless when the interference is very strong.
In the strong and weak interference regimes, we show that the non-adaptive Han
and Kobayashi scheme, utilized in parallel in both directions, achieves the
symmetric rate of the fully (in some regimes) or partially (in the remaining
regimes) adaptive models to within a constant gap.
The central technical contribution is the derivation of new, computable outer
bounds which allow for adaptation. Inner bounds follow from non-adaptive
achievability schemes of the corresponding one-way channel models.
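The binary modulo-2 addition channel is the cleanest instance of the result: a non-adaptive strategy already recovers everything, so adapting inputs to past outputs cannot enlarge the capacity region. A minimal sketch:

```python
def two_way_xor_exchange(bits_a, bits_b):
    """Two-way binary modulo-2 addition channel: at each channel use both
    nodes transmit and each observes y = x_a XOR x_b. The non-adaptive
    strategy below (send your own message bit, XOR out your known input)
    recovers the other node's bit perfectly at every use."""
    decoded_at_a, decoded_at_b = [], []
    for xa, xb in zip(bits_a, bits_b):
        y = xa ^ xb                   # both nodes observe the same output
        decoded_at_a.append(y ^ xa)   # A removes its own known input ...
        decoded_at_b.append(y ^ xb)   # ... and so does B
    return decoded_at_a, decoded_at_b

a, b = [1, 0, 1, 1], [0, 0, 1, 0]
da, db = two_way_xor_exchange(a, b)
print(da == b and db == a)  # prints True: each side recovers the other's bits
```

Since 1 bit per direction per use is already achieved without adaptation, no adaptive scheme can do better on this channel; the paper's contribution is establishing analogous statements for multi-user two-way networks via new outer bounds.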
|
1206.6153
|
To Sense or Not To Sense
|
cs.IT cs.NI math.IT
|
A longer sensing time improves the sensing performance; however, with a fixed
frame size, the longer sensing time will reduce the allowable data transmission
time of the secondary user (SU). In this paper, we address the tradeoff
between sensing the primary channel for $\tau$ seconds of the time slot,
followed by random access, and randomly accessing the primary channel without
sensing, thereby avoiding the $\tau$ seconds spent on sensing. The SU senses
the primary channel to exploit its periods of silence: if the primary user
(PU) is declared to be idle, the SU randomly accesses the channel with some
access probability $a_s$. In addition, the SU may access the channel with some
access probability $b_s$ when it is declared to be busy, because the
probabilities of false alarm and misdetection cause significant secondary
throughput degradation and affect the PU's QoS. We propose variable sensing
duration schemes in which the SU optimizes the sensing time to achieve the
maximum stable throughput of both the primary and secondary queues. The
results reveal the performance gains of the proposed schemes over the
conventional sensing scheme, i.e., sensing the primary channel for $\tau$
seconds and accessing with probability 1 if the PU is declared to be idle. The
proposed schemes also outperform the random-access-without-sensing scheme.
The theoretical and numerical results show that pairs of misdetection and
false alarm probabilities may exist such that sensing the primary channel for
a very small duration outperforms sensing it for a large portion of the time
slot. In addition, for certain average arrival rates to the primary queue,
pairs of misdetection and false alarm probabilities may exist such that random
access without sensing outperforms random access with a long sensing duration.
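The flavor of the sensing-time optimization can be sketched with the classical energy-detector sensing-throughput tradeoff. All constants below (sampling rate, SNRs, target detection probability) are illustrative, and this is a simplification of the paper's queueing model with access probabilities $a_s$ and $b_s$:

```python
import math

def q_func(x):                      # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def secondary_throughput(tau, frame, p_idle, snr, fs=6e6):
    """Energy-detector sensing-throughput tradeoff: with the detection
    probability pinned at 0.9, the false-alarm probability falls as the
    sensing time tau grows, but so does the portion of the frame left for
    the SU's data transmission."""
    n_samples = tau * fs
    q_inv_pd = -1.2816              # Q^-1(0.9)
    p_fa = q_func(q_inv_pd * math.sqrt(2 * snr + 1)
                  + math.sqrt(n_samples) * snr)
    rate = math.log2(1 + 20)        # nominal SU rate on an idle channel
    return (frame - tau) / frame * p_idle * (1 - p_fa) * rate

frame = 0.1                         # 100 ms time slot
best = max(((k * 1e-4, secondary_throughput(k * 1e-4, frame, 0.7, 0.05))
            for k in range(1, 101)), key=lambda pair: pair[1])
print(best)  # the optimum is interior: tau is neither 0 nor the whole slot
```

The grid search over $\tau$ mirrors the paper's variable-sensing-duration idea: there is an interior optimum because very short sensing is unreliable while very long sensing starves the data phase.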
|
1206.6172
|
Outage Probability and Outage-Based Robust Beamforming for MIMO
Interference Channels with Imperfect Channel State Information
|
cs.IT math.IT
|
In this paper, the outage probability and outage-based beam design for
multiple-input multiple-output (MIMO) interference channels are considered.
First, closed-form expressions for the outage probability in MIMO interference
channels are derived under the assumption of Gaussian-distributed channel state
information (CSI) error, and the asymptotic behavior of the outage probability
as a function of several system parameters is examined by using the Chernoff
bound. It is shown that the outage probability decreases exponentially with
respect to the quality of CSI measured by the inverse of the mean square error
of CSI. Second, based on the derived outage probability expressions, an
iterative beam design algorithm for maximizing the sum outage rate is proposed.
Numerical results show that the proposed beam design algorithm yields better
sum outage rate performance than conventional algorithms such as interference
alignment developed under the assumption of perfect CSI.
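A closed-form MIMO derivation is beyond a short sketch, but the qualitative claim, outage probability shrinking as CSI quality improves, can be checked with a scalar Monte Carlo toy model (all parameters illustrative, not the paper's system model):

```python
import random

def outage_prob_mc(h_hat, err_std, rate, noise=1.0, trials=20000, seed=0):
    """Monte Carlo outage probability of a scalar link when the true channel
    equals the CSI estimate h_hat plus Gaussian error, a toy stand-in for the
    paper's closed-form MIMO interference-channel expressions."""
    rng = random.Random(seed)
    snr_needed = 2 ** rate - 1      # SNR threshold to support `rate`
    outages = 0
    for _ in range(trials):
        h = h_hat + rng.gauss(0, err_std)   # Gaussian-distributed CSI error
        if h * h / noise < snr_needed:
            outages += 1
    return outages / trials

# Better CSI (smaller error variance) yields a much smaller outage probability.
p_bad = outage_prob_mc(2.0, err_std=0.8, rate=1.0)
p_good = outage_prob_mc(2.0, err_std=0.2, rate=1.0)
print(p_bad, p_good)
```

The exponential decay of outage with the inverse mean-square CSI error, shown in the paper via the Chernoff bound, is visible here as the sharp drop between the two estimates.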
|
1206.6177
|
Structural analysis of high-index DAE for process simulation
|
cs.SY
|
This paper deals with the structural analysis problem of dynamic lumped
process high-index DAE models. We consider two methods for index reduction of
such models by differentiation: Pryce's method and the symbolic differential
elimination algorithm rifsimp. Discussion and comparison of these methods are
given via a class of fundamental process simulation examples. In particular,
the efficiency of the Pryce method is illustrated as a function of the number
of tanks in process design.
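Structurally, Pryce's method amounts to a highest-value transversal (assignment) problem on the DAE's signature matrix followed by a fixed-point iteration for the equation and variable offsets. A minimal sketch on the classic pendulum DAE, a standard textbook example rather than one of the paper's process models:

```python
from itertools import permutations

NEG = float("-inf")

def pryce_offsets(sigma):
    """Pryce's structural analysis: find a highest-value transversal (HVT) of
    the signature matrix, then iterate to the smallest offsets c (equations)
    and d (variables) with d[j] - c[i] >= sigma[i][j], with equality on the
    HVT. Converges for structurally well-posed problems."""
    n = len(sigma)
    hvt = max((p for p in permutations(range(n))
               if all(sigma[i][p[i]] > NEG for i in range(n))),
              key=lambda p: sum(sigma[i][p[i]] for i in range(n)))
    c = [0] * n
    while True:
        d = [max(sigma[i][j] + c[i] for i in range(n)) for j in range(n)]
        c_new = [d[hvt[i]] - sigma[i][hvt[i]] for i in range(n)]
        if c_new == c:
            return c, d
        c = c_new

# Simple pendulum DAE: x'' = x*lam, y'' = y*lam - g, x^2 + y^2 = L^2.
# sigma[i][j] = highest derivative order of variable j in equation i.
sigma = [[2, NEG, 0],
         [NEG, 2, 0],
         [0, 0, NEG]]
c, d = pryce_offsets(sigma)
print(c, d)  # the constraint (row 3) must be differentiated c[2] = 2 times
```

The offsets tell the index-reduction step how many times to differentiate each equation; here the length constraint needs two differentiations, consistent with the pendulum being an index-3 DAE. (The brute-force transversal search is only for illustration; real implementations solve the assignment problem with an LP or Hungarian-type algorithm.)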
|