id
|
title
|
categories
|
abstract
|
|---|---|---|---|
1206.3241
|
Approximating the Partition Function by Deleting and then Correcting for
Model Edges
|
cs.LG stat.ML
|
We propose an approach for approximating the partition function which is
based on two steps: (1) computing the partition function of a simplified model
which is obtained by deleting model edges, and (2) rectifying the result by
applying an edge-by-edge correction. The approach leads to an intuitive
framework in which one can trade-off the quality of an approximation with the
complexity of computing it. It also includes the Bethe free energy
approximation as a degenerate case. We develop the approach theoretically in
this paper and provide a number of empirical results that reveal its practical
utility.
|
1206.3242
|
Multi-View Learning in the Presence of View Disagreement
|
cs.LG stat.ML
|
Traditional multi-view learning approaches suffer in the presence of view
disagreement, i.e., when samples in each view do not belong to the same class
due to view corruption, occlusion or other noise processes. In this paper we
present a multi-view learning approach that uses a conditional entropy
criterion to detect view disagreement. Once detected, samples with view
disagreement are filtered and standard multi-view learning methods can be
successfully applied to the remaining samples. Experimental evaluation on
synthetic and audio-visual databases demonstrates that the detection and
filtering of view disagreement considerably increases the performance of
traditional multi-view learning approaches.
|
1206.3243
|
Bounds on the Bethe Free Energy for Gaussian Networks
|
cs.LG stat.ML
|
We address the problem of computing approximate marginals in Gaussian
probabilistic models by using mean field and fractional Bethe approximations.
As an extension of Welling and Teh (2001), we define the Gaussian fractional
Bethe free energy in terms of the moment parameters of the approximate
marginals and derive an upper and lower bound for it. We give necessary
conditions for the Gaussian fractional Bethe free energies to be bounded from
below. It turns out that the bounding condition is the same as the pairwise
normalizability condition derived by Malioutov et al. (2006) as a sufficient
condition for the convergence of the message passing algorithm. By giving a
counterexample, we disprove the conjecture in Welling and Teh (2001): even when
the Bethe free energy is not bounded from below, it can possess a local minimum
to which the minimization algorithms can converge.
|
1206.3244
|
Bayesian network learning by compiling to weighted MAX-SAT
|
cs.AI
|
The problem of learning discrete Bayesian networks from data is encoded as a
weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to
address it. For each dataset, the per-variable summands of the (BDeu) marginal
likelihood for different choices of parents ('family scores') are computed
prior to applying MaxWalkSat. Each permissible choice of parents for each
variable is encoded as a distinct propositional atom and the associated family
score encoded as a 'soft' weighted single-literal clause. Two approaches to
enforcing acyclicity are considered: either by encoding the ancestor relation
or by attaching a total order to each graph and encoding that. The latter
approach gives better results. Learning experiments have been conducted on 21
synthetic datasets sampled from 7 BNs. The largest dataset has 10,000
datapoints and 60 variables producing (for the 'ancestor' encoding) a weighted
CNF input file with 19,932 atoms and 269,367 clauses. For most datasets,
MaxWalkSat quickly finds BNs with higher BDeu score than the 'true' BN. The
effect of adding prior information is assessed. It is further shown that
Bayesian model averaging can be effected by collecting BNs generated during the
search.
|
1206.3245
|
Identifying Optimal Sequential Decisions
|
cs.AI math.ST stat.ME stat.TH
|
We consider conditions that allow us to find an optimal strategy for
sequential decisions from a given data situation. For the case where all
interventions are unconditional (atomic), identifiability has been discussed by
Pearl & Robins (1995). We argue here that an optimal strategy must be
conditional, i.e. take the information available at each decision point into
account. We show that the identification of an optimal sequential decision
strategy is more restrictive, in the sense that conditional interventions might
not always be identified when atomic interventions are. We further demonstrate
that a simple graphical criterion for the identifiability of an optimal
strategy can be given.
|
1206.3246
|
Strategy Selection in Influence Diagrams using Imprecise Probabilities
|
cs.AI
|
This paper describes a new algorithm to solve the decision making problem in
Influence Diagrams based on algorithms for credal networks. Decision nodes are
associated to imprecise probability distributions and a reformulation is
introduced that finds the global maximum strategy with respect to the expected
utility. We work with Limited Memory Influence Diagrams, which generalize most
Influence Diagram proposals and handle simultaneous decisions. Besides the
global optimum method, we explore an anytime approximate solution with a
guaranteed maximum error and show that imprecise probabilities are handled in a
straightforward way. Complexity issues and experiments with random diagrams and
an effects-based military planning problem are discussed.
|
1206.3247
|
Learning Convex Inference of Marginals
|
cs.LG stat.ML
|
Graphical models trained using maximum likelihood are a common tool for
probabilistic inference of marginal distributions. However, this approach
suffers difficulties when either the inference process or the model is
approximate. In this paper, the inference process is first defined to be the
minimization of a convex function, inspired by free energy approximations.
Learning is then done directly in terms of the performance of the inference
process at univariate marginal prediction. The main novelty is that this is a
direct minimization of empirical risk, where the risk measures the accuracy of
predicted marginals.
|
1206.3248
|
Knowledge Combination in Graphical Multiagent Model
|
cs.AI
|
A graphical multiagent model (GMM) represents a joint distribution over the
behavior of a set of agents. One source of knowledge about agents' behavior may
come from game-theoretic analysis, as captured by several graphical game
representations developed in recent years. GMMs generalize this approach to
express arbitrary distributions, based on game descriptions or other sources of
knowledge bearing on beliefs about agent behavior. To illustrate the
flexibility of GMMs, we exhibit game-derived models that allow probabilistic
deviation from equilibrium, as well as models based on heuristic action choice.
We investigate three different methods of integrating these models into a
single model representing the combined knowledge sources. To evaluate the
predictive performance of the combined model, we treat as actual outcome the
behavior produced by a reinforcement learning process. We find that combining
the two knowledge sources, using any of the methods, provides better
predictions than either source alone. Among the combination methods, mixing
data outperforms the opinion pool and direct update methods investigated in
this empirical trial.
|
1206.3249
|
Projected Subgradient Methods for Learning Sparse Gaussians
|
cs.LG stat.ML
|
Gaussian Markov random fields (GMRFs) are useful in a broad range of
applications. In this paper we tackle the problem of learning a sparse GMRF in
a high-dimensional space. Our approach uses the l1-norm as a regularization on
the inverse covariance matrix. We utilize a novel projected gradient method,
which is faster than previous methods in practice and equal to the best
performing of these in asymptotic complexity. We also extend the l1-regularized
objective to the problem of sparsifying entire blocks within the inverse
covariance matrix. Our methods generalize fairly easily to this case, while
other methods do not. We demonstrate that our extensions give better
generalization performance on two real domains--biological network analysis and
a 2D-shape modeling image task.
|
1206.3250
|
Almost Optimal Intervention Sets for Causal Discovery
|
cs.AI
|
We conjecture that the worst case number of experiments necessary and
sufficient to discover a causal graph uniquely given its observational Markov
equivalence class can be specified as a function of the largest clique in the
Markov equivalence class. We provide an algorithm that computes intervention
sets that we believe are optimal for the above task. The algorithm builds on
insights gained from the worst case analysis in Eberhardt et al. (2005) for
sequences of experiments when all possible directed acyclic graphs over N
variables are considered. A simulation suggests that our conjecture is correct.
We also show that a generalization of our conjecture to other classes of
possible graph hypotheses cannot be given easily, and in what sense the
algorithm is then no longer optimal.
|
1206.3251
|
Gibbs Sampling in Factorized Continuous-Time Markov Processes
|
cs.AI stat.CO
|
A central task in many applications is reasoning about processes that change
over continuous time. Continuous-Time Bayesian Networks are a general, compact
representation language for multi-component continuous-time processes. However,
exact inference in such processes is exponential in the number of components,
and thus infeasible for most models of interest. Here we develop a novel Gibbs
sampling procedure for multi-component processes. This procedure iteratively
samples a trajectory for one of the components given the remaining ones. We
show how to perform exact sampling that adapts to the natural time scale of the
sampled process. Moreover, we show that this sampling procedure naturally
exploits the structure of the network to reduce the computational cost of each
step. This procedure is the first that can provide asymptotically unbiased
approximation in such processes.
|
1206.3252
|
Convex Point Estimation using Undirected Bayesian Transfer Hierarchies
|
cs.LG stat.ML
|
When related learning tasks are naturally arranged in a hierarchy, an
appealing approach for coping with scarcity of instances is that of transfer
learning using a hierarchical Bayes framework. As fully Bayesian computations
can be difficult and computationally demanding, it is often desirable to use
posterior point estimates that facilitate (relatively) efficient prediction.
However, the hierarchical Bayes framework does not always lend itself naturally
to this maximum a posteriori goal. In this work we propose an undirected
reformulation of hierarchical Bayes that relies on priors in the form of
similarity measures. We introduce the notion of "degree of transfer" weights on
components of these similarity measures, and show how they can be automatically
learned within a joint probabilistic framework. Importantly, our reformulation
results in a convex objective for many learning problems, thus facilitating
optimal posterior point estimation using standard optimization techniques. In
addition, we no longer require proper priors, allowing for flexible and
straightforward specification of joint distributions over transfer hierarchies.
We show that our framework is effective for learning models that are part of
transfer hierarchies for two real-life tasks: object shape modeling using
Gaussian density estimation and document classification.
|
1206.3253
|
Learning and Solving Many-Player Games through a Cluster-Based
Representation
|
cs.GT cs.AI
|
In addressing the challenge of exponential scaling with the number of agents
we adopt a cluster-based representation to approximately solve asymmetric games
of very many players. A cluster groups together agents with a similar
"strategic view" of the game. We learn the clustered approximation from data
consisting of strategy profiles and payoffs, which may be obtained from
observations of play or access to a simulator. Using our clustering we
construct a reduced "twins" game in which each cluster is associated with two
players of the reduced game. This allows our representation to be
individually-responsive because we align the interests of every individual agent with the
strategy of its cluster. Our approach provides agents with higher payoffs and
lower regret on average than model-free methods as well as previous
cluster-based methods, and requires only a few observations for learning to be
successful. The "twins" approach is shown to be an important component of
providing these low regret approximations.
|
1206.3254
|
Latent Topic Models for Hypertext
|
cs.IR cs.CL cs.LG stat.ML
|
Latent topic models have been successfully applied as an unsupervised topic
discovery technique in large document collections. With the proliferation of
hypertext document collections such as the Internet, there has also been great
interest in extending these approaches to hypertext [6, 9]. These approaches
typically model links in an analogous fashion to how they model words - the
document-link co-occurrence matrix is modeled in the same way that the
document-word co-occurrence matrix is modeled in standard topic models. In this
paper we present a probabilistic generative model for hypertext document
collections that explicitly models the generation of links. Specifically, links
from a word w to a document d depend directly on how frequent the topic of w is
in d, in addition to the in-degree of d. We show how to perform EM learning on
this model efficiently. By not modeling links as analogous to words, we end up
using far fewer free parameters and obtain better link prediction results.
|
1206.3255
|
Church: a language for generative models
|
cs.PL cs.AI cs.LO
|
We introduce Church, a universal language for describing stochastic
generative processes. Church is based on the Lisp model of lambda calculus,
containing a pure Lisp as its deterministic subset. The semantics of Church is
defined in terms of evaluation histories and conditional distributions on such
histories. Church also includes a novel language construct, the stochastic
memoizer, which enables simple description of many complex non-parametric
models. We illustrate language features through several examples, including: a
generalized Bayes net in which parameters cluster over trials, infinite PCFGs,
planning by inference, and various non-parametric clustering models. Finally,
we show how to implement query on any Church program, exactly and
approximately, using Monte Carlo techniques.
|
1206.3256
|
Multi-View Learning over Structured and Non-Identical Outputs
|
cs.LG stat.ML
|
In many machine learning problems, labeled training data is limited but
unlabeled data is ample. Some of these problems have instances that can be
factored into multiple views, each of which is nearly sufficient in determining
the correct labels. In this paper we present a new algorithm for probabilistic
multi-view learning which uses the idea of stochastic agreement between views
as regularization. Our algorithm works on structured and unstructured problems
and easily generalizes to partial agreement scenarios. For the full agreement
case, our algorithm minimizes the Bhattacharyya distance between the models of
each view, and performs better than CoBoosting and two-view Perceptron on
several flat and structured classification problems.
|
1206.3257
|
Constrained Approximate Maximum Entropy Learning of Markov Random Fields
|
cs.LG stat.ML
|
Parameter estimation in Markov random fields (MRFs) is a difficult task, in
which inference over the network is run in the inner loop of a gradient descent
procedure. Replacing exact inference with approximate methods such as loopy
belief propagation (LBP) can suffer from poor convergence. In this paper, we
provide a different approach for combining MRF learning and Bethe
approximation. We consider the dual of maximum likelihood Markov network
learning - maximizing entropy with moment matching constraints - and then
approximate both the objective and the constraints in the resulting
optimization problem. Unlike previous work along these lines (Teh & Welling,
2003), our formulation allows parameter sharing between features in a general
log-linear model, parameter regularization and conditional training. We show
that piecewise training (Sutton & McCallum, 2005) is a very restricted special
case of this formulation. We study two optimization strategies: one based on a
single convex approximation and one that uses repeated convex approximations.
We show results on several real-world networks that demonstrate that these
algorithms can significantly outperform learning with loopy and piecewise training. Our
results also provide a framework for analyzing the trade-offs of different
relaxations of the entropy objective and of the constraints.
|
1206.3258
|
Toward Experiential Utility Elicitation for Interface Customization
|
cs.AI cs.HC
|
User preferences for automated assistance often vary widely, depending on the
situation, and the quality or presentation of help. Developing effective models to
learn individual preferences online requires domain models that associate
observations of user behavior with their utility functions, which in turn can
be constructed using utility elicitation techniques. However, most elicitation
methods ask for users' predicted utilities based on hypothetical scenarios
rather than more realistic experienced utilities. This is especially true in
interface customization, where users are asked to assess novel interface
designs. We propose experiential utility elicitation methods for customization
and compare these to predictive methods. As experienced utilities have been
argued to better reflect true preferences in behavioral decision making, the
purpose here is to investigate accurate and efficient procedures that are
suitable for software domains. Unlike conventional elicitation, our results
indicate that an experiential approach helps people understand stochastic
outcomes, as well as better appreciate the sequential utility of intelligent
assistance.
|
1206.3259
|
Cumulative distribution networks and the derivative-sum-product
algorithm
|
cs.LG stat.ML
|
We introduce a new type of graphical model called a "cumulative distribution
network" (CDN), which expresses a joint cumulative distribution as a product of
local functions. Each local function can be viewed as providing evidence about
possible orderings, or rankings, of variables. Interestingly, we find that the
conditional independence properties of CDNs are quite different from other
graphical models. We also describe a message-passing algorithm that efficiently
computes conditional cumulative distributions. Due to the unique independence
properties of the CDN, these messages do not in general have a one-to-one
correspondence with messages exchanged in standard algorithms, such as belief
propagation. We demonstrate the application of CDNs for structured ranking
learning using a previously-studied multi-player gaming dataset.
|
1206.3260
|
Causal discovery of linear acyclic models with arbitrary distributions
|
stat.ML cs.AI cs.LG
|
An important task in data analysis is the discovery of causal relationships
between observed variables. For continuous-valued data, linear acyclic causal
models are commonly used to model the data-generating process, and the
inference of such models is a well-studied problem. However, existing methods
have significant limitations. Methods based on conditional independencies
(Spirtes et al. 1993; Pearl 2000) cannot distinguish between
independence-equivalent models, whereas approaches purely based on Independent
Component Analysis (Shimizu et al. 2006) are inapplicable to data which is
partially Gaussian. In this paper, we generalize and combine the two
approaches, to yield a method able to learn the model structure in many cases
for which the previous methods provide answers that are either incorrect or are
not as informative as possible. We give exact graphical conditions for when two
distinct models represent the same family of distributions, and empirically
demonstrate the power of our method through thorough simulations.
|
1206.3261
|
Learning When to Take Advice: A Statistical Test for Achieving A
Correlated Equilibrium
|
cs.GT cs.AI cs.MA
|
We study a multiagent learning problem where agents can either learn via
repeated interactions, or can follow the advice of a mediator who suggests
possible actions to take. We present an algorithm that each agent can use so
that, with high probability, they can verify whether or not the mediator's
advice is useful. In particular, if the mediator's advice is useful then agents
will reach a correlated equilibrium, but if the mediator's advice is not
useful, then agents are not harmed by using our test, and can fall back to
their original learning algorithm. We then generalize our algorithm and show
that in the limit it always correctly verifies the mediator's advice.
|
1206.3262
|
Convergent Message-Passing Algorithms for Inference over General Graphs
with Convex Free Energies
|
cs.LG stat.ML
|
Inference problems in graphical models can be represented as a constrained
optimization of a free energy function. It is known that when the Bethe free
energy is used, the fixed points of the belief propagation (BP) algorithm
correspond to the local minima of the free energy. However, BP fails to converge
in many cases of interest. Moreover, the Bethe free energy is non-convex for
graphical models with cycles thus introducing great difficulty in deriving
efficient algorithms for finding local minima of the free energy for general
graphs. In this paper we introduce two efficient BP-like algorithms, one
sequential and the other parallel, that are guaranteed to converge to the
global minimum, for any graph, over the class of energies known as "convex free
energies". In addition, we propose an efficient heuristic for setting the
parameters of the convex free energy based on the structure of the graph.
|
1206.3263
|
Sparse Stochastic Finite-State Controllers for POMDPs
|
cs.AI
|
Bounded policy iteration is an approach to solving infinite-horizon POMDPs
that represents policies as stochastic finite-state controllers and iteratively
improves a controller by adjusting the parameters of each node using linear
programming. In the original algorithm, the size of the linear programs, and
thus the complexity of policy improvement, depends on the number of parameters
of each node, which grows with the size of the controller. But in practice, the
number of parameters of a node with non-zero values is often very small, and
does not grow with the size of the controller. Based on this observation, we
develop a version of bounded policy iteration that leverages the sparse
structure of a stochastic finite-state controller. In each iteration, it
improves a policy by the same amount as the original algorithm, but with much
better scalability.
|
1206.3264
|
Sampling First Order Logical Particles
|
cs.AI
|
Approximate inference in dynamic systems is the problem of estimating the
state of the system given a sequence of actions and partial observations. High
precision estimation is fundamental in many applications like diagnosis,
natural language processing, tracking, planning, and robotics. In this paper we
present an algorithm that samples possible deterministic executions of a
probabilistic sequence. The algorithm takes advantage of a compact
representation (using first order logic) for actions and world states to
improve the precision of its estimation. Theoretical and empirical results show
that the algorithm's expected error is smaller than propositional sampling and
Sequential Monte Carlo (SMC) sampling techniques.
|
1206.3265
|
The Computational Complexity of Sensitivity Analysis and Parameter
Tuning
|
cs.AI
|
While known algorithms for sensitivity analysis and parameter tuning in
probabilistic networks have a running time that is exponential in the size of
the network, the exact computational complexity of these problems has not been
established as yet. In this paper we study several variants of the tuning
problem and show that these problems are NP^PP-complete in general. We further
show that the problems remain NP-complete or PP-complete, for a number of
restricted variants. These complexity results provide insight in whether or not
recent achievements in sensitivity analysis and tuning can be extended to more
general, practicable methods.
|
1206.3266
|
Partitioned Linear Programming Approximations for MDPs
|
cs.AI
|
Approximate linear programming (ALP) is an efficient approach to solving
large factored Markov decision processes (MDPs). The main idea of the method is
to approximate the optimal value function by a set of basis functions and
optimize their weights by linear programming (LP). This paper proposes a new
ALP approximation. Compared to the standard ALP formulation, we decompose the
constraint space into a set of low-dimensional spaces. This structure allows
for solving the new LP efficiently. In particular, the constraints of the LP
can be satisfied in a compact form without an exponential dependence on the
treewidth of ALP constraints. We study both practical and theoretical aspects
of the proposed approach. Moreover, we demonstrate its scale-up potential on an
MDP with more than 2^100 states.
|
1206.3267
|
The Evaluation of Causal Effects in Studies with an Unobserved
Exposure/Outcome Variable: Bounds and Identification
|
stat.ME cs.AI
|
This paper deals with the problem of evaluating the causal effect using
observational data in the presence of an unobserved exposure/outcome variable,
when cause-effect relationships between variables can be described as a
directed acyclic graph and the corresponding recursive factorization of a joint
distribution. First, we propose identifiability criteria for causal effects
when an unobserved exposure/outcome variable is considered to contain more than
two categories. Next, when unmeasured variables exist between an unobserved
outcome variable and its proxy variables, we provide the tightest bounds based
on the potential outcome approach. The results of this paper are helpful to
evaluate causal effects in the case where it is difficult or expensive to
observe an exposure/outcome variable in many practical fields.
|
1206.3269
|
Bayesian Out-Trees
|
cs.LG stat.ML
|
A Bayesian treatment of latent directed graph structure for non-iid data is
provided where each child datum is sampled with a directed conditional
dependence on a single unknown parent datum. The latent graph structure is
assumed to lie in the family of directed out-tree graphs which leads to
efficient Bayesian inference. The latent likelihood of the data and its
gradients are computable in closed form via Tutte's directed matrix tree
theorem using determinants and inverses of the out-Laplacian. This novel
likelihood subsumes iid likelihood, is exchangeable and yields efficient
unsupervised and semi-supervised learning algorithms. In addition to handling
taxonomy and phylogenetic datasets the out-tree assumption performs
surprisingly well as a semi-parametric density estimator on standard iid
datasets. Experiments with unsupervised and semi-supervised learning are shown
on various UCI and taxonomy datasets.
|
1206.3270
|
Estimation and Clustering with Infinite Rankings
|
cs.LG stat.ML
|
This paper presents a natural extension of stagewise ranking to the case
of infinitely many items. We introduce the infinite generalized Mallows model
(IGM), describe its properties and give procedures to estimate it from data.
For estimation of multimodal distributions we introduce the
Exponential-Blurring-Mean-Shift nonparametric clustering algorithm. The
experiments highlight the properties of the new model and demonstrate that
infinite models can be simple, elegant and practical.
|
1206.3271
|
Learning Arithmetic Circuits
|
cs.AI
|
Graphical models are usually learned without regard to the cost of doing
inference with them. As a result, even if a good model is learned, it may
perform poorly at prediction, because it requires approximate inference. We
propose an alternative: learning models with a score function that directly
penalizes the cost of inference. Specifically, we learn arithmetic circuits
with a penalty on the number of edges in the circuit (in which the cost of
inference is linear). Our algorithm is equivalent to learning a Bayesian
network with context-specific independence by greedily splitting conditional
distributions, at each step scoring the candidates by compiling the resulting
network into an arithmetic circuit, and using its size as the penalty. We show
how this can be done efficiently, without compiling a circuit from scratch for
each candidate. Experiments on several real-world domains show that our
algorithm is able to learn tractable models with very large treewidth, and
yields more accurate predictions than a standard context-specific Bayesian
network learner, in far less time.
|
1206.3272
|
Improving Gradient Estimation by Incorporating Sensor Data
|
cs.AI
|
An efficient policy search algorithm should estimate the local gradient of
the objective function, with respect to the policy parameters, from as few
trials as possible. Whereas most policy search methods estimate this gradient
by observing the rewards obtained during policy trials, we show, both
theoretically and empirically, that taking into account the sensor data as well
gives better gradient estimates and hence faster learning. The reason is that
rewards obtained during policy execution vary from trial to trial due to noise
in the environment; sensor data, which correlates with the noise, can be used
to partially correct for this variation, resulting in an estimator with lower
variance.
|
1206.3273
|
Discovering Cyclic Causal Models by Independent Components Analysis
|
cs.AI stat.ME
|
We generalize Shimizu et al.'s (2006) ICA-based approach for discovering
linear non-Gaussian acyclic (LiNGAM) Structural Equation Models (SEMs) from
causally sufficient, continuous-valued observational data. By relaxing the
assumption that the generating SEM's graph is acyclic, we solve the more
general problem of linear non-Gaussian (LiNG) SEM discovery. LiNG discovery
algorithms output the distribution equivalence class of SEMs which, in the
large sample limit, represents the population distribution. We apply a LiNG
discovery algorithm to simulated data. Finally, we give sufficient conditions
under which only one of the SEMs in the output class is 'stable'.
|
1206.3274
|
Small Sample Inference for Generalization Error in Classification Using
the CUD Bound
|
cs.LG stat.ML
|
Confidence measures for the generalization error are crucial when small
training samples are used to construct classifiers. A common approach is to
estimate the generalization error by resampling and then assume the resampled
estimator follows a known distribution to form a confidence set [Kohavi 1995,
Martin 1996, Yang 2006]. Alternatively, one might bootstrap the resampled
estimator of the generalization error to form a confidence set. Unfortunately,
these methods do not reliably provide sets of the desired confidence. The poor
performance appears to be due to the lack of smoothness of the generalization
error as a function of the learned classifier. This results in a non-normal
distribution of the estimated generalization error. We construct a confidence
set for the generalization error by use of a smooth upper bound on the
deviation between the resampled estimate and generalization error. The
confidence set is formed by bootstrapping this upper bound. In cases in which
the approximation class for the classifier can be represented as a parametric
additive model, we provide a computationally efficient algorithm. This method
exhibits superior performance across a series of test and simulated data sets.
|
1206.3275
|
Learning Hidden Markov Models for Regression using Path Aggregation
|
cs.LG cs.CE q-bio.QM
|
We consider the task of learning mappings from sequential data to real-valued
responses. We present and evaluate an approach to learning a type of hidden
Markov model (HMM) for regression. The learning process involves inferring the
structure and parameters of a conventional HMM, while simultaneously learning a
regression model that maps features that characterize paths through the model
to continuous responses. Our results, in both synthetic and biological domains,
demonstrate the value of jointly learning the two components of our approach.
|
1206.3276
|
Explanation Trees for Causal Bayesian Networks
|
cs.AI
|
Bayesian networks can be used to extract explanations about the observed
state of a subset of variables. In this paper, we explicate the desiderata of
an explanation and confront them with the concept of explanation proposed by
existing methods. The necessity of taking causal approaches into account when a
causal graph is available is discussed. We then introduce causal explanation
trees, based on the construction of explanation trees using the measure of
causal information flow (Ay and Polani, 2006). This approach is compared to
several other methods on known networks.
|
1206.3278
|
Topic Models Conditioned on Arbitrary Features with
Dirichlet-multinomial Regression
|
cs.IR stat.ME
|
Although fully generative models have been successfully used to model the
contents of text documents, they are often awkward to apply to combinations of
text data and document metadata. In this paper we propose a
Dirichlet-multinomial regression (DMR) topic model that includes a log-linear
prior on document-topic distributions that is a function of observed features
of the document, such as author, publication venue, references, and dates. We
show that by selecting appropriate features, DMR topic models can meet or
exceed the performance of several previously published topic models designed
for specific data.
|
1206.3279
|
The Phylogenetic Indian Buffet Process: A Non-Exchangeable Nonparametric
Prior for Latent Features
|
cs.LG stat.ML
|
Nonparametric Bayesian models are often based on the assumption that the
objects being modeled are exchangeable. While appropriate in some applications
(e.g., bag-of-words models for documents), exchangeability is sometimes assumed
simply for computational reasons; non-exchangeable models might be a better
choice for applications based on subject matter. Drawing on ideas from
graphical models and phylogenetics, we describe a non-exchangeable prior for a
class of nonparametric latent feature models that is nearly as efficient
computationally as its exchangeable counterpart. Our model is applicable to the
general setting in which the dependencies between objects can be expressed
using a tree, where edge lengths indicate the strength of relationships. We
demonstrate an application to modeling probabilistic choice.
|
1206.3280
|
CT-NOR: Representing and Reasoning About Events in Continuous Time
|
cs.AI stat.AP
|
We present a generative model for representing and reasoning about the
relationships among events in continuous time. We apply the model to the domain
of networked and distributed computing environments where we fit the parameters
of the model from timestamp observations, and then use hypothesis testing to
discover dependencies between the events and changes in behavior for monitoring
and diagnosis. After introducing the model, we present an EM algorithm for
fitting the parameters and then present the hypothesis testing approach for
both dependence discovery and change-point detection. We validate the approach
for both tasks using real data from a trace of network events at Microsoft
Research Cambridge. Finally, we formalize the relationship between the proposed
model and the noisy-or gate for cases when time can be discretized.
|
1206.3281
|
Model-Based Bayesian Reinforcement Learning in Large Structured Domains
|
cs.AI
|
Model-based Bayesian reinforcement learning has generated significant
interest in the AI community as it provides an elegant solution to the optimal
exploration-exploitation tradeoff in classical reinforcement learning.
Unfortunately, the applicability of this type of approach has been limited to
small domains due to the high complexity of reasoning about the joint posterior
over model parameters. In this paper, we consider the use of factored
representations combined with online planning techniques, to improve
scalability of these methods. The main contribution of this paper is a Bayesian
framework for learning the structure and parameters of a dynamical system,
while also simultaneously planning a (near-)optimal sequence of actions.
|
1206.3282
|
Improving the Accuracy and Efficiency of MAP Inference for Markov Logic
|
cs.AI
|
In this work we present Cutting Plane Inference (CPI), a Maximum A Posteriori
(MAP) inference method for Statistical Relational Learning. Framed in terms of
Markov Logic and inspired by the Cutting Plane Method, it can be seen as a meta
algorithm that instantiates small parts of a large and complex Markov Network
and then solves these using a conventional MAP method. We evaluate CPI on two
tasks, Semantic Role Labelling and Joint Entity Resolution, while plugging in
two different MAP inference methods: the current method of choice for MAP
inference in Markov Logic, MaxWalkSAT, and Integer Linear Programming. We
observe that when used with CPI both methods are significantly faster than when
used alone. In addition, CPI improves the accuracy of MaxWalkSAT and maintains
the exactness of Integer Linear Programming.
|
1206.3283
|
Observation Subset Selection as Local Compilation of Performance
Profiles
|
cs.AI
|
Deciding what to sense is a crucial task, made harder by dependencies and by
a nonadditive utility function. We develop approximation algorithms for
selecting an optimal set of measurements, under a dependency structure modeled
by a tree-shaped Bayesian network (BN). Our approach is a generalization of
composing anytime algorithms represented by conditional performance profiles.
This is done by relaxing the input monotonicity assumption, and extending the
local compilation technique to more general classes of performance profiles
(PPs). We apply the extended scheme to selecting a subset of measurements for
choosing a maximum expectation variable in a binary valued BN, and for
minimizing the worst variance in a Gaussian BN.
|
1206.3284
|
Bounding Search Space Size via (Hyper)tree Decompositions
|
cs.AI
|
This paper develops a measure for bounding the performance of AND/OR search
algorithms for solving a variety of queries over graphical models. We show how
drawing a connection to the recent notion of hypertree decompositions allows us to
exploit determinism in the problem specification and produce tighter bounds. We
demonstrate on a variety of practical problem instances that we are often able
to improve upon existing bounds by several orders of magnitude.
|
1206.3285
|
Dyna-Style Planning with Linear Function Approximation and Prioritized
Sweeping
|
cs.AI cs.LG cs.SY
|
We consider the problem of efficiently learning optimal control policies and
value functions over large state spaces in an online setting in which estimates
must be available after each interaction with the world. This paper develops an
explicitly model-based approach extending the Dyna architecture to linear
function approximation. Dyna-style planning proceeds by generating imaginary
experience from the world model and then applying model-free reinforcement
learning algorithms to the imagined state transitions. Our main results are to
prove that linear Dyna-style planning converges to a unique solution
independent of the generating distribution, under natural conditions. In the
policy evaluation setting, we prove that the limit point is the least-squares
(LSTD) solution. An implication of our results is that prioritized-sweeping can
be soundly extended to the linear approximation case, backing up to preceding
features rather than to preceding states. We introduce two versions of
prioritized sweeping with linear Dyna and briefly illustrate their performance
empirically on the Mountain Car and Boyan Chain problems.
|
1206.3286
|
New Techniques for Algorithm Portfolio Design
|
cs.AI
|
We present and evaluate new techniques for designing algorithm portfolios. In
our view, the problem has both a scheduling aspect and a machine learning
aspect. Prior work has largely addressed one of the two aspects in isolation.
Building on recent work on the scheduling aspect of the problem, we present a
technique that addresses both aspects simultaneously and has attractive
theoretical guarantees. Experimentally, we show that this technique can be used
to improve the performance of state-of-the-art algorithms for Boolean
satisfiability, zero-one integer programming, and AI planning.
|
1206.3287
|
Learning the Bayesian Network Structure: Dirichlet Prior versus Data
|
cs.LG stat.ME stat.ML
|
In the Bayesian approach to structure learning of graphical models, the
equivalent sample size (ESS) in the Dirichlet prior over the model parameters
was recently shown to have an important effect on the maximum-a-posteriori
estimate of the Bayesian network structure. In our first contribution, we
theoretically analyze the case of large ESS-values, which complements previous
work: among other results, we find that the presence of an edge in a Bayesian
network is favoured over its absence even if both the Dirichlet prior and the
data imply independence, as long as the conditional empirical distribution is
notably different from uniform. In our second contribution, we focus on
realistic ESS-values, and provide an analytical approximation to the "optimal"
ESS-value in a predictive sense (its accuracy is also validated
experimentally): this approximation provides an understanding as to which
properties of the data have the main effect determining the "optimal"
ESS-value.
|
1206.3288
|
Tightening LP Relaxations for MAP using Message Passing
|
cs.DS cs.AI cs.CE
|
Linear Programming (LP) relaxations have become powerful tools for finding
the most probable (MAP) configuration in graphical models. These relaxations
can be solved efficiently using message-passing algorithms such as belief
propagation and, when the relaxation is tight, provably find the MAP
configuration. The standard LP relaxation is not tight enough in many
real-world problems, however, and this has led to the use of higher order
cluster-based LP relaxations. The computational cost increases exponentially
with the size of the clusters and limits the number and type of clusters we can
use. We propose to solve the cluster selection problem monotonically in the
dual LP, iteratively selecting clusters with guaranteed improvement, and
quickly re-solving with the added clusters by reusing the existing solution.
Our dual message-passing algorithm finds the MAP configuration in protein
sidechain placement, protein design, and stereo problems, in cases where the
standard LP relaxation fails.
|
1206.3289
|
Efficient inference in persistent Dynamic Bayesian Networks
|
cs.AI
|
Numerous temporal inference tasks such as fault monitoring and anomaly
detection exhibit a persistence property: for example, if something breaks, it
stays broken until an intervention. When modeled as a Dynamic Bayesian Network,
persistence adds dependencies between adjacent time slices, often making exact
inference over time intractable using standard inference algorithms. However,
we show that persistence implies a regular structure that can be exploited for
efficient inference. We present three successively more general classes of
models: persistent causal chains (PCCs), persistent causal trees (PCTs) and
persistent polytrees (PPTs), and the corresponding exact inference algorithms
that exploit persistence. We show that analytic asymptotic bounds for our
algorithms compare favorably to junction tree inference; and we demonstrate
empirically that we can perform exact smoothing on the order of 100 times
faster than the approximate Boyen-Koller method on randomly generated instances
of persistent tree models. We also show how to handle non-persistent variables
and how persistence can be exploited effectively for approximate filtering.
|
1206.3290
|
Modelling local and global phenomena with sparse Gaussian processes
|
cs.LG stat.ML
|
Much recent work has concerned sparse approximations to speed up the Gaussian
process regression from the unfavorable O(n^3) scaling in computational time to
O(nm^2). Thus far, work has concentrated on models with one covariance function.
However, in many practical situations additive models with multiple covariance
functions may perform better, since the data may contain both long and short
length-scale phenomena. The long length-scales can be captured with global
sparse approximations, such as fully independent conditional (FIC), and the
short length-scales can be modeled naturally by covariance functions with
compact support (CS). CS covariance functions lead to naturally sparse
covariance matrices, which are computationally cheaper to handle than full
covariance matrices. In this paper, we propose a new sparse Gaussian process
model with two additive components: FIC for the long length-scales and CS
covariance function for the short length-scales. We give theoretical and
experimental results and show that under certain conditions the proposed model
has the same computational complexity as FIC. We also compare the model
performance of the proposed model to additive models approximated by fully and
partially independent conditional (PIC). We use real data sets and show that
our model outperforms FIC and PIC approximations for data sets with two
additive phenomena.
|
1206.3291
|
Hierarchical POMDP Controller Optimization by Likelihood Maximization
|
cs.AI
|
Planning can often be simplified by decomposing the task into smaller tasks
arranged hierarchically. Charlin et al. [4] recently showed that the hierarchy
discovery problem can be framed as a non-convex optimization problem. However,
the inherent computational difficulty of solving such an optimization problem
makes it hard to scale to real-world problems. In another line of research,
Toussaint et al. [18] developed a method to solve planning problems by
maximum-likelihood estimation. In this paper, we show how the hierarchy
discovery problem in partially observable domains can be tackled using a
similar maximum-likelihood approach. Our technique first transforms the problem
into a dynamic Bayesian network through which a hierarchical structure can
naturally be discovered while optimizing the policy. Experimental results
demonstrate that this approach scales better than previous techniques based on
non-convex optimization.
|
1206.3292
|
Identifying Dynamic Sequential Plans
|
cs.AI
|
We address the problem of identifying dynamic sequential plans in the
framework of causal Bayesian networks, and show that the problem is reduced to
identifying causal effects, for which there are complete identification
algorithms available in the literature.
|
1206.3293
|
Propagation using Chain Event Graphs
|
cs.AI cs.CL
|
A Chain Event Graph (CEG) is a graphical model designed to embody
conditional independencies in problems whose state spaces are highly asymmetric
and do not admit a natural product structure. In this paper we present a
probability propagation algorithm which uses the topology of the CEG to build a
transporter CEG. Intriguingly, the transporter CEG is directly analogous to the
triangulated Bayesian Network (BN) in the more conventional junction tree
propagation algorithms used with BNs. The propagation method uses factorization
formulae also analogous to (but different from) the ones using potentials on
cliques and separators of the BN. It appears that the methods will typically be
more efficient than the BN algorithms when applied to contexts where
significant asymmetry is present.
|
1206.3294
|
Flexible Priors for Exemplar-based Clustering
|
cs.LG stat.ML
|
Exemplar-based clustering methods have been shown to produce state-of-the-art
results on a number of synthetic and real-world clustering problems. They are
appealing because they offer computational benefits over latent-mean models and
can handle arbitrary pairwise similarity measures between data points. However,
when trying to recover underlying structure in clustering problems, tailored
similarity measures are often not enough; we also desire control over the
distribution of cluster sizes. Priors such as Dirichlet process priors allow
the number of clusters to be unspecified while expressing priors over data
partitions. To our knowledge, they have not been applied to exemplar-based
models. We show how to incorporate priors, including Dirichlet process priors,
into the recently introduced affinity propagation algorithm. We develop an
efficient max-product belief propagation algorithm for our new model and
demonstrate experimentally how the expanded range of clustering priors allows
us to better recover true clusterings in situations where we have some
information about the generating process.
|
1206.3295
|
Refractor Importance Sampling
|
cs.AI
|
In this paper we introduce Refractor Importance Sampling (RIS), an
improvement to reduce error variance in Bayesian network importance sampling
propagation under evidential reasoning. We prove the existence of a collection
of importance functions that are close to the optimal importance function under
evidential reasoning. Based on this theoretic result we derive the RIS
algorithm. RIS approaches the optimal importance function by applying localized
arc changes to minimize the divergence between the evidence-adjusted importance
function and the optimal importance function. The validity and performance of
RIS is empirically tested with a large set of synthetic Bayesian networks and
two real-world networks.
|
1206.3296
|
Inference for Multiplicative Models
|
cs.AI
|
The paper introduces a generalization for known probabilistic models such as
log-linear and graphical models, called here multiplicative models. These
models, that express probabilities via product of parameters are shown to
capture multiple forms of contextual independence between variables, including
decision graphs and noisy-OR functions. An inference algorithm for
multiplicative models is provided and its correctness is proved. The complexity
analysis of the inference algorithm uses a more refined parameter than the
tree-width of the underlying graph, and shows the computational cost does not
exceed that of the variable elimination algorithm in graphical models. The
paper ends with examples where using the new models and algorithm is
computationally beneficial.
|
1206.3297
|
Hybrid Variational/Gibbs Collapsed Inference in Topic Models
|
cs.LG stat.ML
|
Variational Bayesian inference and (collapsed) Gibbs sampling are the two
important classes of inference algorithms for Bayesian networks. Both have
their advantages and disadvantages: collapsed Gibbs sampling is unbiased but is
also inefficient for large count values and requires averaging over many
samples to reduce variance. On the other hand, variational Bayesian inference
is efficient and accurate for large count values but suffers from bias for
small counts. We propose a hybrid algorithm that combines the best of both
worlds: it samples very small counts and applies variational updates to large
counts. This hybridization is shown to significantly improve test-set perplexity
relative to variational inference at no computational cost.
|
1206.3298
|
Continuous Time Dynamic Topic Models
|
cs.IR cs.LG stat.ML
|
In this paper, we develop the continuous time dynamic topic model (cDTM). The
cDTM is a dynamic topic model that uses Brownian motion to model the latent
topics through a sequential collection of documents, where a "topic" is a
pattern of word use that we expect to evolve over the course of the collection.
We derive an efficient variational approximate inference algorithm that takes
advantage of the sparsity of observations in text, a property that lets us
easily handle many time points. In contrast to the cDTM, the original
discrete-time dynamic topic model (dDTM) requires that time be discretized.
Moreover, the complexity of variational inference for the dDTM grows quickly as
time granularity increases, a drawback which limits fine-grained
discretization. We demonstrate the cDTM on two news corpora, reporting both
predictive perplexity and the novel task of time stamp prediction.
|
1206.3318
|
On Local Regret
|
cs.AI
|
Online learning aims to perform nearly as well as the best hypothesis in
hindsight. For some hypothesis classes, though, even finding the best
hypothesis offline is challenging. In such offline cases, local search
techniques are often employed and only local optimality guaranteed. For online
decision-making with such hypothesis classes, we introduce local regret, a
generalization of regret that aims to perform nearly as well as only nearby
hypotheses. We then present a general algorithm to minimize local regret with
arbitrary locality graphs. We also show how the graph structure can be
exploited to drastically speed learning. These algorithms are then demonstrated
on a diverse set of online problems: online disjunct learning, online Max-SAT,
and online decision tree learning.
|
1206.3320
|
A two-step Recommendation Algorithm via Iterative Local Least Squares
|
cs.IR
|
Recommender systems can greatly ease our lives by helping us select suitable
and favorite items conveniently. As a consequence, various kinds of algorithms
have been proposed in the last few years to improve their performance. However,
all of them face one critical problem: data sparsity. In this paper, we propose
a two-step recommendation algorithm via iterative local least squares (ILLS).
First, we obtain the ratings matrix constructed from users' behavioral records,
which is normally very sparse. Second, we preprocess the ratings matrix through
ProbS, which converts the sparse data into a dense one, and then use ILLS to
estimate the missing values. Finally, the recommendation list is generated.
Experimental results on three datasets (MovieLens, Netflix, and RYM) suggest
that the proposed method enhances accuracy as measured by AUC, and it performs
especially well on dense datasets. Furthermore, because the method refines the
missing values more accurately via iteration, it may shed light on inactive
users' purchasing intentions and eventually help solve the cold-start problem.
|
1206.3334
|
Additive Approximation for Near-Perfect Phylogeny Construction
|
cs.DS cs.CE q-bio.PE
|
We study the problem of constructing phylogenetic trees for a given set of
species. The problem is formulated as that of finding a minimum Steiner tree on
$n$ points over the Boolean hypercube of dimension $d$. It is known that an
optimal tree can be found in linear time if the given dataset has a perfect
phylogeny, i.e. cost of the optimal phylogeny is exactly $d$. Moreover, if the
data has a near-perfect phylogeny, i.e. the cost of the optimal Steiner tree is
$d+q$, it is known that an exact solution can be found in running time which is
polynomial in the number of species and $d$, yet exponential in $q$. In this
work, we give a polynomial-time algorithm (in both $d$ and $q$) that finds a
phylogenetic tree of cost $d+O(q^2)$. This provides the best guarantees known -
namely, a $(1+o(1))$-approximation - for the case $\log(d) \ll q \ll \sqrt{d}$,
broadening the range of settings for which near-optimal solutions can be
efficiently found. We also discuss the motivation and reasoning for studying
such additive approximations.
|
1206.3350
|
Coalitional Games for Transmitter Cooperation in MIMO Multiple Access
Channels
|
cs.IT cs.GT math.IT
|
Cooperation between nodes sharing a wireless channel is becoming increasingly
necessary to achieve performance goals in a wireless network. The problem of
determining the feasibility and stability of cooperation between rational nodes
in a wireless network is of great importance in understanding cooperative
behavior. This paper addresses the stability of the grand coalition of
transmitters signaling over a multiple access channel using the framework of
cooperative game theory. The external interference experienced by each transmitter (TX) is
represented accurately by modeling the cooperation game between the TXs in
\emph{partition form}. Single user decoding and successive interference
cancelling strategies are examined at the receiver. In the absence of
coordination costs, the grand coalition is shown to be \emph{sum-rate optimal}
for both strategies. Transmitter cooperation is \emph{stable}, if and only if
the core of the game (the set of all divisions of grand coalition utility such
that no coalition deviates) is nonempty. Determining the stability of
cooperation is a co-NP-complete problem in general. For a single user decoding
receiver, transmitter cooperation is shown to be \emph{stable} at both high and
low SNRs, while for an interference cancelling receiver with a fixed decoding
order, cooperation is stable only at low SNRs and unstable at high SNR. When
time sharing is allowed between decoding orders, it is shown using an
approximate lower bound to the utility function that TX cooperation is also
stable at high SNRs. Thus, this paper demonstrates that ideal zero cost TX
cooperation over a MAC is stable and improves achievable rates for each
individual user.
|
1206.3362
|
An Improved WBF Algorithm for Higher-Speed Decoding of LDPC Codes
|
cs.IT math.IT
|
Due to the speed limitation of the conventional bit-selection strategy in
existing weighted bit flipping algorithms, a high-speed LDPC decoder cannot be
realized. To solve this problem, we propose a fast weighted bit flipping (FWBF)
algorithm. Specifically, based on the stochastic error bitmap of the received
vector, a partially parallel bit-selection strategy is adopted to lower the
delay of choosing the bit to be flipped. Because of its partially parallel
structure, the novel strategy can be well incorporated into the LDPC decoder
[1]. The analysis of the decoding delay demonstrates that the decoding speed can be greatly
improved by adopting the proposed FWBF algorithm. Further, simulation results
verify the validity of the proposed algorithm.
|
1206.3381
|
On the Cover-Hart Inequality: What's a Sample of Size One Worth?
|
math.ST cs.IT math.IT stat.ML stat.TH
|
Bob predicts a future observation based on a sample of size one. Alice can
draw a sample of any size before issuing her prediction. How much better can
she do than Bob? Perhaps surprisingly, under a large class of loss functions,
which we refer to as the Cover-Hart family, the best Alice can do is to halve
Bob's risk. In this sense, half the information in an infinite sample is
contained in a sample of size one. The Cover-Hart family is a convex cone that
includes metrics and negative definite functions, subject to slight regularity
conditions. These results may help explain the small relative differences in
empirical performance measures in applied classification and forecasting
problems, as well as the success of reasoning and learning by analogy in
general, and nearest neighbor techniques in particular.
|
1206.3382
|
Simple Regret Optimization in Online Planning for Markov Decision
Processes
|
cs.AI cs.LG
|
We consider online planning in Markov decision processes (MDPs). In online
planning, the agent focuses on its current state only, deliberates about the
set of possible policies from that state onwards and, when interrupted, uses
the outcome of that exploratory deliberation to choose what action to perform
next. The performance of algorithms for online planning is assessed in terms of
simple regret, which is the agent's expected performance loss when the chosen
action, rather than an optimal one, is followed.
To date, state-of-the-art algorithms for online planning in general MDPs are
either best effort, or guarantee only polynomial-rate reduction of simple
regret over time. Here we introduce a new Monte-Carlo tree search algorithm,
BRUE, that guarantees exponential-rate reduction of simple regret and error
probability. This algorithm is based on a simple yet non-standard state-space
sampling scheme, MCTS2e, in which different parts of each sample are dedicated
to different exploratory objectives. Our empirical evaluation shows that BRUE
not only provides superior performance guarantees, but is also very effective
in practice and favorably compares to state-of-the-art. We then extend BRUE
with a variant of "learning by forgetting." The resulting set of algorithms,
BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper
bound on its reduction rate, and exhibits even more attractive empirical
performance.
|
1206.3392
|
Secure Compute-and-Forward in a Bidirectional Relay
|
cs.IT math.IT
|
We consider the basic bidirectional relaying problem, in which two users in a
wireless network wish to exchange messages through an intermediate relay node.
In the compute-and-forward strategy, the relay computes a function of the two
messages using the naturally-occurring sum of symbols simultaneously
transmitted by user nodes in a Gaussian multiple access (MAC) channel, and the
computed function value is forwarded to the user nodes in an ensuing broadcast
phase. In this paper, we study the problem under an additional security
constraint, which requires that each user's message be kept secure from the
relay. We consider two types of security constraints: perfect secrecy, in which
the MAC channel output seen by the relay is independent of each user's message;
and strong secrecy, which is a form of asymptotic independence. We propose a
coding scheme based on nested lattices, the main feature of which is that given
a pair of nested lattices that satisfy certain "goodness" properties, we can
explicitly specify probability distributions for randomization at the encoders
to achieve the desired security criteria. In particular, our coding scheme
guarantees perfect or strong secrecy even in the absence of channel noise. The
noise in the channel only affects reliability of computation at the relay, and
for Gaussian noise, we derive achievable rates for reliable and secure
computation. We also present an application of our methods to the multi-hop
line network in which a source needs to transmit messages to a destination
through a series of intermediate relays.
|
1206.3437
|
Improving the Asymmetric TSP by Considering Graph Structure
|
cs.DM cs.AI cs.DS
|
Recent works on cost based relaxations have improved Constraint Programming
(CP) models for the Traveling Salesman Problem (TSP). We provide a short survey
over solving asymmetric TSP with CP. Then, we suggest new implied propagators
based on general graph properties. We experimentally show that such implied
propagators bring robustness to pathological instances and highlight the fact
that graph structure can significantly improve search heuristics behavior.
Finally, we show that our approach outperforms current state of the art
results.
|
1206.3460
|
Constrained Distributed Algebraic Connectivity Maximization in Robotic
Networks
|
cs.SY
|
We consider the problem of maximizing the algebraic connectivity of the
communication graph in a network of mobile robots by moving them into
appropriate positions. We define the Laplacian of the graph as dependent on the
pairwise distance between the robots and we approximate the problem as a
sequence of Semi-Definite Programs (SDP). We propose a distributed solution
consisting of local SDP's which use information only from nearby neighboring
robots. We show that the resulting distributed optimization framework leads to
feasible subproblems and through its repeated execution, the algebraic
connectivity increases monotonically. Moreover, we describe how to adjust the
communication load of the robots based on locally computable measures.
Numerical simulations show the performance of the algorithm with respect to the
centralized solution.
|
1206.3493
|
Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy
Consumption and Inexpensive Hardware
|
stat.AP cs.IT math.IT stat.ML
|
Telemonitoring of electroencephalogram (EEG) through wireless body-area
networks is an evolving direction in personalized medicine. Among various
constraints in designing such a system, three important constraints are energy
consumption, data compression, and device cost. Conventional data compression
methodologies, although effective, consume significant energy and cannot reduce
device cost. Compressed sensing (CS), as an emerging
data compression methodology, is promising in catering to these constraints.
However, EEG is non-sparse in the time domain and also non-sparse in
transformed domains (such as the wavelet domain). Therefore, it is extremely
difficult for current CS algorithms to recover EEG with the quality that
satisfies the requirements of clinical diagnosis and engineering applications.
Recently, Block Sparse Bayesian Learning (BSBL) was proposed as a new method
for the CS problem. This study introduces the technique to the telemonitoring of
EEG. Experimental results show that its recovery quality is better than
state-of-the-art CS algorithms, and sufficient for practical use. These results
suggest that BSBL is very promising for telemonitoring of EEG and other
non-sparse physiological signals.
|
1206.3509
|
A Novel Approach for Protein Structure Prediction
|
cs.LG q-bio.BM
|
The idea of this project is to study the relationship between protein structure
and sequence using hidden Markov models and an artificial neural network. In
this context we consider two hidden Markov models. In the first model, protein
secondary structures are hidden and protein sequences are observed. In the
second model, protein sequences are hidden and protein structures are observed.
The efficiencies of both hidden Markov models have been calculated. The results
show that the efficiency of the first model is greater than that of the second.
These efficiencies are cross-validated using an artificial neural network. This
signifies the importance of protein secondary structures as the main hidden
controlling factors due to which we observe a particular amino acid sequence.
It also signifies that protein secondary structure is more conserved than amino
acid sequence.
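As a toy illustration of the decoding step such models rely on, the sketch below runs the Viterbi algorithm on a tiny hand-made HMM whose hidden states are secondary-structure labels emitting amino-acid symbols. All parameters and symbols here are hypothetical, not the project's trained model:

```python
import math

# Hypothetical parameters for illustration only: hidden secondary-structure
# states H(elix), E(sheet), C(oil) emit amino-acid symbols A, L, G.
states = ["H", "E", "C"]
start = {"H": 0.4, "E": 0.2, "C": 0.4}
trans = {"H": {"H": 0.8, "E": 0.1, "C": 0.1},
         "E": {"H": 0.1, "E": 0.8, "C": 0.1},
         "C": {"H": 0.3, "E": 0.2, "C": 0.5}}
emit = {"H": {"A": 0.5, "L": 0.4, "G": 0.1},
        "E": {"A": 0.2, "L": 0.2, "G": 0.6},
        "C": {"A": 0.3, "L": 0.3, "G": 0.4}}

def viterbi(seq):
    """Most probable hidden state path for an observed sequence (log-space)."""
    dp = {s: (math.log(start[s] * emit[s][seq[0]]), [s]) for s in states}
    for obs in seq[1:]:
        dp = {s: max(((lp + math.log(trans[prev][s] * emit[s][obs]), path + [s])
                      for prev, (lp, path) in dp.items()),
                     key=lambda t: t[0])
              for s in states}
    return max(dp.values(), key=lambda t: t[0])[1]

path = viterbi("AALLGG")  # one hidden label per observed amino acid
```

The returned path has one secondary-structure label per observed residue, which is the kind of structure-from-sequence inference the first model performs.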
|
1206.3520
|
Recovering the tree-like trend of evolution despite extensive lateral
genetic transfer: A probabilistic analysis
|
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
|
Lateral gene transfer (LGT) is a common mechanism of non-vertical evolution
where genetic material is transferred between two more or less distantly
related organisms. It is particularly common in bacteria where it contributes
to adaptive evolution with important medical implications. In evolutionary
studies, LGT has been shown to create widespread discordance between gene trees
as genomes become mosaics of gene histories. In particular, the Tree of Life
has been questioned as an appropriate representation of bacterial evolutionary
history. Nevertheless a common hypothesis is that prokaryotic evolution is
primarily tree-like, but that the underlying trend is obscured by LGT.
Extensive empirical work has sought to extract a common tree-like signal from
conflicting gene trees. Here we give a probabilistic perspective on the problem
of recovering the tree-like trend despite LGT. Under a model of randomly
distributed LGT, we show that the species phylogeny can be reconstructed even
in the presence of surprisingly many (almost linear number of) LGT events per
gene tree. Our results, which are optimal up to logarithmic factors, are based
on the analysis of a robust, computationally efficient reconstruction method
and provide insight into the design of such methods. Finally we show that our
results have implications for the discovery of highways of gene sharing.
|
1206.3522
|
General Upper Bounds on the Running Time of Parallel Evolutionary
Algorithms
|
cs.NE
|
We present a new method for analyzing the running time of parallel
evolutionary algorithms with spatially structured populations. Based on the
fitness-level method, it yields upper bounds on the expected parallel running
time. This makes it possible to rigorously estimate the speedup gained by parallelization.
Tailored results are given for common migration topologies: ring graphs, torus
graphs, hypercubes, and the complete graph. Example applications for
pseudo-Boolean optimization show that our method is easy to apply and that it
gives powerful results. In our examples the possible speedup increases with the
density of the topology. Surprisingly, even sparse topologies like ring graphs
lead to a significant speedup for many functions while not increasing the total
number of function evaluations by more than a constant factor. We also identify
the number of processors that yields asymptotically optimal speedups, thus
giving hints on how to parametrize parallel evolutionary algorithms.
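A minimal sketch of the kind of parallel EA analyzed here, assuming a ring migration topology: each island runs a (1+1) EA on OneMax, and every few generations the best individual is copied to the next island. Parameters are illustrative, not taken from the paper:

```python
import random

def one_max(x):
    return sum(x)

def island_ea(n_bits=30, islands=4, migrate_every=10, max_gens=2000, seed=1):
    """(1+1) EAs on a ring of islands; every migrate_every generations the
    best individual is copied to its right neighbour on the ring."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(islands)]
    for gen in range(1, max_gens + 1):
        for i in range(islands):
            # standard bit mutation with rate 1/n, elitist acceptance
            child = [b ^ (rng.random() < 1 / n_bits) for b in pop[i]]
            if one_max(child) >= one_max(pop[i]):
                pop[i] = child
        if gen % migrate_every == 0:  # ring migration to the right neighbour
            best = max(range(islands), key=lambda i: one_max(pop[i]))
            pop[(best + 1) % islands] = pop[best][:]
        if any(one_max(x) == n_bits for x in pop):
            return gen  # parallel running time: generations until optimum
    return max_gens

generations = island_ea()
```

The returned generation count is the "parallel running time" the fitness-level bounds refer to; the total number of function evaluations is this count times the number of islands.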
|
1206.3536
|
Identifying Independence in Relational Models
|
cs.AI
|
The rules of d-separation provide a framework for deriving conditional
independence facts from model structure. However, this theory only applies to
simple directed graphical models. We introduce relational d-separation, a
theory for deriving conditional independence in relational models. We provide a
sound, complete, and computationally efficient method for relational
d-separation, and we present empirical results that demonstrate effectiveness.
|
1206.3543
|
Measurement of statistical evidence on an absolute scale following
thermodynamic principles
|
math.ST cs.IT math.IT physics.data-an q-bio.QM stat.TH
|
Statistical analysis is used throughout biomedical research and elsewhere to
assess strength of evidence. We have previously argued that typical outcome
statistics (including p-values and maximum likelihood ratios) have poor
measure-theoretic properties: they can erroneously indicate decreasing evidence
as data supporting a hypothesis accumulate; and they are not amenable to
calibration, necessary for meaningful comparison of evidence across different
study designs, data types, and levels of analysis. We have also previously
proposed that thermodynamic theory, which allowed for the first time derivation
of an absolute measurement scale for temperature (T), could be used to derive
an absolute scale for evidence (E). Here we present a novel
thermodynamically-based framework in which measurement of E on an absolute
scale, for which "one degree" always means the same thing, becomes possible for
the first time. The new framework invites us to think about statistical
analyses in terms of the flow of (evidential) information, placing this work in
the context of a growing literature on connections among physics, information
theory, and statistics.
|
1206.3551
|
Sensitivity analysis in decision circuits
|
cs.AI
|
Decision circuits have been developed to perform efficient evaluation of
influence diagrams [Bhattacharjya and Shachter, 2007], building on the advances
in arithmetic circuits for belief network inference [Darwiche,2003]. In the
process of model building and analysis, we perform sensitivity analysis to
understand how the optimal solution changes in response to changes in the
model. When sequential decision problems under uncertainty are represented as
decision circuits, we can exploit the efficient solution process embodied in
the decision circuit and the wealth of derivative information available to
compute the value of information for the uncertainties in the problem and the
effects of changes to model parameters on the value and the optimal strategy.
|
1206.3552
|
A Classification for Community Discovery Methods in Complex Networks
|
cs.SI cs.DS physics.soc-ph
|
In the last few years many real-world networks have been found to show a
so-called community structure organization. Much effort has been devoted in the
literature to develop methods and algorithms that can efficiently highlight
this hidden structure of the network, traditionally by partitioning the graph.
Since network representation can be very complex and can contain different
variants in the traditional graph model, each algorithm in the literature
focuses on some of these properties and establishes, explicitly or implicitly,
its own definition of community. According to this definition it then extracts
the communities that are able to reflect only some of the features of real
communities. The aim of this survey is to provide a manual for the community
discovery problem. Given a meta definition of what a community in a social
network is, our aim is to organize the main categories of community discovery
based on their own definition of community. Given a desired definition of
community and the features of a problem (size of network, direction of edges,
multidimensionality, and so on) this review paper is designed to provide a set
of approaches that researchers could focus on.
|
1206.3555
|
A Dynamic Programming Algorithm for Inference in Recursive Probabilistic
Programs
|
cs.AI cs.DS
|
We describe a dynamic programming algorithm for computing the marginal
distribution of discrete probabilistic programs. This algorithm takes a
functional interpreter for an arbitrary probabilistic programming language and
turns it into an efficient marginalizer. Because direct caching of
sub-distributions is impossible in the presence of recursion, we build a graph
of dependencies between sub-distributions. This factored sum-product network
makes (potentially cyclic) dependencies between subproblems explicit, and
corresponds to a system of equations for the marginal distribution. We solve
these equations by fixed-point iteration in topological order. We illustrate
this algorithm on examples used in teaching probabilistic models, computational
cognitive science research, and game theory.
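The fixed-point step can be illustrated on a one-variable analogue: a recursive program whose marginal p satisfies a self-referential equation, solved by iterating the equation to convergence. The program and its parameter q are illustrative, not an example from the paper:

```python
def solve_marginal(q=0.4, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for p = q + (1 - q) * p**2, the self-referential
    marginal equation of the recursive probabilistic program

        def f():
            return True if flip(q) else (f() and f())

    Starting from 0 converges to the least fixed point."""
    p = 0.0
    for _ in range(max_iter):
        new_p = q + (1 - q) * p * p
        if abs(new_p - p) < tol:
            return new_p
        p = new_p
    return p

p_true = solve_marginal()  # least fixed point of p = 0.4 + 0.6 p^2, i.e. 2/3
```

In the paper's setting there is one such equation per sub-distribution node in the dependency graph, and the equations are iterated in topological order; the scalar case above shows why recursion forces iteration rather than direct caching.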
|
1206.3559
|
Real time facial expression recognition using a novel method
|
cs.CV
|
This paper discusses a novel method for a Facial Expression Recognition System
which performs facial expression analysis in near real time from a live webcam
feed. The primary objectives were to obtain results in near real time in a
light-invariant, person-independent and pose-invariant way. The system is
composed of two different entities: a trainer and an evaluator. Each frame of
the video feed is passed through a series of steps including Haar classifiers,
skin detection, feature extraction, feature point tracking, and a learned
Support Vector Machine model to classify emotions, achieving a tradeoff between
accuracy and result rate. A processing time of 100-120 ms per 10 frames was
achieved with an accuracy of around 60%. We measure our accuracy in terms of a
variety of interaction and classification scenarios. We conclude by discussing
the relevance of our work to human-computer interaction and exploring further
measures that can be taken.
|
1206.3564
|
Functional Currents : a new mathematical tool to model and analyse
functional shapes
|
cs.CG cs.CV math.DG
|
This paper introduces the concept of functional current as a mathematical
framework to represent and treat functional shapes, i.e. sub-manifold supported
signals. It is motivated by the growing occurrence, in medical imaging and
computational anatomy, of what can be described as geometrico-functional data,
that is, a data structure that involves a deformable shape (roughly a
finite-dimensional submanifold) together with a function defined on this shape
taking values in another manifold.
Indeed, if mathematical currents have already proved to be very efficient
theoretically and numerically to model and process shapes as curves or
surfaces, they are limited to the manipulation of purely geometrical objects.
We show that the introduction of the concept of functional currents offers a
genuine solution to the simultaneous processing of the geometric and signal
information of any functional shape. We explain how functional currents can be
equipped with a Hilbertian norm that mixes the geometrical and functional
content of functional shapes, behaves nicely under geometrical and functional
perturbations, and paves the way to various processing algorithms. We
illustrate this potential on two problems: the redundancy reduction of
functional shapes representations through matching pursuit schemes on
functional currents and the simultaneous geometric and functional registration
of functional shapes under diffeomorphic transport.
|
1206.3582
|
Decentralized Learning for Multi-player Multi-armed Bandits
|
math.OC cs.LG cs.SY
|
We consider the problem of distributed online learning with multiple players
in multi-armed bandits (MAB) models. Each player can pick among multiple arms.
When a player picks an arm, it gets a reward. We consider both an i.i.d. reward
model and a Markovian reward model. In the i.i.d. model, each arm is modelled
as an i.i.d. process with an unknown distribution and an unknown mean. In the
Markovian model, each arm is modelled as a finite, irreducible, aperiodic and
reversible Markov chain with an unknown probability transition matrix and
stationary distribution. The arms give different rewards to different players.
If two players pick the same arm, there is a "collision", and neither of them
gets any reward. There is no dedicated control channel for coordination or
communication among the players. Any other communication between the users is
costly and will add to the regret. We propose an online index-based distributed
learning policy called ${\tt dUCB_4}$ algorithm that trades off
\textit{exploration v. exploitation} in the right way, and achieves expected
regret that grows at most as near-$O(\log^2 T)$. The motivation comes from
opportunistic spectrum access by multiple secondary users in cognitive radio
networks wherein they must pick among various wireless channels that look
different to different users. To the best of our knowledge, this is the first
distributed learning algorithm for multi-player MABs.
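As background for the index-based policy, the sketch below shows the standard single-player UCB1 index (empirical mean plus an exploration bonus) on Bernoulli arms. This is only the familiar building block, not the paper's dUCB_4 policy, which additionally handles collisions among distributed players; arm means and horizon are illustrative:

```python
import math
import random

def ucb1(means, horizon=5000, seed=0):
    """Single-player UCB1 on Bernoulli arms: play each arm once, then pull
    the arm maximizing empirical mean + sqrt(2 ln t / n_a)."""
    rng = random.Random(seed)
    n = [0] * len(means)    # pull counts per arm
    s = [0.0] * len(means)  # reward sums per arm
    for t in range(1, horizon + 1):
        if t <= len(means):
            arm = t - 1     # initialization: play each arm once
        else:
            arm = max(range(len(means)),
                      key=lambda a: s[a] / n[a]
                      + math.sqrt(2 * math.log(t) / n[a]))
        s[arm] += 1.0 if rng.random() < means[arm] else 0.0
        n[arm] += 1
    return n

counts = ucb1([0.3, 0.5, 0.8])  # the best arm accumulates most pulls
```

The exploration bonus shrinks as an arm is pulled more often, which is what drives the logarithmic-order regret that the distributed policy builds on.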
|
1206.3594
|
Blind PSF estimation and methods of deconvolution optimization
|
cs.CV
|
We have shown that the left-hand null space of the autoregression (AR) matrix
operator is the lexicographical presentation of the point spread function
(PSF), provided that the AR parameters are common to the original and blurred
images. A method of inverse PSF evaluation with a regularization functional
defined as a function of surface area is offered. The inverse PSF was used for
a primary image estimate. Two methods for optimizing the original image
estimate were designed, based on a maximum-entropy generalization of the
conditional probability density of the sought and blurred images, and on
regularization. The first method uses balanced variations of the convolution
and deconvolution transforms to obtain an iterative scheme for image
optimization. The balance of variations is defined by dynamic regularization
based on the convergence condition of the iteration process. The
regularization is dynamic because it depends on the current and previous image
estimate variations. The second method implements regularization of the
deconvolution optimization in a curved space with a metric defined on the image
estimate surface. It is based on the invariance of the target functional to
fluctuations of the optimal argument value. The given iterative schemes
converge faster than known ones, so they can be used for the reconstruction of
high-resolution image series in real time.
|
1206.3599
|
Epidemic Spreading with External Agents
|
cs.SI cs.IT cs.NI math.IT physics.soc-ph
|
We study epidemic spreading processes in large networks, when the spread is
assisted by a small number of external agents: infection sources with bounded
spreading power, but whose movement is unrestricted vis-\`a-vis the underlying
network topology. For networks which are `spatially constrained', we show that
the spread of infection can be significantly speeded up even by a few such
external agents infecting randomly. Moreover, for general networks, we derive
upper-bounds on the order of the spreading time achieved by certain simple
(random/greedy) external-spreading policies. Conversely, for certain common
classes of networks such as line graphs, grids and random geometric graphs, we
also derive lower bounds on the order of the spreading time over all
(potentially network-state aware and adversarial) external-spreading policies;
these adversarial lower bounds match (up to logarithmic factors) the spreading
time achieved by an external agent with a random spreading policy. This
demonstrates that random, state-oblivious infection-spreading by an external
agent is in fact order-wise optimal for spreading in such spatially constrained
networks.
|
1206.3602
|
Robust and Efficient Distributed Compression for Cloud Radio Access
Networks
|
cs.IT math.IT
|
This work studies distributed compression for the uplink of a cloud radio
access network where multiple multi-antenna base stations (BSs) are connected
to a central unit, also referred to as cloud decoder, via capacity-constrained
backhaul links. Since the signals received at different BSs are correlated,
distributed source coding strategies are potentially beneficial, and can be
implemented via sequential source coding with side information. For the problem
of compression with side information, available compression strategies based on
the criteria of maximizing the achievable rate or minimizing the mean square
error are reviewed first. It is observed that, in either case, each BS requires
information about a specific covariance matrix in order to realize the
advantage of distributed source coding. Since this covariance matrix depends on
the channel realizations corresponding to other BSs, a robust compression
method is proposed for a practical scenario in which the information about the
covariance available at each BS is imperfect. The problem is formulated using a
deterministic worst-case approach, and an algorithm is proposed that achieves a
stationary point for the problem. Then, BS selection is addressed with the aim
of reducing the number of active BSs, thus enhancing the energy efficiency of
the network. An optimization problem is formulated in which compression and BS
selection are performed jointly by introducing a sparsity-inducing term into
the objective function. An iterative algorithm is proposed that is shown to
converge to a locally optimal point. From numerical results, it is observed
that the proposed robust compression scheme compensates for a large fraction of
the performance loss induced by the imperfect statistical information.
Moreover, the proposed BS selection algorithm is seen to perform close to the
more complex exhaustive search solution.
|
1206.3612
|
Linear Information Coupling Problems
|
cs.IT math.IT
|
Many network information theory problems face the similar difficulty of
single-letterization. We argue that this is due to the lack of a geometric
structure on the space of probability distributions. In this paper, we develop
such a structure by assuming that the distributions of interest are close to
each other. Under this assumption, the K-L divergence reduces to the squared
Euclidean metric in a Euclidean space. Moreover, we construct notions of
coordinates and inner products, which will facilitate solving communication
problems. We also present the application of this approach to the
point-to-point channel and the general broadcast channel, which demonstrates
how our technique simplifies information theory problems.
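The local quadratic behavior of the K-L divergence can be checked numerically: for Q = P(1 + eps*phi) with a perturbation phi orthogonal to P, D(Q||P) approaches half the chi-square distance (the "squared Euclidean" quantity) as eps shrinks. The distribution and direction below are arbitrary examples:

```python
import math

def kl(q, p):
    """K-L divergence D(q || p) in nats."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

def chi2_half(q, p):
    """Half the chi-square distance: the local squared-Euclidean proxy."""
    return 0.5 * sum((qi - pi) ** 2 / pi for qi, pi in zip(q, p))

p = [0.2, 0.3, 0.5]
phi = [1.0, 1.0, -1.0]  # sum_i phi_i * p_i = 0, so q stays a distribution

ratios = []
for eps in (1e-1, 1e-2, 1e-3):
    q = [pi * (1 + eps * fi) for pi, fi in zip(p, phi)]
    ratios.append(kl(q, p) / chi2_half(q, p))  # approaches 1 as eps -> 0
```

The ratios converge to 1 as eps decreases, which is the "close distributions" regime in which the paper replaces K-L divergence by Euclidean geometry.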
|
1206.3614
|
A Linear-Programming Approximation of AC Power Flows
|
cs.AI math.OC
|
Linear active-power-only DC power flow approximations are pervasive in the
planning and control of power systems. However, these approximations fail to
capture reactive power and voltage magnitudes, both of which are necessary in
many applications to ensure voltage stability and AC power flow feasibility.
This paper proposes linear-programming models (the LPAC models) that
incorporate reactive power and voltage magnitudes in a linear power flow
approximation. The LPAC models are built on a convex approximation of the
cosine terms in the AC equations, as well as Taylor approximations of the
remaining nonlinear terms. Experimental comparisons with AC solutions on a
variety of standard IEEE and MatPower benchmarks show that the LPAC models
produce accurate values for active and reactive power, phase angles, and
voltage magnitudes. The potential benefits of the LPAC models are illustrated
on two "proof-of-concept" studies in power restoration and capacitor placement.
|
1206.3618
|
Sparse Sequential Dirichlet Coding
|
cs.IT math.IT
|
This short paper describes a simple coding technique, Sparse Sequential
Dirichlet Coding, for multi-alphabet memoryless sources. It is appropriate in
situations where only a small, unknown subset of the possible alphabet symbols
can be expected to occur in any particular data sequence. We provide a
competitive analysis which shows that the performance of Sparse Sequential
Dirichlet Coding will be close to that of a Sequential Dirichlet Coder that
knows in advance the exact subset of occurring alphabet symbols. Empirically we
show that our technique can perform similarly to the more computationally
demanding Sequential Sub-Alphabet Estimator, while using less computational
resources.
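The gap the sparse coder targets can be seen with a plain sequential Dirichlet coder: coding over the full alphabet pays a penalty for symbols that never occur, while a coder that knows the occurring sub-alphabet in advance is strictly cheaper. The sequence and alphabets below are arbitrary examples:

```python
import math

def dirichlet_code_length(seq, alphabet, alpha=0.5):
    """Sequential Dirichlet coder (the KT estimator for alpha = 0.5): a symbol
    with count c after t symbols is coded with probability
    (c + alpha) / (t + alpha * |A|); total code length sums -log2 of these."""
    counts = {a: 0 for a in alphabet}
    bits, t = 0.0, 0
    for sym in seq:
        p = (counts[sym] + alpha) / (t + alpha * len(alphabet))
        bits -= math.log2(p)
        counts[sym] += 1
        t += 1
    return bits

seq = "abababbaab" * 20
full = dirichlet_code_length(seq, "abcdefghijklmnopqrstuvwxyz")  # pays for unused symbols
sub = dirichlet_code_length(seq, "ab")  # knows the occurring sub-alphabet
```

Sparse Sequential Dirichlet Coding aims to approach the `sub` code length without being told the sub-alphabet in advance.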
|
1206.3633
|
Feature Based Fuzzy Rule Base Design for Image Extraction
|
cs.CV cs.AI
|
With recent advances in multimedia technologies, detecting visual attention
regions has become a major concern in the field of image processing. The
popularity of terminal devices in the heterogeneous environment of multimedia
technology gives us enough scope for improving image visualization. Although
numerous methods exist, feature-based image extraction has become popular in
the field of image processing. The objective of image segmentation is the
domain-independent partition of the image into a set of regions which are
visually distinct and uniform with respect to some property, such as grey
level, texture or colour. Segmentation and subsequent extraction can be
considered the first step and key issue in object recognition, scene
understanding and image analysis. Its application area encompasses mobile
devices, industrial quality control, medical appliances, robot navigation,
geophysical exploration, military applications, etc. In all these areas, the
quality of the final results depends largely on the quality of the
preprocessing work. Most of the time, acquiring spurious-free preprocessed data
requires a great deal of application-specific and mathematically intensive
background work. We propose a feature-based, fuzzy-rule-guided novel technique
that requires no external intervention during execution. Experimental results
suggest that this approach is efficient in comparison with other techniques
extensively addressed in the literature. To justify the superior performance of
our proposed technique with respect to its competitors, we use standard
metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE) and Peak Signal to
Noise Ratio (PSNR).
|
1206.3658
|
Alan Turing and the "Hard" and "Easy" Problem of Cognition: Doing and
Feeling
|
cs.AI cs.RO
|
The "easy" problem of cognitive science is explaining how and why we can do
what we can do. The "hard" problem is explaining how and why we feel. Turing's
methodology for cognitive science (the Turing Test) is based on doing: Design a
model that can do anything a human can do, indistinguishably from a human, to a
human, and you have explained cognition. Searle has shown that the successful
model cannot be solely computational. Sensory-motor robotic capacities are
necessary to ground some, at least, of the model's words, in what the robot can
do with the things in the world that the words are about. But even grounding is
not enough to guarantee that -- nor to explain how and why -- the model feels
(if it does). That problem is much harder to solve (and perhaps insoluble).
|
1206.3666
|
Unsupervised adaptation of brain machine interface decoders
|
cs.LG q-bio.NC
|
The performance of neural decoders can degrade over time due to
nonstationarities in the relationship between neuronal activity and behavior.
In this case, brain-machine interfaces (BMI) require adaptation of their
decoders to maintain high performance across time. One way to achieve this is
by use of periodical calibration phases, during which the BMI system (or an
external human demonstrator) instructs the user to perform certain movements or
behaviors. This approach has two disadvantages: (i) calibration phases
interrupt the autonomous operation of the BMI and (ii) between two calibration
phases the BMI performance might not be stable but continuously decrease. A
better alternative would be that the BMI decoder is able to continuously adapt
in an unsupervised manner during autonomous BMI operation, i.e. without knowing
the movement intentions of the user.
In the present article, we present an efficient method for such unsupervised
training of BMI systems for continuous movement control. The proposed method
utilizes a cost function derived from neuronal recordings, which guides a
learning algorithm to evaluate the decoding parameters. We verify the
performance of our adaptive method by simulating a BMI user with an optimal
feedback control model and its interaction with our adaptive BMI decoder. The
simulation results show that the cost function and the algorithm yield fast and
precise trajectories towards targets at random orientations on a 2-dimensional
computer screen. For initially unknown and non-stationary tuning parameters,
our unsupervised method is still able to generate precise trajectories and to
keep its performance stable in the long term. The algorithm can optionally also
work with neuronal error signals instead of, or in conjunction with, the
proposed unsupervised adaptation.
|
1206.3667
|
Information Retrieval in Intelligent Systems: Current Scenario & Issues
|
cs.IR cs.AI
|
The web is a huge repository of data, and large amounts of new information are
added to it every day. The more information there is, the greater the demand
for tools to access it. Intelligently answering users' queries about online
information is one of the great challenges of information retrieval in
intelligent systems. In this paper, we start with a brief introduction to
information retrieval and intelligent systems and explain how Swoogle, the
semantic search engine, uses its algorithms and techniques to search for
desired content on the web. We then continue with the clustering technique used
to group similar things together and discuss the machine learning technique
called self-organizing maps [6], or SOM, a data visualization technique that
reduces the dimensions of data through the use of self-organizing neural
networks. We then discuss how SOM is used to visualize the contents of the
data, following a few lines of algorithm, in the form of maps. In this way,
websites or machines can be used to retrieve exactly the information that users
want from them.
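The SOM idea can be sketched in a few lines on one-dimensional data: each sample pulls its best-matching unit, and its neighbours on the map with Gaussian falloff, toward itself while the learning rate and neighbourhood radius decay. The data, map size, and schedules below are illustrative assumptions, not from the paper:

```python
import math
import random

def train_som(data, n_units=5, epochs=200, lr0=0.5, radius0=2.0, seed=0):
    """Minimal 1-D self-organizing map on scalar data: for each sample, find
    the best-matching unit (BMU) and move it and its map neighbours toward
    the sample, with learning rate and radius decaying over epochs."""
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        radius = max(radius0 * (1 - e / epochs), 0.5)
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map topology
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                w[i] += lr * h * (x - w[i])
    return w

weights = train_som([0.0, 0.1, 0.2, 0.9, 1.0, 1.1])
```

After training, nearby map units end up with similar weights, which is the property that makes SOM maps readable as a low-dimensional visualization of the data.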
|
1206.3709
|
Control Systems: an Application to a High Energy Physics Experiment
(COMPASS)
|
cs.SY hep-ex physics.ins-det
|
The Detector Control System (DCS) of the COMPASS experiment at CERN is
presented. The experiment has a high level of complexity and flexibility and a
long period of operation, which constitute a challenge for its full monitoring
and control. A strategy of using a limited number of standardised,
cost-effective, industrial hardware and software solutions was pursued. When
such solutions were not available or could not be used, customised solutions
were developed.
|
1206.3713
|
Learning the Structure and Parameters of Large-Population Graphical
Games from Behavioral Data
|
cs.LG cs.GT stat.ML
|
We consider learning, from strictly behavioral data, the structure and
parameters of linear influence games (LIGs), a class of parametric graphical
games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic
inference (CSI): Making inferences from causal interventions on stable behavior
in strategic settings. Applications include the identification of the most
influential individuals in large (social) networks. Such tasks can also support
policy-making analysis. Motivated by the computational work on LIGs, we cast
the learning problem as maximum-likelihood estimation (MLE) of a generative
model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation
uncovers the fundamental interplay between goodness-of-fit and model
complexity: good models capture equilibrium behavior within the data while
controlling the true number of equilibria, including those unobserved. We
provide a generalization bound establishing the sample complexity for MLE in
our framework. We propose several algorithms including convex loss minimization
(CLM) and sigmoidal approximations. We prove that the number of exact PSNE in
LIGs is small, with high probability; thus, CLM is sound. We illustrate our
approach on synthetic data and real-world U.S. congressional voting records. We
briefly discuss our learning framework's generality and potential applicability
to general graphical games.
|
1206.3714
|
How important are Deformable Parts in the Deformable Parts Model?
|
cs.CV cs.AI cs.LG
|
The main stated contribution of the Deformable Parts Model (DPM) detector of
Felzenszwalb et al. (over the Histogram-of-Oriented-Gradients approach of Dalal
and Triggs) is the use of deformable parts. A secondary contribution is the
latent discriminative learning. Tertiary is the use of multiple components. A
common belief in the vision community (including ours, before this study) is
that this ordering of contributions reflects the detector's performance in
practice. However, we have experimentally found that the ordering of
importance might actually be the reverse. First, we show that by increasing the
number of components, and switching the initialization step from their
aspect-ratio, left-right flipping heuristics to appearance-based clustering,
considerable improvement in performance is obtained. But more intriguingly, we
show that with these new components, the part deformations can now be
completely switched off, yet obtaining results that are almost on par with the
original DPM detector. Finally, we also show initial results for using multiple
components on a different problem -- scene classification, suggesting that this
idea might have wider applications in addition to object detection.
|
1206.3719
|
Broadcast Approaches to the Diamond Channel
|
cs.IT math.IT
|
The problem of dual-hop transmission from a source to a destination via two
parallel full-duplex relays in block Rayleigh fading environment is
investigated. All nodes in the network are assumed to be oblivious to their
forward channel gains; however, they have perfect information about their
backward channel gains. The focus of this paper is on simple, efficient, and
practical relaying schemes to increase the expected-rate at the destination.
For this purpose, various combinations of relaying protocols and the broadcast
approach (multi-layer coding) are proposed. For the decode-forward (DF)
relaying, the maximum finite-layer expected-rate as well as two upper-bounds on
the continuous-layer expected-rate are obtained. The main feature of the
proposed DF scheme is that the layers being decoded at both relays are added
coherently at the destination although each relay has no information about the
number of layers being successfully decoded by the other relay. It is proved
that the optimal coding scheme is transmitting uncorrelated signals via the
relays. Next, the maximum expected-rate of ON/OFF based amplify-forward (AF)
relaying is analytically derived. For further performance improvement, a hybrid
decode-amplify-forward (DAF) relaying strategy, adopting the broadcast approach
at the source and relays, is proposed and its maximum throughput and maximum
finite-layer expected-rate are presented. Moreover, the maximum throughput and
maximum expected-rate in the compress-forward (CF) relaying adopting the
broadcast approach, using optimal quantizers and Wyner-Ziv compression at the
relays, are fully derived. All theoretical results are illustrated by numerical
simulations. The results show that when the ratio of the relay power to the
source power is high, CF relaying outperforms DAF (and hence both DF and AF
relaying); otherwise, the DAF scheme is superior.
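As a minimal illustration of the broadcast approach itself (a single block-Rayleigh link with two layers, not the full diamond-channel schemes above), the expected rate can be estimated by Monte Carlo. The power split and fading thresholds below are arbitrary assumptions for the sketch.

```python
# Two-layer broadcast approach over one block-Rayleigh link (toy sketch):
# layer 1 is decoded whenever the fading gain s exceeds s1 (layer 2
# treated as noise); layer 2 is decoded, after cancelling layer 1, when
# s exceeds s2. Power split and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(1)
P, a = 10.0, 0.3                     # total SNR and layer-1 power fraction
s1, s2 = 0.5, 1.5                    # fading thresholds for the two layers
s = rng.exponential(size=200_000)    # Rayleigh fading power gains

r1 = np.log2(1 + a * P * s1 / (1 + (1 - a) * P * s1))  # layer-1 rate
r2 = np.log2(1 + (1 - a) * P * s2)                     # layer-2 rate
expected_rate = np.mean(r1 * (s >= s1) + r2 * (s >= s2))
print(round(expected_rate, 3))
```

The multi-layer and relaying variants analyzed in the paper build on this same decode-what-the-fading-allows principle.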
|
1206.3721
|
Constraint-free Graphical Model with Fast Learning Algorithm
|
cs.LG stat.ML
|
In this paper, we propose a simple, versatile model for learning the
structure and parameters of multivariate distributions from a data set.
Learning a Markov network from a given data set is not a simple problem,
because Markov networks rigorously represent Markov properties, and this rigor
imposes complex constraints on the design of the networks. Our proposed model
removes these constraints while drawing on important aspects of information
geometry. The proposed parameter- and structure-learning algorithms are simple
to execute as they are based solely on local computation at each node.
Experiments demonstrate that our algorithms work appropriately.
|
1206.3728
|
Performance Limits for Distributed Estimation Over LMS Adaptive Networks
|
cs.IT cs.DC cs.SY math.IT
|
In this work we analyze the mean-square performance of different strategies
for distributed estimation over least-mean-squares (LMS) adaptive networks. The
results highlight some useful properties for distributed adaptation in
comparison to fusion-based centralized solutions. The analysis establishes
that, by optimizing over the combination weights, diffusion strategies can
deliver lower excess-mean-square-error than centralized solutions employing
traditional block or incremental LMS strategies. We first study in some detail
the situation involving combinations of two adaptive agents and then extend the
results to generic N-node ad-hoc networks. In the latter case, we establish
that, for sufficiently small step-sizes, diffusion strategies can outperform
centralized block or incremental LMS strategies by optimizing over
left-stochastic combination weighting matrices. The results suggest more
efficient ways for organizing and processing data at fusion centers, and
present useful adaptive strategies that are able to enhance performance when
implemented in a distributed manner.
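A minimal two-agent adapt-then-combine (ATC) diffusion LMS loop is sketched below. The data model, step-size, and uniform combination weights are illustrative assumptions, not the optimized left-stochastic weights derived in the paper.

```python
# Sketch of adapt-then-combine (ATC) diffusion LMS with two agents
# estimating a common parameter vector w_true. The step-size, noise
# level, and uniform combination weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([1.0, -0.5])
mu = 0.01                                # LMS step-size
A = np.array([[0.5, 0.5], [0.5, 0.5]])   # left-stochastic combination matrix
w = [np.zeros(2), np.zeros(2)]           # per-agent estimates

for _ in range(5000):
    psi = []
    for k in range(2):
        u = rng.normal(size=2)                      # regressor at agent k
        d = u @ w_true + 0.01 * rng.normal()        # noisy measurement
        psi.append(w[k] + mu * u * (d - u @ w[k]))  # adapt: local LMS step
    # combine: each agent averages its neighbors' intermediate estimates
    w = [A[0, k] * psi[0] + A[1, k] * psi[1] for k in range(2)]

print(np.round(w[0], 2))
```

Optimizing the entries of A, rather than fixing them uniformly as here, is what lets diffusion match or beat the centralized strategies in the analysis.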
|
1206.3777
|
An Analysis of the Methods Employed for Breast Cancer Diagnosis
|
cs.NE q-bio.TO
|
Breast cancer research has advanced tremendously over the last decade.
Groundbreaking innovations and novel methods help in early detection, in
staging the therapy, and in assessing the patient's response to treatment.
Predicting recurrent cancer is also crucial for patient survival. This paper
studies various techniques used for the diagnosis of breast cancer, exploring
the merits and demerits of each for diagnosing breast lesions. Some of the
methods are as yet unproven, but the studies look very encouraging. We found
that recent methods combining Artificial Neural Networks with other techniques
give accurate diagnostic results in most instances, and that their use can
also be extended to other diseases.
|
1206.3793
|
A distributed classification/estimation algorithm for sensor networks
|
cs.MA cs.SY math.OC
|
In this paper, we address the problem of simultaneous classification and
estimation of hidden parameters in a sensor network with communications
constraints. In particular, we consider a network of noisy sensors which
measure a common scalar unknown parameter. We assume that a fraction of the
nodes represent faulty sensors, whose measurements are poorly reliable. The
goal for each node is to simultaneously identify its class (faulty or
non-faulty) and estimate the common parameter.
We propose a novel cooperative iterative algorithm which copes with the
communication constraints imposed by the network and shows remarkable
performance. Our main result is a rigorous proof of the convergence of the
algorithm and a characterization of the limit behavior. We also show that, in
the limit when the number of sensors goes to infinity, the common unknown
parameter is estimated with arbitrarily small error, while the classification
error converges to that of the optimal centralized maximum likelihood
estimator. We also show numerical results that validate the theoretical
analysis and support their possible generalization. We compare our strategy
with the Expectation-Maximization algorithm and we discuss trade-offs in terms
of robustness, speed of convergence and implementation simplicity.
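A toy, centralized analogue of the joint classification/estimation task can make the setup concrete. This is not the paper's cooperative distributed rule; the noise levels and residual threshold are illustrative assumptions.

```python
# Toy analogue of joint classification/estimation: nodes measure a
# common scalar, a minority are "faulty" with large noise. Alternate
# between estimating from currently-reliable nodes and re-classifying
# nodes by residual. Not the paper's distributed algorithm.
import numpy as np

rng = np.random.default_rng(3)
theta = 2.0
y = np.concatenate([theta + 0.1 * rng.normal(size=80),   # non-faulty
                    theta + 2.0 * rng.normal(size=20)])  # faulty

reliable = np.ones(len(y), dtype=bool)
for _ in range(10):
    est = y[reliable].mean()          # estimate from reliable nodes
    reliable = np.abs(y - est) < 0.3  # classify by residual (3-sigma)
print(round(est, 2))
```

The paper's contribution is doing this kind of alternation cooperatively under communication constraints, with provable convergence.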
|
1206.3804
|
Locally Repairable Codes
|
cs.IT cs.DC cs.NI math.IT
|
Distributed storage systems for large-scale applications typically use
replication for reliability. Recently, erasure codes were used to reduce the
large storage overhead, while increasing data reliability. A main limitation of
off-the-shelf erasure codes is their high repair cost during single-node
failure events. A major open problem in this area has been the design of codes
that {\it i)} are repair efficient and {\it ii)} achieve arbitrarily high data
rates.
In this paper, we explore the repair metric of {\it locality}, which
corresponds to the number of disk accesses required during a single node
repair. Under this metric we characterize an
information theoretic trade-off that binds together locality, code distance,
and the storage capacity of each node. We show the existence of optimal {\it
locally repairable codes} (LRCs) that achieve this trade-off. The achievability
proof uses a locality aware flow-graph gadget which leads to a randomized code
construction. Finally, we present an optimal and explicit LRC that achieves
arbitrarily high data-rates. Our locality optimal construction is based on
simple combinations of Reed-Solomon blocks.
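The locality metric can be illustrated with a toy code in which each group of data nodes carries a local XOR parity; the symbols below are small integers chosen purely for illustration, not an actual LRC construction.

```python
# Toy illustration of locality: data nodes are grouped, each group
# protected by a local XOR parity. A single failed node is repaired by
# reading only its local group (locality r = 3), not the whole code.
blocks = [17, 42, 7, 99, 5, 23]          # six toy "data node" symbols
p1 = blocks[0] ^ blocks[1] ^ blocks[2]   # local parity, group 1
p2 = blocks[3] ^ blocks[4] ^ blocks[5]   # local parity, group 2

# Node storing blocks[1] fails: repair touches 3 nodes, not all 7.
repaired = p1 ^ blocks[0] ^ blocks[2]
print(repaired)  # 42
```

The trade-off characterized in the paper quantifies what such low-locality repair costs in terms of code distance and per-node storage.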
|
1206.3806
|
Adaptive vibration suppression system: An iterative control law for a
piezoelectric actuator shunted by a negative capacitor
|
cs.CE cs.SY physics.class-ph
|
An adaptive system for the suppression of vibration transmission using a
single piezoelectric actuator shunted by a negative capacitance circuit is
presented. It is known that, using a negative capacitance shunt, the spring
constant of a piezoelectric actuator can be controlled to extreme values of
zero or infinity. Since the spring constant controls the force transmitted
through an elastic element, the transmissibility of vibrations through a
piezoelectric actuator can be reduced by reducing its effective spring
constant. Narrow- and broad-frequency-range vibration isolation systems are
analyzed, modeled, and experimentally investigated. The high sensitivity of
the vibration control system to varying operational conditions is resolved by
applying adaptive control to the circuit parameters of the negative capacitor.
A control law based on estimating the effective spring constant of the shunted
piezoelectric actuator is presented, together with an adaptive system that
self-adjusts the negative capacitor parameters. It is shown that such an
arrangement allows the design of a simple electronic system that nevertheless
offers great vibration isolation efficiency under variable vibration
conditions.
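The core claim, that lowering the effective spring constant reduces transmitted vibration, can be checked against the standard single-degree-of-freedom transmissibility formula; the mass, damping ratio, stiffness values, and excitation frequency below are illustrative assumptions.

```python
# Single-DOF check of the core idea: lowering the effective stiffness k
# lowers the natural frequency, so transmissibility at a fixed excitation
# frequency drops. Mass, damping, and stiffness values are illustrative.
import numpy as np

m, zeta = 1.0, 0.05                 # mass (kg) and damping ratio
w = 2 * np.pi * 100.0               # excitation frequency (100 Hz)

def transmissibility(k):
    r = w / np.sqrt(k / m)          # frequency ratio w / w_n
    return np.sqrt((1 + (2*zeta*r)**2) / ((1 - r**2)**2 + (2*zeta*r)**2))

for k in (1e6, 1e5, 1e4):           # shunt progressively softens the mount
    print(f"k={k:.0e}  T={transmissibility(k):.3f}")
```

Each tenfold reduction in stiffness pushes the natural frequency down and the isolation region wider, which is what the negative capacitance shunt exploits.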
|
1206.3819
|
EVM and Achievable Data Rate Analysis of Clipped OFDM Signals in Visible
Light Communication
|
cs.IT math.IT
|
Orthogonal frequency division multiplexing (OFDM) has been considered for
visible light communication (VLC) thanks to its ability to boost data rates as
well as its robustness against frequency-selective fading channels. A major
disadvantage of OFDM is the large dynamic range of its time-domain waveforms,
making OFDM vulnerable to nonlinearity of light emitting diodes (LEDs). DC
biased optical OFDM (DCO-OFDM) and asymmetrically clipped optical OFDM
(ACO-OFDM) are two popular OFDM techniques developed for the VLC. In this
paper, we will analyze the performance of the DCO-OFDM and ACO-OFDM signals in
terms of error vector magnitude (EVM), signal-to-distortion ratio (SDR), and
achievable data rates under both average optical power and dynamic optical
power constraints. EVM is a commonly used metric to characterize distortions.
We will describe an approach to numerically calculate the EVM for DCO-OFDM and
ACO-OFDM. We will derive the optimum biasing ratio in the sense of minimizing
EVM for DCO-OFDM. Additionally, we will formulate the EVM minimization problem
as a convex linear optimization problem and obtain an EVM lower bound against
which to compare the DCO-OFDM and ACO-OFDM techniques. We will prove that the
ACO-OFDM can achieve the lower bound. Average optical power and dynamic optical
power are two main constraints in VLC. We will derive the achievable data rates
under these two constraints for both additive white Gaussian noise (AWGN)
channel and frequency-selective channel. We will compare the performance of
DCO-OFDM and ACO-OFDM under different power constraint scenarios.
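A numerical EVM computation of the kind described can be sketched by clipping a Gaussian-approximated OFDM waveform; the bias ratio and dynamic range below are illustrative choices, not the optimum derived in the paper.

```python
# Numerical sketch of EVM for a clipped multicarrier signal: generate an
# OFDM-like Gaussian time-domain waveform, add a DC bias, clip to the
# LED's range [0, A], and measure the clipping distortion. The bias
# ratio and clipping level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)      # large-N OFDM time samples ~ Gaussian
bias = 1.0                        # DC bias (in units of the signal std)
A = 2.0 * bias                    # dynamic range [0, A]
y = np.clip(x + bias, 0.0, A)     # LED nonlinearity modeled as clipping
e = y - (x + bias)                # clipping distortion
evm = np.sqrt(np.mean(e**2) / np.mean(x**2))
print(round(evm, 3))
```

Sweeping the bias ratio in such a simulation is one way to locate the EVM-minimizing operating point discussed for DCO-OFDM.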
|
1206.3880
|
Cryptographic Key Management for Smart Power Grids - Approaches and
Issues
|
cs.CR cs.SY
|
The smart power grid promises to improve efficiency and reliability of power
delivery. This report introduces the logical components, associated
technologies, security protocols, and network designs of the system.
These potential benefits are undermined by security threats, and those related
to cyber security are described here. Concentrating on the
design of the smart meter and its communication links, this report describes
the ZigBee technology and implementation, and the communication between the
smart meter and the collector node, with emphasis on security attributes. It
was observed that many of the secure features are based on keys that must be
maintained; therefore, secure key management techniques become the basis for
securing the entire grid. Current key management techniques are then
described, highlighting their weaknesses. Finally, some initial research
directions are outlined.
|