| id | title | categories | abstract |
|---|---|---|---|
1206.6190
|
Low Complexity Maximum Likelihood Detection in Spatial Modulation
Systems
|
cs.IT math.IT
|
Spatial Modulation (SM) is a recently developed low-complexity Multiple-Input
Multiple-Output scheme that uses antenna indices and a conventional signal set
to convey information. It has been shown that the Maximum-Likelihood (ML)
detection in an SM system involves joint detection of the transmit antenna
index and the transmitted symbol, and hence, the ML search complexity grows
linearly with the number of transmit antennas and the size of the signal set.
In this paper, we show that the ML search complexity in an SM system becomes
independent of the constellation size when the signal set employed is a square-
or a rectangular-QAM. Further, we show that Sphere Decoding (SD) algorithms
become essential in SM systems only when the number of transmit antennas is
large and not necessarily when the employed signal set is large. We propose a
novel {\em hard-limiting} enabled sphere decoding detector whose complexity is
lower than that of the existing detector, and a generalized detection scheme
for SM systems with an {\em arbitrary} number of transmit antennas. We support
our claims with simulation results showing that the proposed detectors are
ML-optimal and offer significantly reduced complexity.
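To make the hard-limiting idea concrete: for square QAM, the ML symbol under each transmit-antenna hypothesis is simply the per-antenna estimate rounded to the QAM grid, so the symbol decision is O(1) regardless of constellation size. A minimal sketch (illustrative names; not the paper's exact detector):

```python
import numpy as np

def qam_hard_limit(z, m_side):
    """Round a complex estimate to the nearest point of a square QAM grid
    whose per-dimension levels are the odd integers -(m_side-1), ..., m_side-1."""
    def snap(x):
        k = np.clip(np.round((x + m_side - 1) / 2), 0, m_side - 1)
        return 2 * k - (m_side - 1)
    return snap(z.real) + 1j * snap(z.imag)

def sm_ml_detect(y, H, m_side):
    """Sketch of SM detection: for each antenna hypothesis j, hard-limit the
    scalar estimate to the QAM grid (an O(1) decision, independent of the
    constellation size), then keep the best (antenna, symbol) pair."""
    best = (None, None, np.inf)
    for j in range(H.shape[1]):
        h = H[:, j]
        z = (h.conj() @ y) / (h.conj() @ h)      # per-antenna estimate
        s = qam_hard_limit(z, m_side)            # hard-limiting decision
        cost = np.linalg.norm(y - h * s) ** 2
        if cost < best[2]:
            best = (j, s, cost)
    return best[0], best[1]                      # antenna index, symbol
```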
|
1206.6196
|
Discrete Elastic Inner Vector Spaces with Application in Time Series and
Sequence Mining
|
cs.LG cs.DB
|
This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experiments
on time series and symbolic sequence datasets demonstrate the benefits that we
can expect by embedding time series or sequences into elastic inner product
spaces rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms, while maintaining a linear algorithmic complexity at the
exploitation stage, although a quadratic indexing phase is required beforehand.
|
1206.6214
|
An Alternative Approach to the Calculation and Analysis of Connectivity
in the World City Network
|
physics.soc-ph cs.SI
|
Empirical research on world cities often draws on Taylor's (2001) notion of
an 'interlocking network model', in which office networks of globalized service
firms are assumed to shape the spatialities of urban networks. In spite of its
many merits, this approach is limited because the resultant adjacency matrices
are not really fit for network-analytic calculations. We therefore propose a
fresh analytical approach using a primary linkage algorithm that produces a
one-mode directed graph based on Taylor's two-mode city/firm network data. The
procedure has the advantage of creating less dense networks when compared to
the interlocking network model, while nonetheless retaining the network
structure apparent in the initial dataset. We randomize the empirical network
with a bootstrapping simulation approach, and compare the simulated parameters
of this null model with our empirical network parameter (i.e., betweenness
centrality). We find that our approach produces results that are comparable to
those of the standard interlocking network model. However, because our approach
is based on an actual graph representation and network analysis, we are able to
assess cities' position in the network at large. For instance, we find that
cities such as Tokyo, Sydney, Melbourne, Almaty and Karachi hold more strategic
and valuable positions than suggested in the interlocking networks as they play
a bridging role in connecting cities across regions. In general, we argue that
our graph representation allows for further and deeper analysis of the original
data, further extending world city network research into a theory-based
empirical research approach.
|
1206.6217
|
User Selection for Multi-user MIMO Downlink with Zero-Forcing
Beamforming
|
cs.IT math.IT
|
In this paper, we propose a greedy user selection with swap (GUSS) algorithm
based on zero-forcing beamforming (ZFBF) for the multi-user multiple-input
multiple-output (MIMO) downlink channels. Since existing user selection
algorithms, such as the zero-forcing with selection (ZFS), have `redundant
user' and `local optimum' flaws that compromise the achieved sum rate, GUSS
adds `delete' and `swap' operations to the user selection procedure of ZFS to
improve the performance by eliminating `redundant user' and escaping from
`local optimum', respectively. In addition, an effective channel vector based
effective-channel-gain updating scheme is presented to reduce the complexity of
GUSS. With the help of this updating scheme, GUSS has the same order of
complexity as ZFS, with only a linear increment. Simulation results indicate
that, on average, GUSS achieves 99.3 percent of the sum-rate upper bound
achieved by exhaustive search over the range of transmit signal-to-noise
ratios considered, with only three to six times the complexity of ZFS.
|
1206.6230
|
Decentralized Data Fusion and Active Sensing with Mobile Sensors for
Modeling and Predicting Spatiotemporal Traffic Phenomena
|
cs.LG cs.AI cs.DC cs.MA cs.RO
|
The problem of modeling and predicting spatiotemporal traffic phenomena over
an urban road network is important to many traffic applications such as
detecting and forecasting congestion hotspots. This paper presents a
decentralized data fusion and active sensing (D2FAS) algorithm for mobile
sensors to actively explore the road network to gather and assimilate the most
informative data for predicting the traffic phenomenon. We analyze the time and
communication complexity of D2FAS and demonstrate that it can scale well with a
large number of observations and sensors. We provide a theoretical guarantee on
its predictive performance to be equivalent to that of a sophisticated
centralized sparse approximation for the Gaussian process (GP) model: The
computation of such a sparse approximate GP model can thus be parallelized and
distributed among the mobile sensors (in a Google-like MapReduce paradigm),
thereby achieving efficient and scalable prediction. We also provide a
theoretical guarantee on its active sensing performance, which improves under
various practical environmental conditions. Empirical evaluation on real-world
urban road network
data shows that our D2FAS algorithm is significantly more time-efficient and
scalable than state-of-the-art centralized algorithms while achieving
comparable predictive performance.
|
1206.6262
|
Scaling Life-long Off-policy Learning
|
cs.AI cs.LG
|
We pursue a life-long learning approach to artificial intelligence that makes
extensive use of reinforcement learning algorithms. We build on our prior work
with general value functions (GVFs) and the Horde architecture. GVFs have been
shown able to represent a wide variety of facts about the world's dynamics that
may be useful to a long-lived agent (Sutton et al. 2011). We have also
previously shown scaling - that thousands of on-policy GVFs can be learned
accurately in real-time on a mobile robot (Modayil, White & Sutton 2011). That
work was limited in that it learned about only one policy at a time, whereas
the greatest potential benefits of life-long learning come from learning about
many policies in parallel, as we explore in this paper. Many new challenges
arise in this off-policy learning setting. To deal with convergence and
efficiency challenges, we utilize the recently introduced GTD({\lambda})
algorithm. We show that GTD({\lambda}) with tile coding can simultaneously
learn hundreds of predictions for five simple target policies while following a
single random behavior policy, assessing accuracy with interspersed on-policy
tests. To escape the need for the tests, which preclude further scaling, we
introduce and empirically validate two online estimators of the off-policy
objective (MSPBE). Finally, we use the more efficient of the two estimators to
demonstrate off-policy learning at scale - the learning of value functions for
one thousand policies in real time on a physical robot. This ability
constitutes a significant step towards scaling life-long off-policy learning.
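For readers unfamiliar with the learner used here, a minimal sketch of a GTD({\lambda}) update with linear features, in the form given by Maei and Sutton (2010); the paper's tile coding, step sizes, and implementation details are omitted, and the exact update may differ:

```python
import numpy as np

class GTDLambda:
    """Sketch of off-policy GTD(lambda) with linear function approximation.
    theta: value weights; w: auxiliary correction weights; e: eligibility trace."""
    def __init__(self, n, alpha=0.05, beta=0.01, lam=0.9, gamma=0.99):
        self.theta, self.w, self.e = np.zeros(n), np.zeros(n), np.zeros(n)
        self.alpha, self.beta, self.lam, self.gamma = alpha, beta, lam, gamma

    def update(self, phi, reward, phi_next, rho):
        # rho = pi(a|s)/b(a|s): importance-sampling ratio, target vs. behavior
        delta = reward + self.gamma * self.theta @ phi_next - self.theta @ phi
        self.e = rho * (self.gamma * self.lam * self.e + phi)
        self.theta += self.alpha * (delta * self.e
                      - self.gamma * (1 - self.lam) * (self.w @ self.e) * phi_next)
        self.w += self.beta * (delta * self.e - (self.w @ phi) * phi)
        return delta
```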
|
1206.6266
|
Random degree-degree correlated networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Correlations may affect propagation processes on complex networks. To analyze
their effect, it is useful to build ensembles of networks constrained to have a
given value of a structural measure, such as the degree-degree correlation $r$,
being random in other aspects and preserving the degree distribution. This can
be done through Monte Carlo optimization procedures. However, when tuning
$r$, other network properties may concomitantly change. In this work we
analyze, for the $r$-ensembles, the impact of $r$ on properties such as
transitivity, branching, and characteristic lengths, which are relevant when
investigating spreading phenomena on these networks. The present analysis is
performed for networks with degree distributions of two main types: either
localized around a typical degree (with exponentially bounded asymptotic decay)
or broadly distributed (with power-law decay). Correlation bounds and size
effects are also investigated.
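A toy version of the kind of Monte Carlo construction described above: degree-preserving double-edge swaps, accepted only when they move $r$ toward a target value (a serious $r$-ensemble sampler would use a proper annealing schedule; everything here is illustrative):

```python
import random
import networkx as nx

def tune_assortativity(G, r_target, sweeps=5000, seed=0):
    """Greedily drive the degree-degree correlation r of a simple graph G
    toward r_target using degree-preserving double-edge swaps."""
    rng = random.Random(seed)
    r = nx.degree_assortativity_coefficient(G)
    for _ in range(sweeps):
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            continue                              # swap would break simplicity
        G.remove_edges_from([(a, b), (c, d)])
        G.add_edges_from([(a, d), (c, b)])        # degrees are unchanged
        r_new = nx.degree_assortativity_coefficient(G)
        if abs(r_new - r_target) <= abs(r - r_target):
            r = r_new                             # accept the move
        else:
            G.remove_edges_from([(a, d), (c, b)]) # revert
            G.add_edges_from([(a, b), (c, d)])
    return G, r

# e.g. G, r = tune_assortativity(nx.gnm_random_graph(300, 900, seed=1), 0.2)
```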
|
1206.6293
|
Cascading map-side joins over HBase for scalable join processing
|
cs.DB cs.DC
|
One of the major challenges in large-scale data processing with MapReduce is
the smart computation of joins. Since Semantic Web datasets published in RDF
have increased rapidly over the last few years, scalable join techniques have
become an important issue for SPARQL query processing as well. In this paper, we
introduce the Map-Side Index Nested Loop Join (MAPSIN join), which combines
the scalable indexing capabilities of NoSQL storage systems like HBase, which
suffer from an insufficient distributed processing layer, with MapReduce, which
in
turn does not provide appropriate storage structures for efficient large-scale
join processing. While retaining the flexibility of commonly used reduce-side
joins, we leverage the effectiveness of map-side joins without any changes to
the underlying framework. We demonstrate the significant benefits of MAPSIN
joins for the processing of SPARQL basic graph patterns on large RDF datasets
by an evaluation with the LUBM and SP2Bench benchmarks. For most queries,
MAPSIN join based query execution outperforms reduce-side join based execution
by an order of magnitude.
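The core mechanic can be illustrated in a few lines: each map task streams over one input and probes an index for join partners, so no shuffle/reduce phase is needed. Here a plain dict stands in for the HBase index, and all names and data are illustrative:

```python
def mapsin_join(left_triples, index):
    """Toy map-side index nested-loop join: for each (s, p, o) triple,
    probe the index with the join key o and emit joined tuples directly."""
    for s, p, o in left_triples:
        for match in index.get(o, []):   # index probe replaces the shuffle
            yield (s, p, o) + match

triples = [("alice", "knows", "bob"), ("bob", "knows", "carol")]
index = {"bob": [("bob", "age", 30)], "carol": [("carol", "age", 27)]}
print(list(mapsin_join(triples, index)))
```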
|
1206.6294
|
Exact mean field dynamics for epidemic-like processes on heterogeneous
networks
|
physics.soc-ph cs.SI
|
We show that the mean field equations for the SIR epidemic can be exactly
solved for a network with arbitrary degree distribution. Our exact solution
consists of reducing the dynamics to a lone first order differential equation,
which has a solution in terms of an integral over functions dependent on the
degree distribution of the network, and reconstructing all mean field functions
of interest from this integral. Irreversibility of the SIR epidemic is crucial
for the solution. We also find exact solutions to the sexually transmitted
disease SI epidemic on bipartite graphs, to a simplified rumor spreading model,
and to a new model for recommendation spreading, via similar techniques.
Numerical simulations of these processes on scale free networks demonstrate the
qualitative validity of mean field theory in most regimes.
|
1206.6302
|
Maximum Secondary Stable Throughput of a Cooperative Secondary
Transmitter-Receiver Pair: Protocol Design and Stability Analysis
|
cs.IT cs.NI math.IT
|
In this paper, we investigate the impact of cooperation between a secondary
transmitter-receiver pair and a primary transmitter (PT) on the maximum stable
throughput of the primary-secondary network. Each transmitter, primary or
secondary, has a buffer for storing its own traffic. In addition to its own
buffer, the secondary transmitter (ST) has a buffer for storing a fraction of
the undelivered primary packets due to channel impairments. Moreover, the
secondary destination has a relaying queue for storing a fraction of the
undelivered primary packets. In the proposed cooperative system, the ST and the
secondary destination increase the spectrum availability for the secondary
packets by relaying the unsuccessfully transmitted packets of the PT. We
consider two multiple access strategies to be used by the ST and the secondary
destination to utilize the silence sessions of the PT. Numerical results
demonstrate the gains of the proposed cooperative system over the
non-cooperation case.
|
1206.6322
|
A New Scale for Attribute Dependency in Large Database Systems
|
cs.IR cs.DB
|
Large, data-centric applications are characterized by their different
attributes. Today, a large majority of data-centric applications are based on
the relational model: databases are collections of tables, and every table
consists of a number of attributes. The data is typically accessed through SQL
queries, and the queries being executed can be analyzed for different types of
optimizations. Analysis based on the different attributes used in a set of
queries would guide database administrators in enhancing the speed of query
execution. A better model in this context would help in predicting the nature
of upcoming query sets, and an effective prediction model would be useful in
database applications, data warehousing, data mining, etc. In this paper, a
numeric scale is proposed to quantify the strength of associations between
independent data attributes. The proposed scale is built on a probabilistic
analysis of the usage of attributes in different queries; the methodology thus
aims to predict future usage of attributes based on current usage.
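As a rough illustration of such a probabilistic analysis (not necessarily the paper's exact scale), one can estimate how strongly one attribute's usage predicts another's from a query log:

```python
from collections import Counter
from itertools import permutations

def attribute_dependency(query_log):
    """Estimate P(b is used | a is used) for every ordered attribute pair,
    from a log of queries given as the sets of attributes they touch."""
    single, pair = Counter(), Counter()
    for attrs in query_log:
        attrs = set(attrs)
        single.update(attrs)
        pair.update(permutations(attrs, 2))      # ordered co-usage counts
    return {(a, b): c / single[a] for (a, b), c in pair.items()}

log = [{"name", "age"}, {"name", "city"}, {"name", "age", "city"}]
print(attribute_dependency(log)[("age", "name")])  # 1.0: age always with name
```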
|
1206.6323
|
Local implementations of non-local quantum gates in linear entangled
channel
|
quant-ph cs.IT math.IT
|
In this paper, we demonstrate local implementations of n-party controlled
unitary gates on an arbitrary remote state through a linear entangled channel,
where control parties share entanglement with the adjacent control parties and
only one of them shares entanglement with the target party. In such a network,
we describe the protocol for simultaneous implementation of a
controlled-Hermitian gate, starting from the three-party scenario. We also
explicate the implementation of a three-party controlled-unitary gate, a
generalized form of the Toffoli gate, and subsequently generalize the protocol
to n parties using minimal cost.
|
1206.6347
|
The observational roots of reference of the semantic web
|
cs.AI cs.DL
|
Shared reference is an essential aspect of meaning. It is also indispensable
for the semantic web, since it makes it possible to weave the global graph,
i.e., it allows different users to contribute to an identical referent. For
example, an essential kind of referent is a geographic place, to which users
may contribute observations. We argue for a human-centric, operational approach
towards reference, based on the respective human competences. These competences
encompass perceptual, cognitive, and technical ones, and together they allow
humans to refer inter-subjectively to a phenomenon in their environment. The
technology stack of the semantic web should be extended by such operations.
This would allow establishing new kinds of observation-based reference systems
that help constrain and integrate the semantic web bottom-up.
|
1206.6356
|
A Spectral Graph Uncertainty Principle
|
cs.IT math.IT
|
The spectral theory of graphs provides a bridge between classical signal
processing and the nascent field of graph signal processing. In this paper, a
spectral graph analogy to Heisenberg's celebrated uncertainty principle is
developed. Just as the classical result provides a tradeoff between signal
localization in time and frequency, this result provides a fundamental tradeoff
between a signal's localization on a graph and in its spectral domain. Using
the eigenvectors of the graph Laplacian as a surrogate Fourier basis,
quantitative definitions of graph and spectral "spreads" are given, and a
complete characterization of the feasibility region of these two quantities is
developed. In particular, the lower boundary of the region, referred to as the
uncertainty curve, is shown to be achieved by eigenvectors associated with the
smallest eigenvalues of an affine family of matrices. The convexity of the
uncertainty curve allows it to be found to within $\varepsilon$ by a fast
approximation algorithm requiring $O(\varepsilon^{-1/2})$ typically sparse
eigenvalue evaluations. Closed-form expressions for the uncertainty curves for
some special classes of graphs are derived, and an accurate analytical
approximation for the expected uncertainty curve of Erd\H{o}s-R\'enyi random
graphs is developed. These theoretical results are validated by numerical
experiments, which also reveal an intriguing connection between diffusion
processes on graphs and the uncertainty bounds.
|
1206.6361
|
Learning Markov Network Structure using Brownian Distance Covariance
|
stat.ML cs.LG
|
In this paper, we present a simple non-parametric method for learning the
structure of undirected graphs from data drawn from an underlying unknown
distribution. We propose to use Brownian distance covariance to estimate the
conditional independences between the random variables and to encode the
pairwise Markov graph. This framework can be applied in the high-dimensional
setting, where the number of parameters can be much larger than the sample
size.
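For concreteness, a small sketch of the sample statistic itself (Brownian distance covariance coincides with Székely and Rizzo's distance covariance); the paper's use of it to estimate conditional independences for structure learning is more involved:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcov(x, y):
    """Sample distance covariance between two samples of n observations;
    values near 0 suggest independence."""
    def centered(z):
        z = np.asarray(z, float).reshape(len(z), -1)
        D = squareform(pdist(z))                 # pairwise Euclidean distances
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered(x), centered(y)
    return np.sqrt(max((A * B).mean(), 0.0))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(dcov(x, x ** 2), dcov(x, rng.normal(size=500)))  # dependent >> independent
```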
|
1206.6380
|
Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring
|
cs.LG stat.CO stat.ML
|
In this paper we address the following question: Can we approximately sample
from a Bayesian posterior distribution if we are only allowed to touch a small
mini-batch of data-items for every sample we generate? An algorithm based on
the Langevin equation with stochastic gradients (SGLD) was previously proposed
to solve this, but its mixing rate was slow. By leveraging the Bayesian Central
Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it
will sample from a normal approximation of the posterior, while for slow mixing
rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a
bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic
gradients) and as such is an efficient optimizer during burn-in.
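As background, the plain SGLD step (Welling and Teh, 2011) that this abstract extends looks as follows; the Fisher-scoring preconditioner itself is omitted and names are illustrative:

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_batch, N, n, eps, rng):
    """One SGLD update: a stochastic-gradient step on the log posterior
    (minibatch gradient rescaled by N/n) plus Gaussian noise of variance eps."""
    g = grad_log_prior(theta) + (N / n) * grad_log_lik_batch(theta)
    return theta + 0.5 * eps * g + rng.normal(0.0, np.sqrt(eps), size=theta.shape)
```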
|
1206.6381
|
Shortest path distance in random k-nearest neighbor graphs
|
cs.LG stat.ML
|
Consider a weighted or unweighted k-nearest neighbor graph that has been
built on n data points drawn randomly according to some density p on R^d. We
study the convergence of the shortest path distance in such graphs as the
sample size tends to infinity. We prove that for unweighted kNN graphs, this
distance converges to an unpleasant distance function on the underlying space
whose properties are detrimental to machine learning. We also study the
behavior of the shortest path distance in weighted kNN graphs.
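A short sketch of the object under study, for intuition: build a symmetric kNN graph on a sample and compute all shortest-path distances (weighted edges carry Euclidean lengths; unweighted edges count hops):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree

def knn_shortest_paths(X, k, weighted=True):
    """All-pairs shortest-path distances in a symmetric kNN graph on X."""
    d, idx = cKDTree(X).query(X, k + 1)        # first neighbor is the point itself
    n = len(X)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    w = d[:, 1:].ravel() if weighted else np.ones(n * k)
    W = csr_matrix((w, (rows, cols)), shape=(n, n))
    return shortest_path(W.maximum(W.T), method="D", directed=False)
```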
|
1206.6382
|
High-Dimensional Covariance Decomposition into Sparse Markov and
Independence Domains
|
cs.LG stat.ML
|
In this paper, we present a novel framework incorporating a combination of
sparse models in different domains. We posit the observed data as generated
from a linear combination of a sparse Gaussian Markov model (with a sparse
precision matrix) and a sparse Gaussian independence model (with a sparse
covariance matrix). We provide efficient methods for decomposition of the data
into two domains, viz. the Markov and independence domains. We characterize a set
of sufficient conditions for identifiability and model consistency. Our
decomposition method is based on a simple modification of the popular
$\ell_1$-penalized maximum-likelihood estimator ($\ell_1$-MLE). We establish
that our estimator is consistent in both the domains, i.e., it successfully
recovers the supports of both Markov and independence models, when the number
of samples $n$ scales as $n = \Omega(d^2 \log p)$, where $p$ is the number of
variables and $d$ is the maximum node degree in the Markov model. Our
conditions for recovery are comparable to those of $\ell_1$-MLE for consistent
estimation of a sparse Markov model, and thus, we guarantee successful
high-dimensional estimation of a richer class of models under comparable
conditions. Our experiments validate these results and also demonstrate that
our models have better inference accuracy under simple algorithms such as loopy
belief propagation.
|
1206.6383
|
Feature Selection via Probabilistic Outputs
|
cs.LG stat.ML
|
This paper investigates two feature-scoring criteria that make use of
estimated class probabilities: one method proposed by \citet{shen} and a
complementary approach proposed below. We develop a theoretical framework to
analyze each criterion and show that both estimate the spread (across all
values of a given feature) of the probability that an example belongs to the
positive class. Based on our analysis, we predict when each scoring technique
will be advantageous over the other and give empirical results validating our
predictions.
|
1206.6384
|
Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm
Regularization
|
cs.LG stat.ML
|
We describe novel subgradient methods for a broad class of matrix
optimization problems involving nuclear norm regularization. Unlike existing
approaches, our method executes very cheap iterations by combining low-rank
stochastic subgradients with efficient incremental SVD updates, made possible
by highly optimized and parallelizable dense linear algebra operations on small
matrices. Our practical algorithms always maintain a low-rank factorization of
iterates that can be conveniently held in memory and efficiently multiplied to
generate predictions in matrix completion settings. Empirical comparisons
confirm that our approach is highly competitive with several recently proposed
state-of-the-art solvers for such problems.
|
1206.6385
|
Improved Estimation in Time Varying Models
|
cs.LG stat.ME stat.ML
|
Locally adapted parameterizations of a model (such as locally weighted
regression) are expressive but often suffer from high variance. We describe an
approach for reducing the variance, based on the idea of estimating
simultaneously a transformed space for the model, as well as locally adapted
parameterizations in this new space. We present a new problem formulation that
captures this idea and illustrate it in the important context of time varying
models. We develop an algorithm for learning a set of bases for approximating a
time varying sparse network; each learned basis constitutes an archetypal
sparse network structure. We also provide an extension for learning task-driven
bases. We present empirical results on synthetic data sets, as well as on a BCI
EEG classification task.
|
1206.6386
|
How To Grade a Test Without Knowing the Answers --- A Bayesian Graphical
Model for Adaptive Crowdsourcing and Aptitude Testing
|
cs.LG cs.AI stat.ML
|
We propose a new probabilistic graphical model that jointly models the
difficulties of questions, the abilities of participants and the correct
answers to questions in aptitude testing and crowdsourcing settings. We devise
an active learning/adaptive testing scheme based on a greedy minimization of
expected model entropy, which allows a more efficient resource allocation by
dynamically choosing the next question to be asked based on the previous
responses. We present experimental results that confirm the ability of our
model to infer the required parameters and demonstrate that the adaptive
testing scheme requires fewer questions to obtain the same accuracy as a static
test scenario.
|
1206.6387
|
Fast classification using sparse decision DAGs
|
cs.LG stat.ML
|
In this paper we propose an algorithm that builds sparse decision DAGs
(directed acyclic graphs) from a list of base classifiers provided by an
external learning method such as AdaBoost. The basic idea is to cast the DAG
design task as a Markov decision process. Each instance can decide to use or to
skip each base classifier, based on the current state of the classifier being
built. The result is a sparse decision DAG where the base classifiers are
selected in a data-dependent way. The method has a single hyperparameter with
clear semantics: it controls the accuracy/speed trade-off. The algorithm is
competitive with state-of-the-art cascade detectors on three object-detection
benchmarks, and it clearly outperforms them when there is a small number of
base classifiers. Unlike cascades, it is also readily applicable for
multi-class classification. Using the multi-class setup, we show on a benchmark
web page ranking data set that we can significantly improve the decision speed
without harming the performance of the ranker.
|
1206.6388
|
Canonical Trends: Detecting Trend Setters in Web Data
|
cs.LG cs.SI stat.ML
|
Much information available on the web is copied, reused or rephrased. The
phenomenon that multiple web sources pick up certain information is often
called trend. A central problem in the context of web data mining is to detect
those web sources that are first to publish information which will give rise to
a trend. We present a simple and efficient method for finding trends dominating
a pool of web sources and identifying those web sources that publish the
information relevant to a trend before others. We validate our approach on real
data collected from influential technology news feeds.
|
1206.6389
|
Poisoning Attacks against Support Vector Machines
|
cs.LG cs.CR stat.ML
|
We investigate a family of poisoning attacks against Support Vector Machines
(SVM). Such attacks inject specially crafted training data that increases the
SVM's test error. Central to the motivation for these attacks is the fact that
most learning algorithms assume that their training data comes from a natural
or well-behaved distribution. However, this assumption does not generally hold
in security-sensitive settings. As we demonstrate, an intelligent adversary
can, to some extent, predict the change of the SVM's decision function due to
malicious input and use this ability to construct malicious data. The proposed
attack uses a gradient ascent strategy in which the gradient is computed based
on properties of the SVM's optimal solution. This method can be kernelized and
enables the attack to be constructed in the input space even for non-linear
kernels. We experimentally demonstrate that our gradient ascent procedure
reliably identifies good local maxima of the non-convex validation error
surface, which significantly increases the classifier's test error.
|
1206.6390
|
Incorporating Causal Prior Knowledge as Path-Constraints in Bayesian
Networks and Maximal Ancestral Graphs
|
cs.AI cs.CE cs.LG
|
We consider the incorporation of causal knowledge about the presence or
absence of (possibly indirect) causal relations into a causal model. Such
causal relations correspond to directed paths in a causal model. This type of
knowledge naturally arises from experimental data, among others. Specifically,
we consider the formalisms of Causal Bayesian Networks and Maximal Ancestral
Graphs and their Markov equivalence classes: Partially Directed Acyclic Graphs
and Partially Oriented Ancestral Graphs. We introduce sound and complete
procedures which are able to incorporate causal prior knowledge in such models.
In simulated experiments, we show that often considering even a few causal
facts leads to a significant number of new inferences. In a case study, we also
show how to use real experimental data to infer causal knowledge and
incorporate it into a real biological causal network. The code is available at
mensxmachina.org.
|
1206.6391
|
Gaussian Process Quantile Regression using Expectation Propagation
|
stat.ME cs.LG stat.AP
|
Direct quantile regression involves estimating a given quantile of a response
variable as a function of input variables. We present a new framework for
direct quantile regression where a Gaussian process model is learned,
minimising the expected tilted loss function. The integration required in
learning is not analytically tractable so to speed up the learning we employ
the Expectation Propagation algorithm. We describe how this work relates to
other quantile regression methods and apply the method on both synthetic and
real data sets. The method is shown to be competitive with state-of-the-art
methods whilst allowing for the leverage of the full Gaussian process
probabilistic framework.
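The expected tilted (pinball) loss referred to above has a one-line form; for quantile $\tau$, underestimates are penalised $\tau/(1-\tau)$ times more than overestimates:

```python
import numpy as np

def tilted_loss(y, f, tau):
    """Mean pinball loss of predictions f for targets y at quantile tau;
    tau = 0.5 recovers (half) the mean absolute error."""
    u = y - f
    return float(np.mean(np.maximum(tau * u, (tau - 1.0) * u)))
```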
|
1206.6392
|
Modeling Temporal Dependencies in High-Dimensional Sequences:
Application to Polyphonic Music Generation and Transcription
|
cs.LG cs.SD stat.ML
|
We investigate the problem of modeling symbolic sequences of polyphonic music
in a completely general piano-roll representation. We introduce a probabilistic
model based on distribution estimators conditioned on a recurrent neural
network that is able to discover temporal dependencies in high-dimensional
sequences. Our approach outperforms many traditional models of polyphonic music
on a variety of realistic datasets. We show how our musical language model can
serve as a symbolic prior to improve the accuracy of polyphonic transcription.
|
1206.6393
|
Local Loss Optimization in Operator Models: A New Insight into Spectral
Learning
|
cs.LG stat.ML
|
This paper revisits the spectral method for learning latent variable models
defined in terms of observable operators. We give a new perspective on the
method, showing that operators can be recovered by minimizing a loss defined on
a finite subset of the domain. A non-convex optimization similar to the
spectral method is derived. We also propose a regularized convex relaxation of
this optimization. We show that in practice the availability of a continuous
regularization parameter (in contrast with the discrete number of states in the
original method) allows a better trade-off between accuracy and model
complexity. We also prove that in general, a randomized strategy for choosing
the local loss will succeed with high probability.
|
1206.6394
|
Nonparametric Link Prediction in Dynamic Networks
|
cs.LG cs.SI stat.ML
|
We propose a non-parametric link prediction algorithm for a sequence of graph
snapshots over time. The model predicts links based on the features of their
endpoints, as well as those of the local neighborhood around the endpoints.
This allows for different types of neighborhoods in a graph, each with its own
dynamics (e.g., growing or shrinking communities). We prove the consistency of
our estimator, and give a fast implementation based on locality-sensitive
hashing. Experiments with simulated as well as five real-world dynamic graphs
show that we outperform the state of the art, especially when sharp
fluctuations or non-linearities are present.
|
1206.6395
|
Convergence Rates for Differentially Private Statistical Estimation
|
cs.LG cs.CR stat.ML
|
Differential privacy is a cryptographically-motivated definition of privacy
which has gained significant attention over the past few years. Differentially
private solutions enforce privacy by adding random noise to a function computed
over the data, and the challenge in designing such algorithms is to control the
added noise in order to optimize the privacy-accuracy-sample size tradeoff.
This work studies differentially-private statistical estimation, and shows
upper and lower bounds on the convergence rates of differentially private
approximations to statistical estimators. Our results reveal a formal
connection between differential privacy and the notion of Gross Error
Sensitivity (GES) in robust statistics, by showing that the convergence rate of
any differentially private approximation to an estimator that is accurate over
a large class of distributions has to grow with the GES of the estimator. We
then provide an upper bound on the convergence rate of a differentially private
approximation to an estimator with bounded range and bounded GES. We show that
the bounded range condition is necessary if we wish to ensure a strict form of
differential privacy.
|
1206.6396
|
Joint Optimization and Variable Selection of High-dimensional Gaussian
Processes
|
cs.LG stat.ML
|
Maximizing high-dimensional, non-convex functions through noisy observations
is a notoriously hard problem, but one that arises in many applications. In
this paper, we tackle this challenge by modeling the unknown function as a
sample from a high-dimensional Gaussian process (GP) distribution. Assuming
that the unknown function only depends on few relevant variables, we show that
it is possible to perform joint variable selection and GP optimization. We
provide strong performance guarantees for our algorithm, bounding the sample
complexity of variable selection, as well as providing cumulative regret
bounds. We further provide empirical evidence on the effectiveness of our
algorithm on several benchmark optimization problems.
|
1206.6397
|
Communications Inspired Linear Discriminant Analysis
|
cs.LG stat.ML
|
We study the problem of supervised linear dimensionality reduction, taking an
information-theoretic viewpoint. The linear projection matrix is designed by
maximizing the mutual information between the projected signal and the class
label (based on a Shannon entropy measure). By harnessing a recent theoretical
result on the gradient of mutual information, the above optimization problem
can be solved directly using gradient descent, without requiring simplification
of the objective function. Theoretical analysis and empirical comparison are
made between the proposed method and two closely related methods (Linear
Discriminant Analysis and Information Discriminant Analysis), and comparisons
are also made with a method in which R\'enyi entropy is used to define the mutual
information (in this case the gradient may be computed simply, under a special
parameter setting). Relative to these alternative approaches, the proposed
method achieves promising results on real datasets.
|
1206.6398
|
Learning Parameterized Skills
|
cs.LG stat.ML
|
We introduce a method for constructing skills capable of solving tasks drawn
from a distribution of parameterized reinforcement learning problems. The
method draws example tasks from a distribution of interest and uses the
corresponding learned policies to estimate the topology of the
lower-dimensional piecewise-smooth manifold on which the skill policies lie.
This manifold models how policy parameters change as task parameters vary. The
method identifies the number of charts that compose the manifold and then
applies non-linear regression in each chart to construct a parameterized skill
by predicting policy parameters from task parameters. We evaluate our method on
an underactuated simulated robotic arm tasked with learning to accurately throw
darts at a parameterized target location.
|
1206.6399
|
Demand-Driven Clustering in Relational Domains for Predicting Adverse
Drug Events
|
cs.LG cs.AI stat.ML
|
Learning from electronic medical records (EMR) is challenging due to their
relational nature and the uncertain dependence between a patient's past and
future health status. Statistical relational learning is a natural fit for
analyzing EMRs but is less adept at handling their inherent latent structure,
such as connections between related medications or diseases. One way to capture
the latent structure is via a relational clustering of objects. We propose a
novel approach that, instead of pre-clustering the objects, performs a
demand-driven clustering during learning. We evaluate our algorithm on three
real-world tasks where the goal is to use EMRs to predict whether a patient
will have an adverse reaction to a medication. We find that our approach is
more accurate than performing no clustering, pre-clustering, and using
expert-constructed medical heterarchies.
|
1206.6400
|
Online Bandit Learning against an Adaptive Adversary: from Regret to
Policy Regret
|
cs.LG stat.ML
|
Online learning algorithms are designed to learn even when their input is
generated by an adversary. The widely-accepted formal definition of an online
algorithm's ability to learn is the game-theoretic notion of regret. We argue
that the standard definition of regret becomes inadequate if the adversary is
allowed to adapt to the online algorithm's actions. We define the alternative
notion of policy regret, which attempts to provide a more meaningful way to
measure an online algorithm's performance against adaptive adversaries.
Focusing on the online bandit setting, we show that no bandit algorithm can
guarantee a sublinear policy regret against an adaptive adversary with
unbounded memory. On the other hand, if the adversary's memory is bounded, we
present a general technique that converts any bandit algorithm with a sublinear
regret bound into an algorithm with a sublinear policy regret bound. We extend
this result to other variants of regret, such as switching regret, internal
regret, and swap regret.
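In symbols, the distinction can be sketched as follows (a common statement of the two notions, with $f_t$ allowed to depend on the full action history against an adaptive adversary; details hedged):

```latex
% Standard regret replays only the final action; the history stays fixed:
\mathrm{Regret}_T = \sum_{t=1}^{T} f_t(a_1,\dots,a_t) - \min_{a}\sum_{t=1}^{T} f_t(a_1,\dots,a_{t-1},a).
% Policy regret replays the whole interaction with the fixed action:
\mathrm{PolicyRegret}_T = \sum_{t=1}^{T} f_t(a_1,\dots,a_t) - \min_{a}\sum_{t=1}^{T} f_t(a,\dots,a).
```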
|
1206.6401
|
Consistent Multilabel Ranking through Univariate Losses
|
cs.LG stat.ML
|
We consider the problem of rank loss minimization in the setting of
multilabel classification, which is usually tackled by means of convex
surrogate losses defined on pairs of labels. Very recently, this approach was
put into question by a negative result showing that commonly used pairwise
surrogate losses, such as exponential and logistic losses, are inconsistent. In
this paper, we show a positive result which is arguably surprising in light of
the previous one: the simpler univariate variants of exponential and logistic
surrogates (i.e., defined on single labels) are consistent for rank loss
minimization. Instead of directly proving convergence, we give a much stronger
result by deriving regret bounds and convergence rates. The proposed losses
suggest efficient and scalable algorithms, which are tested experimentally.
|
1206.6402
|
Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process
Bandit Optimization
|
cs.LG stat.ML
|
Can one parallelize complex exploration-exploitation tradeoffs? As an
example, consider the problem of optimal high-throughput experimental design,
where we wish to sequentially design batches of experiments in order to
simultaneously learn a surrogate function mapping stimulus to response and
identify the maximum of the function. We formalize the task as a multi-armed
bandit problem, where the unknown payoff function is sampled from a Gaussian
process (GP), and instead of a single arm, in each round we pull a batch of
several arms in parallel. We develop GP-BUCB, a principled algorithm for
choosing batches, based on the GP-UCB algorithm for sequential GP optimization.
We prove a surprising result: as compared to the sequential approach, the
cumulative regret of the parallel algorithm only increases by a constant factor
independent of the batch size B. Our results provide rigorous theoretical
support for exploiting parallelism in Bayesian global optimization. We
demonstrate the effectiveness of our approach on two real-world applications.
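A compact sketch of the batch-selection idea under stated assumptions (unit RBF kernel, known noise): within a batch the posterior mean is frozen, while the posterior variance, which depends only on inputs, is updated with "hallucinated" observations at the points already chosen. Illustrative, not the paper's code:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_bucb_batch(X_obs, y_obs, X_cand, B, beta=4.0, noise=1e-2):
    """Select a batch of B candidate indices by GP-BUCB-style hallucination."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    mu = rbf(X_cand, X_obs) @ np.linalg.solve(K, y_obs)   # frozen for the batch
    X_fant, batch = X_obs.copy(), []
    for _ in range(B):
        Kf = rbf(X_fant, X_fant) + noise * np.eye(len(X_fant))
        kc = rbf(X_cand, X_fant)
        var = 1.0 - np.einsum('ij,jk,ik->i', kc, np.linalg.inv(Kf), kc)
        ucb = mu + np.sqrt(beta * np.maximum(var, 1e-12))
        j = int(np.argmax(ucb))
        batch.append(j)
        X_fant = np.vstack([X_fant, X_cand[j]])           # hallucinate: no y needed
    return batch
```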
|
1206.6403
|
Two Step CCA: A new spectral method for estimating vector models of
words
|
cs.CL cs.LG
|
Unlabeled data is often used to learn representations which can be used to
supplement baseline features in a supervised learner. For example, for text
applications where the words lie in a very high dimensional space (the size of
the vocabulary), one can learn a low rank "dictionary" by an
eigen-decomposition of the word co-occurrence matrix (e.g. using PCA or CCA).
In this paper, we present a new spectral method based on CCA to learn an
eigenword dictionary. Our improved procedure computes two sets of CCAs, the
first one between the left and right contexts of the given word and the second
one between the projections resulting from this CCA and the word itself. We
prove theoretically that this two-step procedure has lower sample complexity
than the simple single step procedure and also illustrate the empirical
efficacy of our approach and the richness of representations learned by our Two
Step CCA (TSCCA) procedure on the tasks of POS tagging and sentiment
classification.
|
1206.6404
|
Policy Gradients with Variance Related Risk Criteria
|
cs.LG cs.CY math.OC stat.ML
|
Managing risk in dynamic decision problems is of cardinal importance in many
fields such as finance and process control. The most common approach to
defining risk is through various variance related criteria such as the Sharpe
Ratio or the standard deviation adjusted reward. It is known that optimizing
many of the variance related risk criteria is NP-hard. In this paper we devise
a framework for local policy gradient style algorithms for reinforcement
learning for variance related criteria. Our starting point is a new formula for
the variance of the cost-to-go in episodic tasks. Using this formula we develop
policy gradient algorithms for criteria that involve both the expected cost and
the variance of the cost. We prove the convergence of these algorithms to local
minima and demonstrate their applicability in a portfolio planning problem.
|
1206.6405
|
Bounded Planning in Passive POMDPs
|
cs.LG cs.AI stat.ML
|
In Passive POMDPs actions do not affect the world state, but still incur
costs. When the agent is bounded by information-processing constraints, it can
only keep an approximation of the belief. We present a variational principle
for the problem of maintaining the information which is most useful for
minimizing the cost, and introduce an efficient and simple algorithm for
finding an optimum.
|
1206.6406
|
Bayesian Optimal Active Search and Surveying
|
cs.LG stat.ML
|
We consider two active binary-classification problems with atypical
objectives. In the first, active search, our goal is to actively uncover as
many members of a given class as possible. In the second, active surveying, our
goal is to actively query points to ultimately predict the proportion of a
given class. Numerous real-world problems can be framed in these terms, and in
either case typical model-based concerns such as generalization error are only
of secondary importance.
We approach these problems via Bayesian decision theory; after choosing
natural utility functions, we derive the optimal policies. We provide three
contributions. In addition to introducing the active surveying problem, we
extend previous work on active search in two ways. First, we prove a novel
theoretical result, that less-myopic approximations to the optimal policy can
outperform more-myopic approximations by any arbitrary degree. We then derive
bounds that for certain models allow us to reduce (in practice dramatically)
the exponential search space required by a naive implementation of the optimal
policy, enabling further lookahead while still ensuring that optimal decisions
are always made.
|
1206.6407
|
Large-Scale Feature Learning With Spike-and-Slab Sparse Coding
|
cs.LG stat.ML
|
We consider the problem of object recognition with a large number of classes.
In order to overcome the low amount of labeled examples available in this
setting, we introduce a new feature learning and extraction procedure based on
a factor model we call spike-and-slab sparse coding (S3C). Prior work on S3C
has not prioritized the ability to exploit parallel architectures and scale S3C
to the enormous problem sizes needed for object recognition. We present a
novel inference procedure appropriate for use with GPUs, which allows us to
dramatically increase both the training set size and the number of latent
factors that S3C may be trained with. We demonstrate that this approach
improves upon the supervised learning capabilities of both sparse coding and
the spike-and-slab Restricted Boltzmann Machine (ssRBM) on the CIFAR-10
dataset. We use the CIFAR-100 dataset to demonstrate that our method scales to
large numbers of classes better than previous methods. Finally, we use our
method to win the NIPS 2011 Workshop on Challenges in Learning Hierarchical
Models' Transfer Learning Challenge.
|
1206.6408
|
Sequential Nonparametric Regression
|
stat.ME astro-ph.IM cs.LG
|
We present algorithms for nonparametric regression in settings where the data
are obtained sequentially. While traditional estimators select bandwidths that
depend upon the sample size, for sequential data the effective sample size is
dynamically changing. We propose a linear time algorithm that adjusts the
bandwidth for each new data point, and show that the estimator achieves the
optimal minimax rate of convergence. We also propose the use of online expert
mixing algorithms to adapt to unknown smoothness of the regression function. We
provide simulations that confirm the theoretical results, and demonstrate the
effectiveness of the methods.
|
1206.6409
|
Scaling Up Coordinate Descent Algorithms for Large $\ell_1$
Regularization Problems
|
cs.LG cs.DC stat.ML
|
We present a generic framework for parallel coordinate descent (CD)
algorithms that includes, as special cases, the original sequential algorithms
Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm.
We introduce two novel parallel algorithms that are also special
cases---Thread-Greedy CD and Coloring-Based CD---and give performance
measurements for an OpenMP implementation of these.
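As a concrete special case, a minimal sequential Cyclic CD solver for the lasso (the canonical $\ell_1$ problem here); parallel variants such as Shotgun run this same soft-thresholding update on many coordinates concurrently. Illustrative only:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w, r = np.zeros(p), y.copy()                 # r is the residual y - Xw
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * w[j]                  # partial residual without coord j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]                  # restore full residual
    return w
```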
|
1206.6410
|
On the Partition Function and Random Maximum A-Posteriori Perturbations
|
cs.LG stat.ML
|
In this paper we relate the partition function to the max-statistics of
random variables. In particular, we provide a novel framework for approximating
and bounding the partition function using MAP inference on randomly perturbed
models. As a result, we can use efficient MAP solvers such as graph-cuts to
evaluate the corresponding partition function. We show that our method excels
in the typical "high signal - high coupling" regime that results in ragged
energy landscapes difficult for alternative approaches.
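A tiny numerical check of the underlying max-statistics identity: with i.i.d. zero-mean Gumbel noise added to every configuration's log-potential, the expected perturbed MAP value equals $\log Z$ exactly (for large models the paper perturbs low-dimensional statistics, yielding bounds rather than this exact identity):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.5, -1.0, 2.0, 0.0])          # unnormalised log-potentials
logZ = np.log(np.exp(theta).sum())               # exact partition function

# zero-mean Gumbel noise: shift the standard Gumbel by -Euler-Mascheroni
gumbel = rng.gumbel(-np.euler_gamma, 1.0, size=(100_000, len(theta)))
estimate = (theta + gumbel).max(axis=1).mean()   # E[perturbed MAP value]
print(logZ, estimate)                            # agree to ~1e-2
```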
|
1206.6411
|
On the Difficulty of Nearest Neighbor Search
|
cs.LG cs.DB cs.IR stat.ML
|
Fast approximate nearest neighbor (NN) search in large databases is becoming
popular. Several powerful learning-based formulations have been proposed
recently. However, not much attention has been paid to a more fundamental
question: how difficult is (approximate) nearest neighbor search in a given
data set? And which data properties affect the difficulty of nearest neighbor
search and how? This paper introduces the first concrete measure called
Relative Contrast that can be used to evaluate the influence of several crucial
data characteristics such as dimensionality, sparsity, and database size
simultaneously in arbitrary normed metric spaces. Moreover, we present a
theoretical analysis to prove how the difficulty measure (relative contrast)
determines/affects the complexity of Locality-Sensitive Hashing, a popular
approximate NN search method. Relative contrast also provides an explanation
for a family of heuristic hashing algorithms with good practical performance
based on PCA. Finally, we show that most of the previous works in measuring NN
search meaningfulness/difficulty can be derived as special asymptotic cases for
dense vectors of the proposed measure.
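For intuition, an empirical version of the measure is easy to compute (mean distance to the database over nearest-neighbor distance, averaged over out-of-sample queries; values near 1 mean the nearest point is barely closer than a random one):

```python
import numpy as np

def relative_contrast(X, queries):
    """Empirical relative contrast of database X for a set of query points."""
    out = []
    for q in queries:
        d = np.linalg.norm(X - q, axis=1)
        out.append(d.mean() / d.min())
    return float(np.mean(out))

rng = np.random.default_rng(0)
for dim in (2, 100):
    X, Q = rng.normal(size=(2000, dim)), rng.normal(size=(20, dim))
    print(dim, relative_contrast(X, Q))   # contrast shrinks toward 1 as dim grows
```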
|
1206.6412
|
A Simple Algorithm for Semi-supervised Learning with Improved
Generalization Error Bound
|
cs.LG stat.ML
|
In this work, we develop a simple algorithm for semi-supervised regression.
The key idea is to use the top eigenfunctions of integral operator derived from
both labeled and unlabeled examples as the basis functions and learn the
prediction function by simple linear regression. We show that, under
appropriate assumptions about the integral operator, this approach achieves a
regression error bound better than existing bounds for supervised learning. We
also verify the effectiveness of the proposed algorithm
by an empirical study.
|
1206.6413
|
A Convex Relaxation for Weakly Supervised Classifiers
|
cs.LG stat.ML
|
This paper introduces a general multi-class approach to weakly supervised
classification. Inferring the labels and learning the parameters of the model
is usually done jointly through a block-coordinate descent algorithm such as
expectation-maximization (EM), which may lead to local minima. To avoid this
problem, we propose a cost function based on a convex relaxation of the
soft-max loss. We then propose an algorithm specifically designed to
efficiently solve the corresponding semidefinite program (SDP). Empirically,
our method compares favorably to standard ones on different datasets for
multiple instance learning and semi-supervised learning as well as on
clustering tasks.
|
1206.6414
|
The Nonparametric Metadata Dependent Relational Model
|
cs.LG cs.SI stat.ML
|
We introduce the nonparametric metadata dependent relational (NMDR) model, a
Bayesian nonparametric stochastic block model for network data. The NMDR allows
the entities associated with each node to have mixed membership in an unbounded
collection of latent communities. Learned regression models allow these
memberships to depend on, and be predicted from, arbitrary node metadata. We
develop efficient MCMC algorithms for learning NMDR models from partially
observed node relationships. Retrospective MCMC methods allow our sampler to
work directly with the infinite stick-breaking representation of the NMDR,
avoiding the need for finite truncations. Our results demonstrate recovery of
useful latent communities from real-world social and ecological networks, and
the usefulness of metadata in link prediction tasks.
|
1206.6415
|
The Big Data Bootstrap
|
cs.LG stat.ML
|
The bootstrap provides a simple and powerful means of assessing the quality
of estimators. However, in settings involving large datasets, the computation
of bootstrap-based quantities can be prohibitively demanding. As an
alternative, we present the Bag of Little Bootstraps (BLB), a new procedure
which incorporates features of both the bootstrap and subsampling to obtain a
robust, computationally efficient means of assessing estimator quality. BLB is
well suited to modern parallel and distributed computing architectures and
retains the generic applicability, statistical efficiency, and favorable
theoretical properties of the bootstrap. We provide the results of an extensive
empirical and theoretical investigation of BLB's behavior, including a study of
its statistical correctness, its large-scale implementation and performance,
selection of hyperparameters, and performance on real data.
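A compact sketch of the procedure under stated assumptions ($s$ subsets of size $b = n^{\gamma}$, each expanded to nominal size $n$ via multinomial weights, so the estimator must accept weights; all numbers illustrative):

```python
import numpy as np

def blb_stderr(data, estimator, s=10, r=50, gamma=0.6, seed=0):
    """Bag of Little Bootstraps estimate of an estimator's standard error."""
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** gamma)
    ses = []
    for _ in range(s):
        sub = data[rng.choice(n, size=b, replace=False)]   # little subset
        stats = []
        for _ in range(r):
            w = rng.multinomial(n, np.full(b, 1.0 / b))    # full-size resample
            stats.append(estimator(sub, w))
        ses.append(np.std(stats, ddof=1))
    return float(np.mean(ses))

weighted_mean = lambda x, w: np.average(x, weights=w)
data = np.random.default_rng(1).normal(size=10_000)
print(blb_stderr(data, weighted_mean))   # ~ 1/sqrt(n) = 0.01
```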
|
1206.6416
|
An Infinite Latent Attribute Model for Network Data
|
cs.LG stat.ML
|
Latent variable models for network data extract a summary of the relational
structure underlying an observed network. The simplest possible models
subdivide nodes of the network into clusters; the probability of a link between
any two nodes then depends only on their cluster assignment. Currently
available models can be classified by whether clusters are disjoint or are
allowed to overlap. These models can explain a "flat" clustering structure.
Hierarchical Bayesian models provide a natural approach to capture more complex
dependencies. We propose a model in which objects are characterised by a latent
feature vector. Each feature is itself partitioned into disjoint groups
(subclusters), corresponding to a second layer of hierarchy. In experimental
comparisons, the model achieves significantly improved predictive performance
on social and biological link prediction tasks. The results indicate that
models with a single layer hierarchy over-simplify real networks.
|
1206.6417
|
Learning Task Grouping and Overlap in Multi-task Learning
|
cs.LG stat.ML
|
In the paradigm of multi-task learning, multiple related prediction tasks
are learned jointly, sharing information across the tasks. We propose a
framework for multi-task learning that enables one to selectively share
information across the tasks. We assume that each task parameter vector is a
linear combination of a finite number of underlying basis tasks. The
coefficients of the linear combination are sparse in nature, and the overlap
in the sparsity patterns of two tasks controls the amount of sharing across
them. Our model is based on the assumption that task parameters within a
group lie in a low-dimensional subspace, but allows the tasks in different
groups to overlap with each other in one or more bases. Experimental results on
four datasets show that our approach outperforms competing methods.
|
1206.6418
|
Learning Invariant Representations with Local Transformations
|
cs.LG cs.CV stat.ML
|
Learning invariant representations is an important problem in machine
learning and pattern recognition. In this paper, we present a novel framework
of transformation-invariant feature learning by incorporating linear
transformations into the feature learning algorithms. For example, we present
the transformation-invariant restricted Boltzmann machine that compactly
represents data by its weights and their transformations, which achieves
invariance of the feature representation via probabilistic max pooling. In
addition, we show that our transformation-invariant feature learning framework
can also be extended to other unsupervised learning methods, such as
autoencoders or sparse coding. We evaluate our method on several image
classification benchmark datasets, such as MNIST variations, CIFAR-10, and
STL-10, and show competitive or superior classification performance when
compared to the state-of-the-art. Furthermore, our method achieves
state-of-the-art performance on phone classification tasks with the TIMIT
dataset, which demonstrates wide applicability of our proposed algorithms to
other domains.
|
1206.6419
|
Cross-Domain Multitask Learning with Latent Probit Models
|
cs.LG stat.ML
|
Learning multiple tasks across heterogeneous domains is a challenging problem
since the feature space may not be the same for different tasks. We assume the
data in multiple tasks are generated from a latent common domain via sparse
domain transforms and propose a latent probit model (LPM) to jointly learn the
domain transforms, and the shared probit classifier in the common domain. To
learn meaningful task relatedness and avoid over-fitting in classification, we
introduce sparsity in the domain transforms matrices, as well as in the common
classifier. We derive theoretical bounds for the estimation error of the
classifier in terms of the sparsity of domain transforms. An
expectation-maximization algorithm is derived for learning the LPM. The
effectiveness of the approach is demonstrated on several real datasets.
|
1206.6420
|
Distributed Parameter Estimation via Pseudo-likelihood
|
cs.LG cs.DC stat.ML
|
Estimating statistical models within sensor networks requires distributed
algorithms, in which both data and computation are distributed across the nodes
of the network. We propose a general approach for distributed learning based on
combining local estimators defined by pseudo-likelihood components,
encompassing a number of combination methods, and provide both theoretical and
experimental analysis. We show that simple linear combination or max-voting
methods, when combined with second-order information, are statistically
competitive with more advanced and costly joint optimization. Our algorithms
have many attractive properties including low communication and computational
cost and "any-time" behavior.
|
1206.6421
|
Structured Learning from Partial Annotations
|
cs.LG stat.ML
|
Structured learning is appropriate when predicting structured outputs such as
trees, graphs, or sequences. Most prior work requires the training set to
consist of complete trees, graphs or sequences. Specifying such detailed ground
truth can be tedious or infeasible for large outputs. Our main contribution is
a large margin formulation that makes structured learning from only partially
annotated data possible. The resulting optimization problem is non-convex, yet
can be efficiently solved by the concave-convex procedure (CCCP) with novel
speedup
strategies. We apply our method to a challenging tracking-by-assignment problem
of a variable number of divisible objects. On this benchmark, using only 25% of
a full annotation we achieve a performance comparable to a model learned with a
full annotation. Finally, we offer a unifying perspective of previous work
using the hinge, ramp, or max loss for structured learning, followed by an
empirical comparison on their practical performance.
|
1206.6422
|
An Online Boosting Algorithm with Theoretical Justifications
|
cs.LG stat.ML
|
We study the task of online boosting--combining online weak learners into an
online strong learner. While batch boosting has a sound theoretical foundation,
online boosting deserves more study from the theoretical perspective. In this
paper, we carefully compare the differences between online and batch boosting,
and propose a novel and reasonable assumption for the online weak learner.
Based on the assumption, we design an online boosting algorithm with a strong
theoretical guarantee by adapting from the offline SmoothBoost algorithm that
matches the assumption closely. We further tackle the task of deciding the
number of weak learners using established theoretical results for online convex
programming and predicting with expert advice. Experiments on real-world data
sets demonstrate that the proposed algorithm compares favorably with existing
online boosting algorithms.
|
1206.6423
|
A Joint Model of Language and Perception for Grounded Attribute Learning
|
cs.CL cs.LG cs.RO
|
As robots become more ubiquitous and capable, it becomes ever more important
to enable untrained users to easily interact with them. Recently, this has led
to the study of the language grounding problem, where the goal is to extract
representations of the meanings of natural language tied to perception and
actuation in the physical world. In this paper, we present an approach for
joint learning of language and perception models for grounded attribute
induction. Our perception model includes attribute classifiers, for example to
detect object color and shape, and the language model is based on a
probabilistic categorial grammar that enables the construction of rich,
compositional meaning representations. The approach is evaluated on the task of
interpreting sentences that describe sets of objects in a physical workspace.
We demonstrate accurate task performance and effective latent-variable concept
induction in physically grounded scenes.
|
1206.6424
|
Anytime Marginal MAP Inference
|
cs.AI stat.ML
|
This paper presents a new anytime algorithm for the marginal MAP problem in
graphical models. The algorithm is described in detail, its complexity and
convergence rate are studied, and relations to previous theoretical results for
the problem are discussed. It is shown that the algorithm runs in
polynomial time if the underlying graph of the model has bounded tree-width,
and that it provides guarantees to the lower and upper bounds obtained within a
fixed amount of computational resources. Experiments with both real and
synthetically generated models highlight its main characteristics and show that it
compares favorably against Park and Darwiche's systematic search, particularly
in the case of problems with many MAP variables and moderate tree-width.
|
1206.6425
|
Sparse Stochastic Inference for Latent Dirichlet Allocation
|
cs.LG stat.ML
|
We present a hybrid algorithm for Bayesian topic models that combines the
efficiency of sparse Gibbs sampling with the scalability of online stochastic
inference. We used our algorithm to analyze a corpus of 1.2 million books (33
billion words) with thousands of topics. Our approach reduces the bias of
variational inference and generalizes to many Bayesian hidden-variable models.
|
1206.6426
|
A Fast and Simple Algorithm for Training Neural Probabilistic Language
Models
|
cs.CL cs.LG
|
In spite of their superior performance, neural probabilistic language models
(NPLMs) remain far less widely used than n-gram models due to their notoriously
long training times, which are measured in weeks even for moderately-sized
datasets. Training NPLMs is computationally expensive because they are
explicitly normalized, which leads to having to consider all words in the
vocabulary when computing the log-likelihood gradients.
We propose a fast and simple algorithm for training NPLMs based on
noise-contrastive estimation, a newly introduced procedure for estimating
unnormalized continuous distributions. We investigate the behaviour of the
algorithm on the Penn Treebank corpus and show that it reduces the training
times by more than an order of magnitude without affecting the quality of the
resulting models. The algorithm is also more efficient and much more stable
than importance sampling because it requires far fewer noise samples to perform
well.
We demonstrate the scalability of the proposed approach by training several
neural language models on a 47M-word corpus with an 80K-word vocabulary,
obtaining state-of-the-art results on the Microsoft Research Sentence
Completion Challenge dataset.
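
To make the training objective concrete, the following sketch shows the
per-example NCE loss under the usual formulation (the observed word versus k
noise samples, classified by a logistic in the score difference); variable
names are illustrative and the exact parameterization in the paper may differ.

```python
import numpy as np

def log_sigmoid(z):
    return -np.logaddexp(0.0, -z)

def nce_loss(score_data, scores_noise, log_kq_data, log_kq_noise):
    """Per-example NCE objective (negative log-likelihood), a sketch.

    score_data   : model score s(w, h) of the observed word (unnormalized log-prob)
    scores_noise : (k,) model scores for k words sampled from the noise dist. q
    log_kq_*     : log(k * q(w)) for the corresponding words
    The model never touches the softmax normalizer over the full vocabulary.
    """
    loss = -log_sigmoid(score_data - log_kq_data)              # datum -> "data"
    loss -= log_sigmoid(-(scores_noise - log_kq_noise)).sum()  # noise -> "noise"
    return loss
```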
|
1206.6427
|
Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced
Mixing Coefficients
|
cs.LG stat.ML
|
The speed of convergence of the Expectation Maximization (EM) algorithm for
Gaussian mixture model fitting is known to be dependent on the amount of
overlap among the mixture components. In this paper, we study the impact of
mixing coefficients on the convergence of EM. We show that when the mixture
components exhibit some overlap, the convergence of EM becomes slower as the
dynamic range among the mixing coefficients increases. We propose a
deterministic anti-annealing algorithm that significantly improves the speed
of convergence of EM for such mixtures with unbalanced mixing coefficients. The
proposed algorithm is compared against other standard optimization techniques
like BFGS, Conjugate Gradient, and the traditional EM algorithm. Finally, we
propose a similar deterministic anti-annealing based algorithm for the
Dirichlet process mixture model and demonstrate its advantages over the
conventional variational Bayesian approach.
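
A sketch of the tempered E-step that anti-annealing builds on, assuming a
Gaussian mixture and a user-supplied temperature schedule; the paper's
contribution lies in the deterministic schedule for beta, which is not
reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

def tempered_responsibilities(X, weights, means, covs, beta):
    """E-step with tempered posteriors: r_nk prop. to (pi_k N(x_n; mu_k, S_k))^beta.

    beta > 1 sharpens the responsibilities (anti-annealing), counteracting the
    slow, diffuse updates EM makes when mixing weights are very unbalanced.
    """
    logp = np.stack([np.log(w) + multivariate_normal.logpdf(X, m, c)
                     for w, m, c in zip(weights, means, covs)], axis=1)
    logp *= beta
    logp -= logp.max(axis=1, keepdims=True)       # numerical stability
    r = np.exp(logp)
    return r / r.sum(axis=1, keepdims=True)
```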
|
1206.6428
|
A Binary Classification Framework for Two-Stage Multiple Kernel Learning
|
cs.LG stat.ML
|
With the advent of kernel methods, automating the task of specifying a
suitable kernel has become increasingly important. In this context, the
Multiple Kernel Learning (MKL) problem of finding a combination of
pre-specified base kernels that is suitable for the task at hand has received
significant attention from researchers. In this paper we show that Multiple
Kernel Learning can be framed as a standard binary classification problem with
additional constraints that ensure the positive definiteness of the learned
kernel. Framing MKL in this way has the distinct advantage that it makes it
easy to leverage the extensive research in binary classification to develop
better performing and more scalable MKL algorithms that are conceptually
simpler, and, arguably, more accessible to practitioners. Experiments on nine
data sets from different domains show that, despite its simplicity, the
proposed technique compares favorably with current leading MKL approaches.
|
1206.6429
|
Incorporating Domain Knowledge in Matching Problems via Harmonic
Analysis
|
cs.LG cs.CV stat.ML
|
Matching one set of objects to another is a ubiquitous task in machine
learning and computer vision that often reduces to some form of the quadratic
assignment problem (QAP). The QAP is known to be notoriously hard, both in
theory and in practice. Here, we investigate if this difficulty can be
mitigated when some additional piece of information is available: (a) that all
QAP instances of interest come from the same application, and (b) the correct
solution for a set of such QAP instances is given. We propose a new approach to
accelerate the solution of QAPs based on learning parameters for a modified
objective function from prior QAP instances. A key feature of our approach is
that it takes advantage of the algebraic structure of permutations, in
conjunction with special methods for optimizing functions over the symmetric
group $S_n$ in Fourier space. Experiments show that in practical domains the new
method can outperform existing approaches.
|
1206.6430
|
Variational Bayesian Inference with Stochastic Search
|
cs.LG stat.CO stat.ML
|
Mean-field variational inference is a method for approximate Bayesian
posterior inference. It approximates a full posterior distribution with a
factorized set of distributions by maximizing a lower bound on the marginal
likelihood. This requires the ability to integrate a sum of terms in the log
joint likelihood using this factorized distribution. Often not all integrals
are in closed form, which is typically handled by using a lower bound. We
present an alternative algorithm based on stochastic optimization that allows
for direct optimization of the variational lower bound. This method uses
control variates to reduce the variance of the stochastic search gradient, in
which existing lower bounds can play an important role. We demonstrate the
approach on two non-conjugate models: logistic regression and an approximation
to the HDP.
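
A one-parameter sketch of the estimator: the score-function gradient of the
intractable term, with a tractable approximation serving as control variate.
The argument names and the scalar-parameter restriction are simplifications for
illustration.

```python
import numpy as np

def cv_gradient(f_vals, score_vals, h_vals, exact_h_grad):
    """Score-function gradient of E_q[f] with a control variate (1-D sketch).

    f_vals       : (S,) f(z_s) at samples z_s ~ q(z | lambda)
    score_vals   : (S,) d/dlambda log q(z_s | lambda), one scalar parameter
    h_vals       : (S,) a tractable approximation h(z_s) to f (e.g. a bound)
    exact_h_grad : analytic value of E_q[h(z) d/dlambda log q(z | lambda)]
    """
    g = f_vals * score_vals               # naive, high-variance estimator
    c = h_vals * score_vals               # control variate with known mean
    C = np.cov(g, c)
    a = C[0, 1] / (C[1, 1] + 1e-12)       # variance-minimizing scale
    return np.mean(g - a * c) + a * exact_h_grad
```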
|
1206.6431
|
Exact Maximum Margin Structure Learning of Bayesian Networks
|
cs.LG stat.ML
|
Recently, there has been much interest in finding globally optimal Bayesian
network structures. These techniques were developed for generative scores and
cannot be directly extended to discriminative scores, as desired for
classification. In this paper, we propose an exact method for finding network
structures maximizing the probabilistic soft margin, a successfully applied
discriminative score. Our method is based on branch-and-bound techniques within
a linear programming framework and maintains an any-time solution, together
with worst-case sub-optimality bounds. We apply a set of order constraints for
enforcing the network structure to be acyclic, which allows a compact problem
representation and the use of general-purpose optimization techniques. In
classification experiments, our methods clearly outperform generatively trained
network structures and compete with support vector machines.
|
1206.6432
|
Sparse Support Vector Infinite Push
|
cs.LG cs.CE stat.ML
|
In this paper, we address the problem of embedded feature selection for
ranking on top of the list problems. We pose this problem as a regularized
empirical risk minimization with $p$-norm push loss function ($p=\infty$) and
sparsity-inducing regularizers. We address the difficulties of this
challenging optimization problem by considering an alternating direction method
of multipliers algorithm which is built upon proximal operators of the loss
function and the regularizer. Our main technical contribution is thus to
provide a numerical scheme for computing the infinite push loss function
proximal operator. Experimental results on toy, DNA microarray and BCI problems
show how our novel algorithm compares favorably to competitors for ranking on
top while using fewer variables in the scoring function.
|
1206.6433
|
Copula Mixture Model for Dependency-seeking Clustering
|
stat.ME cs.LG stat.ML
|
We introduce a copula mixture model to perform dependency-seeking clustering
when co-occurring samples from different data sources are available. The model
takes advantage of the great flexibility offered by the copulas framework to
extend mixtures of Canonical Correlation Analysis to multivariate data with
arbitrary continuous marginal densities. We formulate our model as a
non-parametric Bayesian mixture, while providing efficient MCMC inference.
Experiments on synthetic and real data demonstrate that the increased
flexibility of the copula mixture significantly improves the clustering and the
interpretability of the results.
|
1206.6434
|
A Generative Process for Sampling Contractive Auto-Encoders
|
cs.LG stat.ML
|
The contractive auto-encoder learns a representation of the input data that
captures the local manifold structure around each data point, through the
leading singular vectors of the Jacobian of the transformation from input to
representation. The corresponding singular values specify how much local
variation is plausible in directions associated with the corresponding singular
vectors, while remaining in a high-density region of the input space. This
paper proposes a procedure for generating samples that are consistent with the
local structure captured by a contractive auto-encoder. The associated
stochastic process defines a distribution from which one can sample, and which
experimentally appears to converge quickly and mix well between modes, compared
to Restricted Boltzmann Machines and Deep Belief Networks. The intuitions
behind this procedure can also be used to train the second layer of contraction
that pools lower-level features and learns to be invariant to the local
directions of variation discovered in the first layer. We show that this can
help learn and represent invariances present in the data and improve
classification error.
|
1206.6435
|
Rethinking Collapsed Variational Bayes Inference for LDA
|
cs.LG stat.ML
|
We propose a novel interpretation of the collapsed variational Bayes
inference with a zero-order Taylor expansion approximation, called CVB0
inference, for latent Dirichlet allocation (LDA). We clarify the properties of
the CVB0 inference by using the alpha-divergence. We show that the CVB0
inference is composed of two different divergence projections: alpha=1 and -1.
This interpretation will help shed light on how CVB0 works.
|
1206.6436
|
Efficient Structured Prediction with Latent Variables for General
Graphical Models
|
cs.LG stat.ML
|
In this paper we propose a unified framework for structured prediction with
latent variables which includes hidden conditional random fields and latent
structured support vector machines as special cases. We describe a local
entropy approximation for this general formulation using duality, and derive an
efficient message passing algorithm that is guaranteed to converge. We
demonstrate its effectiveness in the tasks of image segmentation as well as 3D
indoor scene understanding from single images, showing that our approach is
superior to latent structured support vector machines and hidden conditional
random fields.
|
1206.6437
|
Large Scale Variational Bayesian Inference for Structured Scale Mixture
Models
|
cs.CV cs.LG stat.ML
|
Natural image statistics exhibit hierarchical dependencies across multiple
scales. Representing such prior knowledge in non-factorial latent tree models
can boost performance of image denoising, inpainting, deconvolution or
reconstruction substantially, beyond standard factorial "sparse" methodology.
We derive a large scale approximate Bayesian inference algorithm for linear
models with non-factorial (latent tree-structured) scale mixture priors.
Experimental results on a range of denoising and inpainting problems
demonstrate substantially improved performance compared to MAP estimation or to
inference with factorial priors.
|
1206.6438
|
Information-Theoretical Learning of Discriminative Clusters for
Unsupervised Domain Adaptation
|
cs.LG stat.ML
|
We study the problem of unsupervised domain adaptation, which aims to adapt
classifiers trained on a labeled source domain to an unlabeled target domain.
Many existing approaches first learn domain-invariant features and then
construct classifiers with them. We propose a novel approach that jointly
learns both. Specifically, while the method identifies a feature space where data
in the source and the target domains are similarly distributed, it also learns
the feature space discriminatively, optimizing an information-theoretic metric
as a proxy for the expected misclassification error on the target domain. We
show how this optimization can be effectively carried out with simple
gradient-based methods and how hyperparameters can be cross-validated without
demanding any labeled data from the target domain. Empirical studies on
benchmark tasks of object recognition and sentiment analysis validated our
modeling assumptions and demonstrated significant improvement of our method
over competing ones in classification accuracy.
|
1206.6439
|
Gap Filling in the Plant Kingdom---Trait Prediction Using Hierarchical
Probabilistic Matrix Factorization
|
cs.CE cs.LG stat.AP
|
Plant traits are a key to understanding and predicting the adaptation of
ecosystems to environmental changes, which motivates the TRY project, aimed at
constructing a global database of plant traits as a standard resource for the
ecological community. Despite its unprecedented coverage, a
large percentage of missing data substantially constrains joint trait analysis.
Meanwhile, the trait data is characterized by the hierarchical phylogenetic
structure of the plant kingdom. While factorization based matrix completion
techniques have been widely used to address the missing data problem,
traditional matrix factorization methods are unable to leverage the
phylogenetic structure. We propose hierarchical probabilistic matrix
factorization (HPMF), which effectively uses hierarchical phylogenetic
information for trait prediction. Through experiments we demonstrate HPMF's
high accuracy, the effectiveness of incorporating hierarchical structure, and
its ability to capture trait correlation.
|
1206.6440
|
Predicting Preference Flips in Commerce Search
|
cs.LG stat.ML
|
Traditional approaches to ranking in web search follow the paradigm of
rank-by-score: a learned function gives each query-URL combination an absolute
score and URLs are ranked according to this score. This paradigm ensures that
if the score of one URL is better than another's, then it will always be ranked
higher than the other. Scoring contradicts prior work in behavioral economics
that showed that users' preferences between two items depend not only on the
items but also on the presented alternatives. Thus, for the same query, users'
preference between items A and B depends on the presence/absence of item C. We
propose a new model of ranking, the Random Shopper Model, that allows and
explains such behavior. In this model, each feature is viewed as a Markov chain
over the items to be ranked, and the goal is to find a weighting of the
features that best reflects their importance. We show that our model can be
learned under the empirical risk minimization framework, and give an efficient
learning algorithm. Experiments on commerce search logs demonstrate that our
algorithm outperforms scoring-based approaches including regression and
listwise ranking.
|
1206.6441
|
A Topic Model for Melodic Sequences
|
cs.LG cs.IR stat.ML
|
We examine the problem of learning a probabilistic model for melody directly
from musical sequences belonging to the same genre. This is a challenging task
as one needs to capture not only the rich temporal structure evident in music,
but also the complex statistical dependencies among different music components.
To address this problem we introduce the Variable-gram Topic Model, which
couples the latent topic formalism with a systematic model for contextual
information. We evaluate the model on next-step prediction. Additionally, we
present a novel way of model evaluation, where we directly compare model
samples with data sequences using the Maximum Mean Discrepancy of string
kernels, to assess how close the model distribution is to the data
distribution. We show that the model has the highest performance under both
evaluation measures when compared to LDA, the Topic Bigram and related
non-topic models.
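
For concreteness, a sketch of the sample-based evaluation: squared MMD between
model samples and data sequences under some string kernel. The bigram kernel
below is a toy stand-in, not the kernel used in the paper.

```python
import numpy as np
from collections import Counter

def bigram_kernel(s, t):
    """Toy 2-spectrum string kernel: count shared bigrams."""
    cs, ct = Counter(zip(s, s[1:])), Counter(zip(t, t[1:]))
    return float(sum(cs[b] * ct[b] for b in cs))

def mmd2(samples_model, samples_data, k):
    """Biased (V-statistic) estimate of squared MMD between two sample sets."""
    Kxx = np.array([[k(a, b) for b in samples_model] for a in samples_model])
    Kyy = np.array([[k(a, b) for b in samples_data] for a in samples_data])
    Kxy = np.array([[k(a, b) for b in samples_data] for a in samples_model])
    m, n = len(samples_model), len(samples_data)
    return Kxx.sum() / m**2 + Kyy.sum() / n**2 - 2.0 * Kxy.sum() / (m * n)

# e.g. mmd2(["CDEC", "EGGA"], ["CDED", "EGFA"], bigram_kernel), treating
# melodies as strings of pitch symbols (illustrative)
```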
|
1206.6442
|
Minimizing The Misclassification Error Rate Using a Surrogate Convex
Loss
|
cs.LG stat.ML
|
We carefully study how well minimizing convex surrogate loss functions
corresponds to minimizing the misclassification error rate for the problem of
binary classification with linear predictors. In particular, we show that
amongst all convex surrogate losses, the hinge loss gives essentially the best
possible bound for the misclassification error rate of the resulting linear
predictor in terms of the best possible margin
error rate. We also provide lower bounds for specific convex surrogates that
show how different commonly used losses qualitatively differ from each other.
|
1206.6443
|
Isoelastic Agents and Wealth Updates in Machine Learning Markets
|
cs.LG cs.GT stat.ML
|
Recently, prediction markets have shown considerable promise for developing
flexible mechanisms for machine learning. In this paper, agents with isoelastic
utilities are considered. It is shown that the costs associated with
homogeneous markets of agents with isoelastic utilities produce equilibrium
prices corresponding to alpha-mixtures, with a particular form of mixing
component relating to each agent's wealth. We also demonstrate that wealth
accumulation for logarithmic and other isoelastic agents (through payoffs on
prediction of training targets) can implement both Bayesian model updates and
mixture weight updates by imposing different market payoff structures. An
iterative algorithm is given for market equilibrium computation. We demonstrate
that inhomogeneous markets of agents with isoelastic utilities outperform
state-of-the-art aggregate classifiers such as random forests, as well as single
classifiers (neural networks, decision trees) on a number of machine learning
benchmarks, and show that isoelastic combination methods are generally better
than their logarithmic counterparts.
|
1206.6444
|
Statistical Linear Estimation with Penalized Estimators: an Application
to Reinforcement Learning
|
cs.LG stat.ML
|
Motivated by value function estimation in reinforcement learning, we study
statistical linear inverse problems, i.e., problems where the coefficients of a
linear system to be solved are observed in noise. We consider penalized
estimators, where performance is evaluated using a matrix-weighted two-norm of
the defect of the estimator measured with respect to the true, unknown
coefficients. Two objective functions are considered depending on whether the
error of the defect measured with respect to the noisy coefficients is squared
or unsquared. We propose simple, yet novel and theoretically well-founded
data-dependent choices for the regularization parameters for both cases that
avoid data-splitting. A distinguishing feature of our analysis is that we
derive deterministic error bounds in terms of the error of the coefficients,
thus allowing the complete separation of the analysis of the stochastic
properties of these errors. We show that our results lead to new insights and
bounds for linear value function estimation in reinforcement learning.
|
1206.6445
|
Deep Lambertian Networks
|
cs.CV cs.LG stat.ML
|
Visual perception is a challenging problem in part due to illumination
variations. A possible solution is to first estimate an illumination invariant
representation before using it for recognition. The object albedo and surface
normals are examples of such representations. In this paper, we introduce a
multilayer generative model where the latent variables include the albedo,
surface normals, and the light source. Combining Deep Belief Nets with the
Lambertian reflectance assumption, our model can learn good priors over the
albedo from 2D images. Illumination variations can be explained by changing
only the lighting latent variable in our model. By transferring learned
knowledge from similar objects, our model can estimate albedo and surface
normals from a single image. Experiments demonstrate that our model
is able to generalize as well as improve over standard baselines in one-shot
face recognition.
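
The Lambertian assumption at the heart of the model reduces to a simple
rendering equation, sketched below; the array shapes and names are
illustrative.

```python
import numpy as np

def lambertian_image(albedo, normals, light):
    """Lambertian rendering: intensity = albedo * max(0, n . l).

    albedo  : (H, W) per-pixel reflectance (a latent variable in the model)
    normals : (H, W, 3) unit surface normals (latent)
    light   : (3,) light direction scaled by intensity (latent)
    Changing only `light` re-renders the same surface under new illumination.
    """
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading
```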
|
1206.6446
|
Agglomerative Bregman Clustering
|
cs.LG stat.ML
|
This manuscript develops the theory of agglomerative clustering with Bregman
divergences. Geometric smoothing techniques are developed to deal with
degenerate clusters. To allow for cluster models based on exponential families
with overcomplete representations, Bregman divergences are developed for
nondifferentiable convex functions.
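
For reference, the (differentiable-case) Bregman divergence underlying the
clustering, with two classic instances; the manuscript's extension to
nondifferentiable convex functions via subgradients is not shown.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence d_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi = half squared norm  ->  squared Euclidean distance
sq, sq_grad = lambda v: 0.5 * v @ v, lambda v: v
# phi = negative entropy   ->  generalized KL divergence
negent, negent_grad = (lambda v: np.sum(v * np.log(v)),
                       lambda v: np.log(v) + 1.0)
```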
|
1206.6447
|
Small-sample Brain Mapping: Sparse Recovery on Spatially Correlated
Designs with Randomization and Clustering
|
cs.LG cs.CV stat.AP stat.ML
|
Functional neuroimaging can measure the brain's response to an external
stimulus. It is used to perform brain mapping: identifying from these
observations the brain regions involved. This problem can be cast into a linear
supervised learning task where the neuroimaging data are used as predictors for
the stimulus. Brain mapping is then seen as a support recovery problem. On
functional MRI (fMRI) data, this problem is particularly challenging as i) the
number of samples is small due to limited acquisition time and ii) the
variables are strongly correlated. We propose to overcome these difficulties
using sparse regression models over new variables obtained by clustering the
original variables. The use of randomization techniques, e.g. bootstrap
samples, and clustering of the variables improves the recovery properties of
sparse methods. We demonstrate the benefit of our approach on an extensive
simulation study as well as two fMRI datasets.
|
1206.6448
|
Online Alternating Direction Method
|
cs.LG stat.ML
|
Online optimization has emerged as a powerful tool in large-scale optimization.
In this paper, we introduce efficient online algorithms based on the
alternating directions method (ADM). We introduce a new proof technique for ADM
in the batch setting, which yields the O(1/T) convergence rate of ADM and forms
the basis of regret analysis in the online setting. We consider two scenarios
in the online setting, based on whether the solution needs to lie in the
feasible set or not. In both settings, we establish regret bounds for both the
objective function as well as constraint violation for general and strongly
convex functions. Preliminary results are presented to illustrate the
performance of the proposed algorithms.
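
As a reference point, a sketch of batch ADM/ADMM for the lasso, showing the
three-step structure that the online variants retain; this is the standard
batch algorithm, not the paper's online method.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Batch ADMM for min 0.5||Ax - b||^2 + lam * ||z||_1  s.t.  x = z.

    x-update: ridge solve; z-update: soft threshold; u: scaled dual ascent.
    Online variants replace the x-update with a step on the current data
    point plus a proximal term, keeping the same three-step structure.
    """
    n = A.shape[1]
    z, u = np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z
```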
|
1206.6449
|
Monte Carlo Bayesian Reinforcement Learning
|
cs.LG stat.ML
|
Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in
a model and represents uncertainty in model parameters by maintaining a
probability distribution over them. This paper presents Monte Carlo BRL
(MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a
finite set of hypotheses for the model parameter values and forms a discrete
partially observable Markov decision process (POMDP) whose state space is a
cross product of the state space for the reinforcement learning task and the
sampled model parameter space. The POMDP does not require conjugate
distributions for belief representation, as earlier works do, and can be solved
relatively easily with point-based approximation algorithms. MC-BRL naturally
handles both fully and partially observable worlds. Theoretical and
experimental results show that the discrete POMDP approximates the underlying
BRL task well with guaranteed performance.
|
1206.6450
|
Conditional Sparse Coding and Grouped Multivariate Regression
|
cs.LG stat.ML
|
We study the problem of multivariate regression where the data are naturally
grouped, and a regression matrix is to be estimated for each group. We propose
an approach in which a dictionary of low rank parameter matrices is estimated
across groups, and a sparse linear combination of the dictionary elements is
estimated to form a model within each group. We refer to the method as
conditional sparse coding since it is a coding procedure for the response
vectors Y conditioned on the covariate vectors X. This approach captures the
shared information across the groups while adapting to the structure within
each group. It exploits the same intuition behind sparse coding that has been
successfully developed in computer vision and computational neuroscience. We
propose an algorithm for conditional sparse coding, analyze its theoretical
properties in terms of predictive accuracy, and present the results of
simulation and brain imaging experiments that compare the new technique to
reduced rank regression.
|
1206.6451
|
The Greedy Miser: Learning under Test-time Budgets
|
cs.LG stat.ML
|
As machine learning algorithms enter applications in industrial settings,
there is increased interest in controlling their cpu-time during testing. The
cpu-time consists of the running time of the algorithm and the extraction time
of the features. The latter can vary drastically when the feature set is
diverse. In this paper, we propose an algorithm, the Greedy Miser, that
incorporates the feature extraction cost during training to explicitly minimize
the cpu-time during testing. The algorithm is a straightforward extension of
stage-wise regression and is equally suitable for regression or multi-class
classification. Compared to prior work, it is significantly more cost-effective
and scales to larger data sets.
|
1206.6452
|
Smoothness and Structure Learning by Proxy
|
cs.LG math.OC stat.ML
|
As data sets grow in size, the ability of learning methods to find structure
in them is increasingly hampered by the time needed to search the large spaces
of possibilities and generate a score for each that takes all of the observed
data into account. For instance, Bayesian networks, the model chosen in this
paper, have a super-exponentially large search space for a fixed number of
variables. One possible method to alleviate this problem is to use a proxy,
such as a Gaussian Process regressor, in place of the true scoring function,
training it on a selection of sampled networks. We prove here that the use of
such a proxy is well-founded, as we can bound the smoothness of a commonly-used
scoring function for Bayesian network structure learning. We show that,
compared to an identical search strategy using the network's exact scores, our
proxy-based search is able to get equivalent or better scores on a number of
data sets in a fraction of the time.
|
1206.6453
|
Adaptive Canonical Correlation Analysis Based On Matrix Manifolds
|
cs.LG stat.ML
|
In this paper, we formulate the Canonical Correlation Analysis (CCA) problem
on matrix manifolds. This framework provides a natural way for dealing with
matrix constraints and tools for building efficient algorithms even in an
adaptive setting. Finally, an adaptive CCA algorithm is proposed and applied to
a change detection problem in EEG signals.
|
1206.6454
|
Hierarchical Exploration for Accelerating Contextual Bandits
|
cs.LG stat.ML
|
Contextual bandit learning is an increasingly popular approach to optimizing
recommender systems via user feedback, but can be slow to converge in practice
due to the need for exploring a large feature space. In this paper, we propose
a coarse-to-fine hierarchical approach for encoding prior knowledge that
drastically reduces the amount of exploration required. Intuitively, user
preferences can be reasonably embedded in a coarse low-dimensional feature
space that can be explored efficiently, requiring exploration in the
high-dimensional space only as necessary. We introduce a bandit algorithm that
explores within this coarse-to-fine spectrum, and prove performance guarantees
that depend on how well the coarse space captures the user's preferences. We
demonstrate substantial improvement over conventional bandit algorithms through
extensive simulation as well as a live user study in the setting of
personalized news recommendation.
|
1206.6455
|
Regularizers versus Losses for Nonlinear Dimensionality Reduction: A
Factored View with New Convex Relaxations
|
cs.LG stat.ML
|
We demonstrate that almost all non-parametric dimensionality reduction
methods can be expressed by a simple procedure: regularized loss minimization
plus singular value truncation. By distinguishing the role of the loss and
regularizer in such a process, we recover a factored perspective that reveals
some gaps in the current literature. Beyond identifying a useful new loss for
manifold unfolding, a key contribution is to derive new convex regularizers
that combine distance maximization with rank reduction. These regularizers can
be applied to any loss.
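
The second ingredient of the procedure, singular value truncation, is a
one-liner; a sketch:

```python
import numpy as np

def singular_value_truncate(B, k):
    """Project B onto rank <= k by zeroing all but the top-k singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s[k:] = 0.0
    return (U * s) @ Vt
```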
|
1206.6456
|
Lognormal and Gamma Mixed Negative Binomial Regression
|
stat.AP cs.LG stat.ME
|
In regression analysis of counts, a lack of simple and efficient algorithms
for posterior computation has made Bayesian approaches appear unattractive and
thus underdeveloped. We propose a lognormal and gamma mixed negative binomial
(NB) regression model for counts, and present efficient closed-form Bayesian
inference; unlike conventional Poisson models, the proposed approach has two
free parameters to include two different kinds of random effects, and allows
the incorporation of prior information, such as sparsity in the regression
coefficients. By placing a gamma distribution prior on the NB dispersion
parameter r, and connecting a lognormal distribution prior with the logit of
the NB probability parameter p, efficient Gibbs sampling and variational Bayes
inference are both developed. The closed-form updates are obtained by
exploiting conditional conjugacy via both a compound Poisson representation and
a Polya-Gamma distribution based data augmentation approach. The proposed
Bayesian inference can be implemented routinely, while being easily
generalizable to more complex settings involving multivariate dependence
structures. The algorithms are illustrated using real examples.
|
1206.6457
|
Exponential Regret Bounds for Gaussian Process Bandits with
Deterministic Observations
|
cs.LG stat.ML
|
This paper analyzes the problem of Gaussian process (GP) bandits with
deterministic observations. The analysis uses a branch and bound algorithm that
is related to the UCB algorithm of Srinivas et al. (2010). For GPs with
Gaussian observation noise, with variance strictly greater than zero, Srinivas
et al. proved that the regret vanishes at the approximate rate of
$O(1/\sqrt{t})$, where t is the number of observations. To complement their
result, we attack the deterministic case and attain a much faster exponential
convergence rate. Under some regularity assumptions, we show that the regret
decreases asymptotically according to $O(e^{-\frac{\tau t}{(\ln t)^{d/4}}})$
with high probability. Here, d is the dimension of the search space and $\tau$ is
a constant that depends on the behaviour of the objective function near its
global maximum.
|
1206.6458
|
Batch Active Learning via Coordinated Matching
|
cs.LG stat.ML
|
Most prior work on active learning of classifiers has focused on sequentially
selecting one unlabeled example at a time to be labeled in order to reduce the
overall labeling effort. In many scenarios, however, it is desirable to label
an entire batch of examples at once, for example, when labels can be acquired
in parallel. This motivates us to study batch active learning, which
iteratively selects batches of $k>1$ examples to be labeled. We propose a novel
batch active learning method that leverages the availability of high-quality
and efficient sequential active-learning policies by attempting to approximate
their behavior when applied for $k$ steps. Specifically, our algorithm first
uses Monte-Carlo simulation to estimate the distribution of unlabeled examples
selected by a sequential policy over $k$ step executions. The algorithm then
attempts to select a set of $k$ examples that best matches this distribution,
leading to a combinatorial optimization problem that we term "bounded
coordinated matching". While we show this problem is NP-hard in general, we
give an efficient greedy solution, which inherits approximation bounds from
supermodular minimization theory. Our experimental results on eight benchmark
datasets show that the proposed approach is highly effective.
|
1206.6459
|
Bayesian Conditional Cointegration
|
cs.CE cs.LG stat.ME
|
Cointegration is an important topic for time series, and describes a
relationship between two series in which a linear combination is stationary.
Classically, the test for cointegration is based on a two-stage process in
which the linear relation between the series is first estimated by Ordinary
Least Squares. Subsequently a unit root test is performed on the residuals. A
well-known deficiency of this classical approach is that it can lead to
erroneous conclusions about the presence of cointegration. As an alternative,
we present a framework for estimating whether cointegration exists using
Bayesian inference which is empirically superior to the classical approach.
Finally, we apply our technique to model segmented cointegration in which
cointegration may exist only for limited time. In contrast to previous
approaches our model makes no restriction on the number of possible
cointegration segments.
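
For context, a sketch of the classical two-stage procedure that the Bayesian
framework is compared against, assuming statsmodels is available; this is the
baseline, not the proposed method.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def engle_granger(y, x):
    """Classical two-stage cointegration test (the baseline described above).

    Stage 1: OLS fit of y on x.  Stage 2: ADF unit-root test on the residuals;
    a small p-value suggests stationary residuals, i.e. cointegration.  Note
    the known caveat: residual-based ADF strictly needs adjusted critical
    values, one reason the classical route can mislead.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    adf_stat, pvalue, *_ = adfuller(resid)
    return beta, adf_stat, pvalue
```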
|
1206.6460
|
Output Space Search for Structured Prediction
|
cs.LG cs.AI stat.ML
|
We consider a framework for structured prediction based on search in the
space of complete structured outputs. Given a structured input, an output is
produced by running a time-bounded search procedure guided by a learned cost
function, and then returning the least cost output uncovered during the search.
This framework can be instantiated for a wide range of search spaces and search
procedures, and easily incorporates arbitrary structured-prediction loss
functions. In this paper, we make two main technical contributions. First, we
define the limited-discrepancy search space over structured outputs, which is
able to leverage powerful classification learning algorithms to improve the
search space quality. Second, we give a generic cost function learning
approach, where the key idea is to learn a cost function that attempts to mimic
the behavior of conducting searches guided by the true loss function. Our
experiments on six benchmark domains demonstrate that using our framework with
only a small amount of search is sufficient for significantly improving on
state-of-the-art structured-prediction performance.
|
1206.6461
|
On the Sample Complexity of Reinforcement Learning with a Generative
Model
|
cs.LG stat.ML
|
We consider the problem of learning the optimal action-value function in the
discounted-reward Markov decision processes (MDPs). We prove a new PAC bound on
the sample-complexity of model-based value iteration algorithm in the presence
of the generative model, which indicates that for an MDP with N state-action
pairs and the discount factor \gamma\in[0,1) only
O(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) samples are required to find an
\epsilon-optimal estimation of the action-value function with the probability
1-\delta. We also prove a matching lower bound of
\Theta(N\log(N/\delta)/((1-\gamma)^3\epsilon^2)) on the sample complexity of
estimating the optimal action-value function by every RL algorithm. To the best
of our knowledge, this is the first matching result on the sample complexity of
estimating the optimal (action-) value function in which the upper bound
matches the lower bound of RL in terms of N, \epsilon, \delta and 1/(1-\gamma).
Also, both our lower bound and our upper bound significantly improve on the
state-of-the-art in terms of 1/(1-\gamma).
|
1206.6462
|
Learning Object Arrangements in 3D Scenes using Human Context
|
cs.LG cs.CV cs.RO stat.ML
|
We consider the problem of learning object arrangements in a 3D scene. The
key idea here is to learn how objects relate to human poses based on their
affordances, ease of use and reachability. In contrast to modeling
object-object relationships, modeling human-object relationships scales
linearly in the number of objects. We design appropriate density functions
based on 3D spatial features to capture this. We learn the distribution of
human poses in a scene using a variant of the Dirichlet process mixture model
that allows sharing of the density function parameters across the same object
types. Then we can reason about arrangements of the objects in the room based
on these meaningful human poses. In our extensive experiments on 20 different
rooms with a total of 47 objects, our algorithm predicted correct placements
with an average error of 1.6 meters from ground truth. In arranging five real
scenes, it received a score of 4.3/5 compared to 3.7 for the best baseline
method.
|
1206.6463
|
An Iterative Locally Linear Embedding Algorithm
|
cs.LG stat.ML
|
Locally Linear Embedding (LLE) is a popular dimension-reduction method. In this
paper, we first show that LLE with a nonnegativity constraint is equivalent to
the widely used Laplacian embedding. Second, we propose to iterate the two
steps in LLE repeatedly to improve the results. Third, we relax the kNN
constraint of LLE and present a sparse similarity learning algorithm. The final
Iterative LLE combines these three improvements. Extensive experimental results
show that the iterative LLE algorithm significantly improves both
classification and clustering results.
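
A sketch of the two LLE steps that the method iterates, in plain NumPy;
neighborhood computation and the paper's sparse similarity learning are
omitted.

```python
import numpy as np

def lle_weights(X, neighbors, reg=1e-3):
    """Step 1: reconstruct each point from its k nearest neighbors.

    X         : (n, d) data;  neighbors : (n, k) neighbor indices (no self)
    """
    n, k = neighbors.shape
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbors[i]] - X[i]              # centered neighborhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)      # regularize if singular
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbors[i]] = w / w.sum()
    return W

def lle_embed(W, d):
    """Step 2: bottom eigenvectors of (I - W)^T (I - W), skipping the constant."""
    n = W.shape[0]
    M = np.eye(n) - W
    M = M.T @ M
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

# the iterative variant alternates: embed, recompute weights in the embedded
# space, re-embed -- a sketch of the loop, not the paper's exact algorithm
```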
|
1206.6464
|
Estimating the Hessian by Back-propagating Curvature
|
cs.LG stat.ML
|
In this work we develop Curvature Propagation (CP), a general technique for
efficiently computing unbiased approximations of the Hessian of any function
that is computed using a computational graph. At the cost of roughly two
gradient evaluations, CP can give a rank-1 approximation of the whole Hessian,
and can be repeatedly applied to give increasingly precise unbiased estimates
of any or all of the entries of the Hessian. Of particular interest is the
diagonal of the Hessian, for which no general approach is known to exist that
is both efficient and accurate. We show in experiments that CP turns out to
work well in practice, giving very accurate estimates of the Hessian of neural
networks, for example, with a relatively small amount of work. We also apply CP
to Score Matching, where a diagonal of a Hessian plays an integral role in the
Score Matching objective, and where it is usually computed exactly using
inefficient algorithms which do not scale to larger and more complex models.
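
CP itself propagates curvature through the computational graph, which is not
reproduced here; as a loose illustration of the shared principle (cheap
unbiased stochastic estimates of Hessian entries that sharpen under averaging),
here is a Hutchinson-style diagonal estimator.

```python
import numpy as np

def hessian_diag_estimate(hvp, dim, num_samples=1000, rng=None):
    """Unbiased stochastic estimate of diag(H) from Hessian-vector products.

    For Rademacher v, E[v * (H v)] = diag(H), since E[v_i v_j] = delta_ij.
    Averaging over samples gives increasingly precise unbiased estimates.
    """
    rng = rng or np.random.default_rng(0)
    est = np.zeros(dim)
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        est += v * hvp(v)
    return est / num_samples

# sanity check on f(x) = 0.5 x^T A x, whose Hessian is A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(hessian_diag_estimate(lambda v: A @ v, dim=2))   # ~ [3., 2.]
```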
|