| id | title | categories | abstract |
|---|---|---|---|
1206.3881
|
DANCo: Dimensionality from Angle and Norm Concentration
|
cs.LG stat.ML
|
In recent decades the estimation of the intrinsic dimensionality of a
dataset has gained considerable importance. Despite the great deal of research
work devoted to this task, most of the proposed solutions prove to be
unreliable when the intrinsic dimensionality of the input dataset is high and
the manifold where the points lie is nonlinearly embedded in a higher
dimensional space. In this paper we propose a novel robust intrinsic
dimensionality estimator that exploits the complementary information conveyed
both by the normalized nearest-neighbor distances and by the angles computed
on pairs of neighboring points, also providing closed forms for the
Kullback-Leibler divergences of the respective distributions. Experiments
performed on both synthetic and real datasets highlight the robustness and the
effectiveness of the proposed algorithm when compared to state-of-the-art
methodologies.
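A rough sketch of the two statistics the estimator combines, assuming a
hypothetical neighborhood size k and using scikit-learn for the neighbor
search; this illustrates the ingredients only, not the full DANCo estimator:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def danco_statistics(X, k=10):
    """Normalized NN distances and pairwise neighbor angles (illustration)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)              # column 0 is each point itself
    norm_dist = dist[:, 1] / dist[:, k]       # one plausible normalization
    angles = []
    for i in range(len(X)):
        v = X[idx[i, 1:]] - X[i]              # vectors to the k neighbors
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        g = np.clip(v @ v.T, -1.0, 1.0)
        angles.append(np.arccos(g[np.triu_indices(k, 1)]))
    return norm_dist, np.concatenate(angles)
```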
|
1206.3897
|
Sampled-data design for robust control of a single qubit
|
quant-ph cs.SY
|
This paper presents a sampled-data approach for the robust control of a
single qubit (quantum bit). The required robustness is defined using a sliding
mode domain and the control law is designed offline and then utilized online
with a single qubit having bounded uncertainties. Two classes of uncertainties
are considered involving the system Hamiltonian and the coupling strength of
the system-environment interaction. Four cases are analyzed in detail: no
decoherence, amplitude damping decoherence, phase damping decoherence, and
depolarizing decoherence. Sampling periods are specifically
designed for these cases to guarantee the required robustness. Two sufficient
conditions are presented for guiding the design of unitary control for the
cases without decoherence and with amplitude damping decoherence. The proposed
approach has potential applications in quantum error-correction and in
constructing robust quantum gates.
|
1206.3902
|
On the Complexity of Existential Positive Queries
|
cs.LO cs.AI cs.CC
|
We systematically investigate the complexity of model checking the
existential positive fragment of first-order logic. In particular, for a set of
existential positive sentences, we consider model checking where the sentence
is restricted to fall into the set; a natural question is then to classify
which sentence sets are tractable and which are intractable. With respect to
fixed-parameter tractability, we give a general theorem that reduces this
classification question to the corresponding question for primitive positive
logic, for a variety of representations of structures. This general theorem
allows us to deduce that an existential positive sentence set having bounded
arity is fixed-parameter tractable if and only if each sentence is equivalent
to one in bounded-variable logic. We then use the lens of classical complexity
to study these fixed-parameter tractable sentence sets. We show that such a set
can be NP-complete, and consider the length needed by a translation from
sentences in such a set to bounded-variable logic; we prove superpolynomial
lower bounds on this length using the theory of compilability, obtaining an
interesting type of formula size lower bound. Overall, the tools, concepts, and
results of this article set the stage for the future consideration of the
complexity of model checking on more expressive logics.
|
1206.3924
|
Recommendation systems in the scope of opinion formation: a model
|
physics.soc-ph cs.SI physics.data-an
|
Aggregated data in real world recommender applications often feature
fat-tailed distributions of the number of times individual items have been
rated or favored. We propose a model to simulate such data. The model is mainly
based on social interactions and opinion formation taking place on a complex
network with a given topology. A threshold mechanism is used to govern the
decision making process that determines whether a user is or is not interested
in an item. We demonstrate the validity of the model by fitting attendance
distributions from different real data sets. The model is mathematically
analyzed by investigating its master equation. Our approach is an attempt
to understand recommender-system data as a social process. The model can
serve as a starting point to generate artificial data sets useful for testing
and evaluating recommender systems.
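A minimal sketch of the kind of threshold-driven adoption dynamics described
above, on a scale-free networkx graph; the seed count, threshold value, and
graph model are illustrative assumptions, not the paper's calibrated model:

```python
import networkx as nx
import random

def simulate_item(G, seeds=5, threshold=0.3, rounds=20):
    interested = set(random.sample(list(G.nodes), seeds))
    for _ in range(rounds):
        adopters = set()
        for u in G.nodes:
            if u in interested:
                continue
            nbrs = list(G.neighbors(u))
            frac = sum(v in interested for v in nbrs) / max(len(nbrs), 1)
            if frac >= threshold:   # threshold mechanism governs the decision
                adopters.add(u)
        if not adopters:
            break
        interested |= adopters
    return len(interested)          # "attendance" of this item

G = nx.barabasi_albert_graph(1000, 3)
sizes = [simulate_item(G) for _ in range(200)]  # attendance across items
```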
|
1206.3933
|
Prediction of Emerging Technologies Based on Analysis of the U.S. Patent
Citation Network
|
cs.SI physics.soc-ph
|
The network of patents connected by citations is an evolving graph, which
provides a representation of the innovation process. A patent citing another
implies that the cited patent reflects a piece of previously existing knowledge
that the citing patent builds upon. A methodology presented here (i) identifies
actual clusters of patents: i.e. technological branches, and (ii) gives
predictions about the temporal changes of the structure of the clusters. A
predictor, called the {citation vector}, is defined for characterizing
technological development to show how a patent cited by other patents belongs
to various industrial fields. The clustering technique adopted is able to
detect the new emerging recombinations, and predicts emerging new technology
clusters. The predictive ability of our new method is illustrated on the
example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of
patents is determined based on citation data up to 1991, which shows
significant overlap of the class 442 formed at the beginning of 1997. These new
tools of predictive analytics could support policy decision making processes in
science and technology, and help formulate recommendations for action.
|
1206.3953
|
Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase
Diagrams, and Threshold Achieving Matrices
|
cond-mat.stat-mech cs.IT math.IT
|
Compressed sensing is a signal processing method that acquires data directly
in a compressed form. This allows one to take fewer measurements than
previously considered necessary to record a signal, enabling faster or more
precise measurement protocols in a wide range of applications. Using an
interdisciplinary approach, we have recently proposed in [arXiv:1109.4424] a
strategy that allows compressed sensing to be performed at acquisition rates
approaching the theoretical optimal limits. In this paper, we give a more
thorough presentation of our approach, and introduce many new results. We
present the probabilistic approach to reconstruction and discuss its optimality
and robustness. We detail the derivation of the message passing algorithm for
reconstruction and expectation-maximization learning of signal-model
parameters. We further develop the asymptotic analysis of the corresponding
phase diagrams with and without measurement noise, for different signal
distributions, and discuss the best possible reconstruction performance
regardless of the algorithm. We also present new efficient seeding matrices,
test them on synthetic data and analyze their performance asymptotically.
|
1206.3959
|
Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial
Intelligence (2009)
|
cs.AI
|
This is the Proceedings of the Twenty-Fifth Conference on Uncertainty in
Artificial Intelligence, which was held in Montreal, QC, Canada, June 18-21,
2009.
|
1206.3963
|
Small-world topology of functional connectivity in randomly connected
dynamical systems
|
cs.SI physics.data-an physics.soc-ph q-bio.NC stat.AP
|
Characterization of real-world complex systems increasingly involves the
study of their topological structure using graph theory. Among global network
properties, the small-world property, consisting in the existence of
relatively short paths together with high clustering of the network, is one of
the most
discussed and studied. When dealing with coupled dynamical systems, links among
units of the system are commonly quantified by a measure of pairwise
statistical dependence of observed time series (functional connectivity). We
argue that the functional connectivity approach leads to upwardly biased
estimates of small-world characteristics (with respect to commonly used random
graph models) due to partial transitivity of the accepted functional
connectivity measures such as the correlation coefficient. In particular, this
may lead to observation of small-world characteristics in connectivity graphs
estimated from generic randomly connected dynamical systems. The ubiquity and
robustness of the phenomenon are documented by an extensive parameter study of
its manifestation in a multivariate linear autoregressive process, with
discussion of the potential relevance for nonlinear processes and measures.
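A small illustration of the setup, under simplifying assumptions (a randomly
coupled stable linear AR process, correlation as the connectivity measure, a
90th-percentile threshold); the resulting graph's clustering can then be
compared against a random-graph baseline of the same density:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, T = 50, 5000
W = (rng.random((n, n)) < 0.1) * 0.1                     # random coupling
W /= max(1.0, 1.1 * np.abs(np.linalg.eigvals(W)).max())  # keep AR stable
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = x[t - 1] @ W.T + rng.standard_normal(n)  # linear AR(1) process

C = np.corrcoef(x.T)                                # functional connectivity
np.fill_diagonal(C, 0.0)
thr = np.quantile(np.abs(C), 0.90)                  # keep strongest 10% links
G = nx.from_numpy_array((np.abs(C) > thr).astype(int))
# Correlation's partial transitivity tends to inflate clustering relative
# to a random graph of the same density:
print(nx.average_clustering(G), nx.density(G))
```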
|
1206.3965
|
On the Bivariate Nakagami-$m$ Cumulative Distribution Function:
Closed-form Expression and Applications
|
cs.IT math.IT
|
In this paper, we derive exact closed-form expressions for the bivariate
Nakagami-$m$ cumulative distribution function (CDF) with positive integer
fading severity index $m$ in terms of a class of hypergeometric functions.
Particularly, we show that the bivariate Nakagami-$m$ CDF can be expressed as a
finite sum of elementary functions and bivariate confluent hypergeometric
$\Phi_3$ functions. Direct applications which arise from the proposed
closed-form expression are the outage probability (OP) analysis of a
dual-branch selection combiner in correlated Nakagami-$m$ fading, or the
calculation of the level crossing rate (LCR) and average fade duration (AFD) of
a sampled Nakagami-$m$ fading envelope.
|
1206.3975
|
The Ultrasound Visualization Pipeline - A Survey
|
cs.GR cs.CV
|
Ultrasound is one of the most frequently used imaging modalities in medicine.
Its high spatial resolution, interactive nature, and non-invasiveness make it
the first choice in many examinations. Image interpretation is one of
ultrasound's main challenges. Much training is required to obtain a confident
skill level in ultrasound-based diagnostics. State-of-the-art graphics
techniques are needed to provide meaningful visualizations of ultrasound in
real-time. In this paper we present the processing pipeline for ultrasound
visualization, including an overview of the tasks performed in the specific
steps. To provide an insight into the trends of ultrasound visualization
research, we have selected a set of significant publications and divided them
into a technique-based taxonomy covering the topics pre-processing,
segmentation, registration, rendering and augmented reality. For the different
technique types we discuss the difference between ultrasound-based techniques
and techniques for other modalities.
|
1206.3988
|
Cooperative localization using angle of arrival measurements: sequential
algorithms and non-line-of-sight suppression
|
cs.NI cs.MA
|
We investigate localization of a source based on angle of arrival (AoA)
measurements made at a geographically dispersed network of cooperating
receivers. The goal is to efficiently compute accurate estimates despite
outliers in the AoA measurements due to multipath reflections in
non-line-of-sight (NLOS) environments. Maximum likelihood (ML) location
estimation in such a setting requires exhaustive testing of estimates from all
possible subsets of "good" measurements, which has exponential complexity in
the number of measurements. We provide a randomized algorithm that approaches
ML performance with linear complexity in the number of measurements. The
building block for this algorithm is a low-complexity sequential algorithm for
updating the source location estimates under line-of-sight (LOS) environments.
Our Bayesian framework can exploit the ability to resolve multiple paths in
wideband systems to provide significant performance gains over narrowband
systems in NLOS environments, and easily extends to accommodate additional
information such as range measurements and prior information about location.
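For reference, a least-squares line-intersection estimate for the LOS case,
the kind of building block the sequential updates refine; this is a textbook
sketch, not the paper's randomized NLOS algorithm:

```python
import numpy as np

def aoa_least_squares(positions, bearings):
    """positions: (n,2) receiver locations; bearings: (n,) AoA in radians."""
    # Each measurement constrains the source to the line through p_i with
    # direction (cos t_i, sin t_i); n_i is that line's unit normal.
    n_vec = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    b = np.einsum('ij,ij->i', n_vec, positions)   # n_i . p_i
    src, *_ = np.linalg.lstsq(n_vec, b, rcond=None)
    return src

p = np.array([[0., 0.], [10., 0.], [0., 10.]])
true = np.array([4., 3.])
theta = np.arctan2(true[1] - p[:, 1], true[0] - p[:, 0])
print(aoa_least_squares(p, theta))   # ~ [4, 3]
```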
|
1206.3992
|
Evaluating Overlapping Communities with the Conductance of their
Boundary Nodes
|
cs.SI physics.soc-ph
|
Usually the boundary of a community in a network is drawn between nodes and
thus crosses its outgoing links. If we construct overlapping communities by
applying the link-clustering approach, nodes and links interchange their roles.
Therefore, boundaries must be drawn through the nodes shared by two or more
communities. For the purpose of community evaluation we define a conductance of
boundary nodes of overlapping communities analogously to the graph conductance
of boundary-crossing links used to partition a graph into disjoint communities.
We show that conductance of boundary nodes (or normalised node cut) can be
deduced from ordinary graph conductance of disjoint clusters in the network's
weighted line graph introduced by Evans and Lambiotte (2009) to get overlapping
communities of nodes in the original network. We test whether our definition
can be used to construct meaningful overlapping communities with a local greedy
algorithm of link clustering. In this note we present encouraging results we
obtained for Zachary's karate-club network.
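For comparison, a sketch of the ordinary graph conductance of a disjoint node
set, the link-based quantity that the node-cut definition above mirrors, using
the karate-club network mentioned in the note:

```python
import networkx as nx

def conductance(G, S):
    """Cut edges leaving S divided by the smaller of the two volumes."""
    S = set(S)
    cut = sum(1 for u, v in G.edges if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = 2 * G.number_of_edges() - vol_S
    return cut / min(vol_S, vol_rest)

G = nx.karate_club_graph()
S = [n for n, d in G.nodes(data=True) if d['club'] == 'Mr. Hi']
print(conductance(G, S))
```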
|
1206.4020
|
The Theory of Bonds: A New Method for the Analysis of Linkages
|
math.AG cs.RO math.MG math.RA
|
In this paper we introduce a new technique, based on dual quaternions, for
the analysis of closed linkages with revolute joints: the theory of bonds. The
bond structure comprises a lot of information on closed revolute chains with a
one-parametric mobility. We demonstrate the usefulness of bond theory by giving
a new and transparent proof for the well-known classification of
overconstrained 5R linkages.
|
1206.4042
|
The Stability of Convergence of Curve Evolutions in Vector Fields
|
cs.CV math.AP
|
Curve evolution is often used to solve computer vision problems. If the curve
evolution fails to converge, the targeted problem cannot be solved in any
reasonable time. This paper studies the theoretical aspects of the convergence
of a type of general curve evolution. We establish a theory for analyzing and
improving the stability of the convergence of such general curve evolutions.
Based on this theory, we ascertain that the convergence of a known curve
evolution is marginally stable. We propose a way of modifying the original curve
evolution equation to improve the stability of the convergence according to our
theory. Numerical experiments show that the modification improves the
convergence of the curve evolution, which validates our theory.
|
1206.4074
|
A Linear Approximation to the chi^2 Kernel with Geometric Convergence
|
cs.LG cs.CV stat.ML
|
We propose a new analytical approximation to the $\chi^2$ kernel that
converges geometrically. The analytical approximation is derived with
elementary methods and adapts to the input distribution for optimal convergence
rate. Experiments show the new approximation leads to improved performance in
image classification and semantic segmentation tasks using a random Fourier
feature approximation of the $\exp-\chi^2$ kernel. In addition, out-of-core
principal component analysis (PCA) methods are introduced to reduce the
dimensionality of the approximation and achieve better performance at the
expense of only an additional constant factor to the time complexity. Moreover,
when PCA is performed jointly on the training and unlabeled testing data,
further performance improvements can be obtained. Experiments conducted on the
PASCAL VOC 2010 segmentation and the ImageNet ILSVRC 2010 datasets show
statistically significant improvements over alternative approximation methods.
|
1206.4094
|
Maximal-entropy random walk unifies centrality measures
|
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI
|
In this paper analogies between different (dis)similarity matrices are
derived. These matrices, which are connected to path enumeration and random
walks, are used in community detection methods or in computation of centrality
measures for complex networks. The focus is on a number of known centrality
measures, which inherit the connections established for similarity matrices.
These measures are based on the principal eigenvector of the adjacency matrix,
path enumeration, as well as on the stationary state, stochastic matrix or mean
first-passage times of a random walk. Particular attention is paid to the
maximal-entropy random walk, which serves as a very distinct alternative to the
ordinary random walk used in network analysis.
The various importance measures, defined both with the use of ordinary random
walk and the maximal-entropy random walk, are compared numerically on a set of
benchmark graphs. It is shown that groups of centrality measures defined with
the two random walks cluster into two separate families. In particular, the
group of centralities for the maximal-entropy random walk, connected to the
eigenvector centrality and path enumeration, is strongly distinct from all the
other measures and produces largely equivalent results.
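The MERW transition probabilities have a simple closed form in terms of the
principal eigenpair $(\lambda, \psi)$ of the adjacency matrix,
$S_{ij} = A_{ij}\psi_j/(\lambda\psi_i)$; a sketch contrasting it with the
ordinary walk (the benchmark comparison itself is not reproduced here):

```python
import numpy as np

def walk_matrices(A):
    """Ordinary and maximal-entropy random-walk transition matrices."""
    P_ord = A / A.sum(axis=1, keepdims=True)  # ordinary: uniform on neighbors
    w, V = np.linalg.eigh(A)                  # A symmetric (undirected graph)
    lam, psi = w[-1], np.abs(V[:, -1])        # principal eigenpair
    P_merw = A * psi[None, :] / (lam * psi[:, None])
    pi = psi**2 / np.sum(psi**2)              # MERW stationary distribution
    return P_ord, P_merw, pi

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
P_ord, P_merw, pi = walk_matrices(A)
print(P_merw.sum(axis=1))   # rows sum to 1
```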
|
1206.4110
|
ConeRANK: Ranking as Learning Generalized Inequalities
|
cs.LG cs.IR
|
We propose a new data mining approach in ranking documents based on the
concept of cone-based generalized inequalities between vectors. A partial
ordering between two vectors is made with respect to a proper cone and thus
learning the preferences is formulated as learning proper cones. A pairwise
learning-to-rank algorithm (ConeRank) is proposed to learn a non-negative
subspace, formulated as a polyhedral cone, over document-pair differences. The
algorithm is regularized by controlling the `volume' of the cone.
Experimental studies on the latest and largest ranking dataset, LETOR 4.0, show
that ConeRank is competitive against other recent ranking approaches.
|
1206.4116
|
Dependence Maximizing Temporal Alignment via Squared-Loss Mutual
Information
|
stat.ML cs.AI
|
The goal of temporal alignment is to establish time correspondence between
two sequences, which has many applications in a variety of areas such as speech
processing, bioinformatics, computer vision, and computer graphics. In this
paper, we propose a novel temporal alignment method called least-squares
dynamic time warping (LSDTW). LSDTW finds an alignment that maximizes
statistical dependency between sequences, measured by a squared-loss variant of
mutual information. The benefit of this novel information-theoretic formulation
is that LSDTW can align sequences with different lengths, different
dimensionality, high non-linearity, and non-Gaussianity in a computationally
efficient manner. In addition, model parameters such as an initial alignment
matrix can be systematically optimized by cross-validation. We demonstrate the
usefulness of LSDTW through experiments on synthetic and real-world Kinect
action recognition datasets.
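For orientation, the classic dynamic-programming DTW that LSDTW generalizes
(this is standard textbook DTW, not the proposed squared-loss
mutual-information variant):

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: np.linalg.norm(a - b)):
    """Classic DTW alignment cost between sequences x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]   # backtracking through D recovers the warping path
```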
|
1206.4121
|
The information-theoretic costs of simulating quantum measurements
|
quant-ph cs.IT math.IT
|
Winter's measurement compression theorem stands as one of the most
penetrating insights of quantum information theory (QIT). In addition to making
an original and profound statement about measurement in quantum theory, it also
underlies several other general protocols in QIT. In this paper, we provide a
full review of Winter's measurement compression theorem, detailing the
information processing task, giving examples for understanding it, reviewing
Winter's achievability proof, and detailing a new approach to its single-letter
converse theorem. We prove an extension of the theorem to the case in which the
sender is not required to receive the outcomes of the simulated measurement.
The total cost of common randomness and classical communication can be lower
for such a "non-feedback" simulation, and we prove a single-letter converse
theorem demonstrating optimality. We then review the Devetak-Winter theorem on
classical data compression with quantum side information, providing new proofs
of its achievability and converse parts. From there, we outline a new protocol
that we call "measurement compression with quantum side information," announced
previously by two of us in our work on triple trade-offs in quantum Shannon
theory. This protocol has several applications, including its part in the
"classically-assisted state redistribution" protocol, which is the most general
protocol on the static side of the quantum information theory tree, and its
role in reducing the classical communication cost in a task known as local
purity distillation. We also outline a connection between measurement
compression with quantum side information and recent work on entropic
uncertainty relations in the presence of quantum memory. Finally, we prove a
single-letter theorem characterizing measurement compression with quantum side
information when the sender is not required to obtain the measurement outcome.
|
1206.4123
|
On the Confidentiality of Information Dispersal Algorithms and Their
Erasure Codes
|
cs.IT cs.CR cs.DC math.IT
|
\emph{Information Dispersal Algorithms (IDAs)} have been widely applied to
reliable and secure storage and transmission of data files in distributed
systems. An IDA is a method that encodes a file $F$ of size $L=|F|$ into $n$
unrecognizable pieces $F_1$, $F_2$, ..., $F_n$, each of size $L/m$ ($m<n$), so
that the original file $F$ can be reconstructed from any $m$ pieces. The core
of an IDA is the adopted non-systematic $m$-of-$n$ erasure code. This paper
makes a systematic study of the \emph{confidentiality} of an IDA and its
connection with the adopted erasure code. Two levels of confidentiality are
defined: \emph{weak confidentiality} (in the case where some parts of the
original file $F$ can be reconstructed explicitly from fewer than $m$ pieces)
and \emph{strong confidentiality} (in the case where nothing of the original
file $F$ can be reconstructed explicitly from fewer than $m$ pieces). For an
IDA that adopts an arbitrary non-systematic erasure code, its confidentiality
may fall into weak confidentiality. To achieve strong confidentiality, this
paper explores a sufficient and feasible condition on the adopted erasure code.
Then, this paper shows that Rabin's IDA has strong confidentiality. At the same
time, this paper presents an effective way to construct an IDA with strong
confidentiality from an arbitrary $m$-of-$(m+n)$ erasure code. Then, as an
example, this paper constructs an IDA with strong confidentiality from a
Reed-Solomon code, the computation complexity of which is comparable to or
sometimes even lower than that of Rabin's IDA.
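A hedged sketch of a Vandermonde-based construction in the spirit of Rabin's
IDA: m file symbols are encoded into n pieces over GF(257), and any m pieces
determine the block because every m x m Vandermonde submatrix with distinct
nodes is invertible (illustrative only; the paper's constructions differ in
details):

```python
import numpy as np

P = 257  # prime > 256, so each byte fits in one field element

def vandermonde(n, m):
    return np.array([[pow(x, j, P) for j in range(m)] for x in range(1, n + 1)])

def solve_mod_p(A, b):
    """Solve A z = b over GF(P) by Gaussian elimination."""
    A, b = A % P, b % P
    for c in range(len(b)):
        r = next(i for i in range(c, len(b)) if A[i, c])
        A[[c, r]], b[[c, r]] = A[[r, c]], b[[r, c]]    # pivot swap
        inv = pow(int(A[c, c]), P - 2, P)              # Fermat inverse
        A[c], b[c] = A[c] * inv % P, b[c] * inv % P
        for i in range(len(b)):
            if i != c and A[i, c]:
                f = A[i, c]
                A[i], b[i] = (A[i] - f * A[c]) % P, (b[i] - f * b[c]) % P
    return b

m, n = 3, 5
V = vandermonde(n, m)
block = np.array([72, 105, 33])             # m file symbols
pieces = V @ block % P                      # n unrecognizable pieces
keep = [0, 2, 4]                            # any m of the n pieces suffice
print(solve_mod_p(V[keep], pieces[keep]))   # -> [ 72 105  33]
```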
|
1206.4169
|
Clustered Bandits
|
cs.LG
|
We consider a multi-armed bandit setting that is inspired by real-world
applications in e-commerce. In our setting, there are a few types of users,
each with a specific response to the different arms. When a user enters the
system, his type is unknown to the decision maker. The decision maker can
either treat each user separately ignoring the previously observed users, or
can attempt to take advantage of knowing that only a few types exist and cluster
the users according to their response to the arms. We devise algorithms that
combine the usual exploration-exploitation tradeoff with clustering of users
and demonstrate the value of clustering. In the process of developing
algorithms for the clustered setting, we propose and analyze simple algorithms
for the setup where a decision maker knows that a user belongs to one of a few
types, but does not know which one.
|
1206.4176
|
Energy and Spectral Efficiencies Trade-off with Filter Optimization in
Multiple Access Interference-Aware
|
math.OC cs.IT math.IT
|
This work analyzes the optimized deployment of two resources scarcely
available in mobile multiple access systems, i.e., spectrum and energy, as well
as the impact of filter optimization on the system performance. Considering
the two conflicting metrics of throughput maximization and power
consumption minimization, the distributed energy efficiency (EE) cost function
is formulated. Furthermore, the best energy-spectral efficiencies (EE-SE)
trade-off is achieved when each node allocates exactly the power necessary to
attain the best SINR response, which guarantees the maximal EE. To demonstrate
the validity of our analysis, two low-complexity energy-spectral-efficient
algorithms, based on the distributed instantaneous SINR level, are developed, and
the impact of single and multiuser detection filters on the EE-SE trade-off is
analyzed.
|
1206.4185
|
Ant Robotics: Covering Continuous Domains by Multi-A(ge)nt Systems
|
cs.RO cs.AI cs.MA
|
In this work we present an algorithm for covering continuous connected
domains by ant-like robots with very limited capabilities. The robots can mark
visited places with pheromone marks and sense the level of the pheromone in
their local neighborhood. In the case of multiple robots, these pheromone
marks can
be sensed by all robots and provide the only way of (indirect) communication
between the robots. The robots are assumed to be memoryless, and to have no
global information such as the domain map, their own position (either absolute
or relative), total marked area percentage, maximal pheromone level, etc.
Despite the robots' simplicity, we show that they are able, by running a very
simple rule of behavior, to ensure efficient covering of arbitrary connected
domains, including non-planar and multidimensional ones. The novelty of our
algorithm lies in the fact that, unlike previously proposed methods, our
algorithm works on continuous domains without relying on some "induced"
underlying graph, which effectively reduces the problem to a discrete case of
graph covering. The algorithm guarantees complete coverage of any connected
domain. We also prove that the algorithm is noise immune, i.e., it is able to
cope with any initial pheromone profile (noise). In addition the algorithm
provides a bounded constant time between two successive visits of the robot,
and thus, is suitable for patrolling or surveillance applications.
|
1206.4192
|
Designing Incoherent Dictionaries for Compressed Sensing: Algorithm
Comparison
|
cs.IT cs.DS math.IT
|
A new method is presented for the design of incoherent dictionaries.
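For context, incoherence is usually measured by the mutual coherence of the
dictionary, the largest absolute inner product between distinct unit-norm
atoms; a minimal sketch of that quantity (the design method itself is not
reproduced here):

```python
import numpy as np

def mutual_coherence(D):
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize atoms (cols)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

D = np.random.randn(64, 128)   # 64-dim signals, 128 atoms
print(mutual_coherence(D))
```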
|
1206.4221
|
Distributed Maximum Likelihood for Simultaneous Self-localization and
Tracking in Sensor Networks
|
math.OC cs.DC cs.SY stat.AP
|
We show that the sensor self-localization problem can be cast as a static
parameter estimation problem for Hidden Markov Models and we implement fully
decentralized versions of the Recursive Maximum Likelihood and on-line
Expectation-Maximization algorithms to localize the sensor network
simultaneously with target tracking. For linear Gaussian models, our algorithms
can be implemented exactly using a distributed version of the Kalman filter and
a novel message passing algorithm. The latter allows each node to compute the
local derivatives of the likelihood or the sufficient statistics needed for
Expectation-Maximization. In the non-linear case, a solution based on local
linearization in the spirit of the Extended Kalman Filter is proposed. In
numerical examples we demonstrate that the developed algorithms are able to
learn the localization parameters.
|
1206.4226
|
Three-User Cognitive Interference Channel: Capacity Region with Strong
Interference
|
cs.IT math.IT
|
This study investigates the capacity region of a three-user cognitive radio
network with two primary users and one cognitive user. A three-user Cognitive
Interference Channel (C-IFC) is proposed by considering a three-user
Interference Channel (IFC) where one of the transmitters has cognitive
capabilities and knows the messages of the other two transmitters in a
non-causal manner. First, two inner bounds on the capacity region of the
three-user C-IFC are obtained based on using the schemes which allow all
receivers to decode all messages with two different orders. Next, two sets of
conditions are derived, under which the capacity region of the proposed model
coincides with the capacity region of a three-user C-IFC in which all three
messages are required at all receivers. Under these conditions, referred to as
strong interference conditions, the capacity regions for the proposed
three-user C-IFC are characterized. Moreover, the Gaussian three-user C-IFC is
considered and the capacity results are derived for the Gaussian case. Some
numerical examples are also provided.
|
1206.4229
|
Information field dynamics for simulation scheme construction
|
physics.comp-ph astro-ph.IM cs.IT math.IT
|
Information field dynamics (IFD) is introduced here as a framework to derive
numerical schemes for the simulation of physical and other fields without
assuming a particular sub-grid structure as many schemes do. IFD constructs an
ensemble of non-parametric sub-grid field configurations from the combination
of the data in computer memory, representing constraints on possible field
configurations, and prior assumptions on the sub-grid field statistics. Each of
these field configurations can formally be evolved to a later moment since any
differential operator of the dynamics can act on fields living in continuous
space. However, these virtually evolved fields need again a representation by
data in computer memory. The maximum entropy principle of information theory
guides the construction of updated datasets via entropic matching, optimally
representing these field configurations at the later time. The field dynamics
thereby become represented by a finite set of evolution equations for the data
that can be solved numerically. The sub-grid dynamics is treated within an
auxiliary analytic consideration and the resulting scheme acts solely on the
data space. It should provide a more accurate description of the physical field
dynamics than simulation schemes constructed ad-hoc, due to the more rigorous
accounting of sub-grid physics and the space discretization process.
Assimilation of measurement data into an IFD simulation is conceptually
straightforward since measurement and simulation data can just be merged. The
IFD approach is illustrated using the example of a coarsely discretized
representation of a thermally excited classical Klein-Gordon field. This should
pave the way towards the construction of schemes for more complex systems like
turbulent hydrodynamics.
|
1206.4232
|
Enhanced active power filter control for nonlinear non-stationary
reactive power compensation
|
math.OC cs.SY
|
This paper describes a method to implement Reactive Power Compensation (RPC)
in power systems that possess nonlinear non-stationary current disturbances.
The Empirical Mode Decomposition (EMD) introduced in the Hilbert-Huang
Transform (HHT) is used to separate the disturbances from the original current
waveform. These disturbances are subsequently removed. Following that, Power
Factor Correction (PFC) based on the well-known p-q power theory is conducted
to remove the reactive power. Both operations were implemented in a shunt
Active Power Filter (APF). The EMD significantly simplifies the isolation and
removal of the current disturbances. This helps to effectively identify the
fundamental current waveform. Hence, it simplifies the implementation of RPC on
nonlinear non-stationary power systems.
|
1206.4245
|
On Lossless Universal Compression of Distributed Identical Sources
|
cs.IT math.IT
|
The Slepian-Wolf theorem is a well-known framework that targets almost lossless
compression of (two) data streams with symbol-by-symbol correlation between the
outputs of (two) distributed sources. However, this paper considers a different
scenario which does not fit in the Slepian-Wolf framework. We consider two
identical but spatially separated sources. We wish to study the universal
compression of a sequence of length $n$ from one of the sources provided that
the decoder has access to (i.e., memorized) a sequence of length $m$ from the
other source. Such a scenario occurs, for example, in the universal compression
of data from multiple mirrors of the same server. In this setup, the
correlation does not arise from symbol-by-symbol dependency of two outputs from
the two sources. Instead, the sequences are correlated through the information
that they contain about the unknown source parameter. We show that the
finite-length nature of the compression problem at hand requires considering a
notion of almost lossless source coding, where coding incurs an error
probability $p_e(n)$ that vanishes with sequence length $n$. We obtain a lower
bound on the average minimax redundancy of almost lossless codes as a function
of the sequence length $n$ and the permissible error probability $p_e$ when the
decoder has a memory of length $m$ and the encoders do not communicate. Our
results demonstrate that a strict performance loss is incurred when the two
encoders do not communicate even when the decoder knows the unknown parameter
vector (i.e., $m \to \infty$).
|
1206.4275
|
Joint Transmit Precoding for the Relay Interference Broadcast Channel
|
cs.IT math.IT
|
Relays in cellular systems are interference limited. The highest end-to-end
sum rates are achieved when the relays are jointly optimized with the transmit
strategy. Unfortunately, interference couples the links together making joint
optimization challenging. Further, the end-to-end multi-hop performance is
sensitive to rate mismatch, when some links have a dominant first link while
others have a dominant second link. This paper proposes an algorithm for
designing the linear transmit precoders at the transmitters and relays of the
relay interference broadcast channel, a generic model for relay-based cellular
systems, to maximize the end-to-end sum-rates. First, the relays are designed
to maximize the second-hop sum-rates. Next, approximate end-to-end rates that
depend on the time-sharing fraction and the second-hop rates are used to
formulate a sum-utility maximization problem for designing the transmitters.
This problem is solved by iteratively minimizing the weighted sum of mean
square errors. Finally, the norms of the transmit precoders at the transmitters
are adjusted to eliminate rate mismatch. The proposed algorithm allows for
distributed implementation and has fast convergence. Numerical results show
that the proposed algorithm outperforms a reasonable application of single-hop
interference management strategies separately on two hops.
|
1206.4280
|
Return Migration After Brain Drain: A Simulation Approach
|
physics.soc-ph cs.SI
|
The Brain Drain phenomenon is particularly heterogeneous and is characterized
by peculiar specifications. It influences the economic fundamentals of both the
country of origin and the host one in terms of human capital accumulation.
Here, the brain drain is considered from a microeconomic perspective: more
precisely, we focus on the individual rational decision to return, relating it
to the worker's social capital. The presented model compares utility
levels to justify agent migration conduct and to simulate several scenarios
within a computational environment. In particular, we developed a simulation
framework based on two fundamental individual features, i.e. risk aversion and
initial expectation, which characterize the dynamics of different agents
according to the evolution of their social contacts. Our main result is that,
according to the value of risk aversion and initial expectation, the
probability of return migration depends on their ratio, with a certain degree
of approximation: when risk aversion is much bigger than the initial
expectation, the probability of return is maximal, while, in the opposite
case, the probability for the agents to remain abroad is very high. In between,
when the two values are comparable, there exists a broad intertwined region
where it is very difficult to draw any analytical forecast.
|
1206.4300
|
Quasi-Succinct Indices
|
cs.IR cs.DS
|
Compressed inverted indices in use today are based on the idea of gap
compression: document pointers are stored in increasing order, and the gaps
between successive document pointers are stored using suitable codes which
represent smaller gaps using fewer bits. Additional data such as counts and
positions are stored using similar techniques. A large body of research has been
built in the last 30 years around gap compression, including theoretical
modeling of the gap distribution, specialized instantaneous codes suitable for
gap encoding, and ad hoc document reorderings which increase the efficiency of
instantaneous codes. This paper proposes to represent an index using a
different architecture based on quasi-succinct representation of monotone
sequences. We show that, besides being theoretically elegant and simple, the
new index provides expected constant-time operations and, in practice,
significant performance improvements on conjunctive, phrasal and proximity
queries.
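The canonical quasi-succinct representation of monotone sequences is
Elias-Fano coding; a compact sketch (with one common choice of the low-bit
count l), storing low bits verbatim and high parts in unary:

```python
def ef_encode(seq, u):
    """seq: sorted non-negative ints < u. Returns about l + 2 bits/element."""
    n = len(seq)
    l = max(0, (u // n).bit_length() - 1)     # number of explicit low bits
    lows = [x & ((1 << l) - 1) for x in seq]
    highs, prev = [], 0                       # unary gaps of the high parts
    for x in seq:
        h = x >> l
        highs.extend([0] * (h - prev) + [1])
        prev = h
    return l, lows, highs

def ef_decode(l, lows, highs):
    out, h, it = [], 0, iter(lows)
    for bit in highs:
        if bit:
            out.append((h << l) | next(it))
        else:
            h += 1
    return out

seq = [3, 4, 7, 13, 14, 15, 21, 43]
print(ef_decode(*ef_encode(seq, 44)) == seq)   # True
```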
|
1206.4326
|
Joint Reconstruction of Multi-view Compressed Images
|
cs.MM cs.CV
|
The distributed representation of correlated multi-view images is an
important problem that arises in vision sensor networks. This paper concentrates
on the joint reconstruction problem where the distributively compressed
correlated images are jointly decoded in order to improve the reconstruction
quality of all the compressed images. We consider a scenario where the images
captured at different viewpoints are encoded independently using common coding
solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among
different cameras. A central decoder first estimates the underlying correlation
model from the independently compressed images which will be used for the joint
signal recovery. The joint reconstruction is then cast as a constrained convex
optimization problem that reconstructs total-variation (TV) smooth images that
comply with the estimated correlation model. At the same time, we add
constraints that force the reconstructed images to be consistent with their
compressed versions. We show by experiments that the proposed joint
reconstruction scheme outperforms independent reconstruction in terms of image
quality, for a given target bit rate. In addition, the decoding performance of
our proposed algorithm compares advantageously to state-of-the-art distributed
coding schemes based on disparity learning and on the DISCOVER scheme.
|
1206.4327
|
Social Influence in Social Advertising: Evidence from Field Experiments
|
cs.SI physics.soc-ph stat.AP
|
Social advertising uses information about consumers' peers, including peer
affiliations with a brand, product, organization, etc., to target ads and
contextualize their display. This approach can increase ad efficacy for two
main reasons: peers' affiliations reflect unobserved consumer characteristics,
which are correlated along the social network; and the inclusion of social cues
(i.e., peers' association with a brand) alongside ads affects responses via
social influence processes. For these reasons, responses may be increased when
multiple social signals are presented with ads, and when ads are affiliated
with peers who are strong, rather than weak, ties.
We conduct two very large field experiments that identify the effect of
social cues on consumer responses to ads, measured in terms of ad clicks and
the formation of connections with the advertised entity. In the first
experiment, we randomize the number of social cues present in word-of-mouth
advertising, and measure how responses increase as a function of the number of
cues. The second experiment examines the effect of augmenting traditional ad
units with a minimal social cue (i.e., displaying a peer's affiliation below an
ad in light grey text). On average, this cue causes significant increases in ad
performance. Using a measurement of tie strength based on the total amount of
communication between subjects and their peers, we show that these influence
effects are greatest for strong ties. Our work has implications for ad
optimization, user interface design, and central questions in social science
research.
|
1206.4329
|
An Improved Gauss-Newton Method based Back-propagation Algorithm for
Fast Convergence
|
cs.AI cs.NA
|
The present work deals with an improved back-propagation algorithm based on
the Gauss-Newton numerical optimization method for fast convergence. In
standard back-propagation, the steepest descent method is used. The algorithm
is tested using various datasets and compared with the steepest descent
back-propagation
algorithm. In the system, optimization is carried out using multilayer neural
network. The efficacy of the proposed method is observed during the training
period, as it converges quickly for the dataset used in the test. The
requirement of memory for computing the steps of the algorithm is also
analyzed.
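For reference, the textbook (damped) Gauss-Newton update for a least-squares
objective, alongside the steepest-descent step used in standard
back-propagation; this is a generic sketch, not the paper's exact training
procedure:

```python
import numpy as np

def gauss_newton_step(J, r, damping=1e-3):
    """J: (m,p) Jacobian of residuals r (m,) w.r.t. the p weights."""
    H = J.T @ J + damping * np.eye(J.shape[1])  # approximates the Hessian
    return -np.linalg.solve(H, J.T @ r)         # weight update delta_w

def steepest_descent_step(J, r, lr=0.1):
    return -lr * (J.T @ r)                      # gradient of 0.5*||r||^2
```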
|
1206.4358
|
Robust Detection of Dynamic Community Structure in Networks
|
physics.data-an cond-mat.dis-nn cs.SI physics.bio-ph physics.soc-ph q-bio.NC
|
We describe techniques for the robust detection of community structure in
some classes of time-dependent networks. Specifically, we consider the use of
statistical null models for facilitating the principled identification of
structural modules in semi-decomposable systems. Null models play an important
role both in the optimization of quality functions such as modularity and in
the subsequent assessment of the statistical validity of identified community
structure. We examine the sensitivity of such methods to model parameters and
show how comparisons to null models can help identify system scales. By
considering a large number of optimizations, we quantify the variance of
network diagnostics over optimizations (`optimization variance') and over
randomizations of network structure (`randomization variance'). Because the
modularity quality function typically has a large number of nearly-degenerate
local optima for networks constructed using real data, we develop a method to
construct representative partitions that uses a null model to correct for
statistical noise in sets of partitions. To illustrate our results, we employ
ensembles of time-dependent networks extracted from both nonlinear oscillators
and empirical neuroscience data.
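A sketch of the modularity quality function with the standard
configuration-model null referenced above,
$Q = \frac{1}{2m}\sum_{ij}(A_{ij} - k_i k_j/2m)\,\delta(c_i, c_j)$:

```python
import numpy as np

def modularity(A, labels):
    k = A.sum(axis=1)                # degrees
    two_m = k.sum()                  # 2m = total degree
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if labels[i] == labels[j]:
                Q += A[i, j] - k[i] * k[j] / two_m  # observed minus expected
    return Q / two_m

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(modularity(A, [0, 0, 0, 1]))
```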
|
1206.4370
|
Cyclic Codes from Dickson Polynomials
|
cs.IT math.IT
|
Due to their efficient encoding and decoding algorithms, cyclic codes, a
subclass of linear codes, have applications in consumer electronics, data
storage systems, and communication systems. In this paper, Dickson polynomials
of the first and second kind over finite fields are employed to construct a
number of classes of cyclic codes. Lower bounds on the minimum weight of some
classes of the cyclic codes are developed. The minimum weights of some other
classes of the codes constructed in this paper are determined. The dimensions
of the codes obtained in this paper are flexible. Most of the codes presented
in this paper are optimal or almost optimal in the sense that they meet some
bound on linear codes. Over ninety cyclic codes from this paper could be used
to update the current database of tables of best known linear codes. Among them
sixty are optimal in the sense that they meet some bound on linear codes and
the rest are cyclic codes having the same parameters as the best linear code in
the current database maintained at http://www.codetables.de/.
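Dickson polynomials of the first and second kind satisfy the three-term
recurrence $D_n(x,a) = x D_{n-1}(x,a) - a D_{n-2}(x,a)$, with $D_0 = 2$,
$D_1 = x$ (and $E_0 = 1$, $E_1 = x$ for the second kind); a small evaluator
over GF(p), as background for the constructions above:

```python
def dickson(n, x, a, p, kind=1):
    """Evaluate the n-th Dickson polynomial of the given kind at x over GF(p)."""
    d0, d1 = (2 % p, x % p) if kind == 1 else (1, x % p)
    if n == 0:
        return d0
    for _ in range(n - 1):
        d0, d1 = d1, (x * d1 - a * d0) % p   # D_n = x D_{n-1} - a D_{n-2}
    return d1

# Sanity check: D_n(x, 0) = x^n.
assert dickson(5, 3, 0, 101) == pow(3, 5, 101)
```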
|
1206.4389
|
Improving Two-Way Selective Decode-and-forward Wireless Relaying with
Energy-Efficient One-bit Soft Forwarding
|
cs.IT math.IT math.PR
|
Motivated by applications such as battery-operated wireless sensor networks
(WSN), we propose an easy-to-implement energy-efficient two-way relaying
scheme. In particular, we address the challenge of improving the standard
two-way selective decode-and-forward protocol (TW-SDF) in terms of
block-error-rate (BLER) with minor additional complexity and energy
consumption. By following the principle of soft relaying, our solution is the
two-way one-bit soft forwarding (TW-1bSF) protocol in which the relay forwards
the one-bit quantization of a posterior information metric about the
transmitted bits, associated with an appropriately designed reliability
parameter.
In WSN-related standards (such as IEEE802.15.6 and Bluetooth), block codes
are adopted instead of convolutional and other sophisticated codes, due to
their efficient decoder hardware implementation. As the second main
contribution, we derive tight upper bounds on the BLER performance for both
TW-SDF and TW-1bSF, when the two-way relaying network employs block codes and
hard decoding. The error probability analysis confirms the superiority of
TW-1bSF. Moreover, we derive the asymptotic performance gain of TW-1bSF over
TW-SDF, which further suggests that the proposed protocol is a good choice,
especially when long block codes are used.
|
1206.4391
|
Gray Image extraction using Fuzzy Logic
|
cs.CV cs.AI
|
Fuzzy systems provide a fundamental methodology to represent and process
uncertainty and imprecision in linguistic information. Fuzzy systems that use
fuzzy rules to represent the domain knowledge of the problem are known as
Fuzzy Rule Base Systems (FRBS). Image segmentation and subsequent extraction
from a noise-affected background, with the help of various soft computing
methods, are relatively new and quite popular for various reasons. These
methods include various Artificial Neural Network (ANN) models (primarily
supervised in nature), Genetic Algorithm (GA) based techniques, intensity
histogram based methods, etc. Providing an extraction solution that works in
unsupervised mode is an even more interesting problem, and the literature
suggests that effort in this respect is quite rudimentary. In the present
article, we propose a novel fuzzy-rule-guided technique that functions without
any external intervention during execution. Experimental results suggest that
this approach is efficient in comparison to other techniques extensively
addressed in the literature. To justify the superiority of our proposed
technique over its competitors, we use established metrics such as Mean
Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal to Noise Ratio
(PSNR).
|
1206.4436
|
Tiling $R^{5}$ by Crosses
|
cs.IT math.CO math.IT
|
An $n$-dimensional cross comprises $2n+1$ unit cubes: the center cube and
reflections in all its faces. It is well known that there is a tiling of
$R^{n}$ by crosses for all $n$. AlBdaiwi and the first author proved that if
$2n+1$ is not a prime then there are $2^{\aleph_{0}}$ non-congruent regular
(= face-to-face) tilings of $R^{n}$ by crosses, while there is a unique tiling
of $R^{n}$ by crosses for $n=2,3$. They conjectured that this is always the
case if $2n+1$ is a prime. To support the conjecture we prove in this paper
that also for $R^{5}$ there is a unique regular, and no non-regular, tiling by
crosses. So there is a unique tiling of $R^{3}$ by crosses, there are
$2^{\aleph_{0}}$ tilings of $R^{4}$, but for $R^{5}$ there is again only one
tiling by crosses. This result goes against the intuition that suggests "the
higher the dimension of the space, the more freedom we get".
|
1206.4438
|
Inverse Modeling of Climate Responses of Monumental Buildings
|
cs.CE
|
The indoor climate conditions of monumental buildings are very important for
the conservation of these objects. Simplified models with physical meaning are
desired that are capable of simulating temperature and relative humidity. In
this paper we research state-space models as methodology for the inverse
modeling of climate responses of unheated monumental buildings. It is concluded
that this approach is very promising for obtaining physical models and
parameters of indoor climate responses. Furthermore, state-space models can be
simulated very efficiently: simulating a 100-year period at hourly resolution
takes less than a second on an ordinary computer.
|
1206.4481
|
Parsimonious Mahalanobis Kernel for the Classification of High
Dimensional Data
|
cs.NA cs.LG
|
The classification of high dimensional data with kernel methods is considered
in this article. Exploiting the emptiness property of high dimensional
spaces, a kernel based on the Mahalanobis distance is proposed. The computation
of the Mahalanobis distance requires the inversion of a covariance matrix. In
high dimensional spaces, the estimated covariance matrix is ill-conditioned and
its inversion is unstable or impossible. Using a parsimonious statistical
model, namely the High Dimensional Discriminant Analysis model, the specific
signal and noise subspaces are estimated for each considered class making the
inverse of the class specific covariance matrix explicit and stable, leading to
the definition of a parsimonious Mahalanobis kernel. A SVM based framework is
used for selecting the hyperparameters of the parsimonious Mahalanobis kernel
by optimizing the so-called radius-margin bound. Experimental results on three
high dimensional data sets show that the proposed kernel is suitable for
classifying high dimensional data, providing better classification accuracies
than the conventional Gaussian kernel.
|
1206.4498
|
On a Class of Discrete Memoryless Broadcast Interference Channels
|
cs.IT math.IT
|
We study a class of discrete memoryless broadcast interference channels
(DM-BICs), where one of the broadcast receivers is subject to the interference
from a point-to-point transmission. A general achievable rate region
$\mathcal{R}$ based on rate splitting, superposition coding and binning at the
broadcast transmitter and rate splitting at the interfering transmitter is
derived. Under two partial order broadcast conditions {\em
interference-oblivious less noisy} and {\em interference-cognizant less noisy},
a reduced form of $\mathcal{R}$ is shown to be equivalent to the region based
on a simpler scheme that uses only superposition coding at the broadcast
transmitter. Furthermore, the capacity regions of DM-BIC under the two partial
order broadcast conditions are characterized respectively for the strong and
very strong interference conditions.
|
1206.4504
|
Revisiting Timed Specification Theories: A Linear-Time Perspective
|
cs.SE cs.LO cs.SY
|
We consider the setting of component-based design for real-time systems with
critical timing constraints. Based on our earlier work, we propose a
compositional specification theory for timed automata with I/O distinction,
which supports substitutive refinement. Our theory provides the operations of
parallel composition for composing components at run-time, logical
conjunction/disjunction for independent development, and quotient for
incremental synthesis. The key novelty of our timed theory lies in a weakest
congruence preserving safety as well as bounded liveness properties. We show
that the congruence can be characterised by two linear-time semantics,
timed-traces and timed-strategies, the latter of which is derived from a
game-based interpretation of timed interaction.
|
1206.4509
|
Decentralized Estimation of Laplacian Eigenvalues in Multi-Agent Systems
|
cs.SY
|
In this paper we present a decentralized algorithm to estimate the
eigenvalues of the Laplacian matrix that encodes the network topology of a
multi-agent system. We consider network topologies modeled by undirected
graphs. The basic idea is to provide a local interaction rule among agents so
that their state trajectory is a linear combination of sinusoids oscillating
only at frequencies that are functions of the eigenvalues of the Laplacian
matrix. In
this way, the problem of decentralized estimation of the eigenvalues is mapped
into a standard signal processing problem in which the unknowns are the finite
number of frequencies at which the signal oscillates.
|
1206.4522
|
BADREX: In situ expansion and coreference of biomedical abbreviations
using dynamic regular expressions
|
cs.CL
|
BADREX uses dynamically generated regular expressions to annotate term
definition-term abbreviation pairs, and corefers unpaired acronyms and
abbreviations back to their initial definition in the text. Against the
Medstract corpus BADREX achieves precision and recall of 98% and 97%, and
against a much larger corpus, 90% and 85%, respectively. BADREX yields improved
performance over previous approaches, requires no training data and allows
runtime customisation of its input parameters. BADREX is freely available from
https://github.com/philgooch/BADREX-Biomedical-Abbreviation-Expander as a
plugin for the General Architecture for Text Engineering (GATE) framework and
is licensed under the GPLv3.
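An illustrative sketch of the general idea, assuming a crude "long form
(SHORTFORM)" pattern and an initial-letter subsequence check; BADREX's actual
dynamically generated expressions and GATE integration are more sophisticated
than this:

```python
import re

text = ("BADREX is a plugin for the General Architecture for Text "
        "Engineering (GATE) framework. Annotations in GATE are shared.")

def subseq(s, t):
    it = iter(t)
    return all(c in it for c in s)       # s is a subsequence of t

pairs = {}
for m in re.finditer(r'((?:[\w-]+\s+){1,8})\((\w{2,10})\)', text):
    longform, short = m.group(1).strip(), m.group(2)
    initials = ''.join(w[0].upper() for w in longform.split())
    if subseq(short.upper(), initials):  # crude validation of the pair
        pairs[short] = longform

for short, longform in pairs.items():
    # dynamically generated expression for coreference of the bare acronym
    for m in re.finditer(r'\b%s\b' % re.escape(short), text):
        print(m.start(), short, '->', longform)
```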
|
1206.4555
|
Optimal compression of hash-origin prefix trees
|
cs.IT cs.DB cs.DS math.CO math.IT
|
Operating on hash values of the elements of a database is a common problem.
In this paper we analyze the information content of this general task and show
how to approach the resulting lower bounds in practice. The minimal prefix
tree that distinguishes the elements turns out to require asymptotically only
about 2.77544 bits per element, while standard approaches use a few times
more. If we can be certain that queries come from inside the database, the
cost of distinguishability can be reduced further, to about 2.33275 bits per
element. Increasing the minimal depth of nodes to reduce the probability of
false positives leads to a simple relation with the average depth of such a
random tree, which is asymptotically about 1.33275 bits larger than the lg(n)
of the perfect binary tree. This asymptotic case can also be seen as a way to
optimally encode n large unordered numbers, saving the lg(n!) bits of
information about their ordering, which can be the major part of the
information they contain. This ability alone allows the memory requirement to
be reduced to about 0.693 of that required by a Bloom filter for the same
false positive probability.
|
1206.4557
|
A model of competition among more than two languages
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We extend the Abrams-Strogatz model for competition between two languages
[Nature 424, 900 (2003)] to the case of n (>= 2) competing states (i.e.,
languages). Although the Abrams-Strogatz model for n=2 can be interpreted as
modeling either majority preference or minority aversion, the two mechanisms
are distinct when n>=3. We find that the condition for the coexistence of
different states is independent of n under the pure majority preference,
whereas it depends on n under the pure minority aversion. We also show that the
stable coexistence equilibrium and stable monopoly equilibria can be
multistable under the minority aversion and not under the majority preference.
Furthermore, we obtain the phase diagram of the model when the effects of the
majority preference and minority aversion are mixed, under the condition that
different states have the same attractiveness. We show that the multistability
is a generic property of the model facilitated by large n.
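For reference, the n = 2 Abrams-Strogatz baseline that the paper extends,
with transition rates P(Y->X) = c x^a s and P(X->Y) = c (1-x)^a (1-s); this is
a sketch of the cited model, not the paper's n-state generalization:

```python
import numpy as np

def abrams_strogatz(x0=0.4, s=0.55, a=1.31, c=1.0, dt=0.01, steps=5000):
    """x: fraction speaking language X; s: its relative status."""
    x, traj = x0, []
    for _ in range(steps):
        dx = (1 - x) * c * x**a * s - x * c * (1 - x)**a * (1 - s)
        x += dt * dx
        traj.append(x)
    return np.array(traj)   # converges to a monopoly for generic parameters
```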
|
1206.4560
|
Residual Component Analysis: Generalising PCA for more flexible
inference in linear-Gaussian models
|
cs.LG stat.ML
|
Probabilistic principal component analysis (PPCA) seeks a low dimensional
representation of a data set in the presence of independent spherical Gaussian
noise. The maximum likelihood solution for the model is an eigenvalue problem
on the sample covariance matrix. In this paper we consider the situation where
the data variance is already partially explained by other factors, for example
sparse conditional dependencies between the covariates, or temporal
correlations leaving some residual variance. We decompose the residual variance
into its components through a generalised eigenvalue problem, which we call
residual component analysis (RCA). We explore a range of new algorithms that
arise from the framework, including one that factorises the covariance of a
Gaussian density into a low-rank and a sparse-inverse component. We illustrate
the ideas on the recovery of a protein-signaling network, a gene expression
time-series data set and the recovery of the human skeleton from motion capture
3-D cloud data.
|
1206.4572
|
Autocorrelations of Binary Sequences and Run Structure
|
cs.IT math.CO math.IT
|
We analyze the connection between the autocorrelation of a binary sequence
and its run structure given by the run length encoding. We show that both the
periodic and the aperiodic autocorrelation of a binary sequence can be
formulated in terms of the run structure. The run structure is given by the
consecutive runs of the sequence. Let C=(C(0), C(1),...,C(n)) denote the
autocorrelation vector of a binary sequence. We prove that the kth component of
the second order difference operator of C can be directly calculated by using
the consecutive runs of total length k. In particular this shows that the kth
autocorrelation is already determined by all consecutive runs of total length
L<k. In the aperiodic case we show how the run vector R can be efficiently
calculated and give a characterization of skew-symmetric sequences in terms of
their run length encoding.
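For concreteness, a minimal sketch of the two objects the abstract connects,
for a +-1 sequence (the helper names are ours):

import numpy as np

def aperiodic_autocorrelation(s):
    # C(k) = sum_i s_i * s_{i+k} for a +-1 sequence s
    n = len(s)
    return [int(np.dot(s[: n - k], s[k:])) for k in range(n)]

def run_length_encoding(s):
    # Lengths of the maximal runs of equal symbols
    runs, count = [], 1
    for a, b in zip(s, s[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

s = np.array([1, 1, -1, 1, -1, -1, -1, 1])
print(aperiodic_autocorrelation(s))  # autocorrelation vector C
print(run_length_encoding(s))        # run structure: [2, 1, 1, 3, 1]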
|
1206.4588
|
An Evolutionary Approach to Drug-Design Using Quantum Binary Particle
Swarm Optimization Algorithm
|
cs.NE cs.CE
|
The present work provides a new approach to evolving ligand structures that
represent possible drugs to be docked to the active site of the target protein.
The structure is represented as a tree where each non-empty node represents a
functional group. It is assumed that the active site configuration of the
target protein is known, along with the positions of the essential residues. In
this paper the interaction energy of the ligands with the protein target is
minimized. Moreover, the appropriate size of the tree is difficult to determine
in advance and differs across active sites. To overcome this difficulty, a
variable tree size configuration is used for designing ligands. The
optimization is done using a quantum discrete PSO. The results using
fixed-length and variable-length configurations are compared.
|
1206.4599
|
A Unified Robust Classification Model
|
cs.LG stat.ML
|
A wide variety of machine learning algorithms for binary classification exist,
such as the support vector machine (SVM), the minimax probability machine
(MPM), and Fisher discriminant analysis (FDA). The purpose of this paper is to
provide a unified classification model that includes the above models through
a robust optimization approach. This unified model has several benefits. One
is that the extensions and improvements intended for SVM become applicable to
MPM and FDA, and vice versa. Another is to provide theoretical results for the
above learning methods at once by dealing with the unified model. We give a
statistical interpretation of the unified classification model and propose a
non-convex optimization algorithm that can be applied to non-convex variants of
existing learning methods.
|
1206.4600
|
Bayesian Nonexhaustive Learning for Online Discovery and Modeling of
Emerging Classes
|
cs.LG stat.ML
|
We present a framework for online inference in the presence of a
nonexhaustively defined set of classes that incorporates supervised
classification with class discovery and modeling. A Dirichlet process prior
(DPP) model defined over class distributions ensures that both known and
unknown class distributions originate according to a common base distribution.
In an attempt to automatically discover potentially interesting class
formations, the prior model is coupled with a suitably chosen data model, and
sequential Monte Carlo sampling is used to perform online inference. Our
research is driven by a biodetection application, where a new class of pathogen
may suddenly appear, and the rapid increase in the number of samples
originating from this class indicates the onset of an outbreak.
|
1206.4601
|
Convex Multitask Learning with Flexible Task Clusters
|
cs.LG stat.ML
|
Traditionally, multitask learning (MTL) assumes that all the tasks are
related. This can lead to negative transfer when tasks are indeed incoherent.
Recently, a number of approaches have been proposed that alleviate this problem
by discovering the underlying task clusters or relationships. However, they are
limited to modeling these relationships at the task level, which may be
restrictive in some applications. In this paper, we propose a novel MTL
formulation that captures task relationships at the feature level. Depending on
the interactions among tasks and features, the proposed method constructs
different task clusters for different features, without even needing to
pre-specify the number of clusters. Computationally, the proposed
formulation is strongly convex, and can be efficiently solved by accelerated
proximal methods. Experiments are performed on a number of synthetic and
real-world data sets. Under various degrees of task relationships, the accuracy
of the proposed method is consistently among the best. Moreover, the
feature-specific task clusters obtained agree with the known/plausible task
structures of the data.
|
1206.4602
|
Quasi-Newton Methods: A New Direction
|
cs.NA cs.LG stat.ML
|
Four decades after their invention, quasi-Newton methods are still state of
the art in unconstrained numerical optimization. Although not usually
interpreted thus, these are learning algorithms that fit a local quadratic
approximation to the objective function. We show that many, including the most
popular, quasi-Newton methods can be interpreted as approximations of Bayesian
linear regression under varying prior assumptions. This new notion elucidates
some shortcomings of classical algorithms, and lights the way to a novel
nonparametric quasi-Newton method, which is able to make more efficient use of
available information at computational cost similar to its predecessors.
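The regression view can be illustrated with the standard BFGS update of the
inverse Hessian approximation, which fits the local quadratic model to an
observed (step, gradient-change) pair via the secant condition (a sketch of
the classical update only; the paper's nonparametric method is not reproduced
here):

import numpy as np

def bfgs_inverse_update(H, s, y):
    # Standard BFGS update of the inverse Hessian approximation H; it
    # enforces the secant condition H_new @ y = s, i.e. the quadratic
    # model is fit to the observed data pair, as in a regression.
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

H = np.eye(2)
s = np.array([0.5, -0.2])   # parameter step taken
y = np.array([1.0, 0.3])    # observed change in gradient
H = bfgs_inverse_update(H, s, y)
print(np.allclose(H @ y, s))  # True: the secant condition holds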
|
1206.4603
|
Latent Collaborative Retrieval
|
cs.IR cs.AI
|
Retrieval tasks typically require a ranking of items given a query.
Collaborative filtering tasks, on the other hand, learn to model users'
preferences over items. In this paper we study the joint problem of
recommending items to a user with respect to a given query, which is a
surprisingly common task. This setup differs from the standard collaborative
filtering one in that we are given a query x user x item tensor for training
instead of the more traditional user x item matrix. Compared to document
retrieval we do have a query, but we may or may not have content features (we
will consider both cases) and we can also take account of the user's profile.
We introduce a factorized model for this new task that optimizes the top-ranked
items returned for the given query and user. We report empirical results where
it outperforms several baselines.
|
1206.4604
|
Learning the Experts for Online Sequence Prediction
|
cs.LG cs.AI
|
Online sequence prediction is the problem of predicting the next element of a
sequence given previous elements. This problem has been extensively studied in
the context of individual sequence prediction, where no prior assumptions are
made on the origin of the sequence. Individual sequence prediction algorithms
work quite well for long sequences, where the algorithm has enough time to
learn the temporal structure of the sequence. However, they might give poor
predictions for short sequences. A possible remedy is to rely on the general
model of prediction with expert advice, where the learner has access to a set
of $r$ experts, each of which makes its own predictions on the sequence. It is
well known that it is possible to predict almost as well as the best expert if
the sequence length is on the order of $\log(r)$. But without firm prior knowledge of
the problem, it is not clear how to choose a small set of {\em good} experts.
In this paper we describe and analyze a new algorithm that learns a good set of
experts using a training set of previously observed sequences. We demonstrate
the merits of our approach by applying it on the task of click prediction on
the web.
|
1206.4606
|
TrueLabel + Confusions: A Spectrum of Probabilistic Models in Analyzing
Multiple Ratings
|
cs.LG cs.AI stat.ML
|
This paper revisits the problem of analyzing multiple ratings given by
different judges. Different from previous work that focuses on distilling the
true labels from noisy crowdsourcing ratings, we emphasize gaining diagnostic
insights into our in-house, well-trained judges. We generalize the well-known
Dawid-Skene model (Dawid & Skene, 1979) to a spectrum of probabilistic models
under the same "TrueLabel + Confusion" paradigm, and show that our proposed
hierarchical Bayesian model, called HybridConfusion, consistently outperforms
Dawid-Skene on both synthetic and real-world data sets.
|
1206.4607
|
Distributed Tree Kernels
|
cs.LG stat.ML
|
In this paper, we propose the distributed tree kernels (DTK) as a novel
method to reduce time and space complexity of tree kernels. Using a linear
complexity algorithm to compute vectors for trees, we embed feature spaces of
tree fragments in low-dimensional spaces where the kernel computation is
directly done with a dot product. We show that DTKs are faster, correlate with
tree kernels, and obtain a statistically similar performance in two natural
language processing tasks.
|
1206.4608
|
A Hybrid Algorithm for Convex Semidefinite Optimization
|
cs.LG cs.DS cs.NA stat.ML
|
We present a hybrid algorithm for optimizing a convex, smooth function over
the cone of positive semidefinite matrices. Our algorithm converges to the
global optimal solution and can be used to solve general large-scale
semidefinite programs and hence can be readily applied to a variety of machine
learning problems. We show experimental results on three machine learning
problems (matrix completion, metric learning, and sparse PCA). Our approach
outperforms state-of-the-art algorithms.
|
1206.4609
|
On multi-view feature learning
|
cs.CV cs.LG stat.ML
|
Sparse coding is a common approach to learning local features for object
recognition. Recently, there has been an increasing interest in learning
features from spatio-temporal, binocular, or other multi-observation data,
where the goal is to encode the relationship between images rather than the
content of a single image. We provide an analysis of multi-view feature
learning, which shows that hidden variables encode transformations by detecting
rotation angles in the eigenspaces shared among multiple image warps. Our
analysis helps explain recent experimental results showing that
transformation-specific features emerge when training complex cell models on
videos. Our analysis also shows that transformation-invariant features can
emerge as a by-product of learning representations of transformations.
|
1206.4610
|
Manifold Relevance Determination
|
cs.LG cs.CV stat.ML
|
In this paper we present a fully Bayesian latent variable model which
exploits conditional nonlinear (in)dependence structures to learn an efficient
latent representation. The latent space is factorized to represent shared and
private information from multiple views of the data. In contrast to previous
approaches, we introduce a relaxation to the discrete segmentation and allow
for a "softly" shared latent space. Further, Bayesian techniques allow us to
automatically estimate the dimensionality of the latent spaces. The model is
capable of capturing structure underlying extremely high dimensional spaces.
This is illustrated by modelling unprocessed images with tens of thousands of
pixels. This also allows us to directly generate novel images from the trained
model by sampling from the discovered latent spaces. We also demonstrate the
model by prediction of human pose in an ambiguous setting. Our Bayesian
framework allows us to perform disambiguation in a principled manner by
including latent space priors which incorporate the dynamic nature of the data.
|
1206.4611
|
A Convex Feature Learning Formulation for Latent Task Structure
Discovery
|
cs.LG stat.ML
|
This paper considers the multi-task learning problem in the setting where some
relevant features may be shared across a few related tasks. Most of the
existing methods assume the extent to which the given tasks are related or
share a common feature space to be known a priori. In real-world applications
however, it is desirable to automatically discover the groups of related tasks
that share a feature space. In this paper we aim at searching the exponentially
large space of all possible groups of tasks that may share a feature space. The
main contribution is a convex formulation that employs a graph-based
regularizer and simultaneously discovers few groups of related tasks, having
close-by task parameters, as well as the feature space shared within each
group. The regularizer encodes an important structure among the groups of tasks
leading to an efficient algorithm for solving it: if there is no feature space
under which a group of tasks has close-by task parameters, then there does not
exist such a feature space for any of its supersets. An efficient active set
algorithm that exploits this simplification and performs a clever search in the
exponentially large space is presented. The algorithm is guaranteed to solve
the proposed formulation (within some precision) in a time polynomial in the
number of groups of related tasks discovered. Empirical results on benchmark
datasets show that the proposed formulation achieves good generalization and
outperforms state-of-the-art multi-task learning algorithms in some cases.
|
1206.4612
|
Exact Soft Confidence-Weighted Learning
|
cs.LG
|
In this paper, we propose a new Soft Confidence-Weighted (SCW) online
learning scheme, which enables the conventional confidence-weighted learning
method to handle non-separable cases. Unlike the previous confidence-weighted
learning algorithms, the proposed soft confidence-weighted learning method
enjoys all four salient properties: (i) large margin training, (ii)
confidence weighting, (iii) capability to handle non-separable data, and (iv)
adaptive margin. Our experimental results show that the proposed SCW algorithms
significantly outperform the original CW algorithm. When comparing with a
variety of state-of-the-art algorithms (including AROW, NAROW and NHERD), we
found that SCW generally achieves better or at least comparable predictive
accuracy, but enjoys a significant advantage in computational efficiency (i.e.,
a smaller number of updates and lower time cost).
|
1206.4613
|
Near-Optimal BRL using Optimistic Local Transitions
|
cs.AI cs.LG stat.ML
|
Model-based Bayesian Reinforcement Learning (BRL) allows a sound
formalization of the problem of acting optimally while facing an unknown
environment, i.e., avoiding the exploration-exploitation dilemma. However,
algorithms explicitly addressing BRL suffer from such a combinatorial explosion
that a large body of work relies on heuristic algorithms. This paper introduces
BOLT, a simple and (almost) deterministic heuristic algorithm for BRL which is
optimistic about the transition function. We analyze BOLT's sample complexity,
and show that under certain parameters, the algorithm is near-optimal in the
Bayesian sense with high probability. Then, experimental results highlight the
key differences of this method compared to previous work.
|
1206.4614
|
Information-theoretic Semi-supervised Metric Learning via Entropy
Regularization
|
cs.LG stat.ML
|
We propose a general information-theoretic approach called Seraph
(SEmi-supervised metRic leArning Paradigm with Hyper-sparsity) for metric
learning that does not rely upon the manifold assumption. Given the probability
parameterized by a Mahalanobis distance, we maximize the entropy of that
probability on labeled data and minimize it on unlabeled data following entropy
regularization, which allows the supervised and unsupervised parts to be
integrated in a natural and meaningful way. Furthermore, Seraph is regularized
by encouraging a low-rank projection induced from the metric. The optimization
of Seraph is solved efficiently and stably by an EM-like scheme with the
analytical E-Step and convex M-Step. Experiments demonstrate that Seraph
compares favorably with many well-known global and local metric learning
methods.
|
1206.4615
|
Levy Measure Decompositions for the Beta and Gamma Processes
|
stat.ME cs.LG math.ST stat.TH
|
We develop new representations for the Levy measures of the beta and gamma
processes. These representations are manifested in terms of an infinite sum of
well-behaved (proper) beta and gamma distributions. Further, we demonstrate how
these infinite sums may be truncated in practice, and explicitly characterize
truncation errors. We also perform an analysis of the characteristics of
posterior distributions, based on the proposed decompositions. The
decompositions provide new insights into the beta and gamma processes (and
their generalizations), and we demonstrate how the proposed representation
unifies some properties of the two. This paper is meant to provide a rigorous
foundation for and new perspectives on Levy processes, as these are of
increasing importance in machine learning.
|
1206.4616
|
A Hierarchical Dirichlet Process Model with Multiple Levels of
Clustering for Human EEG Seizure Modeling
|
stat.AP cs.LG stat.ML
|
Driven by the multi-level structure of human intracranial
electroencephalogram (iEEG) recordings of epileptic seizures, we introduce a
new variant of a hierarchical Dirichlet Process---the multi-level clustering
hierarchical Dirichlet Process (MLC-HDP)---that simultaneously clusters
datasets on multiple levels. Our seizure dataset contains brain activity
recorded in typically more than a hundred individual channels for each seizure
of each patient. The MLC-HDP model clusters over channels-types, seizure-types,
and patient-types simultaneously. We describe this model and its implementation
in detail. We also present the results of a simulation study comparing the
MLC-HDP to a similar model, the Nested Dirichlet Process, and finally
demonstrate the MLC-HDP's use in modeling seizures across multiple patients. We
find the MLC-HDP's clustering to be comparable to independent human physician
clusterings. To our knowledge, the MLC-HDP model is the first in the epilepsy
literature capable of clustering seizures within and between patients.
|
1206.4617
|
Continuous Inverse Optimal Control with Locally Optimal Examples
|
cs.LG cs.AI stat.ML
|
Inverse optimal control, also known as inverse reinforcement learning, is the
problem of recovering an unknown reward function in a Markov decision process
from expert demonstrations of the optimal policy. We introduce a probabilistic
inverse optimal control algorithm that scales gracefully with task
dimensionality, and is suitable for large, continuous domains where even
computing a full policy is impractical. By using a local approximation of the
reward function, our method can also drop the assumption that the
demonstrations are globally optimal, requiring only local optimality. This
allows it to learn from examples that are unsuitable for prior methods.
|
1206.4618
|
Compact Hyperplane Hashing with Bilinear Functions
|
cs.LG stat.ML
|
Hyperplane hashing aims at rapidly searching nearest points to a hyperplane,
and has shown practical impact in scaling up active learning with SVMs.
Unfortunately, the existing randomized methods need long hash codes to achieve
reasonable search accuracy and thus suffer from reduced search speed and large
memory overhead. To this end, this paper proposes a novel hyperplane hashing
technique which yields compact hash codes. The key idea is the bilinear form of
the proposed hash functions, which leads to higher collision probability than
the existing hyperplane hash functions when using random projections. To
further increase the performance, we propose a learning based framework in
which the bilinear functions are directly learned from the data. This results
in short yet discriminative codes, and also boosts the search performance over
the random projection based solutions. Large-scale active learning experiments
carried out on two datasets with up to one million samples demonstrate the
overall superiority of the proposed approach.
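Under our reading of the abstract, each bit of the bilinear hash family in its
random-projection variant has the form h(z) = sgn((u^T z)(z^T v)); the learned
variant instead fits the projection matrices to data. A hedged sketch:

import numpy as np

def bilinear_hash(Z, U, V):
    # One +-1 bit per (u, v) column pair: sign((u^T z)(z^T v)).
    # (Ties at exactly 0 are left as 0 in this sketch.)
    return np.sign((Z @ U) * (Z @ V))

rng = np.random.default_rng(0)
d, bits = 16, 8
U = rng.standard_normal((d, bits))   # random projections (illustrative)
V = rng.standard_normal((d, bits))
Z = rng.standard_normal((100, d))    # database points
codes = bilinear_hash(Z, U, V)       # 100 x 8 matrix of +-1 bits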
|
1206.4619
|
Inductive Kernel Low-rank Decomposition with Priors: A Generalized
Nystrom Method
|
cs.LG
|
Low-rank matrix decomposition has gained great popularity recently in scaling
up kernel methods to large amounts of data. However, some limitations could
prevent them from working effectively in certain domains. For example, many
existing approaches are intrinsically unsupervised, which does not incorporate
side information (e.g., class labels) to produce task specific decompositions;
also, they typically work "transductively", i.e., the factorization does not
generalize to new samples, so the complete factorization needs to be recomputed
when new samples become available. To solve these problems, in this paper we
propose an "inductive"-flavored method for low-rank kernel decomposition with
priors. We achieve this by generalizing the Nystr\"om method in a novel way. On
the one hand, our approach employs a highly flexible, nonparametric structure
that allows us to generalize the low-rank factors to arbitrarily new samples;
on the other hand, it has linear time and space complexities, which can be
orders of magnitude faster than existing approaches and renders great
efficiency in learning a low-rank kernel decomposition. Empirical results
demonstrate the efficacy and efficiency of the proposed method.
|
1206.4620
|
Improved Information Gain Estimates for Decision Tree Induction
|
cs.LG stat.ML
|
Ensembles of classification and regression trees remain popular machine
learning methods because they define flexible non-parametric models that
predict well and are computationally efficient both during training and
testing. During induction of decision trees one aims to find predicates that
are maximally informative about the prediction target. To select good
predicates most approaches estimate an information-theoretic scoring function,
the information gain, both for classification and regression problems. We point
out that the common estimation procedures are biased and show that by replacing
them with improved estimators of the discrete and the differential entropy we
can obtain better decision trees. In effect our modifications yield improved
predictive performance and are simple to implement in any decision tree code.
|
1206.4621
|
Path Integral Policy Improvement with Covariance Matrix Adaptation
|
cs.LG
|
There has been a recent focus in reinforcement learning on addressing
continuous state and action problems by optimizing parameterized policies. PI2
is a recent example of this approach. It combines a derivation from first
principles of stochastic optimal control with tools from statistical estimation
theory. In this paper, we consider PI2 as a member of the wider family of
methods which share the concept of probability-weighted averaging to
iteratively update parameters to optimize a cost function. We compare PI2 to
other members of the same family - Cross-Entropy Methods and CMA-ES - at the
conceptual level and in terms of performance. The comparison suggests the
derivation of a novel algorithm which we call PI2-CMA for "Path Integral Policy
Improvement with Covariance Matrix Adaptation". PI2-CMA's main advantage is
that it determines the magnitude of the exploration noise automatically.
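The shared concept of probability-weighted averaging can be sketched as
follows (a schematic iteration, not the exact PI2-CMA algorithm; the fixed
exploration noise sigma here is precisely what PI2-CMA adapts automatically):

import numpy as np

def probability_weighted_update(theta, cost_fn, sigma=0.1, K=32, lam=1.0,
                                rng=None):
    # Sample perturbed parameter vectors, weight them by exp(-cost/lambda),
    # and update theta as their weighted average.
    if rng is None:
        rng = np.random.default_rng()
    samples = theta + sigma * rng.standard_normal((K, theta.size))
    costs = np.array([cost_fn(s) for s in samples])
    w = np.exp(-(costs - costs.min()) / lam)   # shift for numerical stability
    w /= w.sum()
    return w @ samples

theta = np.zeros(3)
for _ in range(50):
    theta = probability_weighted_update(theta,
                                        lambda t: np.sum((t - 1.0) ** 2))
print(theta)  # drifts toward the optimum at [1, 1, 1]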
|
1206.4622
|
A Graphical Model Formulation of Collaborative Filtering Neighbourhood
Methods with Fast Maximum Entropy Training
|
cs.LG cs.IR stat.ML
|
Item neighbourhood methods for collaborative filtering learn a weighted graph
over the set of items, where each item is connected to those it is most similar
to. The prediction of a user's rating on an item is then given by that user's
ratings of neighbouring items, weighted by their similarity. This paper presents a new
neighbourhood approach which we call item fields, whereby an undirected
graphical model is formed over the item graph. The resulting prediction rule is
a simple generalization of the classical approaches, which takes into account
non-local information in the graph, allowing its best results to be obtained
when using drastically fewer edges than other neighbourhood approaches. A fast
approximate maximum entropy training method based on the Bethe approximation is
presented, which uses a simple gradient ascent procedure. When using
precomputed sufficient statistics on the Movielens datasets, our method is
faster than maximum likelihood approaches by two orders of magnitude.
|
1206.4623
|
On the Size of the Online Kernel Sparsification Dictionary
|
cs.LG stat.ML
|
We analyze the size of the dictionary constructed from online kernel
sparsification, using a novel formula that expresses the expected determinant
of the kernel Gram matrix in terms of the eigenvalues of the covariance
operator. Using this formula, we are able to connect the cardinality of the
dictionary with the eigen-decay of the covariance operator. In particular, we
show that under certain technical conditions, the size of the dictionary will
always grow sub-linearly in the number of data points, and, as a consequence,
the kernel linear regressor constructed from the resulting dictionary is
consistent.
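For reference, a minimal sketch of the online kernel sparsification
construction whose dictionary size is being analyzed, in the style of
approximate-linear-dependence (ALD) tests (the RBF kernel and tolerance nu are
our illustrative choices): a point joins the dictionary only if it cannot be
approximated in feature space by the current dictionary within tolerance nu.

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def build_dictionary(X, nu=1e-3, gamma=1.0):
    dictionary = [X[0]]
    for x in X[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary]
                      for a in dictionary])
        k = np.array([rbf(a, x, gamma) for a in dictionary])
        a_coef = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)
        delta = rbf(x, x, gamma) - k @ a_coef   # residual in feature space
        if delta > nu:
            dictionary.append(x)
    return np.array(dictionary)

X = np.random.default_rng(0).standard_normal((200, 2))
D = build_dictionary(X, nu=0.1)
print(len(D), "dictionary points out of", len(X))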
|
1206.4624
|
Robust Multiple Manifolds Structure Learning
|
cs.LG stat.ML
|
We present a robust multiple manifolds structure learning (RMMSL) scheme to
robustly estimate data structures under the multiple low intrinsic dimensional
manifolds assumption. In the local learning stage, RMMSL efficiently estimates
local tangent space by weighted low-rank matrix factorization. In the global
learning stage, we propose a robust manifold clustering method based on local
structure learning results. The proposed clustering method is designed to get
the flattest manifold clusters by introducing a novel curved-level similarity
function. Our approach is evaluated and compared to state-of-the-art methods on
synthetic data, handwritten digit images, human motion capture data and
motorbike videos. We demonstrate the effectiveness of the proposed approach,
which yields higher clustering accuracy, and produces promising results for
challenging tasks of human motion segmentation and motion flow learning from
videos.
|
1206.4625
|
Optimizing F-measure: A Tale of Two Approaches
|
cs.LG
|
F-measures are popular performance metrics, particularly for tasks with
imbalanced data sets. Algorithms for learning to maximize F-measures follow two
approaches: the empirical utility maximization (EUM) approach learns a
classifier having optimal performance on training data, while the
decision-theoretic approach learns a probabilistic model and then predicts
labels with maximum expected F-measure. In this paper, we investigate the
theoretical justifications and connections for these two approaches, and we
study the conditions under which one approach is preferable to the other using
synthetic and real datasets. Given accurate models, our results suggest that
the two approaches are asymptotically equivalent given large training and test
sets. Nevertheless, empirically, the EUM approach appears to be more robust
against model misspecification, and given a good model, the decision-theoretic
approach appears to be better for handling rare classes and a common domain
adaptation scenario.
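To make the EUM side concrete, a small sketch of tuning a decision threshold
to maximize empirical F1 on training data (the decision-theoretic side would
instead maximize expected F-measure under a fitted probabilistic model, which
is not shown; all names here are ours):

import numpy as np

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def eum_threshold(scores, y_true):
    # EUM-style: choose the threshold with the best empirical F1
    return max(np.unique(scores),
               key=lambda t: f1(y_true, (scores >= t).astype(int)))

rng = np.random.default_rng(0)
scores = rng.random(200)
y = (scores + 0.3 * rng.standard_normal(200) > 0.5).astype(int)
t = eum_threshold(scores, y)
print(t, f1(y, (scores >= t).astype(int)))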
|
1206.4626
|
On-Line Portfolio Selection with Moving Average Reversion
|
cs.CE cs.LG q-fin.PM
|
On-line portfolio selection has recently attracted increasing interest in the
machine learning and AI communities. Empirical evidence shows that stocks'
high and low prices are temporary and that stock price relatives are likely to
follow the mean reversion phenomenon. While the existing mean reversion
strategies are shown to achieve good empirical performance on many real
datasets, they often make the single-period mean reversion assumption, which is
not always satisfied in some real datasets, leading to poor performance when
the assumption does not hold. To overcome the limitation, this article proposes
a multiple-period mean reversion, or so-called Moving Average Reversion (MAR),
and a new on-line portfolio selection strategy named "On-Line Moving Average
Reversion" (OLMAR), which exploits MAR by applying powerful online learning
techniques. From our empirical results, we found that OLMAR can overcome the
drawback of existing mean reversion algorithms and achieve significantly better
results, especially on the datasets where the existing mean reversion
algorithms failed. In addition to superior trading performance, OLMAR also runs
extremely fast, further supporting its practical applicability to a wide range
of applications.
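A sketch of one OLMAR step as we read the abstract (window and eps are
illustrative; prices is a t x d array of historical prices and b the current
portfolio on the simplex): predict next price relatives from a moving average,
then take a passive-aggressive style step toward expected return >= eps.

import numpy as np

def simplex_projection(v):
    # Euclidean projection onto the probability simplex (Duchi et al., 2008)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0)

def olmar_update(b, prices, window=5, eps=10.0):
    ma = prices[-window:].mean(axis=0)
    x_pred = ma / prices[-1]                 # predicted price relatives
    x_bar = x_pred.mean()
    denom = np.linalg.norm(x_pred - x_bar) ** 2
    lam = max(0.0, (eps - b @ x_pred) / denom) if denom > 0 else 0.0
    return simplex_projection(b + lam * (x_pred - x_bar))

rng = np.random.default_rng(0)
prices = np.cumprod(1 + 0.01 * rng.standard_normal((30, 4)), axis=0)
b = olmar_update(np.full(4, 0.25), prices)
print(b, b.sum())  # rebalanced portfolio, sums to 1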
|
1206.4627
|
Convergence Rates of Biased Stochastic Optimization for Learning Sparse
Ising Models
|
cs.LG stat.ML
|
We study the convergence rate of stochastic optimization of exact (NP-hard)
objectives, for which only biased estimates of the gradient are available. We
motivate this problem in the context of learning the structure and parameters
of Ising models. We first provide a convergence-rate analysis of deterministic
errors for forward-backward splitting (FBS). We then extend our analysis to
biased stochastic errors, by first characterizing a family of samplers and
providing a high probability bound that allows understanding not only FBS, but
also proximal gradient (PG) methods. We derive some interesting conclusions:
FBS requires only a logarithmically increasing number of random samples in
order to converge (although at a very low rate); the required number of random
samples is the same for the deterministic and the biased stochastic setting for
FBS and basic PG; accelerated PG is not guaranteed to converge in the biased
stochastic setting.
|
1206.4628
|
Robust PCA in High-dimension: A Deterministic Approach
|
cs.LG stat.ML
|
We consider principal component analysis for contaminated data sets in the
high-dimensional regime, where the dimensionality of each observation is
comparable to, or even larger than, the number of observations. We propose a
deterministic high-dimensional robust PCA algorithm which inherits all
theoretical properties of its randomized counterpart, i.e., it is tractable,
robust to contaminated points, easily kernelizable, asymptotically consistent, and
achieves maximal robustness -- a breakdown point of 50%. More importantly, the
proposed method exhibits significantly better computational efficiency, which
makes it suitable for large-scale real applications.
|
1206.4629
|
Multiple Kernel Learning from Noisy Labels by Stochastic Programming
|
cs.LG
|
We study the problem of multiple kernel learning from noisy labels. This is
in contrast to most of the previous studies on multiple kernel learning that
mainly focus on developing efficient algorithms and assume perfectly labeled
training examples. Directly applying the existing multiple kernel learning
algorithms to noisily labeled examples often leads to suboptimal performance
due to the incorrect class assignments. We address this challenge by casting
multiple kernel learning from noisy labels into a stochastic programming
problem, and presenting a minimax formulation. We develop an efficient
algorithm for solving the related convex-concave optimization problem with a
fast convergence rate of $O(1/T)$ where $T$ is the number of iterations.
Empirical studies on UCI data sets verify both the effectiveness of the
proposed framework and the efficiency of the proposed optimization algorithm.
|
1206.4630
|
Efficient Decomposed Learning for Structured Prediction
|
cs.LG
|
Structured prediction is the cornerstone of several machine learning
applications. Unfortunately, in structured prediction settings with expressive
inter-variable interactions, exact inference-based learning algorithms, e.g.
Structural SVM, are often intractable. We present a new way, Decomposed
Learning (DecL), which performs efficient learning by restricting the inference
step to a limited part of the structured spaces. We provide characterizations
based on the structure, target parameters, and gold labels, under which DecL is
equivalent to exact learning. We then show that in real-world settings, where
our theoretical assumptions may not completely hold, DecL-based algorithms are
significantly more efficient and as accurate as exact learning.
|
1206.4631
|
A Poisson convolution model for characterizing topical content with word
frequency and exclusivity
|
cs.LG cs.CL cs.IR stat.ME stat.ML
|
An ongoing challenge in the analysis of document collections is how to
summarize content in terms of a set of inferred themes that can be interpreted
substantively in terms of topics. The current practice of parametrizing the
themes in terms of most frequent words limits interpretability by ignoring the
differential use of words across topics. We argue that words that are both
common and exclusive to a theme are more effective at characterizing topical
content. We consider a setting where professional editors have annotated
documents to a collection of topic categories, organized into a tree, in which
leaf-nodes correspond to the most specific topics. Each document is annotated
to multiple categories, at different levels of the tree. We introduce a
hierarchical Poisson convolution model to analyze annotated documents in this
setting. The model leverages the structure among categories defined by
professional editors to infer a clear semantic description for each topic in
terms of words that are both frequent and exclusive. We carry out a large
randomized experiment on Amazon Mechanical Turk to demonstrate that topic
summaries based on the FREX score are more interpretable than currently
established frequency-based summaries, and that the proposed model produces
more efficient estimates of exclusivity than current models. We also develop a parallelized
Hamiltonian Monte Carlo sampler that allows the inference to scale to millions
of documents.
|
1206.4632
|
A Complete Analysis of the l_1,p Group-Lasso
|
cs.LG math.OC stat.ML
|
The Group-Lasso is a well-known tool for joint regularization in machine
learning methods. While the l_{1,2} and the l_{1,\infty} versions have been
studied in detail and efficient algorithms exist, there are still open
questions regarding other l_{1,p} variants. We characterize conditions for
solutions of the l_{1,p} Group-Lasso for all p-norms with 1 <= p <= \infty, and
we present a unified active set algorithm. For all p-norms, a highly efficient
projected gradient algorithm is presented. This new algorithm enables us to
compare the prediction performance of many variants of the Group-Lasso in a
multi-task learning setting, where the aim is to solve many learning problems
in parallel which are coupled via the Group-Lasso constraint. We conduct
large-scale experiments on synthetic data and on two real-world data sets. In
accordance with theoretical characterizations of the different norms we observe
that the weak-coupling norms with p between 1.5 and 2 consistently outperform
the strong-coupling norms with p >> 2.
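As an illustration of the p = 2 member of the family, the proximal operator of
the l_{1,2} Group-Lasso penalty (a standard operation shown for orientation;
the paper's algorithms handle general p and the constrained formulation):

import numpy as np

def group_soft_threshold(w, groups, tau):
    # Prox of the l_{1,2} penalty: each group's coefficient block is
    # shrunk toward zero by tau in Euclidean norm, and zeroed if small.
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= tau else (1 - tau / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.2])
print(group_soft_threshold(w, [[0, 1], [2, 3]], tau=1.0))
# first group shrunk from norm 5 to norm 4, second group zeroed out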
|
1206.4633
|
Fast Bounded Online Gradient Descent Algorithms for Scalable
Kernel-Based Online Learning
|
cs.LG stat.ML
|
Kernel-based online learning has often shown state-of-the-art performance for
many online learning tasks. It, however, suffers from a major shortcoming, that
is, the unbounded number of support vectors, making it non-scalable and
unsuitable for applications with large-scale datasets. In this work, we study
the problem of bounded kernel-based online learning that aims to constrain the
number of support vectors by a predefined budget. Although several algorithms
have been proposed in the literature, they are neither computationally
efficient, due to their intensive budget maintenance strategies, nor
effective, due to their use of the simple Perceptron algorithm. To overcome
these limitations, we propose a
framework for bounded kernel-based online learning based on an online gradient
descent approach. We propose two efficient algorithms of bounded online
gradient descent (BOGD) for scalable kernel-based online learning: (i) BOGD by
maintaining support vectors using uniform sampling, and (ii) BOGD++ by
maintaining support vectors using non-uniform sampling. We present a theoretical
analysis of the regret bounds for both algorithms, and find promising empirical
performance in terms of both efficacy and efficiency by comparing them to
several well-known algorithms for bounded kernel-based online learning on
large-scale datasets.
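A simplified sketch of budgeted kernel online gradient descent in the spirit
of BOGD (hedged: details such as the rescaling of surviving coefficients in
the actual algorithm are omitted here): on a margin error, add a support
vector; when over budget, drop one uniformly at random.

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def bogd_train(X, y, budget=50, eta=0.2, lam=0.01, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    sv, alpha = [], []
    for x, label in zip(X, y):
        f = sum(a * rbf(s, x, gamma) for s, a in zip(sv, alpha))
        alpha = [(1 - eta * lam) * a for a in alpha]  # regularization shrink
        if label * f < 1:                             # hinge subgradient step
            sv.append(x)
            alpha.append(eta * label)
            if len(sv) > budget:
                i = rng.integers(len(sv))             # uniform-sampling removal
                sv.pop(i)
                alpha.pop(i)
    return sv, alpha

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)
sv, alpha = bogd_train(X, y)
print(len(sv))  # never exceeds the budget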
|
1206.4634
|
Artist Agent: A Reinforcement Learning Approach to Automatic Stroke
Generation in Oriental Ink Painting
|
cs.LG cs.GR stat.ML
|
Oriental ink painting, called Sumi-e, is one of the most appealing painting
styles that has attracted artists around the world. Major challenges in
computer-based Sumi-e simulation are to abstract complex scene information and
draw smooth and natural brush strokes. To automatically find such strokes, we
propose to model the brush as a reinforcement learning agent, and learn desired
brush-trajectories by maximizing the sum of rewards in the policy search
framework. We also provide an elaborate design of actions, states, and rewards
tailored for a Sumi-e agent. The effectiveness of our proposed approach is
demonstrated through simulated Sumi-e experiments.
|
1206.4635
|
Deep Mixtures of Factor Analysers
|
cs.LG stat.ML
|
An efficient way to learn deep density models that have many layers of latent
variables is to learn one layer at a time using a model that has only one layer
of latent variables. After learning each layer, samples from the posterior
distributions for that layer are used as training data for learning the next
layer. This approach is commonly used with Restricted Boltzmann Machines, which
are undirected graphical models with a single hidden layer, but it can also be
used with Mixtures of Factor Analysers (MFAs) which are directed graphical
models. In this paper, we present a greedy layer-wise learning algorithm for
Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted
to an equivalent shallow MFA by multiplying together the factor loading
matrices at different levels, learning and inference are much more efficient in
a DMFA and the sharing of each lower-level factor loading matrix by many
different higher level MFAs prevents overfitting. We demonstrate empirically
that DMFAs learn better density models than both MFAs and two types of
Restricted Boltzmann Machine on a wide variety of datasets.
|
1206.4636
|
Modeling Latent Variable Uncertainty for Loss-based Learning
|
cs.LG cs.AI cs.CV
|
We consider the problem of parameter estimation using weakly supervised
datasets, where a training sample consists of the input and a partially
specified annotation, which we refer to as the output. The missing information
in the annotation is modeled using latent variables. Previous methods
overburden a single distribution with two separate tasks: (i) modeling the
uncertainty in the latent variables during training; and (ii) making accurate
predictions for the output and the latent variables during testing. We propose
a novel framework that separates the demands of the two tasks using two
distributions: (i) a conditional distribution to model the uncertainty of the
latent variables for a given input-output pair; and (ii) a delta distribution
to predict the output and the latent variables for a given input. During
learning, we encourage agreement between the two distributions by minimizing a
loss-based dissimilarity coefficient. Our approach generalizes latent SVM in
two important ways: (i) it models the uncertainty over latent variables instead
of relying on a pointwise estimate; and (ii) it allows the use of loss
functions that depend on latent variables, which greatly increases its
applicability. We demonstrate the efficacy of our approach on two challenging
problems---object detection and action detection---using publicly available
datasets.
|
1206.4637
|
Learning to Identify Regular Expressions that Describe Email Campaigns
|
cs.LG cs.CL stat.ML
|
This paper addresses the problem of inferring a regular expression from a
given set of strings that resembles, as closely as possible, the regular
expression that a human expert would have written to identify the language.
This is motivated by our goal of automating the task of postmasters of an email
service who use regular expressions to describe and blacklist email spam
campaigns. Training data contains batches of messages and corresponding regular
expressions that an expert postmaster feels confident to blacklist. We model
this task as a learning problem with structured output spaces and an
appropriate loss function, derive a decoder and the resulting optimization
problem, and report on a case study conducted with an email service.
|
1206.4638
|
Efficient Euclidean Projections onto the Intersection of Norm Balls
|
cs.LG stat.ML
|
Using sparse-inducing norms to learn robust models has received increasing
attention from many fields for its attractive properties. Projection-based
methods have been widely applied to learning tasks constrained by such norms.
As a key building block of these methods, an efficient operator for Euclidean
projection onto the intersection of $\ell_1$ and $\ell_{1,q}$ norm balls
($q = 2$ or $\infty$) is proposed in this paper. We prove that the projection
can be reduced to finding the root of an auxiliary function which is piecewise
smooth and monotonic. Hence, a bisection algorithm is sufficient to solve the
problem. We show that the time complexity of our solution is $O(n+g\log g)$ for
$q=2$ and $O(n\log n)$ for $q=\infty$, where $n$ is the dimensionality of the
vector to be projected and $g$ is the number of disjoint groups; we confirm
this complexity by experimentation. Empirical study reveals that our method
achieves significantly better performance than classical methods in terms of
running time and memory usage. We further show that embedded with our efficient
projection operator, projection-based algorithms can solve regression problems
with composite norm constraints more efficiently than other methods and give
superior accuracy.
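The bisection idea is easiest to see on the plain l1 ball (a sketch with names
of our choosing; the paper's operator covers the intersection with l_{1,q}
balls): the auxiliary function phi(theta) = sum_i max(|v_i| - theta, 0) - r is
piecewise smooth and monotonically decreasing in theta, so its root gives the
soft-threshold level.

import numpy as np

def project_l1_ball(v, radius=1.0, tol=1e-8):
    # Bisection on phi(theta); the root yields the projection's threshold.
    if np.abs(v).sum() <= radius:
        return v.copy()                     # already inside the ball
    lo, hi = 0.0, np.abs(v).max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.maximum(np.abs(v) - theta, 0).sum() > radius:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0)

v = np.array([3.0, -1.0, 0.5])
print(project_l1_ball(v, radius=2.0))  # -> [2, 0, 0], with l1 norm 2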
|
1206.4639
|
Adaptive Regularization for Weight Matrices
|
cs.LG cs.AI
|
Algorithms for learning distributions over weight-vectors, such as AROW, were
recently shown empirically to achieve state-of-the-art performance on various
problems, with strong theoretical guarantees. Extending these algorithms to
matrix models poses challenges, since the number of free parameters in the
covariance of the distribution scales as $n^4$ with the dimension $n$ of the
matrix, and $n$ tends to be large in real applications. We describe, analyze
and experiment with two new algorithms for learning distribution of matrix
models. Our first algorithm maintains a diagonal covariance over the parameters
and can handle large covariance matrices. The second algorithm factors the
covariance to capture inter-features correlation while keeping the number of
parameters linear in the size of the original matrix. We analyze both
algorithms in the mistake bound model and show a superior precision performance
of our approach over other algorithms in two tasks: retrieving similar images,
and ranking similar documents. The factored algorithm is shown to attain a
faster convergence rate.
|
1206.4640
|
Stability of matrix factorization for collaborative filtering
|
cs.NA cs.LG stat.ML
|
We study the stability, vis-a-vis adversarial noise, of the matrix
factorization algorithm for matrix completion. In particular, our results include: (I) we
bound the gap between the solution matrix of the factorization method and the
ground truth in terms of root mean square error; (II) we treat the matrix
factorization as a subspace fitting problem and analyze the difference between
the solution subspace and the ground truth; (III) we analyze the prediction
error of individual users based on the subspace stability. We apply these
results to the problem of collaborative filtering under manipulator attack,
which leads to useful insights and guidelines for collaborative filtering
system design.
|
1206.4641
|
Total Variation and Euler's Elastica for Supervised Learning
|
cs.LG cs.CV stat.ML
|
In recent years, total variation (TV) and Euler's elastica (EE) have been
successfully applied to image processing tasks such as denoising and
inpainting. This paper investigates how to extend TV and EE to the supervised
learning settings on high dimensional data. The supervised learning problem can
be formulated as an energy functional minimization under Tikhonov
regularization scheme, where the energy is composed of a squared loss and a
total variation smoothing (or Euler's elastica smoothing). Its solution via
variational principles leads to an Euler-Lagrange PDE. However, the PDE is
always high-dimensional and cannot be directly solved by common methods.
Instead, radial basis functions are utilized to approximate the target
function, reducing the problem to finding the linear coefficients of basis
functions. We apply the proposed methods to supervised learning tasks
(including binary classification, multi-class classification, and regression)
on benchmark data sets. Extensive experiments have demonstrated promising
results of the proposed methods.
|
1206.4642
|
Fast Computation of Subpath Kernel for Trees
|
cs.DS cs.LG stat.ML
|
The kernel method is a potential approach to analyzing structured data such
as sequences, trees, and graphs; however, unordered trees have not been
investigated extensively. Kimura et al. (2011) proposed a kernel function for
unordered trees on the basis of their subpaths, which are vertical
substructures of trees responsible for hierarchical information in them. Their
kernel exhibits practically good performance in terms of accuracy and speed;
however, linear-time computation is not guaranteed theoretically, unlike the
case of the other unordered tree kernel proposed by Vishwanathan and Smola
(2003). In this paper, we propose a theoretically guaranteed linear-time kernel
computation algorithm that is practically fast, and we present an efficient
prediction algorithm whose running time depends only on the size of the input
tree. Experimental results show that the proposed algorithms are quite
efficient in practice.
|
1206.4643
|
Lightning Does Not Strike Twice: Robust MDPs with Coupled Uncertainty
|
cs.LG cs.GT cs.SY
|
We consider Markov decision processes under parameter uncertainty. Previous
studies restrict attention to the case in which uncertainties among different states are
uncoupled, which leads to conservative solutions. In contrast, we introduce an
intuitive concept, termed "Lightning Does not Strike Twice," to model coupled
uncertain parameters. Specifically, we require that the system can deviate from
its nominal parameters only a bounded number of times. We give probabilistic
guarantees indicating that this model represents real-life situations and
devise tractable algorithms for computing optimal control policies using this
concept.
|
1206.4644
|
Groupwise Constrained Reconstruction for Subspace Clustering
|
cs.LG stat.ML
|
Reconstruction-based subspace clustering methods compute a
self-reconstruction matrix over the samples and use it for spectral clustering to
obtain the final clustering result. Their success largely relies on the
assumption that the underlying subspaces are independent, which, however, does
not always hold in the applications with increasing number of subspaces. In
this paper, we propose a novel reconstruction based subspace clustering model
without making the subspace independence assumption. In our model, certain
properties of the reconstruction matrix are explicitly characterized using the
latent cluster indicators, and the affinity matrix used for spectral clustering
can be directly built from the posterior of the latent cluster indicators
instead of the reconstruction matrix. Experimental results on both synthetic
and real-world datasets show that the proposed model can outperform the
state-of-the-art methods.
|
1206.4645
|
Ensemble Methods for Convex Regression with Applications to Geometric
Programming Based Circuit Design
|
cs.LG cs.NA stat.ME stat.ML
|
Convex regression is a promising area for bridging statistical estimation and
deterministic convex optimization. New piecewise linear convex regression
methods are fast and scalable, but can have instability when used to
approximate constraints or objective functions for optimization. Ensemble
methods, like bagging, smearing and random partitioning, can alleviate this
problem and maintain the theoretical properties of the underlying estimator. We
empirically examine the performance of ensemble methods for prediction and
optimization, and then apply them to device modeling and constraint
approximation for geometric programming based circuit design.
|
1206.4646
|
Partial-Hessian Strategies for Fast Learning of Nonlinear Embeddings
|
cs.LG stat.ML
|
Stochastic neighbor embedding (SNE) and related nonlinear manifold learning
algorithms achieve high-quality low-dimensional representations of similarity
data, but are notoriously slow to train. We propose a generic formulation of
embedding algorithms that includes SNE and other existing algorithms, and study
their relation with spectral methods and graph Laplacians. This allows us to
define several partial-Hessian optimization strategies, characterize their
global and local convergence, and evaluate them empirically. We achieve up to
two orders of magnitude speedup over existing training methods with a strategy
(which we call the spectral direction) that adds nearly no overhead to the
gradient and yet is simple, scalable and applicable to several existing and
future embedding algorithms.
|
1206.4647
|
Active Learning for Matching Problems
|
cs.LG cs.AI cs.IR
|
Effective learning of user preferences is critical to easing user burden in
various types of matching problems. Equally important is active query selection
to further reduce the amount of preference information users must provide. We
address the problem of active learning of user preferences for matching
problems, introducing a novel method for determining probabilistic matchings,
and developing several new active learning strategies that are sensitive to the
specific matching objective. Experiments with real-world data sets spanning
diverse domains demonstrate that matching-sensitive active learning
|
1206.4648
|
Two-Manifold Problems with Applications to Nonlinear System
Identification
|
cs.LG
|
Recently, there has been much interest in spectral approaches to learning
manifolds---so-called kernel eigenmap methods. These methods have had some
successes, but their applicability is limited because they are not robust to
noise. To address this limitation, we look at two-manifold problems, in which
we simultaneously reconstruct two related manifolds, each representing a
different view of the same data. By solving these interconnected learning
problems together, two-manifold algorithms are able to succeed where a
non-integrated approach would fail: each view allows us to suppress noise in
the other, reducing bias. We propose a class of algorithms for two-manifold
problems, based on spectral decomposition of cross-covariance operators in
Hilbert space, and discuss when two-manifold problems are useful. Finally, we
demonstrate that solving a two-manifold problem can aid in learning a nonlinear
dynamical system from limited data.
|