id | title | categories | abstract |
|---|---|---|---|
1306.2091 | A framework for (under)specifying dependency syntax without overloading
annotators | cs.CL | We introduce a framework for lightweight dependency syntax annotation. Our
formalism builds upon the typical representation for unlabeled dependencies,
permitting a simple notation and annotation workflow. Moreover, the formalism
encourages annotators to underspecify parts of the syntax if doing so would
streamline the annotation process. We demonstrate the efficacy of this
annotation on three languages and develop algorithms to evaluate and compare
underspecified annotations.
|
1306.2094 | Predicting Risk-of-Readmission for Congestive Heart Failure Patients: A
Multi-Layer Approach | cs.LG stat.AP | Mitigating risk-of-readmission of Congestive Heart Failure (CHF) patients
within 30 days of discharge is important because such readmissions are not only
expensive but also a critical indicator of provider care and quality of
treatment. Accurately predicting the risk-of-readmission may allow hospitals to
identify high-risk patients and eventually improve quality of care by
identifying factors that contribute to such readmissions in many scenarios. In
this paper, we investigate the problem of predicting risk-of-readmission as a
supervised learning problem, using a multi-layer classification approach.
Earlier contributions attempted to assess a risk value for 30-day readmission
by building a single direct predictive model. In contrast, we first split the
problem into stages: (a) at risk in general, (b) at risk within 60 days, (c) at
risk within 30 days, and then build suitable classifiers for each stage,
thereby increasing the ability to accurately predict the risk using
multiple layers of decision. The advantage of our approach is that we can use
different classification models for the subtasks that are more suited for the
respective problems. Moreover, each of the subtasks can be solved using
different features and training data, leading to a more confident diagnosis or
risk estimate than a one-shot, single-layer approach. An experimental evaluation
on actual hospital patient record data from Multicare Health Systems shows that
our model is significantly better at predicting risk-of-readmission of CHF
patients within 30 days after discharge compared to prior attempts.
|
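The staged scheme described in this abstract can be sketched as a simple classifier cascade, where each layer is consulted only for patients flagged by the previous layer. The stage names and placeholder classifiers below are illustrative assumptions; the abstract does not specify the models or features used at each stage.

```python
def cascade_predict(stage_classifiers, x):
    """Multi-layer risk staging: (a) at risk in general, (b) at risk within
    60 days, (c) at risk within 30 days. Each stage is consulted only if the
    previous stage flagged the patient, so every stage can use its own model,
    features and training data."""
    stages = ["at_risk", "risk_within_60d", "risk_within_30d"]
    label = "not_at_risk"
    for stage, classify in zip(stages, stage_classifiers):
        if not classify(x):          # patient cleared at this layer: stop
            break
        label = stage                # refine the risk label and continue
    return label
```

In practice each `classify` would be a trained model (e.g. a thresholded predicted probability); here they are stand-ins to show the control flow.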
1306.2100 | Discriminative extended canonical correlation analysis for pattern set
matching | cs.CV | In this paper we address the problem of matching sets of vectors embedded in
the same input space. We propose an approach which is motivated by canonical
correlation analysis (CCA), a statistical technique which has proven successful
in a wide variety of pattern recognition problems. Like CCA when applied to the
matching of sets, our extended canonical correlation analysis (E-CCA) aims to
extract the most similar modes of variability within two sets. Our first major
contribution is the formulation of a principled framework for robust inference
of such modes from data in the presence of uncertainty associated with noise
and sampling randomness. E-CCA retains the efficiency and closed form
computability of CCA, but unlike it, does not possess free parameters which
cannot be inferred directly from data (inherent data dimensionality, and the
number of canonical correlations used for set similarity computation). Our
second major contribution is to show that in contrast to CCA, E-CCA is readily
adapted to match sets in a discriminative learning scheme which we call
discriminative extended canonical correlation analysis (DE-CCA). Theoretical
contributions of this paper are followed by an empirical evaluation of its
premises on the task of face recognition from sets of rasterized appearance
images. The results demonstrate that our approach, E-CCA, already outperforms
both CCA and its quasi-discriminative counterpart constrained CCA (C-CCA), for
all values of their free parameters. An even greater improvement is achieved
with the discriminative variant, DE-CCA.
|
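For background, classical CCA, which E-CCA extends, can be computed in closed form from the whitened cross-covariance matrix. This is a minimal sketch of standard CCA only, not the paper's E-CCA or DE-CCA formulations; the small ridge term `reg` is an assumption added for numerical stability.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-8):
    """Classical CCA: top-k canonical correlations between the column spaces
    of centered data matrices X (n x p) and Y (n x q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def invsqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of the whitened cross-covariance are the correlations.
    M = invsqrt(Cxx) @ Cxy @ invsqrt(Cyy)
    s = np.linalg.svd(M, compute_uv=False)
    return s[:k]  # canonical correlations, sorted descending, in [0, 1]
```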
1306.2101 | Secrecy Rates in the Broadcast Channel with Confidential Messages and
External Eavesdroppers | cs.IT math.IT | In this paper, we consider the broadcast channel with confidential messages
and external eavesdroppers (BCCE), where a multi-antenna base station
simultaneously communicates to multiple potentially malicious users, in the
presence of randomly located external eavesdroppers. Using the proposed model,
we study the secrecy rates achievable by regularized channel inversion (RCI)
precoding by performing a large-system analysis that combines tools from
stochastic geometry and random matrix theory. We obtain explicit expressions
for the probability of secrecy outage and an upper bound on the rate loss due
to the presence of external eavesdroppers. We show that both these quantities
scale as $\frac{\lambda_e}{\sqrt{N}}$, where $N$ is the number of transmit
antennas and $\lambda_e$ is the density of external eavesdroppers, irrespective
of their collusion strategy. Furthermore, we derive a practical rule for the
choice of the regularization parameter, which is agnostic of channel state
information and location of eavesdroppers, and yet provides close to optimal
performance.
|
1306.2102 | Discriminative k-means clustering | cs.CV | The k-means algorithm is a partitional clustering method. Over 60 years old,
it has been successfully used for a variety of problems. The popularity of
k-means is in large part a consequence of its simplicity and efficiency. In
this paper we are inspired by these appealing properties of k-means in the
development of a clustering algorithm which accepts the notion of "positively"
and "negatively" labelled data. The goal is to discover the cluster structure
of both positive and negative data in a manner which allows for the
discrimination between the two sets. The usefulness of this idea is
demonstrated practically on the problem of face recognition, where the task of
learning the scope of a person's appearance should be done in a manner which
allows this face to be differentiated from others.
|
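As background, the vanilla k-means loop that this paper builds on is only a few lines of NumPy. The discriminative positive/negative extension itself is not specified in the abstract, so only the standard algorithm is sketched here.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```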
1306.2109 | Distributed Decision-Making over Adaptive Networks | cs.IT cs.SY math.IT | In distributed processing, agents generally collect data generated by the
same underlying unknown model (represented by a vector of parameters) and then
solve an estimation or inference task cooperatively. In this paper, we consider
the situation in which the data observed by the agents may have arisen from two
different models. Agents do not know beforehand which model accounts for their
data and the data of their neighbors. The objective for the network is for all
agents to reach agreement on which model to track and to estimate this model
cooperatively. In these situations, where agents are subject to data from
unknown different sources, conventional distributed estimation strategies would
lead to biased estimates relative to any of the underlying models. We first
show how to modify existing strategies to guarantee unbiasedness. We then
develop a classification scheme for the agents to identify the models that
generated the data, and propose a procedure by which the entire network can be
made to converge towards the same model through a collaborative decision-making
process. The resulting algorithm is applied to model fish foraging behavior in
the presence of two food sources.
|
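For context, a standard adapt-then-combine (ATC) diffusion LMS strategy, the kind of conventional distributed estimator this paper modifies, can be sketched as follows. This assumes a single common model `w0` and a combination matrix whose columns sum to one; it is background only, not the paper's modified unbiased or decision-making scheme.

```python
import numpy as np

def diffusion_lms(w0, A, mu=0.05, iters=2000, noise=0.01, seed=0):
    """Adapt-then-combine (ATC) diffusion LMS: each agent adapts to its own
    streaming data, then averages intermediate estimates with its neighbors.
    A is the N x N combination matrix (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    N, d = A.shape[0], len(w0)
    W = np.zeros((N, d))                        # one estimate per agent
    for _ in range(iters):
        U = rng.standard_normal((N, d))         # per-agent regressors
        D = U @ w0 + noise * rng.standard_normal(N)   # per-agent measurements
        err = D - np.einsum('ij,ij->i', U, W)   # innovation d_k - u_k . w_k
        Psi = W + mu * err[:, None] * U         # adapt step (local LMS)
        W = A.T @ Psi                           # combine step (neighborhood avg)
    return W
```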
1306.2118 | A Novel Approach for Single Gene Selection Using Clustering and
Dimensionality Reduction | cs.CE cs.LG | We extend the standard rough set-based approach to deal with a huge number of
numeric attributes and a small number of available objects. A novel approach
combining clustering with dimensionality reduction, the Hybrid Fuzzy C
Means-Quick Reduct (FCMQR) algorithm, is proposed for single gene selection.
Gene selection is the process of selecting the most informative genes, and it
is one of the important steps in knowledge discovery. The problem is that not
all genes in gene expression data are important: some may be redundant, and
others may be irrelevant and noisy. In this study, the entire dataset is
divided into groups of similar genes by applying the Fuzzy C-Means (FCM)
algorithm. Highly class-discriminative genes are then selected, based on their
degree of dependence, by applying the rough-set-based Quick Reduct algorithm to
all resulting clusters. The Average Correlation Value (ACV) is calculated for
these genes. Clusters with an ACV of 1 are deemed significant; their
classification accuracy is equal to or higher than that of the entire
dataset. The proposed algorithm is evaluated using WEKA classifiers and
compared. Finally, experimental results related to the leukemia cancer data
confirm that our approach is quite promising, though it surely requires further
research.
|
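The first stage of the pipeline, Fuzzy C-Means, can be sketched as follows. This is standard FCM with fuzzifier m = 2; the Quick Reduct and ACV stages are omitted because the abstract does not give enough detail to reproduce them.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means: alternate fuzzy-membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships: rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centers, U
```

Hard cluster labels, when needed (e.g. to form the gene groups), are obtained by taking the arg-max membership per row.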
1306.2119 | Non-strongly-convex smooth stochastic approximation with convergence
rate O(1/n) | cs.LG math.OC stat.ML | We consider the stochastic approximation problem where a convex function has
to be minimized, given only the knowledge of unbiased estimates of its
gradients at certain points, a framework which includes machine learning
methods based on the minimization of the empirical risk. We focus on problems
without strong convexity, for which all previously known algorithms achieve a
convergence rate for function values of O(1/n^{1/2}). We consider and analyze
two algorithms that achieve a rate of O(1/n) for classical supervised learning
problems. For least-squares regression, we show that averaged stochastic
gradient descent with constant step-size achieves the desired rate. For
logistic regression, this is achieved by a simple novel stochastic gradient
algorithm that (a) constructs successive local quadratic approximations of the
loss functions, while (b) preserving the same running time complexity as
stochastic gradient descent. For these algorithms, we provide a non-asymptotic
analysis of the generalization error (in expectation, and also in high
probability for least-squares), and run extensive experiments on standard
machine learning benchmarks showing that they often outperform existing
approaches.
|
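The least-squares result can be illustrated with a minimal sketch of constant-step-size SGD with Polyak-Ruppert averaging of the iterates, the estimator for which the O(1/n) rate is shown. The step size and data below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def averaged_sgd_lsq(X, y, step=0.05, seed=0):
    """Constant-step-size SGD for least squares, returning the average of
    the iterates (Polyak-Ruppert averaging)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for t, i in enumerate(rng.permutation(n), start=1):
        # Unbiased gradient of the sample loss 0.5 * (x_i . w - y_i)^2.
        grad = (X[i] @ w - y[i]) * X[i]
        w -= step * grad
        w_bar += (w - w_bar) / t      # running average of the iterates
    return w_bar
```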
1306.2158 | "Not not bad" is not "bad": A distributional account of negation | cs.CL | With the increasing empirical success of distributional models of
compositional semantics, it is timely to consider the types of textual logic
that such models are capable of capturing. In this paper, we address
shortcomings in the ability of current models to capture logical operations
such as negation. As a solution we propose a tripartite formulation for a
continuous vector space representation of semantics and subsequently use this
representation to develop a formal compositional notion of negation within such
models.
|
1306.2159 | Image segmentation by optimal and hierarchical piecewise constant
approximations | cs.CV | Piecewise constant image approximations with a sequentially increasing number
of segments, or clusters of disconnected pixels, are treated. A method for
majorizing the optimal approximation sequence by a hierarchical sequence of
image approximations is proposed. A generalization to the multidimensional
case of color and multispectral images is foreseen.
|
1306.2230 | Stochastic fluctuations and the detectability limit of network
communities | physics.soc-ph cond-mat.stat-mech cs.SI q-bio.QM | We have analyzed the detectability limits of network communities in the
framework of the popular Girvan and Newman benchmark. By carefully taking into
account the inevitable stochastic fluctuations that affect the construction of
each and every instance of the benchmark, we come to the conclusion that the
native, putative partition of the network is completely lost even before the
in-degree/out-degree ratio becomes equal to the one of a structure-less
Erd\"os-R\'enyi network. We develop a simple iterative scheme, analytically
well described by an infinite branching-process, to provide an estimate of the
true detectability limit. Using various algorithms based on modularity
optimization, we show that all of them behave (semi-quantitatively) in the same
way, with the same functional form of the detectability threshold as a function
of the network parameters. Because the same behavior has also been found for
further modularity-optimization methods and for methods based on different
heuristics, we conclude that a correct definition of the
detectability limit must take into account the stochastic fluctuations of the
network construction.
|
1306.2257 | Using the quaternion's representation of individuals in swarm
intelligence and evolutionary computation | cs.NE | This paper introduces a novel idea for representing individuals using
quaternions in swarm intelligence and evolutionary algorithms. Quaternions are
a number system, which extends complex numbers. They are successfully applied
to problems of theoretical physics and to those areas needing fast rotation
calculations. We propose the application of quaternions in optimization; more
precisely, we use quaternions to represent individuals in the Bat algorithm.
The preliminary results of our experiments when optimizing a
test-suite consisting of ten standard functions showed that this new algorithm
significantly improved the results of the original Bat algorithm. Moreover, the
obtained results are comparable with other swarm intelligence and evolutionary
algorithms, like the artificial bee colony and differential evolution. We
believe that this representation could also be successfully applied to other
swarm intelligence and evolutionary algorithms.
|
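The quaternion algebra involved is easy to state concretely. The sketch below shows the Hamilton product that such a representation relies on; how individuals are decoded back to real-valued solution components is not specified in the abstract, so the `to_scalar` decoding via the quaternion norm is purely an assumed illustration.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_scalar(q):
    """Assumed decoding: map a quaternion 'gene' to a real value by its norm."""
    return float(np.linalg.norm(q))
```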
1306.2268 | Accomplishable Tasks in Knowledge Representation | cs.AI cs.CL | Knowledge Representation (KR) is traditionally based on the logic of facts,
expressed in Boolean logic. However, facts about an agent can also be seen as
a set of tasks accomplished by the agent. This paper proposes a new approach to
KR: the notion of task-logical KR based on Computability Logic. This notion
allows the user to represent both tasks accomplished by the agent and tasks
accomplishable by it, which lets us build sophisticated KRs about many
interesting agents that previous logical languages could not support.
|
1306.2290 | Asymptotically Optimal Sequential Estimation of the Mean Based on
Inclusion Principle | math.ST cs.LG math.PR stat.TH | A large class of problems in sciences and engineering can be formulated as
the general problem of constructing random intervals with pre-specified
coverage probabilities for the mean. We propose a general approach for
statistical inference of mean values based on accumulated observational data.
We show that the construction of such random intervals can be accomplished by
comparing the endpoints of random intervals with confidence sequences for the
mean. Asymptotic results are obtained for such sequential methods.
|
1306.2295 | Markov random fields factorization with context-specific independences | cs.AI cs.LG | Markov random fields provide a compact representation of joint probability
distributions by representing their independence properties in an undirected
graph. The well-known Hammersley-Clifford theorem uses these conditional
independences to factorize a Gibbs distribution into a set of factors. However,
an important issue of using a graph to represent independences is that it
cannot encode some types of independence relations, such as the
context-specific independences (CSIs). These are a particular case of
conditional independences that hold only for certain assignments of the
conditioning set, in contrast to conditional independences, which must hold for
all assignments. This work presents a method for factorizing a Markov
random field according to CSIs present in a distribution, and formally
guarantees that this factorization is correct. This is presented in our main
contribution, the context-specific Hammersley-Clifford theorem, a
generalization to CSIs of the Hammersley-Clifford theorem that applies for
conditional independences.
|
1306.2298 | Generative Model Selection Using a Scalable and Size-Independent Complex
Network Classifier | cs.SI cs.LG physics.soc-ph stat.ML | Real networks exhibit nontrivial topological features such as heavy-tailed
degree distribution, high clustering, and small-worldness. Researchers have
developed several generative models for synthesizing artificial networks that
are structurally similar to real networks. An important research problem is to
identify the generative model that best fits a target network. In this
paper, we investigate this problem and our goal is to select the model that is
able to generate graphs similar to a given network instance. By generating
synthetic networks with seven prominent generative models, we utilize machine
learning methods to develop a decision tree for model
selection. Our proposed method, which is named "Generative Model Selection for
Complex Networks" (GMSCN), outperforms existing methods with respect to
accuracy, scalability and size-independence.
|
1306.2301 | A note on quantum related-key attacks | quant-ph cs.CR cs.IT math.IT | In a basic related-key attack against a block cipher, the adversary has
access to encryptions under keys that differ from the target key by bit-flips.
In this short note we show that for a quantum adversary such attacks are quite
powerful: if (i) the secret key is uniquely determined by a small number of
plaintext-ciphertext pairs, (ii) the block cipher can be evaluated efficiently,
and (iii) a superposition of related keys can be queried, then the key can be
extracted efficiently.
|
1306.2305 | Computing Flowpipe of Nonlinear Hybrid Systems with Numerical Methods | math.OC cs.NA cs.SY math.NA | Modern control-command systems often include controllers that perform
nonlinear computations to control a physical system, which can typically be
described by a hybrid automaton containing high-dimensional systems of
nonlinear differential equations. To prove safety of such systems, one must
compute all the reachable sets from a given initial position, which might be
uncertain (its value is not precisely known). On linear hybrid systems,
efficient and precise techniques exist, but they fail to handle nonlinear flows
or jump conditions. In this article, we present a new tool named HySon, which
computes the flowpipes of both linear and nonlinear hybrid systems using
guaranteed generalizations of classical, efficient numerical simulation
methods, including variable integration step-sizes. In particular, we present an
algorithm for detecting discrete events based on guaranteed interpolation
polynomials that turns out to be both precise and efficient. Illustrations of
the techniques developed in this article are given on representative examples.
|
1306.2347 | Auditing: Active Learning with Outcome-Dependent Query Costs | cs.LG | We propose a learning setting in which unlabeled data is free, and the cost
of a label depends on its value, which is not known in advance. We study binary
classification in an extreme case, where the algorithm only pays for negative
labels. Our motivation comes from applications such as fraud detection, in which
investigating an honest transaction should be avoided if possible. We term the
setting auditing, and consider the auditing complexity of an algorithm: the
number of negative labels the algorithm requires in order to learn a hypothesis
with low relative error. We design auditing algorithms for simple hypothesis
classes (thresholds and rectangles), and show that with these algorithms, the
auditing complexity can be significantly lower than the active label
complexity. We also discuss a general competitive approach for auditing and
possible modifications to the framework.
|
1306.2361 | Joint Transmit Diversity Optimization and Relay Selection for
Cooperative MIMO Systems using Discrete Stochastic Algorithms | cs.IT math.IT | We propose a joint discrete stochastic optimization based transmit diversity
selection (TDS) and relay selection (RS) algorithm for decode-and-forward (DF),
cooperative MIMO systems with a non-negligible direct path. TDS and RS are
performed jointly with continuous least squares channel estimation (CE);
linear minimum mean square error (MMSE) receivers are used at all nodes, and no
inter-relay communication is required. The performance of the proposed scheme
is evaluated via bit-error rate (BER) comparisons and diversity analysis, and
is shown to converge to the optimum exhaustive solution.
|
1306.2362 | Bidirectional MMSE Algorithms for Interference Mitigation in CDMA
Systems over Fast Fading Channels | cs.IT math.IT | This paper presents adaptive bidirectional minimum mean-square error (MMSE)
parameter estimation algorithms for fast-fading channels. The time correlation
between successive channel gains is exploited to improve the estimation and
tracking capabilities of adaptive algorithms and provide robustness against
time-varying channels. Bidirectional normalized least mean-square (NLMS) and
conjugate gradient (CG) algorithms are devised along with adaptive mixing
parameters that adjust to the time-varying channel correlation properties. An
analysis of the proposed algorithms is provided along with a discussion of
their performance advantages. Simulations for an application to interference
suppression in DS-CDMA systems show the advantages of the proposed algorithms.
|
1306.2399 | Distributed Detection in Coexisting Large-scale Sensor Networks | cs.IT math.IT | This paper considers signal detection in coexisting wireless sensor networks
(WSNs). We characterize the aggregate signal and interference from a Poisson
random field of nodes and define a binary hypothesis testing problem to detect
a signal in the presence of interference. For the testing problem, we introduce
the maximum likelihood (ML) detector and simpler alternatives. The proposed
mixed-fractional lower order moment (FLOM) detector is computationally simple,
close to the ML detector in performance, and robust to estimation errors in
system parameters. We also derive asymptotic theoretical performance for the
proposed simple detectors. Monte-Carlo simulations are used to supplement our
analytical results and compare the performance of the receivers.
|
1306.2422 | Relative Observability of Discrete-Event Systems and its Supremal
Sublanguages | cs.SY | We identify a new observability concept, called relative observability, in
supervisory control of discrete-event systems under partial observation. A
fixed, ambient language is given, relative to which observability is tested.
Relative observability is stronger than observability, but enjoys the important
property that it is preserved under set union; hence there exists the supremal
relatively observable sublanguage of a given language. Relative observability
is weaker than normality, and thus yields, when combined with controllability,
a generally larger controlled behavior; in particular, no constraint is imposed
that only observable controllable events may be disabled. We design algorithms
which compute the supremal relatively observable (and controllable) sublanguage
of a given language, which is generally larger than its normal counterpart. We
demonstrate the new observability concept and algorithms with a Guideway and an
AGV example.
|
1306.2434 | Compressive Time Delay Estimation Using Interpolation | cs.IT math.IT | Time delay estimation has long been an active area of research. In this work,
we show that compressive sensing with interpolation may be used to achieve good
estimation precision while lowering the sampling frequency. We propose an
Interpolating Band-Excluded Orthogonal Matching Pursuit algorithm that uses one
of two interpolation functions to estimate the time delay parameter. The
numerical results show that interpolation improves estimation precision and
that compressive sensing provides an elegant tradeoff that may lower the
required sampling frequency while still attaining a desired estimation
performance.
|
1306.2459 | Fast Search for Dynamic Multi-Relational Graphs | cs.DB | Acting on time-critical events by processing ever growing social media or
news streams is a major technical challenge. Many of these data sources can be
modeled as multi-relational graphs. Continuous queries or techniques to search
for rare events that typically arise in monitoring applications have been
studied extensively for relational databases. This work is dedicated to answering
the question that emerges naturally: how can we efficiently execute a
continuous query on a dynamic graph? This paper presents an exact subgraph
search algorithm that exploits the temporal characteristics of representative
queries for online news or social media monitoring. The algorithm is based on a
novel data structure called the Subgraph Join Tree (SJ-Tree) that leverages the
structural and semantic characteristics of the underlying multi-relational
graph. The paper concludes with extensive experimentation on several real-world
datasets that demonstrates the validity of this approach.
|
1306.2460 | StreamWorks - A system for Dynamic Graph Search | cs.DB | Acting on time-critical events by processing ever growing social media, news
or cyber data streams is a major technical challenge. Many of these data
sources can be modeled as multi-relational graphs. Mining and searching for
subgraph patterns in a continuous setting requires an efficient approach to
incremental graph search. The goal of our work is to enable real-time search
capabilities for graph databases. This demonstration will present a dynamic
graph query system that leverages the structural and semantic characteristics
of the underlying multi-relational graph.
|
1306.2487 | How many parameters to model states of mind ? | physics.soc-ph cs.SI | A series of examples of computational models is provided, where the aim is to
interpret numerical results in terms of the internal states of agents' minds.
Two opposite strategies of research can be distinguished in the literature.
First is to reproduce the richness and complexity of real world as faithfully
as possible, second is to apply simple assumptions and check the results in
depth. As a rule, the results of the latter method agree only qualitatively
with some stylized facts. The price we pay for more detailed predictions within
the former method is that consequences of the rich set of underlying
assumptions remain unchecked. Here we argue that for computational reasons,
complex models with many parameters are less suitable.
|
1306.2491 | Optimal Sensor and Actuator Placement in Complex Dynamical Networks | math.OC cs.SY | Controllability and observability have long been recognized as fundamental
structural properties of dynamical systems, but have recently seen renewed
interest in the context of large, complex networks of dynamical systems. A
basic problem is sensor and actuator placement: choose a subset from a finite
set of possible placements to optimize some real-valued controllability and
observability metrics of the network. Surprisingly little is known about the
structure of such combinatorial optimization problems. In this paper, we show
that an important class of metrics based on the controllability and
observability Gramians has a strong structural property that allows efficient
global optimization: the mapping from possible placements to the trace of the
associated Gramian is a modular set function. We illustrate the results via
placement of power electronic actuators in a model of the European power grid.
|
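The modularity claim can be checked numerically: the Gramian depends linearly on B Bᵀ = Σᵢ bᵢbᵢᵀ, so the trace of the Gramian for a set of actuator placements equals the sum of the traces for each placement alone. The sketch below assumes an illustrative stable matrix A, not the European grid model from the paper, and solves the Lyapunov equation in vectorized (Kronecker) form.

```python
import numpy as np

def gramian_trace(A, B):
    """Trace of the controllability Gramian W solving A W + W A^T + B B^T = 0,
    via the vectorized (Kronecker) form of the Lyapunov equation."""
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    W = np.linalg.solve(M, -(B @ B.T).reshape(-1)).reshape(n, n)
    return float(np.trace(W))
```

Because the map B ↦ B Bᵀ is additive over actuator columns and the Lyapunov equation is linear in its right-hand side, `gramian_trace(A, B)` is a modular set function of the chosen columns, which is exactly what makes greedy/global optimization of this metric easy.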
1306.2499 | Using Arabic Wordnet for semantic indexation in information retrieval
system | cs.IR cs.CL | In the context of Arabic Information Retrieval Systems (IRS) guided by Arabic
ontologies, and to enable those systems to better respond to user requirements,
this paper aims to represent documents and queries by the best concepts
extracted from Arabic WordNet. Identified concepts belonging to Arabic WordNet
synsets are extracted from documents and queries, and those having a single
sense are expanded. The expanded query is then used by the IRS to retrieve the
relevant documents. Our experiments are based primarily on a medium-sized
corpus of Arabic text. The results obtained show a global improvement in the
performance of the Arabic IRS.
|
1306.2533 | DISCOMAX: A Proximity-Preserving Distance Correlation Maximization
Algorithm | cs.LG stat.ML | In a regression setting we propose algorithms that reduce the dimensionality
of the features while simultaneously maximizing a statistical measure of
dependence known as distance correlation between the low-dimensional features
and a response variable. This helps in solving the prediction problem with a
low-dimensional set of features. Our setting is different from subset-selection
algorithms where the problem is to choose the best subset of features for
regression. Instead, we attempt to generate a new set of low-dimensional
features as in a feature-learning setting. We keep our proposed approach
model-free: our algorithm does not assume the application of any specific
regression model in conjunction with the low-dimensional features that it
learns. The algorithm is iterative and is formulated as a combination of the
majorization-minimization and concave-convex optimization procedures. We also
present spectral radius based convergence results for the proposed iterations.
|
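The dependence measure being maximized, sample distance correlation (Székely-Rizzo), can be computed directly from double-centered pairwise distance matrices. This is a minimal univariate sketch of the statistic itself, not the paper's DISCOMAX algorithm.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples of equal length."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    def centered(a):
        # Double-centered pairwise distance matrix.
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                    # squared distance covariance
    dvarx, dvary = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))
```

Unlike Pearson correlation, this statistic is zero (in population) only under independence, which is why it is a natural objective for supervised dimensionality reduction.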
1306.2537 | Analysis of communities in a mythological social network | physics.soc-ph cs.SI nlin.AO | The intriguing nature of classical Homeric narratives has always fascinated
occidental culture, contributing to philosophy, history, mythology and,
straightforwardly, to literature. But what is so intriguing about Homer's
narratives? At first gaze we recognize the literary appeal and aesthetic
pleasure present on every page across Homer's chants in the Odyssey and
rhapsodies in the Iliad. Secondly, we may perceive a mixed aspect of the
stories' contents, varying from real-historical to fictional-mythological. To
support this glance, there are new archaeological findings that support the
historicity of some events described in the Iliad and, consequently, the
Odyssey. Considering these observations and using complex network theory
concepts, we built and analyzed a social network gathered from the classical
epic, the Odyssey of Homer. Seeking further understanding, topological
quantities were collected in order to classify the social network qualitatively
as real or fictional. It turns out that most of the observed properties belong
to real social networks, except for assortativity and the size of the giant
component. In order to test whether the network could be real, we removed some
mythological members that could imprint a fictional aspect on the network.
After this maneuver, the modified social network exhibited assortative mixing
and a reduced giant component, as expected for real social networks. Overall we
observe that the Odyssey might be an amalgam of fictional elements plus
real-based human relations, which corroborates other authors' findings for the
Iliad and archaeological evidence.
|
1306.2547 | Efficient Classification for Metric Data | cs.LG cs.DS stat.ML | Recent advances in large-margin classification of data residing in general
metric spaces (rather than Hilbert spaces) enable classification under various
natural metrics, such as string edit and earthmover distance. A general
framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004]
left open the questions of computational efficiency and of providing direct
bounds on generalization error.
We design a new algorithm for classification in general metric spaces, whose
runtime and accuracy depend on the doubling dimension of the data points, and
can thus achieve superior classification performance in many common scenarios.
The algorithmic core of our approach is an approximate (rather than exact)
solution to the classical problems of Lipschitz extension and of Nearest
Neighbor Search. The algorithm's generalization performance is guaranteed via
the fat-shattering dimension of Lipschitz classifiers, and we present
experimental evidence of its superiority to some common kernel methods. As a
by-product, we offer a new perspective on the nearest neighbor classifier,
which yields significantly sharper risk asymptotics than the classic analysis
of Cover and Hart [IEEE Trans. Info. Theory, 1967].
|
1306.2550 | Fixed-to-Variable Length Resolution Coding for Target Distributions | cs.IT math.IT | The number of random bits required to approximate a target distribution in
terms of un-normalized informational divergence is considered. It is shown that
for a variable-to-variable length encoder, this number is lower bounded by the
entropy of the target distribution. A fixed-to-variable length encoder is
constructed using M-type quantization and Tunstall coding. It is shown that the
encoder achieves in the limit an un-normalized informational divergence of zero
with the number of random bits per generated symbol equal to the entropy of the
target distribution. Numerical results show that the proposed encoder
significantly outperforms the optimal block-to-block encoder in the finite
length regime.
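As a rough illustration of the M-type quantization step, a pmf can be approximated by a type with denominator M using largest-remainder rounding; this sketch, including the name `m_type_quantize`, is our own generic construction and not necessarily the paper's exact encoder:

```python
def m_type_quantize(p, M):
    """Approximate a target pmf p by an M-type (probabilities that are
    multiples of 1/M) via largest-remainder rounding; a generic sketch,
    not necessarily the paper's exact construction."""
    counts = [int(M * pi) for pi in p]
    remaining = M - sum(counts)
    # hand the leftover mass to the symbols with largest fractional parts
    order = sorted(range(len(p)), key=lambda i: M * p[i] - counts[i],
                   reverse=True)
    for i in order[:remaining]:
        counts[i] += 1
    return [c / M for c in counts]
```

The output pmf always sums to exactly one, and every probability is an integer multiple of 1/M.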
|
1306.2554 | The association problem in wireless networks: a Policy Gradient
Reinforcement Learning approach | cs.NI cs.IT cs.LG math.IT | The purpose of this paper is to develop a self-optimized association
algorithm based on PGRL (Policy Gradient Reinforcement Learning) that is
scalable, stable, and robust. Here, robust means that performance degradation
in the learning phase should be forbidden or limited to predefined thresholds.
The algorithm is model-free (as opposed to Value Iteration) and robust (as
opposed to Q-Learning). The association problem is modeled as a Markov Decision
Process (MDP). The policy space is parameterized. The parameterized family of
policies is then used as expert knowledge for the PGRL. The PGRL converges
towards a local optimum and the average cost decreases monotonically during the
learning process. The properties of the solution make it a good candidate for
practical implementation. Furthermore, the robustness property allows the PGRL
algorithm to be used in an "always-on" learning mode.
|
1306.2557 | Concentration bounds for temporal difference learning with linear
function approximation: The case of batch data and uniform sampling | cs.LG stat.ML | We propose a stochastic approximation (SA) based method with randomization of
samples for policy evaluation using the least squares temporal difference
(LSTD) algorithm. Our proposed scheme is equivalent to running regular temporal
difference learning with linear function approximation, albeit with samples
picked uniformly from a given dataset. Our method results in an $O(d)$
improvement in complexity in comparison to LSTD, where $d$ is the dimension of
the data. We provide non-asymptotic bounds for our proposed method, both in
high probability and in expectation, under the assumption that the matrix
underlying the LSTD solution is positive definite. The latter assumption can be
easily satisfied for the pathwise LSTD variant proposed in [23]. Moreover, we
also establish that using our method in place of LSTD does not impact the rate
of convergence of the approximate value function to the true value function.
These rate results coupled with the low computational complexity of our method
make it attractive for implementation in big data settings, where $d$ is large.
A similar low-complexity alternative for least squares regression is well-known
as the stochastic gradient descent (SGD) algorithm. We provide finite-time
bounds for SGD. We demonstrate the practicality of our method as an efficient
alternative for pathwise LSTD empirically by combining it with the least
squares policy iteration (LSPI) algorithm in a traffic signal control
application. We also conduct another set of experiments that combines the SA
based low-complexity variant for least squares regression with the LinUCB
algorithm for contextual bandits, using the large scale news recommendation
dataset from Yahoo.
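The core scheme, regular TD(0) with linear function approximation fed by transitions sampled uniformly from a fixed batch, can be sketched as follows; the constant step size and all names here are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def td0_uniform(dataset, phi, d, gamma=0.9, alpha=0.1, steps=2000, seed=0):
    """TD(0) with linear features, sampling transitions uniformly from a
    batch dataset of (state, reward, next_state) triples.  A sketch of
    the idea in the abstract; step size and names are illustrative."""
    rng = random.Random(seed)
    theta = [0.0] * d
    for _ in range(steps):
        s, r, s_next = dataset[rng.randrange(len(dataset))]
        x, x_next = phi(s), phi(s_next)
        v = sum(w * xi for w, xi in zip(theta, x))
        v_next = sum(w * xi for w, xi in zip(theta, x_next))
        delta = r + gamma * v_next - v                 # TD error
        theta = [w + alpha * delta * xi for w, xi in zip(theta, x)]
    return theta
```

Each update touches only the d-dimensional feature vector, which is the O(d) per-step cost contrasted in the abstract with LSTD's quadratic cost.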
|
1306.2558 | The Effect of Biased Communications On Both Trusting and Suspicious
Voters | cs.AI | In recent studies of political decision-making, apparently anomalous behavior
has been observed on the part of voters, in which negative information about a
candidate strengthens, rather than weakens, a prior positive opinion about the
candidate. This behavior appears to run counter to rational models of decision
making, and it is sometimes interpreted as evidence of non-rational "motivated
reasoning". We consider scenarios in which this effect arises in a model of
rational decision making which includes the possibility of deceptive
information. In particular, we will consider a model in which there are two
classes of voters, which we will call trusting voters and suspicious voters,
and two types of information sources, which we will call unbiased sources and
biased sources. In our model, new data about a candidate can be efficiently
incorporated by a trusting voter, and anomalous updates are impossible;
however, anomalous updates can be made by suspicious voters, if the information
source mistakenly plans for an audience of trusting voters, and if the partisan
goals of the information source are known by the suspicious voter to be
"opposite" to his own. Our model is based on a formalism introduced by the
artificial intelligence community called "multi-agent influence diagrams",
which generalize Bayesian networks to settings involving multiple agents with
distinct goals.
|
1306.2581 | Preamble-based Channel Estimation in FBMC/OQAM Systems: A Time-Domain
Approach | cs.IT math.IT stat.AP | Filter bank-based multicarrier (FBMC) systems based on offset QAM (FBMC/OQAM)
have recently attracted increased interest in several applications due to their
enhanced flexibility, higher spectral efficiency, and better spectral
containment compared to conventional OFDM. They suffer, however, from an
inter-carrier/inter-symbol interference that complicates signal processing
tasks such as channel estimation. Most of the methods reported thus far rely on
the assumption of (almost) flat subchannels to more easily tackle this problem,
addressing it in a way similar to OFDM. However, this assumption may be often
quite inaccurate, due to the high frequency selectivity of the channel and/or the
small number of subcarriers employed to cope with frequency dispersion in fast
fading. In such cases, severe error floors are exhibited at medium to high SNR
values, which cancel the advantage of FBMC over OFDM. Moreover, the existing
methods provide estimates of the subchannel responses, most commonly in the
frequency domain. The goal of this paper is to revisit this problem through an
alternative formulation that focuses on the estimation of the channel impulse
response itself and makes no assumption on the degree of frequency selectivity
of the subchannels. The possible gains in estimation performance offered by
such an approach are investigated through the design of optimal (in the MSE
sense) preambles, of both the full and sparse types, and of the smallest
possible duration of only one pilot FBMC symbol. Existing designs for flat
subchannels are then shown to result as special cases. Longer preambles,
consisting of two consecutive pilot FBMC symbols, are also analyzed. The
simulation results demonstrate significant improvements from the proposed
approach for both mildly and highly frequency selective channels. Most notably,
no error floors appear anymore over a quite wide range of SNR values.
|
1306.2593 | A Perceptual Alphabet for the 10-dimensional Phonetic-prosodic Space | cs.SD cs.CL | We define an alphabet, the IHA, of the 10-D phonetic-prosodic space. The
dimensions of this space are perceptual observables, rather than articulatory
specifications. Speech is defined as a random chain in time of the 4-D phonetic
subspace, that is, a symbolic sequence, augmented with diacritics of the
remaining 6-D prosodic subspace. The definitions here are based on the model of
speech of oral billiards, and supersede an earlier version. This paper only
enumerates the IHA in detail as a supplement to the exposition of oral
billiards in a separate paper. The IHA has been implemented as the target
random variable in a speech recognizer.
|
1306.2595 | Capacity Scaling in MIMO Systems with General Unitarily Invariant Random
Matrices | cs.IT math.IT | We investigate the capacity scaling of MIMO systems with the system
dimensions. To that end, we quantify how the mutual information varies when the
number of antennas (at either the receiver or transmitter side) is altered. For
a system comprising $R$ receive and $T$ transmit antennas with $R>T$, we find
the following: By removing as many receive antennas as needed to obtain a
square system (provided the channel matrices before and after the removal have
full rank) the maximum resulting loss of mutual information over all
signal-to-noise ratios (SNRs) depends only on $R$, $T$ and the matrix of
left-singular vectors of the initial channel matrix, but not on its singular
values. In particular, if the latter matrix is Haar distributed the ergodic
rate loss is given by $\sum_{t=1}^{T}\sum_{r=T+1}^{R}\frac{1}{r-t}$ nats. Under
the same assumption, if $T,R\to \infty$ with the ratio $\phi\triangleq T/R$
fixed, the rate loss normalized by $R$ converges almost surely to $H(\phi)$
bits with $H(\cdot)$ denoting the binary entropy function. We also quantify and
study how the mutual information as a function of the system dimensions
deviates from the traditionally assumed linear growth in the minimum of the
system dimensions at high SNR.
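Both closed-form expressions quoted above are straightforward to evaluate numerically; the helper names below are ours:

```python
import math

def ergodic_rate_loss_nats(T, R):
    """Ergodic rate loss in nats, sum_{t=1}^{T} sum_{r=T+1}^{R} 1/(r-t),
    for a Haar-distributed matrix of left-singular vectors."""
    return sum(1.0 / (r - t)
               for t in range(1, T + 1)
               for r in range(T + 1, R + 1))

def binary_entropy_bits(phi):
    """H(phi): the asymptotic rate loss per receive antenna, in bits."""
    if phi <= 0.0 or phi >= 1.0:
        return 0.0
    return -phi * math.log2(phi) - (1.0 - phi) * math.log2(1.0 - phi)
```

Already at T = 40, R = 80, the finite-size loss per receive antenna (converted to bits) lies within roughly 1% of H(1/2) = 1 bit.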
|
1306.2597 | Introducing LETOR 4.0 Datasets | cs.IR | LETOR is a package of benchmark data sets for research on LEarning TO Rank,
which contains standard features, relevance judgments, data partitioning,
evaluation tools, and several baselines. Version 1.0 was released in April
2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec.
2008. This version, 4.0, was released in July 2009. Very different from
previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based
on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page
collection (~25M pages) and two query sets from Million Query track of TREC
2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short.
There are about 1700 queries in MQ2007 with labeled documents and about 800
queries in MQ2008 with labeled documents. If you have any questions or
suggestions about the datasets, please kindly email us (letor@microsoft.com).
Our goal is to make the dataset reliable and useful for the community.
|
1306.2599 | Hand Gesture Recognition Based on Karhunen-Loeve Transform | cs.CV | In this paper, we have proposed a system based on K-L Transform to recognize
different hand gestures. The system consists of five steps: skin filtering,
palm cropping, edge detection, feature extraction, and classification. First,
the hand is detected using skin filtering, and palm cropping is performed to
extract only the palm portion of the hand. The extracted image is then
processed using the Canny edge detection technique to obtain the outline of
the palm. After palm extraction, the hand features are extracted using the K-L
Transform, and finally the input gesture is recognized using a suitable
classifier. We tested our system on 10 different hand gestures and obtained a
recognition rate of 96%. We thus propose a simple approach to recognizing
different hand gestures.
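The K-L Transform feature-extraction step amounts to projecting images onto the leading eigenvectors of the data covariance; below is a minimal sketch of finding the leading component by power iteration, our own simplification rather than the paper's full pipeline:

```python
def kl_top_component(vectors, iters=200):
    """Leading K-L (PCA) basis vector of mean-centered data, found by power
    iteration on the sample covariance; a sketch of the feature-extraction
    step only, not the paper's full recognition pipeline."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    centered = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        # apply the covariance matrix: C w = (1/n) * sum_i x_i (x_i . w)
        proj = [sum(x[j] * w[j] for j in range(d)) for x in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) / n
             for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        w = [c / norm for c in w]
    return w
```

Projecting each (vectorized) palm image onto the top few such components gives the low-dimensional feature vector fed to the classifier.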
|
1306.2607 | A Lower Bound for the Fisher Information Measure | cs.IT math.IT | The problem of how to approximately determine the absolute value of the Fisher
information measure for a general parametric probabilistic system is
considered. Having available the first and second moment of the system output
in a parametric form, it is shown that the information measure can be bounded
from below through a replacement of the original system by a Gaussian system
with equivalent moments. The presented technique is applied to a system of
practical importance and the potential quality of the bound is demonstrated.
|
1306.2624 | Stopping Criterion for the Mean Shift Iterative Algorithm | cs.CV math.RA | Image segmentation is a critical step in computer vision tasks constituting
an essential issue for pattern recognition and visual interpretation. In this
paper, we propose a new stopping criterion for the mean shift iterative
algorithm for images defined in the Z_n ring, with the goal of reaching a
better segmentation. We also carried out a study of weak and strong
equivalence classes between two images. The convergence of the algorithm under
this new stopping criterion is analyzed as well.
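For context, a one-dimensional mean shift iteration with the common displacement-based stopping rule is sketched below; the paper's criterion, defined over images in the Z_n ring, would replace the simple tolerance test used here (the flat kernel and all names are our assumptions):

```python
def mean_shift_1d(points, x0, bandwidth=1.0, tol=1e-6, max_iter=100):
    """One mean-shift mode seek with a displacement-based stopping rule
    (illustrative flat-kernel version, not the paper's Z_n-ring criterion)."""
    x = x0
    for _ in range(max_iter):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        x_new = sum(window) / len(window)
        if abs(x_new - x) < tol:   # stop once the shift falls below tol
            return x_new
        x = x_new
    return x
```

The choice of stopping criterion decides how many of these window averages are computed before the iterate is declared a mode.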
|
1306.2663 | Large Margin Low Rank Tensor Analysis | cs.LG cs.NA | Other than vector representations, the direct objects of human cognition are
generally high-order tensors, such as 2D images and 3D textures. From this
fact, two interesting questions naturally arise: How does the human brain
represent these tensor perceptions in a "manifold" way, and how can they be
recognized on the "manifold"? In this paper, we present a supervised model to
learn the intrinsic structure of the tensors embedded in a high dimensional
Euclidean space. With the fixed point continuation procedures, our model
automatically and jointly discovers the optimal dimensionality and the
representations of the low dimensional embeddings. This makes it an effective
simulation of the cognitive process of human brain. Furthermore, the
generalization of our model based on similarity between the learned low
dimensional embeddings can be viewed as counterpart of recognition of human
brain. Experiments on applications for object recognition and face recognition
demonstrate the superiority of our proposed model over state-of-the-art
approaches.
|
1306.2665 | Precisely Verifying the Null Space Conditions in Compressed Sensing: A
Sandwiching Algorithm | cs.IT cs.LG cs.SY math.IT math.OC stat.ML | In this paper, we propose new efficient algorithms to verify the null space
condition in compressed sensing (CS). Given an $(n-m) \times n$ ($m>0$) CS
matrix $A$ and a positive integer $k$, we are interested in computing
$\displaystyle \alpha_k = \max_{\{z: Az=0,z\neq 0\}}\max_{\{K: |K|\leq k\}}
\frac{\|z_K\|_{1}}{\|z\|_{1}}$, where $K$ represents subsets of
$\{1,2,...,n\}$ and $|K|$ is the cardinality of $K$. In particular, we are
interested in finding the maximum $k$ such that $\alpha_k < \frac{1}{2}$.
However, computing $\alpha_k$ is
known to be extremely challenging. In this paper, we first propose a series of
new polynomial-time algorithms to compute upper bounds on $\alpha_k$. Based on
these new polynomial-time algorithms, we further design a new sandwiching
algorithm, to compute the \emph{exact} $\alpha_k$ with greatly reduced
complexity. When needed, this new sandwiching algorithm also achieves a smooth
tradeoff between computational complexity and result accuracy. Empirical
results show the performance improvements of our algorithm over existing known
methods; and our algorithm outputs precise values of $\alpha_k$, with much
lower complexity than exhaustive search.
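For a fixed null-space vector z, the inner maximization over supports K has a closed form: the optimal K collects the k entries of largest magnitude. A sketch with our own naming follows; the hard part addressed by the sandwiching algorithm is the outer maximization over the null space of A:

```python
def concentration_ratio(z, k):
    """max over |K| <= k of ||z_K||_1 / ||z||_1 for a fixed vector z,
    attained by the k entries of largest magnitude."""
    mags = sorted((abs(v) for v in z), reverse=True)
    return sum(mags[:k]) / sum(mags)
```

alpha_k is the supremum of this ratio over all nonzero z with Az = 0, which is why exhaustive search over the null space is so expensive.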
|
1306.2672 | R3MC: A Riemannian three-factor algorithm for low-rank matrix completion | math.OC cs.LG | We exploit the versatile framework of Riemannian optimization on quotient
manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank
matrix completion. The underlying search space of fixed-rank matrices is
endowed with a novel Riemannian metric that is tailored to the least-squares
cost. Numerical comparisons suggest that R3MC robustly outperforms
state-of-the-art algorithms across different problem instances, especially
those that combine scarcely sampled and ill-conditioned data.
|
1306.2675 | Kolmogorov Complexity of Categories | math.CT cs.IT cs.LO cs.PL math.IT math.LO | Kolmogorov complexity theory quantifies the algorithmic
information content of a string. It is defined as the length of the
shortest program that describes the string. We present a programming language
that can be used to describe categories, functors, and natural transformations.
With this in hand, we define the informational content of these categorical
structures as the shortest program that describes such structures. Some basic
consequences of our definition are presented including the fact that equivalent
categories have equal Kolmogorov complexity. We also prove different theorems
about what can and cannot be described by our programming language.
|
1306.2685 | Flexible sampling of discrete data correlations without the marginal
distributions | stat.ML cs.LG stat.CO | Learning the joint dependence of discrete variables is a fundamental problem
in machine learning, with many applications including prediction, clustering
and dimensionality reduction. More recently, the framework of copula modeling
has gained popularity due to its modular parametrization of joint
distributions. Among other properties, copulas provide a recipe for combining
flexible models for univariate marginal distributions with parametric families
suitable for potentially high dimensional dependence structures. More
radically, the extended rank likelihood approach of Hoff (2007) bypasses
learning marginal models completely when such information is ancillary to the
learning task at hand as in, e.g., standard dimensionality reduction problems
or copula parameter estimation. The main idea is to represent data by their
observable rank statistics, ignoring any other information from the marginals.
Inference is typically done in a Bayesian framework with Gaussian copulas, and
it is complicated by the fact that this implies sampling within a space where the
number of constraints increases quadratically with the number of data points.
The result is slow mixing when using off-the-shelf Gibbs sampling. We present
an efficient algorithm based on recent advances on constrained Hamiltonian
Markov chain Monte Carlo that is simple to implement and does not require
paying for a quadratic cost in sample size.
|
1306.2691 | Preserving differential privacy under finite-precision semantics | cs.DB | The approximation introduced by finite-precision representation of continuous
data can induce arbitrarily large information leaks even when the computation
using exact semantics is secure. Such leakage can thus undermine design efforts
aimed at protecting sensitive information. We focus here on differential
privacy, an approach to privacy that emerged from the area of statistical
databases and is now widely applied also in other domains. In this approach,
privacy is protected by the addition of noise to a true (private) value. To
date, this approach to privacy has been proved correct only in the ideal case
in which computations are made using an idealized, infinite-precision
semantics. In this paper, we analyze the situation at the implementation level,
where the semantics is necessarily finite-precision, i.e., the representation
of real numbers and the operations on them are rounded according to some level of
precision. We show that in general there are violations of the differential
privacy property, and we study the conditions under which we can still
guarantee a limited (but, arguably, totally acceptable) variant of the
property, under only a minor degradation of the privacy level. Finally, we
illustrate our results on two cases of noise-generating distributions: the
standard Laplacian mechanism commonly used in differential privacy, and a
bivariate version of the Laplacian recently introduced in the setting of
privacy-aware geolocation.
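For reference, the standard Laplacian mechanism in its idealized description is sketched below; any execution of such code in floating point is exactly the finite-precision setting whose leakage the paper analyzes (the function name and interface are our assumptions):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise via
    inverse-CDF sampling.  This is the idealized, infinite-precision
    description; running it on floats is the finite-precision case."""
    u = rng.random() - 0.5                       # uniform on [-0.5, 0.5)
    scale = sensitivity / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

The rounding hidden in `rng.random()`, `log`, and the final addition is precisely where the idealized privacy proof and the implementation diverge.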
|
1306.2700 | Hierarchical Interference Mitigation for Massive MIMO Cellular Networks | cs.IT math.IT | We propose a hierarchical interference mitigation scheme for massive MIMO
cellular networks. The MIMO precoder at each base station (BS) is partitioned
into an inner precoder and an outer precoder. The inner precoder controls the
intra-cell interference and is adaptive to local channel state information
(CSI) at each BS (CSIT). The outer precoder controls the inter-cell
interference and is adaptive to channel statistics. Such hierarchical precoding
structure reduces the number of pilot symbols required for CSI estimation in
massive MIMO downlink and is robust to the backhaul latency. We study joint
optimization of the outer precoders, the user selection, and the power
allocation to maximize a general concave utility which has no closed-form
expression. We first apply random matrix theory to obtain an approximated
problem with closed-form objective. We show that the solution of the
approximated problem is asymptotically optimal with respect to the original
problem as the number of antennas per BS grows large. Then using the hidden
convexity of the problem, we propose an iterative algorithm to find the optimal
solution for the approximated problem. We also obtain a low complexity
algorithm with provable convergence. Simulations show that the proposed design
has significant gain over various state-of-the-art baselines.
|
1306.2701 | Cache-Enabled Opportunistic Cooperative MIMO for Video Streaming in
Wireless Systems | cs.IT cs.MM math.IT | We propose a cache-enabled opportunistic cooperative MIMO (CoMP) framework
for wireless video streaming. By caching a portion of the video files at the
relays (RS) using a novel MDS-coded random cache scheme, the base station (BS)
and RSs opportunistically employ CoMP to achieve spatial multiplexing gain
without expensive payload backhaul. We study a two timescale joint optimization
of power and cache control to support real-time video streaming. The cache
control is to create more CoMP opportunities and is adaptive to the long-term
popularity of the video files. The power control is to guarantee the QoS
requirements and is adaptive to the channel state information (CSI), the cache
state at the RS and the queue state information (QSI) at the users. The joint
problem is decomposed into an inner power control problem and an outer cache
control problem. We first derive a closed-form power control policy from an
approximated Bellman equation. Based on this, we transform the outer problem
into a convex stochastic optimization problem and propose a stochastic
subgradient algorithm to solve it. Finally, the proposed solution is shown to
be asymptotically optimal for high SNR and small timeslot duration. Its
superior performance over various baselines is verified by simulations.
|
1306.2727 | Sparse Representation-based Image Quality Assessment | cs.CV cs.MM eess.IV | A successful approach to image quality assessment involves comparing the
structural information between a distorted image and its reference. However,
extracting structural information that is perceptually important to our visual
system is a challenging task. This paper addresses this issue by employing a
sparse representation-based approach and proposes a new metric called the
\emph{sparse representation-based quality} (SPARQ) \emph{index}. The proposed
method learns the inherent structures of the reference image as a set of basis
vectors, such that any structure in the image can be represented by a linear
combination of only a few of those basis vectors. This sparse strategy is
employed because it is known to generate basis vectors that are qualitatively
similar to the receptive field of the simple cells present in the mammalian
primary visual cortex. The visual quality of the distorted image is estimated
by comparing the structures of the reference and the distorted images in terms
of the learnt basis vectors resembling cortical cells. Our approach is
evaluated on six publicly available subject-rated image quality assessment
datasets. The proposed SPARQ index consistently exhibits high correlation with
the subjective ratings on all datasets and performs better or at par with the
state-of-the-art.
|
1306.2733 | Copula Mixed-Membership Stochastic Blockmodel for Intra-Subgroup
Correlations | cs.LG stat.ML | The \emph{Mixed-Membership Stochastic Blockmodel (MMSB)} is a popular
framework for modeling social network relationships. It can fully exploit each
individual node's participation (or membership) in a social structure. Despite
its powerful representations, this model makes an assumption that the
distributions of relational membership indicators between two nodes are
independent. Under many social network settings, however, it is possible that
certain known subgroups of people may have high or low correlations in terms of
their membership categories towards each other, and such prior information
should be incorporated into the model. To this end, we introduce a \emph{Copula
Mixed-Membership Stochastic Blockmodel (cMMSB)} where an individual Copula
function is employed to jointly model the membership pairs of those nodes
within the subgroup of interest. The model enables the use of various Copula
functions to suit the scenario, while maintaining the membership's marginal
distribution, as needed, for modeling membership indicators with other nodes
outside of the subgroup of interest. We describe the proposed model and its
inference algorithm in detail for both the finite and infinite cases. In the
experiment section, we compare our algorithms with other popular models in
terms of link prediction, using both synthetic and real world data.
|
1306.2735 | On the Impact of Relay-side Channel State Information on Opportunistic
Relaying | cs.IT math.IT | In this paper, outage performance of network topology-aware distributed
opportunistic relay selection strategies is studied with focus on the impact of
different levels of channel state information (CSI) available at relays.
Specifically, two scenarios with (a) exact instantaneous and (b) only
statistical CSI are compared with explicit account for both small-scale
Rayleigh fading and path loss due to random inter-node distances. Analytical
results, matching closely to simulations, suggest that although similar
diversity order can be achieved in both cases, the lack of precise CSI to
support relay selection translates into significant increase in the power
required to achieve the same level of QoS. In addition, when only statistical
CSI is available, achieving the same diversity order is associated with a clear
performance degradation at low SNR due to splitting of system resources between
multiple relays.
|
1306.2759 | Horizontal and Vertical Ensemble with Deep Representation for
Classification | cs.LG stat.ML | Representation learning, especially through deep learning, has been
widely applied in classification. However, how to use a limited amount of
labeled data to achieve good classification performance with a deep neural
network, and how the learned features can further improve classification,
remain unclear. In this paper, we propose Horizontal Voting, Vertical Voting,
and Horizontal Stacked Ensemble methods to improve the classification
performance of deep neural networks. In the ICML 2013 Black Box Challenge,
using these methods
independently, Bing Xu achieved 3rd in public leaderboard, and 7th in private
leaderboard; Jingjing Xie achieved 4th in public leaderboard, and 5th in
private leaderboard.
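Horizontal voting, as described, averages the predictions produced by the network snapshots saved at several consecutive training epochs; a minimal sketch with our own naming:

```python
def horizontal_vote(epoch_probs):
    """Average the class-probability vectors predicted for one example by
    the snapshots of a network taken at several consecutive epochs."""
    n = len(epoch_probs)
    num_classes = len(epoch_probs[0])
    return [sum(p[c] for p in epoch_probs) / n for c in range(num_classes)]
```

The averaged vector is then argmaxed as usual; vertical voting and stacking combine predictions across layers rather than across epochs.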
|
1306.2795 | Recurrent Convolutional Neural Networks for Scene Parsing | cs.CV | Scene parsing consists of assigning a label to every pixel in an image
according to the class it belongs to. To ensure a good visual
coherence and a high class accuracy, it is essential for a scene parser to
capture image long range dependencies. In a feed-forward architecture, this can
be simply achieved by considering a sufficiently large input context patch,
around each pixel to be labeled. We propose an approach consisting of a
recurrent convolutional neural network which allows us to consider a large
input context, while limiting the capacity of the model. Contrary to most
standard approaches, our method does not rely on any segmentation methods, nor
any task-specific features. The system is trained in an end-to-end manner over
raw pixels, and models complex spatial dependencies with low inference cost. As
the context size increases with the built-in recurrence, the system identifies
and corrects its own errors. Our approach yields state-of-the-art performance
on both the Stanford Background Dataset and the SIFT Flow Dataset, while
remaining very fast at test time.
|
1306.2801 | Understanding Dropout: Training Multi-Layer Perceptrons with Auxiliary
Independent Stochastic Neurons | cs.NE cs.LG stat.ML | In this paper, a simple, general method of adding auxiliary stochastic
neurons to a multi-layer perceptron is proposed. It is shown that the proposed
method is a generalization of recently successful methods of dropout (Hinton et
al., 2012), explicit noise injection (Vincent et al., 2010; Bishop, 1995) and
semantic hashing (Salakhutdinov & Hinton, 2009). Under the proposed framework,
an extension of dropout that allows separate dropping probabilities for
different hidden neurons, or layers, follows naturally. The use of
different dropping probabilities for hidden layers separately is empirically
investigated.
|
1306.2838 | The Quantum Challenge in Concept Theory and Natural Language Processing | cs.CL cs.IR quant-ph | The mathematical formalism of quantum theory has been successfully used in
human cognition to model decision processes and to deliver representations of
human knowledge. As such, quantum cognition inspired tools have improved
technologies for Natural Language Processing and Information Retrieval. In this
paper, we overview the quantum cognition approach developed in our Brussels
team during the last two decades, specifically our identification of quantum
structures in human concepts and language, and the modeling of data from
psychological and corpus-text-based experiments. We discuss our
quantum-theoretic framework for concepts and their conjunctions/disjunctions in
a Fock-Hilbert space structure, adequately modeling a large amount of data
collected on concept combinations. Inspired by this modeling, we put forward
elements for a quantum contextual and meaning-based approach to information
technologies in which 'entities of meaning' are inversely reconstructed from
texts, which are considered as traces of these entities' states.
|
1306.2843 | On Some Recent Insights in Integral Biomathics | cs.CE | This paper summarizes the results in Integral Biomathics obtained to this
moment and provides an outlook for future research in the field.
|
1306.2861 | Bayesian Inference and Learning in Gaussian Process State-Space Models
with Particle MCMC | stat.ML cs.LG cs.SY | State-space models are successfully used in many areas of science,
engineering and economics to model time series and dynamical systems. We
present a fully Bayesian approach to inference \emph{and learning} (i.e. state
estimation and system identification) in nonlinear nonparametric state-space
models. We place a Gaussian process prior over the state transition dynamics,
resulting in a flexible model able to capture complex dynamical phenomena. To
enable efficient inference, we marginalize over the transition dynamics
function and infer directly the joint smoothing distribution using specially
tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the
smoothing distribution is computed, the state transition predictive
distribution can be formulated analytically. Our approach preserves the full
nonparametric expressivity of the model and can make use of sparse Gaussian
processes to greatly reduce computational complexity.
|
1306.2862 | On the weight hierarchy of codes coming from semigroups with two
generators | math.CO cs.IT math.IT math.NT | The weight hierarchy of one-point algebraic geometry codes can be estimated
by means of the generalized order bounds, which are described in terms of a
certain Weierstrass semigroup. The asymptotical behaviour of such bounds for r
> 1 differs from that of the classical Feng-Rao distance (r=1) by the so-called
Feng-Rao numbers. This paper is addressed to compute the Feng-Rao numbers for
numerical semigroups of embedding dimension two (with two generators),
obtaining a closed simple formula for the general case by using numerical
semigroup techniques. These involve the computation of the Ap\'ery set with
respect to an integer of the semigroups under consideration. The formula
obtained is applied to lower-bounding the generalized Hamming weights,
improving the bound given by Kirfel and Pellikaan in terms of the classical
Feng-Rao distance. We also compare our bound with a modification of the
Griesmer bound, improving this one in many cases.
|
1306.2863 | Random Drift Particle Swarm Optimization | cs.AI cs.NE math.OC | The random drift particle swarm optimization (RDPSO) algorithm, inspired by
the free electron model in metal conductors placed in an external electric
field, is presented, systematically analyzed and empirically studied in this
paper. The free electron model considers that electrons have both a thermal and
a drift motion in a conductor that is placed in an external electric field. The
motivation of the RDPSO algorithm is described first, and the velocity equation
of the particle is designed by simulating the thermal motion as well as the
drift motion of the electrons, both of which lead the electrons to a location
with minimum potential energy in the external electric field. Then, a
comprehensive analysis of the algorithm is made, in order to provide a deep
insight into how the RDPSO algorithm works. It involves a theoretical analysis
and the simulation of the stochastic dynamical behavior of a single particle in
the RDPSO algorithm. The search behavior of the algorithm itself is also
investigated in detail, by analyzing the interaction between the particles.
Some variants of the RDPSO algorithm are proposed by incorporating different
random velocity components with different neighborhood topologies. Finally,
empirical studies on the RDPSO algorithm are performed by using a set of
benchmark functions from the CEC2005 benchmark suite. Based on the theoretical
analysis of the particle's behavior, two methods of controlling the algorithmic
parameters are employed, followed by an experimental analysis on how to select
the parameter values, in order to obtain a good overall performance of the
RDPSO algorithm and its variants in real-world applications. A further
performance comparison between the RDPSO algorithms and other variants of PSO
is made to demonstrate the efficiency of the RDPSO algorithms.
|
1306.2864 | Finding Academic Experts on a MultiSensor Approach using Shannon's
Entropy | cs.AI cs.IR | Expert finding is an information retrieval task concerned with the search
for the most knowledgeable people on some topic, based on documents
describing people's activities. The task involves taking a user query as input
and returning a list of people sorted by their level of expertise regarding the
user query. This paper introduces a novel approach for combining multiple
estimators of expertise based on a multisensor data fusion framework together
with the Dempster-Shafer theory of evidence and Shannon's entropy. More
specifically, we defined three sensors which detect heterogeneous information
derived from the textual contents, from the graph structure of the citation
patterns for the community of experts, and from profile information about the
academic experts. Given the evidence collected, each sensor may identify
different candidates as experts, and consequently the sensors may not agree on
a final ranking decision. To deal with these conflicts, we applied the Dempster-Shafer
theory of evidence combined with Shannon's Entropy formula to fuse this
information and come up with a more accurate and reliable final ranking list.
Experiments over two datasets of academic publications from the Computer
Science domain attest to the adequacy of the proposed approach over
traditional state-of-the-art approaches. We also ran experiments against
representative supervised state-of-the-art algorithms. Results revealed that
the proposed method achieved a similar performance when compared to these
supervised techniques, confirming the capabilities of the proposed framework.
|
1306.2878 | Perfect Output Feedback in the Two-User Decentralized Interference
Channel | cs.IT math.IT | In this paper, the $\eta$-Nash equilibrium ($\eta$-NE) region of the two-user
Gaussian interference channel (IC) with perfect output feedback is approximated
to within $1$ bit/s/Hz and $\eta$ arbitrarily close to $1$ bit/s/Hz. The
relevance of the $\eta$-NE region is that it provides the set of rate-pairs
that are achievable and stable in the IC when both transmitter-receiver pairs
autonomously tune their own transmit-receive configurations seeking an
$\eta$-optimal individual transmission rate. Therefore, any rate tuple outside
the $\eta$-NE region is not stable as there always exists one link able to
increase by at least $\eta$ bits/s/Hz its own transmission rate by updating its
own transmit-receive configuration. The main insights that arise from this work
are: $(i)$ The $\eta$-NE region achieved with feedback is larger than or equal
to the $\eta$-NE region without feedback. More importantly, for each rate pair
achievable at an $\eta$-NE without feedback, there exists at least one rate
pair achievable at an $\eta$-NE with feedback that is weakly Pareto superior.
$(ii)$ There always exists an $\eta$-NE transmit-receive configuration that
achieves a rate pair that is at most $1$ bit/s/Hz per user away from the outer
bound of the capacity region.
|
1306.2898 | Defining a Simulation Strategy for Cancer Immunocompetence | cs.CE q-bio.TO | Although there are various types of cancer treatments, none of these
currently take into account the effect of ageing of the immune system and hence
altered responses to cancer. Recent studies have shown that in vitro
stimulation of T cells can help in the treatment of patients. There are many
factors that have to be considered when simulating an organism's
immunocompetence. Our particular interest lies in the study of loss of
immunocompetence with age. We are trying to answer questions such as: Given a
certain age of a patient, how fit is their immune system to fight cancer? Would
an immune boost improve the effectiveness of a cancer treatment given the
patient's immune phenotype and age? We believe that understanding the processes
of immune system ageing and degradation through computer simulation may help in
answering these questions. Specifically, we have decided to look at the change
in numbers of naive T cells with age, as they play an important role in
responses to cancer and anti-tumour vaccination. In this work we present an
agent-based simulation model to understand the interactions which influence the
naive T cell populations over time. Our agent model is based on an existing
mathematical system dynamics model, but in comparison offers better scope for
customisation and detailed analysis. We believe that the results obtained can
in future help with the modelling of T cell populations inside tumours.
|
1306.2906 | Robust Support Vector Machines for Speaker Verification Task | cs.LG cs.SD stat.ML | An important step in speaker verification is extracting features that best
characterize the speaker voice. This paper investigates a front-end processing
that aims at improving the performance of speaker verification based on the
SVM classifier, in text-independent mode. This approach combines features
based on conventional Mel-Frequency Cepstral Coefficients (MFCCs) and Line Spectral
Frequencies (LSFs) to constitute robust multivariate feature vectors. To reduce
the high dimensionality required for training these feature vectors, we use a
dimension reduction method called principal component analysis (PCA). In order
to evaluate the robustness of these systems, different noisy environments have
been used. The obtained results using TIMIT database showed that, using the
paradigm that combines these spectral cues leads to a significant improvement
in verification accuracy, especially with PCA reduction in low signal-to-noise
ratio noisy environments.
|
1306.2918 | Reinforcement learning with restrictions on the action set | cs.GT cs.LG math.PR | Consider a 2-player normal-form game repeated over time. We introduce an
adaptive learning procedure, where the players only observe their own realized
payoff at each stage. We assume that agents do not know their own payoff
function, and have no information on the other player. Furthermore, we assume
that they have restrictions on their own action set such that, at each stage,
their choice is limited to a subset of their action set. We prove that the
empirical distributions of play converge to the set of Nash equilibria for
zero-sum and potential games, and games where one player has two actions.
|
1306.2967 | Optimization of Clustering for Clustering-based Image Denoising | cs.CV | In this paper, the problem of de-noising of an image contaminated with
additive white Gaussian noise (AWGN) is studied. This subject has remained an
open problem in signal processing for more than 50 years. In the present
paper, we suggest a method based on global clustering of the blocks that
constitute the image. Noting that the type of clustering plays an important role
in clustering-based de-noising methods, we address two questions about the
clustering. First, which parts of data should be considered for clustering?
Second, what data clustering method is suitable for de-noising? Clustering is
exploited to learn an overcomplete dictionary. By obtaining a sparse
decomposition of the noisy image blocks in terms of the dictionary atoms, the
de-noised version is achieved. Experimental results show that our dictionary
learning framework outperforms traditional dictionary learning methods such as
K-SVD.
|
1306.2972 | Synchronization-Aware and Algorithm-Efficient Chance Constrained Optimal
Power Flow | math.OC cs.SY physics.soc-ph | One of the most common control decisions faced by power system operators is
the question of how to dispatch generation to meet demand for power. This is a
complex optimization problem that includes many nonlinear, non-convex
constraints as well as inherent uncertainties about future demand for power and
available generation. In this paper we develop convex formulations to
appropriately model crucial classes of nonlinearities and stochastic effects.
We focus on solving a nonlinear optimal power flow (OPF) problem that includes
loss of synchrony constraints and models wind-farm caused fluctuations. In
particular, we develop (a) a convex formulation of the deterministic
phase-difference nonlinear Optimum Power Flow (OPF) problem; and (b) a
probabilistic chance constrained OPF for angular stability, thermal overloads
and generation limits that is computationally tractable.
|
1306.2979 | Completing Any Low-rank Matrix, Provably | stat.ML cs.IT cs.LG math.IT | Matrix completion, i.e., the exact and provable recovery of a low-rank matrix
from a small subset of its elements, is currently only known to be possible if
the matrix satisfies a restrictive structural constraint---known as {\em
incoherence}---on its row and column spaces. In these cases, the subset of
elements is sampled uniformly at random.
In this paper, we show that {\em any} rank-$ r $ $ n$-by-$ n $ matrix can be
exactly recovered from as few as $O(nr \log^2 n)$ randomly chosen elements,
provided this random choice is made according to a {\em specific biased
distribution}: the probability of any element being sampled should be
proportional to the sum of the leverage scores of the corresponding row, and
column. Perhaps equally important, we show that this specific form of sampling
is nearly necessary, in a natural precise sense; this implies that other
perhaps more intuitive sampling schemes fail.
We further establish three ways to use the above result for the setting when
leverage scores are not known \textit{a priori}: (a) a sampling strategy for
the case when only one of the row or column spaces is incoherent, (b) a
two-phase sampling procedure for general matrices that first samples to
estimate leverage scores followed by sampling for exact recovery, and (c) an
analysis showing the advantages of weighted nuclear/trace-norm minimization
over the vanilla un-weighted formulation for the case of non-uniform sampling.
|
1306.2999 | Dynamic Infinite Mixed-Membership Stochastic Blockmodel | cs.SI cs.LG stat.ML | Directional and pairwise measurements are often used to model
inter-relationships in a social network setting. The Mixed-Membership
Stochastic Blockmodel (MMSB) was a seminal work in this area, and many of its
capabilities were extended since then. In this paper, we propose the
\emph{Dynamic Infinite Mixed-Membership stochastic blockModel (DIM3)}, a
generalised framework that extends the existing work to a potentially infinite
number of communities and mixture memberships for each of the network's nodes.
This model is in a dynamic setting, where additional model parameters are
introduced to reflect the degree of persistence between one's memberships at
consecutive times. Accordingly, two effective posterior sampling strategies and
their results are presented using both synthetic and real data.
|
1306.3002 | A Convergence Theorem for the Graph Shift-type Algorithms | stat.ML cs.LG | Graph Shift (GS) algorithms have recently attracted attention as a promising
approach for discovering dense subgraphs in noisy data. However, there are no
theoretical foundations for proving the convergence of GS algorithms. In this paper, we
propose a generic theoretical framework consisting of three key GS components:
simplex of generated sequence set, monotonic and continuous objective function
and closed mapping. We prove that GS algorithms with such components can be
transformed to fit Zangwill's convergence theorem, and the sequence set
generated by the GS procedures always terminates at a local maximum, or at
worst, contains a subsequence which converges to a local maximum of the
similarity measure function. The framework is verified by extending it to
other GS-type algorithms and through experimental results.
|
1306.3003 | Non-parametric Power-law Data Clustering | cs.LG cs.CV stat.ML | It has always been a great challenge for clustering algorithms to
automatically determine the cluster numbers according to the distribution of
datasets. Several approaches have been proposed to address this issue,
including the recent promising work which incorporates Bayesian Nonparametrics
into the $k$-means clustering procedure. This approach shows simplicity in
implementation and solidity in theory, while also providing a feasible way to
perform inference on large-scale datasets. However, several problems remain
unsolved in this pioneering work, including applicability to power-law data, a
mechanism to merge centers to avoid over-fitting, the clustering order problem,
etc. To address these issues, the Pitman-Yor Process based k-means (namely
\emph{pyp-means}) is proposed in this paper. Taking advantage of the Pitman-Yor
Process, \emph{pyp-means} treats clusters differently by dynamically and
adaptively changing the threshold to guarantee the generation of power-law
clustering results. Also, a center agglomeration procedure is integrated into
the implementation to merge small but close clusters and then adaptively
determine the cluster number. With further discussion of the clustering
order, the convergence proof, complexity analysis and extension to spectral
clustering, our approach is compared with traditional clustering algorithms and
variational inference methods. The advantages and properties of pyp-means are
validated by experiments on both synthetic datasets and real world datasets.
|
1306.3007 | Robustness of cooperation on scale-free networks under continuous
topological change | physics.soc-ph cs.SI q-bio.PE | In this paper, we numerically investigate the robustness of cooperation
clusters in prisoner's dilemma played on scale-free networks, where the network
topologies change by continuous removal and addition of nodes. Each removal and
addition can be either random or intentional. We therefore have four different
strategies in changing network topology: random removal and random addition
(RR), random removal and preferential addition (RP), targeted removal and
random addition (TR), and targeted removal and preferential addition (TP). We
find that cooperation clusters are most fragile against TR, while they are most
robust against RP, even for large values of the temptation coefficient for
defection. The effect of the degree mixing pattern of the network is not the
primary factor for the robustness of cooperation under continuous change in
network topology, which is quite different from the cases observed in static
networks. Cooperation clusters become more robust as the number of hub links
occupied by cooperators increases. Our results suggest that a wide variety of
individuals is needed to maintain global cooperation in
social networks in the real world where each node representing an individual is
constantly removed and added.
|
1306.3011 | Proximity-Aware Calculation of Cable Series Impedance for Systems of
Solid and Hollow Conductors | cs.CE | Wide-band cable models for the prediction of electromagnetic transients in
power systems require the accurate calculation of the cable series impedance as
function of frequency. A surface current approach was recently proposed for
systems of round solid conductors, with inclusion of skin and proximity
effects. In this paper we extend the approach to include tubular conductors,
allowing us to model realistic cables with tubular sheaths, armors and pipes. We
also include the effect of a lossy ground. A noteworthy feature of the proposed
technique is the accurate prediction of proximity effects, which can be of
major importance in three-phase, pipe type, and closely-packed single-core
cables. The new approach is highly efficient compared to finite elements. In
the case of a cross-bonded cable system featuring three phase conductors and
three screens, the proposed technique computes the required 120 frequency
samples in only six seconds of CPU time.
|
1306.3018 | Second Order Swarm Intelligence | cs.NE | An artificial Ant Colony System (ACS) algorithm for solving general-purpose
Combinatorial Optimization Problems (COP), which extends previous AC models [21]
by the inclusion of a negative pheromone, is described here. Several Travelling
Salesman Problem (TSP) instances were used as benchmarks. We show that by using two
different sets of pheromones, a second-order co-evolved compromise between
positive and negative feedbacks achieves better results than single positive
feedback systems. The algorithm was tested against known NP-complete
combinatorial Optimization Problems, running on symmetrical TSPs. We show that
the new algorithm compares favourably against these benchmarks, in accordance
with recent biological findings by Robinson [26,27] and Gruter [28], where "No
entry" signals and negative feedback allow a colony to quickly reallocate the
majority of its foragers to superior food patches. This is the first time an
extended ACS algorithm is implemented with these successful characteristics.
|
1306.3032 | A Face-like Structure Detection on Planet and Satellite Surfaces using
Image Processing | cs.CV | This paper demonstrates that face-like structures are everywhere and can be
detected automatically by computers. A huge number of satellite images of
the Earth, the Moon and Mars are explored, and many interesting face-like
structures are detected. Through this fact, we believe that science and
technology can alert people not to easily become occultists.
|
1306.3036 | The Ripple Pond: Enabling Spiking Networks to See | cs.NE q-bio.NC | In this paper we present the biologically inspired Ripple Pond Network (RPN),
a simply connected spiking neural network that, operating together with
recently proposed PolyChronous Networks (PCN), enables rapid, unsupervised,
scale and rotation invariant object recognition using efficient spatio-temporal
spike coding. The RPN has been developed as a hardware solution linking
previously implemented neuromorphic vision and memory structures capable of
delivering end-to-end high-speed, low-power and low-resolution recognition for
mobile and autonomous applications where slow, highly sophisticated and power
hungry signal processing solutions are ineffective. Key aspects in the proposed
approach include utilising the spatial properties of physically embedded neural
networks and propagating waves of activity therein for information processing,
using dimensional collapse of imagery information into amenable temporal
patterns and the use of asynchronous frames for information binding.
|
1306.3058 | Physeter catodon localization by sparse coding | cs.LG cs.CE stat.ML | This paper presents a sperm whale localization architecture that jointly uses a
bag-of-features (BoF) approach and a machine learning framework. BoF methods are
known, especially in computer vision, to produce from a collection of local
features a global representation invariant to principal signal transformations.
Our idea is to regress, in a supervised manner, two rough estimates of the
distance and azimuth from these local features, using datasets where both the
acoustic events and the ground-truth positions are available. Furthermore, these
estimates can feed a particle filter system in order to obtain a precise
sperm whale position even in a mono-hydrophone configuration. Anti-collision
systems and whale watching are considered applications of this work.
|
1306.3084 | Segmentation et Interpr\'etation de Nuages de Points pour la
Mod\'elisation d'Environnements Urbains | cs.CV | In this article, we present a
method for the detection and classification of artifacts at street level, as a
filtering step prior to the modeling of urban environments. Our approach
exploits 3D information by using a range image, a projection of the 3D point
cloud onto an image plane where the pixel intensity is a function of the
measured distance between the 3D points and the plane. Assuming that the
artifacts are located on the ground, they are detected using a top-hat
transform of the hole-filling algorithm on range images. Several features are
then extracted from the detected connected components, and a stepwise forward
variable/model selection using Wilks' Lambda criterion is performed to retain
the most discriminative features. The connected components are then classified
into four categories (lampposts, pedestrians, cars and "others") using a
supervised machine learning method. The proposed method was tested on point
clouds of the city of Paris, showing good detection and classification results
on the whole dataset.
|
1306.3093 | Multi-user Scheduling Schemes for Simultaneous Wireless Information and
Power Transfer | cs.IT math.IT | In this paper, we study the downlink multi-user scheduling problem for a
time-slotted system with simultaneous wireless information and power transfer.
In particular, in each time slot, a single user is scheduled to receive
information, while the remaining users opportunistically harvest the ambient
radio frequency (RF) energy. We devise novel scheduling schemes in which the
tradeoff between the users' ergodic capacities and their average amount of
harvested energy can be controlled. To this end, we modify two fair scheduling
schemes used in information-only transfer systems. First, proportionally fair
maximum normalized signal-to-noise ratio (N-SNR) scheduling is modified by
scheduling the user having the jth ascendingly ordered (rather than the
maximum) N-SNR. We refer to this scheme as order-based N-SNR scheduling.
Second, conventional equal-throughput (ET) fair scheduling is modified by
scheduling the user having the minimum moving average throughput among the set
of users whose N-SNR orders fall into a certain set of allowed orders Sa
(rather than the set of all users). We refer to this scheme as order-based ET
scheduling. The feasibility conditions required for the users to achieve ET
with this scheme are also derived. We show that the smaller the selection order
j for the order-based N-SNR scheme, and the lower the orders in Sa for the
order-based ET scheme, the higher the average amount of energy harvested by the
users at the expense of a reduction in their ergodic capacities. We analyze the
performance of the considered scheduling schemes for independent and
non-identically distributed (i.n.d.) Ricean fading channels, and provide
closed-form results for the special case of i.n.d. Rayleigh fading.
|
1306.3108 | Guaranteed Classification via Regularized Similarity Learning | cs.LG | Learning an appropriate (dis)similarity function from the available data is a
central problem in machine learning, since the success of many machine learning
algorithms critically depends on the choice of a similarity function to compare
examples. Although many approaches for similarity metric learning have been
proposed, there is little theoretical study of the links between similarity
metric learning and the classification performance of the resulting classifier.
In this paper, we propose a regularized similarity learning formulation
associated with general matrix-norms, and establish their generalization
bounds. We show that the generalization error of the resulting linear separator
can be bounded by the derived generalization bound of similarity learning. This
shows that good generalization of the learnt similarity function guarantees
good classification by the resulting linear classifier. Our results extend
and improve those obtained by Bellet et al. [3]. Because the techniques there
depend on the notion of uniform stability [6], the bound obtained
holds true only for Frobenius matrix-norm regularization. Our techniques,
using the Rademacher complexity [5] and its related Khinchin-type inequality,
enable us to establish bounds for regularized similarity learning formulations
associated with general matrix-norms, including the sparse $L_1$-norm and mixed
$(2,1)$-norm.
|
1306.3111 | Kirkman Equiangular Tight Frames and Codes | cs.IT math.IT | An equiangular tight frame (ETF) is a set of unit vectors in a Euclidean
space whose coherence is as small as possible, equaling the Welch bound. Also
known as Welch-bound-equality sequences, such frames arise in various
applications, such as waveform design and compressed sensing. At the moment,
there are only two known flexible methods for constructing ETFs: harmonic ETFs
are formed by carefully extracting rows from a discrete Fourier transform;
Steiner ETFs arise from a tensor-like combination of a combinatorial design and
a regular simplex. These two classes seem very different: the vectors in
harmonic ETFs have constant amplitude, whereas Steiner ETFs are extremely
sparse. We show that they are actually intimately connected: a large class of
Steiner ETFs can be unitarily transformed into constant-amplitude frames,
dubbed Kirkman ETFs. Moreover, we show that an important class of harmonic ETFs
is a subset of an important class of Kirkman ETFs. This connection informs the
discussion of both types of frames: some Steiner ETFs can be transformed into
constant-amplitude waveforms making them more useful in waveform design; some
harmonic ETFs have low spark, making them less desirable for compressed
sensing. We conclude by showing that real-valued constant-amplitude ETFs are
equivalent to binary codes that achieve the Grey-Rankin bound, and then
construct such codes using Kirkman ETFs.
|
1306.3134 | Opinion dynamics and wisdom under out-group discrimination | cs.MA cs.SI nlin.AO | We study a DeGroot-like opinion dynamics model in which agents may oppose
other agents. As an underlying motivation, in our setup, agents want to adjust
their opinions to match those of the agents of their 'in-group' and, in
addition, they want to adjust their opinions to match the 'inverse' of those of
the agents of their 'out-group'. Our paradigm can account for persistent
disagreement in connected societies as well as bi- and multi-polarization.
Outcomes depend upon network structure and the choice of deviation function
modeling the mode of opposition between agents. For a particular choice of
deviation function, which we call soft opposition, we derive necessary and
sufficient conditions for long-run polarization. We also consider social
influence (who are the opinion leaders in the network?) as well as the question
of wisdom in our naive learning paradigm, finding that wisdom is difficult to
attain when there exist sufficiently strong negative relations between agents.
|
1306.3142 | On quantum Renyi entropies: a new generalization and some properties | quant-ph cs.IT math-ph math.IT math.MP | The Renyi entropies constitute a family of information measures that
generalizes the well-known Shannon entropy, inheriting many of its properties.
They appear in the form of unconditional and conditional entropies, relative
entropies or mutual information, and have found many applications in
information theory and beyond. Various generalizations of Renyi entropies to
the quantum setting have been proposed, most notably Petz's quasi-entropies and
Renner's conditional min-, max- and collision entropy. Here, we argue that
previous quantum extensions are incompatible and thus unsatisfactory.
We propose a new quantum generalization of the family of Renyi entropies that
contains the von Neumann entropy, min-entropy, collision entropy and the
max-entropy as special cases, thus encompassing most quantum entropies in use
today. We show several natural properties for this definition, including
data-processing inequalities, a duality relation, and an entropic uncertainty
relation.
|
1306.3161 | Learning Using Privileged Information: SVM+ and Weighted SVM | stat.ML cs.LG | Prior knowledge can be used to improve predictive performance of learning
algorithms or reduce the amount of data required for training. The same goal is
pursued within the learning using privileged information paradigm which was
recently introduced by Vapnik et al. and is aimed at utilizing additional
information available only at training time -- a framework implemented by SVM+.
We relate the privileged information to importance weighting and show that the
prior knowledge expressible with privileged features can also be encoded by
weights associated with every training example. We show that a weighted SVM can
always replicate an SVM+ solution, while the converse is not true and we
construct a counterexample highlighting the limitations of SVM+. Finally, we
touch on the problem of choosing weights for weighted SVMs when privileged
features are not available.
|
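The abstract's claim that privileged information can be encoded as per-example weights can be illustrated with a weighted linear SVM. The following is a minimal stdlib-only sketch (not the paper's SVM+ construction): subgradient descent on the weighted hinge loss $\frac{\lambda}{2}\|w\|^2 + \sum_i c_i \max(0, 1 - y_i(w \cdot x_i + b))$, where the weights $c_i$ carry the per-example importance.

```python
def train_weighted_svm(X, y, weights, lam=0.01, lr=0.1, epochs=200):
    """Weighted linear SVM via subgradient descent on
    lam/2 * ||w||^2 + sum_i c_i * max(0, 1 - y_i * (w . x_i + b))."""
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [lam * wj for wj in w]  # gradient of the regularizer
        gb = 0.0
        for xi, yi, ci in zip(X, y, weights):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # hinge active: subgradient is -c_i * y_i * x_i
                for j in range(d):
                    gw[j] -= ci * yi * xi[j]
                gb -= ci * yi
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

Setting all $c_i = 1$ recovers the ordinary SVM; upweighting examples that the privileged features mark as reliable is the encoding the abstract refers to.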
1306.3162 | Learning to encode motion using spatio-temporal synchrony | cs.CV cs.LG stat.ML | We consider the task of learning to extract motion from videos. To this end,
we show that the detection of spatial transformations can be viewed as the
detection of synchrony between the image sequence and a sequence of features
undergoing the motion we wish to detect. We show that learning about synchrony
is possible using very fast, local learning rules, by introducing
multiplicative "gating" interactions between hidden units across frames. This
makes it possible to achieve competitive performance in a wide variety of
motion estimation tasks, using a small fraction of the time required to learn
features, and to outperform hand-crafted spatio-temporal features by a large
margin. We also show how learning about synchrony can be viewed as performing
greedy parameter estimation in the well-known motion energy model.
|
1306.3171 | Confidence Intervals and Hypothesis Testing for High-Dimensional
Regression | stat.ME cs.IT cs.LG math.IT | Fitting high-dimensional statistical models often requires the use of
non-linear parameter estimation procedures. As a consequence, it is generally
impossible to obtain an exact characterization of the probability distribution
of the parameter estimates. This in turn implies that it is extremely
challenging to quantify the \emph{uncertainty} associated with a certain
parameter estimate. Concretely, no commonly accepted procedure exists for
computing classical measures of uncertainty and statistical significance, such
as confidence intervals or $p$-values, for these models.
We consider here the high-dimensional linear regression problem and propose an
efficient algorithm for constructing confidence intervals and $p$-values. The
resulting confidence intervals have nearly optimal size. When testing for the
null hypothesis that a certain parameter is vanishing, our method has nearly
optimal power.
Our approach is based on constructing a `de-biased' version of regularized
M-estimators. The new construction improves over recent work in the field in
that it does not assume a special structure on the design matrix. We test our
method on synthetic data and a high-throughput genomic data set about
riboflavin production rate.
|
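The `de-biased' construction can be sketched as follows (the notation here is ours, not taken verbatim from the paper). Given a regularized estimator $\hat\theta$ (e.g. the Lasso) for the model $y = X\theta_0 + w$ with $n$ samples, one forms

```latex
\hat{\theta}^{\mathrm{d}}
  = \hat{\theta} + \frac{1}{n}\, M X^{\mathsf{T}}\big(y - X\hat{\theta}\big),
```

where $M$ is an approximate inverse of the sample covariance $\hat\Sigma = X^{\mathsf{T}}X/n$. The correction term approximately removes the bias introduced by regularization, making $\hat\theta^{\mathrm{d}}_i$ asymptotically Gaussian, so a confidence interval takes the familiar form $\hat\theta^{\mathrm{d}}_i \pm z_{1-\alpha/2}\,\hat\sigma\sqrt{[M\hat\Sigma M^{\mathsf{T}}]_{ii}/n}$.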
1306.3172 | Adapting sample size in particle filters through KLD-resampling | stat.AP cs.RO | This letter provides an adaptive resampling method. It determines the number
of particles to resample so that the Kullback-Leibler distance (KLD) between
the distribution of particles before and after resampling does not
exceed a pre-specified error bound. The basis of the method is the same as
Fox's KLD-sampling but implemented differently. The KLD-sampling assumes that
samples are coming from the true posterior distribution and ignores any
mismatch between the true and the proposal distribution. In contrast, we
incorporate the KLD measure into the resampling in which the distribution of
interest is just the posterior distribution. That is to say, for sample size
adjustment, it is more theoretically rigorous and practically flexible to
measure the fit of the distribution represented by weighted particles based on
KLD during resampling than in sampling. Simulations of target tracking
demonstrate the efficiency of our method.
|
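The letter adapts Fox's KLD-sampling bound to the resampling step. As a sketch of that bound (using the usual Wilson-Hilferty chi-square approximation; the exact adaptation in the letter may differ), the number of particles needed so that the KL distance between the particle-based MLE over $k$ occupied histogram bins and the true distribution stays below $\epsilon$ with the desired confidence is:

```python
import math

def kld_sample_size(k, epsilon, z):
    """Sample size from Fox's KLD-sampling bound.

    k       -- number of histogram bins with support
    epsilon -- KLD error bound
    z       -- standard-normal quantile for the confidence level (e.g. 1.96)
    """
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))  # Wilson-Hilferty chi-square approximation
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z) ** 3))
```

The required size grows roughly linearly in the number of occupied bins and inversely in the error bound, which is what lets the filter shrink the particle set when the posterior is concentrated.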
1306.3203 | Bregman Alternating Direction Method of Multipliers | math.OC cs.LG stat.ML | The mirror descent algorithm (MDA) generalizes gradient descent by using a
Bregman divergence to replace squared Euclidean distance. In this paper, we
similarly generalize the alternating direction method of multipliers (ADMM) to
Bregman ADMM (BADMM), which allows the choice of different Bregman divergences
to exploit the structure of problems. BADMM provides a unified framework for
ADMM and its variants, including generalized ADMM, inexact ADMM and Bethe ADMM.
We establish the global convergence and the $O(1/T)$ iteration complexity for
BADMM. In some cases, BADMM can be faster than ADMM by a factor of
$O(n/\log(n))$. In solving the linear program of the mass transportation
problem, BADMM exposes massive parallelism and can easily run on a GPU, where
it is several times faster than the highly optimized commercial solver Gurobi.
|
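For the problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, the generalization can be sketched as replacing the quadratic penalty in the ADMM updates with a Bregman divergence $B_\phi$ (a sketch of the idea only; the paper's updates also admit extra proximal terms):

```latex
\begin{align*}
x_{t+1} &= \operatorname*{arg\,min}_{x}\;
    f(x) + \langle y_t,\, Ax + Bz_t - c\rangle
         + \rho\, B_{\phi}\!\left(c - Bz_t,\, Ax\right),\\
z_{t+1} &= \operatorname*{arg\,min}_{z}\;
    g(z) + \langle y_t,\, Ax_{t+1} + Bz - c\rangle
         + \rho\, B_{\phi}\!\left(Ax_{t+1},\, c - Bz\right),\\
y_{t+1} &= y_t + \rho\,\big(Ax_{t+1} + Bz_{t+1} - c\big).
\end{align*}
```

With $\phi(u) = \frac{1}{2}\|u\|_2^2$ one has $B_\phi(u,v) = \frac{1}{2}\|u-v\|_2^2$, recovering standard ADMM; other choices of $\phi$ (e.g. negative entropy) adapt the updates to the geometry of the constraint set.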
1306.3212 | Sparse Inverse Covariance Matrix Estimation Using Quadratic
Approximation | cs.LG stat.ML | The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown
to have strong statistical guarantees in recovering a sparse inverse covariance
matrix, or alternatively the underlying graph structure of a Gaussian Markov
Random Field, from very limited samples. We propose a novel algorithm for
solving the resulting optimization problem which is a regularized
log-determinant program. In contrast to recent state-of-the-art methods that
largely use first order gradient information, our algorithm is based on
Newton's method and employs a quadratic approximation, but with some
modifications that leverage the structure of the sparse Gaussian MLE problem.
We show that our method is superlinearly convergent, and present experimental
results using synthetic and real-world application data that demonstrate the
considerable improvements in performance of our method when compared to other
state-of-the-art methods.
|
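The optimization problem and the quadratic approximation can be sketched as follows (the notation is ours, not the paper's):

```latex
\hat{\Theta} = \operatorname*{arg\,min}_{\Theta \succ 0}\;
    -\log\det\Theta + \operatorname{tr}(S\Theta) + \lambda\,\|\Theta\|_{1}
% With W = \Theta^{-1}, the second-order model of the smooth part is
\bar{g}_{\Theta}(\Delta) = \operatorname{tr}\big((S - W)\Delta\big)
    + \tfrac{1}{2}\operatorname{tr}\!\big(W \Delta\, W \Delta\big)
% and the Newton direction solves the Lasso-like subproblem
D = \operatorname*{arg\,min}_{\Delta}\;
    \bar{g}_{\Theta}(\Delta) + \lambda\,\|\Theta + \Delta\|_{1}.
```

Here $S$ is the sample covariance. The subproblem for $D$ is itself a Lasso-type program, amenable to coordinate descent that exploits the sparsity of the iterates, and a line search maintains positive definiteness of $\Theta$.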
1306.3252 | Global Stabilization of Nonlinear Delay Systems With a Compact Absorbing
Set | math.OC cs.SY | Predictor-based stabilization results are provided for nonlinear systems with
input delays and a compact absorbing set. The control scheme consists of an
inter-sample predictor, a global observer, an approximate predictor, and a
nominal controller for the delay-free case. The control scheme is applicable
even to the case where the measurement is sampled and possibly delayed. The
closed-loop system is shown to have the properties of global asymptotic
stability and exponential convergence in the disturbance-free case, robustness
with respect to perturbations of the sampling schedule, and robustness with
respect to measurement errors. In contrast to existing predictor feedback laws,
the proposed control scheme utilizes an approximate predictor of a dynamic type
which is expressed by a system described by Integral Delay Equations.
Additional results are provided for systems that can be transformed to systems
with a compact absorbing set by means of a preliminary predictor feedback.
|
1306.3284 | All-Distances Sketches, Revisited: HIP Estimators for Massive Graphs
Analysis | cs.DS cs.SI | Graph datasets with billions of edges, such as social and Web graphs, are
prevalent, and scalable computation is critical. All-distances sketches (ADS)
[Cohen 1997], are a powerful tool for scalable approximation of statistics.
The sketch is a small size sample of the distance relation of a node which
emphasizes closer nodes. Sketches for all nodes are computed using a nearly
linear computation and estimators are applied to sketches of nodes to estimate
their properties.
We provide, for the first time, a unified exposition of ADS algorithms and
applications. We present the Historic Inverse Probability (HIP) estimators
which are applied to the ADS of a node to estimate a large natural class of
statistics. For the important special cases of neighborhood cardinalities (the
number of nodes within some query distance) and closeness centralities, HIP
estimators have at most half the variance of previous estimators and we show
that this is essentially optimal. Moreover, HIP obtains a polynomial
improvement for more general statistics and the estimators are simple,
flexible, unbiased, and elegant.
For approximate distinct counting on data streams, HIP outperforms the
original estimators for the HyperLogLog MinHash sketches (Flajolet et al.
2007), obtaining significantly improved estimation quality for this
state-of-the-art practical algorithm.
|
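For intuition, the classical bottom-$k$ (MinHash) cardinality estimate that HIP improves upon can be sketched in a few lines of stdlib Python; this is not the HIP estimator itself, just the baseline idea of estimating distinct counts from the $k$ smallest hash values:

```python
import hashlib

def bottom_k_estimate(items, k=64):
    """Estimate the number of distinct items with a bottom-k sketch:
    hash each item to [0, 1), keep the k smallest distinct hash values,
    and return (k - 1) / v_k, where v_k is the k-th smallest value."""
    hashes = sorted({int(hashlib.md5(s.encode()).hexdigest(), 16) / 2**128
                     for s in items})
    if len(hashes) < k:
        return float(len(hashes))  # fewer than k distinct items: exact count
    return (k - 1) / hashes[k - 1]
```

The estimate is unbiased with relative standard error roughly $1/\sqrt{k-2}$; HIP-style estimators reduce this variance further by accounting for the inclusion probability of each element at the moment it entered the sketch.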
1306.3293 | Quantifying Long-Term Scientific Impact | cs.DL cs.SI physics.soc-ph | The lack of predictability of citation-based measures frequently used to
gauge impact, from impact factors to short-term citations, raises a fundamental
question: Is there long-term predictability in citation patterns? Here, we
derive a mechanistic model for the citation dynamics of individual papers,
allowing us to collapse the citation histories of papers from different
journals and disciplines into a single curve, indicating that all papers tend
to follow the same universal temporal pattern. The observed patterns not only
help us uncover basic mechanisms that govern scientific impact but also offer
reliable measures of influence that may have potential policy implications.
|
1306.3294 | Feature Learning by Multidimensional Scaling and its Applications in
Object Recognition | cs.CV | We present the MDS feature learning framework, in which multidimensional
scaling (MDS) is applied on high-level pairwise image distances to learn
fixed-length vector representations of images. The aspects of the images that
are captured by the learned features, which we call MDS features, completely
depend on what kind of image distance measurement is employed. With properly
selected semantics-sensitive image distances, the MDS features provide rich
semantic information about the images that is not captured by other feature
extraction techniques. In our work, we introduce the iterated
Levenberg-Marquardt algorithm for solving MDS, and study the MDS feature
learning with IMage Euclidean Distance (IMED) and Spatial Pyramid Matching
(SPM) distance. We present experiments on both synthetic data and real images
--- the publicly accessible UIUC car image dataset. The MDS features based on
SPM distance achieve exceptional performance for the car recognition task.
|
1306.3297 | Matching objects across the textured-smooth continuum | cs.CV | The problem of 3D object recognition is of immense practical importance, with
the last decade witnessing a number of breakthroughs in the state of the art.
Most of the previous work has focused on the matching of textured objects using
local appearance descriptors extracted around salient image points. The
recently proposed bag of boundaries method was the first to address directly
the problem of matching smooth objects using boundary features. However, no
previous work has attempted to achieve a holistic treatment of the problem by
jointly using textural and shape features, which is what we describe herein. Due
to the complementarity of the two modalities, we fuse the corresponding
matching scores and learn their relative weighting in a data specific manner by
optimizing discriminative performance on synthetically distorted data. For the
textural description of an object we adopt a representation in the form of a
histogram of SIFT based visual words. Similarly the apparent shape of an object
is represented by a histogram of discretized features capturing local shape. On
a large public database of a diverse set of objects, the proposed method is
shown to outperform significantly both purely textural and purely shape based
approaches for matching across viewpoint variation.
|
1306.3309 | Symmetries in LDDMM with higher order momentum distributions | math.DS cs.CV | In some implementations of the Large Deformation Diffeomorphic Metric Mapping
formulation for image registration we consider the motion of particles which
locally translate image data. We then lift the motion of the particles to
obtain a motion on the entire image. However, it is certainly possible to
consider particles which do more than translate, and this is what will be
described in this paper. As the unreduced Lagrangian associated to EPDiff
possesses $\Diff(M)$ symmetry, it must also exhibit $G \subset \Diff(M)$
symmetry, for any Lie subgroup. In this paper we will describe a tower of Lie
groups $G^{(0)} \subseteq G^{(1)} \subseteq G^{(2)} \subseteq...$ which
correspond to preserving $k$-th order jet-data. The reduced configuration
spaces $Q^{(k)} := \Diff(M) / G^{(k)}$ will be finite-dimensional (in
particular, $Q^{(0)}$ is the configuration manifold for $N$ particles in $M$).
We will observe that $G^{(k)}$ is a normal subgroup of $G^{(0)}$ and so the
quotient $G^{(0)} / G^{(k)}$ is itself a (finite dimensional) Lie group which
acts on $Q^{(k)}$. This makes $Q^{(k)}$ a principal bundle over $Q^{(0)}$, and
the reduced geodesic equations on $Q^{(k)}$ will possess $G^{(0)} /
G^{(k)}$-symmetry. Noether's theorem implies the existence of conserved momenta
for the reduced system on $T^{\ast}Q^{(k)}$.
|
1306.3317 | Sparse Auto-Regressive: Robust Estimation of AR Parameters | cs.AI | In this paper I present a new approach for regression of time series using
their own samples. This is a celebrated problem known as Auto-Regression.
Outliers or missing samples in a time series make the estimation problem
difficult, so the estimator should be robust against them. Moreover, for coding
purposes I will show that it is desirable for the residual of the
auto-regression to be sparse. To these ends, I first assume a multivariate
Gaussian prior on the residual and then derive the estimator. Two simple
simulations are presented, on spectrum estimation and speech coding.
|
1306.3331 | Sparse Recovery of Streaming Signals Using L1-Homotopy | cs.IT math.IT math.OC stat.ML | Most of the existing methods for sparse signal recovery assume a static
system: the unknown signal is a finite-length vector for which a fixed set of
linear measurements and a sparse representation basis are available and an
L1-norm minimization program is solved for the reconstruction. However, the
same representation and reconstruction framework is not readily applicable in a
streaming system: the unknown signal changes over time, and it is measured and
reconstructed sequentially over small time intervals.
In this paper, we discuss two such streaming systems and a homotopy-based
algorithm for quickly solving the associated L1-norm minimization programs: 1)
Recovery of a smooth, time-varying signal for which, instead of using block
transforms, we use lapped orthogonal transforms for sparse representation. 2)
Recovery of a sparse, time-varying signal that follows a linear dynamic model.
For both the systems, we iteratively process measurements over a sliding
interval and estimate sparse coefficients by solving a weighted L1-norm
minimization program. Instead of solving a new L1 program from scratch at every
iteration, we use an available signal estimate as a starting point in a
homotopy formulation. Starting with a warm-start vector, our homotopy algorithm
updates the solution in a small number of computationally inexpensive steps as
the system changes. The homotopy algorithm presented in this paper is highly
versatile as it can update the solution for the L1 problem in a number of
dynamical settings. We demonstrate with numerical experiments that our proposed
streaming recovery framework outperforms the methods that represent and
reconstruct a signal as independent, disjoint blocks, in terms of quality of
reconstruction, and that our proposed homotopy-based updating scheme
outperforms current state-of-the-art solvers in terms of the computation time
and complexity.
|
1306.3343 | Relaxed Sparse Eigenvalue Conditions for Sparse Estimation via
Non-convex Regularized Regression | cs.LG cs.NA stat.ML | Non-convex regularizers usually improve the performance of sparse estimation
in practice. To prove this fact, we study the conditions of sparse estimations
for the sharp concave regularizers which are a general family of non-convex
regularizers including many existing regularizers. For the global solutions of
the regularized regression, our sparse eigenvalue based conditions are weaker
than those of L1-regularization for parameter estimation and sparseness
estimation. For approximate global and approximate stationary (AGAS)
solutions, almost the same conditions suffice. We show that the desired
AGAS solutions can be obtained by coordinate descent (CD) based methods.
Finally, we perform experiments to demonstrate the performance of CD methods in
producing AGAS solutions and how weak the estimation conditions required by
sharp concave regularizers can be.
|