| id | title | categories | abstract |
|---|---|---|---|
1312.7602 | A Martingale Approach and Time-Consistent Sampling-based Algorithms for
Risk Management in Stochastic Optimal Control | cs.SY cs.RO math.DS math.PR | In this paper, we consider a class of stochastic optimal control problems
with risk constraints that are expressed as bounded probabilities of failure
for particular initial states. We present here a martingale approach that
diffuses a risk constraint into a martingale to construct time-consistent
control policies. The martingale stands for the level of risk tolerance over
time. By augmenting the system dynamics with the controlled martingale, the
original risk-constrained problem is transformed into a stochastic target
problem. We extend the incremental Markov Decision Process (iMDP) algorithm to
approximate arbitrarily well an optimal feedback policy of the original problem
by sampling in the augmented state space and computing proper boundary
conditions for the reformulated problem. We show that the algorithm is both
probabilistically sound and asymptotically optimal. The performance of the
proposed algorithm is demonstrated on motion planning and control problems
subject to bounded probability of collision in uncertain cluttered
environments.
|
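The abstract's key construction can be sketched in two lines; the notation here is illustrative (a standard stochastic-target reformulation), not taken from the paper:

```latex
% Failure-probability constraint at initial state x_0, with failure set \Gamma:
\mathbb{P}\bigl(x_T \in \Gamma \mid x_0\bigr) \;\le\; \Delta
\quad\Longleftrightarrow\quad
\exists\,(q_t)_{t\le T}\ \text{martingale}:\quad
q_0 = \Delta,\qquad q_T \;\ge\; \mathbf{1}\{x_T \in \Gamma\}.
```

The controlled martingale $q_t$ plays the role of the remaining risk tolerance; augmenting the state to $(x_t, q_t)$ turns the chance constraint into a terminal target condition on the augmented dynamics, which is the stochastic target problem that the iMDP extension then samples.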
1312.7606 | Distributed Policy Evaluation Under Multiple Behavior Strategies | cs.MA cs.AI cs.DC cs.LG | We apply diffusion strategies to develop a fully-distributed cooperative
reinforcement learning algorithm in which agents in a network communicate only
with their immediate neighbors to improve predictions about their environment.
The algorithm can also be applied to off-policy learning, meaning that the
agents can predict the response to a behavior different from the actual
policies they are following. The proposed distributed strategy is efficient,
with linear complexity in both computation time and memory footprint. We
provide a mean-square-error performance analysis and establish convergence
under constant step-size updates, which endow the network with continuous
learning capabilities. The results show a clear gain from cooperation: when the
individual agents can estimate the solution, cooperation increases stability
and reduces bias and variance of the prediction error; but, more importantly,
the network is able to approach the optimal solution even when none of the
individual agents can (e.g., when the individual behavior policies restrict
each agent to sample a small portion of the state space).
|
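A toy illustration of the diffusion (combine) step underlying such strategies: each agent averages estimates over its immediate neighborhood only, yet the whole network converges to the global average. This is a deliberately minimal sketch (plain consensus averaging, without the local TD/gradient adaptation step the paper combines it with); all names and weights are illustrative:

```python
def diffusion_combine(estimates, neighbors, weights):
    """One combine step: each agent averages the estimates of its own
    neighborhood (itself included) with fixed combination weights."""
    return [sum(w * estimates[j] for j, w in zip(neighbors[i], weights[i]))
            for i in range(len(estimates))]

n = 5  # agents on a ring; each talks only to its two immediate neighbors
neighbors = [[(i - 1) % n, i, (i + 1) % n] for i in range(n)]
weights = [[1 / 3] * 3 for _ in range(n)]  # doubly stochastic -> preserves the mean

x = [0.0, 1.0, 2.0, 3.0, 4.0]  # initial local estimates
for _ in range(200):
    x = diffusion_combine(x, neighbors, weights)
print(x)  # every entry is (numerically) the network average 2.0
```

No single agent ever sees more than two neighbors, yet all five estimates coincide with the network-wide average, mirroring the abstract's point that the network can approach a solution no individual agent could reach alone.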
1312.7630 | Interactive Sensing in Social Networks | cs.SI math.OC physics.soc-ph | This paper presents models and algorithms for interactive sensing in social
networks where individuals act as sensors and the information exchange between
individuals is exploited to optimize sensing. Social learning is used to model
the interaction between individuals that aim to estimate an underlying state of
nature. In this context the following questions are addressed: How can
self-interested agents that interact via social learning achieve a tradeoff
between individual privacy and reputation of the social group? How can
protocols be designed to prevent data incest in online reputation blogs where
individuals make recommendations? How can sensing by individuals that interact
with each other be used by a global decision maker to detect changes in the
underlying state of nature? When individual agents possess limited sensing,
computation and communication capabilities, can a network of agents achieve
sophisticated global behavior? Social and game theoretic learning are natural
settings for addressing these questions. This article presents an overview,
insights and discussion of social learning models in the context of data incest
propagation, change detection and coordination of decision making.
|
1312.7642 | On simultaneous min-entropy smoothing | quant-ph cs.IT math.IT | In the context of network information theory, one often needs a multiparty
probability distribution to be typical in several ways simultaneously. When
considering quantum states instead of classical ones, it is in general
difficult to prove the existence of a state that is jointly typical. Such a
difficulty was recently emphasized and conjectures on the existence of such
states were formulated. In this paper, we consider a one-shot multiparty
typicality conjecture. The question can then be stated easily: is it possible
to smooth the largest eigenvalues of all the marginals of a multipartite state
$\rho$ simultaneously while staying close to $\rho$? We prove that the answer is yes
whenever the marginals of the state commute. In the general quantum case, we
prove that simultaneous smoothing is possible if the number of parties is two,
or, more generally, if the marginals to be optimized satisfy a certain non-overlap
property.
|
1312.7646 | Short random circuits define good quantum error correcting codes | quant-ph cs.IT math.IT | We study the encoding complexity for quantum error correcting codes with
large rate and distance. We prove that random Clifford circuits with $O(n
\log^2 n)$ gates can be used to encode $k$ qubits in $n$ qubits with a distance
$d$ provided $\frac{k}{n} < 1 - \frac{d}{n} \log_2 3 - h(\frac{d}{n})$. In
addition, we prove that such circuits typically have a depth of $O( \log^3 n)$.
|
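The achievability condition quoted above is easy to evaluate numerically; a minimal sketch (function names are mine, not the paper's):

```python
import math

def h(x):
    """Binary entropy h(x) = -x*log2(x) - (1-x)*log2(1-x), with h(0) = h(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def max_rate(delta):
    """Right-hand side of the achievability condition
    k/n < 1 - (d/n)*log2(3) - h(d/n), with delta = d/n."""
    return 1.0 - delta * math.log2(3.0) - h(delta)

print(round(max_rate(0.01), 4))  # ~0.9034: rate stays near 1 at small relative distance
```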
1312.7650 | On the Minimum Decoding Delay of Balanced Complex Orthogonal Design | cs.IT math.IT | A complex orthogonal design (COD) with parameters $[p, n, k]$ is a combinatorial
design used in space-time block codes (STBCs). For an STBC, $n$ is the number of
antennas, $k/p$ is the rate, and $p$ is the decoding delay. A class of rate
$1/2$ COD called balanced complex orthogonal design (BCOD) has been proposed by
Adams et al., and they constructed BCODs with rate $k/p = 1/2$ and decoding
delay $p = 2^m$ for $n=2m$. Furthermore, they proved that these constructions have
optimal decoding delay when $m$ is congruent to $1$, $2$, or $3$ modulo $4$.
They conjectured that for the case $m \equiv 0 \pmod 4$, $2^m$ is also a lower
bound on $p$. In this paper, we prove this conjecture.
|
1312.7651 | Petuum: A New Platform for Distributed Machine Learning on Big Data | stat.ML cs.LG cs.SY | What is a systematic way to efficiently apply a wide spectrum of advanced ML
programs to industrial scale problems, using Big Models (up to 100s of billions
of parameters) on Big Data (up to terabytes or petabytes)? Modern
parallelization strategies employ fine-grained operations and scheduling beyond
the classic bulk-synchronous processing paradigm popularized by MapReduce, or
even specialized graph-based execution that relies on graph representations of
ML programs. The variety of approaches tends to pull systems and algorithms
design in different directions, and it remains difficult to find a universal
platform applicable to a wide range of ML programs at scale. We propose a
general-purpose framework that systematically addresses data- and
model-parallel challenges in large-scale ML, by observing that many ML programs
are fundamentally optimization-centric and admit error-tolerant,
iterative-convergent algorithmic solutions. This presents unique opportunities
for an integrative system design, such as bounded-error network synchronization
and dynamic scheduling based on ML program structure. We demonstrate the
efficacy of these system designs versus well-known implementations of modern ML
algorithms, allowing ML programs to run in much less time and at considerably
larger model sizes, even on modestly-sized compute clusters.
|
1312.7658 | Response-Based Approachability and its Application to Generalized
No-Regret Algorithms | cs.LG cs.GT | Approachability theory, introduced by Blackwell (1956), provides fundamental
results on repeated games with vector-valued payoffs, and has since been usefully
applied in the theory of learning in games and to learning algorithms in
the online adversarial setup. Given a repeated game with vector payoffs, a
target set $S$ is approachable by a certain player (the agent) if he can ensure
that the average payoff vector converges to that set no matter what his
adversary opponent does. Blackwell provided two equivalent sets of conditions
for a convex set to be approachable. The first (primal) condition is a
geometric separation condition, while the second (dual) condition requires that
the set be {\em non-excludable}, namely that for every mixed action of the
opponent there exists a mixed action of the agent (a {\em response}) such that
the resulting payoff vector belongs to $S$. Existing approachability algorithms
rely on the primal condition and essentially require computing at each stage a
projection direction from a given point to $S$. In this paper, we introduce an
approachability algorithm that relies on Blackwell's {\em dual} condition.
Thus, rather than projection, the algorithm relies on computation of the
response to a certain action of the opponent at each stage. The utility of the
proposed algorithm is demonstrated by applying it to certain generalizations of
the classical regret minimization problem, which include regret minimization
with side constraints and regret minimization for global cost functions. In
these problems, computation of the required projections is generally complex
but a response is readily obtainable.
|
1312.7685 | Fully distributed optimal channel assignment for open spectrum access | cs.DC cs.IT cs.NI math.IT math.OC | In this paper we address the problem of fully distributed assignment of users
to sub-bands such that the sum-rate of the system is maximized. We introduce a
modified auction algorithm that can be applied in a fully distributed way using
an opportunistic CSMA assignment scheme and is $\epsilon$-optimal. We analyze
the expected time complexity of the algorithm and suggest a variant to the
algorithm that has lower expected complexity. We then show that in the case of
i.i.d. Rayleigh channels a simple greedy scheme is asymptotically optimal as
the SNR increases or as the number of users grows to infinity. We
conclude by providing simulation results for the suggested algorithms.
|
1312.7695 | A discretization-free sparse and parametric approach for linear array
signal processing | cs.IT math.IT | This paper concerns direction of arrival (DOA) estimation in array
processing using uniform/sparse linear arrays. While sparse methods
via approximate parameter discretization have been popular in the past decade,
the discretization may cause problems, e.g., modeling error and increased
computations due to dense sampling. In this paper, an exact discretization-free
method, named the sparse and parametric approach (SPA), is proposed for uniform
and sparse linear arrays. SPA carries out parameter estimation in the
continuous range based on well-established covariance fitting criteria and
convex optimization. It is guaranteed to produce a sparse parameter estimate
without the discretization required by existing sparse methods. Theoretical
analysis shows that the SPA parameter estimator is a large-snapshot realization
of the maximum likelihood estimator and is statistically consistent (in the
number of snapshots) under uncorrelated sources. Other merits of SPA include
improved resolution, applicability to an arbitrary number of snapshots, robustness
to correlation of the sources, and no user parameters to tune. Numerical
simulations are carried out to verify our analysis and demonstrate advantages
of SPA compared to existing methods.
|
1312.7710 | Total variation regularization for manifold-valued data | math.OC cs.CV physics.med-ph | We consider total variation minimization for manifold-valued data. We propose
a cyclic proximal point algorithm and a parallel proximal point algorithm to
minimize TV functionals with $\ell^p$-type data terms in the manifold case.
These algorithms are based on iterative geodesic averaging which makes them
easily applicable to a large class of data manifolds. As an application, we
consider denoising images which take their values in a manifold. We apply our
algorithms to diffusion tensor images, interferometric SAR images, as well as
sphere- and cylinder-valued images. For the class of Cartan-Hadamard manifolds
(which includes the data space in diffusion tensor imaging) we show the
convergence of the proposed TV minimizing algorithms to a global minimizer.
|
1312.7715 | Constrained Parametric Proposals and Pooling Methods for Semantic
Segmentation in RGB-D Images | cs.CV | We focus on the problem of semantic segmentation based on RGB-D data, with
emphasis on analyzing cluttered indoor scenes containing many instances from
many visual categories. Our approach is based on a parametric figure-ground
intensity and depth-constrained proposal process that generates spatial layout
hypotheses at multiple locations and scales in the image followed by a
sequential inference algorithm that integrates the proposals into a complete
scene estimate. Our contributions can be summarized as proposing the following:
(1) a generalization of parametric max flow figure-ground proposal methodology
to take advantage of intensity and depth information, in order to
systematically and efficiently generate the breakpoints of an underlying
spatial model in polynomial time, (2) new region description methods based on
second-order pooling over multiple features constructed using both intensity
and depth channels, (3) an inference procedure that can resolve conflicts in
overlapping spatial partitions and handle scenes with a large number of
object category instances of very different scales, (4) extensive evaluation
of the impact of depth, as well as the effectiveness of a large number of
descriptors, both pre-designed and automatically obtained using deep learning,
in a difficult RGB-D semantic segmentation problem with 92 classes. We report
state-of-the-art results on the challenging NYU Depth v2 dataset, extended for the
RMRC 2013 Indoor Segmentation Challenge, where the proposed model currently
ranks first, with an average score of 24.61%, winning 39 of the classes.
Moreover, we show that by combining second-order and deep learning features,
over 15% relative accuracy improvements can be additionally achieved. In a
scene classification benchmark, our methodology further improves the state of
the art by 24%.
|
1312.7724 | The H2 Control Problem for Quadratically Invariant Systems with Delays | cs.SY math.OC | This paper gives a new solution to the output feedback H2 problem for
quadratically invariant communication delay patterns. A characterization of all
stabilizing controllers satisfying the delay constraints is given and the
decentralized H2 problem is cast as a convex model matching problem. The main
result shows that the model matching problem can be reduced to a
finite-dimensional quadratic program. A recursive state-space method for
computing the optimal controller based on vectorization is given.
|
1312.7740 | Assessment of Customer Credit through Combined Clustering of Artificial
Neural Networks, Genetics Algorithm and Bayesian Probabilities | cs.AI | Today, given the growing demand for credit from the
customers of banks and finance and credit institutions, an effective and
efficient method to decrease the risk of non-repayment of credit is
essential. Assessment of customers' credit is one of the most important and
essential duties of banks and institutions, and an error in this
field can lead to great losses for banks and institutions. Thus, the use of
predictive computer systems has progressed significantly in
recent decades. The data provided to credit institutions' managers
help them make a sound decision on whether or not to grant credit.
In this paper, we assess customer credit through a combined
classification using artificial neural networks, a genetic algorithm and
Bayesian probabilities simultaneously; the results obtained from the three
methods mentioned above are combined to achieve an appropriate final
result. We use the K-fold cross-validation test to assess the method
and, finally, we compare the proposed method with methods such as
Clustering-Launched Classification (CLC) and Support Vector Machine (SVM), as well
as GA+SVM, where a genetic algorithm has been used to improve them.
|
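The K-fold cross-validation the abstract relies on reduces to a simple index partition; a minimal sketch (function name and seed are illustrative):

```python
import random

def k_fold_splits(n_samples, k, seed=0):
    """Shuffle indices and partition them into k near-equal folds; each fold
    serves once as the held-out test set, the rest as training data."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]

splits = k_fold_splits(100, 10)
print(len(splits), len(splits[0][1]))  # 10 splits, each holding out 10 samples
```

Each classifier in the ensemble would be trained on the 90-sample portion and scored on the held-out 10, with the k scores averaged.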
1312.7742 | Information Spreading on Almost Torus Networks | cs.SI physics.soc-ph | Epidemic modeling has been used extensively in recent years in the field of
telecommunications and computer networks. We consider the popular
Susceptible-Infected-Susceptible spreading model as the metric for information
spreading. In this work, we analyze information spreading on a particular class
of networks, called almost torus networks, and on the lattice, which can be
considered the limit as the torus length goes to infinity. Almost torus
networks consist of a torus network topology from which some nodes or edges have
been removed. We find explicit expressions for the characteristic polynomial of
these graphs and tight lower bounds for its computation. These expressions
allow us to estimate their spectral radius and thus how the information spreads
on these networks.
|
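For the intact torus (before any nodes or edges are removed) the spectral radius is available in closed form, since the torus is a Cartesian product of two cycles. The sketch below computes it and the corresponding SIS epidemic threshold; this is only the unperturbed baseline that the almost-torus analysis perturbs, and the function name is mine:

```python
import math

def torus_spectral_radius(n1, n2):
    """Adjacency spectral radius of the n1 x n2 torus. The torus is the
    Cartesian product of two cycles, whose adjacency eigenvalues are
    2*cos(2*pi*k/n1) + 2*cos(2*pi*l/n2); the maximum (k = l = 0) is 4."""
    return max(2 * math.cos(2 * math.pi * k / n1) + 2 * math.cos(2 * math.pi * l / n2)
               for k in range(n1) for l in range(n2))

lam = torus_spectral_radius(10, 10)
print(lam, 1 / lam)  # spectral radius 4.0, SIS epidemic threshold 1/4
```

In the SIS model the epidemic dies out when the effective spreading rate is below 1/spectral radius, which is why estimating the spectral radius of the perturbed graphs tells us how information spreads on them.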
1312.7793 | Direction of Arrival Estimation Using Co-prime Arrays: A Super
Resolution Viewpoint | cs.IT math.IT | We consider the problem of direction of arrival (DOA) estimation using a
newly proposed structure of non-uniform linear arrays, referred to as co-prime
arrays, in this paper. By exploiting the second order statistical information
of the received signals, co-prime arrays exhibit $O(MN)$ degrees of freedom with
only $M + N$ sensors. A sparsity-based recovery method is proposed to fully
utilize these degrees of freedom. Unlike traditional sparse recovery methods,
the proposed method is based on the developing theory of super resolution,
which considers a continuous range of possible sources instead of discretizing
this range into a discrete grid. With this approach, the off-grid effects inherent
in traditional sparse recovery can be neglected, thus improving the accuracy of
DOA estimation. In this paper we show that in the noiseless case one can
theoretically detect up to $\frac{MN}{2}$ sources with only $2M + N$ sensors. The noise
statistics of co-prime arrays are also analyzed to demonstrate the robustness
of the proposed optimization scheme. A source number detection method is
presented based on the spectrum reconstructed from the sparse method. By
extensive numerical examples, we show the superiority of the proposed method in
terms of DOA estimation accuracy, degrees of freedom, and resolution ability
compared with previous methods, such as MUSIC with spatial smoothing and the
discrete sparse recovery method.
|
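The $O(MN)$-degrees-of-freedom claim comes from the difference coarray of the co-prime geometry: the cross-differences $Nn - Mm$ are all distinct when $M$ and $N$ are co-prime. A minimal sketch with a basic co-prime pair (the exact array variant used in the paper may differ):

```python
def coprime_positions(M, N):
    """Sensor positions of a basic co-prime pair (unit inter-element spacing):
    an M-element subarray with spacing N and an N-element subarray with
    spacing M; position 0 is shared, giving M + N - 1 physical sensors."""
    return sorted({n * N for n in range(M)} | {m * M for m in range(N)})

def difference_coarray(positions):
    """Distinct pairwise lags; their count bounds the usable degrees of
    freedom when second-order statistics are exploited."""
    return sorted({a - b for a in positions for b in positions})

pos = coprime_positions(4, 5)
lags = difference_coarray(pos)
print(len(pos), len(lags))  # 8 physical sensors, 27 distinct lags
```

With 8 physical sensors the coarray already offers 27 distinct lags, at least $MN = 20$ of them guaranteed by the co-primality argument, which is what lets far more sources than sensors be resolved.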
1312.7794 | On Minimal Trajectories for Mobile Sampling of Bandlimited Fields | cs.IT math.CA math.IT | We study the design of sampling trajectories for stable sampling and the
reconstruction of bandlimited spatial fields using mobile sensors. The spectrum
is assumed to be a symmetric convex set. As a performance metric we use the
path density of the set of sampling trajectories that is defined as the total
distance traveled by the moving sensors per unit spatial volume of the spatial
region being monitored. Focusing first on parallel lines, we identify the set
of parallel lines with minimal path density that contains a set of stable
sampling for fields bandlimited to a known set. We then show that the problem
becomes ill-posed when the optimization is performed over all trajectories by
demonstrating a feasible trajectory set with arbitrarily low path density.
However, the problem becomes well-posed if we explicitly specify the stability
margins. We demonstrate this by obtaining a non-trivial lower bound on the path
density of an arbitrary set of trajectories that contain a sampling set with
explicitly specified stability bounds.
|
1312.7815 | Optimal polygonal L1 linearization and fast interpolation of nonlinear
systems | math.OC cs.SY math.NA | The analysis of complex nonlinear systems is often carried out using simpler
piecewise linear representations of them. A principled and practical technique
is proposed to linearize and evaluate arbitrary continuous nonlinear functions
using polygonal (continuous piecewise linear) models under the L1 norm. A
thorough error analysis is developed to guide an optimal design of two kinds of
polygonal approximations in the asymptotic case of a large budget of evaluation
subintervals N. The method allows the user to obtain the level of linearization
(N) for a target approximation error and vice versa. It is suitable for, but
not limited to, an efficient implementation in modern Graphics Processing Units
(GPUs), allowing real-time performance of computationally demanding
applications. The quality and efficiency of the technique have been measured in
detail on two nonlinear functions that are widely used in many areas of
scientific computing and are expensive to evaluate.
|
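A minimal numerical sketch of the idea (polygonal models evaluated under the L1 norm), using plain interpolation rather than the paper's optimal designs; all function names are mine. It also exhibits the expected $O(1/N^2)$ error decay: doubling $N$ cuts the L1 error by roughly 4:

```python
import math

def polygonal_approx(f, a, b, N):
    """Interpolating polygonal (continuous piecewise linear) model of f on
    [a, b] over N equal subintervals; returns an evaluator g(x)."""
    xs = [a + (b - a) * i / N for i in range(N + 1)]
    ys = [f(x) for x in xs]
    def g(x):
        i = min(int((x - a) / (b - a) * N), N - 1)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return g

def l1_error(f, g, a, b, samples=10000):
    """Midpoint-rule estimate of the L1 approximation error on [a, b]."""
    h = (b - a) / samples
    return sum(abs(f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h))
               for i in range(samples)) * h

f = math.exp
err_10 = l1_error(f, polygonal_approx(f, 0.0, 1.0, 10), 0.0, 1.0)
err_20 = l1_error(f, polygonal_approx(f, 0.0, 1.0, 20), 0.0, 1.0)
print(err_10 / err_20)  # close to 4: L1 error decays like O(1/N^2)
```

This quadratic decay is what lets the user trade the level of linearization $N$ against a target approximation error, as the abstract describes.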
1312.7832 | Defining implication relation for classical logic | math.LO cs.AI cs.CL cs.LO | In classical logic, "P implies Q" is equivalent to "not-P or Q". It is well
known that the equivalence is problematic. Actually, from "P implies Q", "not-P
or Q" can be inferred ("Implication-to-disjunction" is valid), while from
"not-P or Q", "P implies Q" cannot be inferred in general
("Disjunction-to-implication" is not generally valid), so the equivalence
between them is invalid in general. This work aims to remove exactly the
incorrect Disjunction-to-implication from classical logic (CL). The paper
proposes a logical system (IRL) with the expected properties: (1) CL is simply
obtained by adding Disjunction-to-implication to IRL, and (2)
Disjunction-to-implication is independent of IRL (neither
Disjunction-to-implication nor its negation can be derived in IRL) in the
general case. In other words, IRL is just the system obtained by exactly
removing Disjunction-to-implication from CL.
|
1312.7847 | On Decentralized Estimation with Active Queries | cs.MA cs.IT cs.SY math.IT | We consider the problem of decentralized 20 questions with noise for multiple
players/agents under the minimum entropy criterion in the setting of stochastic
search over a parameter space, with application to target localization. We
propose decentralized extensions of the active query-based stochastic search
strategy that combines elements from the 20 questions approach and social
learning. We prove convergence to correct consensus on the value of the
parameter. This framework provides a flexible and tractable mathematical model
for decentralized parameter estimation systems based on active querying. We
illustrate the effectiveness and robustness of the proposed decentralized
collaborative 20 questions algorithm for random network topologies with
information sharing.
|
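In the noiseless, centralized special case, the 20 questions strategy is plain bisection; the sketch below is only that degenerate case (the paper's setting adds query noise, multiple agents and social learning):

```python
def twenty_questions(target, queries=20, lo=0.0, hi=1.0):
    """Noiseless centralized bisection: each yes/no answer halves the interval."""
    for _ in range(queries):
        mid = (lo + hi) / 2.0
        if target >= mid:  # "is the target at or above the midpoint?"
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = twenty_questions(0.123456)
print(hi - lo)  # 2**-20: 20 questions pin the target down to ~1e-6
```

Each question removes one bit of uncertainty, which is the entropy-reduction view the minimum entropy criterion generalizes to noisy, decentralized queries.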
1312.7852 | Evolutionary Design of Numerical Methods: Generating Finite Difference
and Integration Schemes by Differential Evolution | cs.NE cs.NA | Classical and new numerical schemes are generated using evolutionary
computing. Differential Evolution is used to find the coefficients of finite
difference approximations of function derivatives, and of single and multi-step
integration methods. The coefficients are reverse engineered based on samples
from a target function and its derivative used for training. The Runge-Kutta
schemes are trained using the order condition equations. An appealing feature
of the evolutionary method is the low number of model parameters. The
population size, termination criterion and number of training points are
determined in a sensitivity analysis. Computational results show good agreement
between evolved and analytical coefficients. In particular, a new fifth-order
Runge-Kutta scheme is computed which adheres to the order conditions with a sum
of absolute errors of order $10^{-14}$. Execution of the evolved schemes confirmed
the intended orders of accuracy. The outcome of this study is valuable for future
developments in the design of complex numerical methods that are out of reach
by conventional means.
|
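The coefficient-recovery idea can be reproduced in miniature: a bare-bones DE/rand/1/bin search for a 3-point stencil for $f'$, driven by the order conditions $\sum_i c_i = 0$, $\sum_i c_i x_i = 1$, $\sum_i c_i x_i^2 = 0$, whose exact solution is the classic central difference $(-1/2, 0, 1/2)$. Population size, $F$, $CR$ and the seed are illustrative choices, not the paper's settings:

```python
import random

# Order conditions for a 3-point stencil c = (c_{-1}, c_0, c_1) at nodes
# x + h*[-1, 0, 1] approximating f'(x) (unit step h = 1).
NODES = [-1.0, 0.0, 1.0]

def residual(c):
    """Sum of squared violations of the order conditions."""
    r0 = sum(c)
    r1 = sum(ci * xi for ci, xi in zip(c, NODES)) - 1.0
    r2 = sum(ci * xi * xi for ci, xi in zip(c, NODES))
    return r0 * r0 + r1 * r1 + r2 * r2

def differential_evolution(obj, dim, pop_size=30, gens=500, F=0.7, CR=0.9, seed=0):
    """Bare-bones DE/rand/1/bin minimizer."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(pop_size)]
    cost = [obj(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            t_cost = obj(trial)
            if t_cost <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

coeffs, err = differential_evolution(residual, 3)
print([round(c, 3) for c in coeffs])  # near [-0.5, 0.0, 0.5]
```

The same residual construction scales to longer stencils and, with the Runge-Kutta order condition equations as the objective, to the integration schemes the abstract describes.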
1312.7853 | Communication Efficient Distributed Optimization using an Approximate
Newton-type Method | cs.LG math.OC stat.ML | We present a novel Newton-type method for distributed optimization, which is
particularly well suited for stochastic optimization and learning problems. For
quadratic objectives, the method enjoys a linear rate of convergence which
provably \emph{improves} with the data size, requiring an essentially constant
number of iterations under reasonable assumptions. We provide theoretical and
empirical evidence of the advantages of our method compared to other
approaches, such as one-shot parameter averaging and ADMM.
|
1312.7869 | Consistent Bounded-Asynchronous Parameter Servers for Distributed ML | stat.ML cs.DC cs.LG | In distributed ML applications, shared parameters are usually replicated
among computing nodes to minimize network overhead. Therefore, a proper
consistency model must be carefully chosen to ensure the algorithm's correctness
and provide high throughput. Existing consistency models used in
general-purpose databases and modern distributed ML systems are either too
loose to guarantee correctness of the ML algorithms or too strict and thus fail
to fully exploit the computing power of the underlying distributed system.
Many ML algorithms fall into the category of \emph{iterative convergent
algorithms} which start from a randomly chosen initial point and converge to
optima by repeating iteratively a set of procedures. We have found that many such
algorithms are robust to a bounded amount of inconsistency and still converge
correctly. This property allows distributed ML to relax strict consistency
models to improve system performance while theoretically guaranteeing algorithmic
correctness. In this paper, we present several relaxed consistency models for
asynchronous parallel computation and theoretically prove their algorithmic
correctness. The proposed consistency models are implemented in a distributed
parameter server and evaluated in the context of a popular ML application:
topic modeling.
|
1401.0044 | Approximating the Bethe partition function | cs.LG | When belief propagation (BP) converges, it does so to a stationary point of
the Bethe free energy $F$, and is often strikingly accurate. However, it may
converge only to a local optimum or may not converge at all. An algorithm was
recently introduced for attractive binary pairwise MRFs which is guaranteed to
return an $\epsilon$-approximation to the global minimum of $F$ in polynomial
time provided the maximum degree $\Delta=O(\log n)$, where $n$ is the number of
variables. Here we significantly improve this algorithm and derive several
results including a new approach based on analyzing first derivatives of $F$,
which leads to performance that is typically far superior and yields a fully
polynomial-time approximation scheme (FPTAS) for attractive models without any
degree restriction. Further, the method applies to general (non-attractive)
models, though with no polynomial time guarantee in this case, leading to the
important result that approximating $\log$ of the Bethe partition function,
$\log Z_B=-\min F$, for a general model to additive $\epsilon$-accuracy may be
reduced to a discrete MAP inference problem. We explore an application to
predicting equipment failure on an urban power network and demonstrate that the
Bethe approximation can perform well even when BP fails to converge.
|
1401.0050 | Bounds on the rate of superimposed codes | cs.IT math.IT math.PR | A binary code is called a superimposed cover-free $(s,\ell)$-code if the code
is identified by the incidence matrix of a family of finite sets in which no
intersection of $\ell$ sets is covered by the union of $s$ others. A binary
code is called a superimposed list-decoding $s_L$-code if the code is
identified by the incidence matrix of a family of finite sets in which the
union of any $s$ sets can cover not more than $L-1$ other sets of the family.
For $L=\ell=1$, both of the definitions coincide and the corresponding binary
code is called a superimposed $s$-code. Our aim is to obtain new lower and
upper bounds on the rate of the given codes. The most interesting result is a lower
bound on the rate of superimposed cover-free $(s,\ell)$-codes based on the
ensemble of constant-weight binary codes. If the parameter $\ell\ge1$ is fixed and
$s\to\infty$, then the ratio of this lower bound to the best known upper bound
converges to the limit $2\,e^{-2}=0.271$. For the classical case $\ell=1$ and
$s\ge2$, this statement means that our recurrent upper bound on the rate
of superimposed $s$-codes obtained in 1982 is attained to within a constant
factor $a$, $0.271\le a\le1$.
|
1401.0061 | On dually flat general $(\alpha,\beta)$-metrics | math.DG cs.IT math.IT | This work studies the dual flatness, a notion connected with statistics and
information geometry, of general $(\alpha,\beta)$-metrics (a new class of
Finsler metrics). A nice characterization for such metrics to be
dually flat under some suitable conditions is provided and all the solutions
are completely determined. By using an original kind of metrical deformations,
many non-trivial explicit examples are constructed. Moreover, the relationship
of dual flatness and projective flatness of such metrics is shown.
|
1401.0069 | Determining Relevant Relations for Datalog Queries under Access
Limitations is Undecidable | cs.DB | Access limitations are restrictions in the way in which the tuples of a
relation can be accessed. Under access limitations, query answering becomes
more complex than in the traditional case, with no guarantee that the answer
tuples that can be extracted (aka maximal answer) are all those that would be
found without access limitations (aka complete answer). The field of query
answering under access limitations has been broadly investigated in the past.
Attention has been devoted to the problem of determining relations that are
relevant for a query, i.e., those (possibly off-query) relations that might
need to be accessed in order to find all tuples in the maximal answer. In this
short paper, we show that relevance is undecidable for Datalog queries.
|
1401.0077 | Multi-modal filtering for non-linear estimation | cs.SY | Multi-modal densities appear frequently in time series and practical
applications. However, they cannot be represented by common state estimators,
such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF),
which additionally suffer from the fact that uncertainty is often not captured
sufficiently well, which can result in incoherent and divergent tracking
performance. In this paper, we address these issues by devising a non-linear
filtering algorithm where densities are represented by Gaussian mixture models,
whose parameters are estimated in closed form. The resulting method exhibits a
superior performance on typical benchmarks.
|
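The closed-form parameter estimates such a filter relies on include the mixture moments; a minimal sketch for the 1-D case (notation mine) also shows why a single-Gaussian filter like the EKF/UKF misrepresents a bimodal density:

```python
def mixture_moments(weights, means, variances):
    """Mean and variance of a one-dimensional Gaussian mixture, in closed form:
    m = sum(w_i * mu_i),  v = sum(w_i * (s_i^2 + mu_i^2)) - m^2."""
    m = sum(w * mu for w, mu in zip(weights, means))
    v = sum(w * (s2 + mu * mu)
            for w, mu, s2 in zip(weights, means, variances)) - m * m
    return m, v

# A bimodal posterior: a single moment-matched Gaussian (EKF/UKF-style)
# has mean 0 and variance 5, hiding both modes entirely.
m, v = mixture_moments([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
print(m, v)
```

A mixture representation keeps both modes explicit instead of collapsing them into one broad Gaussian centered where neither mode lies.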
1401.0092 | A Novel Approach For Generating Face Template Using Bda | cs.CV | In identity management systems, the commonly used biometric recognition
systems require attention to the issue of biometric template protection if a
more reliable solution is to be achieved. In view of this, a biometric template
protection algorithm should satisfy security, discriminability and
cancelability. As no single template protection method is capable of satisfying
these basic requirements, a novel technique for face template generation and
protection is proposed, providing security and accuracy in new user enrollment
as well as in the authentication process. This novel technique
analysis algorithm. This algorithm is designed on the basis of random
projection, binary discriminant analysis and fuzzy commitment scheme. Three
publicly available benchmark face databases are used for evaluation. The
proposed novel technique enhances the discriminability and recognition accuracy
by 80% in terms of matching score of the face images and provides high
security.
|
1401.0102 | A DDoS-Aware IDS Model Based on Danger Theory and Mobile Agents | cs.DC cs.AI cs.CR cs.MA | We propose an artificial immune model for intrusion detection in distributed
systems based on a relatively recent theory in immunology called Danger theory.
Based on Danger theory, immune response in natural systems is a result of
sensing corruption as well as sensing unknown substances. In contrast,
traditional self-nonself discrimination theory states that immune response is
only initiated by sensing nonself (unknown) patterns. Danger theory solves many
problems that could only be partially explained by the traditional model.
Although the traditional model is simpler, such problems result in high false
positive rates in immune-inspired intrusion detection systems. We believe using
danger theory in a multi-agent environment that computationally emulates the
behavior of natural immune systems is effective in reducing false positive
rates. We first describe a simplified scenario of immune response in natural
systems based on danger theory and then, convert it to a computational model as
a network protocol. In our protocol, we define several immune signals and model
cell signaling via message passing between agents that emulate cells. Most
messages include application-specific patterns that must be meaningfully
extracted from various system properties. We show how to model these messages
in practice by performing a case study on the problem of detecting distributed
denial-of-service attacks in wireless sensor networks. We conduct a set of
systematic experiments to find a set of performance metrics that can accurately
distinguish malicious patterns. The results indicate that the system can be
efficiently used to detect malicious patterns with a high level of accuracy.
|
1401.0104 | PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction | cs.AI cs.LG cs.NE stat.ML | Multi-step-ahead time series prediction is one of the most challenging
research topics in the field of time series modeling and prediction, and is
continually under research. Recently, the multiple-input several
multiple-outputs (MISMO) modeling strategy has been proposed as a promising
alternative for multi-step-ahead time series prediction, exhibiting advantages
compared with the two currently dominating strategies, the iterated and the
direct strategies. Built on the established MISMO strategy, this study proposes
a particle swarm optimization (PSO)-based MISMO modeling strategy, which is
capable of determining the number of sub-models in a self-adaptive mode, with
varying prediction horizons. Rather than deriving crisp divides with
equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO
strategy, implemented with neural networks, employs a heuristic to create
flexible divides with varying sizes of prediction horizons and to generate
corresponding sub-models, providing considerable flexibility in model
construction, which has been validated with simulated and real datasets.
|
1401.0116 | Controlled Sparsity Kernel Learning | cs.LG | Multiple Kernel Learning(MKL) on Support Vector Machines(SVMs) has been a
popular front of research in recent times due to its success in application
problems like Object Categorization. This success is due to the fact that MKL
has the ability to choose from a variety of feature kernels to identify the
optimal kernel combination. However, the initial formulation of MKL was only
able to select the best of the features and missed out on many other
informative kernels present. To overcome this, the Lp-norm based formulation
was proposed by Kloft et al. This formulation is capable of choosing a non-sparse set of
kernels through a control parameter p. Unfortunately, the parameter p does not
have a direct meaning to the number of kernels selected. We have observed that
stricter control over the number of kernels selected gives us an edge over
these techniques in terms of accuracy of classification and also helps us to
fine tune the algorithms to the time requirements at hand. In this work, we
propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can
strictly control the number of kernels which we wish to select. The CSKL
formulation introduces a parameter t which directly corresponds to the number
of kernels selected. It is important to note that a search in t space is finite
and fast as compared to p. We have also provided an efficient Reduced Gradient
Descent based algorithm to solve the CSKL formulation, which is proven to
converge. Through our experiments on the Caltech101 Object Categorization
dataset, we have also shown that one can achieve better accuracies than the
previous formulations through the right choice of t.
|
1401.0118 | Black Box Variational Inference | stat.ML cs.LG stat.CO stat.ME | Variational inference has become a widely used method to approximate
posteriors in complex latent variables models. However, deriving a variational
inference algorithm generally requires significant model-specific analysis, and
these efforts can hinder and deter us from quickly developing and exploring a
variety of models for a problem at hand. In this paper, we present a "black
box" variational inference algorithm, one that can be quickly applied to many
models with little additional derivation. Our method is based on a stochastic
optimization of the variational objective where the noisy gradient is computed
from Monte Carlo samples from the variational distribution. We develop a number
of methods to reduce the variance of the gradient, always maintaining the
criterion that we want to avoid difficult model-based derivations. We evaluate
our method against the corresponding black box sampling based methods. We find
that our method reaches better predictive likelihoods much faster than sampling
methods. Finally, we demonstrate that Black Box Variational Inference lets us
easily explore a wide space of models by quickly constructing and evaluating
several models of longitudinal healthcare data.
|
1401.0131 | System Analysis And Design For Multimedia Retrieval Systems | cs.IR cs.CV cs.MM | Due to the extensive use of information technology and the recent
developments in multimedia systems, the amount of multimedia data available to
users has increased exponentially. Video is an example of multimedia data as it
contains several kinds of data such as text, image, meta-data, visual and
audio. Content based video retrieval is an approach for facilitating the
searching and browsing of large multimedia collections over the WWW. In order to
create an effective video retrieval system, visual perception must be taken
into account. We conjectured that a technique which employs multiple features
for indexing and retrieval would be more effective in the discrimination and
search tasks of videos. In order to validate this, content based indexing and
retrieval systems were implemented using color histogram, texture feature
(GLCM), edge density and motion.
|
1401.0159 | Speeding-Up Convergence via Sequential Subspace Optimization: Current
State and Future Directions | cs.NA cs.LG | This is an overview paper written in the style of a research proposal. In recent
years we introduced a general framework for large-scale unconstrained
optimization -- Sequential Subspace Optimization (SESOP) and demonstrated its
usefulness for sparsity-based signal/image denoising, deconvolution,
compressive sensing, computed tomography, diffraction imaging, support vector
machines. We explored its combination with Parallel Coordinate Descent and
Separable Surrogate Function methods, obtaining state of the art results in
above-mentioned areas. There are several methods, that are faster than plain
SESOP under specific conditions: Trust region Newton method - for problems with
easily invertible Hessian matrix; Truncated Newton method - when fast
multiplication by Hessian is available; Stochastic optimization methods - for
problems with large stochastic-type data; Multigrid methods - for problems with
nested multilevel structure. Each of these methods can be further improved by
merging with SESOP. One can also accelerate the Augmented Lagrangian method for
constrained optimization problems and Alternating Direction Method of
Multipliers for problems with separable objective function and non-separable
constraints.
|
1401.0166 | Medical Image Fusion: A survey of the state of the art | cs.CV cs.AI physics.med-ph | Medical image fusion is the process of registering and combining multiple
images from single or multiple imaging modalities to improve the imaging
quality and reduce randomness and redundancy in order to increase the clinical
applicability of medical images for diagnosis and assessment of medical
problems. Multi-modal medical image fusion algorithms and devices have shown
notable achievements in improving clinical accuracy of decisions based on
medical images. This review article provides a factual listing of methods and
summarizes the broad scientific challenges faced in the field of medical image
fusion. We characterize the medical image fusion research based on (1) the
widely used image fusion methods, (2) imaging modalities, and (3) imaging of
organs that are under study. This review concludes that even though there
exist several open-ended technological and scientific challenges, the fusion
of medical images has proved to be useful for advancing the clinical
reliability of using medical imaging for medical diagnostics and analysis, and
is a scientific discipline that has the potential to significantly grow in the
coming years.
|
1401.0180 | Decision Making under Uncertainty: A Quasimetric Approach | cs.AI math.OC | We propose a new approach for solving a class of discrete decision making
problems under uncertainty with positive cost. This issue concerns multiple and
diverse fields such as engineering, economics, artificial intelligence,
cognitive science and many others. Basically, an agent has to choose a single
or series of actions from a set of options, without knowing for sure their
consequences. Schematically, two main approaches have been followed: either the
agent learns which option is the correct one to choose in a given situation by
trial and error, or the agent already has some knowledge on the possible
consequences of his decisions; this knowledge being generally expressed as a
conditional probability distribution. In the latter case, several optimal or
suboptimal methods have been proposed to exploit this uncertain knowledge in
various contexts. In this work, we propose following a different approach,
based on the geometric intuition of distance. More precisely, we define a
goal-independent quasimetric structure on the state space, taking into account both
cost function and transition probability. We then compare precision and
computation time with classical approaches.
|
1401.0201 | Sparse Recovery with Very Sparse Compressed Counting | stat.ME cs.DS cs.IT cs.LG math.IT | Compressed sensing (sparse signal recovery) often encounters nonnegative data
(e.g., images). Recently we developed the methodology of using (dense)
Compressed Counting for recovering nonnegative K-sparse signals. In this paper,
we adopt very sparse Compressed Counting for nonnegative signal recovery. Our
design matrix is sampled from a maximally-skewed p-stable distribution (0<p<1),
and we sparsify the design matrix so that on average (1-g)-fraction of the
entries become zero. The idea is related to very sparse stable random
projections (Li et al 2006 and Li 2007), the prior work for estimating summary
statistics of the data.
In our theoretical analysis, we show that, when p->0, it suffices to use M =
K/(1-exp(-gK)) log N measurements, so that all coordinates can be recovered in
one scan of the coordinates. If g = 1 (i.e., dense design), then M = K log N.
If g= 1/K or 2/K (i.e., very sparse design), then M = 1.58K log N or M = 1.16K
log N. This means the design matrix can be indeed very sparse at only a minor
inflation of the sample complexity.
Interestingly, as p->1, the required number of measurements is essentially M
= 2.7K log N, provided g= 1/K. It turns out that this result is a general
worst-case bound.
|
1401.0202 | Optimal Control with Noisy Time | math.OC cs.SY | This paper examines stochastic optimal control problems in which the state is
perfectly known, but the controller's measure of time is a stochastic process
derived from a strictly increasing L\'evy process. We provide dynamic
programming results for continuous-time finite-horizon control and specialize
these results to solve a noisy-time variant of the linear quadratic regulator
problem and a portfolio optimization problem with random trade activity rates.
For the linear quadratic case, the optimal controller is linear and can be
computed from a generalization of the classical Riccati differential equation.
|
1401.0207 | Urban Mobility Scaling: Lessons from `Little Data' | physics.soc-ph cs.CY cs.SI physics.data-an stat.AP | Recent mobility scaling research, using new data sources, often relies on
aggregated data alone. Hence, these studies face difficulties characterizing
the influence of factors such as transportation mode on mobility patterns. This
paper attempts to complement this research by looking at a category-rich
mobility data set. In order to shed light on the impact of categories, as a
case study, we use conventionally collected German mobility data. In contrast
to `check-in'-based data, our results are not biased by Euclidean distance
approximations. In our analysis, we show that aggregation can hide crucial
differences between trip length distributions, when subdivided by categories.
For example, we see that on an urban scale (0 to ~15 km), walking, versus
driving, exhibits a markedly different scaling exponent, and thus a different
universality class.
Moreover, mode share and trip length are responsive to day-of-week and
time-of-day. For example, in Germany, although driving is relatively less
frequent on Sundays than on Wednesdays, trips seem to be longer. In addition,
our work may shed new light on the debate between distance-based and
intervening-opportunity mechanisms affecting mobility patterns, since mode may
be chosen both according to trip length and urban form.
|
1401.0214 | Band Allocation for Cognitive Radios with Buffered Primary and Secondary
Users | cs.NI cs.IT math.IT | In this paper, we study band allocation of $\mathcal{M}_s$ buffered secondary
users (SUs) to $\mathcal{M}_p$ orthogonal primary licensed bands, where each
primary band is assigned to one primary user (PU). Each SU is assigned to one
of the available primary bands with a certain probability designed to satisfy
some specified quality of service (QoS) requirements for the SUs. In the
proposed system, only one SU is assigned to a particular band. The optimization
problem used to obtain the stability region's envelope (closure) is shown to be
a linear program. We compare the stability region of the proposed system with
that of a system where each SU chooses a band randomly with some assignment
probability. We also compare with a fixed (deterministic) assignment system,
where only one SU is assigned to one of the primary bands all the time. We
prove the advantage of the proposed system over the other systems.
|
1401.0245 | A Review: Expert System for Diagnosis of Myocardial Infarction | cs.AI | A computer program capable of performing at a human-expert level in a narrow
problem domain area is called an expert system. Management of uncertainty is an
intrinsically important issue in the design of expert systems because much of
the information in the knowledge base of a typical expert system is imprecise,
incomplete or not totally reliable. In this paper, the author presents a
review of past work that has been carried out by various researchers on the
development of expert systems for the diagnosis of cardiac disease.
|
1401.0247 | Robust Hierarchical Clustering | cs.LG cs.DS | One of the most widely used techniques for data clustering is agglomerative
clustering. Such algorithms have been long used across many different fields
ranging from computational biology to social sciences to computer vision in
part because their output is easy to interpret. Unfortunately, it is well
known that many of the classic agglomerative clustering algorithms
are not robust to noise. In this paper we propose and analyze a new robust
algorithm for bottom-up agglomerative clustering. We show that our algorithm
can be used to cluster accurately in cases where the data satisfies a number of
natural properties and where the traditional agglomerative algorithms fail. We
also show how to adapt our algorithm to the inductive setting where our given
data is only a small random sample of the entire data set. Experimental
evaluations on synthetic and real world data sets show that our algorithm
achieves better performance than other hierarchical algorithms in the presence
of noise.
|
1401.0255 | Modeling Attractiveness and Multiple Clicks in Sponsored Search Results | cs.IR cs.LG | Click models are an important tool for leveraging user feedback, and are used
by commercial search engines for surfacing relevant search results. However,
existing click models are lacking in two aspects. First, they do not share
information across search results when computing attractiveness. Second, they
assume that users interact with the search results sequentially. Based on our
analysis of the click logs of a commercial search engine, we observe that the
sequential scan assumption does not always hold, especially for sponsored
search results. To overcome the above two limitations, we propose a new click
model. Our key insight is that sharing information across search results helps
in identifying important words or key-phrases which can then be used to
accurately compute attractiveness of a search result. Furthermore, we argue
that the click probability of a position as well as its attractiveness changes
during a user session and depends on the user's past click experience. Our
model seamlessly incorporates the effect of externalities (quality of other
search results displayed in response to a user query), user fatigue, as well as
pre and post-click relevance of a sponsored search result. We propose an
efficient one-pass inference scheme and empirically evaluate the performance of
our model via extensive experiments using the click logs of a large commercial
search engine.
|
1401.0260 | Models for the modern power grid | cs.SY | This article reviews different kinds of models for the electric power grid
that can be used to understand the modern power system, the smart grid. From
the physical network to abstract energy markets, we identify in the literature
different aspects that co-determine the spatio-temporal multilayer dynamics of
the power system. We start our review by showing how the generation,
transmission and distribution characteristics of the traditional power grids
are already subject to complex behaviour appearing as a result of the
interplay between the dynamics of the nodes and the topology, namely
synchronisation and cascade effects.
When dealing with smart grids, the system complexity increases even more: on
top of the physical network of power lines and controllable sources of
electricity, the modernisation brings information networks, renewable
intermittent generation, market liberalisation, prosumers, among other aspects.
In this case, we forecast a dynamical co-evolution of the smart grid and other
kinds of networked systems that cannot be understood in isolation. This review
compiles recent results that model electric power grids as complex systems,
going beyond pure technological aspects. From this perspective, we then
indicate possible ways to incorporate the diverse co-evolving systems into the
smart grid model using, for example, network theory and multi-agent simulation.
|
1401.0282 | Design of a GIS-based Assistant Software Agent for the Incident
Commander to Coordinate Emergency Response Operations | cs.MA cs.AI | Problem: This paper addresses the design of an intelligent software system
for the IC (incident commander) of a team in order to coordinate actions of
agents (field units or robots) in the domain of emergency/crisis response
operations. Objective: This paper proposes GICoordinator. It is a GIS-based
assistant software agent that assists and collaborates with the human planner
in strategic planning and macro tasks assignment for centralized multi-agent
coordination. Method: Our approach to design GICoordinator was to: analyze the
problem, design a complete data model, design an architecture of GICoordinator,
specify required capabilities of human and system in coordination problem
solving, specify development tools, and deploy. Result: The result was an
architecture/design of GICoordinator that contains system requirements.
Findings: GICoordinator efficiently integrates geoinformatics with artificial
intelligence techniques in order to provide a spatially intelligent coordinator
system for an IC to efficiently coordinate and control agents by making
macro/strategic decisions. Results define a framework for future works to
develop this system.
|
1401.0304 | Learning without Concentration | cs.LG stat.ML | We obtain sharp bounds on the performance of Empirical Risk Minimization
performed in a convex class and with respect to the squared loss, without
assuming that class members and the target are bounded functions or have
rapidly decaying tails.
Rather than resorting to a concentration-based argument, the method used here
relies on a `small-ball' assumption and thus holds for classes consisting of
heavy-tailed functions and for heavy-tailed targets.
The resulting estimates scale correctly with the `noise level' of the
problem, and when applied to the classical, bounded scenario, always improve
the known bounds.
|
1401.0323 | Analysis and Control of Beliefs in Social Networks | cs.SI physics.soc-ph | In this paper, we investigate the problem of how beliefs diffuse among
members of social networks. We propose an information flow model (IFM) of
belief that captures how interactions among members affect the diffusion and
eventual convergence of a belief. The IFM model includes a generalized Markov
Graph (GMG) model as a social network model, which reveals that the diffusion
of beliefs depends heavily on two characteristics of the social network,
namely degree centralities and clustering coefficients. We
apply the IFM to both converged belief estimation and belief control strategy
optimization. The model is compared with an IFM including the Barabasi-Albert
model, and is evaluated via experiments with published real social network
data.
|
1401.0340 | Optimal Random Access and Random Spectrum Sensing for an Energy
Harvesting Cognitive Radio with and without Primary Feedback Leveraging | cs.IT cs.NI math.IT | We consider a secondary user (SU) with energy harvesting capability. We
design access schemes for the SU which incorporate random spectrum sensing and
random access, and which make use of the primary automatic repeat request (ARQ)
feedback. We study two problem-formulations. In the first problem-formulation,
we characterize the stability region of the proposed schemes. The sensing and
access probabilities are obtained such that the secondary throughput is
maximized under the constraints that both the primary and secondary queues are
stable. In the second problem-formulation, the sensing and access
probabilities are obtained such that the secondary throughput is maximized
under the stability of the primary queue and that the primary queueing delay is
kept lower than a specified value needed to guarantee a certain quality of
service (QoS) for the primary user (PU). We consider spectrum sensing errors
and assume multipacket reception (MPR) capabilities. Numerical results show the
enhanced performance of our proposed systems.
|
1401.0347 | Distributed Iterative Detection Based on Reduced Message Passing for
Networked MIMO Cellular Systems | cs.IT math.IT | This paper considers base station cooperation (BSC) strategies for the uplink
of a multi-user multi-cell high frequency reuse scenario where distributed
iterative detection (DID) schemes with soft/hard interference cancellation
algorithms are studied. The conventional distributed detection scheme exchanges
{soft symbol estimates} with all cooperating BSs. Since a large amount of
information needs to be shared via the backhaul, the exchange of hard bit
information is preferred, however a performance degradation is experienced. In
this paper, we consider a reduced message passing (RMP) technique in which each
BS generates a detection list with the probabilities for the desired symbol
that are sorted according to the calculated probability. The network then
selects the best {detection candidates} from the lists and conveys the index of
the constellation symbols (instead of double-precision values) among the
cooperating cells. The proposed DID-RMP achieves an inter-cell-interference
(ICI) suppression with low backhaul traffic overhead compared with {the
conventional soft bit exchange} and outperforms the previously reported
hard/soft information exchange algorithms.
|
1401.0362 | EigenGP: Gaussian Process Models with Adaptive Eigenfunctions | cs.LG | Gaussian processes (GPs) provide a nonparametric representation of functions.
However, classical GP inference suffers from high computational cost for big
data. In this paper, we propose a new Bayesian approach, EigenGP, that learns
both basis dictionary elements--eigenfunctions of a GP prior--and prior
precisions in a sparse finite model. It is well known that, among all
orthogonal basis functions, eigenfunctions can provide the most compact
representation. Unlike other sparse Bayesian finite models where the basis
function has a fixed form, our eigenfunctions live in a reproducing kernel
Hilbert space as a finite linear combination of kernel functions. We learn the
dictionary elements--eigenfunctions--and the prior precisions over these
elements as well as all the other hyperparameters from data by maximizing the
model marginal likelihood. We explore computational linear algebra to simplify
the gradient computation significantly. Our experimental results demonstrate
improved predictive performance of EigenGP over alternative sparse GP methods
as well as the relevance vector machine.
|
1401.0366 | Quantitative Comparison Between Crowd Models for Evacuation Planning and
Evaluation | cs.MA cs.CY | Crowd simulation is rapidly becoming a standard tool for evacuation planning
and evaluation. However, the many crowd models in the literature are
structurally different, and few have been rigorously calibrated against
real-world egress data, especially in emergency situations. In this paper we
describe a procedure to quantitatively compare different crowd models or
between models and real-world data. We simulated three models: (1) the lattice
gas model, (2) the social force model, and (3) the RVO2 model, and obtained the
distributions of six observables: (1) evacuation time, (2) zoned evacuation
time, (3) passage density, (4) total distance traveled, (5) inconvenience, and
(6) flow rate. We then used the DISTATIS procedure to compute the compromise
matrix of statistical distances between the three models. Projecting the three
models onto the first two principal components of the compromise matrix, we
find the lattice gas and RVO2 models are similar in terms of the evacuation
time, passage density, and flow rates, whereas the social force and RVO2 models
are similar in terms of the total distance traveled. Most importantly, we find
the zoned evacuation times of the three models to be very different from
each other. Thus we propose to use this variable, if it can be measured, as the
key test between different models, and also between models and the real world.
Finally, we compared the model flow rates against the flow rate of an emergency
evacuation during the May 2008 Sichuan earthquake, and found the social force
model agrees best with this real data.
|
1401.0376 | Generalization Bounds for Representative Domain Adaptation | cs.LG stat.ML | In this paper, we propose a novel framework to analyze the theoretical
properties of the learning process for a representative type of domain
adaptation, which combines data from multiple sources and one target (or
briefly called representative domain adaptation). In particular, we use the
integral probability metric to measure the difference between the distributions
of two domains and meanwhile compare it with the H-divergence and the
discrepancy distance. We develop the Hoeffding-type, the Bennett-type and the
McDiarmid-type deviation inequalities for multiple domains respectively, and
then present the symmetrization inequality for representative domain
adaptation. Next, we use the derived inequalities to obtain the Hoeffding-type
and the Bennett-type generalization bounds respectively, both of which are
based on the uniform entropy number. Moreover, we present the generalization
bounds based on the Rademacher complexity. Finally, we analyze the asymptotic
convergence and the rate of convergence of the learning process for
representative domain adaptation. We discuss the factors that affect the
asymptotic behavior of the learning process and the numerical experiments
support our theoretical findings as well. Meanwhile, we give a comparison with
the existing results of domain adaptation and the classical results under the
same-distribution assumption.
|
1401.0395 | Hybrid Approach to Face Recognition System using Principle component and
Independent component with score based fusion process | cs.CV | Hybrid approaches have a special status among face recognition systems as they
combine different recognition approaches, either serially or in parallel, to
overcome the shortcomings of individual methods. This paper explores the area
of Hybrid Face Recognition using score based strategy as a combiner/fusion
process. In the proposed approach, the recognition system operates in two modes:
training and classification. Training mode involves normalization of the face
images (training set), extracting appropriate features using Principal
Component Analysis (PCA) and Independent Component Analysis (ICA). The
extracted features are then trained in parallel using Back-propagation neural
networks (BPNNs) to partition the feature space into different face classes.
In classification mode, the trained PCA BPNN and ICA BPNN are fed with new face
image(s). The score based strategy which works as a combiner is applied to the
results of both PCA BPNN and ICA BPNN to classify given new face image(s)
according to face classes obtained during the training mode. The proposed
approach has been tested on the ORL and other face databases; the experimental
results show that the proposed system has higher accuracy than face recognition
systems using a single feature extractor.
|
1401.0412 | Traffic congestion in interconnected complex networks | physics.soc-ph cs.SI | Traffic congestion in isolated complex networks has been investigated
extensively over the last decade. Coupled network models have recently been
developed to facilitate further understanding of real complex systems. Analysis
of traffic congestion in coupled complex networks, however, is still relatively
unexplored. In this paper, we try to explore the effect of interconnections on
traffic congestion in interconnected BA scale-free networks. We find that
assortative coupling can alleviate traffic congestion more readily than
disassortative and random coupling when the node processing capacity is
allocated based on node usage probability. Furthermore, the optimal coupling
probability can be found for assortative coupling. However, three types of
coupling preferences achieve similar traffic performance if all nodes share the
same processing capacity. We analyze interconnected Internet AS-level graphs of
South Korea and Japan and obtain similar results. Some practical suggestions
are presented to optimize such real-world interconnected networks accordingly.
|
1401.0430 | Schur Complement Based Analysis of MIMO Zero-Forcing for Rician Fading | cs.IT math.IT | For multiple-input/multiple-output (MIMO) spatial multiplexing with
zero-forcing detection (ZF), signal-to-noise ratio (SNR) analysis for Rician
fading involves the cumbersome noncentral-Wishart distribution (NCWD) of the
transmit sample-correlation (Gramian) matrix. An \textsl{approximation} with a
\textsl{virtual} CWD previously yielded for the ZF SNR an approximate (virtual)
Gamma distribution. However, analytical conditions qualifying the accuracy of
the SNR-distribution approximation were unknown. Therefore, we have been
attempting to exactly characterize ZF SNR for Rician fading. Our previous
attempts succeeded only for the sole Rician-fading stream under
Rician--Rayleigh fading, by writing it as scalar Schur complement (SC) in the
Gramian. Herein, we pursue a more general, matrix-SC-based analysis to
characterize SNRs when several streams may undergo Rician fading. On one hand,
for full-Rician fading, the SC distribution is found to be exactly a CWD if and
only if a channel-mean--correlation \textsl{condition} holds. Interestingly,
this CWD then coincides with the \textsl{virtual} CWD ensuing from the
\textsl{approximation}. Thus, under the \textsl{condition}, the actual and
virtual SNR-distributions coincide. On the other hand, for Rician--Rayleigh
fading, the matrix-SC distribution is characterized in terms of the determinant
of a matrix with elementary-function entries, which also yields a new
characterization of the ZF SNR. Average error probability results validate our
analysis against simulation.
|
1401.0437 | UROP: A Simple, Near-Optimal Scheduling Policy for Energy Harvesting
Sensors | cs.IT math.IT | This paper considers a single-hop wireless network where a central node (or
fusion center, FC) collects data from a set of m energy harvesting (EH) nodes
(e.g. nodes of a wireless sensor network). In each time slot, k of m nodes can
be scheduled by the FC for transmission over k orthogonal channels. FC has no
knowledge about EH processes and current battery states of nodes; however, it
knows outcomes of previous transmission attempts. The objective is to find a
low-complexity scheduling policy that maximizes the total throughput of the
data-backlogged system using the harvested energy, for all types of EH
processes (uniform, non-uniform, independent, correlated (i.e., Markovian),
etc.). Energy is assumed to be stored losslessly in the nodes' batteries, up to
a storage capacity (the infinite-capacity case is also considered). The problem
is treated in finite and infinite problem horizons. A low-complexity policy,
UROP (Uniformizing Random Ordered Policy) is proposed, whose near optimality is
shown. Numerical examples indicate that under a reasonable-sized battery
capacity, UROP uses the arriving energy with almost perfect efficiency. As the
problem is a restless multi-armed bandit (RMAB) problem with an average reward
criterion, UROP may have a wider application area than communication networks.
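The scheduling idea can be sketched as a toy simulation: keep the nodes in an
order, schedule the first k each slot, and demote nodes whose transmissions
fail (empty battery) to the back. This is a simplified, hedged reading of the
policy, with all parameters (harvest probability, battery model) invented:

```python
import random

def simulate_urop_like(m, k, slots, harvest_p=0.3, seed=0):
    """Toy policy: schedule the first k nodes of a maintained order each
    slot; nodes that fail (empty battery) are moved to the back, so nodes
    holding energy gravitate toward the front."""
    rng = random.Random(seed)
    battery = [0] * m
    order = list(range(m))
    rng.shuffle(order)
    delivered = 0
    for _ in range(slots):
        for i in range(m):                       # Bernoulli energy arrivals
            if rng.random() < harvest_p:
                battery[i] += 1
        scheduled, rest = order[:k], order[k:]
        failed = [n for n in scheduled if battery[n] == 0]
        for node in scheduled:
            if battery[node] > 0:                # ACK: packet delivered
                battery[node] -= 1
                delivered += 1
        order = [n for n in scheduled if n not in failed] + rest + failed
    return delivered

packets = simulate_urop_like(m=8, k=2, slots=100)
```

The FC here uses only transmission outcomes, matching the abstract's
assumption that battery states and EH processes are unknown to it.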
|
1401.0447 | Effect of Memory on the Dynamics of Random Walks on Networks | physics.soc-ph cs.SI | Pathways of diffusion observed in real-world systems often require stochastic
processes going beyond first-order Markov models, as implicitly assumed in
network theory. In this work, we focus on second-order Markov models, and
derive an analytical expression for the effect of memory on the spectral gap
and thus, equivalently, on the characteristic time needed for the stochastic
process to asymptotically reach equilibrium. Perturbation analysis shows that
standard first-order Markov models can either overestimate or underestimate the
diffusion rate of flows across the modular structure of a system captured by a
second-order Markov network. We test the theoretical predictions on a toy
example and on numerical data, and discuss their implications for network
theory, in particular in the case of temporal or multiplex networks.
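The relationship between the spectral gap and the characteristic equilibration
time can be illustrated on the smallest possible case, a two-state first-order
chain whose second eigenvalue is available in closed form; this is a generic
textbook illustration, not the paper's second-order analysis:

```python
import math

def two_state_mixing(a, b):
    """For the 2-state chain P = [[1-a, a], [b, 1-b]], the second eigenvalue
    is exactly lambda_2 = 1 - a - b; the spectral gap 1 - |lambda_2| sets the
    characteristic time ~ -1/ln|lambda_2| to reach equilibrium."""
    lam2 = 1.0 - a - b
    gap = 1.0 - abs(lam2)
    relax = -1.0 / math.log(abs(lam2)) if 0.0 < abs(lam2) < 1.0 else 0.0
    return lam2, gap, relax

# a "stickier" chain (stronger memory of the current state) mixes more slowly
_, gap_fast, t_fast = two_state_mixing(0.4, 0.4)    # lambda_2 = 0.2
_, gap_slow, t_slow = two_state_mixing(0.05, 0.05)  # lambda_2 = 0.9
```

A second-order model lifts the state space to pairs of states, and, as the
abstract notes, the resulting spectral gap can be larger or smaller than the
first-order one.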
|
1401.0463 | Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization
with Shrinkage | cs.SY | This letter proposes a novel sparsity-aware adaptive filtering scheme and
algorithms based on an alternating optimization strategy with shrinkage. The
proposed scheme employs a two-stage structure that consists of an alternating
optimization of a diagonally-structured matrix that speeds up the convergence
and an adaptive filter with a shrinkage function that forces the coefficients
with small magnitudes to zero. We devise alternating optimization least-mean
square (LMS) algorithms for the proposed scheme and analyze its mean-square
error. Simulations for a system identification application show that the
proposed scheme and algorithms outperform existing sparsity-aware algorithms in
convergence and tracking.
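A minimal sketch of the shrinkage idea is an LMS update followed by a
soft-threshold that forces small coefficients to exactly zero; this
illustrates only the shrinkage stage, not the paper's two-stage alternating
optimization, and the step size, threshold, and test system are invented:

```python
import math
import random

def sparse_lms_identify(x, d, n_taps, mu=0.05, lam=1e-3):
    """LMS adaptation with a soft-threshold shrinkage step after each update,
    driving coefficients with small magnitudes to zero."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]            # input regressor
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        w = [math.copysign(max(abs(wi) - lam, 0.0), wi) for wi in w]
    return w

# demo: identify a sparse 5-tap system with a single active coefficient
rng = random.Random(1)
h = [0.0, 0.8, 0.0, 0.0, 0.0]
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(h[j] * x[n - j] for j in range(len(h)) if n >= j)
     for n in range(len(x))]
w = sparse_lms_identify(x, d, n_taps=5)
```

The soft-threshold trades a small bias on the active tap for exact zeros on
the inactive ones, which is the usual sparsity/accuracy trade-off.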
|
1401.0480 | Chaff from the Wheat : Characterization and Modeling of Deleted
Questions on Stack Overflow | cs.IR cs.SI | Stack Overflow is the most popular community question answering (CQA) site
for programmers on the web, with 2.05M
users, 5.1M questions and 9.4M answers. Stack Overflow has explicit, detailed
guidelines on how to post questions and an ebullient moderation community.
Despite these precise communications and safeguards, questions posted on Stack
Overflow can be extremely off topic or very poor in quality. Such questions can
be deleted from Stack Overflow at the discretion of experienced community
members and moderators. We present the first study of deleted questions on
Stack Overflow. We divide our study into two parts (i) Characterization of
deleted questions over approx. 5 years (2008-2013) of data, (ii) Prediction of
deletion at the time of question creation. Our characterization study reveals
multiple insights on question deletion phenomena. We observe a significant
increase in the number of deleted questions over time. We find that it takes
substantial time to vote a question to be deleted but once voted, the community
takes swift action. We also see that question authors delete their questions to
salvage reputation points. We notice some instances of accidental deletion of
good quality questions but such questions are voted back to be undeleted
quickly. We discover a pyramidal structure of question quality on Stack
Overflow and find that deleted questions lie at the bottom (lowest quality) of
the pyramid. We also build a predictive model to detect question deletion at
creation time. We experiment with 47 features based on User
Profile, Community Generated, Question Content and Syntactic style and report
an accuracy of 66%. Our feature analysis reveals that all four categories of
features are important for the prediction task. Our findings reveal important
suggestions for content quality maintenance on community based question
answering websites.
|
1401.0486 | A Hybrid NN/HMM Modeling Technique for Online Arabic Handwriting
Recognition | cs.CV | In this work we propose a hybrid NN/HMM model for online Arabic handwriting
recognition. The proposed system is based on Hidden Markov Models (HMMs) and
Multi-Layer Perceptron Neural Networks (MLPNNs). The input signal is segmented
into continuous strokes (segments) based on the Beta-Elliptical strategy by
inspecting the extremum points of the curvilinear velocity profile. A neural
network trained with segment level contextual information is used to extract
class character probabilities. The output of this network is decoded by HMMs to
provide character-level recognition. In evaluations on the ADAB database, we
achieved 96.4% character recognition accuracy, a statistically significant
improvement over the character recognition accuracies obtained by
state-of-the-art online Arabic systems.
|
1401.0494 | Flexible SQLf query based on fuzzy linguistic summaries | cs.DB | Data is often partially known, vague or ambiguous in many real world
applications. To deal with such imprecise information, fuzziness is introduced
in the classical model. SQLf is one of the practical languages for flexible
fuzzy querying in Fuzzy DataBases (FDB). However, with huge amounts of fuzzy
data, the need to work with synthetic views has become a challenge for many DB
community researchers. The present work deals with flexible SQLf queries based
on fuzzy linguistic summaries. We use the fuzzy summaries produced
by our Fuzzy-SaintEtiq approach. It provides a description of objects depending
on the fuzzy linguistic labels specified as selection criteria.
|
1401.0496 | Global Stability Results for Traffic Networks | math.OC cs.SY math.DS | This paper provides sufficient conditions for global asymptotic stability and
global exponential stability, which can be applied to nonlinear, large-scale,
uncertain discrete-time systems. The conditions are derived by means of vector
Lyapunov functions. The obtained results are applied to traffic networks for
the derivation of sufficient conditions of global exponential stability of the
uncongested equilibrium point of the network. Specific results and algorithms
are provided for freeway models. Various examples illustrate the applicability
of the obtained results.
|
1401.0509 | Zero-Shot Learning for Semantic Utterance Classification | cs.CL cs.LG | We propose a novel zero-shot learning method for semantic utterance
classification (SUC). It learns a classifier $f: X \to Y$ for problems where
none of the semantic categories $Y$ are present in the training set. The
framework uncovers the link between categories and utterances using a semantic
space. We show that this semantic space can be learned by deep neural networks
trained on large amounts of search engine query log data. More precisely, we
propose a novel method that can learn discriminative semantic features without
supervision. It uses the zero-shot learning framework to guide the learning of
the semantic features. We demonstrate the effectiveness of the zero-shot
semantic learning algorithm on the SUC dataset collected by (Tur, 2012).
Furthermore, we achieve state-of-the-art results by combining the semantic
features with a supervised method.
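The zero-shot decision rule the abstract describes amounts to
nearest-neighbor search in the semantic space: classify an utterance by the
category whose embedding is closest. A hedged sketch, with tiny hand-made
embeddings standing in for the ones learned from query logs:

```python
import math

def zero_shot_classify(utterance_vec, category_vecs):
    """Pick the category whose semantic embedding has the highest cosine
    similarity to the utterance embedding; the categories themselves never
    need to appear in the training set."""
    def cos(a, b):
        num = sum(p * q for p, q in zip(a, b))
        den = (math.sqrt(sum(p * p for p in a))
               * math.sqrt(sum(q * q for q in b)))
        return num / den if den else 0.0
    return max(category_vecs,
               key=lambda c: cos(utterance_vec, category_vecs[c]))

# hypothetical 3-d embeddings for two unseen semantic categories
categories = {"flight_booking": [0.9, 0.1, 0.0],
              "restaurant_search": [0.1, 0.8, 0.2]}
prediction = zero_shot_classify([0.8, 0.2, 0.1], categories)
```

The quality of such a classifier rests entirely on the semantic space, which
in the paper is learned from large-scale search engine query logs.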
|
1401.0514 | Structured Generative Models of Natural Source Code | cs.PL cs.LG stat.ML | We study the problem of building generative models of natural source code
(NSC); that is, source code written and understood by humans. Our primary
contribution is to describe a family of generative models for NSC that have
three key properties: First, they incorporate both sequential and hierarchical
structure. Second, they learn a distributed representation of source code
elements. Finally, they integrate closely with a compiler, which allows
leveraging compiler logic and abstractions when building structure into the
model. We also develop an extension that includes more complex structure,
refining how the model generates identifier tokens based on what variables are
currently in scope. Our models can be learned efficiently, and we show
empirically that including appropriate structure greatly improves the models,
measured by the probability of generating test programs.
|
1401.0523 | Solving Poisson Equation by Genetic Algorithms | cs.NE | This paper deals with a method for solving Poisson Equation (PE) based on
genetic algorithms and grammatical evolution. The method forms generations of
solutions expressed in an analytical form. Several examples of PE are tested
and in most cases the exact solution is recovered. When the solution cannot be
expressed in an analytical form, our method produces a satisfactory
approximation with a good level of accuracy.
|
1401.0543 | Beyond the Min-Cut Bound: Deterministic Network Coding for Asynchronous
Multirate Broadcast | cs.IT math.IT | In a single hop broadcast packet erasure network, we demonstrate that it is
possible to provide multirate packet delivery outside of what is given by the
network min-cut. This is achieved by using a deterministic non-block-based
network coding scheme, which allows us to sidestep some of the limitations put
in place by the block coding model used to determine the network capacity.
Under the network coding scheme we outline, the sender is able to transmit
network coded packets above the channel rate of some receivers, while ensuring
that they still experience nonzero delivery rates. Interestingly, in this
generalised form of asynchronous network coded broadcast, receivers are not
required to obtain knowledge of all packets transmitted so far. Instead, causal
feedback from the receivers about packet erasures is used by the sender to
determine a network coded transmission that will allow at least one, but often
multiple receivers, to deliver their next needed packet.
Although the analysis of deterministic coding schemes is generally a
difficult problem, by making some approximations we are able to obtain
tractable estimates of the receivers' delivery rates, which are shown to match
reasonably well with simulation. Using these estimates, we design a fairness
algorithm that allocates the sender's resources so all receivers will
experience fair delivery rate performance.
|
1401.0546 | Low-Complexity Particle Swarm Optimization for Time-Critical
Applications | cs.NE | Particle swarm optimization (PSO) is a popular stochastic optimization method
that has found wide applications in diverse fields. However, PSO suffers from
high computational complexity and slow convergence speed. High computational
complexity hinders its use in applications that have limited power resources
while slow convergence speed makes it unsuitable for time-critical
applications. In this paper, we propose two techniques to overcome these
limitations. The first technique reduces the computational complexity of PSO
while the second technique speeds up its convergence. These techniques can be
applied, either separately or in conjunction, to any existing PSO variant. The
proposed techniques are robust to the number of dimensions of the optimization
problem. Simulation results are presented for the proposed techniques applied
to the standard PSO as well as to several PSO variants. The results show that
the use of both these techniques in conjunction results in a reduction in the
number of computations required as well as faster convergence speed while
maintaining an acceptable error performance for time-critical applications.
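For reference, the baseline that such techniques start from, standard
global-best PSO, can be sketched as follows; the parameter values are
conventional defaults, not the paper's, and the sphere function is just a
test objective:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO: each particle tracks its personal best and is
    pulled toward it and toward the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=5)
```

The per-iteration cost is O(n_particles * dim) plus one objective evaluation
per particle, which is the computational budget the proposed techniques aim
to cut.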
|
1401.0569 | Natural Language Processing in Biomedicine: A Unified System
Architecture Overview | cs.CL | In modern electronic medical records (EMR) much of the clinically important
data - signs and symptoms, symptom severity, disease status, etc. - are not
provided in structured data fields, but rather are encoded in clinician
generated narrative text. Natural language processing (NLP) provides a means of
"unlocking" this important data source for applications in clinical decision
support, quality assurance, and public health. This chapter provides an
overview of representative NLP systems in biomedicine based on a unified
architectural view. A general architecture in an NLP system consists of two
main components: background knowledge that includes biomedical knowledge
resources and a framework that integrates NLP tools to process text. Systems
differ in both components, which we will review briefly. Additionally,
challenges facing current research efforts in biomedical NLP include the
paucity of large, publicly available annotated corpora, although initiatives
that facilitate data sharing, system evaluation, and collaborative work between
researchers in clinical NLP are starting to emerge.
|
1401.0578 | An Improved RIP-Based Performance Guarantee for Sparse Signal Recovery
via Orthogonal Matching Pursuit | cs.IT math.IT | A sufficient condition reported very recently for perfect recovery of a
K-sparse vector via orthogonal matching pursuit (OMP) in K iterations is that
the restricted isometry constant of the sensing matrix satisfies
delta_{K+1} < 1/(sqrt(K)+1). By exploiting an approximate orthogonality
condition characterized via the achievable angles between two orthogonal sparse
vectors upon compression, this paper shows that the upper bound on delta can be
further relaxed to delta_{K+1} < (sqrt(4K+1)-1)/(2K). This result thus
narrows the gap between the best known bound so far and the ultimate
performance guarantee delta_{K+1} < 1/sqrt(K) that is conjectured by Dai
and Milenkovic in 2009. The proposed approximate orthogonality condition is
also exploited to derive less restricted sufficient conditions for signal
reconstruction in several compressive sensing problems, including signal
recovery via OMP in a noisy environment, compressive domain interference
cancellation, and support identification via the subspace pursuit algorithm.
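The OMP recovery loop that these guarantees concern can be sketched in a few
lines: pick the column most correlated with the residual, re-fit by least
squares on the chosen support, and repeat. A pure-Python toy for small
problems (normal equations via Gaussian elimination; the matrix and signal
below are invented for the demo):

```python
def _lstsq_cols(A, y, S):
    """Least squares on the columns of A indexed by S (normal equations,
    Gaussian elimination with partial pivoting; fine for tiny problems)."""
    m, k = len(A), len(S)
    G = [[sum(A[i][a] * A[i][b] for i in range(m)) for b in S] for a in S]
    rhs = [sum(A[i][a] * y[i] for i in range(m)) for a in S]
    for c in range(k):                            # forward elimination
        p = max(range(c, k), key=lambda r: abs(G[r][c]))
        G[c], G[p] = G[p], G[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, k):
            f = G[r][c] / G[c][c]
            for cc in range(c, k):
                G[r][cc] -= f * G[c][cc]
            rhs[r] -= f * rhs[c]
    coef = [0.0] * k
    for c in range(k - 1, -1, -1):                # back substitution
        coef[c] = (rhs[c] - sum(G[c][cc] * coef[cc]
                                for cc in range(c + 1, k))) / G[c][c]
    return coef

def omp(A, y, k):
    """Recover a k-sparse vector: greedily add the column most correlated
    with the residual, then re-fit on the selected support."""
    m, n = len(A), len(A[0])
    support, r, coef = [], y[:], []
    for _ in range(k):
        corr = [abs(sum(A[i][j] * r[i] for i in range(m))) for j in range(n)]
        j = max((j for j in range(n) if j not in support),
                key=lambda j: corr[j])
        support.append(j)
        coef = _lstsq_cols(A, y, support)
        r = [y[i] - sum(A[i][s] * c for s, c in zip(support, coef))
             for i in range(m)]
    x = [0.0] * n
    for s, c in zip(support, coef):
        x[s] = c
    return x

A = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [3.0, 3.0, 0.0, 3.0]                          # y = 3 * (third column)
x_hat = omp(A, y, k=1)
```

The RIP conditions in the abstract are exactly the conditions under which
this greedy loop is guaranteed to pick a correct column at every iteration.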
|
1401.0579 | More Algorithms for Provable Dictionary Learning | cs.DS cs.LG stat.ML | In dictionary learning, also known as sparse coding, the algorithm is given
samples of the form $y = Ax$ where $x\in \mathbb{R}^m$ is an unknown random
sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times
m}$ (usually $m > n$, which is the overcomplete case). The goal is to learn $A$
and $x$. This problem has been studied in neuroscience, machine learning,
vision, and image processing. In practice it is solved by heuristic algorithms
and provable algorithms seemed hard to find. Recently, provable algorithms were
found that work if the unknown feature vector $x$ is $\sqrt{n}$-sparse or even
sparser. Spielman et al. \cite{DBLP:journals/jmlr/SpielmanWW12} did this for
dictionaries where $m=n$; Arora et al. \cite{AGM} gave an algorithm for
overcomplete ($m >n$) and incoherent matrices $A$; and Agarwal et al.
\cite{DBLP:journals/corr/AgarwalAN13} handled a similar case but with weaker
guarantees.
This raised the problem of designing provable algorithms that allow sparsity
$\gg \sqrt{n}$ in the hidden vector $x$. The current paper designs algorithms
that allow sparsity up to $n/poly(\log n)$. It works for a class of matrices
where features are individually recoverable, a new notion identified in this
paper that may motivate further work.
The algorithms run in quasipolynomial time because they use limited
enumeration.
|
1401.0583 | Adaptive-Rate Compressive Sensing Using Side Information | cs.CV | We provide two novel adaptive-rate compressive sensing (CS) strategies for
sparse, time-varying signals using side information. Our first method utilizes
extra cross-validation measurements, and the second one exploits extra
low-resolution measurements. Unlike the majority of current CS techniques, we
do not assume that we know an upper bound on the number of significant
coefficients that comprise the images in the video sequence. Instead, we use
the side information to predict the number of significant coefficients in the
signal at the next time instant. For each image in the video sequence, our
techniques specify a fixed number of spatially-multiplexed CS measurements to
acquire, and adjust this quantity from image to image. Our strategies are
developed in the specific context of background subtraction for surveillance
video, and we experimentally validate the proposed methods on real video
sequences.
|
1401.0615 | Message Encoding for Spread and Orbit Codes | cs.IT math.IT | Spread codes and orbit codes are special families of constant dimension
subspace codes. These codes have been well-studied for their error correction
capability and transmission rate, but the question of how to encode messages
has not been investigated. In this work we show how the message space can be
chosen for a given code and how message encoding and decoding can be done.
|
1401.0629 | Of course we share! Testing Assumptions about Social Tagging Systems | cs.IR cs.DL cs.SI | Social tagging systems have established themselves as an important part in
today's web and have attracted the interest from our research community in a
variety of investigations. The overall vision of our community is that, simply
through interactions with the system, i.e., through tagging and sharing of
resources and thanks to the easy-to-use mechanics, users would contribute to
building useful semantic structures as well as resource indexes based on
uncontrolled vocabulary. Hence, a variety of assumptions about social
tagging systems have emerged, yet testing them has been difficult due to the
absence of suitable data. In this work we thoroughly investigate three of
these assumptions - e.g., is a tagging system really social? - by examining
live log data gathered from the real-world public social tagging system
BibSonomy. Our empirical results indicate that while some of these assumptions
hold to a certain extent, other assumptions need to be reflected and viewed in
a very critical light. Our observations have implications for the design of
future search and other algorithms to better reflect the actual user behavior.
|
1401.0640 | Multi-Topic Multi-Document Summarizer | cs.CL | Current multi-document summarization systems can successfully extract summary
sentences, but with many limitations, including low coverage, inaccurate
extraction of important sentences, redundancy, and poor coherence among the
selected sentences. The present study introduces a new centroid-based concept
and reports new techniques for extracting summary sentences from multiple
documents. In both techniques, keyphrases are used to weigh sentences and
documents. The first summarization technique (Sen-Rich) prefers
maximum-richness sentences, while the second (Doc-Rich) prefers sentences from
the centroid document. To demonstrate the application of the new summarization
system to extracting summaries of Arabic documents, we performed two
experiments. First, we applied the ROUGE measure to compare the new techniques
with the systems presented at TAC2011. The results show that Sen-Rich
outperformed all systems in ROUGE-S. Second, the system was applied to
summarize multi-topic documents. Using human evaluators, the results show that
Doc-Rich is superior, with summary sentences characterized by greater coverage
and more cohesion.
|
1401.0655 | Critical Nodes In Directed Networks | cs.SI physics.soc-ph | Critical nodes or "middlemen" have an essential place in both social and
economic networks when considering the flow of information and trade. This
paper extends the concept of critical nodes to directed networks. We identify
strong and weak middlemen. Node contestability is introduced as a form of
competition in networks; a duality between uncontested intermediaries and
middlemen is established. The brokerage power of middlemen is formally
expressed and a general algorithm is constructed to measure the brokerage power
of each node from the network's adjacency matrix. Augmentations of the brokerage
power measure are discussed to encapsulate relevant centrality measures. We use
these concepts to identify and measure middlemen in two empirical
socio-economic networks, the elite marriage network of Renaissance Florence and
Krackhardt's advice network.
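One simple way to operationalize "middlemen" from the adjacency structure,
used here only as an illustrative stand-in for the paper's brokerage-power
algorithm, is to count the ordered pairs whose connection is destroyed when a
node is removed:

```python
from collections import deque

def _reach(adj, s, banned=None):
    """Nodes reachable from s along directed edges, never entering `banned`."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v != banned and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def brokerage(adj):
    """For each node v, count ordered pairs (s, t) that are connected in the
    full directed graph but disconnected once v is removed."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    full = {s: _reach(adj, s) for s in nodes}
    score = {}
    for v in nodes:
        c = 0
        for s in nodes - {v}:
            cut = _reach(adj, s, banned=v)
            c += sum(1 for t in nodes - {v, s}
                     if t in full[s] and t not in cut)
        score[v] = c
    return score

scores = brokerage({"a": ["b"], "b": ["c"]})      # chain a -> b -> c
```

In the chain a -> b -> c, only b severs a connection (a to c) when removed,
so only b gets a positive score; a node with score zero is, in this toy
sense, contestable.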
|
1401.0660 | Plurals: individuals and sets in a richly typed semantics | cs.CL | We developed a type-theoretical framework for natural language semantics
that, in addition to the usual Montagovian treatment of compositional
semantics, includes a treatment of some phenomena of lexical semantics:
coercions, meaning transfers, (in)felicitous co-predication. In this setting
we see how the various readings of plurals (collective, distributive,
coverings, ...) can be modelled.
|
1401.0670 | MRF denoising with compressed sensing and adaptive filtering | cs.IT math.IT | The recently proposed Magnetic Resonance Fingerprinting (MRF) technique can
simultaneously estimate multiple parameters through dictionary matching. It has
promising potential in a wide range of applications. However, MRF introduces
errors due to undersampling during the data acquisition process and the limited
resolution of the dictionary. In this paper, we investigate the error sources
of MRF and propose techniques to improve its quality using compressed sensing,
error prediction by decision trees, and adaptive filtering. Experimental
results support our observations and show significant improvement from the
proposed techniques.
|
1401.0689 | Machine Assisted Authentication of Paper Currency: an Experiment on
Indian Banknotes | cs.CV | This work targets the automatic authentication of paper money. Indian bank
notes are taken as a reference to show how a system can be developed for
discriminating fake notes from genuine ones. Image processing and pattern
recognition techniques are used to design the overall approach. The ability of
the embedded security features to support detection of fake currency is
thoroughly analysed. Real forensic samples are involved in the experiments,
which show that a high-precision machine can be developed for authentication of
paper money. The system performance is reported in terms of both accuracy and
processing speed. Comparison with human subjects, namely forensic experts and
bank staff, clearly shows its applicability for mass checking of currency notes
in the real world. The analysis of security features designed to deter
counterfeiting highlights some facts that should be taken care of in the future
design of currency notes.
|
1401.0708 | Quantitative methods for Phylogenetic Inference in Historical
Linguistics: An experimental case study of South Central Dravidian | cs.CL cs.AI | In this paper we examine the usefulness of two classes of algorithms, Distance
Methods and Discrete Character Methods (Felsenstein and Felsenstein 2003),
widely used in genetics, for predicting the family relationships among a set of
related languages and, therefore, diachronic language change. Applying these
algorithms to data on the numbers of shared cognates-with-change and changed
as well as unchanged cognates for a group of six languages belonging to
a Dravidian language sub-family given in Krishnamurti et al. (1983), we
observed that the resultant phylogenetic trees are largely in agreement with
the linguistic family tree constructed using the comparative method of
reconstruction with only a few minor differences. Furthermore, we studied these
minor differences and found that they were cases of genuine ambiguity even for
a well-trained historical linguist. We evaluated the trees obtained through our
experiments using a well-defined criterion and report the results here. We
finally conclude that quantitative methods like the ones we examined are quite
useful in predicting family relationships among languages. In addition, we
conclude that a modest degree of confidence attached to the intuition that
there could indeed exist a parallelism between the processes of linguistic and
genetic change is not totally misplaced.
|
1401.0711 | Computing Entropy Rate Of Symbol Sources & A Distribution-free Limit
Theorem | cs.IT cs.LG math.IT math.PR stat.CO stat.ML | Entropy rate of sequential data-streams naturally quantifies the complexity
of the generative process. Thus entropy rate fluctuations could be used as a
tool to recognize dynamical perturbations in signal sources, and could
potentially be carried out without explicit background noise characterization.
However, state-of-the-art algorithms to estimate the entropy rate have markedly
slow convergence, making such entropic approaches non-viable in practice. We
present here a fundamentally new approach to estimate entropy rates, which is
demonstrated to converge significantly faster in terms of input data lengths,
and is shown to be effective in diverse applications ranging from the
estimation of the entropy rate of English texts to the estimation of complexity
of chaotic dynamical systems. Additionally, the convergence rate of entropy
estimates does not follow from any standard limit theorem, and reported
algorithms fail to provide any confidence bounds on the computed values.
Exploiting a connection to the theory of probabilistic automata, we establish a
convergence rate of $O(\log \vert s \vert/\sqrt[3]{\vert s \vert})$ as a
function of the input length $\vert s \vert$, which then yields explicit
uncertainty estimates, as well as required data lengths to satisfy
pre-specified confidence bounds.
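For contrast with the proposed estimator, the classical slowly-converging
baseline is the block-entropy difference H_k - H_{k-1} in bits per symbol; a
minimal sketch (this is the baseline the abstract criticizes, not the
automata-based method it proposes):

```python
import math
from collections import Counter

def entropy_rate_estimate(s, k=3):
    """Naive block-entropy estimate of the entropy rate: H(k-block) minus
    H((k-1)-block), in bits per symbol."""
    def block_entropy(m):
        if m == 0:
            return 0.0
        counts = Counter(s[i:i + m] for i in range(len(s) - m + 1))
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total)
                    for c in counts.values())
    return block_entropy(k) - block_entropy(k - 1)

# a deterministic period-2 stream has entropy rate 0
rate = entropy_rate_estimate("01" * 500, k=3)
```

The weakness is visible already here: the number of distinct k-blocks grows
exponentially in k, so the estimate needs very long inputs to stabilize,
which is the slow convergence the abstract refers to.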
|
1401.0730 | What is usual in unusual videos? Trajectory snippet histograms for
discovering unusualness | cs.CV | Unusual events are important as possible indicators of undesired
consequences. Moreover, unusualness in everyday life activities may also be
amusing to watch, as proven by the popularity of such videos shared on social
media. Discovery of unusual events in videos is generally attacked as a problem
of finding usual patterns and then separating out the ones that do not resemble
them. In this study, we address the problem from the other side and try to
answer what type of patterns are shared among unusual videos that make them
resemble each other regardless of the ongoing event. With this challenging
problem at hand, we propose a novel descriptor to encode the rapid motions in
videos utilizing densely extracted trajectories. The proposed descriptor, which
is referred to as trajectory snippet histograms, is used to distinguish unusual
videos from usual videos, and further exploited to discover the snapshots in
which unusualness happens. Experiments on domain-specific people-falling videos
and unrestricted funny videos show the effectiveness of our method in capturing
unusualness.
|
1401.0733 | ConceptVision: A Flexible Scene Classification Framework | cs.CV | We introduce ConceptVision, a method that aims for high accuracy in
categorizing a large number of scenes, while keeping the model relatively
simple and efficient for scalability. The proposed method combines the advantages of
both low-level representations and high-level semantic categories, and
eliminates the distinctions between different levels through the definition of
concepts. The proposed framework encodes the perspectives brought through
different concepts by considering them in concept groups. Different
perspectives are ensembled for the final decision. Extensive experiments are
carried out on benchmark datasets to test the effects of different concepts
and of the ensembling methods used. Comparisons with state-of-the-art studies show
that we can achieve better results with incorporation of concepts in different
levels with different perspectives.
|
1401.0734 | Repairable Fountain Codes | cs.IT math.IT | We introduce a new family of Fountain codes that are systematic and also have
sparse parities. Given an input of $k$ symbols, our codes produce an unbounded
number of output symbols, generating each parity independently by linearly
combining a logarithmic number of randomly selected input symbols. The
construction guarantees that, for any $\epsilon>0$, accessing a random subset
of $(1+\epsilon)k$ encoded symbols asymptotically suffices to recover the $k$
input symbols with high probability.
Our codes have the additional benefit of logarithmic locality: a single lost
symbol can be repaired by accessing a subset of $O(\log k)$ of the remaining
encoded symbols. This is a desired property for distributed storage systems
where symbols are spread over a network of storage nodes. Beyond recovery upon
loss, local reconstruction provides an efficient alternative for reading
symbols that cannot be accessed directly. In our code, a logarithmic number of
disjoint local groups is associated with each systematic symbol, allowing
multiple parallel reads.
Our main mathematical contribution involves analyzing the rank of sparse
random matrices with specific structure over finite fields. We rely on
establishing that a new family of sparse random bipartite graphs have perfect
matchings with high probability.
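The logarithmic-degree parity generation described above can be sketched as a toy XOR construction over byte symbols (a hypothetical simplification for illustration; the paper's actual construction works over a finite field with its own degree constants):

```python
import math
import random

def make_parity(symbols, rng):
    """Generate one parity by XOR-combining a logarithmic number of
    randomly selected input symbols (toy GF(2) sketch)."""
    k = len(symbols)
    degree = max(1, math.ceil(math.log2(k)))   # logarithmic degree
    picks = rng.sample(range(k), degree)       # which input symbols to combine
    parity = 0
    for i in picks:
        parity ^= symbols[i]
    return picks, parity

rng = random.Random(0)
data = [rng.randrange(256) for _ in range(16)]  # k = 16 input symbols
picks, parity = make_parity(data, rng)
```

Because each parity touches only O(log k) input symbols, a lost symbol can be repaired by reading one such small group, which is the locality property highlighted in the abstract.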
|
1401.0742 | Data Smashing | cs.LG cs.AI cs.CE cs.IT math.IT stat.ML | Investigation of the underlying physics or biology from empirical data
requires a quantifiable notion of similarity - when do two observed data sets
indicate nearly identical generating processes, and when do they not. The
discriminating characteristics to look for in data are often determined by
heuristics designed by experts, $e.g.$, distinct shapes of "folded" lightcurves
may be used as "features" to classify variable stars, while determination of
pathological brain states might require a Fourier analysis of brainwave
activity. Finding good features is non-trivial. Here, we propose a universal
solution to this problem: we delineate a principle for quantifying similarity
between sources of arbitrary data streams, without a priori knowledge, features
or training. We uncover an algebraic structure on a space of symbolic models
for quantized data, and show that such stochastic generators may be added and
uniquely inverted; and that a model and its inverse always sum to the generator
of flat white noise. Therefore, every data stream has an anti-stream: data
generated by the inverse model. Similarity between two streams, then, is the
degree to which one, when summed to the other's anti-stream, mutually
annihilates all statistical structure to noise. We call this data smashing. We
present diverse applications, including disambiguation of brainwaves pertaining
to epileptic seizures, detection of anomalous cardiac rhythms, and
classification of astronomical objects from raw photometry. In our examples,
the data smashing principle, without access to any domain knowledge, meets or
exceeds the performance of specialized algorithms tuned by domain experts.
|
1401.0750 | An Interaction Model for Simulation and Mitigation of Cascading Failures | cs.SY physics.soc-ph | In this paper the interactions between component failures are quantified and
the interaction matrix and interaction network are obtained. The quantified
interactions can capture the general propagation patterns of the cascades from
utilities or simulation, thus helping to better understand how cascading
failures propagate and to identify key links and key components that are
crucial for cascading failure propagation. By utilizing these interactions a
high-level probabilistic model called interaction model is proposed to study
the influence of interactions on cascading failure risk and to support online
decision-making. It is much more time efficient to first quantify the
interactions between component failures with fewer original cascades from a
more detailed cascading failure model and then perform the interaction model
simulation than it is to directly simulate a large number of cascades with a
more detailed model. Interaction-based mitigation measures are suggested to
mitigate cascading failure risk by weakening key links, which can be achieved
in real systems by wide area protection such as blocking of some specific
protective relays. The proposed interaction quantifying method and interaction
model are validated with line outage data generated by the AC OPA cascading
simulations on the IEEE 118-bus system.
|
1401.0764 | Context-Aware Hypergraph Construction for Robust Spectral Clustering | cs.CV cs.LG | Spectral clustering is a powerful tool for unsupervised data analysis. In
this paper, we propose a context-aware hypergraph similarity measure (CAHSM),
which leads to robust spectral clustering in the case of noisy data. We
construct three types of hypergraph---the pairwise hypergraph, the
k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering
hypergraph. The pairwise hypergraph captures the pairwise similarity of data
points; the kNN hypergraph captures the neighborhood of each point; and the
clustering hypergraph encodes high-order contexts within the dataset. By
combining the affinity information from these three hypergraphs, the CAHSM
algorithm is able to explore the intrinsic topological information of the
dataset. Therefore, data clustering using CAHSM tends to be more robust.
Considering the intra-cluster compactness and the inter-cluster separability of
vertices, we further design a discriminative hypergraph partitioning criterion
(DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is
developed. Theoretical analysis and experimental evaluation demonstrate the
effectiveness and robustness of the proposed algorithm.
|
1401.0767 | From Kernel Machines to Ensemble Learning | cs.LG cs.CV | Ensemble methods such as boosting combine multiple learners to obtain better
prediction than could be obtained from any individual learner. Here we propose
a principled framework for directly constructing ensemble learning methods from
kernel methods. Unlike previous studies showing the equivalence between
boosting and support vector machines (SVMs), which needs a translation
procedure, we show that it is possible to design a boosting-like procedure to
solve the SVM optimization problems.
In other words, it is possible to design ensemble methods directly from SVM
without any middle procedure.
This finding not only enables us to design new ensemble learning methods
directly from kernel methods, but also makes it possible to take advantage of
those highly-optimized fast linear SVM solvers for ensemble learning.
We exemplify this framework by designing binary ensemble learning methods as
well as a new multi-class ensemble learning method.
Experimental results demonstrate the flexibility and usefulness of the
proposed framework.
|
1401.0778 | Modeling and Predicting Popularity Dynamics via Reinforced Poisson
Processes | cs.SI physics.soc-ph | An ability to predict the popularity dynamics of individual items within a
complex evolving system has important implications in an array of areas. Here
we propose a generative probabilistic framework using a reinforced Poisson
process to model explicitly the process through which individual items gain
their popularity. This model distinguishes itself from existing models via its
capability of modeling the arrival process of popularity and its remarkable
power at predicting the popularity of individual items. It possesses the
flexibility of applying Bayesian treatment to further improve the predictive
power using a conjugate prior. Extensive experiments on a longitudinal citation
dataset demonstrate that this model consistently outperforms existing
popularity prediction methods.
|
1401.0794 | Properties of phoneme N -grams across the world's language families | cs.CL stat.CO | In this article, we investigate the properties of phoneme N-grams across half
of the world's languages. We investigate if the sizes of three different N-gram
distributions of the world's language families obey a power law. Further, the
N-gram distributions of language families parallel the sizes of the families,
which seem to obey a power law distribution. The correlation between N-gram
distributions and language family sizes improves with increasing values of N.
We applied statistical tests, originally devised by physicists, to test the
hypothesis of a power-law fit to twelve different datasets. The study also raises
some new questions about the use of N-gram distributions in linguistic
research, which we answer by running a statistical test.
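A phoneme n-gram frequency distribution of the kind studied above can be computed with a simple counter (the phoneme sequence here is made up for illustration; real input would come from transcribed lexicons):

```python
from collections import Counter

def phoneme_ngrams(phonemes, n):
    """Frequency distribution of phoneme n-grams in one sequence."""
    return Counter(tuple(phonemes[i:i + n])
                   for i in range(len(phonemes) - n + 1))

# Hypothetical phoneme sequence for illustration only.
seq = ["k", "a", "t", "a", "k", "a", "t", "a"]
bigrams = phoneme_ngrams(seq, 2)
```

Aggregating such distributions per language family gives the size counts whose power-law behavior the article tests.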
|
1401.0799 | User Equilibrium Route Assignment for Microscopic Pedestrian Simulation | physics.soc-ph cs.CE cs.MA cs.RO physics.comp-ph | For the simulation of pedestrians a method is introduced to find routing
alternatives from any origin position to a given destination area in a given
geometry composed of walking areas and obstacles. The method includes a
parameter which sets a threshold for the approximate minimum size of obstacles
to generate routing alternatives. The resulting data structure for navigation
is constructed such that it does not introduce artifacts to the movement of
simulated pedestrians and that locally pedestrians prefer to walk on the
shortest path. The generated set of routes can be used with iterating static or
dynamic assignment methods.
|
1401.0802 | A stochastic model for Case-Based Reasoning | cs.AI math.PR | Case-Based Reasoning (CBR) is a recent theory of problem solving and learning
in computers and people. Broadly construed, it is the process of solving new
problems based on the solutions of similar past problems. In the present paper
we introduce an absorbing Markov chain on the main steps of the CBR process. In
this way we succeed in obtaining the probabilities for the above process to be
in a certain step at a certain phase of the solution of the corresponding
problem, and a measure for the efficiency of a CBR system. Examples are given
to illustrate our results.
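The step probabilities mentioned above follow from standard absorbing-chain theory: with transient block Q, the fundamental matrix N = (I - Q)^{-1} gives expected visit counts, and its row sums give expected steps to absorption. A minimal sketch with two hypothetical transient CBR steps (retrieve, revise) and one absorbing "solved" state (the probabilities are invented for illustration):

```python
def fundamental_matrix_2x2(Q):
    """N = (I - Q)^{-1} for a 2x2 transient block of an absorbing chain."""
    a, b = 1.0 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1.0 - Q[1][1]
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Hypothetical transition probabilities between transient steps;
# the remaining mass in each row flows to the absorbing 'solved' state.
Q = [[0.0, 0.8],
     [0.3, 0.0]]
N = fundamental_matrix_2x2(Q)
expected_steps = [sum(row) for row in N]  # expected steps until absorption
```

Row sums of N here are the kind of efficiency measure an absorbing-chain model of CBR can yield.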
|
1401.0818 | Selective Combining for Hybrid Cooperative Networks | cs.IT math.IT | In this study, we consider the selective combining in hybrid cooperative
networks (SCHCNs scheme) with one source node, one destination node and $N$
relay nodes. In the SCHCN scheme, each relay first adaptively chooses between
amplify-and-forward protocol and decode-and-forward protocol on a per frame
basis by examining the error-detecting code result, and $N_c$ ($1\leq N_c \leq
N$) relays will be selected to forward their received signals to the
destination. We first develop a signal-to-noise ratio (SNR) threshold-based
frame error rate (FER) approximation model. Then, the theoretical FER
expressions for the SCHCN scheme are derived by utilizing the proposed SNR
threshold-based FER approximation model. The analytical FER expressions are
validated through simulation results.
|
1401.0839 | Social Influences in Opinion Dynamics: the Role of Conformity | physics.soc-ph cs.SI nlin.AO | We study the effects of social influences in opinion dynamics. In particular,
we define a simple model, based on the majority rule voting, in order to
consider the role of conformity. Conformity is a central issue in social
psychology as it represents one of people's behaviors that emerges as a result
of their interactions. The proposed model represents agents, arranged in a
network and provided with an individual behavior, that change opinion as a
function of those of their neighbors. In particular, agents can behave as
conformists or as nonconformists. In the former case, agents change opinion in
accordance with the majority of their social circle (i.e., their neighbors); in
the latter case, they do the opposite, i.e., they take the minority opinion.
Moreover, we investigate the nonconformity both on a global and on a local
perspective, i.e., in relation to the whole population and to the social circle
of each nonconformist agent, respectively. We perform a computational study of
the proposed model, with the aim to observe if and how the conformity affects
the related outcomes. Moreover, we want to investigate whether it is possible
to achieve some kind of equilibrium, or of order, during the evolution of the
system. Results highlight that the amount of nonconformist agents in the
population plays a central role in these dynamics. In particular, conformist
agents play the role of stabilizers in fully-connected networks, whereas the
opposite happens in complex networks. Furthermore, by analyzing complex
topologies of the agent network, we found that in the presence of radical
nonconformist agents the topology of the system has a prominent role; otherwise
it does not matter since we observed that a conformist behavior is almost
always more convenient. Finally, we analyze the results of the model by
considering that agents can change also their behavior over time.
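The update rule described above can be sketched as one synchronous majority-rule step on a small ring (a toy instance with hypothetical parameters and a simple tie rule, not the paper's full experimental setup):

```python
def majority_step(opinions, neighbors, conformist):
    """One synchronous step: conformists adopt the local majority opinion,
    nonconformists take the local minority; on a tie the local majority is
    taken to be the agent's current opinion (toy convention)."""
    new = []
    for i, nbrs in enumerate(neighbors):
        ones = sum(opinions[j] for j in nbrs)
        if 2 * ones > len(nbrs):
            majority = 1
        elif 2 * ones < len(nbrs):
            majority = 0
        else:
            majority = opinions[i]  # tie among neighbors
        new.append(majority if conformist[i] else 1 - majority)
    return new

n = 4  # small ring of four agents
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
all_conformist = majority_step([1, 1, 1, 0], neighbors, [True] * n)
```

With all conformists the lone dissenter is absorbed in one step; making that agent a nonconformist keeps the dissenting opinion alive, the stabilizing/destabilizing effect discussed above.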
|
1401.0843 | Least Squares Policy Iteration with Instrumental Variables vs. Direct
Policy Search: Comparison Against Optimal Benchmarks Using Energy Storage | math.OC cs.LG | This paper studies approximate policy iteration (API) methods which use
least-squares Bellman error minimization for policy evaluation. We address
several of its enhancements, namely, Bellman error minimization using
instrumental variables, least-squares projected Bellman error minimization, and
projected Bellman error minimization using instrumental variables. We prove
that for a general discrete-time stochastic control problem, Bellman error
minimization using instrumental variables is equivalent to both variants of
projected Bellman error minimization. An alternative to these API methods is
direct policy search based on knowledge gradient. The practical performance of
these three approximate dynamic programming methods is then investigated in
the context of an application in energy storage, integrated with an
intermittent wind energy supply to fully serve a stochastic time-varying
electricity demand. We create a library of test problems using real-world data
and apply value iteration to find their optimal policies. These benchmarks are
then used to compare the developed policies. Our analysis indicates that API
with instrumental variables Bellman error minimization prominently outperforms
API with least-squares Bellman error minimization. However, these approaches
underperform our direct policy search implementation.
|
1401.0852 | Concave Penalized Estimation of Sparse Gaussian Bayesian Networks | stat.ME cs.LG stat.ML | We develop a penalized likelihood estimation framework to estimate the
structure of Gaussian Bayesian networks from observational data. In contrast to
recent methods which accelerate the learning problem by restricting the search
space, our main contribution is a fast algorithm for score-based structure
learning which does not restrict the search space in any way and works on
high-dimensional datasets with thousands of variables. Our use of concave
regularization, as opposed to the more popular $\ell_0$ (e.g. BIC) penalty, is
new. Moreover, we provide theoretical guarantees which generalize existing
asymptotic results when the underlying distribution is Gaussian. Most notably,
our framework does not require the existence of a so-called faithful DAG
representation, and as a result the theory must handle the inherent
nonidentifiability of the estimation problem in a novel way. Finally, as a
matter of independent interest, we provide a comprehensive comparison of our
approach to several standard structure learning methods using open-source
packages developed for the R language. Based on these experiments, we show that
our algorithm is significantly faster than other competing methods while
obtaining higher sensitivity with comparable false discovery rates for
high-dimensional data. In particular, the total runtime for our method to
generate a solution path of 20 estimates for DAGs with 8000 nodes is around one
hour.
|
1401.0858 | Multimodal Optimization by Sparkling Squid Populations | cs.NE | The swarm intelligence of animals is a natural paradigm to apply to
optimization problems. Ant colony, bee colony, firefly and bat algorithms are
amongst those that have been demonstrated to efficiently optimize complex
constrained problems. This paper proposes the new Sparkling Squid Algorithm (SSA) for
multimodal optimization, inspired by the intelligent swarm behavior of its
namesake. After an introduction, formulation and discussion of its
implementation, it will be compared to other popular metaheuristics. Finally,
applications to well-known problems such as image registration and the
traveling salesperson problem will be discussed.
|
1401.0864 | Predicting a Business Star in Yelp from Its Reviews Text Alone | cs.IR | Yelp online reviews are an invaluable source of information for users to choose
where to visit or what to eat among numerous available options. But due to the
overwhelming number of reviews, it is almost impossible for users to go through
all of them and find the information they are looking for. To provide a
business overview, one solution is to assign the business a rating of one to
five stars. This rating can be subjective and biased by users' personalities.
In this paper, we predict a business rating based on user-generated review
texts alone. This not
only provides an overview of plentiful long review texts but also cancels out
subjectivity. Selecting the restaurant category from the Yelp Dataset Challenge,
we use a combination of three feature generation methods and four machine
learning models to find the best prediction result. Our approach is to create a
bag of words from the most frequent words in all raw review texts, or from the
most frequent words/adjectives identified by Part-of-Speech (POS) analysis. Our
results show a Root Mean Square Error (RMSE) of 0.6 for Linear Regression
combined with either the most frequent words from the raw data or the most
frequent adjectives after POS tagging.
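The first feature generation step can be sketched as a plain bag-of-words over the most frequent words (a toy vectorizer on made-up reviews; the paper's pipeline additionally applies POS tagging and fits regression models on real Yelp data):

```python
from collections import Counter

def bag_of_words(reviews, vocab_size):
    """Build a vocabulary of the most frequent words across all reviews,
    then vectorize each review by word counts over that vocabulary."""
    counts = Counter(w for text in reviews for w in text.lower().split())
    vocab = [w for w, _ in counts.most_common(vocab_size)]
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for text in reviews:
        v = [0] * len(vocab)
        for w in text.lower().split():
            if w in index:
                v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

# Made-up reviews for illustration only.
reviews = ["great food great service", "food was slow", "great place"]
vocab, X = bag_of_words(reviews, 2)
```

The resulting count vectors are the kind of features a linear regressor can map to a star rating.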
|
1401.0869 | Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative
Reweighted Singular Value Minimization | math.OC cs.LG math.NA stat.CO stat.ML | In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized $vector$
minimization problem are equivalent to those of an SPQN regularized $matrix$
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized $vector$
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
|
1401.0870 | Pectoral Muscles Suppression in Digital Mammograms using Hybridization
of Soft Computing Methods | cs.CV cs.CE | Breast region segmentation is an essential prerequisite in computerized
analysis of mammograms. It aims at separating the breast tissue from the
background of the mammogram and it includes two independent segmentations. The
first segments the background region which usually contains annotations, labels
and frames from the whole breast region, while the second removes the pectoral
muscle portion (present in Medio Lateral Oblique (MLO) views) from the rest of
the breast tissue. In this paper we propose a hybridization of Connected
Component Labeling (CCL), fuzzy, and straight-line methods. Our proposed
methods work well for separating the pectoral region. After removal of the
pectoral muscle from the mammogram, further processing is confined to the breast region
alone. To demonstrate the validity of our segmentation algorithm, it is
extensively tested using over 322 mammographic images from the Mammographic
Image Analysis Society (MIAS) database. The segmentation results were evaluated
using Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand
Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The
hybridization of the fuzzy and straight-line methods yields more than 96% of
the curve segmentations rated adequate or better. In addition, a comparison with
similar state-of-the-art approaches is given, showing slightly improved
results. Experimental results demonstrate the effectiveness of the
proposed approach.
|