| id | title | categories | abstract |
|---|---|---|---|
1312.6802 | Suffix Stripping Problem as an Optimization Problem | cs.IR cs.CL | Stemming, or suffix stripping, an important part of modern Information
Retrieval systems, is the task of finding the root word (stem) of a given
cluster of words. Existing algorithms targeting this problem have been
developed in a haphazard manner. In this work, we model the task as an
optimization problem. An Integer Program is developed to overcome the
shortcomings of the existing approaches. Sample results of the proposed method
are compared with an established technique in the field for the English
language. An AMPL code for the IP is also given.
|
1312.6807 | Iterative Nearest Neighborhood Oversampling in Semisupervised Learning
from Imbalanced Data | cs.LG | Transductive graph-based semi-supervised learning methods usually build an
undirected graph utilizing both labeled and unlabeled samples as vertices.
Those methods propagate label information of labeled samples to neighbors
through their edges in order to get the predicted labels of unlabeled samples.
Most popular semi-supervised learning approaches are sensitive to the initial
label distribution of imbalanced labeled datasets. The class boundary will be
severely skewed by the majority classes in an imbalanced classification. In
this paper, we propose a simple and effective approach that alleviates the
unfavorable influence of the imbalance problem by iteratively selecting a few
unlabeled samples and adding them to the minority classes, forming a balanced
labeled dataset for the subsequent learning methods. Experiments on UCI
datasets and the MNIST handwritten digits dataset show that the proposed
approach outperforms existing state-of-the-art methods.
|
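The iterative balancing step described in the abstract above can be sketched roughly as follows. The use of Euclidean nearest-neighbour distance, the stopping rule, and all function and variable names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def balance_by_nearest_unlabeled(X_lab, y_lab, X_unlab, minority=1):
    """Sketch: repeatedly move the unlabeled sample nearest to the labeled
    minority class into the minority set until the classes are balanced."""
    X_lab = [np.asarray(x, dtype=float) for x in X_lab]
    y_lab = list(y_lab)
    X_unlab = [np.asarray(x, dtype=float) for x in X_unlab]
    # keep adding until the minority class catches up with the majority
    while (y_lab.count(minority) < len(y_lab) - y_lab.count(minority)
           and X_unlab):
        minority_pts = [x for x, y in zip(X_lab, y_lab) if y == minority]
        # distance of each unlabeled point to its nearest minority neighbour
        dists = [min(np.linalg.norm(u - m) for m in minority_pts)
                 for u in X_unlab]
        i = int(np.argmin(dists))
        X_lab.append(X_unlab.pop(i))
        y_lab.append(minority)
    return X_lab, y_lab
```

The balanced labeled set would then be handed to any transductive graph-based learner.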
1312.6808 | Socially-Aware Venue Recommendation for Conference Participants | cs.IR cs.SI | Academic conferences now feature large numbers of presentations
occurring in parallel sessions. This situation makes it difficult for
researchers (especially junior ones) to attend the right presentation
session(s) for effective collaboration. In this paper, we propose
an innovative venue recommendation algorithm to enhance smart conference
participation. Our proposed algorithm, Social Aware Recommendation of Venues
and Environments (SARVE), computes the Pearson Correlation and social
characteristic information of conference participants. SARVE further
incorporates the current context of both the smart conference community and
participants in order to model a recommendation process using distributed
community detection. Through the integration of the above computations and
techniques, we are able to recommend presentation sessions of active
participant presenters that may be of high interest to a particular
participant. We evaluate SARVE using a real world dataset. Our experimental
results demonstrate that SARVE outperforms other state-of-the-art methods.
|
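The Pearson correlation that SARVE computes between participants can be illustrated with a minimal sketch; the interest-vector representation and the function name are assumptions for illustration, not part of SARVE itself:

```python
import math

def pearson(u, v):
    """Pearson correlation between two equal-length interest/rating vectors.
    Returns a value in [-1, 1]; assumes neither vector is constant."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)
```

A recommender would combine such pairwise scores with the social-characteristic information mentioned in the abstract.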
1312.6809 | The Micro Dynamics of Collective Violence | physics.soc-ph cs.SI | Collective violence in direct confrontations between two opposing groups
happens in short bursts wherein small subgroups briefly attack small numbers of
opponents, while the others form a non-fighting audience. The mechanism is
fighters' synchronization of intentionalities during preliminary interactions,
by which they come to feel as one and overcome their fear. To explain these
bursts, the small size of the subgroups, and the role of leaders, a social
influence model and a synchronization model are compared.
|
1312.6813 | New explicit thresholding/shrinkage formulas for one class of
regularization problems with overlapping group sparsity and their
applications | math.NA cs.CV | Least-squares regression problems, or inverse problems, have been widely
studied in many fields such as compressive sensing, signal processing, and
image processing. To solve these ill-posed problems, a regularization
term (i.e., a regularizer) should be introduced, under the assumption that the
solutions have some specific properties, such as sparsity and group sparsity.
Widely used regularizers include the $\ell_1$ norm, the total variation (TV)
semi-norm, and so on.
Recently, a new regularization term with overlapping group sparsity has been
considered. Majorization-minimization iteration methods or variable duplication
methods are often applied to solve it. However, there have been no direct
methods for solving the relevant problems because of the difficulty introduced
by the overlapping. In this paper, we propose new explicit shrinkage formulas
for one class of these problems, whose regularization terms have
translation-invariant overlapping groups. Moreover, we apply our results to TV
deblurring and denoising with overlapping group sparsity, using the alternating
direction method of multipliers (ADMM) to solve the problem iteratively.
Numerical results verify the validity and effectiveness of our new explicit
shrinkage formulas.
|
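For the simpler non-overlapping case, the shrinkage operator that this line of work builds on has a well-known closed form (block soft-thresholding). This sketch shows that baseline operator only, not the paper's new formulas for overlapping groups:

```python
import numpy as np

def group_shrink(v, lam):
    """Block soft-thresholding: the closed-form minimizer of
    0.5*||x - v||^2 + lam*||x||_2 over a single (non-overlapping) group.
    Scales v toward zero by lam relative to its Euclidean norm."""
    nrm = np.linalg.norm(v)
    if nrm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / nrm) * v
```

Inside an ADMM loop, such a shrinkage step is applied once per group per iteration.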
1312.6820 | A Fast Greedy Algorithm for Generalized Column Subset Selection | cs.DS cs.LG stat.ML | This paper defines a generalized column subset selection problem which is
concerned with the selection of a few columns from a source matrix A that best
approximate the span of a target matrix B. The paper then proposes a fast
greedy algorithm for solving this problem and draws connections to different
problems that can be efficiently solved using the proposed algorithm.
|
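A naive version of the greedy selection described above can be sketched as follows; it recomputes the full least-squares residual at every step rather than using any fast recursive update, and all names are illustrative:

```python
import numpy as np

def greedy_gcss(A, B, k):
    """Greedily pick k column indices of A whose span best approximates the
    columns of B, i.e. approximately minimizing ||B - P_S B||_F."""
    selected = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            S = A[:, selected + [j]]
            # project B onto span(S) via least squares and measure the residual
            coef, *_ = np.linalg.lstsq(S, B, rcond=None)
            err = np.linalg.norm(B - S @ coef)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
    return selected
```

This brute-force variant costs a full least-squares solve per candidate column, which is exactly the overhead a fast greedy algorithm would aim to avoid.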
1312.6826 | 3D Interest Point Detection via Discriminative Learning | cs.CV | The task of detecting the interest points in 3D meshes has typically been
handled by geometric methods. These methods, while often capturing human
preference well, can be ill-equipped for handling the variety and subjectivity in
human responses. Different tasks have different requirements for interest point
detection; some tasks may necessitate high precision while other tasks may
require high recall. Sometimes points with high curvature may be desirable,
while in other cases high curvature may be an indication of noise. Geometric
methods lack the required flexibility to adapt to such changes. As a
consequence, interest point detection seems to be well suited for machine
learning methods that can be trained to match the criteria applied on the
annotated training data. In this paper, we formulate interest point detection
as a supervised binary classification problem using a random forest as our
classifier. Among other challenges, we are faced with an imbalanced learning
problem due to the substantial difference in the priors between interest and
non-interest points. We address this by re-sampling the training set. We
validate the accuracy of our method and compare our results to those of five
state of the art methods on a new, standard benchmark.
|
1312.6832 | The Value Iteration Algorithm is Not Strongly Polynomial for Discounted
Dynamic Programming | cs.AI math.OC | This note provides a simple example demonstrating that, if exact computations
are allowed, the number of iterations required for the value iteration
algorithm to find an optimal policy for discounted dynamic programming problems
may grow arbitrarily quickly with the size of the problem. In particular, the
number of iterations can be exponential in the number of actions. Thus, unlike
policy iteration, the value iteration algorithm is not strongly polynomial for
discounted dynamic programming.
|
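For reference, the value iteration algorithm whose iteration count the note analyzes can be sketched for a finite discounted MDP; the array layout, tolerance, and names are assumptions, not taken from the note:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P: (A, S, S) transition probabilities, R: (A, S) expected rewards.
    Returns the (approximately) optimal values and a greedy policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)      # action-values, shape (A, S)
        V_new = Q.max(axis=0)        # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

Each sweep contracts the error by a factor gamma, which is exactly why the number of sweeps, rather than the per-sweep cost, drives the note's complexity argument.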
1312.6834 | Face Detection from still and Video Images using Unsupervised Cellular
Automata with K means clustering algorithm | cs.CV | Pattern recognition problems rely upon the features inherent in image
patterns. Face detection and recognition is one of the challenging research areas
in the field of computer vision. In this paper, we present a method to identify
skin pixels from still and video images using skin color. Face regions are
identified from this skin pixel region. Facial features such as eyes, nose and
mouth are then located. Faces are recognized from color images using an RBF
based neural network. Unsupervised Cellular Automata with K means clustering
algorithm is used to locate different facial elements. Orientation is corrected
by using eyes. Parameters like inter eye distance, nose length, mouth position,
Discrete Cosine Transform (DCT) coefficients etc. are computed and used for a
Radial Basis Function (RBF) based neural network. This approach works reliably
for face sequences with variations in head orientation, expression, etc.
|
1312.6838 | Greedy Column Subset Selection for Large-scale Data Sets | cs.DS cs.LG | In today's information systems, the availability of massive amounts of data
necessitates the development of fast and accurate algorithms to summarize these
data and represent them in a succinct format. One crucial problem in big data
analytics is the selection of representative instances from large and
massively-distributed data, which is formally known as the Column Subset
Selection (CSS) problem. The solution to this problem enables data analysts to
gain insight into the data and explore its hidden structure. The
selected instances can also be used for data preprocessing tasks such as
learning a low-dimensional embedding of the data points or computing a low-rank
approximation of the corresponding matrix. This paper presents a fast and
accurate greedy algorithm for large-scale column subset selection. The
algorithm minimizes an objective function which measures the reconstruction
error of the data matrix based on the subset of selected columns. The paper
first presents a centralized greedy algorithm for column subset selection which
depends on a novel recursive formula for calculating the reconstruction error
of the data matrix. The paper then presents a MapReduce algorithm which selects
a few representative columns from a matrix whose columns are massively
distributed across several commodity machines. The algorithm first learns a
concise representation of all columns using random projection, and it then
solves a generalized column subset selection problem at each machine, in which
a subset of columns is selected from the sub-matrix on that machine such that
the reconstruction error of the concise representation is minimized. The paper
demonstrates the effectiveness and efficiency of the proposed algorithm through
an empirical evaluation on benchmark data sets.
|
1312.6843 | Separating signal from noise | math.PR cs.IT math.CA math.IT math.ST stat.TH | Suppose that a sequence of numbers $x_n$ (a `signal') is transmitted through
a noisy channel. The receiver observes a noisy version of the signal with
additive random fluctuations, $x_n + \xi_n$, where $\xi_n$ is a sequence of
independent standard Gaussian random variables. Suppose further that the signal
is known to come from some fixed space of possible signals. Is it possible to
fully recover the transmitted signal from its noisy version? Is it possible to
at least detect that a non-zero signal was transmitted?
In this paper we consider the case in which signals are infinite sequences
and the recovery or detection are required to hold with probability one. We
provide conditions on the signal space for checking whether detection or
recovery are possible. We also analyze in detail several examples including
spaces of Fourier transforms of measures, spaces with fixed amplitudes and the
space of almost periodic functions. Many of our examples exhibit critical
phenomena, in which a sharp transition is made from a regime in which recovery
is possible to a regime in which even detection is impossible.
|
1312.6849 | Speech Recognition Front End Without Information Loss | cs.CL cs.CV cs.LG | Speech representation and modelling in high-dimensional spaces of acoustic
waveforms, or a linear transformation thereof, is investigated with the aim of
improving the robustness of automatic speech recognition to additive noise. The
motivation behind this approach is twofold: (i) the information in acoustic
waveforms that is usually removed in the process of extracting low-dimensional
features might aid robust recognition by virtue of structured redundancy
analogous to channel coding, (ii) linear feature domains allow for exact noise
adaptation, as opposed to representations that involve non-linear processing
which makes noise adaptation challenging. Thus, we develop a generative
framework for phoneme modelling in high-dimensional linear feature domains, and
use it in phoneme classification and recognition tasks. Results show that
classification and recognition in this framework perform better than analogous
PLP and MFCC classifiers below 18 dB SNR. A combination of the high-dimensional
and MFCC features at the likelihood level performs uniformly better than either
of the individual representations across all noise levels.
|
1312.6872 | Matrix recovery using Split Bregman | cs.NA cs.LG | In this paper we address the problem of recovering a matrix, with inherent
low rank structure, from its lower dimensional projections. This problem is
frequently encountered in wide range of areas including pattern recognition,
wireless sensor networks, control systems, recommender systems, image/video
reconstruction, etc. Both in theory and practice, the optimal way to solve
the low rank matrix recovery problem is via nuclear norm minimization. In this
paper, we propose a Split Bregman algorithm for nuclear norm minimization. The
use of the Bregman technique improves the convergence speed of our algorithm
and gives a higher success rate. The accuracy of reconstruction is also much
better, even in cases where only a small number of linear measurements are available.
Our claim is supported by empirical results obtained using our algorithm and
its comparison to other existing methods for matrix recovery. The algorithms
are compared on the basis of NMSE, execution time and success rate for varying
ranks and sampling ratios.
|
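A key building block of nuclear norm minimization schemes such as Split Bregman is the singular value thresholding (proximal) step. This minimal sketch shows only that standard closed-form step, not the paper's full algorithm:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*
    at Y, obtained by soft-thresholding the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - tau, 0.0)     # soft-threshold each singular value
    return (U * s) @ Vt              # rebuild with shrunken spectrum
```

In a Bregman or ADMM iteration, this operator is applied once per outer step, interleaved with a data-fidelity update.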
1312.6875 | Refinement of the random coding bound | cs.IT math.IT | An improved pre-factor for the random coding bound is proved. Specifically,
for channels with critical rate not equal to capacity, if a regularity
condition is satisfied (resp. not satisfied), then for any $\epsilon >0$ a
pre-factor of $O(N^{-\frac{1}{2}\left( 1 - \epsilon + \bar{\rho}^\ast_R
\right)})$ (resp. $O(N^{-\frac{1}{2}})$) is achievable for rates above the
critical rate, where $N$ and $R$ are the blocklength and rate, respectively. The
extra term $\bar{\rho}^\ast_R$ is related to the slope of the random coding
exponent. Further, the relation of these bounds with the authors' recent
refinement of the sphere-packing bound, as well as the pre-factor for the
random coding bound below the critical rate, is discussed.
|
1312.6885 | Deep learning for class-generic object detection | cs.CV cs.LG cs.NE | We investigate the use of deep neural networks for the novel task of class
generic object detection. We show that neural networks originally designed for
image recognition can be trained to detect objects within images, regardless of
their class, including objects for which no bounding box labels have been
provided. In addition, we show that bounding box labels yield a 1% performance
increase on the ImageNet recognition challenge.
|
1312.6911 | QoS-Aware User Association for Load Balancing in Heterogeneous Cellular
Networks | cs.IT cs.NI math.IT | To address the low capacity in hot-spots and the coverage holes of
conventional cellular networks, base stations (BSs) with lower transmit power
are deployed to form heterogeneous cellular networks (HetNets). However,
because of these disparate-power BSs, the user distribution among them becomes
fairly unbalanced unless an appropriate user association scheme is provided.
To tackle this problem effectively, we jointly consider the load of each BS
and the user's achievable rate, instead of only the latter, when designing an
association algorithm, and formulate the task as a network-wide weighted
utility maximization problem. Note that the load mentioned above relates to
the amount of required subbands determined by actual rate requirements, i.e.,
QoS, rather than the number of associated users, so it reflects each user's
actual load level. For the proposed problem, we give a maximum-probability
(max-probability) algorithm obtained by relaxing variables, as well as a
low-complexity distributed algorithm that provides a near-optimal solution
with a theoretical performance guarantee. Experimental results show that,
compared with the association strategy advocated by Ye, our strategy has a
faster convergence rate, a lower call blocking probability and a higher load
balancing level.
|
1312.6918 | Data Offloading in Load Coupled Networks: A Utility Maximization
Framework | cs.IT cs.NI math.IT | We provide a general framework for the problem of data offloading in a
heterogeneous wireless network, where some demand of cellular users is served
by a complementary network. The complementary network is either a small-cell
network that shares the same resources as the cellular network, or a WiFi
network that uses orthogonal resources. For a given demand served in a cellular
network, the load, or the level of resource usage, of each cell depends in a
non-linear manner on the load of other cells due to the mutual coupling of
interference seen by one another. With load coupling, we optimize the demand to
be served in the cellular or the complementary networks, so as to maximize a
utility function. We consider three representative utility functions that
balance, to varying degrees, the revenue from serving the users vs the user
fairness. We establish conditions for which the optimization problem has a
feasible solution and is convex, and hence tractable to numerical computations.
Finally, we propose a strategy with theoretical justification to constrain the
load to some maximum value, as required for practical implementation. Numerical
studies are conducted for both under-loaded and over-loaded networks.
|
1312.6927 | Structure Analysis on the $k$-error Linear Complexity for $2^n$-periodic
Binary Sequences | cs.CR cs.IT math.IT | In this paper, in order to characterize the critical error linear complexity
spectrum (CELCS) for $2^n$-periodic binary sequences, we first propose a
decomposition based on the cube theory. Based on the proposed $k$-error cube
decomposition, and the famous inclusion-exclusion principle, we obtain the
complete characterization of $i$th descent point (critical point) of the
k-error linear complexity for $i=2,3$. Second, by using the sieve method and
Games-Chan algorithm, we characterize the second descent point (critical point)
distribution of the $k$-error linear complexity for $2^n$-periodic binary
sequences. As a consequence, we obtain the complete counting functions on the
$k$-error linear complexity of $2^n$-periodic binary sequences as the second
descent point for $k=3,4$. This is the first time the second and third descent
points have been completely characterized. In fact, the proposed constructive
approach has the potential to be used for constructing $2^n$-periodic binary
sequences with a given linear complexity and $k$-error linear complexity (or
CELCS), which is a challenging problem that deserves further investigation.
|
1312.6931 | Multiple routes transmitted epidemics on multiplex networks | cs.SI physics.soc-ph | This letter investigates epidemic processes transmitted via multiple routes
on multiplex networks. We develop a detailed theoretical analysis that allows
us to accurately calculate the epidemic threshold and outbreak size. It is
found that the epidemic can spread across the multiplex network even if all
the network layers are well below their respective epidemic thresholds. Strong
positive degree-degree correlation of nodes in the multiplex network can lead
to a much lower epidemic threshold and a relatively smaller outbreak size. However, the
average similarity of neighbors from different layers of nodes has no obvious
effect on the epidemic threshold and outbreak size.
|
1312.6934 | Hardware and logic implementation of multiple alarm system for GSM BTS
rooms | cs.SY | Cellular communication has become the major mode of communication in the
present century, and its growth has in turn accelerated the process of
globalization. The development of cellular communication largely depends on
the improvement and stability of Base Transceiver Station (BTS) rooms, and a
large number of BTS rooms have therefore been installed throughout the world.
To ensure proper support from BTS rooms, there must be a security system to
avoid any unnecessary vulnerability. A multiple alarm system is therefore
designed to secure the BTS rooms from any undesired circumstances. This system
is designed with a PIC microcontroller as the main controller, with several
sensors interfaced to it to provide a high temperature alarm, smoke alarm,
door alarm and water alarm. All these alarms are interfaced with the alarm box
in the BTS room, which provides the current status directly to the Network
Management Centre (NMC) of a Global System for Mobile (GSM) communication
network.
|
1312.6936 | The performance evaluation of IEEE 802.16 physical layer in the basis of
bit error rate considering reference channel models | cs.NI cs.IT math.IT | Fixed Broadband Wireless Access is a promising technology which can offer
high-speed data rates from the transmitting end to the customer end, carrying
high-speed text, voice, and video data. IEEE 802.16 WirelessMAN is a standard
that specifies the medium access control layer and a set of PHY layers for
fixed and mobile BWA in a broad range of frequencies, and it appeals to
equipment manufacturers due to its robust performance in multipath
environments. Consequently, the WiMAX forum has adopted this version to
develop networks worldwide. In this paper, the performance of the IEEE 802.16
OFDM PHY layer is investigated using a simulation model in Matlab. The
Stanford University Interim (SUI) channel models are selected for the
performance evaluation of this standard. Ideal channel estimation is assumed
in this work, and the performance is evaluated on the basis of BER.
|
1312.6945 | Quantum Ensemble Classification: A Sampling-based Learning Control
Approach | quant-ph cs.SY | Quantum ensemble classification has significant applications in
discrimination of atoms (or molecules), separation of isotopic molecules and
quantum information extraction. However, quantum mechanics forbids
deterministic discrimination among nonorthogonal states. The classification of
inhomogeneous quantum ensembles is very challenging since there exist
variations in the parameters characterizing the members within different
classes. In this paper, we recast quantum ensemble classification as a
supervised quantum learning problem. A systematic classification methodology is
presented by using a sampling-based learning control (SLC) approach for quantum
discrimination. The classification task is accomplished via simultaneously
steering members belonging to different classes to their corresponding target
states (e.g., mutually orthogonal states). Firstly a new discrimination method
is proposed for two similar quantum systems. Then an SLC method is presented
for quantum ensemble classification. Numerical results demonstrate the
effectiveness of the proposed approach for the binary classification of
two-level quantum ensembles and the multiclass classification of multilevel
quantum ensembles.
|
1312.6947 | Formal Ontology Learning on Factual IS-A Corpus in English using
Description Logics | cs.CL cs.AI | Ontology Learning (OL) is the computational task of generating a knowledge
base in the form of an ontology given an unstructured corpus whose content is
in natural language (NL). Several works can be found in this area, most of
which are limited to statistical and lexico-syntactic pattern matching based
techniques (Light-Weight OL). These techniques do not lead to very accurate
learning, mostly because of several linguistic nuances in NL. Formal OL is an
alternative (less explored) methodology wherein deep linguistic analysis is
performed using theory and tools from computational linguistics to generate
formal axioms and definitions instead of simply inducing a taxonomy. In this
paper we propose a Description Logic (DL) based formal OL framework for
learning factual IS-A type sentences in English. We claim that the semantic
construction of IS-A sentences is non-trivial, and hence that such sentences
require special study in the context of OL before any truly formal OL can be
proposed. We introduce a learner tool, called DLOL_IS-A, that generates such
ontologies in the OWL format. We adopted "Gold Standard" based OL evaluation
on the IS-A-rich WCL v.1.1 dataset and our own community representative IS-A
dataset. We observed significant improvement of DLOL_IS-A when compared to the
light-weight OL tool Text2Onto and the formal OL tool FRED.
|
1312.6948 | Description Logics based Formalization of Wh-Queries | cs.CL cs.AI | The problem of Natural Language Query Formalization (NLQF) is to translate a
given user query in natural language (NL) into a formal language so that the
semantic interpretation has equivalence with the NL interpretation.
Formalization of NL queries enables logic based reasoning during information
retrieval, database query, question-answering, etc. Formalization also helps in
Web query normalization and indexing, query intent analysis, etc. In this paper
we propose a Description Logics based formal methodology for wh-query
intent (also called desire) identification and the corresponding formal
translation. We evaluated the scalability of our proposed formalism using
Microsoft Encarta 98 query dataset and OWL-S TC v.4.0 dataset.
|
1312.6949 | Joint Phase Tracking and Channel Decoding for OFDM Physical-Layer
Network Coding | cs.IT math.IT | This paper investigates the problem of joint phase tracking and channel
decoding in OFDM based Physical-layer Network Coding (PNC) systems. OFDM
signaling can obviate the need for tight time synchronization among multiple
simultaneous transmissions in the uplink of PNC systems. However, OFDM PNC
systems are susceptible to phase drifts caused by residual carrier frequency
offsets (CFOs). In the traditional OFDM system in which a receiver receives
from only one transmitter, pilot tones are employed to aid phase tracking. In
OFDM PNC systems, multiple transmitters transmit to a receiver, and these pilot
tones must be shared among the multiple transmitters. This reduces the number
of pilots that can be used by each transmitting node. Phase tracking in OFDM
PNC is more challenging as a result. To overcome the degradation due to the
reduced number of per-node pilots, this work supplements the pilots with the
channel information contained in the data. In particular, we propose to solve
the problems of phase tracking and channel decoding jointly. Our solution
consists of the use of the expectation-maximization (EM) algorithm for phase
tracking and the use of the belief propagation (BP) algorithm for channel
decoding. The two problems are solved jointly through iterative processing
between the EM and BP algorithms. Simulations and real experiments based on
software-defined radio show that the proposed method can improve phase tracking
as well as channel decoding performance.
|
1312.6956 | Joint segmentation of multivariate time series with hidden process
regression for human activity recognition | stat.ML cs.LG | The problem of human activity recognition is central for understanding and
predicting the human behavior, in particular in a prospective of assistive
services to humans, such as health monitoring, well being, security, etc. There
is therefore a growing need to build accurate models which can take into
account the variability of the human activities over time (dynamic models)
rather than static ones which can have some limitations in such a dynamic
context. In this paper, the problem of activity recognition is analyzed through
the segmentation of the multidimensional time series of the acceleration data
measured in the 3-d space using body-worn accelerometers. The proposed model
for automatic temporal segmentation is a specific statistical latent process
model which assumes that the observed acceleration sequence is governed by a
sequence of hidden (unobserved) activities. More specifically, the proposed
approach is based on a specific multiple regression model incorporating a
hidden discrete logistic process which governs the switching from one activity
to another over time. The model is learned in an unsupervised context by
maximizing the observed-data log-likelihood via a dedicated
expectation-maximization (EM) algorithm. We applied it on a real-world
automatic human activity recognition problem and its performance was assessed
by performing comparisons with alternative approaches, including well-known
supervised static classifiers and the standard hidden Markov model (HMM). The
obtained results are very encouraging and show that the proposed approach is
quite competitive even though it works in an entirely unsupervised way and
does not require a feature extraction preprocessing step.
|
1312.6962 | Subjectivity Classification using Machine Learning Techniques for Mining
Feature-Opinion Pairs from Web Opinion Sources | cs.IR cs.CL cs.LG | With the flourishing of Web 2.0, web opinion sources are rapidly emerging,
containing precious information useful for both customers and manufacturers.
Recently, feature-based opinion mining techniques have been gaining momentum,
in which customer reviews are processed automatically to mine product features
and the user opinions expressed over them. However, customer reviews may
contain both opinionated and factual sentences. Filtering out factual content
improves mining performance by preventing noisy and irrelevant extractions. In
this paper, a combination of supervised machine learning and rule-based
approaches is proposed for mining feasible feature-opinion pairs from
subjective review sentences. In the first phase of the proposed approach, a
supervised machine learning technique is applied for classifying subjective and
objective sentences from customer reviews. In the next phase, a rule based
method is implemented which applies linguistic and semantic analysis of texts
to mine feasible feature-opinion pairs from subjective sentences retained after
the first phase. The effectiveness of the proposed methods is established
through experimentation over customer reviews on different electronic products.
|
1312.6965 | An Unsupervised Approach for Automatic Activity Recognition based on
Hidden Markov Model Regression | stat.ML cs.CV cs.LG | Using supervised machine learning approaches to recognize human activities
from on-body wearable accelerometers generally requires a large amount of
labelled data. When ground truth information is not available, too expensive,
time consuming or difficult to collect, one has to rely on unsupervised
approaches. This paper presents a new unsupervised approach for human activity
recognition from raw acceleration data measured using inertial wearable
sensors. The proposed method is based upon joint segmentation of
multidimensional time series using a Hidden Markov Model (HMM) in a multiple
regression context. The model is learned in an unsupervised framework using the
Expectation-Maximization (EM) algorithm where no activity labels are needed.
The proposed method takes into account the sequential appearance of the data.
It is therefore adapted for the temporal acceleration data to accurately detect
the activities. It allows both segmentation and classification of the human
activities. Experimental results are provided to demonstrate the efficiency of
the proposed approach with respect to standard supervised and unsupervised
classification approaches.
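As a toy illustration of the segmentation side of such models (not the authors' HMM regression learned by EM), the following sketch decodes a two-regime activity signal with a hand-parameterized Gaussian-emission HMM via the Viterbi algorithm; all parameter values here are made up for the example:

```python
import numpy as np

def viterbi_gaussian(x, means, stds, trans, init):
    """Most likely state sequence for a 1-D signal under a Gaussian-emission HMM."""
    K, T = len(means), len(x)
    # log emission probabilities, shape (K, T)
    logB = -0.5 * ((x[None, :] - means[:, None]) / stds[:, None]) ** 2 \
           - np.log(stds[:, None] * np.sqrt(2.0 * np.pi))
    logA = np.log(trans)
    delta = np.log(init) + logB[:, 0]
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA          # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, t]
    path = np.empty(T, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(T - 2, -1, -1):              # backtrack the best predecessors
        path[t] = psi[t + 1, path[t + 1]]
    return path

# toy acceleration magnitude: low-activity regime followed by a high-activity regime
x = np.concatenate([np.zeros(50), 3.0 * np.ones(50)])
path = viterbi_gaussian(x, means=np.array([0.0, 3.0]), stds=np.array([1.0, 1.0]),
                        trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                        init=np.array([0.5, 0.5]))
```

The decoded path switches state exactly where the signal changes regime, which is the segmentation behavior the abstract describes.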
|
1312.6966 | Model-based functional mixture discriminant analysis with hidden process
regression for curve classification | stat.ME cs.LG math.ST stat.ML stat.TH | In this paper, we study the modeling and the classification of functional
data presenting regime changes over time. We propose a new model-based
functional mixture discriminant analysis approach based on a specific hidden
process regression model that governs the regime changes over time. Our
approach is particularly adapted to handle the problem of complex-shaped
classes of curves, where each class is potentially composed of several
sub-classes, and to deal with the regime changes within each homogeneous
sub-class. The proposed model explicitly integrates the heterogeneity of each
class of curves via a mixture model formulation, and the regime changes within
each sub-class through a hidden logistic process. Each class of complex-shaped
curves is modeled by a finite number of homogeneous clusters, each of them
being decomposed into several regimes. The model parameters of each class are
learned by maximizing the observed-data log-likelihood by using a dedicated
expectation-maximization (EM) algorithm. Comparisons are performed with
alternative curve classification approaches, including functional linear
discriminant analysis and functional mixture discriminant analysis with
polynomial regression mixtures and spline regression mixtures. Results obtained
on simulated data and real data show that the proposed approach outperforms the
alternative approaches in terms of discrimination, and significantly improves
the curve approximation.
|
1312.6967 | Model-based clustering and segmentation of time series with changes in
regime | stat.ME cs.LG math.ST stat.ML stat.TH | Mixture model-based clustering, usually applied to multidimensional data, has
become a popular approach in many data analysis problems, both for its good
statistical properties and for the simplicity of implementation of the
Expectation-Maximization (EM) algorithm. Within the context of a railway
application, this paper introduces a novel mixture model for dealing with time
series that are subject to changes in regime. The proposed approach consists in
modeling each cluster by a regression model in which the polynomial
coefficients vary according to a discrete hidden process. In particular, this
approach makes use of logistic functions to model the (smooth or abrupt)
transitions between regimes. The model parameters are estimated by the maximum
likelihood method solved by an Expectation-Maximization algorithm. The proposed
approach can also be regarded as a clustering approach which operates by
finding groups of time series having common changes in regime. In addition to
providing a time series partition, it therefore provides a time series
segmentation. The problem of selecting the optimal numbers of clusters and
segments is solved by means of the Bayesian Information Criterion (BIC). The
proposed approach is shown to be efficient using a variety of simulated time
series and real-world time series of electrical power consumption from rail
switching operations.
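The BIC trade-off used here to select the numbers of clusters and segments can be illustrated in its simplest form, polynomial order selection for a single regression; this sketch is not the paper's model, and the data are synthetic:

```python
import numpy as np

def bic_poly(t, y, degree):
    """BIC of a polynomial regression fit: n*log(RSS/n) + k*log(n)."""
    coeffs = np.polyfit(t, y, degree)
    rss = np.sum((np.polyval(coeffs, t) - y) ** 2)
    n, k = len(y), degree + 2                   # polynomial coefficients + noise variance
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * t - 3.0 * t**2 + 0.05 * rng.standard_normal(200)  # true degree: 2
# BIC balances fit quality against the log(n)-weighted parameter count
best = min(range(1, 7), key=lambda d: bic_poly(t, y, d))
```

The same criterion, applied over grids of cluster and segment counts, is what resolves the model-selection problem mentioned in the abstract.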
|
1312.6968 | A hidden process regression model for functional data description.
Application to curve discrimination | stat.ME cs.LG stat.ML | A new approach for functional data description is proposed in this paper. It
consists of a regression model with a discrete hidden logistic process which is
adapted for modeling curves with abrupt or smooth regime changes. The model
parameters are estimated in a maximum likelihood framework through a dedicated
Expectation Maximization (EM) algorithm. From the proposed generative model, a
curve discrimination rule is derived using the Maximum A Posteriori rule. The
proposed model is evaluated using simulated curves and real world curves
acquired during railway switch operations, by performing comparisons with the
piecewise regression approach in terms of curve modeling and classification.
|
1312.6969 | Time series modeling by a regression approach based on a latent process | stat.ME cs.LG math.ST stat.ML stat.TH | Time series are used in many domains including finance, engineering,
economics and bioinformatics generally to represent the change of a measurement
over time. Modeling techniques may then be used to give a synthetic
representation of such data. A new approach for time series modeling is
proposed in this paper. It consists of a regression model incorporating a
discrete hidden logistic process allowing for activating smoothly or abruptly
different polynomial regression models. The model parameters are estimated by
the maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process
parameters. To evaluate the proposed approach, an experimental study on
simulated data and real world data was performed using two alternative
approaches: a heteroskedastic piecewise regression model using a global
optimization algorithm based on dynamic programming, and a Hidden Markov
Regression Model whose parameters are estimated by the Baum-Welch algorithm.
Finally, in the context of the remote monitoring of components of the French
railway infrastructure, and more particularly the switch mechanism, the
proposed approach has been applied to modeling and classifying time series
representing the condition measurements acquired during switch operations.
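A minimal sketch of the core idea, a logistic process switching between polynomial regimes, is given below; it fixes the logistic gate by hand and performs only the weighted least-squares fits of the regimes (the actual method also estimates the gate parameters by IRLS inside EM):

```python
import numpy as np

def fit_gated_polynomials(t, y, gate_center, gate_slope, degree=1):
    """Fit two polynomial regimes by weighted least squares, given a fixed
    logistic gate pi(t) = P(regime 2 at time t)."""
    pi = 1.0 / (1.0 + np.exp(-gate_slope * (t - gate_center)))
    X = np.vander(t, degree + 1)
    coeffs = []
    for w in (1.0 - pi, pi):                     # regime 1, then regime 2
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        coeffs.append(beta)
    # the model's prediction blends the two regimes through the gate
    yhat = (1.0 - pi) * (X @ coeffs[0]) + pi * (X @ coeffs[1])
    return coeffs, yhat

t = np.linspace(0.0, 1.0, 200)
y = np.where(t < 0.5, 1.0, 3.0)                  # abrupt regime change at t = 0.5
coeffs, yhat = fit_gated_polynomials(t, y, gate_center=0.5, gate_slope=100.0)
```

A steep gate slope reproduces an abrupt transition, a shallow one a smooth transition, which is exactly the flexibility the abstract attributes to the logistic process.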
|
1312.6974 | Piecewise regression mixture for simultaneous functional data clustering
and optimal segmentation | stat.ME cs.LG math.ST stat.ML stat.TH | This paper introduces a novel mixture model-based approach for simultaneous
clustering and optimal segmentation of functional data which are curves
presenting regime changes. The proposed model consists in a finite mixture of
piecewise polynomial regression models. Each piecewise polynomial regression
model is associated with a cluster, and within each cluster, each piecewise
polynomial component is associated with a regime (i.e., a segment). We derive
two approaches for learning the model parameters. The former is an estimation
approach and consists in maximizing the observed-data likelihood via a
dedicated expectation-maximization (EM) algorithm. A fuzzy partition of the
curves in K clusters is then obtained at convergence by maximizing the
posterior cluster probabilities. The latter however is a classification
approach and optimizes a specific classification likelihood criterion through a
dedicated classification expectation-maximization (CEM) algorithm. The optimal
curve segmentation is performed by using dynamic programming. In the
classification approach, both the curve clustering and the optimal segmentation
are performed simultaneously as the CEM learning proceeds. We show that the
classification approach is the probabilistic version that generalizes the
deterministic K-means-like algorithm proposed in H\'ebrail et al. (2010). The
proposed approach is evaluated using simulated curves and real-world curves.
Comparisons with alternatives including regression mixture models and the
K-means-like algorithm for piecewise regression demonstrate the effectiveness
of the proposed approach.
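The dynamic-programming step for optimal curve segmentation can be sketched as follows for the simplest piecewise-constant case; the segment model, the data, and the cubic-time cost table are simplifications for illustration:

```python
import numpy as np

def optimal_segmentation(y, n_segments):
    """Dynamic programming: split y into contiguous segments minimising the
    total squared error around each segment's mean (piecewise-constant fit)."""
    n = len(y)
    sse = np.full((n + 1, n + 1), np.inf)        # sse[i, j]: cost of one segment y[i:j]
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = y[i:j]
            sse[i, j] = np.sum((seg - seg.mean()) ** 2)
    dp = np.full((n_segments + 1, n + 1), np.inf)
    arg = np.zeros((n_segments + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(1, n + 1):
            costs = dp[k - 1, :j] + sse[:j, j]   # best split point for the last segment
            arg[k, j] = int(costs.argmin())
            dp[k, j] = costs[arg[k, j]]
    bounds, j = [n], n                           # backtrack the change points
    for k in range(n_segments, 0, -1):
        j = arg[k, j]
        bounds.append(j)
    return dp[n_segments, n], bounds[::-1]

y = np.concatenate([np.zeros(20), np.ones(20), 4.0 * np.ones(20)])
cost, bounds = optimal_segmentation(y, 3)        # bounds -> [0, 20, 40, 60], cost -> 0.0
```

In the paper this DP runs over piecewise polynomial fits inside each cluster, rather than the global piecewise-constant fit shown here.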
|
1312.6978 | Mod\`ele \`a processus latent et algorithme EM pour la r\'egression non
lin\'eaire | math.ST cs.LG stat.ME stat.ML stat.TH | A non linear regression approach which consists of a specific regression
model incorporating a latent process, allowing various polynomial regression
models to be activated preferentially and smoothly, is introduced in this
paper. The model parameters are estimated by maximum likelihood performed via a
dedicated expectation-maximization (EM) algorithm. An experimental study using
simulated and real data sets reveals good performances of the proposed
approach.
|
1312.6994 | A regression model with a hidden logistic process for signal
parametrization | stat.ME cs.LG stat.ML | A new approach for signal parametrization, which consists of a specific
regression model incorporating a discrete hidden logistic process, is proposed.
The model parameters are estimated by the maximum likelihood method performed
by a dedicated Expectation Maximization (EM) algorithm. The parameters of the
hidden logistic process, in the inner loop of the EM algorithm, are estimated
using a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
|
1312.6995 | Towards Using Unlabeled Data in a Sparse-coding Framework for Human
Activity Recognition | cs.LG cs.AI stat.ML | We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm we automatically derive an
over-complete set of basis vectors from unlabeled data that captures inherent
patterns present within activity data. Through projecting raw sensor data onto
the feature space defined by such over-complete sets of basis vectors effective
feature extraction is pursued. Given these learned feature representations,
classification backends are then trained using small amounts of labeled
training data.
We study the new approach in detail using two datasets which differ in terms
of the recognition tasks and sensor modalities. Primarily we focus on the
transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms
state-of-the-art supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domain and sensor modalities by
considering the popular Opportunity dataset. Our feature learning approach
outperforms state-of-the-art approaches to analyzing activities in daily
living.
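A minimal stand-in for the sparse-coding step, coding a signal against an over-complete set of basis vectors, can be sketched with orthogonal matching pursuit; this is a generic greedy coder on synthetic data, not the self-taught learning pipeline of the paper:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedily select dictionary atoms and
    refit their coefficients by least squares."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef      # re-fit, then update the residual
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 120))
D /= np.linalg.norm(D, axis=0)                   # over-complete dictionary: 120 atoms in R^50
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]               # a 2-sparse signal in that dictionary
code = omp(D, x, n_nonzero=2)
```

Projecting raw sensor windows onto such sparse codes is the feature-extraction step the abstract describes; the classification backend is then trained on the codes.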
|
1312.6996 | A New Approach to Constraint Weight Learning for Variable Ordering in
CSPs | cs.AI | A Constraint Satisfaction Problem (CSP) is a framework used for modeling and
solving constrained problems. Tree-search algorithms like backtracking try to
construct a solution to a CSP by selecting the variables of the problem one
after another. The order in which these algorithms select the variables can
have a significant impact on search performance. Various heuristics have been
proposed for choosing a good variable ordering. Many powerful variable
ordering heuristics weigh the constraints first and then utilize the weights
for selecting a good order of the variables. Constraint weights are basically
employed to identify global bottlenecks in a CSP.
In this paper, we propose a new approach for learning weights for the
constraints using competitive coevolutionary Genetic Algorithm (GA). Weights
learned by the coevolutionary GA later help to make better choices for the
first few variables in a search. In the competitive coevolutionary GA,
constraints and candidate solutions for a CSP evolve together through an
inverse fitness interaction process. We have conducted experiments on several
random, quasi-random and patterned instances to measure the efficiency of the
proposed approach. The results and analysis show that the proposed approach is
good at learning weights to distinguish the hard constraints for quasi-random
instances and forced satisfiable random instances generated with the Model RB.
For other types of instances, RNDI still seems to be the best approach as our
experiments show.
|
1312.7001 | A regression model with a hidden logistic process for feature extraction
from time series | stat.ME cs.LG math.ST stat.ML stat.TH | A new approach for feature extraction from time series is proposed in this
paper. This approach consists of a specific regression model incorporating a
discrete hidden logistic process. The model parameters are estimated by the
maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The parameters of the hidden logistic process, in the inner
loop of the EM algorithm, are estimated using a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm. A piecewise regression algorithm and
its iterative variant have also been considered for comparisons. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
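The multi-class IRLS mentioned for the inner loop reduces, in the two-class case, to Newton's method for logistic regression; the following numpy sketch on synthetic data shows that core update, not the paper's full EM:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Binary logistic regression by IRLS (Newton's method):
    beta <- beta + (X'WX)^{-1} X'(y - p) with W = diag(p(1-p))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])  # tiny ridge for stability
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])   # intercept + one feature
true_beta = np.array([-0.5, 2.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta = irls_logistic(X, y)
```

In the paper the same Newton-style reweighting is run in multi-class form to estimate the hidden logistic process parameters at each EM iteration.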
|
1312.7003 | Supervised learning of a regression model based on latent process.
Application to the estimation of fuel cell life time | stat.ML cs.LG stat.AP | This paper describes a pattern recognition approach aiming to estimate fuel
cell duration time from electrochemical impedance spectroscopy measurements. It
consists in first extracting features from both the real and imaginary parts
of the impedance spectrum. A parametric model is considered in the case of the
real part, whereas a regression model with latent variables is used in the
latter case. Then, a linear regression model using different subsets of the
extracted features is used for the estimation of the fuel cell duration time.
The performance of the proposed approach is evaluated on an experimental data
set to show its feasibility. This could lead to interesting perspectives for a
predictive maintenance policy of fuel cells.
|
1312.7006 | A Convex Formulation for Mixed Regression with Two Components: Minimax
Optimal Rates | stat.ML cs.IT cs.LG math.IT | We consider the mixed regression problem with two components, under
adversarial and stochastic noise. We give a convex optimization formulation
that provably recovers the true solution, and provide upper bounds on the
recovery errors for both arbitrary noise and stochastic noise settings. We also
give matching minimax lower bounds (up to log factors), showing that under
certain assumptions, our algorithm is information-theoretically optimal. Our
results represent the first tractable algorithm guaranteeing successful
recovery with tight bounds on recovery errors and sample complexity.
|
1312.7007 | Functional Mixture Discriminant Analysis with hidden process regression
for curve classification | stat.ME cs.LG stat.ML | We present a new mixture model-based discriminant analysis approach for
functional data using a specific hidden process regression model. The approach
allows for fitting flexible curve-models to each class of complex-shaped curves
presenting regime changes. The model parameters are learned by maximizing the
observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data with
alternative approaches show that the proposed approach provides better results.
|
1312.7018 | Mixture model-based functional discriminant analysis for curve
classification | stat.ME cs.LG stat.ML | Statistical approaches for Functional Data Analysis concern the paradigm for
which the individuals are functions or curves rather than finite dimensional
vectors. In this paper, we particularly focus on the modeling and the
classification of functional data which are temporal curves presenting regime
changes over time. More specifically, we propose a new mixture model-based
discriminant analysis approach for functional data using a specific hidden
process regression model. Our approach is particularly adapted to both handle
the problem of complex-shaped classes of curves, where each class is composed
of several sub-classes, and to deal with the regime changes within each
homogeneous sub-class. The model explicitly integrates the heterogeneity of
each class of curves via a mixture model formulation, and the regime changes
within each sub-class through a hidden logistic process. The approach allows
therefore for fitting flexible curve-models to each class of complex-shaped
curves presenting regime changes through an unsupervised learning scheme, to
automatically summarize it into a finite number of homogeneous clusters, each
of which is decomposed into several regimes. The model parameters are learned by
maximizing the observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data and real
data with alternative approaches, including functional linear discriminant
analysis and functional mixture discriminant analysis with polynomial
regression mixtures and spline regression mixtures, show that the proposed
approach provides better discrimination results and significantly improves
the curve approximation.
|
1312.7022 | Robust EM algorithm for model-based curve clustering | stat.ME cs.LG stat.ML | Model-based clustering approaches concern the paradigm of exploratory data
analysis relying on the finite mixture model to automatically find a latent
structure governing observed data. They are one of the most popular and
successful approaches in cluster analysis. The mixture density estimation is
generally performed by maximizing the observed-data log-likelihood by using the
expectation-maximization (EM) algorithm. However, it is well-known that the EM
algorithm initialization is crucial. In addition, the standard EM algorithm
requires the number of clusters to be known a priori. Some solutions have been
provided in [31, 12] for model-based clustering with Gaussian mixture models
for multivariate data. In this paper we focus on model-based curve clustering
approaches, when the data are curves rather than vectorial data, based on
regression mixtures. We propose a new robust EM algorithm for clustering
curves. We extend the model-based clustering approach presented in [31] for
Gaussian mixture models, to the case of curve clustering by regression
mixtures, including polynomial regression mixtures as well as spline or
B-spline regressions mixtures. Our approach both handles the problem of
initialization and the one of choosing the optimal number of clusters as the EM
learning proceeds, rather than in a two-fold scheme. This is achieved by
optimizing a penalized log-likelihood criterion. A simulation study confirms
the potential benefit of the proposed algorithm in terms of robustness
regarding initialization and finding the actual number of clusters.
|
1312.7024 | Model-based clustering with Hidden Markov Model regression for time
series with regime changes | stat.ML cs.LG stat.ME | This paper introduces a novel model-based clustering approach for clustering
time series which present changes in regime. It consists of a mixture of
polynomial regressions governed by hidden Markov chains. The underlying hidden
process for each cluster activates successively several polynomial regimes
during time. The parameter estimation is performed by the maximum likelihood
method through a dedicated Expectation-Maximization (EM) algorithm. The
proposed approach is evaluated using simulated time series and real-world time
series issued from a railway diagnosis application. Comparisons with existing
approaches for time series clustering, including the standard EM for Gaussian
mixtures, $K$-means clustering, the standard mixture of regression models and
mixture of Hidden Markov Models, demonstrate the effectiveness of the proposed
approach.
|
1312.7035 | Shape-constrained Estimation of Value Functions | math.PR cs.CE math.OC stat.ML | We present a fully nonparametric method to estimate the value function, via
simulation, in the context of expected infinite-horizon discounted rewards for
Markov chains. Estimating such value functions plays an important role in
approximate dynamic programming and applied probability in general. We
incorporate "soft information" into the estimation algorithm, such as knowledge
of convexity, monotonicity, or Lipschitz constants. In the presence of such
information, a nonparametric estimator for the value function can be computed
that is provably consistent as the simulated time horizon tends to infinity. As
an application, we implement our method on price tolling agreement contracts in
energy markets.
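One concrete instance of exploiting such "soft information" is monotonicity, which can be imposed by the classic pool-adjacent-violators algorithm; this is the one-dimensional special case only, not the paper's simulation-based estimator:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares projection of y onto
    non-decreasing sequences (a monotone shape constraint)."""
    blocks = []                                  # each block: [mean, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out.extend([m] * c)
    return np.array(out)

y_iso = pava([3.0, 1.0, 2.0, 5.0, 4.0])          # -> [2.0, 2.0, 2.0, 4.5, 4.5]
```

Applied to noisy simulated value estimates indexed by state, this kind of projection yields a monotone estimate consistent with the known structure.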
|
1312.7036 | Deriving Latent Social Impulses to Determine Longevous Videos | cs.SI cs.MM | Online video websites receive huge amounts of videos daily from users all
around the world. How to provide valuable recommendations to viewers is an
important task for both video websites and related third parties, such as
search engines. Previous work conducted numerous analysis on the view counts of
videos, which measure a video's value in terms of popularity. However, the
long-lasting value of an online video, namely longevity, is hidden behind the
history that a video accumulates its "popularity" through time. Generally
speaking, a longevous video tends to constantly draw society's attention. With
focus on one of the leading video websites, Youtube, this paper proposes a
scoring mechanism quantifying a video's longevity. Evaluating a video's
longevity can not only improve a video recommender system, but also help us to
discover videos having greater advertising value, as well as adjust a video
website's strategy of storing videos to shorten its responding time. In order
to accurately quantify longevity, we introduce the concept of latent social
impulses and how to use them to measure a video's longevity. In order to derive
latent social impulses, we view the video website as a digital signal filter
and formulate the task as a convex minimization problem. The proposed longevity
computation is based on the derived social impulses. Unfortunately, the
required information to derive social impulses is not always public, which
makes a third party unable to directly evaluate every video's longevity. To
solve this problem, we formulate a semi-supervised learning task by using part
of videos having known longevity scores to predict the unknown longevity
scores. We propose a Gaussian Markov Random Field model with Loopy Belief Propagation
to solve this problem. The conducted experiments on Youtube demonstrate that
the proposed method significantly improves the prediction results compared to
the baselines.
|
1312.7039 | A Primal Dual Active Set Algorithm with Continuation for Compressed
Sensing | cs.IT math.IT math.OC | The success of compressed sensing relies essentially on the ability to
efficiently find an approximately sparse solution to an under-determined linear
system. In this paper, we developed an efficient algorithm for the sparsity
promoting $\ell_1$-regularized least squares problem by coupling the primal
dual active set strategy with a continuation technique (on the regularization
parameter). In the active set strategy, we first determine the active set from
primal and dual variables, and then update the primal and dual variables by
solving a low-dimensional least-squares problem on the active set, which makes
the algorithm very efficient. The continuation technique globalizes the
convergence of the algorithm, with provable global convergence under restricted
isometry property (RIP). Further, we adopt two alternative methods, i.e., a
modified discrepancy principle and a Bayesian information criterion, to choose
the regularization parameter. Numerical experiments indicate that our algorithm
is very competitive with state-of-the-art algorithms in terms of accuracy and
efficiency.
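The paper's primal dual active set solver is not reproduced here, but the underlying l1-regularized least-squares problem and the continuation-on-the-regularization-parameter idea can be sketched with a standard proximal-gradient (ISTA) baseline on synthetic compressed-sensing data:

```python
import numpy as np

def ista(A, b, lam, x0=None, n_iter=300):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # under-determined sensing matrix
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.5, -2.0, 1.0]            # 3-sparse ground truth
b = A @ x_true
# continuation: solve for a decreasing sequence of lambdas, warm-starting each solve
x = None
for lam in [0.5, 0.1, 0.02, 0.004]:
    x = ista(A, b, lam, x0=x)
```

Warm-starting along the regularization path mirrors the globalizing role of the continuation technique in the abstract, although the paper's active set updates are a different (and faster) inner solver.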
|
1312.7050 | Nash Equilibrium Computation in Subnetwork Zero-Sum Games with Switching
Communications | cs.SY cs.GT math.OC | In this paper, we investigate a distributed Nash equilibrium computation
problem for a time-varying multi-agent network consisting of two subnetworks,
where the two subnetworks share the same objective function. We first propose a
subgradient-based distributed algorithm with heterogeneous stepsizes to compute
a Nash equilibrium of a zero-sum game. We then prove that the proposed
algorithm can achieve a Nash equilibrium under uniformly jointly strongly
connected (UJSC) weight-balanced digraphs with homogeneous stepsizes. Moreover,
we demonstrate that for weight-unbalanced graphs a Nash equilibrium may not
be achieved with homogeneous stepsizes unless certain conditions on the
objective function hold. We show that there always exist heterogeneous
stepsizes for the proposed algorithm to guarantee that a Nash equilibrium can
be achieved for UJSC digraphs. Finally, in two standard weight-unbalanced
cases, we verify the convergence to a Nash equilibrium by adaptively updating
the stepsizes along with the arc weights in the proposed algorithm.
|
1312.7056 | Development of Display Ads Retrieval System to Match Publisher's
Contents | cs.CY cs.IR | The technological transformation and automation of digital content delivery
has revolutionized the media industry. The advertising landscape is gradually
shifting from its traditional media forms toward the emergence of Internet
advertising. In this paper, the types of Internet advertising discussed are
contextual and sponsored search ads. These types of advertising have the
central challenge of finding the best match between a given context and a
suitable advertisement, through a principled method. Furthermore, there are
four main players that exist in the Internet advertising ecosystem: users,
advertisers, ad exchange and publishers. Hence, to find ways to counter the
central challenge, the paper addresses two objectives: how to select the best
contextual ads to match a web page's content, and how to ensure that there is a
valuable connection between the web page and the contextual ads. Methods,
discussion, conclusions and future recommendations are presented in the
corresponding sections. Finally, in order to demonstrate the working mechanism
of matching contextual ads to web pages, a prototype comprising web pages and
an ad-matching system is developed.
|
1312.7076 | A Consensus-Focused Group Recommender System | cs.HC cs.IR cs.SI | In many cases, recommendations are consumed by groups of users rather than
individuals. In this paper, we present a system which recommends social events
to groups. The system helps groups to organize a joint activity and
collectively select which activity to perform among several possible options.
We also facilitate the consensus making, following the principle of group
consensus decision making. Our system allows users to asynchronously vote, add
and comment on alternatives. We observe social influence within groups through
post-recommendation feedback during the group decision making process. We
propose a decision cascading model and estimate such social influence, which
can be used to improve the performance of group recommendation. We conduct
experiments to measure the prediction performance of our model. The results
show that the model achieves better performance than the independent
decision-making model.
|
1312.7077 | Language Modeling with Power Low Rank Ensembles | cs.CL cs.LG stat.ML | We present power low rank ensembles (PLRE), a flexible framework for n-gram
language modeling where ensembles of low rank matrices and tensors are used to
obtain smoothed probability estimates of words in context. Our method can be
understood as a generalization of n-gram modeling to non-integer n, and
includes standard techniques such as absolute discounting and Kneser-Ney
smoothing as special cases. PLRE training is efficient and our approach
outperforms state-of-the-art modified Kneser-Ney baselines in terms of
perplexity on large corpora as well as on BLEU score in a downstream machine
translation task.
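The absolute-discounting special case mentioned above can be written down directly; this sketch implements a plain bigram model with absolute discounting and unigram back-off (the classical technique PLRE subsumes, not the low-rank ensembles themselves):

```python
from collections import Counter

def bigram_absolute_discount(tokens, d=0.75):
    """p(w | v) = max(c(v,w) - d, 0)/c(v) + (d * N1+(v)/c(v)) * c(w)/N,
    i.e. absolute discounting with back-off to the unigram distribution."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    ctx, n_follow = Counter(), Counter()
    for (v, w), c in bigrams.items():
        ctx[v] += c                              # tokens observed after context v
        n_follow[v] += 1                         # distinct continuations of v
    total = len(tokens)
    def prob(w, v):
        if ctx[v] == 0:                          # unseen context: pure unigram estimate
            return unigrams[w] / total
        discounted = max(bigrams[(v, w)] - d, 0.0) / ctx[v]
        backoff_mass = d * n_follow[v] / ctx[v]  # probability mass freed by discounting
        return discounted + backoff_mass * unigrams[w] / total
    return prob

text = "the cat sat on the mat the cat ran".split()
p = bigram_absolute_discount(text)
```

The discounted mass plus the redistributed back-off mass sums to one over the vocabulary, which is the normalization property smoothing schemes must preserve.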
|
1312.7085 | Finding More Relevance: Propagating Similarity on Markov Random Field
for Image Retrieval | cs.CV | Effectively retrieving objects from a large corpus with high accuracy is a
challenging task. In this paper, we propose a method that propagates visual
feature level similarities on a Markov random field (MRF) to obtain a high
level correspondence in image space for image pairs. The proposed
correspondence between an image pair reflects not only the similarity of low-level
visual features but also the relations built through other images in the
database and it can be easily integrated into the existing
bag-of-visual-words (BoW) based systems to reduce the missing rate. We evaluate
our method on the standard Oxford-5K, Oxford-105K and Paris-6K datasets. The
experiment results show that the proposed method significantly improves the
retrieval accuracy on three datasets and exceeds the current state-of-the-art
retrieval performance.
|
1312.7135 | Multihop Backhaul Compression for the Uplink of Cloud Radio Access
Networks | cs.IT math.IT | In cloud radio access networks (C-RANs), the baseband processing of the radio
units (RUs) is migrated to remote control units (CUs). This is made possible by
a network of backhaul links that connects RUs and CUs and that carries
compressed baseband signals. While prior work has focused mostly on single-hop
backhaul networks, this paper investigates efficient backhaul compression
strategies for the uplink of C-RANs with a general multihop backhaul topology.
A baseline multiplex-and-forward (MF) scheme is first studied in which each RU
forwards the bit streams received from the connected RUs without any
processing. It is observed that this strategy may cause significant performance
degradation in the presence of a dense deployment of RUs with a well connected
backhaul network. To obviate this problem, a scheme is proposed in which each
RU decompresses the received bit streams and performs linear in-network
processing of the decompressed signals. For both the MF and the
decompress-process-and-recompress (DPR) backhaul schemes, the optimal design is
addressed with the aim of maximizing the sum-rate under the backhaul capacity
constraints. Recognizing the significant demands of the optimal solution of the
DPR scheme in terms of channel state information (CSI) at the RUs,
decentralized optimization algorithms are proposed under the assumption of
limited CSI at the RUs. Numerical results are provided to compare the
performance of the MF and DPR schemes, highlighting the potential advantage of
in-network processing and the impact of CSI limitations.
|
1312.7145 | Some remarks on spatial uniformity of solutions of reaction-diffusion
PDE's and a related synchronization problem for ODE's | cs.SY math.AP | In this note, we present a condition which guarantees spatial uniformity for
the asymptotic behavior of the solutions of a reaction-diffusion PDE with
Neumann boundary conditions in one dimension, using the Jacobian matrix of the
reaction term and the first Dirichlet eigenvalue of the Laplacian operator on
the given spatial domain. We also derive an analog of this PDE result for the
synchronization of a network of identical ODE models coupled by diffusion
terms.
|
1312.7165 | Spatially embedded growing small-world networks | physics.soc-ph cond-mat.stat-mech cs.SI | Networks in nature are often formed within a spatial domain in a dynamical
manner, gaining links and nodes as they develop over time. We propose a class
of spatially-based growing network models and investigate the relationship
between the resulting statistical network properties and the dimension and
topology of the space in which the networks are embedded. In particular, we
consider models in which nodes are placed one by one in random locations in
space, with each such placement followed by configuration relaxation toward
uniform node density, and connection of the new node with spatially nearby
nodes. We find that such growth processes naturally result in networks with
small-world features, including a short characteristic path length and nonzero
clustering. These properties do not appear to depend strongly on the topology
of the embedding space, but do depend strongly on its dimension;
higher-dimensional spaces result in shorter path lengths but less clustering.
|
1312.7167 | Near-separable Non-negative Matrix Factorization with $\ell_1$- and
Bregman Loss Functions | stat.ML cs.CV cs.LG | Recently, a family of tractable NMF algorithms has been proposed under the
assumption that the data matrix satisfies a separability condition (Donoho &
Stodden, 2003; Arora et al., 2012). Geometrically, this condition reformulates
the NMF problem as that of finding the extreme rays of the conical hull of a
finite set of vectors. In this paper, we develop several extensions of the
conical hull procedures of Kumar et al. (2013) for robust ($\ell_1$)
approximations and Bregman divergences. Our methods inherit all the advantages
of Kumar et al. (2013) including scalability and noise-tolerance. We show that
on foreground-background separation problems in computer vision, robust
near-separable NMFs match the performance of Robust PCA, considered state of
the art on these problems, with an order of magnitude faster training time. We
also demonstrate applications in exemplar selection settings.
|
1312.7179 | Sub-Classifier Construction for Error Correcting Output Code Using
Minimum Weight Perfect Matching | cs.LG cs.IT math.IT | Multi-class classification is essential for real-world problems, and one of the
promising techniques for multi-class classification is the Error Correcting Output
Code. We propose a method for constructing the Error Correcting Output Code to
obtain the suitable combination of positive and negative classes encoded to
represent binary classifiers. The minimum weight perfect matching algorithm is
applied to find the optimal pairs of subset of classes by using the
generalization performance as a weighting criterion. Based on our method, each
subset of classes with positive and negative labels is appropriately combined
for learning the binary classifiers. Experimental results show that our
technique gives significantly higher performance compared to traditional
methods including the dense random code and the sparse random code both in
terms of accuracy and classification time. Moreover, our method requires a
significantly smaller number of binary classifiers than One-Versus-One while
maintaining accuracy.
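As a hedged illustration of the decoding side of ECOC (the paper's matching-based construction of the code columns is not reproduced), the sketch below decodes by nearest codeword in Hamming distance; the 4-class code matrix is a made-up example:

```python
import numpy as np

# Hypothetical 4-class code matrix (rows = classes, columns = binary
# classifiers with +1/-1 labels); the paper's matching-based construction
# of the columns is not reproduced here.
CODE = np.array([
    [+1, +1, +1],
    [+1, -1, -1],
    [-1, +1, -1],
    [-1, -1, +1],
])

def ecoc_decode(bit_predictions, code=CODE):
    """Return the class whose codeword is nearest in Hamming distance to the
    vector of binary-classifier outputs."""
    preds = np.asarray(bit_predictions)
    hamming = np.sum(code != preds, axis=1)
    return int(np.argmin(hamming))
```

The quality of the column subsets (which classes are grouped as positive vs. negative in each column) is exactly what the paper optimizes via minimum weight perfect matching.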
|
1312.7191 | Special values of Kloosterman sums and binomial bent functions | cs.IT math.IT | Let $p\ge 7$, $q=p^m$. $K_q(a)=\sum_{x\in \mathbb{F}_{p^m}}
\zeta^{\mathrm{Tr}^m_1(x^{p^m-2}+ax)}$ is the Kloosterman sum of $a$ on
$\mathbb{F}_{p^m}$, where $\zeta=e^{\frac{2\pi\sqrt{-1}}{p}}$. The value
$1-\frac{2}{\zeta+\zeta^{-1}}$ of $K_q(a)$ and its conjugate are closely
related to a class of binomial functions with Dillon exponents. This paper
first presents some necessary conditions for $a$ such that
$K_q(a)=1-\frac{2}{\zeta+\zeta^{-1}}$. Further, we prove that if $p=11$, for
any $a$, $K_q(a)\neq 1-\frac{2}{\zeta+\zeta^{-1}}$. Moreover, for $p\ge 13$, if $a\in
\mathbb{F}_{p^s}$ and $s=\mathrm{gcd}(2,m)$, $K_q(a)\neq
1-\frac{2}{\zeta+\zeta^{-1}}$. As an application, these results explain why
certain classes of binomial regular bent functions do not exist.
|
1312.7198 | Opportunistic Downlink Interference Alignment | cs.IT math.IT | In this paper, we propose an opportunistic downlink interference alignment
(ODIA) for interference-limited cellular downlink, which intelligently combines
user scheduling and downlink IA techniques. The proposed ODIA not only
efficiently reduces the effect of inter-cell interference from other-cell base
stations (BSs) but also eliminates intra-cell interference among spatial
streams in the same cell. We show that the minimum number of users required to
achieve a target degrees of freedom (DoF) can be fundamentally reduced, i.e.,
the fundamental user scaling law can be improved by using the ODIA, compared
with the existing downlink IA schemes. In addition, we adopt a limited feedback
strategy in the ODIA framework, and then analyze the required number of
feedback bits leading to the same performance as that of the ODIA assuming
perfect feedback. We also modify the original ODIA in order to further improve
sum-rate, which achieves the optimal multiuser diversity gain, i.e., $\log \log
N$, per spatial stream even in the presence of downlink inter-cell
interference, where $N$ denotes the number of users in a cell. Simulation
results show that the ODIA significantly outperforms existing interference
management techniques in terms of sum-rate in realistic cellular environments.
Note that the ODIA operates in a distributed and decoupled manner, while
requiring no information exchange among BSs and no iterative beamformer
optimization between BSs and users, thus leading to an easier implementation.
|
1312.7219 | Combining persistent homology and invariance groups for shape comparison | math.AT cs.CG cs.CV | In many applications concerning the comparison of data expressed by
$\mathbb{R}^m$-valued functions defined on a topological space $X$, the
invariance with respect to a given group $G$ of self-homeomorphisms of $X$ is
required. While persistent homology is quite efficient in the topological and
qualitative comparison of this kind of data when the invariance group $G$ is
the group $\mathrm{Homeo}(X)$ of all self-homeomorphisms of $X$, this theory is
not tailored to manage the case in which $G$ is a proper subgroup of
$\mathrm{Homeo}(X)$, and its invariance appears too general for several tasks.
This paper proposes a way to adapt persistent homology in order to get
invariance just with respect to a given group of self-homeomorphisms of $X$.
The main idea consists in a dual approach, based on considering the set of all
$G$-invariant non-expanding operators defined on the space of the admissible
filtering functions on $X$. Some theoretical results concerning this approach
are proven and two experiments are presented. An experiment illustrates the
application of the proposed technique to compare 1D-signals, when the
invariance is expressed by the group of affinities, the group of
orientation-preserving affinities, the group of isometries, the group of
translations and the identity group. Another experiment shows how our technique
can be used for image comparison.
|
1312.7223 | Quality Estimation of English-Hindi Outputs using Naive Bayes Classifier | cs.CL | In this paper we present an approach for estimating the quality of a machine
translation system. There are various methods for estimating the quality of
output sentences, but in this paper we focus on a Na\"ive Bayes classifier to
build a model using features extracted from the input sentences. These
features are used for finding the likelihood of each of the sentences of the
training data which are then further used for determining the scores of the
test data. On the basis of these scores we determine the class labels of the
test data.
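A minimal Gaussian Naive Bayes sketch in the spirit of the approach; the sentence feature extraction is not reproduced, and the two-feature toy data in the test are hypothetical:

```python
import math

def fit_gaussian_nb(X, y):
    """Tiny Gaussian Naive Bayes: per class, store the prior and per-feature
    mean/variance; features are treated as conditionally independent."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, variances)
    return model

def predict(model, x):
    """Return the class label with the highest log-posterior."""
    def log_post(c):
        prior, means, variances = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, variances))
        return math.log(prior) + ll
    return max(model, key=log_post)
```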
|
1312.7249 | Maximum Coverage and Maximum Connected Covering in Social Networks with
Partial Topology Information | cs.SI physics.soc-ph | Viral marketing campaigns seek to recruit the most influential individuals to
cover the largest target audience. This can be modeled as the well-studied
maximum coverage problem. There is a related problem when the recruited nodes
are connected. It is called the maximum connected cover problem. This problem
ensures a strong coordination between the influential nodes which are the
backbone of the marketing campaign. In this work, we are interested in both of
these problems. Most of the related literature assumes knowledge of the
topology of the network; even in that case, the problem is known to be NP-hard.
We propose heuristics for the maximum connected cover problem and
the maximum coverage problem with different levels of knowledge about the topology
of the network. We quantify the difference between these heuristics and the
local and global greedy algorithms.
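The global greedy baseline mentioned above can be sketched as the classical greedy heuristic for maximum coverage (the paper's partial-topology heuristics are not reproduced):

```python
def greedy_max_coverage(sets, k):
    """Classical greedy for maximum coverage: pick, k times, the set covering
    the most still-uncovered elements; a (1 - 1/e)-approximation with full
    topology knowledge. `sets` maps a candidate node to the audience it
    would cover."""
    covered, chosen = set(), []
    remaining = dict(sets)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining, key=lambda i: len(remaining[i] - covered))
        if not (remaining[best] - covered):
            break                      # no set adds anything new
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```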
|
1312.7258 | Active Discovery of Network Roles for Predicting the Classes of Network
Nodes | cs.LG cs.SI stat.ML | Nodes in real world networks often have class labels, or underlying
attributes, that are related to the way in which they connect to other nodes.
Sometimes this relationship is simple; for instance, nodes of the same class
may be more likely to be connected. In other cases, however, this is not true,
and the way that nodes link in a network exhibits a different, more complex
relationship to their attributes. Here, we consider networks in which we know
how the nodes are connected, but we do not know the class labels of the nodes
or how class labels relate to the network links. We wish to identify the best
subset of nodes to label in order to learn this relationship between node
attributes and network links. We can then use this discovered relationship to
accurately predict the class labels of the rest of the network nodes.
We present a model that identifies groups of nodes with similar link
patterns, which we call network roles, using a generative blockmodel. The model
then predicts labels by learning the mapping from network roles to class labels
using a maximum margin classifier. We choose a subset of nodes to label
according to an iterative margin-based active learning strategy. By integrating
the discovery of network roles with the classifier optimisation, the active
learning process can adapt the network roles to better represent the network
for node classification. We demonstrate the model by exploring a selection of
real world networks, including a marine food web and a network of English
words. We show that, in contrast to other network classifiers, this model
achieves good classification accuracy for a range of networks with different
relationships between class labels and network links.
|
1312.7292 | Two Timescale Convergent Q-learning for Sleep--Scheduling in Wireless
Sensor Networks | cs.SY cs.LG | In this paper, we consider an intrusion detection application for Wireless
Sensor Networks (WSNs). We study the problem of scheduling the sleep times of
the individual sensors to maximize the network lifetime while keeping the
tracking error to a minimum. We formulate this problem as a
partially-observable Markov decision process (POMDP) with continuous
state-action spaces, in a manner similar to (Fuemmeler and Veeravalli [2008]).
However, unlike their formulation, we consider infinite horizon discounted and
average cost objectives as performance criteria. For each criterion, we propose
a convergent on-policy Q-learning algorithm that operates on two timescales,
while employing function approximation to handle the curse of dimensionality
associated with the underlying POMDP. Our proposed algorithm incorporates a
policy gradient update using a one-simulation simultaneous perturbation
stochastic approximation (SPSA) estimate on the faster timescale, while the
Q-value parameter (arising from a linear function approximation for the
Q-values) is updated in an on-policy temporal difference (TD) algorithm-like
fashion on the slower timescale. The feature selection scheme employed in each
of our algorithms manages the energy and tracking components in a manner that
assists the search for the optimal sleep-scheduling policy. For the sake of
comparison, in both discounted and average settings, we also develop a function
approximation analogue of the Q-learning algorithm. This algorithm, unlike the
two-timescale variant, does not possess theoretical convergence guarantees.
Finally, we also adapt our algorithms to include a stochastic iterative
estimation scheme for the intruder's mobility model. Our simulation results on
a 2-dimensional network setting suggest that our algorithms result in better
tracking accuracy at the cost of only a few additional sensors, in comparison
to a recent prior work.
|
1312.7302 | Learning Human Pose Estimation Features with Convolutional Networks | cs.CV cs.LG cs.NE | This paper introduces a new architecture for human pose estimation using a
multi-layer convolutional network and a modified learning
technique that learns low-level features and higher-level weak spatial models.
Unconstrained human pose estimation is one of the hardest problems in computer
vision, and our new architecture and learning schema shows significant
improvement over the current state-of-the-art results. The main contribution of
this paper is showing, for the first time, that a specific variation of deep
learning is able to outperform all existing traditional architectures on this
task. The paper also discusses several lessons learned while researching
alternatives, most notably, that it is possible to learn strong low-level
feature detectors on features that might even just cover a few pixels in the
image. Higher-level spatial models improve the overall result somewhat, but to
a much lesser extent than expected. Many researchers previously argued that
kinematic structure and top-down information are crucial for this domain, but
with our purely bottom-up, weak spatial model, we could improve upon more
complicated architectures that currently produce the best results. This mirrors
what researchers in speech recognition, object recognition, and other domains
have experienced.
|
1312.7308 | lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits | stat.ML cs.LG | The paper proposes a novel upper confidence bound (UCB) procedure for
identifying the arm with the largest mean in a multi-armed bandit game in the
fixed confidence setting using a small number of total samples. The procedure
cannot be improved in the sense that the number of samples required to identify
the best arm is within a constant factor of a lower bound based on the law of
the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence
bounds to explicitly account for the infinite time horizon of the algorithm. In
addition, by using a novel stopping time for the algorithm we avoid a union
bound over the arms that has been observed in other UCB-type algorithms. We
prove that the algorithm is optimal up to constants and also show through
simulations that it provides superior performance with respect to the
state-of-the-art.
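A hedged sketch of a LIL-flavoured UCB loop: the confidence radius grows like sqrt(log(log t)/t), as motivated by the law of the iterated logarithm, but the constants, the Bernoulli arms, and the fixed-budget stopping rule are illustrative rather than the paper's tuned procedure (which uses a novel stopping time):

```python
import math
import random

def lil_style_ucb(means, delta=0.05, budget=3000, seed=0):
    """Pull the arm with the largest empirical mean plus a log-log confidence
    bonus; after a fixed budget, return the most-sampled arm. Constants and
    stopping rule are illustrative, not the paper's."""
    rng = random.Random(seed)
    n = len(means)
    counts = [1] * n
    sums = [float(rng.random() < means[i]) for i in range(n)]  # one pull each

    def radius(i):
        t = counts[i]
        return math.sqrt(2.0 * math.log(math.log(t + math.e) / delta) / t)

    for _ in range(budget - n):
        i = max(range(n), key=lambda j: sums[j] / counts[j] + radius(j))
        sums[i] += float(rng.random() < means[i])
        counts[i] += 1
    return max(range(n), key=lambda j: counts[j])  # most-sampled arm wins
```

With a clear gap between arms, the bonus shrinks quickly for suboptimal arms, so nearly all of the budget is spent on the best arm.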
|
1312.7326 | Replica Exchange using q-Gaussian Swarm Quantum Particle Intelligence
Method | cs.AI | We present a newly developed Replica Exchange algorithm using q-Gaussian
Swarm Quantum Particle Optimization (REX@q-GSQPO) method for solving the
problem of finding the global optimum. The basis of the algorithm is to run
multiple copies of independent swarms at different values of q parameter. Based
on an energy criterion chosen to satisfy detailed balance, we swap
the particle coordinates of neighboring swarms at regular iteration intervals.
The swarm replicas with high q values are characterized by high particle
diversity, allowing them to escape local minima faster, while the low-q replicas,
characterized by low diversity of particles, are used to sample more
efficiently the local basins. We compare the new algorithm with the standard
Gaussian Swarm Quantum Particle Optimization (GSQPO) and q-Gaussian Swarm
Quantum Particle Optimization (q-GSQPO) algorithms, and we found that the new
algorithm is more robust in terms of the number of fitness function calls, and
more efficient in its ability to converge to the global minimum. In
addition, we also provide a method for optimally allocating the swarm replicas
among different q values. Our algorithm is tested for three benchmark
functions, which are known to be multimodal problems, at different
dimensionalities. In addition, we considered a polyalanine peptide of 12
residues modeled using a G\=o coarse-graining potential energy function.
|
1312.7335 | Correlation-based construction of neighborhood and edge features | cs.CV cs.LG stat.ML | Motivated by an abstract notion of low-level edge detector filters, we
propose a simple method of unsupervised feature construction based on pairwise
statistics of features. In the first step, we construct neighborhoods of
features by regrouping features that correlate. Then we use these subsets as
filters to produce new neighborhood features. Next, we connect neighborhood
features that correlate, and construct edge features by subtracting the
correlated neighborhood features from each other. To validate the usefulness of
the constructed features, we ran AdaBoost.MH on four multi-class classification
problems. Our most significant result is a test error of 0.94% on MNIST with an
algorithm which is essentially free of any image-specific priors. On CIFAR-10
our method is suboptimal compared to today's best deep learning techniques,
nevertheless, we show that the proposed method outperforms not only boosting on
the raw pixels, but also boosting on Haar filters.
|
1312.7345 | Lesion Border Detection in Dermoscopy Images Using Ensembles of
Thresholding Methods | cs.CV | Dermoscopy is one of the major imaging modalities used in the diagnosis of
melanoma and other pigmented skin lesions. Due to the difficulty and
subjectivity of human interpretation, automated analysis of dermoscopy images
has become an important research area. Border detection is often the first step
in this analysis. In many cases, the lesion can be roughly separated from the
background skin using a thresholding method applied to the blue channel.
However, no single thresholding method appears to be robust enough to
successfully handle the wide variety of dermoscopy images encountered in
clinical practice. In this paper, we present an automated method for detecting
lesion borders in dermoscopy images using ensembles of thresholding methods.
Experiments on a difficult set of 90 images demonstrate that the proposed
method is robust, fast, and accurate when compared to nine state-of-the-art
methods.
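The fusion step can be sketched as a per-pixel majority vote over the outputs of several thresholding methods; the threshold values below stand in for methods such as Otsu's, which are not implemented here:

```python
import numpy as np

def ensemble_threshold(channel, thresholds):
    """Fuse binary masks from several thresholds by per-pixel majority vote.
    The `thresholds` list stands in for the outputs of different automatic
    thresholding methods (Otsu, entropy-based, ...) applied to the blue
    channel; those methods themselves are not implemented here."""
    votes = sum((channel > t).astype(int) for t in thresholds)
    return votes * 2 > len(thresholds)   # True where a majority agrees
```

Voting makes the combined mask robust to any single method picking an outlier threshold, which is the motivation for the ensemble.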
|
1312.7366 | Monte Carlo non local means: Random sampling for large-scale image
filtering | cs.CV stat.CO | We propose a randomized version of the non-local means (NLM) algorithm for
large-scale image filtering. The new algorithm, called Monte Carlo non-local
means (MCNLM), speeds up the classical NLM by computing a small subset of image
patch distances, which are randomly selected according to a designed sampling
pattern. We make two contributions. First, we analyze the performance of the
MCNLM algorithm and show that, for large images or large external image
databases, the random outcomes of MCNLM are tightly concentrated around the
deterministic full NLM result. In particular, our error probability bounds show
that, at any given sampling ratio, the probability for MCNLM to have a large
deviation from the original NLM solution decays exponentially as the size of
the image or database grows. Second, we derive explicit formulas for optimal
sampling patterns that minimize the error probability bound by exploiting
partial knowledge of the pairwise similarity weights. Numerical experiments
show that MCNLM is competitive with other state-of-the-art fast NLM algorithms
for single-image denoising. When applied to denoising images using an external
database containing ten billion patches, MCNLM returns a randomized solution
that is within 0.2 dB of the full NLM solution while reducing the runtime by
three orders of magnitude.
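A minimal 1D sketch of the sampling idea, assuming a uniform random sampling pattern (the paper optimizes the pattern) and illustrative parameter values:

```python
import numpy as np

def mc_nlm_1d(signal, patch=3, h=0.5, ratio=0.3, seed=0):
    """Hedged 1D sketch of Monte Carlo NLM: each sample is denoised from a
    random subset of the other samples (a `ratio` fraction), weighted by a
    Gaussian function of the patch distance, instead of the full O(n^2)
    comparison. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode='edge')
    patches = np.stack([padded[i:i + patch] for i in range(n)])
    m = max(1, int(ratio * n))
    out = np.empty(n)
    for i in range(n):
        idx = rng.choice(n, size=m, replace=False)        # sampling pattern
        d2 = np.sum((patches[idx] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)
        # include the center sample itself so the weight sum is never zero
        out[i] = (w @ signal[idx] + signal[i]) / (w.sum() + 1.0)
    return out
```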
|
1312.7373 | Extending Contexts with Ontologies for Multidimensional Data Quality
Assessment | cs.DB | Data quality and data cleaning are context-dependent activities. Starting
from this observation, in previous work a context model for the assessment of
the quality of a database instance was proposed. In that framework, the context
takes the form of a possibly virtual database or data integration system into
which a database instance under quality assessment is mapped, for additional
analysis and processing, enabling quality assessment. In this work we extend
contexts with dimensions, and by doing so, we make possible a multidimensional
assessment of data quality. Multidimensional contexts are
represented as ontologies written in Datalog+-. We use this language for
representing dimensional constraints, and dimensional rules, and also for doing
query answering based on dimensional navigation, which becomes an important
auxiliary activity in the assessment of data. We illustrate these ideas and
mechanisms by means of examples.
|
1312.7377 | Designing Fully Distributed Consensus Protocols for Linear Multi-agent
Systems with Directed Graphs | math.OC cs.SY | This paper addresses the distributed consensus protocol design problem for
multi-agent systems with general linear dynamics and directed communication
graphs. Existing works usually design consensus protocols using the smallest
real part of the nonzero eigenvalues of the Laplacian matrix associated with
the communication graph, which however is global information. In this paper,
based on only the agent dynamics and the relative states of neighboring agents,
a distributed adaptive consensus protocol is designed to achieve
leader-follower consensus for any communication graph containing a directed
spanning tree with the leader as the root node. The proposed adaptive protocol
is independent of any global information of the communication graph and thereby
is fully distributed. Extensions to the case with multiple leaders are further
studied.
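A hedged single-integrator sketch of leader-follower consensus over a graph containing a directed spanning tree rooted at the leader; the paper's general linear dynamics and adaptive coupling gains are not reproduced:

```python
import numpy as np

def leader_follower(x0, adj, steps=2000, dt=0.01, k=1.0):
    """Euler simulation of single-integrator leader-follower consensus:
    node 0 is a static leader, and follower i applies the relative-state
    feedback u_i = -k * sum_j adj[i][j] * (x_i - x_j)."""
    x = np.asarray(x0, dtype=float).copy()
    A = np.asarray(adj, dtype=float)
    deg = A.sum(axis=1)
    for _ in range(steps):
        u = -k * (deg * x - A @ x)   # Laplacian feedback from neighbors
        u[0] = 0.0                   # the leader ignores everyone
        x = x + dt * u
    return x
```

In the test, a directed chain 0 -> 1 -> 2 (each follower sees only its predecessor) contains a directed spanning tree rooted at the leader, so all states converge to the leader's value.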
|
1312.7381 | Rate-Distortion Auto-Encoders | cs.LG | Interest in auto-encoder algorithms has been rekindled by
recent work on deep learning. Current efforts have been directed towards
effective training of auto-encoder architectures with a large number of coding
units. Here, we propose a learning algorithm for auto-encoders based on a
rate-distortion objective that minimizes the mutual information between the
inputs and the outputs of the auto-encoder subject to a fidelity constraint.
The goal is to learn a representation that is minimally committed to the input
data, but that is rich enough to reconstruct the inputs up to certain level of
distortion. Minimizing the mutual information acts as a regularization term
whereas the fidelity constraint can be understood as a risk functional in the
conventional statistical learning setting. The proposed algorithm uses a
recently introduced measure of entropy based on infinitely divisible matrices
that avoids plug-in estimation of densities. Experiments using
over-complete bases show that the rate-distortion auto-encoders can learn a
regularized input-output mapping in an implicit manner.
|
1312.7412 | Model reduction of networked passive systems through clustering | cs.SY math.DS | In this paper, a model reduction procedure for a network of interconnected
identical passive subsystems is presented. Here, rather than performing model
reduction on the subsystems, adjacent subsystems are clustered, leading to a
reduced-order networked system that allows for a convenient physical
interpretation. The identification of the subsystems to be clustered is
performed through controllability and observability analysis of an associated
edge system and it is shown that the property of synchronization (i.e., the
convergence of trajectories of the subsystems to each other) is preserved
during reduction. The results are illustrated by means of an example.
|
1312.7414 | Stopping Rules for Bag-of-Words Image Search and Its Application in
Appearance-Based Localization | cs.CV cs.RO | We propose a technique to improve the search efficiency of the bag-of-words
(BoW) method for image retrieval. We introduce a notion of difficulty for the
image matching problems and propose methods that reduce the amount of
computations required for the feature vector-quantization task in BoW by
exploiting the fact that easier queries need less computational resources.
Measuring the difficulty of a query and stopping the search accordingly is
formulated as a stopping problem. We introduce stopping rules that terminate
the image search depending on the difficulty of each query, thereby
significantly reducing the computational cost. Our experimental results show
the effectiveness of our approach when it is applied to the appearance-based
localization problem.
|
1312.7422 | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | cs.AI | This volume contains the papers presented at the sixth workshop on Answer Set
Programming and Other Computing Paradigms (ASPOCP 2013) held on August 25th,
2013 in Istanbul, co-located with the 29th International Conference on Logic
Programming (ICLP 2013). It thus continues a series of previous events
co-located with ICLP, aiming at facilitating the discussion about crossing the
boundaries of current ASP techniques in theory, solving, and applications, in
combination with or inspired by other computing paradigms.
|
1312.7430 | Simultaneous Perturbation Methods for Adaptive Labor Staffing in Service
Systems | cs.SY | Service systems are labor intensive due to the large variation in the tasks
required to address service requests from multiple customers. Aligning the
staffing levels to the forecasted workloads adaptively in such systems is
nontrivial because of a large number of parameters and operational variations
leading to a huge search space. A challenging problem here is to optimize the
staffing while maintaining the system in steady-state and compliant to
aggregate service level agreement (SLA) constraints. Further, because these
parameters change on a weekly basis, the optimization should not take longer
than a few hours. We formulate this problem as a constrained Markov cost
process parameterized by the (discrete) staffing levels. We propose novel
simultaneous perturbation stochastic approximation (SPSA) based SASOC (Staff
Allocation using Stochastic Optimization with Constraints) algorithms for
solving the above problem. The algorithms include both first order as well as
second order methods and incorporate SPSA based gradient estimates in the
primal, with dual ascent for the Lagrange multipliers. Both the algorithms that
we propose are online, incremental and easy to implement. Further, they involve
a certain generalized smooth projection operator, which is essential to project
the continuous-valued worker parameter tuned by SASOC algorithms onto the
discrete set. We validated our algorithms on five real-life service systems and
compared them with a state-of-the-art optimization tool-kit OptQuest. Being 25
times faster than OptQuest, our algorithms are particularly suitable for
adaptive labor staffing. Also, we observe that our algorithms guarantee
convergence and find better solutions than OptQuest in many cases.
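For intuition, the standard two-measurement SPSA gradient estimate can be sketched on a toy quadratic; the SASOC algorithms themselves use a more elaborate one-simulation variant with a generalized projection operator and Lagrange multipliers, which is not reproduced here:

```python
import random

def spsa_minimize(f, theta, iters=200, a=0.2, c=0.1, seed=0):
    """Standard two-measurement SPSA: all coordinates are perturbed at once
    with random +/-1 signs, so each gradient estimate costs two evaluations
    of f regardless of dimension. Gain sequences use the usual decay
    exponents 0.602 and 0.101; all constants are illustrative."""
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101
        delta = [rng.choice([-1.0, 1.0]) for _ in theta]
        up = [t + ck * d for t, d in zip(theta, delta)]
        down = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(up) - f(down)) / (2.0 * ck)              # directional estimate
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta
```

The appeal for staffing problems with many parameters is that the per-step cost is two simulations however large the parameter vector is.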
|
1312.7432 | A network analysis of Sibiu County, Romania | physics.soc-ph cs.SI | Network science methods have proved to be able to provide useful insights
from both a theoretical and a practical point of view in that they can better
inform governance policies in complex dynamic environments. The tourism
research community has provided an increasing number of works that analyse
destinations from a network science perspective. However, most of the studies
refer to relatively small samples of actors and linkages. With this note we
provide a full network study, although at a preliminary stage, that reports a
complete analysis of a Romanian destination (Sibiu). Our intention is to
increase the set of similar studies with the aim of supporting the
investigations in structural and dynamical characteristics of tourism
destinations.
|
1312.7445 | Distributed average tracking for multiple reference signals with general
linear dynamics | cs.SY | This technical note studies the distributed average tracking problem for
multiple time-varying signals with general linear dynamics, whose reference
inputs are nonzero and not available to any agent in the network. In a
distributed fashion, a pair of continuous algorithms with static
and adaptive coupling strengths, respectively, is designed. Based on the boundary layer
concept, the proposed continuous algorithm with static coupling strengths can
asymptotically track the average of the multiple reference signals without the
chattering phenomenon. Furthermore, for the case of algorithms with adaptive
coupling strengths, the average tracking errors are uniformly ultimately
bounded and exponentially converge to a small adjustable bounded set. Finally,
a simulation example is presented to show the validity of the theoretical
results.
|
1312.7446 | Shape Primitive Histogram: A Novel Low-Level Face Representation for
Face Recognition | cs.CV | We further exploit the representational power of the Haar wavelet and present a
novel low-level face representation named Shape Primitives Histogram (SPH) for
face recognition. Since human faces contain abundant shape features, we address
the face representation issue from the perspective of shape feature
extraction. In our approach, we divide faces into a number of tiny shape
fragments and reduce these shape fragments to several uniform atomic shape
patterns called Shape Primitives. A convolution with Haar Wavelet templates is
applied to each shape fragment to identify the shape primitive it belongs to.
We then compute a histogram of shape primitives in each local
image patch to incorporate spatial information. Finally, each face is
represented as a feature vector via concatenating all the local histograms of
shape primitives. Four popular face databases, namely ORL, AR, Yale-B and LFW-a
databases, are employed to evaluate SPH and experimentally study the choices of
the parameters. Extensive experimental results demonstrate that the proposed
approach outperforms the state of the art.
|
1312.7447 | Containment Control of Linear Multi-Agent Systems with Multiple Leaders
of Bounded Inputs Using Distributed Continuous Controllers | cs.SY math.OC | This paper considers the containment control problem for multi-agent systems
with general linear dynamics and multiple leaders whose control inputs are
possibly nonzero and time varying. Based on the relative states of neighboring
agents, a distributed static continuous controller is designed, under which the
containment error is uniformly ultimately bounded and the upper bound of the
containment error can be made arbitrarily small, if the subgraph associated
with the followers is undirected and for each follower there exists at least
one leader that has a directed path to that follower. It is noted that the
design of the static controller requires the knowledge of the eigenvalues of
the Laplacian matrix and the upper bounds of the leaders' control inputs. In
order to remove these requirements, a distributed adaptive continuous
controller is further proposed, which can be designed and implemented by each
follower in a fully distributed fashion. Extensions to the case where only
local output information is available are discussed.
|
1312.7463 | Generalized Ambiguity Decomposition for Understanding Ensemble Diversity | stat.ML cs.CV cs.LG | Diversity or complementarity of experts in ensemble pattern recognition and
information processing systems is widely-observed by researchers to be crucial
for achieving performance improvement upon fusion. Understanding this link
between ensemble diversity and fusion performance is thus an important research
question. However, prior works have theoretically characterized ensemble
diversity and have linked it with ensemble performance in very restricted
settings. We present a generalized ambiguity decomposition (GAD) theorem as a
broad framework for answering these questions. The GAD theorem applies to a
generic convex ensemble of experts for any arbitrary twice-differentiable loss
function. It shows that the ensemble performance approximately decomposes into
a difference of the average expert performance and the diversity of the
ensemble. It thus provides a theoretical explanation for the
empirically-observed benefit of fusing outputs from diverse classifiers and
regressors. It also provides a loss function-dependent, ensemble-dependent, and
data-dependent definition of diversity. We present extensions of this
decomposition to common regression and classification loss functions, and
report a simulation-based analysis of the diversity term and the accuracy of
the decomposition. We finally present experiments on standard pattern
recognition data sets which indicate the accuracy of the decomposition for
real-world classification and regression problems.
|
1312.7469 | Collaborative Discriminant Locality Preserving Projections With its
Application to Face Recognition | cs.CV | We present a novel Discriminant Locality Preserving Projections (DLPP)
algorithm named Collaborative Discriminant Locality Preserving Projection
(CDLPP). In our algorithm, the discriminating power of DLPP is further
exploited from two aspects. On the one hand, the global optimum of class
scattering is guaranteed via using the between-class scatter matrix to replace
the original denominator of DLPP. On the other hand, motivated by collaborative
representation, an $L_2$-norm constraint is imposed on the projections to
discover the collaborations of dimensions in the sample space. We apply our
algorithm to face recognition. Three popular face databases, namely AR, ORL and
LFW-A, are employed for evaluating the performance of CDLPP. Extensive
experimental results demonstrate that CDLPP significantly improves the
discriminating power of DLPP and outperforms the state-of-the-arts.
|
1312.7477 | Covering with Excess One: Seeing the Topology | math.GN cs.CG cs.RO | We have initiated the study of the topology of the space of coverings on grid
domains. The space has the following constraint: while all the covering agents
can move freely (we allow overlapping) on the domain, their union must cover
the whole domain. A minimal number $N$ of the covering agents is required for a
successful covering of the domain. In this paper, we demonstrate beautiful
topological structures of this space on 2D grid domains with $N+1$ covering
agents: the topology of the space has the homotopy type of a $1$-dimensional
complex, regardless of the domain shape. We also present the Euler
characteristic formula which connects the topology of the space with that of
the domain itself.
|
1312.7482 | Element-wise uniqueness, prior knowledge, and data-dependent resolution | math.OC cs.IT math.IT math.NA | Techniques for finding regularized solutions to underdetermined linear
systems can be viewed as imposing prior knowledge on the unknown vector. The
success of modern techniques, which can impose priors such as sparsity and
non-negativity, is the result of advances in optimization algorithms to solve
problems which lack closed-form solutions. Techniques for characterization and
analysis of the system to determine when information is recoverable, however,
still typically rely on closed-form solution techniques such as singular value
decomposition or a filter cutoff, for example. In this letter we pose
optimization approaches to broaden the scope of system characterization. We
start by deriving conditions for when each unknown element of a system admits a
unique solution, subject to a broad class of types of prior knowledge. With
this approach we can pose a convex optimization problem to find "how unique"
each element of the solution is, which may be viewed as a generalization of
resolution to incorporate prior knowledge. We find that the result varies with
the unknown vector itself, i.e. is data-dependent, such as when the sparsity of
the solution improves the chance it can be uniquely reconstructed. The approach
can be used to analyze systems on a case-by-case basis, estimate the amount of
important information present in the data, and quantitatively understand the
degree to which the regularized solution may be trusted.
|
1312.7485 | A General Algorithm for Deciding Transportability of Experimental
Results | cs.AI stat.ME stat.ML | Generalizing empirical findings to new environments, settings, or populations
is essential in most scientific explorations. This article treats a particular
problem of generalizability, called "transportability", defined as a license to
transfer information learned in experimental studies to a different population,
on which only observational studies can be conducted. Given a set of
assumptions concerning commonalities and differences between the two
populations, Pearl and Bareinboim (2011) derived sufficient conditions that
permit such transfer to take place. This article summarizes their findings and
supplements them with an effective procedure for deciding when and how
transportability is feasible. It establishes a necessary and sufficient
condition for deciding when causal effects in the target population are
estimable from both the statistical information available and the causal
information transferred from the experiments. The article further provides a
complete algorithm for computing the transport formula, that is, a way of
combining observational and experimental information to synthesize a bias-free
estimate of the desired causal relation. Finally, the article examines the
differences between transportability and other variants of generalizability.
|
1312.7511 | A Novel Scheme for Generating Secure Face Templates Using BDA | cs.CV cs.CR | In identity management systems, frequently used biometric recognition systems
must address the issue of protecting the biometric template if a more
reliable solution is desired. In view of this, a biometric template
protection algorithm should satisfy the basic requirements, viz. security,
discriminability and cancelability. As no single template protection method is
capable of satisfying these requirements, a novel scheme for face template
generation and protection is proposed. The novel scheme is proposed to provide
security and accuracy in new user enrolment and authentication process. This
novel scheme takes advantage of both the hybrid approach and the binary
discriminant analysis algorithm. This algorithm is designed on the basis of
random projection, binary discriminant analysis and fuzzy commitment scheme.
Publicly available benchmark face databases (FERET, FRGC, CMU-PIE) and other
datasets are used for evaluation. The proposed novel scheme enhances the
discriminability and recognition accuracy in terms of matching score of the
face images for each stage and provides high security against potential attacks
namely brute force and smart attacks. In this paper, we report results, viz.
average matching score, computation time and security, for both the hybrid
approach and the novel approach.
|
1312.7513 | Distributed Game Theoretic Optimization and Management of Multichannel
ALOHA Networks | cs.NI cs.GT cs.IT math.IT | The problem of distributed rate maximization in multi-channel ALOHA networks
is considered. First, we study the problem of constrained distributed rate
maximization, where user rates are subject to total transmission probability
constraints. We propose a best-response algorithm, where each user updates its
strategy to increase its rate according to the channel state information and
the current channel utilization. We prove the convergence of the algorithm to a
Nash equilibrium in both homogeneous and heterogeneous networks using the
theory of potential games. The performance of the best-response dynamic is
analyzed and compared to a simple transmission scheme, where users transmit
over the channel with the highest collision-free utility. Then, we consider the
case where users are not restricted by transmission probability constraints.
Distributed rate maximization under uncertainty is considered to achieve both
efficiency and fairness among users. We propose a distributed scheme where
users adjust their transmission probability to maximize their rates according
to the current network state, while maintaining the desired load on the
channels. We show that our approach plays an important role in achieving the
Nash bargaining solution among users. Sequential and parallel algorithms are
proposed to achieve the target solution in a distributed manner. The
efficiencies of the algorithms are demonstrated through both theoretical and
simulation results.
|
1312.7523 | Learning Temporal Logical Properties Discriminating ECG models of
Cardiac Arrhythmias | cs.LO cs.CV q-bio.QM | We present a novel approach to learn the formulae characterising the emergent
behaviour of a dynamical system from system observations. At a high level, the
approach starts by devising a statistical dynamical model of the system which
optimally fits the observations. We then propose general optimisation
strategies for selecting high support formulae (under the learnt model of the
system) either within a discrete set of formulae of bounded complexity, or a
parametric family of formulae. We illustrate and apply the methodology on an
in-depth case study of characterising cardiac malfunction from
electro-cardiogram data, where our approach enables us to quantitatively
determine the diagnostic power of a formula in discriminating between different
cardiac conditions.
|
1312.7542 | Towards Connected Enterprises: The Business Network System | cs.DB | The discovery, representation and reconstruction of Business Networks (BN)
from Network Mining (NM) raw data is a difficult problem for enterprises. This
is due to huge amounts of complex business processes within and across
enterprise boundaries, heterogeneous technology stacks, and fragmented data. To
remain competitive, visibility into the enterprise and partner networks on
different, interrelated abstraction levels is desirable. We present a novel
data discovery, mining and network inference system, called Business Network
System (BNS), that reconstructs the BN--integration and business process
networks--from raw data, hidden in the enterprises' landscapes. BNS provides a
new, declarative foundation for gathering information, defining a network
model, inferring the network and checking its conformance to the real-world
"as-is" network. The paper covers both the foundation and the key features of
BNS, including its underlying technologies, its overall system architecture,
and its most interesting capabilities.
|
1312.7551 | Information-theoretic interpretation of quantum formalism | cs.IT math.IT quant-ph | We present an information-theoretic interpretation of quantum formalism based
on a Bayesian framework and devoid of any extra axiom or principle. Quantum
information is construed as a technique for analyzing a logical system subject
to classical constraints, based on a question-and-answer procedure. The problem
is posed from a particular batch of queries while the constraints are
represented by the truth table of a set of Boolean functions. The Bayesian
inference technique consists in assigning a probability distribution within a
real-valued probability space to the joint set of queries in order to satisfy
the constraints. The initial query batch is not unique and alternative batches
can be considered at will. They are enabled mechanically from the initial
batch, quite simply by transcribing the probability space into a Hilbert space.
It turns out that this sole procedure leads to exactly recover the standard
quantum information theory and thus provides an information-theoretic rationale
to its technical rules. In this framework, the great challenges of quantum
mechanics become simple platitudes: Why is the theory probabilistic? Why is the
theory linear? Where does the Hilbert space come from? In addition, most of the
paradoxes, such as the uncertainty principle, entanglement, contextuality,
nonsignaling correlation, and the measurement problem, become straightforward
features. In the end, our major conclusion is that quantum information is
nothing but classical information processed by a mature form of Bayesian
inference technique and, as such, consubstantial with Aristotelian logic.
|
1312.7557 | A Novel Retinal Vessel Segmentation Based On Histogram Transformation
Using 2-D Morlet Wavelet and Supervised Classification | cs.CV | The appearance and structure of blood vessels in retinal images have an
important role in diagnosis of diseases. This paper proposes a method for
automatic retinal vessel segmentation. In this work, a novel preprocessing
based on local histogram equalization is used to enhance the original image
then pixels are classified as vessel and non-vessel using a classifier. For
this classification, special feature vectors are organized based on responses
to the Morlet wavelet. The Morlet wavelet is a continuous transform which has
the ability to filter remaining noise after preprocessing. A Bayesian
classifier is used with a Gaussian mixture model (GMM) as its likelihood
function. The probability distributions are approximated according to a
training set that has been manually segmented by a specialist. After this,
morphological transforms are applied in different directions to make the
existing discontinuities uniform. On the DRIVE database, the method achieves
an accuracy of about 0.9571, which shows that it is an accurate method among
the available ones for retinal vessel segmentation.
|
1312.7560 | Implementation of Hand Detection based Techniques for Human Computer
Interaction | cs.CV cs.HC | The computer industry is developing at a fast pace. With this development
almost all of the fields under computers have advanced in the past couple of
decades. But the same technology is being used for human computer interaction
that was used in 1970s. Even today the same type of keyboard and mouse is used
for interacting with computer systems. With the recent boom in the mobile
segment touchscreens have become popular for interaction with cell phones. But
these touchscreens are rarely used on traditional systems. This paper tries to
introduce methods for human computer interaction using the user's hand which can
be used both on traditional computer platforms as well as cell phones. The
methods explain how the user's detected hand can be used as input for
applications and also explain applications that can take advantage of this type
of interaction mechanism.
|
1312.7567 | Nonparametric Inference For Density Modes | stat.ME cs.LG | We derive nonparametric confidence intervals for the eigenvalues of the
Hessian at modes of a density estimate. This provides information about the
strength and shape of modes and can also be used as a significance test. We use
a data-splitting approach in which potential modes are identified using the
first half of the data and inference is done with the second half of the data.
To get valid confidence sets for the eigenvalues, we use a bootstrap based on
an elementary-symmetric-polynomial (ESP) transformation. This leads to valid
bootstrap confidence sets regardless of any multiplicities in the eigenvalues.
We also suggest a new method for bandwidth selection, namely, choosing the
bandwidth to maximize the number of significant modes. We show by example that
this method works well. Even when the true distribution is singular and hence
does not have a density (in which case cross-validation chooses a zero
bandwidth), our method chooses a reasonable bandwidth.
|
1312.7570 | Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for
Visual Recognition | cs.CV | Systems based on bag-of-words models from image features collected at maxima
of sparse interest point operators have been used successfully for both
computer visual object and action recognition tasks. While the sparse,
interest-point based approach to recognition is not inconsistent with visual
processing in biological systems that operate in `saccade and fixate' regimes,
the methodology and emphasis in the human and the computer vision communities
remains sharply distinct. Here, we make three contributions aiming to bridge
this gap. First, we complement existing state-of-the-art large scale dynamic
computer vision annotated datasets like Hollywood-2 and UCF Sports with human
eye movements collected under the ecological constraints of the visual action
recognition task. To our knowledge these are the first large human eye tracking
datasets to be collected and made publicly available for video
(vision.imar.ro/eyetracking; 497,107 frames, each viewed by 16 subjects),
unique in terms of their (a) large scale and computer vision relevance, (b)
dynamic video stimuli, and (c) task control, as opposed to free-viewing. Second, we
introduce novel sequential consistency and alignment measures, which underline
the remarkable stability of patterns of visual search among subjects. Third, we
leverage the significant amount of collected data in order to pursue studies
and build automatic, end-to-end trainable computer vision systems based on
human eye movements. Our studies not only shed light on the differences between
computer vision spatio-temporal interest point image sampling strategies and
the human fixations, as well as their impact for visual recognition
performance, but also demonstrate that human fixations can be accurately
predicted, and when used in an end-to-end automatic system, leveraging some of
the advanced computer vision practice, can lead to state of the art results.
|
1312.7572 | A Top-Down Approach to Managing Variability in Robotics Algorithms | cs.RO | One of the defining features of the field of robotics is its breadth and
heterogeneity. Unfortunately, despite the availability of several robotics
middleware services, robotics software still fails to smoothly handle at least
two kinds of variability: algorithmic variability and lower-level variability.
The consequence is that implementations of algorithms are hard to understand
and impacted by changes to lower-level details such as the choice or
configuration of sensors or actuators. Moreover, when several algorithms or
algorithmic variants are available it is difficult to compare and combine them.
In order to alleviate these problems we propose a top-down approach to
express and implement robotics algorithms and families of algorithms so that
they are both less dependent on lower-level details and easier to understand
and combine. This approach goes top-down from the algorithms and shields them
from lower-level details by introducing very high level abstractions atop the
intermediate abstractions of robotics middleware. This approach is illustrated
on 7 variants of the Bug family that were implemented using both laser and
infra-red sensors.
|
1312.7573 | A Novel Method for Automatic Segmentation of Brain Tumors in MRI Images | cs.CV | The brain tumor segmentation on MRI images is a very difficult and important
task which is used in surgical and medical planning and assessments. If experts
do the segmentation manually with their own medical knowledge, it will be
time-consuming. Therefore, researchers propose methods and systems which can do
the segmentation automatically and without any human intervention. In this article,
an unsupervised automatic method for brain tumor segmentation on MRI images is
presented. In this method, at first in the pre-processing level, the extra
parts which are outside the skull and don't have any helpful information are
removed and then anisotropic diffusion filter with 8-connected neighborhood is
applied to the MRI images to remove noise. By applying the fast bounding
box (FBB) algorithm, the tumor area is displayed on the MRI image with a
bounding box and the central part is selected as sample points for training of
a One Class SVM classifier. A database is also provided by the Zanjan MRI
Center. The MRI images are related to 10 patients who have brain tumor. 100
T2-weighted MRI images are used in this study. Experimental results show the
high precision and dependability of the proposed algorithm. The results are
also highly helpful for specialists and radiologists to easily estimate the
size and position of a tumor.
|
1312.7580 | On the Learning Behavior of Adaptive Networks - Part II: Performance
Analysis | cs.MA cs.SY math.OC | Part I of this work examined the mean-square stability and convergence of the
learning process of distributed strategies over graphs. The results identified
conditions on the network topology, utilities, and data in order to ensure
stability; the results also identified three distinct stages in the learning
behavior of multi-agent networks related to transient phases I and II and the
steady-state phase. This Part II examines the steady-state phase of distributed
learning by networked agents. Apart from characterizing the performance of the
individual agents, it is shown that the network induces a useful equalization
effect across all agents. In this way, the performance of noisier agents is
enhanced to the same level as the performance of agents with less noisy data.
It is further shown that in the small step-size regime, each agent in the
network is able to achieve the same performance level as that of a centralized
strategy corresponding to a fully connected network. The results in this part
reveal explicitly which aspects of the network topology and operation influence
performance and provide important insights into the design of effective
mechanisms for the processing and diffusion of information over networks.
|
1312.7581 | On the Learning Behavior of Adaptive Networks - Part I: Transient
Analysis | cs.MA cs.SY math.OC | This work carries out a detailed transient analysis of the learning behavior
of multi-agent networks, and reveals interesting results about the learning
abilities of distributed strategies. Among other results, the analysis reveals
how combination policies influence the learning process of networked agents,
and how these policies can steer the convergence point towards any of many
possible Pareto optimal solutions. The results also establish that the learning
process of an adaptive network undergoes three (rather than two) well-defined
stages of evolution with distinctive convergence rates during the first two
stages, while attaining a finite mean-square-error (MSE) level in the last
stage. The analysis reveals what aspects of the network topology influence
performance directly and suggests design procedures that can optimize
performance by adjusting the relevant topology parameters. Interestingly, it is
further shown that, in the adaptation regime, each agent in a sparsely
connected network is able to achieve the same performance level as that of a
centralized stochastic-gradient strategy even for left-stochastic combination
strategies. These results lead to a deeper understanding and useful insights on
the convergence behavior of coupled distributed learners. The results also lead
to effective design mechanisms to help diffuse information more thoroughly over
networks.
|
1312.7595 | Phase transition in the controllability of temporal networks | cond-mat.stat-mech cs.SI physics.soc-ph | The control of complex systems is an ongoing challenge of complexity
research. Recent advances using concepts of structural control deduce a wide
range of control related properties from the network representation of complex
systems. Here, we examine the controllability of complex systems for which the
timescale of the dynamics we control and the timescale of changes in the
network are comparable. We provide both analytical and computational tools to
study controllability based on temporal network characteristics. We apply these
results to investigate the controllable subnetwork using a single input,
present analytical results for a generic class of temporal network models, and
perform measurements using data collected from a real system. Depending upon
the density of the interactions compared to the timescale of the dynamics, we
witness a phase transition describing the sudden emergence of a giant
controllable subspace spanning a finite fraction of the network. We also study
the role of temporal patterns and network topology in real data making use of
various randomization procedures, finding that the overall activity and the
degree distribution of the underlying network are the main features influencing
controllability.
|