| id (string, 9-16 chars) | title (string, 4-278 chars) | categories (string, 5-104 chars) | abstract (string, 6-4.09k chars) |
|---|---|---|---|
1112.4060
|
A real time vehicles detection algorithm for vision based sensors
|
cs.CV
|
Vehicle detection plays an important role in traffic control at signalised
intersections. This paper introduces a vision-based algorithm for recognising
the presence of vehicles in detection zones. The algorithm uses linguistic
variables to evaluate local attributes of an input image. The image attributes
are categorised as vehicle, background or unknown features. Experimental
results on complex traffic scenes show that the proposed algorithm is effective
for real-time vehicle detection.
|
1112.4064
|
Vehicles Recognition Using Fuzzy Descriptors of Image Segments
|
cs.CV
|
In this paper a vision-based vehicle recognition method is presented. The
proposed method uses fuzzy descriptions of image segments for automatic
recognition of vehicles recorded in image data. The description takes into
account selected geometrical properties and shape coefficients determined for
segments of a reference image (vehicle model). The proposed method was
implemented using a reasoning system with fuzzy rules. A vehicle recognition
algorithm was developed based on fuzzy rules describing the shape and
arrangement of the image segments that correspond to visible parts of a
vehicle. Extending the algorithm with sets of fuzzy rules defined for
different reference images (and various vehicle shapes) enables vehicle
classification in traffic scenes. The devised method is suitable for
application in video sensors for road traffic control and surveillance systems.
|
1112.4076
|
Closed-Form Bounds to the Rice and Incomplete Toronto Functions and
Incomplete Lipschitz-Hankel Integrals
|
cs.IT math.IT
|
This article provides novel analytical results for the Rice function, the
incomplete Toronto function and the incomplete Lipschitz-Hankel Integrals.
Firstly, upper and lower bounds are derived for the Rice function, $Ie(k,x)$.
Secondly, explicit expressions are derived for the incomplete Toronto function,
$T_{B}(m,n,r)$, and the incomplete Lipschitz-Hankel Integrals of the modified
Bessel function of the first kind, $Ie_{\mu,n}(a,z)$, for the case that $n$ is
an odd multiple of 0.5 and $m \geq n$. By exploiting these expressions, tight
upper and lower bounds are subsequently proposed for both the $T_{B}(m,n,r)$
function and the $Ie_{\mu,n}(a,z)$ integrals. Importantly, all new representations
are expressed in closed-form whilst the proposed bounds are shown to be rather
tight. Based on these features, it is evident that the offered results can be
utilized effectively in analytical studies related to wireless communications.
Indicative applications include, among others, the performance evaluation of
digital communications over fading channels and the information-theoretic
analysis of multiple-input multiple-output systems.
|
1112.4090
|
State Amplification Subject To Masking Constraints
|
cs.IT cs.CR math.IT
|
This paper considers a state dependent broadcast channel with one
transmitter, Alice, and two receivers, Bob and Eve. The problem is to
effectively convey ("amplify") the channel state sequence to Bob while
"masking" it from Eve. The extent to which the state sequence cannot be masked
from Eve is referred to as leakage. This can be viewed as a secrecy problem,
where we desire that the channel state itself be minimally leaked to Eve while
being communicated to Bob. The paper is aimed at characterizing the trade-off
region between amplification and leakage rates for such a system. An achievable
coding scheme is presented, wherein the transmitter transmits a partial state
information over the channel to facilitate the amplification process. For the
case when Bob observes a stronger signal than Eve, the achievable coding scheme
is enhanced with secure refinement. Outer bounds on the trade-off region are
also derived, and used in characterizing some special case results. In
particular, the optimal amplification-leakage rate difference, called the
differential amplification capacity, is characterized for the reversely
degraded discrete memoryless channel, the degraded binary, and the degraded
Gaussian channels. In addition, for the degraded Gaussian model, the extremal
corner points of the trade-off region are characterized, and the gap between
the outer bound and achievable rate-regions is shown to be less than half a bit
for a wide set of channel parameters.
|
1112.4105
|
epsilon-Samples of Kernels
|
cs.CG cs.DS cs.LG
|
We study the worst case error of kernel density estimates via subset
approximation. A kernel density estimate of a distribution is the convolution
of that distribution with a fixed kernel (e.g. Gaussian kernel). Given a subset
(i.e. a point set) of the input distribution, we can compare the kernel density
estimates of the input distribution with that of the subset and bound the worst
case error. If the maximum error is eps, then this subset can be thought of as
an eps-sample (aka an eps-approximation) of the range space defined with the
input distribution as the ground set and the fixed kernel representing the
family of ranges. Interestingly, in this case the ranges are not binary, but
have a continuous range (for simplicity we focus on kernels with range of
[0,1]); these allow for smoother notions of range spaces.
It turns out, the use of this smoother family of range spaces has an added
benefit of greatly decreasing the size required for eps-samples. For instance,
in the plane the size is O((1/eps^{4/3}) log^{2/3}(1/eps)) for disks (based on
VC-dimension arguments) but is only O((1/eps) sqrt{log (1/eps)}) for Gaussian
kernels and for kernels with bounded slope that only affect a bounded domain.
These bounds are accomplished by studying the discrepancy of these "kernel"
range spaces, and here the improvement in the bounds is even more pronounced. In
the plane, we show the discrepancy is O(sqrt{log n}) for these kernels, whereas
for balls there is a lower bound of Omega(n^{1/4}).
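As a rough illustration of the eps-sample notion above, the following sketch (my own toy code, not the paper's) estimates the worst-case kernel density error between a point set and a random subset by probing a grid of Gaussian kernel centers; a genuine eps-sample keeps this maximum below eps for every center.

```python
import numpy as np

def kde(points, centers, sigma=1.0):
    """Gaussian kernel density estimate of `points` evaluated at `centers`."""
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
P = rng.normal(size=(2000, 2))                      # input point set (ground set)
Q = P[rng.choice(len(P), size=100, replace=False)]  # candidate eps-sample

g = np.linspace(-3, 3, 40)                          # probe kernel centers on a grid
centers = np.array([(x, y) for x in g for y in g])
eps_hat = np.abs(kde(P, centers) - kde(Q, centers)).max()
print(f"estimated eps: {eps_hat:.4f}")
```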
|
1112.4113
|
Optimal Control of Vehicular Formations with Nearest Neighbor
Interactions
|
math.OC cs.MA cs.SY
|
We consider the design of optimal localized feedback gains for
one-dimensional formations in which vehicles only use information from their
immediate neighbors. The control objective is to enhance coherence of the
formation by making it behave like a rigid lattice. For the single-integrator
model with symmetric gains, we establish convexity, implying that the globally
optimal controller can be computed efficiently. We also identify a class of
convex problems for double-integrators by restricting the controller to
symmetric position and uniform diagonal velocity gains. To obtain the optimal
non-symmetric gains for both the single- and the double-integrator models, we
solve a parameterized family of optimal control problems ranging from an easily
solvable problem to the problem of interest as the underlying parameter
increases. When this parameter is kept small, we employ perturbation analysis
to decouple the matrix equations that result from the optimality conditions,
thereby rendering the unique optimal feedback gain. This solution is used to
initialize a homotopy-based Newton's method to find the optimal localized gain.
To investigate the performance of localized controllers, we examine how the
coherence of large-scale stochastically forced formations scales with the
number of vehicles. We establish several explicit scaling relationships and
show that the best performance is achieved by a localized controller that is
both non-symmetric and spatially-varying.
|
1112.4133
|
Evaluation of Performance Measures for Classifiers Comparison
|
cs.LG
|
The selection of the best classification algorithm for a given dataset is a
very widespread problem, occurring each time one has to choose a classifier to
solve a real-world problem. It is also a complex task with many important
methodological decisions to make. Among those, one of the most crucial is the
choice of an appropriate measure in order to properly assess the classification
performance and rank the algorithms. In this article, we focus on this specific
task. We present the most popular measures and compare their behavior through
discrimination plots. We then discuss their properties from a more theoretical
perspective. It turns out that several of them are equivalent for classifier
comparison purposes. Furthermore, they can also lead to interpretation
problems. Among the numerous measures proposed over the years, it appears that
the classical overall success rate and marginal rates are the most suitable for
the classifier comparison task.
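For concreteness, a minimal sketch (my illustration, not the article's code) of the two measures the abstract singles out, computed from a confusion matrix whose rows are true classes and columns are predictions:

```python
import numpy as np

def overall_success_rate(cm):
    """Fraction of correctly classified instances (trace over total)."""
    return np.trace(cm) / cm.sum()

def marginal_rates(cm):
    """Per-class recall (row-wise) and precision (column-wise) rates."""
    recall = np.diag(cm) / cm.sum(axis=1)
    precision = np.diag(cm) / cm.sum(axis=0)
    return recall, precision

# hypothetical 3-class confusion matrix
cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 44]])
print(overall_success_rate(cm))   # 0.86875
print(marginal_rates(cm))
```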
|
1112.4134
|
On Accuracy of Community Structure Discovery Algorithms
|
cs.SI physics.soc-ph
|
Community structure discovery in complex networks is a quite challenging
problem spanning many applications in various disciplines such as biology,
social networks and physics. Numerous algorithms, emerging from various
approaches, have been proposed to tackle this problem. Nevertheless, little
attention has been devoted to comparing their efficiency on realistic simulated
data. To better understand their relative performances, we systematically
evaluate eleven algorithms covering the main approaches. The Normalized
Mutual Information (NMI) measure is used to assess the quality of the
discovered community structure on controlled artificial networks with
realistic topological properties. Results show that, along with the network
size, the average proportion of intra-community to inter-community links is the
most influential parameter on performance. Overall, "Infomap" is the leading
algorithm, followed by "Walktrap", "SpinGlass" and "Louvain", which also achieve
good consistency.
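The NMI comparison of a discovered partition against planted communities can be reproduced with standard tooling; a minimal sketch using scikit-learn (hypothetical labels, not the study's data):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

# hypothetical ground-truth communities and a partition found by an algorithm
truth = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
found = np.array([1, 1, 0, 2, 2, 2, 0, 0, 0])

# NMI = 1 for identical partitions (up to label permutation), ~0 for independent ones
print(normalized_mutual_info_score(truth, found))
```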
|
1112.4135
|
A Reduced Reference Image Quality Measure Using Bessel K Forms Model for
Tetrolet Coefficients
|
cs.CV
|
In this paper, we introduce a Reduced Reference Image Quality Assessment
(RRIQA) measure based on the natural image statistics approach. A new adaptive
transform called the "Tetrolet" transform is applied to both the reference and
distorted images. To model the marginal distribution of the tetrolet
coefficients, a Bessel K Forms (BKF) density is proposed. Estimating the
parameters of this distribution makes it possible to summarize the reference
image with a small amount of side information. Five distortion measures based
on the BKF parameters of the original and processed images are used to predict
quality scores. A comparison between these measures is presented, showing good
consistency with human judgment.
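As a sketch of the side-information step, BKF parameters are often estimated from moments; assuming the commonly cited BKF relations Var = pc and excess kurtosis = 3/p (my reading of the BKF literature, not code from the paper):

```python
import numpy as np
from scipy.stats import kurtosis

def bkf_moment_estimates(coeffs):
    """Moment-based estimates of the BKF shape p and scale c.

    Assumes the BKF moment relations Var = p*c and excess kurtosis = 3/p
    (an assumption from the literature, not the paper's code).
    """
    var = np.var(coeffs)
    k = kurtosis(coeffs, fisher=True)   # excess kurtosis
    p = 3.0 / max(k, 1e-12)
    c = var / p
    return p, c

# hypothetical heavy-tailed "transform coefficients"
rng = np.random.default_rng(1)
coeffs = rng.standard_t(df=5, size=10_000)
print(bkf_moment_estimates(coeffs))
```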
|
1112.4149
|
Joint Network Coding for Interfering Wireless Multicast Networks
|
cs.IT cs.NI math.IT
|
Interference in wireless networks is one of the key capacity-limiting factors.
The multicast capacity of an ad-hoc wireless network decreases with an
increasing number of transmitting and/or receiving nodes within a fixed area.
Digital Network Coding (DNC) has been shown to improve the multicast capacity
of non-interfering wireless networks. However, the recently proposed
Physical-layer Network Coding (PNC) and Analog Network Coding (ANC) have shown
that it is possible to decode an unknown packet from the collision of two
packets when one of the colliding packets is known a priori. Taking advantage
of such a collision decoding scheme, in this paper we propose a Joint Network
Coding based Cooperative Retransmission (JNC-CR) scheme, where we show that ANC
along with DNC can offer a much higher retransmission gain than that attainable
through either ANC, DNC or Automatic Repeat reQuest (ARQ) based retransmission.
This scheme can be applied to two wireless multicast groups interfering with
each other. Because of the broadcast nature of wireless transmission, receivers
of one multicast group can opportunistically listen to and cache packets
from the interfering transmitter. These cached packets, along with the packets
the receiver receives from its own transmitter, can then be used for decoding
the JNC packet. We validate the higher retransmission gain of JNC against an
optimal DNC scheme using simulation.
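The DNC half of this idea reduces to XOR packet cancellation, which a toy sketch makes concrete (my own illustration, not the paper's protocol): a receiver that has cached packet a recovers b from the coded packet a XOR b.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.integers(0, 256, size=8, dtype=np.uint8)   # cached (overheard) packet
b = rng.integers(0, 256, size=8, dtype=np.uint8)   # unknown packet

coded = a ^ b          # network-coded retransmission (bitwise XOR)
recovered = coded ^ a  # receiver cancels the known packet

assert np.array_equal(recovered, b)
```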
|
1112.4164
|
A Geometric Approach For Fully Automatic Chromosome Segmentation
|
cs.CV
|
A fundamental task in human chromosome analysis is chromosome segmentation.
Segmentation plays an important role in chromosome karyotyping. The first step
in segmentation is to remove intrusive objects such as stain debris and other
noise. The next step is the detection of touching and overlapping chromosomes,
and the final step is the separation of such chromosomes. Common methods for
separating touching chromosomes are interactive and require human intervention
for correct separation between touching and overlapping chromosomes. In this
paper, a geometric-based method is used for automatically detecting touching
and overlapping chromosomes and separating them. The proposed scheme performs
segmentation in two phases. In the first phase, chromosome clusters are
detected using three geometric criteria, and in the second phase, chromosome
clusters are separated using a cut-line. Most earlier methods did not work
properly on chromosome clusters that contained more than two chromosomes. Our
method, on the other hand, is quite efficient at separating such chromosome
clusters. At each step, one separation is performed, and the algorithm is
repeated until all individual chromosomes are separated. Another important
point about the proposed method is that it uses geometric features of
chromosomes which are independent of the type of image, so it can easily be
applied to any type of image, such as binary images, and does not require
multispectral images. We have applied our method to a database containing 62
touching and partially overlapping chromosomes, and a success rate of 91.9% is
achieved.
|
1112.4167
|
Iterative Deterministic Equivalents for the Performance Analysis of
Communication Systems
|
cs.IT math.IT
|
In this article, we introduce iterative deterministic equivalents as a novel
technique for the performance analysis of communication systems whose channels
are modeled by complex combinations of independent random matrices. This
technique extends the deterministic equivalent approach for the study of
functionals of large random matrices to a broader class of random matrix models
which naturally arise as channel models in wireless communications. We present
two specific applications: First, we consider a multi-hop amplify-and-forward
(AF) MIMO relay channel with noise at each stage and derive deterministic
approximations of the mutual information after the Kth hop. Second, we study a
MIMO multiple access channel (MAC) where the channel between each transmitter
and the receiver is represented by the double-scattering channel model. We
provide deterministic approximations of the mutual information, the
signal-to-interference-plus-noise ratio (SINR) and sum-rate with
minimum-mean-square-error (MMSE) detection and derive the asymptotically
optimal precoding matrices. In both scenarios, the approximations can be
computed by simple and provably converging fixed-point algorithms and are shown
to be almost surely tight in the limit when the number of antennas at each node
grows infinitely large. Simulations suggest that the approximations are
accurate for realistic system dimensions. The technique of iterative
deterministic equivalents can be easily extended to other channel models of
interest and is, therefore, also a new contribution to the field of random
matrix theory.
|
1112.4210
|
Approximate Decoding Approaches for Network Coded Correlated Data
|
cs.NI cs.IT math.IT
|
This paper considers a framework where data from correlated sources are
transmitted with the help of network coding in ad-hoc network topologies. The
correlated data are encoded independently at the sensors, and network coding is
employed at the intermediate nodes in order to improve the data delivery
performance. In such settings, we focus on the problem of reconstructing the
sources at the decoder when perfect decoding is not possible due to losses or
bandwidth bottlenecks. We first show that the similarity of the source data can
be exploited at the decoder to permit decoding based on a novel and simple
approximate decoding scheme. We analyze the influence of the network coding
parameters, and in particular the size of the finite coding fields, on the
decoding performance. We further determine the optimal field size that
maximizes the expected decoding performance as a trade-off between the
information loss incurred by limiting the resolution of the source data and the
error probability in the reconstructed data. Moreover, we show that the
performance of the approximate decoding improves when the accuracy of the
source model increases, even with simple approximate decoding techniques. We
provide illustrative examples of possible applications of our algorithms in
sensor networks and distributed imaging. In both cases, the experimental
results confirm the validity of our analysis and demonstrate the benefits of
our low complexity solution for the delivery of correlated data sources.
|
1112.4221
|
A closed-form expression for the Sharma-Mittal entropy of exponential
families
|
cs.IT math.IT
|
The Sharma-Mittal entropies generalize the celebrated Shannon, R\'enyi and
Tsallis entropies. We report a closed-form formula for the Sharma-Mittal
entropies and relative entropies of arbitrary exponential family
distributions. We instantiate the formula explicitly for the case of
multivariate Gaussian distributions and discuss its estimation.
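To sketch the shape of such a closed form: for an exponential family $p_\theta(x)=\exp(\langle t(x),\theta\rangle - F(\theta))$ with zero carrier measure, the identity $\int p_\theta^\alpha\,dx = \exp(F(\alpha\theta)-\alpha F(\theta))$ yields (my reconstruction from standard exponential-family facts, not a quotation of the paper):

```latex
% Sharma-Mittal entropy of p_\theta(x) = exp(<t(x),\theta> - F(\theta))
% (sketch; assumes zero carrier measure and \alpha\theta in the natural parameter space)
H_{\alpha,\beta}(p_\theta)
  = \frac{1}{1-\beta}\left(
      \exp\!\Big(\tfrac{1-\beta}{1-\alpha}\,\big(F(\alpha\theta)-\alpha F(\theta)\big)\Big) - 1
    \right),
\qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1 .
```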
|
1112.4232
|
Projection Operator in Adaptive Systems
|
nlin.AO cs.SY math.OC
|
The projection algorithm is frequently used in adaptive control and this note
presents a detailed analysis of its properties.
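For orientation, a minimal sketch of the smooth projection operator commonly analyzed in this setting (a standard textbook form, assumed here rather than quoted from the note): inside the admissible set $\{f(\theta)\le 0\}$ the raw update passes through unchanged; outside, its outward-normal component is attenuated so the parameter estimate stays bounded.

```python
import numpy as np

def proj(theta, y, f, grad_f):
    """Smooth projection operator Proj(theta, y) for adaptive laws.

    Standard form (an assumption, not quoted from the note): if f(theta) <= 0,
    or the raw update y points back into {f <= 0}, pass y through; otherwise
    subtract the scaled outward-normal component.
    """
    g = grad_f(theta)
    if f(theta) <= 0 or g @ y <= 0:
        return y
    return y - (g @ y) * f(theta) * g / (g @ g)

# example boundary function: f <= 0 inside the unit ball
f = lambda th: th @ th - 1.0
grad_f = lambda th: 2.0 * th

theta = np.array([1.05, 0.0])      # slightly outside the ball
y = np.array([1.0, 0.5])           # raw adaptation update
print(proj(theta, y, f, grad_f))   # outward component is attenuated
```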
|
1112.4236
|
Error Correcting Codes for Distributed Control
|
cs.IT math.IT math.OC
|
The problem of stabilizing an unstable plant over a noisy communication link
is an increasingly important one that arises in applications of networked
control systems. Although the work of Schulman and Sahai over the past two
decades, and their development of the notions of "tree codes" and
"anytime capacity", provides the theoretical framework for studying such
problems, there has been scant practical progress in this area because explicit
constructions of tree codes with efficient encoding and decoding did not exist.
To stabilize an unstable plant driven by bounded noise over a noisy channel one
needs real-time encoding and real-time decoding and a reliability which
increases exponentially with decoding delay, which is what tree codes
guarantee. We prove that linear tree codes occur with high probability and, for
erasure channels, give an explicit construction with an expected decoding
complexity that is constant per time instant. We give novel sufficient
conditions on the rate and reliability required of the tree codes to stabilize
vector plants and argue that they are asymptotically tight. This work takes an
important step towards controlling plants over noisy channels, and we
demonstrate the efficacy of the method through several examples.
|
1112.4238
|
Vertex-centroid finite volume scheme on tetrahedral grids for
conservation laws
|
cs.NA cs.CE
|
Vertex-centroid schemes are cell-centered finite volume schemes for
conservation laws which make use of vertex values to construct high resolution
schemes. The vertex values must be obtained through a consistent averaging
(interpolation) procedure. A modified interpolation scheme is proposed which is
better than existing schemes in giving positive weights in the interpolation
formula. A simplified reconstruction scheme is also proposed, which is more
accurate and efficient. For scalar conservation laws, we develop limited
versions of the schemes which are stable in maximum norm by constructing
suitable limiters. The schemes are applied to compressible flows governed by
the Euler equations of inviscid gas dynamics.
|
1112.4243
|
Online Learning for Classification of Low-rank Representation Features
and Its Applications in Audio Segment Classification
|
cs.LG cs.MM
|
In this paper, a novel framework based on trace norm minimization for audio
segment classification is proposed. In this framework, both the feature
extraction and the classification are obtained by solving corresponding convex
optimization problems with trace norm regularization. For feature extraction,
robust principal component analysis (robust PCA), via minimizing a combination
of the nuclear norm and the $\ell_1$-norm, is used to extract low-rank features
which are robust to white noise and gross corruption of audio segments. These
low-rank features are fed to a linear classifier whose weight and bias are
learned by solving similar trace norm constrained problems. For this
classifier, most methods find the weight and bias in batch-mode learning, which
makes them inefficient for large-scale problems. In this paper, we propose an
online framework using the accelerated proximal gradient method. This framework
has a main advantage in memory cost. In addition, as a result of the
regularization formulation of matrix classification, the Lipschitz constant is
given explicitly, and hence the step size estimation of the general proximal
gradient method can be omitted in our approach. Experiments on real data sets
for laugh/non-laugh and applause/non-applause classification indicate that this
novel framework is effective and robust to noise.
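The workhorse of such trace (nuclear) norm regularized solvers is the singular value thresholding operator, the proximal map of the nuclear norm; a generic sketch (not the paper's implementation):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# shrink a noisy low-rank matrix back toward low rank
rng = np.random.default_rng(3)
L = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 30))   # rank-4 signal
X = L + 0.1 * rng.normal(size=(50, 30))                   # plus noise
print(np.linalg.matrix_rank(svt(X, tau=2.0)))             # close to 4
```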
|
1112.4253
|
Simple and Robust Binary Self-Location Patterns
|
cs.IT math.IT
|
A simple method to generate a two-dimensional binary grid pattern, which
allows for absolute and accurate self-location in a finite planar region, is
proposed. The pattern encodes position information in a local way so that
reading a small number of its black or white pixels at any place provides
sufficient data from which the location can be decoded both efficiently and
robustly.
|
1112.4258
|
A geometric analysis of subspace clustering with outliers
|
cs.IT cs.LG math.IT math.ST stat.ML stat.TH
|
This paper considers the problem of clustering a collection of unlabeled data
points assumed to lie near a union of lower-dimensional planes. As is common in
computer vision or unsupervised learning applications, we do not know in
advance how many subspaces there are nor do we have any information about their
dimensions. We develop a novel geometric analysis of an algorithm named sparse
subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern
Recognition, 2009. CVPR 2009 (2009) 2790-2797. IEEE], which significantly
broadens the range of problems where it is provably effective. For instance, we
show that SSC can recover multiple subspaces, each of dimension comparable to
the ambient dimension. We also prove that SSC can correctly cluster data points
even when the subspaces of interest intersect. Further, we develop an extension
of SSC that succeeds when the data set is corrupted with possibly
overwhelmingly many outliers. Underlying our analysis are clear geometric
insights, which may bear on other sparse recovery problems. A numerical study
complements our theoretical analysis and demonstrates the effectiveness of
these methods.
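A compact sketch of the SSC pipeline that the analysis targets (the standard recipe from the cited CVPR paper, as I understand it): write each point as a sparse combination of the others via a Lasso, symmetrize the coefficients into an affinity, and cluster spectrally.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Sparse subspace clustering sketch. Rows of X are data points."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i
        # sparse self-expression: x_i ~ sum_j c_j x_j with j != i
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[mask].T, X[i])
        C[i, mask] = lasso.coef_
    W = np.abs(C) + np.abs(C).T   # symmetric affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# toy data: two 2-dimensional subspaces of R^5
rng = np.random.default_rng(4)
B1, B2 = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
X = np.vstack([rng.normal(size=(40, 2)) @ B1.T,
               rng.normal(size=(40, 2)) @ B2.T])
print(ssc(X, n_clusters=2)[:10])
```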
|
1112.4261
|
Performance Analysis of Enhanced Clustering Algorithm for Gene
Expression Data
|
cs.LG cs.CE cs.DB
|
Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions. They are
used to identify the co-expressed genes in specific cells or tissues that are
actively used to make proteins. This method is used to analyse gene
expression, an important task in bioinformatics research. Cluster analysis of
gene expression data has proved to be a useful tool for identifying
co-expressed genes and biologically relevant groupings of genes and samples. In
this paper we applied K-Means with Automatic Generations of Merge Factor for
ISODATA (AGMFI). Though AGMFI has been applied to clustering of gene expression
data, the proposed Enhanced Automatic Generations of Merge Factor for ISODATA
(EAGMFI) algorithm overcomes the drawbacks of AGMFI in terms of specifying the
optimal number of clusters and initializing good cluster centroids.
Experimental results on gene expression data show that the proposed EAGMFI
algorithm can identify compact clusters and performs well in terms of the
Silhouette Coefficient cluster measure.
|
1112.4294
|
Optimal Disturbance Accommodation with Limited Model Information
|
math.OC cs.SY
|
The design of optimal dynamic disturbance accommodation controllers with
limited model information is considered. We adapt the family of limited model
information control design strategies, defined earlier by the authors, to
handle dynamic controllers. This family of limited model information design
strategies constructs subcontrollers in a distributed fashion by accessing only
local plant model information. The closed-loop performance of the dynamic
controllers that they can produce is studied using a performance metric called
the competitive ratio, which is the worst-case ratio of the cost of a control
design strategy to the cost of the optimal control design with full model
information.
|
1112.4303
|
Development of Grid e-Infrastructure in South-Eastern Europe
|
cs.DC cs.NI cs.SI physics.comp-ph
|
Over a period of 6 years and three phases, the SEE-GRID programme has
established a strong regional human network in the area of distributed
scientific computing and has set up a powerful regional Grid infrastructure. It
has attracted a number of user communities and applications from diverse fields
from countries throughout South-Eastern Europe. From the infrastructure point
of view, the first project phase established a pilot Grid infrastructure with
more than 20 resource centers in 11 countries. During the subsequent two phases
of the project, the infrastructure has grown to currently 55 resource centers
with more than 6600 CPUs and 750 TBs of disk storage, distributed over 16
participating countries. The inclusion of new resource centers in the existing
infrastructure, as well as support for new user communities, has demanded the
setup of regionally distributed core services, the development of new
monitoring and operational tools, and close collaboration of all partner
institutions in managing such a complex infrastructure. In this paper we give
an overview of the development and current status of the SEE-GRID regional
infrastructure and describe its transition to the NGI-based Grid model in EGI,
with strong SEE regional collaboration.
|
1112.4312
|
Multiscale Analysis of Spreading in a Large Communication Network
|
physics.soc-ph cs.SI physics.data-an
|
In temporal networks, both the topology of the underlying network and the
timings of interaction events can be crucial in determining how some dynamic
process mediated by the network unfolds. We have explored the limiting case of
the speed of spreading in the SI model, set up such that an event between an
infectious and susceptible individual always transmits the infection. The speed
of this process sets an upper bound for the speed of any dynamic process that
is mediated through the interaction events of the network. With the help of
temporal networks derived from large scale time-stamped data on mobile phone
calls, we extend earlier results that point out the slowing-down effects of
burstiness and temporal inhomogeneities. In such networks, links are not
permanently active, but dynamic processes are mediated by recurrent events
taking place on the links at specific points in time. We perform a multi-scale
analysis and pinpoint the importance of the timings of event sequences on
individual links, their correlations with neighboring sequences, and the
temporal pathways taken by the network-scale spreading process. This is
achieved by studying empirically and analytically different characteristic
relay times of links, relevant to the respective scales, and a set of temporal
reference models that allow for removing selected time-domain correlations one
by one.
|
1112.4323
|
Between theory and practice: guidelines for an optimization scheme with
genetic algorithms - Part I: single-objective continuous global optimization
|
cs.NE
|
The rapid advances in the field of optimization methods in many pure and
applied sciences pose the difficulty of keeping track of the developments as
well as of selecting an appropriate technique that best suits the problem at
hand. From a practitioner's point of view, it is legitimate to wonder "which
optimization method is the best for my problem?". Looking at the optimization
process as a "system" of interconnected parts, this paper collects some ideas
about how to tackle an optimization problem using a class of tools from
evolutionary computation called Genetic Algorithms. Despite the number of
optimization techniques available nowadays, the author of this paper thinks
that Genetic Algorithms still play a central role thanks to their versatility,
robustness, theoretical framework and simplicity of use. The paper can be
considered a "collection of tips" (from the literature and personal experience)
for the non-computer-scientist who has to deal with optimization problems in
science and engineering practice. No original methods or algorithms are
proposed.
|
1112.4344
|
A Scalable Multiclass Algorithm for Node Classification
|
cs.LG cs.GT
|
We introduce a scalable algorithm, MUCCA, for multiclass node classification
in weighted graphs. Unlike previously proposed methods for the same task, MUCCA
works in time linear in the number of nodes. Our approach is based on a
game-theoretic formulation of the problem in which the test labels are
expressed as a Nash Equilibrium of a certain game. However, in order to achieve
scalability, we find the equilibrium on a spanning tree of the original graph.
Experiments on real-world data reveal that MUCCA is much faster than its
competitors while achieving a similar predictive performance.
|
1112.4394
|
Additive Gaussian Processes
|
stat.ML cs.LG
|
We introduce a Gaussian process model of functions which are additive. An
additive function is one which decomposes into a sum of low-dimensional
functions, each depending on only a subset of the input variables. Additive GPs
generalize both Generalized Additive Models, and the standard GP models which
use squared-exponential kernels. Hyperparameter learning in this model can be
seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive
but tractable parameterization of the kernel function, which allows efficient
evaluation of all input interaction terms, whose number is exponential in the
input dimension. The additional structure discoverable by this model results in
increased interpretability, as well as state-of-the-art predictive power in
regression tasks.
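A sketch of the simplest member of this family, the first-order additive kernel, i.e. a sum of one-dimensional squared-exponential kernels (my illustration of the general idea; the paper's parameterization covers all interaction orders efficiently):

```python
import numpy as np

def additive_se_kernel(X1, X2, lengthscales, variance=1.0):
    """First-order additive kernel: sum over dimensions of 1-D SE kernels."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for d, ell in enumerate(lengthscales):
        diff = X1[:, d:d+1] - X2[:, d:d+1].T
        K += np.exp(-0.5 * (diff / ell) ** 2)
    return variance * K

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 3))
K = additive_se_kernel(X, X, lengthscales=[1.0, 2.0, 0.5])
print(K.shape, np.allclose(K, K.T))   # (6, 6) True
```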
|
1112.4422
|
Intermittent social distancing strategy for epidemic control
|
physics.soc-ph cs.SI
|
We study the critical effect of an intermittent social distancing strategy on
the propagation of epidemics in adaptive complex networks. We characterize the
effect of our strategy in the framework of the susceptible-infected-recovered
model. In our model, based on local information, a susceptible individual
interrupts the contact with an infected individual with a probability $\sigma$
and restores it after a fixed time $t_{b}$. We find that, depending on the
network topology, in our social distancing strategy there exists a cutoff
threshold $\sigma_{c}$ beyond which the epidemic phase disappears. Our results
are supported by a theoretical framework and extensive simulations of the
model. Furthermore, we show that this strategy is very efficient because it
leads to a "susceptible herd behavior" that protects a large fraction of
susceptible individuals. We explain our results using percolation arguments.
|
1112.4434
|
Oracle inequalities and minimax rates for non-local means and related
adaptive kernel-based methods
|
math.ST cs.CV cs.IT math.IT stat.TH
|
This paper describes a novel theoretical characterization of the performance
of non-local means (NLM) for noise removal. NLM has proven effective in a
variety of empirical studies, but little is understood fundamentally about how
it performs relative to classical methods based on wavelets or how various
parameters (e.g., patch size) should be chosen. For cartoon images and images
which may contain thin features and regular textures, the error decay rates of
NLM are derived and compared with those of linear filtering, oracle estimators,
variable-bandwidth kernel methods, Yaroslavsky's filter and wavelet
thresholding estimators. The trade-off between global and local search for
matching patches is examined, and the bias reduction associated with the local
polynomial regression version of NLM is analyzed. The theoretical results are
validated via simulations for 2D images corrupted by additive white Gaussian
noise.
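For orientation, a minimal sketch of the NLM estimator under study (the textbook form, assumed here; parameter names are mine): each pixel is replaced by a weighted average of nearby pixels, weighted by the similarity of their surrounding patches.

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=0.1):
    """Non-local means estimate of pixel (i, j); textbook form, not the
    paper's exact variant. `h` controls the weight bandwidth."""
    r, s = patch // 2, search // 2
    ref = img[i-r:i+r+1, j-r:j+r+1]
    num = den = 0.0
    for a in range(i - s, i + s + 1):        # local search window
        for b in range(j - s, j + s + 1):
            cand = img[a-r:a+r+1, b-r:b+r+1]
            w = np.exp(-np.sum((ref - cand) ** 2) / h**2)
            num += w * img[a, b]
            den += w
    return num / den

rng = np.random.default_rng(6)
img = np.zeros((32, 32)); img[:, 16:] = 1.0        # cartoon edge
noisy = img + 0.1 * rng.normal(size=img.shape)
print(nlm_pixel(noisy, 16, 8), nlm_pixel(noisy, 16, 24))  # ~0 and ~1
```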
|
1112.4438
|
Barcoding-free BAC Pooling Enables Combinatorial Selective Sequencing of
the Barley Gene Space
|
q-bio.GN cs.CE cs.DM cs.DS
|
We propose a new sequencing protocol that combines recent advances in
combinatorial pooling design and second-generation sequencing technology to
efficiently approach de novo selective genome sequencing. We show that
combinatorial pooling is a cost-effective and practical alternative to
exhaustive DNA barcoding when dealing with hundreds or thousands of DNA
samples, such as genome-tiling gene-rich BAC clones. The novelty of the
protocol hinges on the computational ability to efficiently compare hundreds of
millions of short reads and assign them to the correct BAC clones so that the
assembly can be carried out clone-by-clone. Experimental results on simulated
data for the rice genome show that the deconvolution is extremely accurate
(99.57% of the deconvoluted reads are assigned to the correct BAC), and the
resulting BAC assemblies have very high quality (BACs are covered by contigs
over about 77% of their length, on average). Experimental results on real data
for a gene-rich subset of the barley genome confirm that the deconvolution is
accurate (almost 70% of left/right pairs in paired-end reads are assigned to
the same BAC, despite being processed independently) and the BAC assemblies
have good quality (the average sum of all assembled contigs is about 88% of the
estimated BAC length).
|
1112.4454
|
Evolutionary Hessian Learning: Forced Optimal Covariance Adaptive
Learning (FOCAL)
|
cs.NE cs.NA quant-ph
|
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has been the
most successful Evolution Strategy at exploiting covariance information; it
uses a form of Principal Component Analysis which, under certain conditions, is
suggested to converge to the correct covariance matrix, formulated as the
inverse of the mathematically well-defined Hessian matrix. However, in
practice, there exist conditions where CMA-ES converges to the global optimum
(accomplishing its primary goal) while it does not learn the true covariance
matrix (missing an auxiliary objective), likely due to step-size deficiency.
These circumstances can involve high-dimensional landscapes with large
condition numbers. This paper introduces a novel technique entitled Forced
Optimal Covariance Adaptive Learning (FOCAL), with the explicit goal of
determining the Hessian at the global basin of attraction. It begins by
introducing theoretical foundations to the inverse relationship between the
learned covariance and the Hessian matrices. FOCAL is then introduced and
demonstrated to retrieve the Hessian matrix with high fidelity on both model
landscapes and experimental Quantum Control systems, which are observed to
possess a non-separable, non-quadratic search landscape. The recovered Hessian
forms are corroborated by physical knowledge of the systems. This study
constitutes an example for Natural Computing successfully serving other
branches of natural sciences, and introducing at the same time a powerful
generic method for any high-dimensional continuous search seeking landscape
information.
|
1112.4456
|
Cluster Analysis for a Scale-Free Folksodriven Structure Network
|
cs.SI cs.IR physics.soc-ph
|
Folksonomy is said to provide a democratic tagging system that reflects the
opinions of the general public, but it is not a classification system and it is
hard to make sense of. A representation of contexts would need to be shared by
all users in order to develop social and collaborative matching. A solution
could be to help users choose proper tags through a dynamically driven
folksonomy system that can evolve over time. This paper uses cluster analysis
to measure a new structure called "Folksodriven", which consists of tags,
source and time. Many approaches aim to exploit a folksonomy that evolves over
time in order to evaluate its characteristics. This paper describes an
alternative whose goal is to develop a weighted network of tags, where link
strengths are based on the frequencies of tag co-occurrence, and studies the
weight distributions and connectivity correlations among nodes in this network.
The paper proposes and analyzes the network structure of the Folksodriven
tags, thought of as folksonomy tag suggestions for the user, on a dataset built
from chosen websites. It is observed that the hypergraphs of the Folksodriven
are highly connected and that the relative path lengths are relatively low,
thus facilitating the serendipitous discovery of interesting content for the
users. Its clustering coefficient is then compared with that of random
networks. The goal of this paper is a useful analysis of the use of
folksonomies on some well-known and extensive web sites with real user
involvement. The advantage of the new folksonomy-based tagging method is an
interesting new method to be employed by a knowledge management system.
*** This paper has been accepted to the International Conference on Social
Computing and its Applications (SCA 2011) - Sydney Australia, 12-14 December
2011 ***
|
1112.4553
|
Cooperative Algorithms for MIMO Amplify-and-Forward Relay Networks
|
cs.IT math.IT
|
Interference alignment is a signaling technique that provides high
multiplexing gain in the interference channel. It can be extended to multi-hop
interference channels, where relays aid transmission between sources and
destinations. In addition to coverage extension and capacity enhancement,
relays increase the multiplexing gain in the interference channel. In this
paper, three cooperative algorithms are proposed for a multiple-antenna
amplify-and-forward (AF) relay interference channel. The algorithms design the
transmitters and relays so that interference at the receivers can be aligned
and canceled. The first algorithm minimizes the sum power of enhanced noise
from the relays and interference at the receivers. The second and third
algorithms rely on a connection between mean square error and mutual
information to solve the end-to-end sum-rate maximization problem with either
equality or inequality power constraints via matrix-weighted sum mean square
error minimization. The resulting iterative algorithms converge to stationary
points of the corresponding optimization problems. Simulations show that the
proposed algorithms achieve higher end-to-end sum-rates and multiplexing gains
than existing strategies for AF relays, decode-and-forward relays, and direct
transmission. The first algorithm outperforms the other algorithms at high
signal-to-noise ratio (SNR) but performs worse than them at low SNR. Thanks to
power control, the third algorithm outperforms the second algorithm at the cost
of overhead.
|
1112.4597
|
Evaluating Network Models: A Likelihood Analysis
|
physics.soc-ph cs.SI physics.data-an
|
Many models have been put forward to mimic the evolution of real networked
systems. A well-accepted way to judge their validity is to compare the modeling
results with real networks with respect to several structural features.
However, even for a specific real network, we cannot fairly evaluate the
goodness of different models, since there are too many structural features and
no criterion for selecting and weighting them. Motivated by studies on link
prediction algorithms, we propose a unified method to evaluate network models
via a comparison of the likelihoods of the currently observed network under
different models, with the assumption that the higher the likelihood, the
better the model. We test our method on the real Internet at the Autonomous
System (AS) level, and the results suggest that the Generalized Linear
Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang),
while both models are better than the Barab\'asi-Albert (BA) and
Erd\"os-R\'enyi (ER) models. Our method can be further applied to determine the
optimal values of parameters, namely those that correspond to the maximal
likelihood. Experiments indicate that the parameters obtained by our method
capture the characteristics of newly-added nodes and links in the AS-level
Internet better than the original methods in the literature.
|
1112.4607
|
Alignment Based Kernel Learning with a Continuous Set of Base Kernels
|
cs.LG stat.ML
|
The success of kernel-based learning methods depends on the choice of kernel.
Recently, kernel learning methods have been proposed that use data to select
the most appropriate kernel, usually by combining a set of base kernels. We
introduce a new algorithm for kernel learning that combines a {\em continuous
set of base kernels}, without the common step of discretizing the space of base
kernels. We demonstrate that our new method achieves state-of-the-art
performance across a variety of real-world datasets. Furthermore, we explicitly
demonstrate the importance of combining the right dictionary of kernels, which
is problematic for methods based on a finite set of base kernels chosen a
priori. Our method is not the first approach to work with continuously
parameterized kernels. However, we show that our method requires substantially
less computation than previous such approaches, and so is more amenable to
multiple dimensional parameterizations of base kernels, which we demonstrate.
|
1112.4625
|
Pseudocodewords from Bethe Permanents
|
cs.IT math.IT
|
It was recently conjectured that a vector with components equal to the Bethe
permanent of certain submatrices of a parity-check matrix is a pseudocodeword.
In this paper we prove a stronger version of this conjecture for some important
cases and investigate the families of pseudocodewords obtained by using the
Bethe permanent. We also highlight some interesting properties of the permanent
of block matrices and their effects on pseudocodewords.
|
1112.4628
|
Using Artificial Bee Colony Algorithm for MLP Training on Earthquake
Time Series Data Prediction
|
cs.NE cs.AI cs.LG
|
Nowadays, computer scientists have shown interest in the study of social
insects' behaviour in the neural networks area for solving different
combinatorial and statistical problems. Chief among these approaches is the
Artificial Bee Colony (ABC) algorithm. This paper investigates the use of the
ABC algorithm, which simulates the intelligent foraging behaviour of a honey
bee swarm. A Multilayer Perceptron (MLP) is normally trained with the
computationally intensive standard back-propagation (BP) algorithm. One of the
crucial problems with the BP algorithm is that it can sometimes yield networks
with suboptimal weights because of the presence of many local optima in the
solution space. To overcome this, the ABC algorithm is used in this work to
train an MLP to learn the complex behaviour of earthquake time series data, and
the performance of MLP-ABC is benchmarked against an MLP trained with standard
BP. The experimental results show that MLP-ABC performs better than MLP-BP on
time series data.
|
1112.4631
|
Fuzzy cellular model of signal controlled traffic stream
|
cs.DM cs.SY nlin.CG
|
Microscopic traffic models have recently gained considerable importance as a
means of optimising traffic control strategies. Computationally efficient and
sufficiently accurate microscopic traffic models have been developed based on
cellular automata theory. However, the real-time application of the available
cellular automata models in traffic control systems is a difficult task due to
their discrete and stochastic nature. This paper introduces a novel method of
traffic stream modelling which combines cellular automata and fuzzy calculus.
The introduced fuzzy cellular traffic model eliminates the main drawbacks of
the cellular automata approach, i.e. the necessity of multiple Monte Carlo
simulations and calibration issues. Experimental results show that the
evolution of a simulated traffic stream in the proposed fuzzy cellular model is
consistent with that observed for stochastic cellular automata. The comparison
of both methods confirms that the computational cost of traffic simulation is
considerably lower for the proposed model. The model is suitable for real-time
applications in traffic control systems.
|
1112.4708
|
Transformation Networks: How Innovation and the Availability of
Technology can Increase Economic Performance
|
cs.SI
|
A transformation network describes how one set of resources can be
transformed into another via technological processes. Transformation networks
in economics are useful because they can highlight areas for future
innovations, both in terms of new products, new production techniques, or
better efficiency. They also make it easy to detect areas where an economy
might be fragile. In this paper, we use computational simulations to
investigate how the density of a transformation network affects the economic
performance, as measured by the gross domestic product (GDP), of an artificial
economy. Our results show that on average, the GDP of our economy increases as
the density of the transformation network increases. We also find that while
the average performance increases, the maximum possible performance decreases
and the minimum possible performance increases.
|
1112.4718
|
Inhomogeneous epidemics on weighted networks
|
math.PR cs.SI physics.soc-ph
|
A social (sexual) network is modeled by an extension of the configuration
model to the situation where edges have weights, e.g. reflecting the number of
sex-contacts between the individuals. An epidemic model is defined on the
network such that individuals are heterogeneous in terms of how susceptible and
infectious they are. The basic reproduction number $R_0$ is derived and studied
for various examples, as are the size and probability of a major outbreak. The
qualitative conclusion is that $R_0$ gets larger as the community becomes more
heterogeneous, but that different heterogeneities (degree distribution, weight,
susceptibility and infectivity) can sometimes have the cumulative effect of
homogenizing the community, thus making $R_0$ smaller. The effect on the
probability and final size of an outbreak is more complicated.
|
1112.4722
|
Modeling transition dynamics in MDPs with RKHS embeddings of conditional
distributions
|
cs.LG
|
We propose a new, nonparametric approach to estimating the value function in
reinforcement learning. This approach makes use of a recently developed
representation of conditional distributions as functions in a reproducing
kernel Hilbert space. Such representations bypass the need for estimating
transition probabilities, and apply to any domain on which kernels can be
defined. Our approach avoids the need to approximate intractable integrals
since expectations are represented as RKHS inner products whose computation has
linear complexity in the sample size. Thus, we can efficiently perform value
function estimation in a wide variety of settings, including finite state
spaces, continuous state spaces, and partially observable tasks where only
sensor measurements are available. A second advantage of the approach is that
we learn the conditional distribution representation from a training sample,
and do not require an exhaustive exploration of the state space. We prove
convergence of our approach either to the optimal policy, or to the closest
projection of the optimal policy in our model class, under reasonable
assumptions. In experiments, we demonstrate the performance of our algorithm on
a learning task in a continuous state space (the under-actuated pendulum), and
on a navigation problem where only images from a sensor are observed. We
compare with least-squares policy iteration where a Gaussian process is used
for value function estimation. Our algorithm achieves better performance in
both tasks.
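A minimal sketch of the kernel-embedding machinery described above (the generic conditional mean embedding recipe, assumed rather than taken from the paper): from transition samples (x_i, x'_i), the conditional expectation of any function of the next state reduces to a kernel ridge formula, so no transition probabilities are ever estimated.

```python
import numpy as np

def rbf(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(7)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))            # sampled states
Xn = 0.9 * X + 0.05 * rng.normal(size=(n, 1))  # sampled next states

lam = 1e-3
W = np.linalg.solve(rbf(X, X) + lam * n * np.eye(n), np.eye(n))

def cond_expect(f, x):
    """E[f(X') | X = x] via the empirical conditional mean embedding."""
    beta = W @ rbf(X, x.reshape(1, -1))        # weights over the sample
    return float(f(Xn).T @ beta)

f = lambda s: s**2                             # any function of the next state
print(cond_expect(f, np.array([0.5])))         # ~ (0.9 * 0.5)^2 + noise var
```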
|
1112.4758
|
A measure of centrality based on the spectrum of the Laplacian
|
physics.data-an cs.SI physics.soc-ph
|
We introduce a family of new centralities, the k-spectral centralities.
k-Spectral centrality is a measurement of importance with respect to the
deformation of the graph Laplacian associated with the graph. Due to this
connection, k-spectral centralities have various interpretations in terms of
spectrally determined information.
We explore this centrality in the context of several examples. While for
sparse unweighted networks 1-spectral centrality behaves similarly to other
standard centralities, for dense weighted networks they show different
properties. In summary, the k-spectral centralities provide a novel and useful
measurement of relevance (for single network elements as well as whole
subnetworks) distinct from other known measures.
|
1112.4788
|
Entropic Inequalities and Marginal Problems
|
cs.IT math.IT math.PR quant-ph
|
A marginal problem asks whether a given family of marginal distributions for
some set of random variables arises from some joint distribution of these
variables. Here we point out that the existence of such a joint distribution
imposes non-trivial conditions already on the level of Shannon entropies of the
given marginals. These entropic inequalities are necessary (but not sufficient)
criteria for the existence of a joint distribution. For every marginal problem,
a list of such Shannon-type entropic inequalities can be calculated by
Fourier-Motzkin elimination, and we offer a software interface to a
Fourier-Motzkin solver for doing so. For the case that the hypergraph of given
marginals is a cycle graph, we provide a complete analytic solution to the
problem of classifying all relevant entropic inequalities, and use this result
to bound the decay of correlations in stochastic processes. Furthermore, we
show that Shannon-type inequalities for differential entropies are not relevant
for continuous-variable marginal problems; non-Shannon-type inequalities are,
both in the discrete and in the continuous case. In contrast to other
approaches, our general framework easily adapts to situations where one has
additional (conditional) independence requirements on the joint distribution,
as in the case of graphical models. We end with a list of open problems.
A complementary article discusses applications to quantum nonlocality and
contextuality.
|
1112.4811
|
Phase-Quantized Block Noncoherent Communication
|
cs.IT math.IT
|
Analog-to-digital conversion (ADC) is a key bottleneck in scaling DSP-centric
receiver architectures to multi-Gigabit/s speeds. Recent information-theoretic
results, obtained under ideal channel conditions (perfect synchronization, no
dispersion), indicate that low-precision ADC (1-4 bits) could be a suitable
choice for designing such high speed systems. In this work, we study the impact
of employing low-precision ADC in a {\it carrier asynchronous} system.
Specifically, we consider transmission over the block noncoherent Additive
White Gaussian Noise (AWGN) channel, and investigate the achievable performance
under low-precision output quantization. We focus attention on an architecture
in which the receiver quantizes {\it only the phase} of the received signal:
this has the advantage of being implementable without automatic gain control,
using multiple 1-bit ADCs preceded by analog multipliers. For standard uniform
Phase Shift Keying (PSK) modulation, we study the structure of the transition
density of the resulting phase-quantized block noncoherent channel. Several
results, based on the symmetry inherent in the channel model, are provided to
characterize this transition density. Low-complexity procedures for computing
the channel capacity, and for block demodulation, are obtained using these
results. Numerical computations are performed to compare the performance of
quantized and unquantized systems, for different quantization precisions, and
different block lengths. It is observed, for example, that with QPSK
modulation, 8-bin phase quantization of the received signal recovers about
80-85% of the capacity attained with unquantized observations, while 12-bin
phase quantization recovers more than 90% of the unquantized capacity.
Dithering the constellation is shown to improve the performance in the face of
drastic quantization.
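A toy sketch of the phase-only front end being modeled (my illustration, not the paper's implementation): each received sample is mapped to one of b uniform phase sectors, discarding amplitude entirely.

```python
import numpy as np

def quantize_phase(samples, bins=8):
    """Map each complex sample to one of `bins` uniform phase sectors;
    only the phase is kept, so no automatic gain control is needed."""
    phase = np.angle(samples) % (2 * np.pi)
    return np.floor(phase / (2 * np.pi / bins)).astype(int)

rng = np.random.default_rng(8)
symbols = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, size=10)))  # QPSK
received = symbols * np.exp(1j * 0.3)        # unknown carrier phase offset
received += 0.1 * (rng.normal(size=10) + 1j * rng.normal(size=10))  # AWGN
print(quantize_phase(received, bins=8))      # 8-bin phase observations
```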
|
1112.4826
|
Does strong heterogeneity promote cooperation by group interactions?
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
Previous research has highlighted the importance of strong heterogeneity for
the successful evolution of cooperation in games governed by pairwise
interactions. Here we determine to what extent this is true for games governed
by group interactions. We therefore study the evolution of cooperation in the
public goods game on the square lattice, the triangular lattice and the random
regular graph, whereby the payoffs are distributed either uniformly or
exponentially amongst the players by assigning to them individual scaling
factors that determine the share of the public good they will receive. We find
that uniformly distributed public goods are more successful in maintaining high
levels of cooperation than exponentially distributed public goods. This is not
in agreement with previous results on games governed by pairwise interactions,
indicating that group interactions may be less susceptible to the promotion of
cooperation by means of strong heterogeneity as originally assumed, and that
the role of strongly heterogeneous states should be reexamined for other types
of games.
|
1112.4876
|
Random Coding Bound for the Reliability Function in Quantum Channel:
General Case
|
cs.IT math.IT
|
We complete the proof of a conjecture which allows one to complete the
derivation of the random coding bound for the reliability function of a quantum
channel in the case of arbitrary signal states.
|
1112.4879
|
Interference Alignment: From Degrees-of-Freedom to Constant-Gap Capacity
Approximations
|
cs.IT math.IT
|
Interference alignment is a key technique for communication scenarios with
multiple interfering links. In several such scenarios, interference alignment
was used to characterize the degrees-of-freedom of the channel. However, these
degrees-of-freedom capacity approximations are often too weak to make accurate
predictions about the behavior of channel capacity at finite signal-to-noise
ratios (SNRs). The aim of this paper is to significantly strengthen these
results by showing that interference alignment can be used to characterize
capacity to within a constant gap. We focus on real, time-invariant,
frequency-flat X-channels. The only known solutions achieving the
degrees-of-freedom of this channel are either based on real interference
alignment or on layer-selection schemes. Neither of these solutions seems
sufficient for a constant-gap capacity approximation.
In this paper, we propose a new communication scheme and show that it
achieves the capacity of the Gaussian X-channel to within a constant gap. To
aid in this process, we develop a novel deterministic channel model. This
deterministic model depends on the 0.5log(SNR) most-significant bits of the
channel coefficients rather than only the single most-significant bit used in
conventional deterministic models. The proposed deterministic model admits a
wider range of achievable schemes that can be translated to the Gaussian
channel. For this deterministic model, we find an approximately optimal
communication scheme. We then translate this scheme for the deterministic
channel to the original Gaussian X-channel and show that it achieves capacity
to within a constant gap. This is the first constant-gap result for a general,
fully-connected network requiring interference alignment.
|
1112.4883
|
Computing the Matched Filter in Linear Time
|
cs.IT math.IT
|
A fundamental problem in wireless communication is the time-frequency shift
(TFS) problem: Find the time-frequency shift of a signal in a noisy
environment. The shift is the result of time asynchronization of a sender with
a receiver, and of non-zero speed of a sender with respect to a receiver. A
classical solution of a discrete analog of the TFS problem is called the
matched filter algorithm. It uses a pseudo-random waveform S(t) of length p, and its arithmetic complexity is O(p^{2} \cdot log (p)) using the fast Fourier transform. In these notes we introduce a novel approach to designing new waveforms that allow a faster matched filter algorithm. We use techniques from
group representation theory to design waveforms S(t), which enable us to
introduce two fast matched filter (FMF) algorithms, called the flag algorithm,
and the cross algorithm. These methods solve the TFS problem in O(p\cdot log
(p)) operations. We discuss applications of the algorithms to mobile
communication, GPS, and radar.
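As a concrete baseline (the classical algorithm, not the flag or cross algorithms introduced above), a minimal sketch of the FFT-based matched filter that scans all p^2 time-frequency hypotheses in O(p^2 log p) arithmetic; the waveform and shifts below are hypothetical:
```python
import numpy as np

def matched_filter_tfs(S, R):
    """Classical matched filter for the time-frequency shift problem:
    for each of the p frequency hypotheses, correlate R against the
    modulated waveform with one FFT, O(p^2 log p) arithmetic overall."""
    p = len(S)
    R_hat = np.fft.fft(R)
    best, best_val = (0, 0), -1.0
    for w in range(p):
        S_w = S * np.exp(2j * np.pi * w * np.arange(p) / p)
        corr = np.fft.ifft(R_hat * np.conj(np.fft.fft(S_w)))  # all time shifts at once
        t = int(np.argmax(np.abs(corr)))
        if np.abs(corr[t]) > best_val:
            best_val, best = float(np.abs(corr[t])), (t, w)
    return best

# toy check with a pseudo-random phase waveform (hypothetical parameters)
p = 101
rng = np.random.default_rng(0)
S = np.exp(2j * np.pi * rng.random(p))
tau, omega = 17, 23
R = np.roll(S, tau) * np.exp(2j * np.pi * omega * np.arange(p) / p)
print(matched_filter_tfs(S, R))  # expect (17, 23) under this shift convention
```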
|
1112.4895
|
3D Finite Element Analysis of HMA Overlay Mix Design to Control
Reflective Cracking
|
cs.CE
|
This study examines the effectiveness of HMA overlay design strategies for
the purpose of controlling the development of reflective cracking. A parametric
study was conducted using a 3D Finite Element (FE) model of a rigid pavement
section including Linear Viscoelastic (LVE) material properties for the Hot Mix
Asphalt (HMA) overlay and non-uniform tire-pavement contact stresses. Several
asphalt mixtures were tested in the surface, intermediate, and leveling course
of the HMA overlay. Results obtained show that no benefits can be anticipated
by using either Polymer-Modified (PM) or Dense-Graded (DG) mixtures instead of
Standard Binder (SB) mixtures in the surface or intermediate course. For the
leveling course, the use of a PM asphalt binder was found beneficial in terms
of mitigating reflective cracking. As compared to the SB mix, the use of PM
asphalt mixture in the leveling course reduced the level of longitudinal
tensile stress at the bottom of the HMA overlay above the PCC joint by
approximately 30%.
|
1112.4906
|
Passive and Driven Trends in the Evolution of Complexity
|
cs.NE q-bio.PE
|
The nature and source of evolutionary trends in complexity is difficult to
assess from the fossil record, and the driven vs. passive nature of such trends
has been debated for decades. There are also questions about how effectively
artificial life software can evolve increasing levels of complexity. We extend
our previous work demonstrating an evolutionary increase in an information
theoretic measure of neural complexity in an artificial life system
(Polyworld), and introduce a new technique for distinguishing driven from
passive trends in complexity. Our experiments show that evolution can and does
select for complexity increases in a driven fashion, in some circumstances, but
under other conditions it can also select for complexity stability. It is
suggested that the evolution of complexity is entirely driven---just not in a
single direction---at the scale of species. This leaves open the question of
evolutionary trends at larger scales.
|
1112.4909
|
A Unit Commitment Model with Demand Response for the Integration of
Renewable Energies
|
cs.SY
|
The output of renewable energy fluctuates significantly depending on weather
conditions. We develop a unit commitment model to analyze requirements of the
forecast output and its error for renewable energies. Our model obtains the
time series for the operational state of thermal power plants that would
maximize the profits of an electric power utility by taking into account both
the forecast output and its error for renewable energies and the demand response
of consumers. We consider a power system consisting of thermal power plants,
photovoltaic systems (PV), and wind farms and analyze the effect of the
forecast error on the operation cost and reserves. We confirm that the
operation cost increases with the forecast error. The effect of a sudden
decrease in wind power is also analyzed. More thermal power plants need to be
operated to generate power to absorb this sudden decrease in wind power. The
increase in the number of operating thermal power plants within a short period
does not affect the total operation cost significantly; however, the extent to which thermal power plants can be substituted by wind farms or PV systems is not expected to be very high. Finally, the effects of the demand response in the
case of a sudden decrease in wind power are analyzed. We confirm that the
number of operating thermal power plants is reduced by the demand response. A
power utility has to keep thermal power plants running to ensure the supply-demand balance; some of these plants can be decommissioned after installing a large number of wind farms or PV systems if the demand response is applied using an appropriate price structure.
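For readers unfamiliar with unit commitment, a toy sketch of the kind of mixed-integer program involved, assuming the PuLP library; this is a minimal cost-minimizing variant with hypothetical numbers, not the paper's profit-maximizing model with reserves and demand response:
```python
import pulp

# Toy unit commitment: schedule two thermal units over three periods to meet
# net demand (load minus renewable forecast), minimizing fuel cost.
T = range(3)
net_demand = [120, 80, 150]                 # hypothetical MW values
cap = {"u1": 100, "u2": 80}
fuel = {"u1": 20, "u2": 30}                 # hypothetical $/MWh
prob = pulp.LpProblem("unit_commitment", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", (cap, T), cat="Binary")
gen = pulp.LpVariable.dicts("gen", (cap, T), lowBound=0)
prob += pulp.lpSum(fuel[u] * gen[u][t] for u in cap for t in T)
for t in T:
    prob += pulp.lpSum(gen[u][t] for u in cap) >= net_demand[t]  # balance
    for u in cap:
        prob += gen[u][t] <= cap[u] * on[u][t]                   # capacity if on
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({(u, t): gen[u][t].value() for u in cap for t in T})
```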
|
1112.4915
|
Cheaters in the Steam Community Gaming Social Network
|
cs.SI cs.CY physics.soc-ph
|
Online gaming is a multi-billion dollar industry that entertains a large,
global population. One unfortunate phenomenon, however, poisons the competition
and the fun: cheating. The costs of cheating span from industry-supported
expenditures to detect and limit cheating, to victims' monetary losses due to
cyber crime.
This paper studies cheaters in the Steam Community, an online social network
built on top of the world's dominant digital game delivery platform. We
collected information about more than 12 million gamers connected in a global
social network, of which more than 700 thousand have their profiles flagged as
cheaters. We also collected in-game interaction data of over 10 thousand
players from a popular multiplayer gaming server. We show that cheaters are
well embedded in the social and interaction networks: their network position is
largely indistinguishable from that of fair players. We observe that the
cheating behavior appears to spread through a social mechanism: the presence
and the number of cheater friends of a fair player is correlated with the
likelihood of her becoming a cheater in the future. Also, we observe that there
is a social penalty involved with being labeled as a cheater: cheaters are
likely to switch to more restrictive privacy settings once they are tagged and
they lose more friends than fair players. Finally, we observe that the number
of cheaters is not correlated with the geographical, real-world population
density, or with the local popularity of the Steam Community.
This analysis can ultimately inform the design of mechanisms to deal with
anti-social behavior (e.g., spamming, automated collection of data) in generic
online social networks.
|
1112.5032
|
Decentralized Disturbance Accommodation with Limited Plant Model
Information
|
math.OC cs.SY
|
The design of optimal disturbance accommodation and servomechanism
controllers with limited plant model information is considered in this paper.
Their closed-loop performance is compared using a performance metric called the competitive ratio, which is the worst-case ratio of the cost of a given control
design strategy to the cost of the optimal control design with full model
information. It was recently shown that when it comes to designing optimal
centralized or partially structured decentralized state-feedback controllers
with limited model information, the best control design strategy in terms of
competitive ratio is a static one. This is true even though the optimal
structured decentralized state-feedback controller with full model information
is dynamic. In this paper, we show that, in contrast, the best limited model
information control design strategy for the disturbance accommodation problem
gives a dynamic controller. We find an explicit minimizer of the competitive
ratio and we show that it is undominated, that is, there is no other control
design strategy that performs better for all possible plants while having the
same worst-case ratio. This optimal controller can be separated into a static
feedback law and a dynamic disturbance observer. For constant disturbances, it
is shown that this structure corresponds to proportional-integral control.
|
1112.5116
|
Evolution of sustained foraging in 3D environments with physics
|
cs.NE q-bio.NC q-bio.PE
|
Artificially evolving foraging behavior in simulated legged animals has
proved to be a notoriously difficult task. Here, we co-evolve the morphology
and controller for virtual organisms in a three-dimensional physically
realistic environment to produce goal-directed legged locomotion. We show that
following and reaching multiple food sources can evolve de novo, by evaluating
each organism on multiple food sources placed on a basic pattern that is
gradually randomized across generations. We devised a strategy of evolutionary
"staging", where the best organism from a set of evolutionary experiments using
a particular fitness function is used to seed a new set, with a fitness
function that is progressively altered to better challenge organisms as
evolution improves them. We find that an organism's efficiency at reaching the
first food source does not predict its ability at finding subsequent ones
because foraging efficiency crucially depends on the position of the last food
source reached, an effect illustrated by "foraging maps" that capture the
organism's controller state, body position, and orientation. Our best evolved
foragers are able to reach multiple food sources over 90% of the time on
average, a behavior that is key to any biologically realistic simulation where
a self-sustaining population has to survive by collecting food sources in
three-dimensional, physical environments.
|
1112.5121
|
A Model of Collaboration Network Formation with Heterogenous Skills
|
physics.soc-ph cs.SI
|
Collaboration networks provide a method for examining the highly
heterogeneous structure of collaborative communities. However, we still have
limited theoretical understanding of how individual heterogeneity relates to
network heterogeneity. The model presented here provides a framework linking an
individual's skill set to her position in the collaboration network, and the
distribution of skills in the population to the structure of the collaboration
network as a whole. This model suggests that there is a non-trivial
relationship between skills and network position: individuals with a useful
combination of skills will have a disproportionate number of links in the
network. Indeed, in some cases, an individual's degree is non-monotonic in the
number of skills she has--an individual with very few skills may outperform an
individual with many. Special cases of the model suggest that the degree
distribution of the network will be skewed, even when the distribution of
skills is uniform in the population. The degree distribution becomes more
skewed as problems become more difficult, leading to a community dominated by a
few high-degree superstars. This has striking implications for labor market
outcomes in industries where production is largely the result of collaborative
effort.
|
1112.5246
|
Combining One-Class Classifiers via Meta-Learning
|
cs.LG
|
Selecting the best classifier among the available ones is a difficult task,
especially when only instances of one class exist. In this work we examine the
notion of combining one-class classifiers as an alternative for selecting the
best classifier. In particular, we propose two new one-class classification
performance measures to weigh classifiers and show that a simple ensemble that
implements these measures can outperform the most popular one-class ensembles.
Furthermore, we propose a new one-class ensemble scheme, TUPSO, which uses
meta-learning to combine one-class classifiers. Our experiments demonstrate the
superiority of TUPSO over all other tested ensembles and show that the TUPSO
performance is statistically indistinguishable from that of the hypothetical
best classifier.
|
1112.5252
|
Ranking and clustering of nodes in networks with smart teleportation
|
cs.SI physics.soc-ph
|
Random teleportation is a necessary evil for ranking and clustering directed
networks based on random walks. Teleportation enables ergodic solutions, but
the solutions must necessarily depend on the exact implementation and
parametrization of the teleportation. For example, in the commonly used
PageRank algorithm, the teleportation rate must trade off a heavily biased
solution with a uniform solution. Here we show that teleportation to links
rather than nodes enables a much smoother trade-off and effectively more robust
results. We also show that, by not recording the teleportation steps of the
random walker, we can further reduce the effect of teleportation with dramatic
effects on clustering.
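A minimal power-iteration sketch of the idea, assuming that teleportation to links means the walker lands on a node with probability proportional to the total weight of its incoming links (its in-strength) rather than uniformly:
```python
import numpy as np

def pagerank_link_teleport(A, alpha=0.85, tol=1e-12):
    """PageRank with link teleportation: A[i, j] is the weight of link i -> j;
    the teleport distribution v is proportional to each node's in-strength.
    Dangling-node mass is also redistributed according to v."""
    n = A.shape[0]
    out = A.sum(axis=1)
    P = np.divide(A, out[:, None], out=np.zeros_like(A, dtype=float),
                  where=out[:, None] > 0)
    v = A.sum(axis=0) / A.sum()            # teleport-to-links profile
    p = np.full(n, 1.0 / n)
    while True:
        dangling = p[out == 0].sum()
        p_new = alpha * (p @ P) + (alpha * dangling + 1 - alpha) * v
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
```
Standard PageRank is recovered by replacing v with the uniform vector.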
|
1112.5282
|
Observability of Strapdown INS Alignment: A Global Perspective
|
cs.RO cs.SY
|
Alignment of the strapdown inertial navigation system (INS) exhibits strong nonlinearity, which is even worse when maneuvers, e.g., tumbling techniques, are employed to improve the alignment. There is no general rule to attack the
observability of a nonlinear system, so most previous works addressed the
observability of the corresponding linearized system by implicitly assuming
that the original nonlinear system and the linearized one have identical
observability characteristics. Strapdown INS alignment is a nonlinear system
that has its own characteristics. Using the inherent properties of strapdown
INS, e.g., the attitude evolution on the SO(3) manifold, we start from the
basic definition and develop a global and constructive approach to investigate
the observability of strapdown INS static and tumbling alignment, highlighting
the effects of the attitude maneuver on observability. We prove that strapdown
INS alignment, considering the unknown constant sensor biases, will be
completely observable if the strapdown INS is rotated successively about two
different axes and will be nearly observable for finite known unobservable
states (no more than two) if it is rotated about a single axis. Observability
from a global perspective provides us with insights into and a clearer picture
of the problem, shedding light on previous theoretical results on strapdown INS
alignment that were not comprehensive or consistent. The reporting of
inconsistencies calls for a review of all linearization-based observability
studies in the vast literature. Extensive simulations with constructed ideal
observers and an extended Kalman filter are carried out, and the numerical
results accord with the analysis. The conclusions can also assist in designing
the optimal tumbling strategy and the appropriate state observer in practice to
maximize the alignment performance.
|
1112.5283
|
On Position Translation Vector
|
cs.RO
|
The paper derives a new "position translation vector" (PTV) with a remarkably simpler rate equation, and proves its connections with Savage's PTV.
|
1112.5297
|
Robustness of onion-like correlated networks against targeted attacks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Recently, it was found by Schneider et al. [Proc. Natl. Acad. Sci. USA, 108,
3838 (2011)], using simulations, that scale-free networks with "onion
structure" are very robust against targeted high degree attacks. The onion
structure is a network where nodes with almost the same degree are connected.
Motivated by this work, we propose and analyze, based on analytical
considerations, an onion-like candidate for a nearly optimal structure against
simultaneous random and targeted high degree node attacks. The nearly optimal
structure can be viewed as hierarchically interconnected random regular graphs, the degrees and populations of which are specified by the degree
distribution. This network structure exhibits an extremely assortative
degree-degree correlation and has a close relationship to the "onion
structure." After deriving a set of exact expressions that enable us to
calculate the critical percolation threshold and the giant component of a
correlated network for an arbitrary type of node removal, we apply the theory
to the cases of random scale-free networks that are highly vulnerable against
targeted high degree node removal. Our results show that this vulnerability can
be significantly reduced by implementing this onion-like type of degree-degree
correlation without significantly undermining the almost complete robustness against
random node removal. We also investigate in detail the robustness enhancement
due to assortative degree-degree correlation by introducing a joint
degree-degree probability matrix that interpolates between an uncorrelated
network structure and the onion-like structure proposed here by tuning a single
control parameter. The optimal values of the control parameter that maximize
the robustness against simultaneous random and targeted attacks are also
determined. Our analytical calculations are supported by numerical simulations.
|
1112.5298
|
Zero-Temperature Limit of a Convergent Algorithm to Minimize the Bethe
Free Energy
|
cs.CV
|
After the discovery that fixed points of loopy belief propagation coincide
with stationary points of the Bethe free energy, several researchers proposed
provably convergent algorithms to directly minimize the Bethe free energy.
These algorithms were formulated only for non-zero temperature (thus finding
fixed points of the sum-product algorithm) and their possible extension to zero
temperature is not obvious. We present the zero-temperature limit of the
double-loop algorithm by Heskes, which converges to a max-product fixed point. The
inner loop of this algorithm is max-sum diffusion. Under certain conditions,
the algorithm combines the complementary advantages of the max-product belief
propagation and max-sum diffusion (LP relaxation): it yields good approximations
of both ground states and max-marginals.
|
1112.5309
|
POWERPLAY: Training an Increasingly General Problem Solver by
Continually Searching for the Simplest Still Unsolvable Problem
|
cs.AI cs.LG
|
Most of computer science focuses on automatically solving given computational
problems. I focus on automatically inventing or discovering problems in a way
inspired by the playful behavior of animals and humans, to train a more and
more general problem solver from scratch in an unsupervised fashion. Consider
the infinite set of all computable descriptions of tasks with possibly
computable solutions. The novel algorithmic framework POWERPLAY (2011)
continually searches the space of possible pairs of new tasks and modifications
of the current problem solver, until it finds a more powerful problem solver
that provably solves all previously learned tasks plus the new one, while the
unmodified predecessor does not. Wow-effects are achieved by continually making
previously learned skills more efficient such that they require less time and
space. New skills may (partially) re-use previously learned skills. POWERPLAY's
search orders candidate pairs of tasks and solver modifications by their
conditional computational (time & space) complexity, given the stored
experience so far. The new task and its corresponding task-solving skill are
those first found and validated. The computational costs of validating new
tasks need not grow with task repertoire size. POWERPLAY's ongoing search for
novelty keeps breaking the generalization abilities of its present solver. This
is related to Goedel's sequence of increasingly powerful formal theories based
on adding formerly unprovable statements to the axioms without affecting
previously provable theorems. The continually increasing repertoire of problem
solving procedures can be exploited by a parallel search for solutions to
additional externally posed tasks. POWERPLAY may be viewed as a greedy but
practical implementation of basic principles of creativity. A first
experimental analysis can be found in separate papers [53,54].
|
1112.5314
|
One-Bit Quantizers for Fading Channels
|
cs.IT math.IT
|
We study channel capacity when a one-bit quantizer is employed at the output
of the discrete-time average-power-limited Rayleigh-fading channel. We focus on
the low signal-to-noise ratio regime, where communication at very low spectral
efficiencies takes place, as in Spread Spectrum and Ultra-Wideband
communications. We demonstrate that, in this regime, the best one-bit quantizer
does not reduce the asymptotic capacity of the coherent channel, but it does
reduce that of the noncoherent channel.
|
1112.5355
|
2P-Med: Building a Personalization Platform for Mediation Systems
|
cs.IR
|
Nowadays, with the increasing number of integrated data sources, there is a
real trend to personalize mediation systems to improve user satisfaction. To
make these systems user sensitive, we propose a personalization platform called
2P-Med. 2P-Med allows personalizing any mediation system used in any domain
following a cyclic process. The process includes building and managing adequate
user profiles and sources profiles, content and quality matching, source
selection, adapting the mediator responses to user preferences and handling
user feedbacks. In this paper, we describe 2P-Med architecture and highlight
its main functionalities. We also illustrate the operation of the platform
through personalizing source selection in a travel planning assistant.
|
1112.5370
|
Enhancing Support for Knowledge Works: A relatively unexplored vista of
computing research
|
cs.AI cs.HC
|
Let us envision a new class of IT systems, the "Support Systems for Knowledge
Works" or SSKW. An SSKW can be defined as a system built for providing
comprehensive support to human knowledge-workers while performing instances of
complex knowledge-works of a particular type within a particular domain of professional activities. To get an idea of what an SSKW-enabled work environment can be like, let us look into a hypothetical scenario that depicts the
interaction between a physician and a patient-care SSKW during the activity of
diagnosing a patient.
|
1112.5381
|
Improving the Efficiency of Approximate Inference for Probabilistic
Logical Models by means of Program Specialization
|
cs.AI
|
We consider the task of performing probabilistic inference with probabilistic
logical models. Many algorithms for approximate inference with such models are
based on sampling. From a logic programming perspective, sampling boils down to
repeatedly calling the same queries on a knowledge base composed of a static
part and a dynamic part. The larger the static part, the more redundancy there
is in these repeated calls. This is problematic since inefficient sampling
yields poor approximations.
We show how to apply logic program specialization to make sampling-based
inference more efficient. We develop an algorithm that specializes the
definitions of the query predicates with respect to the static part of the
knowledge base. In experiments on real-world data we obtain speedups of up to
an order of magnitude, and these speedups grow with the data-size.
|
1112.5404
|
Similarity-based Learning via Data Driven Embeddings
|
cs.LG stat.ML
|
We consider the problem of classification using similarity/distance functions
over data. Specifically, we propose a framework for defining the goodness of a
(dis)similarity function with respect to a given learning task and propose
algorithms that have guaranteed generalization properties when working with
such good functions. Our framework unifies and generalizes the frameworks
proposed by [Balcan-Blum ICML 2006] and [Wang et al ICML 2007]. An attractive
feature of our framework is its adaptability to data - we do not promote a
fixed notion of goodness but rather let data dictate it. We show, by giving theoretical guarantees, that the goodness criterion best suited to a problem can itself be learned, which makes our approach applicable to a variety of domains
and problems. We propose a landmarking-based approach to obtaining a classifier
from such learned goodness criteria. We then provide a novel diversity based
heuristic to perform task-driven selection of landmark points instead of random
selection. We demonstrate the effectiveness of our goodness criteria learning
method as well as the landmark selection heuristic on a variety of
similarity-based learning datasets and benchmark UCI datasets on which our
method consistently outperforms existing approaches by a significant margin.
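A minimal sketch of the landmarking step, assuming a Gaussian similarity and randomly chosen landmarks; the paper's diversity-based, task-driven selection would replace the random choice:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def landmark_features(X, landmarks, sim):
    """Embed each point as its vector of similarities to the landmark points."""
    return np.array([[sim(x, l) for l in landmarks] for x in X])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                  # toy labels
sim = lambda a, b: np.exp(-np.linalg.norm(a - b) ** 2)   # assumed similarity
landmarks = X[rng.choice(len(X), size=20, replace=False)]
clf = LogisticRegression().fit(landmark_features(X, landmarks, sim), y)
print(clf.score(landmark_features(X, landmarks, sim), y))
```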
|
1112.5407
|
Alternating proximal gradient method for nonnegative matrix
factorization
|
cs.IT math.IT math.OC
|
Nonnegative matrix factorization has been widely applied in face recognition,
text mining, as well as spectral analysis. This paper proposes an alternating
proximal gradient method for solving this problem. With a uniformly positive
lower bound assumption on the iterates, any limit point can be proved to
satisfy the first-order optimality conditions. A Nesterov-type extrapolation
technique is then applied to accelerate the algorithm. Though this technique was originally developed for convex programs, it turns out to work very well for the non-convex nonnegative matrix factorization problem. Extensive numerical experiments illustrate the efficiency of the alternating proximal gradient method and the acceleration technique. On real data tests in particular, the accelerated method proves markedly superior to state-of-the-art algorithms in speed, with comparable solution quality.
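A compact sketch of the approach on the objective min ||M - XY||_F^2 / 2 over X, Y >= 0, with Nesterov-type extrapolation and block step sizes taken from the Lipschitz constants of the partial gradients; safeguards and stopping rules are omitted:
```python
import numpy as np

def nmf_apg(M, r, iters=200, rng=np.random.default_rng(0)):
    """Alternating projected-gradient NMF with Nesterov-type extrapolation."""
    m, n = M.shape
    X, Y = rng.random((m, r)), rng.random((r, n))
    Xp, Yp, t = X.copy(), Y.copy(), 1.0
    for _ in range(iters):
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        w = (t - 1) / t_new
        Xe, Ye = X + w * (X - Xp), Y + w * (Y - Yp)   # extrapolated points
        Xp, Yp = X.copy(), Y.copy()
        Lx = np.linalg.norm(Y @ Y.T, 2)               # Lipschitz const. of grad_X
        X = np.maximum(Xe - ((Xe @ Y - M) @ Y.T) / Lx, 0.0)
        Ly = np.linalg.norm(X.T @ X, 2)               # Lipschitz const. of grad_Y
        Y = np.maximum(Ye - (X.T @ (X @ Ye - M)) / Ly, 0.0)
        t = t_new
    return X, Y

M = np.abs(np.random.default_rng(1).normal(size=(60, 40)))
X, Y = nmf_apg(M, r=5)
print(np.linalg.norm(M - X @ Y) / np.linalg.norm(M))  # relative fit
```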
|
1112.5424
|
Quantum Control Experiments as a Testbed for Evolutionary
Multi-Objective Algorithms
|
cs.NE math-ph math.MP quant-ph
|
Experimental multi-objective Quantum Control is an emerging topic within the
broad physics and chemistry applications domain of controlling quantum
phenomena. This realm offers cutting edge ultrafast laser laboratory
applications, which pose multiple objectives, noise, and possibly constraints
on the high-dimensional search. In this study we introduce the topic of
Multi-Observable Quantum Control (MOQC), and consider specific systems to be
Pareto optimized subject to uncertainty, either experimentally or by means of
simulated systems. The latter include a family of mathematical test-functions
with a practical link to MOQC experiments, which are introduced here for the
first time. We investigate the behavior of the multi-objective version of the
Covariance Matrix Adaptation Evolution Strategy (MO-CMA-ES) and assess its
performance on computer simulations as well as on laboratory closed-loop
experiments. Overall, we propose a comprehensive study on experimental
evolutionary Pareto optimization in high-dimensional continuous domains, draw
some practical conclusions concerning the impact of fitness disturbance on
algorithmic behavior, and raise several theoretical issues in the broad
evolutionary multi-objective context.
|
1112.5441
|
Finding Density Functionals with Machine Learning
|
physics.comp-ph cs.LG physics.chem-ph stat.ML
|
Machine learning is used to approximate density functionals. For the model
problem of the kinetic energy of non-interacting fermions in 1d, mean absolute
errors below 1 kcal/mol on test densities similar to the training set are
reached with fewer than 100 training densities. A predictor identifies if a
test density is within the interpolation region. Via principal component
analysis, a projected functional derivative finds highly accurate
self-consistent densities. Challenges for application of our method to real
electronic structure problems are discussed.
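As an illustration of the regression step only, a sketch that fits kernel ridge regression from discretized densities to energies; the densities and target energies below are placeholders, not the non-interacting fermion data used in the paper:
```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
densities = rng.random((100, 64))              # placeholder discretized n(x)
energies = densities.sum(axis=1) ** (5 / 3)    # placeholder stand-in for T[n]
model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=0.1)
model.fit(densities[:80], energies[:80])
mae = np.abs(model.predict(densities[80:]) - energies[80:]).mean()
print(mae)   # out-of-sample mean absolute error on the placeholder data
```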
|
1112.5493
|
Critical Data Compression
|
cs.IT cs.AI cs.MM math.IT
|
A new approach to data compression is developed and applied to multimedia
content. This method separates messages into components suitable for both
lossless coding and 'lossy' or statistical coding techniques, compressing
complex objects by separately encoding signals and noise. This is demonstrated
by compressing the most significant bits of data exactly, since they are
typically redundant and compressible, and either fitting a maximally likely
noise function to the residual bits or compressing them using lossy methods.
Upon decompression, the significant bits are decoded and added to a noise
function, whether sampled from a noise model or decompressed from a lossy code.
This results in compressed data similar to the original. For many test images,
a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless
coding produces less mean-squared error than an equal length of JPEG2000.
Computer-generated images typically compress better using this method than
through direct lossy coding, as do many black and white photographs and most
color photographs at sufficiently high quality levels. Examples applying the
method to audio and video coding are also demonstrated. Since two-part codes
are efficient for both periodic and chaotic data, concatenations of roughly
similar objects may be encoded efficiently, which leads to improved inference.
Applications to artificial intelligence are demonstrated, showing that signals
using an economical lossless code have a critical level of redundancy which
leads to better description-based inference than signals which encode either
insufficient data or too much detail.
|
1112.5505
|
A Study on Using Uncertain Time Series Matching Algorithms in MapReduce
Applications
|
cs.DC cs.AI cs.LG cs.PF
|
In this paper, we study CPU utilization time patterns of several Map-Reduce
applications. After extracting running patterns of several applications, the
patterns with their statistical information are saved in a reference database
to be later used to tweak system parameters to efficiently execute unknown
applications in the future. To achieve this goal, CPU utilization patterns of new applications along with their statistical information are compared with the
already known ones in the reference database to find/predict their most
probable execution patterns. Because of the different pattern lengths, Dynamic Time Warping (DTW) is utilized for such comparison; a statistical analysis is
then applied to DTWs' outcomes to select the most suitable candidates.
Moreover, under a hypothesis, another algorithm is proposed to classify
applications under similar CPU utilization patterns. Three widely used text
processing applications (WordCount, Distributed Grep, and Terasort) and another
application (Exim Mainlog parsing) are used to evaluate our hypothesis in
tweaking system parameters in executing similar applications. Results were very promising and showed the effectiveness of our approach on a 5-node Map-Reduce platform.
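Since DTW is central to the comparison step, a plain dynamic-programming sketch of it (quadratic time, no windowing constraint):
```python
import numpy as np

def dtw(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between two CPU
    utilization time series of possibly different lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# compare a new application's pattern against each reference pattern and
# pick the closest known application (the reference DB is hypothetical)
print(dtw([1, 2, 3, 2, 1], [1, 1, 2, 3, 3, 2, 1]))
```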
|
1112.5534
|
Dynamics of competing ideas in complex social systems
|
physics.soc-ph cs.SI math.DS
|
Individuals accepting an idea may intentionally or unintentionally impose
influences in a certain neighborhood area, making other individuals within the area less likely to accept, or even prohibiting them from accepting, other competing ideas. Depending
on whether such influences strictly prohibit neighborhood individuals from
accepting other ideas or not, we classify them into exclusive and non-exclusive
influences, respectively. Our study reveals for the first time the rich and
complex dynamics of two competing ideas with neighborhood influences in
scale-free social networks: depending on whether they have exclusive or
non-exclusive influences, the final state varies from multiple coexistence to
founder control to exclusion, with different sizes of population accepting each
of the ideas respectively. Such results provide insights helpful for better
understanding the spread (and the control of spread) of ideas in human society.
|
1112.5557
|
Competitive Ratio Analysis of Online Algorithms to Minimize Data
Transmission Time in Energy Harvesting Communication System
|
cs.IT math.IT
|
We consider the optimal online packet scheduling problem in a single-user
energy harvesting wireless communication system, where energy is harvested from
natural renewable sources, making future energy arrivals instants and amounts
random in nature. The most general case of arbitrary energy arrivals is considered, where neither the future energy arrival instants and amounts nor their distribution are known. The problem considered is to adaptively change the
transmission rate according to the causal energy arrival information, such that
the time by which all packets are delivered is minimized. We assume that all
bits have arrived and are ready at the source before the transmission begins.
For a minimization problem, the utility of an online algorithm is tested by finding its competitive ratio, or competitiveness, defined to be the maximum, over all input sequences, of the ratio of the cost incurred by the online algorithm to that of the optimal offline algorithm. We derive lower and upper bounds
on the competitive ratio of any online algorithm to minimize the total
transmission time in an energy harvesting system. The upper bound is obtained
using a `lazy' transmission policy that chooses its transmission power to
minimize the transmission time assuming that no further energy arrivals are
going to occur in future. The lazy transmission policy is shown to be strictly
two-competitive. We also derive an adversarial lower bound showing that the competitive ratio of any online algorithm is at least 1.325.
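One operational reading of the lazy policy, as a hedged sketch with normalized bandwidth and rate log2(1+p); on each energy arrival the transmitter would recompute this power for the remaining bits:
```python
import math

def lazy_power(bits, energy):
    """Pick the constant transmit power p that exhausts the available energy
    exactly when the remaining bits finish, assuming no further arrivals:
    solve p * bits / log2(1 + p) = energy by bisection (the left-hand side
    increases in p). Requires energy > bits * ln(2), the p -> 0 limit."""
    f = lambda p: p * bits / math.log2(1 + p) - energy
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return lo

print(lazy_power(100.0, 150.0))  # hypothetical bit and energy units
```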
|
1112.5625
|
Complex network classification using partially self-avoiding
deterministic walks
|
physics.data-an cs.SI physics.soc-ph
|
Complex networks have attracted increasing interest from various fields of
science. It has been demonstrated that each complex network model presents
specific topological structures which characterize its connectivity and
dynamics. Complex network classification relies on the use of representative
measurements that model topological structures. Although there are a large
number of measurements, most of them are correlated. To overcome this
limitation, this paper presents a new measurement for complex network
classification based on partially self-avoiding walks. We validate the
measurement on a data set composed of 40,000 complex networks of four
well-known models. Our results indicate that the proposed measurement improves
correct classification of networks compared to the traditional ones.
|
1112.5627
|
Minimax Rates for Homology Inference
|
stat.ML cs.LG
|
Often, high dimensional data lie close to a low-dimensional submanifold and
it is of interest to understand the geometry of these submanifolds. The
homology groups of a manifold are important topological invariants that provide
an algebraic summary of the manifold. These groups contain rich topological
information, for instance, about the connected components, holes, tunnels and
sometimes the dimension of the manifold. In this paper, we consider the
statistical problem of estimating the homology of a manifold from noisy samples
under several different noise models. We derive upper and lower bounds on the
minimax risk for this problem. Our upper bounds are based on estimators which
are constructed from a union of balls of appropriate radius around carefully
selected points. In each case we establish complementary lower bounds using Le
Cam's lemma.
|
1112.5629
|
High-Rank Matrix Completion and Subspace Clustering with Missing Data
|
cs.IT cs.LG math.IT stat.ML
|
This paper considers the problem of completing a matrix with many missing
entries under the assumption that the columns of the matrix belong to a union
of multiple low-rank subspaces. This generalizes the standard low-rank matrix
completion problem to situations in which the matrix rank can be quite high or
even full rank. Since the columns belong to a union of subspaces, this problem
may also be viewed as a missing-data version of the subspace clustering
problem. Let X be an n x N matrix whose (complete) columns lie in a union of at
most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of
the paper shows that under mild assumptions each column of X can be perfectly
recovered with high probability from an incomplete version so long as at least
CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant
depending on the usual incoherence conditions, the geometrical arrangement of
subspaces, and the distribution of columns over the subspaces. The result is
illustrated with numerical experiments and an application to Internet distance
matrix completion and topology identification.
|
1112.5630
|
A Theoretical Analysis of Authentication, Privacy and Reusability Across
Secure Biometric Systems
|
cs.IT cs.CR math.IT
|
We present a theoretical framework for the analysis of privacy and security
tradeoffs in secure biometric authentication systems. We use this framework to
conduct a comparative information-theoretic analysis of two biometric systems
that are based on linear error correction codes, namely fuzzy commitment and
secure sketches. We derive upper bounds for the probability of false rejection
($P_{FR}$) and false acceptance ($P_{FA}$) for these systems. We use mutual
information to quantify the information leaked about a user's biometric
identity, in the scenario where one or multiple biometric enrollments of the
user are fully or partially compromised. We also quantify the probability of
successful attack ($P_{SA}$) based on the compromised information. Our analysis
reveals that fuzzy commitment and secure sketch systems have identical $P_{FR},
P_{FA}, P_{SA}$ and information leakage, but secure sketch systems have lower
storage requirements. We analyze both single-factor (keyless) and two-factor
(key-based) variants of secure biometrics, and consider the most general
scenarios in which a single user may provide noisy biometric enrollments at
several access control devices, some of which may be subsequently compromised
by an attacker. Our analysis highlights the revocability and reusability
properties of key-based systems and exposes a subtle design tradeoff between
reducing information leakage from compromised systems and preventing successful
attacks on systems whose data have not been compromised.
|
1112.5638
|
Discretization of Parametrizable Signal Manifolds
|
cs.CV
|
Transformation-invariant analysis of signals often requires the computation
of the distance from a test pattern to a transformation manifold. In
particular, the estimation of the distances between a transformed query signal
and several transformation manifolds representing different classes provides
essential information for the classification of the signal. In many
applications the computation of the exact distance to the manifold is costly,
whereas an efficient practical solution is the approximation of the manifold
distance with the aid of a manifold grid. In this paper, we consider a setting
with transformation manifolds of known parameterization. We first present an
algorithm for the selection of samples from a single manifold that minimizes the average error in the manifold distance estimation. Then we propose
a method for the joint discretization of multiple manifolds that represent
different signal classes, where we optimize the transformation-invariant
classification accuracy yielded by the discrete manifold representation.
Experimental results show that sampling each manifold individually by
minimizing the manifold distance estimation error outperforms baseline sampling
solutions with respect to registration and classification accuracy. Performing
an additional joint optimization on all samples improves the classification
performance further. Moreover, given a fixed total number of samples to be
selected from all manifolds, an asymmetric distribution of samples to different
manifolds depending on their geometric structures may also increase the
classification accuracy in comparison with the equal distribution of samples.
|
1112.5640
|
Learning Smooth Pattern Transformation Manifolds
|
cs.CV
|
Manifold models provide low-dimensional representations that are useful for
processing and analyzing data in a transformation-invariant way. In this paper,
we study the problem of learning smooth pattern transformation manifolds from
image sets that represent observations of geometrically transformed signals. In
order to construct a manifold, we build a representative pattern whose
transformations accurately fit various input images. We examine two objectives
of the manifold building problem, namely, approximation and classification. For
the approximation problem, we propose a greedy method that constructs a
representative pattern by selecting analytic atoms from a continuous dictionary
manifold. We present a DC (Difference-of-Convex) optimization scheme that is
applicable to a wide range of transformation and dictionary models, and
demonstrate its application to transformation manifolds generated by rotation,
translation and anisotropic scaling of a reference pattern. Then, we generalize
this approach to a setting with multiple transformation manifolds, where each
manifold represents a different class of signals. We present an iterative
multiple manifold building algorithm such that the classification accuracy is
promoted in the learning of the representative patterns. Experimental results
suggest that the proposed methods yield high accuracy in the approximation and
classification of data compared to some reference methods, while the invariance
to geometric transformations is achieved due to the transformation manifold
model.
|
1112.5670
|
Residual, restarting and Richardson iteration for the matrix
exponential, revised
|
math.NA cs.CE physics.comp-ph
|
A well-known problem in computing some matrix functions iteratively is the
lack of a clear, commonly accepted residual notion. An important matrix
function for which this is the case is the matrix exponential. Suppose the
matrix exponential of a given matrix times a given vector has to be computed.
We develop the approach of Druskin, Greenbaum and Knizhnerman (1998) and
interpret the sought-after vector as the value of a vector function satisfying
the linear system of ordinary differential equations (ODE) whose coefficients
form the given matrix. The residual is then defined with respect to the
initial-value problem for this ODE system. The residual introduced in this way
can be seen as a backward error. We show how the residual can be computed
efficiently within several iterative methods for the matrix exponential. This
completely resolves the question of reliable stopping criteria for these
methods. Further, we show that the residual concept can be used to construct
new residual-based iterative methods. In particular, a variant of the
Richardson method for the new residual appears to provide an efficient way to
restart Krylov subspace methods for evaluating the matrix exponential.
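To make the residual notion concrete, a small sketch for a truncated Taylor approximation y_m(t) of exp(tA)v: since y_m'(t) = A y_{m-1}(t), the ODE residual r(t) = A y_m(t) - y_m'(t) collapses to t^m A^{m+1} v / m! and can be monitored without knowing the exact solution; Taylor is only a stand-in here for the Krylov and Chebyshev methods discussed above:
```python
import numpy as np

rng = np.random.default_rng(0)
n, m, t = 50, 20, 0.5
A = rng.normal(size=(n, n)) / np.sqrt(n)
v = rng.normal(size=n)
term, y = v.copy(), v.copy()
for k in range(1, m + 1):
    term = t * (A @ term) / k        # (tA)^k v / k!
    y += term                        # partial sum y_m(t)
residual = A @ term                  # r(t) = t^m A^{m+1} v / m!
print(np.linalg.norm(residual))      # stopping criterion: small => accept y
```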
|
1112.5683
|
Epidemic Spreading in Weighted Networks: An Edge-Based Mean-Field
Solution
|
physics.soc-ph cs.SI physics.data-an
|
Weight distribution largely impacts the epidemic spreading taking place on
top of networks. This paper studies a susceptible-infected-susceptible model on
regular random networks with different kinds of weight distributions.
Simulation results show that a more homogeneous weight distribution leads to
higher epidemic prevalence, which, unfortunately, could not be captured by the
traditional mean-field approximation. This paper gives an edge-based mean-field
solution for general weight distribution, which can quantitatively reproduce
the simulation results. This method could find its applications in
characterizing the non-equilibrium steady states of dynamical processes on
weighted networks.
|
1112.5716
|
A Sparsity-Aware Adaptive Algorithm for Distributed Learning
|
cs.IT math.IT
|
In this paper, a sparsity-aware adaptive algorithm for distributed learning
in diffusion networks is developed. The algorithm follows the set-theoretic
estimation rationale. At each time instance and at each node of the network, a
closed convex set, known as property set, is constructed based on the received
measurements; this defines the region in which the solution is searched for. In
this paper, the property sets take the form of hyperslabs. The goal is to find
a point that belongs to the intersection of these hyperslabs. To this end,
sparsity-encouraging variable metric projections onto the hyperslabs have been adopted. Moreover, sparsity is also imposed by employing variable metric projections onto weighted $\ell_1$ balls. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys monotonicity,
asymptotic optimality and strong convergence to a point that lies in the
consensus subspace. Finally, numerical examples verify the validity of the
proposed scheme, compared to other algorithms, which have been developed in the
context of sparse adaptive learning.
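For concreteness, a sketch of the metric projection onto a single hyperslab property set built from a measurement pair (a, b) with tolerance eps; the variable-metric weighting and the weighted l1-ball projection of the full algorithm are omitted:
```python
import numpy as np

def project_hyperslab(x, a, b, eps):
    """Euclidean projection of x onto the hyperslab {z : |a.z - b| <= eps};
    the algorithm's variable-metric version reweights this projection."""
    r = float(a @ x) - b
    if r > eps:
        return x - (r - eps) * a / float(a @ a)
    if r < -eps:
        return x - (r + eps) * a / float(a @ a)
    return x  # x already lies in the property set
```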
|
1112.5745
|
Bayesian Active Learning for Classification and Preference Learning
|
stat.ML cs.LG
|
Information theoretic active learning has been widely studied for
probabilistic models. For simple regression an optimal myopic policy is easily
tractable. However, for other tasks and with more complex models, such as
classification with nonparametric models, the optimal solution is harder to
compute. Current approaches make approximations to achieve tractability. We
propose an approach that expresses information gain in terms of predictive
entropies, and apply this method to the Gaussian Process Classifier (GPC). Our
approach makes minimal approximations to the full information theoretic
objective. Our experimental performance compares favourably to many popular
active learning algorithms, and has equal or lower computational complexity. We
compare well to decision theoretic approaches also, which are privy to more
information and require much more computational time. Secondly, by developing
further a reformulation of binary preference learning to a classification
problem, we extend our algorithm to Gaussian Process preference learning.
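A Monte-Carlo sketch of the predictive-entropy formulation for a binary GP classifier: at a candidate input with latent posterior N(mu, var), the information gain is the entropy of the averaged prediction minus the average entropy over sampled latents (the paper derives analytic approximations rather than sampling):
```python
import numpy as np

def bald_score(mu, var, n_samples=2000, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of the information gain at one candidate input."""
    f = rng.normal(mu, np.sqrt(var), size=n_samples)      # latent samples
    p = np.clip(1.0 / (1.0 + np.exp(-f)), 1e-12, 1 - 1e-12)
    H = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)
    return H(p.mean()) - H(p).mean()

print(bald_score(0.0, 4.0))   # uncertain latent at the boundary: large gain
print(bald_score(0.0, 1e-4))  # inherently noisy but certain latent: near zero
```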
|
1112.5756
|
Relay-Assisted Interference Channel: Degrees of Freedom
|
cs.IT math.IT
|
This paper investigates the degrees of freedom of the interference channel in
the presence of a dedicated MIMO relay. The relay is used to manage the
interference at the receivers. It is assumed that all nodes including the relay
have channel state information only for their own links and that the relay has
M (greater than or equal to K) antennas in a K-user network. We pose the
question: what is the benefit of exploiting the direct links from the sources to destinations compared to a simpler two-hop strategy? To answer this question,
we first establish the degrees of freedom of the interference channel with a
MIMO relay, showing that a K-pair network with a MIMO relay has K/2 degrees of
freedom. Thus, appropriate signaling in a two-hop scenario captures the degrees
of freedom without the need for the direct links. We then consider more
sophisticated encoding strategies in search of other ways to exploit the direct
links. Using a number of hybrid encoding strategies, we obtain non-asymptotic
achievable sum-rates. We investigate the case where the relay (unlike other
nodes) has access to abundant power, showing that when sources have power P and
the relay is allowed power proportional to O(P^2), the full degrees of freedom
K are available to the network.
|
1112.5762
|
Characterizing Continuous Time Random Walks on Time Varying Graphs
|
cs.SI physics.soc-ph
|
In this paper we study the behavior of a continuous time random walk (CTRW)
on a stationary and ergodic time varying dynamic graph. We establish conditions
under which the CTRW is a stationary and ergodic process. In general, the
stationary distribution of the walker depends on the walker rate and is
difficult to characterize. However, we characterize the stationary distribution
in the following cases: i) the walker rate is significantly larger or smaller
than the rate in which the graph changes (time-scale separation), ii) the
walker rate is proportional to the degree of the node that it resides on
(coupled dynamics), and iii) the degrees of nodes belonging to the same
connected component are identical (structural constraints). We provide examples
that illustrate our theoretical findings.
|
1112.5767
|
Optimal Resource Allocation and Relay Selection in Bandwidth Exchange
Based Cooperative Forwarding
|
cs.IT math.IT
|
In this paper, we investigate joint optimal relay selection and resource
allocation under bandwidth exchange (BE) enabled incentivized cooperative
forwarding in wireless networks. We consider an autonomous network where N
nodes transmit data in the uplink to an access point (AP) / base station (BS).
We consider the scenario where each node gets an initial amount (equal, optimal
based on direct path or arbitrary) of bandwidth, and uses this bandwidth as a
flexible incentive for two hop relaying. We focus on alpha-fair network utility
maximization (NUM) and outage reduction in this environment. Our contribution
is two-fold. First, we propose an incentivized forwarding based resource
allocation algorithm which maximizes the global utility while preserving the
initial utility of each cooperative node. Second, defining the link weight of
each relay pair as the utility gain due to cooperation (over noncooperation),
we show that the optimal relay selection in alpha-fair NUM reduces to the
maximum weighted matching (MWM) problem in a non-bipartite graph. Numerical
results show that the proposed algorithms provide a 20-25% gain in spectral efficiency and a 90-98% reduction in outage probability.
|
1112.5771
|
On B-spline framelets derived from the unitary extension principle
|
math.FA cs.CV cs.IT math.IT
|
Spline wavelet tight frames of Ron-Shen have been used widely in frame based
image analysis and restorations. However, except for the tight frame property
and the approximation order of the truncated series, there are few other
properties of this family of spline wavelet tight frames to be known. This
paper is to present a few new properties of this family that will provide
further understanding of it and, hopefully, give some indications why it is
efficient in image analysis and restorations. In particular, we present a
recurrence formula of computing generators of higher order spline wavelet tight
frames from the lower order ones. We also represent each generator of spline
wavelet tight frames as certain order of derivative of some univariate box
spline. With this, we further show that each generator of sufficiently high
order spline wavelet tight frames is close to a derivative of the right order of a
properly scaled Gaussian function. This leads to the result that the wavelet
system generated by finitely many consecutive derivatives of a properly
scaled Gaussian function forms a frame whose frame bounds can be almost tight.
|
1112.5895
|
Online Adaptive Statistical Compressed Sensing of Gaussian Mixture
Models
|
cs.CV
|
A framework of online adaptive statistical compressed sensing is introduced
for signals following a mixture model. The scheme first uses non-adaptive
measurements, from which an online decoding scheme estimates the model
selection. As soon as a candidate model has been selected, an optimal sensing
scheme for the selected model continues to apply. The final signal
reconstruction is calculated from the ensemble of both the non-adaptive and the
adaptive measurements. For signals generated from a Gaussian mixture model, the
online adaptive sensing algorithm is given and its performance is analyzed. On
both synthetic and real image data, the proposed adaptive scheme considerably
reduces the average reconstruction error with respect to standard statistical
compressed sensing that uses fully random measurements, at a marginally
increased computational complexity.
|
1112.5906
|
Power-law distribution functions derived from maximum entropy and a
symmetry relationship
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an
|
Power-law distributions are common, particularly in social physics. Here, we
explore whether power-laws might arise as a consequence of a general
variational principle for stochastic processes. We describe communities of
'social particles', where the cost of adding a particle to the community is
shared equally between the particle joining the cluster and the particles that
are already members of the cluster. Power-law probability distributions of
community sizes arise as a natural consequence of the maximization of entropy,
subject to this 'equal cost sharing' rule. We also explore a generalization in
which there is unequal sharing of the costs of joining a community.
Distributions change smoothly from exponential to power-law as a function of a
sharing-inequality quantity. This work gives an interpretation of power-law
distributions in terms of shared costs.
|
1112.5908
|
Query Answering under Matching Dependencies for Data Cleaning:
Complexity and Algorithms
|
cs.DB cs.LO
|
Matching dependencies (MDs) have been recently introduced as declarative
rules for entity resolution (ER), i.e. for identifying and resolving duplicates
in a relational instance $D$. A set of MDs can be used as the basis for a
possibly non-deterministic mechanism that computes a duplicate-free instance
from $D$. The possible results of this process are the clean, "minimally
resolved instances" (MRIs). There might be several MRIs for $D$, and the
"resolved answers" to a query are those that are shared by all the MRIs. We
investigate the problem of computing resolved answers. We look at various sets
of MDs, developing syntactic criteria for determining (in)tractability of the
resolved answer problem, including a dichotomy result. For some tractable
classes of MDs and conjunctive queries, we present a query rewriting
methodology that can be used to retrieve the resolved answers. We also
investigate connections with "consistent query answering", deriving further
tractability results for MD-based ER.
|
1112.5945
|
Controlling edge dynamics in complex networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The interaction of distinct units in physical, social, biological and
technological systems naturally gives rise to complex network structures.
Networks have constantly been in the focus of research for the last decade,
with considerable advances in the description of their structural and dynamical
properties. However, much less effort has been devoted to studying the
controllability of the dynamics taking place on them. Here we introduce and
evaluate a dynamical process defined on the edges of a network, and demonstrate
that the controllability properties of this process significantly differ from
simple nodal dynamics. Evaluation of real-world networks indicates that most of
them are more controllable than their randomized counterparts. We also find
that transcriptional regulatory networks are particularly easy to control.
Analytic calculations show that networks with scale-free degree distributions
have better controllability properties than uncorrelated networks, and
positively correlated in- and out-degrees enhance the controllability of the
proposed dynamics.
|
1112.5947
|
Random Context and Semi-Conditional Insertion-Deletion Systems
|
cs.FL cs.CC cs.CL cs.DM
|
In this article we introduce the operations of insertion and deletion working
in a random-context and semi-conditional manner. We show that the conditional use of rules strictly increases the computational power. In the case of semi-conditional insertion-deletion systems, context-free insertion and deletion rules of one symbol are sufficient to achieve computational completeness. In
the random context case our results expose an asymmetry between the
computational power of insertion and deletion rules: systems of size $(2,0,0;
1,1,0)$ are computationally complete, while systems of size $(1,1,0;2,0,0)$
(and more generally of size $(1,1,0;p,1,1)$) are not. This is particularly
interesting because other control mechanisms like graph-control or matrix
control used together with insertion-deletion systems do not present such
asymmetry.
|
1112.5953
|
Secure Diversity-Multiplexing Tradeoff of Zero-Forcing Transmit Scheme
at Finite-SNR
|
cs.CR cs.IT math.IT
|
In this paper, we address the finite Signal-to-Noise Ratio (SNR)
Diversity-Multiplexing Tradeoff (DMT) of the Multiple Input Multiple Output
(MIMO) wiretap channel, where a Zero-Forcing (ZF) transmit scheme, that intends
to send the secret information in the orthogonal space of the eavesdropper
channel, is used. First, we introduce the secrecy multiplexing gain at
finite-SNR that generalizes the definition at high-SNR. Then, we provide upper
and lower bounds on the outage probability under secrecy constraint, from which
secrecy diversity gain estimates of ZF are derived. Through asymptotic
analysis, we show that the upper bound underestimates the secrecy diversity
gain, whereas the lower bound is tight at high-SNR, and thus its related
diversity gain estimate is equal to the actual asymptotic secrecy diversity
gain of the MIMO wiretap channel.
|
1112.5957
|
Using Measures for the Generation of Cyclic Association Rules
|
cs.DB
|
Online analytical processing (OLAP) does not provide any explanation of the correlations discovered in the data. Thus, the coupling of OLAP and data
mining, especially association rules, is considered as an efficient solution to
this problem. In this context, we mainly focus on a particular class of
association rules: cyclic association rules. These rules aim to discover
patterns that display regular variation over user-defined intervals.
Generally, the generated patterns do not take advantage of the specificities
of the multidimensional context, namely the consideration of measures and
their aggregations. In this paper, we introduce a novel method for extracting
cyclic association rules from measures, and we redefine the evaluation metrics
for association rule quality, drawing on the concept of temporal
summarizability of measures through the integration of appropriate aggregation
functions. To prove the usefulness of our approach, we conduct an
empirical study on a real data warehouse.
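
To make the notion of cyclicity concrete, here is a hedged toy sketch (hypothetical data, not the paper's method): a rule is taken to be cyclic with period p and offset o if it holds in every time unit o, o+p, o+2p, ...

    def cycles(holds, max_period):
        # Return (period, offset) pairs whose whole cycle is covered by
        # `holds`, a boolean list marking time units where the rule holds.
        n = len(holds)
        return [(p, o)
                for p in range(1, max_period + 1)
                for o in range(p)
                if all(holds[t] for t in range(o, n, p))]

    # A rule holding in every third time unit, starting from unit 1.
    holds = [False, True, False, False, True, False, False, True, False]
    print(cycles(holds, max_period=4))  # [(3, 1)]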
|
1112.5980
|
Search space analysis with Wang-Landau sampling and slow adaptive walks
|
cs.NE
|
Two complementary techniques for analyzing search spaces are proposed: (i) an
algorithm to detect search points with potential to be local optima; and (ii) a
slightly adjusted Wang-Landau sampling algorithm to explore larger search
spaces. The detection algorithm assumes that local optima are points which are
easier to reach and harder to leave by a slow adaptive walker. A slow adaptive
walker moves to a nearest fitter point. Thus, points whose outgoing step sizes
are large relative to their incoming step sizes are marked, via a local optima
score formula, as potential local optima points (PLOPs). Defining local optima in
these more general terms allows their detection within the closure of a subset
of a search space, and the sampling of a search space unshackled by a
particular move set. Tests are done with NK and HIFF problems to confirm that
PLOPs detected in the manner proposed retain characteristics of local optima,
and that the adjusted Wang-Landau samples are more representative of the search
space than samples produced by choosing points uniformly at random. While our
approach shows promise, more needs to be done to reduce its computational cost
so that it may pave the way toward analyzing larger search spaces of practical
interest.
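
A rough sketch of a slow adaptive walk, under assumptions the abstract leaves open (bit-string space, one-bit moves, "nearest fitter point" read as the fitter neighbour with the smallest fitness gain, and OneMax standing in for the NK/HIFF fitness):

    import random

    def onemax(bits):                  # stand-in fitness (the paper uses NK/HIFF)
        return sum(bits)

    def neighbours(bits):              # all one-bit flips
        for i in range(len(bits)):
            yield tuple(b ^ (j == i) for j, b in enumerate(bits))

    def slow_adaptive_walk(bits, fitness):
        while True:
            fitter = [n for n in neighbours(bits) if fitness(n) > fitness(bits)]
            if not fitter:
                return bits            # nothing fitter nearby: a PLOP candidate
            bits = min(fitter, key=fitness)  # smallest improving step: "slow"

    random.seed(0)
    start = tuple(random.randint(0, 1) for _ in range(12))
    print(slow_adaptive_walk(start, onemax))  # reaches the all-ones optimum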
|
1112.5995
|
On the Stability of Random Multiple Access with Stochastic Energy
Harvesting
|
cs.IT math.IT
|
In this paper, we consider the random access of nodes having energy
harvesting capability and a battery to store the harvested energy. Each node
attempts to transmit the head-of-line packet in the queue if its battery is
nonempty. The packet and energy arrivals into the queue and the battery are
each modeled as a discrete-time stochastic process. The main contribution of this
paper is the exact characterization of the stability region of the packet
queues given the energy harvesting rates when a pair of nodes are randomly
accessing a common channel having multipacket reception (MPR) capability. The
channel with MPR capability is a generalized form of the wireless channel
modeling which allows probabilistic receptions of the simultaneously
transmitted packets. The results obtained in this paper are fairly general, as
the cases with unlimited transmission energy, for both the collision channel
and the channel with MPR capability, can be derived from ours as special
cases. Furthermore, we study the impact of the finiteness of the batteries on
the achievable stability region.
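
The sketch below is a plain simulation with illustrative parameters, not the paper's exact characterization: two nodes with Bernoulli packet and energy arrivals contend on an MPR channel, and queue lengths that stay bounded over a long run hint at operation inside the stability region.

    import random

    random.seed(1)
    T = 100_000
    lam = (0.20, 0.15)         # packet arrival rates (assumed)
    delta = (0.5, 0.5)         # energy harvesting rates (assumed)
    p_tx = (0.6, 0.6)          # random-access transmission probabilities
    p_succ = {1: 0.9, 2: 0.4}  # MPR: success prob. vs. number of transmitters

    q = [0, 0]                 # packet queues
    b = [0, 0]                 # batteries (unbounded in this sketch)
    for _ in range(T):
        tx = [i for i in range(2)
              if q[i] > 0 and b[i] > 0 and random.random() < p_tx[i]]
        for i in tx:
            b[i] -= 1                           # one energy unit per attempt
            if random.random() < p_succ[len(tx)]:
                q[i] -= 1
        for i in range(2):
            q[i] += random.random() < lam[i]    # new packet arrival
            b[i] += random.random() < delta[i]  # harvested energy unit

    print("final queue lengths:", q)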
|
1112.5997
|
Multispectral Palmprint Recognition Using a Hybrid Feature
|
cs.CV
|
The personal identification problem has been a major field of research in
recent years. Biometrics-based technologies that exploit fingerprints, iris,
face, voice and palmprints have been at the center of attention as solutions
to this problem. Palmprints can be used instead of fingerprints, which are
among the earliest of these biometric technologies. A palm is covered with the same skin
as the fingertips but has a larger surface, giving us more information than the
fingertips. The major features of the palm are palm-lines, including principal
lines, wrinkles and ridges. Using these lines is one of the most popular
approaches towards solving the palmprint recognition problem. Another robust
feature is the wavelet energy of palms. In this paper we use a hybrid feature
that combines both. Moreover, multispectral analysis is applied to improve the
performance of the system. Finally, a minimum distance classifier is used to
match test images against the training samples. The
proposed algorithm has been tested on a well-known multispectral palmprint
dataset and achieved an average accuracy of 98.8\%.
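
As a hedged sketch of the wavelet-energy half of such a hybrid feature, paired with a minimum distance (nearest-centroid) classifier -- PyWavelets assumed, and random arrays standing in for palmprint regions of interest:

    import numpy as np
    import pywt

    def wavelet_energy(img, wavelet="db2", level=2):
        # Normalised energy of every wavelet subband: a texture descriptor.
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
        e = np.array([np.sum(np.square(band)) for band in bands])
        return e / e.sum()

    rng = np.random.default_rng(0)
    train = {c: [wavelet_energy(rng.random((64, 64))) for _ in range(5)]
             for c in ("subject_A", "subject_B")}       # hypothetical classes
    centroids = {c: np.mean(v, axis=0) for c, v in train.items()}

    test = wavelet_energy(rng.random((64, 64)))
    label = min(centroids, key=lambda c: np.linalg.norm(test - centroids[c]))
    print("matched to:", label)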
|
1112.6028
|
Entropy of stochastic blockmodel ensembles
|
cond-mat.stat-mech cs.SI physics.data-an physics.soc-ph
|
Stochastic blockmodels are generative network models where the vertices are
separated into discrete groups, and the probability of an edge existing between
two vertices is determined solely by their group membership. In this paper, we
derive expressions for the entropy of stochastic blockmodel ensembles. We
consider several ensemble variants, including the traditional model as well as
the newly introduced degree-corrected version [Karrer et al. Phys. Rev. E 83,
016107 (2011)], which imposes a degree sequence on the vertices, in addition to
the block structure. The imposed degree sequence is implemented both as "soft"
constraints, where only the expected degrees are imposed, and as "hard"
constraints, where they are required to be the same on all samples of the
ensemble. We also consider generalizations to multigraphs and directed graphs.
We illustrate one of many applications of this measure by directly deriving a
log-likelihood function from the entropy expression, and using it to infer
latent block structure in observed data. Due to the general nature of the
ensembles considered, the method works well for ensembles with intrinsic degree
correlations (i.e. with entropic origin) as well as extrinsic degree
correlations, which go beyond the block structure.
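
For illustration, the sketch below scores a block partition with the standard unnormalised log-likelihood of the (degree-corrected) blockmodel in the form popularised by Karrer and Newman; this is the familiar shape such entropy-derived objectives take, not code from the paper.

    import numpy as np
    import networkx as nx

    def sbm_loglike(G, blocks, degree_corrected=True):
        labels = sorted(set(blocks.values()))
        idx = {r: i for i, r in enumerate(labels)}
        B = len(labels)
        e = np.zeros((B, B))                   # edge counts between blocks
        for u, v in G.edges():
            r, s = idx[blocks[u]], idx[blocks[v]]
            e[r, s] += 1
            e[s, r] += 1
        if degree_corrected:
            k = e.sum(axis=1)                  # total degree per block
            denom = np.outer(k, k)
        else:
            n = np.bincount([idx[blocks[u]] for u in G], minlength=B)
            denom = np.outer(n, n)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = e * np.log(e / denom)      # e_rs = 0 contributes nothing
        return np.nansum(terms)

    G = nx.planted_partition_graph(2, 50, 0.2, 0.02, seed=0)
    true = {u: u // 50 for u in G}             # the planted block structure
    rand = {u: u % 2 for u in G}               # an arbitrary split
    print(sbm_loglike(G, true) > sbm_loglike(G, rand))  # True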
|
1112.6045
|
Comparing intermittency and network measurements of words and their
dependency on authorship
|
physics.soc-ph cs.CL cs.SI physics.data-an
|
Many features from texts and languages can now be inferred from statistical
analyses using concepts from complex networks and dynamical systems. In this
paper we quantify how topological properties of word co-occurrence networks and
intermittency (or burstiness) in word distribution depend on the style of
authors. Our database contains 40 books from 8 authors who lived in the 19th
and 20th centuries, for which the following network measurements were obtained:
clustering coefficient, average shortest path lengths, and betweenness. We
found that the two factors with the strongest dependency on the authors were the
skewness in the distribution of word intermittency and the average shortest
paths. Other factors such as the betweenness and the Zipf's law exponent show
only weak dependency on authorship. Also assessed was the contribution from
each measurement to authorship recognition using three machine learning
methods. The best performance, an accuracy of about 65%, was obtained by combining complex
network and intermittency features with the nearest neighbor algorithm. From a
detailed analysis of the interdependence of the various metrics it is concluded
that the methods used here are complementary for providing short- and
long-scale perspectives of texts, which are useful for applications such as
identification of topical words and information retrieval.
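
A short sketch of the co-occurrence measurements involved (toy text and networkx assumed; the study itself uses 40 full books):

    import networkx as nx

    text = ("the cat sat on the mat the dog sat on the rug "
            "the cat saw the dog and the dog saw the cat").split()

    G = nx.Graph()
    for w1, w2 in zip(text, text[1:]):     # adjacent words co-occur
        G.add_edge(w1, w2)

    print("clustering:", nx.average_clustering(G))
    print("avg shortest path:", nx.average_shortest_path_length(G))
    top = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
    print("highest betweenness:", top[:3])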
|