| id | title | categories | abstract |
|---|---|---|---|
1111.0907
|
Towards Analyzing Crossover Operators in Evolutionary Search via General
Markov Chain Switching Theorem
|
cs.NE
|
Evolutionary algorithms (EAs), which simulate the evolution process of natural
species, are used to solve optimization problems. Crossover (also called
recombination), which originated from simulating the chromosome-exchange
phenomena of zoogamy reproduction, is widely employed in EAs to generate
offspring solutions, and its effectiveness has been examined empirically in
applications. However, due to the irregularity of crossover operators and their
complicated interactions with mutation, crossover operators are hard to analyze
and thus have few theoretical results. Therefore, analyzing crossover not only
helps in understanding EAs, but also helps in developing novel techniques for
analyzing sophisticated metaheuristic algorithms.
In this paper, we derive the General Markov Chain Switching Theorem (GMCST)
to facilitate theoretical studies of crossover-enabled EAs. The theorem allows
us to analyze the running time of a sophisticated EA from an easy-to-analyze
EA. Using this tool, we analyze EAs with several crossover operators on the
LeadingOnes and OneMax problems, notably two well-studied problems for
mutation-only EAs but with few results for crossover-enabled EAs. We first
derive bounds on the running time of the (2+2)-EA with crossover operators;
then we study the running-time gap between the mutation-only (2+2)-EA and the
(2+2)-EA with crossover operators; finally, we develop strategies that apply
crossover operators only when necessary, which improve on both the mutation-only
and the crossover-all-the-time (2+2)-EA. The theoretical results are
verified by experiments.
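As a concrete illustration of the kind of algorithm analyzed here, the following is a minimal Python sketch of a (2+2)-EA with uniform crossover and bitwise mutation on OneMax. The crossover probability, mutation rate 1/n, and elitist selection rule are illustrative assumptions, not details taken from the paper:

```python
import random

def onemax(x):
    """Fitness: number of ones in the bit string."""
    return sum(x)

def two_plus_two_ea(n, generations, pc=0.5, seed=0):
    """Minimal (2+2)-EA sketch: uniform crossover with probability pc, then
    bitwise mutation with rate 1/n; the best 2 of parents+offspring survive.
    Parameter choices are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for _ in range(generations):
        p1, p2 = pop
        if rng.random() < pc:  # uniform crossover
            c1 = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            c2 = [b if rng.random() < 0.5 else a for a, b in zip(p1, p2)]
        else:
            c1, c2 = p1[:], p2[:]
        # bitwise mutation with rate 1/n
        c1 = [1 - bit if rng.random() < 1.0 / n else bit for bit in c1]
        c2 = [1 - bit if rng.random() < 1.0 / n else bit for bit in c2]
        pop = sorted([p1, p2, c1, c2], key=onemax, reverse=True)[:2]
    return max(onemax(x) for x in pop)

best = two_plus_two_ea(n=20, generations=2000)
```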
|
1111.0920
|
Extracting spatial information from networks with low-order eigenvectors
|
cs.SI physics.soc-ph
|
We consider the problem of inferring meaningful spatial information in
networks from incomplete information on the connection intensity between the
nodes of the network. We consider two spatially distributed networks: a
population migration flow network within the US, and a network of mobile phone
calls between cities in Belgium. For both networks we use the eigenvectors of
the Laplacian matrix constructed from the link intensities to obtain
informative visualizations and capture natural geographical subdivisions. We
observe that some low order eigenvectors localize very well and seem to reveal
small geographically cohesive regions that match remarkably well with political
and administrative boundaries. We discuss possible explanations for this
observation by describing diffusion maps and localized eigenfunctions. In
addition, we discuss a possible connection with the weighted graph cut problem,
and provide numerical evidence supporting the idea that lower order
eigenvectors point out local cuts in the network. However, we do not provide a
formal and rigorous justification for our observations.
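The Laplacian-eigenvector construction described above can be sketched in a few lines of Python. The toy link-intensity matrix below (two dense clusters joined by weak links) is an invented stand-in for the migration and phone-call data:

```python
import numpy as np

# Toy symmetric link-intensity matrix: two tightly connected 3-node clusters
# joined by two weak links (an invented stand-in for the migration/call data).
W = np.array([
    [0, 5, 4, 0, 0, 0],
    [5, 0, 6, 1, 0, 0],
    [4, 6, 0, 0, 1, 0],
    [0, 1, 0, 0, 5, 4],
    [0, 0, 1, 5, 0, 6],
    [0, 0, 0, 4, 6, 0],
], dtype=float)
D = np.diag(W.sum(axis=1))
L = D - W                              # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues; L is symmetric
fiedler = eigvecs[:, 1]                # a low-order eigenvector (Fiedler vector)
groups = fiedler > 0                   # its sign pattern reveals the two regions
```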
|
1111.0952
|
Computing a Nonnegative Matrix Factorization -- Provably
|
cs.DS cs.LG
|
In the Nonnegative Matrix Factorization (NMF) problem we are given an $n
\times m$ nonnegative matrix $M$ and an integer $r > 0$. Our goal is to express
$M$ as $A W$ where $A$ and $W$ are nonnegative matrices of size $n \times r$
and $r \times m$ respectively. In some applications, it makes sense to ask
instead for the product $AW$ to approximate $M$ -- i.e. (approximately)
minimize $\|M - AW\|_F$ where $\|\cdot\|_F$ denotes the Frobenius norm; we
refer to this as Approximate NMF. This problem has a rich history spanning
quantum mechanics, probability theory, data analysis, polyhedral combinatorics,
communication complexity, demography, chemometrics, etc. In the past decade NMF
has become enormously popular in machine learning, where $A$ and $W$ are
computed using a variety of local search heuristics. Vavasis proved that this
problem is NP-complete. We initiate a study of when this problem is solvable in
polynomial time:
1. We give a polynomial-time algorithm for exact and approximate NMF for
every constant $r$. Indeed NMF is most interesting in applications precisely
when $r$ is small.
2. We complement this with a hardness result: if exact NMF can be solved
in time $(nm)^{o(r)}$, then 3-SAT has a sub-exponential time algorithm. This rules
out substantial improvements to the above algorithm.
3. We give an algorithm that runs in time polynomial in $n$, $m$ and $r$
under the separability condition identified by Donoho and Stodden in 2003. The
algorithm may be practical since it is simple and noise tolerant (under benign
assumptions). Separability is believed to hold in many practical settings.
To the best of our knowledge, this last result is the first example of a
polynomial-time algorithm that provably works under a non-trivial condition on
the input and we believe that this will be an interesting and important
direction for future work.
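For context, the local-search heuristic that the abstract contrasts its provable algorithms with can be sketched as the classic Lee-Seung multiplicative updates for minimizing $\|M - AW\|_F$. This is not the paper's algorithm, only the standard baseline:

```python
import numpy as np

def nmf_multiplicative(M, r, iters=1000, seed=0):
    """Lee-Seung multiplicative updates: local search for nonnegative A (n x r)
    and W (r x m) that (approximately) minimize ||M - A W||_F. This is the
    heuristic baseline the abstract contrasts with, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    A = rng.random((n, r)) + 0.1
    W = rng.random((r, m)) + 0.1
    eps = 1e-12                      # avoid division by zero
    for _ in range(iters):
        W *= (A.T @ M) / (A.T @ A @ W + eps)
        A *= (M @ W.T) / (A @ W @ W.T + eps)
    return A, W

# An exactly rank-2 nonnegative matrix should be factored almost perfectly:
rng = np.random.default_rng(1)
M = rng.random((8, 2)) @ rng.random((2, 10))
A, W = nmf_multiplicative(M, r=2)
rel_err = np.linalg.norm(M - A @ W) / np.linalg.norm(M)
```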
|
1111.1014
|
Sparsity and Robustness in Face Recognition
|
cs.CV
|
This report concerns the use of techniques for sparse signal representation
and sparse error correction for automatic face recognition. Much of the recent
interest in these techniques comes from the paper "Robust Face Recognition via
Sparse Representation" by Wright et al. (2009), which showed how, under certain
technical conditions, one could cast the face recognition problem as one of
seeking a sparse representation of a given input face image in terms of a
"dictionary" of training images and images of individual pixels. In this
report, we have attempted to clarify some frequently encountered questions
about this work, particularly regarding the validity of using sparse
representation techniques for face recognition.
|
1111.1020
|
Stochastic Belief Propagation: A Low-Complexity Alternative to the
Sum-Product Algorithm
|
cs.IT math.IT stat.ML
|
The sum-product or belief propagation (BP) algorithm is a widely-used
message-passing algorithm for computing marginal distributions in graphical
models with discrete variables. At the core of the BP message updates, when
applied to a graphical model with pairwise interactions, lies a matrix-vector
product with complexity that is quadratic in the state dimension $d$, and
requires transmission of a $(d-1)$-dimensional vector of real numbers
(messages) to its neighbors. Since various applications involve very large
state dimensions, such computation and communication costs can be
prohibitive. In this paper, we propose a low-complexity variant of
BP, referred to as stochastic belief propagation (SBP). As suggested by the
name, it is an adaptively randomized version of the BP message updates in which
each node passes randomly chosen information to each of its neighbors. The SBP
message updates reduce the computational complexity (per iteration) from
quadratic to linear in $d$, without assuming any particular structure of the
potentials, and also reduce the communication complexity significantly,
requiring only $\log{d}$ bits transmission per edge. Moreover, we establish a
number of theoretical guarantees for the performance of SBP, showing that it
converges almost surely to the BP fixed point for any tree-structured graph,
and for graphs with cycles satisfying a contractivity condition. In addition,
for these graphical models, we provide non-asymptotic upper bounds on the
convergence rate, showing that the $\ell_{\infty}$ norm of the error vector
decays no slower than $O(1/\sqrt{t})$ with the number of iterations $t$ on
trees and the mean square error decays as $O(1/t)$ for general graphs. This
analysis shows that SBP can provably yield reductions in computational and
communication complexity for various classes of graphical models.
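The quadratic-versus-linear trade-off in the message update can be sketched as follows. This shows only the core idea of randomizing a matrix-vector product; the paper's actual SBP updates use step-size sequences and careful averaging not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
Psi = rng.random((d, d))            # pairwise edge potential (d x d)
incoming = rng.random(d)            # node potential times incoming messages
incoming /= incoming.sum()

# Exact BP message update: a matrix-vector product, O(d^2) per update.
m_exact = Psi @ incoming
m_exact /= m_exact.sum()

# Randomized update: sample column indices from `incoming` and average single
# columns of Psi, O(d) work per sample; the average converges to m_exact.
T = 20000
idx = rng.choice(d, size=T, p=incoming)
m_est = Psi[:, idx].sum(axis=1) / T
m_est /= m_est.sum()
err = float(np.max(np.abs(m_est - m_exact)))
```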
|
1111.1041
|
Accurate Prediction of Phase Transitions in Compressed Sensing via a
Connection to Minimax Denoising
|
cs.IT math.IT math.ST stat.TH
|
Compressed sensing posits that, within limits, one can undersample a sparse
signal and yet reconstruct it accurately. Knowing the precise limits to such
undersampling is important both for theory and practice. We present a formula
that characterizes the allowed undersampling of generalized sparse objects. The
formula applies to Approximate Message Passing (AMP) algorithms for compressed
sensing, which are here generalized to employ denoising operators besides the
traditional scalar soft thresholding denoiser. This paper gives several
examples including scalar denoisers not derived from convex penalization -- the
firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar
denoisers -- block thresholding, monotone regression, and total variation
minimization.
Let the variables $\epsilon = k/N$ and $\delta = n/N$ denote the generalized
sparsity and undersampling fractions for sampling the $k$-generalized-sparse
$N$-vector $x_0$ according to $y = Ax_0$. Here $A$ is an $n \times N$
measurement matrix whose entries are iid standard Gaussian. The formula states
that the phase transition curve $\delta = \delta(\epsilon)$ separating
successful from unsuccessful reconstruction of $x_0$ by AMP is given by
$\delta = M(\epsilon \mid \mathrm{Denoiser})$, where
$M(\epsilon \mid \mathrm{Denoiser})$ denotes the per-coordinate minimax mean
squared error (MSE) of the specified, optimally tuned denoiser in the directly
observed problem $y = x + z$. In short, the phase transition of a noiseless
undersampling problem is identical to the minimax MSE in a denoising problem.
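A minimal AMP iteration with the traditional scalar soft-thresholding denoiser (the baseline the abstract generalizes away from) might look like the following. The threshold policy and all parameter values are illustrative assumptions, not the paper's optimally tuned denoiser:

```python
import numpy as np

def soft(x, t):
    """Scalar soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(y, A, iters=30, alpha=2.0):
    """Minimal AMP sketch; the threshold alpha*sigma is an illustrative policy."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(n)      # empirical noise level
        x_new = soft(x + A.T @ z, alpha * sigma)
        # Onsager correction: residual times average derivative of the denoiser
        z = y - A @ x_new + z * (np.count_nonzero(x_new) / n)
        x = x_new
    return x

rng = np.random.default_rng(0)
N, n, k = 400, 200, 20          # delta = n/N = 0.5, eps = k/N = 0.05
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[:k] = 3.0 * rng.standard_normal(k)
y = A @ x0                      # noiseless undersampled measurements
x_hat = amp(y, A)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```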
|
1111.1048
|
Achievable and Crystallized Rate Regions of the Interference Channel
with Interference as Noise
|
cs.IT math.IT
|
The interference channel achievable rate region is presented when the
interference is treated as noise. The formulation starts with the 2-user
channel, and then extends the results to the n-user case. The rate region is
found to be the convex hull of the union of n power control rate regions, where
each power control rate region is upper-bounded by an (n-1)-dimensional
hyper-surface characterized by having one of the transmitters transmitting at
full power. The convex hull operation lends itself to a time-sharing operation
depending on the convexity behavior of those hyper-surfaces. In order to know
when to use time-sharing rather than power control, the paper studies the
convexity behavior of the hyper-surfaces in detail for the 2-user channel, with
specific results pertaining to the symmetric channel. It is observed that most
of the achievable rate region can be covered by using simple On/Off binary
power control in conjunction with time-sharing. The binary power control
creates several corner points in the n-dimensional space. The crystallized rate
region, named after its resulting crystal shape, is hence presented as the
time-sharing convex hull imposed onto those corner points; thereby offering a
viable new perspective of looking at the achievable rate region of the
interference channel.
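The rate expressions underlying the achievable region, and the corner points generated by On/Off binary power control, can be sketched for the 2-user case as follows. The gain matrix and power values are invented for illustration:

```python
import math

def rates_2user(p1, p2, g, noise=1.0):
    """Achievable rates (bits/channel use) when interference is treated as
    noise; g[i][j] is the power gain from transmitter j to receiver i.
    Gains and powers below are invented for illustration."""
    r1 = math.log2(1.0 + g[0][0] * p1 / (noise + g[0][1] * p2))
    r2 = math.log2(1.0 + g[1][1] * p2 / (noise + g[1][0] * p1))
    return r1, r2

# On/Off binary power control: each transmitter is either silent or at full
# power, producing the corner points the crystallized region is built on.
P = 10.0
g = [[1.0, 0.4], [0.4, 1.0]]
corners = [rates_2user(a * P, b * P, g) for a in (0.0, 1.0) for b in (0.0, 1.0)]
```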
|
1111.1051
|
Multiuser Diversity in Interfering Broadcast Channels: Achievable
Degrees of Freedom and User Scaling Law
|
cs.IT math.IT
|
This paper investigates how multiuser dimensions can effectively be exploited
for target degrees of freedom (DoF) in interfering broadcast channels (IBC)
consisting of K transmitters and their user groups. First, each transmitter is
assumed to have a single antenna and to serve a single user in its user group,
where each user has fewer than K receive antennas. In this case, a K-transmitter
single-input multiple-output (SIMO) interference channel (IC) is constituted
after user selection. Without the help of multiuser diversity, the K-1 interfering
signals cannot be perfectly removed at each user since the number of receive
antennas is smaller than or equal to the number of interferers. Only with
proper user selection, non-zero DoF per transmitter is achievable as the number
of users increases. Through geometric interpretation of interfering channels,
we show that the multiuser dimensions have to be used first for reducing the
DoF loss caused by the interfering signals, and then have to be used for
increasing the DoF gain from its own signal. The sufficient number of users for
the target DoF is derived. We also discuss how the optimal strategy of
exploiting multiuser diversity can be realized by practical user selection
schemes. Finally, the single transmit antenna case is extended to the
multiple-input multiple-output (MIMO) IBC where each transmitter with multiple
antennas serves multiple users.
|
1111.1053
|
Modelling and Performance analysis of a Network of Chemical Sensors with
Dynamic Collaboration
|
cs.SI physics.soc-ph
|
The problem of environmental monitoring using a wireless network of chemical
sensors with a limited energy supply is considered. Since the conventional
chemical sensors in active mode consume vast amounts of energy, an optimisation
problem arises in the context of a balance between the energy consumption and
the detection capabilities of such a network. A protocol based on "dynamic
sensor collaboration" is employed: in the absence of any pollutant, the majority
of sensors are in the sleep (passive) mode; a sensor is invoked (activated) by
wake-up messages from its neighbors only when more information is required. The
paper proposes a mathematical model of a network of chemical sensors using this
protocol. The model provides valuable insights into the network behavior and
near optimal capacity design (energy consumption against detection). An
analytical model of the environment, using turbulent mixing to capture chaotic
fluctuations, intermittency and non-homogeneity of the pollutant distribution,
is employed in the study. A binary model of a chemical sensor is assumed (a
device with threshold detection). The outcome of the study is a set of simple
analytical tools for sensor network design, optimisation, and performance
analysis.
|
1111.1090
|
A robust, low-cost approach to Face Detection and Face Recognition
|
cs.CV cs.CR eess.IV
|
In the domain of biometrics, recognition systems based on iris, fingerprint
or palm print scans are often considered more dependable due to the extremely
low variance in the properties of these entities with respect to time. However,
over the last decade data processing capability of computers has increased
manifold, which has made real-time video content analysis possible. This shows
that the need of the hour is a robust and highly automated Face Detection and
Recognition algorithm with credible accuracy rate. The proposed Face Detection
and Recognition system using Discrete Wavelet Transform (DWT) accepts face
frames as input from a database containing images from low cost devices such as
VGA cameras, webcams or even CCTVs, where image quality is inferior. The face
region is then detected using properties of the L*a*b* color space, and only the
frontal face is extracted so that all additional background is eliminated.
Further, this extracted image is converted to grayscale and resized to 128 x 128
pixels. DWT is then applied to the entire image to obtain the coefficients.
Recognition is carried out by comparison of the DWT coefficients
belonging to the test image with those of the registered reference image. A
Euclidean distance classifier is then deployed to validate the test image
against the database. Accuracy at various levels of DWT decomposition is
obtained and compared.
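The DWT-and-Euclidean-distance matching step can be sketched as follows, using a hand-rolled single-level Haar transform; the L*a*b* face-extraction preprocessing is omitted and the images are synthetic:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns the approximation (LL) and the
    LH, HL, HH detail blocks. Image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def match_score(test, reference, levels=3):
    """Euclidean distance between multi-level approximation coefficients
    (smaller means a better match)."""
    t, r = test.astype(float), reference.astype(float)
    for _ in range(levels):
        t = haar_dwt2(t)[0]
        r = haar_dwt2(r)[0]
    return float(np.linalg.norm(t - r))

rng = np.random.default_rng(0)
face = rng.random((128, 128))                     # stand-in for a 128 x 128 face
same = face + rng.normal(0.0, 0.01, face.shape)   # slightly perturbed copy
other = rng.random((128, 128))                    # a different "face"
```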
|
1111.1093
|
Securing Biometric Images using Reversible Watermarking
|
cs.CV cs.IR
|
Biometric security is a fast-growing area. Protecting biometric data is very
important since it can be misused by attackers. Among the different methods for
increasing the security of biometric data, watermarking is widely accepted. An
important new development in this area is reversible watermarking, in which the
original image can be completely restored and the watermark retrieved. However,
reversible watermarking in biometrics is an understudied area. Reversible
watermarking maintains the high quality of
biometric data. This paper proposes Rotational Replacement of LSB as a
reversible watermarking scheme for biometric images. PSNR is the standard
method used for quality measurement of biometric data. In this paper we also
show that the SSIM index is a better alternative for effective quality
assessment of reversibly watermarked biometric data, by comparison with the
well-known reversible watermarking scheme based on Difference Expansion.
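The quality-measurement side of the comparison can be sketched with a PSNR routine and a plain (non-reversible) LSB substitution baseline. Note this is NOT the paper's reversible Rotational Replacement scheme, only a generic illustration of watermark embedding and quality measurement:

```python
import math
import numpy as np

def psnr(original, modified, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def embed_lsb(image, bits):
    """Plain LSB substitution: overwrite each pixel's least significant bit
    with a watermark bit. NOT reversible and NOT the paper's Rotational
    Replacement scheme; shown only to make the quality step concrete."""
    out = image.copy()
    flat = out.reshape(-1)
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
wm = [int(b) for b in rng.integers(0, 2, size=img.size)]
marked = embed_lsb(img, wm)
quality_db = psnr(img, marked)                       # LSB changes barely hurt PSNR
extracted = [int(p) & 1 for p in marked.reshape(-1)]
```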
|
1111.1094
|
On Three Challenges of Artificial Living Systems and Embodied Evolution
|
cs.RO cs.ET
|
Creating autonomous, self-supporting, self-replicating, sustainable systems
is a great challenge. To some extent, understanding life means not only being
able to create it from scratch, but also improving, supporting, saving it, or
making it even more advanced. This can be thought of as a long-term goal of
living technologies and embodied evolution. The current research agenda targets
several short- and middle-term steps towards achieving such a vision:
connection of ICT and bio-/chemo- developments, advances in "soft" and "wet"
robotics, integration of material science into developmental robotics, and
potentially, addressing the self-replication in autonomous systems.
|
1111.1124
|
Tight Bounds on Proper Equivalence Query Learning of DNF
|
cs.LG cs.CC
|
We prove a new structural lemma for partial Boolean functions $f$, which we
call the seed lemma for DNF. Using the lemma, we give the first subexponential
algorithm for proper learning of DNF in Angluin's Equivalence Query (EQ) model.
The algorithm has time and query complexity $2^{\tilde{O}(\sqrt{n})}$, which
is optimal. We also give a new result on certificates for DNF-size, a simple
algorithm for properly PAC-learning DNF, and new results on EQ-learning $\log
n$-term DNF and decision trees.
|
1111.1136
|
Universal MMSE Filtering With Logarithmic Adaptive Regret
|
cs.LG cs.IT math.IT
|
We consider the problem of online estimation of a real-valued signal
corrupted by oblivious zero-mean noise using linear estimators. The estimator
is required to iteratively predict the underlying signal based on the current
and several last noisy observations, and its performance is measured by the
mean-square-error. We describe and analyze an algorithm for this task which: 1.
Achieves logarithmic adaptive regret against the best linear filter in
hindsight. This bound is asymptotically tight, and resolves the question of
Moon and Weissman [1]. 2. Runs in linear time in terms of the number of filter
coefficients. Previous constructions required at least quadratic time.
|
1111.1144
|
The State-Dependent Semideterministic Broadcast Channel
|
cs.IT math.IT
|
We derive the capacity region of the state-dependent semideterministic
broadcast channel with noncausal state-information at the transmitter. One of
the two outputs of this channel is a deterministic function of the channel
input and the channel state, and the state is assumed to be known noncausally
to the transmitter but not to the receivers. We show that appending the state
to the deterministic output does not increase capacity.
We also derive an outer bound on the capacity of general (not necessarily
semideterministic) state-dependent broadcast channels.
|
1111.1162
|
The degrees of freedom of the Lasso for general design matrix
|
math.ST cs.IT math.IT stat.TH
|
In this paper, we investigate the degrees of freedom (DOF) of penalized
$\ell_1$ minimization (also known as the Lasso) for linear regression models.
We give a closed-form expression for the DOF of the Lasso response. Namely,
we show that for any given Lasso regularization parameter $\lambda$ and any
observed data $y$ belonging to a set of full (Lebesgue) measure, the
cardinality of the support of a particular solution of the Lasso problem is an
unbiased estimator of the degrees of freedom. This is achieved without the need
of uniqueness of the Lasso solution. Thus, our result holds true for both the
underdetermined and the overdetermined case, where the latter was originally
studied in \cite{zou}. We also show, by providing a simple counterexample, that
although the DOF theorem of \cite{zou} is correct, their proof contains a flaw,
since their divergence formula holds on a different set of full measure than
the one they claim. An effective estimator of the number of degrees of freedom
may have several applications, including an objectively guided choice of the
regularization parameter in the Lasso through the SURE framework. Our
theoretical findings are illustrated through several numerical simulations.
|
1111.1191
|
Constant Envelope Precoding for Power-Efficient Downlink Wireless
Communication in Multi-User MIMO Systems Using Large Antenna Arrays
|
cs.IT math.IT
|
We consider downlink cellular multi-user communication between a base station
(BS) having N antennas and M single-antenna users, i.e., an N X M Gaussian
Broadcast Channel (GBC). Under an average only total transmit power constraint
(APC), large antenna arrays at the BS (having tens to a few hundred antennas)
have been recently shown to achieve remarkable multi-user interference (MUI)
suppression with simple precoding techniques. However, building large arrays in
practice would require cheap, power-efficient Radio-Frequency (RF) electronic
components. The type of transmitted signal that facilitates the use of the most
power-efficient RF components is a constant envelope (CE) signal. Under certain
mild channel conditions (including i.i.d. fading), we analytically show that,
even under the stringent per-antenna CE transmission constraint (compared to
APC), MUI suppression can still be achieved with large antenna arrays. Our
analysis also reveals that, with a fixed M and increasing N, the total
transmitted power can be reduced while maintaining a constant
signal-to-interference-noise-ratio (SINR) level at each user. We also propose a
novel low-complexity CE precoding scheme, using which, we confirm our
analytical observations for the i.i.d. Rayleigh fading channel, through
Monte-Carlo simulations. Simulation of the information sum-rate under the
per-antenna CE constraint, shows that, for a fixed M and a fixed desired
sum-rate, the required total transmit power decreases linearly with increasing
N, i.e., an O(N) array power gain. Also, in terms of the total transmit power
required to achieve a fixed desired information sum-rate, despite the stringent
per-antenna CE constraint, the proposed CE precoding scheme performs close to
the GBC sum-capacity (under APC) achieving scheme.
|
1111.1227
|
More Voices Than Ever? Quantifying Media Bias in Networks
|
cs.SI cs.CY physics.soc-ph
|
Social media, such as blogs, are often seen as democratic entities that allow
more voices to be heard than the conventional mass or elite media. Some also
feel that social media exhibits a balancing force against the arguably slanted
elite media. A systematic comparison between social and mainstream media is
necessary but challenging due to the scale and dynamic nature of modern
communication. Here we propose empirical measures to quantify the extent and
dynamics of social (blog) and mainstream (news) media bias. We focus on a
particular form of bias---coverage quantity---as applied to stories about the
111th US Congress. We compare observed coverage of Members of Congress against
a null model of unbiased coverage, testing for biases with respect to political
party, popular front runners, regions of the country, and more. Our measures
suggest distinct characteristics in news and blog media. A simple generative
model, in agreement with data, reveals differences in the process of coverage
selection between the two media.
|
1111.1311
|
Covariant fractional extension of the modified Laplace-operator used in
3D-shape recovery
|
cs.CV
|
By extending the Liouville-Caputo definition of a fractional derivative to a
nonlocal covariant generalization of arbitrary bounded operators acting on
multidimensional Riemannian spaces, we propose an approach suited to the 3D
shape recovery of aperture-afflicted 2D slide sequences. We demonstrate that
the step from a local to a nonlocal algorithm yields an order-of-magnitude
improvement in accuracy, and that the specific fractional approach yields an
additional factor of 2 in the accuracy of the derived results.
|
1111.1315
|
Nonparametric Bayesian Estimation of Periodic Functions
|
cs.LG astro-ph.IM
|
Many real world problems exhibit patterns that have periodic behavior. For
example, in astrophysics, periodic variable stars play a pivotal role in
understanding our universe. An important step when analyzing data from such
processes is the problem of identifying the period: estimating the period of a
periodic function based on noisy observations made at irregularly spaced time
points. This problem is still a difficult challenge despite extensive study in
different disciplines. The paper makes several contributions toward solving
this problem. First, we present a nonparametric Bayesian model for period
finding, based on Gaussian Processes (GP), that does not make strong
assumptions on the shape of the periodic function. As our experiments
demonstrate, the new model leads to significantly better results in period
estimation when the target function is non-sinusoidal. Second, we develop a new
algorithm for parameter optimization for GP which is useful when the likelihood
function is very sensitive to the setting of the hyper-parameters with numerous
local minima, as in the case of period estimation. The algorithm combines
gradient optimization with grid search and incorporates several mechanisms to
overcome the high complexity of inference with GP. Third, we develop a novel
approach for using domain knowledge, in the form of a probabilistic generative
model, and incorporate it into the period estimation algorithm. Experimental
results on astrophysics data validate our approach showing significant
improvement over the state of the art in this domain.
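The grid-search component of the period-finding procedure can be sketched with a standard periodic (exp-sine-squared) GP kernel and its log marginal likelihood. The kernel hyper-parameters and the synthetic square-wave data are illustrative assumptions, not the paper's model:

```python
import numpy as np

def periodic_kernel(t1, t2, period, length=1.0, var=1.0):
    """Standard periodic (exp-sine-squared) GP covariance function."""
    d = np.abs(t1[:, None] - t2[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

def log_marginal_likelihood(t, y, period, noise=0.1):
    """GP log marginal likelihood via a Cholesky factorization of the kernel."""
    K = periodic_kernel(t, t, period) + noise ** 2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
                 - 0.5 * len(t) * np.log(2.0 * np.pi))

# Irregularly spaced, noisy observations of a non-sinusoidal (square) wave:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 20.0, 80))
true_period = 2.5
y = np.sign(np.sin(2.0 * np.pi * t / true_period)) + 0.05 * rng.standard_normal(80)
# Grid search over candidate periods (the grid component of a combined
# gradient + grid optimizer):
grid = np.linspace(1.0, 5.0, 161)
best_period = grid[int(np.argmax([log_marginal_likelihood(t, y, p) for p in grid]))]
```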
|
1111.1321
|
MIVAR: Transition from Productions to Bipartite Graphs MIVAR Nets and
Practical Realization of Automated Constructor of Algorithms Handling More
than Three Million Production Rules
|
cs.AI
|
The theoretical transition from the graphs of production systems to the
bipartite graphs of the MIVAR nets is shown. Examples of the implementation of
the MIVAR nets in the formalisms of matrices and graphs are given. The linear
computational complexity of algorithms for automated building of objects and
rules of the MIVAR nets is theoretically proved. On the basis of the MIVAR nets
the UDAV software complex is developed, handling more than 1.17 million objects
and more than 3.5 million rules on ordinary computers. The results of
experiments that confirm a linear computational complexity of the MIVAR method
of information processing are given.
Keywords: MIVAR, MIVAR net, logical inference, computational complexity,
artificial intelligence, intelligent systems, expert systems, General Problem
Solver.
|
1111.1347
|
Wyner-Ziv Coding Based on Multidimensional Nested Lattices
|
cs.IT math.IT
|
Distributed source coding (DSC) addresses the compression of correlated
sources without communication links among them. This paper is concerned with
the Wyner-Ziv problem: coding of an information source with side information
available only at the decoder in the form of a noisy version of the source.
Both the theoretical analysis and code design are addressed in the framework of
multi-dimensional nested lattice coding (NLC). For theoretical analysis,
accurate computation of the rate-distortion function is given under the
high-resolution assumption, and a new upper bound using the derivative of the
theta series is derived. For practical code design, several techniques with low
complexity are proposed. Compared to the existing Slepian-Wolf coded nested
quantization (SWC-NQ) for Wyner-Ziv coding based on one- or two-dimensional
lattices, our proposed multi-dimensional NLC can offer better performance at
arguably lower complexity, since it does not require the second stage of
Slepian-Wolf coding.
|
1111.1353
|
An efficient implementation of the simulated annealing heuristic for the
quadratic assignment problem
|
cs.NE
|
The quadratic assignment problem (QAP) is one of the most difficult
combinatorial optimization problems. One of the most powerful and commonly used
heuristics to obtain approximations to the optimal solution of the QAP is
simulated annealing (SA). We present an efficient implementation of the SA
heuristic which performs more than 100 times faster than existing
implementations for large problem sizes and large numbers of SA iterations.
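The standard ingredient behind fast SA implementations for the QAP, namely O(n) delta evaluation of a swap instead of an O(n^2) recomputation of the objective, can be sketched as follows. Whether this matches the paper's specific optimizations is an assumption; the instance below is random and symmetric:

```python
import math
import random

def qap_cost(F, D, p):
    """Full O(n^2) QAP objective under assignment p (facility i at location p[i])."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def swap_delta(F, D, p, r, s):
    """O(n) cost change from swapping p[r] and p[s]; valid for symmetric F and D
    with zero diagonals."""
    total = 0.0
    for k in range(len(p)):
        if k != r and k != s:
            total += (F[r][k] - F[s][k]) * (D[p[s]][p[k]] - D[p[r]][p[k]])
    return 2.0 * total

def sa_qap(F, D, iters=20000, t0=100.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = len(F)
    p = list(range(n))
    cost = qap_cost(F, D, p)
    best = cost
    t = t0
    for _ in range(iters):
        r, s = rng.sample(range(n), 2)
        d = swap_delta(F, D, p, r, s)
        if d < 0 or rng.random() < math.exp(-d / t):  # Metropolis acceptance
            p[r], p[s] = p[s], p[r]
            cost += d                                  # incremental cost update
            best = min(best, cost)
        t *= cooling
    return best, cost, p

# Random symmetric instance with zero diagonals:
inst_rng = random.Random(1)
n = 12
F = [[0] * n for _ in range(n)]
D = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i):
        F[i][j] = F[j][i] = inst_rng.randint(1, 9)
        D[i][j] = D[j][i] = inst_rng.randint(1, 9)
best_cost, final_cost, p = sa_qap(F, D)
```

The consistency check below (final incremental cost equals the recomputed objective) is exactly what validates the delta formula.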
|
1111.1365
|
Co-community Structure in Time-varying Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI nlin.AO
|
In this report, we introduce the concept of co-community structure in
time-varying networks. We propose a novel optimization algorithm to rapidly
detect co-community structure in these networks. Both theoretical and numerical
results show that the proposed method can not only resolve detailed
co-communities, but also effectively identify dynamical phenomena in these
networks.
|
1111.1373
|
Speculative Parallel Evaluation Of Classification Trees On GPGPU Compute
Engines
|
cs.DC cs.CV
|
We examine the problem of optimizing classification tree evaluation for
on-line and real-time applications by using GPUs. Looking at trees with
continuous attributes, often used in image segmentation, we first put the
existing algorithms for serial and data-parallel evaluation on a solid footing.
We then introduce a speculative parallel algorithm designed for single
instruction, multiple data (SIMD) architectures commonly found in GPUs. A
theoretical analysis shows how the run times of data and speculative
decompositions compare assuming independent processors. To compare the
algorithms in the SIMD environment, we implement both on a CUDA 2.0
architecture machine and compare timings to a serial CPU implementation.
Various optimizations and their effects are discussed, and results are given
for all algorithms. Our specific tests show a speculative algorithm improves
run time by 25% compared to a data decomposition.
|
1111.1386
|
Confidence Estimation in Structured Prediction
|
cs.LG
|
Structured classification tasks such as sequence labeling and dependency
parsing have seen much interest by the Natural Language Processing and the
machine learning communities. Several online learning algorithms have been
adapted for structured tasks, such as the Perceptron, Passive-Aggressive, and
the recently introduced Confidence-Weighted learning. These online algorithms
are easy to implement, fast to train, and yield state-of-the-art performance.
However, unlike probabilistic models such as Hidden Markov Models and
Conditional Random Fields, these methods produce models that output merely a
prediction, with no additional information regarding confidence in the
correctness of the output. In this work we fill that gap by proposing a few
alternatives for computing the confidence in the output of non-probabilistic
algorithms. We show how to compute
confidence estimates in the prediction such that the confidence reflects the
probability that the word is labeled correctly. We then show how to use our
methods to detect mislabeled words, to trade recall for precision, and to
perform active learning. We evaluate our methods on four noun-phrase chunking
and named entity
recognition sequence labeling tasks, and on dependency parsing for 14
languages.
|
1111.1396
|
Improving the Thresholds of Sparse Recovery: An Analysis of a Two-Step
Reweighted Basis Pursuit Algorithm
|
cs.IT math.IT
|
It is well known that $\ell_1$ minimization can be used to recover
sufficiently sparse unknown signals from compressed linear measurements. In
fact, exact thresholds on the sparsity, as a function of the ratio between the
system dimensions, so that with high probability almost all sparse signals can
be recovered from i.i.d. Gaussian measurements, have been computed and are
referred to as "weak thresholds" \cite{D}. In this paper, we introduce a
reweighted $\ell_1$ recovery algorithm composed of two steps: a standard
$\ell_1$ minimization step to identify a set of entries where the signal is
likely to reside, and a weighted $\ell_1$ minimization step where entries
outside this set are penalized. For signals where the non-sparse component
entries are independent and identically drawn from certain classes of
distributions (including most well-known continuous distributions), we prove a
\emph{strict} improvement in the weak recovery threshold. Our analysis suggests
that the level of improvement in the weak threshold depends on the behavior of
the distribution at the origin. Numerical simulations verify the distribution
dependence of the threshold improvement very well, and suggest that in the case
of i.i.d. Gaussian nonzero entries, the improvement can be quite
impressive---over 20% in the example we consider.
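The two-step procedure described above can be sketched directly, solving each (weighted) $\ell_1$ minimization as a linear program. This is an illustrative implementation, not the paper's code: the support tolerance and the off-support weight `1/omega` are assumed parameters.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """min sum_i w_i |z_i|  s.t.  A z = y, as an LP over z = z_plus - z_minus."""
    m, n = A.shape
    c = np.concatenate([w, w])          # objective on [z_plus; z_minus]
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u = res.x
    return u[:n] - u[n:]

def two_step_reweighted_l1(A, y, omega=0.5):
    """Step 1: plain l1 minimization to locate the likely support.
    Step 2: weighted l1 with entries outside that set penalized by 1/omega."""
    n = A.shape[1]
    z1 = weighted_l1_min(A, y, np.ones(n))
    support = np.abs(z1) > 1e-6 * max(np.abs(z1).max(), 1.0)
    w = np.where(support, 1.0, 1.0 / omega)
    return weighted_l1_min(A, y, w)
```

For a sufficiently sparse signal and i.i.d. Gaussian `A`, step 1 already identifies the support with high probability, and the reweighted step preserves (and, near the threshold, improves) recovery.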
|
1111.1414
|
Revisiting algorithms for generating surrogate time series
|
physics.data-an astro-ph.HE cs.CE nlin.CD
|
The method of surrogates is one of the key concepts of nonlinear data
analysis. Here, we demonstrate that commonly used algorithms for generating
surrogates often fail to generate truly linear time series. Rather, they create
surrogate realizations with Fourier phase correlations leading to
non-detections of nonlinearities. We argue that reliable surrogates can only be
generated if one tests separately for static and dynamic nonlinearities.
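For reference, the classical phase-randomized surrogate discussed here fits in a few lines (a minimal illustration: randomize the Fourier phases while preserving the amplitude spectrum; the seeding details are illustrative):

```python
import numpy as np

def phase_randomized_surrogate(x, seed=None):
    """Surrogate with the same amplitude spectrum as x but random Fourier
    phases, i.e. a realization of a linear process with x's power spectrum."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0              # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0         # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)
```

As the abstract cautions, such surrogates only randomize phases; testing separately for static and dynamic nonlinearities requires additional steps (e.g. amplitude-adjusted variants).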
|
1111.1418
|
Efficient Nonparametric Conformal Prediction Regions
|
math.ST cs.LG stat.TH
|
We investigate and extend the conformal prediction method due to Vovk,
Gammerman and Shafer (2005) to construct nonparametric prediction regions.
These regions have guaranteed distribution free, finite sample coverage,
without any assumptions on the distribution or the bandwidth. Explicit
convergence rates of the loss function are established for such regions under
standard regularity conditions. Approximations for simplifying implementation
and data driven bandwidth selection methods are also discussed. The theoretical
properties of our method are demonstrated through simulations.
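A minimal split-conformal sketch in the spirit of this abstract, using a kernel density estimate as the conformity score (the data split, the score, and `alpha` are illustrative choices, not the paper's exact construction):

```python
import numpy as np
from scipy.stats import gaussian_kde

def conformal_density_region(train, calib, alpha=0.1):
    """Return (kde, t) defining the region {x : kde(x) >= t}; the conformal
    construction guarantees the region covers a new point from the same
    distribution with probability at least 1 - alpha, without distributional
    assumptions."""
    kde = gaussian_kde(np.asarray(train, dtype=float))
    scores = np.sort(kde(calib))                 # conformity scores
    k = int(np.floor(alpha * (len(calib) + 1)))
    t = scores[max(k - 1, 0)]                    # conformal threshold
    return kde, t
```

Note that the guarantee is distribution-free and finite-sample, exactly as the abstract states: the bandwidth chosen inside `gaussian_kde` affects the region's size, not its coverage.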
|
1111.1422
|
Robust Interactive Learning
|
cs.LG
|
In this paper we propose and study a generalization of the standard
active-learning model where a more general type of query, class conditional
query, is allowed. Such queries have been quite useful in applications, but
have been lacking theoretical understanding. In this work, we characterize the
power of such queries under two well-known noise models. We give nearly tight
upper and lower bounds on the number of queries needed to learn both for the
general agnostic setting and for the bounded noise model. We further show that
our methods can be made adaptive to the (unknown) noise rate, with only
negligible loss in query complexity.
|
1111.1423
|
Face Recognition Using Discrete Cosine Transform for Global and Local
Features
|
cs.CV cs.CR cs.IT math.IT
|
Face Recognition using Discrete Cosine Transform (DCT) for Local and Global
Features involves recognizing the corresponding face image from the database.
The face image obtained from the user is cropped such that only the frontal
face image is extracted, eliminating the background. The image is restricted to
a size of 128 x 128 pixels. All images in the database are gray level images.
DCT is applied to the entire image. This gives DCT coefficients, which are
global features. Local features such as eyes, nose and mouth are also extracted
and DCT is applied to these features. Depending upon the recognition rate
obtained for each feature, the features are assigned weights and then combined. Both
local and global features are used for comparison. By comparing the ranks for
global and local features, the false acceptance rate for DCT can be minimized.
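As an illustration of the global-feature step, the low-frequency DCT coefficients of the whole image can serve as the feature vector (a sketch with an assumed block size `k`, not the paper's exact configuration):

```python
import numpy as np
from scipy.fft import dctn

def global_dct_features(img, k=8):
    """2-D DCT of the grayscale image; keep the top-left k x k block of
    low-frequency coefficients as the global feature vector."""
    coeffs = dctn(np.asarray(img, dtype=float), norm="ortho")
    return coeffs[:k, :k].ravel()
```

The same routine, applied to cropped eye, nose and mouth regions, would yield the local features, which can then be weighted and combined for matching.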
|
1111.1426
|
SLIQ: Simple Linear Inequalities for Efficient Contig Scaffolding
|
q-bio.GN cs.CE
|
Scaffolding is an important subproblem in "de novo" genome assembly in which
mate pair data are used to construct a linear sequence of contigs separated by
gaps. Here we present SLIQ, a set of simple linear inequalities derived from
the geometry of contigs on the line that can be used to predict the relative
positions and orientations of contigs from individual mate pair reads and thus
produce a contig digraph. The SLIQ inequalities can also filter out unreliable
mate pairs and can be used as a preprocessing step for any scaffolding
algorithm. We tested the SLIQ inequalities on five real data sets ranging in
complexity from simple bacterial genomes to complex mammalian genomes and
compared the results to the majority voting procedure used by many other
scaffolding algorithms. SLIQ predicted the relative positions and orientations
of the contigs with high accuracy in all cases and gave more accurate position
predictions than majority voting for complex genomes, in particular the human
genome. Finally, we present a simple scaffolding algorithm that produces linear
scaffolds given a contig digraph. We show that our algorithm is very efficient
compared to other scaffolding algorithms while maintaining high accuracy in
predicting both contig positions and orientations for real data sets.
|
1111.1432
|
Universal Lossless Data Compression Via Binary Decision Diagrams
|
cs.IT math.IT
|
A binary string of length $2^k$ induces the Boolean function of $k$ variables
whose Shannon expansion is the given binary string. This Boolean function then
is representable via a unique reduced ordered binary decision diagram (ROBDD).
The given binary string is fully recoverable from this ROBDD. We exhibit a
lossless data compression algorithm in which a binary string whose length is a
power of two is compressed via compression of the ROBDD associated to it as
described above.
We show that when binary strings of length $n$, with $n$ a power of two, are
compressed via this algorithm, the maximal pointwise redundancy/sample with
respect to any
s-state binary information source has the upper bound
$(4\log_2s+16+o(1))/\log_2n $. To establish this result, we exploit a result of
Liaw and Lin stating that the ROBDD representation of a Boolean function of $k$
variables contains a number of vertices on the order of $(2+o(1))2^{k}/k$.
|
1111.1461
|
Multimodal diff-hash
|
cs.CV
|
Many applications require comparing multimodal data with different structure
and dimensionality that cannot be compared directly. Recently, there has been
increasing interest in methods for learning and efficiently representing such
multimodal similarity. In this paper, we present a simple algorithm for
multimodal similarity-preserving hashing, trying to map multimodal data into
the Hamming space while preserving the intra- and inter-modal similarities. We
show that our method significantly outperforms the state-of-the-art method in
the field.
|
1111.1486
|
Embedding Description Logic Programs into Default Logic
|
cs.AI
|
Description logic programs (dl-programs) under the answer set semantics
formulated by Eiter {\em et al.} have been considered as a prominent formalism
for integrating rules and ontology knowledge bases. A question of interest has
been whether dl-programs can be captured in a general formalism of nonmonotonic
logic. In this paper, we study the possibility of embedding dl-programs into
default logic. We show that dl-programs under the strong and weak answer set
semantics can be embedded in default logic by combining two translations, one
of which eliminates the constraint operator from nonmonotonic dl-atoms, while
the other translates a dl-program into a default theory. For dl-programs without
nonmonotonic dl-atoms but with the negation-as-failure operator, our embedding
is polynomial, faithful, and modular. In addition, our default logic encoding
can be extended in a simple way to capture recently proposed weakly
well-supported answer set semantics, for arbitrary dl-programs. These results
reinforce the argument that default logic can serve as a fruitful foundation
for query-based approaches to integrating ontology and rules. With its simple
syntax and intuitive semantics, plus available computational results, default
logic can be considered an attractive approach to integration of ontology and
rules.
|
1111.1492
|
The Gathering Problem for Two Oblivious Robots with Unreliable Compasses
|
cs.DC cs.DS cs.RO
|
Anonymous mobile robots are often classified into synchronous,
semi-synchronous and asynchronous robots when discussing the pattern formation
problem. For semi-synchronous robots, all patterns formable with memory are
also formable without memory, with the single exception of forming a point
(i.e., the gathering) by two robots. However, the gathering problem for two
semi-synchronous robots without memory is trivially solvable when their local
coordinate systems are consistent, and the impossibility proof essentially uses
the inconsistencies in their coordinate systems. Motivated by this, this paper
investigates the magnitude of consistency between the local coordinate systems
necessary and sufficient to solve the gathering problem for two oblivious
robots under semi-synchronous and asynchronous models. To discuss the magnitude
of consistency, we assume that each robot is equipped with an unreliable
compass, the bearings of which may deviate from an absolute reference
direction, and that the local coordinate system of each robot is determined by
its compass. We consider two families of unreliable compasses, namely, static
compasses with constant bearings, and dynamic compasses, the bearings of which
can change arbitrarily.
For each of the combinations of robot and compass models, we establish the
condition on deviation \phi that allows an algorithm to solve the gathering
problem, where the deviation is measured by the largest angle formed between
the x-axis of a compass and the reference direction of the global coordinate
system: \phi < \pi/2 for semi-synchronous and asynchronous robots with static
compasses, \phi < \pi/4 for semi-synchronous robots with dynamic compasses, and
\phi < \pi/6 for asynchronous robots with dynamic compasses. Except for
asynchronous robots with dynamic compasses, these sufficient conditions are
also necessary.
|
1111.1497
|
An IR-based Evaluation Framework for Web Search Query Segmentation
|
cs.IR
|
This paper presents the first evaluation framework for Web search query
segmentation based directly on IR performance. In the past, segmentation
strategies were mainly validated against manual annotations. Our work shows
that the goodness of a segmentation algorithm as judged through evaluation
against a handful of human annotated segmentations hardly reflects its
effectiveness in an IR-based setup. In fact, state-of-the-art algorithms are
shown to perform as well as, and sometimes even better than, human annotations
-- a fact masked by previous validations. The proposed framework also provides
us with an objective understanding of the gap between the present best and the best
possible segmentation algorithm. We draw these conclusions based on an
extensive evaluation of six segmentation strategies, including three of the most
recent algorithms, vis-a-vis segmentations from three human annotators. The
evaluation framework also gives insights about which segments should be
necessarily detected by an algorithm for achieving the best retrieval results.
The meticulously constructed dataset used in our experiments has been made
public for use by the research community.
|
1111.1498
|
H_2-Optimal Decentralized Control over Posets: A State-Space Solution
for State-Feedback
|
math.OC cs.SY
|
We develop a complete state-space solution to H_2-optimal decentralized
control of poset-causal systems with state-feedback. Our solution is based on
the exploitation of a key separability property of the problem, that enables an
efficient computation of the optimal controller by solving a small number of
uncoupled standard Riccati equations. Our approach gives important insight into
the structure of optimal controllers, such as controller degree bounds that
depend on the structure of the poset. A novel element in our state-space
characterization of the controller is a remarkable pair of transfer functions
that belong to the incidence algebra of the poset, are inverses of each other,
and are intimately related to prediction of the state along the different paths
on the poset. The results are illustrated by a numerical example.
|
1111.1555
|
A scheme to protect against multiple quantum erasures
|
cs.IT math.IT quant-ph
|
We present a scheme able to protect k >= 3 qubits of information against the
occurrence of multiple erasures, based on the code proposed by Yang et al.
(2004 JETP Letters 79 236). In this scheme redundant blocks are used, and we
restrict attention to the case in which the erasures occur in distinct blocks.
explicitly characterize the encoding operation and the restoring operation
required to implement this scheme. The operators used in these operations can
be adjusted to construct different quantum erasure-correcting codes. A special
feature of this scheme is that no measurement is required. To illustrate our
scheme, we present an example in which five qubits of information are protected
against the occurrence of two erasures.
|
1111.1562
|
Iris Recognition Based on LBP and Combined LVQ Classifier
|
cs.CV
|
Iris recognition is considered one of the best biometric methods for human
identification and verification because of the iris's unique features, which
differ from one person to another, and its importance in the security field.
This paper proposes an algorithm for iris recognition and classification using
a system based on Local Binary Patterns (LBP) and histogram properties as
statistical approaches for feature extraction, and a Combined Learning Vector
Quantization (LVQ) Classifier as a neural network approach for classification,
in order to build a hybrid model that depends on both kinds of features.
Localization and segmentation are performed using Canny edge detection and the
Circular Hough Transform in order to isolate the iris from the whole eye image
and to detect noise. The feature vectors resulting from LBP are applied to a
Combined LVQ classifier with different numbers of classes to determine the
minimum acceptable performance, and the result is based on majority voting
among several LVQ classifiers. Different iris datasets (CASIA, MMU1, MMU2, and
LEI) with different extensions and sizes are used. Since LBP operates on
grayscale images, colored iris images are first converted to grayscale. The
proposed system achieves a high recognition rate of 99.87% on these iris
datasets, compared with other methods.
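For concreteness, the basic 3x3 LBP feature used in such pipelines can be sketched as follows (a minimal illustration; the paper's exact LBP variant and histogram binning are not specified here):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: compare each pixel's 8 neighbors
    with the center and pack the comparison bits into an 8-bit code."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes, used as the feature vector."""
    hist, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
    return hist / hist.sum()
```

Such histograms, computed per image (or per region), are what a classifier like LVQ would consume.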
|
1111.1564
|
Particle Swarm Optimization Framework for Low Power Testing of VLSI
Circuits
|
cs.NE
|
Power dissipation in sequential circuits is due to the increased toggling count
of the Circuit Under Test, which depends upon the test vectors applied. If
successive test vector sequences exhibit frequent toggling, the toggling rate
of the flip-flops is higher, and higher flip-flop toggling results in more
power dissipation. One way to overcome this problem is to use a GA to obtain
test vectors with high fault coverage in a short interval, followed by Hamming
distance management on the test patterns; this approach is time consuming and
needs more effort. The alternative proposed in this paper is a PSO-based
framework to optimize power dissipation. The goal is to arrange all test
vectors in a frame for time period 'T', so that the frame consists of those
vector strings which not only provide high fault coverage but are also ordered
within the frame to produce minimum toggling.
|
1111.1570
|
Semantic Grounding Strategies for Tagbased Recommender Systems
|
cs.IR cs.SI
|
Recommender systems usually operate on similarities between recommended items
or users. Tag-based recommender systems utilize similarities between tags. The
tags, however, are mostly free-form user-entered phrases, so similarities
computed without their semantic groundings may lead to less relevant
recommendations. In this paper, we study a semantic grounding used for tag
similarity calculation. We present a comprehensive analysis of the semantic
grounding given by 20 ontologies from different domains. Among other things,
the study reveals that currently available OWL ontologies are very narrow and
the percentage of similarity expansions is rather small. WordNet scores
slightly better, as it is broader, but not by much, since it does not support
several semantic relationships. Furthermore, the study reveals that even with
such a number of expansions, the recommendations change considerably.
|
1111.1596
|
Multi-Stage Complex Contagions
|
cs.SI math.DS nlin.AO physics.soc-ph
|
The spread of ideas across a social network can be studied using complex
contagion models, in which agents are activated by contact with multiple
activated neighbors. The investigation of complex contagions can provide
crucial insights into social influence and behavior-adoption cascades on
networks. In this paper, we introduce a model of a multi-stage complex
contagion on networks. Agents at different stages --- which could, for example,
represent differing levels of support for a social movement or differing levels
of commitment to a certain product or idea --- exert different amounts of
influence on their neighbors. We demonstrate that the presence of even one
additional stage introduces novel dynamical behavior, including interplay
between multiple cascades, that cannot occur in single-stage contagion models.
We find that cascades --- and hence collective action --- can be driven not
only by high-stage influencers but also by low-stage influencers.
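A minimal synchronous sketch of such a multi-stage threshold cascade (the stage weights and thresholds below are illustrative, not the paper's notation):

```python
def multi_stage_cascade(adj, seeds, weights=(0, 1, 2), thresholds=(1, 2)):
    """Each node is at stage 0, 1 or 2; a neighbor at stage s exerts
    influence weights[s]; a node rises to the highest stage whose
    threshold its total influence meets.  Stages never decrease."""
    stage = {v: 0 for v in adj}
    stage.update(seeds)                      # e.g. {node: 2} for initial adopters
    while True:
        infl = {v: sum(weights[stage[u]] for u in adj[v]) for v in adj}
        new = dict(stage)
        for v in adj:
            if infl[v] >= thresholds[1]:
                new[v] = max(new[v], 2)
            elif infl[v] >= thresholds[0]:
                new[v] = max(new[v], 1)
        if new == stage:
            return stage
        stage = new
```

On a path graph with one stage-2 seed, the extra influence of high-stage nodes lets the cascade reach stage 2 everywhere, which a single-stage model with the same top threshold could not achieve.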
|
1111.1599
|
Efficient Hierarchical Markov Random Fields for Object Detection on a
Mobile Robot
|
cs.CV
|
Object detection and classification using video is necessary for intelligent
planning and navigation on a mobile robot. However, current methods can be too
slow or insufficient for distinguishing multiple classes. Techniques that rely
on binary (foreground/background) labels incorrectly identify areas with
multiple overlapping objects as a single segment. We propose two Hierarchical
Markov Random Field models in an effort to distinguish connected objects using
tiered, binary label sets. Near-realtime performance has been achieved using
efficient optimization methods which run at up to 11 frames per second on a
dual-core 2.2 GHz processor. Both models are evaluated using footage taken from
a robot obstacle course at the 2010 Intelligent Ground Vehicle Competition.
|
1111.1648
|
Sentiment Analysis of Document Based on Annotation
|
cs.IR cs.CL
|
I present a tool which assesses the quality or usefulness of a document based
on its annotations. Annotations may include comments, notes, observations,
highlights, underlining, explanations, questions, help requests, etc. Comments
are used for evaluative purposes, while the others are used for summarization
or expansion. Further, these comments may themselves be on another annotation;
such annotations are referred to as meta-annotations. Not all annotations
receive equal weight. My tool considers highlights and underlining as well as
comments to infer the collective sentiment of annotators, which is classified
as positive, negative, or objective. The tool computes the collective sentiment
of annotations in two ways: it counts all the annotations present on the
document, and it also computes sentiment scores of all annotations, including
comments, to obtain the collective sentiment about the document and judge its
quality. I demonstrate the use of the tool on a research paper.
|
1111.1673
|
Algebras over a field and semantics for context based reasoning
|
cs.CL cs.LO
|
This paper introduces context algebras and demonstrates their application to
combining logical and vector-based representations of meaning. Other approaches
to this problem attempt to reproduce aspects of logical semantics within new
frameworks. The approach we present here is different: We show how logical
semantics can be embedded within a vector space framework, and use this to
combine distributional semantics, in which the meanings of words are
represented as vectors, with logical semantics, in which the meaning of a
sentence is represented as a logical form.
|
1111.1684
|
Simulation Techniques and Prosthetic Approach Towards Biologically
Efficient Artificial Sense Organs- An Overview
|
cs.RO cs.SY
|
An overview of the applications of control theory to prosthetic sense organs,
including the senses of vision, taste and odor, is presented in this paper.
Simulation has become the centre of research in the field of prosthesis, and
there have been various successful applications of prosthetic organs in
patients whose natural biological organs are dysfunctional. Simulation and
control modeling are indispensable for assessing system performance and for
generating an original approach to artificial organs. This overview focuses
mainly on control techniques; it is largely a theoretical overview of
artificial sense organs that attempt to mimic the efficacy of biologically
active sensory organs. Keywords: virtual reality, prosthetic vision, artificial
|
1111.1738
|
Quantization via Empirical Divergence Maximization
|
cs.IT math.IT
|
Empirical divergence maximization (EDM) refers to a recently proposed
strategy for estimating f-divergences and likelihood ratio functions. This
paper extends the idea to empirical vector quantization where one seeks to
empirically derive quantization rules that maximize the Kullback-Leibler
divergence between two statistical hypotheses. We analyze the estimator's error
convergence rate leveraging Tsybakov's margin condition and show that rates as
fast as 1/n are possible, where n equals the number of training samples. We
also show that the Flynn and Gray algorithm can be used to efficiently compute
EDM estimates and show that they can be efficiently and accurately represented
by recursive dyadic partitions. The EDM formulation has several advantages.
First, the formulation gives access to the tools and results of empirical
process theory that quantify the estimator's error convergence rate. Second,
the formulation provides a previously unknown derivation for the Flynn and Gray
algorithm. Third, the flexibility it affords allows one to avoid a small-cell
assumption common in other approaches. Finally, we illustrate the potential use
of the method through an example.
|
1111.1752
|
New Method for 3D Shape Retrieval
|
cs.CV
|
The recent technological progress in the acquisition, modeling and processing
of 3D data has led to the proliferation of a large number of 3D object
databases. Consequently, techniques for content-based 3D retrieval have become
necessary. In this paper, we introduce a new method for 3D object recognition
and retrieval using a set of binary images, CLI (Characteristic Level Images).
We propose a 3D indexing and search approach based on the similarity between
characteristic level images, using Hu moments for indexing. To measure the
similarity between 3D objects, we compute the Hausdorff distance between
descriptor vectors. The performance of this new approach is evaluated on a set
of 3D objects from the well-known NTU (National Taiwan University) database.
|
1111.1771
|
Information Security Synthesis in Online Universities
|
cs.CR cs.CY cs.SI
|
Information assurance is at the core of every initiative that an organization
executes. For online universities, a common and complex initiative is
maintaining user lifecycle and providing seamless access using one identity in
a large virtual infrastructure. To achieve information assurance the management
of user privileges affected by events in the user's identity lifecycle needs to
be the determining factor for access control. While the implementation of
identity and access management systems makes this initiative feasible, it is
the construction and maintenance of the infrastructure that makes it complex
and challenging. The objective of this paper is to describe the complexities
and propose a practical approach to building a foundation for a consistent user
experience and realizing security synthesis in online universities.
|
1111.1784
|
UPAL: Unbiased Pool Based Active Learning
|
stat.ML cs.AI cs.LG
|
In this paper we address the problem of pool based active learning, and
provide an algorithm, called UPAL, that works by minimizing the unbiased
estimator of the risk of a hypothesis in a given hypothesis space. For the
space of linear classifiers and the squared loss we show that UPAL is
equivalent to an exponentially weighted average forecaster. Exploiting some
recent results regarding the spectra of random matrices allows us to establish
consistency of UPAL when the true hypothesis is a linear hypothesis. Empirical
comparison with an active learner implementation in Vowpal Wabbit, and a
previously proposed pool based active learner implementation show good
empirical performance and better scalability.
|
1111.1788
|
Robust PCA as Bilinear Decomposition with Outlier-Sparsity
Regularization
|
stat.ML cs.IT math.IT
|
Principal component analysis (PCA) is widely used for dimensionality
reduction, with well-documented merits in various applications involving
high-dimensional data, including computer vision, preference measurement, and
bioinformatics. In this context, the fresh look advocated here permeates
benefits from variable selection and compressive sampling, to robustify PCA
against outliers. A least-trimmed squares estimator of a low-rank bilinear
factor analysis model is shown closely related to that obtained from an
$\ell_0$-(pseudo)norm-regularized criterion encouraging sparsity in a matrix
explicitly modeling the outliers. This connection suggests robust PCA schemes
based on convex relaxation, which lead naturally to a family of robust
estimators encompassing Huber's optimal M-class as a special case. Outliers are
identified by tuning a regularization parameter, which amounts to controlling
sparsity of the outlier matrix along the whole robustification path of (group)
least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its
neat ties to robust statistics, the developed outlier-aware PCA framework is
versatile to accommodate novel and scalable algorithms to: i) track the
low-rank signal subspace robustly, as new data are acquired in real time; and
ii) determine principal components robustly in (possibly) infinite-dimensional
feature spaces. Synthetic and real data tests corroborate the effectiveness of
the proposed robust PCA schemes, when used to identify aberrant responses in
personality assessment surveys, as well as unveil communities in social
networks, and intruders from video surveillance data.
|
1111.1797
|
Analysis of Thompson Sampling for the multi-armed bandit problem
|
cs.LG cs.DS
|
The multi-armed bandit problem is a popular model for studying
exploration/exploitation trade-off in sequential decision problems. Many
algorithms are now available for this well-studied problem. One of the earliest
algorithms, given by W. R. Thompson, dates back to 1933. This algorithm,
referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic
idea is to choose an arm to play according to its probability of being the best
arm. The Thompson Sampling algorithm has been shown experimentally to be close
to optimal. In addition, it is efficient to implement and exhibits several
desirable properties, such as small regret for delayed feedback. However,
theoretical understanding of this algorithm was quite limited. In this paper,
for the first time, we show that the Thompson Sampling algorithm achieves
logarithmic expected regret for the multi-armed bandit problem. More precisely,
for the two-armed bandit problem, the expected regret in time $T$ is
$O(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3})$. And, for the $N$-armed bandit
problem, the expected regret in time $T$ is $O([(\sum_{i=2}^N
\frac{1}{\Delta_i^2})^2] \ln T)$. Our bounds are optimal except for the
dependence on $\Delta_i$ and the constant factors in big-Oh.
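The basic idea described above, playing each arm with the probability that it is the best by sampling from Beta posteriors, fits in a few lines for Bernoulli bandits (the reward probabilities, horizon and seed below are illustrative):

```python
import random

def thompson_sampling(success_probs, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1,1) priors:
    sample each arm's posterior, play the argmax, update that posterior."""
    rng = random.Random(seed)
    n = len(success_probs)
    wins = [0] * n
    losses = [0] * n
    total_reward = 0
    for _ in range(horizon):
        # one posterior draw per arm; the argmax is played
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        reward = 1 if rng.random() < success_probs[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    return total_reward, wins, losses
```

Over time the posterior of the best arm concentrates, so suboptimal arms are played only logarithmically often, which is what the regret bounds in the abstract quantify.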
|
1111.1827
|
One-Hop Throughput of Wireless Networks with Random Connections
|
cs.IT math.IT
|
We consider one-hop communication in wireless networks with random
connections. In the random connection model, the channel powers between
different nodes are drawn from a common distribution in an i.i.d. manner. A
scheme achieving a throughput scaling of order $n^{1/3-\delta}$, for any
$\delta>0$, is proposed, where $n$ is the number of nodes. This achievable
throughput, along with the order-$n^{1/3}$ upper bound derived by Cui et al.,
characterizes the throughput capacity of one-hop schemes for the class of
connection models with finite mean and variance.
|
1111.1896
|
Dynamical Classes of Collective Attention in Twitter
|
cs.SI cs.CY cs.HC physics.soc-ph
|
Micro-blogging systems such as Twitter expose digital traces of social
discourse with an unprecedented degree of resolution of individual behaviors.
They offer an opportunity to investigate how a large-scale social system
responds to exogenous or endogenous stimuli, and to disentangle the temporal,
spatial and topical aspects of users' activity. Here we focus on spikes of
collective attention in Twitter, and specifically on peaks in the popularity of
hashtags. Users employ hashtags as a form of social annotation, to define a
shared context for a specific event, topic, or meme. We analyze a large-scale
record of Twitter activity and find that the evolution of hashtag popularity
over time defines discrete classes of hashtags. We link these dynamical classes
to the events the hashtags represent and use text mining techniques to provide
a semantic characterization of the hashtag classes. Moreover, we track the
propagation of hashtags in the Twitter social network and find that epidemic
spreading plays a minor role in hashtag popularity, which is mostly driven by
exogenous factors.
|
1111.1941
|
Semantic-Driven e-Government: Application of Uschold and King Ontology
Building Methodology for Semantic Ontology Models Development
|
cs.AI
|
Electronic government (e-government) has been one of the most active areas of
ontology development during the past six years. In e-government, ontologies are
being used to describe and specify e-government services (e-services) because
they enable easy composition, matching, mapping and merging of various
e-government services. More importantly, they also facilitate the semantic
integration and interoperability of e-government services. However, it is still
unclear in the current literature how an existing ontology building methodology
can be applied to develop semantic ontology models in a government service
domain. In this paper the Uschold and King ontology building methodology is
applied to develop semantic ontology models in a government service domain.
Firstly, the Uschold and King methodology is presented, discussed and applied
to build a government domain ontology. Secondly, the domain ontology is
evaluated for semantic consistency using its semi-formal representation in
Description Logic. Thirdly, an alignment of the domain ontology with the
Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) upper
level ontology is drawn to allow its wider visibility and facilitate its
integration with existing metadata standards. Finally, the domain ontology is
formally written in Web Ontology Language (OWL) to enable its automatic
processing by computers. The study aims to provide direction for the
application of existing ontology building methodologies in the Semantic Web
development processes of e-government domain specific ontology models; which
would enable their repeatability in other e-government projects and strengthen
the adoption of semantic technologies in e-government.
|
1111.1947
|
Discriminative Local Sparse Representations for Robust Face Recognition
|
cs.CV
|
A key recent advance in face recognition models a test face image as a sparse
linear combination of a set of training face images. The resulting sparse
representations have been shown to possess robustness against a variety of
distortions like random pixel corruption, occlusion and disguise. This approach
however makes the restrictive (in many scenarios) assumption that test faces
must be perfectly aligned (or registered) to the training data prior to
classification. In this paper, we propose a simple yet robust local block-based
sparsity model, using adaptively-constructed dictionaries from local features
in the training data, to overcome this misalignment problem. Our approach is
inspired by human perception: we analyze a series of local discriminative
features and combine them to arrive at the final classification decision. We
propose a probabilistic graphical model framework to explicitly mine the
conditional dependencies between these distinct sparse local features. In
particular, we learn discriminative graphs on sparse representations obtained
from distinct local slices of a face. Conditional correlations between these
sparse features are first discovered (in the training phase), and subsequently
exploited to bring about significant improvements in recognition rates.
Experimental results obtained on benchmark face databases demonstrate the
effectiveness of the proposed algorithms in the presence of multiple
registration errors (such as translation, rotation, and scaling) as well as
under variations of pose and illumination.
|
1111.1958
|
Widescope - A social platform for serious conversations on the Web
|
cs.SI cs.CY
|
There are several web platforms that people use to interact and exchange
ideas, such as social networks like Facebook, Twitter, and Google+; Q&A sites
like Quora and Yahoo! Answers; and myriad independent fora. However, there is a
scarcity of platforms that facilitate discussion of complex subjects where
people with divergent views can easily rationalize their points of view using a
shared knowledge base, and leverage it towards shared objectives, e.g. to
arrive at a mutually acceptable compromise.
In this paper, as a first step, we present Widescope, a novel collaborative
web platform for catalyzing shared understanding of the US Federal and State
budget debates in order to help users reach data-driven consensus about the
complex issues involved. It aggregates disparate sources of financial data from
different budgets (i.e. from past, present, and proposed) and presents a
unified interface using interactive visualizations. It leverages distributed
collaboration to encourage exploration of ideas and debate. Users can propose
budgets ab-initio, support existing proposals, compare between different
budgets, and collaborate with others in real time.
We hypothesize that such a platform can be useful in bringing people's
thoughts and opinions closer. Toward this, we present preliminary evidence from
a simple pilot experiment, using triadic voting (which we also formally analyze
to show that it is better than hot-or-not voting), that 5 out of 6 groups of users
with divergent views (conservatives vs liberals) come to a consensus while
aiming to halve the deficit using Widescope. We believe that tools like
Widescope could have a positive impact on other complex, data-driven social
issues.
|
1111.1977
|
On Refined Versions of the Azuma-Hoeffding Inequality with Applications
in Information Theory
|
cs.IT math.IT
|
This is a survey paper with some original results of the author on refined
versions of the Azuma-Hoeffding inequality with some examples that are related
to information theory. This work has evolved to the joint paper with Maxim
Raginsky in arXiv:1212.4663v3.
|
1111.1982
|
On the Concentration of the Crest Factor for OFDM Signals
|
cs.IT math.IT
|
This paper applies several concentration inequalities to prove concentration
results for the crest factor of OFDM signals. The considered approaches are, to
the best of our knowledge, new in the context of establishing concentration for
OFDM signals.
|
1111.1992
|
On Concentration and Revisited Large Deviations Analysis of Binary
Hypothesis Testing
|
cs.IT math.IT
|
This paper first introduces a refined version of the Azuma-Hoeffding
inequality for discrete-parameter martingales with uniformly bounded jumps. The
refined inequality is used to revisit the large deviations analysis of binary
hypothesis testing.
|
1111.1995
|
Moderate Deviations Analysis of Binary Hypothesis Testing
|
cs.IT math.IT
|
This paper is focused on the moderate-deviations analysis of binary
hypothesis testing. The analysis relies on a concentration inequality for
discrete-parameter martingales with bounded jumps, where this inequality forms
a refinement to the Azuma-Hoeffding inequality. Relations of the analysis to
the moderate deviations principle for i.i.d. random variables and to the
relative entropy are considered.
|
1111.2001
|
Projection-Based and Look Ahead Strategies for Atom Selection
|
cs.SY
|
In this paper, we improve iterative greedy search algorithms in which atoms
are selected serially over iterations, i.e., one-by-one over iterations. For
serial atom selection, we devise two new schemes to select an atom from a set
of potential atoms in each iteration. The two new schemes lead to two new
algorithms. For both the algorithms, in each iteration, the set of potential
atoms is found using a standard matched filter. In the case of the first
scheme, we
propose an orthogonal projection strategy that selects an atom from the set of
potential atoms. Then, for the second scheme, we propose a look ahead strategy
such that the selection of an atom in the current iteration has an effect on
the future iterations. The look-ahead strategy requires more computational
resources. To achieve a trade-off between performance and
complexity, we use the two new schemes in cascade and develop a third new
algorithm. Through experimental evaluations, we compare the proposed algorithms
with existing greedy search and convex relaxation algorithms.
|
1111.2018
|
Intrinsically Dynamic Network Communities
|
cs.SI physics.soc-ph
|
Community finding algorithms for networks have recently been extended to
dynamic data. Most of these recent methods aim at exhibiting community
partitions from successive graph snapshots and thereafter connecting or
smoothing these partitions using clever time-dependent features and sampling
techniques. These approaches are nonetheless achieving longitudinal rather than
dynamic community detection. We assume that communities are fundamentally
defined by the repetition of interactions among a set of nodes over time.
According to this definition, analyzing the data by considering successive
snapshots induces a significant loss of information: we suggest that it blurs
essentially dynamic phenomena - such as communities based on repeated
inter-temporal interactions, nodes switching from one community to another across
time, or the possibility that a community survives while its members are being
integrally replaced over a longer time period. We propose a formalism which
aims at tackling this issue in the context of time-directed datasets (such as
citation networks), and present several illustrations on both empirical and
synthetic dynamic networks. We eventually introduce intrinsically dynamic
metrics to qualify temporal community structure and emphasize their possible
role as an estimator of the quality of the community detection - taking into
account the fact that various empirical contexts may call for distinct
`community' definitions and detection criteria.
|
1111.2085
|
Ag-dependent (in silico) approach implies a deterministic kinetics for
homeostatic memory cell turnover
|
q-bio.CB cs.NE
|
Verhulst-like mathematical modeling has been used to investigate several
complex biological issues, such as immune memory equilibrium and cell-mediated
immunity in mammals. The regulation mechanisms of both these processes are
still not sufficiently understood. In a recent paper, Choo et al. [J. Immunol.,
v. 185, pp. 3436-44, 2010], used an Ag-independent approach to quantitatively
analyze memory cell turnover from some empirical data, and concluded that
immune homeostasis behaves stochastically, rather than deterministically. In
the present paper, we use an in silico Ag-dependent approach to simulate
the process of antigenic mutation and study its implications for memory
dynamics. Our results suggest deterministic kinetics for homeostatic
equilibrium, which contradicts the findings of Choo et al. Accordingly, our
calculations are an indication that a more extensive empirical protocol for
studying the homeostatic turnover should be considered.
|
1111.2092
|
Pushing Your Point of View: Behavioral Measures of Manipulation in
Wikipedia
|
cs.SI cs.LG
|
As a major source for information on virtually any topic, Wikipedia serves an
important role in public dissemination and consumption of knowledge. As a
result, it presents tremendous potential for people to promulgate their own
points of view; such efforts may be more subtle than typical vandalism. In this
paper, we introduce new behavioral metrics to quantify the level of controversy
associated with a particular user: a Controversy Score (C-Score) based on the
amount of attention the user focuses on controversial pages, and a Clustered
Controversy Score (CC-Score) that also takes into account topical clustering.
We show that both these measures are useful for identifying people who try to
"push" their points of view, by showing that they are good predictors of which
editors get blocked. The metrics can be used to triage potential POV pushers.
We apply this idea to a dataset of users who requested promotion to
administrator status and easily identify some editors who significantly changed
their behavior upon becoming administrators. At the same time, such behavior is
not rampant. Those who are promoted to administrator status tend to have more
stable behavior than comparable groups of prolific editors. This suggests that
the Adminship process works well, and that the Wikipedia community is not
overwhelmed by users who become administrators to promote their own points of
view.
|
1111.2098
|
The Half-Duplex AWGN Single-Relay Channel: Full Decoding or Partial
Decoding?
|
cs.IT math.IT
|
This paper compares the partial-decode-forward and the
complete-decode-forward coding strategies for the half-duplex Gaussian
single-relay channel. We analytically show that the rate achievable by
partial-decode-forward outperforms that of the more straightforward
complete-decode-forward by at most 12.5%. Furthermore, in the following
asymptotic cases, the gap between the partial-decode-forward and the
complete-decode-forward rates diminishes: (i) when the relay is close to the
source, (ii) when the relay is close to the destination, and (iii) when the SNR
is low. In addition, when the SNR increases, this gap, when normalized to the
complete-decode-forward rate, also diminishes. Consequently, significant
performance improvements are not achieved by optimizing the fraction of data
the relay should decode and forward, over simply decoding the entire source
message.
|
1111.2102
|
The Capacity Region of the Restricted Two-Way Relay Channel with Any
Deterministic Uplink
|
cs.IT math.IT
|
This paper considers the two-way relay channel (TWRC) where two users
communicate via a relay. For the restricted TWRC where the uplink from the
users to the relay is any deterministic function and the downlink from the
relay to the users is any arbitrary channel, the capacity region is obtained.
The TWRC considered is restricted in the sense that each user can only transmit
a function of its message.
|
1111.2108
|
A criterion of simultaneously symmetrization and spectral finiteness for
a finite set of real 2-by-2 matrices
|
cs.SY cs.NA math.OC
|
In this paper, we consider simultaneous symmetrization and spectral
finiteness for a finite set of real 2-by-2 matrices.
|
1111.2111
|
Generic Multiplicative Methods for Implementing Machine Learning
Algorithms on MapReduce
|
cs.DS cs.LG
|
In this paper we introduce a generic model for multiplicative algorithms
which is suitable for the MapReduce parallel programming paradigm. We implement
three typical machine learning algorithms to demonstrate how similarity
comparison, gradient descent, power method and other classic learning
techniques fit this model well. Two versions of large-scale matrix
multiplication are discussed in this paper, and different methods are developed
for both cases with regard to their unique computational characteristics and
problem settings. In contrast to earlier research, we focus on fundamental
linear algebra techniques that establish a generic approach for a range of
algorithms, rather than specific ways of scaling up algorithms one at a time.
Experiments show promising results when evaluated on both speedup and accuracy.
Compared with a standard implementation with computational complexity $O(m^3)$
in the worst case, the large-scale matrix multiplication experiments prove our
design is considerably more efficient and maintains a good speedup as the
number of cores increases. Algorithm-specific experiments also produce
encouraging results on runtime performance.
|
1111.2125
|
Exploring Maps with Greedy Navigators
|
physics.soc-ph cs.SI
|
During the last decade of network research focusing on structural and
dynamical properties of networks, the role of network users has been more or
less underestimated from the bird's-eye view of global perspective. In this era
of global positioning system equipped smartphones, however, a user's ability to
access local geometric information and find efficient pathways on networks
plays a crucial role, rather than the globally optimal pathways. We present a
simple greedy spatial navigation strategy as a probe to explore spatial
networks. These greedy navigators use directional information in every move
they take, without being trapped in a dead end based on their memory about
previous routes. We suggest that centrality measures have to be modified
to incorporate the navigators' behavior, and present the intriguing effect of
navigators' greediness where removing some edges may actually enhance the
routing efficiency, which is reminiscent of Braess's paradox. In addition,
using samples of road structures in large cities around the world, it is shown
that the navigability measure we define reflects unique structural properties,
which are not easy to predict from other topological characteristics. In this
respect, we believe that our routing scheme significantly moves the routing
problem on networks one step closer to reality, incorporating the inevitable
incompleteness of navigators' information.
|
1111.2211
|
High Performance Controllers for Speed and Position Induction Motor
Drive using New Reaching Law
|
cs.RO
|
This paper presents a new approach to robust indirect rotor field oriented
(IRFOC) induction motor (IM) control. The introduction of a new exponential
reaching law (ERL) based sliding mode control (SMC) significantly improves
performance compared to conventional SMC, which is well known to be susceptible
to the annoying chattering phenomenon. Chattering is thus eliminated while
simplicity and high-performance speed and position tracking are maintained.
Simulation results are given to discuss the performance of the proposed
control method.
|
1111.2217
|
Moderate-Deviations of Lossy Source Coding for Discrete and Gaussian
Sources
|
cs.IT math.IT
|
We study the moderate-deviations (MD) setting for lossy source coding of
stationary memoryless sources. More specifically, we derive fundamental
compression limits of source codes whose rates are $R(D) \pm \epsilon_n$, where
$R(D)$ is the rate-distortion function and $\epsilon_n$ is a sequence that
dominates $\sqrt{1/n}$. This MD setting is complementary to the
large-deviations and central limit settings and was studied by Altug and Wagner
for the channel coding setting. We show, for finite alphabet and Gaussian
sources, that as in the central limit-type results, the so-called dispersion
for lossy source coding plays a fundamental role in the MD setting for the
lossy source coding problem.
|
1111.2221
|
Scaling Up Estimation of Distribution Algorithms For Continuous
Optimization
|
cs.NE cs.AI cs.LG
|
Since Estimation of Distribution Algorithms (EDA) were proposed, many
attempts have been made to improve EDAs' performance in the context of global
optimization. So far, the studies or applications of multivariate probabilistic
model based continuous EDAs are still restricted to rather low dimensional
problems (smaller than 100D). Traditional EDAs have difficulties in solving
higher dimensional problems because of the curse of dimensionality and their
rapidly increasing computational cost. However, scaling up continuous EDAs for
higher dimensional optimization is still necessary, which is supported by the
distinctive feature of EDAs: Because a probabilistic model is explicitly
estimated, from the learnt model one can discover useful properties or features
of the problem. Besides obtaining a good solution, understanding of the problem
structure can be of great benefit, especially for black box optimization. We
propose a novel EDA framework with Model Complexity Control (EDA-MCC) to scale
up EDAs. By using Weakly dependent variable Identification (WI) and Subspace
Modeling (SM), EDA-MCC shows significantly better performance than traditional
EDAs on high dimensional problems. Moreover, the computational cost and the
requirement of large population sizes can be reduced in EDA-MCC. In addition to
being able to find a good solution, EDA-MCC can also produce a useful problem
structure characterization. EDA-MCC is the first successful instance of
multivariate model based EDAs that can be effectively applied to a general
class of up to 500D problems. It also outperforms some newly developed
algorithms
designed specifically for large scale optimization. In order to understand the
strength and weakness of EDA-MCC, we have carried out extensive computational
studies of EDA-MCC. Our results reveal on what kinds of benchmark functions
EDA-MCC is likely to outperform other algorithms.
|
1111.2249
|
SATzilla: Portfolio-based Algorithm Selection for SAT
|
cs.AI
|
It has been widely observed that there is no single "dominant" SAT solver;
instead, different solvers perform best on different instances. Rather than
following the traditional approach of choosing the best solver for a given
class of instances, we advocate making this decision online on a per-instance
basis. Building on previous work, we describe SATzilla, an automated approach
for constructing per-instance algorithm portfolios for SAT that use so-called
empirical hardness models to choose among their constituent solvers. This
approach takes as input a distribution of problem instances and a set of
component solvers, and constructs a portfolio optimizing a given objective
function (such as mean runtime, percent of instances solved, or score in a
competition). The excellent performance of SATzilla was independently verified
in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one
silver and one bronze medal. In this article, we go well beyond SATzilla07 by
making the portfolio construction scalable and completely automated, and
improving it by integrating local search solvers as candidate solvers, by
predicting performance score instead of runtime, and by using hierarchical
hardness models that take into account different types of SAT instances. We
demonstrate the effectiveness of these new techniques in extensive experimental
results on data sets including instances from the most recent SAT competition.
|
1111.2258
|
Design and Implementation of Prosthetic Arm using Gear Motor Control
Technique with Appropriate Testing
|
cs.RO cs.SY
|
The science of prosthetic control begins with the procedure of replicating a
part of the human body. This paper highlights the hardware design of a
prosthetic arm and the implementation of its gear motor control. The movement
of the prosthetic arm is demonstrated through processor programming and
successful testing of the designed prosthetic model. In the architectural
design presented here, heavy metal has been replaced by lighter material, and
the traditional EMG (electromyographic) signal has been replaced by muscle
strain.
|
1111.2259
|
A Survey on Open Problems for Mobile Robots
|
cs.RO cs.MA
|
Gathering mobile robots is a widely studied problem in robotic research. This
survey first introduces the related work, summarizing models and results. Then,
the focus shifts to the open problem of gathering fat robots. In this context,
"fat" means that the robot is not represented by a point in a bidimensional
space, but it has an extent. Moreover, it can be opaque in the sense that other
robots cannot "see through" it. All these issues lead to a redefinition of the
original problem and an extension of the CORDA model. For at most 4 robots an
algorithm is provided in the literature, but is gathering always possible for
n>4 fat robots? Another open problem is considered: Boundary Patrolling by
mobile robots. A set of mobile robots with constraints only on speed and
visibility is working in a polygonal environment having a boundary and possibly
obstacles. The robots have to perform a perpetual movement (possibly within the
environment) so that the maximum timespan in which a point of the boundary is
not being watched by any robot is minimized.
|
1111.2262
|
Improved Bound for the Nystrom's Method and its Application to Kernel
Classification
|
cs.LG cs.NA
|
We develop two approaches for analyzing the approximation error bound for the
Nystr\"{o}m method, one based on the concentration inequality of integral
operator, and one based on the compressive sensing theory. We show that the
approximation error, measured in the spectral norm, can be improved from
$O(N/\sqrt{m})$ to $O(N/m^{1 - \rho})$ in the case of large eigengap, where $N$
is the total number of data points, $m$ is the number of sampled data points,
and $\rho \in (0, 1/2)$ is a positive constant that characterizes the eigengap.
When the eigenvalues of the kernel matrix follow a $p$-power law, our analysis
based on compressive sensing theory further improves the bound to $O(N/m^{p -
1})$ under an incoherence assumption, which explains why the Nystr\"{o}m method
works well for kernel matrix with skewed eigenvalues. We present a kernel
classification approach based on the Nystr\"{o}m method and derive its
generalization performance using the improved bound. We show that when the
eigenvalues of kernel matrix follow a $p$-power law, we can reduce the number
of support vectors to $N^{2p/(p^2 - 1)}$, a number less than $N$ when $p >
1+\sqrt{2}$, without seriously sacrificing its generalization performance.
|
1111.2285
|
Large-scale games in large-scale systems
|
cs.SY cs.GT math-ph math.DS math.MP math.OC
|
Many real-world problems modeled by stochastic games have huge state and/or
action spaces, leading to the well-known curse of dimensionality. The
complexity of the analysis of large-scale systems is dramatically reduced by
exploiting mean field limit and dynamical system viewpoints. Under regularity
assumptions and specific time-scaling techniques, the evolution of the mean
field limit can be expressed in terms of deterministic or stochastic equation
or inclusion (difference or differential). In this paper, we overview recent
advances of large-scale games in large-scale systems. We focus in particular on
population games, stochastic population games and mean field stochastic games.
Considering long-term payoffs, we characterize the mean field systems using
Bellman and Kolmogorov forward equations.
|
1111.2391
|
A Novel Approach to Texture classification using statistical feature
|
cs.CV
|
Texture is an important spatial feature which plays a vital role in content
based image retrieval. The enormous growth of the internet and the wide use of
digital data have increased the need for both efficient image database creation
and retrieval procedure. This paper describes a new approach for texture
classification by combining statistical texture features of Local Binary
Pattern and Texture Spectrum. Since the most significant information of a texture
often appears in the high frequency channels, the features are extracted by the
computation of LBP and Texture Spectrum and Legendre Moments. Euclidean
distance is used for similarity measurement. The experimental result shows that
97.77% classification accuracy is obtained by the proposed method.
|
1111.2399
|
Genetic Algorithm (GA) in Feature Selection for CRF Based Manipuri
Multiword Expression (MWE) Identification
|
cs.CL cs.NE
|
This paper deals with the identification of Multiword Expressions (MWEs) in
Manipuri, a highly agglutinative Indian Language. Manipuri is listed in the
Eighth Schedule of the Indian Constitution. MWEs play an important role in
applications of Natural Language Processing (NLP) like Machine Translation, Part
of Speech tagging, Information Retrieval, Question Answering etc. Feature
selection is an important factor in the recognition of Manipuri MWEs using
Conditional Random Field (CRF). The drawback of manually selecting and
choosing appropriate features for running the CRF motivates us to use a
Genetic Algorithm (GA). Using GA we are able to find the optimal features to
run the CRF. We have tried with fifty generations in feature selection along
with three-fold cross validation as the fitness function. This model achieved
a Recall (R) of 64.08%, a Precision (P) of 86.84% and an F-measure (F) of 73.74%,
showing an improvement over the CRF based Manipuri MWE identification without
GA application.
|
1111.2430
|
Achievable Rates for a Two-Relay Network with Relays-Transmitter
Feedbacks
|
cs.IT math.IT
|
We consider a relay network with two relays and two feedback links from the
relays to the sender. To obtain the achievability results, we use the
compress-and-forward and the decode-and-forward strategies to superimpose
facilitation and cooperation, analogous to the scheme proposed by Cover and El
Gamal for the relay channel. In addition to random binning, we use
deterministic binning to
perform restricted decoding. We show how to use the feedback links for
cooperation between the sender and the relays to transmit the information which
is compressed in the sender and the relays.
|
1111.2451
|
Unitary Precoding and Basis Dependency of MMSE Performance for Gaussian
Erasure Channels
|
cs.IT math.IT
|
We consider the transmission of a Gaussian vector source over a
multi-dimensional Gaussian channel where a random or a fixed subset of the
channel outputs are erased. Within the setup where the only encoding operation
allowed is a linear unitary transformation on the source, we investigate the
MMSE performance, both in average, and also in terms of guarantees that hold
with high probability as a function of the system parameters. Under the
performance criterion of average MMSE, necessary conditions that should be
satisfied by the optimal unitary encoders are established and explicit
solutions for a class of settings are presented. For random sampling of signals
that have a low number of degrees of freedom, we present MMSE bounds that hold
with high probability. Our results illustrate how the spread of the eigenvalue
distribution and the unitary transformation contribute to these performance
guarantees. The performance of the discrete Fourier transform (DFT) is also
investigated. As a benchmark, we investigate the equidistant sampling of
circularly wide-sense stationary (c.w.s.s.) signals, and present the explicit
error expression that quantifies the effects of the sampling rate and the
eigenvalue distribution of the covariance matrix of the signal.
These findings may be useful in understanding the geometric dependence of
signal uncertainty in a stochastic process. In particular, unlike information
theoretic measures such as entropy, we highlight the basis dependence of
uncertainty in a signal with another perspective. The unitary encoding space
restriction exhibits the most and least favorable signal bases for estimation.
|
1111.2456
|
Repeated Games With Intervention: Theory and Applications in
Communications
|
cs.IT cs.GT math.IT
|
In communication systems where users share common resources, users' selfish
behavior usually results in suboptimal resource utilization. There have been
extensive works that model communication systems with selfish users as one-shot
games and propose incentive schemes to achieve Pareto optimal action profiles
as non-cooperative equilibria. However, in many communication systems, due to
strong negative externalities among users, the sets of feasible payoffs in
one-shot games are nonconvex. Thus, it is possible to expand the set of
feasible payoffs by having users choose convex combinations of different
payoffs. In this paper, we propose a repeated game model generalized by
intervention. First, we use repeated games to convexify the set of feasible
payoffs in one-shot games. Second, we combine conventional repeated games with
intervention, originally proposed for one-shot games, to achieve a larger set
of equilibrium payoffs and loosen requirements for users' patience to achieve
it. We study the problem of maximizing a welfare function defined on users'
equilibrium payoffs, subject to minimum payoff guarantees. Given the optimal
equilibrium payoff, we derive the minimum intervention capability required and
design corresponding equilibrium strategies. The proposed generalized repeated
game model applies to various communication systems, such as power control and
flow control.
|
1111.2514
|
A more appropriate Protein Classification using Data Mining
|
cs.CE
|
Research in bioinformatics is a complex phenomenon as it overlaps two
knowledge domains, namely, biological and computer sciences. This paper has
tried to introduce an efficient data mining approach for classifying proteins
into some useful groups by representing them in hierarchy tree structure. There
are several techniques used to classify proteins, but most of them have
drawbacks in their grouping. Among them, the most efficient grouping technique
is that used by PSIMAP (Protein Structural Interactome Map). Although the
PSIMAP technique successfully incorporates most proteins, it fails to classify
proteins with the scale-free property. Our technique overcomes this drawback
and successfully maps all proteins into different groups, including the
scale-free proteins that PSIMAP fails to group. Our approach
selects the six major attributes of protein: a) Structure comparison b)
Sequence Comparison c) Connectivity d) Cluster Index e) Interactivity f)
Taxonomic to group the protein from the databank by generating a hierarchal
tree structure. The proposed approach calculates the degree (probability) of
similarity of each protein newly entered into the system against the existing
proteins in the system by applying probability theory to each of the six
properties of proteins.
|
1111.2530
|
A semantically enriched web usage based recommendation model
|
cs.DB
|
With the rapid growth of internet technologies, Web has become a huge
repository of information and keeps growing exponentially under no editorial
control. However the human capability to read, access and understand Web
content remains constant. This motivated researchers to provide Web
personalized online services such as Web recommendations to alleviate the
information overload problem and provide tailored Web experiences to the Web
users. Recent studies show that Web usage mining has emerged as a popular
approach to Web personalization. However, conventional Web-usage-based
recommender systems are limited in their ability to use the domain knowledge of
the Web application; they focus only on Web usage data, and as a consequence
the quality of the discovered patterns is low. In this paper, we propose a novel
framework integrating semantic information in the Web usage mining process.
Sequential Pattern Mining technique is applied over the semantic space to
discover the frequent sequential patterns. The frequent navigational patterns
are extracted in the form of Ontology instances instead of Web page views and
the resultant semantic patterns are used for generating Web page
recommendations to the user. Experimental results are promising and show that
incorporating semantic information into the Web usage mining process yields
more interesting patterns, which in turn make the recommendation system more
functional, smarter, and more comprehensive.
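A stripped-down version of the pattern-mining step can be sketched in a few lines. This is my simplification, not the authors' algorithm: sessions are sequences of Ontology instances (concept labels) rather than raw page views, frequent ordered concept pairs stand in for full sequential patterns, and the session data is invented.

```python
# Count ordered (earlier, later) concept pairs across sessions of ontology
# instances, keep the frequent ones, and recommend the most frequent
# successor of the user's last concept.
from collections import Counter

sessions = [
    ["Laptop", "Accessory", "Checkout"],
    ["Laptop", "Accessory", "Support"],
    ["Phone", "Accessory", "Checkout"],
]

def sequential_pairs(session):
    # ordered concept pairs within one session, preserving order
    return [(a, b) for i, a in enumerate(session) for b in session[i + 1:]]

counts = Counter(p for s in sessions for p in sequential_pairs(s))
min_support = 2
frequent = {p: c for p, c in counts.items() if c >= min_support}

def recommend(last_concept):
    followers = [(c, b) for (a, b), c in frequent.items() if a == last_concept]
    return max(followers)[1] if followers else None

print(recommend("Accessory"))
```

A full sequential pattern miner (e.g. PrefixSpan-style) would replace the pair counting, but the semantic twist is the same: the alphabet is ontology concepts, not URLs.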
|
1111.2581
|
Hybrid Approximate Message Passing
|
cs.IT math.IT
|
Gaussian and quadratic approximations of message passing algorithms on graphs
have attracted considerable recent attention due to their computational
simplicity, analytic tractability, and wide applicability in optimization and
statistical inference problems. This paper presents a systematic framework for
incorporating such approximate message passing (AMP) methods in general
graphical models. The key concept is a partition of dependencies of a general
graphical model into strong and weak edges, with the weak edges representing
interactions through aggregates of small, linearizable couplings of variables.
AMP approximations based on the Central Limit Theorem can be readily applied to
aggregates of many weak edges and integrated with standard message passing
updates on the strong edges. The resulting algorithm, which we call hybrid
generalized approximate message passing (HyGAMP), can yield significantly
simpler implementations of sum-product and max-sum loopy belief propagation. By
varying the partition of strong and weak edges, a performance--complexity
trade-off can be achieved. Group sparsity and multinomial logistic regression
problems are studied as examples of the proposed methodology.
|
1111.2616
|
Ensuring convergence in total-variation-based reconstruction for
accurate microcalcification imaging in breast X-ray CT
|
physics.med-ph cs.CE math.OC
|
Breast X-ray CT imaging is being considered in screening as an extension to
mammography. As a large fraction of the population will be exposed to
radiation, low-dose imaging is essential. Iterative image reconstruction based
on solving an optimization problem, such as Total-Variation minimization, shows
potential for reconstruction from sparse-view data. For iterative methods it is
important to ensure convergence to an accurate solution, since important image
features, such as presence of microcalcifications indicating breast cancer, may
not be visible in a non-converged reconstruction, and this can have clinical
significance. To prevent excessively long computational times, which is a
practical concern for the large image arrays in CT, it is desirable to keep the
number of iterations low, while still ensuring a sufficiently accurate
reconstruction for the specific imaging task. This motivates the study of
accurate convergence criteria for iterative image reconstruction. In simulation
studies with a realistic breast phantom with microcalcifications we compare
different convergence criteria for reliable reconstruction. Our results show
that it can be challenging to ensure a sufficiently accurate microcalcification
reconstruction, when using standard convergence criteria. In particular, the
gray level of the small microcalcifications may not have converged long after
the background tissue is reconstructed uniformly. We propose the use of the
individual objective function gradient components to better monitor possible
regions of non-converged variables. For microcalcifications we find empirically
a large correlation between nonzero gradient components and non-converged
variables, which occur precisely within the microcalcifications. This supports
our claim that gradient components can be used to ensure convergence to a
sufficiently accurate reconstruction.
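The gradient-component monitor can be illustrated on a toy problem. Here a tiny least-squares objective stands in for the TV-regularised CT problem, and the ill-scaled second variable (my construction) plays the role of the slowly converging microcalcification pixels: long after the well-scaled variable has converged, its gradient component stays visibly nonzero.

```python
# Toy illustration of monitoring per-variable objective gradient components:
# variables with large components are flagged as non-converged.

def grad(A, x, b):
    # gradient of 0.5*||Ax - b||^2, component by component
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(b))]
    return [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(len(x))]

A = [[2.0, 0.0], [0.0, 0.1]]   # second variable is badly scaled (slow)
b = [2.0, 0.1]
x = [0.0, 0.0]
for _ in range(50):            # plain gradient descent
    g = grad(A, x, b)
    x = [xi - 0.2 * gi for xi, gi in zip(x, g)]

g = grad(A, x, b)
flagged = [j for j, gj in enumerate(g) if abs(gj) > 1e-4]
print(flagged)
```

The flagged index corresponds to the slow variable, mirroring how nonzero gradient components localise the non-converged microcalcification regions.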
|
1111.2618
|
Full-Duplex MIMO Relaying: Achievable Rates under Limited Dynamic Range
|
cs.IT math.IT
|
In this paper we consider the problem of full-duplex multiple-input
multiple-output (MIMO) relaying between multi-antenna source and destination
nodes. The principal difficulty in implementing such a system is that, due to
the limited attenuation between the relay's transmit and receive antenna
arrays, the relay's outgoing signal may overwhelm its limited-dynamic-range
input circuitry, making it difficult---if not impossible---to recover the
desired incoming signal. While explicitly modeling transmitter/receiver
dynamic-range limitations and channel estimation error, we derive tight upper
and lower bounds on the end-to-end achievable rate of decode-and-forward-based
full-duplex MIMO relay systems, and propose a transmission scheme based on
maximization of the lower bound. The maximization requires us to (numerically)
solve a nonconvex optimization problem, for which we detail a novel approach
based on bisection search and gradient projection. To gain insights into system
design tradeoffs, we also derive an analytic approximation to the achievable
rate and numerically demonstrate its accuracy. We then study the behavior of
the achievable rate as a function of signal-to-noise ratio,
interference-to-noise ratio, transmitter/receiver dynamic range, number of
antennas, and training length, using optimized half-duplex signaling as a
baseline.
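The gradient-projection ingredient mentioned above can be shown on a much simpler problem. This is an illustrative sketch only: instead of the paper's nonconvex rate lower bound, it maximises the concave surrogate sum_i log(1 + p_i g_i) under a total power budget (the gains and budget are made up), using projected gradient ascent with an exact Euclidean projection onto the power simplex.

```python
import math

def project_simplex(v, P):
    """Euclidean projection onto {p >= 0, sum(p) = P}."""
    u = sorted(v, reverse=True)
    css, rho, theta = 0.0, 0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - P) / j
        if uj - t > 0:
            rho, theta = j, t
    return [max(vi - theta, 0.0) for vi in v]

g = [3.0, 1.0, 0.2]   # illustrative channel gains
P = 2.0               # total transmit power budget
p = [P / len(g)] * len(g)
for _ in range(500):
    grad = [gi / (1.0 + pi * gi) for pi, gi in zip(p, g)]
    p = project_simplex([pi + 0.05 * gr for pi, gr in zip(p, grad)], P)

print([round(pi, 2) for pi in p])
```

The iterates settle on the familiar water-filling allocation, with the weakest channel allocated nothing; in the paper this inner step is combined with a bisection search over the nonconvex objective.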
|
1111.2637
|
Some Extremal Self-Dual Codes and Unimodular Lattices in Dimension 40
|
math.CO cs.IT math.IT math.NT
|
In this paper, binary extremal singly even self-dual codes of length 40 and
extremal odd unimodular lattices in dimension 40 are studied. We give a
classification of extremal singly even self-dual codes of length 40. We also
give a classification of extremal odd unimodular lattices in dimension 40 with
shadows having 80 vectors of norm 2 through their relationships with extremal
doubly even self-dual codes of length 40.
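The defining condition behind such classifications is easy to check mechanically: a binary [n, n/2] code with generator matrix G is self-dual iff G·Gᵀ = 0 over GF(2). The example below uses the extended Hamming [8,4] code, a standard doubly even self-dual code, not one of the length-40 codes classified in the paper.

```python
# Generator matrix of the extended Hamming [8,4] code.
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def is_self_dual(G):
    n, k = len(G[0]), len(G)
    if 2 * k != n:
        return False
    # self-orthogonality: every pair of generator rows has even inner product
    return all(sum(a * b for a, b in zip(r1, r2)) % 2 == 0
               for r1 in G for r2 in G)

def is_doubly_even(G):
    # generator rows of weight divisible by 4; together with
    # self-orthogonality this extends to all codewords
    return all(sum(row) % 4 == 0 for row in G)

print(is_self_dual(G), is_doubly_even(G))
```

For singly even (Type I) codes like those of length 40 studied here, the weights are even but not all divisible by 4, which is where the shadow construction mentioned in the abstract enters.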
|
1111.2640
|
Power Allocation for Outage Minimization in Cognitive Radio Networks
with Limited Feedback
|
cs.IT math.IT math.OC
|
We address an optimal transmit power allocation problem that minimizes the
outage probability of a secondary user (SU) who is allowed to coexist with a
primary user (PU) in a narrowband spectrum sharing cognitive radio network,
under a long term average transmit power constraint at the secondary
transmitter (SU-TX) and an average interference power constraint at the primary
receiver (PU-RX), with quantized channel state information (CSI) (including
both the channels from SU-TX to SU-RX, denoted as $g_1$ and the channel from
SU-TX to PU-RX, denoted as $g_0$) at the SU-TX. The optimal quantization
regions in the vector channel space are shown to have a 'stepwise' structure.
With this structure, the above outage minimization problem can be explicitly
formulated and solved by employing the Karush-Kuhn-Tucker (KKT) necessary
optimality conditions to obtain a locally optimal quantized power codebook. A
low-complexity near-optimal quantized power allocation algorithm is derived for
the case of a large number of feedback bits. An explicit expression for the
asymptotic SU outage probability at high rate quantization (as the number of
feedback bits goes to infinity) is also provided, and is shown to approximate
the optimal outage behavior extremely well for a large number of feedback bits
in numerical simulations. Numerical results also illustrate that with 6 bits
of feedback, the derived algorithms provide SU outage performance very close to
that with full CSI at the SU-TX.
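The quantized-CSI setting can be made concrete with a toy Monte Carlo experiment. Everything here is invented for illustration (the 1-bit codebook, the threshold, the target rate, and the fading model): one feedback bit on g1 selects a codebook power, and the SU outage probability is estimated by simulation. The interference constraint at the PU-RX is omitted for brevity.

```python
import math
import random

random.seed(1)
r_target = 1.0          # target SU rate, bits/channel use
threshold = 0.3         # quantization boundary on g1 (1 feedback bit)
codebook = [0.0, 3.5]   # deep fade: stay silent; otherwise: enough power
N = 100_000

outages = 0
for _ in range(N):
    g1 = random.expovariate(1.0)             # Rayleigh-fading power gain
    p = codebook[0] if g1 < threshold else codebook[1]
    if math.log2(1.0 + p * g1) < r_target:
        outages += 1
print(outages / N)
```

This toy codebook accepts outage in deep fades and spends power only when the channel can support the target rate, which is the qualitative shape of the stepwise structure the paper derives; the paper's algorithms instead compute the regions and powers from KKT conditions.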
|
1111.2651
|
Value, Variety and Viability: Designing For Co-creation in a Complex
System of Direct and Indirect (goods) Service Value Proposition
|
cs.SY
|
While service-dominant logic proposes that all "Goods are a distribution
mechanism for service provision" (FP3), there is a need to understand when and
why a firm would utilise direct or indirect (goods) service provision, and the
interactions between them, to co-create value with the customer. Three
longitudinal case studies in B2B equipment-based 'complex service' systems were
analysed to gain an understanding of customers' co-creation activities to
achieve outcomes. We found that the nature of value, the degree of contextual
variety, and the firm's legacy viability pose threats to viability. To counter these, the
firm uses (a) Direct Service Provision for Scalability and Replicability, (b)
Indirect Service Provision for variety absorption and co-creating emotional
value and customer experience and (c) designing direct and indirect provision
for Scalability and Absorptive Resources of the customer. The co-creation of
complex multidimensional value could be delivered through different value
propositions of the firm. The research proposes a value-centric way of
understanding the interactions between direct and indirect service provision in
the design of the firm's value proposition and proposes a viable systems
approach towards reorganising the firm. The study provides a way for managers
to understand the effectiveness (rather than efficiency) of the firm in
co-creating value as a major issue in the design of complex socio-technical
systems. Goods are often designed within the domain of engineering and product
design, often placing human activity as a supporting role to the equipment.
Through an SDLogic lens, this study considers the design of both equipment and
human activity on an equal footing for value co-creation with the customer, and
it yielded interesting results on when direct provisioning (goods) should be
redesigned, considering all activities equally.
|
1111.2664
|
A Collaborative Mechanism for Crowdsourcing Prediction Problems
|
cs.LG cs.GT
|
Machine Learning competitions such as the Netflix Prize have proven
reasonably successful as a method of "crowdsourcing" prediction tasks. But
these competitions have a number of weaknesses, particularly in the incentive
structure they create for the participants. We propose a new approach, called a
Crowdsourced Learning Mechanism, in which participants collaboratively "learn"
a hypothesis for a given prediction task. The approach draws heavily from the
concept of a prediction market, where traders bet on the likelihood of a future
event. In our framework, the mechanism continues to publish the current
hypothesis, and participants can modify this hypothesis by wagering on an
update. The critical incentive property is that a participant will profit an
amount that scales according to how much her update improves performance on a
released test set.
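The incentive property in the last sentence can be sketched schematically. The loss function, payout scale, and data below are all illustrative, not part of the mechanism's specification: a participant wagers an update to the published hypothesis and profits in proportion to how much the update improves test-set performance.

```python
# Schematic crowdsourced learning mechanism: profit scales with improvement.
test_set = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]

def loss(h):
    # squared error of the hypothesis y = h * x on the released test set
    return sum((h * x - y) ** 2 for x, y in test_set)

def trade(current_h, proposed_h, scale=10.0):
    """Return (new_hypothesis, participant_profit)."""
    improvement = loss(current_h) - loss(proposed_h)
    if improvement <= 0:
        return current_h, -abs(improvement) * scale  # bad wager costs the participant
    return proposed_h, improvement * scale

h = 1.0
h, profit = trade(h, 2.0)
print(h, round(profit, 2))
```

As in a prediction market, the published hypothesis moves only when someone is willing to back an improvement with a wager.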
|
1111.2669
|
A Novel Approach for Web Page Set Mining
|
cs.DB
|
One of the most time-consuming steps in association rule mining is
computing the frequency of itemset occurrences in the database. The hash
table index approach converts a transaction database into a hash index tree
by scanning the database only once. Whenever a user requests a
Uniform Resource Locator (URL), the request entry is stored in the Log File
of the server. This paper presents the hash index table structure, a general
and dense structure which provides web page set extraction from Log File of
server. This hash table provides information about the original database. Web
Page set mining (WPs-Mine) provides a complete representation of the original
database. This approach works well for both sparse and dense data
distributions. Web page set mining supported by hash table index shows the
performance always comparable with and often better than algorithms accessing
data on flat files. Incremental update is feasible without reaccessing the
original transactional database.
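The spirit of the approach can be sketched with a simplified structure of my own (the paper's actual structure is a hash index tree): one scan of the log builds a hash table from page to the sessions containing it, after which any web-page-set frequency is answered without rescanning the log.

```python
# Single-scan hash index over a toy session log; page-set support queries
# then run against the index only.
from collections import defaultdict
from functools import reduce

log = [
    {"home", "products", "cart"},
    {"home", "products"},
    {"home", "blog"},
    {"products", "cart"},
]

index = defaultdict(set)
for tid, pages in enumerate(log):      # single scan of the log file
    for page in pages:
        index[page].add(tid)

def support(pageset):
    """Number of sessions containing every page in pageset."""
    return len(reduce(set.intersection, (index[p] for p in pageset)))

print(support({"home", "products"}), support({"products", "cart"}))
```

Incremental update falls out naturally: new sessions simply extend the index, with no need to reaccess the original log, matching the abstract's last claim.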
|
1111.2763
|
8-Valent Fuzzy Logic for Iris Recognition and Biometry
|
cs.AI
|
This paper shows that maintaining logical consistency of an iris recognition
system is a matter of finding a suitable partitioning of the input space in
enrollable and unenrollable pairs by negotiating the user comfort and the
safety of the biometric system. In other words, consistent enrollment is
mandatory in order to preserve system consistency. A fuzzy 3-valued
disambiguated model of iris recognition is proposed and analyzed in terms of
completeness, consistency, user comfort and biometric safety. It is also shown
here that the fuzzy 3-valued model of iris recognition is hosted by an 8-valued
Boolean algebra of modulo 8 integers that represents the computational
formalization in which a biometric system (a software agent) can achieve the
artificial understanding of iris recognition in a logically consistent manner.
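A toy rendering of the 3-valued decision logic (the thresholds are invented): comparison scores map to match / non-match / uncertain, and only templates whose comparisons avoid the uncertain band are treated as enrollable, trading user comfort against system safety.

```python
# 3-valued decision on iris comparison scores, with enrollability defined
# as "no comparison falls in the uncertain band".
MATCH, NON_MATCH, UNCERTAIN = "match", "non-match", "uncertain"

def decide(score, t_low=0.35, t_high=0.65):
    if score <= t_low:
        return NON_MATCH
    if score >= t_high:
        return MATCH
    return UNCERTAIN

def enrollable(scores):
    """A template is enrollable iff none of its comparisons is uncertain."""
    return all(decide(s) != UNCERTAIN for s in scores)

print(decide(0.7), decide(0.5), enrollable([0.7, 0.2, 0.9]))
```

Widening the uncertain band rejects more enrollments (safer system, less user comfort); narrowing it does the opposite, which is the negotiation the abstract describes.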
|
1111.2837
|
On Compress-Forward without Wyner-Ziv Binning for Relay Networks
|
cs.IT math.IT
|
Noisy network coding is recently proposed for the general multi-source
network by Lim, Kim, El Gamal and Chung. This scheme builds on compress-forward
(CF) relaying but involves three new ideas, namely no Wyner-Ziv binning,
relaxed simultaneous decoding and message repetition. In this paper, using the
two-way relay channel as the underlining example, we analyze the impact of each
of these ideas on the achievable rate region of relay networks. First, CF
without binning but with joint decoding of both the message and compression
index can achieve a larger rate region than the original CF scheme for
multi-destination relay networks. With binning and successive decoding, the
compression rate at each relay is constrained by the weakest link from the
relay to a destination; but without binning, this constraint is relaxed.
Second, simultaneous decoding of all messages over all blocks without uniquely
decoding the compression indices can remove the constraints on compression rate
completely, but is still subject to the message block boundary effect. Third,
message repetition is necessary to overcome this boundary effect and achieve
the noisy network coding region for multi-source networks. The rate region is
enlarged with increasing repetition times. We also apply CF without binning
specifically to the one-way and two-way relay channels and analyze the rate
regions in detail. For the one-way relay channel, it achieves the same rate as
the original CF and noisy network coding but has only 1 block decoding delay.
For the two-way relay channel, we derive the explicit channel conditions in the
Gaussian and fading cases for CF without binning to achieve the same rate
region or sum rate as noisy network coding. These analyses may be appealing to
practical implementation because of the shorter encoding and decoding delay in
CF without binning.
|
1111.2852
|
Principles of Distributed Data Management in 2020?
|
cs.DB
|
With the advents of high-speed networks, fast commodity hardware, and the
web, distributed data sources have become ubiquitous. The third edition of the
Özsu-Valduriez textbook Principles of Distributed Database Systems [10]
reflects the evolution of distributed data management and distributed database
systems. In this new edition, the fundamental principles of distributed data
management could be still presented based on the three dimensions of earlier
editions: distribution, heterogeneity and autonomy of the data sources. In
retrospect, the focus on fundamental principles and generic techniques has been
useful not only to understand and teach the material, but also to enable an
infinite number of variations. The primary application of these generic
techniques has been obviously for distributed and parallel DBMS versions.
Today, to support the requirements of important data-intensive applications
(e.g. social networks, web data analytics, scientific applications, etc.), new
distributed data management techniques and systems (e.g. MapReduce, Hadoop,
SciDB, Peanut, Pig latin, etc.) are emerging and receiving much attention from
the research community. Although they do well in terms of
consistency/flexibility/performance trade-offs for specific applications, they
seem to be ad-hoc and might hurt data interoperability. The key questions I
discuss are: What are the fundamental principles behind the emerging solutions?
Is there any generic architectural model, to explain those principles? Do we
need new foundations to look at data distribution?
|
1111.2896
|
The Laplacian Spectra of Graphs and Complex Networks
|
math.CO cs.SI physics.data-an physics.soc-ph
|
The paper is a brief survey of some recent new results and progress of the
Laplacian spectra of graphs and complex networks (in particular, random graph
and the small-world network). The main contents include the spectral radius of
the graph Laplacian for a given degree sequence, the Laplacian coefficients,
the algebraic connectivity and the graph doubly stochastic matrix, and the
spectra of random graphs and the small world networks. In addition, some
questions are proposed.
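The central object of the survey is easy to construct. Below, the graph Laplacian L = D - A is built for a path on three vertices and two basic spectral facts are checked (row sums vanish, so 0 is always an eigenvalue; the trace equals the degree sum); for this path the spectrum is known in closed form to be {0, 1, 3}.

```python
# Graph Laplacian of the path graph P3 from its adjacency matrix.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
n = len(A)
deg = [sum(row) for row in A]
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

assert all(sum(row) == 0 for row in L)            # 0 is always an eigenvalue
assert sum(L[i][i] for i in range(n)) == sum(deg)  # trace = sum of degrees
print(L)
```

The second-smallest eigenvalue of L (here 1) is the algebraic connectivity discussed in the survey.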
|
1111.2904
|
Spatio-Temporal Analysis of Topic Popularity in Twitter
|
cs.SI cs.CY
|
We present the first comprehensive characterization of the diffusion of ideas
on Twitter, studying more than 4000 topics that include both popular and less
popular topics. On a data set containing approximately 10 million users and a
comprehensive scraping of all the tweets posted by these users between June
2009 and August 2009 (approximately 200 million tweets), we perform a rigorous
temporal and spatial analysis, investigating the time-evolving properties of
the subgraphs formed by the users discussing each topic. We focus on two
different notions of the spatial: the network topology formed by
follower-following links on Twitter, and the geospatial location of the users.
We investigate the effect of initiators on the popularity of topics and find
that users with a high number of followers have a strong impact on popularity.
We deduce that topics become popular when disjoint clusters of users discussing
them begin to merge and form one giant component that grows to cover a
significant fraction of the network. Our geospatial analysis shows that highly
popular topics are those that cross regional boundaries aggressively.
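The cluster-merging behaviour described above can be sketched on a toy network (graph and adoption order are invented): as users adopt a topic one by one, union-find tracks the largest component's share of the adopter subgraph, which dips while clusters are disjoint and jumps to 1 when they merge.

```python
# Track the giant component fraction of the topic's adopter subgraph.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

follower_edges = [(0, 1), (1, 2), (3, 4), (2, 3), (4, 5)]
adopters = []                        # users who discussed the topic, in order
parent, size = {}, {}
for user in [0, 1, 3, 4, 2, 5]:      # adoption order
    adopters.append(user)
    parent[user], size[user] = user, 1
    for a, b in follower_edges:      # link to already-adopting neighbours
        if user in (a, b) and (a if b == user else b) in parent:
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[ra] = rb
                size[rb] += size.pop(ra)
    giant = max(size.values()) / len(adopters)
    print(len(adopters), round(giant, 2))
```

In this trace the fraction falls to 0.5 while two clusters grow separately, then returns to 1.0 once a bridging user merges them, which is the signature the paper associates with topics becoming popular.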
|
1111.2948
|
Using Contextual Information as Virtual Items on Top-N Recommender
Systems
|
cs.LG cs.IR
|
Traditionally, recommender systems for the Web deal with applications that
have two dimensions, users and items. Based on access logs that relate these
dimensions, a recommendation model can be built and used to identify a set of N
items that will be of interest to a certain user. In this paper we propose a
method to complement the information in the access logs with contextual
information without changing the recommendation algorithm. The method consists
in representing context as virtual items. We empirically test this method with
two top-N recommender systems, an item-based collaborative filtering technique
and association rules, on three data sets. The results show that our method is
able to take advantage of the context (new dimensions) when it is informative.
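The virtual-item trick is direct enough to sketch end to end (the log data and the weekend/weekday context are illustrative): contextual values are injected into each transaction as extra items, so an unmodified co-occurrence recommender sees context for free.

```python
# Context as virtual items on top of an ordinary co-occurrence model.
from collections import Counter

raw_log = [
    (["news", "sports"], "weekend"),
    (["news", "finance"], "weekday"),
    (["news", "sports"], "weekend"),
]

# step 1: fold context into each transaction as a virtual item
transactions = [items + ["ctx:" + ctx] for items, ctx in raw_log]

# step 2: run the ordinary co-occurrence model, unchanged
pair_counts = Counter()
for t in transactions:
    for a in t:
        for b in t:
            if a != b:
                pair_counts[(a, b)] += 1

def top_n(basket, n=1):
    scores = Counter()
    for item in basket:
        for (a, b), c in pair_counts.items():
            if a == item and b not in basket:
                scores[b] += c
    return [item for item, _ in scores.most_common(n)]

# the user's current context is just another basket item at query time
print(top_n(["news", "ctx:weekend"]))
```

A production version would filter `ctx:` items out of the returned recommendations; here the strongest co-occurring real item already wins.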
|