| id | title | categories | abstract |
|---|---|---|---|
1307.4685 | Factors determining nestedness in complex networks | physics.soc-ph cs.SI q-bio.MN q-bio.NC | Understanding the causes and effects of network structural features is a key
task in deciphering complex systems. In this context, the property of network
nestedness has aroused a fair amount of interest as regards ecological
networks. Indeed, Bastolla et al. introduced a simple measure of network
nestedness which opened the door to analytical understanding, allowing them to
conclude that biodiversity is strongly enhanced in highly nested mutualistic
networks. Here, we suggest a slightly refined version of such a measure and go
on to study how it is influenced by the most basic structural properties of
networks, such as degree distribution and degree-degree correlations (i.e.
assortativity). We find that heterogeneity in the degree has a very strong
influence on nestedness. Once such an influence has been discounted, we find
that nestedness is strongly correlated with disassortativity and hence, as
random (neutral) networks have been recently found to be naturally
disassortative, they tend to be naturally nested just as the result of chance.
|
1307.4689 | DASH: Dynamic Approach for Switching Heuristics | cs.AI | Complete tree search is a highly effective method for tackling MIP problems,
and over the years, a plethora of branching heuristics have been introduced to
further refine the technique for varying problems. Recently, portfolio
algorithms have taken the process a step further, trying to predict the best
heuristic for each instance at hand. However, the motivation behind algorithm
selection can be taken further still, and used to dynamically choose the most
appropriate algorithm for each encountered subproblem. In this paper we
identify a feature space that captures both the evolution of the problem in the
branching tree and the similarity among subproblems of instances from the same
MIP models. We show how to exploit these features to decide the best time to
switch the branching heuristic and then show how such a system can be trained
efficiently. Experiments on a highly heterogeneous collection of MIP instances
show significant gains over the pure algorithm selection approach that for a
given instance uses only a single heuristic throughout the search.
|
1307.4700 | Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with
Prior Information | cs.IT math.IT | Commonly employed reconstruction algorithms in compressed sensing (CS) use
the $L_2$ norm as the metric for the residual error. However, it is well-known
that least squares (LS) based estimators are highly sensitive to outliers
present in the measurement vector leading to a poor performance when the noise
no longer follows the Gaussian assumption but, instead, is better characterized
by heavier-than-Gaussian tailed distributions. In this paper, we propose a
robust iterative hard thresholding (IHT) algorithm for reconstructing sparse
signals in the presence of impulsive noise. To address this problem, we use a
Lorentzian cost function instead of the $L_2$ cost function employed by the
traditional IHT algorithm. We also modify the algorithm to incorporate prior
signal information in the recovery process. Specifically, we study the case of
CS with partially known support. The proposed algorithm is a fast method with
computational load comparable to the LS based IHT, whilst having the advantage
of robustness against heavy-tailed impulsive noise. Sufficient conditions for
stability are studied and a reconstruction error bound is derived. We also
derive sufficient conditions for stable sparse signal recovery with partially
known support. Theoretical analysis shows that including prior support
information relaxes the conditions for successful reconstruction. Simulation
results demonstrate that the Lorentzian-based IHT algorithm significantly
outperforms commonly employed sparse reconstruction techniques in impulsive
environments, while providing comparable performance in less demanding,
light-tailed environments. Numerical results also demonstrate that the
partially known support inclusion improves the performance of the proposed
algorithm, thereby requiring fewer samples to yield an approximate
reconstruction.
|
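The substitution described above (a Lorentzian cost in place of the $L_2$ cost inside IHT) can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices (Lorentzian norm $\sum_i \log(1 + r_i^2/\gamma^2)$, fixed step size), not the paper's exact update rule:

```python
import numpy as np

def lorentzian_iht(y, Phi, s, gamma=1.0, iters=100, mu=None):
    """Sketch of IHT with a Lorentzian data-fidelity term.

    Standard IHT:   x <- H_s(x + mu * Phi.T @ (y - Phi @ x))
    Lorentzian IHT: the residual r is passed through the gradient (up to
    a constant) of log(1 + r^2/gamma^2), i.e. r / (gamma^2 + r^2), which
    caps the influence of impulsive (heavy-tailed) noise samples.
    """
    if mu is None:
        # near the solution g ~ r / gamma^2, so scale the step accordingly
        mu = gamma**2 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        r = y - Phi @ x
        g = r / (gamma**2 + r**2)           # saturating "score" of the residual
        x = x + mu * (Phi.T @ g)
        x[np.argsort(np.abs(x))[:-s]] = 0   # hard threshold: keep s largest magnitudes
    return x
```

With a single gross outlier in `y`, the saturating score `g` keeps that sample's influence bounded, whereas a plain least-squares IHT update would let it dominate.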
1307.4716 | Cloud Template, a Big Data Solution | cs.DC cs.NI cs.SE cs.SI | Today, cloud computing has emerged as a new paradigm for hosting and
delivering different services over the Internet for big data solutions. Cloud
computing is attractive to owners of both small businesses and large
enterprises, as it eliminates the requirement for users to plan ahead for
provisioning and allows enterprises to start small and increase resources only
when service demand rises. Although cloud computing offers huge opportunities
to the IT industry, its development currently faces several issues. This study presents an idea for introducing
cloud templates which will be used for analyzing, designing, developing and
implementing cloud computing systems. We will present a template based design
for cloud computing systems, highlighting its key concepts, architectural
principles and state of the art implementation, as well as research challenges
and future work requirements. The aim of this idea is to provide a better
understanding of the design challenges of cloud computing and to identify
important research directions in this increasingly important big data area. We
will describe a series of studies by which we and other researchers have
assessed the effectiveness of these techniques in practical situations.
Finally, in this study we will show how this idea could be implemented in a
practical and useful way in industry.
|
1307.4717 | Content Based Image Retrieval System using Feature Classification with
Modified KNN Algorithm | cs.CV | A feature denotes a countenance: remote-sensing scene objects with similar
characteristics, associated with interesting scene elements in the image
formation process. In image processing, features are classified into three
levels: low, middle, and high. Low-level features are color and texture, the
middle-level feature is shape, and the high-level feature is the semantic gap of objects. An
image retrieval system is a computer system for browsing, searching and
retrieving images from a large image database. Content Based Image Retrieval is
a technique which uses visual features of image such as color, shape, texture
to search user required image from large image database according to user
requests in the form of a query. MKNN is an enhanced variant of KNN; the
proposed classification method is called MKNN. MKNN consists of two processing
stages: computing the validity of the training samples and applying weighted KNN.
The validity of each point is computed according to its neighbors. In our
proposal, Modified K-Nearest Neighbor can be considered a kind of weighted KNN
so that the query label is approximated by weighting the neighbors of the
query.
|
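The two MKNN stages described above (per-sample validity, then validity-weighted voting) can be sketched as follows. The `1/(d + 0.5)` distance weighting is one common choice in the MKNN literature and is assumed here, as is plain Euclidean distance:

```python
import numpy as np

def mknn_predict(X_train, y_train, X_query, k=3, alpha=0.5):
    """Sketch of Modified KNN (MKNN): validity-weighted nearest neighbors.

    1) Validity of each training sample = fraction of its k nearest
       training neighbors that share its label.
    2) A query is labeled by weighted voting over its k nearest training
       samples, with weight validity / (distance + alpha).
    """
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train)
    n = len(X_train)
    D = np.linalg.norm(X_train[:, None] - X_train[None, :], axis=2)
    np.fill_diagonal(D, np.inf)             # a point is not its own neighbor
    validity = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(D[i])[:k]
        validity[i] = np.mean(y_train[nbrs] == y_train[i])
    labels = np.unique(y_train)
    preds = []
    for q in np.asarray(X_query, float):
        d = np.linalg.norm(X_train - q, axis=1)
        nbrs = np.argsort(d)[:k]
        w = validity[nbrs] / (d[nbrs] + alpha)
        votes = {c: w[y_train[nbrs] == c].sum() for c in labels}
        preds.append(max(votes, key=votes.get))
    return np.array(preds)
```

The validity term down-weights training points that sit among differently labeled neighbors, which is what makes MKNN more robust than distance-weighted KNN alone.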
1307.4733 | Performance Limits of a Cloud Radio | cs.IT math.IT | Cooperation in a cellular network is seen as a key technique for managing
other-cell interference to obtain a gain in achievable rate. In this paper, we
present the achievable rate regions for a cloud radio network using a
sub-optimal zero forcing equalizer with dirty paper precoding. We show that
when complete channel state information is available at the cloud, rates close
to those achievable with total interference cancellation can be achieved. With
mean capacity gains of up to 2-fold over the conventional cellular network in
both uplink and downlink, this precoding scheme shows great promise for
implementation in a cloud radio network. To simplify the analysis, we use a
stochastic geometric framework based on Poisson point processes instead of the
traditional grid based cellular network model.
We also study the impact of limiting the channel state information and
geographical clustering to limit the cloud size on the achievable rate. We have
observed that using this zero forcing-dirty paper coding technique, the adverse
effect of inter-cluster interference can be minimized thereby transforming an
interference limited network into a noise limited network as experienced by an
average user in the network for low operating signal-to-noise-ratios. However,
for higher signal-to-noise-ratios, both the average achievable rate and
cell-edge achievable rate saturate as observed in literature. As the
implementation of dirty paper coding is not practically feasible, we present a
practical design of a cloud radio network that uses a minimum mean square error
equalizer at the cloud for processing the uplink streams and a
Tomlinson-Harashima precoder as a sub-optimal substitute for a dirty paper precoder in the downlink.
|
1307.4744 | On the Coexistence of a Primary User with an Energy Harvesting Secondary
User: A Case of Cognitive Cooperation | cs.IT cs.NI math.IT | In this paper, we consider a cognitive scenario where an energy harvesting
secondary user (SU) shares the spectrum with a primary user (PU). The secondary
source helps the primary source in delivering its undelivered packets during
periods of silence of the primary source. The primary source has a queue for
storing its data packets, whereas the secondary source has two data queues: one
for storing its own packets and another for storing the fraction of the
undelivered primary packets accepted for relaying. The secondary source is
assumed to be a battery-based node which harvests energy packets from the
environment. In addition to its data queues, the SU has an energy queue to
store the harvested energy packets. The secondary energy packets are used for
primary packets decoding and data packets transmission. More specifically, if
the secondary energy queue is empty, the secondary source can neither help the
primary source nor transmit a packet from the data queues. The energy queue is
modeled as a discrete time queue with Markov arrival and service processes. Due
to the interaction of the queues, we provide inner and outer bounds on the
stability region of the proposed system. We investigate the impact of the
energy arrival rate on the stability region. Numerical results show the
significant gain of cooperation.
|
1307.4790 | Time-Frequency Foundations of Communications | cs.IT math.IT | In the tradition of Gabor's 1946 landmark paper [1], we advocate a
time-frequency (TF) approach to communications. TF methods for communications
were proposed very early (see the History box). While several tutorial
papers and book chapters on the topic are available (see, e.g., [2]-[4] and
references therein), the goal of this paper is to present the fundamental
aspects in a coherent and easily accessible manner. Specifically, we establish
the role of TF methods in communications across a range of subject areas
including TF dispersive channels, orthogonal frequency division multiplexing
(OFDM), information-theoretic limits, and system identification and channel
estimation. Furthermore, we present fundamental results that are stated in the
literature for the continuous-time case in simple linear algebra terms.
|
1307.4798 | Attention and Visibility in an Information Rich World | cs.SI nlin.AO physics.soc-ph | As the rate of content production grows, we must make a staggering number of
daily decisions about what information is worth acting on. For any flourishing
online social media system, users can barely keep up with the new content
shared by friends. How does the user-interface design help or hinder users'
ability to find interesting content? We analyze the choices people make about
which information to propagate on the social media sites Twitter and Digg. We
observe regularities in behavior which can be attributed directly to cognitive
limitations of humans, resulting from the different visibility policies of each
site. We quantify how people divide their limited attention among competing
sources of information, and we show how the user-interface design can mediate
information spread.
|
1307.4799 | Cooperative Relaying at Finite SNR -- Role of Quantize-Map-and-Forward | cs.IT math.IT | Quantize-Map-and-Forward (QMF) relaying has been shown to achieve the optimal
diversity-multiplexing trade-off (DMT) for arbitrary slow fading full-duplex
networks as well as for the single-relay half-duplex network. A key reason for
this is that quantizing at the noise level suffices to achieve the cut-set
bound approximately to within an additive gap, without any requirement of
instantaneous channel state information (CSI). However, DMT only captures the
high SNR performance and potentially, limited CSI at the relay can improve
performance at moderate SNRs. In this work we propose an optimization framework
for QMF relaying over slow fading channels. Focusing on vector Gaussian
quantizers, we optimize the outage probability for the full-duplex and
half-duplex single relay by finding the best quantization level and relay
schedule according to the available CSI at the relays. For the N-relay diamond
network, we derive a universal quantizer that sharpens the additive
approximation gap of QMF from the conventional \Theta(N) bits/s/Hz to
\Theta(log(N)) bits/s/Hz using only network topology information. Analytical
solutions to channel-aware optimal quantizers for two-relay and symmetric
N-relay diamond networks are also derived. In addition, we prove that suitable
hybridizations of our optimized QMF schemes with Decode-Forward (DF) or Dynamic
DF protocols provide significant finite SNR gains over the individual schemes.
|
1307.4801 | Estimating 3D Signals with Kalman Filter | cs.IT math.IT | In this paper, the standard Kalman filter is implemented to denoise
three-dimensional signals affected by additive white Gaussian noise (AWGN). We
use a fast algorithm based on the Laplacian operator to measure the noise
variance and a fast median filter to predict the state variable. The Kalman
algorithm is tuned by adjusting its parameters for better performance, both in
filtering and in reducing the computational load, while conserving the
information contained in the signal.
|
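A minimal 1-D sketch of the recipe above: a Laplacian-based noise-variance estimate, a median filter as the state prediction, and a scalar Kalman update per sample. The 3-D implementation and the process-noise variance `q` in the paper may differ; `q` here is an assumed tuning parameter:

```python
import numpy as np

def estimate_noise_var(x):
    """Laplacian-based noise-variance estimate (1-D analogue of
    Immerkaer's method): for white noise, the second difference
    x[i-1] - 2*x[i] + x[i+1] has variance 6*sigma^2."""
    d = x[:-2] - 2 * x[1:-1] + x[2:]
    return np.mean(d**2) / 6.0

def median_predict(x, i, w=2):
    """Median of a small window around sample i (the state prediction)."""
    lo, hi = max(0, i - w), min(len(x), i + w + 1)
    return np.median(x[lo:hi])

def kalman_denoise(z, q=1e-3):
    """Scalar Kalman filter along the signal: the median filter supplies
    the predicted state, the Laplacian estimator supplies R."""
    z = np.asarray(z, float)
    R = estimate_noise_var(z)              # measurement-noise variance
    P = R                                  # initial state covariance
    x_hat = np.empty_like(z)
    for i in range(len(z)):
        x_pred = median_predict(z, i)      # predict via median filter
        P_pred = P + q
        K = P_pred / (P_pred + R)          # Kalman gain
        x_hat[i] = x_pred + K * (z[i] - x_pred)
        P = (1 - K) * P_pred
    return x_hat
```

Because the noise variance is measured from the data, no noise-level parameter has to be supplied by the user.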
1307.4815 | Linear Precoder Design for MIMO Interference Channels with
Finite-Alphabet Signaling | cs.IT math.IT | This paper investigates the linear precoder design for $K$-user interference
channels of multiple-input multiple-output (MIMO) transceivers under finite
alphabet inputs. We first obtain general explicit expressions of the achievable
rate for users in the MIMO interference channel systems. We study optimal
transmission strategies in both low and high signal-to-noise ratio (SNR)
regions. Given finite alphabet inputs, we show that a simple power allocation
design achieves optimal performance at high SNR whereas the well-known
interference alignment technique for Gaussian inputs only utilizes a partial
interference-free signal space for transmission and leads to a constant rate
loss when applied naively to finite-alphabet inputs. Moreover, we establish
necessary conditions for the linear precoder design to achieve weighted
sum-rate maximization. We also present an efficient iterative algorithm for
determining precoding matrices of all the users. Our numerical results
demonstrate that the proposed iterative algorithm achieves considerably higher
sum-rate under practical QAM inputs than other known methods.
|
1307.4822 | Outage Exponent: A Unified Performance Metric for Parallel Fading
Channels | cs.IT math.IT | The parallel fading channel, which consists of a finite number of subchannels,
is very important, because it can be used to formulate many practical
communication systems. The outage probability, on the other hand, is widely
used to analyze the relationship among the communication efficiency,
reliability, SNR, and channel fading. To the best of our knowledge, previous
works studied only the asymptotic outage performance of the parallel fading
channel, which is valid only for a large number of subchannels or high
SNRs. In this paper, a unified performance metric, which we shall refer to as
the outage exponent, will be proposed. Our approach is mainly based on the
large deviations theory and the Meijer's G-function. It is shown that the
proposed outage exponent is not only an accurate estimation of the outage
probability for any number of subchannels, any SNR, and any target transmission
rate, but also provides an easy way to compute the outage capacity, finite-SNR
diversity-multiplexing tradeoff, and SNR gain. The asymptotic performance
metrics, such as the delay-limited capacity, ergodic capacity, and
diversity-multiplexing tradeoff can be directly obtained by letting the number
of subchannels or the SNR tend to infinity. Similar to Gallager's error exponent,
a reliability function for parallel fading channels, which illustrates a
fundamental relationship between the transmission reliability and efficiency,
can also be defined from the outage exponent. Therefore, the proposed outage
exponent provides a complete and comprehensive performance measure for parallel
fading channels.
|
1307.4847 | Efficient Reinforcement Learning in Deterministic Systems with Value
Function Generalization | cs.LG cs.AI cs.SY stat.ML | We consider the problem of reinforcement learning over episodes of a
finite-horizon deterministic system and as a solution propose optimistic
constraint propagation (OCP), an algorithm designed to synthesize efficient
exploration and value function generalization. We establish that when the true
value function lies within a given hypothesis class, OCP selects optimal
actions over all but at most K episodes, where K is the eluder dimension of the
given hypothesis class. We establish further efficiency and asymptotic
performance guarantees that apply even if the true value function does not lie
in the given hypothesis class, for the special case where the hypothesis class
is the span of pre-specified indicator functions over disjoint sets. We also
discuss the computational complexity of OCP and present computational results
involving two illustrative examples.
|
1307.4879 | Says who? Automatic Text-Based Content Analysis of Television News | cs.CL cs.IR | We perform an automatic analysis of television news programs, based on the
closed captions that accompany them. Specifically, we collect all the news
broadcast on over 140 television channels in the US during a period of six
months. We start by segmenting, processing, and annotating the closed captions
automatically. Next, we focus on the analysis of their linguistic style and on
mentions of people using NLP methods. We present a series of key insights about
news providers, people in the news, and we discuss the biases that can be
uncovered by automatic means. These insights are contrasted by looking at the
data from multiple points of view, including qualitative assessment.
|
1307.4891 | Robust Subspace Clustering via Thresholding | stat.ML cs.IT cs.LG math.IT | The problem of clustering noisy and incompletely observed high-dimensional
data points into a union of low-dimensional subspaces and a set of outliers is
considered. The number of subspaces, their dimensions, and their orientations
are assumed unknown. We propose a simple low-complexity subspace clustering
algorithm, which applies spectral clustering to an adjacency matrix obtained by
thresholding the correlations between data points. In other words, the
adjacency matrix is constructed from the nearest neighbors of each data point
in spherical distance. A statistical performance analysis shows that the
algorithm exhibits robustness to additive noise and succeeds even when the
subspaces intersect. Specifically, our results reveal an explicit tradeoff
between the affinity of the subspaces and the tolerable noise level. We
furthermore prove that the algorithm succeeds even when the data points are
incompletely observed with the number of missing entries allowed to be (up to a
log-factor) linear in the ambient dimension. We also propose a simple scheme
that provably detects outliers, and we present numerical results on real and
synthetic data.
|
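The core of the algorithm described above (threshold the correlations of normalized data points down to each point's `q` nearest neighbors in spherical distance, then spectrally cluster the resulting adjacency matrix) can be sketched with NumPy. The value of `q` and the simple k-means step below are illustrative choices, not the authors' exact implementation:

```python
import numpy as np

def tsc(X, n_clusters, q=3, seed=0):
    """Sketch of thresholding-based subspace clustering.
    X: (n_points, dim). Returns integer cluster labels."""
    X = np.asarray(X, float)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = np.abs(Xn @ Xn.T)                   # absolute correlations
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(len(C)):                 # keep the q strongest correlations
        nbrs = np.argsort(C[i])[-q:]
        A[i, nbrs] = C[i, nbrs]
    A = np.maximum(A, A.T)                  # symmetrize
    L = np.diag(A.sum(1)) - A               # unnormalized graph Laplacian
    _, V = np.linalg.eigh(L)
    U = V[:, :n_clusters]                   # bottom eigenvectors
    return _kmeans(U, n_clusters, seed)

def _kmeans(U, k, seed, iters=50, restarts=5):
    """Best-of-several-restarts k-means on the spectral embedding."""
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(restarts):
        centers = U[rng.choice(len(U), k, replace=False)]
        for _ in range(iters):
            dist = ((U[:, None] - centers[None]) ** 2).sum(-1)
            lab = np.argmin(dist, axis=1)
            for j in range(k):
                if np.any(lab == j):
                    centers[j] = U[lab == j].mean(0)
        cost = dist[np.arange(len(U)), lab].sum()
        if cost < best_cost:
            best, best_cost = lab, cost
    return best
```

For points drawn from well-separated subspaces, the thresholded graph splits into one connected component per subspace, so the bottom Laplacian eigenvectors are (nearly) cluster indicators.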
1307.4894 | Source localization in reverberant rooms using sparse modeling and
narrowband measurements | cs.IT math.IT | We study two cases of acoustic source localization in a reverberant room,
from a number of point-wise narrowband measurements. In the first case, the
room is perfectly known. We show that using a sparse recovery algorithm with a
dictionary of sources computed a priori requires measurements at multiple
frequencies. Furthermore, we study the choice of frequencies for these
measurements, and show that one should avoid the modal frequencies of the room.
In the second case, when the shape and the boundary conditions of the room are
unknown, we propose a model of the acoustical field based on the Vekua theory,
still allowing the localization of sources, at the cost of an increased number
of measurements. Numerical results are given, using simple adaptations of
standard sparse recovery methods.
|
1307.4952 | The Pin-Bang Theory: Discovering The Pinterest World | cs.SI cs.SY physics.soc-ph | Pinterest is an image-based online social network, which was launched in the
year 2010 and has gained a lot of traction ever since. Within 3 years,
Pinterest has attained 48.7 million unique users. This stupendous growth makes
it interesting to study Pinterest, and gives rise to multiple questions about
its users and content. We characterized Pinterest on the basis of large-scale
crawls of 3.3 million user profiles, and 58.8 million pins. In particular, we
explored various attributes of users, pins, boards, pin sources, and user
locations, in detail and performed topical analysis of user generated textual
content. The characterization revealed most prominent topics among users and
pins, top image sources, and geographical distribution of users on Pinterest.
We then investigated this social network from a privacy and security
standpoint, and found traces of malware in the form of pin sources. Instances
of Personally Identifiable Information (PII) leakage were also discovered in
the form of phone numbers, BBM (Blackberry Messenger) pins, and email
addresses. Further, our analysis demonstrated how Pinterest is a potential
venue for copyright infringement, by showing that almost half of the images
shared on Pinterest go uncredited. To the best of our knowledge, this is the
first attempt to characterize Pinterest at such a large scale.
|
1307.4980 | Multi-keyword multi-click advertisement option contracts for sponsored
search | cs.GT cs.IR | In sponsored search, advertisement (abbreviated ad) slots are usually sold by
a search engine to an advertiser through an auction mechanism in which
advertisers bid on keywords. In theory, auction mechanisms have many desirable
economic properties. However, keyword auctions have a number of limitations
including: the uncertainty in payment prices for advertisers; the volatility in
the search engine's revenue; and the weak loyalty between advertiser and search
engine. In this paper we propose a special ad option that alleviates these
problems. In our proposal, an advertiser can purchase an option from a search
engine in advance by paying an upfront fee, known as the option price. He then
has the right, but not the obligation, to purchase clicks on any of a
pre-specified set of keywords at fixed cost-per-click (CPC) prices, for a
specified number of clicks in a specified period of time. The proposed option is closely related to a
special exotic option in finance that contains multiple underlying assets
(multi-keyword) and is also multi-exercisable (multi-click). This novel
structure has many benefits: advertisers can have reduced uncertainty in
advertising; the search engine can improve the advertisers' loyalty as well as
obtain a stable and increased expected revenue over time. Since the proposed ad
option can be implemented in conjunction with the existing keyword auctions,
the option price and corresponding fixed CPCs must be set such that there is no
arbitrage between the two markets. Option pricing methods are discussed and our
experimental results validate the development. Compared to keyword auctions, a
search engine can have an increased expected revenue by selling an ad option.
|
1307.4983 | A Sharp Double Inequality for the Inverse Tangent Function | cs.IT math.IT | The inverse tangent function can be bounded by different inequalities, for
example by Shafer's inequality. In this publication, we propose a new sharp
double inequality, consisting of a lower and an upper bound, for the inverse
tangent function. In particular, we sharpen Shafer's inequality and calculate
the best corresponding constants. The maximum relative errors of the obtained
bounds are smaller than approximately 0.27% and 0.23% for the lower and upper
bound, respectively. Furthermore, we determine an upper bound on the relative
errors of the proposed bounds in order to describe their tightness
analytically. Moreover, some important properties of the obtained bounds are
discussed in order to describe their behavior and achieved accuracy.
|
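For context, the classical Shafer inequality that the paper sharpens can be checked numerically; the sharpened constants themselves are derived in the paper and are not reproduced here:

```python
import numpy as np

# Classical Shafer inequality (the paper's starting point):
#   arctan(x) > 3x / (1 + 2*sqrt(1 + x^2))   for x > 0.
# The paper sharpens this bound by optimizing the constants; below we
# only verify the classical bound and report its relative error.
x = np.linspace(0.01, 100, 200000)
shafer = 3 * x / (1 + 2 * np.sqrt(1 + x**2))
rel_err = (np.arctan(x) - shafer) / np.arctan(x)
assert np.all(np.arctan(x) > shafer)
print(f"max relative error of Shafer's bound on (0, 100]: {rel_err.max():.4f}")
```

The classical bound's relative error grows toward (pi/2 - 3/2)/(pi/2), roughly 4.5%, as x grows, which is why sharpened constants (with the quoted ~0.27% / ~0.23% errors) are a substantial improvement.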
1307.4986 | On the Necessity of Mixed Models: Dynamical Frustrations in the Mind | nlin.CD cs.CL math.DS | In the present work we will present and analyze some basic processes at the
local and global level in linguistic derivations that seem to go beyond the
limits of Markovian or Turing-like computation, and require, in our opinion, a
quantum processor. We will first present briefly the working hypothesis and
then focus on the empirical domain. At the same time, we will argue that a
model appealing to only one kind of computation (be it quantum or not) is
necessarily insufficient, and thus both linear and non-linear formal models are
to be invoked in order to pursue a fuller understanding of mental computations
within a unified framework.
|
1307.4990 | Video Text Localization using Wavelet and Shearlet Transforms | cs.CV | Text in video is useful and important in indexing and retrieving the video
documents efficiently and accurately. In this paper, we present a new method of
text detection using a combined dictionary consisting of wavelets and a
recently introduced transform called shearlets. Wavelets provide optimally
sparse expansion for point-like structures and shearlets provide optimally
sparse expansions for curve-like structures. By combining these two features we
have computed a high frequency sub-band to brighten the text part. Then K-means
clustering is used for obtaining text pixels from the Standard Deviation (SD)
of combined coefficient of wavelets and shearlets as well as the union of
wavelets and shearlets features. Text parts are obtained by grouping
neighboring regions based on geometric properties of the classified output
frame of unsupervised K-means classification. The proposed method, tested on a
standard as well as a newly collected database, is shown to be superior to some
existing methods.
|
1307.5057 | Avoiding Whitewashing in Unstructured Peer-to-Peer Resource Sharing
Network | cs.NI cs.MA | In peer-to-peer file sharing network, it is hard to distinguish between a
legitimate newcomer and a whitewasher. This makes whitewashing a big problem in
peer-to-peer networks. Although the problem of whitewashing can be solved using
permanent identities, it may take away users' right to anonymity. In
this paper, we have proposed a novel algorithm to avoid this problem when the
network uses free temporary identities. In this algorithm, the initial
reputation is adjusted according to the level of whitewashing in the network.
|
1307.5076 | Low-rank Approximations for Computing Observation Impact in 4D-Var Data
Assimilation | cs.CE | We present an efficient computational framework to quantify the impact of
individual observations in four dimensional variational data assimilation. The
proposed methodology uses first and second order adjoint sensitivity analysis,
together with matrix-free algorithms to obtain low-rank approximations of the
observation impact matrix. We illustrate the application of this methodology to
important applications such as data pruning and the identification of faulty
sensors for a two dimensional shallow water test system.
|
1307.5095 | Enabling Complexity-Performance Trade-Offs for Successive Cancellation
Decoding of Polar Codes | cs.IT math.IT | Polar codes are one of the most recent advancements in coding theory and they
have attracted significant interest. While they are provably capacity achieving
over various channels, they have seen limited practical applications.
Unfortunately, the successive nature of successive cancellation based decoders
hinders fine-grained adaptation of the decoding complexity to design
constraints and operating conditions. In this paper, we propose a systematic
method for enabling complexity-performance trade-offs by constructing polar
codes based on an optimization problem which minimizes the complexity under a
suitably defined mutual information based performance constraint. Moreover, a
low-complexity greedy algorithm is proposed in order to solve the optimization
problem efficiently for very large code lengths.
|
1307.5101 | Large-scale Multi-label Learning with Missing Labels | cs.LG | The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite being simple, is surprisingly able to encompass several
recent label-compression based methods which can be derived as special cases of
our method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions - such as the squared loss function - to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization performance for low-rank
promoting trace-norm regularization when compared to (rank insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
|
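The ERM framework discussed above can be sketched as a squared loss over the observed label entries plus a low-rank-promoting trace-norm (nuclear-norm) penalty, solved here by proximal gradient with singular-value thresholding. This is an illustrative solver under assumed parameters, not the paper's algorithm:

```python
import numpy as np

def svt(W, tau):
    """Singular-value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_multilabel(X, Y, M, lam=0.1, step=None, iters=200):
    """Sketch of low-rank multi-label ERM with missing labels:
        min_W  0.5 * || M * (X @ W - Y) ||_F^2  +  lam * ||W||_*
    where M is a 0/1 mask of observed label entries."""
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X.T @ (M * (X @ W - Y))        # gradient of the masked loss
        W = svt(W - step * G, step * lam)  # prox step on the nuclear norm
    return W
```

Only the observed entries (where `M == 1`) contribute to the gradient, so the missing labels are ignored rather than imputed, while the trace norm couples the label columns through a shared low-rank structure.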
1307.5102 | Automated Defect Localization via Low Rank Plus Outlier Modeling of
Propagating Wavefield Data | cs.CV | This work proposes an agnostic inference strategy for material diagnostics,
conceived within the context of laser-based non-destructive evaluation methods,
which extract information about structural anomalies from the analysis of
acoustic wavefields measured on the structure's surface by means of a scanning
laser interferometer. The proposed approach couples spatiotemporal windowing
with low rank plus outlier modeling, to identify a priori unknown deviations in
the propagating wavefields caused by material inhomogeneities or defects, using
virtually no knowledge of the structural and material properties of the medium.
This characteristic makes the approach particularly suitable for diagnostics
scenarios where the mechanical and material models are complex, unknown, or
unreliable. We demonstrate our approach in a simulated environment using
benchmark point and line defect localization problems based on propagating
flexural waves in a thin plate.
|
1307.5118 | Model-Based Policy Gradients with Parameter-Based Exploration by
Least-Squares Conditional Density Estimation | stat.ML cs.LG | The goal of reinforcement learning (RL) is to let an agent learn an optimal
control policy in an unknown environment so that future expected rewards are
maximized. The model-free RL approach directly learns the policy based on data
samples. Although using many samples tends to improve the accuracy of policy
learning, collecting a large number of samples is often expensive in practice.
On the other hand, the model-based RL approach first estimates the transition
model of the environment and then learns the policy based on the estimated
transition model. Thus, if the transition model is accurately learned from a
small amount of data, the model-based approach can perform better than the
model-free approach. In this paper, we propose a novel model-based RL method by
combining a recently proposed model-free policy search method called policy
gradients with parameter-based exploration and the state-of-the-art transition
model estimator called least-squares conditional density estimation. Through
experiments, we demonstrate the practical usefulness of the proposed method.
|
1307.5161 | Random Binary Mappings for Kernel Learning and Efficient SVM | cs.CV cs.LG stat.ML | Support Vector Machines (SVMs) are powerful learners that have led to
state-of-the-art results in various computer vision problems. SVMs suffer from
various drawbacks in terms of selecting the right kernel, which depends on the
image descriptors, as well as computational and memory efficiency. This paper
introduces a novel kernel that addresses these issues. The kernel is learned
by exploiting a large number of low-complexity, randomized binary mappings of
the input feature. This leads to an efficient SVM, while also alleviating the task
of kernel selection. We demonstrate the capabilities of our kernel on 6
standard vision benchmarks, in which we combine several common image
descriptors, namely histograms (Flowers17 and Daimler), attribute-like
descriptors (UCI, OSR, and a-VOC08), and Sparse Quantization (ImageNet).
Results show that our kernel learning adapts well to the different descriptor
types, achieving the performance of kernels specifically tuned for each image
descriptor, at an evaluation cost similar to that of efficient SVM methods.
|
1307.5210 | Approaching the Rate-Distortion Limit with Spatial Coupling, Belief
propagation and Decimation | cs.IT math.IT | We investigate an encoding scheme for lossy compression of a binary symmetric
source based on simple spatially coupled Low-Density Generator-Matrix codes.
The check-node degrees are regular, and the code-bit degrees are Poisson
distributed with an average depending on the compression rate. The performance
of a low complexity Belief Propagation Guided Decimation algorithm is
excellent. The algorithmic rate-distortion curve approaches the optimal curve
of the ensemble as the width of the coupling window grows. Moreover, as the
check degree grows both curves approach the ultimate Shannon rate-distortion
limit. The Belief Propagation Guided Decimation encoder is based on the
posterior measure of a binary symmetric test-channel. This measure can be
interpreted as a random Gibbs measure at a "temperature" directly related to
the "noise level of the test-channel". We investigate the links between the
algorithmic performance of the Belief Propagation Guided Decimation encoder and
the phase diagram of this Gibbs measure. The phase diagram is investigated
thanks to the cavity method of spin glass theory which predicts a number of
phase transition thresholds. In particular the dynamical and condensation
"phase transition temperatures" (equivalently test-channel noise thresholds)
are computed. We observe that: (i) the dynamical temperature of the spatially
coupled construction saturates towards the condensation temperature; (ii) for
large degrees the condensation temperature approaches the temperature (i.e.
noise level) related to the information theoretic Shannon test-channel noise
parameter of rate-distortion theory. This provides heuristic insight into the
excellent performance of the Belief Propagation Guided Decimation algorithm.
The paper contains an introduction to the cavity method.
|
1307.5228 | Unified Performance Analysis of Orthogonal Transmit Beamforming Methods
with User Selection | cs.IT math.IT | Simultaneous multiuser beamforming in multiantenna downlink channels can
entail dirty paper (DP) precoding (optimal and high complexity) or linear
precoding (suboptimal and low complexity) approaches. The system performance is
typically characterized by the sum capacity with homogenous users with perfect
channel state information at the transmitter. The sum capacity performance
analysis requires the exact probability distributions of the user
signal-to-noise ratios (SNRs) or signal-to-interference plus noise ratios
(SINRs). The standard techniques from order statistics can be sufficient to
obtain the probability distributions of SNRs for DP precoding due to the
removal of known interference at the transmitter. Derivation of such
probability distributions for linear precoding techniques on the other hand is
much more challenging. For example, orthogonal beamforming techniques do not
completely cancel the interference at the user locations, thereby requiring the
analysis with SINRs. In this paper, we derive the joint probability
distributions of the user SINRs for two orthogonal beamforming methods combined
with user scheduling: adaptive orthogonal beamforming and orthogonal linear
beamforming. We obtain compact and unified solutions for the joint probability
distributions of the scheduled users' SINRs. Our analytical results can be
applied for similar algorithms and are verified by computer simulations.
|
1307.5240 | Performance Analysis of Optimum Zero-Forcing Beamforming with Greedy
User Selection | cs.IT math.IT | In this letter, an exact performance analysis is presented on the sum rate of
zero-forcing beamforming with a greedy user scheduling algorithm in a downlink
system. Adopting water-filling power allocation, we derive a compact form for
the joint probability density function of the scheduled users' squared
subchannel gains when a transmitter with multiple antennas sends information to
at most two scheduled users with each having a single antenna. The analysis is
verified by numerical results.
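The interference-nulling step at the heart of zero-forcing beamforming can be sketched in a few lines. This is an illustrative toy, not the letter's analysis: it uses equal power per user rather than the water-filling allocation the letter adopts, and all function and variable names are ours.

```python
import numpy as np

def zf_sum_rate(H, total_power):
    """Zero-forcing beamforming sum rate for K single-antenna users.

    H is K x M (K users, M >= K transmit antennas). The pseudoinverse
    makes H @ W diagonal, so inter-user interference is nulled. Equal
    power allocation is used here for simplicity."""
    K = H.shape[0]
    W = np.linalg.pinv(H)                 # H @ W diagonal: interference nulled
    W = W / np.linalg.norm(W, axis=0)     # unit-norm beamforming vectors
    G = H @ W                             # effective channel gain matrix
    p = total_power / K
    signal = np.abs(np.diag(G)) ** 2 * p
    interference = (np.abs(G) ** 2 * p).sum(axis=1) - signal
    sinr = signal / (1.0 + interference)  # unit-variance noise
    return float(np.log2(1.0 + sinr).sum())
```

Because the pseudoinverse cancels the interference exactly, the SINR of each user reduces to an SNR scaled by the squared subchannel gain, which is what makes the joint density derived in the letter tractable.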
|
1307.5251 | Period doubling, information entropy, and estimates for Feigenbaum's
constants | nlin.AO cs.IT math.IT nlin.CD | The relationship between period doubling bifurcations and Feigenbaum's
constants has been studied for nearly 40 years and this relationship has helped
uncover many fundamental aspects of universal scaling across multiple nonlinear
dynamical systems. This paper will combine information entropy with symbolic
dynamics to demonstrate how period doubling can be defined using these tools
alone. In addition, the technique allows us to uncover some unexpected, simple
estimates for Feigenbaum's constants which relate them to log 2 and the golden
ratio, phi, as well as to each other.
|
1307.5296 | First-Come-First-Served for Online Slot Allocation and Huffman Coding | cs.DS cs.IT math.IT | Can one choose a good Huffman code on the fly, without knowing the underlying
distribution? Online Slot Allocation (OSA) models this and similar problems:
There are n slots, each with a known cost. There are n items. Requests for
items are drawn i.i.d. from a fixed but hidden probability distribution p.
After each request, if the item, i, was not previously requested, then the
algorithm (knowing the slot costs and the requests so far, but not p) must
place the item in some vacant slot j(i). The goal is to minimize the sum, over
the items, of the probability of the item times the cost of its assigned slot.
The optimal offline algorithm is trivial: put the most probable item in the
cheapest slot, the second most probable item in the second cheapest slot, etc.
The optimal online algorithm is First Come First Served (FCFS): put the first
requested item in the cheapest slot, the second (distinct) requested item in
the second cheapest slot, etc. The optimal competitive ratios for any online
algorithm are 1+H(n-1) ~ ln n for general costs and 2 for concave costs. For
logarithmic costs, the ratio is, asymptotically, 1: FCFS gives cost opt + O(log
opt).
For Huffman coding, FCFS yields an online algorithm (one that allocates
codewords on demand, without knowing the underlying probability distribution)
that guarantees asymptotically optimal cost: at most opt + 2 log(1+opt) + 2.
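The FCFS rule described above is simple enough to state in a few lines. The sketch below (illustrative naming, not the authors' code) also includes the trivial offline optimum for comparison; when requests happen to arrive in decreasing order of probability, the two coincide.

```python
def fcfs_assignment(requests, slot_costs):
    """First Come First Served: the i-th distinct requested item
    is placed in the i-th cheapest vacant slot."""
    costs = sorted(slot_costs)
    assignment = {}
    for item in requests:
        if item not in assignment:
            assignment[item] = costs[len(assignment)]
    return assignment

def expected_cost(assignment, probs):
    """Sum over items of (item probability) * (cost of its slot)."""
    return sum(probs[i] * c for i, c in assignment.items())

def offline_optimum(probs, slot_costs):
    """Most probable item in the cheapest slot, and so on."""
    costs = sorted(slot_costs)
    order = sorted(probs, key=probs.get, reverse=True)
    return sum(probs[i] * c for i, c in zip(order, costs))
```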
|
1307.5302 | Kernel Adaptive Metropolis-Hastings | stat.ML cs.LG | A Kernel Adaptive Metropolis-Hastings algorithm is introduced, for the
purpose of sampling from a target distribution with strongly nonlinear support.
The algorithm embeds the trajectory of the Markov chain into a reproducing
kernel Hilbert space (RKHS), such that the feature space covariance of the
samples informs the choice of proposal. The procedure is computationally
efficient and straightforward to implement, since the RKHS moves can be
integrated out analytically: our proposal distribution in the original space is
a normal distribution whose mean and covariance depend on where the current
sample lies in the support of the target distribution, and adapts to its local
covariance structure. Furthermore, the procedure requires neither gradients nor
any other higher order information about the target, making it particularly
attractive for contexts such as Pseudo-Marginal MCMC. Kernel Adaptive
Metropolis-Hastings outperforms competing fixed and adaptive samplers on
multivariate, highly nonlinear target distributions, arising in both real-world
and synthetic examples. Code may be downloaded at
https://github.com/karlnapf/kameleon-mcmc.
|
1307.5304 | Entangling mobility and interactions in social media | physics.soc-ph cs.SI | Daily interactions naturally define social circles. Individuals tend to be
friends with the people they spend time with and they choose to spend time with
their friends, inextricably entangling physical location and social
relationships. As a result, it is possible to predict not only someone's
location from their friends' locations but also friendship from spatial and
temporal co-occurrence. While several models have been developed to separately
describe mobility and the evolution of social networks, there is a lack of
studies coupling social interactions and mobility. In this work, we introduce a
new model that bridges this gap by explicitly considering the feedback of
mobility on the formation of social ties. Data coming from three online social
networks (Twitter, Gowalla and Brightkite) is used for validation. Our model
reproduces various topological and physical properties of these networks such
as: i) the size of the connected components, ii) the distance distribution
between connected users, iii) the dependence of the reciprocity on the
distance, iv) the variation of the social overlap and the clustering with the
distance. Besides numerical simulations, a mean-field approach is also used to
study analytically the main statistical features of the networks generated by
the model. The robustness of the results to changes in the model parameters is
explored, finding that a balance between friend visits and long-range random
connections is essential to reproduce the geographical features of the
empirical networks.
|
1307.5322 | Ontology alignment repair through modularization and confidence-based
heuristics | cs.AI | Ontology Matching aims to find a set of semantic correspondences, called an
alignment, between related ontologies. In recent years, there has been a
growing interest in efficient and effective matching methods for large
ontologies. However, most of the alignments produced for large ontologies are
logically incoherent. It was only recently that the use of repair techniques to
improve the quality of ontology alignments has been explored. In this paper we
present a novel technique for detecting incoherent concepts based on ontology
modularization, and a new repair algorithm that minimizes the incoherence of
the resulting alignment and the number of matches removed from the input
alignment. An implementation was done as part of a lightweight version of the
AgreementMaker system, a successful ontology matching platform, and evaluated
using a set of four benchmark biomedical ontology matching tasks. Our results
show that our implementation is efficient and produces better alignments, with
respect to both coherence and f-measure, than state-of-the-art repair
tools. They also show that our implementation is a better alternative for
producing coherent silver standard alignments.
|
1307.5336 | Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts | cs.CL cs.IR q-fin.CP | The use of robo-readers to analyze news texts is an emerging technology trend
in computational finance. In recent research, a substantial effort has been
invested to develop sophisticated financial polarity-lexicons that can be used
to investigate how financial sentiments relate to future company performance.
However, based on experience from other fields, where sentiment analysis is
commonly applied, it is well-known that the overall semantic orientation of a
sentence may differ from the prior polarity of individual words. The objective
of this article is to investigate how semantic orientations can be better
detected in financial and economic news by accommodating the overall
phrase-structure information and domain-specific use of language. Our three
main contributions are: (1) establishment of a human-annotated finance
phrase-bank, which can be used as benchmark for training and evaluating
alternative models; (2) presentation of a technique to enhance financial
lexicons with attributes that help to identify expected direction of events
that affect overall sentiment; (3) development of a linearized phrase-structure
model for detecting contextual semantic orientations in financial and economic
news texts. The relevance of the newly added lexicon features and the benefit
of using the proposed learning-algorithm are demonstrated in a comparative
study against previously used general sentiment models as well as the popular
word frequency models used in recent financial studies. The proposed framework
is parsimonious and avoids the explosion in feature-space caused by the use of
conventional n-gram features.
|
1307.5348 | Tensor-based formulation and nuclear norm regularization for
multi-energy computed tomography | cs.CV physics.med-ph | The development of energy selective, photon counting X-ray detectors allows
for a wide range of new possibilities in the area of computed tomographic image
formation. Under the assumption of perfect energy resolution, here we propose a
tensor-based iterative algorithm that simultaneously reconstructs the X-ray
attenuation distribution for each energy. We use a multi-linear image model
rather than a more standard "stacked vector" representation in order to develop
novel tensor-based regularizers. Specifically, we model the multi-spectral
unknown as a 3-way tensor where the first two dimensions are space and the
third dimension is energy. This approach allows for the design of tensor
nuclear norm regularizers which, like their two-dimensional counterpart, are
convex functions of the multi-spectral unknown. The solution to the resulting
convex optimization problem is obtained using an alternating direction method
of multipliers (ADMM) approach. Simulation results show that the generalized
tensor nuclear norm can be used as a stand-alone regularization technique for
the energy-selective (spectral) computed tomography (CT) problem, and that
combining it with total variation regularization further enhances performance,
especially in low-energy images where the effects of noise are most prominent.
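As a hedged illustration of the key computational primitive: an ADMM iteration with a nuclear-norm term typically invokes the proximal operator of the nuclear norm, i.e., singular value soft-thresholding. The sketch below shows the matrix case; the paper's tensor regularizers would apply this to tensor unfoldings, and the function name is ours.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * ||.||_* (nuclear norm):
    soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Soft-thresholding reduces the sum of singular values by tau per active singular value and zeroes out small ones, which is what drives the solution toward low rank.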
|
1307.5368 | Quantum enigma machines and the locking capacity of a quantum channel | quant-ph cs.IT math.IT | The locking effect is a phenomenon which is unique to quantum information
theory and represents one of the strongest separations between the classical
and quantum theories of information. The Fawzi-Hayden-Sen (FHS) locking
protocol harnesses this effect in a cryptographic context, whereby one party
can encode n bits into n qubits while using only a constant-size secret key.
The encoded message is then secure against any measurement that an eavesdropper
could perform in an attempt to recover the message, but the protocol does not
necessarily meet the composability requirements needed in quantum key
distribution applications. In any case, the locking effect represents an
extreme violation of Shannon's classical theorem, which states that
information-theoretic security holds in the classical case if and only if the
secret key is the same size as the message. Given this intriguing phenomenon,
it is of practical interest to study the effect in the presence of noise, which
can occur in the systems of both the legitimate receiver and the eavesdropper.
This paper formally defines the locking capacity of a quantum channel as the
maximum amount of locked information that can be reliably transmitted to a
legitimate receiver by exploiting many independent uses of a quantum channel
and an amount of secret key sublinear in the number of channel uses. We provide
general operational bounds on the locking capacity in terms of other well-known
capacities from quantum Shannon theory. We also study the important case of
bosonic channels, finding limitations on these channels' locking capacity when
coherent-state encodings are employed and particular locking protocols for
these channels that might be physically implementable.
|
1307.5393 | Clustering Algorithm for Gujarati Language | cs.CL | Natural language processing is still an active research area, and it now
draws researchers worldwide. It involves analyzing a language based on its
structure and then tagging each word appropriately with its grammatical
category. Here we have a set of 50,000 tagged words, and we cluster these
Gujarati words using an algorithm of our own design. Many clustering
techniques are available, e.g., single linkage, complete linkage, and average
linkage. Here the number of clusters to be formed is not known in advance, so
it depends entirely on the type of data set provided. Clustering is a
preprocessing step for stemming, the process of extracting the root from a
word, e.g., cats = cat + s (cat: noun, in plural form).
|
1307.5437 | Algorithm and approaches to handle large Data- A Survey | cs.DB | Data mining environments produce large amounts of data that need to be
analyzed, and patterns have to be extracted from them to gain knowledge. In
this new era of booming data, both structured and unstructured, in fields such
as genomics, meteorology, biology, and environmental research, it has become
difficult to process, manage, and analyze patterns using traditional databases
and architectures. A proper architecture should therefore be understood to
gain knowledge about Big Data. This paper presents a review of various
algorithms from 1994-2013 necessary for handling such large data sets. These
algorithms define the various structures and methods implemented to handle Big
Data; the paper also lists various tools that were developed for analyzing
them.
|
1307.5438 | Towards Distribution-Free Multi-Armed Bandits with Combinatorial
Strategies | cs.LG | In this paper we study a generalized version of classical multi-armed bandits
(MABs) problem by allowing for arbitrary constraints on constituent bandits at
each decision point. The motivation of this study comes from many situations
that involve repeatedly making choices subject to arbitrary constraints in an
uncertain environment: for instance, regularly deciding which advertisements to
display online in order to gain high click-through-rate without knowing user
preferences, or what route to drive home each day under uncertain weather and
traffic conditions. Assume that there are $K$ unknown random variables (RVs),
i.e., arms, each evolving as an \emph{i.i.d} stochastic process over time. At
each decision epoch, we select a strategy, i.e., a subset of RVs, subject to
arbitrary constraints on constituent RVs.
We then gain a reward that is a linear combination of observations on
selected RVs.
The performance of prior results for this problem heavily depends on the
distribution of strategies generated by the corresponding learning policy. For
example, if the reward difference between the best and second-best strategies
approaches zero, prior results may lead to arbitrarily large regret.
Meanwhile, when there is an exponential number of possible strategies at each
decision point, a naive extension of a prior distribution-free policy would
cause poor performance in terms of regret, computation, and space complexity.
To this end, we propose an efficient Distribution-Free Learning (DFL) policy
that achieves zero regret, regardless of the probability distribution of the
resultant strategies.
Our learning policy has both $O(K)$ time complexity and $O(K)$ space
complexity. We further show that even if finding the optimal strategy at each
decision point is NP-hard, our policy still allows for approximate solutions
while retaining near-zero regret.
|
1307.5449 | Non-stationary Stochastic Optimization | math.PR cs.LG stat.ML | We consider a non-stationary variant of a sequential stochastic optimization
problem, in which the underlying cost functions may change along the horizon.
We propose a measure, termed variation budget, that controls the extent of said
change, and study how restrictions on this budget impact achievable
performance. We identify sharp conditions under which it is possible to achieve
long-run-average optimality and more refined performance measures such as rate
optimality that fully characterize the complexity of such problems. In doing
so, we also establish a strong connection between two rather disparate strands
of literature: adversarial online convex optimization; and the more traditional
stochastic approximation paradigm (couched in a non-stationary setting). This
connection is the key to deriving well performing policies in the latter, by
leveraging structure of optimal policies in the former. Finally, tight bounds
on the minimax regret allow us to quantify the "price of non-stationarity,"
which mathematically captures the added complexity embedded in a temporally
changing environment versus a stationary one.
|
1307.5459 | Convex Clustering via Optimal Mass Transport | cs.SY | We consider approximating distributions within the framework of optimal mass
transport and specialize to the problem of clustering data sets. Distances
between distributions are measured in the Wasserstein metric. The main problem
we consider is that of approximating sample distributions by ones with sparse
support. This provides a new viewpoint to clustering. We propose different
relaxations of a cardinality function which penalizes the size of the support
set. We establish that a certain relaxation provides the tightest convex lower
approximation to the cardinality penalty. We compare the performance of
alternative relaxations on a numerical study on clustering.
|
1307.5483 | Approaching Gaussian Relay Network Capacity in the High SNR Regime:
End-to-End Lattice Codes | cs.IT math.IT | We present a natural and low-complexity technique for achieving the capacity
of the Gaussian relay network in the high SNR regime. Specifically, we propose
the use of end-to-end structured lattice codes with the amplify-and-forward
strategy, where the source uses a nested lattice code to encode the messages
and the destination decodes the messages by lattice decoding. All intermediate
relays simply amplify and forward the received signals over the network to the
destination. We show that the end-to-end lattice-coded amplify-and-forward
scheme approaches the capacity of the layered Gaussian relay network in the
high SNR regime. Next, we extend our scheme to non-layered Gaussian relay
networks under the amplify-and-forward scheme, which can be viewed as a
Gaussian intersymbol interference (ISI) channel. Compared with other schemes,
our approach is significantly simpler and requires only the end-to-end design
of the lattice precoding and decoding. It does not require any knowledge of the
network topology or the individual channel gains.
|
1307.5494 | On GROUSE and Incremental SVD | cs.NA cs.LG stat.ML | GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental
algorithm for identifying a subspace of Rn from a sequence of vectors in this
subspace, where only a subset of components of each vector is revealed at each
iteration. Recent analysis has shown that GROUSE converges locally at an
expected linear rate, under certain assumptions. GROUSE has a similar flavor to
the incremental singular value decomposition algorithm, which updates the SVD
of a matrix following addition of a single column. In this paper, we modify the
incremental SVD approach to handle missing data, and demonstrate that this
modified approach is equivalent to GROUSE, for a certain choice of an
algorithmic parameter.
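For context, the classical incremental SVD column update (with fully observed data) can be sketched as below; the modification for missing entries that this paper studies is not shown, and the helper name is illustrative. Appending a column c to A = U diag(s) V^T amounts to an SVD of a small (r+1) x (r+1) core matrix.

```python
import numpy as np

def svd_append_column(U, s, V, c):
    """Update the thin SVD A = U diag(s) V^T after appending column c.

    U: m x r orthonormal, s: length-r singular values, V: n x r, c: length m.
    Returns the thin SVD of [A | c] with rank at most r + 1."""
    r = len(s)
    m_proj = U @ (U.T @ c)          # projection of c onto current subspace
    p = c - m_proj                  # orthogonal residual
    p_norm = np.linalg.norm(p)
    P = p / p_norm if p_norm > 1e-12 else np.zeros_like(c)
    # Small core matrix K = [[diag(s), U^T c], [0, ||p||]]
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K[:r, r] = U.T @ c
    K[r, r] = p_norm
    Uk, sk, VkT = np.linalg.svd(K)
    U_new = np.hstack([U, P.reshape(-1, 1)]) @ Uk
    # Extend V with a new row for the appended column, then rotate
    V_ext = np.zeros((V.shape[0] + 1, r + 1))
    V_ext[:V.shape[0], :r] = V
    V_ext[-1, r] = 1.0
    V_new = V_ext @ VkT.T
    return U_new, sk, V_new
```

The identity [A | c] = [U P] K [[V, 0], [0, 1]]^T makes the update exact, so only the small matrix K is ever decomposed, which is what gives the incremental method its efficiency.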
|
1307.5497 | A scalable stage-wise approach to large-margin multi-class loss based
boosting | cs.LG | We present a scalable and effective classification model to train multi-class
boosting for multi-class classification problems. Shen and Hao introduced a
direct formulation of multi-class boosting in the sense that it directly
maximizes the multi-class margin [C. Shen and Z. Hao, "A direct formulation
for totally-corrective multi-class boosting", in Proc. IEEE Conf. Comp. Vis.
Patt. Recogn., 2011]. The major problem of their approach is its high
computational complexity for training, which hampers its application on
real-world problems. In this work, we propose a scalable and simple stage-wise
multi-class boosting method, which also directly maximizes the multi-class
margin. Our approach offers a few advantages: 1) it is simple and
computationally efficient to train. The approach can speed up the training time
by more than two orders of magnitude without sacrificing the classification
accuracy. 2) Like traditional AdaBoost, it is less sensitive to the choice of
parameters and empirically demonstrates excellent generalization performance.
Experimental results on challenging multi-class machine learning and vision
tasks demonstrate that the proposed approach substantially improves the
convergence rate and accuracy of the final visual detector at no additional
computational cost compared to existing multi-class boosting methods.
|
1307.5503 | Mathematical models for epidemic spreading on complex networks | physics.soc-ph cs.SI math.PR | We propose a model for epidemic spreading on a finite complex network with a
restriction to at most one contamination per time step. Because of the highly
discrete character of the process, the analysis cannot use the continuous
approximation widely exploited for most such models. Using a discrete
approach, we investigate the epidemic threshold and the quasi-stationary distribution.
The main result is a theorem about mixing time for the process, which scales
like logarithm of the network size and which is proportional to the inverse of
the distance from the epidemic threshold. In order to present the model in its
full context, we review the modern approach to modeling epidemic spreading on
complex networks and present the necessary background on random networks,
discrete-time Markov chains, and their quasi-stationary distributions.
|
1307.5510 | Improved Bounds on the Finite Length Scaling of Polar Codes | cs.IT math.IT | Improved bounds on the blocklength required to communicate over binary-input
channels using polar codes, below some given error probability, are derived.
For that purpose, an improved bound on the number of non-polarizing channels is
obtained. The main result is that the blocklength required to communicate
reliably scales at most as $O((I(W)-R)^{-5.77})$ where $R$ is the code rate and
$I(W)$ the symmetric capacity of the channel, $W$. The results are then
extended to polar lossy source coding at rate $R$ of a source with symmetric
distortion-rate function $D(\cdot)$. The blocklength required scales at most as
$O((D_N-D(R))^{-5.77})$ where $D_N$ is the actual distortion.
|
1307.5519 | Optimal Recombination in Genetic Algorithms | cs.NE cs.DS | This paper surveys results on complexity of the optimal recombination problem
(ORP), which consists in finding the best possible offspring as a result of a
recombination operator in a genetic algorithm, given two parent solutions. We
consider efficient reductions of the ORPs, allowing to establish polynomial
solvability or NP-hardness of the ORPs, as well as direct proofs of hardness
results.
|
1307.5524 | The Random Coding Bound Is Tight for the Average Linear Code or Lattice | cs.IT math.IT | In 1973, Gallager proved that the random-coding bound is exponentially tight
for the random code ensemble at all rates, even below expurgation. This result
explained that the random-coding exponent does not achieve the expurgation
exponent due to the properties of the random ensemble, irrespective of the
utilized bounding technique. It has been conjectured that this same behavior
holds true for a random ensemble of linear codes. This conjecture is proved in
this paper. Additionally, it is shown that this property extends to Poltyrev's
random-coding exponent for a random ensemble of lattices.
|
1307.5534 | A New Optimization Approach Based on Rotational Mutation and Crossover
Operator | cs.NE math.OC | Finding a globally optimal point in many global optimization problems over
large spaces requires extensive computation. In this paper, we present a new
approach to continuous function optimization based on rotational mutation and
a crossover operator. The proposed method (RMC) starts from the point with the
best fitness value, selected by an elitism mechanism, and then applies
rotational mutation and the crossover operator to reach the optimal point. The
RMC method is implemented within a GA (briefly, RMCGA) and compared with other
well-known algorithms such as DE, PGA, Grefenstette, and Eshelman [15,16];
numerical and simulation results show that RMCGA reaches the global optimum
more precisely and in fewer generations.
|
1307.5549 | Insufficiency of Linear-Feedback Schemes In Gaussian Broadcast Channels
with Common Message | cs.IT math.IT | We consider the $K\geq 2$-user memoryless Gaussian broadcast channel (BC)
with feedback and common message only. We show that linear-feedback schemes
with a message point, in the spirit of Schalkwijk & Kailath's scheme for
point-to-point channels or Ozarow & Leung's scheme for BCs with private
messages, are strictly suboptimal for this setup. Even with perfect feedback,
the largest rate achieved by these schemes is strictly smaller than capacity
$C$ (which is the same with and without feedback). In the extreme case where
the number of receivers $K\to \infty$, the largest rate achieved by
linear-feedback schemes with a message point tends to 0.
To contrast this negative result, we describe a scheme for
\emph{rate-limited} feedback that uses the feedback in an intermittent way,
i.e., the receivers send feedback signals only in few channel uses. This scheme
achieves all rates $R$ up to capacity $C$ with an $L$-th order exponential
decay of the probability of error if the feedback rate $R_{\textnormal{fb}}$ is
at least $(L-1)R$ for some positive integer $L$.
|
1307.5551 | Regularized Discrete Optimal Transport | cs.CV cs.DM math.OC | This article introduces a generalization of the discrete optimal transport,
with applications to color image manipulations. This new formulation includes a
relaxation of the mass conservation constraint and a regularization term. These
two features are crucial for image processing tasks, which necessitate taking
into account families of multimodal histograms with large mass variation
across modes.
The corresponding relaxed and regularized transportation problem is the
solution of a convex optimization problem. Depending on the regularization
used, this minimization can be solved using standard linear programming methods
or first order proximal splitting schemes.
The resulting transportation plan can be used as a color transfer map, which
is robust to mass variation across image color palettes. Furthermore, the
regularization of the transport plan helps to remove colorization artifacts due
to noise amplification.
We also extend this framework to the computation of barycenters of
distributions. The barycenter is the solution of an optimization problem, which
is separately convex with respect to the barycenter and the transportation
plans, but not jointly convex. A block coordinate descent scheme converges to a
stationary point of the energy. We show that the resulting algorithm can be
used for color normalization across several images. The relaxed and regularized
barycenter defines a common color palette for those images. Applying color
transfer toward this average palette performs a color normalization of the
input images.
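For concreteness, the classical unregularized discrete transport problem that this formulation generalizes can be posed as a small linear program (an illustrative sketch only; the paper's relaxed mass-conservation constraint and regularization term are not shown here):

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(mu, nu, C):
    """Classical discrete optimal transport as an LP:
    minimise <C, T> subject to T @ 1 = mu, T.T @ 1 = nu, T >= 0."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums equal source histogram mu
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column sums equal target histogram nu
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

# Transporting a histogram onto itself has zero cost under a metric cost matrix.
bins = np.array([0.0, 1.0, 2.0])
C = np.abs(bins[:, None] - bins[None, :])
plan, cost = discrete_ot(np.array([0.5, 0.3, 0.2]), np.array([0.5, 0.3, 0.2]), C)
```

The paper's relaxation replaces the hard marginal equalities above with penalty terms, which is what allows mass variation across modes.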
|
1307.5552 | Any Positive Feedback Rate Increases the Capacity of Strictly Less-Noisy
Broadcast Channels | cs.IT math.IT | We propose two coding schemes for discrete memoryless broadcast channels
(DMBCs) with rate-limited feedback from only one receiver. For any positive
feedback rate and for the class of strictly less-noisy DMBCs, our schemes
strictly improve over the no-feedback capacity region.
|
1307.5583 | Characterizations and construction methods for linear functional-repair
storage codes | cs.IT math.IT | We present a precise characterization of linear functional-repair storage
codes in terms of {\em admissible states}, with each state made up from a
collection of vector spaces over some fixed finite field. To illustrate the
usefulness of our characterization, we provide several applications. We first
describe a simple construction of functional-repair storage codes for a family
of code parameters meeting the cutset bound outside the MBR and MSR points;
these codes are conjectured to have optimal rate with respect to their repair
locality. Then, we employ our characterization to develop a construction method
to obtain functional-repair codes for given parameters using symmetry groups,
which can be used both to find new codes and to improve known ones. As an
example of the latter use, we describe a beautiful functional-repair storage
code that was found by this method, with parameters belonging to the family
investigated earlier, which can be specified in terms of only eight different
vector spaces.
|
1307.5591 | A Novel Equation based Classifier for Detecting Human in Images | cs.CV | Shape-based classification is one of the most challenging tasks in the field
of computer vision. Shapes play a vital role in object recognition. The basic
shapes in an image can occur at varying scales, positions and orientations,
and the task becomes especially challenging when detecting humans, owing to
their widely varying size, shape, posture and clothing. We therefore detect
humans based on the head-shoulder shape, as it is the most invariant part of
the human body. First, a novel equation, named the Omega Equation, that
describes the shape of the human head-shoulder is developed, and based on
this equation a classifier is designed specifically for detecting human
presence in a scene. The classifier detects humans by analyzing some of the
discriminative features of the parameter values obtained from the Omega
equation. The proposed method has been tested on a variety of shape datasets,
taking into consideration the complexities of the human head-shoulder shape.
In all the experiments the proposed method demonstrated satisfactory results.
|
1307.5599 | Performance comparison of State-of-the-art Missing Value Imputation
Algorithms on Some Bench mark Datasets | cs.LG stat.ML | Decision making from data involves identifying a set of attributes that
contribute to effective decision making through computational intelligence. The
presence of missing values greatly influences the selection of the right set
of attributes, which in turn degrades the classification accuracy of the
classifiers. As missing values are quite common in the data collection phase
of field experiments or clinical trials, appropriate handling would improve
classifier performance. In this paper we present a review of recently
developed missing value imputation algorithms and compare their performance
on some benchmark datasets.
|
1307.5613 | Optimal Primary-Secondary user Cooperation Policies in Cognitive Radio
Networks | cs.NI cs.SY | In cognitive radio networks, secondary users (SUs) may cooperate with the
primary user (PU), so that the success probability of PU transmissions is
improved, while SUs obtain more transmission opportunities. Thus, SUs have to
take intelligent decisions on whether to cooperate or not and with what power
level, in order to maximize their throughput subject to average power
constraints. Cooperation policies in this framework require the solution of a
constrained Markov decision problem with infinite state space. In our work, we
restrict attention to the class of stationary policies that take randomized
decisions in every time slot based only on spectrum sensing. The proposed class
of policies is shown to achieve the same set of SU rates as the more general
policies, and to enlarge the stability region of the PU queue. Moreover, algorithms
for the distributed calculation of the set of probabilities used by the
proposed class of policies are presented.
|
1307.5636 | A generalized back-door criterion | stat.ME cs.AI | We generalize Pearl's back-door criterion for directed acyclic graphs (DAGs)
to more general types of graphs that describe Markov equivalence classes of
DAGs and/or allow for arbitrarily many hidden variables. We also give easily
checkable necessary and sufficient graphical criteria for the existence of a
set of variables that satisfies our generalized back-door criterion, when
considering a single intervention and a single outcome variable. Moreover, if
such a set exists, we provide an explicit set that fulfills the criterion. We
illustrate the results in several examples. R-code is available in the
R-package pcalg.
|
1307.5641 | Robotic Arm for Remote Surgery | cs.RO | Recent advances in telecommunications have enabled surgeons to operate
remotely on patients with the use of robotics. The investigation and testing of
remote surgery using a robotic arm is presented. The robotic arm is designed to
have four degrees of freedom that track the surgeon's x, y, z positions and
the rotation angle of the forearm $\theta$. The system comprises two main
subsystems viz. the detecting and actuating systems. The detection system uses
infrared light-emitting diodes, a retroreflective bracelet and two infrared
cameras which as a whole determine the coordinates of the surgeon's forearm.
The actuation system, or robotic arm, is based on a lead screw mechanism which
can obtain a maximum speed of 0.28 m/s with a 1.5 degree/step for the
end-effector. The infrared detection and encoder resolutions are below 0.6
mm/pixel and 0.4 mm respectively, which ensures the robotic arm can operate
precisely. The surgeon is able to monitor the patient with the use of a
graphical user interface on the display computer. The lead screw system is
modelled and compared to experimentation results. The system is controlled
using a simple proportional-integral (PI) control scheme which is implemented
on a dSpace control unit. The control design results in a rise time of less
than 0.5 s, a steady-state error of less than 1 mm and settling time of less
than 1.4 s. The system accumulates, over an extended period of time, an error
of approximately 4 mm due to inertial effects of the robotic arm. The results
show promising system performance characteristics for a relatively inexpensive
solution to a relatively advanced application.
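The reported sub-millimetre steady-state error is characteristic of integral action in a PI loop; a minimal discrete-time sketch on a generic first-order plant illustrates this (the gains, time constant and sampling step below are hypothetical, not the paper's identified lead-screw model):

```python
def simulate_pi(kp, ki, setpoint=1.0, dt=0.01, steps=2000, tau=0.2):
    """Discrete PI loop around a first-order plant tau*dy/dt = -y + u."""
    y, integral, trace = 0.0, 0.0, []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (-y + u) / tau         # forward-Euler plant update
        trace.append(y)
    return trace

trace = simulate_pi(kp=2.0, ki=5.0)
```

The integral term drives the steady-state error to zero for constant setpoints, which is why a PI scheme can meet a sub-1 mm steady-state specification.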
|
1307.5653 | Online Tracking Parameter Adaptation based on Evaluation | cs.CV | Parameter tuning is a common issue for many tracking algorithms. In order to
solve this problem, this paper proposes an online parameter tuning approach to adapt a
tracking algorithm to various scene contexts. In an offline training phase,
this approach learns how to tune the tracker parameters to cope with different
contexts. In the online control phase, once the tracking quality is evaluated
as not good enough, the proposed approach computes the current context and
tunes the tracking parameters using the learned values. The experimental
results show that the proposed approach improves the performance of the
tracking algorithm and outperforms recent state of the art trackers. This paper
brings two contributions: (1) an online tracking evaluation, and (2) a method
to adapt online tracking parameters to scene contexts.
|
1307.5664 | Expander Chunked Codes | cs.IT math.IT | Chunked codes are efficient random linear network coding (RLNC) schemes with
low computational cost, where the input packets are encoded into small chunks
(i.e., subsets of the coded packets). During the network transmission, RLNC is
performed within each chunk. In this paper, we first introduce a simple
transfer matrix model to characterize the transmission of chunks, and derive
some basic properties of the model to facilitate the performance analysis. We
then focus on the design of overlapped chunked codes, a class of chunked codes
whose chunks are non-disjoint subsets of input packets, which are of special
interest since they can be encoded with negligible computational cost and in a
causal fashion. We propose expander chunked (EC) codes, the first class of
overlapped chunked codes that have an analyzable performance, where the
construction of the chunks makes use of regular graphs. Numerical and
simulation results show that in some practical settings, EC codes can achieve
rates within 91 to 97 percent of the optimum and outperform the
state-of-the-art overlapped chunked codes significantly.
|
1307.5667 | New Optimization Approach Using Clustering-Based Parallel Genetic
Algorithm | cs.NE math.OC | In many global optimization problems, a global
point (min or max) must be evaluated in a large search space, which requires
very high computational effort. This paper presents a new approach to such
problems based on the subdivision labeling method (SLM); in higher
dimensions, however, this method itself incurs high computational effort. A
Clustering-Based Parallel Genetic Algorithm (CBPGA) offers one solution: the
initial population consists of the crossing points, the subdivision performed
at each step acts as mutation, and after all crossing points are labeled,
selection chooses the polytope with a complete label. We propose an algorithm
based on a master-slave parallelization scheme. The SLM algorithm is
implemented with CBPGA and the experimental results are compared. The
numerical examples and results show that SLMCBPGA improves speed-up and
efficiency.
|
1307.5674 | Solving Traveling Salesman Problem by Marker Method | cs.NE cs.DS math.OC | In this paper we use the marker method and propose a new mutation operator
that selects the nearest neighbor among all near neighbors for solving the
Traveling Salesman Problem.
|
1307.5675 | Models, Entropy and Information of Temporal Social Networks | physics.soc-ph cs.SI nlin.AO | Temporal social networks are characterized by heterogeneous duration of
contacts, which can either follow a power-law distribution, such as in
face-to-face interactions, or a Weibull distribution, such as in mobile-phone
communication. Here we model the dynamics of face-to-face interaction and
mobile phone communication by a reinforcement dynamics, which explains the data
observed in these different types of social interactions. We quantify the
information encoded in the dynamics of these networks by the entropy of
temporal networks. Finally, we show evidence that human dynamics is able to
modulate the information present in social network dynamics when it follows
circadian rhythms and when it is interfacing with a new technology such as the
mobile-phone communication technology.
|
1307.5679 | Sub-Dividing Genetic Method for Optimization Problems | cs.NE math.OC | Nowadays, optimization problems arise in all major fields, but they remain
computationally demanding: computing the global point of a continuous
function requires heavy calculation, and this becomes even clearer in large
search spaces. In this paper, we propose the Sub-Dividing Genetic Method
(SGM), which requires less computation than other methods to reach global
points. The method uses rotation mutation and crossover based on the
sub-division method: sub-division is used to shrink the search space, while
rotation mutation with crossover is used to find globally optimal points. In
the experiments, the SGM algorithm is applied to the De Jong functions. The
numerical examples show that SGM performs better than other methods such as
Grefensstette, Random Value, and PNG.
|
1307.5684 | Using a Dynamic Neural Field Model to Explore a Direct Collicular
Inhibition Account of Inhibition of Return | q-bio.NC cs.CV | When the interval between a transient flash of light (a "cue") and a second
visual response signal (a "target") exceeds 200 ms, responding is
slowest in the direction indicated by the first signal. This phenomenon is
commonly referred to as inhibition of return (IOR). The dynamic neural field
model (DNF) has proven to have broad explanatory power for IOR, effectively
capturing many empirical results. Previous work has used a short-term
depression (STD) implementation of IOR, but this approach fails to explain many
behavioral phenomena observed in the literature. Here, we explore a variant
model of IOR involving a combination of STD and delayed direct collicular
inhibition. We demonstrate that this hybrid model can better reproduce
established behavioural results. We use the results of this model to propose
several experiments that would yield particularly valuable insight into the
nature of the neurophysiological mechanisms underlying IOR.
|
1307.5691 | A study of parameters affecting visual saliency assessment | cs.CV | Since the early 2000s, computational visual saliency has been a very active
research area. Each year, more and more new models are published in the main
computer vision conferences. Nowadays, one of the big challenges is to find a
way to fairly evaluate all of these models. In this paper, a new framework to
assess models of visual saliency is proposed, built on three experiments,
each based on a basic question: 1) there are two ground truths
for saliency evaluation: what are the differences between eye fixations and
manually segmented salient regions?, 2) the properties of the salient regions:
for example, do large, medium and small salient regions present different
difficulties for saliency models? and 3) the metrics used to assess saliency
models: what advantages would there be to mix them with PCA? Statistical
analysis is used here to answer each of these three questions.
|
1307.5693 | Visual saliency estimation by integrating features using multiple kernel
learning | cs.CV | In the last few decades, significant achievements have been attained in
predicting where humans look at images through different computational models.
However, how to determine contributions of different visual features to overall
saliency still remains an open problem. To overcome this issue, a recent class
of models formulates saliency estimation as a supervised learning problem and
accordingly applies machine learning techniques. In this paper, we also address
this challenging problem and propose to use multiple kernel learning (MKL) to
combine information coming from different feature dimensions and to perform
integration at an intermediate level. Besides, we suggest to use responses of a
recently proposed filterbank of object detectors, known as Object-Bank, as
additional semantic high-level features. Here we show that our MKL-based
framework together with the proposed object-specific features provide
state-of-the-art performance as compared to SVM or AdaBoost-based saliency
models.
|
1307.5697 | Dimension Reduction via Colour Refinement | cs.DS cs.DM cs.LG math.OC | Colour refinement is a basic algorithmic routine for graph isomorphism
testing, appearing as a subroutine in almost all practical isomorphism solvers.
It partitions the vertices of a graph into "colour classes" in such a way that
all vertices in the same colour class have the same number of neighbours in
every colour class. Tinhofer (Disc. App. Math., 1991), Ramana, Scheinerman, and
Ullman (Disc. Math., 1994) and Godsil (Lin. Alg. and its App., 1997)
established a tight correspondence between colour refinement and fractional
isomorphisms of graphs, which are solutions to the LP relaxation of a natural
ILP formulation of graph isomorphism.
We introduce a version of colour refinement for matrices and extend existing
quasilinear algorithms for computing the colour classes. Then we generalise the
correspondence between colour refinement and fractional automorphisms and
develop a theory of fractional automorphisms and isomorphisms of matrices.
We apply our results to reduce the dimensions of systems of linear equations
and linear programs. Specifically, we show that any given LP L can efficiently
be transformed into a (potentially) smaller LP L' whose number of variables and
constraints is the number of colour classes of the colour refinement algorithm,
applied to a matrix associated with the LP. The transformation is such that we
can easily (by a linear mapping) map both feasible and optimal solutions back
and forth between the two LPs. We demonstrate empirically that colour
refinement can indeed greatly reduce the cost of solving linear programs.
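The vertex-partitioning step can be sketched as a plain fixed-point iteration (an illustrative implementation for graphs; the quasilinear algorithms and the matrix generalization discussed in the abstract are more refined):

```python
def colour_refinement(adj):
    """1-dimensional Weisfeiler-Leman / colour refinement.
    adj maps each vertex to its list of neighbours; returns a stable
    colouring in which every vertex of a class has the same number of
    neighbours in every class."""
    colour = {v: 0 for v in adj}
    while True:
        # New signature = old colour plus the multiset of neighbour colours.
        sigs = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        refined = {v: palette[sigs[v]] for v in adj}
        if len(set(refined.values())) == len(set(colour.values())):
            return refined            # partition stopped refining: stable
        colour = refined

# On the path 0-1-2 the two endpoints share a class; the middle vertex differs.
c = colour_refinement({0: [1], 1: [0, 2], 2: [1]})
```

On any regular graph (e.g. a cycle) the iteration stabilises immediately with a single colour class, which is exactly the degenerate case in which the induced LP reduction yields no dimension saving.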
|
1307.5702 | Is Bottom-Up Attention Useful for Scene Recognition? | cs.CV | The human visual system employs a selective attention mechanism to understand
the visual world in an efficient manner. In this paper, we show how
computational models of this mechanism can be exploited for the computer vision
application of scene recognition. First, we consider saliency weighting and
saliency pruning, and provide a comparison of the performance of different
attention models in these approaches in terms of classification accuracy.
Pruning can achieve a high degree of computational savings without
significantly sacrificing classification accuracy. In saliency weighting,
however, we found that classification performance does not improve. In
addition, we present a new method to incorporate salient and non-salient
regions for improved classification accuracy. We treat the salient and
non-salient regions separately and combine them using Multiple Kernel Learning.
We evaluate our approach using the UIUC sports dataset and find that with a
small training size, our method improves upon the classification accuracy of
the baseline bag of features approach.
|
1307.5708 | Vertex-Frequency Analysis on Graphs | math.FA cs.IT cs.SI math.IT | One of the key challenges in the area of signal processing on graphs is to
design dictionaries and transform methods to identify and exploit structure in
signals on weighted graphs. To do so, we need to account for the intrinsic
geometric structure of the underlying graph data domain. In this paper, we
generalize one of the most important signal processing tools - windowed Fourier
analysis - to the graph setting. Our approach is to first define generalized
convolution, translation, and modulation operators for signals on graphs, and
explore related properties such as the localization of translated and modulated
graph kernels. We then use these operators to define a windowed graph Fourier
transform, enabling vertex-frequency analysis. When we apply this transform to
a signal with frequency components that vary along a path graph, the resulting
spectrogram matches our intuition from classical discrete-time signal
processing. Yet, our construction is fully generalized and can be applied to
analyze signals on any undirected, connected, weighted graph.
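The unwindowed building block, the graph Fourier transform via the Laplacian eigenbasis, can be sketched as follows (a sketch only; the generalized translation, modulation and windowing operators of the paper are not shown):

```python
import numpy as np

def graph_fourier(adjacency, signal):
    """Graph Fourier transform: project a vertex signal onto the
    eigenvectors of the combinatorial Laplacian L = D - A."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, U = np.linalg.eigh(L)     # eigenvalues ascend: "graph frequencies"
    return eigvals, U.T @ signal       # spectral coefficients of the signal

# A constant signal on a connected graph lives entirely at frequency zero.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph on 3 vertices
freqs, spectrum = graph_fourier(A, np.ones(3))
```

The windowed transform of the paper then localises this projection around each vertex via translated and modulated window kernels.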
|
1307.5710 | Saliency-Guided Perceptual Grouping Using Motion Cues in Region-Based
Artificial Visual Attention | cs.CV | Region-based artificial attention constitutes a framework for bio-inspired
attentional processes on an intermediate abstraction level for the use in
computer vision and mobile robotics. Segmentation algorithms produce regions of
coherently colored pixels. These serve as proto-objects on which the
attentional processes determine image portions of relevance. A single
region---which does not necessarily represent a full object---constitutes the focus
of attention. For many post-attentional tasks, however, such as identifying or
tracking objects, single segments are not sufficient. Here, we present a
saliency-guided approach that groups regions that potentially belong to the
same object based on proximity and similarity of motion. We compare our results
to object selection by thresholding saliency maps and a further
attention-guided strategy.
|
1307.5713 | Understanding Humans' Strategies in Maze Solving | cs.CV cs.AI q-bio.NC | Navigating through a visual maze relies on the strategic use of eye movements
to select and identify the route. When navigating the maze, there are
trade-offs between exploring the environment and relying on memory. This
study examined the strategies used to navigate novel and familiar mazes
that were viewed from above and traversed by a mouse cursor. Eye and mouse
movements revealed two modes that almost never occurred concurrently:
exploration and guidance. Analyses showed that people learned mazes and were
able to devise and carry out complex, multi-faceted strategies that traded-off
visual exploration against active motor performance. These strategies took into
account available visual information, memory, confidence, the estimated cost in
time for exploration, and idiosyncratic tolerance for error. Understanding the
strategies humans used for maze solving is valuable for applications in
cognitive neuroscience as well as in AI, robotics and human-robot interactions.
|
1307.5720 | Top-down and Bottom-up Feature Combination for Multi-sensor Attentive
Robots | cs.RO cs.CV | The information available to robots in real tasks is widely distributed both
in time and space, requiring the agent to search for relevant data. Humans,
who face the same problem when sounds, images and smells reach their senses
in everyday scenes, apply a natural mechanism: attention. As vision plays an
important role in our routine, most research on attention has involved this
sensory system, and the same emphasis has been replicated in the robotics
field. However, most robotics tasks nowadays do not rely only on visual data,
which is still costly to process. To allow the use of attentive
concepts with other robotics sensors that are usually used in tasks such as
navigation, self-localization, searching and mapping, a generic attentional
model has been previously proposed. In this work, feature mapping functions
were designed to build feature maps to this attentive model from data from
range scanner and sonar sensors. Experiments were performed in a high fidelity
simulated robotics environment and results have demonstrated the capability of
the model on dealing with both salient stimuli and goal-driven attention over
multiple features extracted from multiple sensors.
|
1307.5725 | Damping Noise-Folding and Enhanced Support Recovery in Compressed
Sensing | math.NA cs.IT math.IT | The practice of compressed sensing suffers significantly in terms of the
efficiency/accuracy trade-off when acquiring noisy signals prior to
measurement. It is rather common to find results treating the noise affecting
the measurements, thereby avoiding the so-called
$\textit{noise-folding}$ phenomenon, related to the noise in the signal,
eventually amplified by the measurement procedure. In this paper, we present
two new decoding procedures, combining $\ell_1$-minimization followed by either
a regularized selective least $p$-powers or an iterative hard thresholding,
which not only are able to reduce this component of the original noise, but
also have enhanced properties in terms of support identification with respect
to the sole $\ell_1$-minimization or iteratively re-weighted
$\ell_1$-minimization. We prove such features, providing relatively simple and
precise theoretical guarantees. We additionally confirm and support the
theoretical results by extensive numerical simulations, which give statistics
of the robustness of the new decoding procedures with respect to more classical
$\ell_1$-minimization and iteratively re-weighted $\ell_1$-minimization.
|
1307.5730 | A New Strategy of Cost-Free Learning in the Class Imbalance Problem | cs.LG | In this work, we define cost-free learning (CFL) formally in comparison with
cost-sensitive learning (CSL). The main difference between them is that a CFL
approach seeks optimal classification results without requiring any cost
information, even in the class imbalance problem. In fact, several CFL
approaches exist in the related studies, such as sampling and some
criteria-based approaches. However, to the best of our knowledge, none of the existing
CFL and CSL approaches are able to process the abstaining classifications
properly when no information is given about errors and rejects. Based on
information theory, we propose a novel CFL strategy which seeks to maximize the normalized
mutual information of the targets and the decision outputs of classifiers.
Using the strategy, we can deal with binary/multi-class classifications
with/without abstaining. Significant features are observed from the new
strategy. While the degree of class imbalance is changing, the proposed
strategy is able to balance the errors and rejects accordingly and
automatically. Another advantage of the strategy is its ability of deriving
optimal rejection thresholds for abstaining classifications and the
"equivalent" costs in binary classifications. The connection between rejection
thresholds and ROC curve is explored. Empirical investigation is made on
several benchmark data sets in comparison with other existing approaches. The
classification results demonstrate a promising perspective of the strategy in
machine learning.
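The quantity being maximised, normalized mutual information between targets and classifier decisions, can be estimated directly from a joint count table (a sketch; normalisation conventions vary and the paper's exact form may differ):

```python
import math
from collections import Counter

def normalized_mutual_information(targets, decisions):
    """NMI(T; Y) = I(T; Y) / sqrt(H(T) * H(Y)), estimated from label counts."""
    n = len(targets)
    p_t = Counter(targets)                 # marginal counts of targets
    p_y = Counter(decisions)               # marginal counts of decisions
    p_ty = Counter(zip(targets, decisions))  # joint counts
    mi = sum(c / n * math.log((c / n) / ((p_t[t] / n) * (p_y[y] / n)))
             for (t, y), c in p_ty.items())
    h_t = -sum(c / n * math.log(c / n) for c in p_t.values())
    h_y = -sum(c / n * math.log(c / n) for c in p_y.values())
    return mi / math.sqrt(h_t * h_y) if h_t > 0 and h_y > 0 else 0.0
```

Because NMI is invariant to relabeling and penalises uninformative outputs, maximising it needs no cost matrix, which is the sense in which the learning is cost-free.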
|
1307.5736 | Speaker Independent Continuous Speech to Text Converter for Mobile
Application | cs.CL cs.NE cs.SD | An efficient speech to text converter for mobile application is presented in
this work. The prime motive is to formulate a system which would give optimum
performance in terms of complexity, accuracy, delay and memory requirements for
mobile environment. The speech to text converter consists of two stages namely
front-end analysis and pattern recognition. The front end analysis involves
preprocessing and feature extraction. The traditional voice activity detection
algorithms which track only energy cannot successfully identify potential
speech from input because the unwanted part of the speech also has some energy
and appears to be speech. In the proposed system, a VAD is used that
separately computes the energy of the high-frequency part, via the
zero-crossing rate, to differentiate noise from speech. Mel Frequency
Cepstral Coefficient (MFCC) is used as the
feature extraction method and Generalized Regression Neural Network is used as
recognizer. MFCC provides low word error rate and better feature extraction.
Neural Network improves the accuracy. Thus a small database containing all
possible syllable pronunciation of the user is sufficient to give recognition
accuracy close to 100%. Thus the proposed technique enables the realization
of real-time, speaker-independent applications such as mobile phones, PDAs,
etc.
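The two per-frame cues the described VAD combines, energy and zero-crossing rate, can be sketched as follows (frame length and any decision thresholds are application-specific assumptions, not values from the abstract):

```python
import numpy as np

def short_time_features(x, frame_len=160):
    """Per-frame energy and zero-crossing rate for a 1-D signal."""
    n = len(x) // frame_len
    frames = np.asarray(x[: n * frame_len]).reshape(n, frame_len)
    energy = (frames ** 2).mean(axis=1)                       # mean power
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)  # sign flips
    return energy, zcr

# A rapidly alternating (noise-like) signal has ZCR near 1; a DC-like signal
# of the same energy has ZCR 0 -- energy alone cannot tell them apart.
e_noise, z_noise = short_time_features(np.tile([1.0, -1.0], 400))
e_dc, z_dc = short_time_features(np.ones(800))
```

This is precisely the failure mode noted above: both signals carry equal energy, so only the ZCR cue separates the noise-like frames from the speech-like ones.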
|
1307.5748 | Appearance Descriptors for Person Re-identification: a Comprehensive
Review | cs.CV | In video-surveillance, person re-identification is the task of recognising
whether an individual has already been observed over a network of cameras.
Typically, this is achieved by exploiting the clothing appearance, as classical
biometric traits like the face are impractical in real-world video surveillance
scenarios. Clothing appearance is represented by means of low-level
\textit{local} and/or \textit{global} features of the image, usually extracted
according to some part-based body model to treat different body parts (e.g.
torso and legs) independently. This paper provides a comprehensive review of
current approaches to build appearance descriptors for person
re-identification. The most relevant techniques are described in detail, and
categorised according to the body models and features used. The aim of this
work is to provide a structured body of knowledge and a starting point for
researchers willing to conduct novel investigations on this challenging topic.
|
1307.5800 | An Adaptive GMM Approach to Background Subtraction for Application in
Real Time Surveillance | cs.CV | Efficient security management has become an important concern in today's
world. As the problem grows, there is an urgent need to introduce advanced
technology and equipment to improve the state of the art of
surveillance. In this paper we propose a model for real time background
subtraction using AGMM. The proposed model is robust and adaptable to dynamic
background, fast illumination changes and repetitive motion. We have also
incorporated a method for detecting shadows using the Horprasert color model.
The proposed model can be employed for monitoring areas where movement or entry
is highly restricted. So on detection of any unexpected events in the scene an
alarm can be triggered and hence we can achieve real time surveillance even in
the absence of constant human monitoring.
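A single-Gaussian per-pixel model illustrates the update logic (a deliberate simplification of the adaptive GMM actually proposed; the learning rate and threshold below are assumed values):

```python
import numpy as np

def background_step(frame, mean, var, alpha=0.05, k=2.5):
    """One update of a per-pixel single-Gaussian background model.
    Pixels further than k standard deviations from the mean are foreground;
    background pixels are blended into the model at rate alpha."""
    dist2 = (frame - mean) ** 2
    foreground = dist2 > (k ** 2) * var
    mean = np.where(foreground, mean, (1 - alpha) * mean + alpha * frame)
    var = np.where(foreground, var, (1 - alpha) * var + alpha * dist2)
    return foreground, mean, var

mean, var = np.zeros((4, 4)), np.full((4, 4), 0.01)
frame = np.zeros((4, 4))
frame[1, 1] = 10.0                     # a bright intruding pixel
fg, mean, var = background_step(frame, mean, var)
```

An adaptive GMM keeps several such Gaussians per pixel, which is what lets the full model absorb dynamic backgrounds and repetitive motion that a single mode cannot.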
|
1307.5827 | Cooperative Energy Harvesting Networks with Spatially Random Users | cs.IT math.IT | This paper considers a cooperative network with multiple source-destination
pairs and one energy harvesting relay. The outage probability experienced by
users in this network is characterized by taking the spatial randomness of user
locations into consideration. In addition, the cooperation among users is
modeled as a canonical coalitional game and the grand coalition is shown to be
stable in the addressed scenario. Simulation results are provided to
demonstrate the accuracy of the developed analytical results.
|
1307.5837 | An Information Theoretic Measure of Judea Pearl's Identifiability and
Causal Influence | cs.IT cs.AI math.IT | In this paper, we define a new information theoretic measure that we call the
"uprooted information". We show that a necessary and sufficient condition for a
probability $P(s|do(t))$ to be "identifiable" (in the sense of Pearl) in a
graph $G$ is that its uprooted information be non-negative for all models of
the graph $G$. In this paper, we also give a new algorithm for deciding, for a
Bayesian net that is semi-Markovian, whether a probability $P(s|do(t))$ is
identifiable, and, if it is identifiable, for expressing it without allusions
to confounding variables. Our algorithm is closely based on a previous
algorithm by Tian and Pearl, but seems to correct a small flaw in theirs. In
this paper, we also find a {\it necessary and sufficient graphical condition}
for a probability $P(s|do(t))$ to be identifiable when $t$ is a singleton set.
So far, in the prior literature, it appears that only a {\it sufficient
graphical condition} has been given for this. By "graphical" we mean that it is
directly based on Judea Pearl's 3 rules of do-calculus.
|
1307.5838 | Rotational Mutation Genetic Algorithm on optimization Problems | cs.NE math.OC | Optimization problems nowadays arise in nearly every field, but computing their solutions can be demanding: locating the optimum in a high-dimensional search space is very time consuming. In this paper, we present a new approach for the optimization of continuous functions based on a rotational mutation operator, called RM. The proposed algorithm starts from the point with the best fitness value, retained through an elitism mechanism, and then applies rotational mutation to move toward the optimal point. We implement the RM algorithm within a genetic algorithm (briefly, RMGA) and compare it with other well-known algorithms: DE, PGA, and those of Grefenstette and Eshelman [15, 16]. Numerical and simulation results show that RMGA reaches the global optimum in fewer generations.
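The abstract does not specify the rotational mutation operator in detail, so the following is a hedged 2-D sketch of one plausible reading: offspring are produced by rotating each candidate about the current elite point and contracting toward it, with elitism preserving the best solution. The sphere objective, population size, and contraction rule are illustrative assumptions.

```python
import math
import random

def sphere(p):
    """Toy objective: global minimum 0 at the origin."""
    return p[0] ** 2 + p[1] ** 2

def rotate_about(point, center, angle):
    """Rotate a 2-D point about a center by `angle` radians."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def rmga(pop_size=30, generations=60, seed=3):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    initial_best = min(sphere(p) for p in pop)
    for _ in range(generations):
        best = min(pop, key=sphere)
        offspring = [best]                      # elitism: the best survives
        for p in pop:
            # Rotational mutation: spin the candidate about the elite point,
            # then contract it a random fraction of the way toward the elite.
            q = rotate_about(p, best, rng.uniform(0.0, 2.0 * math.pi))
            t = rng.random()
            offspring.append(((1 - t) * q[0] + t * best[0],
                              (1 - t) * q[1] + t * best[1]))
        pop = sorted(offspring, key=sphere)[:pop_size]
    return min(pop, key=sphere), initial_best

best, f0 = rmga()
```

Because the elite always survives, the best fitness is monotone non-increasing across generations.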
|
1307.5839 | A New Approach for Finding the Global Optimal Point Using Subdividing
Labeling Method (SLM) | cs.NE math.OC | In most global optimization problems, finding the global optimum in a large, multidimensional search space demands heavy computation. In this paper, we present a new approach that finds the global optimum in few steps and with low computation using the subdividing labeling method (SLM), which can also be used in large, multidimensional search spaces. In this approach, at each step the crossing points are labeled, and the completely labeled polytope of the search space is selected and then subdivided. The SLM algorithm proceeds until h (the subdivision function) reaches zero. SLM is implemented on five applications and compared with established techniques such as random search, random search-walk, and simulated annealing. The results demonstrate that our new approach is faster and more reliable, with an optimal time complexity of O(log n).
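The polytope-labeling machinery is not spelled out in the abstract, but its O(log n) behaviour comes from repeated subdivision of the search region. The sketch below is a one-dimensional caricature of that idea under an assumed unimodality condition: evaluate two quarter points, discard the quarter that provably cannot contain the minimum, and repeat.

```python
def slm_minimize(f, lo, hi, tol=1e-6):
    """One-dimensional caricature of subdivision search: evaluate the
    two quarter points, discard the quarter that cannot contain the
    minimum, and repeat.  Assumes f is unimodal on [lo, hi]."""
    while hi - lo > tol:
        left = lo + 0.25 * (hi - lo)
        right = lo + 0.75 * (hi - lo)
        if f(left) < f(right):
            hi = right        # the minimum cannot lie in (right, hi]
        else:
            lo = left         # the minimum cannot lie in [lo, left)
    return 0.5 * (lo + hi)

x_star = slm_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Each pass shrinks the interval by a constant factor, giving the logarithmic step count; the actual SLM labels crossing points of polytopes in higher dimensions, which this toy omits.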
|
1307.5840 | Sub-Dividing Labeling Method for Optimization Problems by Genetic
Algorithm | cs.NE math.OC | In many global optimization problems, evaluating the global optimum (min or max) in a large search space requires very high computational effort. This paper presents a new approach to such problems based on the subdividing labeling method (SLM); however, SLM alone becomes computationally expensive in higher dimensions. Embedding SLM in a Genetic Algorithm (SLMGA) addresses this: in the proposed algorithm, the initial population consists of the crossing points, and the subdivision performed at each step is driven by mutation. SLMGA is compared with other well-known algorithms: DE, PGA, and those of Grefenstette and Eshelman. Numerical results show that SLMGA reaches the global optimum in fewer generations.
|
1307.5870 | Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery | stat.ML cs.LG | Recovering a low-rank tensor from incomplete information is a recurring
problem in signal processing and machine learning. The most popular convex
relaxation of this problem minimizes the sum of the nuclear norms of the
unfoldings of the tensor. We show that this approach can be substantially
suboptimal: reliably recovering a $K$-way tensor of length $n$ and Tucker rank
$r$ from Gaussian measurements requires $\Omega(r n^{K-1})$ observations. In
contrast, a certain (intractable) nonconvex formulation needs only $O(r^K +
nrK)$ observations. We introduce a very simple, new convex relaxation, which
partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2
\rfloor}n^{\lceil K/2 \rceil})$ observations. While these results pertain to
Gaussian measurements, simulations strongly suggest that the new norm also
outperforms the sum of nuclear norms for tensor completion from a random subset
of entries.
Our lower bound for the sum-of-nuclear-norms model follows from a new result
on recovering signals with multiple sparse structures (e.g. sparse, low rank),
which perhaps surprisingly demonstrates the significant suboptimality of the
commonly used recovery approach via minimizing the sum of individual sparsity
inducing norms (e.g. $l_1$, nuclear norm). Our new formulation for low-rank
tensor recovery however opens the possibility in reducing the sample complexity
by exploiting several structures jointly.
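The contrast between the two relaxations is easy to see at the level of matricization shapes and sample-complexity scalings. The sketch below computes the (very unbalanced) mode-k unfolding shape used by the sum-of-nuclear-norms approach versus the near-square "square deal" reshaping, together with the two sample-count scalings quoted in the abstract; the tiny tensor dimensions are illustrative.

```python
def unfold_shape(dims, mode):
    """Shape of the mode-k unfolding: mode-k fibers become rows."""
    other = 1
    for i, d in enumerate(dims):
        if i != mode:
            other *= d
    return (dims[mode], other)

def square_shape(dims):
    """Shape of the 'square deal' matricization: the first floor(K/2)
    modes index rows, the remaining modes index columns."""
    k = len(dims)
    rows = cols = 1
    for d in dims[: k // 2]:
        rows *= d
    for d in dims[k // 2:]:
        cols *= d
    return (rows, cols)

def snn_samples(n, k, r):
    """Lower-bound scaling r * n^(K-1) for the sum-of-nuclear-norms model."""
    return r * n ** (k - 1)

def square_samples(n, k, r):
    """Scaling r^floor(K/2) * n^ceil(K/2) for the square relaxation."""
    return r ** (k // 2) * n ** ((k + 1) // 2)

dims = (2, 2, 2, 2)               # a 4-way tensor with n = 2
u = unfold_shape(dims, 0)         # (2, 8): very unbalanced
s = square_shape(dims)            # (4, 4): as square as possible
```

For n = 100, K = 4, r = 2, the square relaxation's scaling is 4e4 observations against 2e6 for the sum of nuclear norms, which is the gap the abstract describes.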
|
1307.5894 | MIRAGE: An Iterative MapReduce based FrequentSubgraph Mining Algorithm | cs.DB cs.DC | Frequent subgraph mining (FSM) is an important task for exploratory data
analysis on graph data. Over the years, many algorithms have been proposed to
solve this task. These algorithms assume that the data structure of the mining
task is small enough to fit in the main memory of a computer. However, as the
real-world graph data grows, both in size and quantity, such an assumption does
not hold any longer. To overcome this, some graph database-centric methods have
been proposed in recent years for solving FSM; however, a distributed solution
using the MapReduce paradigm has not been explored extensively. Since MapReduce is
becoming the de facto paradigm for computation on massive data, an efficient
FSM algorithm on this paradigm is in high demand. In this work, we propose a
frequent subgraph mining algorithm called MIRAGE which uses an iterative
MapReduce based framework. MIRAGE is complete as it returns all the frequent
subgraphs for a given user-defined support, and it is efficient as it applies
all the optimizations that the latest FSM algorithms adopt. Our experiments
with real life and large synthetic datasets validate the effectiveness of
MIRAGE for mining frequent subgraphs from large graph datasets. The source code
of MIRAGE is available from www.cs.iupui.edu/alhasan/software/
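MIRAGE's iterative framework repeats map and reduce rounds, growing candidate subgraphs each iteration. As a hedged single-round sketch (not the MIRAGE implementation), the snippet below mines frequent size-one subgraphs, i.e. edges, from a toy graph database: the map phase emits (edge, 1) pairs and the reduce phase aggregates counts and applies the user-defined support threshold.

```python
from collections import defaultdict

def map_phase(graphs):
    """Map: emit (candidate subgraph, 1) pairs.  In this sketch the
    candidate subgraphs are just single, canonically sorted edges."""
    for g in graphs:
        for u, v in g:
            yield (tuple(sorted((u, v))), 1)

def reduce_phase(pairs, min_support):
    """Reduce: sum counts per key and keep candidates meeting support."""
    counts = defaultdict(int)
    for key, one in pairs:
        counts[key] += one
    return {k: c for k, c in counts.items() if c >= min_support}

# A toy graph database: three graphs, each a list of edges.
graphs = [
    [("a", "b"), ("b", "c")],
    [("b", "a"), ("c", "d")],
    [("a", "b"), ("b", "c"), ("c", "d")],
]
frequent = reduce_phase(map_phase(graphs), min_support=2)
```

A full iteration of the real algorithm would then extend each surviving candidate by one edge and repeat, with subgraph isomorphism checks replacing the simple key equality used here.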
|
1307.5906 | Embedding Noise Prediction into List-Viterbi Decoding using Error
Detection Codes for Magnetic Tape Systems | cs.IT math.IT | A List Viterbi detector produces a rank ordered list of the N globally best
candidates in a trellis search. A List Viterbi detector structure is proposed
that incorporates the noise prediction with periodic state-metric updates based
on outer error detection codes (EDCs). More specifically, a periodic decision-making process is utilized over non-overlapping sliding windows of P bits based on the use of outer EDCs. In a number of magnetic recording applications, Error Correction Coding (ECC) is adversely affected by the presence of long and dominant error events. Unlike conventional post-processing methods, which are usually tailored to a specific set of dominant error events, or joint modulation-code trellis architectures, which operate on larger state spaces at the expense of increased implementation complexity, the proposed detector does not use any a priori information about the error event distributions and operates on a reduced-state trellis. We present the pre-ECC bit error rate performance as well as the post-ECC codeword failure rates of the proposed detector, using a perfect detection scenario as well as practical detection codes as the EDCs, which are not essential to the overall design. Furthermore, it is observed that the proposed algorithm does not introduce new error events.
Simulation results show that the proposed algorithm gives improved bit error
and post ECC codeword failure rates at the expense of some increase in
complexity.
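The core list-Viterbi idea, ranking the N globally best paths rather than keeping only the single survivor per state, can be sketched on a tiny trellis. The code below is a generic parallel list-Viterbi search (costs, states, and trellis are made-up illustrations); the paper's noise prediction and EDC-driven state-metric updates are omitted.

```python
def list_viterbi(n_best, start_costs, trans_costs, steps):
    """Rank the N globally best state paths through a small trellis.

    start_costs[s]    -- cost of starting in state s
    trans_costs[s][t] -- cost of moving from state s to state t
    steps             -- number of transitions to take

    Each state keeps its N best (cost, path) hypotheses per stage, so
    the final merge is guaranteed to contain the N globally best paths.
    """
    n_states = len(start_costs)
    lists = [[(start_costs[s], (s,))] for s in range(n_states)]
    for _ in range(steps):
        new_lists = []
        for t in range(n_states):
            cands = [(c + trans_costs[s][t], path + (t,))
                     for s in range(n_states)
                     for (c, path) in lists[s]]
            new_lists.append(sorted(cands)[:n_best])
        lists = new_lists
    merged = sorted(c for per_state in lists for c in per_state)
    return merged[:n_best]

# 2-state toy trellis; state 0 is cheap to stay in.
start = [0.0, 1.0]
trans = [[0.1, 1.0],
         [1.0, 0.5]]
ranked = list_viterbi(n_best=3, start_costs=start, trans_costs=trans, steps=2)
```

With N = 1 this reduces to the ordinary Viterbi algorithm; the rank-ordered list is what the outer EDC check consults when the top candidate fails.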
|
1307.5910 | How to minimize the energy consumption in mobile ad-hoc networks | cs.AI cs.NI | In this work, we are interested in the problem of energy management in Mobile Ad-hoc Networks (MANETs). Solving and optimizing this problem helps users operate their devices so as to minimize battery power consumption. In this framework, we propose a model of the MANET in the form of a Constraint Optimization Problem, called COMANET. Then, with the objective of minimizing battery power consumption, we present an approach, called MANED, based on an adaptation of the A* algorithm to the MANET problem. Finally, we present experimental results showing the utility of this approach.
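The abstract does not detail MANED's adaptation, so the following is only a generic A* search over a graph whose edge weights stand in for per-hop transmission energies; the five-node topology, energies, and heuristic values are hypothetical.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search over a weighted graph.

    graph[u] -- list of (v, cost) edges; cost = energy to transmit u -> v
    h[u]     -- admissible heuristic: lower bound on remaining energy
    Returns (total_cost, path), or (inf, []) if the goal is unreachable.
    """
    frontier = [(h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h[nxt], ng, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical 5-node MANET; edge weights are transmission energies.
graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [("E", 1.0)],
}
h = {"A": 3.0, "B": 2.0, "C": 1.0, "D": 1.0, "E": 0.0}
cost, path = a_star(graph, h, "A", "E")
```

With an admissible heuristic, the first time the goal is popped the minimum-energy route has been found; with h set to zero everywhere this degenerates to Dijkstra's algorithm.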
|
1307.5934 | A Near-Optimal Dynamic Learning Algorithm for Online Matching Problems
with Concave Returns | cs.DS cs.LG math.OC | We consider an online matching problem with concave returns. This problem is
a significant generalization of the Adwords allocation problem and has vast
applications in online advertising. In this problem, a sequence of items arrive
sequentially and each has to be allocated to one of the bidders, who bid a
certain value for each item. At each time, the decision maker has to allocate
the current item to one of the bidders without knowing the future bids and the
objective is to maximize the sum of some concave functions of each bidder's
aggregate value. In this work, we propose an algorithm that achieves
near-optimal performance for this problem when the bids arrive in a random
order and the input data satisfies certain conditions. The key idea of our
algorithm is to learn the input data pattern dynamically: we solve a sequence
of carefully chosen partial allocation problems and use their optimal solutions
to assist with future decisions. Our analysis belongs to the primal-dual paradigm; however, the nonlinearity of the objective function and the dynamic nature of the algorithm make our analysis quite distinctive.
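For concreteness, the sketch below implements only the naive greedy baseline for this problem, not the paper's dynamic learning algorithm: each arriving item goes to the bidder whose aggregate value gains the most under a concave utility (here sqrt, an illustrative choice).

```python
import math

def greedy_concave_matching(arrivals, n_bidders, u=math.sqrt):
    """Allocate each arriving item to the bidder whose aggregate value
    gains the most under the concave utility u.

    arrivals -- list of bid vectors; arrivals[t][i] is bidder i's bid
                for item t.
    Returns (assignment list, final objective sum_i u(value_i)).
    """
    value = [0.0] * n_bidders
    assignment = []
    for bids in arrivals:
        # Marginal gain of giving this item to bidder i.
        gains = [u(value[i] + bids[i]) - u(value[i]) for i in range(n_bidders)]
        best = max(range(n_bidders), key=lambda i: gains[i])
        value[best] += bids[best]
        assignment.append(best)
    return assignment, sum(u(v) for v in value)

arrivals = [[1.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
assign, obj = greedy_concave_matching(arrivals, n_bidders=2)
```

Note how concavity steers the third item to bidder 1 even though bidder 0 bids the same amount: the diminishing marginal utility of bidder 0's already-large aggregate is exactly what the paper's primal-dual analysis has to contend with.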
|
1307.5942 | A unified modeling approach for the static-dynamic uncertainty strategy
in stochastic lot-sizing | math.OC cs.SY math.PR | In this paper, we develop mixed integer linear programming models to compute
near-optimal policy parameters for the non-stationary stochastic lot sizing
problem under Bookbinder and Tan's static-dynamic uncertainty strategy. Our
models build on piecewise linear upper and lower bounds of the first order loss
function. We discuss different formulations of the stochastic lot sizing
problem, in which the quality of service is captured by means of backorder
penalty costs, non-stockout probability, or fill rate constraints. These models
can be easily adapted to operate in settings in which unmet demand is
backordered or lost. The proposed approach has a number of advantages with
respect to existing methods in the literature: it enables seamless modelling of
different variants of the above problem, which have been previously tackled via
ad-hoc solution methods; and it produces an accurate estimation of the expected
total cost, expressed in terms of upper and lower bounds. Our computational
study demonstrates the effectiveness and flexibility of our models.
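The piecewise linear bounds exploit the convexity of the first-order loss function. For a standard normal demand, a fact the sketch below relies on, the loss is L(q) = E[(X - q)^+] = phi(q) - q(1 - Phi(q)) with derivative Phi(q) - 1, so chords over breakpoints give an upper bound and tangents a lower bound; the breakpoints chosen here are illustrative.

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loss(q):
    """First-order loss function E[(X - q)^+] for X ~ N(0, 1)."""
    return phi(q) - q * (1.0 - Phi(q))

def chord_upper(q, a, b):
    """Chord of the (convex) loss over [a, b]: an upper bound on loss(q)."""
    t = (q - a) / (b - a)
    return (1.0 - t) * loss(a) + t * loss(b)

def tangent_lower(q, x0):
    """Tangent at x0: a lower bound, since loss'(x) = Phi(x) - 1."""
    return loss(x0) + (Phi(x0) - 1.0) * (q - x0)

q = 0.5
lower = max(tangent_lower(q, x0) for x0 in (-1.0, 0.0, 1.0))
upper = chord_upper(q, 0.0, 1.0)
```

In the MILP models these bounds become linear constraints, so the expected total cost is bracketed between the solutions of the lower- and upper-bounding programs.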
|
1307.5944 | Online Optimization in Dynamic Environments | stat.ML cs.LG math.OC | High-velocity streams of high-dimensional data pose significant "big data"
analysis challenges across a range of applications and settings. Online
learning and online convex programming play a significant role in the rapid
recovery of important or anomalous information from these large datastreams.
While recent advances in online learning have led to novel and rapidly
converging algorithms, these methods are unable to adapt to nonstationary
environments arising in real-world problems. This paper describes a dynamic
mirror descent framework which addresses this challenge, yielding low
theoretical regret bounds and accurate, adaptive, and computationally efficient
algorithms which are applicable to broad classes of problems. The methods are
capable of learning and adapting to an underlying and possibly time-varying
dynamical model. Empirical results in the context of dynamic texture analysis,
solar flare detection, sequential compressed sensing of a dynamic scene,
traffic surveillance, tracking self-exciting point processes, and network
behavior in the Enron email corpus support the core theoretical findings.
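In the Euclidean, squared-loss case, the dynamic mirror descent update reduces to a gradient step followed by a pass through the assumed dynamical model. The toy below (scalar state, known linear drift, all values illustrative) shows the key property: when the learner's dynamics match the environment's drift, the tracking loss stays at essentially zero even though the target never stops moving.

```python
def dynamic_mirror_descent(observations, drift, step=0.5):
    """Track a drifting parameter with a Euclidean dynamic mirror
    descent update: take a gradient step on the squared loss, then
    advance the estimate through the known dynamics."""
    theta = 0.0
    losses = []
    for y in observations:
        losses.append((theta - y) ** 2)
        grad = 2.0 * (theta - y)
        theta = drift(theta - step * grad)   # gradient step, then dynamics
    return theta, losses

# The target drifts upward by 0.1 per round; the learner knows the drift.
target = [0.1 * t for t in range(50)]
theta, losses = dynamic_mirror_descent(target, drift=lambda x: x + 0.1)
```

With a mismatched or absent dynamics step, the same learner would lag the target by a constant offset each round; the paper's regret bounds quantify exactly this gap for general mirror maps and time-varying models.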
|
1307.5996 | Bayesian Fusion of Multi-Band Images | cs.CV physics.data-an stat.ME | In this paper, a Bayesian fusion technique for remotely sensed multi-band
images is presented. The observed images are related to the high spectral and
high spatial resolution image to be recovered through physical degradations,
e.g., spatial and spectral blurring and/or subsampling defined by the sensor
characteristics. The fusion problem is formulated within a Bayesian estimation
framework. An appropriate prior distribution exploiting geometrical
consideration is introduced. To compute the Bayesian estimator of the scene of
interest from its posterior distribution, a Markov chain Monte Carlo algorithm
is designed to generate samples asymptotically distributed according to the
target distribution. To efficiently sample from this high-dimensional
distribution, a Hamiltonian Monte Carlo step is introduced in the Gibbs
sampling strategy. The efficiency of the proposed fusion method is evaluated
with respect to several state-of-the-art fusion techniques. In particular, low
spatial resolution hyperspectral and multispectral images are fused to produce
a high spatial resolution hyperspectral image.
|
1307.6008 | Numerical Methods for Coupled Reconstruction and Registration in Digital
Breast Tomosynthesis | cs.CV physics.med-ph | Digital Breast Tomosynthesis (DBT) provides an insight into the fine details
of normal fibroglandular tissues and abnormal lesions by reconstructing a
pseudo-3D image of the breast. In this respect, DBT overcomes a major
limitation of conventional X-ray mammography by reducing the confounding
effects caused by the superposition of breast tissue. In a breast cancer
screening or diagnostic context, a radiologist is interested in detecting
change, which might be indicative of malignant disease. To help automate this
task image registration is required to establish spatial correspondence between
time points. Typically, images, such as MRI or CT, are first reconstructed and
then registered. This approach can be effective if reconstructing using a
complete set of data. However, for ill-posed, limited-angle problems such as
DBT, estimating the deformation is complicated by the significant artefacts
associated with the reconstruction, leading to severe inaccuracies in the
registration. This paper presents a mathematical framework, which couples the
two tasks and jointly estimates both image intensities and the parameters of a
transformation.
We evaluate our methods using various computational digital phantoms,
uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare
both iterative and simultaneous methods to the conventional, sequential method
using an affine transformation model. We show that jointly estimating image
intensities and parametric transformations gives superior results with respect
to reconstruction fidelity and registration accuracy. Also, we incorporate a
non-rigid B-spline transformation model into our simultaneous method. The
results demonstrate a visually plausible recovery of the deformation with
preservation of the reconstruction fidelity.
|
1307.6018 | Beyond the entropy power inequality, via rearrangements | cs.IT math.FA math.IT math.PR | A lower bound on the R\'enyi differential entropy of a sum of independent
random vectors is demonstrated in terms of rearrangements. For the special case
of Boltzmann-Shannon entropy, this lower bound is better than that given by the
entropy power inequality. Several applications are discussed, including a new
proof of the classical entropy power inequality and an entropy inequality
involving symmetrization of L\'evy processes.
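For context, the classical Boltzmann-Shannon entropy power inequality that the rearrangement bound improves upon states that, for independent random vectors $X, Y$ in $\mathbb{R}^n$ with densities,

```latex
N(X+Y) \;\ge\; N(X) + N(Y),
\qquad\text{where}\qquad
N(X) \;=\; \frac{1}{2\pi e}\, e^{2h(X)/n},
```

with $h$ the differential entropy and equality when $X$ and $Y$ are Gaussian with proportional covariance matrices.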
|
1307.6023 | The Use of Cuckoo Search in Estimating the Parameters of Software
Reliability Growth Models | cs.AI cs.SE | This work aims to investigate the reliability of software products as an
important attribute of computer programs; it helps to decide the degree of
trustworthiness a program has in accomplishing its specific functions. This is
done using the Software Reliability Growth Models (SRGMs) through the
estimation of their parameters. The parameters are estimated in this work based
on the available failure data and with the search techniques of Swarm
Intelligence, namely, the Cuckoo Search (CS) due to its efficiency,
effectiveness and robustness. A number of SRGMs are studied, and the results are
compared to Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO)
and extended ACO. Results show that CS outperformed both PSO and ACO in finding
better parameters tested using identical datasets. It was sometimes
outperformed by the extended ACO. Also in this work, the percentages of
training data to testing data are investigated to show their impact on the
results.
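A minimal cuckoo search can be sketched as follows: Lévy-flight perturbations generate new nests, each is compared against a randomly chosen nest, and a fraction pa of the worst nests is abandoned each round. Here it fits the Goel-Okumoto SRGM mean value function m(t) = a(1 - e^{-bt}) to synthetic failure data; the data, parameter ranges, and step scaling are illustrative assumptions, not the paper's experimental setup.

```python
import math
import random

def go_mean(t, a, b):
    """Goel-Okumoto SRGM mean value function m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    a, b = params
    return sum((m - go_mean(t, a, b)) ** 2 for t, m in data)

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-distributed step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = abs(rng.gauss(0, 1)) or 1e-12
    return u / v ** (1 / beta)

def cuckoo_search(data, n_nests=15, iters=200, pa=0.25, seed=7):
    rng = random.Random(seed)
    nests = [(rng.uniform(1, 200), rng.uniform(0.01, 1)) for _ in range(n_nests)]
    fits = [sse(n, data) for n in nests]
    init_best = min(fits)
    for _ in range(iters):
        best = nests[fits.index(min(fits))]
        for i in range(n_nests):
            # Levy flight around nest i, biased relative to the best nest.
            a = nests[i][0] + 0.1 * levy_step(rng) * (nests[i][0] - best[0])
            b = nests[i][1] + 0.1 * levy_step(rng) * (nests[i][1] - best[1])
            cand = (max(a, 1e-6), min(max(b, 1e-6), 5.0))
            f = sse(cand, data)
            j = rng.randrange(n_nests)      # compare against a random nest
            if f < fits[j]:
                nests[j], fits[j] = cand, f
        # Abandon the worst pa fraction of nests and rebuild them randomly.
        order = sorted(range(n_nests), key=lambda k: fits[k], reverse=True)
        for k in order[: int(pa * n_nests)]:
            nests[k] = (rng.uniform(1, 200), rng.uniform(0.01, 1))
            fits[k] = sse(nests[k], data)
    i = fits.index(min(fits))
    return nests[i], fits[i], init_best

# Hypothetical failure data generated from m(t) = 100(1 - e^{-0.1 t}).
data = [(t, go_mean(t, 100.0, 0.1)) for t in range(1, 21)]
params, best_sse, init_sse = cuckoo_search(data)
```

The best nest is never abandoned and only ever replaced by a strictly better candidate, so the best SSE is monotone non-increasing over iterations.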
|
1307.6033 | Sparse Reconstruction-based Detection of Spatial Dimension Holes in
Cognitive Radio Networks | cs.IT cs.NI math.IT math.OC | In this paper, we investigate a spectrum sensing algorithm for detecting
spatial dimension holes in Multiple Inputs Multiple Outputs (MIMO)
transmissions for OFDM systems using Compressive Sensing (CS) tools. This
extends the energy detector to allow for detecting transmission opportunities
even if the band is already energy filled. We show that the task described
above is not performed efficiently by regular MIMO decoders (such as MMSE
decoder) due to possible sparsity in the transmit signal. Since CS
reconstruction tools take into account the sparsity order of the signal, they
are more efficient in detecting the activity of the users. Building on
successful activity detection by the CS detector, we show that the use of a
CS-aided MMSE decoder yields better performance than using either a CS-based
or an MMSE decoder alone. Simulations are conducted to verify the
gains from using the CS detector for primary user (PU) activity detection and
the performance gain of CS-aided MMSE decoding of the PU
information for future relaying.
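The reason sparsity-aware detection beats plain energy detection is easiest to see in a toy setting. The snippet below is a simplified stand-in for CS recovery, not the paper's detector: with an orthonormal dictionary, picking the k columns most correlated with the observation recovers the active "users" exactly; the scaled Hadamard dictionary and activity vector are illustrative.

```python
def correlate(col, y):
    return sum(c * v for c, v in zip(col, y))

def detect_active(A, y, k):
    """Pick the k dictionary columns most correlated with observation y.
    This is exact here only because the columns of A are orthonormal; a
    full CS recovery (e.g. OMP) would handle coherent dictionaries."""
    n_cols = len(A[0])
    scores = [abs(correlate([row[j] for row in A], y)) for j in range(n_cols)]
    ranked = sorted(range(n_cols), key=lambda j: scores[j], reverse=True)
    return sorted(ranked[:k])

# Orthonormal 4x4 dictionary (scaled Hadamard); 2 of 4 "users" active.
h = 0.5
A = [[h,  h,  h,  h],
     [h, -h,  h, -h],
     [h,  h, -h, -h],
     [h, -h, -h,  h]]
x = [0.0, 3.0, 0.0, 1.5]                      # sparse activity vector
y = [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]
active = detect_active(A, y, k=2)
```

An energy detector looking only at the total power of y would declare the whole band occupied; exploiting the sparsity of x reveals which dimensions are actually free, which is the spatial-hole idea in the abstract.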
|
1307.6041 | Quantum Optical Realization of Classical Linear Stochastic Systems | quant-ph cs.SY | The purpose of this paper is to show how a class of classical linear
stochastic systems can be physically implemented using quantum optical
components. Quantum optical systems typically have much higher bandwidth than
electronic devices, meaning faster response and processing times, and hence have
the potential for providing better performance than classical systems. A
procedure is provided for constructing the quantum optical realization. The
paper also describes the use of the quantum optical realization in a
measurement feedback loop. Some examples are given to illustrate the
application of the main results.
|