| id | title | categories | abstract |
|---|---|---|---|
1004.3257
|
Offline Handwriting Recognition using Genetic Algorithm
|
cs.CV
|
Handwriting recognition enables a person to scribble something on a piece of
paper and then convert it into text. In practice there are innumerable styles
in which a character may be written, and these styles can be combined to
generate still more styles. Even a small child who knows only the basic
styles in which a character can be written is able to recognize characters
written in styles intermediate between them or formed by their mixture. This
motivates the use of Genetic Algorithms for the problem. To demonstrate this,
we built a pool of character images and converted them to graphs. The graphs
of every character were intermixed to generate styles intermediate between
the styles of the parent characters. Character recognition then consisted of
matching the graph generated from an unknown character image against the
graphs generated by mixing. Using this method we achieved an accuracy of
98.44%.
|
1004.3260
|
Decision Support Systems (DSS) in Construction Tendering Processes
|
cs.AI
|
The successful execution of a construction project is heavily influenced by
making the right decisions during the tendering process. Managing tender
procedures is complex and uncertain, involving the coordination of many tasks
and individuals with different priorities and objectives. Bias and
inconsistent decisions are inevitable if the decision-making process depends
entirely on intuition, subjective judgement, or emotion. To support
transparent decisions and healthy competition in tendering, a flexible
guidance tool for decision support is needed. The aim of this paper is to
review current applications of Decision Support System (DSS) technology in
construction tendering processes. Current general tendering practices in
countries across different regions, such as the United States, Europe, the
Middle East, and Asia, are discussed comprehensively. Applications of
Web-based tendering processes are also summarised in terms of their
properties, and a summary of the components of a Decision Support System is
included. Furthermore, prior research on the implementation of DSS approaches
in tendering processes is discussed in detail, and current issues arising
from both paper-based and Web-based tendering processes are outlined.
|
1004.3272
|
Database Reverse Engineering based on Association Rule Mining
|
cs.DB
|
Maintaining a legacy database is a difficult task, especially when the system
documentation is poorly written or even missing. Database reverse engineering
is an attempt to recover the high-level conceptual design from existing
database instances. In this paper, we propose a technique to discover the
conceptual schema using association rule mining. The discovered schema
corresponds to normalization in third normal form, which is common practice
in many business organizations. Our algorithm also includes a rule-filtering
heuristic to address the exponential growth of discovered rules inherent in
the association mining technique.
|
1004.3273
|
Sampling and Recovery of Pulse Streams
|
cs.IT math.IT
|
Compressive Sensing (CS) is a new technique for the efficient acquisition of
signals, images, and other data that have a sparse representation in some
basis, frame, or dictionary. By sparse we mean that the N-dimensional basis
representation has just K<<N significant coefficients; in this case, the CS
theory maintains that just M = K log N random linear signal measurements will
both preserve all of the signal information and enable robust signal
reconstruction in polynomial time. In this paper, we extend the CS theory to
pulse stream data, which correspond to S-sparse signals/images that are
convolved with an unknown F-sparse pulse shape. Ignoring their convolutional
structure, a pulse stream signal is K=SF sparse. Such signals figure
prominently in a number of applications, from neuroscience to astronomy. Our
specific contributions are threefold. First, we propose a pulse stream signal
model and show that it is equivalent to an infinite union of subspaces. Second,
we derive a lower bound on the number of measurements M required to preserve
the essential information present in pulse streams. The bound is linear in the
total number of degrees of freedom S + F, which is significantly smaller than
the naive bound based on the total signal sparsity K=SF. Third, we develop an
efficient signal recovery algorithm that infers both the shape of the impulse
response as well as the locations and amplitudes of the pulses. The algorithm
alternately estimates the pulse locations and the pulse shape in a manner
reminiscent of classical deconvolution algorithms. Numerical experiments on
synthetic and real data demonstrate the advantages of our approach over
standard CS.
|
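The claim that on the order of K log N random linear measurements suffice for robust recovery can be illustrated with a generic compressed-sensing sketch. This is not the paper's pulse-stream algorithm: it recovers a plain K-sparse vector with textbook Orthogonal Matching Pursuit, and the constant multiplying K log N is an assumption chosen for comfortable recovery.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily recover a K-sparse x from y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, K = 256, 8
M = int(np.ceil(4 * K * np.log(N)))  # a few K log N measurements (constant assumed)

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)  # random Gaussian measurement matrix
y = A @ x
x_hat = omp(A, y, K)
print(np.linalg.norm(x - x_hat))  # reconstruction error is tiny
```

The pulse-stream result in the abstract sharpens this: exploiting the convolutional structure brings the measurement count down from order SF to order S + F.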
1004.3274
|
A New Approach to Keyphrase Extraction Using Neural Networks
|
cs.IR
|
Keyphrases provide a simple way of describing a document, giving the reader
some clues about its contents. Keyphrases can be useful in various
applications such as retrieval engines, browsing interfaces, thesaurus
construction, and text mining. There are also other tasks for which
keyphrases are useful, as we discuss in this paper. This paper describes a
neural-network-based approach to keyphrase extraction from scientific
articles. Our results show that the proposed method performs better than some
state-of-the-art keyphrase extraction approaches.
|
1004.3276
|
Color Image Compression Based On Wavelet Packet Best Tree
|
cs.CV
|
In image compression, the aim is to reduce the number of bits required to
represent an image by removing spatial and spectral redundancies. Recently
the discrete wavelet transform and wavelet packets have emerged as popular
techniques for image compression, the wavelet transform being one of the
major processing components. The result of the compression changes with the
basis and the tap of the wavelet used, and proper selection of the mother
wavelet on the basis of the nature of the images improves the quality as well
as the compression ratio remarkably. We suggest a novel technique based on
the wavelet packet best tree using threshold entropy with enhanced run-length
encoding. This method reduces the time complexity of wavelet packet
decomposition, since the complete tree is not decomposed: our algorithm
selects only the sub-bands that contain significant information based on
threshold entropy. The proposed enhanced run-length encoding technique
provides better results than standard RLE, and the results prove better when
compared with JPEG-2000.
|
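The enhanced run-length encoding is not specified in the abstract; as a baseline for comparison, plain RLE on a thresholded coefficient sequence (dominated by zero runs) can be sketched as follows. The coefficient values are illustrative, not from the paper.

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return [(v, c) for v, c in encoded]

def rle_decode(pairs):
    """Invert rle_encode: expand each (value, count) pair back into a run."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

coeffs = [0, 0, 0, 0, 5, 5, 0, 0, 0, 7]  # e.g. thresholded wavelet coefficients
packed = rle_encode(coeffs)
print(packed)  # [(0, 4), (5, 2), (0, 3), (7, 1)]
assert rle_decode(packed) == coeffs
```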
1004.3282
|
Wireless Network Code Design and Performance Analysis using
Diversity-Multiplexing Tradeoff
|
cs.IT math.IT
|
Network coding and cooperative communication have received considerable
attention from the research community recently in order to mitigate the adverse
effects of fading in wireless transmissions and at the same time to achieve
high throughput and better spectral efficiency. In this work, we design and
analyze deterministic and random network coding schemes for a cooperative
communication setup with multiple sources and destinations. We show that our
schemes outperform conventional cooperation in terms of the
diversity-multiplexing tradeoff (DMT). Specifically, they achieve the full
diversity order at the expense of a slightly reduced multiplexing rate. We
establish the link between the parity-check matrix for a $(N+M,M,N+1)$
systematic MDS code and the network coding coefficients in a cooperative
communication system of $N$ source-destination pairs and $M$ relays. We present
two ways to generate the network coding matrix: using the Cauchy matrices and
the Vandermonde matrices, and establish that they both offer the maximum
diversity order.
|
1004.3332
|
Estimation in Gaussian Noise: Properties of the Minimum Mean-Square
Error
|
cs.IT math.IT
|
Consider the minimum mean-square error (MMSE) of estimating an arbitrary
random variable from its observation contaminated by Gaussian noise. The MMSE
can be regarded as a function of the signal-to-noise ratio (SNR) as well as a
functional of the input distribution (of the random variable to be estimated).
It is shown that the MMSE is concave in the input distribution at any given
SNR. For a given input distribution, the MMSE is found to be infinitely
differentiable at all positive SNR, and in fact a real analytic function in SNR
under mild conditions. The key to these regularity results is that the
posterior distribution conditioned on the observation through Gaussian channels
always decays at least as quickly as some Gaussian density. Furthermore, simple
expressions for the first three derivatives of the MMSE with respect to the SNR
are obtained. It is also shown that, as functions of the SNR, the curves for
the MMSE of a Gaussian input and that of a non-Gaussian input cross at most
once over all SNRs. These properties lead to simple proofs of the facts that
Gaussian inputs achieve both the secrecy capacity of scalar Gaussian wiretap
channels and the capacity of scalar Gaussian broadcast channels, as well as a
simple proof of the entropy power inequality in the special case where one of
the variables is Gaussian.
|
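The view of MMSE as a function of SNR can be made concrete for the simplest non-Gaussian input, an equiprobable binary X. The following numeric sketch is an illustration, not part of the paper's proofs: for Y = sqrt(snr) X + N with N standard Gaussian, the posterior mean is tanh(sqrt(snr) y), so MMSE(snr) = 1 - E[tanh^2(sqrt(snr) Y)], evaluated here by simple quadrature.

```python
import numpy as np

def mmse_binary(snr):
    """MMSE of estimating X in {-1, +1} (equiprobable) from Y = sqrt(snr)*X + N,
    with N ~ N(0,1). By symmetry it suffices to average under X = +1."""
    s = np.sqrt(snr)
    y = np.linspace(-12.0, 12.0, 20001)
    dy = y[1] - y[0]
    phi = np.exp(-(y - s) ** 2 / 2) / np.sqrt(2 * np.pi)  # density of Y given X=+1
    return 1.0 - float(np.sum(np.tanh(s * y) ** 2 * phi) * dy)

snrs = [0.0, 1.0, 4.0, 10.0]
vals = [mmse_binary(s) for s in snrs]
print(vals)  # starts at 1 (no information at snr = 0) and decreases toward 0
```

The smooth, strictly decreasing curve produced here is consistent with the regularity (infinite differentiability in SNR) established in the abstract.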
1004.3334
|
Generation and Interpretation of Temporal Decision Rules
|
cs.LG
|
We present a solution to the problem of understanding a system that produces
a sequence of temporally ordered observations. Our solution is based on
generating and interpreting a set of temporal decision rules. A temporal
decision rule is a decision rule that can be used to predict or retrodict the
value of a decision attribute, using condition attributes that are observed at
times other than the decision attribute's time of observation. A rule set,
consisting of a set of temporal decision rules with the same decision
attribute, can be interpreted by our Temporal Investigation Method for
Enregistered Record Sequences (TIMERS) to signify an instantaneous, an acausal
or a possibly causal relationship between the condition attributes and the
decision attribute. We show the effectiveness of our method by describing a
number of experiments with both synthetic and real temporal data.
|
1004.3361
|
From open quantum systems to open quantum maps
|
math.AP cs.LG math-ph math.DS math.MP nlin.CD
|
For a class of quantized open chaotic systems satisfying a natural dynamical
assumption, we show that the study of the resolvent, and hence of scattering
and resonances, can be reduced to the study of a family of open quantum maps,
that is of finite dimensional operators obtained by quantizing the Poincar\'e
map associated with the flow near the set of trapped trajectories.
|
1004.3371
|
Improving Update Summarization by Revisiting the MMR Criterion
|
cs.IR
|
This paper describes a method for multi-document update summarization that
relies on a double maximization criterion. A Maximal Marginal Relevance-like
criterion, modified and called Smmr, is used to select sentences that are
close to the topic and, at the same time, distant from sentences in
already-read documents. Summaries are then generated by assembling the highly
ranked material and applying some rule-based linguistic post-processing to
reduce length and maintain coherence. Through participation in the Text
Analysis Conference (TAC) 2008 evaluation campaign, we have shown that our
method achieves promising results.
|
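The double maximization behind an MMR-style criterion can be sketched as greedy selection with a relevance term and a redundancy penalty. The bag-of-words cosine similarity, the lambda weight, and the toy sentences below are all assumptions for illustration; the actual Smmr criterion differs in its details.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def mmr_select(sentences, topic, history, lam=0.7, k=2):
    """Greedy MMR-style selection: close to the topic, far from already-read
    material (history) and from sentences already chosen."""
    bows = [Counter(s.lower().split()) for s in sentences]
    topic_bow = Counter(topic.lower().split())
    hist_bows = [Counter(h.lower().split()) for h in history]
    chosen = []
    while len(chosen) < min(k, len(sentences)):
        best, best_score = None, float("-inf")
        for i, bow in enumerate(bows):
            if i in chosen:
                continue
            seen = hist_bows + [bows[j] for j in chosen]
            redundancy = max((cosine(bow, s) for s in seen), default=0.0)
            score = lam * cosine(bow, topic_bow) - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return [sentences[i] for i in chosen]

sentences = ["solar power is growing", "football is popular", "rain fell today"]
picked = mmr_select(sentences, "solar power", history=[], lam=0.7, k=2)
print(picked[0])  # the sentence closest to the topic is selected first
```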
1004.3372
|
Adaptive Single-Trial Error/Erasure Decoding for Binary Codes
|
cs.IT math.IT
|
We investigate adaptive single-trial error/erasure decoding of binary codes
whose decoder is able to correct e errors and t erasures if le + t <= d - 1,
where d is the minimum Hamming distance of the code and 1 < l <= 2 is the
tradeoff parameter between errors and erasures. The error/erasure decoder
makes it possible to exploit soft information by treating a set of the most
unreliable received symbols as erasures. The obvious question is how this
erasing should be performed, i.e. how to determine the unreliable symbols
that must be erased to obtain the smallest possible residual codeword error
probability. In a previous paper, we answered this question for the case of
fixed erasing, where only the channel state and not the individual symbol
reliabilities is taken into consideration. In this paper, we address the
adaptive case, where the optimal erasing strategy is determined for every
given received vector.
|
1004.3390
|
Publishing Math Lecture Notes as Linked Data
|
cs.DL cs.AI math.HO
|
We mark up a corpus of LaTeX lecture notes semantically and expose them as
Linked Data in XHTML+MathML+RDFa. Our application makes the resulting documents
interactively browsable for students. Our ontology helps to answer queries from
students and lecturers, and paves the path towards an integration of our corpus
with external sites.
|
1004.3408
|
An Energy Efficient Scheme for Data Gathering in Wireless Sensor
Networks Using Particle Swarm Optimization
|
cs.NI cs.DC cs.NE
|
This paper has been withdrawn by the author due to a crucial sign error in
equation 1
|
1004.3427
|
An Achievability Scheme for the Compound Channel with State Noncausally
Available at the Encoder
|
cs.IT math.IT
|
A new achievability scheme for the compound channel with discrete memoryless
(DM) state noncausally available at the encoder is established. Achievability
is proved using superposition coding, Marton coding, joint typicality encoding,
and indirect decoding. The scheme is shown to achieve a strictly higher rate
than the straightforward extension of the Gelfand-Pinsker coding scheme for a
single DMC with DM state, and is optimal for some classes of channels.
|
1004.3460
|
PCA 4 DCA: The Application Of Principal Component Analysis To The
Dendritic Cell Algorithm
|
cs.AI cs.NE
|
As one of the newest members of the field of artificial immune systems (AIS),
the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural
dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training
data; instead, domain or expert knowledge is required to predetermine the
mapping between input signals from a particular instance and the three
categories used by the DCA. This data preprocessing phase has been criticised
for manually over-fitting the data to the algorithm, which is undesirable.
Therefore, in this paper we attempt to ascertain whether it is possible to
use principal component analysis (PCA) techniques to automatically categorise
input data while still generating useful and accurate classification results.
The integrated system is tested with a biometrics dataset for stress
recognition of automobile drivers. The experimental results show that the
application of PCA to the DCA for the purpose of automated data preprocessing
is successful.
|
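The automated categorisation step rests on standard PCA. A minimal sketch of PCA via SVD of the centred data matrix is shown below; the synthetic low-rank dataset is an assumption for illustration, not the drivers' biometrics data used in the paper.

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components (SVD of centred data)."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores in the k-dim subspace

rng = np.random.default_rng(1)
# 200 samples of 5 correlated measurements (rank-2 signal plus small noise)
latent = rng.standard_normal((200, 2))
X = latent @ rng.standard_normal((2, 5)) + 0.01 * rng.standard_normal((200, 5))
Z = pca_reduce(X, 3)
print(Z.shape)  # (200, 3)
```

Feeding such reduced components to a downstream classifier is the kind of automated preprocessing the abstract evaluates against hand-crafted signal mappings.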
1004.3478
|
Learning Better Context Characterizations: An Intelligent Information
Retrieval Approach
|
cs.IR cs.AI
|
This paper proposes an incremental method that can be used by an intelligent
system to learn better descriptions of a thematic context. The method starts
with a small number of terms selected from a simple description of the topic
under analysis and uses this description as the initial search context. Using
these terms, a set of queries is built and submitted to a search engine. New
documents and terms are used to refine the learned vocabulary. Evaluations
performed on a large number of topics indicate that the learned vocabulary is
much more effective than the original one at the time of constructing queries
to retrieve relevant material.
|
1004.3517
|
Lower bounds for the error decay incurred by coarse quantization schemes
|
cs.IT math.IT
|
Several analog-to-digital conversion methods for bandlimited signals used in
applications, such as Sigma Delta quantization schemes, employ coarse
quantization coupled with oversampling. The standard mathematical model for the
error accrued from such methods measures the performance of a given scheme by
the rate at which the associated reconstruction error decays as a function of
the oversampling ratio L > 1. It was recently shown that exponential accuracy
of the form O(2^{-rL}) can be achieved by appropriate one-bit Sigma Delta
modulation schemes. However, the best known achievable rate constants r in this
setting differ significantly from the general information theoretic lower
bound. In this paper, we provide the first lower bound specific to coarse
quantization, thus narrowing the gap between existing upper and lower bounds.
In particular, our results imply a quantitative correspondence between the
maximal signal amplitude and the best possible error decay rate. Our method
draws from the theory of large deviations.
|
1004.3524
|
From Local Measurements to Network Spectral Properties: Beyond Degree
Distributions
|
math.OC cs.DM cs.MA
|
It is well-known that the behavior of many dynamical processes running on
networks is intimately related to the eigenvalue spectrum of the network. In
this paper, we address the problem of inferring global information regarding
the eigenvalue spectrum of a network from a set of local samples of its
structure. In particular, we find explicit relationships between the so-called
spectral moments of a graph and the presence of certain small subgraphs, also
called motifs, in the network. Since the eigenvalues of the network have a
direct influence on the network dynamical behavior, our result builds a bridge
between local network measurements (i.e., the presence of small subgraphs) and
global dynamical behavior (via the spectral moments). Furthermore, based on our
result, we propose a novel decentralized scheme to compute the spectral moments
of a network by aggregating local measurements of the network topology. Our
final objective is to understand the relationships between the behavior of
dynamical processes taking place in a large-scale complex network and its local
topological properties.
|
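The link between spectral moments and motif counts can be checked directly on a toy graph: for a simple undirected graph, tr(A^2) counts each edge twice and tr(A^3) counts each triangle six times (closed walks of length 2 and 3). The example graph below is an assumption for illustration.

```python
import numpy as np
from itertools import combinations

# Triangle on vertices 0,1,2 plus a pendant vertex 3
n = 4
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Spectral moments m_k = (1/n) tr(A^k)
m2 = np.trace(A @ A) / n
m3 = np.trace(A @ A @ A) / n

# Motif counts obtained purely from local structure
num_edges = len(edges)
num_triangles = sum(
    1 for t in combinations(range(n), 3)
    if all(A[u, v] for u, v in combinations(t, 2))
)

print(m2, 2 * num_edges / n)      # tr(A^2) = 2 |E|
print(m3, 6 * num_triangles / n)  # tr(A^3) = 6 (# triangles)
```

Aggregating such local motif counts across a network is, in spirit, how the decentralized scheme in the abstract recovers global spectral moments.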
1004.3527
|
On Asymptotic Consensus Value in Directed Random Networks
|
cs.MA math.OC
|
We study the asymptotic properties of distributed consensus algorithms over
switching directed random networks. More specifically, we focus on consensus
algorithms over independent and identically distributed, directed random
graphs, where each agent can communicate with any other agent with some
exogenously specified probability. While different aspects of consensus
algorithms over random switching networks have been widely studied, a complete
characterization of the distribution of the asymptotic value for general
\textit{asymmetric} random consensus algorithms remains an open problem. In
this paper, we derive closed-form expressions for the mean and an upper bound
for the variance of the asymptotic consensus value, when the underlying network
evolves according to an i.i.d. \textit{directed} random graph process. We also
provide numerical simulations that illustrate our results.
|
1004.3549
|
Signature Region of Interest using Auto cropping
|
cs.CV
|
A new approach to signature region-of-interest pre-processing is presented.
It uses a new auto-cropping preparation based on the image content, where the
pixel intensity values drive the cropping. This approach offers both the
possibility of improving the performance of security systems based on
signature images and the ability to use only the region of interest of the
image to suit the layout design of biometric systems. Underlying the approach
is a novel segmentation method which identifies the exact foreground region
of the signature for feature extraction. Evaluation results of this approach
show encouraging prospects: it eliminates the need for false-region
isolation, reduces the time cost associated with detecting false signature
points, and addresses enhancement issues. A further contribution of this
paper is an automated cropping stage in bio-secure based systems.
|
1004.3557
|
Neuroevolutionary optimization
|
cs.NE
|
This paper presents an application of evolutionary search procedures to
artificial neural networks. Here, we can distinguish among three kinds of
evolution in artificial neural networks, i.e. the evolution of connection
weights, of architectures, and of learning rules. We review each kind of
evolution in detail and analyse critical issues related to different
evolutions. This article concentrates on finding the suitable way of using
evolutionary algorithms for optimizing the artificial neural network
parameters.
|
1004.3565
|
An Optimized Weighted Association Rule Mining On Dynamic Content
|
cs.DB
|
Association rule mining aims to explore large transaction databases for
association rules. The classical Association Rule Mining (ARM) model assumes
that all items have the same significance, without taking their weight into
account; it also ignores differences between transactions and the importance
of each itemset. Weighted Association Rule Mining (WARM), by contrast, does
not work on databases with only binary attributes: it makes use of the
importance of each itemset and transaction, and requires each item to be
given a weight reflecting its importance to the user. The weights may
correspond to special promotions on some products, or to the profitability of
different items. This research work first focuses on weight assignment based
on a directed graph whose nodes denote items and whose links represent
association rules. A generalized version of HITS is applied to the graph to
rank the items, where all nodes and links are allowed to have weights. We
then enhance the HITS algorithm by developing an online eigenvector
calculation method that can compute the results of mutual-reinforcement
voting under frequent updates. For example, in a share market, share prices
may go up or down, so the market must be watched carefully and the
association rule mining has to produce the items that have undergone frequent
changes. This is done by estimating the upper bound of the perturbation and
postponing updates whenever possible. Finally, we prove that the enhanced
algorithm is more efficient than the original HITS in the context of dynamic
data.
|
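The mutual-reinforcement voting referred to above is the classical HITS iteration: authority scores flow from hubs and back again until the ranking stabilises. This is a minimal sketch of basic unweighted HITS on a toy item graph, not the paper's weighted, online-updating variant.

```python
import numpy as np

def hits(adj, iters=100):
    """Basic HITS: hub/authority scores by power iteration on adjacency matrix."""
    n = adj.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(iters):
        auth = adj.T @ hub  # good authorities are pointed to by good hubs
        hub = adj @ auth    # good hubs point to good authorities
        auth /= np.linalg.norm(auth)
        hub /= np.linalg.norm(hub)
    return hub, auth

# Toy item graph: items 0 and 1 both "vote" for item 2 via association links
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
hub, auth = hits(adj)
print(auth)  # item 2 receives the highest authority score
```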
1004.3568
|
Integrating User's Domain Knowledge with Association Rule Mining
|
cs.DB cs.AI
|
This paper presents a variation of the Apriori algorithm that includes the
role of a domain expert to guide and speed up the overall knowledge discovery
task. Usually, the user is interested in finding relationships between
certain attributes rather than across the whole dataset. Moreover, the user
can help the mining algorithm select the target database, which in turn takes
less time to find the desired association rules. Variants of the standard
Apriori and Interactive
Apriori algorithms have been run on artificial datasets. The results show that
incorporating user's preference in selection of target attribute helps to
search the association rules efficiently both in terms of space and time.
|
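The idea of steering the mining toward attributes the user cares about can be sketched with level-wise Apriori plus an optional target filter on the reported itemsets. The transaction data and the filtering detail are assumptions for illustration; this is not the Interactive Apriori algorithm evaluated in the paper.

```python
def apriori_targeted(transactions, min_support, target=None):
    """Level-wise frequent itemset mining; if `target` is given, only itemsets
    containing that attribute are reported, mimicking user-guided mining."""
    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k, current = 1, [frozenset([i]) for i in items]
    while current:
        survivors = [s for s in current if support(s) >= min_support]
        for s in survivors:
            if target is None or target in s:
                frequent[s] = support(s)
        # candidate generation: join frequent k-itemsets sharing k-1 items
        current = list({a | b for a in survivors for b in survivors
                        if len(a | b) == k + 1})
        k += 1
    return frequent

transactions = [frozenset(t) for t in [
    {"bread", "milk"}, {"bread", "butter"},
    {"bread", "milk", "butter"}, {"milk", "butter"},
]]
result = apriori_targeted(transactions, min_support=0.5, target="milk")
print(sorted(tuple(sorted(s)) for s in result))  # only milk-related itemsets
```

Note that the target only filters what is reported; candidate generation still uses all frequent itemsets, so the Apriori downward-closure property is preserved.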
1004.3571
|
Computer Aided Design Modeling for Heterogeneous Objects
|
cs.CE
|
Heterogeneous object design has been an active research area in recent years.
Conventional CAD modeling approaches only provide the geometry and topology
of the object, but do not contain any information about its materials, and so
cannot be used for the fabrication of heterogeneous objects (HO) through
rapid prototyping. Current research focuses on computer-aided design issues
in heterogeneous object design. A new CAD modeling approach is proposed to
integrate material information into geometric regions and thus model the
material distributions in the heterogeneous object. Gradient references are
used to represent complex heterogeneous objects that combine geometric
intricacy with accurate material distributions. The gradient references allow
flexible manipulation and control of heterogeneous objects, guaranteeing
local control over the gradient regions of the developed objects. A
systematic approach to data flow, processing, computer visualization, and
slicing of heterogeneous objects for rapid prototyping is also presented.
|
1004.3629
|
Simultaneous Bayesian inference of motion velocity fields and
probabilistic models in successive video-frames described by spatio-temporal
MRFs
|
cs.CV
|
We numerically investigate a mean-field Bayesian approach with the assistance
of the Markov chain Monte Carlo method to estimate motion velocity fields and
probabilistic models simultaneously in consecutive digital images described by
spatio-temporal Markov random fields. Preliminary to construction of our
procedure, we find that the mean-field variables diverge during the iteration
due to an improper normalization factor of the regularization terms appearing
in the posterior. To avoid this difficulty, we rescale the regularization term by
introducing a scaling factor and optimizing it by means of minimization of the
mean-square error. We confirm that the optimal scaling factor stabilizes the
mean-field iterative process of the motion velocity estimation. We next attempt
to estimate the optimal values of hyper-parameters including the regularization
term, which define our probabilistic model macroscopically, by using the
Boltzmann-machine type learning algorithm based on gradient descent of marginal
likelihood (type-II likelihood) with respect to the hyper-parameters. In our
framework, one can estimate both the probabilistic model (hyper-parameters) and
motion velocity fields simultaneously. We find that our motion estimation is
much better than the result obtained by Zhang and Hanouer (1995) in which the
hyper-parameters are set to some ad-hoc values without any theoretical
justification.
|
1004.3692
|
Compound Poisson Approximation via Information Functionals
|
math.PR cs.IT math.IT
|
An information-theoretic development is given for the problem of compound
Poisson approximation, which parallels earlier treatments for Gaussian and
Poisson approximation. Let $P_{S_n}$ be the distribution of a sum
$S_n=\sum_{i=1}^n Y_i$ of independent integer-valued random variables $Y_i$. Nonasymptotic bounds
are derived for the distance between $P_{S_n}$ and an appropriately chosen
compound Poisson law. In the case where all $Y_i$ have the same conditional
distribution given $\{Y_i\neq 0\}$, a bound on the relative entropy distance
between $P_{S_n}$ and the compound Poisson distribution is derived, based on
the data-processing property of relative entropy and earlier Poisson
approximation results. When the $Y_i$ have arbitrary distributions,
corresponding bounds are derived in terms of the total variation distance. The
main technical ingredient is the introduction of two "information functionals,"
and the analysis of their properties. These information functionals play a role
analogous to that of the classical Fisher information in normal approximation.
Detailed comparisons are made between the resulting inequalities and related
bounds.
|
1004.3708
|
Parcellation of fMRI Datasets with ICA and PLS-A Data Driven Approach
|
cs.CV cs.AI cs.NE
|
Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI)
data based on a standard General Linear Model (GLM) and spectral clustering was
recently proposed as a means to alleviate the issues associated with spatial
normalization in fMRI. However, for all its appeal, a GLM-based parcellation
approach introduces its own biases, in the form of a priori knowledge about the
shape of Hemodynamic Response Function (HRF) and task-related signal changes,
or about the subject behaviour during the task. In this paper, we introduce a
data-driven version of the spectral clustering parcellation, based on
Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of
the GLM. First, a number of independent components are automatically selected.
Seed voxels are then obtained from the associated ICA maps and we compute the
PLS latent variables between the fMRI signal of the seed voxels (which covers
regional variations of the HRF) and the principal components of the signal
across all voxels. Finally, we parcellate all subjects' data with a spectral
clustering of the PLS latent variables. We present results of the application
of the proposed method on both single-subject and multi-subject fMRI datasets.
Preliminary experimental results, evaluated with intra-parcel variance of GLM
t-values and PLS derived t-values, indicate that this data-driven approach
offers improvement in terms of parcellation accuracy over GLM based techniques.
|
1004.3714
|
An Upper Bound on Multi-hop Transmission Capacity with Dynamic Routing
Selection
|
cs.IT math.IT
|
This paper develops upper bounds on the end-to-end transmission capacity of
multi-hop wireless networks. Potential source-destination paths are dynamically
selected from a pool of randomly located relays, from which a closed-form lower
bound on the outage probability is derived in terms of the expected number of
potential paths. This is in turn used to provide an upper bound on the number
of successful transmissions that can occur per unit area, which is known as the
transmission capacity. The upper bound results from assuming independence among
the potential paths, and can be viewed as the maximum diversity case. A useful
aspect of the upper bound is its simple form for an arbitrary-sized network,
which allows insights into how the number of hops and other network parameters
affect spatial throughput in the non-asymptotic regime. The outage probability
analysis is then extended to account for retransmissions with a maximum number
of allowed attempts. In contrast to prevailing wisdom, we show that
predetermined routing (such as nearest-neighbor) is suboptimal, since more hops
are not useful once the network is interference-limited. Our results also make
clear that randomness in the location of relay sets and dynamically varying
channel states is helpful in obtaining higher aggregate throughput, and that
dynamic route selection should be used to exploit path diversity.
|
1004.3725
|
A Gibbs distribution that learns from GA dynamics
|
cs.NE
|
A general procedure of average-case performance evaluation for population
dynamics such as genetic algorithms (GAs) is proposed and its validity is
numerically examined. We introduce a learning algorithm of Gibbs distributions
from training sets which are gene configurations (strings) generated by GA in
order to figure out the statistical properties of GA from the viewpoint of
thermodynamics. The learning algorithm is constructed by means of minimization
of the Kullback-Leibler information between a parametric Gibbs distribution and
the empirical distribution of gene configurations. The formulation is applied
to the solvable probabilistic models having multi-valley energy landscapes,
namely, the spin glass chain and the Sherrington-Kirkpatrick model. By using
computer simulations, we discuss the asymptotic behaviour of the effective
temperature scheduling and the residual energy induced by the GA dynamics.
|
1004.3732
|
Solving the Cold-Start Problem in Recommender Systems with Social Tags
|
cs.IR physics.soc-ph
|
In this paper, based on the user-tag-object tripartite graphs, we propose a
recommendation algorithm, which considers social tags as an important role for
information retrieval. Besides its low cost of computational time, the
experiment results of two real-world data sets, \emph{Del.icio.us} and
\emph{MovieLens}, show it can enhance the algorithmic accuracy and diversity.
Especially, it can obtain more personalized recommendation results when users
have diverse topics of tags. In addition, the numerical results on the
dependence of algorithmic accuracy indicates that the proposed algorithm is
particularly effective for small degree objects, which reminds us of the
well-known \emph{cold-start} problem in recommender systems. Further empirical
study shows that the proposed algorithm can significantly alleviate this problem in
social tagging systems with heterogeneous object degree distributions.
|
1004.3742
|
Threshold Saturation on BMS Channels via Spatial Coupling
|
cs.IT math.IT
|
We consider spatially coupled code ensembles. A particular instance are
convolutional LDPC ensembles. It was recently shown that, for transmission over
the binary erasure channel, this coupling increases the belief propagation
threshold of the ensemble to the maximum a-priori threshold of the underlying
component ensemble. We report on empirical evidence which suggests that the same
phenomenon also occurs when transmission takes place over a general binary
memoryless symmetric channel. This is confirmed both by simulations as well as
by computing EBP GEXIT curves and by comparing the empirical BP thresholds of
coupled ensembles to the empirically determined MAP thresholds of the
underlying regular ensembles. We further consider ways of reducing the
rate-loss incurred by such constructions.
|
1004.3745
|
An Algorithm for Odd Graceful Labeling of the Union of Paths and Cycles
|
cs.IT cs.NI math.IT
|
In 1991, Gnanajothi [4] proved that the path graph P_n with n vertices and n-1
edges is odd graceful, and that the cycle graph C_m with m vertices and m edges
is odd graceful if and only if m is even; in particular, C_m is not odd graceful
when m is odd. In this paper, we first study the graph C_m $\cup$ P_m for m = 4,
6, 8, 10, and then we prove that the graph C_m $\cup$ P_n is odd graceful if m
is even. Finally, we describe an algorithm to label the vertices and the edges
of the vertex set V(C_m $\cup$ P_n) and the edge set E(C_m $\cup$ P_n).
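The odd graceful condition itself is easy to verify mechanically. Below is a
minimal sketch (the helper name and the P_4 labeling shown are illustrative,
not taken from the paper's algorithm):

```python
def is_odd_graceful(edges, labels, q):
    """Check whether `labels` (vertex -> label) is an odd graceful
    labeling of a graph with q edges: vertex labels lie in
    {0, 1, ..., 2q - 1} and the induced edge labels |f(u) - f(v)|
    are exactly the odd numbers {1, 3, ..., 2q - 1}."""
    if any(not 0 <= l <= 2 * q - 1 for l in labels.values()):
        return False
    edge_labels = {abs(labels[u] - labels[v]) for u, v in edges}
    return edge_labels == set(range(1, 2 * q, 2))

# A labeling of the path P_4 (3 edges): even labels ascend from one
# end, odd labels descend from the other.
p4_edges = [(0, 1), (1, 2), (2, 3)]
p4_labels = {0: 0, 1: 5, 2: 2, 3: 3}
print(is_odd_graceful(p4_edges, p4_labels, q=3))  # True
```

The induced edge labels here are |0-5| = 5, |5-2| = 3 and |2-3| = 1, i.e.
exactly {1, 3, 5}, as the definition requires.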
|
1004.3755
|
The SIMO Pre-Log Can Be Larger Than the SISO Pre-Log
|
cs.IT math.IT
|
We establish a lower bound on the noncoherent capacity pre-log of a
temporally correlated Rayleigh block-fading single-input multiple-output (SIMO)
channel. Surprisingly, when the covariance matrix of the channel satisfies a
certain technical condition related to the cardinality of its smallest set of
linearly dependent rows, this lower bound reveals that the capacity pre-log in
the SIMO case is larger than that in the single-input single-output (SISO)
case.
|
1004.3774
|
Incidence structures from the blown-up plane and LDPC codes
|
cs.IT math.AG math.CO math.IT
|
In this article, new regular incidence structures are presented. They arise
from sets of conics in the affine plane blown-up at its rational points. The
LDPC codes given by these incidence matrices are studied. These sparse
incidence matrices turn out to be redundant, which means that their number of
rows exceeds their rank. Such a feature is absent from random LDPC codes and is
in general interesting for the efficiency of iterative decoding. The
performance of some codes under iterative decoding is tested. Some of them turn
out to perform better than regular Gallager codes having similar rate and row
weight.
|
1004.3806
|
Information Theory and Quadrature Rules
|
cs.IT math.IT math.NA
|
Quadrature rules estimate the value of an integral when the function is given
by a table of values. Every binary string defines a quadrature rule by choosing
which endpoint of each interval represents the interval. The standard rules,
such as Simpson's Rule, correspond to strings of low Kolmogorov complexity,
making it possible to define new quadrature rules with no smoothness
assumptions, as well as in higher dimensions. Error results depend on concepts
from compressed sensing. Good quadrature rules exist for "sparse" functions,
which also satisfy an error--information duality principle.
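The string-indexed rules described above can be sketched as follows, under the
(assumed) reading that bit 0 selects the left endpoint of each subinterval and
bit 1 the right; all names are illustrative:

```python
def bitstring_quadrature(f, a, b, bits):
    """Estimate the integral of f over [a, b] using n = len(bits)
    uniform subintervals; bit 0 picks the left endpoint of each
    subinterval as its representative, bit 1 the right."""
    n = len(bits)
    h = (b - a) / n
    total = 0.0
    for i, bit in enumerate(bits):
        x = a + (i + bit) * h  # left endpoint if bit == 0, right if 1
        total += f(x) * h
    return total

# The all-zeros string gives the left-endpoint rectangle rule, the
# all-ones string the right-endpoint rule.
print(bitstring_quadrature(lambda x: 2.0, 0.0, 1.0, [0, 1, 1, 0]))  # 2.0
```

For a constant function every string gives the exact integral; the strings only
differ on how they sample non-constant behaviour within each subinterval.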
|
1004.3807
|
Interference Cancellation at the Relay for Multi-User Wireless
Cooperative Networks
|
cs.IT math.IT
|
We study multi-user transmission and detection schemes for a multi-access
relay network (MARN) with linear constraints at all nodes. In a $(J, J_a, R_a,
M)$ MARN, $J$ sources, each equipped with $J_a$ antennas, communicate to one
$M$-antenna destination through one $R_a$-antenna relay. A new protocol called
IC-Relay-TDMA is proposed which takes two phases. During the first phase,
symbols of different sources are transmitted concurrently to the relay. At the
relay, interference cancellation (IC) techniques, previously proposed for
systems with direct transmission, are applied to decouple the information of
different sources without decoding. During the second phase, symbols of
different sources are forwarded to the destination in a time division
multi-access (TDMA) fashion. At the destination, the maximum-likelihood (ML)
decoding is performed source-by-source. The IC-Relay-TDMA protocol requires
the number of relay antennas to be no less than the number of sources, i.e., $R_a\ge
J$. Through outage analysis, the achievable diversity gain of the proposed
scheme is shown to be $\min\{J_a(R_a-J+1),R_aM\}$. When {\small$M\le
J_a\left(1-\frac{J-1}{R_a}\right)$}, the proposed scheme achieves the maximum
interference-free (int-free) diversity gain $R_aM$. Since concurrent
transmission is allowed during the first phase, compared to full TDMA
transmission, the proposed scheme achieves the same diversity, but with a
higher symbol rate.
|
1004.3809
|
Artificial Immune Systems Metaphor for Agent Based Modeling of Crisis
Response Operations
|
cs.MA cs.AI cs.CY
|
Crisis response requires information-intensive efforts utilized for reducing
uncertainty, calculating and comparing costs and benefits, and managing
resources in a fashion beyond those regularly available to handle routine
problems. This paper presents an Artificial Immune Systems (AIS) metaphor for
agent based modeling of crisis response operations. The presented model
proposes integration of a hybrid set of aspects (multi-agent systems, built-in
defensive model of AIS, situation management, and intensity-based learning) for
crisis response operations. In addition, the proposed response model is applied
to the spread of pandemic influenza in Egypt as a case study.
|
1004.3811
|
Resolving the Complexity of Some Data Privacy Problems
|
cs.CC cs.DB
|
We formally study two methods for data sanitization that have been used
extensively in the database community: k-anonymity and l-diversity. We settle
several open problems concerning the difficulty of applying these methods
optimally, proving both positive and negative results:
1. 2-anonymity is in P.
2. The problem of partitioning the edges of a triangle-free graph into
4-stars (degree-three vertices) is NP-hard. This yields an alternative proof
that 3-anonymity is NP-hard even when the database attributes are all binary.
3. 3-anonymity with only 27 attributes per record is MAX SNP-hard.
4. For databases with n rows, k-anonymity is in O(4^n poly(n)) time for all k
> 1.
5. For databases with n rows and l <= log_{2c+2} log n attributes over an
alphabet of cardinality c = O(1), k-anonymity is in P. Assuming c, l = O(1),
k-anonymity is in O(n).
6. 3-diversity with binary attributes is NP-hard, with one sensitive
attribute.
7. 2-diversity with binary attributes is NP-hard, with three sensitive
attributes.
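While the optimization problems above are hard, checking whether a given
(already generalized) table is k-anonymous is straightforward: every
combination of released attribute values must occur at least k times. A
minimal sketch, with a hypothetical binary-attribute table:

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """Check k-anonymity of a released table: every distinct tuple of
    attribute values must occur at least k times, so no record can be
    distinguished from fewer than k - 1 others."""
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

table = [(0, 1), (0, 1), (1, 1), (1, 1)]
print(is_k_anonymous(table, 2))  # True
print(is_k_anonymous(table, 3))  # False
```

The hardness results above concern finding a minimal suppression or
generalization that makes this check pass, not the check itself.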
|
1004.3814
|
Bregman Distance to L1 Regularized Logistic Regression
|
cs.LG
|
In this work we investigate the relationship between Bregman distances and
regularized Logistic Regression model. We present a detailed study of Bregman
Distance minimization, a family of generalized entropy measures associated with
convex functions. We convert the L1-regularized logistic regression into this
more general framework and propose a primal-dual method based algorithm for
learning the parameters. We pose L1-regularized logistic regression into
Bregman distance minimization and then apply non-linear constrained
optimization techniques to estimate the parameters of the logistic model.
|
1004.3833
|
Normal Factor Graphs and Holographic Transformations
|
cs.IT math.IT
|
This paper stands at the intersection of two distinct lines of research. One
line is "holographic algorithms," a powerful approach introduced by Valiant for
solving various counting problems in computer science; the other is "normal
factor graphs," an elegant framework proposed by Forney for representing codes
defined on graphs. We introduce the notion of holographic transformations for
normal factor graphs, and establish a very general theorem, called the
generalized Holant theorem, which relates a normal factor graph to its
holographic transformation. We show that the generalized Holant theorem on the
one hand underlies the principle of holographic algorithms, and on the other
hand reduces to a general duality theorem for normal factor graphs, a special
case of which was first proved by Forney. In the course of our development, we
formalize a new semantics for normal factor graphs, which highlights various
linear algebraic properties that potentially enable the use of normal factor
graphs as a linear algebraic tool.
|
1004.3878
|
Where is Randomness Needed to Break the Square-Root Bottleneck?
|
cs.IT math.IT
|
As shown by Tropp, 2008, for the concatenation of two orthonormal bases
(ONBs), breaking the square-root bottleneck in compressed sensing does not
require randomization over all the positions of the nonzero entries of the
sparse coefficient vector. Rather, the positions corresponding to one of the two
ONBs can be chosen arbitrarily. The two-ONB structure is, however, restrictive
and does not reveal the property responsible for allowing the bottleneck to be
broken with reduced randomness. For general dictionaries we show that if a
sub-dictionary with small enough coherence and large enough cardinality can be
isolated, the bottleneck can be broken under the same probabilistic model on
the sparse coefficient vector as in the two-ONB case.
|
1004.3884
|
Oil Price Trackers Inspired by Immune Memory
|
cs.AI cs.NE
|
We outline initial concepts for an immune inspired algorithm to evaluate and
predict oil price time series data. The proposed solution evolves a short term
pool of trackers dynamically, with each member attempting to map trends and
anticipate future price movements. Successful trackers feed into a long term
memory pool that can generalise across repeating trend patterns. The resulting
sequence of trackers, ordered in time, can be used as a forecasting tool.
Examination of the pool of evolving trackers also provides valuable insight
into the properties of the crude oil market.
|
1004.3887
|
Motif Detection Inspired by Immune Memory
|
cs.AI cs.NE q-bio.QM
|
The search for patterns or motifs in data represents an area of key interest
to many researchers. In this paper we present the Motif Tracking Algorithm, a
novel immune inspired pattern identification tool that is able to identify
variable length unknown motifs which repeat within time series data. The
algorithm searches from a completely neutral perspective that is independent of
the data being analysed and the underlying motifs. In this paper we test the
flexibility of the motif tracking algorithm by applying it to the search for
patterns in two industrial data sets. The algorithm is able to identify a
population of motifs successfully in both cases, and the value of these motifs
is discussed.
|
1004.3919
|
Performance Evaluation of DCA and SRC on a Single Bot Detection
|
cs.AI cs.CR cs.NE
|
Malicious users try to compromise systems using new techniques. One of the
recent techniques used by attackers is to perform complex distributed
attacks, such as denial of service, and to obtain sensitive data such as password
information. These compromised machines are said to be infected with malicious
software termed a "bot". In this paper, we investigate the correlation of
behavioural attributes such as keylogging and packet flooding behaviour to
detect the existence of a single bot on a compromised machine by applying (1)
Spearman's rank correlation (SRC) algorithm and (2) the Dendritic Cell
Algorithm (DCA). We also compare the output results generated from these two
methods for the detection of a single bot. The results show that the DCA has a
better performance in detecting malicious activities.
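Spearman's rank correlation, the first of the two methods compared above, can
be computed with the classical rank-difference formula. The sketch below
assumes tie-free samples, and the attribute values are hypothetical:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free samples, via the
    classical formula rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    n = len(x)
    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone behavioural attributes (e.g. keystroke and packet
# counts per time window; numbers are made up) give rho = 1.
print(spearman_rho([3, 7, 9, 12], [10, 20, 25, 40]))  # 1.0
```

A rho near 1 indicates the two behavioural attributes rise and fall together,
which is the kind of correlation the detection methods above look for.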
|
1004.3932
|
Modelling Immunological Memory
|
cs.AI cs.NE q-bio.CB
|
Accurate immunological models offer the possibility of performing
high-throughput experiments in silico that can predict, or at least suggest, in
vivo phenomena. In this chapter, we compare various models of immunological
memory. We first validate an experimental immunological simulator, developed by
the authors, by simulating several theories of immunological memory with known
results. We then use the same system to evaluate the predicted effects of a
theory of immunological memory. The resulting model has not been explored
before in artificial immune systems research, and we compare the simulated in
silico output with in vivo measurements. Although the theory appears valid, we
suggest that there is a common set of reasons why immunological memory models
are a useful support tool, though not conclusive in themselves.
|
1004.3939
|
Price Trackers Inspired by Immune Memory
|
cs.AI cs.NE physics.data-an q-fin.PM
|
In this paper we outline initial concepts for an immune inspired algorithm to
evaluate price time series data. The proposed solution evolves a short term
pool of trackers dynamically through a process of proliferation and mutation,
with each member attempting to map to trends in price movements. Successful
trackers feed into a long term memory pool that can generalise across repeating
trend patterns. Tests are performed to examine the algorithm's ability to
successfully identify trends in a small data set. The influence of the long
term memory pool is then examined. We find the algorithm is able to identify
the presented price trends successfully and efficiently.
|
1004.3966
|
A Message-Passing Algorithm for Counting Short Cycles in a Graph
|
cs.IT math.IT
|
A message-passing algorithm for counting short cycles in a graph is
presented. For bipartite graphs, which are of particular interest in coding,
the algorithm is capable of counting cycles of length g, g + 2, ..., 2g - 2,
where g is the girth of the graph. For a general (non-bipartite) graph, cycles
of length g, g + 1, ..., 2g - 1 can be counted.
performing integer additions and subtractions in the nodes of the graph and
passing extrinsic messages to adjacent nodes. The complexity of the proposed
algorithm grows as $O(g|E|^2)$, where $|E|$ is the number of edges in the
graph. For sparse graphs, the proposed algorithm significantly outperforms the
existing algorithms in terms of computational complexity and memory
requirements.
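The message-passing details are in the paper; as a much simpler point of
comparison, when the girth is 3 the shortest cycles (triangles) can be counted
from powers of the adjacency matrix. The naive O(n^3) baseline below is not
the authors' algorithm:

```python
def count_triangles(adj):
    """Count 3-cycles from the adjacency matrix via the classical
    identity #triangles = trace(A^3) / 6: when the girth is 3, every
    closed walk of length 3 is a triangle, counted once per starting
    vertex and per direction."""
    n = len(adj)
    # Compute A^2, then trace(A^3) = sum_{i,j} (A^2)[i][j] * A[j][i].
    a2 = [[sum(adj[i][k] * adj[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    trace_a3 = sum(a2[i][j] * adj[j][i] for i in range(n) for j in range(n))
    return trace_a3 // 6

# The complete graph K4 contains exactly 4 triangles.
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(count_triangles(k4))  # 4
```

This trick does not extend past the girth (longer closed walks include
back-and-forth non-cycles), which is part of what makes the message-passing
approach above interesting.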
|
1004.3980
|
Hashing Image Patches for Zooming
|
cs.CV
|
In this paper we present a Bayesian image zooming/super-resolution algorithm
based on a patch-based representation. We work on a patch-based model with
overlap and employ a Locally Linear Embedding (LLE) based approach as our data
fidelity term in the Bayesian inference. The image prior imposes continuity
constraints across the overlapping patches. We apply an error back-projection
technique, with an approximate cross bilateral filter. The problem of nearest
neighbor search is handled by a variant of the locality sensitive hashing (LSH)
scheme. The novelty of our work lies in the speed up achieved by the hashing
scheme and the robustness and inherent modularity and parallel structure
achieved by the LLE setup. The ill-posedness of the image reconstruction
problem is handled by the introduction of regularization priors which encode
the knowledge present in vast collections of natural images. We present
comparative results for both run-time as well as visual image quality based
measurements.
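The nearest-neighbor step can be illustrated with the standard
random-hyperplane LSH family (a generic sketch; the paper's exact LSH variant
and parameters are not reproduced here, and all names are illustrative):

```python
import random

def make_lsh_hash(dim, n_bits, seed=0):
    """Random-hyperplane LSH: each signature bit is the sign of the
    dot product with a random Gaussian direction, so nearby patch
    vectors tend to collide in the same hash bucket."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(v):
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0)
                     for p in planes)
    return h

h = make_lsh_hash(dim=4, n_bits=8)
patch = [0.2, 0.5, 0.1, 0.9]
print(len(h(patch)))  # 8 signature bits; near-identical patches usually collide
```

Positive rescaling of a patch never changes its signature, and the per-bit
collision probability decays with the angle between two patches; this is the
property exploited for fast approximate nearest-neighbor lookup.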
|
1004.4017
|
Optimal-Rate Code Constructions for Computationally Simple Channels
|
cs.IT cs.CC math.IT
|
We consider coding schemes for computationally bounded channels, which can
introduce an arbitrary set of errors as long as (a) the fraction of errors is
bounded with high probability by a parameter $p$ and (b) the process which adds
the errors can be described by a sufficiently simple circuit. Codes for such
channel models are attractive since, like codes for standard adversarial
errors, they can handle channels whose true behavior is unknown or varying over
time.
For two classes of channels, we provide explicit, efficiently
encodable/decodable codes of optimal rate where only inefficiently decodable
codes were previously known. In each case, we provide one encoder/decoder that
works for every channel in the class. The encoders are randomized, and
probabilities are taken over the (local, unknown to the decoder) coins of the
encoder and those of the channel.
(1) Unique decoding for additive errors: We give the first construction of a
polynomial-time encodable/decodable code for additive (a.k.a. oblivious)
channels that achieves the Shannon capacity $1-H(p)$. These channels add an
arbitrary error vector $e\in\{0,1\}^N$ of weight at most $pN$ to the
transmitted word; the vector $e$ can depend on the code but not on the
particular transmitted word.
(2) List-decoding for polynomial-time channels: For every constant $c>0$, we
give a Monte Carlo construction of a code with optimal rate (arbitrarily close
to $1-H(p)$) that efficiently recovers a short list containing the correct
message with high probability for channels describable by circuits of size at
most $N^c$. We justify the relaxation to list-decoding by showing that even
with bounded channels, uniquely decodable codes cannot have positive rate for
$p>1/4$.
|
1004.4020
|
Analysis and Design of Binary Message-Passing Decoders
|
cs.IT math.IT
|
Binary message-passing decoders for low-density parity-check (LDPC) codes are
studied by using extrinsic information transfer (EXIT) charts. The channel
delivers hard or soft decisions and the variable node decoder performs all
computations in the L-value domain. A hard decision channel results in the
well-known Gallager B algorithm, and increasing the output alphabet from hard
decisions to two bits yields a gain of more than 1.0 dB in the required signal
to noise ratio when using optimized codes. The code optimization requires
adapting the mixing property of EXIT functions to the case of binary
message-passing decoders. Finally, it is shown that errors on cycles consisting
only of degree two and three variable nodes cannot be corrected and a necessary
and sufficient condition for the existence of a cycle-free subgraph is derived.
|
1004.4022
|
Database Security: A Historical Perspective
|
cs.DB
|
The importance of security in database research has greatly increased over
the years as most of the critical functionality of business and military
enterprises has become digitized. Databases are an integral part of any
information system and often hold sensitive data. The security of that data
depends on physical security, OS security and DBMS security. Database security
can be compromised by obtaining sensitive data, changing data or degrading the
availability of the database. Over the last 30 years the information technology
environment has gone through many evolutionary changes, and the database
research community has tried to stay a step ahead of the upcoming threats to
database security. The research community has thought about these issues long
before they were addressed by implementations. This paper will examine
different topics pertaining to database security and trace the adaptation of
the research to the changing environment. Some short-term database research
trends will be ascertained at the conclusion.
|
1004.4044
|
Sparsity Pattern Recovery in Bernoulli-Gaussian Signal Model
|
cs.IT math.IT
|
In compressive sensing, sparse signals are recovered from underdetermined
noisy linear observations. One of the interesting problems which attracted a
lot of attention in recent times is the support recovery or sparsity pattern
recovery problem. The aim is to identify the non-zero elements in the original
sparse signal. In this article we consider the sparsity pattern recovery
problem under a probabilistic signal model where the sparse support follows a
Bernoulli distribution and the signal restricted to this support follows a
Gaussian distribution. We show that the energy in the original signal
restricted to the missed support of the MAP estimate is bounded above and this
bound is of the order of energy in the projection of the noise signal to the
subspace spanned by the active coefficients. We also derive sufficient
conditions for no misdetection and no false alarm in support recovery.
|
1004.4063
|
On two variations of identifying codes
|
cs.DM cs.IT math.CO math.IT
|
Identifying codes were introduced in 1998 to model fault-detection in
multiprocessor systems. In this paper, we introduce two variations of
identifying codes: weak codes and light codes. They correspond to
fault-detection by successive rounds. We give exact bounds for those two
definitions for the family of cycles.
|
1004.4070
|
Constructions of Optical Queues With a Limited Number of
Recirculations--Part I: Greedy Constructions
|
cs.IT math.IT math.NT
|
In this two-part paper, we consider SDL constructions of optical queues with
a limited number of recirculations through the optical switches and the fiber
delay lines. We show that the constructions of certain types of optical queues,
including linear compressors, linear decompressors, and 2-to-1 FIFO
multiplexers, under a simple packet routing scheme and under the constraint of
a limited number of recirculations can be transformed into equivalent integer
representation problems under a corresponding constraint. Given $M$ and $k$,
the problem of finding an \emph{optimal} construction, in the sense of
maximizing the maximum delay (resp., buffer size), among our constructions of
linear compressors/decompressors (resp., 2-to-1 FIFO multiplexers) is
equivalent to the problem of finding an optimal sequence ${\dbf^*}_1^M$ in
$\Acal_M$ (resp., $\Bcal_M$) such that $B({\dbf^*}_1^M;k)=\max_{\dbf_1^M\in
\Acal_M}B(\dbf_1^M;k)$ (resp., $B({\dbf^*}_1^M;k)=\max_{\dbf_1^M\in
\Bcal_M}B(\dbf_1^M;k)$), where $\Acal_M$ (resp., $\Bcal_M$) is the set of all
sequences of fiber delays allowed in our constructions of linear
compressors/decompressors (resp., 2-to-1 FIFO multiplexers). In Part I, we
propose a class of \emph{greedy} constructions of linear
compressors/decompressors and 2-to-1 FIFO multiplexers by specifying a class
$\Gcal_{M,k}$ of sequences such that $\Gcal_{M,k}\subseteq \Bcal_M\subseteq
\Acal_M$ and each sequence in $\Gcal_{M,k}$ is obtained recursively in a greedy
manner. We then show that every optimal construction must be a greedy
construction. In Part II, we further show that there are at most two optimal
constructions and give a simple algorithm to obtain the optimal
construction(s).
|
1004.4075
|
Secrecy Gain: a Wiretap Lattice Code Design
|
cs.IT cs.CR math.IT
|
We propose the notion of secrecy gain as a code design criterion for wiretap
lattice codes to be used over an additive white Gaussian noise channel. Our
analysis relies on the error probabilities of both the legitimate user and the
eavesdropper. We focus on geometrical properties of lattices, described by
their theta series, to characterize good wiretap codes.
|
1004.4089
|
Real-Time Alert Correlation with Type Graphs
|
cs.AI cs.CR
|
The premise of automated alert correlation is to accept that false alerts
from a low level intrusion detection system are inevitable and use attack
models to explain the output in an understandable way. Several algorithms exist
for this purpose which use attack graphs to model the ways in which attacks can
be combined. These algorithms can be classified into two broad categories,
namely scenario-graph approaches, which create an attack model starting from a
vulnerability assessment, and type-graph approaches, which rely on an abstract
model of the relations between attack types. Some research into improving the
efficiency of type-graph correlation has been carried out, but this research has
ignored the hypothesizing of missing alerts. Our work presents a novel
type-graph algorithm which unifies correlation and hypothesizing into a single
operation. Our experimental results indicate that the approach is extremely
efficient in the face of intensive alerts and produces compact output graphs
comparable to other techniques.
|
1004.4095
|
STORM - A Novel Information Fusion and Cluster Interpretation Technique
|
cs.AI cs.NE
|
Analysis of data without labels is commonly subject to scrutiny by
unsupervised machine learning techniques. Such techniques provide more
meaningful representations, useful for better understanding of a problem at
hand, than looking only at the data itself. Although abundant expert
knowledge exists in many areas where unlabelled data is examined, such
knowledge is rarely incorporated into automatic analysis. Incorporation of
expert knowledge is frequently a matter of combining multiple data sources from
disparate hypothetical spaces. In cases where such spaces belong to different
data types, this task becomes even more challenging. In this paper we present a
novel immune-inspired method that enables the fusion of such disparate types of
data for a specific set of problems. We show that our method provides a better
visual understanding of one hypothetical space with the help of data from
another hypothetical space. We believe that our model has implications for the
field of exploratory data analysis and knowledge discovery.
|
1004.4170
|
A New Metaheuristic Bat-Inspired Algorithm
|
math.OC cs.NE physics.bio-ph physics.comp-ph
|
Metaheuristic algorithms such as particle swarm optimization, firefly
algorithm and harmony search are now becoming powerful methods for solving many
tough optimization problems. In this paper, we propose a new metaheuristic
method, the Bat Algorithm, based on the echolocation behaviour of bats. We also
intend to combine the advantages of existing algorithms into the new bat
algorithm. After a detailed formulation and explanation of its implementation,
we will then compare the proposed algorithm with other existing algorithms,
including genetic algorithms and particle swarm optimization. Simulations show
that the proposed algorithm seems much superior to other algorithms, and
further studies are also discussed.
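A minimal skeleton of the frequency-tuning core of such an update can be
sketched as follows; the loudness and pulse-rate mechanisms of the full method
are omitted, and all names and parameter values are illustrative rather than
taken from the paper:

```python
import random

def bat_minimize(obj, dim, n_bats=20, iters=200, seed=1,
                 f_min=0.0, f_max=2.0, lb=-5.0, ub=5.0):
    """Sketch of the Bat Algorithm's frequency-tuned update
    (v_i <- v_i + (x_i - x*) f_i, x_i <- x_i + v_i, with f_i drawn
    from [f_min, f_max]); positions are clamped to the search box and
    the loudness/pulse-rate schedules of the full method are omitted."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()  # echolocation frequency
            for d in range(dim):
                vs[i][d] += (xs[i][d] - best[d]) * f
                xs[i][d] = min(ub, max(lb, xs[i][d] + vs[i][d]))
            if obj(xs[i]) < obj(best):
                best = xs[i][:]  # best-so-far never worsens
    return best

sphere = lambda x: sum(t * t for t in x)
print(sphere(bat_minimize(sphere, dim=2)))
```

By construction the best-so-far solution is monotonically non-worsening; the
exploration quality of the full algorithm depends on the omitted loudness and
pulse-rate dynamics.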
|
1004.4181
|
Displacement Calculus
|
cs.CL
|
The Lambek calculus provides a foundation for categorial grammar in the form
of a logic of concatenation. But natural language is characterized by
dependencies which may also be discontinuous. In this paper we introduce the
displacement calculus, a generalization of Lambek calculus, which preserves its
good proof-theoretic properties while embracing discontinuity and subsuming
it. We illustrate linguistic applications and prove Cut-elimination, the
subformula property, and decidability.
|
1004.4216
|
Symmetric M-tree
|
cs.DB cs.DS
|
The M-tree is a paged, dynamically balanced metric access method that
responds gracefully to the insertion of new objects. To date, no algorithm has
been published for the corresponding Delete operation. We believe this to be
non-trivial because of the design of the M-tree's Insert algorithm. We propose
a modification to Insert that overcomes this problem and give the corresponding
Delete algorithm. The performance of the tree is comparable to the M-tree and
offers additional benefits in terms of supported operations, which we briefly
discuss.
|
1004.4222
|
Performance Analysis of Sparse Recovery Based on Constrained Minimal
Singular Values
|
cs.IT math.IT
|
The stability of sparse signal reconstruction is investigated in this paper.
We design efficient algorithms to verify the sufficient condition for unique
$\ell_1$ sparse recovery. One of our algorithms produces comparable results with
the state-of-the-art technique and performs orders of magnitude faster. We show
that the $\ell_1$-constrained minimal singular value ($\ell_1$-CMSV) of the
measurement matrix determines, in a very concise manner, the recovery
performance of $\ell_1$-based algorithms such as the Basis Pursuit, the Dantzig
selector, and the LASSO estimator. Compared with performance analysis involving
the Restricted Isometry Constant, the arguments in this paper are much less
complicated and provide more intuition on the stability of sparse signal
recovery. We show also that, with high probability, the subgaussian ensemble
generates measurement matrices with $\ell_1$-CMSVs bounded away from zero, as
long as the number of measurements is relatively large. To compute the
$\ell_1$-CMSV and its lower bound, we design two algorithms based on the
interior point algorithm and the semi-definite relaxation.
|
1004.4223
|
Settling the Polynomial Learnability of Mixtures of Gaussians
|
cs.LG cs.DS
|
Given data drawn from a mixture of multivariate Gaussians, a basic problem is
to accurately estimate the mixture parameters. We give an algorithm for this
problem whose running time and data requirements are polynomial in the
dimension and the inverse of the desired accuracy, with provably minimal
assumptions on the Gaussians. As simple consequences of our learning algorithm,
we can perform near-optimal clustering of the sample points and density
estimation for mixtures of k Gaussians, efficiently. The building blocks of our
algorithm are based on the work of Kalai et al. [STOC 2010], which gives an
efficient algorithm for learning mixtures of two Gaussians by considering a
series of projections down to one dimension, and applying the method of moments
to each univariate projection. A major technical hurdle in Kalai et al. is
showing that one can efficiently learn univariate mixtures of two Gaussians. In
contrast, because pathological scenarios can arise when considering univariate
projections of mixtures of more than two Gaussians, the bulk of the work in
this paper concerns how to leverage an algorithm for learning univariate
mixtures (of many Gaussians) to yield an efficient algorithm for learning in
high dimensions. Our algorithm employs hierarchical clustering and rescaling,
together with delicate methods for backtracking and recovering from failures
that can occur in our univariate algorithm. Finally, while the running time and
data requirements of our algorithm depend exponentially on the number of
Gaussians in the mixture, we prove that such a dependence is necessary.
|
1004.4277
|
Constructions of Optical Queues With a Limited Number of
Recirculations--Part II: Optimal Constructions
|
cs.IT math.IT math.NT
|
One of the main problems in all-optical packet-switched networks is the lack
of optical buffers, and one feasible technology for the constructions of
optical buffers is to use optical crossbar Switches and fiber Delay Lines
(SDL). In this two-part paper, we consider SDL constructions of optical queues
with a limited number of recirculations through the optical switches and the
fiber delay lines. Such a problem arises from practical feasibility
considerations. In Part I, we have proposed a class of greedy constructions for
certain types of optical queues, including linear compressors, linear
decompressors, and 2-to-1 FIFO multiplexers, and have shown that every optimal
construction among our previous constructions of these types of optical queues
under the constraint of a limited number of recirculations must be a greedy
construction. In Part II, the present paper, we further show that there are at
most two optimal constructions and give a simple algorithm to obtain the
optimal construction(s). The main idea in Part II is to use \emph{pairwise
comparison} to remove a sequence $\dbf_1^M\in \Gcal_{M,k}$ such that
$B(\dbf_1^M;k)<B({\dbf'}_1^M;k)$ for some ${\dbf'}_1^M\in \Gcal_{M,k}$. To our
surprise, the simple algorithm for obtaining the optimal construction(s) is
related to the well-known \emph{Euclid's algorithm} for finding the greatest
common divisor (gcd) of two integers. In particular, we show that if
$\gcd(M,k)=1$, then there is only one optimal construction; if $\gcd(M,k)=2$,
then there are two optimal constructions; and if $\gcd(M,k)\geq 3$, then there
are at most two optimal constructions.
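The gcd-based rule in the last sentence can be sketched directly. A minimal illustration; the helper name and the returned strings are ours, not the paper's:

```python
from math import gcd

def optimal_construction_count_bound(M: int, k: int) -> str:
    """Number of optimal constructions as characterized in the abstract:
    gcd(M, k) = 1 -> exactly one; gcd(M, k) = 2 -> exactly two;
    gcd(M, k) >= 3 -> at most two.  Illustrative helper only."""
    g = gcd(M, k)
    if g == 1:
        return "exactly one optimal construction"
    elif g == 2:
        return "exactly two optimal constructions"
    else:  # g >= 3
        return "at most two optimal constructions"

print(optimal_construction_count_bound(7, 3))   # gcd = 1
print(optimal_construction_count_bound(10, 4))  # gcd = 2
print(optimal_construction_count_bound(9, 6))   # gcd = 3
```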
|
1004.4299
|
Distributed Data Storage with Minimum Storage Regenerating Codes - Exact
and Functional Repair are Asymptotically Equally Efficient
|
cs.IT math.IT
|
We consider a setup where a file of size M is stored in n distributed
storage nodes, using an (n,k) minimum storage regenerating (MSR) code, i.e., a
maximum distance separable (MDS) code that also allows efficient exact-repair
of any failed node. The problem of interest in this paper is to minimize the
repair bandwidth B for exact regeneration of a single failed node, i.e., the
minimum data to be downloaded by a new node to replace the failed node by its
exact replica. Previous work has shown that a bandwidth of B=[M(n-1)]/[k(n-k)]
is necessary and sufficient for functional (not exact) regeneration. It has
also been shown that if k <= max(n/2, 3), then there is no extra cost of exact
regeneration over functional regeneration. The practically relevant setting of
low-redundancy, i.e., k/n>1/2 remains open for k>3 and it has been shown that
there is an extra bandwidth cost for exact repair over functional repair in
this case. In this work, we adapt to the distributed storage context an
asymptotically optimal interference alignment scheme previously proposed by
Cadambe and Jafar for large wireless interference networks. With this scheme we
solve the problem of repair bandwidth minimization for (n,k) exact-MSR codes
for all (n,k) values including the previously open case of k > \max(n/2,3). Our
main result is that, for any (n,k), and sufficiently large file sizes, there is
no extra cost of exact regeneration over functional regeneration in terms of
the repair bandwidth per bit of regenerated data. More precisely, we show that
in the limit as M approaches infinity, the ratio B/M converges to (n-1)/[k(n-k)].
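The bandwidth expression above is simple enough to compute directly. A sketch (the helper name is ours) evaluating B = M(n-1)/[k(n-k)] and the asymptotic ratio B/M:

```python
def functional_repair_bandwidth(M: float, n: int, k: int) -> float:
    """Minimum repair bandwidth B = M(n-1) / (k(n-k)) for regenerating one
    failed node of an (n, k) MSR-coded file of size M.  The abstract's
    asymptotic result says exact repair achieves the same B/M ratio as
    M grows large.  Illustrative helper, not from the paper."""
    assert 0 < k < n
    return M * (n - 1) / (k * (n - k))

# Example: a 1 GB file stored with a (10, 5) MSR code.
B = functional_repair_bandwidth(1e9, 10, 5)
print(B / 1e9)  # B/M = 9/25 = 0.36
```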
|
1004.4308
|
Segmented compressed sampling for analog-to-information conversion:
Method and performance analysis
|
cs.IT math.IT stat.AP
|
A new segmented compressed sampling method for analog-to-information
conversion (AIC) is proposed. An analog signal measured by a number of parallel
branches of mixers and integrators (BMIs), each characterized by a specific
random sampling waveform, is first segmented in time into $M$ segments. Then
the sub-samples collected on different segments and different BMIs are reused
so that a larger number of samples than the number of BMIs is collected. This
technique is shown to be equivalent to extending the measurement matrix, which
consists of the BMI sampling waveforms, by adding new rows without actually
increasing the number of BMIs. We prove that the extended measurement matrix
satisfies the restricted isometry property with overwhelming probability if the
original measurement matrix of BMI sampling waveforms satisfies it. We also
show that the signal recovery performance can be improved significantly if our
segmented AIC is used for sampling instead of the conventional AIC. Simulation
results verify the effectiveness of the proposed segmented compressed sampling
method and the validity of our theoretical studies.
|
1004.4334
|
New Results on Secret Key Establishment over a Pair of Broadcast
Channels
|
cs.IT cs.CR math.IT
|
The problem of Secret Key Establishment (SKE) over a pair of independent
Discrete Memoryless Broadcast Channels (DMBCs) has already been studied in
\cite{Ah10}, where we provided lower and upper bounds on the secret-key
capacity. In this paper, we study the above setup under each of the following
two cases: (1) the DMBCs have secrecy potential, and (2) the DMBCs are
stochastically degraded with independent channels. In the former case, we
propose a simple SKE protocol based on a novel technique, called Interactive
Channel Coding (ICC), and prove that it achieves the lower bound. In the latter
case, we give a simplified expression for the lower bound and prove a
single-letter capacity formula under the condition that one of the legitimate
parties sends only i.i.d. variables.
|
1004.4342
|
Towards Closed World Reasoning in Dynamic Open Worlds (Extended Version)
|
cs.AI
|
The need for integration of ontologies with nonmonotonic rules has been
gaining importance in a number of areas, such as the Semantic Web. A number of
researchers addressed this problem by proposing a unified semantics for hybrid
knowledge bases composed of both an ontology (expressed in a fragment of
first-order logic) and nonmonotonic rules. These semantics have matured over
the years, but only provide solutions for the static case when knowledge does
not need to evolve. In this paper we take a first step towards addressing the
dynamics of hybrid knowledge bases. We focus on knowledge updates and,
considering the state of the art of belief update, ontology update and rule
update, we show that current solutions are only partial and difficult to
combine. Then we extend the existing work on ABox updates with rules, provide a
semantics for such evolving hybrid knowledge bases and study its basic
properties. To the best of our knowledge, this is the first time that an update
operator is proposed for hybrid knowledge bases.
|
1004.4361
|
Reduction of behavior of additive cellular automata on groups
|
nlin.CG cs.NE
|
A class of additive cellular automata (ACA) on a finite group is defined by
an index-group $\m g$ and a finite field $\m F_p$ for a prime modulus $p$
\cite{Bul_arch_1}. This paper deals mainly with ACA on infinite commutative
groups and on direct products of them with some non-commutative $p$-groups. It
appears that for all abelian groups, the rules and initial states with finite
supports define behaviors which, when restricted to certain infinite regular
series of time moments, become significantly simplified. In particular, for free
abelian groups with $n$ generators states $V^{[t]}$ of ACA with a rule $R$ at
time moments $t=p^k,k>k_0,$ can be viewed as $||R||$ copies of initial state
$V^{[0]}$ moving through an $n$-dimensional Euclidean space. That is the
behavior is similar to gliders from J.Conway's automaton {\sl Life}. For some
other special infinite series of time moments the automata states approximate
self-similar structures and the approximation becomes better with time. An
infinite class $\mathrm{DHC}(\mbf S,\theta)$ of non-commutative $p$-groups is
described which in particular includes quaternion and dihedral $p$-groups. It
is shown that the simplification of behaviors takes place as well for direct
products of non-commutative groups from the class $\mathrm{DHC}(\mbf S,\theta)$
with commutative groups. Finally, an automaton on a non-commutative group is
constructed such that its behavior at time moments $2^k,k\ge2,$ is similar to a
glider gun. It is concluded that ACA on non-commutative groups demonstrate a
more diverse variety of behaviors compared to ACA on commutative groups.
|
1004.4373
|
Spatially-Adaptive Reconstruction in Computed Tomography Based on
Statistical Learning
|
cs.CV
|
We propose a direct reconstruction algorithm for Computed Tomography, based
on a local fusion of a few preliminary image estimates by means of a non-linear
fusion rule. One such rule is based on a signal denoising technique which is
spatially adaptive to the unknown local smoothness. Another, more powerful
fusion rule, is based on a neural network trained off-line with a high-quality
training set of images. Two types of linear reconstruction algorithms for the
preliminary images are employed for two different reconstruction tasks. For an
entire image reconstruction from full projection data, the proposed scheme uses
a sequence of Filtered Back-Projection algorithms with a gradually growing
cut-off frequency. To recover a Region Of Interest only from local projections,
statistically-trained linear reconstruction algorithms are employed. Numerical
experiments display the improvement in reconstruction quality when compared to
linear reconstruction algorithms.
|
1004.4398
|
Compressive MUSIC: A Missing Link Between Compressive Sensing and Array
Signal Processing
|
cs.IT math.IT
|
The multiple measurement vector (MMV) problem addresses the identification of
unknown input vectors that share common sparse support. Even though MMV
problems have traditionally been addressed within the context of sensor array
signal processing, the recent trend is to apply compressive sensing (CS) due to
its capability to estimate sparse support even with an insufficient number of
snapshots, in which case classical array signal processing fails. However, CS
guarantees the accurate recovery in a probabilistic manner, which often shows
inferior performance in the regime where the traditional array signal
processing approaches succeed. The apparent dichotomy between the {\em
probabilistic} CS and the {\em deterministic} sensor array signal processing
has not been fully understood. The main contribution of the present article is
a unified approach that unveils a {\em missing link} between CS and array
signal processing. The new algorithm, which we call {\em compressive MUSIC},
identifies part of the support using CS, after which the remaining support is
estimated using a novel generalized MUSIC criterion. Using a large system
MMV model, we show that our compressive MUSIC requires a smaller number of
sensor elements for accurate support recovery than existing CS methods and
can approach the optimal $l_0$-bound with a finite number of snapshots.
|
1004.4421
|
Efficient Learning with Partially Observed Attributes
|
cs.LG
|
We describe and analyze efficient algorithms for learning a linear predictor
from examples when the learner can only view a few attributes of each training
example. This is the case, for instance, in medical research, where each
patient participating in the experiment is only willing to go through a small
number of tests. Our analysis bounds the number of additional examples
sufficient to compensate for the lack of full information on each training
example. We demonstrate the efficiency of our algorithms by showing that when
running on digit recognition data, they obtain a high prediction accuracy even
when the learner gets to see only four pixels of each image.
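One standard way to learn from only a few observed attributes per example, in the spirit of the abstract, is to build an unbiased estimate of the full example from a handful of sampled coordinates. The sketch below illustrates that estimator only; it is not the authors' exact algorithm, and all names are ours:

```python
import random

def sample_attributes(x, budget):
    """Unbiased estimate of the full example x from `budget` uniformly
    sampled attribute queries (with replacement).  E[x_hat] == x, since
    each observed coordinate is importance-weighted by d / budget.
    Illustrative sketch only -- not the paper's exact algorithm."""
    d = len(x)
    x_hat = [0.0] * d
    for _ in range(budget):
        j = random.randrange(d)          # pay for one attribute of x
        x_hat[j] += (d / budget) * x[j]  # importance weight
    return x_hat

random.seed(0)
x = [1.0, 2.0, 3.0, 4.0]
trials = 20000
est = [0.0] * len(x)
for _ in range(trials):
    h = sample_attributes(x, budget=2)   # only 2 of 4 attributes seen
    est = [a + b / trials for a, b in zip(est, h)]
print([round(v) for v in est])  # → [1, 2, 3, 4], matching x on average
```

A learner can feed such estimates into any stochastic-gradient update, trading extra examples for the missing attributes, which is the compensation the abstract's analysis bounds.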
|
1004.4432
|
Throughput-Delay-Reliability Tradeoff with ARQ in Wireless Ad Hoc
Networks
|
cs.IT math.IT
|
Delay-reliability (D-R), and throughput-delay-reliability (T-D-R) tradeoffs
in an ad hoc network are derived for single hop and multi-hop transmission with
automatic repeat request (ARQ) on each hop. The delay constraint is modeled by
assuming that each packet is allowed at most $D$ retransmissions end-to-end,
and the reliability is defined as the probability that the packet is
successfully decoded in at most $D$ retransmissions. The throughput of the ad
hoc network is characterized by the transmission capacity, which is defined to
be the maximum allowable density of transmitting nodes satisfying a per
transmitter-receiver rate and an outage probability constraint, multiplied
by the rate of transmission and the success probability. Given an end-to-end
retransmission constraint of $D$, the optimal allocation of the number of
retransmissions allowed at each hop is derived that maximizes a lower bound on
the transmission capacity. Optimizing over the number of hops, single hop
transmission is shown to be optimal for maximizing a lower bound on the
transmission capacity in the sparse network regime.
|
1004.4438
|
A Survey on Network Codes for Distributed Storage
|
cs.IT cs.DC cs.NI math.IT
|
Distributed storage systems often introduce redundancy to increase
reliability. When coding is used, the repair problem arises: if a node storing
encoded information fails, in order to maintain the same level of reliability
we need to create encoded information at a new node. This amounts to a partial
recovery of the code, whereas conventional erasure coding focuses on the
complete recovery of the information from a subset of encoded packets. The
consideration of the repair network traffic gives rise to new design
challenges. Recently, network coding techniques have been instrumental in
addressing these challenges, establishing that maintenance bandwidth can be
reduced by orders of magnitude compared to standard erasure codes. This paper
provides an overview of the research results on this topic.
|
1004.4448
|
Deblured Gaussian Blurred Images
|
cs.CV
|
This paper undertakes the study of restoring Gaussian-blurred images using
four deblurring techniques: the Wiener filter, the regularized filter, the
Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm.
First, the techniques are applied with knowledge of the Point Spread Function
(PSF) to images blurred with different values of PSF size and alpha and then
corrupted by Gaussian noise. The same is applied to a remote sensing image,
and the results are compared with one another so as to choose the best
technique for image restoration. The paper then studies the restoration of
Gaussian-blurred images without any information about the PSF, using the same
four techniques after estimating the PSF, the number of iterations and the
weight threshold, in order to choose the best estimates for image restoration.
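For concreteness, a minimal 1-D frequency-domain Wiener deconvolution can be sketched as follows. Illustrative only: the paper restores 2-D images with standard Wiener, regularized, Lucy-Richardson and blind deconvolution routines, and the `nsr` constant here is an assumed noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """1-D frequency-domain Wiener restoration: multiply the blurred
    spectrum by conj(H) / (|H|^2 + nsr), where H is the PSF's transfer
    function and nsr approximates the noise-to-signal power ratio
    (assumed constant here).  Sketch only."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Blur two impulses with a known 3-tap PSF, add mild Gaussian noise, restore.
rng = np.random.default_rng(0)
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5
psf = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 64)))
restored = wiener_deconvolve(blurred + 0.001 * rng.standard_normal(64), psf)
print(int(np.argmax(restored)))  # → 20: the main impulse is recovered
```

The nsr term regularizes frequencies where the PSF response is near zero, which is exactly where unregularized inverse filtering would amplify the Gaussian noise.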
|
1004.4450
|
Improving Supply Chain Coordination by Linking Dynamic Procurement
Decision to Multi-Agent System
|
cs.MA
|
The Internet has changed the way business is conducted in many ways. For
example, in the field of procurement, the possibility to directly interact with
a trading partner has given rise to new mechanisms in supply chain management.
One such mechanism is interactive dynamic procurement, which lets buyer and
seller software agents bid on behalf of potential buyers instead of relying on
static procurement from vendors. Dynamic procurement decisions can provide the
buyer with buying and selling channels and avoid situations in which the seller
cannot deliver on the contract promise. Using NYOP (Name Your Own Price) as the
core of the dynamic procurement negotiation algorithm, we set up a multi-agent
dynamic supply chain system, present the DSINs (Dynamic Supply Chain
Information Networks) in JADE, and present a dynamic supply chain logistics
simulation in eM-Plant. Finally, we evaluate supply chain performance with
supply chain performance metrics (such as bullwhip and fill rate) to serve as
a reference for enterprise decision making in the future.
|
1004.4454
|
Crowd simulation influenced by agent's socio-psychological state
|
cs.MA
|
The aim of our work is to create virtual humans as intelligent entities,
which includes approximating the virtual agent's animation as closely as
possible to natural human behavior. To accomplish this task, our agent must be
capable of interacting with the environment, including objects and other
agents. The virtual agent needs to act like a real person, so it should be
capable of extracting semantic information from the geometric model of the
world in which it is inserted, based on its own perception, and of making its
own decisions. The movement of the individuals is represented by the
combination of two movement approaches: the social force model and the
rule-based model. These movements are influenced by a set of
socio-psychological rules to give a more realistic result.
|
1004.4460
|
Handling Overload Conditions In High Performance Trustworthy Information
Retrieval Systems
|
cs.IR
|
Web search engines retrieve a vast amount of information for a given search
query. But the user needs only trustworthy and high-quality information from
this vast retrieved data. The response time of the search engine must be kept
to a minimum in order to satisfy the user. An optimum level of response time
should be maintained even when the system is overloaded. This paper proposes an
optimal Load Shedding algorithm which is used to handle overload conditions in
real-time data stream applications and is adapted to the Information Retrieval
System of a web search engine. Experiment results show that the proposed
algorithm enables a web search engine to provide trustworthy search results to
the user within an optimum response time, even during overload conditions.
|
1004.4462
|
BiLingual Information Retrieval System for English and Tamil
|
cs.IR
|
This paper addresses the design and implementation of BiLingual Information
Retrieval system on the domain, Festivals. A generic platform is built for
BiLingual Information retrieval which can be extended to any foreign or Indian
language while working with the same efficiency. The language in which the
solution is searched is not drawn from a predefined set of standard languages
but is chosen dynamically by processing the user's query. This paper deals with
the Indian language Tamil apart from English. The task is to retrieve the
solution for the user-given query in the same language as that of the query. In
this process, an Ontological tree is built for the domain in such a way that
there are entries in the above two languages in every node of the tree. A
Part-Of-Speech
(POS) Tagger is used to determine the keywords from the given query. Based on
the context, the keywords are translated to appropriate languages using the
Ontological tree. A search is performed and documents are retrieved based on
the keywords. With the use of the Ontological tree, Information Extraction is
done. Finally, the solution for the query is translated back to the query
language (if necessary) and produced to the user.
|
1004.4464
|
Audio enabled information extraction system for cricket and hockey
domains
|
cs.IR cs.MM cs.SD
|
The proposed system aims at retrieving summarized information from documents
collected via a web-based search engine, as per the user query related to the
cricket and hockey domains. The system is designed to take voice commands as
keywords for search. The parts of speech in the query are extracted using a
natural language extractor for English. Based on the keywords, the search is
categorized into two types. 1. Concept-wise search: information relevant to
the query is retrieved based on the keywords and the concept words related to
them; the retrieved information is summarized using a probabilistic approach
and a weighted-means algorithm. 2. Keyword search: extracts the result
relevant to the query from the highly ranked documents retrieved by the search
engine. The relevant search results are retrieved and the keywords are then
used for summarization. During summarization, weighted and probabilistic
approaches are followed in order to identify the data comparable to the
extracted keywords. The extracted information is then refined repeatedly
through an aggregation process to reduce redundancy. Finally, the resultant
data is presented to the user in the form of audio output.
|
1004.4467
|
An Efficient Watermarking Algorithm to Improve Payload and Robustness
without Affecting Image Perceptual Quality
|
cs.CV
|
Capacity, robustness and perceptual quality of watermark data are very
important issues to be considered, and much research aims to increase these
parameters for watermarking of digital images, as there is always a tradeoff
among them. In this paper an efficient DWT-based watermarking algorithm that
improves payload and robustness without affecting the perceptual quality of
the image data is discussed. The aim of the paper is to employ nested
watermarks in the wavelet domain, which increases the capacity and ultimately
the robustness against attacks, and to select different scaling factor values
for the LL and HH bands during embedding so as not to create visible artifacts
in the original image, so that the original and watermarked images remain
similar.
|
1004.4488
|
Apologizing Comment on `Quantum Quasi-Cyclic Low-Density Parity-Check
codes"
|
cs.IT math.IT
|
In our recent paper entitled "Quantum Quasi-Cyclic Low-Density Parity-Check
codes" [ICIC 2009. LNCS 5754], it was claimed that some new quantum codes can
be constructed via the CSS encoding/decoding approach with various lengths and
rates. However, the further investigation shows that the proposed construction
may steal some ideas from the paper entitled "Quantum Quasi-Cyclic LDPC codes"
[quant-ph/0701020v2]. We feel that the point requiring apology in the original
protocol is that some results are almost the same as those of construction
methods based on algebraic combinatorics, although we suggest a different
approach for improving them. There is also a weak point in the original coding
approach when considering the application of the codes in imperfect channels.
|
1004.4489
|
MIREX: MapReduce Information Retrieval Experiments
|
cs.IR
|
We propose to use MapReduce to quickly test new retrieval approaches on a
cluster of machines by sequentially scanning all documents. We present a small
case study in which we use a cluster of 15 low-cost machines to search a web
crawl of 0.5 billion pages, showing that sequential scanning is a viable
approach to running large-scale information retrieval experiments with little
effort. The code is available to other researchers at:
http://mirex.sourceforge.net
|
1004.4490
|
On MMSE Properties and I-MMSE Implications in Parallel MIMO Gaussian
Channels
|
cs.IT math.IT
|
This paper extends the "single crossing point" property of the scalar MMSE
function, derived by Guo, Shamai and Verd\'u (first presented in ISIT 2008), to
the parallel degraded MIMO scenario. It is shown that the matrix Q(t), which is
the difference between the MMSE assuming a Gaussian input and the MMSE assuming
an arbitrary input, has, at most, a single crossing point for each of its
eigenvalues. Together with the I-MMSE relationship, a fundamental connection
between Information Theory and Estimation Theory, this new property is employed
to derive results in Information Theory. As a simple application of this
property we provide an alternative converse proof for the broadcast channel
(BC) capacity region under a covariance constraint in this specific setting.
|
1004.4492
|
Optimal Beamforming in Interference Networks with Perfect Local Channel
Information
|
cs.IT math.IT
|
We consider settings in which T multi-antenna transmitters and K
single-antenna receivers concurrently utilize the available communication
resources. Each transmitter sends useful information only to its intended
receivers and can degrade the performance of unintended systems. Here, we
assume the performance measures associated with each receiver are monotonic
with the received power gains. In general, the systems' joint operation is
desired to be Pareto optimal. However, designing Pareto optimal resource
allocation schemes is known to be difficult. In order to reduce the complexity
of achieving efficient operating points, we show that it is sufficient to
consider rank-1 transmit covariance matrices and propose a framework for
determining the efficient beamforming vectors. These beamforming vectors are
thereby also parameterized by T(K-1) real-valued parameters each between zero
and one. The framework is based on analyzing each transmitter's power
gain-region which is composed of all jointly achievable power gains at the
receivers. The efficient beamforming vectors are on a specific boundary section
of the power gain-region, and in certain scenarios it is shown that it is
necessary to perform additional power allocation on the beamforming vectors.
Two examples which include broadcast and multicast data as well as a cognitive
radio application scenario illustrate the results.
|
1004.4520
|
Non-Systematic Codes for Physical Layer Security
|
cs.IT math.IT
|
This paper is a first study on the topic of achieving physical layer security
by exploiting non-systematic channel codes. The chance of implementing
transmission security at the physical layer has been known for many years in
information theory, but it is now gaining increasing interest due to its
many possible applications. It has been shown that channel coding techniques
can be effectively exploited for designing physical layer security schemes,
able to ensure that an unauthorized receiver, experiencing a channel different
from that of the authorized receiver, is not able to gather any
information. Recently, it has been proposed to exploit puncturing techniques in
order to reduce the security gap between the authorized and unauthorized
channels. In this paper, we show that the same target can also be achieved by
using non-systematic codes, able to scramble information bits within the
transmitted codeword.
|
1004.4529
|
Rank Awareness in Joint Sparse Recovery
|
cs.IT math.IT
|
In this paper we revisit the sparse multiple measurement vector (MMV) problem
where the aim is to recover a set of jointly sparse multichannel vectors from
incomplete measurements. This problem has received increasing interest as an
extension of the single channel sparse recovery problem which lies at the heart
of the emerging field of compressed sensing. However the sparse approximation
problem has origins which include links to the field of array signal processing
where we find the inspiration for a new family of MMV algorithms based on the
MUSIC algorithm. We highlight the role of the rank of the coefficient matrix X
in determining the difficulty of the recovery problem. We derive the necessary
and sufficient conditions for the uniqueness of the sparse MMV solution, which
indicates that the larger the rank of X the less sparse X needs to be to ensure
uniqueness. We also show that the larger the rank of X the less the
computational effort required to solve the MMV problem through a combinatorial
search. In the second part of the paper we consider practical suboptimal
algorithms for solving the sparse MMV problem. We examine the rank awareness of
popular algorithms such as SOMP and mixed norm minimization techniques and show
them to be rank blind in terms of worst case analysis. We then consider a
family of greedy algorithms that are rank aware. The simplest such algorithm is
a discrete version of MUSIC and is guaranteed to recover the sparse vectors in
the full rank MMV case under mild conditions. We extend this idea to develop a
rank aware pursuit algorithm that naturally reduces to Order Recursive Matching
Pursuit (ORMP) in the single measurement case and also provides guaranteed
recovery in the full rank multi-measurement case. Numerical simulations
demonstrate that the rank aware algorithms are significantly better than
existing algorithms in dealing with multiple measurements.
|
1004.4530
|
Coding Theorems for a (2,2)-Threshold Scheme with Detectability of
Impersonation Attacks
|
cs.IT cs.CR math.IT
|
In this paper, we discuss coding theorems on a $(2, 2)$--threshold scheme in
the presence of an opponent who impersonates one of the two shareholders in an
asymptotic setup. We consider a situation where $n$ secrets $S^n$ from a
memoryless source are blockwise encoded into two shares and the two shares are
decoded to $S^n$ while permitting a negligible decoding error. We introduce the
correlation level of the two shares and characterize the minimum attainable
rates of the shares and of a uniform random number for realizing a $(2,
2)$--threshold scheme that is secure against the impersonation attack by an
opponent. It is shown that, if the correlation level between the two shares
equals $\ell \ge 0$, the minimum attainable rates coincide with
$H(S)+\ell$, where $H(S)$ denotes the entropy of the source, and the maximum
attainable exponent of the success probability of the impersonation attack
equals $\ell$. We also give a simple construction of an encoder and a
decoder using an ordinary $(2,2)$--threshold scheme where the two shares are
correlated, and show that it attains all the bounds.
|
1004.4590
|
Concatenated Coding for the AWGN Channel with Noisy Feedback
|
cs.IT math.IT
|
The use of open-loop coding can be easily extended to a closed-loop
concatenated code if the channel has access to feedback. This can be done by
introducing a feedback transmission scheme as an inner code. In this paper,
this process is investigated for the case when a linear feedback scheme is
implemented as an inner code and, in particular, over an additive white
Gaussian noise (AWGN) channel with noisy feedback. To begin, we look to derive
the optimal linear feedback scheme by optimizing over the received
signal-to-noise ratio. From this optimization, an asymptotically optimal linear
feedback scheme is produced and compared to other well-known schemes. Then, the
linear feedback scheme is implemented as an inner code to a concatenated code
over the AWGN channel with noisy feedback. This code shows improvements not
only in error exponent bounds, but also in bit-error-rate and frame-error-rate.
It is also shown that if the concatenated code has total blocklength L and
the inner code has blocklength N, the inner code blocklength should scale as N
= O(C/R), where C is the capacity of the channel and R is the rate of the outer
code. Simulations with low density parity check (LDPC) and turbo codes are
provided to display these advantages.
|
1004.4601
|
Data Stream Algorithms for Codeword Testing
|
cs.IT math.IT
|
Motivated by applications in storage systems and property testing, we study
data stream algorithms for local testing and tolerant testing of codes.
Ideally, we would like to know whether there exist asymptotically good codes
that can be locally/tolerantly tested with one-pass, poly-log-space data stream
algorithms. We show that for the error detection problem (and hence, the local
testing problem), there exists a one-pass, log-space data stream algorithm for
a broad class of asymptotically good codes, including the Reed-Solomon (RS)
code and expander codes. In our technically more involved result, we give a
one-pass, $O(e\log^2{n})$-space algorithm for RS (and related) codes with
dimension $k$ and block length $n$ that can distinguish between the cases when
the Hamming distance between the received word and the code is at most $e$ and
at least $a\cdot e$ for some absolute constant $a>1$. For RS codes with random
errors, we can obtain $e\le O(n/k)$. For folded RS codes, we obtain similar
results for worst-case errors as long as $e\le (n/k)^{1-\eps}$ for any constant
$\eps>0$. These results follow by reducing the tolerant testing problem to the
error detection problem using results from group testing and the list
decodability of the code. We also show that using our techniques, the space
requirement and the upper bound of $e\le O(n/k)$ cannot be improved by more
than logarithmic factors.
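The one-pass error-detection idea can be illustrated with a small syndrome-streaming sketch for a classical Reed-Solomon code over a prime field. Note this toy version keeps n-k running sums rather than the paper's log-space footprint, and all parameter choices are ours:

```python
def rs_error_detect(stream, p, alpha, k):
    """One-pass 'is this a codeword?' test for the Reed-Solomon code
    whose codewords are (m(1), m(alpha), ..., m(alpha^{n-1})) for
    polynomials m of degree < k over GF(p), with n = p - 1 and alpha a
    primitive root mod p.  A received word is a codeword iff the
    inverse-DFT coefficients at positions k..n-1 vanish, so we stream
    the symbols once and keep only n - k running sums.  Sketch only,
    not the paper's exact (log-space) algorithm."""
    n = p - 1
    checks = list(range(k, n))             # coefficients that must vanish
    syn = {l: 0 for l in checks}           # S_l = sum_j r_j * alpha^{-j*l}
    for j, r in enumerate(stream):         # single pass over the stream
        for l in checks:
            syn[l] = (syn[l] + r * pow(alpha, (-j * l) % n, p)) % p
    return all(s == 0 for s in syn.values())

# GF(7), alpha = 3 (primitive), n = 6, k = 2; codeword for m(x) = 2 + 3x.
codeword = [(2 + 3 * pow(3, j, 7)) % 7 for j in range(6)]
print(rs_error_detect(codeword, 7, 3, 2))   # → True
corrupted = codeword[:]
corrupted[4] = (corrupted[4] + 1) % 7       # flip one symbol
print(rs_error_detect(corrupted, 7, 3, 2))  # → False
```

Any single-symbol corruption shifts every syndrome by a nonzero power of alpha, so at least one running sum becomes nonzero and the error is detected.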
|
1004.4610
|
Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks
|
cs.NE
|
Mobility prediction allows estimating the stability of paths in mobile
wireless ad hoc networks. Identifying stable paths helps to improve routing by
reducing the overhead and the number of connection interruptions. In this
paper, we introduce a neural network based method for mobility prediction in Ad
Hoc networks. This method consists of a multi-layer recurrent neural network
trained using the backpropagation-through-time algorithm.
|
1004.4663
|
On the Existence of Optimal Exact-Repair MDS Codes for Distributed
Storage
|
cs.IT math.IT
|
The high repair cost of (n,k) Maximum Distance Separable (MDS) erasure codes
has recently motivated a new class of codes, called Regenerating Codes, that
optimally trade off storage cost for repair bandwidth. In this paper, we
address bandwidth-optimal (n,k,d) Exact-Repair MDS codes, which allow for any
failed node to be repaired exactly with access to arbitrary d survivor nodes,
where k<=d<=n-1. We show the existence of Exact-Repair MDS codes that achieve
minimum repair bandwidth (matching the cutset lower bound) for arbitrary
admissible (n,k,d), i.e., k<n and k<=d<=n-1. Our approach is based on
interference alignment techniques and uses vector linear codes, which allow
symbols to be split into arbitrarily small subsymbols.
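For reference, the cut-set lower bound mentioned above takes the following standard form in the regenerating-codes literature (with $\mathcal{M}$ the file size and $\alpha=\mathcal{M}/k$ the per-node storage at the MDS point):

```latex
\gamma \;\ge\; \frac{d\,\alpha}{d-k+1} \;=\; \frac{d\,\mathcal{M}}{k\,(d-k+1)},
\qquad \alpha = \frac{\mathcal{M}}{k}, \quad k \le d \le n-1 .
```

A bandwidth-optimal Exact-Repair MDS code meeting this bound repairs a failed node by downloading $\beta = \alpha/(d-k+1)$ symbols from each of the $d$ helper nodes.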
|
1004.4668
|
Evolutionary Inference for Function-valued Traits: Gaussian Process
Regression on Phylogenies
|
q-bio.QM cs.LG physics.data-an stat.ML
|
Biological data objects often have both of the following features: (i) they
are functions rather than single numbers or vectors, and (ii) they are
correlated due to phylogenetic relationships. In this paper we give a flexible
statistical model for such data, by combining assumptions from phylogenetics
with Gaussian processes. We describe its use as a nonparametric Bayesian prior
distribution, both for prediction (placing posterior distributions on ancestral
functions) and model selection (comparing rates of evolution across a
phylogeny, or identifying the most likely phylogenies consistent with the
observed data). Our work is integrative, extending the popular phylogenetic
Brownian Motion and Ornstein-Uhlenbeck models to functional data and Bayesian
inference, and extending Gaussian Process regression to phylogenies. We provide
a brief illustration of the application of our method.
|
1004.4689
|
Quantum Location Verification in Noisy Channels
|
quant-ph cs.IT math.IT
|
Recently it has been shown how the use of quantum entanglement can lead to
the creation of real-time communication channels whose viability can be made
location dependent. Such functionality leads to new security paradigms that are
not possible in classical communication networks. Key to these new security
paradigms are quantum protocols that can unconditionally determine that a
receiver is in fact at an a priori assigned location. A limiting factor of such
quantum protocols will be the decoherence of states held in quantum memory.
Here we investigate the performance of quantum location verification protocols
under decoherence effects. More specifically, we address the issue of how
decoherence impacts the verification using N = 2 qubits entangled as Bell
states, as compared to N > 2 qubits entangled as GHZ states. We study the
original quantum location verification protocol, as well as a variant protocol,
introduced here, which utilizes teleportation. We find that the performance of
quantum location verification is in fact similar for Bell states and some N > 2
GHZ states, even though quantum decoherence degrades larger-qubit entanglements
faster. Our results are important for the design and implementation of
location-dependent communications in emerging quantum networks.
|
1004.4704
|
Homophily and Contagion Are Generically Confounded in Observational
Social Network Studies
|
stat.AP cs.SI physics.data-an physics.soc-ph
|
We consider processes on social networks that can potentially involve three
factors: homophily, or the formation of social ties due to matching individual
traits; social contagion, also known as social influence; and the causal effect
of an individual's covariates on their behavior or other measurable responses.
We show that, generically, all of these are confounded with each other.
Distinguishing them from one another requires strong assumptions on the
parametrization of the social process or on the adequacy of the covariates used
(or both). In particular we demonstrate, with simple examples, that asymmetries
in regression coefficients cannot identify causal effects, and that very simple
models of imitation (a form of social contagion) can produce substantial
correlations between an individual's enduring traits and their choices, even
when there is no intrinsic affinity between them. We also suggest some possible
constructive responses to these results.
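The claim that simple imitation over a homophilous network can correlate enduring traits with choices can be illustrated by a toy simulation (not taken from the paper; the network size, tie probabilities, and adoption rate below are all illustrative):

```python
import random

random.seed(0)

N = 200
# Each node has a fixed binary trait that never influences behavior directly.
traits = [random.randint(0, 1) for _ in range(N)]

# Homophily: ties between same-trait nodes are ten times more likely.
edges = []
for i in range(N):
    for j in range(i + 1, N):
        p_tie = 0.10 if traits[i] == traits[j] else 0.01
        if random.random() < p_tie:
            edges.append((i, j))

neighbors = {i: [] for i in range(N)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

# Pure imitation: a behavior starts at one random seed and spreads along
# ties; the update rule never looks at the trait.
behavior = [0] * N
behavior[random.randrange(N)] = 1
for _ in range(4):
    for i in [v for v in range(N) if behavior[v]]:
        for j in neighbors[i]:
            if random.random() < 0.3:
                behavior[j] = 1

def corr(x, y):
    """Pearson correlation of two equal-length 0/1 lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

print(round(corr(traits, behavior), 3))
```

Because ties preferentially connect same-trait nodes, the behavior spreads mostly within the seed's trait group, so the printed trait-behavior correlation is typically far from zero even though the trait never enters the adoption rule.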
|
1004.4713
|
Construction of Short Protocol Sequences with Worst-Case Throughput
Guarantee
|
cs.IT cs.DM math.IT
|
Protocol sequences are used in channel access for the multiple-access
collision channel without feedback. A new construction of protocol sequences
with a guarantee of worst-case system throughput is proposed. The construction
is based on the Chinese remainder theorem. The Hamming crosscorrelation is proved
to be concentrated around the mean. The sequence period is much shorter than
existing protocol sequences with the same throughput performance. The new
construction reduces the complexity in implementation and also shortens the
waiting time until a packet can be sent successfully.
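A minimal sketch of the CRT idea behind such constructions (a toy two-user example with illustrative moduli, not the paper's actual construction): give the two users coprime periods $p$ and $q$; over the common period $pq$, the Chinese remainder theorem guarantees their active slots coincide exactly once, whatever the relative shift, which bounds each user's worst-case throughput from below.

```python
from math import gcd

def crt_sequence(period_total, modulus, residue):
    """0/1 protocol sequence: slot t is active iff t = residue (mod modulus)."""
    return [1 if t % modulus == residue else 0 for t in range(period_total)]

p, q = 5, 7          # coprime per-user moduli (illustrative small values)
n = p * q            # common sequence period
assert gcd(p, q) == 1

def collisions(s1, s2):
    """Number of slots where both sequences are active."""
    return sum(a & b for a, b in zip(s1, s2))

# For every pair of relative offsets, the two users collide exactly once
# per period: the CRT gives a unique t with t = a (mod p) and t = b (mod q).
worst = max(
    collisions(crt_sequence(n, p, a), crt_sequence(n, q, b))
    for a in range(p) for b in range(q)
)
print(worst)  # 1: every offset pair collides exactly once per period
```

Hence, regardless of relative delays, user 1 is guaranteed $q-1$ collision-free slots per period and user 2 is guaranteed $p-1$.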
|
1004.4718
|
A Data Cleansing Method for Clustering Large-scale Transaction Databases
|
cs.DB
|
In this paper, we emphasize the need for data cleansing when clustering
large-scale transaction databases and propose a new data cleansing method that
improves clustering quality and performance. We evaluate our data cleansing
method through a series of experiments. As a result, the clustering quality and
performance were significantly improved by up to 165% and 330%, respectively.
|
1004.4729
|
On the Complexity of the $k$-Anonymization Problem
|
cs.CC cs.DB
|
We study the problem of anonymizing tables containing personal information
before releasing them for public use. One of the formulations considered in
this context is the $k$-anonymization problem: given a table, suppress a
minimum number of cells so that in the transformed table, each row is identical
to at least $k-1$ other rows. The problem is known to be NP-hard and
MAXSNP-hard; but in the known reductions, the number of columns in the
constructed tables is arbitrarily large. However, in practical settings the
number of columns is much smaller. So, we study the complexity of the practical
setting in which the number of columns $m$ is small. We show that the problem
is NP-hard, even when the number of columns $m$ is a constant ($m=3$). We also
prove MAXSNP-hardness for this restricted version and derive that the problem
cannot be approximated within a factor of (6238/6237). Our reduction uses
alphabets $\Sigma$ of arbitrarily large size. A natural question is whether the
problem remains NP-hard when both $m$ and $|\Sigma|$ are small. We prove that
the $k$-anonymization problem is in $P$ when both $m$ and $|\Sigma|$ are
constants.
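The suppression objective can be made concrete with a small checker (a hypothetical sketch; the table and the '*' suppression marker are illustrative, and this verifies a solution rather than solving the NP-hard minimization):

```python
from collections import Counter

STAR = "*"  # marker for a suppressed cell

def is_k_anonymous(table, k):
    """True iff every row of the table appears at least k times,
    i.e. each row is identical to at least k-1 other rows."""
    counts = Counter(tuple(row) for row in table)
    return all(c >= k for c in counts.values())

def suppression_cost(original, suppressed):
    """Number of cells replaced by STAR."""
    return sum(
        1
        for row_o, row_s in zip(original, suppressed)
        for a, b in zip(row_o, row_s)
        if b == STAR and a != STAR
    )

original = [
    ["30", "M", "NY"],
    ["30", "M", "LA"],
    ["40", "F", "NY"],
    ["40", "F", "LA"],
]
suppressed = [
    ["30", "M", STAR],
    ["30", "M", STAR],
    ["40", "F", STAR],
    ["40", "F", STAR],
]

print(is_k_anonymous(original, 2))    # False: all four rows are distinct
print(is_k_anonymous(suppressed, 2))  # True: two groups of two identical rows
print(suppression_cost(original, suppressed))  # 4 suppressed cells
```

The $k$-anonymization problem asks for a suppression pattern of minimum cost that makes `is_k_anonymous` true; the hardness results above show this minimization is intractable even for $m=3$ columns.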
|
1004.4732
|
Minimum energy required to copy one bit of information
|
cs.IT math.IT
|
In this paper, we calculate energy required to copy one bit of useful
information in the presence of thermal noise. For this purpose, we consider a
quantum system capable of storing one bit of classical information, which is
initially in a mixed state corresponding to temperature T. We calculate how
many of these systems must be used to store useful information and control bits
protecting the content against transmission errors. Finally, we analyze how
adding these extra bits changes the total energy consumed during the copying.
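For scale, the classical reference point here is the Landauer bound (standard background, not a result of this paper): erasing one bit at temperature $T$ dissipates at least

```latex
E_{\min} \;=\; k_B T \ln 2
\;\approx\; 2.9 \times 10^{-21}\ \mathrm{J}
\;\approx\; 0.018\ \mathrm{eV}
\qquad (T = 300\ \mathrm{K}).
```

The abstract's calculation concerns the analogous thermodynamic cost for copying a bit reliably, including the overhead of the error-protecting control bits.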
|
1004.4734
|
On the comparison of plans: Proposition of an instability measure for
dynamic machine scheduling
|
cs.AI
|
On the basis of an analysis of previous research, we present a generalized
approach for measuring the difference of plans with an exemplary application to
machine scheduling. Our work is motivated by the need for such measures, which
are used in dynamic scheduling and planning situations. In this context,
quantitative approaches are needed for the assessment of the robustness and
stability of schedules. Obviously, any `robustness' or `stability' of plans has
to be defined with respect to the particular situation and the requirements of the
human decision maker. Besides the proposition of an instability measure, we
therefore discuss possibilities of obtaining meaningful information from the
decision maker for the implementation of the introduced approach.
|