| id | title | categories | abstract |
|---|---|---|---|
1001.4181
|
Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function
for Gaussian Stationary Sources
|
cs.IT math.IT
|
We improve the existing achievable rate regions for causal and for zero-delay
source coding of stationary Gaussian sources under an average mean squared
error (MSE) distortion measure. To begin with, we find a closed-form expression
for the information-theoretic causal rate-distortion function (RDF) under such
a distortion measure, denoted by $R_{c}^{it}(D)$, for first-order Gauss-Markov
processes. $R_{c}^{it}(D)$ is a lower bound to the optimal performance
theoretically attainable (OPTA) by any causal source code, namely
$R_{c}^{op}(D)$. We show that, for Gaussian sources, the latter can also be
upper bounded as $R_{c}^{op}(D) \leq R_{c}^{it}(D) + 0.5 \log_{2}(2\pi e)$
bits/sample. In order to analyze $R_{c}^{it}(D)$ for arbitrary zero-mean
Gaussian stationary sources, we introduce $\bar{R}_{c}^{it}(D)$, the
information-theoretic causal RDF when the reconstruction error is jointly
stationary with the source. Based upon $\bar{R}_{c}^{it}(D)$, we derive three
closed-form upper bounds to the additive rate loss defined as
$\bar{R}_{c}^{it}(D) - R(D)$, where $R(D)$ denotes Shannon's RDF. Two of these
bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds
differ from one another in their tightness and ease of evaluation; the tighter
the bound, the more involved its evaluation. We then show that, for any source
spectral density and any positive distortion $D \leq \sigma_{x}^{2}$,
$\bar{R}_{c}^{it}(D)$ can be realized by an AWGN channel surrounded by a unique
set of causal pre-, post-, and feedback filters. We show that finding such
filters constitutes a convex optimization problem. In order to solve the
latter, we propose an iterative optimization procedure that yields the optimal
filters and is guaranteed to converge to $\bar{R}_{c}^{it}(D)$. Finally, by
establishing a connection to feedback quantization we design a causal and a
zero-delay coding scheme which, for Gaussian sources, achieves...
|
1001.4189
|
Detection and Demarcation of Tumor using Vector Quantization in MRI
images
|
cs.CV
|
Segmenting MRI images into homogeneous texture regions representing disparate
tissue types is often a useful preprocessing step in the computer-assisted
detection of breast cancer. We therefore propose a new algorithm to detect
cancer in mammographic breast cancer images. In this paper we propose
segmentation using a vector quantization technique, employing the
Linde-Buzo-Gray (LBG) algorithm for the segmentation of MRI images. Initially
a codebook of size 128 was generated for the MRI images. These code vectors
were further clustered into 8 clusters using the same LBG algorithm, and the
resulting 8 images were displayed. This approach leads to neither
over-segmentation nor under-segmentation. For comparison, we also display the
results of watershed segmentation and of entropy using the Gray Level
Co-occurrence Matrix.
|
1001.4251
|
A Decidable Class of Nested Iterated Schemata (extended version)
|
cs.LO cs.AI
|
Many problems can be specified by patterns of propositional formulae
depending on a parameter, e.g. the specification of a circuit usually depends
on the number of bits of its input. We define a logic whose formulae, called
"iterated schemata", allow one to express such patterns. Schemata extend
propositional logic with indexed propositions, e.g. P_i, P_{i+1}, P_1, and
with generalized connectives, e.g. /\i=1..n or \/i=1..n (called "iterations"),
where n is an (unbounded) integer variable called a "parameter". The
expressive power of iterated schemata is strictly greater than that of
propositional logic: it is even beyond the scope of first-order logic. We
define a proof procedure, called DPLL*, that can prove that a schema is
satisfiable for at least one value of its parameter, in the spirit of the DPLL
procedure. However the converse problem, i.e. proving that a schema is
unsatisfiable for every value of the parameter, is undecidable, so DPLL* does
not terminate in general. Still, we prove that it terminates for schemata of a
syntactic subclass called "regularly nested". This is the first nontrivial
class for which DPLL* is proved to terminate. Furthermore, the class of
regularly nested schemata is the first decidable class to allow nesting of
iterations, i.e. to allow schemata of the form /\i=1..n (/\j=1..n ...).
|
1001.4267
|
Thermodynamic properties of finite binary strings
|
cs.IT math.IT
|
Thermodynamic properties such as temperature, pressure, and internal energy
are defined for finite binary strings from the equilibrium distribution of a
chosen computable measure. It is demonstrated that a binary string can be
associated with a one-dimensional gas of quasi-particles of certain mass,
momentum, and energy.
|
1001.4271
|
Divide-and-conquer: Approaching the capacity of the two-pair
bidirectional Gaussian relay network
|
cs.IT math.IT
|
The capacity region of multi-pair bidirectional relay networks, in which a
relay node facilitates the communication between multiple pairs of users, is
studied. This problem is first examined in the context of the linear shift
deterministic channel model. The capacity region of this network, with the
relay operating in either full-duplex or half-duplex mode, is characterized
for an arbitrary number of pairs. It is shown that the cut-set upper bound is
tight and that the capacity region is achieved by a so-called
divide-and-conquer relaying strategy. The insights gained from the
deterministic network are then used for the Gaussian bidirectional relay
network. The strategy in the deterministic channel translates to a specific
superposition of lattice codes and random Gaussian codes at the source nodes
and successive interference cancellation at the receiving nodes for the
Gaussian network. The achievable rate of this scheme with two pairs is
analyzed, and it is shown that for all channel gains it achieves to within 3
bits/sec/Hz per user of the cut-set upper bound. Hence, the capacity region of
the two-pair bidirectional Gaussian relay network is characterized to within 3
bits/sec/Hz per user.
|
1001.4273
|
Sentence Simplification Aids Protein-Protein Interaction Extraction
|
cs.CL
|
Accurate systems for automatically extracting Protein-Protein Interactions
(PPIs) from biomedical articles can help accelerate biomedical research.
Biomedical informatics researchers are collaborating to provide metaservices
and advance the state of the art in PPI extraction. One problem often
neglected by current natural language processing systems is the characteristic
complexity of the sentences in biomedical literature. In this paper, we report
on the impact that automatic simplification of sentences has on the
performance of a state-of-the-art PPI extraction system, showing a substantial
improvement in recall (8%) when the sentence simplification method is applied,
without significant impact on precision.
|
1001.4277
|
Towards Effective Sentence Simplification for Automatic Processing of
Biomedical Text
|
cs.CL
|
The complexity of sentences characteristic to biomedical articles poses a
challenge to natural language parsers, which are typically trained on
large-scale corpora of non-technical text. We propose a text simplification
process, bioSimplify, that seeks to reduce the complexity of sentences in
biomedical abstracts in order to improve the performance of syntactic parsers
on the processed sentences. Syntactic parsing is typically one of the first
steps in a text mining pipeline. Thus, any improvement in performance would
have a ripple effect over all processing steps. We evaluated our method using a
corpus of biomedical sentences annotated with syntactic links. Our empirical
results show an improvement of 2.90% for the Charniak-McClosky parser and of
4.23% for the Link Grammar parser when processing simplified sentences rather
than the original sentences in the corpus.
|
1001.4278
|
Weight Optimization for Distributed Average Consensus Algorithm in
Symmetric, CCS & KCS Star Networks
|
cs.IT cs.DC math.CO math.IT
|
This paper addresses the weight optimization problem for the distributed
consensus averaging algorithm over networks with symmetric star topology. We
determine the optimal weights and the convergence rate of the network in terms
of its topological parameters. In addition, two alternative topologies with
more rapid convergence rates are introduced: the Complete-Cored Symmetric
(CCS) star and the K-Cored Symmetric (KCS) star topologies. It is shown that
the optimal weights for the edges of the central part in the symmetric and CCS
star configurations are independent of their branches. The optimality of the
obtained weights under quantization constraints has been verified by
simulation.
|
1001.4295
|
"Compressed" Compressed Sensing
|
cs.IT math.IT
|
The field of compressed sensing has shown that a sparse but otherwise
arbitrary vector can be recovered exactly from a small number of randomly
constructed linear projections (or samples). The question addressed in this
paper is whether an even smaller number of samples is sufficient when there
exists prior knowledge about the distribution of the unknown vector, or when
only partial recovery is needed. An information-theoretic lower bound with
connections to free probability theory and an upper bound corresponding to a
computationally simple thresholding estimator are derived. It is shown that in
certain cases (e.g. discrete valued vectors or large distortions) the number of
samples can be decreased. Interestingly though, it is also shown that in many
cases no reduction is possible.
|
1001.4297
|
Multi-camera Realtime 3D Tracking of Multiple Flying Animals
|
cs.CV
|
Automated tracking of animal movement allows analyses that would not
otherwise be possible by providing great quantities of data. The additional
capability of tracking in realtime - with minimal latency - opens up the
experimental possibility of manipulating sensory feedback, thus allowing
detailed explorations of the neural basis for control of behavior. Here we
describe a new system capable of tracking the position and body orientation of
animals such as flies and birds. The system operates with less than 40 msec
latency and can track multiple animals simultaneously. To achieve these
results, a multi-target tracking algorithm was developed based on the Extended
Kalman Filter and the Nearest Neighbor Standard Filter data association
algorithm. In one implementation, an eleven camera system is capable of
tracking three flies simultaneously at 60 frames per second using a gigabit
network of nine standard Intel Pentium 4 and Core 2 Duo computers. This
manuscript presents the rationale and details of the algorithms employed and
shows three implementations of the system. An experiment was performed using
the tracking system to measure the effect of visual contrast on the flight
speed of Drosophila melanogaster. At low contrasts, speed is more variable and
faster on average than at high contrasts. Thus, the system is already a useful
tool to study the neurobiology and behavior of freely flying animals. If
combined with other techniques, such as `virtual reality'-type computer
graphics or genetic manipulation, the tracking system would offer a powerful
new way to investigate the biology of flying animals.
|
1001.4298
|
Statistical Mechanical Analysis of a Typical Reconstruction Limit of
Compressed Sensing
|
cs.IT cond-mat.dis-nn math.IT
|
We use the replica method of statistical mechanics to examine the typical
performance of correctly reconstructing an $N$-dimensional sparse vector
$\mathbf{x}=(x_i)$ from its linear transformation $\mathbf{y}=\mathbf{F}
\mathbf{x}$ of $P$ dimensions on the basis of minimization of the $L_p$-norm
$||\mathbf{x}||_p = \lim_{\epsilon \to +0} \sum_{i=1}^N |x_i|^{p+\epsilon}$.
We characterize the reconstruction performance by the critical relation for
successful reconstruction between the ratio $\alpha=P/N$ and the density
$\rho$ of non-zero elements in $\mathbf{x}$ in the limit $P, N \to \infty$
while keeping $\alpha \sim O(1)$ and allowing asymptotically negligible
reconstruction errors. We show that the critical relation $\alpha_c(\rho)$
holds universally as long as $\mathbf{F}^{\rm T}\mathbf{F}$ can be
characterized asymptotically by a rotationally invariant random matrix
ensemble and $\mathbf{F}\mathbf{F}^{\rm T}$ is typically of full rank. This
supports the universality of the critical relation observed by Donoho and
Tanner ({\em Phil. Trans. R. Soc. A}, vol.~367, pp.~4273--4293, 2009; arXiv:
0807.3590) for various ensembles of compression matrices.
|
1001.4301
|
Probabilistic Approach to Neural Networks Computation Based on Quantum
Probability Model: Probabilistic Principal Subspace Analysis Example
|
cs.NE cs.LG
|
In this paper, we introduce elements of a probabilistic model suitable for
modeling learning algorithms in a biologically plausible artificial neural
networks framework. The model is based on two of the main concepts in quantum
physics: the density matrix and the Born rule. As an example, we show that the
proposed probabilistic interpretation is suitable for modeling on-line
learning algorithms for PSA, which are preferably realized by parallel
hardware based on very simple computational units. The proposed concept can be
used in the context of improving algorithm convergence speed, learning factor
choice, or input signal scale robustness. We also show how the Born rule and
the Hebbian learning rule are connected.
|
1001.4305
|
On Exponential Sums, Newton Identities and Dickson Polynomials over
Finite Fields
|
cs.IT math.IT
|
Let $\mathbb{F}_{q}$ be a finite field and $\mathbb{F}_{q^s}$ an extension of
$\mathbb{F}_q$, and let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree
$n$ with $\gcd(n,q)=1$. We present a recursive formula for evaluating the
exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(c))$. Let $a$ and
$b$ be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a
positive integer. We obtain an estimate for the exponential sum $\sum_{c\in
\mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the
lifting of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of
the sequences constructed from these exponential sums are also provided.
|
1001.4334
|
An Enumerative Method for Encoding Spectrum Shaped Binary Run-Length
Constrained Sequences
|
cs.IT math.IT
|
A method for encoding and decoding spectrum-shaped binary run-length
constrained sequences is described. Binary sequences with a predefined range
of exponential sums are introduced. On the basis of Cover's enumerative
scheme, recurrence relations for calculating the number of these sequences are
derived. The implementation of the encoding and decoding procedures is also
shown.
|
1001.4361
|
Statistical Mechanical Analysis of Compressed Sensing Utilizing
Correlated Compression Matrix
|
cs.IT cond-mat.dis-nn math.IT
|
We investigate a reconstruction limit of compressed sensing for a
reconstruction scheme based on L1-norm minimization utilizing a correlated
compression matrix, using a statistical mechanics method. We focus on a
compression matrix modeled as the Kronecker-type random matrix studied in
research on multi-input multi-output wireless communication systems. We find
that strong one-dimensional correlations between the expansion bases of the
original information slightly degrade the reconstruction performance.
|
1001.4368
|
Implicit media frames: Automated analysis of public debate on artificial
sweeteners
|
cs.IR cs.CL
|
The framing of issues in the mass media plays a crucial role in the public
understanding of science and technology. This article contributes to research
concerned with diachronic analysis of media frames by making an analytical
distinction between implicit and explicit media frames, and by introducing an
automated method for analysing diachronic changes of implicit frames. In
particular, we apply a semantic maps method to a case study on the newspaper
debate about artificial sweeteners, published in The New York Times (NYT)
between 1980 and 2006. Our results show that the analysis of semantic changes
enables us to filter out the dynamics of implicit frames, and to detect
emerging metaphors in public debates. Theoretically, we discuss the relation
between implicit frames in public debates and codification of information in
scientific discourses, and suggest further avenues for research interested in
the automated analysis of frame changes and trends in public debates.
|
1001.4382
|
Training Over Sparse Multipath Channels in the Low SNR Regime
|
cs.IT math.IT
|
Training over sparse multipath channels is explored. The energy allocation
and the optimal shape of training signals that enable error-free
communications over unknown channels are characterized as a function of the
channels' statistics. The performance of training is evaluated by the
reduction of the mean square error of the channel estimate and by the decrease
in the uncertainty of the channel. A connection between the entropy of the
wideband channel and the energy required for training is shown. In addition,
the sparsity and the entropy of the channel are linked to the number of
channel measurements required when the training is based on compressed
sensing. The ability to learn the channel from few measurements is connected
to the low entropy of sparse channels, which enables training in the low SNR
regime.
|
1001.4387
|
Convex Feasibility Methods for Compressed Sensing
|
cs.IT math.IT
|
We present a computationally-efficient method for recovering sparse signals
from a series of noisy observations, known as the problem of compressed sensing
(CS). CS theory requires solving a convex constrained minimization problem. We
propose to transform this optimization problem into a convex feasibility
problem (CFP), and solve it using subgradient projection methods, which are
iterative, fast, robust and convergent schemes for solving CFPs. As opposed to
some of the recently-introduced CS algorithms, such as Bayesian CS and gradient
projections for sparse reconstruction, which become inefficient as the problem
dimension and sparseness degree increase, the newly-proposed methods exhibit a
marked robustness with respect to these factors. This renders the subgradient
projection methods highly viable for large-scale compressible scenarios.
|
1001.4405
|
A Formal Framework of Virtual Organisations as Agent Societies
|
cs.LO cs.AI cs.MA
|
We propose a formal framework that supports a model of agent-based Virtual
Organisations (VOs) for service grids and provides an associated operational
model for the creation of VOs. The framework is intended to be used for
describing different service grid applications based on multiple agents and, as
a result, it abstracts away from any realisation choices of the service grid
application, the agents involved to support the applications and their
interactions. Within the proposed framework VOs are seen as emerging from
societies of agents, where agents are abstractly characterised by goals and
roles they can play within VOs. In turn, VOs are abstractly characterised by
the agents participating in them with specific roles, as well as the workflow
of services and corresponding contracts suitable for achieving the goals of the
participating agents. We illustrate the proposed framework with an earth
observation scenario.
|
1001.4413
|
Structure and Behaviour of Virtual Organisation Breeding Environments
|
cs.SE cs.MA
|
This paper provides an outline of a formal approach that we are developing
for modelling Virtual Organisations (VOs) and their Breeding Environments
(VBEs). We propose different levels of representation for the functional
structures and processes that VBEs and VOs involve, which are independent of
the specificities of the infrastructures (organisational and technical) that
support the functioning of VBEs. This allows us to reason about properties of
tasks performed within VBEs and services provided through VOs without
committing to the way in which they are implemented.
|
1001.4419
|
A Framework to Manage the Complex Organisation of Collaborating: Its
Application to Autonomous Systems
|
cs.MA
|
In this paper we present an analysis of the complexities of large group
collaboration and its application to develop detailed requirements for
collaboration schema for Autonomous Systems (AS). These requirements flow from
our development of a framework for collaboration that provides a basis for
designing, supporting and managing complex collaborative systems that can be
applied and tested in various real world settings. We present the concepts of
"collaborative flow" and "working as one" as descriptive expressions of what
good collaborative teamwork can be in such scenarios. The paper considers the
application of the framework within different scenarios and discusses the
utility of the framework in modelling and supporting collaboration in complex
organisational structures.
|
1001.4423
|
A New Decoding Scheme for Errorless Codes for Overloaded CDMA with
Active User Detection
|
cs.IT math.IT
|
Recently, a new class of binary codes for overloaded CDMA systems has been
proposed that not only enables errorless communication but is also suitable
for detecting active users. These codes are called COWDA [1]. In [1], a
Maximum Likelihood (ML) decoder is proposed for this class of codes. Although
the proposed coding/decoding scheme shows impressive performance, the decoder
can be improved. In this paper, by assuming more practical conditions for the
traffic in the system, we suggest an algorithm that improves the performance
of the decoder by several orders of magnitude (the Bit-Error-Rate (BER) is
divided by a factor of 400 at some Eb/N0's). The algorithm assumes a Poisson
distribution for the activation/deactivation times of the users.
|
1001.4431
|
Algebraic Network Coding Approach to Deterministic Wireless Relay
Networks
|
cs.IT cs.NI math.IT
|
The deterministic wireless relay network model, introduced by Avestimehr et
al., has been proposed for approximating Gaussian relay networks. This model,
known as the ADT network model, takes into account the broadcast nature of
wireless medium and interference. Avestimehr et al. showed that the Min-cut
Max-flow theorem holds in the ADT network.
In this paper, we show that the ADT network model can be described within the
algebraic network coding framework introduced by Koetter and Medard. We prove
that the ADT network problem can be captured by a single matrix, called the
"system matrix". We show that the min-cut of an ADT network is the rank of the
system matrix, thus eliminating the need to optimize over an exponential
number of cuts between two nodes to compute the min-cut of an ADT network.
We extend the capacity characterization for ADT networks to a more general
set of connections. Our algebraic approach not only provides the Min-cut
Max-flow theorem for a single unicast/multicast connection, but also extends to
non-multicast connections such as multiple multicast, disjoint multicast, and
two-level multicast. We also provide sufficiency conditions for achievability
in ADT networks for any general connection set. In addition, we show that the
random linear network coding, a randomized distributed algorithm for network
code construction, achieves capacity for the connections listed above.
Finally, we extend ADT networks to those with random erasures and cycles
(thus allowing bi-directional links). Note that the ADT network model was
proposed for approximating wireless networks; however, it is acyclic and does
not capture the stochastic nature of the wireless links. With our algebraic
framework, we incorporate both cycles and random failures into the ADT network
model.
|
1001.4432
|
Joint Range of f-divergences
|
cs.IT math.IT math.ST stat.TH
|
We provide a general method for evaluation of the joint range of
f-divergences for two different functions f. Via topological arguments we prove
that the joint range for general distributions equals the convex hull of the
joint range achieved by the distributions on a two-element set. The joint range
technique provides important inequalities between different f-divergences with
various applications in information theory and statistics.
|
1001.4448
|
R\'enyi Divergence and Majorization
|
cs.IT math.IT
|
R\'enyi divergence is related to R\'enyi entropy much like information
divergence (also called Kullback-Leibler divergence or relative entropy) is
related to Shannon's entropy, and comes up in many settings. It was introduced
by R\'enyi as a measure of information that satisfies almost the same axioms as
information divergence. We review the most important properties of R\'enyi
divergence, including its relation to some other distances. We show how R\'enyi
divergence appears when the theory of majorization is generalized from the
finite to the continuous setting. Finally, R\'enyi divergence plays a role in
analyzing the number of binary questions required to guess the values of a
sequence of random variables.
|
1001.4462
|
Not Every Domain of a Plain Decompressor Contains the Domain of a
Prefix-Free One
|
cs.IT cs.CC math.IT math.LO
|
C. Calude, A. Nies, L. Staiger, and F. Stephan posed the following question about
the relation between plain and prefix Kolmogorov complexities (see their paper
in DLT 2008 conference proceedings): does the domain of every optimal
decompressor contain the domain of some optimal prefix-free decompressor? In
this paper we provide a negative answer to this question.
|
1001.4475
|
X-Armed Bandits
|
cs.LG cs.SY math.OC math.ST stat.TH
|
We consider a generalization of stochastic bandits where the set of arms,
$\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function
is "locally Lipschitz" with respect to a dissimilarity function that is known
to the decision maker. Under this condition we construct an arm selection
policy, called HOO (hierarchical optimistic optimization), with improved regret
bounds compared to previous results for a large class of problems. In
particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a
Euclidean space and the mean-payoff function has a finite number of global
maxima around which the behavior of the function is locally continuous with a
known smoothness degree, then the expected regret of HOO is bounded up to a
logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is
independent of the dimension of the space. We also prove the minimax optimality
of our algorithm when the dissimilarity is a metric. Our basic strategy has
quadratic computational complexity as a function of the number of time steps
and does not rely on the doubling trick. We also introduce a modified strategy,
which relies on the doubling trick but runs in linearithmic time. Both results
are improvements with respect to previous approaches.
|
1001.4519
|
Communication in a Poisson Field of Interferers -- Part I: Interference
Distribution and Error Probability
|
cs.IT cs.NI math.IT
|
We present a mathematical model for communication subject to both network
interference and noise. We introduce a framework where the interferers are
scattered according to a spatial Poisson process, and are operating
asynchronously in a wireless environment subject to path loss, shadowing, and
multipath fading. We consider both cases of slow and fast-varying interferer
positions. The paper consists of two separate parts. In Part I, we
determine the distribution of the aggregate network interference at the output
of a linear receiver. We characterize the error performance of the link, in
terms of average and outage probabilities. The proposed model is valid for any
linear modulation scheme (e.g., M-ary phase shift keying or M-ary quadrature
amplitude modulation), and captures all the essential physical parameters that
affect network interference. Our work generalizes the conventional analysis of
communication in the presence of additive white Gaussian noise and fast fading,
allowing the traditional results to be extended to include the effect of
network interference. In Part II of the paper, we derive the capacity of the
link when subject to network interference and noise, and characterize the
spectrum of the aggregate interference.
|
1001.4520
|
Communication in a Poisson Field of Interferers -- Part II: Channel
Capacity and Interference Spectrum
|
cs.IT cs.NI math.IT
|
In Part I of this paper, we presented a mathematical model for communication
subject to both network interference and noise, where the interferers are
scattered according to a spatial Poisson process, and are operating
asynchronously in a wireless environment subject to path loss, shadowing, and
multipath fading. We determined the distribution of the aggregate interference
and the error performance of the link. In this second part, we characterize the
capacity of the link subject to both network interference and noise. Then, we
put forth the concept of spectral outage probability (SOP), a new
characterization of the aggregate radio-frequency emission generated by
communicating nodes in a wireless network. We present some applications of the
SOP, namely the establishment of spectral regulations and the design of covert
military networks. The proposed framework captures all the essential physical
parameters that affect the aggregate network emission, yet is simple enough to
provide insights that may be of value in the design and deployment of wireless
networks.
|
1001.4521
|
On the BICM Capacity
|
cs.IT math.IT
|
Optimal binary labelings, input distributions, and input alphabets are
analyzed for the so-called bit-interleaved coded modulation (BICM) capacity,
paying special attention to the low signal-to-noise ratio (SNR) regime. For
8-ary pulse amplitude modulation (PAM) and for 0.75 bit/symbol, the folded
binary code results in a higher capacity than the binary reflected Gray code
(BRGC) and the natural binary code (NBC). The 1 dB gap between the additive
white Gaussian noise (AWGN) capacity and the BICM capacity with the BRGC can be
almost completely removed if the input symbol distribution is properly
selected. First-order asymptotics of the BICM capacity for arbitrary input
alphabets and distributions, dimensions, mean, variance, and binary labeling
are developed. These asymptotics are used to define first-order optimal (FOO)
constellations for BICM, i.e. constellations that make BICM achieve the Shannon
limit $-1.59$ dB. It is shown that the $E_b/N_0$ required for reliable
transmission at asymptotically low rates in BICM can be as high as infinity,
that for uniform input distributions and 8-PAM there are only 72 classes of
binary labelings with a different first-order asymptotic behavior, and that
this number is reduced to only 26 for 8-ary phase shift keying (PSK). A general
answer to the question of FOO constellations for BICM is also given: using the
Hadamard transform, it is found that for uniform input distributions, a
constellation for BICM is FOO if and only if it is a linear projection of a
hypercube. A constellation based on PAM or quadrature amplitude modulation
input alphabets is FOO if and only if they are labeled by the NBC; if the
constellation is based on PSK input alphabets instead, it can never be FOO if
the input alphabet has more than four points, regardless of the labeling.
|
1001.4548
|
On the BICM Capacity
|
cs.IT math.IT
|
Optimal binary labelings, input distributions, and input alphabets are
analyzed for the so-called bit-interleaved coded modulation (BICM) capacity,
paying special attention to the low signal-to-noise ratio (SNR) regime. For
8-ary pulse amplitude modulation (PAM) and for 0.75 bit/symbol, the folded
binary code results in a higher capacity than the binary reflected Gray code
(BRGC) and the natural binary code (NBC). The 1 dB gap between the additive
white Gaussian noise (AWGN) capacity and the BICM capacity with the BRGC can be
almost completely removed if the input symbol distribution is properly
selected. First-order asymptotics of the BICM capacity for arbitrary input
alphabets and distributions, dimensions, mean, variance, and binary labeling
are developed. These asymptotics are used to define first-order optimal (FOO)
constellations for BICM, i.e., constellations that make BICM achieve the Shannon
limit $-1.59~\text{dB}$. It is shown that the $E_b/N_0$ required for reliable
transmission at asymptotically low rates in BICM can be as high as infinity,
that for uniform input distributions and 8-PAM there are only 72 classes of
binary labelings with a different first-order asymptotic behavior, and that
this number is reduced to only 26 for 8-ary phase shift keying (PSK). A general
answer to the question of FOO constellations for BICM is also given: using the
Hadamard transform, it is found that for uniform input distributions, a
constellation for BICM is FOO if and only if it is a linear projection of a
hypercube. A constellation based on PAM or quadrature amplitude modulation
input alphabets is FOO if and only if it is labeled by the NBC; if the
constellation is based on PSK input alphabets instead, it can never be FOO if
the input alphabet has more than four points, regardless of the labeling.
|
1001.4588
|
Interference Decoding for Deterministic Channels
|
cs.IT math.IT
|
An inner bound to the capacity region of a class of deterministic
interference channels with three user pairs is presented. The key idea is to
simultaneously decode the combined interference signal and the intended message
at each receiver. It is shown that this interference-decoding inner bound is
tight under certain strong interference conditions. The inner bound is also
shown to strictly contain the inner bound obtained by treating interference as
noise, which includes interference alignment for deterministic channels. The
gain comes from judicious analysis of the number of combined interference
sequences in different regimes of input distributions and message rates.
Finally, the inner bound is generalized to the case where each channel output
is observed through a noisy channel.
|
1001.4597
|
Learning to Blend by Relevance
|
cs.IR
|
Emergence of various vertical search engines highlights the fact that a
single ranking technology cannot deal with the complexity and scale of search
problems. For example, technology behind video and image search is very
different from general web search. Their ranking functions share few features.
Question answering websites (e.g., Yahoo! Answer) can make use of text matching
and click features developed for general web, but they have unique page
structures and rich user feedback, e.g., thumbs up and thumbs down ratings in
Yahoo! Answer, which greatly benefits their own ranking. Even for those features
shared by answer and general web, the correlation between features and
relevance could be very different. Therefore, dedicated functions are needed in
order to better rank documents within individual domains. These dedicated
functions are defined on distinct feature spaces. However, having one search
box for each domain is neither efficient nor scalable. Rather than typing the
same query twice into both Yahoo! Search and Yahoo! Answer and retrieving two
ranking lists, we would prefer to enter it once and receive a single
comprehensive list of documents from both domains on the subject. This
situation calls for a new technology that blends documents from different
sources into a single ranking list. Despite the content richness of the blended
list, it has to be sorted by relevance nonetheless. We call such technology
blending, which is the main subject of this paper.
|
1001.4689
|
Balancing Egoism and Altruism on the Interference Channel: The MIMO case
|
cs.IT math.IT
|
This paper considers the so-called MIMO interference channel. This situation
has relevance in applications such as multi-cell coordination in cellular
networks as well as spectrum sharing in cognitive radio networks among others.
We address the design of precoding (i.e. beamforming) vectors at each sender
with the aim of striking a compromise between beamforming gain at the intended
receiver (Egoism) and the mitigation of interference created towards other
receivers (Altruism). Combining egoistic and altruistic beamforming has been
shown previously to be instrumental to optimizing the rates in a MISO
interference channel (i.e., where receivers have no interference-canceling
capability). Here we explore these game-theoretic concepts in the more general
context of MIMO channels, using the framework of Bayesian games, which allows
us to derive (semi-)distributed precoding techniques. We draw parallels with
existing work on the MIMO interference channel, including rate-optimizing and
interference-alignment precoding techniques, showing how such techniques may
be improved and reinterpreted through a common prism based on balancing
egoistic and altruistic beamforming.
|
1001.4703
|
Neyman-Pearson Detection of a Gaussian Source using Dumb Wireless
Sensors
|
cs.IT math.IT
|
We investigate the performance of the Neyman-Pearson detection of a
stationary Gaussian process in noise, using a large wireless sensor network
(WSN). In our model, each sensor compresses its observation sequence using a
linear precoder. The final decision is taken by a fusion center (FC) based on
the compressed information. Two families of precoders are studied: random iid
precoders and orthogonal precoders. We analyse their performance in the regime
where both the number of sensors k and the number of samples n per sensor tend
to infinity at the same rate, that is, k/n tends to c in (0, 1). Contributions
are as follows. 1) Using results of random matrix theory and on large Toeplitz
matrices, it is proved that the miss probability of the Neyman-Pearson detector
converges exponentially to zero, when the above families of precoders are used.
Closed form expressions of the corresponding error exponents are provided. 2)
In particular, we propose a practical orthogonal precoding strategy, the
Principal Frequencies Strategy (PFS), which achieves the best error exponent
among all orthogonal strategies and requires very little signaling overhead
between the central processor and the nodes of the network. 3) Moreover, when
the PFS is used, a simplified low-complexity testing procedure can be
implemented at the FC. We show that the proposed suboptimal test enjoys the
same error exponent as the Neyman-Pearson test, which indicates a similar
asymptotic behaviour of the performance. We illustrate our findings by
numerical experiments on some examples.
|
1001.4737
|
Optimization of Planck/LFI on--board data handling
|
astro-ph.IM astro-ph.CO cs.IT math.IT
|
To assess stability against 1/f noise, the Low Frequency Instrument (LFI)
onboard the Planck mission will acquire data at a rate much higher than the
rate allowed by its telemetry bandwidth of 35.5 kbps. The data are processed by
an onboard pipeline, followed on ground by a reversing step. This paper
illustrates the LFI scientific onboard processing used to fit the allowed data
rate. This is a lossy process tuned by a set of 5 parameters Naver, r1, r2, q,
O for each of the 44 LFI detectors. The paper quantifies the level of
distortion introduced by the onboard processing, EpsilonQ, as a function of
these parameters, and describes the method of optimizing the onboard processing
chain. The tuning procedure is based on an optimization algorithm applied to
unprocessed and uncompressed raw data provided either by simulations, prelaunch
tests, or data taken from LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which ends with
optimized parameters and produces a set of statistical indicators, among them
the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr =
2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up
the process, an analytical model is developed that is able to extract most of
the relevant information on EpsilonQ and Cr as a function of the signal
statistics and the processing parameters. This model will be of interest for
the instrument data analysis. The method was applied during ground tests when
the instrument was operating in conditions representative of flight. Optimized
parameters were obtained and the performance verified: the required data rate
of 35.5 kbps was achieved while keeping EpsilonQ at 3.8% of the white noise
rms, well within the requirements.
|
1001.4739
|
Rate Region of the Gaussian Scalar-Help-Vector Source-Coding Problem
|
cs.IT math.IT
|
We determine the rate region of the Gaussian scalar-help-vector source-coding
problem under a covariance matrix distortion constraint. The rate region is
achieved by a Gaussian achievable scheme. We introduce a novel outer bounding
technique to establish the converse of the main result. Our approach is based
on lower bounding the problem with a potentially reduced dimensional problem by
projecting the main source and imposing the distortion constraint in certain
directions determined by the optimal Gaussian scheme. We also prove several
properties that the optimal solution to the point-to-point rate-distortion
problem for a vector Gaussian source under a covariance matrix distortion
constraint satisfies. These properties play an important role in our converse
proof. We further establish an outer bound to the rate region of the more
general problem in which there are distortion constraints on both sources.
The outer bound is partially tight in general. We also study its tightness in
some nontrivial cases.
|
1001.4880
|
The WebContent XML Store
|
cs.DB
|
In this article, we describe the XML storage system used in the WebContent
project. We begin by advocating the use of an XML database in order to store
WebContent documents, and we present two different ways of storing and querying
these documents : the use of a centralized XML database and the use of a P2P
XML database.
|
1001.4892
|
Janus: Automatic Ontology Builder from XSD Files
|
cs.DB cs.AI
|
The construction of a reference ontology for a large domain remains a hard
human task. The process is sometimes assisted by software tools that
facilitate information extraction from a textual corpus. Despite the
widespread use of XML Schema files on the Internet, and especially in the B2B
domain, tools that offer a complete semantic analysis of XML schemas are rare.
In this paper we introduce Janus, a tool for automatically building a reference
knowledge base starting from XML Schema files. Janus also provides different
useful views to simplify B2B application integration.
|
1001.4901
|
Deriving Ontologies from XML Schema
|
cs.PL cs.DB
|
In this paper, we present a method and a tool for deriving a skeleton of an
ontology from XML schema files. We first recall what an ontology is and its
relationship to XML schemas. Next, we focus on ontology building methodology
and associated tool requirements. Then, we introduce Janus, a tool for building
an ontology from various XML schemas in a given domain. We summarize the main
features of Janus and illustrate its functionalities through a simple example.
Finally, we compare our approach to other existing ontology building tools.
|
1001.4919
|
Johnson-Lindenstrauss lemma for circulant matrices
|
math.FA cs.IT math.IT
|
We prove a variant of a Johnson-Lindenstrauss lemma for matrices with
circulant structure. This approach minimises the randomness used, is easy to
implement, and provides good running times. The price to be paid is the
higher dimension of the target space $k=O(\epsilon^{-2}\log^3n)$ instead of the
classical bound $k=O(\epsilon^{-2}\log n)$.
|
1001.5007
|
Trajectory Clustering and an Application to Airspace Monitoring
|
cs.LG
|
This paper presents a framework aimed at monitoring the behavior of aircraft
in a given airspace. Nominal trajectories are determined and learned using data
driven methods. Standard procedures are used by air traffic controllers (ATC)
to guide aircraft, ensure the safety of the airspace, and to maximize the
runway occupancy. Even though standard procedures are used by ATC, the control
of the aircraft remains with the pilots, leading to a large variability in the
flight patterns observed. Two methods to identify typical operations and their
variability from recorded radar tracks are presented. This knowledge base is
then used to monitor the conformance of current operations against operations
previously identified as standard. A tool called AirTrajectoryMiner is
presented, which monitors the instantaneous health of the airspace in real
time. The airspace is "healthy" when all aircraft are flying according to
the nominal procedures. A measure of complexity is introduced, measuring the
conformance of current flights to nominal flight patterns. When an aircraft does
not conform, the complexity increases as more attention from ATC is required to
ensure a safe separation between aircraft.
|
1001.5016
|
Mapping the Geography of Science: Distribution Patterns and Networks of
Relations among Cities and Institutes
|
cs.DL cs.IR physics.soc-ph
|
Using Google Earth, Google Maps and/or network visualization programs such as
Pajek, one can overlay the network of relations among addresses in scientific
publications on the geographic map. We discuss the pros and cons of the various
options, and provide software (freeware) for bridging existing gaps between the
Science Citation Indices and Scopus, on the one side, and these various
visualization tools, on the other. At the level of city names, the global map
can be drawn reliably on the basis of the available address information. At the
level of the names of organizations and institutes, there are problems of
unification both in the ISI-databases and Scopus. Pajek enables us to combine
the visualization with statistical analysis, whereas Google Maps and its
derivatives provide superior tools on the Internet.
|
1001.5073
|
Sparse Recovery using Smoothed $\ell^0$ (SL0): Convergence Analysis
|
cs.IT math.IT
|
Finding the sparse solution of an underdetermined system of linear equations
has many applications; in particular, it is used in Compressed Sensing (CS),
Sparse Component Analysis (SCA), and sparse decomposition of signals on
overcomplete dictionaries. We have recently proposed a fast algorithm, called
Smoothed $\ell^0$ (SL0), for this task. Contrary to many other sparse recovery
algorithms, SL0 is not based on minimizing the $\ell^1$ norm, but it tries to
directly minimize the $\ell^0$ norm of the solution. The basic idea of SL0 is
to optimize a sequence of certain (continuous) cost functions approximating the
$\ell^0$ norm of a vector. However, in previous papers, we did not provide a
complete convergence proof for SL0. In this paper, we study the convergence
properties of SL0, and show that under a certain sparsity constraint in terms
of Asymmetric Restricted Isometry Property (ARIP), and with a certain choice of
parameters, the convergence of SL0 to the sparsest solution is guaranteed.
Moreover, we study the complexity of SL0, and show that, as the dimension of
the dictionary grows, the complexity of SL0 increases with the same order as
that of Matching Pursuit (MP), which is one of the fastest existing
sparse recovery methods, while contrary to MP, its convergence to the sparsest
solution is guaranteed under certain conditions which are satisfied through the
choice of parameters.
|
1001.5074
|
Computing coset leaders of binary codes
|
cs.IT math.IT
|
We present an algorithm for computing the set of all coset leaders of a
binary code $\mathcal C \subset \mathbb{F}_2^n$. The method is adapted from
some of the techniques related to the computation of Gr\"obner representations
associated with codes. The algorithm provides a Gr\"obner representation of the
binary code and the set of coset leaders $\mathrm{CL}(\mathcal C)$. Its
efficiency stems from the fact that its complexity is linear in the number of
elements of $\mathrm{CL}(\mathcal C)$, which is smaller than that of an
exhaustive search in $\mathbb{F}_2^n$.
|
1001.5079
|
An Optimal Family of Exponentially Accurate One-Bit Sigma-Delta
Quantization Schemes
|
cs.IT math.CA math.IT
|
Sigma-Delta modulation is a popular method for analog-to-digital conversion
of bandlimited signals that employs coarse quantization coupled with
oversampling. The standard mathematical model for the error analysis of the
method measures the performance of a given scheme by the rate at which the
associated reconstruction error decays as a function of the oversampling ratio
$\lambda$. It was recently shown that exponential accuracy of the form
$O(2^{-r\lambda})$ can be achieved by appropriate one-bit Sigma-Delta
modulation schemes. By general information-entropy arguments $r$ must be less
than 1. The current best known value for $r$ is approximately 0.088. The
schemes that were designed to achieve this accuracy employ the "greedy"
quantization rule coupled with feedback filters that fall into a class we call
"minimally supported". In this paper, we study the minimization problem that
corresponds to optimizing the error decay rate for this class of feedback
filters. We solve a relaxed version of this problem exactly and provide
explicit asymptotics of the solutions. From these relaxed solutions, we find
asymptotically optimal solutions of the original problem, which improve the
best known exponential error decay rate to $r \approx 0.102$. Our method draws
from the theory of orthogonal polynomials; in particular, it relates the
optimal filters to the zero sets of Chebyshev polynomials of the second kind.
|
1001.5100
|
On Exponential Sums, Newton identities and Dickson Polynomials over
Finite Fields
|
cs.IT math.IT
|
Let $\mathbb{F}_{q}$ be a finite field, $\mathbb{F}_{q^s}$ be an extension of
$\mathbb{F}_q$, let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$
with $\gcd(n,q)=1$. We present a recursive formula for evaluating the
exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(c))$. Let $a$ and $b$
be two elements in $\mathbb{F}_q$ with $a\neq 0$, $u$ be a positive integer. We
obtain an estimate for the exponential sum $\sum_{c\in
\mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting
of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the
sequences constructed from these exponential sums are also provided.
|
1001.5113
|
Worst Configurations (Instantons) for Compressed Sensing over Reals: a
Channel Coding Approach
|
cs.IT math.IT
|
We consider the Linear Programming (LP) solution of the Compressed Sensing
(CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. The
BasP allows interpretation as a channel-coding problem, and it guarantees
error-free reconstruction with a properly chosen measurement matrix and
sufficiently sparse error vectors. In this manuscript, we examine how the BasP
performs on a given measurement matrix and develop an algorithm to discover the
sparsest vectors for which the BasP fails. The resulting algorithm is a
generalization of our previous results on finding the most probable
error-patterns degrading performance of a finite size Low-Density Parity-Check
(LDPC) code in the error-floor regime. The BasP fails when its output is
different from the actual error-pattern. We design a CS-Instanton Search
Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that
the BasP fails on the CS-instanton, while the BasP recovery is successful for
any modification of the CS-instanton replacing a nonzero element by zero. We
also prove that, given a sufficiently dense random input for the error-vector,
the CS-ISA converges to an instanton in a small finite number of steps. The
performance of the CS-ISA is illustrated on a randomly generated $120\times
512$ matrix. For this example, the CS-ISA outputs the shortest instanton (error
vector) pattern of length 11.
|
1001.5130
|
BOOST: A fast approach to detecting gene-gene interactions in
genome-wide case-control studies
|
q-bio.GN cs.CE q-bio.QM
|
Gene-gene interactions have long been recognized to be fundamentally
important to understand genetic causes of complex disease traits. At present,
identifying gene-gene interactions from genome-wide case-control studies is
computationally and methodologically challenging. In this paper, we introduce a
simple but powerful method, named `BOolean Operation based Screening and
Testing' (BOOST). To discover unknown gene-gene interactions that underlie
complex diseases, BOOST allows examining all pairwise interactions in
genome-wide case-control studies in a remarkably fast manner. We have carried
out interaction analyses on seven data sets from the Wellcome Trust Case
Control Consortium (WTCCC). Each analysis took less than 60 hours on a standard
3.0 GHz desktop with 4 GB of memory running Windows XP. The interaction
patterns identified from the type 1 diabetes data set differ significantly
from those identified from the rheumatoid arthritis data set, while
both data sets share a very similar hit region in the WTCCC report. BOOST has
also identified many undiscovered interactions between genes in the major
histocompatibility complex (MHC) region in the type 1 diabetes data set. In the
coming era of large-scale interaction mapping in genome-wide case-control
studies, our method can serve as a computationally and statistically useful
tool.
|
1001.5135
|
An efficient CDMA decoder for correlated information sources
|
cs.IT cond-mat.stat-mech math.IT
|
We consider the detection of correlated information sources in the ubiquitous
Code-Division Multiple-Access (CDMA) scheme. We propose a message-passing based
scheme for detecting correlated sources directly, with no need for source
coding. The detection is done simultaneously over a block of transmitted binary
symbols (word). Simulation results are provided demonstrating a substantial
improvement in bit-error-rate in comparison with the unmodified detector and
the alternative of source compression. The robustness of the error-performance
improvement is shown under practical model settings, including wrong estimation
of the generating Markov transition matrix and finite-length spreading codes.
|
1001.5178
|
A Matroid Framework for Noncoherent Random Network Communications
|
cs.IT cs.NI math.CO math.IT
|
Models for noncoherent error control in random linear network coding (RLNC)
and store and forward (SAF) have been recently proposed. In this paper, we
model different types of random network communications as the transmission of
flats of matroids. This novel framework encompasses RLNC and SAF and allows us
to introduce a novel protocol, referred to as random affine network coding
(RANC), based on affine combinations of packets. Although the models previously
proposed for RLNC and SAF only consider error control, using our framework, we
first evaluate and compare the performance of different network protocols in
the error-free case. We define and determine the rate, average delay, and
throughput of such protocols, and we also investigate the possibilities of
partial decoding before the entire message is received. We thus show that RANC
outperforms RLNC in terms of data rate and throughput thanks to a more
efficient encoding of messages into packets. Second, we model the possible
alterations of a message by the network as an operator channel, which
generalizes the channels proposed for RLNC and SAF. Error control is thus
reduced to a coding-theoretic problem on flats of a matroid, where two distinct
metrics can be used for error correction. We study the maximum cardinality of
codes on flats in general, and codes for error correction in RANC in
particular. We finally design a class of nearly optimal codes for RANC based on
rank metric codes for which we propose a low-complexity decoding algorithm. The
gain of RANC over RLNC is thus preserved with no additional cost in terms of
complexity.
|
1001.5244
|
Computing Networks: A General Framework to Contrast Neural and Swarm
Cognitions
|
cs.NE cs.AI nlin.AO
|
This paper presents the Computing Networks (CNs) framework. CNs are used to
generalize neural and swarm architectures. Artificial neural networks, ant
colony optimization, particle swarm optimization, and realistic biological
models are used as examples of instantiations of CNs. The description of these
architectures as CNs allows their comparison. Their differences and
similarities allow the identification of properties that enable neural and
swarm architectures to perform complex computations and exhibit complex
cognitive abilities. In this context, the most relevant characteristic of CNs
is the existence of multiple dynamical and functional scales. The relationship
between multiple dynamical and functional scales with adaptation, cognition (of
brains and swarms) and computation is discussed.
|
1001.5275
|
An Agent-Based Modeling for Pandemic Influenza in Egypt
|
cs.MA cs.CY
|
Pandemic influenza has great potential to cause large and rapid increases in
deaths and serious illness. The objective of this paper is to develop an
agent-based model to simulate the spread of pandemic influenza (novel H1N1) in
Egypt. The proposed multi-agent model is based on the modeling of individuals'
interactions in a space time context. The proposed model involves different
types of parameters such as: social agent attributes, distribution of Egypt
population, and patterns of agents' interactions. Analysis of modeling results
leads to understanding the characteristics of the modeled pandemic,
transmission patterns, and the conditions under which an outbreak might occur.
In addition, the proposed model is used to measure the effectiveness of
different control strategies for containing the pandemic spread.
|
1001.5311
|
Distilled Sensing: Adaptive Sampling for Sparse Detection and Estimation
|
math.ST cs.IT math.IT stat.ML stat.TH
|
Adaptive sampling results in dramatic improvements in the recovery of sparse
signals in white Gaussian noise. A sequential adaptive sampling-and-refinement
procedure called Distilled Sensing (DS) is proposed and analyzed. DS is a form
of multi-stage experimental design and testing. Because of the adaptive nature
of the data collection, DS can detect and localize far weaker signals than
possible from non-adaptive measurements. In particular, reliable detection and
localization (support estimation) using non-adaptive samples is possible only
if the signal amplitudes grow logarithmically with the problem dimension. Here
it is shown that using adaptive sampling, reliable detection is possible
provided the amplitude exceeds a constant, and localization is possible when
the amplitude exceeds any arbitrarily slowly growing function of the dimension.
|
1001.5319
|
Communicating the sum of sources over a network
|
cs.IT cs.NI math.IT
|
We consider the network communication scenario, over directed acyclic
networks with unit capacity edges in which a number of sources $s_i$ each
holding independent unit-entropy information $X_i$ wish to communicate the sum
$\sum{X_i}$ to a set of terminals $t_j$. We show that in the case in which
there are only two sources or only two terminals, communication is possible if
and only if each source terminal pair $s_i/t_j$ is connected by at least a
single path. For the more general communication problem in which there are
three sources and three terminals, we prove that a single path connecting the
source terminal pairs does not suffice to communicate $\sum{X_i}$. We then
present an efficient encoding scheme which enables the communication of
$\sum{X_i}$ for the three sources, three terminals case, given that each source
terminal pair is connected by {\em two} edge disjoint paths.
|
1001.5336
|
Asymptotic Capacity of Large Fading Relay Networks with Random Node
Failures
|
cs.IT math.IT
|
To understand the network response to large-scale physical attacks, we
investigate the asymptotic capacity of a half-duplex fading relay network with
random node failures when the number of relays $N$ is infinitely large. In this
paper, a simplified independent attack model is assumed where each relay node
fails with a certain probability. The noncoherent relaying scheme is
considered, which corresponds to the case of zero forward-link channel state
information (CSI) at the relays. Accordingly, the whole relay network can be
shown to be equivalent to a Rayleigh fading channel, where we derive the
$\epsilon$-outage capacity upper bound according to the multiple access (MAC)
cut-set, and the $\epsilon$-outage achievable rates for both the
amplify-and-forward (AF) and decode-and-forward (DF) strategies. Furthermore,
we show that the DF strategy is asymptotically optimal as the outage
probability $\epsilon$ goes to zero, with the AF strategy strictly suboptimal
over all signal-to-noise ratio (SNR) regimes. Regarding the rate loss due to
random attacks, the AF strategy suffers a smaller rate loss than the DF
strategy in the high SNR regime, while the DF strategy demonstrates more robust
performance in the low SNR regime.
|
1001.5348
|
Performance Comparisons of PSO based Clustering
|
cs.NE cs.LG
|
In this paper we have investigated the performance of Particle Swarm
Optimization (PSO) based clustering on a few real-world data sets and one
artificial data set. The performance is measured by two metrics, namely
quantization error and inter-cluster distance. The K-means clustering
algorithm is first implemented for all data sets, the results of which form
the basis of comparison for the PSO based approaches. We have explored
different variants of PSO, such as gbest, lbest ring, lbest von Neumann, and
hybrid PSO, for comparison purposes. The results reveal that PSO based
clustering algorithms perform better than K-means on all data sets.
|
1001.5352
|
Kannada Character Recognition System: A Review
|
cs.CV
|
Intensive research has been done on optical character recognition (OCR), and a
large number of articles have been published on this topic during the last few
decades. Many commercial OCR systems are now available in the market, but most
of these systems work for Roman, Chinese, Japanese, and Arabic characters.
There is not yet a sufficient body of work on Indian language character
recognition, especially for the Kannada script, one of the 12 major scripts in
India. This paper presents a review of existing work on printed Kannada script
and its results. The characteristics of the Kannada script and the Kannada
Character Recognition System (KCR) are discussed in detail. Finally, fusion at
the classifier level is proposed to increase the recognition accuracy.
|
1001.5359
|
Threshold Based Indexing of Commercial Shoe Print to Create Reference
and Recovery Images
|
cs.CV
|
One important piece of crime scene evidence that is often overlooked is the
shoe print, since the criminal is normally unaware of the need to mask it. In
this paper we use image processing techniques to process reference shoe images
so that they can be indexed for a search against a database of shoe print
impressions available on the commercial market. This is achieved by first
converting the commercially available images to gray scale, then applying
image enhancement and restoration techniques, and finally performing image
segmentation to store the segmented parameter as an index in the database. We
use the histogram method for image enhancement, inverse filtering for image
restoration, and the threshold method for indexing; the global threshold
serves as the index of the shoe print. The paper describes this method, and
simulation results are included to validate it.
|
1001.5364
|
MIMO Detection for High-Order QAM Based on a Gaussian Tree Approximation
|
cs.IT math.IT
|
This paper proposes a new detection algorithm for MIMO communication systems
employing high order QAM constellations. The factor graph that corresponds to
this problem is very loopy; in fact, it is a complete graph. Hence, a
straightforward application of the Belief Propagation (BP) algorithm yields
very poor results. Our algorithm is based on an optimal tree approximation of
the Gaussian density of the unconstrained linear system. The finite-set
constraint is then applied to obtain a loop-free discrete distribution. It is
shown that even though the approximation is not directly applied to the exact
discrete distribution, applying the BP algorithm to the loop-free factor graph
outperforms current methods in terms of both performance and complexity. The
improved performance of the proposed algorithm is demonstrated on the problem
of MIMO detection.
|
1001.5421
|
A note on evolutionary stochastic portfolio optimization and
probabilistic constraints
|
q-fin.PM cs.CE cs.NE
|
In this note, we extend an evolutionary stochastic portfolio optimization
framework to include probabilistic constraints. Both the stochastic
programming-based modeling environment as well as the evolutionary optimization
environment are ideally suited for an integration of various types of
probabilistic constraints. We show an approach on how to integrate these
constraints. Numerical results using recent financial data substantiate the
applicability of the presented approach.
|
1001.5454
|
Non-Equilibrium Statistical Physics of Currents in Queuing Networks
|
cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT math.PR
|
We consider a stable open queuing network as a steady non-equilibrium system
of interacting particles. The network is completely specified by its underlying
graphical structure, type of interaction at each node, and the Markovian
transition rates between nodes. For such systems, we ask the question ``What is
the most likely way for large currents to accumulate over time in a
network?'', where time is large compared to the system correlation time scale. We
identify two interesting regimes. In the first regime, in which the
accumulation of currents over time exceeds the expected value by a small to
moderate amount (moderate large deviation), we find that the large-deviation
distribution of currents is universal (independent of the interaction details),
and there is no long-time, time-averaged accumulation of particles
(condensation) at any node. In the second regime, in which the accumulation of
currents over time exceeds the expected value by a large amount (severe large
deviation), we find that the large-deviation current distribution is sensitive
to interaction details, and there is a long-time accumulation of particles
(condensation) at some nodes. The transition between the two regimes can be
described as a dynamical second order phase transition. We illustrate these
ideas using the simple, yet non-trivial, example of a single node with
feedback.
|
1002.0007
|
Curvature based triangulation of metric measure spaces
|
math.DG cs.IT math.CV math.IT
|
We prove that a Ricci curvature based method of triangulation of compact
Riemannian manifolds, due to Grove and Petersen, extends to the context of
weighted Riemannian manifolds and more general metric measure spaces. In both
cases the role of the lower bound on Ricci curvature is replaced by the
curvature-dimension condition ${\rm CD}(K,N)$. We show also that for weighted
Riemannian manifolds the triangulation can be improved to become a thick one
and that, in consequence, such manifolds admit weight-sensitive
quasimeromorphic mappings. An application of this last result to information
manifolds is considered.
Furthermore, we extend to weak ${\rm CD}(K,N)$ spaces the results of Kanai
regarding the discretization of manifolds, and show that the volume growth of
such a space is the same as that of any of its discretizations.
|
1002.0019
|
Regularized Modified BPDN for Noisy Sparse Reconstruction with Partial
Erroneous Support and Signal Value Knowledge
|
cs.IT math.IT
|
We study the problem of sparse reconstruction from noisy undersampled
measurements when the following two things are available. (1) We are given
partial, and partly erroneous, knowledge of the signal's support, denoted by
$T$. (2) We are also given an erroneous estimate of the signal values on $T$,
denoted by $(\hat{\mu})_T$. In practice, both may be available from prior
knowledge. Alternatively, in recursive reconstruction
applications, like real-time dynamic MRI, one can use the support estimate and
the signal value estimate from the previous time instant as $T$ and
$(\hat{\mu})_T$. In this work, we introduce regularized modified-BPDN
(reg-mod-BPDN) and obtain computable bounds on its reconstruction error.
Reg-mod-BPDN tries to find the signal that is sparsest outside the set $T$,
while being "close enough" to $(\hat{\mu})_T$ on $T$ and while satisfying the
data constraint. Corresponding results for modified-BPDN and BPDN follow as
direct corollaries. A second key contribution is an approach to obtain
computable error bounds that hold without any sufficient conditions. This makes
it easy to compare the bounds for the various approaches. Empirical
reconstruction error comparisons with many existing approaches are also
provided.
|
1002.0026
|
Perfect Z2Z4-linear codes in Steganography
|
cs.IT cs.CR math.IT
|
Steganography is an information hiding application which aims to hide secret
data imperceptibly into a commonly used media. Unfortunately, the theoretical
hiding asymptotical capacity of steganographic systems is not attained by
algorithms developed so far. In this paper, we describe a novel coding method
based on Z2Z4-linear codes that conforms to +/-1-steganography; that is, secret
data are embedded into a cover message by distorting each symbol by at most one
unit. This method solves some problems encountered by the most efficient
methods known today, based on ternary Hamming codes. Finally, the performance
of this new technique is compared with that of the mentioned methods and with
the well-known theoretical upper bound.
|
1002.0043
|
A Rate-Distortion Exponent Approach to Multiple Decoding Attempts for
Reed-Solomon Codes
|
cs.IT math.IT
|
Algorithms based on multiple decoding attempts of Reed-Solomon (RS) codes
have recently attracted new attention. Choosing decoding candidates based on
rate-distortion (R-D) theory, as proposed previously by the authors, currently
provides the best performance-versus-complexity trade-off. In this paper, an
analysis based on the rate-distortion exponent (RDE) is used to directly
minimize the exponential decay rate of the error probability. This enables
rigorous bounds on the error probability for finite-length RS codes and leads
to modest performance gains. As a byproduct, a numerical method is derived that
computes the rate-distortion exponent for independent non-identical sources.
Analytical results are given for errors/erasures decoding.
|
1002.0097
|
On the Construction of Prefix-Free and Fix-Free Codes with Specified
Codeword Compositions
|
cs.IT math.IT
|
We investigate the construction of prefix-free and fix-free codes with
specified codeword compositions. We present a polynomial time algorithm which
constructs a fix-free code with the same codeword compositions as a given code
for a special class of codes called distinct codes. We consider the
construction of optimal fix-free codes, which minimize the average codeword
cost under general letter costs with a uniform distribution over the
codewords, and present an approximation algorithm to find a near-optimal
fix-free code with a given constant cost.
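As a small illustration of the objects involved (not the paper's construction
algorithm), the sketch below checks whether a binary code is fix-free, i.e.
simultaneously prefix-free and suffix-free, and computes its average cost
under general letter costs with a uniform codeword distribution; the example
code and letter costs are hypothetical:

```python
def is_prefix_free(code):
    """No codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

def is_fix_free(code):
    """Fix-free: no codeword is a prefix or a suffix of another codeword.
    Suffix-freeness of `code` equals prefix-freeness of the reversed words."""
    return is_prefix_free(code) and is_prefix_free([w[::-1] for w in code])

def average_cost(code, letter_cost):
    """Average codeword cost under a uniform distribution over codewords."""
    return sum(sum(letter_cost[c] for c in w) for w in code) / len(code)

code = ["01", "10", "000", "111"]        # hypothetical fix-free binary code
cost = average_cost(code, {"0": 1.0, "1": 1.5})  # unequal letter costs
```

Note that prefix-freeness alone is not enough: {"10", "110"} is prefix-free
but "10" is a suffix of "110", so it is not fix-free.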
|
1002.0102
|
$\alpha$-Discounting Multi-Criteria Decision Making ($\alpha$-D MCDM)
|
cs.AI
|
In this book we introduce a new procedure called the \alpha-Discounting Method
for Multi-Criteria Decision Making (\alpha-D MCDM), which is an alternative to,
and extension of, Saaty's Analytical Hierarchy Process (AHP). It works for any
number of preferences that can be transformed into a system of homogeneous
linear equations. A degree of consistency (and implicitly a degree of
inconsistency) of a decision-making problem is defined. \alpha-D MCDM is
afterwards generalized to a set of preferences that can be transformed into a
system of linear and/or non-linear, homogeneous and/or non-homogeneous,
equations and/or inequalities. The general idea of \alpha-D MCDM is to assign
non-null positive parameters \alpha_1, \alpha_2, ..., \alpha_p to the
coefficients in the right-hand side of each preference, diminishing or
increasing them in order to transform the above linear homogeneous system of
equations, which has only the null solution, into a system having a particular
non-null solution. After finding the general solution of this system, the
principles used to assign particular values to all parameters \alpha form the
second important part of \alpha-D, yet to be investigated more deeply in the
future. In the current book we propose the Fairness Principle, i.e., each
coefficient should be discounted by the same percentage (we think this is
fair: not showing favoritism or unfairness to any coefficient), but the reader
may propose other principles. For consistent decision-making problems with
pairwise comparisons, the \alpha-Discounting Method together with the Fairness
Principle gives the same result as AHP. But for weakly inconsistent
decision-making problems, \alpha-Discounting together with the Fairness
Principle gives a different result from AHP. Many consistent, weakly
inconsistent, and strongly inconsistent examples are given in this book.
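A tiny worked instance of the Fairness Principle (an example constructed here
for illustration, not taken from the book): with preferences x = 2y, y = 3z,
x = 5z, the system is weakly inconsistent (2·3 ≠ 5). Discounting every
right-hand-side coefficient by the same factor α gives x = 2αy, y = 3αz,
x = 5αz, and consistency of the chained and direct preferences requires
(2α)(3α) = 5α, i.e. α = 5/6:

```python
# Preferences: x = 2y, y = 3z, x = 5z (weakly inconsistent: 2*3 != 5).
# Fairness Principle: discount every RHS coefficient by the same alpha.
# Consistency of the discounted system requires (2a)(3a) = 5a, so a = 5/6.
alpha = 5 / 6

z = 1.0                 # free variable of the homogeneous system
y = 3 * alpha * z
x = 5 * alpha * z

# The chained preference now agrees with the direct one:
assert abs(x - 2 * alpha * y) < 1e-12

# Normalized priority vector for the criteria (x, y, z):
total = x + y + z
priorities = [x / total, y / total, z / total]
```

The discounted system has a one-parameter family of non-null solutions, and
normalizing any of them yields the priority vector.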
|
1002.0108
|
Genetic algorithm for robotic telescope scheduling
|
cs.AI astro-ph.IM
|
This work was inspired by the author's experience with telescope scheduling.
The author's long-term goal is to develop and further extend software for an
autonomous observatory. The software shall provide users with all the
facilities they need to take scientific images of the night sky, cooperate with
other autonomous observatories, and possibly more. This work shows how a
genetic algorithm can be used to schedule a single observatory, as well as a
network of observatories.
|
1002.0110
|
On Unbiased Estimation of Sparse Vectors Corrupted by Gaussian Noise
|
cs.IT math.IT
|
We consider the estimation of a sparse parameter vector from measurements
corrupted by white Gaussian noise. Our focus is on unbiased estimation as a
setting under which the difficulty of the problem can be quantified
analytically. We show that there are infinitely many unbiased estimators but
none of them has uniformly minimum mean-squared error. We then provide lower
and upper bounds on the Barankin bound, which describes the performance
achievable by unbiased estimators. These bounds are used to predict the
threshold region of practical estimators.
|
1002.0123
|
Achievable rate regions and outer bounds for a multi-pair bi-directional
relay network
|
cs.IT math.IT
|
In a bi-directional relay channel, a pair of nodes wish to exchange
independent messages over a shared wireless half-duplex channel with the help
of relays. Recent work has mostly considered information theoretic limits of
the bi-directional relay channel with two terminal nodes (or end users) and one
relay. In this work we consider bi-directional relaying with one base station,
multiple terminal nodes and one relay, all of which operate in half-duplex
modes. We assume that each terminal node communicates with the base station in
a bi-directional fashion through the relay, and we place no restrictions on
the channels between the users, relay and base station; that is, each node
has a direct link with every other node.
Our contributions are three-fold: 1) the introduction of four new temporal
protocols which fully exploit the two-way nature of the data and outperform
simple routing or multi-hop communication schemes by carefully combining
network coding, random binning and user cooperation which exploit over-heard
and own-message side information, 2) derivations of inner and outer bounds on
the capacity region of the discrete-memoryless multi-pair two-way network, and
3) a numerical evaluation of the obtained achievable rate regions and outer
bounds in Gaussian noise, which illustrates the performance of the proposed
protocols compared to simpler schemes, to each other and to the outer bounds,
and highlights the relative gains achieved by network coding, random binning
and compress-and-forward-type cooperation between terminal nodes.
|
1002.0134
|
Constraint solvers: An empirical evaluation of design decisions
|
cs.AI cs.PF
|
This paper presents an evaluation of the design decisions made in four
state-of-the-art constraint solvers; Choco, ECLiPSe, Gecode, and Minion. To
assess the impact of design decisions, instances of the five problem classes
n-Queens, Golomb Ruler, Magic Square, Social Golfers, and Balanced Incomplete
Block Design are modelled and solved with each solver. The results of the
experiments are not meant to give an indication of the performance of a solver,
but rather to investigate what influence the choice of algorithms and data
structures has.
The analysis of the impact of the design decisions focuses on the different
ways of memory management, behaviour with increasing problem size, and
specialised algorithms for specific types of variables. It also briefly
considers other, less significant decisions.
|
1002.0136
|
Dominion -- A constraint solver generator
|
cs.AI
|
This paper proposes a design for a system to generate constraint solvers that
are specialised for specific problem models. It describes the design in detail
and gives preliminary experimental results showing the feasibility and
effectiveness of the approach.
|
1002.0139
|
Extraction of Flat and Nested Data Records from Web Pages
|
cs.DB
|
This paper studies the problem of identification and extraction of flat and
nested data records from a given web page. With the explosive growth of
information sources available on the World Wide Web, it has become increasingly
difficult to identify the relevant pieces of information, since web pages are
often cluttered with irrelevant content like advertisements, navigation-panels,
copyright notices etc., surrounding the main content of the web page. Hence, it
is useful to mine such data regions and data records in order to extract
information from such web pages to provide value-added services. Currently
available automatic techniques to mine data regions and data records from web
pages are still unsatisfactory because of their poor performance. In this paper
a novel method to automatically identify and extract the flat and nested data
records from web pages is proposed. It comprises two steps: (1)
identification and extraction of the data regions based on visual clue
information; (2) identification and extraction of flat and nested data records
from the data region of a web page. For step 1, a novel and more
effective method is proposed, which finds the data regions formed by all types
of tags using visual clues. For step 2, a more effective and efficient method,
namely Visual Clue based Extraction of web Data (VCED), is proposed, which
extracts each record from the data region and identifies whether it is a
flat or a nested data record based on visual clue information: the area covered
by, and the number of data items present in, each record. Our experimental results
show that the proposed technique is effective and better than existing
techniques.
|
1002.0169
|
Moment-Based Analysis of Synchronization in Small-World Networks of
Oscillators
|
cs.MA cs.CE cs.DM nlin.AO
|
In this paper, we investigate synchronization in a small-world network of
coupled nonlinear oscillators. This network is constructed by introducing
random shortcuts in a nearest-neighbors ring. The local stability of the
synchronous state is closely related to the support of the eigenvalue
distribution of the Laplacian matrix of the network. We introduce, for the
first time, analytical expressions for the first three moments of the
eigenvalue distribution of the Laplacian matrix as a function of the
probability of shortcuts and the connectivity of the underlying
nearest-neighbor coupled ring. We apply these expressions to estimate the
spectral support of the Laplacian matrix in order to predict synchronization in
small-world networks. We verify the efficiency of our predictions with
numerical simulations.
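Since the k-th moment of the Laplacian eigenvalue distribution equals
(1/n) tr(L^k), the analytical expressions can be checked numerically without
an eigendecomposition. A minimal pure-Python sketch (the 6-node ring with one
shortcut is an illustrative example, not taken from the paper):

```python
def laplacian(adj):
    """Graph Laplacian L = D - A from a 0/1 adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_moments(adj, kmax=3):
    """m_k = tr(L^k)/n for k = 1..kmax: the k-th moments of the
    Laplacian eigenvalue distribution."""
    n = len(adj)
    L = laplacian(adj)
    P, moments = L, []
    for _ in range(kmax):
        moments.append(sum(P[i][i] for i in range(n)) / n)
        P = matmul(P, L)
    return moments

# 6-node nearest-neighbor ring with one shortcut (0 <-> 3).
ring = [[0] * 6 for _ in range(6)]
for i in range(6):
    ring[i][(i + 1) % 6] = ring[(i + 1) % 6][i] = 1
ring[0][3] = ring[3][0] = 1
m1, m2, m3 = spectral_moments(ring)
```

Here m1 is the mean degree and m2 equals (Σ d_i² + Σ d_i)/n, which such
closed-form expressions can be compared against.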
|
1002.0170
|
Spectral Analysis of Virus Spreading in Random Geometric Networks
|
cs.MA cs.CE cs.DM nlin.AO
|
In this paper, we study the dynamics of a viral spreading process in random
geometric graphs (RGG). The spreading of the viral process considered in this
paper is closely related to the eigenvalues of the adjacency matrix of the
graph. We deduce new explicit expressions for all the moments of the eigenvalue
distribution of the adjacency matrix as a function of the spatial density of
nodes and the radius of connection. We apply these expressions to study the
behavior of the viral infection in an RGG. Based on our results, we deduce an
analytical condition that can be used to design RGGs in order to tame an
initial viral infection. Numerical simulations are in accordance with our
analytical predictions.
|
1002.0177
|
Logical Evaluation of Consciousness: For Incorporating Consciousness
into Machine Architecture
|
cs.AI
|
Machine consciousness is the study of consciousness from a biological,
philosophical, mathematical and physical perspective, and the design of a
model that can fit into a programmable system architecture. The prime
objective of the study is to make the system architecture behave consciously,
as a biological model does. The present work develops a feasible definition of
consciousness that characterizes consciousness with four parameters, i.e.,
parasitic, symbiotic, self-referral and reproduction. It also develops a
biologically inspired consciousness architecture with the following layers:
quantum layer, cellular layer, organ layer and behavioral layer, and traces
the characteristics of consciousness at each layer. Finally, the work
estimates a physical and algorithmic architecture for devising a system that
can behave consciously.
|
1002.0179
|
B\'{e}zout Identities Associated to a Finite Sequence
|
cs.IT cs.SC math.IT
|
We consider finite sequences $s\in D^n$ where $D$ is a commutative, unital,
integral domain. We prove three sets of identities (possibly with repetitions),
each involving $2n$ polynomials associated to $s$. The right-hand side of these
identities is a recursively-defined (non-zero) 'product-of-discrepancies'.
There are implied iterative algorithms (of quadratic complexity) for the
left-hand side coefficients; when the ground domain is factorial, the
identities are in effect B\'ezout identities.
We give a number of applications: an algorithm to compute B\'ezout
coefficients over a field; the outputs of the Berlekamp-Massey algorithm;
sequences with perfect linear complexity profile; annihilating polynomials
which do not vanish at zero and have minimal degree: we simplify and extend an
algorithm of Salagean to sequences over $D$. In the Appendix, we give a new
proof of a theorem of Imamura and Yoshida on the linear complexity of reverse
sequences, initially proved using Hankel matrices over a field and now valid
for sequences over a factorial domain.
|
1002.0182
|
Sobolev Duals for Random Frames and Sigma-Delta Quantization of
Compressed Sensing Measurements
|
cs.IT math.IT
|
Quantization of compressed sensing measurements is typically justified by the
robust recovery results of Cand\`es, Romberg and Tao, and of Donoho. These
results guarantee that if a uniform quantizer of step size $\delta$ is used to
quantize $m$ measurements $y = \Phi x$ of a $k$-sparse signal $x \in \R^N$,
where $\Phi$ satisfies the restricted isometry property, then the approximate
recovery $x^\#$ via $\ell_1$-minimization is within $O(\delta)$ of $x$. The
simplest and commonly assumed approach is to quantize each measurement
independently. In this paper, we show that if instead an $r$th order
$\Sigma\Delta$ quantization scheme with the same output alphabet is used to
quantize $y$, then there is an alternative recovery method via Sobolev dual
frames which guarantees a reduction of the approximation error by a factor of
$(m/k)^{(r-1/2)\alpha}$ for any $0 < \alpha < 1$, if $m \gtrsim_r k (\log
N)^{1/(1-\alpha)}$. The result holds with high probability on the initial draw
of the measurement matrix $\Phi$ from the Gaussian distribution, and uniformly
for all $k$-sparse signals $x$ that satisfy a mild size condition on their
supports.
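For intuition, a first-order ΣΔ quantizer with a uniform alphabet replaces
independent rounding of each measurement with a recursion that carries the
running quantization error forward. The sketch below is a generic illustration
of that recursion (with hypothetical measurements), not the paper's recovery
method:

```python
def sigma_delta_1(y, delta):
    """First-order Sigma-Delta quantization to the uniform grid delta*Z.
    State recursion: u[i] = u[i-1] + y[i] - q[i], with
    q[i] = the grid point nearest to u[i-1] + y[i]."""
    u, q = 0.0, []
    for yi in y:
        qi = delta * round((u + yi) / delta)
        u = u + yi - qi          # accumulated quantization error
        q.append(qi)
    return q

y = [0.37, -0.11, 0.52, 0.08, -0.29]   # hypothetical measurements
q = sigma_delta_1(y, delta=0.5)
# Rounding to the nearest grid point keeps |u| <= delta/2, so partial sums
# of q track partial sums of y to within delta/2 -- the noise-shaping
# property exploited by the Sobolev-dual reconstruction.
```

Each output lies on the alphabet delta*Z, while the *running sums* of input
and output stay close, rather than the individual samples.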
|
1002.0184
|
Some considerations on how the human brain must be arranged in order to
make its replication in a thinking machine possible
|
cs.AI q-bio.NC
|
For most of my life, I have earned my living as a computer vision
professional busy with image processing tasks and problems. In the computer
vision community there is a widespread belief that artificial vision systems
faithfully replicate human vision abilities, or at least very closely mimic
them. It was a great surprise to me when one day I realized that computer
and human vision have next to nothing in common. The former is occupied with
extensive data processing, carrying out massive pixel-based calculations, while
the latter is busy with meaningful information processing, concerned with smart
object-based manipulations. And the gap between the two is insurmountable. To
resolve this confusion, I had to go back and re-evaluate first the vision
phenomenon itself, and define more carefully what visual information is and how
to treat it properly. In this work I have not been, as is usually accepted,
biologically inspired. On the contrary, I have drawn my inspiration from a
pure mathematical theory, Kolmogorov's complexity theory. The results of my
work have already been published elsewhere. So the objective of this paper is
to try to apply the insights gained in the course of this enterprise to the more
general case of information processing in the human brain and the challenging
issue of human intelligence.
|
1002.0205
|
On the Generality of $1+\mathbf{i}$ as a Non-Norm Element
|
cs.IT math.IT
|
Full-rate space-time block codes with nonvanishing determinants have been
extensively designed with cyclic division algebras. For these designs, smaller
pairwise error probabilities of maximum likelihood detections require larger
normalized diversity products, which can be obtained by choosing integer
non-norm elements with smaller absolute values. All known methods have
constructed $1+\mathbf{i}$ and $2+\mathbf{i}$ to be integer non-norm elements
with the smallest absolute values over QAM for the number of transmit antennas
$n$: $\{n:5\leq n\leq 40,8\nmid n\}$ and $\{n:5\leq n\leq 40,8\mid n\}$,
respectively. Via explicit constructions, this paper proves that
$1+\mathbf{i}$ is an integer non-norm element with the smallest absolute value
over QAM for every $n\geq 5$.
|
1002.0215
|
Term Extraction, Recognition and Labelling of Relations in a Thesaurus
|
cs.IR
|
Within the documentary system domain, the integration of thesauri into the
indexing and information retrieval steps is common. In libraries, documents
carry rich descriptive information produced by librarians, in descriptive
notices based on the Rameau thesaurus. We exploit these two kinds of
information in order to create a first semantic structure. A conceptualization
step allows us to define the various modules used to automatically build the
semantic structure of the indexing work. Our current work focuses on an
approach that aims to define an ontology based on a thesaurus. We hope to
integrate new knowledge characterizing the territory of our structure (adding
"toponyms" and links between concepts) thanks to a geographic information
system (GIS).
|
1002.0235
|
Asymptotic Sum-Capacity of Random Gaussian Interference Networks Using
Interference Alignment
|
cs.IT math.IT
|
We consider a dense n-user Gaussian interference network formed by paired
transmitters and receivers placed independently at random in Euclidean space.
Under natural conditions on the node position distributions and signal
attenuation, we prove convergence in probability of the average per-user
capacity C_Sigma/n to 1/2 E log(1 + 2SNR).
The achievability result follows directly from results based on an
interference alignment scheme presented in recent work of Nazer et al. Our main
contribution comes through the converse result, motivated by ideas of
`bottleneck links' developed in recent work of Jafar. An information theoretic
argument gives a capacity bound on such bottleneck links, and probabilistic
counting arguments show there are sufficiently many such links to tightly bound
the sum-capacity of the whole network.
|
1002.0239
|
Automatic Construction and Enrichment of an Ontology from External
Resources
|
cs.IR
|
Automatic construction of ontologies from text is generally based on
retrieving text content. To obtain a much richer ontology, we extend these
approaches by taking into account the document structure and some external
resources (such as a thesaurus of indexing terms from a neighboring domain).
In this paper we describe how these external resources are first analyzed and
then exploited. This method has been applied to a geographical domain and the
benefit has been evaluated.
|
1002.0276
|
Dendritic Cells for SYN Scan Detection
|
cs.AI cs.CR cs.NE
|
Artificial immune systems have previously been applied to the problem of
intrusion detection. The aim of this research is to develop an intrusion
detection system based on the function of Dendritic Cells (DCs). DCs are
antigen presenting cells and key to activation of the human immune system,
behaviour which has been abstracted to form the Dendritic Cell Algorithm (DCA).
In algorithmic terms, individual DCs perform multi-sensor data fusion,
asynchronously correlating the fused data signals with a secondary data
stream. The aggregate output of a population of cells is analysed and forms the
basis of an anomaly detection system. In this paper the DCA is applied to the
detection of outgoing port scans using TCP SYN packets. Results show that
detection can be achieved with the DCA, yet some false positives can be
encountered when simultaneously scanning and using other network services.
Suggestions are made for using adaptive signals to alleviate this uncovered
problem.
|
1002.0295
|
On lifting perfect codes
|
cs.IT cs.DM math.CO math.IT
|
In this paper we consider completely regular codes, obtained from perfect
(Hamming) codes by lifting the ground field. More exactly, for a given perfect
code C of length n=(q^m-1)/(q-1) over F_q with a parity check matrix H_m, we
define a new code C_{(m,r)} of length n over F_{q^r}, r > 1, with this parity
check matrix H_m. The resulting code C_{(m,r)} is completely regular with
covering radius R = min{r,m}. We compute the intersection numbers of such codes
and, finally, we prove that Hamming codes are the only codes that, after
lifting the ground field, result in completely regular codes.
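For the binary case q = 2, the parity-check matrix H_m in question is simply
the matrix whose columns are all nonzero m-bit vectors, giving length
n = 2^m - 1; the lifted code C_{(m,r)} reuses this same H_m over F_{2^r}.
A quick sketch of constructing H_m (binary case only; an illustration, not
taken from the paper):

```python
def hamming_parity_check(m):
    """Parity-check matrix of the binary Hamming code of length 2^m - 1:
    its columns are all nonzero binary vectors of length m."""
    n = 2 ** m - 1
    cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, n + 1)]
    return [[col[i] for col in cols] for i in range(m)]  # m x n matrix

H3 = hamming_parity_check(3)   # parity-check of the [7, 4] Hamming code
```

Because every nonzero syndrome matches exactly one column, the code corrects
any single error, which is the perfection property the lifting starts from.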
|
1002.0378
|
A Grey-Box Approach to Automated Mechanism Design
|
cs.GT cs.AI cs.MA
|
Auctions play an important role in electronic commerce, and have been used to
solve problems in distributed computing. Automated approaches to designing
effective auction mechanisms are helpful in reducing the burden of traditional
game theoretic, analytic approaches and in searching through the large space of
possible auction mechanisms. This paper presents an approach to automated
mechanism design (AMD) in the domain of double auctions. We describe a novel
parametrized space of double auctions, and then introduce an evolutionary
search method that searches this space of parameters. The approach evaluates
auction mechanisms using the framework of the TAC Market Design Game and
relates the performance of the markets in that game to their constituent parts
using reinforcement learning. Experiments show that the strongest mechanisms we
found using this approach not only win the Market Design Game against known,
strong opponents, but also exhibit desirable economic properties when they run
in isolation.
|
1002.0382
|
Face Recognition by Fusion of Local and Global Matching Scores using DS
Theory: An Evaluation with Uni-classifier and Multi-classifier Paradigm
|
cs.CV cs.AI
|
Faces are highly deformable objects which may easily change their appearance
over time. Not all face areas are subject to the same variability. Therefore
decoupling the information from independent areas of the face is of paramount
importance to improve the robustness of any face recognition technique. This
paper presents a robust face recognition technique based on the extraction and
matching of SIFT features related to independent face areas. Both a global and
a local (recognition from parts) matching strategy are proposed. The local
strategy is based on matching individual salient facial SIFT features as
connected to facial landmarks such as the eyes and the mouth. As for the global
matching strategy, all SIFT features are combined together to form a single
feature. In order to reduce the identification errors, the Dempster-Shafer
decision theory is applied to fuse the two matching techniques. The proposed
algorithms are evaluated with the ORL and the IITK face databases. The
experimental results demonstrate the effectiveness and potential of the
proposed face recognition techniques also in the case of partially occluded
faces or with missing information.
|
1002.0383
|
Feature Level Clustering of Large Biometric Database
|
cs.CV cs.DB cs.LG
|
This paper proposes an efficient technique for partitioning a large biometric
database during identification. In this technique, a feature vector comprising
global and local descriptors extracted from offline signatures is used by a
fuzzy clustering technique to partition the database. As biometric features
possess no natural sorting order, it is difficult to index them alphabetically
or numerically; hence, some supervised criterion is required to partition the
search space. At identification time, a fuzziness criterion is introduced to
find the nearest clusters for declaring the identity of the query sample. The
system is tested using the bin-miss rate and performs better in comparison to
the traditional k-means approach.
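The identification-time step — scoring a query against cluster centers with a
fuzzy membership and searching every cluster whose membership exceeds a cutoff
— can be sketched as follows. The centers, fuzzifier m and cutoff are
illustrative values, not the paper's tuned parameters:

```python
def fuzzy_memberships(query, centers, m=2.0):
    """Fuzzy c-means membership of a query feature vector in each cluster:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)); memberships sum to 1."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d = [max(dist(query, c), 1e-12) for c in centers]  # avoid divide-by-zero
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[k]) ** p for k in range(len(centers)))
            for i in range(len(centers))]

def clusters_to_search(query, centers, cutoff=0.2):
    """Indices of the clusters searched for the query's identity."""
    u = fuzzy_memberships(query, centers)
    return [i for i, ui in enumerate(u) if ui >= cutoff]

centers = [[0.1, 0.2], [0.8, 0.9], [0.5, 0.5]]   # hypothetical cluster centers
hits = clusters_to_search([0.45, 0.55], centers)
```

Unlike a hard k-means assignment, the soft memberships let an ambiguous query
fall into several bins, which is what reduces the bin-miss rate.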
|
1002.0406
|
MIMO Transmission with Residual Transmit-RF Impairments
|
cs.IT math.IT
|
Physical transceiver implementations for multiple-input multiple-output
(MIMO) wireless communication systems suffer from transmit-RF (Tx-RF)
impairments. In this paper, we study the effect on channel capacity and
error-rate performance of residual Tx-RF impairments that defy proper
compensation. In particular, we demonstrate that such residual distortions
severely degrade the performance of (near-)optimum MIMO detection algorithms.
To mitigate this performance loss, we propose an efficient algorithm, which is
based on an i.i.d. Gaussian model for the distortion caused by these
impairments. In order to validate this model, we provide measurement results
based on a 4-stream Tx-RF chain implementation for MIMO orthogonal
frequency-division multiplexing (OFDM).
|
1002.0411
|
Face Identification by SIFT-based Complete Graph Topology
|
cs.CV cs.AI
|
This paper presents a new face identification system based on a graph matching
technique applied to SIFT features extracted from face images. Although SIFT
features have been successfully used for general object detection and
recognition, only recently have they been applied to face recognition. This
paper further investigates the performance of identification techniques based
on a graph matching topology drawn on SIFT features, which are invariant to
rotation, scaling and translation. Face projections on images, represented by
a graph, can be matched onto new images by maximizing a similarity function
taking into account spatial distortions and the similarities of the local
features. Two graph-based matching techniques have been investigated to deal
with false pair assignments and to reduce the number of features, finding the
optimal feature set between database and query face SIFT features. The
experimental results, performed on
the BANCA database, demonstrate the effectiveness of the proposed system for
automatic face identification.
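One standard way to reject false pair assignments between two SIFT descriptor sets is Lowe's ratio test; the sketch below is a generic baseline, not the paper's specific graph matching technique, and the toy descriptors and ratio threshold are invented for illustration.

```python
import numpy as np

def ratio_test_matches(desc_db, desc_query, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    A query descriptor is matched to its nearest database descriptor only
    if that match clearly beats the runner-up; ambiguous pairs are dropped.
    Returns (db_index, query_index) pairs that pass the test.
    """
    matches = []
    for j, q in enumerate(desc_query):
        d = np.linalg.norm(desc_db - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:       # best match clearly beats runner-up
            matches.append((int(i1), j))
    return matches

db = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 0.0]])
query = np.array([[0.1, 0.0],           # unambiguous -> kept
                  [4.5, 0.0]])          # equidistant -> rejected
m = ratio_test_matches(db, query)
```

The surviving pairs would then feed the graph topology, whose spatial-distortion term prunes matches that are locally plausible but geometrically inconsistent.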
|
1002.0412
|
SIFT-based Ear Recognition by Fusion of Detected Keypoints from Color
Similarity Slice Regions
|
cs.CV cs.AI
|
Ear biometrics is considered one of the most reliable and invariant biometric
characteristics, in line with iris and fingerprint characteristics. In many
cases, ear biometrics can be compared with face biometrics in terms of many
physiological and texture characteristics. In this paper, a robust and
efficient ear recognition system is presented which uses the Scale Invariant
Feature Transform (SIFT) as a feature descriptor for the structural
representation of ear images. To make user authentication more robust, only
the regions whose color probabilities lie in certain ranges are considered for
invariant SIFT feature extraction, where the K-L divergence is used to
maintain color consistency. The ear skin color model is formed by a Gaussian
mixture model (GMM) after clustering the ear color pattern using vector
quantization. K-L divergence is then applied to the GMM framework to record
the color similarity in the specified ranges by comparing the colors of a pair
of reference-model and probe ear images. After segmenting the ear images into
color slice regions, SIFT keypoints are extracted and an augmented vector of
SIFT features is created for matching between a pair of reference-model and
probe ear images. The proposed technique has been tested on the IITK Ear
database, and the experimental results show improvements in recognition
accuracy when invariant features are extracted from color slice regions to
maintain the robustness of the system.
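A one-component stand-in for the paper's GMM-based colour comparison is the closed-form KL divergence between two univariate Gaussians; the colour means and variances below are invented for illustration.

```python
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence D(N0 || N1) between univariate Gaussians, usable as a
    colour-similarity score between two single-Gaussian colour models (a
    one-component simplification of comparing full GMMs, for which KL has
    no closed form and is usually approximated component-wise).
    """
    return 0.5 * (math.log(var1 / var0)
                  + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

same = kl_gauss(0.4, 0.01, 0.4, 0.01)   # identical colour models -> 0
far = kl_gauss(0.4, 0.01, 0.8, 0.01)    # shifted mean -> large divergence
```

Thresholding such a divergence per colour range is one plausible way to decide which slice regions of a probe image are consistent enough with the reference model to contribute SIFT keypoints.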
|
1002.0414
|
Feature Level Fusion of Biometrics Cues: Human Identification with
Doddington's Caricature
|
cs.CV cs.AI
|
This paper presents a multimodal biometric system combining fingerprint and
ear biometrics. Scale Invariant Feature Transform (SIFT) descriptor based
feature sets extracted from fingerprint and ear are fused. The fused set is
encoded by a K-medoids partitioning approach using a smaller number of feature
points. K-medoids partitions the whole dataset into clusters so as to minimize
the error between the data points belonging to each cluster and its center.
The reduced feature set is used to match two biometric sets. Matching scores
are generated using the wolf-lamb user-dependent feature weighting scheme
introduced by Doddington. The technique is tested and exhibits robust
performance.
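The encoding step can be sketched as a plain alternating k-medoids loop; the initialisation, iteration cap and toy 2-D points below are assumptions (the paper's inputs are fused SIFT descriptor sets).

```python
import numpy as np

def k_medoids(points, k, iters=50, seed=0):
    """Alternating k-medoids: assign points to their nearest medoid, then
    move each medoid to the member minimising total intra-cluster distance.
    Returns sorted medoid indices; the medoids serve as the reduced set.
    """
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - points[medoids][None], axis=2)
        labels = d.argmin(axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            intra = np.linalg.norm(
                points[members][:, None] - points[members][None], axis=2
            ).sum(axis=1)
            new[c] = members[intra.argmin()]
        if np.array_equal(new, medoids):    # converged
            break
        medoids = new
    return np.sort(medoids)

# two well-separated toy clusters; one medoid should land in each
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0],
                [5.0, 5.0], [5.1, 5.0], [5.2, 5.0]])
med = k_medoids(pts, 2)
```

Unlike k-means centroids, medoids are actual data points, which is what lets the cluster representatives stand in directly for feature points during matching.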
|
1002.0416
|
Fusion of Multiple Matchers using SVM for Offline Signature
Identification
|
cs.CV cs.LG
|
This paper uses Support Vector Machines (SVM) to fuse multiple classifiers
for an offline signature system. From the signature images, global and local
features are extracted, and the signatures are verified with the help of
Gaussian empirical rule, Euclidean and Mahalanobis distance based classifiers.
An SVM is used to fuse the matching scores of these matchers. Finally, query
signatures are recognized by comparing them with all signatures in the
database. The proposed system is tested on a signature database containing
5400 offline signatures of 600 individuals, and the results are found to be
promising.
|
1002.0424
|
Cooperative Algorithms for MIMO Interference Channels
|
cs.IT math.IT
|
Interference alignment is a transmission technique for exploiting all
available degrees of freedom in the interference channel with an arbitrary
number of users. Most prior work on interference alignment, however, neglects
interference from other nodes in the network not participating in the alignment
operation. This paper proposes three generalizations of interference alignment
for the multiple-antenna interference channel with multiple users that account
for colored noise, which models uncoordinated interference. First, a minimum
interference-plus-noise leakage algorithm is presented, and shown to be
equivalent to previous subspace methods when noise is spatially white or
negligible. A joint minimum mean squared error design is then proposed that
jointly optimizes the transmit precoders and receive spatial filters, whereas
previous designs neglect the receive spatial filter. This algorithm is shown to
be a generalization of previous joint MMSE designs for other system
configurations such as the broadcast channel. Finally, a maximum
signal-to-interference-plus-noise ratio algorithm is developed that is proven
to converge, unlike previous maximum SINR algorithms. The latter two designs
are shown to have increased complexity due to non-orthogonal precoders, more
required iterations, or more channel state knowledge than the min INL or
subspace methods. The sum throughput performance of these algorithms is
simulated in the context of a network with uncoordinated co-channel interferers
not participating in the alignment protocol. It is found that a network with
co-channel interference can benefit from employing precoders designed to
account for that interference, although in some cases ignoring the co-channel
interference is advantageous.
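The subspace step shared by minimum-leakage designs can be sketched for a single receiver: choose the receive filter along the eigenvector of the interference(-plus-noise) covariance with the smallest eigenvalue. The toy covariance below is an assumption, not a simulated channel from the paper.

```python
import numpy as np

def min_leakage_filter(Q):
    """Unit-norm receive filter minimising f^H Q f for an interference
    (-plus-noise) covariance Q: the eigenvector of Q with the smallest
    eigenvalue. With Q spatially white this reduces to the noise-agnostic
    subspace methods mentioned above.
    """
    w, v = np.linalg.eigh(Q)     # eigenvalues in ascending order
    return v[:, 0]

# toy 2-antenna receiver: interference confined to the first coordinate
Q = np.diag([5.0, 0.0])
f = min_leakage_filter(Q)
leak = float(f.conj() @ Q @ f)   # residual leakage through the filter
```

In the full iterative algorithms, this per-receiver step alternates with the analogous transmit-side update (via channel reciprocity) until the leakage objective converges.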
|
1002.0432
|
Detecting Motifs in System Call Sequences
|
cs.AI cs.CR cs.NE
|
The search for patterns or motifs in data represents an area of key interest
to many researchers. In this paper we present the Motif Tracking Algorithm, a
novel immune inspired pattern identification tool that is able to identify
unknown motifs which repeat within time series data. The power of the algorithm
is derived from its use of a small number of parameters with minimal
assumptions. The algorithm searches from a completely neutral perspective that
is independent of the data being analysed, and the underlying motifs. In this
paper the motif tracking algorithm is applied to the search for patterns within
sequences of low level system calls between the Linux kernel and the operating
system's user space. The MTA is able to compress data found in large system
call data sets to a limited number of motifs which summarise that data. The
motifs provide a resource from which a profile of executed processes can be
built. The potential for these profiles and new implications for security
research are highlighted. A higher-level system call language for measuring
similarity between patterns of such calls is also suggested.
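A brute-force baseline for the same task is to count repeated fixed-length windows over a call stream; the MTA itself discovers motifs of unknown length with immune-inspired matching, and the call names below are illustrative.

```python
from collections import Counter

def repeated_motifs(seq, length):
    """Count every contiguous subsequence of the given length and keep
    those occurring more than once: a naive stand-in for discovering
    unknown repeating motifs in a system-call stream.
    """
    counts = Counter(tuple(seq[i:i + length])
                     for i in range(len(seq) - length + 1))
    return {motif: c for motif, c in counts.items() if c > 1}

calls = ["open", "read", "write", "open", "read", "write", "close"]
motifs = repeated_motifs(calls, 3)
```

The weakness of this baseline, fixing the motif length in advance and rescanning for every candidate length, is precisely what motivates parameter-light approaches such as the MTA.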
|
1002.0449
|
Some improved results on communication between information systems
|
cs.AI
|
To study the communication between information systems, Wang et al. [C. Wang,
C. Wu, D. Chen, Q. Hu, and C. Wu, Communicating between information systems,
Information Sciences 178 (2008) 3228-3239] proposed two concepts of type-1 and
type-2 consistent functions. Some properties of such functions and induced
relation mappings have been investigated there. In this paper, we provide an
improvement of the aforementioned work by disclosing the symmetric relationship
between type-1 and type-2 consistent functions. We present more properties of
consistent functions and induced relation mappings and improve upon several
deficient assertions in the original work. In particular, we unify and extend
type-1 and type-2 consistent functions into the so-called
neighborhood-consistent functions. This provides a convenient means for
studying the communication between information systems based on various
neighborhoods.
|
1002.0478
|
Étude et traitement automatique de l'anglais du XVIIe siècle :
outils morphosyntaxiques et dictionnaires
|
cs.CL
|
In this article, we record the main linguistic differences or singularities
of 17th century English, analyse them morphologically and syntactically and
propose equivalent forms in contemporary English. We show how 17th century
texts may be transcribed into modern English, combining the use of electronic
dictionaries with rules of transcription implemented as transducers. After
presenting the constitution of the corpus, we record the main linguistic
differences or particularities of the English language of the 17th century,
analyse them from a morphological and syntactic point of view, and propose
equivalents in contemporary English. We show how we can carry out an automatic
transcription of 17th century English texts into modern English, combining the
use of electronic dictionaries with transcription rules implemented as
transducers.
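The dictionary-plus-rules pipeline can be sketched as a lexicon lookup followed by regex substitutions standing in for transducer rules; the lexicon entries and the "-est" rule below are illustrative examples, not the article's actual resources.

```python
import re

def modernise(text, lexicon, rules):
    """Transcribe archaic English into a modern form: dictionary lookup
    first, then regex transcription rules applied to the remainder. The
    regexes play the role of the article's transducer-implemented rules.
    """
    words = [lexicon.get(w.lower(), w) for w in text.split()]
    result = " ".join(words)
    for pattern, repl in rules:
        result = re.sub(pattern, repl, result)
    return result

# tiny illustrative lexicon and one inflection rule (hypothetical entries)
lexicon = {"hath": "has", "doth": "does", "thou": "you"}
rules = [(r"(\w+)est\b", r"\1")]        # e.g. "speakest" -> "speak"
s = modernise("thou speakest and he hath", lexicon, rules)
```

Ordering matters: the lexicon catches irregular forms first, so the blunt suffix rules only ever see words the dictionary could not resolve.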
|
1002.0479
|
"Mind your p's and q's": or the peregrinations of an apostrophe in 17th
Century English
|
cs.CL
|
While the use of the apostrophe in contemporary English often marks the Saxon
genitive, it may also indicate the omission of one or more letters. Some
writers (wrongly?) use it to mark the plural in symbols or abbreviations,
visualised thanks to the isolation of the morpheme "s". This punctuation mark
was imported from the Continent in the 16th century. During the 19th century
its use was standardised. However, the rules of its usage still seem
problematic to many, including literate speakers of English. "All too often,
the apostrophe is misplaced", or "errant apostrophes are springing up
everywhere" are complaints that Internet users frequently come across when
visiting grammar websites. Many of them detail its various uses and misuses,
and attempt to correct the most common mistakes, especially its misuse in the
plural, called greengrocers' apostrophes and humorously misspelled
"greengrocers apostrophe's". While studying English travel accounts published
in the seventeenth century, we noticed that the different uses of this symbol
may accompany various models of metaplasms. We were able to highlight the
linguistic variations of some lexemes and trace the origin of the modern
grammar rules governing its usage.
|
1002.0481
|
Recognition and translation Arabic-French of Named Entities: case of the
Sport places
|
cs.CL
|
The recognition of Arabic Named Entities (NE) is a problem in different
domains of Natural Language Processing (NLP), such as machine translation.
Indeed, NE translation allows access to multilingual information. This
translation does not always lead to the expected result, especially when the
NE contains a person name. For this reason, and in order to improve
translation, we can transliterate some parts of the NE. In this context, we
propose a method that integrates translation and transliteration together. We
used the linguistic NooJ platform, which is based on local grammars and
transducers. In this paper, we focus on the sport domain. We will first
suggest a refinement of the typological model presented at the MUC
conferences, then describe the integration of an Arabic transliteration module
into the translation system. Finally, we will detail our method and give the
results of the evaluation.
|