| id | title | categories | abstract |
|---|---|---|---|
1001.1374
|
Distance bounds for algebraic geometric codes
|
cs.IT math.AG math.IT
|
Various methods have been used to obtain improvements of the Goppa lower
bound for the minimum distance of an algebraic geometric code. The main methods
divide into two categories and all but a few of the known bounds are special
cases of either the Lundell-McCullough floor bound or the Beelen order bound.
The exceptions are recent improvements of the floor bound by
Guneri-Stichtenoth-Taskin, and Duursma-Park, and of the order bound by
Duursma-Park and Duursma-Kirov. In this paper we provide short proofs for all
floor bounds and most order bounds in the setting of the van Lint and Wilson AB
method. Moreover, we formulate unifying theorems for order bounds and formulate
the DP and DK order bounds as natural but different generalizations of the
Feng-Rao bound for one-point codes.
|
1001.1386
|
On the List-Decodability of Random Linear Codes
|
cs.IT math.CO math.IT
|
For every fixed finite field $\mathbb{F}_q$, $p \in (0,1-1/q)$ and $\epsilon > 0$, we
prove that with high probability a random subspace $C$ of $\mathbb{F}_q^n$ of dimension
$(1-H_q(p)-\epsilon)n$ has the property that every Hamming ball of radius $pn$
has at most $O(1/\epsilon)$ codewords.
This answers a basic open question concerning the list-decodability of linear
codes, showing that a list size of $O(1/\epsilon)$ suffices to have rate within
$\epsilon$ of the "capacity" $1-H_q(p)$. Our result matches up to constant
factors the list-size achieved by general random codes, and gives an
exponential improvement over the best previously known list-size bound of
$q^{O(1/\epsilon)}$.
The main technical ingredient in our proof is a strong upper bound on the
probability that the linear span of $\ell$ random vectors chosen from a Hamming
ball centered at the origin contains too many (more than $\Theta(\ell)$)
vectors that also belong to the ball.
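To make the rate expression above concrete, the $q$-ary entropy function $H_q(p)$ can be evaluated directly. The following sketch (not from the paper; the values of $q$, $p$, and $\epsilon$ are illustrative) computes the rate $1-H_q(p)-\epsilon$ that the theorem pairs with an $O(1/\epsilon)$ list size:

```python
import math

def H_q(p, q):
    """q-ary entropy: H_q(p) = p*log_q(q-1) - p*log_q(p) - (1-p)*log_q(1-p)."""
    if p == 0.0:
        return 0.0
    if p == 1.0:
        return math.log(q - 1, q)
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))

# Rate guaranteed by the theorem: within epsilon of the "capacity" 1 - H_q(p),
# with list size O(1/epsilon).  (q, p, epsilon chosen for illustration.)
q, p, eps = 2, 0.25, 0.05
rate = 1 - H_q(p, q) - eps
print(round(rate, 4))   # → 0.1387
```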
|
1001.1389
|
Optimal Cooperative Relaying Schemes for Improving Wireless Physical
Layer Security
|
cs.IT math.IT
|
We consider a cooperative wireless network in the presence of one or more
eavesdroppers, and exploit node cooperation to achieve physical (PHY) layer
security. Two different cooperation schemes are considered. In the first
scheme, cooperating nodes retransmit a weighted version of the source signal in
a decode-and-forward (DF) fashion. In the second scheme, while the source is
transmitting, cooperating nodes transmit weighted noise to confound the
eavesdropper (cooperative jamming (CJ)). We investigate two objectives:
maximization of the achievable secrecy rate subject to a total power constraint,
and minimization of the total transmit power subject to a secrecy rate constraint.
For the first design objective with a single eavesdropper we obtain expressions
for optimal weights under the DF protocol in closed form, and give an algorithm
that converges to the optimal solution for the CJ scheme; while for multiple
eavesdroppers we give an algorithm for the solution using the DF protocol that
is guaranteed to converge to the optimal solution for two eavesdroppers. For
the second design objective, existing works introduced additional constraints
in order to reduce the degree of difficulty, thus resulting in suboptimal
solutions. In this work, either a closed form solution is obtained, or
algorithms to search for the solution are proposed. Numerical results are
presented to illustrate the proposed schemes and demonstrate the advantages of
cooperation as compared to direct transmission.
|
1001.1401
|
Incorporating characteristics of human creativity into an evolutionary
art algorithm
|
cs.AI cs.NE q-bio.NC
|
A perceived limitation of evolutionary art and design algorithms is that they
rely on human intervention; the artist selects the most aesthetically pleasing
variants of one generation to produce the next. This paper discusses how
computer generated art and design can become more creatively human-like with
respect to both process and outcome. As an example of a step in this direction,
we present an algorithm that overcomes the above limitation by employing an
automatic fitness function. The goal is to evolve abstract portraits of Darwin,
using our 2nd generation fitness function which rewards genomes that not just
produce a likeness of Darwin but exhibit certain strategies characteristic of
human artists. We note that in human creativity, change is driven less by
choosing amongst randomly generated variants and more by capitalizing on the
associative structure of a conceptual network to home in on a vision. We
discuss how to achieve this fluidity algorithmically.
|
1001.1445
|
Graph-Constrained Group Testing
|
cs.DM cs.IT math.IT
|
Non-adaptive group testing involves grouping arbitrary subsets of $n$ items
into different pools. Each pool is then tested and defective items are
identified. A fundamental question involves minimizing the number of pools
required to identify at most $d$ defective items. Motivated by applications in
network tomography, sensor networks and infection propagation, a variation of
group testing problems on graphs is formulated. Unlike conventional group
testing problems, each group here must conform to the constraints imposed by a
graph. For instance, items can be associated with vertices and each pool is any
set of nodes that must be path connected. In this paper, a test is associated
with a random walk. In this context, conventional group testing corresponds to
the special case of a complete graph on $n$ vertices.
For interesting classes of graphs a rather surprising result is obtained,
namely, that the number of tests required to identify $d$ defective items is
substantially similar to what is required in conventional group testing
problems, where no such constraints on pooling are imposed. Specifically, if
$T(n)$ denotes the mixing time of the graph $G$, it is shown that with
$m=O(d^2T^2(n)\log(n/d))$ non-adaptive tests, one can identify the defective
items. Consequently, for the Erdos-Renyi random graph $G(n,p)$, as well as
expander graphs with constant spectral gap, it follows that $m=O(d^2\log^3n)$
non-adaptive tests are sufficient to identify $d$ defective items. Next, a
specific scenario is considered that arises in network tomography, for which it
is shown that $m=O(d^3\log^3n)$ non-adaptive tests are sufficient to identify
$d$ defective items. Noisy counterparts of the graph constrained group testing
problem are considered, for which parallel results are developed. We also
briefly discuss extensions to compressive sensing on graphs.
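As a small illustration of the pooling model, here is a simulation (my own illustrative sketch, not the paper's construction) of non-adaptive group testing in the complete-graph special case, where a test may pool any vertex subset, decoded with the simple COMP rule: any item that appears in a negative pool is cleared.

```python
import random

random.seed(0)
n, d, T = 60, 2, 40                      # items, defectives, number of tests
defectives = set(random.sample(range(n), d))

# Complete-graph special case: a pool can be an arbitrary subset of vertices.
pools = [set(random.sample(range(n), 12)) for _ in range(T)]
# A test is positive iff its pool contains at least one defective item.
outcomes = [bool(pool & defectives) for pool in pools]

# COMP decoding: every item appearing in some negative pool is non-defective.
candidates = set(range(n))
for pool, positive in zip(pools, outcomes):
    if not positive:
        candidates -= pool

# Defective items always make their pools positive, so they are never cleared.
print(defectives <= candidates, len(candidates))
```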
|
1001.1446
|
Using Financial Ratios to Identify Romanian Distressed Companies
|
q-fin.PM cs.CE q-bio.GN
|
In the context of the current financial crisis, when more companies are
facing bankruptcy or insolvency, the paper aims to find methods to identify
distressed firms by using financial ratios. The study focuses on a group of
Romanian listed companies for which financial data for the year 2008 were
available. For each company a set of 14 financial indicators was
calculated and then used in a principal component analysis, followed by a
cluster analysis, a logit model, and a CHAID classification tree.
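As an illustration of the first step in such a pipeline, a principal component analysis of a firms-by-ratios matrix can be computed with an SVD. The data below is synthetic, purely to show the mechanics (it is not the paper's Romanian dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 14))              # 40 firms x 14 financial ratios (synthetic)
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each ratio

# Principal components via SVD of the standardized data matrix.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
explained = S**2 / np.sum(S**2)            # variance share of each component
scores = Z @ Vt[:2].T                      # firm coordinates on the first two PCs

print(scores.shape, bool(explained[0] >= explained[1]))   # → (40, 2) True
```

The `scores` matrix is what a subsequent cluster analysis or classification tree would consume.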
|
1001.1454
|
Multidimensional Data Structures and Techniques for Efficient Decision
Making
|
cs.CG cs.DB cs.DS
|
In this paper we present several novel efficient techniques and
multidimensional data structures which can improve the decision making process
in many domains. We consider online range aggregation, range selection and
range weighted median queries; for most of them, the presented data structures
and techniques can provide answers in polylogarithmic time. The presented
results have applications in many business and economic scenarios, some of
which are described in detail in the paper.
|
1001.1468
|
An information inequality and evaluation of Marton's inner bound for
binary input broadcast channels
|
cs.IT math.IT
|
We establish an information inequality that is intimately connected to the
evaluation of the sum rate given by Marton's inner bound for two receiver
broadcast channels with a binary input alphabet. This generalizes a recent
result where the inequality was established for a particular channel, the
binary skew-symmetric broadcast channel. The inequality implies that randomized
time-division strategy indeed achieves the sum rate of Marton's inner bound for
all binary input broadcast channels.
|
1001.1478
|
Ergodic and Outage Performance of Fading Broadcast Channels with 1-Bit
Feedback
|
cs.IT math.IT
|
In this paper, the ergodic sum-rate and outage probability of a downlink
single-antenna channel with K users are analyzed in the presence of Rayleigh
flat fading, where limited channel state information (CSI) feedback is assumed.
Specifically, only 1-bit feedback per fading block per user is available at the
base station. We first study the ergodic sum-rate of the 1-bit feedback scheme,
and consider the impact of feedback delay on the system. A closed-form
expression for the achievable ergodic sum-rate is presented as a function of
the fading temporal correlation coefficient. It is proved that the sum-rate
scales as log log K, which is the same scaling law achieved by the optimal
non-delayed full-CSI feedback scheme. The sum-rate degradation due to outdated
CSI is also evaluated in the asymptotic regimes of either large K or low SNR.
The outage performance of the 1-bit feedback scheme for both instantaneous and
outdated feedback is then investigated. Expressions for the outage
probabilities are derived, along with the corresponding diversity-multiplexing
tradeoffs (DMT). It is shown that with instantaneous feedback, power
allocation based on the feedback bits makes it possible to double the DMT
compared to the case with a short-term power constraint, in which dynamic power
allocation is not allowed. However, with outdated feedback, the advantage of
power allocation is lost, and the DMT reverts to that achievable with no CSI
feedback.
Nevertheless, for finite SNR, improvement in terms of outage probability can
still be obtained.
|
1001.1482
|
Performance of Optimum Combining in a Poisson Field of Interferers and
Rayleigh Fading Channels
|
cs.IT math.IT
|
This paper studies the performance of antenna array processing in distributed
multiple access networks without power control. The interference is represented
as a Poisson point process. Desired and interfering signals are subject both to
path loss (with an exponent greater than 2) and to independent Rayleigh
fading. Under these assumptions, we derive an exact closed-form expression for
the cumulative distribution function of the output
signal-to-interference-plus-noise ratio when optimum combining is applied. This
results in a pertinent measure of the network performance in terms of the
outage probability, which in turn provides insights into the network capacity
gain that could be achieved with antenna array processing. We present and
discuss examples of applications, as well as some numerical results.
|
1001.1597
|
The Berlekamp-Massey Algorithm via Minimal Polynomials
|
cs.IT cs.SC math.IT
|
We present a recursive minimal polynomial theorem for finite sequences over a
commutative integral domain $D$. This theorem is relative to any element of
$D$. The ingredients are: the arithmetic of Laurent polynomials over $D$, a
recursive 'index function' and simple mathematical induction. Taking
reciprocals gives a 'Berlekamp-Massey theorem' i.e. a recursive construction of
the polynomials arising in the Berlekamp-Massey algorithm, relative to any
element of $D$. The recursive theorem readily yields the iterative minimal
polynomial algorithm due to the author and a transparent derivation of the
iterative Berlekamp-Massey algorithm.
We give an upper bound for the sum of the linear complexities of a finite
sequence $s$ of length $n$, which is tight if $s$ has a perfect linear
complexity profile. This implies that over a field, both iterative algorithms
require at most $2\lfloor \frac{n^2}{4}\rfloor$ multiplications.
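For reference, the iterative Berlekamp-Massey algorithm that the recursive theorem recovers looks as follows in the familiar GF(2) special case (a standard textbook version, not the paper's domain-general construction over $D$):

```python
def berlekamp_massey_gf2(s):
    """Return the linear complexity L and connection polynomial C (GF(2)
    coefficients, C[0] = 1) of the shortest LFSR generating sequence s."""
    n = len(s)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # copy of C from the last length change
    L, m = 0, 1         # current complexity; shift since last length change
    for i in range(n):
        # discrepancy d = s[i] + sum_{j=1}^{L} C[j]*s[i-j]  (over GF(2))
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:            # length must grow: C <- C + x^m * B
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            B, L, m = T, i + 1 - L, 1
        else:                       # same length: C <- C + x^m * B
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]

# Example: sequence generated by the recurrence s[k] = s[k-1] ^ s[k-3]
print(berlekamp_massey_gf2([1, 0, 0, 1, 1, 1, 0, 1]))   # → (3, [1, 1, 0, 1])
```

The returned polynomial $1 + x + x^3$ encodes exactly the recurrence used to generate the example sequence.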
|
1001.1603
|
Soft Decision Decoding of the Orthogonal Complex MIMO Codes for Three
and Four Transmit Antennas
|
cs.IT math.IT
|
Orthogonality is a much desired property for MIMO coding. It enables
symbol-wise decoding, where the errors in other symbol estimates do not affect
the result, thus providing an optimality that is worth pursuing. Another
beneficial property is a low-complexity soft decision decoder, which for
orthogonal complex MIMO codes is known for two transmit (Tx) antennas, i.e.,
for the Alamouti code. We propose novel soft decision decoders for orthogonal
complex MIMO codes with three and four Tx antennas, and extend the classical
maximal ratio combining (MRC) result to cover all orthogonal codes with up to
four Tx antennas.
As a rule, a sophisticated transmission scheme encompasses forward error
correction (FEC) coding, and its performance is measured at the FEC decoder
instead of at the MIMO decoder. We introduce a receiver structure that
delivers the MIMO decoder's soft decisions to the demodulator, which in turn
computes the log-likelihood ratio (LLR) of each bit and delivers it to the FEC
decoder. This significantly improves on receivers in which a maximum likelihood
(ML) MIMO decoder makes hard decisions at too early a stage. Moreover, this
additional gain is achieved with remarkably low complexity.
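For the two-antenna baseline mentioned above, Alamouti decoding with MRC decouples the symbols exactly. The following numpy sketch (my own illustration, noiseless for clarity, with arbitrary QPSK symbols) shows the combining step that the paper generalizes to three and four Tx antennas:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rayleigh gains from the two Tx antennas to the single Rx antenna.
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
s1, s2 = 1 + 1j, -1 + 1j                      # two QPSK symbols

# Alamouti code over two symbol periods: t1 sends (s1, s2), t2 sends (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                        # noiseless received samples
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# MRC combining decouples the two symbols exactly:
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.round(s1_hat, 6), np.round(s2_hat, 6))
```

Expanding the combiner algebraically, the cross terms cancel and each estimate equals $(|h_1|^2+|h_2|^2)\,s_i / g = s_i$, which is why symbol-wise decoding incurs no interference from the other symbol.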
|
1001.1625
|
Augmented Lattice Reduction for MIMO decoding
|
cs.IT math.IT
|
Lattice reduction algorithms, such as the LLL algorithm, have been proposed
as preprocessing tools in order to enhance the performance of suboptimal
receivers in MIMO communications. In this paper we introduce a new kind of
lattice reduction-aided decoding technique, called augmented lattice reduction,
which recovers the transmitted vector directly from the change-of-basis matrix,
and therefore does not entail computing the pseudo-inverse of the channel
matrix or its QR decomposition. We prove that augmented lattice
reduction attains the maximum receive diversity order of the channel;
simulation results evidence that it significantly outperforms LLL-SIC detection
without entailing any additional complexity. A theoretical bound on the
complexity is also derived.
|
1001.1653
|
A betting interpretation for probabilities and Dempster-Shafer degrees
of belief
|
math.ST cs.AI stat.TH
|
There are at least two ways to interpret numerical degrees of belief in terms
of betting: (1) you can offer to bet at the odds defined by the degrees of
belief, or (2) you can judge that a strategy for taking advantage of such
betting offers will not multiply the capital it risks by a large factor. Both
interpretations can be applied to ordinary additive probabilities and used to
justify updating by conditioning. Only the second can be applied to
Dempster-Shafer degrees of belief and used to justify Dempster's rule of
combination.
|
1001.1658
|
On the Capacity of Non-Coherent Network Coding
|
cs.IT math.IT
|
We consider the problem of multicasting information from a source to a set of
receivers over a network where intermediate network nodes perform randomized
network coding operations on the source packets. We propose a channel model for
the non-coherent network coding introduced by Koetter and Kschischang in [6],
which captures the essence of such a network operation, and calculate the
capacity as a function of network parameters. We prove that use of subspace
coding is optimal, and show that, in some cases, the capacity-achieving
distribution uses subspaces of several dimensions, where the employed
dimensions depend on the packet length. This model and the results also allow
us to give guidelines on when subspace coding is beneficial for the proposed
model and by how much, in comparison to a coding vector approach, from a
capacity viewpoint. We extend our results to the case of multiple source
multicast that creates a virtual multiple access channel.
|
1001.1679
|
Cascade and Triangular Source Coding with Side Information at the First
Two Nodes
|
cs.IT math.IT
|
We consider the cascade and triangular rate-distortion problem where side
information is known to the source encoder and to the first user but not to the
second user. We characterize the rate-distortion region for these problems. For
the quadratic Gaussian case, we show that it is sufficient to consider jointly
Gaussian distributions, a fact that leads to an explicit solution.
|
1001.1685
|
Assessing Cognitive Load on Web Search Tasks
|
cs.HC cs.IR
|
Assessing cognitive load on web search is useful for characterizing search
system features and search tasks with respect to their demands on the
searcher's mental effort. It is also helpful for examining how individual
differences among searchers (e.g. cognitive abilities) affect the search
process. We examined cognitive load from the perspective of primary and
secondary task performance. A controlled web search study was conducted with 48
participants. The primary task performance components were found to be
significantly related to both the objective and the subjective task difficulty.
However, the relationship between objective and subjective task difficulty and
the secondary task performance measures was weaker than expected. The results
indicate that the dual-task approach needs to be used with caution.
|
1001.1705
|
On the Pseudocodeword Redundancy
|
cs.IT math.IT
|
We define the AWGNC, BSC, and max-fractional pseudocodeword redundancy of a
code as the smallest number of rows in a parity-check matrix such that the
corresponding minimum pseudoweight is equal to the minimum Hamming distance. We
show that most codes do not have a finite pseudocodeword redundancy. We also
provide bounds on the pseudocodeword redundancy for some families of codes,
including codes based on designs.
|
1001.1730
|
Divide & Concur and Difference-Map BP Decoders for LDPC Codes
|
cs.IT cs.DS math.IT
|
The "Divide and Concur" (DC) algorithm, recently introduced by Gravel and
Elser, can be considered a competitor to the belief propagation (BP) algorithm,
in that both algorithms can be applied to a wide variety of constraint
satisfaction, optimization, and probabilistic inference problems. We show that
DC can be interpreted as a message-passing algorithm on a constraint graph,
which helps make the comparison with BP clearer. The "difference-map"
dynamics of the DC algorithm enables it to avoid "traps" which may be related
to the "trapping sets" or "pseudo-codewords" that plague BP decoders of
low-density parity-check (LDPC) codes in the error-floor regime.
We investigate two decoders for LDPC codes based on these ideas. The first
decoder is based directly on DC, while the second decoder borrows the important
"difference-map" concept from the DC algorithm and translates it into a BP-like
decoder. We show that this "difference-map belief propagation" (DMBP) decoder
has dramatically improved error-floor
performance compared to standard BP decoders, while maintaining a similar
computational complexity. We present simulation results for LDPC codes on the
additive white Gaussian noise and binary symmetric channels, comparing DC and
DMBP decoders with other decoders based on BP, linear programming, and
mixed-integer linear programming.
|
1001.1732
|
Trade-off capacities of the quantum Hadamard channels
|
quant-ph cs.IT math.IT
|
Coding theorems in quantum Shannon theory express the ultimate rates at which
a sender can transmit information over a noisy quantum channel. More often than
not, the known formulas expressing these transmission rates are intractable,
requiring an optimization over an infinite number of uses of the channel.
Researchers have rarely found quantum channels with a tractable classical or
quantum capacity, but when such a finding occurs, it demonstrates a complete
understanding of that channel's capabilities for transmitting classical or
quantum information. Here, we show that the three-dimensional capacity region
for entanglement-assisted transmission of classical and quantum information is
tractable for the Hadamard class of channels. Examples of Hadamard channels
include generalized dephasing channels, cloning channels, and the Unruh
channel. The generalized dephasing channels and the cloning channels are
natural processes that occur in quantum systems through the loss of quantum
coherence or stimulated emission, respectively. The Unruh channel is a noisy
process that occurs in relativistic quantum information theory as a result of
the Unruh effect and bears a strong relationship to the cloning channels. We
give exact formulas for the entanglement-assisted classical and quantum
communication capacity regions of these channels. The coding strategy for each
of these examples is superior to a naive time-sharing strategy, and we
introduce a measure to determine this improvement.
|
1001.1763
|
Infinite-message Interactive Function Computation in Collocated Networks
|
cs.IT math.IT
|
An interactive function computation problem in a collocated network is
studied in a distributed block source coding framework. With the goal of
computing a desired function at the sink, the source nodes exchange messages
through a sequence of error-free broadcasts. The infinite-message minimum
sum-rate is viewed as a functional of the joint source pmf and is characterized
as the least element in a partially ordered family of functionals having
certain convex-geometric properties. This characterization leads to a family of
lower bounds for the infinite-message minimum sum-rate and a simple optimality
test for any achievable infinite-message sum-rate. An iterative algorithm for
evaluating the infinite-message minimum sum-rate functional is proposed and is
demonstrated through an example of computing the minimum function of three
sources.
|
1001.1768
|
On the Secure DoF of the Single-Antenna MAC
|
cs.IT math.IT
|
A new achievable rate region for the secure discrete memoryless
multiple-access channel (MAC) is presented. Thereafter, a novel secure coding
scheme is proposed to achieve a positive Secure Degrees-of-Freedom (S-DoF) in
the single-antenna MAC. This scheme converts the single-antenna system into a
multiple-dimension system with fractional dimensions. The achievability scheme
is based on the alignment of signals into a small sub-space at the
eavesdropper, and the simultaneous separation of the signals at the intended
receiver. Tools from the field of Diophantine Approximation in number theory
are used to analyze the probability of error in the coding scheme.
|
1001.1781
|
Two Theorems in List Decoding
|
cs.IT math.IT
|
We prove the following results concerning the list decoding of
error-correcting codes:
(i) We show that for \textit{any} code with a relative distance of $\delta$
(over a large enough alphabet), the following result holds for \textit{random
errors}: With high probability, for a $\rho \le \delta - \epsilon$ fraction of
random errors (for any $\epsilon > 0$), the received word will have only the
transmitted codeword in a Hamming ball of radius $\rho$ around it. Thus, for
random errors,
one can correct twice the number of errors uniquely correctable from worst-case
errors for any code. A variant of our result also gives a simple algorithm to
decode Reed-Solomon codes from random errors that, to the best of our
knowledge, runs faster than known algorithms for certain ranges of parameters.
(ii) We show that concatenated codes can achieve the list decoding capacity
for erasures. A similar result for worst-case errors was proven by Guruswami
and Rudra (SODA 08), although their result does not directly imply our result.
Our results show that a subset of the random ensemble of codes considered by
Guruswami and Rudra also achieves the list decoding capacity for erasures.
Our proofs employ simple counting and probabilistic arguments.
|
1001.1798
|
Fountain Codes with Varying Probability Distributions
|
cs.IT math.CO math.IT
|
Fountain codes are rateless erasure-correcting codes, i.e., an essentially
infinite stream of encoded packets can be generated from a finite set of data
packets. Several fountain codes have been proposed recently to minimize
overhead, many of which involve modifications of the Luby transform (LT) code.
These fountain codes, like the LT code, have the implicit assumption that the
probability distribution is fixed throughout the encoding process. In this
paper, we use the theory of posets to show that this assumption is
unnecessary, and that by dropping it we can reduce overhead by as much as 64%
relative to LT codes. We also present the fundamental theory of probability
distribution designs for fountain codes with non-constant probability
distributions that minimize overhead.
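The fixed degree distribution that standard LT codes use, and that this paper's non-constant designs relax, is the robust soliton distribution. A minimal sketch of the standard construction (parameters `c` and `delta` chosen for illustration):

```python
import math

def ideal_soliton(k):
    """Ideal soliton distribution rho(d) over encoding degrees 1..k."""
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def robust_soliton(k, c=0.1, delta=0.05):
    """Robust soliton distribution mu(d) used by standard LT encoders."""
    R = c * math.log(k / delta) * math.sqrt(k)
    tau = [0.0] * (k + 1)
    for d in range(1, int(k / R)):
        tau[d] = R / (d * k)
    tau[int(k / R)] = R * math.log(R / delta) / k   # spike at d = k/R
    rho = ideal_soliton(k)
    Z = sum(rho) + sum(tau)                         # normalization constant
    return [(r + t) / Z for r, t in zip(rho, tau)]

mu = robust_soliton(1000)
print(round(sum(mu), 6))   # → 1.0
```

An LT encoder samples a degree $d$ from `mu`, picks $d$ data packets uniformly at random, and XORs them to form each output packet; the paper's point is that `mu` need not stay constant across encoded packets.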
|
1001.1799
|
The capacity region of a class of broadcast channels with a sequence of
less noisy receivers
|
cs.IT math.IT
|
The capacity region of a broadcast channel consisting of k receivers that form
a less noisy sequence is an open problem when k >= 3. We solve this problem
for the case k = 3, proving that superposition coding is optimal for this
class of broadcast channels with a sequence of less noisy receivers.
|
1001.1806
|
An Exposition of a Result in "Conjugate Codes for Secure and Reliable
Information Transmission"
|
cs.IT math.IT
|
An elementary proof of the attainability of random coding exponent with
linear codes for additive channels is presented. The result and proof are from
Hamada (Proc. ITW, Chengdu, China, 2006), and the present material explains the
proof in detail for those unfamiliar with elementary calculations on
probabilities related to linear codes.
|
1001.1808
|
Performance Analysis for Data Compression Based Signal Classification
Methods
|
cs.IT math.IT
|
In this paper, we present an information theoretic analysis of the blind
signal classification algorithm. We show that the algorithm is equivalent to a
Maximum A Posteriori (MAP) estimator based on estimated parametric probability
models. We prove a lower bound on the error exponents of the parametric model
estimation. It is shown that the estimated model parameters converge in
probability to the true model parameters, up to small bias terms.
|
1001.1826
|
Threshold Saturation via Spatial Coupling: Why Convolutional LDPC
Ensembles Perform so well over the BEC
|
cs.IT math.IT
|
Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have
excellent thresholds and these thresholds are rapidly increasing as a function
of the average degree. Several variations on the basic theme have been proposed
to date, all of which share the good performance characteristics of
convolutional LDPC ensembles. We describe the fundamental mechanism which
explains why "convolutional-like" or "spatially coupled" codes perform so well.
In essence, the spatial coupling of the individual code structure has the
effect of increasing the belief-propagation (BP) threshold of the new ensemble
to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold
of the underlying ensemble. For this reason we call this phenomenon "threshold
saturation." This gives an entirely new way of approaching capacity. One
significant advantage of such a construction is that one can create
capacity-approaching ensembles with an error correcting radius which is
increasing in the blocklength. Our proof makes use of the area theorem of the
BP-EXIT curve and the connection between the MAP and BP threshold recently
pointed out by Measson, Montanari, Richardson, and Urbanke. Although we prove
the connection between the MAP and the BP threshold only for a very specific
ensemble and only for the binary erasure channel, empirically a threshold
saturation phenomenon occurs for a wide class of ensembles and channels. More
generally, we conjecture that for a large range of graphical systems a similar
saturation of the "dynamical" threshold occurs once individual components are
coupled sufficiently strongly. This might give rise to improved algorithms as
well as to new techniques for analysis.
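The BP threshold being saturated here can be computed by density evolution. For the regular (dv, dc) = (3, 6) ensemble on the BEC, the erasure-probability recursion is x_{t+1} = eps*(1-(1-x_t)^(dc-1))^(dv-1); a minimal sketch (uncoupled ensemble only, not the spatially coupled construction; the thresholds ~0.4294 (BP) and ~0.4881 (MAP) are the standard values for this ensemble):

```python
def bec_de_fraction(eps, dv=3, dc=6, iters=2000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC(eps):
    the erasure fraction follows x_{t+1} = eps*(1-(1-x_t)**(dc-1))**(dv-1)."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Below the (3,6) BP threshold (~0.4294) the erasure fraction goes to zero;
# above it, density evolution is stuck at a nonzero fixed point.
print(bec_de_fraction(0.42) < 1e-6, bec_de_fraction(0.45) > 0.3)
```

Threshold saturation means the coupled ensemble's BP fixed point vanishes all the way up to the MAP threshold of the underlying ensemble, not merely up to its BP threshold.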
|
1001.1836
|
Web-Based Expert System for Civil Service Regulations: RCSES
|
cs.AI
|
The Internet and expert systems have offered new ways of sharing and
distributing knowledge, but there is a lack of research in the area of
web-based expert systems. This paper introduces the development of a web-based
expert system for the regulations of the civil service in the Kingdom of Saudi
Arabia, named RCSES. It is the first system of its kind, both as an application
to civil service regulations and in its use of a web-based approach. The
proposed system covers 17 regulations of the civil service system. The
different phases of developing the RCSES system are presented, including
knowledge acquisition and selection, and ontology and knowledge representation
in XML format. The XML rule-based knowledge sources and the inference
mechanisms were implemented using ASP.NET. An interactive tool for entering the
ontology and knowledge base, and for performing inference, was built; it makes
it easy to use, modify, update, and extend the existing knowledge base. The
knowledge was validated by experts in the domain of civil service regulations,
and the proposed RCSES was tested, verified, and validated by different
technical users and by the development staff. The RCSES system is compared with
other related web-based expert systems; the comparison demonstrates the
usability and high performance of RCSES.
|
1001.1872
|
Reduced ML-Decoding Complexity, Full-Rate STBCs for 4 Transmit Antenna
Systems
|
cs.IT math.IT
|
For an $n_t$ transmit, $n_r$ receive antenna system ($n_t \times n_r$
system), a {\it{full-rate}} space-time block code (STBC) transmits
$\min(n_t,n_r)$ complex symbols per channel use. In this paper, a scheme to
obtain a full-rate STBC for 4 transmit antennas and any $n_r$, with reduced
ML-decoding complexity is presented. The weight matrices of the proposed STBC
are obtained from the unitary matrix representations of Clifford Algebra. By
puncturing the symbols of the STBC, full rate designs can be obtained for $n_r
< 4$. For any value of $n_r$, the proposed design offers the least ML-decoding
complexity among known codes. The proposed design is comparable in error
performance to the well known perfect code for 4 transmit antennas while
offering lower ML-decoding complexity. Further, when $n_r < 4$, the proposed
design has higher ergodic capacity than the punctured Perfect code. Simulation
results which corroborate these claims are presented.
|
1001.1873
|
Optimal incorporation of sparsity information by weighted $\ell_1$
optimization
|
cs.IT math.IT
|
Compressed sensing of sparse sources can be improved by incorporating prior
knowledge of the source. In this paper we demonstrate a method for optimally
selecting the weights in weighted $\ell_1$ norm minimization for a noiseless
reconstruction model, and show the improvements in compression that can be
achieved.
|
1001.1889
|
Cheating for Problem Solving: A Genetic Algorithm with Social
Interactions
|
cs.NE cs.AI cs.GT
|
We propose a variation of the standard genetic algorithm that incorporates
social interaction between the individuals in the population. Our goal is to
understand the evolutionary role of social systems and its possible application
as a new non-genetic step in evolutionary algorithms. In biological
populations, i.e., animals, including human beings, and microorganisms, social
interactions often affect the fitness of individuals. It is conceivable that
perturbing the fitness via social interactions is an evolutionary strategy to
avoid being trapped in local optima, thus avoiding fast convergence of the
population. We model the social interactions according to Game Theory. The
population is, therefore, composed of cooperator and defector individuals
whose interactions produce payoffs according to well-known game models
(prisoner's dilemma, chicken game, and others). Our results on Knapsack
problems show, for some game models, a significant performance improvement as
compared to a standard genetic algorithm.
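A minimal sketch of the described mechanism, under stated assumptions: the knapsack instance, payoff values, and GA parameters below are hypothetical, and each individual carries a cooperate/defect strategy whose prisoner's-dilemma payoffs against random opponents perturb its knapsack fitness during selection.

```python
import random

# Toy knapsack instance (hypothetical data, not from the paper)
VALUES   = [10, 5, 15, 7, 6, 18, 3]
WEIGHTS  = [ 2, 3,  5, 7, 1,  4, 1]
CAPACITY = 15

# Prisoner's dilemma payoffs: PAYOFF[(my_move, opponent_move)]
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def knapsack_value(bits):
    """Raw (non-social) fitness: total value, or 0 if over capacity."""
    if sum(w for w, b in zip(WEIGHTS, bits) if b) > CAPACITY:
        return 0
    return sum(v for v, b in zip(VALUES, bits) if b)

def social_fitness(pop, i, rounds=4):
    """Knapsack value perturbed by payoffs from random PD encounters
    (opponents drawn uniformly from the population, possibly self)."""
    bits, strategy = pop[i]
    social = sum(PAYOFF[(strategy, random.choice(pop)[1])]
                 for _ in range(rounds))
    return knapsack_value(bits) + social

def evolve(pop_size=30, gens=60, pmut=0.05):
    n = len(VALUES)
    pop = [([random.randint(0, 1) for _ in range(n)], random.choice('CD'))
           for _ in range(pop_size)]
    best = max((ind[0] for ind in pop), key=knapsack_value)
    for _ in range(gens):
        scores = [social_fitness(pop, i) for i in range(pop_size)]
        def pick():                     # binary tournament selection
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]
        nxt = []
        for _ in range(pop_size):
            (b1, s1), (b2, _) = pick(), pick()
            cut = random.randrange(1, n)            # one-point crossover
            child = [g ^ (random.random() < pmut)   # bit-flip mutation
                     for g in b1[:cut] + b2[cut:]]
            nxt.append((child, s1))     # child inherits parent 1's strategy
        pop = nxt
        best = max([best] + [ind[0] for ind in pop], key=knapsack_value)
    return best
```

Note that selection acts on the socially perturbed fitness, while the best-ever solution is tracked by the raw knapsack value.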
|
1001.1896
|
Generalized Degrees of Freedom of the Interference Channel with a Signal
Cognitive Relay
|
cs.IT math.IT
|
We study the interference channel with a signal cognitive relay. A signal
cognitive relay knows the transmit signals (but not the messages) of the
sources non-causally, and tries to help them communicate with their
respective destinations. We derive upper bounds and provide achievable schemes
for this channel. These upper and lower bounds are shown to be tight from a
generalized degrees of freedom point of view. As a result, a characterization
of the generalized degrees of freedom of the interference channel with a signal
cognitive relay is given.
|
1001.1912
|
M\'ethode du point proximal: principe et applications aux algorithmes
it\'eratifs
|
cs.IT math.IT
|
This paper recalls the proximal point method. We study two iterative
algorithms: the Blahut-Arimoto algorithm for computing the capacity of
arbitrary discrete memoryless channels, as an example of an iterative algorithm
working with probability density estimates, and the iterative decoding of
Bit-Interleaved Coded Modulation (BICM-ID). To these iterative algorithms we
apply the proximal point method, which yields new interpretations and an
improved convergence rate.
|
1001.1915
|
Geometrical interpretation and improvements of the Blahut-Arimoto's
algorithm
|
cs.IT math.IT
|
The paper first recalls the Blahut-Arimoto algorithm for computing the
capacity of arbitrary discrete memoryless channels, as an example of an
iterative algorithm working with probability density estimates. Then, a
geometrical interpretation of this algorithm based on projections onto linear
and exponential families of probabilities is provided. Finally, this
understanding also allows us to recast the Blahut-Arimoto algorithm as a true
proximal point algorithm. It is shown that the corresponding version has an
improved convergence rate compared to the initial algorithm, as well as to
other improved versions.
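For reference, the classical Blahut-Arimoto iteration the paper starts from can be sketched as follows (standard textbook form, not the paper's proximal-point variant; it assumes all channel transitions have strictly positive probability):

```python
import numpy as np

def blahut_arimoto(P, iters=200):
    """Capacity (in bits) of a DMC with transition matrix P[x, y] = P(y|x),
    via the classical alternating updates; assumes P > 0 entrywise."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                        # input distribution p(x)
    for _ in range(iters):
        q = r[:, None] * P                         # r(x) P(y|x)
        q /= q.sum(axis=0, keepdims=True)          # posterior q(x|y)
        r = np.exp((P * np.log(q)).sum(axis=1))    # r(x) <- exp E[log q(x|Y)]
        r /= r.sum()
    q = r[:, None] * P
    q /= q.sum(axis=0, keepdims=True)
    C = (r[:, None] * P * np.log2(q / r[:, None])).sum()
    return C, r

# Binary symmetric channel with crossover probability 0.1
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C, r = blahut_arimoto(P)
```

For the BSC the iteration keeps the uniform input, and C converges to the known value 1 - H(0.1).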
|
1001.1917
|
New Criteria for Iterative Decoding
|
cs.IT math.IT
|
Iterative decoding was not originally introduced as the solution to an
optimization problem rendering the analysis of its convergence very difficult.
In this paper, we investigate the link between iterative decoding and classical
optimization techniques. We first show that iterative decoding can be rephrased
as two embedded minimization processes involving the Fermi-Dirac distance.
Based on this new formulation, a hybrid proximal point algorithm is first
derived, with the additional advantage of decreasing a desired criterion. In a
second part, a hybrid minimum entropy algorithm is proposed with improved
performance compared to classical iterative decoding. Even though this paper
focuses on iterative decoding for BICM, the results can be applied to the
large class of turbo-like decoders.
|
1001.1948
|
Collision Helps - Algebraic Collision Recovery for Wireless Erasure
Networks
|
cs.IT cs.NI math.IT
|
Current medium access control mechanisms are based on collision avoidance and
collided packets are discarded. The recent work on ZigZag decoding departs from
this approach by recovering the original packets from multiple collisions. In
this paper, we present an algebraic representation of collisions which allows
us to view each collision as a linear combination of the original packets. The
transmitted, colliding packets may themselves be a coded version of the
original packets.
We propose a new acknowledgment (ACK) mechanism for collisions based on the
idea that if a set of packets collide, the receiver can afford to ACK exactly
one of them and still decode all the packets eventually. We analytically
compare delay and throughput performance of such collision recovery schemes
with other collision avoidance approaches in the context of a single hop
wireless erasure network. In the multiple receiver case, the broadcast
constraint calls for combining collision recovery methods with network coding
across packets at the sender. From the delay perspective, our scheme, without
any coordination, outperforms not only ALOHA-type random access mechanisms
but also centralized scheduling. For the case of streaming arrivals, we propose
a priority-based ACK mechanism and show that its stability region coincides
with the cut-set bound of the packet erasure network.
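The algebraic view of collisions can be illustrated with a toy decoder: each received collision is treated as a GF(2) linear combination (XOR) of the original packets, and once the receiver holds as many independent combinations as packets, Gaussian elimination recovers them all. The coefficient pattern below is hypothetical, not the paper's protocol.

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gauss-Jordan elimination.
    A: (m, k) collision coefficients, B: (m, L) received payload bits.
    Assumes A has full column rank k."""
    A, B = A.copy() % 2, B.copy() % 2
    m, k = A.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]], B[[row, piv]] = A[[piv, row]], B[[piv, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]          # XOR-eliminate the pivot column
                B[r] ^= B[row]
        row += 1
    return B[:k]                        # first k rows now hold the packets

p1 = np.array([1, 0, 1, 1, 0, 1])      # original packets (toy payloads)
p2 = np.array([0, 1, 1, 0, 0, 1])
c1 = p1 ^ p2                           # first reception: p1 and p2 collide
c2 = p2                                # second reception: p2 arrives clean
A = np.array([[1, 1],                  # c1 = p1 + p2 over GF(2)
              [0, 1]])                 # c2 =      p2
X = gf2_solve(A, np.vstack([c1, c2]))
```

Here the collision c1 is not discarded: combined with the clean reception c2, it yields p1 by a single XOR, which is the sense in which "collision helps".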
|
1001.1966
|
A New Method to Extract Dorsal Hand Vein Pattern using Quadratic
Inference Function
|
cs.CV cs.CR
|
Among all biometrics, the dorsal hand vein pattern has lately been attracting
the attention of researchers. Extensive research is being carried out on
various techniques in the hope of finding an efficient one that can be applied
to the dorsal hand vein pattern to improve its accuracy and matching time. One
of the crucial steps in biometrics is the extraction of features. In this
paper, we propose a method based on the quadratic inference function to
extract features from the dorsal hand vein pattern. The biometric system
developed was tested on a database of 100 images. The false acceptance rate
(FAR), false rejection rate (FRR) and the matching time are computed.
|
1001.1968
|
A Topological derivative based image segmentation for sign language
recognition system using isotropic filter
|
cs.CV
|
The need for sign language recognition is increasing rapidly, especially in
the hearing-impaired community. Only a few research groups try to
automatically recognize sign language from video, colored gloves, etc. Their
approaches require a valid segmentation of the data that is used for training
and of the data that is to be recognized. Recognition of a sign language image
sequence is challenging because of the variety of hand shapes and hand
motions. This paper proposes to apply a combination of image segmentation with
restoration using topological derivatives to achieve high recognition
accuracy. Image quality measures are considered here to differentiate the
methods both subjectively and objectively. Experiments show that the
additional use of restoration before segmenting the postures significantly
improves the correct rate of hand detection, and that the discrete derivatives
yield a high rate of discrimination between different static hand postures as
well as between hand postures and the scene background. Ultimately, this
research contributes to the implementation of an automated sign language
recognition system established mainly for welfare purposes.
|
1001.1972
|
A New Image Steganography Based On First Component Alteration Technique
|
cs.MM cs.CV
|
In this paper, a new image steganography scheme is proposed which is a kind
of spatial domain technique. In order to hide secret data in a cover image,
the first component alteration technique is used. Techniques used so far focus
only on two or four bits of a pixel in an image (at most five bits at the edge
of an image), which results in a low peak signal-to-noise ratio and a high
root mean square error. In this technique, the 8 bits of the blue component of
each pixel are replaced with secret data bits. The proposed scheme can embed
more data than previous schemes and shows better image quality. To validate
the scheme, several experiments are performed, and the experimental results
are compared with related previous works.
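The embedding step described above can be sketched in a few lines: overwrite the full 8-bit blue component of successive pixels with secret bytes, leaving the other channels untouched. The BGR channel ordering and the toy cover image are assumptions for illustration.

```python
import numpy as np

def embed(cover, secret):
    """Hide one secret byte per pixel by overwriting the full 8-bit
    blue component (channel 0, assuming BGR pixel order)."""
    stego = cover.copy()
    pixels = stego.reshape(-1, 3)           # view: writes reach `stego`
    data = np.frombuffer(secret, dtype=np.uint8)
    if data.size > pixels.shape[0]:
        raise ValueError("cover image too small for the secret")
    pixels[:data.size, 0] = data
    return stego

def extract(stego, n):
    """Read back the first n hidden bytes from the blue channel."""
    return stego.reshape(-1, 3)[:n, 0].tobytes()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
stego = embed(cover, b"secret!")
```

Replacing the whole blue byte gives one byte of capacity per pixel, at the cost of visible blue-channel distortion; the abstract's claim is that this trade-off still yields acceptable image quality.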
|
1001.1979
|
ICD 10 Based Medical Expert System Using Fuzzy Temporal Logic
|
cs.AI cs.LO
|
The medical diagnosis process involves many levels, and a considerable amount
of time and money is invariably spent on the first level of diagnosis, usually
made by the physician for every patient each time. Hence there is a need for a
computer-based system which not only asks relevant questions to the patients
but also aids the physician by giving a set of possible diseases from the
symptoms, using logic at inference. In this work, an ICD10-based Medical
Expert System is presented that provides advice, information and
recommendations to the physician using fuzzy temporal logic. The knowledge
base used in this system consists of facts about symptoms and rules on
diseases. It also provides a fuzzy severity scale and weight factor for each
symptom and disease, which can vary with respect to time. The system generates
the possible disease conditions
based on modified Euclidean metric using Elders algorithm for effective
clustering. The minimum similarity value is used as the decision parameter to
identify a disease.
|
1001.1984
|
DNA-MATRIX a tool for DNA motif discovery and weight matrix construction
|
q-bio.GN cs.CE
|
In computational molecular biology, predicting gene regulatory binding sites
across a whole genome remains a challenge for researchers. Nowadays,
genome-wide regulatory binding site prediction tools require either a direct
pattern sequence or a weight matrix. Although known transcription factor
binding site databases are available for genome-wide prediction, no tool is
available that can construct different weight matrices as needed by the user,
or that can scan large data sets by first aligning the input upstream or
promoter sequences and then constructing the matrices at different levels and
in different file formats. Considering this, we developed the DNA-MATRIX tool
for searching putative regulatory binding sites in gene upstream sequences.
The tool uses a simple heuristic algorithm based on biological rules for
weight matrix construction, which can be transformed into different formats
after motif alignment, and therefore provides the possibility to identify the
most potential conserved binding sites in the regulated genes. The user may
construct and save specific weight or frequency matrices in different forms
and file formats, based on a user-selected conserved aligned block of short
sequences ranging from 6 to 20 base pairs and the prior nucleotide frequency
before weight scoring.
|
1001.1985
|
Multiprocessor Scheduling For Tasks With Priority Using GA
|
cs.NE cs.DC
|
Multiprocessors have emerged as a powerful computing means for running
real-time applications, especially where a uniprocessor system would not be
sufficient to execute all the tasks. The high performance and reliability of
multiprocessors have made them a powerful computing resource. Such a computing
environment requires an efficient algorithm to determine when and on which
processor a given task should execute. In multiprocessor systems, an efficient
scheduling of a parallel program onto the processors that minimizes the entire
execution time is vital for achieving high performance. This scheduling
problem is known to be NP-hard. In the multiprocessor scheduling problem, a
given program is to be scheduled in a given multiprocessor system such that
the program's execution time is minimized; the last job must be completed as
early as possible. The genetic algorithm (GA) is one of the widely used
techniques for constrained optimization problems. Genetic algorithms are
basically search algorithms based on the mechanics of natural selection and
natural genetics. The main goal behind research on genetic algorithms is
robustness, i.e., a balance between efficiency and efficacy. This paper
proposes a genetic algorithm to solve the multiprocessor scheduling problem
that minimizes the makespan.
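A tiny illustration of the encoding such a GA typically uses (the task set, processor count, and GA parameters are hypothetical, not from the paper): a chromosome assigns each task to a processor, and fitness is the resulting makespan.

```python
import random

TASK_TIMES = [4, 2, 7, 1, 3, 5]        # hypothetical task durations
N_PROCS = 2

def makespan(assign):
    """assign[i] = processor running task i; makespan = max processor load."""
    load = [0] * N_PROCS
    for t, p in zip(TASK_TIMES, assign):
        load[p] += t
    return max(load)

def ga_schedule(pop_size=20, gens=40, pmut=0.1):
    n = len(TASK_TIMES)
    pop = [[random.randrange(N_PROCS) for _ in range(n)]
           for _ in range(pop_size)]
    best = min(pop, key=makespan)
    for _ in range(gens):
        pop.sort(key=makespan)                      # rank by fitness
        parents = pop[:pop_size // 2]               # truncation selection
        nxt = []
        while len(nxt) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)            # one-point crossover
            nxt.append([random.randrange(N_PROCS)   # gene-wise mutation
                        if random.random() < pmut else g
                        for g in a[:cut] + b[cut:]])
        pop = nxt
        best = min([best] + pop, key=makespan)      # elitist best-ever
    return best
```

For this instance the total work is 22, so a makespan of 11 (e.g. tasks {0, 2} on one processor) is optimal on two processors.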
|
1001.1988
|
An Improved Image Mining Technique For Brain Tumour Classification Using
Efficient Classifier
|
cs.CV cs.IR
|
An improved image mining technique for brain tumor classification using
pruned association rule with MARI algorithm is presented in this paper. The
method proposed makes use of association rule mining technique to classify the
CT scan brain images into three categories namely normal, benign and malign. It
combines the low level features extracted from images and high level knowledge
from specialists. The developed algorithm can assist the physicians for
efficient classification with multiple keywords per image to improve the
accuracy. The experimental results on a pre-diagnosed database of brain images
showed 96 percent sensitivity and 93 percent accuracy.
|
1001.1991
|
Mining Spatial Gene Expression Data Using Negative Association Rules
|
cs.DB cs.CE q-bio.GN
|
Over the years, data mining has attracted most of the attention from the
research community. The researchers attempt to develop faster, more scalable
algorithms to navigate over the ever increasing volumes of spatial gene
expression data in search of meaningful patterns. Association rules are a data
mining technique that tries to identify intrinsic patterns in spatial gene
expression data. The technique has been widely used in different applications,
and many algorithms have been introduced to discover these rules. However,
Apriori-like algorithms have been used to find only positive association
rules. In contrast to positive rules, negative rules capture the relationship
between the occurrence of one set of items and the absence of another set of
items. In this paper, an algorithm for mining negative association rules from
spatial gene expression data is introduced. The algorithm intends to discover
the negative association rules that are complementary to the association
rules often generated by Apriori-like algorithms. Our study shows that
negative association rules can be discovered efficiently from spatial gene
expression data.
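The support/confidence view of a negative rule can be sketched with a toy miner restricted to single-item rules of the form X => NOT Y (the transaction data below is hypothetical, and the paper's algorithm is not reproduced here):

```python
from itertools import combinations

def negative_rules(transactions, min_sup=0.3, min_conf=0.6):
    """Mine single-item negative rules X => NOT Y, kept when
    supp(X and not Y) >= min_sup and
    supp(X and not Y) / supp(X) >= min_conf."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    supp = {i: sum(i in t for t in transactions) / n for i in items}
    rules = []
    for a, b in combinations(items, 2):
        for x, y in ((a, b), (b, a)):
            s = sum((x in t) and (y not in t) for t in transactions) / n
            if supp[x] and s >= min_sup and s / supp[x] >= min_conf:
                rules.append((x, y, s, s / supp[x]))   # read: x => NOT y
    return rules

# Toy "gene present in region" transactions (hypothetical data)
T = [{"g1", "g2"}, {"g1"}, {"g1"}, {"g2", "g3"}, {"g3"}]
found = negative_rules(T)
```

In this toy data every transaction containing g1 lacks g3, so the rule g1 => NOT g3 is found with support 0.6 and confidence 1.0.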
|
1001.2024
|
Wireless Networks with Asynchronous Users
|
cs.IT math.IT
|
This paper addresses an interference channel consisting of $\mathbf{n}$
active users sharing $u$ frequency sub-bands. Users are asynchronous, meaning
that there exists a mutual delay between their transmitted codes. A stationary
model
for interference is considered by assuming the starting point of an
interferer's data is uniformly distributed along the codeword of any user. This
model is not ergodic, however, we show that the noise plus interference process
satisfies an Asymptotic Equipartition Property (AEP) under certain conditions.
This enables us to define achievable rates in the conventional Shannon sense.
The spectrum is divided into private and common bands. Each user occupies its
assigned private band and the common band upon activation. In a scenario where
all transmitters are unaware of the number of active users and the channel
gains, the optimum spectrum assignment is obtained such that the so-called
outage capacity per user is maximized. If $\Pr\{\mathbf{n}>2\}>0$, all users
follow a locally Randomized On-Off signaling scheme on the common band where
each transmitter quits transmitting its Gaussian signals independently from
transmission to transmission. Achievable rates are developed using a
conditional version of Entropy Power Inequality (EPI) and an upper bound on the
differential entropy of a mixed Gaussian random variable. Thereafter, the
activation probability on each transmission slot together with the spectrum
assignment are designed resulting in the largest outage capacity.
|
1001.2038
|
Collaborative Spectrum Sensing from Sparse Observations Using Matrix
Completion for Cognitive Radio Networks
|
cs.IT math.IT
|
In cognitive radio, spectrum sensing is a key component to detect spectrum
holes (i.e., channels not used by any primary users). Collaborative spectrum
sensing among the cognitive radio nodes is expected to improve the ability of
checking complete spectrum usage states. Unfortunately, due to power limitation
and channel fading, the available channel sensing information is far from
sufficient to identify the unoccupied channels directly. Aiming at breaking this
bottleneck, we apply recent matrix completion techniques to greatly reduce the
sensing information needed. We formulate the collaborative sensing problem as a
matrix completion subproblem and a joint-sparsity reconstruction subproblem.
Results of numerical simulations that validate the effectiveness and
robustness of the proposed approach are presented. In particular, in noiseless
cases, when the number of primary users is small, exact detection was obtained
with no more than 8% of the complete sensing information, while as the number
of primary users increases, a detection rate of 95.55% was achieved with
merely 16.8% of the information.
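The core matrix completion step can be sketched with a generic iterative truncated-SVD imputation on hypothetical low-rank data; this is a standard completion heuristic, not the authors' joint matrix-completion / joint-sparsity formulation.

```python
import numpy as np

def complete_lowrank(M_obs, mask, rank=1, iters=200):
    """Fill missing entries (mask == 0) of an approximately low-rank matrix:
    alternate a best rank-r SVD approximation with re-imposing the known
    entries."""
    X = M_obs * mask
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # rank-r projection
        X = mask * M_obs + (1 - mask) * X            # keep observed entries
    return X

rng = np.random.default_rng(0)
M = np.outer(rng.normal(size=10), rng.normal(size=10))  # true rank-1 matrix
mask = np.ones_like(M)
miss = rng.choice(100, size=10, replace=False)
mask.flat[miss] = 0                                     # hide 10 entries
X = complete_lowrank(M * mask, mask)
```

Because the underlying matrix is low rank, the few hidden entries are recovered from the observed ones, which is the same principle that lets sparse sensing reports reconstruct the full spectrum occupancy map.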
|
1001.2050
|
Scheduling in Wireless Networks under Uncertainties: A Greedy
Primal-Dual Approach
|
cs.IT cs.NI math.IT math.OC
|
This paper proposes a dynamic primal-dual type algorithm to solve the optimal
scheduling problem in wireless networks subject to uncertain parameters, which
are generated by stochastic network processes such as random packet arrivals,
channel fading, and node mobilities. The algorithm is a generalization of the
well-known max-weight scheduling algorithm proposed by Tassiulas et al., where
only queue length information is used for computing the schedules when the
arrival rates are uncertain. Using the technique of fluid limits, sample path
convergence of the algorithm to an arbitrarily close to optimal solution is
proved, under the assumption that the Strong Law of Large Numbers (SLLN)
applies to the random processes which generate the uncertain parameters. The
performance of the algorithm is further verified by simulation results. The
method may potentially be applied to other applications where dynamic
algorithms for convex problems with uncertain parameters are needed.
|
1001.2059
|
Multishot Codes for Network Coding using Rank-Metric Codes
|
cs.IT math.IT
|
The multiplicative-additive finite-field matrix channel arises as an adequate
model for linear network coding systems when links are subject to errors and
erasures, and both the network topology and the network code are unknown. In a
previous work we proposed a general construction of multishot codes for this
channel based on the multilevel coding theory. Herein we apply this
construction to the rank-metric space, obtaining multishot rank-metric codes
which, by lifting, can be converted to codes for the aforementioned channel. We
also adapt well-known encoding and decoding algorithms to the considered
situation.
|
1001.2062
|
On broadcast channels with binary inputs and symmetric outputs
|
cs.IT math.IT
|
We study the capacity regions of broadcast channels with binary inputs and
symmetric outputs. We study the partial order induced by the more capable
ordering of broadcast channels for channels belonging to this class. This study
leads to some surprising connections regarding various notions of dominance of
receivers. The results here also help us isolate some classes of symmetric
channels where the best known inner and outer bounds differ.
|
1001.2067
|
Refined rate of channel polarization
|
cs.IT math.IT
|
A rate-dependent upper bound of the best achievable block error probability
of polar codes with successive-cancellation decoding is derived.
|
1001.2076
|
Fast-Group-Decodable STBCs via Codes over GF(4)
|
cs.IT math.IT
|
In this paper we construct low decoding complexity STBCs by using the Pauli
matrices as linear dispersion matrices. In this case the Hurwitz-Radon
orthogonality condition is shown to be easily checked by transferring the
problem to $\mathbb{F}_4$ domain. The problem of constructing low decoding
complexity STBCs is shown to be equivalent to finding certain codes over
$\mathbb{F}_4$. It is shown that almost all known low complexity STBCs can be
obtained by this approach. New codes are given that have the least known
decoding complexity in particular ranges of rate.
|
1001.2077
|
On Random Linear Network Coding for Butterfly Network
|
cs.IT math.IT
|
Random linear network coding is a feasible encoding tool for network coding,
especially for non-coherent networks, and its performance is important in both
theory and application. In this letter, we study the performance of random
linear network coding for the well-known butterfly network by analyzing the
failure probabilities. We determine the failure probabilities of random linear
network coding for the butterfly network and for the butterfly network with
channel failure probability p.
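A hedged toy version of such a failure-probability computation, in a simplified model (an assumption, not the letter's exact setting) where only the bottleneck node picks i.i.d. uniform coefficients over GF(q) and each sink already holds one source packet: decoding fails exactly when the coefficient of the sink's missing packet is zero.

```python
def butterfly_failure_prob(q):
    """Failure probability of the simplified randomized butterfly:
    the bottleneck forwards a*x1 + b*x2 with (a, b) uniform over GF(q)^2;
    sink 1 needs b != 0 and sink 2 needs a != 0."""
    failures = sum(1 for a in range(q) for b in range(q)
                   if a == 0 or b == 0)
    return failures / (q * q)
```

By inclusion-exclusion this equals (2q - 1) / q^2, e.g. 3/4 for q = 2, so larger field sizes quickly make random coding reliable.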
|
1001.2097
|
Predictability of PV power grid performance on insular sites without
weather stations: use of artificial neural networks
|
cs.NE
|
The official meteorological network is poor on the island of Corsica: only
three sites being about 50 km apart are equipped with pyranometers which enable
measurements by hourly and daily step. These sites are Ajaccio (41\degree 55'N
and 8\degree 48'E, seaside), Bastia (42\degree 33'N, 9\degree 29'E, seaside)
and Corte (42\degree 30'N, 9\degree 15'E, average altitude of 486 meters).
This lack of weather stations makes PV power grid performance difficult to
predict. This work studies a methodology that can predict global solar
irradiation using data available from another location, on daily and hourly
horizons. In order to achieve this prediction, we have used Artificial Neural
Networks, a popular artificial intelligence technique in the forecasting
domain. A simulator has been obtained using data available for the
station of Ajaccio that is the only station for which we have a lot of data: 16
years from 1972 to 1987. Then we have tested the efficiency of this simulator
in two places with different geographical features: Corte, a mountainous region
and Bastia, a coastal region. On the daily horizon, the relocation yielded
fewer errors than a "na\"ive" prediction method based on persistence
(RMSE = 1468 vs. 1383 Wh/m^2 at Bastia and 1325 vs. 1213 Wh/m^2 at Corte). In
the hourly case, the results were still satisfactory, and much better than
persistence (RMSE = 138.8 vs. 109.3 Wh/m^2 at Bastia and 135.1 vs. 114.7
Wh/m^2 at Corte). The last experiment was to evaluate the accuracy of our
simulator on a PV power grid located 10 km from the station of Ajaccio. We
obtained very suitable errors (nRMSE = 27.9%, RMSE = 99.0 Wh) compared to
those obtained with persistence (nRMSE = 42.2%, RMSE = 149.7 Wh).
|
1001.2112
|
Outage Capacity of Bursty Amplify-and-Forward with Incremental Relaying
|
cs.IT math.IT
|
We derive the outage capacity of a bursty version of the amplify-and-forward
(BAF) protocol for small signal-to-noise ratios when incremental relaying is
used. We show that the ratio between the outage capacities of BAF and the
cut-set bound is independent of the relay position and that BAF is outage
optimal for certain conditions on the target rate R. This is in contrast to
decode-and-forward with incremental relaying, where the relay location strongly
determines the performance of the cooperative protocol. We further derive the
outage capacity for a network consisting of an arbitrary number of relay nodes.
In this case the relays transmit in subsequent partitions of the overall
transmission block and the destination accumulates signal-to-noise ratio until
it is able to decode.
|
1001.2117
|
On Outage Capacity for Incremental Relaying with Imperfect Feedback
|
cs.IT math.IT
|
We investigate the effect of imperfect feedback on the \epsilon-outage
capacity of incremental relaying in the low signal-to-noise ratio (SNR) regime.
We show that imperfect feedback leads to a rescaling of the pre-log factor
(comparable to the multiplexing gain for networks operating in the high SNR
regime) and thus reduces the \epsilon-outage capacity considerably. Moreover,
we investigate the effect of different degrees of feedback reliability on the
system performance. We further derive a simple binary tree-based construction
rule to analyze networks with an arbitrary number of relay nodes with respect
to imperfect feedback. This rule can directly be mapped to a comprehensive
matrix notation.
|
1001.2155
|
Cooperative Automated Worm Response and Detection Immune Algorithm
|
cs.AI cs.CR cs.NE
|
The role of T-cells within the immune system is to confirm and assess
anomalous situations and then either respond to or tolerate the source of the
effect. To illustrate how these mechanisms can be harnessed to solve real-world
problems, we present the blueprint of a T-cell inspired algorithm for computer
security worm detection. We show how the three central T-cell processes, namely
T-cell maturation, differentiation and proliferation, naturally map into this
domain and further illustrate how such an algorithm fits into a complete immune
inspired computer security system and framework.
|
1001.2164
|
The Capacity of a Class of Linear Deterministic Networks
|
cs.IT math.IT
|
In this paper, we investigate optimal coding strategies for a class of linear
deterministic relay networks. The network under study is a relay network, with
one source, one destination, and two relay nodes. Additionally, there is a
disturbing source of signals that causes interference with the information
signals received by the relay nodes. Our model captures the effect of the
interference of message signals and disturbing signals on a single relay
network, or the interference of signals from multiple relay networks with each
other in the linear deterministic framework. For several ranges of the network
parameters we find upper bounds on the maximum achievable source--destination
rate in the presence of the disturbing node, and in each case we find an
optimal coding scheme that achieves the upper bound.
|
1001.2170
|
Comparing Simulation Output Accuracy of Discrete Event and Agent Based
Models: A Quantitative Approach
|
cs.AI cs.MA
|
In our research we investigate the output accuracy of discrete event
simulation models and agent based simulation models when studying human centric
complex systems. In this paper we focus on human reactive behaviour as it is
possible in both modelling approaches to implement human reactive behaviour in
the model by using standard methods. As a case study we have chosen the retail
sector, and here in particular the operations of the fitting room in the
womenswear department of a large UK department store. In our case study we
looked at ways of determining the efficiency of implementing new management
policies for the fitting room operation through modelling the reactive
behaviour of staff and customers of the department. First, we carried out a
validation experiment in which we compared the results from our models to the
performance of the real system. This experiment also allowed us to establish
differences in output accuracy between the two modelling methods. In a second
step a
multi-scenario experiment was carried out to study the behaviour of the models
when they are used for the purpose of operational improvement. Overall we have
found that for our case study example both discrete event simulation and agent
based simulation have the same potential to support the investigation into the
efficiency of implementing new management policies.
|
1001.2186
|
Building reputation systems for better ranking
|
cs.IR cs.DB
|
How to rank web pages, scientists and online resources has recently attracted
increasing attention from both physicists and computer scientists. In this
paper, we study the ranking problem of rating systems where users rate
objects with discrete ratings. We propose an algorithm that can simultaneously
evaluate user reputation and object quality via iterative refinement.
According to both the artificially generated data and the real data from
MovieLens and Amazon, our algorithm can considerably enhance the ranking
accuracy. This work highlights the significance of reputation systems in the
Internet era and points out a way to evaluate and compare the performances of
different reputation systems.
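An iterative user-reputation / object-quality refinement of the kind described can be sketched as follows; the specific update rules (reputation as inverse mean squared rating error) and the toy rating matrix are illustrative assumptions, not necessarily the authors' exact algorithm.

```python
import numpy as np

def reputation_ranking(R, mask, iters=50, eps=1e-8):
    """R[u, o]: rating given to object o by user u (mask[u, o] = 1 if rated).
    Alternate two updates: object quality = reputation-weighted mean rating;
    user reputation = inverse of the user's mean squared rating error."""
    rep = np.ones(R.shape[0])
    for _ in range(iters):
        w = rep[:, None] * mask
        quality = (w * R).sum(axis=0) / (w.sum(axis=0) + eps)
        err = (((R - quality) ** 2) * mask).sum(axis=1) \
              / (mask.sum(axis=1) + eps)
        rep = 1.0 / (err + eps)
    return quality, rep

# Three consistent users and one noisy user rating three objects
R = np.array([[5.0, 3.0, 1.0],
              [5.0, 3.0, 1.0],
              [5.0, 3.0, 1.0],
              [1.0, 5.0, 3.0]])
mask = np.ones_like(R)
quality, rep = reputation_ranking(R, mask)
```

The noisy user's deviations drive its reputation down, so the refined qualities converge to the consensus of the consistent users; ranking the objects by `quality` then gives the reputation-corrected ordering.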
|
1001.2190
|
Characterizations of generalized entropy functions by functional
equations
|
cs.IT math.IT
|
We shall show that a two-parameter extended entropy function is characterized
by a functional equation. As a corollary of this result, we obtain that the
Tsallis entropy function is characterized by a functional equation, which is
of a different form from the one used in \cite{ST}, i.e., in Proposition
\ref{prop01} of the present paper. We also give an interpretation of the
functional equation giving the Tsallis entropy function, in relation to two
non-additivity properties.
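For reference, the Tsallis entropy in question is standardly defined as follows (this is the textbook definition and its usual pseudo-additivity, not the paper's two-parameter extension or its exact functional equation):

```latex
S_q(X) = \frac{1 - \sum_i p_i^{\,q}}{q-1}, \qquad
\lim_{q \to 1} S_q(X) = -\sum_i p_i \ln p_i,
% pseudo-additivity for independent X, Y (a "non-additive property"):
S_q(X, Y) = S_q(X) + S_q(Y) + (1-q)\, S_q(X)\, S_q(Y).
```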
|
1001.2195
|
DCA for Bot Detection
|
cs.AI cs.CR cs.NE
|
Ensuring the security of computers is a non-trivial task, with many
techniques used by malicious users to compromise these systems. In recent years
a new threat has emerged in the form of networks of hijacked zombie machines
used to perform complex distributed attacks such as denial of service and to
obtain sensitive data such as password information. These zombie machines are
said to be infected with a 'bot' - a malicious piece of software which is
installed on a host machine and is controlled by a remote attacker, termed the
'botmaster of a botnet'. In this work, we use the biologically inspired
Dendritic Cell Algorithm (DCA) to detect the existence of a single bot on a
compromised host machine. The DCA is an immune-inspired algorithm based on an
abstract model of the behaviour of the dendritic cells of the human body. The
basis of anomaly detection performed by the DCA is facilitated using the
correlation of behavioural attributes such as keylogging and packet flooding
behaviour. The results of the application of the DCA to the detection of a
single bot show that the algorithm is a successful technique for the detection
of such malicious software without responding to normally running programs.
|
1001.2198
|
Performance of Interference Alignment in Clustered Wireless Ad Hoc
Networks
|
cs.IT math.IT
|
Spatial interference alignment among a finite number of users is proposed as
a technique to increase the probability of successful transmission in an
interference limited clustered wireless ad hoc network. Using techniques from
stochastic geometry, we build on the work of Ganti and Haenggi dealing with
Poisson cluster processes with a fixed number of cluster points and provide a
numerically integrable expression for the outage probability using an
intra-cluster interference alignment strategy with multiplexing gain one. For a
special network setting we derive a closed-form upper bound. We demonstrate
significant performance gains compared to single-antenna systems without local
cooperation.
|
1001.2205
|
Deriving the Probabilistic Capacity of General Run-Length Sets Using
Generating Functions
|
cs.IT cs.FL math.CO math.IT
|
In "Reliable Communication in the Absence of a Common Clock" (Yeung et al.,
2009), the authors introduce general run-length sets, which form a class of
constrained systems that permit run-lengths from a countably infinite set. For
a particular definition of probabilistic capacity, they show that probabilistic
capacity is equal to combinatorial capacity. In the present work, it is shown
that the same result also holds for Shannon's original definition of
probabilistic capacity. The derivation presented here is based on generating
functions of constrained systems as developed in "On the Capacity of
Constrained Systems" (Boecherer et al., 2010) and provides a unified
information-theoretic treatment of general run-length sets.
|
1001.2208
|
Biological Inspiration for Artificial Immune Systems
|
cs.AI cs.NE
|
Artificial immune systems (AISs) to date have generally been inspired by
naive biological metaphors. This has limited the effectiveness of these
systems. In this position paper two ways in which AISs could be made more
biologically realistic are discussed. We propose that AISs should draw their
inspiration from organisms which possess only innate immune systems, and that
AISs should employ systemic models of the immune system to structure their
overall design. An outline of plant and invertebrate immune systems is
presented, and the impact that more biologically-realistic AISs could have on
a number of contemporary research areas is also discussed.
|
1001.2218
|
Bounds on the Capacity of the Relay Channel with Noncausal State
Information at Source
|
cs.IT math.IT
|
We consider a three-terminal state-dependent relay channel with the channel
state available non-causally at only the source. Such a model may be of
interest for node cooperation in the framework of cognition, i.e.,
collaborative signal transmission involving cognitive and non-cognitive radios.
We study the capacity of this communication model. One principal problem in
this setup is caused by the relay's not knowing the channel state. In the
discrete memoryless (DM) case, we establish lower bounds on channel capacity.
For the Gaussian case, we derive lower and upper bounds on the channel
capacity. The upper bound is strictly better than the cut-set upper bound. We
show that one of the developed lower bounds comes close to the upper bound,
asymptotically, for certain ranges of rates.
|
1001.2228
|
Estimation with Random Linear Mixing, Belief Propagation and Compressed
Sensing
|
cs.IT math.IT
|
We apply Guo and Wang's relaxed belief propagation (BP) method to the
estimation of a random vector from linear measurements followed by a
componentwise probabilistic measurement channel. Relaxed BP uses a Gaussian
approximation in standard BP to obtain significant computational savings for
dense measurement matrices. The main contribution of this paper is to extend
the relaxed BP method and analysis to general (non-AWGN) output channels.
Specifically, we present detailed equations for implementing relaxed BP for
general channels and show that relaxed BP has asymptotic large sparse limit
behavior identical to that of standard BP, as predicted by Guo and Wang's state
evolution (SE) equations. Applications are presented to compressed sensing and
estimation with bounded noise.
|
1001.2263
|
Syllable Analysis to Build a Dictation System in Telugu language
|
cs.CL cs.HC
|
In recent decades, speech-interactive systems have gained increasing
importance. To develop a dictation system like Dragon for Indian languages, it
is essential to adapt the system to a speaker with minimum training. In this
paper we focus on the importance of creating a speech database at the level of
syllable units and of identifying the minimum text to be considered while
training any speech recognition system. Systems have been developed for
continuous speech recognition in English and in a few Indian languages such as
Hindi and Tamil. This paper gives the statistical details of syllables in
Telugu and their use in minimizing the search space during speech recognition.
The minimum set of words that covers the maximum number of syllables is
identified. This word list can be used to prepare a small text for collecting
speech samples while training the dictation system. Results are plotted for the
frequency of syllables and the number of syllables in each word. This approach
is applied to the CIIL Mysore text corpus, which contains 3 million words.
|
1001.2267
|
Speech Recognition by Machine, A Review
|
cs.CL
|
This paper presents a brief survey on Automatic Speech Recognition and
discusses the major themes and advances made in the past 60 years of research,
so as to provide a technological perspective and an appreciation of the
fundamental progress that has been accomplished in this important area of
speech communication. After years of research and development, the accuracy of
automatic speech recognition remains one of the key research challenges (e.g.,
variations in context, speakers, and environment). The design of a speech
recognition system requires careful attention to the following issues:
definition of the various types of speech classes, speech representation,
feature extraction techniques, speech classifiers, databases, and performance
evaluation. The problems existing in ASR and the various techniques constructed
by researchers to solve them are presented in chronological order. The authors
hope that this work will be a contribution to the area of speech recognition.
The objective of this review is to summarize and compare some of the well-known
methods used in the various stages of a speech recognition system and to
identify research topics and applications at the forefront of this exciting and
challenging field.
|
1001.2270
|
An Improved Approach to High Level Privacy Preserving Itemset Mining
|
cs.DB cs.IR
|
Privacy preserving association rule mining has triggered the development of
many privacy preserving data mining techniques. A large fraction of them use
randomized data distortion techniques to mask the data while preserving
privacy. This paper proposes a new transaction randomization method that
combines the fake transaction randomization method with a new per-transaction
randomization method. This method distorts the items within each transaction
and ensures a higher level of data privacy than previous approaches. The
per-transaction randomization method uses a randomization function to replace
each item by a random number, guaranteeing privacy within the transaction as
well. A tool has also been developed that implements the proposed approach to
mine frequent itemsets and association rules from the data while guaranteeing
the anti-monotonic property.
|
1001.2274
|
Network Capacity Region of Multi-Queue Multi-Server Queueing System with
Time Varying Connectivities
|
cs.IT cs.NI math.IT math.OC
|
Network capacity region of multi-queue multi-server queueing system with
random ON-OFF connectivities and stationary arrival processes is derived in
this paper. Specifically, the necessary and sufficient conditions for the
stability of the system are derived under general arrival processes with finite
first and second moments. In the case of stationary arrival processes, these
conditions establish the network capacity region of the system. It is also
shown that AS/LCQ (Any Server/Longest Connected Queue) policy stabilizes the
system when it is stabilizable. Furthermore, an upper bound for the average
queue occupancy is derived for this policy.
|
1001.2275
|
Efficient Candidacy Reduction For Frequent Pattern Mining
|
cs.DB
|
Nowadays, knowledge discovery, or extracting knowledge from large amounts of
data, is a desirable task in competitive businesses. Data mining is a main step
in the knowledge discovery process, and frequent patterns play a central role
in data mining tasks such as clustering, classification, and association
analysis. Identifying all frequent patterns is the most time-consuming step due
to the massive number of candidate patterns. Over the past decade there has
been an increasing number of efficient algorithms for mining frequent patterns.
However, reducing the number of candidate patterns and the number of
comparisons for support counting are still two open problems in this field,
which have made frequent pattern mining one of the active research themes in
data mining. A reasonable solution is to identify a small candidate pattern set
from which all frequent patterns can be generated. In this paper, a method is
proposed based on a new candidate set, called the candidate head set or H,
which forms a small set of candidate patterns. The experimental results verify
the accuracy of the proposed method and the reduction in the number of
candidate patterns and comparisons.
|
1001.2277
|
Application of a Fuzzy Programming Technique to Production Planning in
the Textile Industry
|
cs.AI
|
Many engineering optimization problems can be considered as linear
programming problems where all or some of the parameters involved are
linguistic in nature. These can only be quantified using fuzzy sets. The aim of
this paper is to solve a fuzzy linear programming problem in which the
parameters involved are fuzzy quantities with logistic membership functions. To
explore the applicability of the method a numerical example is considered to
determine the monthly production planning quotas and profit of a home textile
group.
|
1001.2279
|
The Application of Mamdani Fuzzy Model for Auto Zoom Function of a
Digital Camera
|
cs.AI
|
Mamdani Fuzzy Model is an important technique in Computational Intelligence
(CI) study. This paper presents an implementation of a supervised learning
method based on membership function training in the context of Mamdani fuzzy
models. Specifically, the auto zoom function of a digital camera is modelled
using the Mamdani technique. The performance of the control method is verified through a
series of simulation and numerical results are provided as illustrations.
|
1001.2283
|
Mutual Information of IID Complex Gaussian Signals on Block
Rayleigh-faded Channels
|
cs.IT math.IT
|
We present a method to compute, quickly and efficiently, the mutual
information achieved by an IID (independent identically distributed) complex
Gaussian input on a block Rayleigh-faded channel without side information at
the receiver. The method accommodates both scalar and MIMO (multiple-input
multiple-output) settings. Operationally, the mutual information thus computed
represents the highest spectral efficiency that can be attained using standard
Gaussian codebooks. Examples are provided that illustrate the loss in spectral
efficiency caused by fast fading and how that loss is amplified by the use of
multiple transmit antennas. These examples are further enriched by comparisons
with the channel capacity under perfect channel-state information at the
receiver, and with the spectral efficiency attained by pilot-based
transmission.
|
1001.2284
|
An Efficient Approach Toward the Asymptotic Analysis of Node-Based
Recovery Algorithms in Compressed Sensing
|
cs.IT math.IT
|
In this paper, we propose a general framework for the asymptotic analysis of
node-based verification-based algorithms. In our analysis we let the signal
length $n$ tend to infinity and let the number of non-zero elements of the
signal $k$ scale linearly with $n$. Using the proposed framework, we study the
asymptotic behavior of the recovery algorithms over random sparse matrices
(graphs) in the context of compressive sensing. Our analysis shows that there
exists a success threshold on the density ratio $k/n$, before which the
recovery algorithms are successful, and beyond which they fail. This threshold
is a function of both the graph and the recovery algorithm. We also demonstrate
that there is a good agreement between the asymptotic behavior of recovery
algorithms and finite length simulations for moderately large values of $n$.
|
1001.2298
|
Turbo Receiver Design for Phase Noise Mitigation in OFDM Systems
|
cs.IT math.IT
|
This paper addresses the issue of phase noise in OFDM systems. Phase noise
(PHN) is a transceiver impairment resulting from the non-idealities of the
local oscillator. We present a case for designing a turbo receiver for systems
corrupted by phase noise by taking a closer look at the effects of the common
phase error (CPE). Using an approximate probabilistic framework called
variational inference (VI), we develop a soft-in soft-out (SISO) algorithm that
generates posterior bit-level soft estimates while taking into account the
effect of phase noise. The algorithm also provides an estimate of the phase
noise sequence. Using this SISO algorithm, a turbo receiver is designed by
passing soft information between the SISO detector and an outer forward error
correcting (FEC) decoder that uses a soft decoding algorithm. It is shown that
the turbo receiver achieves close to optimal performance.
|
1001.2307
|
Transceiver Design using Linear Precoding in a Multiuser MIMO System with
Limited Feedback
|
cs.IT math.IT
|
We investigate quantization and feedback of channel state information in a
multiuser (MU) multiple input multiple output (MIMO) system. Each user may
receive multiple data streams. Our design minimizes the sum mean squared error
(SMSE) while accounting for the imperfections in channel state information
(CSI) at the transmitter. This paper makes three contributions: first, we
provide an end-to-end SMSE transceiver design that incorporates receiver
combining, feedback policy and transmit precoder design with channel
uncertainty. This enables the proposed transceiver to outperform the previously
derived limited feedback MU linear transceivers. Second, we remove
dimensionality constraints on the MIMO system, for the scenario with multiple
data streams per user, using a combination of maximum expected signal combining
(MESC) and minimum MSE receiver. This makes the feedback of each user
independent of the others and the resulting feedback overhead scales linearly
with the number of data streams instead of the number of receiving antennas.
Finally, we analyze SMSE of the proposed algorithm at high signal-to-noise
ratio (SNR) and large number of transmit antennas. As an aside, we show
analytically why the bit error rate, in the high SNR regime, increases if
quantization error is ignored.
|
1001.2327
|
Wiretap Channel with Causal State Information
|
cs.IT math.IT
|
A lower bound on the secrecy capacity of the wiretap channel with state
information available causally at both the encoder and decoder is established.
The lower bound is shown to be strictly larger than that for the noncausal case
by Liu and Chen. Achievability is proved using block Markov coding, Shannon
strategy, and key generation from common state information. The state sequence
available at the end of each block is used to generate a key, which is used to
enhance the transmission rate of the confidential message in the following
block. An upper bound on the secrecy capacity when the state is available
noncausally at the encoder and decoder is established and is shown to coincide
with the lower bound for several classes of wiretap channels with state.
|
1001.2331
|
Information Theoretic Bounds for Low-Rank Matrix Completion
|
cs.IT cs.CC math.IT math.PR
|
This paper studies the low-rank matrix completion problem from an information
theoretic perspective. The completion problem is rephrased as a communication
problem of an (uncoded) low-rank matrix source over an erasure channel. The
paper then uses achievability and converse arguments to present order-wise
optimal bounds for the completion problem.
|
1001.2334
|
Network-Level Cooperative Protocols for Wireless Multicasting: Stable
Throughput Analysis and Use of Network Coding
|
cs.IT math.IT
|
In this paper, we investigate the impact of network coding at the relay node
on the stable throughput rate in multicasting cooperative wireless networks.
The proposed protocol adopts network-level cooperation, in contrast to
traditional physical-layer cooperative protocols, and in addition uses random
linear network coding at the relay node. The traffic is assumed to be bursty,
and the relay node forwards its packets during the periods of source silence,
which allows better utilization of channel resources. Our results show that
cooperation will lead to higher stable throughput rates than conventional
retransmission policies and that the use of random linear network coding at the
relay can further increase the stable throughput with increasing network coding
field size or number of packets over which encoding is performed.
|
1001.2356
|
Multi-Error-Correcting Amplitude Damping Codes
|
quant-ph cs.IT math.IT
|
We construct new families of multi-error-correcting quantum codes for the
amplitude damping channel. Our key observation is that, with proper encoding,
two uses of the amplitude damping channel simulate a quantum erasure channel.
This allows us to use concatenated codes with quantum erasure-correcting codes
as outer codes for correcting multiple amplitude damping errors. Our new codes
are degenerate stabilizer codes and have parameters which are better than the
amplitude damping codes obtained by any previously known construction.
|
1001.2362
|
Dense Error Correction for Low-Rank Matrices via Principal Component
Pursuit
|
cs.IT math.IT
|
We consider the problem of recovering a low-rank matrix when some of its
entries, whose locations are not known a priori, are corrupted by errors of
arbitrarily large magnitude. It has recently been shown that this problem can
be solved efficiently and effectively by a convex program named Principal
Component Pursuit (PCP), provided that the fraction of corrupted entries and
the rank of the matrix are both sufficiently small. In this paper, we extend
that result to show that the same convex program, with a slightly improved
weighting parameter, exactly recovers the low-rank matrix even if "almost all"
of its entries are arbitrarily corrupted, provided the signs of the errors are
random. We corroborate our result with simulations on randomly generated
matrices and errors.
|
1001.2363
|
Stable Principal Component Pursuit
|
cs.IT math.IT
|
In this paper, we study the problem of recovering a low-rank matrix (the
principal components) from a high-dimensional data matrix despite both small
entry-wise noise and gross sparse errors. Recently, it has been shown that a
convex program, named Principal Component Pursuit (PCP), can recover the
low-rank matrix when the data matrix is corrupted by gross sparse errors. We
further prove that the solution to a related convex program (a relaxed PCP)
gives an estimate of the low-rank matrix that is simultaneously stable to small
entrywise noise and robust to gross sparse errors. More precisely, our result
shows that the proposed convex program recovers the low-rank matrix even though
a positive fraction of its entries are arbitrarily corrupted, with an error
bound proportional to the noise level. We present simulation results to support
our result and demonstrate that the new convex program accurately recovers the
principal components (the low-rank matrix) under quite broad conditions. To our
knowledge, this is the first result that shows the classical Principal
Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to
gross sparse errors; or the first that shows the newly proposed PCP can be made
stable to small entry-wise perturbations.
|
1001.2376
|
A Hybrid RTS-BP Algorithm for Improved Detection of Large-MIMO M-QAM
Signals
|
cs.IT math.IT
|
Low-complexity near-optimal detection of large-MIMO signals has attracted
recent research. Recently, we proposed a local neighborhood search algorithm,
namely `reactive tabu search' (RTS) algorithm, as well as a factor-graph based
`belief propagation' (BP) algorithm for low-complexity large-MIMO detection.
The motivation for the present work arises from the following two observations
on the above two algorithms: $i)$ RTS works for general M-QAM; although RTS was
shown to achieve close-to-optimal performance for 4-QAM in large dimensions,
significant performance improvement was still possible for higher-order QAM
(e.g., 16- and 64-QAM); $ii)$ BP was also shown to achieve near-optimal
performance in large dimensions, but only for the $\{\pm 1\}$ alphabet. In this
paper, we improve the large-MIMO detection performance of higher-order QAM
signals by using a hybrid algorithm that employs RTS and BP. In particular,
motivated by the observation that when a detection error occurs at the RTS
output, the least significant bits (LSB) of the symbols are mostly in error, we
propose to first reconstruct and cancel the interference due to bits other than
LSBs at the RTS output and feed the interference cancelled received signal to
the BP algorithm to improve the reliability of the LSBs. The output of the BP
is then fed back to RTS for the next iteration. Our simulation results show
that in a 32 x 32 V-BLAST system, the proposed RTS-BP algorithm performs better
than RTS by about 3.5 dB at $10^{-3}$ uncoded BER and by about 2.5 dB at
$3\times 10^{-4}$ rate-3/4 turbo coded BER with 64-QAM at the same order of
complexity as RTS. We also illustrate the performance of large-MIMO detection
in frequency-selective fading channels.
|
1001.2391
|
A Little More, a Lot Better: Improving Path Quality by a Simple Path
Merging Algorithm
|
cs.RO cs.AI
|
Sampling-based motion planners are an effective means for generating
collision-free motion paths. However, the quality of these motion paths (with
respect to quality measures such as path length, clearance, smoothness or
energy) is often notoriously low, especially in high-dimensional configuration
spaces. We introduce a simple algorithm for merging an arbitrary number of
input motion paths into a hybrid output path of superior quality, for a broad
and general formulation of path quality. Our approach is based on the
observation that the quality of certain sub-paths within each solution may be
higher than the quality of the entire path. A dynamic-programming algorithm,
which we recently developed for comparing and clustering multiple motion paths,
reduces the running time of the merging algorithm significantly. We tested our
algorithm in motion-planning problems with up to 12 degrees of freedom. We show
that our algorithm is able to merge a handful of input paths produced by
several different motion planners to produce output paths of much higher
quality.
|
1001.2405
|
Dendritic Cells for Real-Time Anomaly Detection
|
cs.AI cs.NE
|
Dendritic Cells (DCs) are innate immune system cells which have the power to
activate or suppress the immune system. The behaviour of human DCs is
abstracted to form an algorithm suitable for anomaly detection. We test this
algorithm on the real-time problem of port scan detection. Our results show a
significant difference in artificial DC behaviour for an outgoing portscan when
compared to behaviour for normal processes.
|
1001.2410
|
On the Secrecy Degrees of Freedom of the Multi-Antenna Block Fading
Wiretap Channels
|
cs.IT math.IT
|
We consider the multi-antenna wiretap channel in which the transmitter wishes
to send a confidential message to its receiver while keeping it secret to the
eavesdropper. It has been known that the secrecy capacity of such a channel
does not increase with signal-to-noise ratio when the transmitter has no
channel state information (CSI) under mild conditions. Motivated by Jafar's
robust interference alignment technique, we study the so-called staggered
multi-antenna block-fading wiretap channel where the legitimate receiver and
the eavesdropper have different temporal correlation structures. Assuming no
CSI at transmitter, we characterize lower and upper bounds on the secrecy
degrees of freedom (s.d.o.f.) of the channel at hand. Our results show that a
positive s.d.o.f. can be ensured whenever two receivers experience different
fading variation. Remarkably, very simple linear precoding schemes provide the
optimal s.d.o.f. in some cases of interest.
|
1001.2411
|
Dendritic Cells for Anomaly Detection
|
cs.AI cs.NE
|
Artificial immune systems, more specifically the negative selection
algorithm, have previously been applied to intrusion detection. The aim of this
research is to develop an intrusion detection system based on a novel concept
in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting
cells and key to the activation of the human immune system; they collect
signals from the host tissue and correlate these signals with proteins known as
antigens. In algorithmic terms,
individual DCs perform multi-sensor data fusion based on time-windows. The
whole population of DCs asynchronously correlates the fused signals with a
secondary data stream. The behaviour of human DCs is abstracted to form the DC
Algorithm (DCA), which is implemented using an immune inspired framework,
libtissue. This system is used to detect context switching for a basic machine
learning dataset and to detect outgoing portscans in real-time. Experimental
results show a significant difference between an outgoing portscan and normal
traffic.
|
1001.2421
|
Outage Efficient Strategies for Network MIMO with Partial CSIT
|
cs.IT math.IT
|
We consider a multi-cell MIMO downlink (network MIMO) where $B$ base-stations
(BS) with $M$ antennas connected to a central station (CS) serve $K$
single-antenna user terminals (UT). Although many works have shown the
potential benefits of network MIMO, the conclusion critically depends on the
underlying assumptions such as channel state information at transmitters (CSIT)
and backhaul links. In this paper, by focusing on the impact of partial CSIT,
we propose an outage-efficient strategy. Namely, with side information of all
UT's messages and local CSIT, each BS applies zero-forcing (ZF) beamforming in
a distributed manner. For a small number of UTs ($K\leq M$), the ZF beamforming
creates $K$ parallel MISO channels. Based on the statistical knowledge of these
parallel channels, the CS performs a robust power allocation that
simultaneously minimizes the outage probability of all UTs and achieves a
diversity gain of $B(M-K+1)$ per UT. With a large number of UTs ($K \geq M$),
we propose a so-called distributed diversity scheduling (DDS) scheme to select
a subset of $K_s$ UTs with limited backhaul communication. It is proved that
DDS achieves a diversity gain of $B\frac{K}{K_s}(M-K_s+1)$, which scales
optimally with the number of cooperative BSs $B$ as well as UTs. Numerical
results confirm that even under realistic assumptions such as partial CSIT and
limited backhaul communications, network MIMO can offer high data rates with a
sufficient reliability to individual UTs.
|
1001.2447
|
PPM demodulation: On approaching fundamental limits of optical
communications
|
quant-ph cs.IT math.IT
|
We consider the problem of demodulating M-ary optical PPM (pulse-position
modulation) waveforms, and propose a structured receiver whose mean probability
of symbol error is smaller than all known receivers, and approaches the quantum
limit. The receiver uses photodetection coupled with optimized phase-coherent
optical feedback control and a phase-sensitive parametric amplifier. We present
a general framework of optical receivers known as the conditional pulse nulling
receiver, and present new results on ultimate limits and achievable regions of
spectral versus photon efficiency tradeoffs for the single-spatial-mode
pure-loss optical communication channel.
|
1001.2463
|
On the Threshold of Maximum-Distance Separable Codes
|
cs.IT cs.DM math.IT
|
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme
published in Indocrypt'09, this paper deals with the threshold of linear
$q$-ary error-correcting codes. The security of this scheme is based on the
intractability of polynomial reconstruction when there is too much noise in the
vector. Our approach switches from this paradigm to an information-theoretic
point of view: is there a class of elements that are so far away from the code
that the list size is always superpolynomial? Or, dually speaking, is
Maximum-Likelihood decoding almost surely impossible?
We relate this issue to the decoding threshold of a code, and show that when
the minimum distance of the code is high enough, the threshold effect is very
sharp. In a second part, we give explicit lower bounds on the threshold of
Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the
threshold for the toy example that motivates this study.
|
1001.2464
|
Linear Finite-Field Deterministic Networks With Many Sources and One
Destination
|
cs.IT math.IT
|
We find the capacity region of linear finite-field deterministic networks
with many sources and one destination. Nodes in the network are subject to
interference and broadcast constraints, specified by the linear finite-field
deterministic model. Each node can inject its own information as well as relay
other nodes' information. We show that the capacity region coincides with the
cut-set region. Also, for a specific case of correlated sources we provide
necessary and sufficient conditions for the sources' transmissibility. Given the
"deterministic model" approximation for the corresponding Gaussian network
model, our results may be relevant to wireless sensor networks where the
sensing nodes multiplex the relayed data from the other nodes with their own
data, and where the goal is to decode all data at a single "collector" node.
|
1001.2488
|
A Tight Bound on the Performance of a Minimal-Delay Joint Source-Channel
Coding Scheme
|
cs.IT math.IT
|
An analog source is to be transmitted across a Gaussian channel in more than
one channel use per source symbol. This paper derives a lower bound on the
asymptotic mean squared error for a strategy that consists of repeatedly
quantizing the source, transmitting the quantizer outputs in the first channel
uses, and sending the remaining quantization error uncoded in the last channel
use. The bound coincides with the performance achieved by a suboptimal decoder
studied by the authors in a previous paper, thereby establishing that the bound
is tight.
|
1001.2503
|
Check Reliability Based Bit-Flipping Decoding Algorithms for LDPC Codes
|
cs.IT math.IT
|
We introduce new reliability definitions for bit and check nodes. Maximizing
global reliability, which is the sum reliability of all bit nodes, is shown to
be equivalent to minimizing a decoding metric which is closely related to the
maximum likelihood decoding metric. We then propose novel bit-flipping (BF)
decoding algorithms that take into account the check node reliability. Both
hard-decision (HD) and soft-decision (SD) versions are considered. The former
performs better than the conventional BF algorithm and, in most cases, suffers
less than 1 dB performance loss when compared with some well known SD BF
decoders. For one particular code it even outperforms those SD BF decoders. The
performance of the SD version is superior to that of SD BF decoders and is
comparable to or even better than that of the sum-product algorithm (SPA). The
latter is achieved with a complexity much less than that required by the SPA.
|
1001.2545
|
Concatenated Polar Codes
|
cs.IT math.IT
|
Polar codes have attracted much recent attention as the first codes with low
computational complexity that provably achieve optimal rate-regions for a large
class of information-theoretic problems. One significant drawback, however, is
that for current constructions the probability of error decays
sub-exponentially in the block-length (more detailed designs improve the
probability of error at the cost of significantly increased computational
complexity \cite{KorUS09}). In this work we show how the classical idea of
code concatenation -- using "short" polar codes as inner codes and a
"high-rate" Reed-Solomon code as the outer code -- results in substantially
improved performance. In particular, code concatenation with a careful choice
of parameters boosts the rate of decay of the probability of error to almost
exponential in the block-length with essentially no loss in computational
complexity. We demonstrate such performance improvements for three sets of
information-theoretic problems -- a classical point-to-point channel coding
problem, a class of multiple-input multiple output channel coding problems, and
some network source coding problems.
|
1001.2547
|
On Zero-Error Source Coding with Feedback
|
cs.IT math.IT
|
We consider the problem of zero error source coding with limited feedback
when side information is present at the receiver. First, we derive an
achievable rate region for arbitrary joint distributions on the source and the
side information. When all pairs of source and side information symbols
are observable with non-zero probability, we show that this characterization
gives the entire rate region. Next, we demonstrate a class of sources for which
asymptotically zero feedback suffices to achieve zero-error coding at the rate
promised by the Slepian-Wolf bound for asymptotically lossless coding. Finally,
we illustrate these results with the aid of three simple examples.
|
1001.2554
|
A new proof of the Delsarte, Goethals and MacWilliams theorem on minimal
weight codewords of generalized Reed-Muller codes
|
cs.IT math.IT
|
We give a new proof of the Delsarte, Goethals and MacWilliams theorem on
minimal weight codewords of generalized Reed-Muller codes, published in 1970.
To prove this theorem, we consider the intersection of the support of minimal
weight codewords with affine hyperplanes and proceed by recursion.
|
1001.2566
|
On Achievable Rates for Non-Linear Deterministic Interference Channels
|
cs.IT math.IT
|
This paper extends the literature on interference alignment to more general
classes of deterministic channels which incorporate non-linear input-output
relationships. It is found that the concept of alignment extends naturally to
these deterministic interference channels, and in many cases, the achieved
degrees of freedom (DoF) can be shown to be optimal.
|
1001.2582
|
Delay-rate tradeoff for ergodic interference alignment in the Gaussian
case
|
cs.IT math.IT
|
In interference alignment, users sharing a wireless channel are each able to
achieve data rates of up to half of the non-interfering channel capacity, no
matter the number of users. In an ergodic setting, this is achieved by pairing
complementary channel realizations in order to amplify signals and cancel
interference. However, this scheme has the possibility for large delays in
decoding message symbols. We show that delay can be mitigated by using outputs
from potentially more than two channel realizations, although data rate may be
reduced. We further demonstrate the tradeoff between rate and delay via a
time-sharing strategy. Our analysis considers Gaussian channels; an extension
to finite field channels is also possible.
|