id | title | categories | abstract
|---|---|---|---|
1001.2596
|
On Optimum End-to-End Distortion in Wideband MIMO Systems
|
cs.IT math.IT
|
This paper presents the impact of frequency diversity on the optimum expected
end-to-end distortion (EED) in an outage-free wideband multiple-input
multiple-output (MIMO) system. We provide a closed-form expression for the
optimum asymptotic expected EED, comprising the optimum distortion exponent and
the multiplicative optimum distortion factor, at high signal-to-noise ratio
(SNR). It is shown that frequency diversity can improve the EED even though it
has no effect on the ergodic capacity, and that the improvement becomes
marginal once the frequency diversity order exceeds a certain value. Lower
bounds corresponding to infinite frequency diversity are derived. The results
for outage-free systems serve as bounds for outage-suffering systems and are
instructive for system design.
|
1001.2605
|
An Explicit Nonlinear Mapping for Manifold Learning
|
cs.CV cs.LG
|
Manifold learning is an active research topic in computer science, with many
real-world applications. A main drawback of manifold learning methods,
however, is that they provide no explicit mapping from the input data manifold
to the output embedding, which prohibits their application to many practical
problems such as classification and target detection. Previously, to provide
explicit mappings for manifold learning methods, many approaches have been
proposed that obtain an approximate explicit mapping under the assumption that
there exists a linear projection between the high-dimensional data samples and
their low-dimensional embedding. However, this linearity assumption may be too
restrictive. In this paper, an explicit nonlinear mapping is proposed for
manifold learning, based on the assumption that there exists a polynomial
mapping between the high-dimensional data samples and their low-dimensional
representations. To the best of our knowledge, this is the first explicit
nonlinear mapping for manifold learning. In particular, we apply this idea to
Locally Linear Embedding (LLE) and derive an explicit nonlinear manifold
learning algorithm, named Neighborhood Preserving Polynomial Embedding (NPPE).
Experimental results on both synthetic and real-world data show that the
proposed mapping is much more effective than previous work in preserving the
local neighborhood information and the nonlinear geometry of the
high-dimensional data samples.
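The core idea of an explicit polynomial mapping can be sketched in a few lines (a toy illustration with synthetic data, not the authors' NPPE algorithm, which derives the mapping from the LLE objective): given high-dimensional samples and a low-dimensional embedding, fit degree-2 polynomial features by least squares, yielding a mapping that applies directly to unseen points.

```python
import numpy as np

def poly_features(X):
    # degree-2 polynomial features: [1, x_i, x_i * x_j (i <= j)]
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # high-dimensional samples
Y = np.column_stack([X[:, 0] ** 2 - X[:, 1],         # synthetic 2-D "embedding"
                     X[:, 1] * X[:, 2]])             # (exactly polynomial in X)

# explicit mapping coefficients by least squares
W, *_ = np.linalg.lstsq(poly_features(X), Y, rcond=None)

# unlike an implicit embedding, the fitted mapping handles unseen points
X_new = rng.normal(size=(5, 3))
Y_new = poly_features(X_new) @ W
```

This is exactly the property the abstract highlights: once the polynomial coefficients are learned, new samples are embedded by a closed-form evaluation rather than by re-running the manifold learner.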
|
1001.2612
|
On distributed convex optimization under inequality and equality
constraints via primal-dual subgradient methods
|
math.OC cs.SY
|
We consider a general multi-agent convex optimization problem where the
agents are to collectively minimize a global objective function subject to a
global inequality constraint, a global equality constraint, and a global
constraint set. The objective function is defined by a sum of local objective
functions, while the global constraint set is produced by the intersection of
local constraint sets. In particular, we study two cases: one where the
equality constraint is absent, and the other where the local constraint sets
are identical. We devise two distributed primal-dual subgradient algorithms
which are based on the characterization of the primal-dual optimal solutions as
the saddle points of the Lagrangian and penalty functions. These algorithms can
be implemented over networks with changing topologies but satisfying a standard
connectivity property, and allow the agents to asymptotically agree on optimal
solutions and optimal values of the optimization problem under Slater's
condition.
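As a rough illustration of the averaging-plus-subgradient structure (a simplified, unconstrained variant with made-up data, not the paper's primal-dual algorithms, which additionally handle the inequality and equality constraints through saddle-point updates):

```python
import numpy as np

# Each agent i holds f_i(x) = (x - a_i)^2; the global minimizer of sum_i f_i
# is mean(a) = 4. Agents mix neighbors' estimates, then take a diminishing
# subgradient step on their own local objective.
a = np.array([1.0, 3.0, 5.0, 7.0])
x = np.zeros(4)                              # agents' estimates
W = np.array([[1.0, 1.0, 0.0, 1.0],          # Metropolis mixing weights on a
              [1.0, 1.0, 1.0, 0.0],          # 4-cycle (doubly stochastic)
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 1.0]]) / 3.0

for t in range(1, 5001):
    grad = 2.0 * (x - a)                     # local subgradient of f_i
    x = W @ x - (1.0 / t) * grad             # consensus step + step size 1/t
```

All four estimates approach the global minimizer even though no agent ever sees the other agents' objectives directly.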
|
1001.2620
|
Discontinuities and hysteresis in quantized average consensus
|
math.OC cs.SY
|
We consider continuous-time average consensus dynamics in which the agents'
states are communicated through uniform quantizers. Solutions to the resulting
system are defined in the Krasowskii sense and are proven to converge to
conditions of "practical consensus". To cope with undesired chattering
phenomena we introduce a hysteretic quantizer, and we study the convergence
properties of the resulting dynamics by a hybrid system approach.
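A minimal Euler simulation conveys the "practical consensus" behavior, assuming a uniform quantizer with step delta and a fully connected three-agent network (parameters are illustrative; the paper's analysis uses Krasowskii solutions and a hysteretic quantizer to rule out chattering):

```python
import numpy as np

delta, dt = 0.5, 1e-3
q = lambda x: delta * np.round(x / delta)    # uniform quantizer

x = np.array([0.0, 3.2, 7.9])                # three agents, complete graph
for _ in range(20000):
    qx = q(x)
    x = x + dt * (qx.sum() - 3.0 * qx)       # dx_i/dt = sum_j (q(x_j) - q(x_i))

spread = x.max() - x.min()                   # agreement up to the quantizer step
```

The states do not reach exact consensus, but their spread shrinks to within one quantization step, while the average of the states is preserved by the dynamics.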
|
1001.2623
|
A Steganography Based on CT-CDMA Communication Scheme Using Complete
Complementary Codes
|
cs.IT cs.CR math.IT
|
It has been shown that complete complementary codes can be applied to
communication systems such as approximately synchronized CDMA systems because
of their good correlation properties. CT-CDMA is one of the communication
systems based on complete complementary codes. In this system, the information
data of multiple users can be transmitted using the same set of complementary
codes through a single frequency band. In this paper, we propose applying
CT-CDMA systems to a kind of steganography. Numerical experiments using color
images show that a large amount of secret data can be embedded in the stego
image by the proposed method.
|
1001.2625
|
Finding top-k similar pairs of objects annotated with terms from an
ontology
|
cs.DB
|
With the growing focus on semantic searches and interpretations, an
increasing number of standardized vocabularies and ontologies are being
designed and used to describe data. We investigate the querying of objects
described by a tree-structured ontology. Specifically, we consider the case of
finding the top-k best pairs of objects that have been annotated with terms
from such an ontology when the object descriptions are available only at
runtime. We consider three distance measures. The first one defines the object
distance as the minimum pairwise distance between the sets of terms describing
them, and the second one defines the distance as the average pairwise term
distance. The third and most useful distance measure, earth mover's distance,
finds the best way of matching the terms and computes the distance
corresponding to this best matching. We develop lower bounds that can be
aggregated progressively and utilize them to speed up the search for top-k
object pairs when the earth mover's distance is used. For the minimum pairwise
distance, we devise an algorithm that runs in O(D + Tk log k) time, where D is
the total information size and T is the total number of terms in the ontology.
We also develop a novel best-first search strategy for the average pairwise
distance that utilizes lower bounds generated in an ordered manner. Experiments
on real and synthetic datasets demonstrate the practicality and scalability of
our algorithms.
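The first two distance measures can be sketched on a toy tree ontology (a hypothetical seven-node tree; the paper's contribution is the progressive lower bounds and search algorithms, not these definitions). Here the term distance is the number of edges on the tree path between two terms:

```python
import itertools

# toy ontology: a is the root, with children b and c, etc.
parent = {"b": "a", "c": "a", "d": "b", "e": "b", "f": "c", "g": "c"}

def path_to_root(t):
    path = [t]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def term_dist(u, v):
    pu, pv = path_to_root(u), path_to_root(v)
    lca = next(x for x in pu if x in pv)          # lowest common ancestor
    return pu.index(lca) + pv.index(lca)          # edges up + edges down

def min_dist(A, B):                               # first object-distance measure
    return min(term_dist(u, v) for u, v in itertools.product(A, B))

def avg_dist(A, B):                               # second object-distance measure
    return sum(term_dist(u, v)
               for u, v in itertools.product(A, B)) / (len(A) * len(B))

obj1, obj2 = {"d", "e"}, {"e", "f"}               # objects = sets of terms
```

On this example `min_dist(obj1, obj2)` is 0 (the shared term "e") while `avg_dist(obj1, obj2)` is 2.5, illustrating why the two measures can rank object pairs very differently.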
|
1001.2636
|
Analytical shape determination of fiber-like objects with Virtual Image
Correlation
|
cs.CV physics.comp-ph
|
This paper reports a method for determining the shape of deformed fiber-like
objects. Compared to existing methods, it provides analytical results,
including the local slope and curvature, which are of primary importance, for
instance, in beam mechanics. The presented VIC (Virtual Image Correlation)
method consists of seeking the best correlation between the image of the
fiber-like object and a virtual beam image, using an algorithm close to the
Digital Image Correlation method developed in experimental solid mechanics.
The computation involves only the part of the image in the vicinity of the
fiber: the method is thus insensitive to the picture background, and the
computational cost remains low. Two examples are reported: the first
demonstrates the precision of the method, the second its ability to identify a
complex shape with multiple loops.
|
1001.2647
|
A General Euclidean Geometric Representation for the Classical Detection
Theory
|
cs.IT math.IT
|
We propose a Euclidean geometric representation for classical detection
theory. The proposed representation is so generic that it can be employed in
almost all communication problems. The hypotheses and observations are mapped
into R^N in such a way that the a posteriori probability of a hypothesis given
an observation decreases exponentially with the square of the Euclidean
distance between the vectors corresponding to the hypothesis and the
observation.
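A small numerical sketch of the stated property, assuming equal priors and an illustrative QPSK-like constellation: when the posterior decays as the exponential of the negative squared distance, MAP detection reduces to nearest-neighbor decoding in R^N.

```python
import numpy as np

sigma = 0.8
hyps = np.array([[1.0, 1.0], [-1.0, 1.0],            # hypothesis vectors in R^2
                 [-1.0, -1.0], [1.0, -1.0]])         # (QPSK-like, illustrative)

def map_detect(y):
    d2 = ((hyps - y) ** 2).sum(axis=1)               # squared Euclidean distances
    post = np.exp(-d2 / (2 * sigma ** 2))            # posterior ∝ exp(-d²/2σ²)
    return post.argmax(), post / post.sum()          # MAP index, normalized posterior

y = np.array([0.9, 0.7])                             # illustrative observation
idx, post = map_detect(y)
```

The MAP decision coincides with the geometrically nearest hypothesis vector, which is the content of the representation the abstract describes.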
|
1001.2662
|
Channel Polarization on q-ary Discrete Memoryless Channels by Arbitrary
Kernels
|
cs.IT math.IT
|
A method of channel polarization, proposed by Arikan, allows us to construct
efficient capacity-achieving channel codes. In the original work, binary input
discrete memoryless channels are considered. A special case of $q$-ary channel
polarization is considered by Sasoglu, Telatar, and Arikan. In this paper, we
consider more general channel polarization on $q$-ary channels. We further show
explicit constructions using Reed-Solomon codes, on which asymptotically fast
channel polarization is induced.
|
1001.2665
|
Detecting Botnets Through Log Correlation
|
cs.AI cs.CR
|
Botnets, which consist of thousands of compromised machines, pose significant
threats to other systems by launching Distributed Denial of Service (DDoS)
attacks, keylogging, and installing backdoors. In response to these threats,
new effective techniques are needed to detect the presence of botnets. In this
paper, we use an interception technique to monitor Windows Application
Programming Interface (API) function calls made by communication applications
and store these calls with their arguments in log files. Our algorithm detects
botnets by monitoring abnormal activity, correlating the changes in log file
sizes across different hosts.
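The correlation idea can be sketched on synthetic data (illustrative only; the actual system intercepts Windows API calls): hosts driven by the same bot commands show strongly correlated log-size changes, while an independently behaving host does not.

```python
import numpy as np

rng = np.random.default_rng(1)
commands = rng.poisson(20, size=100)            # shared botnet command activity
bot1 = commands + rng.poisson(2, size=100)      # per-interval log-size deltas
bot2 = commands + rng.poisson(2, size=100)      # of two infected hosts
clean = rng.poisson(22, size=100)               # an uninfected host

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# flag host pairs whose log-growth correlation exceeds a threshold
suspicious = corr(bot1, bot2) > 0.8 and corr(bot1, clean) < 0.8
```

The two infected hosts correlate strongly because the common command stream dominates their log growth, which is the abnormality the detection algorithm looks for.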
|
1001.2686
|
Effective complexity of stationary process realizations
|
cs.IT math.IT
|
The concept of the effective complexity of an object, as the minimal
description length of its regularities, was initiated by Gell-Mann and Lloyd.
The regularities are modeled by means of ensembles, that is, probability
distributions on finite binary strings. In our previous paper we proposed a
definition of effective complexity in precise terms of algorithmic information
theory. Here we investigate the effective complexity of binary strings
generated by stationary, in general not computable, processes. We show that,
under not too strong conditions, long typical process realizations are
effectively simple. Our results become most transparent in the context of
coarse effective complexity, a modification of the original notion of
effective complexity that uses fewer parameters in its definition. A similar
modification of the related concept of sophistication has been suggested by
Antunes and Fortnow.
|
1001.2709
|
Kernel machines with two layers and multiple kernel learning
|
cs.LG cs.AI
|
In this paper, the framework of kernel machines with two layers is introduced,
generalizing classical kernel methods. The new learning methodology provides a
formal connection between computational architectures with multiple layers and
the theme of kernel learning in standard regularization methods. First, a
representer theorem for two-layer networks is presented, showing that finite
linear combinations of kernels on each layer are optimal architectures
whenever the corresponding functions solve suitable variational problems in
reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by
these architectures turns out to be equivalent to a suitable single-layer
kernel machine in which the kernel function is also learned from the data.
Recently, the so-called multiple kernel learning methods have attracted
considerable attention in the machine learning literature. In this paper,
multiple kernel learning methods are shown to be specific cases of kernel
machines with two layers in which the second layer is linear. Finally, a
simple and effective multiple kernel learning method called RLS2 (regularized
least squares with two layers) is introduced, and its performance on several
learning problems is extensively analyzed. An open-source MATLAB toolbox to
train and validate RLS2 models with a graphical user interface is available.
|
1001.2735
|
Stochastic Budget Optimization in Internet Advertising
|
cs.CC cs.GT cs.SI
|
Internet advertising is a sophisticated game in which many advertisers
"play" to optimize their return on investment. There are many "targets" for the
advertisements, and each "target" has a collection of games with a potentially
different set of players involved. In this paper, we study the problem of how
advertisers allocate their budget across these "targets". In particular, we
focus on formulating their best response strategy as an optimization problem.
Advertisers have a set of keywords ("targets") and some stochastic information
about the future, namely a probability distribution over scenarios of cost vs
click combinations. This summarizes the potential states of the world assuming
that the strategies of other players are fixed. Then, the best response can be
abstracted as stochastic budget optimization problems to figure out how to
spread a given budget across these keywords to maximize the expected number of
clicks.
We present the first known non-trivial poly-logarithmic approximation for
these problems as well as the first known hardness results of getting better
than logarithmic approximation ratios in the various parameters involved. We
also identify several special cases of these problems of practical interest,
such as those with a fixed number of scenarios or with polynomial-sized
parameters related to cost, which are solvable either in polynomial time or with improved
approximation ratios. Stochastic budget optimization with scenarios has
sophisticated technical structure. Our approximation and hardness results come
from relating these problems to a special type of (0/1, bipartite) quadratic
programs inherent in them. Our research answers some open problems raised by
the authors in (Stochastic Models for Budget Optimization in Search-Based
Advertising, Algorithmica, 58 (4), 1022-1044, 2010).
|
1001.2738
|
Note on sampling without replacing from a finite collection of matrices
|
cs.IT math.IT quant-ph
|
This technical note supplies an affirmative answer to a question raised in a
recent pre-print [arXiv:0910.1879] in the context of a "matrix recovery"
problem. Assume one samples m Hermitian matrices X_1, ..., X_m with replacement
from a finite collection. The deviation of the sum X_1+...+X_m from its
expected value in terms of the operator norm can be estimated by an "operator
Chernoff-bound" due to Ahlswede and Winter. The question arose whether the
bounds obtained this way continue to hold if the matrices are sampled without
replacement. We remark that a positive answer is implied by a classical
argument by Hoeffding. Some consequences for the matrix recovery problem are
sketched.
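A quick toy simulation is consistent with the note's conclusion (a numerical sanity check, not the Hoeffding argument itself): operator-norm deviations of the sampled sum from its expectation are, on average, no larger without replacement than with replacement.

```python
import numpy as np

rng = np.random.default_rng(0)
coll = [np.diag(v) for v in rng.normal(size=(20, 4))]   # 20 Hermitian matrices
avg = sum(coll) / len(coll)                              # collection mean
m = 10                                                   # number of samples

def deviation(idx):
    # operator-norm deviation of the sampled sum from its expectation
    s = sum(coll[i] for i in idx)
    return np.linalg.norm(s - m * avg, 2)

with_repl = [deviation(rng.integers(0, 20, size=m)) for _ in range(2000)]
without_repl = [deviation(rng.permutation(20)[:m]) for _ in range(2000)]
```

Sampling without replacement induces negative correlations between the summands, which shrinks the variance of the sum; the simulation reflects that shrinkage.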
|
1001.2766
|
On the scaling of Polar codes: I. The behavior of polarized channels
|
cs.IT math.IT
|
We consider the asymptotic behavior of the polarization process for polar
codes when the blocklength tends to infinity. In particular, we study the
problem of asymptotic analysis of the cumulative distribution $\mathbb{P}(Z_n
\leq z)$, where $Z_n=Z(W_n)$ is the Bhattacharyya process, and its dependence
on the transmission rate $R$. We show that for a BMS channel $W$, for $R <
I(W)$ we have $\lim_{n \to \infty} \mathbb{P} (Z_n \leq
2^{-2^{\frac{n}{2}+\sqrt{n} \frac{Q^{-1}(\frac{R}{I(W)})}{2} +o(\sqrt{n})}}) =
R$ and for $R<1- I(W)$ we have $\lim_{n \to \infty} \mathbb{P} (Z_n \geq
1-2^{-2^{\frac{n}{2}+ \sqrt{n} \frac{Q^{-1}(\frac{R}{1-I(W)})}{2}
+o(\sqrt{n})}}) = R$, where $Q(x)$ is the probability that a standard normal
random variable will obtain a value larger than $x$. As a result, if we denote
by $\mathbb{P}_e ^{\text{SC}}(n,R)$ the probability of error using polar codes
of block-length $N=2^n$ and rate $R<I(W)$ under successive cancellation
decoding, then $\log(-\log(\mathbb{P}_e ^{\text{SC}}(n,R)))$ scales as
$\frac{n}{2}+\sqrt{n}\frac{Q^{-1}(\frac{R}{I(W)})}{2}+ o(\sqrt{n})$. We also
prove that the same result holds for the block error probability using the MAP
decoder, i.e., for $\log(-\log(\mathbb{P}_e ^{\text{MAP}}(n,R)))$.
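For the BEC the Bhattacharyya process has an explicit recursion ($Z \to Z^2$ or $Z \to 2Z - Z^2$, each with probability 1/2), so the scaling law can be probed numerically (a toy check with illustrative parameters $n=20$ and $\epsilon = 0.5$, not a proof; tracking $\log_2 Z$ avoids floating-point underflow):

```python
import math
import random

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p):
    lo, hi = -10.0, 10.0                    # Q is decreasing on this bracket
    for _ in range(80):                     # bisection
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
n, eps, trials = 20, 0.5, 50000
I_W = 1.0 - eps                             # capacity of BEC(eps)

def final_log_Z():
    L = math.log2(eps)                      # L = log2(Z_0)
    for _ in range(n):
        if random.random() < 0.5:
            L = 2 * L                       # Z -> Z^2
        else:
            L += math.log2(2.0 - 2.0 ** L)  # Z -> 2Z - Z^2
    return L

Ls = [final_log_Z() for _ in range(trials)]

def frac_below(R):
    # empirical P(Z_n <= 2^{-2^{n/2 + sqrt(n) Q^{-1}(R/I(W))/2}})
    thr = -(2.0 ** (n / 2 + math.sqrt(n) * Qinv(R / I_W) / 2))
    return sum(l <= thr for l in Ls) / trials
```

At this modest $n$ the finite-size corrections are still visible, but the fraction of polarized channels below the rate-$R$ threshold already increases with $R$ and stays below $I(W)$, in line with the theorem.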
|
1001.2767
|
Universally Optimal Privacy Mechanisms for Minimax Agents
|
cs.CR cs.DB cs.DS
|
A scheme that publishes aggregate information about sensitive data must
resolve the trade-off between utility to information consumers and privacy of
the database participants. Differential privacy is a well-established
definition of privacy; it is a universal guarantee against all attackers,
whatever their side information or intent. In this paper, we present a
universal treatment of utility based on the standard minimax rule from
decision theory (in contrast to the Bayesian utility model considered in prior
work). In our model, information consumers are minimax (risk-averse) agents,
each possessing some side information about the query and each endowed with a
loss function that models their tolerance to inaccuracies. Further,
information consumers are rational in the sense that they actively combine
information from the mechanism with their side information in a way that
minimizes their loss. Under this assumption of rational behavior, we show that
for every fixed count query, a certain geometric mechanism is universally
optimal for all minimax information consumers. Additionally, our solution
makes it possible to release query results at multiple levels of privacy in a
collusion-resistant manner.
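The geometric mechanism referred to here adds two-sided geometric noise with parameter $\alpha = e^{-\epsilon}$ to a sensitivity-1 count; a minimal sketch of the mechanism follows (the universal-optimality claim itself is the paper's contribution and is not demonstrated by this code):

```python
import math
import random

def two_sided_geometric(epsilon, rng):
    # difference of two one-sided geometric draws has PMF ∝ alpha^{|k|}
    # with alpha = exp(-epsilon): the noise of the geometric mechanism
    alpha = math.exp(-epsilon)
    g = lambda: int(math.log(1.0 - rng.random()) / math.log(alpha))
    return g() - g()

def geometric_mechanism(true_count, epsilon, rng):
    # epsilon-differentially-private release of a sensitivity-1 count
    return true_count + two_sided_geometric(epsilon, rng)

rng = random.Random(0)
released = [geometric_mechanism(100, 1.0, rng) for _ in range(20000)]
```

Unlike the Laplace mechanism, the output stays integer-valued, which is what allows the consumer-side post-processing the paper analyzes.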
|
1001.2781
|
Interaction Strictly Improves the Wyner-Ziv Rate-distortion function
|
cs.IT math.IT
|
In 1985 Kaspi provided a single-letter characterization of the
sum-rate-distortion function for a two-way lossy source coding problem in which
two terminals send multiple messages back and forth with the goal of
reproducing each other's sources. Yet, the question remained whether more
messages can strictly improve the sum-rate-distortion function. Viewing the
sum-rate as a functional of the distortions and the joint source distribution
and leveraging its convex-geometric properties, we construct an example which
shows that two messages can strictly improve the one-message (Wyner-Ziv)
rate-distortion function. The example also shows that the ratio of the
one-message rate to the two-message sum-rate can be arbitrarily large and
simultaneously the ratio of the backward rate to the forward rate in the
two-message sum-rate can be arbitrarily small.
|
1001.2786
|
A General Coding Scheme for Two-User Fading Interference Channels
|
cs.IT math.IT
|
A Han-Kobayashi based achievable scheme is presented for ergodic fading
two-user Gaussian interference channels (IFCs) with perfect channel state
information at all nodes and Gaussian codebooks with no time-sharing. Using
max-min optimization techniques, it is shown that jointly coding across all
states performs at least as well as separable coding for the sub-classes of
uniformly weak (every sub-channel is weak) and hybrid (mix of strong and weak
sub-channels that do not achieve the interference-free sum-capacity) IFCs. For
the uniformly weak IFCs, sufficient conditions are obtained for which the
sum-rate is maximized when interference is ignored at both receivers.
|
1001.2805
|
Source Coding With Side Information Using List Decoding
|
cs.IT math.IT
|
The problem of source coding with side information (SCSI) is closely related
to channel coding. Existing literature therefore focuses on using the most
successful channel codes, namely LDPC codes, turbo codes, and their variants,
to solve this problem, assuming classical unique decoding of the underlying
channel code. In this paper, in contrast to classical decoding, we take a list
decoding approach. We show that syndrome source coding using list decoding can
achieve the theoretical limit. We argue that, as opposed to channel coding,
the correct sequence from the list produced by the list decoder can
effectively be recovered in the case of SCSI, since we are dealing with a
virtual noisy channel rather than a real noisy channel. Finally, we present a
guideline for designing constructive SCSI schemes using Reed-Solomon, BCH, and
Reed-Muller codes, which are well-known list-decodable codes.
|
1001.2806
|
MIMO Gaussian Broadcast Channels with Confidential and Common Messages
|
cs.IT cs.CR math.IT
|
This paper considers the problem of secret communication over a two-receiver
multiple-input multiple-output (MIMO) Gaussian broadcast channel. The
transmitter has two independent, confidential messages and a common message.
Each of the confidential messages is intended for one of the receivers but
needs to be kept perfectly secret from the other, and the common message is
intended for both receivers. It is shown that a natural scheme that combines
secret dirty-paper coding with Gaussian superposition coding achieves the
secrecy capacity region. To prove this result, a channel-enhancement approach
and an extremal entropy inequality of Weingarten et al. are used.
|
1001.2813
|
A Monte Carlo Algorithm for Universally Optimal Bayesian Sequence
Prediction and Planning
|
nlin.AO cond-mat.dis-nn cs.AI cs.LG stat.ML
|
The aim of this work is to address the question of whether we can in
principle design rational decision-making agents or artificial intelligences
embedded in computable physics such that their decisions are optimal in
reasonable mathematical senses. Recent developments in rare event probability
estimation, recursive Bayesian inference, neural networks, and probabilistic
planning are sufficient to explicitly approximate reinforcement learners of the
AIXI style with non-trivial model classes (here, the class of resource-bounded
Turing machines). Consideration of the effects of resource limitations in a
concrete implementation leads to insights about possible architectures for
learning systems using optimal decision makers as components.
|
1001.2892
|
On the Capacity of Causal Cognitive Interference Channel With Delay
|
cs.IT math.IT
|
In this paper, we introduce the Causal Cognitive Interference Channel With
Delay (CC-IFC-WD) in which the cognitive user transmission can depend on $L$
future received symbols as well as the past ones. Taking the effect of the link
delays into account, CC-IFC-WD fills the gap between the genie-aided and
causal cognitive radio channels. We study three special cases: 1) Classical CC-IFC
(L=0), 2) CC-IFC without delay (L=1) and 3) CC-IFC with a block length delay
(L=n). In each case, we obtain an inner bound on the capacity region. Our
coding schemes make use of cooperative strategy by generalized block Markov
superposition coding, collaborative strategy by rate splitting, and
Gel'fand-Pinsker coding in order to pre-cancel part of the interference.
Moreover, instantaneous relaying and non-causal partial Decode-and-Forward
strategies are employed in the second and third cases, respectively. The
derived regions, under special conditions, reduce to several previously known
results. Moreover, we show that the coding strategy which we use to derive an
achievable rate region for the classical CC-IFC achieves capacity for a
special case of this channel. Furthermore, we extend our achievable rate
regions to the Gaussian case. Providing a numerical example for the Gaussian
CC-IFC-WD, we investigate the rate gain of the cognitive link for different
delay values.
|
1001.2897
|
Sharp Bounds on the Entropy of the Poisson Law and Related Quantities
|
cs.IT math.IT math.ST stat.TH
|
One of the difficulties in calculating the capacity of certain Poisson
channels is that H(lambda), the entropy of the Poisson distribution with mean
lambda, is not available in a simple form. In this work we derive upper and
lower bounds for H(lambda) that are asymptotically tight and easy to compute.
The derivation of such bounds involves only simple probabilistic and analytic
tools. This complements the asymptotic expansions of Knessl (1998), Jacquet and
Szpankowski (1999), and Flajolet (1999). The same method yields tight bounds on
the relative entropy D(n, p) between a binomial and a Poisson, thus refining
the work of Harremoes and Ruzankin (2004). Bounds on the entropy of the
binomial also follow easily.
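The quantity being bounded is easy to probe numerically (an illustration comparing direct summation against the classical leading-order expansion $H(\lambda) \approx \frac{1}{2}\log(2\pi e \lambda)$ in nats; the paper's own bounds are sharper and are not reproduced here):

```python
import math

def poisson_entropy(lam, tol=1e-15):
    # H(lambda) = -sum_k p_k log p_k with p_k = e^{-lam} lam^k / k!  (nats)
    log_p, H, k = -lam, 0.0, 0
    while True:
        p = math.exp(log_p)
        H -= p * log_p
        k += 1
        log_p += math.log(lam) - math.log(k)   # log p_k from log p_{k-1}
        if k > lam and p < tol:                # tail is negligible past here
            break
    return H

def approx(lam):
    # classical large-lambda expansion, leading term only
    return 0.5 * math.log(2 * math.pi * math.e * lam)
```

Already at moderate lambda the leading term is accurate to a few thousandths of a nat, which illustrates why asymptotically tight, easily computable bounds are attainable.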
|
1001.2900
|
A digital interface for Gaussian relay networks: lifting codes from the
discrete superposition model to Gaussian relay networks
|
cs.IT math.IT
|
For every Gaussian relay network with a single source-destination pair, it is
known that there exists a corresponding deterministic network called the
discrete superposition network that approximates its capacity uniformly over
all SNRs to within a bounded number of bits. The next step in this program of
rigorous approximation is to determine whether coding schemes for discrete
superposition models can be lifted to Gaussian relay networks with a bounded
rate loss independent of SNR. We establish precisely this property and show
that the superposition model can thus serve as a strong surrogate for designing
codes for Gaussian relay networks.
We show that a code for a Gaussian relay network, with a single
source-destination pair and multiple relay nodes, can be designed from any code
for the corresponding discrete superposition network simply by pruning it. In
comparison to the rate of the discrete superposition network's code, the rate
of the Gaussian network's code only reduces at most by a constant that is a
function only of the number of nodes in the network and independent of channel
gains.
This result is also applicable for coding schemes for MIMO Gaussian relay
networks, with the reduction depending additionally on the number of antennas.
Hence, the discrete superposition model can serve as a digital interface for
operating Gaussian relay networks.
|
1001.2938
|
Transmit Signal and Bandwidth Optimization in Multiple-Antenna Relay
Channels
|
cs.IT math.IT
|
Transmit signal and bandwidth optimization is considered in multiple-antenna
relay channels. Assuming all terminals have channel state information, the
cut-set capacity upper bound and decode-and-forward rate under full-duplex
relaying are evaluated by formulating them as convex optimization problems. For
half-duplex relays, bandwidth allocation and transmit signals are optimized
jointly. Moreover, achievable rates based on the compress-and-forward
transmission strategy are presented using rate-distortion and Wyner-Ziv
compression schemes. It is observed that when the relay is close to the source,
decode-and-forward is almost optimal, whereas compress-and-forward achieves
good performance when the relay is close to the destination.
|
1001.2947
|
Design and Analysis of Multi-User SDMA Systems with Noisy Limited CSIT
Feedback
|
cs.IT math.IT
|
In this paper, we consider spatial-division multiple-access (SDMA) systems
with one multi-antenna base station and a number of single-antenna mobiles
under noisy limited CSIT feedback. We propose a robust noisy limited feedback
design for SDMA systems. The solution consists of real-time robust SDMA
precoding, user selection, and rate adaptation, as well as an offline feedback
index assignment algorithm. The index assignment problem is cast as a
Traveling Salesman Problem (TSP). Based on the specific structure of the
feedback constellation and the precoder, we derive a low-complexity but
asymptotically optimal solution. Simulation results show that the proposed
framework has significant goodput gain compared to traditional naive designs
under a noisy limited feedback channel. Furthermore, we show that despite the
noisy feedback channel, the average SDMA system goodput grows with the number
of feedback bits in the interference-limited regime, while in the
noise-limited regime it grows linearly with the number of transmit antennas
and the forward channel SNR.
|
1001.2957
|
Asymptotic Learning Curve and Renormalizable Condition in Statistical
Learning Theory
|
cs.LG
|
Bayes statistics and statistical physics share a common mathematical
structure, in which the log likelihood function corresponds to a random
Hamiltonian. Recently, it was discovered that the asymptotic learning curves
in Bayes estimation are subject to a universal law, even if the log likelihood
function cannot be approximated by any quadratic form. However, it remained
unknown what mathematical property ensures such a universal law. In this
paper, we define a renormalizable condition of the statistical estimation
problem and show that, under such a condition, the asymptotic learning curves
are ensured to be subject to the universal law, even if the true distribution
is unrealizable and singular for a statistical model. We also study a
nonrenormalizable case, in which the learning curves have asymptotic behaviors
different from the universal law.
|
1001.3036
|
Shaping Bits
|
cs.IT math.IT
|
The performance of bit-interleaved coded modulation (BICM) with bit shaping
(i.e., non-equiprobable bit probabilities in the underlying binary code) is
studied. For the Gaussian channel, the rates achievable with BICM and bit
shaping are practically identical to those of coded modulation or multilevel
coding. This identity holds for the whole range of values of signal-to-noise
ratio. Moreover, the random coding error exponent of BICM significantly exceeds
that of multilevel coding and is very close to that of coded modulation.
|
1001.3053
|
On some upper bounds on the fractional chromatic number of weighted
graphs
|
cs.IT math.CO math.IT
|
Given a weighted graph $G_{\mathbf{x}}$, where $(x(v): v \in V)$ is a
non-negative, real-valued weight assignment to the vertices of $G$, let
$B(G_{\mathbf{x}})$ be an upper bound on the fractional chromatic number of
the weighted graph, so that $\chi_f(G_{\mathbf{x}}) \le B(G_{\mathbf{x}})$. To
investigate the worst-case performance of the upper bound $B$, we study the
graph invariant $$\beta(G) = \sup_{\mathbf{x} \ne 0}
\frac{B(G_{\mathbf{x}})}{\chi_f(G_{\mathbf{x}})}.$$
This invariant is examined for various upper bounds $B$ on the
fractional chromatic number. In some important cases, this graph invariant is
shown to be related to the size of the largest star subgraph in the graph. This
problem arises in the area of resource estimation in distributed systems and
wireless networks; the results presented here have implications on the design
and performance of decentralized communication networks.
|
1001.3087
|
Source Polarization
|
cs.IT math.IT
|
The notion of source polarization is introduced and investigated. This
complements the earlier work on channel polarization. An application to
Slepian-Wolf coding is also considered. The paper is restricted to the case of
binary alphabets. Extension of results to non-binary alphabets is discussed
briefly.
|
1001.3090
|
Feature Extraction for Universal Hypothesis Testing via Rank-constrained
Optimization
|
cs.IT cs.LG math.IT math.ST stat.TH
|
This paper concerns the construction of tests for universal hypothesis
testing problems, in which the alternate hypothesis is poorly modeled and the
observation space is large. The mismatched universal test is a feature-based
technique for this purpose. In prior work it is shown that its
finite-observation performance can be much better than the (optimal) Hoeffding
test, and good performance depends crucially on the choice of features. The
contributions of this paper include: 1) We obtain bounds on the number of
$\epsilon$-distinguishable distributions in an exponential family. 2) This
motivates a new framework for feature extraction, cast as a rank-constrained
optimization problem. 3) We obtain a gradient-based algorithm to solve the
rank-constrained optimization problem and prove its local convergence.
|
1001.3102
|
On the Capacity Achieving Covariance Matrix for Frequency Selective MIMO
Channels Using the Asymptotic Approach
|
cs.IT math.IT
|
In this contribution, an algorithm for evaluating the capacity-achieving
input covariance matrices for frequency selective Rayleigh MIMO channels is
proposed. In contrast with the flat fading Rayleigh cases, no closed-form
expressions for the eigenvectors of the optimum input covariance matrix are
available. Classically, both the eigenvectors and eigenvalues are computed
numerically and the corresponding optimization algorithms remain
computationally very demanding. In this paper, it is proposed to optimize
(w.r.t. the input covariance matrix) a large system approximation of the
average mutual information derived by Moustakas and Simon. An algorithm based
on an iterative water filling scheme is proposed, and its convergence is
studied. Numerical simulation results show that, even for a moderate number of
transmit and receive antennas, the new approach provides the same results as
direct maximization approaches of the average mutual information.
|
1001.3107
|
A Practical Dirty Paper Coding Applicable for Broadcast Channel
|
cs.IT math.IT
|
In this paper, we present a practical dirty paper coding scheme using trellis
coded modulation for the dirty paper channel $Y=X+S+W,$ $\mathbb{E}\{X^2\} \leq
P$, where $W$ is white Gaussian noise with power $\sigma_w ^2$, $P$ is the
average transmit power and $S$ is the Gaussian interference with power
$\sigma_s ^2$ that is non-causally known at the transmitter. We ensure that the
dirt in our scheme remains distinguishable to the receiver and thus, our
designed scheme is applicable to the broadcast channel. Following Costa's idea, we
adopt the criterion that the transmit signal must be as orthogonal to the
dirt as possible. Finite constellation codes are constructed using trellis
coded modulation, with a Viterbi algorithm at the encoder ensuring that the
codes satisfy the design criterion. Simulation results for codes constructed
via trellis coded modulation with QAM signal sets are presented to
illustrate our results.
|
1001.3113
|
An Immuno-Inspired Approach to Misbehavior Detection in Ad Hoc Wireless
Networks
|
cs.NI cs.AI cs.NE
|
We propose and evaluate an immuno-inspired approach to misbehavior detection
in ad hoc wireless networks. Node misbehavior can be the result of an
intrusion, or a software or hardware failure. Our approach is motivated by
co-stimulatory signals present in the biological immune system. The results
show that co-stimulation in ad hoc wireless networks can both substantially
improve the energy efficiency of detection and, at the same time, help achieve low
false-positive rates. The energy efficiency improvement is almost two orders
of magnitude compared to misbehavior detection based on watchdogs.
We provide a characterization of the trade-offs between detection approaches
executed by a single node and by several nodes in cooperation. Additionally, we
investigate several feature sets for misbehavior detection. These feature sets
impose different requirements on the detection system, most notably from the
energy efficiency point of view.
|
1001.3118
|
Energy Optimization across Training and Data for Multiuser Minimum
Sum-MSE Linear Precoding
|
cs.IT math.IT
|
This paper considers minimum sum mean-squared error (sum-MSE) linear
transceiver designs in multiuser downlink systems with imperfect channel state
information. Specifically, we derive the optimal energy allocations for
training and data phases for such a system. Under MMSE estimation of
uncorrelated Rayleigh block fading channels with equal average powers, we prove
the separability of the energy allocation and transceiver design optimization
problems. A closed-form optimum energy allocation is derived and applied to
existing transceiver designs. Analysis and simulation results demonstrate the
improvements that can be realized with the proposed design.
|
1001.3122
|
Erasure entropies and Gibbs measures
|
math-ph cs.IT math.IT math.MP math.PR
|
Recently Verdu and Weissman introduced erasure entropies, which are meant to
measure the information carried by one or more symbols given all of the
remaining symbols in the realization of the random process or field. A natural
relation to Gibbs measures has also been observed. In this short note we study
this relation further, review a few earlier contributions from statistical
mechanics, and provide the formula for the erasure entropy of a Gibbs measure
in terms of the corresponding potential. For some
2-dimensional Ising models, for which Verdu and Weissman suggested a numerical
procedure, we show how to obtain an exact formula for the erasure entropy.
|
1001.3159
|
Memory Allocation in Distributed Storage Networks
|
cs.IT math.IT
|
We consider the problem of distributing a file in a network of storage nodes
whose storage budget is limited but at least equals the size of the file. We first
generate $T$ encoded symbols (from the file) which are then distributed among
the nodes. We investigate the optimal allocation of $T$ encoded packets to the
storage nodes such that the probability of reconstructing the file by using any
$r$ out of $n$ nodes is maximized. Since the optimal allocation of encoded
packets is difficult to find in general, we find another objective function
which well approximates the original problem and yet is easier to optimize. We
find the optimal symmetric allocation for all coding redundancy constraints
using the equivalent approximate problem. We also investigate the optimal
allocation in random graphs. Finally, we provide simulations to verify the
theoretical results.
|
1001.3171
|
Optimal Reverse Carpooling Over Wireless Networks - A Distributed
Optimization Approach
|
cs.NI cs.MA
|
We focus on a particular form of network coding, reverse carpooling, in a
wireless network where the potentially coded transmitted messages are to be
decoded immediately upon reception. The network is fixed and known, and the
system performance is measured in terms of the number of wireless broadcasts
required to meet multiple unicast demands. Motivated by the structure of the
coding scheme, we formulate the problem as a linear program by introducing a
flow variable for each triple of connected nodes. This allows us to have a
formulation polynomial in the number of nodes. Using dual decomposition and
projected subgradient method, we present a decentralized algorithm to obtain
optimal routing schemes in the presence of coding opportunities. We show that the
primal sub-problem can be expressed as a shortest path problem on an
\emph{edge-graph}, and the proposed algorithm requires each node to exchange
information only with its neighbors.
|
1001.3173
|
Distributed Detection over Fading MACs with Multiple Antennas at the
Fusion Center
|
cs.IT math.IT
|
A distributed detection problem over fading Gaussian multiple-access channels
is considered. Sensors observe a phenomenon and transmit their observations to
a fusion center using the amplify and forward scheme. The fusion center has
multiple antennas with different channel models considered between the sensors
and the fusion center, and different cases of channel state information are
assumed at the sensors. The performance is evaluated in terms of the error
exponent for each of these cases, where the effect of multiple antennas at the
fusion center is studied. It is shown that for zero-mean channels between the
sensors and the fusion center when there is no channel information at the
sensors, arbitrarily large gains in the error exponent can be obtained with
sufficient increase in the number of antennas at the fusion center. In stark
contrast, when there is channel information at the sensors, the gain in error
exponent due to having multiple antennas at the fusion center is shown to be no
more than a factor of (8/pi) for Rayleigh fading channels between the sensors
and the fusion center, independent of the number of antennas at the fusion
center, or correlation among noise samples across sensors. Scaling laws for
such gains are also provided when both sensors and antennas are increased
simultaneously. Simple practical schemes and a numerical method using
semidefinite relaxation techniques are presented that utilize the limited
possible gains available. Simulations are used to establish the accuracy of the
results.
|
1001.3178
|
A performance analysis of multi-hop ad hoc networks with adaptive
antenna array systems
|
cs.IT math.IT
|
Based on a stochastic geometry framework, we establish an analysis of the
multi-hop spatial reuse aloha protocol (MSR-Aloha) in ad hoc networks. We
compare MSR-Aloha to a simple routing strategy, where a node selects the next
relay of the treated packet to be its nearest receiver with forward
progress toward the final destination (NFP). In addition, performance gains
achieved by employing adaptive antenna array systems are quantified in this
paper. We derive a tight upper bound on the spatial density of progress of
MSR-Aloha. Our analytical results demonstrate that the spatial density of
progress scales as the square root of the density of users, and the optimal
contention density (that maximizes the spatial density of progress) is
independent of the density of users. These two facts are consistent with the
observations of Baccelli et al., established through an analytical lower bound
and through simulations.
|
1001.3181
|
Weak ties: Subtle role of information diffusion in online social
networks
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
As a social media, online social networks play a vital role in the social
information diffusion. However, due to its unique complexity, the mechanism of
the diffusion in online social networks is different from the ones in other
types of networks and remains unclear to us. Meanwhile, few works have been
done to reveal the coupled dynamics of both the structure and the diffusion of
online social networks. To this end, in this paper, we propose a model to
investigate how the structure is coupled with the diffusion in online social
networks from the view of weak ties. Through numerical experiments on
large-scale online social networks, we find that, in contrast to some previous
research results, preferentially selecting weak ties for republishing cannot make
the information diffuse quickly, while random selection can achieve this goal.
However, when we remove the weak ties gradually, the coverage of the
information will drop sharply even in the case of random selection. We also
give a reasonable explanation for this by extra analysis and experiments.
Finally, we conclude that weak ties play a subtle role in the information
diffusion in online social networks. On one hand, they act as bridges to
connect isolated local communities together and break through the local
trapping of the information. On the other hand, selecting them as preferential
paths to republish cannot help the information spread further in the network.
As a result, weak ties might be useful for controlling virus spread and
private information diffusion in real-world applications.
|
1001.3187
|
Dynamic Resource Allocation in Cognitive Radio Networks: A Convex
Optimization Perspective
|
cs.IT math.IT
|
This article provides an overview of the state-of-art results on
communication resource allocation over space, time, and frequency for emerging
cognitive radio (CR) wireless networks. Focusing on the
interference-power/interference-temperature (IT) constraint approach for CRs to
protect primary radio transmissions, many new and challenging problems
regarding the design of CR systems are formulated, and some of the
corresponding solutions are shown to be obtainable by restructuring some
classic results known for traditional (non-CR) wireless networks. It is
demonstrated that convex optimization plays an essential role in solving these
problems in both a rigorous and an efficient way. Promising research directions
on interference management for CR and other related multiuser communication
systems are discussed.
|
1001.3193
|
Sidelobe Control in Collaborative Beamforming via Node Selection
|
cs.IT math.IT
|
Collaborative beamforming (CB) is a power efficient method for data
communications in wireless sensor networks (WSNs) which aims at increasing the
transmission range in the network by radiating the power from a cluster of
sensor nodes in the directions of the intended base station(s) or access
point(s) (BSs/APs). The CB average beampattern exhibits deterministic
behavior and can be used for characterizing/controlling the transmission in the
intended direction(s), since the mainlobe of the CB beampattern is independent
of the particular random node locations. However, CB with a cluster formed
by a limited number of collaborative nodes results in a sample beampattern with
sidelobes that strongly depend on the particular node locations. High-level
sidelobes can cause unacceptable interference when they occur at directions of
unintended BSs/APs. Therefore, sidelobe control in CB has a potential to
increase the network capacity and wireless channel availability by decreasing
the interference. Traditional sidelobe control techniques are proposed for
centralized antenna arrays and, therefore, are not suitable for WSNs. In this
paper, we show that distributed, scalable, and low-complexity sidelobe control
techniques suitable for CB in WSNs can be developed based on a node selection
technique which makes use of the randomness of the node locations. A node
selection algorithm with low-rate feedback is developed to search over
different node combinations. The performance of the proposed algorithm is
analyzed in terms of the average number of trials required to select the
collaborative nodes and the resulting interference. Our simulation results
confirm the theoretical analysis and show that the interference is
significantly reduced when node selection is used with CB.
|
1001.3199
|
Local Popularity Based Collaborative Filters
|
cs.IT math.IT
|
Motivated by applications such as recommendation systems, we consider the
estimation of a binary random field X obtained by row and column permutations
of a block constant random matrix. The estimation of X is based on observations
Y, which are obtained by passing entries of X through a binary symmetric
channel (BSC) and an erasure channel. We focus on the analysis of a specific
algorithm based on local popularity when the erasure rate approaches unity at a
specified rate. We study the bit error rate (BER) in the limit as the matrix
size approaches infinity. Our main result states that if the cluster size (that
is, the size of the constancy blocks in the original matrix) is above a certain
threshold, then the BER approaches zero, but below the threshold, the BER is
lower bounded away from zero. The lower bound depends on the noise level in the
observations and the size of the clusters in relation to the threshold. The
threshold depends on the rate at which the erasure probability approaches
unity.
|
1001.3206
|
A New Class of TAST Codes With A Simplified Tree Structure
|
cs.IT cs.CR math.IT math.NT
|
We consider in this paper the design of full-diversity, high-rate
space-time codes with moderate decoding complexity for an arbitrary number of
transmit and receive antennas and arbitrary input alphabets. We focus our
attention on codes from the threaded algebraic space-time (TAST) framework
since the latter includes most known full diversity space-time codes. We
propose a new construction of the component single-input single-output (SISO)
encoders such that the equivalent code matrix has an upper triangular form. We
accomplish this task by designing each SISO encoder to create an ISI-channel in
each thread. This, in turn, greatly simplifies the QR-decomposition of the
composite channel and code matrix, which is essential for optimal or
near-optimal tree search algorithms, such as the sequential decoder.
|
1001.3213
|
Using Premia and Nsp for Constructing a Risk Management Benchmark for
Testing Parallel Architecture
|
cs.CE cs.DC cs.MS cs.NA q-fin.CP q-fin.PR
|
Financial institutions have massive computations to carry out overnight which
are very demanding in terms of the consumed CPU. The challenge is to price many
different products on a cluster-like architecture. We have used the Premia
software to price the financial derivatives. In this work, we explain how
Premia can be embedded into Nsp, a scientific software similar to Matlab, to provide
a powerful tool for pricing a whole portfolio. Finally, we have integrated an
MPI toolbox into Nsp to enable the use of Premia to solve a set of pricing
problems on a cluster. This unified framework can then be used to test
different parallel architectures.
|
1001.3246
|
Salience-Affected Neural Networks
|
cs.NE q-bio.NC
|
We present a simple neural network model which combines a locally-connected
feedforward structure, as is traditionally used to model inter-neuron
connectivity, with a layer of undifferentiated connections which model the
diffuse projections from the human limbic system to the cortex. This new layer
makes it possible to model global effects such as salience, at the same time as
the local network processes task-specific or local information. This simple
combination network displays interactions between salience and regular
processing which correspond to known effects in the developing brain, such as
enhanced learning as a result of heightened affect.
The cortex biases neuronal responses to affect both learning and memory,
through the use of diffuse projections from the limbic system to the cortex.
Standard ANNs do not model this non-local flow of information represented by
the ascending systems, which are a significant feature of the structure of the
brain, and although they do allow associational learning over multiple trials,
they simply do not provide the capacity for one-time learning.
In this research we model this effect using an artificial neural network
(ANN), creating a salience-affected neural network (SANN). We adapt an ANN to
embody the capacity to respond to an input salience signal and to produce a
reverse salience signal during testing.
This research demonstrates that input combinations similar to the inputs in
the training data sets will produce similar reverse salience signals during
testing. Furthermore, this research has uncovered a novel method for training
ANNs with a single training iteration.
|
1001.3265
|
Bounds for Algebraic Gossip on Graphs
|
cs.IT cs.NI math.IT math.PR
|
We study the stopping times of gossip algorithms for network coding. We
analyze algebraic gossip (i.e., random linear coding) and consider three gossip
algorithms for information spreading: Pull, Push, and Exchange. The stopping
time of algebraic gossip is known to be linear for the complete graph, but the
question of determining a tight upper bound or lower bounds for general graphs
is still open. We take a major step in solving this question, and prove that
algebraic gossip on any graph of size n is O(D*n) where D is the maximum degree
of the graph. This leads to a tight bound of Theta(n) for bounded degree graphs
and an upper bound of O(n^2) for general graphs. We show that the latter bound
is tight by providing an example of a graph with a stopping time of Omega(n^2).
Our proofs use a novel method that relies on Jackson's queuing theorem to
analyze the stopping time of network coding; this technique is likely to become
useful for future research.
|
1001.3277
|
On Utilization and Importance of Perl Status Reporter (SRr) in Text
Mining
|
cs.IR
|
In bioinformatics, text mining (a term sometimes used interchangeably with
text data mining) is a process to derive high-quality information from text. Perl
Status Reporter (SRr) is a data-fetching tool for flat text files, and in this
research paper we illustrate the use of SRr in text or data mining. SRr needs a
flat text input file on which the mining process is to be performed. SRr reads the
input file and derives the high-quality information from it. Typical text mining
tasks are text categorization, text clustering, concept and entity extraction,
and document summarization. SRr can be utilized for any of these tasks with
little or no customization effort. In our implementation we perform a text
categorization mining operation on the input file. The input file has two
parameters of interest (firstKey and secondKey). The composition of these two
parameters describes the uniqueness of entries in that file, in the same
manner as a composite key in a database. SRr reads the input file line by
line, extracts the parameters of interest, and forms a composite key by
joining them together. It subsequently generates an output file whose
name is firstKey secondKey. SRr reads the input file and tracks the
composite key. It then stores all data lines having the same composite
key in the output file generated by SRr based on that composite key.
|
1001.3297
|
Gaussian MIMO Broadcast Channels with Common and Confidential Messages
|
cs.IT math.IT
|
We study the two-user Gaussian multiple-input multiple-output (MIMO)
broadcast channel with common and confidential messages. In this channel, the
transmitter sends a common message to both users, and a confidential message to
each user which is kept perfectly secret from the other user. We obtain the
entire capacity region of this channel. We also explore the connections between
the capacity region we obtained for the Gaussian MIMO broadcast channel with
common and confidential messages and the capacity region of its
non-confidential counterpart, i.e., the Gaussian MIMO broadcast channel with
common and private messages, which is not known completely.
|
1001.3365
|
Asymptotic Scheduling Gains in Point-to-Multipoint Cognitive Networks
|
cs.IT math.IT
|
We consider collocated primary and secondary networks that have simultaneous
access to the same frequency bands. Particularly, we examine three different
levels at which primary and secondary networks may coexist: pure interference,
asymmetric co-existence, and symmetric co-existence. At the asymmetric
co-existence level, the secondary network selectively deactivates its users
based on knowledge of the interference and channel gains, whereas at the
symmetric level, the primary network also schedules its users in the same way.
Our aim is to derive optimal sum-rates (i.e., throughputs) of both networks at
each co-existence level as the number of users grows asymptotically, and to
evaluate how the sum-rates scale with network size. In order to find the
asymptotic throughput results, we derive a key lemma on extreme order
statistics and a proposition on the sum of lower order statistics. As a
baseline comparison, we calculate the sum-rates for channel sharing via
time-division (TD). We compare the asymptotic secondary sum-rate in TD with
that under simultaneous transmission, while ensuring the primary network
maintains the same throughput in both cases. The results indicate that
simultaneous transmission at both asymmetric and symmetric co-existence levels
can outperform TD. Furthermore, this enhancement is achievable when uplink
activation or deactivation of users is based only on the interference gains to
the opposite network and not on a network's own channel gains.
|
1001.3387
|
Universal Secure Error-Correcting Schemes for Network Coding
|
cs.IT cs.CR math.IT
|
This paper considers the problem of securing a linear network coding system
against an adversary that is both an eavesdropper and a jammer. The network is
assumed to transport n packets from source to each receiver, and the adversary
is allowed to eavesdrop on \mu arbitrarily chosen links and also to inject up
to t erroneous packets into the network. The goal of the system is to achieve
zero-error communication that is information-theoretically secure from the
adversary. Moreover, this goal must be attained in a universal fashion, i.e.,
regardless of the network topology or the underlying network code. An upper
bound on the achievable rate under these requirements is shown to be n-\mu-2t
packets per transmission. A scheme is proposed that can achieve this maximum
rate, for any n and any field size q, provided the packet length m is at least
n symbols. The scheme is based on rank-metric codes and admits low-complexity
encoding and decoding. In addition, the scheme is shown to be optimal in the
sense that the required packet length is the smallest possible among all
universal schemes that achieve the maximum rate.
|
1001.3403
|
Real Interference Alignment
|
cs.IT math.IT math.NT
|
In this paper, we show that the total Degrees-Of-Freedom (DOF) of the
$K$-user Gaussian Interference Channel (GIC) can be achieved by incorporating a
new alignment technique known as \emph{real interference alignment}. This
technique, compared to its ancestor \emph{vector interference alignment},
performs on a single real line and exploits the properties of real numbers to
provide optimal signaling. Real interference alignment relies on a new
coding scheme in which several data streams having fractional multiplexing
gains are sent by transmitters and interfering streams are aligned at
receivers. The coding scheme is backed up by a recent result in the field of
Diophantine approximation, which states that the convergence part of the
Khintchine-Groshev theorem holds for points on non-degenerate manifolds.
|
1001.3404
|
Lecture Notes on Network Information Theory
|
cs.IT cs.NI math.IT math.ST stat.TH
|
These lecture notes have been converted to a book titled Network Information
Theory published recently by Cambridge University Press. This book provides a
significantly expanded exposition of the material in the lecture notes as well
as problems and bibliographic notes at the end of each chapter. The authors are
currently preparing a set of slides based on the book that will be posted in
the second half of 2012. More information about the book can be found at
http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of
the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/.
|
1001.3421
|
Multilevel Decoders Surpassing Belief Propagation on the Binary
Symmetric Channel
|
cs.IT math.IT
|
In this paper, we propose a new class of quantized message-passing decoders
for LDPC codes over the BSC. The messages take values (or levels) from a finite
set. The update rules do not mimic belief propagation but instead are derived
using the knowledge of trapping sets. We show that the update rules can be
derived to correct certain error patterns that are uncorrectable by algorithms
such as BP and min-sum. In some cases even with a small message set, these
decoders can guarantee correction of a higher number of errors than BP and
min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC
codes. They significantly outperform the BP and min-sum decoders, but more
importantly, they achieve this at only a fraction of the complexity of the BP
and min-sum decoders.
|
1001.3448
|
The dynamics of message passing on dense graphs, with applications to
compressed sensing
|
cs.IT cs.LG math.IT math.ST stat.TH
|
Approximate message passing algorithms proved to be extremely effective in
reconstructing sparse signals from a small number of incoherent linear
measurements. Extensive numerical experiments further showed that their
dynamics is accurately tracked by a simple one-dimensional iteration termed
state evolution. In this paper we provide the first rigorous foundation to
state evolution. We prove that indeed it holds asymptotically in the large
system limit for sensing matrices with independent and identically distributed
Gaussian entries.
While our focus is on message passing algorithms for compressed sensing, the
analysis extends beyond this setting, to a general class of algorithms on dense
graphs. In this context, state evolution plays the role that density evolution
has for sparse graphs.
The proof technique is fundamentally different from the standard approach to
density evolution, in that it copes with a large number of short loops in the
underlying factor graph. It relies instead on a conditioning technique recently
developed by Erwin Bolthausen in the context of spin glass theory.
|
1001.3460
|
Execution and Result Integration Scheme in FPU Farms for Co-ordinated
Performance
|
cs.IT math.IT
|
The main goal of this research is to develop the concept of an innovative
processor system called the Functional Processor System. The particular work
carried out in this paper focuses on the execution of functions in
heterogeneous functional processor units (FPUs) and the integration of functions to
bring net results. As functional programs are super-level programs, the
requirements of execution are only at the functional level. The execution and
integration of the results of functions in FPUs are a challenge. The methodology of
executing the functions in the functional processor farm and of integrating the
results of functions according to the assigned addresses are investigated here.
The concept of feeding the functions into the processor, rather than the
processor fetching the instructions/functions and executing them, is promoted in this
paradigm. This work is carried out at the conceptual level, and there is a long
way to go toward realizing this model in hardware, possibly only with a
large industry team and a realistic time frame.
|
1001.3475
|
Relay Assisted Cooperative OSTBC Communication with SNR Imbalance and
Channel Estimation Errors
|
cs.IT math.IT
|
In this paper, a two-hop relay assisted cooperative Orthogonal Space-Time
Block Codes (OSTBC) transmission scheme is considered for the downlink
communication of a cellular system, where the base station (BS) and the relay
station (RS) cooperate and transmit data to the user equipment (UE) in a
distributed fashion. We analyze the impact of the SNR imbalance between the
BS-UE and RS-UE links, as well as the imperfect channel estimation at the UE
receiver. The performance is analyzed in the presence of Rayleigh flat fading
and our results show that the SNR imbalance does not impact the spatial
diversity order. On the other hand, channel estimation errors have a larger
impact on the system performance. Simulation results are then provided to
confirm the analysis.
|
1001.3476
|
Dirty Paper Coding using Sign-bit Shaping and LDPC Codes
|
cs.IT math.IT
|
Dirty paper coding (DPC) refers to methods for pre-subtraction of known
interference at the transmitter of a multiuser communication system. There are
numerous applications for DPC, including coding for broadcast channels.
Recently, lattice-based coding techniques have provided several designs for
DPC. In lattice-based DPC, there are two codes - a convolutional code that
defines a lattice used for shaping and an error correction code used for
channel coding. Several specific designs have been reported in the recent
literature using convolutional and graph-based codes for capacity-approaching
shaping and coding gains. In most of the reported designs, either the encoder
works on a joint trellis of shaping and channel codes or the decoder requires
iterations between the shaping and channel decoders. This results in high
complexity of implementation. In this work, we present a lattice-based DPC
scheme that provides good shaping and coding gains with moderate complexity at
both the encoder and the decoder. We use a convolutional code for sign-bit
shaping, and a low-density parity check (LDPC) code for channel coding. The
crucial idea is the introduction of a one-codeword delay and careful parsing of
the bits at the transmitter, which enable an LDPC decoder to be run first at
the receiver. This provides gains without the need for iterations between the
shaping and channel decoders. Simulation results confirm that at high rates the
proposed DPC method performs close to capacity with moderate complexity. As an
application of the proposed DPC method, we show a design for superposition
coding that provides rates better than time-sharing over a Gaussian broadcast
channel.
|
1001.3478
|
Role of Interestingness Measures in CAR Rule Ordering for Associative
Classifier: An Empirical Approach
|
cs.LG
|
An associative classifier is a technique that integrates association rule
mining and classification. The difficult task in building an associative
classifier model is the selection of relevant rules from a large number of
class association rules (CARs). A very popular method of ordering rules for
selection is based on confidence, support, and antecedent size (CSA). Other
methods are based on hybrid orderings in which the CSA method is combined
with other measures. In the present work, we study the effect of using
different interestingness measures of association rules in CAR rule ordering
and selection for an associative classifier.
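A minimal sketch of the CSA ordering described above; the `CAR` container and the example rules are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CAR:
    antecedent: frozenset  # itemset on the rule's left-hand side
    label: str             # predicted class
    support: float
    confidence: float

def csa_order(rules):
    """CSA precedence: higher confidence first, ties broken by higher
    support, then by smaller antecedent size."""
    return sorted(rules,
                  key=lambda r: (-r.confidence, -r.support, len(r.antecedent)))

rules = [
    CAR(frozenset({"a", "b"}), "yes", 0.30, 0.90),
    CAR(frozenset({"a"}), "yes", 0.40, 0.90),
    CAR(frozenset({"c"}), "no", 0.50, 0.80),
]
ordered = csa_order(rules)  # {"a"} outranks {"a","b"}; {"c"} comes last
```

A hybrid ordering would simply extend the sort key with further interestingness measures.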
|
1001.3480
|
On the inference of large phylogenies with long branches: How long is
too long?
|
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
|
Recent work has highlighted deep connections between sequence-length
requirements for high-probability phylogeny reconstruction and the related
problem of the estimation of ancestral sequences. In [Daskalakis et al.'09],
building on the work of [Mossel'04], a tight sequence-length requirement was
obtained for the CFN model. In particular the required sequence length for
high-probability reconstruction was shown to undergo a sharp transition (from
$O(\log n)$ to $\hbox{poly}(n)$, where $n$ is the number of leaves) at the
"critical" branch length $\critmlq$ (if it exists) of the ancestral
reconstruction problem.
Here we consider the GTR model. For this model, recent results of [Roch'09]
show that the tree can be accurately reconstructed with sequences of length
$O(\log(n))$ when the branch lengths are below $\critksq$, known as the
Kesten-Stigum (KS) bound. Although for the CFN model $\critmlq = \critksq$, it
is known that for the more general GTR models one has $\critmlq \geq \critksq$
with a strict inequality in many cases. Here, we show that this phenomenon also
holds for phylogenetic reconstruction by exhibiting a family of symmetric
models $Q$ and a phylogenetic reconstruction algorithm which recovers the tree
from $O(\log n)$-length sequences for some branch lengths in the range
$(\critksq,\critmlq)$. Second, we prove that phylogenetic reconstruction under
GTR models requires a polynomial sequence length for branch lengths above
$\critmlq$.
|
1001.3486
|
A Symbolic Dynamical System Approach to Lossy Source Coding with
Feedforward
|
cs.IT math.IT
|
It is known that modeling an information source via a symbolic dynamical
system evolving over the unit interval leads to a natural lossless compression
scheme attaining the entropy rate of the source, under general conditions. We
extend this notion to the lossy compression regime assuming a feedforward link
is available, by modeling a source via a two-dimensional symbolic dynamical
system where one component corresponds to the compressed signal, and the other
essentially corresponds to the feedforward signal. For memoryless sources and
an arbitrary bounded distortion measure, we show this approach leads to a
family of simple deterministic compression schemes that attain the
rate-distortion function of the source. The construction is dual to a recent
optimal scheme for channel coding with feedback.
|
1001.3487
|
Features Based Text Similarity Detection
|
cs.CV
|
As the Internet helps us cross cultural borders by providing access to
diverse information, plagiarism issues are bound to arise, and plagiarism
detection becomes increasingly demanding. Different plagiarism detection
tools have been developed based on various detection techniques. Nowadays,
the fingerprint matching technique plays an important role in such tools.
However, when handling large articles, fingerprint matching has some
weaknesses, especially in space and time consumption. In this paper, we
propose a new approach to plagiarism detection that integrates the
fingerprint matching technique with four key features to assist the detection
process. The proposed features select the main points, or key sentences, of
the articles to be compared. The selected sentences then undergo fingerprint
matching in order to detect the similarity between them. Hence, the time and
space required for the comparison process are reduced without affecting the
effectiveness of the plagiarism detection.
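A minimal sketch of the underlying fingerprint matching step (k-gram hashing plus Jaccard similarity); the feature-based key-sentence selection described above is not reproduced, and the choice `k=5` is arbitrary:

```python
import hashlib

def fingerprints(text, k=5):
    """Hash every k-gram of the normalized text into a fingerprint set."""
    text = "".join(text.lower().split())  # strip case and whitespace
    return {hashlib.md5(text[i:i + k].encode()).hexdigest()
            for i in range(max(0, len(text) - k + 1))}

def similarity(a, b):
    """Jaccard similarity between the fingerprint sets of two sentences."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```

Only the selected key sentences would be passed through `similarity`, which is where the space and time savings come from.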
|
1001.3488
|
A Model for Mining Multilevel Fuzzy Association Rule in Database
|
cs.DB
|
The problem of developing models and algorithms for multilevel association
mining poses new challenges for mathematics and computer science. These
problems become more challenging when some form of uncertainty, such as
fuzziness, is present in the data or in relationships within the data. This
paper proposes a multilevel fuzzy association rule mining model for
extracting knowledge implicit in a transaction database, with a different
support threshold at each level. The proposed algorithm adopts a top-down,
progressively deepening approach to derive large itemsets and incorporates
fuzzy boundaries instead of sharp boundary intervals. An example is given to
demonstrate that the proposed mining algorithm can derive the multiple-level
association rules under different supports in a simple and effective manner.
|
1001.3491
|
Particle Swarm Optimization Based Reactive Power Optimization
|
cs.NE
|
Reactive power plays an important role in supporting the real power transfer
by maintaining voltage stability and system reliability. It is a critical
element for a transmission operator to ensure the reliability of an electric
system while minimizing the cost associated with it. The traditional objectives
of reactive power dispatch are focused on the technical side of reactive
support such as minimization of transmission losses. Reactive power cost
compensation to a generator is based on the incurred cost of its reactive power
contribution less the cost of its obligation to support the active power
delivery. In this paper an efficient Particle Swarm Optimization (PSO) based
reactive power optimization approach is presented. The optimal reactive power
dispatch problem is a nonlinear optimization problem with several constraints.
The objective of the proposed PSO is to minimize the total support cost from
generators and reactive compensators. It is achieved by maintaining the whole
system power loss as minimum thereby reducing cost allocation. The purpose of
reactive power dispatch is to determine the proper amount and location of
reactive support. Reactive Optimal Power Flow (ROPF) formulation is developed
as an analysis tool and the validity of proposed method is examined using an
IEEE-14 bus system.
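As a sketch of the optimizer only (not the paper's ROPF formulation or its constraints), a minimal global-best PSO applied to a stand-in cost function:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO: each velocity blends inertia, a pull toward
    the particle's own best position, and a pull toward the swarm's best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective; in the paper's setting this would be the total support
# cost from generators and reactive compensators, evaluated through ROPF.
cost = lambda x: sum((xi - 1.0) ** 2 for xi in x)
best, best_val = pso_minimize(cost, dim=3)
```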
|
1001.3494
|
Proposing a New Method for Query Processing Adaption in DataBase
|
cs.DB
|
This paper proposes a multi-agent system that combines two technologies,
query processing optimization and software agents, and offers personalized
queries and adaptation to changing requirements. The system uses a new
algorithm, based on modeling users' long-term requirements together with a
genetic algorithm (GA), to gather users' query data. Experimental results
show that the presented algorithm adapts better than classic algorithms.
|
1001.3495
|
Expert System Models in the Companies' Financial and Accounting Domain
|
cs.CE
|
The present paper studies, analyzes, and implements expert systems in the
financial and accounting domain of companies, describing how such
informational systems can be used in multinational companies, public-interest
institutions, and small and medium-sized economic entities in order to
optimize managerial decisions and make the financial-accounting function more
efficient. The purpose of this paper is to identify the economic requirements
of these entities, based on the accounting instruments already in use and on
the management software that enables control of economic processes and
patrimonial assets.
|
1001.3498
|
Interestingness Measure for Mining Spatial Gene Expression Data using
Association Rule
|
cs.DB q-bio.GN q-bio.QM
|
The search for interesting association rules is an important topic in
knowledge discovery in spatial gene expression databases. The set of admissible
rules for the selected support and confidence thresholds can easily be
extracted by algorithms based on support and confidence, such as Apriori.
However, they may produce a large number of rules, many of which are
uninteresting. The challenge in association rule mining (ARM) thus
essentially becomes one of determining which rules are the most interesting.
Association rule interestingness measures are used to help select and rank
association rule patterns. Besides support and confidence, other
interestingness measures include generality, reliability, peculiarity,
novelty, surprisingness, utility, and applicability. In this paper, the
application of the interestingness measures entropy and variance to
association pattern discovery from spatial gene expression data is studied. A
fast mining algorithm is used that produces candidate itemsets, spends less
time calculating the k-supports of the itemsets thanks to the pruned Boolean
matrix, scans the database only once, and needs less memory space.
Experimental results show that using entropy as the measure of interest for
spatial gene expression data yields more diverse and interesting rules.
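The two measures themselves are elementary; a sketch on a hypothetical spatial expression profile (the region probabilities are invented for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def variance(values):
    """Population variance of a list of values."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Hypothetical probabilities of a pattern occurring in four spatial regions.
uniform = [0.25, 0.25, 0.25, 0.25]  # spread evenly -> maximal entropy
skewed = [0.85, 0.05, 0.05, 0.05]   # concentrated  -> low entropy
```

Ranking candidate patterns by `entropy` favours those spread across regions, while `variance` captures the opposite tendency; either can serve as the interestingness score alongside support and confidence.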
|
1001.3500
|
Mathematical Modeling to Study the Dynamics of A Diatomic Molecule N2 in
Water
|
cs.CE physics.comp-ph
|
In the present work an attempt has been made to study the dynamics of a
diatomic molecule N2 in water. The proposed model consists of a Langevin
stochastic differential equation whose solution is obtained through Euler's
method. The work concludes by studying the behavior of
statistical parameters like variance in position, variance in velocity and
covariance between position and velocity. This model incorporates the important
parameters like acceleration, intermolecular force, frictional force and random
force.
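A sketch of the numerical core under simplifying assumptions: a one-dimensional Langevin equation with friction and random force only (the intermolecular-force term and the physical N2/water parameters are omitted), integrated by the Euler-Maruyama variant of Euler's method:

```python
import math
import random

def langevin_euler(n_steps=200_000, dt=1e-3, gamma=1.0, sigma=0.5, seed=0):
    """Euler-Maruyama integration of the 1-D Langevin equation
       dv = -gamma * v dt + sigma dW,   dx = v dt.
    Returns the position and velocity trajectories."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs, vs = [], []
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment (random force)
        v += -gamma * v * dt + sigma * dW   # frictional + random force
        x += v * dt
        xs.append(x)
        vs.append(v)
    return xs, vs

def var(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

xs, vs = langevin_euler()
# The stationary velocity variance of this process is sigma^2/(2*gamma) = 0.125,
# which the sample variance of vs should approach.
```

The statistical parameters the abstract mentions (variance in position and velocity, position-velocity covariance) can all be read off such trajectories.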
|
1001.3502
|
3D Skull Recognition Using 3D Matching Technique
|
cs.CV
|
Biometrics has become a "hot" area, with governments funding research
programs focused on it. This paper addresses the problem of person
recognition and verification based on a distinctive biometric trait. The
system performs 3D skull recognition using a 3D matching technique. The paper
also reviews several biometric approaches in order to identify their weak
points in authenticating an authorized person and in ensuring that the person
accessing the data is the genuine one. The features of the simulated system
show that 3D matching is an efficient way to identify a person by matching
his or her skull against a database; the technique guarantees fast processing
while optimizing the false-positive and false-negative rates.
|
1001.3503
|
Hybrid Medical Image Classification Using Association Rule Mining with
Decision Tree Algorithm
|
cs.CV
|
The main focus of image mining in the proposed method is concerned with the
classification of brain tumor in the CT scan brain images. The major steps
involved in the system are: pre-processing, feature extraction, association
rule mining, and hybrid classification. The pre-processing step is done using
median filtering, and edge features are extracted using the Canny edge
detection technique. Two image mining approaches are combined in a hybrid
manner in this paper. The frequent patterns in the CT scan images are
generated by the frequent pattern tree (FP-Tree) algorithm, which mines the
association rules. The decision tree method is then used to classify the
medical images for diagnosis, making the classification process more
accurate. The hybrid method is more efficient than traditional image mining
methods. Experimental results on a prediagnosed database of brain images
showed 97% sensitivity and 95% accuracy. Physicians can make use of this
accurate decision tree
classification phase for classifying the brain images into normal, benign and
malignant for effective medical diagnosis.
|
1001.3550
|
Deconvolution of linear systems with quantized input: an information
theoretic viewpoint
|
cs.IT math.DS math.IT
|
In spite of the huge literature on deconvolution problems, very little is
done for hybrid contexts where signals are quantized. In this paper we
undertake an information theoretic approach to the deconvolution problem of a
simple integrator with quantized binary input and sampled noisy output. We
recast it into a decoding problem and we propose and analyze (theoretically and
numerically) some low complexity on-line algorithms to achieve deconvolution.
|
1001.3697
|
Secure Communication in Stochastic Wireless Networks
|
cs.IT cs.CR math.IT math.PR
|
Information-theoretic security -- widely accepted as the strictest notion of
security -- relies on channel coding techniques that exploit the inherent
randomness of the propagation channels to significantly strengthen the security
of digital communications systems. Motivated by recent developments in the
field, this paper aims at a characterization of the fundamental secrecy limits
of wireless networks. Based on a general model in which legitimate nodes and
potential eavesdroppers are randomly scattered in space, the intrinsically
secure communications graph (iS-graph) is defined from the point of view of
information-theoretic security. Conclusive results are provided for the local
connectivity of the Poisson iS-graph, in terms of node degrees and isolation
probabilities. It is shown how the secure connectivity of the network varies
with the wireless propagation effects, the secrecy rate threshold of each link,
and the noise powers of legitimate nodes and eavesdroppers. Sectorized
transmission and eavesdropper neutralization are explored as viable strategies
for improving the secure connectivity. Lastly, the maximum secrecy rate between
a node and each of its neighbours is characterized, and the case of colluding
eavesdroppers is studied. The results help clarify how the spatial density of
eavesdroppers can compromise the intrinsic security of wireless networks.
|
1001.3705
|
Secret Key Agreement from Correlated Gaussian Sources by Rate Limited
Public Communication
|
cs.IT math.IT
|
We investigate the secret key agreement from correlated Gaussian sources in
which the legitimate parties can use the public communication with limited
rate. For the class of protocols with the one-way public communication, we show
a closed form expression of the optimal trade-off between the rate of key
generation and the rate of the public communication. Our results clarify an
essential difference between the key agreement from discrete sources and that
from continuous sources.
|
1001.3708
|
Capacity Bounds and Lattice Coding for the Star Relay Network
|
cs.IT math.IT
|
A half-duplex wireless network with 6 lateral nodes, 3 transmitters and 3
receivers, and a central relay is considered. The transmitters wish to send
information to their corresponding receivers via a two phase communication
protocol. The receivers decode their desired messages by using side information
and the signals received from the relay. We derive an outer bound on the
capacity region of any two phase protocol as well as 3 achievable regions by
employing different relaying strategies. In particular, we combine physical and
network layer coding to take advantage of the interference at the relay, using,
for example, lattice-based codes. We then specialize our results to the
exchange rate. It is shown that for any SNR, we can achieve within 0.5 bit of
the upper bound by lattice coding, and within 0.34 bit if we take the best of
the 3 strategies. Also, for high SNR, lattice coding is within log(3)/4 ~ 0.4
bit of the upper bound.
|
1001.3717
|
Multistage Relaying Using Interference Networks
|
cs.IT math.IT
|
Wireless networks with multiple nodes that relay information from a source to
a destination are expected to be deployed in many applications. Therefore,
understanding their design and performance under practical constraints is
important. In this work, we propose and study three multihopping decode and
forward (MDF) protocols for multistage half-duplex relay networks with no
direct link between the source and destination nodes. In all three protocols,
we assume no cooperation across relay nodes for encoding and decoding.
Numerical evaluation in illustrative example networks and comparison with cheap
relay cut-set bounds for half-duplex networks show that the proposed MDF
protocols approach capacity in some ranges of channel gains. The main idea in
the design of the protocols is the use of coding in interference networks that
are created in different states or modes of a half-duplex network. Our results
suggest that multistage half-duplex relaying with practical constraints on
cooperation is comparable to point-to-point links and full-duplex relay
networks, if there are multiple non-overlapping paths from source to
destination and if suitable coding is employed in interference network states.
|
1001.3720
|
Page-Differential Logging: An Efficient and DBMS-independent Approach
for Storing Data into Flash Memory
|
cs.DB
|
Flash memory is widely used as the secondary storage in lightweight computing
devices due to its outstanding advantages over magnetic disks. Flash memory has
many access characteristics different from those of magnetic disks, and how to
take advantage of them is becoming an important research issue. There are two
existing approaches to storing data into flash memory: page-based and
log-based. The former has good performance for read operations, but poor
performance for write operations. In contrast, the latter has good performance
for write operations when updates are light, but poor performance for read
operations. In this paper, we propose a new method of storing data, called
page-differential logging, for flash-based storage systems that solves the
drawbacks of the two methods. The primary characteristics of our method are:
(1) writing only the difference (which we define as the page-differential)
between the original page in flash memory and the up-to-date page in memory;
(2) computing and writing the page-differential only once at the time the page
needs to be reflected into flash memory. The former contrasts with existing
page-based methods, which write the whole page including both changed and
unchanged data, and with log-based ones, which keep track of the history
of all the changes in a page. Our method allows existing disk-based DBMSs to be
reused as flash-based DBMSs just by modifying the flash memory driver, i.e., it
is DBMS-independent. Experimental results show that the proposed method
improves the I/O performance by 1.2 ~ 6.1 times over existing methods for the
TPC-C data of approximately 1 Gbytes.
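The two primary characteristics can be sketched directly; the byte-level `(offset, value)` diff format below is an illustrative simplification of the paper's on-flash encoding:

```python
def page_diff(original, current):
    """Compute a page-differential: the (offset, byte) pairs where the
    up-to-date in-memory page differs from the original flash page."""
    assert len(original) == len(current)
    return [(i, current[i]) for i in range(len(current))
            if current[i] != original[i]]

def apply_diff(original, diff):
    """Reconstruct the up-to-date page from the original page and a diff
    (the read path of a page-differential store)."""
    page = bytearray(original)
    for offset, byte in diff:
        page[offset] = byte
    return bytes(page)

original = bytes(16)        # a blank 16-byte "flash page"
current = bytearray(original)
current[3] = 0xAB           # two small in-place updates
current[10] = 0xCD
diff = page_diff(original, bytes(current))
```

The differential is computed and written only once, when the page must be reflected into flash, so only `len(diff)` bytes of payload are written instead of the whole page.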
|
1001.3735
|
Gradient Based Seeded Region Grow method for CT Angiographic Image
Segmentation
|
cs.CV
|
Segmentation of medical images using the seeded region growing technique is
becoming an increasingly popular method because of its ability to involve
high-level knowledge of anatomical structures in the seed selection process.
Region-based segmentation of medical images is widely used in varied clinical
applications such as visualization, bone detection, tumor detection, and
unsupervised image retrieval in clinical databases. As medical images are
mostly fuzzy in nature, segmenting regions based on intensity is a most
challenging task. In this paper, we discuss the popular seeded region grow
methodology used for segmenting anatomical structures in CT angiography (CTA)
images, and we propose a gradient-based homogeneity criterion to control the
region grow process while segmenting CTA images.
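A minimal sketch of seeded region growing with a gradient-style homogeneity test; the 4-neighbour rule, the threshold, and the toy image are illustrative stand-ins, not the paper's exact criterion:

```python
from collections import deque

def region_grow(img, seed, grad_thresh):
    """Grow a region from `seed` on a 2-D intensity grid, admitting a
    4-neighbour only when the local gradient (absolute intensity step from
    the current pixel) stays below `grad_thresh`."""
    h, w = len(img), len(img[0])
    seen = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen:
                if abs(img[nr][nc] - img[r][c]) < grad_thresh:
                    seen.add((nr, nc))
                    q.append((nr, nc))
    return seen

# Toy "vessel" of intensity 200 on a dark background of 10.
img = [
    [10, 10, 10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 10, 10],
]
region = region_grow(img, seed=(1, 1), grad_thresh=50)
```

The gradient test stops the grow at the sharp vessel boundary, which is the role the proposed homogeneity criterion plays on real CTA data.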
|
1001.3741
|
Application of Artificial Neural Networks in Aircraft Maintenance,
Repair and Overhaul Solutions
|
cs.NE
|
This paper reviews the application of artificial neural networks in aircraft
Maintenance, Repair and Overhaul (MRO). MRO solutions are designed to
facilitate the authoring and delivery of maintenance and repair information to
the line maintenance technicians who need to improve aircraft repair
turnaround time, optimize the efficiency and consistency of fleet maintenance,
and ensure regulatory compliance. The technical complexity of aircraft
systems, especially in avionics, has increased to the point at which it poses
a significant troubleshooting and repair challenge for MRO personnel. In the
existing scenario, the MRO systems in place are inefficient. In this paper, we
propose the centralization and integration of the MRO database to increase its
efficiency. Moreover, implementing artificial neural networks in this system
can rid it of many of its deficiencies. To make the system more efficient, we
propose to integrate all the modules so as to improve the efficacy of repair.
|
1001.3745
|
The effect of discrete vs. continuous-valued ratings on reputation and
ranking systems
|
cs.IR cs.AI cs.DB physics.soc-ph
|
When users rate objects, a sophisticated algorithm that takes into account
ability or reputation may produce a fairer or more accurate aggregation of
ratings than the straightforward arithmetic average. Recently a number of
authors have proposed different co-determination algorithms where estimates of
user and object reputation are refined iteratively together, permitting
accurate measures of both to be derived directly from the rating data. However,
simulations demonstrating these methods' efficacy assumed a continuum of rating
values, consistent with typical physical modelling practice, whereas in most
actual rating systems only a limited range of discrete values (such as a 5-star
system) is employed. We perform a comparative test of several co-determination
algorithms with different scales of discrete ratings and show that this
seemingly minor modification in fact has a significant impact on algorithms'
performance. Paradoxically, where rating resolution is low, increased noise in
users' ratings may even improve the overall performance of the system.
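One generic co-determination iteration of the kind described above can be sketched as follows; the inverse mean-squared-deviation weighting is only one of several variants in this literature, and the ratings matrix is invented:

```python
def codetermine(ratings, iters=50, eps=1e-8):
    """Iteratively refine object quality and user reputation from a
    ratings[user][object] matrix: qualities are reputation-weighted rating
    averages, reputations are inverse mean-squared deviations from them."""
    n_users, n_objs = len(ratings), len(ratings[0])
    rep = [1.0] * n_users
    for _ in range(iters):
        quality = [
            sum(rep[u] * ratings[u][o] for u in range(n_users)) / sum(rep)
            for o in range(n_objs)
        ]
        rep = [
            1.0 / (eps + sum((ratings[u][o] - quality[o]) ** 2
                             for o in range(n_objs)) / n_objs)
            for u in range(n_users)
        ]
    return quality, rep

# Two consistent users agree; a third rates adversarially.
ratings = [
    [5.0, 1.0, 3.0],
    [5.0, 1.0, 3.0],
    [1.0, 5.0, 3.0],
]
quality, rep = codetermine(ratings)
```

Restricting `ratings` to a handful of discrete values (e.g. a 5-star scale) is exactly the modification whose effect on such algorithms the paper tests.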
|
1001.3760
|
Range-Free Localization with the Radical Line
|
cs.IT math.IT
|
Due to hardware and computational constraints, wireless sensor networks
(WSNs) normally do not take measurements of time-of-arrival or
time-difference-of-arrival for range-based localization. Instead, WSNs in some
applications use range-free localization for simple but less accurate
determination of sensor positions. A well-known algorithm for this purpose is
the centroid algorithm. This paper presents a range-free localization technique
based on the radical line of intersecting circles. This technique provides
greater accuracy than the centroid algorithm, at the expense of a slight
increase in computational load. Simulation results show that for the scenarios
studied, the radical line method can give an approximately 2 to 30% increase in
accuracy over the centroid algorithm, depending on whether or not the anchors
have identical ranges, and on the value of DOI.
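The radical line itself is elementary to compute; a sketch (the anchor-selection and position-estimation steps of the paper's method are not reproduced):

```python
def radical_line(c1, r1, c2, r2):
    """Coefficients (a, b, c) of the radical line a*x + b*y = c of two
    circles: the locus of points with equal power with respect to both."""
    (x1, y1), (x2, y2) = c1, c2
    a = 2.0 * (x2 - x1)
    b = 2.0 * (y2 - y1)
    c = (r1 ** 2 - r2 ** 2) + (x2 ** 2 + y2 ** 2) - (x1 ** 2 + y1 ** 2)
    return a, b, c

# Two intersecting circles of equal radius centred at (0,0) and (4,0):
# by symmetry the radical line is x = 2.
a, b, c = radical_line((0.0, 0.0), 3.0, (4.0, 0.0), 3.0)
```

When the circles are the coverage disks of anchor pairs, intersecting such lines yields the position estimate that replaces the plain centroid.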
|
1001.3765
|
Doped Fountain Coding for Minimum Delay Data Collection in Circular
Networks
|
cs.IT math.IT
|
This paper studies decentralized, Fountain and network-coding based
strategies for facilitating data collection in circular wireless sensor
networks, which rely on the stochastic diversity of data storage. The goal is
to allow for a reduced delay collection by a data collector who accesses the
network at a random position and random time. Data dissemination is performed
by a set of relays which form a circular route to exchange source packets. The
storage nodes within the transmission range of the route's relays linearly
combine and store overheard relay transmissions using random decentralized
strategies. An intelligent data collector first collects a minimum set of coded
packets from a subset of storage nodes in its proximity, which might be
sufficient for recovering the original packets and, by using a message-passing
decoder, attempts recovering all original source packets from this set.
Whenever the decoder stalls, the source packet which restarts decoding is
polled/doped from its original source node. The random-walk-based analysis of
the decoding/doping process furnishes the collection delay analysis with a
prediction on the number of required doped packets. The number of doped packets
can be surprisingly small when employed with an Ideal Soliton code degree
distribution and, hence, the doping strategy may have the least collection
delay when the density of source nodes is sufficiently large. Furthermore, we
demonstrate that network coding makes dissemination more efficient at the
expense of a larger collection delay. Not surprisingly, a circular network
allows for significantly more tractable (analytically and otherwise)
strategies relative to a network whose model is a random geometric graph.
|
1001.3780
|
Combinatorial Bounds and Characterizations of Splitting Authentication
Codes
|
cs.CR cs.IT math.IT
|
We present several generalizations of results for splitting authentication
codes by studying the aspect of multi-fold security. As the two primary
results, we prove a combinatorial lower bound on the number of encoding rules
and a combinatorial characterization of optimal splitting authentication codes
that are multi-fold secure against spoofing attacks. The characterization is
based on a new type of combinatorial designs, which we introduce and for which
basic necessary conditions are given regarding their existence.
|
1001.3790
|
Vector Precoding for Gaussian MIMO Broadcast Channels: Impact of Replica
Symmetry Breaking
|
cs.IT math.IT
|
The so-called "replica method" of statistical physics is employed for the
large system analysis of vector precoding for the Gaussian multiple-input
multiple-output (MIMO) broadcast channel. The transmitter is assumed to
comprise a linear front-end combined with nonlinear precoding, that minimizes
the front-end imposed transmit energy penalty. Focusing on discrete complex
input alphabets, the energy penalty is minimized by relaxing the input alphabet
to a larger alphabet set prior to precoding. For the common discrete
lattice-based relaxation, the problem is found to violate the assumption of
replica symmetry and a replica symmetry breaking ansatz is taken. The limiting
empirical distribution of the precoder's output, as well as the limiting energy
penalty, are derived for one-step replica symmetry breaking. For convex
relaxations, replica symmetry is found to hold and corresponding results are
obtained for comparison. Particularizing to a "zero-forcing" (ZF) linear
front-end, and non-cooperative users, a decoupling result is derived according
to which the channel observed by each of the individual receivers can be
effectively characterized by the Markov chain u-x-y, where u, x, and y are the
channel input, the equivalent precoder output, and the channel output,
respectively. For discrete lattice-based alphabet relaxation, the impact of
replica symmetry breaking is demonstrated for the energy penalty at the
transmitter. An analysis of spectral efficiency is provided to compare discrete
lattice-based relaxations against convex relaxations, as well as linear ZF and
Tomlinson-Harashima precoding (THP). Focusing on quaternary phase shift-keying
(QPSK), significant performance gains of both lattice and convex relaxations
are revealed compared to linear ZF precoding, for medium to high
signal-to-noise ratios (SNRs). THP is shown to be outperformed as well.
|
1001.3885
|
Improved Source Coding Exponents via Witsenhausen's Rate
|
cs.IT math.IT
|
We provide a novel upper-bound on Witsenhausen's rate, the rate required in
the zero-error analogue of the Slepian-Wolf problem; our bound is given in
terms of a new information-theoretic functional defined on a certain graph. We
then use the functional to give a single letter lower-bound on the error
exponent for the Slepian-Wolf problem under the vanishing error probability
criterion, where the decoder has full (i.e. unencoded) side information. Our
exponent stems from our new encoding scheme which makes use of source
distribution only through the positions of the zeros in the `channel' matrix
connecting the source with the side information, and in this sense is
`semi-universal'. We demonstrate that our error exponent can beat the
`expurgated' source-coding exponent of Csisz\'{a}r and K\"{o}rner,
achievability of which requires the use of a non-universal maximum-likelihood
decoder. An extension of our scheme to the lossy case (i.e. Wyner-Ziv) is
given. For the case when the side information is a deterministic function of
the source, the exponent of our improved scheme agrees with the sphere-packing
bound exactly (thus determining the reliability function). An application of
our functional to zero-error channel capacity is also given.
|
1001.3908
|
Secret Key Establishment over a Pair of Independent Broadcast Channels
|
cs.IT cs.CR math.IT
|
This paper considers the problem of information-theoretic Secret Key
Establishment (SKE) in the presence of a passive adversary, Eve, when Alice and
Bob are connected by a pair of independent discrete memoryless broadcast
channels in opposite directions. We refer to this setup as 2DMBC. We define the
secret-key capacity in the 2DMBC setup and prove lower and upper bounds on this
capacity. The lower bound is achieved by a two-round SKE protocol that uses a
two-level coding construction. We show that the lower and the upper bounds
coincide in the case of degraded DMBCs.
|
1001.3911
|
Computing Lower Bounds on the Information Rate of Intersymbol
Interference Channels
|
cs.IT math.IT
|
Provable lower bounds are presented for the information rate I(X; X+S+N)
where X is the symbol drawn from a fixed, finite-size alphabet, S a
discrete-valued random variable (RV) and N a Gaussian RV. The information rate
I(X; X+S+N) serves as a tight lower bound for capacity of intersymbol
interference (ISI) channels corrupted by Gaussian noise. The new bounds can be
calculated with a reasonable computational load and provide a similar level of
tightness as the well-known conjectured lower bound by Shamai and Laroia for a
good range of finite-ISI channels of practical interest. The computation of the
presented bounds requires the evaluation of the magnitude sum of the precursor
ISI terms as well as the identification of dominant terms among them seen at
the output of the minimum mean-squared error (MMSE) decision feedback equalizer
(DFE).
|
1001.3916
|
Girth-12 Quasi-Cyclic LDPC Codes with Consecutive Lengths
|
cs.IT math.IT
|
A method to construct girth-12 (3,L) quasi-cyclic low-density parity-check
(QC-LDPC) codes with all lengths larger than a certain given number is
proposed, via a given girth-12 code subjected to some constraints. The lengths
of these codes can be arbitrary integers of the form LP, provided that P is
larger than a tight lower bound determined by the maximal element within the
exponent matrix of the given girth-12 code. By applying the method to the case
of row-weight six, we obtained a family of girth-12 (3,6) QC-LDPC codes for
arbitrary lengths above 2688, which includes 30 member codes with shorter code
lengths compared with the shortest girth-12 (3,6) QC-LDPC codes reported by
O'Sullivan.
|
1001.3920
|
Comparison of Genetic Algorithm and Simulated Annealing Technique for
Optimal Path Selection In Network Routing
|
cs.NE cs.NI
|
This paper addresses the problem of selecting a path from a known sender to
the receiver. The proposed work shows path selection using genetic algorithm
(GA) and simulated annealing (SA) approaches. In the genetic algorithm
approach, multi-point crossover and mutation help determine the optimal path
and also an alternate path if required. The input to both algorithms is a
learnt module, part of the cognitive router, that takes care of four QoS
parameters. The aim of the approach is to maximize the bandwidth along the
forward channels and minimize the route length. The population size is taken
as the N nodes participating in the network scenario, which is limited to a
topology of known size. The simulation results show that with the genetic
algorithm approach the probability of converging to the shortest path
increases as the number of iterations grows, whereas in simulated annealing
the number of iterations has little influence on the quality of the result,
since candidates are selected at random.
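To make the simulated-annealing side concrete, here is a minimal sketch over candidate paths of a toy weighted digraph. The graph, weights, and cooling schedule are illustrative assumptions, not the paper's cognitive-router setup, and only route length is minimized:

```python
import math
import random

def all_paths(adj, s, t, path=None):
    """Enumerate all simple s -> t paths in a small digraph."""
    path = (path or []) + [s]
    if s == t:
        yield path
        return
    for v in adj[s]:
        if v not in path:
            yield from all_paths(adj, v, t, path)

def route_length(path, w):
    return sum(w[(u, v)] for u, v in zip(path, path[1:]))

def anneal(paths, w, T0=5.0, cooling=0.95, steps=200, seed=1):
    """Accept a shorter candidate path always, a longer one with
    probability exp(-delta/T); temperature T cools geometrically.
    The best path seen so far is tracked and returned."""
    rng = random.Random(seed)
    cur = best = rng.choice(paths)
    T = T0
    for _ in range(steps):
        cand = rng.choice(paths)
        delta = route_length(cand, w) - route_length(cur, w)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            cur = cand
            if route_length(cur, w) < route_length(best, w):
                best = cur
        T *= cooling
    return best

adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
w = {(0, 1): 1, (0, 2): 4, (1, 2): 1, (1, 3): 5, (2, 3): 1}
paths = list(all_paths(adj, 0, 3))
best = anneal(paths, w)
print(best, route_length(best, w))
```

On this toy graph the shortest route 0→1→2→3 (length 3) beats the direct alternatives 0→1→3 (length 6) and 0→2→3 (length 5).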
|
1001.3934
|
Eigen-Inference for Energy Estimation of Multiple Sources
|
cs.IT math.IT
|
In this paper, a new method is introduced to blindly estimate the transmit
power of multiple signal sources in multi-antenna fading channels, when the
number of sensing devices and the number of available samples are sufficiently
large compared to the number of sources. Recent advances in the field of
large-dimensional random matrix theory are used to derive a simple and
computationally efficient consistent estimator of the power of each source. A
criterion to determine the minimum number of sensors and the minimum number of
samples required to achieve source separation is then introduced. Simulations
are performed that corroborate the theoretical claims and show that the
proposed power estimator largely outperforms alternative power inference
techniques.
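For intuition only, a naive single-source eigenvalue estimator can be sketched as follows. It assumes a known noise variance and channel norm, unlike the paper's blind, consistent large-dimensional estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 2000                       # sensing devices, samples
P_true, noise_var = 2.0, 1.0

# one source: y_t = sqrt(P) * h * s_t + n_t, all entries CN(0, 1)
h = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)
s = (rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))) / np.sqrt(2)
n = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) * np.sqrt(noise_var / 2)
Y = np.sqrt(P_true) * h @ s + n

R = Y @ Y.conj().T / M                # sample covariance of the observations
lam_max = np.linalg.eigvalsh(R)[-1]   # dominant (signal) eigenvalue
P_hat = (lam_max - noise_var) / np.linalg.norm(h) ** 2
print(round(P_hat, 2))
```

With many sensors and samples the dominant eigenvalue concentrates near P||h||^2 + sigma^2, so subtracting the noise floor and normalizing by the channel gain recovers the transmit power closely.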
|
1001.3974
|
Interactive Three-Dimensional Modeling and Visualization of Electrical
Variables in Electrowinning Cells with Bipolar Electrodes
|
cs.GR cs.CE
|
The use of floating bipolar electrodes in copper electrowinning cells
constitutes a nonconventional technology that promises economic and
operational benefits. This paper presents a computational tool for the
simulation and analysis of such electrochemical cells. A new model is
developed for the floating electrodes, and a finite-difference method is used
to obtain the three-dimensional distribution of the potential and the
current-density field inside the cell. The analysis of the results is based on
a technique for the interactive visualization of three-dimensional vector
fields as flow lines.
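A minimal finite-difference sketch of the idea: a 2-D Laplace solve between two plates with illustrative boundary potentials, far simpler than the paper's three-dimensional bipolar-electrode model:

```python
import numpy as np

n, iters = 21, 2000
x = np.linspace(0.0, 1.0, n)
V = np.zeros((n, n))
V[:, 0], V[:, -1] = 1.0, 0.0          # anode plate at 1 V, cathode at 0 V
V[0, :] = V[-1, :] = 1.0 - x          # linear Dirichlet ramp on side walls

for _ in range(iters):                 # Jacobi sweeps over interior nodes
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1]
                            + V[1:-1, :-2] + V[1:-1, 2:])

Ey, Ex = np.gradient(-V)               # field E = -grad V (per grid step)
J = np.hypot(Ex, Ey)                   # |J| = sigma * |E| with sigma = 1
print(round(V[n // 2, n // 2], 3))
```

With these boundary conditions the exact solution is linear in x, so the iteration converges to 0.5 V at the cell's center and a uniform current-density magnitude; the field lines are what the visualization technique would render.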
|
1001.4002
|
Graphical Application for Studying an Electrolytic Cell Model Using
Vector Field Visualization Techniques
|
cs.GR cs.CE
|
The use of floating bipolar electrodes in copper electrowinning cells
constitutes a nonconventional technology that promises economic and
operational benefits. This thesis presents a computational tool for the
simulation and analysis of such electrochemical cells. A new model is
developed for the floating electrodes, and a finite-difference method is used
to obtain the three-dimensional distribution of the potential and the
current-density field inside the cell. The analysis of the results is based on
a technique for the interactive visualization of three-dimensional vector
fields as flow lines.
|
1001.4072
|
Hamming Code for Multiple Sources
|
cs.IT math.IT
|
We consider Slepian-Wolf (SW) coding of multiple sources and extend the
packing bound and the notion of a perfect code from conventional channel
coding to SW coding with more than two sources. We then introduce Hamming
Codes for Multiple Sources (HCMSs) as a potential solution for perfect SW
coding with an arbitrary number of terminals, and study the three-source case
in detail. We present necessary conditions for a perfect SW code and show that
there exist infinitely many HCMSs. We also show that for a perfect SW code
with sufficiently long code length, the compression rates of the different
sources can be traded off flexibly. Finally, we relax the construction
procedure of HCMS and call the resulting code a generalized HCMS. We prove
that every perfect SW code for Hamming sources is equivalent to a generalized
HCMS.
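For reference, the classical single-source [7,4] Hamming code, the perfect code whose packing-bound property HCMS extends to multiple sources, corrects any single bit error by syndrome decoding:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j is the binary
# representation of j+1, so a nonzero syndrome directly indexes the
# flipped bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def decode(r):
    s = (H @ r) % 2
    pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])   # syndrome as 0..7
    if pos:                                           # flip the bad bit
        r = r.copy()
        r[pos - 1] ^= 1
    return r

c = np.zeros(7, dtype=np.uint8)   # the all-zero codeword
r = c.copy()
r[4] ^= 1                         # corrupt bit position 5
print(decode(r))                  # recovers the all-zero codeword
```

Every length-7 binary word is within Hamming distance one of exactly one codeword, which is the sphere-packing perfection the abstract generalizes to the SW setting.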
|
1001.4099
|
Ant Colony Algorithm for the Weighted Item Layout Optimization Problem
|
cs.NE cs.CG
|
This paper discusses the problem of placing weighted items in a circular
container in two-dimensional space. This problem is of great practical
significance in various mechanical engineering domains, such as the design of
communication satellites. Two constructive heuristics are proposed, one for
packing circular items and the other for packing rectangular items. These work
by first optimizing object placement order, and then optimizing object
positioning. Based on these heuristics, an ant colony optimization (ACO)
algorithm is described to search first for the optimal placement order, and then
for the optimal layout. We describe the results of numerical experiments, in
which we test two versions of our ACO algorithm alongside local search methods
previously described in the literature. Our results show that the constructive
heuristic-based ACO performs better than existing methods on larger problem
instances.
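The flavor of the order-first search can be sketched with a minimal ant-colony loop on a toy ordering instance. The parameters and the pure tour-length objective are illustrative, not the paper's weighted-layout formulation:

```python
import math
import random

def aco_order(dist, n_ants=20, iters=50, rho=0.5, alpha=1.0, beta=2.0, seed=0):
    """Minimal ACO sketch: ants build a visiting order step by step,
    biased by pheromone**alpha * (1/distance)**beta; pheromone
    evaporates by rho and the iteration-best order is reinforced."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            cur, unvisited, tour = 0, set(range(1, n)), [0]
            while unvisited:
                cand = list(unvisited)
                ws = [tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                      for j in cand]
                cur = rng.choices(cand, weights=ws)[0]
                tour.append(cur)
                unvisited.remove(cur)
            tours.append(tour)
        it_best = min(tours, key=tour_len)
        if tour_len(it_best) < best_len:
            best, best_len = it_best, tour_len(it_best)
        tau = [[t * (1 - rho) for t in row] for row in tau]   # evaporate
        for i in range(n):                                    # reinforce
            a, b = it_best[i], it_best[(i + 1) % n]
            tau[a][b] += 1.0 / tour_len(it_best)
            tau[b][a] += 1.0 / tour_len(it_best)
    return best, best_len

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0.5)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best, best_len = aco_order(dist)
print(best, round(best_len, 3))
```

In the paper's setting, a constructive placement heuristic would then position each item following the order the ants discover.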
|
1001.4110
|
A Simple Message-Passing Algorithm for Compressed Sensing
|
cs.IT math.IT
|
We consider the recovery of a nonnegative vector x from measurements y = Ax,
where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when
A corresponds to the adjacency matrix of a bipartite graph with sufficient
expansion, a simple message-passing algorithm produces an estimate \hat{x} of x
satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best
k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k))
computation in total, and the number of measurements required is m = O(k
log(n/k)). In the special case when x is k-sparse, the algorithm recovers x
exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step
in the direction of more formally developing the broader role of
message-passing algorithms in solving compressed sensing problems.
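A one-shot toy version of the idea (not the paper's full iterative algorithm): for nonnegative x and a 0/1 matrix A, every measurement containing coordinate j upper-bounds x_j, so the minimum over j's measurements is a valid estimate. It is exact when each coordinate has a measurement isolating it from the rest of the support, which sufficient expansion guarantees:

```python
import numpy as np

def min_neighbor_estimate(A, y):
    """For each coordinate j, take the minimum of the measurements y_i
    with A[i, j] = 1; each such y_i upper-bounds x_j since x >= 0."""
    est = np.empty(A.shape[1])
    for j in range(A.shape[1]):
        rows = np.nonzero(A[:, j])[0]
        est[j] = y[rows].min()
    return est

# adjacency matrix of a small bipartite measurement graph
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=np.uint8)
x = np.array([3.0, 0.0, 0.0, 0.0])    # 1-sparse nonnegative signal
y = A @ x
xhat = min_neighbor_estimate(A, y)
print(xhat)
```

Here the support coordinate keeps its value while every zero coordinate sees at least one support-free measurement, so the estimate is exact; the iterative message-passing algorithm repeats refinements of this kind.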
|
1001.4120
|
Sum-Capacity and the Unique Separability of the Parallel Gaussian
MAC-Z-BC Network
|
cs.IT math.IT
|
It is known that the capacity of parallel (e.g., multi-carrier) Gaussian
point-to-point, multiple access and broadcast channels can be achieved by
separate encoding for each subchannel (carrier) subject to a power allocation
across carriers. Recent results have shown that parallel interference channels
are not separable, i.e., joint coding is needed to achieve capacity in general.
This work studies the separability, from a sum-capacity perspective, of single
hop Gaussian interference networks with independent messages and arbitrary
number of transmitters and receivers. The main result is that the only network
that is always (for all values of channel coefficients) separable from a
sum-capacity perspective is the MAC-Z-BC network, i.e., a network where a MAC
component and a BC component are linked by a Z component. The sum capacity of
this network is explicitly characterized.
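The per-subchannel power allocation the abstract refers to is the classical water-filling solution for parallel Gaussian channels; a bisection sketch (gains and budget are illustrative):

```python
import numpy as np

def waterfill(gains, P_total):
    """Water-filling across parallel Gaussian subchannels with
    noise-normalized gains g_k: p_k = max(mu - 1/g_k, 0), with the
    water level mu chosen by bisection so the powers sum to P_total."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, inv.max() + P_total
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.clip(mu - inv, 0.0, None).sum() > P_total:
            hi = mu
        else:
            lo = mu
    return np.clip(0.5 * (lo + hi) - inv, 0.0, None)

g = [2.0, 1.0, 0.25]                  # subchannel gains
p = waterfill(g, P_total=3.0)
rate = np.sum(np.log2(1.0 + np.array(g) * p))
print(p, round(rate, 3))
```

The weakest subchannel is switched off entirely (its inverse gain sits above the water level), illustrating why separate per-carrier encoding still needs a joint power allocation.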
|
1001.4122
|
Distributed Control of the Laplacian Spectral Moments of a Network
|
cs.MA cs.CE
|
It is well-known that the eigenvalue spectrum of the Laplacian matrix of a
network contains valuable information about the network structure and the
behavior of many dynamical processes run on it. In this paper, we propose a
fully decentralized algorithm that iteratively modifies the structure of a
network of agents in order to control the moments of the Laplacian eigenvalue
spectrum. Although the individual agents have knowledge of their local network
structure only (i.e., myopic information), they are collectively able to
aggregate this local information and decide on what links are most beneficial
to be added or removed at each time step. Our approach relies on gossip
algorithms to distributively compute the spectral moments of the Laplacian
matrix, as well as ensure network connectivity in the presence of link
deletions. We illustrate our approach in nontrivial computer simulations and
show that a good final approximation of the spectral moments of the target
Laplacian matrix is achieved for many cases of interest.
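The quantities being controlled are the moments m_k = (1/n) trace(L^k), i.e., the averaged k-th powers of the Laplacian eigenvalues. Since each diagonal entry (L^k)_{ii} depends only on node i's k-hop neighborhood, gossip averaging of these local contributions recovers the global moment; a centralized check (on an illustrative 4-cycle, not the paper's gossip protocol):

```python
import numpy as np

def laplacian(adj):
    A = np.asarray(adj, dtype=float)
    return np.diag(A.sum(axis=1)) - A

def spectral_moments(L, K):
    """m_k = (1/n) trace(L^k) for k = 1..K; trace(L^k)/n is the average
    of the per-node contributions (L^k)_{ii} that agents can aggregate."""
    n = len(L)
    M, out = np.eye(n), []
    for _ in range(K):
        M = M @ L
        out.append(np.trace(M) / n)
    return np.array(out)

adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]                  # 4-cycle
L = laplacian(adj)
m = spectral_moments(L, 3)
eig = np.linalg.eigvalsh(L)
print(m, [np.mean(eig ** k) for k in (1, 2, 3)])
```

For the 4-cycle the eigenvalues are {0, 2, 2, 4}, giving moments (2, 6, 20), and the two computations agree, which is the consistency a distributed implementation must preserve.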
|
1001.4136
|
Authentication and Authorization in Server Systems for Bio-Informatics
|
cs.CR cs.IR
|
Authentication and authorization are two tightly coupled and interrelated
concepts used to keep transactions secure and to help protect confidential
information. This paper evaluates the techniques currently used for
authentication and authorization and compares them with best practices and
universally accepted authentication and authorization methods. Authentication
verifies a user's identity and provides reusable credentials, while an
authorization service stores information about user access levels. The
mechanism by which a system checks what level of access a particular
authenticated user should have to secure resources is controlled by the
system.
|
1001.4137
|
On the solvability of 3-source 3-terminal sum-networks
|
cs.IT math.IT
|
We consider a directed acyclic network with three sources and three terminals
such that each source independently generates one symbol from a given field $F$
and each terminal wants to receive the sum (over $F$) of the source symbols.
Each link in the network is considered to be error-free and delay-free and can
carry one symbol from the field in each use. We call such a network a 3-source
3-terminal {\it $(3s/3t)$ sum-network}. In this paper, we give a necessary and
sufficient condition for a $3s/3t$ sum-network to allow all the terminals to
receive the sum of the source symbols over \textit{any} field. Some lemmas
provide interesting simpler sufficient conditions for the same. We show that
linear codes suffice in the $3s/3t$ case, though they are known to be
insufficient for an arbitrary number of sources and terminals. We further
show that in most cases, such networks are solvable by simple XOR coding. We
also prove a recent conjecture that if fractional coding is allowed, then the
coding capacity of a $3s/3t$ sum-network is either $0$, $2/3$, or $\geq 1$.
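Over GF(2) the sum is just XOR, which is why simple XOR coding suffices so often. A toy illustration on an assumed topology (three coded relay links, one per source pair; not one of the paper's networks):

```python
from itertools import product

def sum_network(s1, s2, s3):
    """Toy 3-source/3-terminal sum-network over GF(2): intermediate
    nodes forward the XOR of a source pair, and each terminal combines
    its direct source link with one coded link to recover s1 + s2 + s3
    (which over GF(2) is the XOR of the three symbols)."""
    a, b, c = s2 ^ s3, s1 ^ s3, s1 ^ s2      # coded (relay) links
    return s1 ^ a, s2 ^ b, s3 ^ c            # the three terminals

for s in product((0, 1), repeat=3):
    assert sum_network(*s) == (s[0] ^ s[1] ^ s[2],) * 3
print("all 8 source patterns: every terminal recovers the GF(2) sum")
```

Each terminal uses exactly two unit-capacity links, matching the flavor of the simple XOR solutions the abstract refers to.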
|
1001.4140
|
SVM-based Multiview Face Recognition by Generalization of Discriminant
Analysis
|
cs.CV cs.LG
|
Identity verification of persons from their multiview faces is a challenging
real-world problem in machine vision. Multiview faces are difficult to handle
because of their non-linear representation in the feature space. This paper
illustrates the use of a generalization of LDA, the canonical covariate, for
recognizing multiview faces. In the proposed work, a Gabor filter bank is used
to extract facial features characterized by spatial frequency, spatial
locality and orientation. The Gabor face representation captures a substantial
amount of the variation among face instances that arises from changes in
illumination, pose and facial expression. Convolving the Gabor filter bank
with face images of rotated profile views produces Gabor faces with
high-dimensional feature vectors. Canonical covariates are then applied to the
Gabor faces to reduce the high-dimensional feature space to low-dimensional
subspaces. Finally, support vector machines are trained on the canonical
subspaces, which contain the reduced feature set, and perform the recognition
task. The proposed system is evaluated on the UMIST face database. The
experimental results demonstrate the efficiency and robustness of the proposed
system, with high recognition rates.
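The front end of such a pipeline can be sketched as building a Gabor filter bank and stacking the filter responses into a high-dimensional feature vector. Kernel parameters and the stand-in image are illustrative; the canonical-covariate reduction and SVM stages are omitted:

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma=2.0, psi=0.0, gamma=0.5):
    """Real Gabor kernel: a plane wave of wavelength lam at orientation
    theta under a Gaussian envelope (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam + psi))

# bank of 4 orientations x 2 wavelengths = 8 kernels
bank = [gabor_kernel(15, th, lam)
        for th in np.arange(4) * np.pi / 4
        for lam in (4.0, 8.0)]

face = np.random.default_rng(0).random((32, 32))   # stand-in face image
# circular convolution of the image with each kernel via the FFT
resp = [np.real(np.fft.ifft2(np.fft.fft2(face) * np.fft.fft2(k, s=face.shape)))
        for k in bank]
features = np.concatenate([r.ravel() for r in resp])
print(len(bank), features.shape)
```

Concatenating the eight 32x32 response maps already yields an 8192-dimensional vector for a tiny image, which is why the dimensionality-reduction stage precedes the SVM.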
|